{"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-search-cene-1", "action": "created", "body": "# The Atlas Search 'cene: Season 1\n\n# The Atlas Search 'cene: Season 1\n\nWelcome to the first season of a video series dedicated to Atlas Search! This series of videos is designed to guide you through the journey from getting started and understanding the concepts, to advanced techniques.\n\n## What is Atlas Search?\n\n[Atlas Search][1] is an embedded full-text search in MongoDB Atlas that gives you a seamless, scalable experience for building relevance-based app features. Built on Apache Lucene, Atlas Search eliminates the need to run a separate search system alongside your database.\n\nBy integrating the database, search engine, and sync mechanism into a single, unified, and fully managed platform, Atlas Search is the fastest and easiest way to build relevance-based search capabilities directly into applications.\n\n> Hip to the *'cene*\n> \n> The name of this video series comes from a contraction of \"Lucene\",\n> the search engine library leveraged by Atlas. Or it's a short form of \"scene\". \n\n## Episode Guide\n\n### **[Episode 1: What is Atlas Search & Quick Start][2]**\n\nIn this first episode of the Atlas Search 'cene, learn what Atlas Search is, and get a quick start introduction to setting up Atlas Search on your data. Within a few clicks, you can set up a powerful, full-text search index on your Atlas collection data, and leverage the fast, relevant results to your users queries.\n\n### **[Episode 2: Configuration / Development Environment][3]**\n\nIn order to best leverage Atlas Search, configuring it for your querying needs leads to success. In this episode, learn how Atlas Search maps your documents to its index, and discover the configuration control you have.\n\n### **[Episode 3: Indexing][4]**\n\nWhile Atlas Search automatically indexes your collections content, it does demand attention to the indexing configuration details in order to match users queries appropriately. This episode covers how Atlas Search builds an inverted index, and the options one must consider.\n\n### **[Episode 4: Searching][5]**\n\nAtlas Search provides a rich set of query operators and relevancy controls. This episode covers the common query operators, their relevancy controls, and ends with coverage of the must-have Query Analytics feature.\n\n### **[Episode 5: Faceting][6]**\n\nFacets produce additional context for search results, providing a list of subsets and counts within. This episode details the faceting options available in Atlas Search.\n\n### **[Episode 6: Advanced Search Topics][7]**\n\nIn this episode, we go through some more advanced search topics including embedded documents, fuzzy search, autocomplete, highlighting, and geospatial.\n\n### **[Episode 7: Query Analytics][8]**\n\nAre your users finding what they are looking for? Are your top queries returning the best results? This episode covers the important topic of query analytics. If you're using search, you need this!\n\n### **[Episode 8: Tips & Tricks][9]**\n\nIn this final episode of The Atlas Search 'cene Season 1, useful techniques to introspect query details and see the relevancy scoring computation details. 
Also shown is how to get facets and search results back in one API call.\n\n [1]: https://www.mongodb.com/atlas/search\n [2]: https://www.mongodb.com/developer/videos/what-is-atlas-search-quick-start/\n [3]: https://www.mongodb.com/developer/videos/atlas-search-configuration-development-environment/\n [4]: https://www.mongodb.com/developer/videos/mastering-indexing-for-perfect-query-matches/\n [5]: https://www.mongodb.com/developer/videos/query-operators-relevancy-controls-for-precision-searches/\n [6]: https://www.mongodb.com/developer/videos/faceting-mastery-unlock-the-full-potential-of-atlas-search-s-contextual-insights/\n [7]: https://www.mongodb.com/developer/videos/atlas-search-mastery-elevate-your-search-with-fuzzy-geospatial-highlighting-hacks/\n [8]: https://www.mongodb.com/developer/videos/atlas-search-query-analytics/\n [9]: https://www.mongodb.com/developer/videos/tips-and-tricks-the-atlas-search-cene-season-1-episode-8/", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "The Atlas Search 'cene: Season 1", "contentType": "Video"}, "title": "The Atlas Search 'cene: Season 1", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/atlas-open-ai-review-summary", "action": "created", "body": "# Using MongoDB Atlas Triggers to Summarize Airbnb Reviews with OpenAI\n\nIn the realm of property rentals, reviews play a pivotal role. MongoDB Atlas triggers, combined with the power of OpenAI's models, can help summarize and analyze these reviews in real-time. In this article, we'll explore how to utilize MongoDB Atlas triggers to process Airbnb reviews, yielding concise summaries and relevant tags.\n\nThis article adds an extra feature to the hotels and apartment sentiment search application developed in Leveraging OpenAI and MongoDB Atlas for Improved Search Functionality.\n\n## Introduction\n\nMongoDB Atlas triggers allow users to define functions that execute in real-time in response to database operations. These triggers can be harnessed to enhance data processing and analysis capabilities. In this example, we aim to generate summarized reviews and tags for a sample Airbnb dataset.\n\nOur original data model has each review embedded in the listing document as an array:\n\n```javascript\n\"reviews\": [ { \"_id\": \"2663437\", \n\"date\": { \"$date\": \"2012-10-20T04:00:00.000Z\" }, \n\"listing_id\": \"664017\",\n \"reviewer_id\": \"633940\", \n\"reviewer_name\": \"Patricia\", \n\"comments\": \"I booked the room at Marinete's apartment for my husband. He was staying in Rio for a week because he was studying Portuguese. He loved the place. Marinete was very helpfull, the room was nice and clean. \\r\\nThe location is perfect. He loved the time there. \\r\\n\\r\\n\" },\n { \"_id\": \"2741592\", \n\"date\": { \"$date\": \"2012-10-28T04:00:00.000Z\" }, \n\"listing_id\": \"664017\",\n \"reviewer_id\": \"3932440\", \n\"reviewer_name\": \"Carolina\", \n\"comments\": \"Es una muy buena anfitriona, preocupada de que te encuentres c\u00f3moda y te sugiere que actividades puedes realizar. Disfrut\u00e9 mucho la estancia durante esos d\u00edas, el sector es central y seguro.\" }, ... ]\n```\n\n## Prerequisites\n- App Services application (e.g., application-0). Ensure linkage to the cluster with the Airbnb data.\n- OpenAI account with API access. \n\n### Secrets and Values\n\n1. Navigate to your App Services application.\n2. 
Under \"Values,\" create a secret named `openAIKey` with your OPEN AI API key.\n\n3. Create a linked value named OpenAIKey and link to the secret.\n\n## The trigger code\n\nThe provided trigger listens for changes in the sample_airbnb.listingsAndReviews collection. Upon detecting a new review, it samples up to 50 reviews, sends them to OpenAI's API for summarization, and updates the original document with the summarized content and tags.\n\nPlease notice that the trigger reacts to updates that were marked with `\"process\" : false` flag. This field indicates that there were no summary created for this batch of reviews yet.\n\nExample of a review update operation that will fire this trigger:\n```javascript\nlistingsAndReviews.updateOne({\"_id\" : \"1129303\"}, { $push : { \"reviews\" : new_review } , $set : { \"process\" : false\" }});\n```\n\n### Sample reviews function\nTo prevent overloading the API with a large number of reviews, a function sampleReviews is defined to randomly sample up to 50 reviews:\n\n```javscript\nfunction sampleReviews(reviews) {\n if (reviews.length <= 50) {\n return reviews;\n }\n\n const sampledReviews = ];\n const seenIndices = new Set();\n\n while (sampledReviews.length < 50) {\n const randomIndex = Math.floor(Math.random() * reviews.length);\n if (!seenIndices.has(randomIndex)) {\n seenIndices.add(randomIndex);\n sampledReviews.push(reviews[randomIndex]);\n }\n }\n\n return sampledReviews;\n}\n```\n\n### Main trigger logic\n\nThe main trigger logic is invoked when an update change event is detected with a `\"process\" : false` field.\n```javascript\nexports = async function(changeEvent) {\n // A Database Trigger will always call a function with a changeEvent.\n // Documentation on ChangeEvents: https://www.mongodb.com/docs/manual/reference/change-events\n\n // This sample function will listen for events and replicate them to a collection in a different Database\nfunction sampleReviews(reviews) {\n// Logic above...\n if (reviews.length <= 50) {\n return reviews;\n }\n const sampledReviews = [];\n const seenIndices = new Set();\n\n while (sampledReviews.length < 50) {\n const randomIndex = Math.floor(Math.random() * reviews.length);\n if (!seenIndices.has(randomIndex)) {\n seenIndices.add(randomIndex);\n sampledReviews.push(reviews[randomIndex]);\n }\n }\n\n return sampledReviews;\n}\n\n // Access the _id of the changed document:\n const docId = changeEvent.documentKey._id;\n const doc= changeEvent.fullDocument;\n \n\n // Get the MongoDB service you want to use (see \"Linked Data Sources\" tab)\n const serviceName = \"mongodb-atlas\";\n const databaseName = \"sample_airbnb\";\n const collection = context.services.get(serviceName).db(databaseName).collection(changeEvent.ns.coll);\n\n // This function is the endpoint's request handler. 
\n // URL to make the request to the OpenAI API.\n const url = 'https://api.openai.com/v1/chat/completions';\n\n // Fetch the OpenAI key stored in the context values.\n const openai_key = context.values.get(\"openAIKey\");\n\n const reviews = doc.reviews.map((review) => {return {\"comments\" : review.comments}});\n \n const sampledReviews = sampleReviews(reviews);\n\n // Prepare the request string for the OpenAI API.\n const reqString = `Summarize the reviews provided here: ${JSON.stringify(sampledReviews)} | instructions example:\\n\\n [{\"comment\" : \"Very Good bed\"} ,{\"comment\" : \"Very bad smell\"} ] \\nOutput: {\"overall_review\": \"Overall good beds and bad smell\" , \"neg_tags\" : [\"bad smell\"], \"pos_tags\" : [\"good bed\"]}. No explanation. No 'Output:' string in response. Valid JSON. `;\n console.log(`reqString: ${reqString}`);\n\n // Call OpenAI API to get the response.\n \n let resp = await context.http.post({\n url: url,\n headers: {\n 'Authorization': [`Bearer ${openai_key}`],\n 'Content-Type': ['application/json']\n },\n body: JSON.stringify({\n model: \"gpt-4\",\n temperature: 0,\n messages: [\n {\n \"role\": \"system\",\n \"content\": \"Output json generator follow only provided example on the current reviews\"\n },\n {\n \"role\": \"user\",\n \"content\": reqString\n }\n ]\n })\n });\n\n // Parse the JSON response\n let responseData = JSON.parse(resp.body.text());\n\n // Check the response status.\n if(resp.statusCode === 200) {\n console.log(\"Successfully received a response from OpenAI.\");\n console.log(JSON.stringify(responseData));\n\n const code = responseData.choices[0].message.content;\n // Get the required data to be added into the document\n const updateDoc = JSON.parse(code);\n // Set a flag that this document does not need further re-processing \n updateDoc.process = true;\n await collection.updateOne({_id : docId}, {$set : updateDoc});\n \n\n } else {\n console.error(\"Failed to generate the summary JSON.\");\n console.log(JSON.stringify(responseData));\n return {};\n }\n};\n```\n\nKey steps include:\n\n- API request preparation: Reviews from the changed document are sampled and prepared into a request string for the OpenAI API. The format and instructions are tailored to ensure the API returns a valid JSON with summarized content and tags.\n- API interaction: Using the context.http.post method, the trigger sends the prepared data to the OpenAI API.\n- Updating the original document: Upon a successful response from the API, the trigger updates the original document with the summarized content, negative tags (neg_tags), positive tags (pos_tags), and a process flag set to true.\n\nHere is a sample result that is added to the processed listing document:\n```\n\"process\": true, \n\"overall_review\": \"Overall, guests had a positive experience at Marinete's apartment. They praised the location, cleanliness, and hospitality. However, some guests mentioned issues with the dog and language barrier.\",\n\"neg_tags\": [ \"language barrier\", \"dog issues\" ], \n\"pos_tags\": [ \"great location\", \"cleanliness\", \"hospitality\" ]\n```\n\nOnce the data is added to our documents, providing this information in our Vue application is as simple as adding an HTML template along these lines:\n\n```html\n<div>\n  <p>Overall Review (ai based): {{ listing.overall_review }}</p>\n  <span v-for=\"tag in listing.pos_tags\" :key=\"tag\">\n    {{ tag }}\n  </span>\n  <span v-for=\"tag in listing.neg_tags\" :key=\"tag\">\n    {{ tag }}\n  </span>\n</div>\n```\n\n## Conclusion\nBy integrating MongoDB Atlas triggers with OpenAI's powerful models, we can efficiently process and analyze large volumes of reviews in real-time. 
This setup not only provides concise summaries of reviews but also categorizes them into positive and negative tags, offering valuable insights to property hosts and potential renters.\n\nQuestions? Comments? Let\u2019s continue the conversation over in our [community forums.", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript", "AI", "Node.js"], "pageDescription": "Uncover the synergy of MongoDB Atlas triggers and OpenAI models in real-time analysis and summarization of Airbnb reviews. ", "contentType": "Tutorial"}, "title": "Using MongoDB Atlas Triggers to Summarize Airbnb Reviews with OpenAI", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/getting-started-with-mongodb-and-codewhisperer", "action": "created", "body": "# Getting Started with MongoDB and AWS Codewhisperer\n\n**Introduction**\n----------------\n\nAmazon CodeWhisperer is trained on billions of lines of code and can generate code suggestions \u2014 ranging from snippets to full functions \u2014 in real-time, based on your comments and existing code. AI code assistants have revolutionized developers\u2019 coding experience, but what sets Amazon CodeWhisperer apart is that MongoDB has collaborated with the AWS Data Science team, enhancing its capabilities!\n\nAt MongoDB, we are always looking to enhance the developer experience, and we've fine-tuned the CodeWhisperer Foundational Models to deliver top-notch code suggestions \u2014 trained on, and tailored for, MongoDB. This gives developers of all levels the best possible experience when using CodeWhisperer for MongoDB functions. \n\nThis tutorial will help you get CodeWhisperer up and running in VS Code, but CodeWhisperer also works with a number of other IDEs, including IntelliJ IDEA, AWS Cloud9, AWS Lambda console, JupyterLab, and Amazon SageMaker Studio. On the [Amazon CodeWhisperer site][1], you can find tutorials that demonstrate how to set up CodeWhisperer on different IDEs, as well as other documentation.\n\n*Note:* CodeWhisperer allows users to start without an AWS account because usually, creating an AWS account requires a credit card. Currently, CodeWhisperer is free for individual users. So it\u2019s super easy to get up and running.\n\n**Installing CodeWhisperer for VS Code** \n\nCodeWhisperer doesn\u2019t have its own VS Code extension. It is part of a larger extension for AWS services called AWS Toolkit. AWS Toolkit is available in the VS Code extensions store. \n\n 1. Open VS Code and navigate to the extensions store (bottom icon on the left panel).\n 2. Search for CodeWhisperer and it will show up as part of the AWS Toolkit.\n![Searching for the AWS ToolKit Extension][2]\n 3. Once found, hit Install. Next, you\u2019ll see the full AWS Toolkit\n Listing\n![The AWS Toolkit full listing][3]\n 4. Once installed, you\u2019ll need to authorize CodeWhisperer via a Builder\n ID to connect to your AWS developer account (or set up a new account\n if you don\u2019t already have one).\n![Authorise CodeWhisperer][4]\n\n**Using CodeWhisperer**\n-----------------------\n\nNavigating code suggestions \n\n![CodeWhisperer Running][5]\n\nWith CodeWhisperer installed and running, as you enter your prompt or code, CodeWhisperer will offer inline code suggestions. If you want to keep the suggestion, use **TAB** to accept it. CodeWhisperer may provide multiple suggestions to choose from depending on your use case. 
To navigate between suggestions, use the left and right arrow keys to view them, and **TAB** to accept.\n\nIf you don\u2019t like the suggestions you see, keep typing (or hit **ESC**). The suggestions will disappear, and CodeWhisperer will generate new ones at a later point based on the additional context.\n\n**Requesting suggestions manually**\n\nYou can request suggestions at any time. Use **Option-C** on Mac or **ALT-C** on Windows. After you receive suggestions, use **TAB** to accept and arrow keys to navigate.\n\n**Getting the best recommendations**\n\nFor best results, follow these practices.\n\n - Give CodeWhisperer something to work with. The more code your file contains, the more context CodeWhisperer has for generating recommendations.\n - Write descriptive comments in natural language \u2014 for example\n```\n// Take a JSON document as a String and store it in MongoDB returning the _id\n```\nOr\n```\n//Insert a document in a collection with a given _id and a discountLevel\n```\n - Specify the libraries you prefer at the start of your file by using import statements.\n```\n// This Java class works with MongoDB sync driver.\n// This class implements Connection to MongoDB and CRUD methods.\n```\n - Use descriptive names for variables and functions\n - Break down complex tasks into simpler tasks\n\n**Provide feedback**\n----------------\n\nAs with all generative AI tools, they are forever learning and forever expanding their foundational knowledge base, and MongoDB is looking for feedback. If you are using Amazon CodeWhisperer in your MongoDB development, we\u2019d love to hear from you. \n\nWe\u2019ve created a special \u201ccodewhisperer\u201d tag on our [Developer Forums][6], and if you tag any post with this, it will be visible to our CodeWhisperer project team and we will get right on it to help and provide feedback. If you want to see what others are doing with CodeWhisperer on our forums, the [tag search link][7] will jump you straight into all the action. \n\nWe can\u2019t wait to see your thoughts and impressions of MongoDB and Amazon CodeWhisperer together. \n\n [1]: https://aws.amazon.com/codewhisperer/resources/#Getting_started\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1bfd28a846063ae9/65481ef6e965d6040a3dcc37/CW_1.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltde40d5ae1b9dd8dd/65481ef615630d040a4b2588/CW_2.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt636bb8d307bebcee/65481ef6a6e009040a740b86/CW_3.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf1e0ebeea2089e6a/65481ef6077aca040a5349da/CW_4.png\n [6]: https://www.mongodb.com/community/forums/\n [7]: https://www.mongodb.com/community/forums/tag/codewhisperer", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript", "Java", "Python", "AWS", "AI"], "pageDescription": "", "contentType": "Tutorial"}, "title": "Getting Started with MongoDB and AWS Codewhisperer", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/java/rest-apis-java-spring-boot", "action": "created", "body": "# REST APIs with Java, Spring Boot, and MongoDB\n\n## GitHub repository\n\nIf you want to write REST APIs in Java at the speed of light, I have what you need. I wrote this template to get you started. 
I have tried to solve as many problems as possible in it.\n\nSo if you want to start writing REST APIs in Java, clone this project, and you will be up to speed in no time.\n\n```shell\ngit clone https://github.com/mongodb-developer/java-spring-boot-mongodb-starter\n```\n\nThat\u2019s all folks! All you need is in this repository. Below I will explain a few of the features and details about this template, but feel free to skip what is not necessary for your understanding.\n\n## README\n\nAll the extra information and commands you need to get this project going are in the `README.md` file which you can read on GitHub.\n\n## Spring and MongoDB configuration\n\nThe configuration can be found in the MongoDBConfiguration.java class.\n\n```java\npackage com.mongodb.starter;\n\nimport [...]\n\nimport static org.bson.codecs.configuration.CodecRegistries.fromProviders;\nimport static org.bson.codecs.configuration.CodecRegistries.fromRegistries;\n\n@Configuration\npublic class MongoDBConfiguration {\n\n @Value(\"${spring.data.mongodb.uri}\")\n private String connectionString;\n\n @Bean\n public MongoClient mongoClient() {\n CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());\n CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);\n return MongoClients.create(MongoClientSettings.builder()\n .applyConnectionString(new ConnectionString(connectionString))\n .codecRegistry(codecRegistry)\n .build());\n }\n\n}\n```\n\nThe important section here is the MongoDB configuration, of course. Firstly, you will notice the connection string is automatically retrieved from the `application.properties` file, and secondly, you will notice the configuration of the `MongoClient` bean.\n\nA `Codec` is the interface that abstracts the processes of decoding a BSON value into a Java object and encoding a Java object into a BSON value.\n\nA `CodecRegistry` contains a set of `Codec` instances that are accessed according to the Java classes that they encode from and decode to.\n\nThe MongoDB driver is capable of encoding and decoding BSON for us, so we do not have to take care of this anymore. All the configuration we need for this project to run is here and nowhere else.\n\nYou can read the driver documentation if you want to know more about this topic.\n\n## Multi-document ACID transactions\n\nJust for the sake of it, I also used multi-document ACID transactions in a few methods where it could potentially make sense to use them. You can check all the code in the `MongoDBPersonRepository` class.\n\nHere is an example:\n\n```java\nprivate static final TransactionOptions txnOptions = TransactionOptions.builder()\n .readPreference(ReadPreference.primary())\n .readConcern(ReadConcern.MAJORITY)\n .writeConcern(WriteConcern.MAJORITY)\n .build();\n\n@Override\npublic List<PersonEntity> saveAll(List<PersonEntity> personEntities) {\n try (ClientSession clientSession = client.startSession()) {\n return clientSession.withTransaction(() -> {\n personEntities.forEach(p -> p.setId(new ObjectId()));\n personCollection.insertMany(clientSession, personEntities);\n return personEntities;\n }, txnOptions);\n }\n}\n```\n\nAs you can see, I\u2019m using an auto-closeable try-with-resources which will automatically close the client session at the end. 
This helps me to keep the code clean and simple.\n\nSome of you may argue that it is actually too simple because transactions (and write operations, in general) can throw exceptions, and I\u2019m not handling any of them here\u2026 You are absolutely right and this is an excellent transition to the next part of this article.\n\n## Exception management\n\nTransactions in MongoDB can raise exceptions for various reasons, and I don\u2019t want to go into the details too much here, but since MongoDB 3.6, any write operation that fails can be automatically retried once. And the transactions are no different. See the documentation for retryWrites.\n\nIf retryable writes are disabled or if a write operation fails twice, then MongoDB will send a MongoException (extends RuntimeException) which should be handled properly.\n\nLuckily, Spring provides the annotation `ExceptionHandler` to help us do that. See the code in my controller `PersonController`. Of course, you will need to adapt and enhance this in your real project, but you have the main idea here.\n\n```java\n@ExceptionHandler(RuntimeException.class)\npublic final ResponseEntity<Exception> handleAllExceptions(RuntimeException e) {\n logger.error(\"Internal server error.\", e);\n return new ResponseEntity<>(e, HttpStatus.INTERNAL_SERVER_ERROR);\n}\n```\n\n## Aggregation pipeline\n\nMongoDB's aggregation pipeline is a very powerful and efficient way to run your complex queries as close as possible to your data for maximum efficiency. Using it can ease the computational load on your application.\n\nJust to give you a small example, I implemented the `/api/persons/averageAge` route to show you how I can retrieve the average age of the persons in my collection.\n\n```java\n@Override\npublic double getAverageAge() {\n List<Bson> pipeline = List.of(group(new BsonNull(), avg(\"averageAge\", \"$age\")), project(excludeId()));\n return personCollection.aggregate(pipeline, AverageAgeDTO.class).first().averageAge();\n}\n```\n\nAlso, you can note here that I\u2019m using the `personCollection` which was initially instantiated like this:\n\n```java\nprivate MongoCollection<PersonEntity> personCollection;\n\n@PostConstruct\nvoid init() {\n personCollection = client.getDatabase(\"test\").getCollection(\"persons\", PersonEntity.class);\n}\n```\n\nNormally, my `personCollection` should encode and decode `PersonEntity` objects only, but you can overwrite the type of object your collection is manipulating to return something different \u2014 in my case, `AverageAgeDTO.class` as I\u2019m not expecting a `PersonEntity` class here but a POJO that contains only the average age of my \"persons\".\n\n## Swagger\n\nSwagger is the tool you need to document your REST APIs. You have nothing to do \u2014 the configuration is completely automated. Just run the server and navigate to http://localhost:8080/swagger-ui.html. The interface will be waiting for you.\n\n## Nyan Cat\n\nYes, there is a Nyan Cat section in this post. Nyan Cat is love, and you need some Nyan Cat in your projects. :-)\n\nDid you know that you can replace the Spring Boot logo in the logs with pretty much anything you want?\n\nIn this template, the Spring Boot banner is replaced with Nyan Cat, and I use the \"Epic\" font for each project name. It's easier to identify which log file I am currently reading.\n\n## Conclusion\n\nI hope you like my template, and I hope it will help you be more productive with MongoDB and the Java stack.\n\nIf you see something which can be improved, please feel free to open a GitHub issue or directly submit a pull request. They are very welcome. 
:-)\n\nIf you are new to MongoDB Atlas, give our Quick Start post a try to get up to speed with MongoDB Atlas in no time.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt876f3404c57aa244/65388189377588ba166497b0/swaggerui.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf2f06ba5af19464d/65388188d31953242b0dbc6f/nyancat.png", "format": "md", "metadata": {"tags": ["Java", "Spring"], "pageDescription": "Take a shortcut to REST APIs with this Java/Spring Boot and MongoDB example application that embeds all you'll need to get going.", "contentType": "Code Example"}, "title": "REST APIs with Java, Spring Boot, and MongoDB", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/swift/halting-development-on-swift-driver", "action": "created", "body": "# Halting Development on MongoDB Swift Driver\n\nMongoDB is halting development on our server-side Swift driver. We remain excited about Swift and will continue our development of our mobile Swift SDK.\n\nWe released our server-side Swift driver in 2020 as an open source project and are incredibly proud of the work that our engineering team has contributed to the Swift community over the last four years. Unfortunately, today we are announcing our decision to stop development of the MongoDB server-side Swift driver. We understand that this news may come as a disappointment to the community of current users.\n\nThere are still ways to use MongoDB with Swift:\n\n - Use the MongoDB driver with server-side Swift applications as is \n - Use the MongoDB C Driver directly in your server-side Swift projects\n - Usage of another community Swift driver, mongokitten\n\nCommunity members and developers are welcome to fork our existing driver and add features as you see fit - the Swift driver is under the Apache 2.0 license and source code is available on GitHub. For those developing client/mobile applications, MongoDB offers the Realm Swift SDK with real time sync to MongoDB Atlas.\n\nWe would like to take this opportunity to express our heartfelt appreciation for the enthusiastic support that the Swift community has shown for MongoDB. Your loyalty and feedback have been invaluable to us throughout our journey, and we hope to resume development on the server-side Swift driver in the future.", "format": "md", "metadata": {"tags": ["Swift", "MongoDB"], "pageDescription": "The latest news regarding the MongoDB driver for Swift.", "contentType": "News & Announcements"}, "title": "Halting Development on MongoDB Swift Driver", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/online-archive-query-performance", "action": "created", "body": "# Optimizing your Online Archive for Query Performance\n\n## Contributed By\nThis article was contributed by Prem Krishna, a Senior Product Manager for Analytics at MongoDB.\n\n## Introduction\nWith Atlas Online Archive, you can tier off cold data or infrequently accessed data from your MongoDB cluster to a MongoDB-managed cloud object storage - Amazon S3 or Microsoft Azure Blob Storage. This can lower the cost via archival cloud storage for old data, while active data that is more often accessed and queried remains in the primary database. 
\n\n> FYI: If using Online Archive and also using MongoDB's Atlas Data Federation, users can also see a unified view of production data, and *archived data* side by side through a read-only, federated database instance.\n\nIn this blog, we are going to be discussing how to improve the performance of your online archive by choosing the correct partitioning fields.\n\n## Why is partitioning so critical when configuring Online Archive?\nOnce you have started archiving data, you cannot edit any partition fields as the structure of how the data will be stored in the object storage becomes fixed after the archival job begins. Therefore, you'll want to think critically about your partitioning strategy beforehand.\n\nAlso, archival query performance is determined by how the data is structured in object storage, so it is important to not only choose the correct partitions but also choose the correct order of partitions. \n\n## Do this...\n**Choose the most frequently queried fields.** You can choose up to 2 partition fields for a custom query-based archive or up to three fields on a date-based online archive. Ensure that the most frequently queried fields for the archive are chosen. Note that we are talking about how you are going to query the archive and not the custom query criteria provided at the time of archiving!\n\n**Check the order of partitioned fields.** While selecting the partitions is important, it is equally critical to choose the correct *order* of partitions. The most frequently queried field should be the first chosen partition field, followed by the second and third. That's simple enough.\n\n## Not this\n**Don't add irrelevant fields as partitions.** If you are not querying a specific field from the archive, then that field should not be added as a partition field. Remember that you can add a maximum of 2 or 3 partition fields, so it is important to choose these fields carefully based on how you query your archive.\n\n**Don't ignore the \u201cMove down\u201d option.** The \u201cMove down\u201d option is applicable to an archive with a data-based rule. For example, if you want to query on Field_A the most, then Field_B, and then on exampleDate, ensure you are selecting the \u201cMove Down\u201d option next to the \u201cArchive date field\u201d on top.\n\n**Don't choose high cardinality partition(s).** Choosing a high cardinality field such as `_id` will create a large number of partitions in the object storage. Then querying the archive for any aggregate based queries will cause increased latency. The same is applicable if multiple partitions are selected such that the collective fields when grouped together can be termed as high cardinality. For example, if you are selecting Field_A, Field_B and Field_C as your partitions and if a combination of these fields are creating unique values, then it will result in high cardinality partitions. \n> Please note that this is **not applicable** for new Online Archives. \n\n## Additional guidance\nIn addition to the partitioning guidelines, there are a couple of additional considerations that are relevant for the optimal configuration of your data archival strategy.\n\n**Add data expiration rules and scheduled windows**\nThese fields are optional but are relevant for your use cases and can improve your archival speeds and for how long your data needs to be present in the archive. \n\n**Index required fields**\nBefore archiving the data, ensure that your data is indexed for optimal performance. 
You can run an explain plan on the archival query to verify whether the archival rule will use an index. \n\n## Conclusion\nIt is important to follow these do\u2019s and don\u2019ts before hitting \u201cBegin Archiving\u201d to archive your data so that the partitions are correctly configured thereby optimizing the performance of your online archives.\n\nFor more information on configuration or Online Archive, please see the documentation for setting up an Online Archive and our blog post on how to create an Online Archive. \n\nDig deeper into this topic with this tutorial.\n\n\u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n ", "format": "md", "metadata": {"tags": ["Atlas", "AWS"], "pageDescription": "Get all the do's and don'ts around optimization of your data archival strategy.", "contentType": "Article"}, "title": "Optimizing your Online Archive for Query Performance", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/using-confluent-cloud-atlas-stream-processing", "action": "created", "body": "# Using the Confluent Cloud with Atlas Stream Processing\n\n> Atlas Stream Processing is now available. Learn more about it here.\n\nApache Kafka is a massively popular streaming platform today. It is available in the open-source community and also as software (e.g., Confluent Platform) for self-managing. Plus, you can get a hosted Kafka (or Kafka-compatible) service from a number of providers, including AWS Managed Streaming for Apache Kafka (MSK), RedPanda Cloud, and Confluent Cloud, to name a few.\n\nIn this tutorial, we will configure network connectivity between MongoDB Atlas Stream Processing instances and a topic within the Confluent Cloud. By the end of this tutorial, you will be able to process stream events from Confluent Cloud topics and emit the results back into a Confluent Cloud topic. \n\nConfluent Cloud dedicated clusters support connectivity through secure public internet endpoints with their Basic and Standard clusters. Private network connectivity options such as Private Link connections, VPC/VNet peering, and AWS Transit Gateway are available in the Enterprise and Dedicated cluster tiers. \n\n**Note:** At the time of this writing, Atlas Stream Processing only supports internet-facing Basic and Standard Confluent Cloud clusters. This post will be updated to accommodate Enterprise and Dedicated clusters when support is provided for private networks.\n\nThe easiest way to get started with connectivity between Confluent Cloud and MongoDB Atlas is by using public internet endpoints. Public internet connectivity is the only option for Basic and Standard Confluent clusters. Rest assured that Confluent Cloud clusters with internet endpoints are protected by a proxy layer that prevents types of DoS, DDoS, SYN flooding, and other network-level attacks. 
We will also use authentication API keys with the SASL_SSL authentication method for secure credential exchange.\n\nIn this tutorial, we will set up and configure Confluent Cloud and MongoDB Atlas for network connectivity and then work through a simple example that uses a sample data generator to stream data between MongoDB Atlas and Confluent Cloud.\n\n## Tutorial prerequisites\n\nThis is what you\u2019ll need to follow along:\n\n- An Atlas project (free or paid tier)\n- An Atlas database user with atlasAdmin permission \n - For the purposes of this tutorial, we\u2019ll have the user \u201ctutorialuser.\u201d\n- MongoDB shell (Mongosh) version 2.0+\n- Confluent Cloud cluster (any configuration)\n\n## Configure Confluent Cloud\n\nFor this tutorial, you need a Confluent Cloud cluster created with a topic, \u201csolardata,\u201d and an API access key. If you already have this, you may skip to Step 2.\n\nTo create a Confluent Cloud cluster, log into the Confluent Cloud portal, select or create an environment for your cluster, and then click the \u201cAdd Cluster\u201d button. \n\nIn this tutorial, we can use a **Basic** cluster type.\n\nOnce the Confluent Cloud cluster, topic, and API key are ready, switch to MongoDB Atlas and click on \u201cStream Processing\u201d from the Services menu. Next, click on the \u201cCreate Instance\u201d button. Provide a name, cloud provider, and region. Note: For a lower network cost, choose the cloud provider and region that match your Confluent Cloud cluster. In this tutorial, we will use AWS us-east-1 for both Confluent Cloud and MongoDB Atlas.\n\nMake sure the Stream Processing instance (SPI) has been created before continuing this tutorial.\n\nConnection information can be found by clicking on the \u201cConnect\u201d button on your SPI. The connect dialog is similar to the connect dialog when connecting to an Atlas cluster. To connect to the SPI, you will need to use the **mongosh** command line tool.\n\n> Log in today to get started. Atlas Stream Processing is now available to all developers in Atlas. 
Give it a try today!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcfb9c8a1f971ace1/652994177aecdf27ae595bf9/image24.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt63a22c62ae627895/652994381e33730b6478f0d1/image5.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte3f1138a6294748f/65299459382be57ed901d434/image21.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3ccf2827c99f1c83/6529951a56a56b7388898ede/image19.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltaea830d5730e5f51/652995402e91e47b2b547e12/image20.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9c425a65bb77f282/652995c0451768c2b6719c5f/image13.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2748832416fdcf8e/652996cd24aaaa5cb2e56799/image15.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9010c25a76edb010/652996f401c1899afe4a465b/image7.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt27b3762b12b6b871/652997508adde5d1c8f78a54/image3.png", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to configure network connectivity between Confluent Cloud and MongoDB Atlas Stream Processing.", "contentType": "Tutorial"}, "title": "Using the Confluent Cloud with Atlas Stream Processing", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/charts-javascript-sdk", "action": "created", "body": "", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "Learn how to visualize your data with MongoDB Charts.", "contentType": "Tutorial"}, "title": "Working with MongoDB Charts and the New JavaScript SDK", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/how-send-mongodb-document-changes-slack-channel", "action": "created", "body": "# How to Send MongoDB Document Changes to a Slack Channel\n\nIn this tutorial, we will explore a seamless integration of your database with Slack using Atlas Triggers and the Slack API. Discover how to effortlessly send notifications to your desired Slack channels, effectively connecting the operations happening within your collections and relaying them in real-time updates. \n\nThe overall flow will be: a database trigger detects each change in a collection, a function processes the change event, and a second function sends the resulting message to a Slack channel using the Slack API.\n\nOnce the App Services application has been created, we are ready to start creating our first database trigger that will react every time there is an operation in a certain collection. \n\n## Atlas trigger\n\nFor this tutorial, we will create a trigger that monitors all changes in a `test` collection for `insert`, `update`, and `delete` operations.\n\nTo create a new database trigger, you will need to:\n\n1. Click the **Data Services** tab in the top navigation of your screen if you haven't already navigated to Atlas.\n2. Click **Triggers** in the left-hand navigation.\n3. On the **Overview** tab of the **Triggers** page, click **Add Trigger** to open the trigger configuration page.\n4. Enter the configuration values for the trigger and click **Save** at the bottom of the page.\n\nPlease note that this trigger will make use of the *event ordering* as we want the operations to be processed according to when they were performed. \n\nThe trigger will call an Atlas Function every time it fires. To create a function using the UI, we need to: \n\n1. 
Click the **Data Services** tab in the top navigation of your screen if you haven't already navigated to Atlas.\n\n2. Click **Functions** in the left navigation menu.\n\n3. Click **New Function** in the top right of the **Functions** page.\n\n4. Enter a unique, identifying name for the function in the **Name** field.\n\n5. Configure **User Authentication**. Functions in App Services always execute in the context of a specific application user or as a system user that bypasses rules. For this tutorial, we are going to use **System user**.\n\n### \"processEvent\" function\n\nThe processEvent function will process the change events every time an operation we are monitoring in the given collection is processed. In this way, we are going to create an object that we will then send to the function in charge of sending this message in Slack. \n\nThe code of the function is the following:\n\n```javascript\nexports = function(changeEvent) {\n\n const docId = changeEvent.documentKey._id;\n\n const { updateDescription, operationType } = changeEvent;\n\n var object = {\n operationType,\n docId,\n };\n\n if (updateDescription) {\n const updatedFields = updateDescription.updatedFields; // A document containing updated fields\n const removedFields = updateDescription.removedFields; // An array of removed fields\n object = {\n ...object,\n updatedFields,\n removedFields\n };\n }\n\n const result = context.functions.execute(\"sendToSlack\", object);\n\n return true;\n};\n```\n\nIn this function, we will create an object that we will then send as a parameter to another function that will be in charge of sending to our Slack channel. \n\nHere we will use change event and its properties to capture the: \n\n1. `_id` of the object that has been modified/inserted.\n2. Operation that has been performed.\n3. Fields of the object that have been modified or deleted when the operation has been an `update`.\n\nWith all this, we create an object and make use of the internal function calls to execute our `sendToSlack` function.\n\n### \"sendToSlack\" function\n\nThis function will make use of the \"chat.postMessage\" method of the Slack API to send a message to a specific channel.\n\nTo use the Slack library, you must add it as a dependency in your Atlas function. Therefore, in the **Functions** section, we must go to the **Dependencies** tab and install `@slack/web-api`.\n\nYou will need to have a Slack token that will be used for creating the `WebClient` object as well as a Slack application. Therefore: \n\n1. Create or use an existing Slack app: This is necessary as the subsequent token we will need will be linked to a Slack App. For this step, you can navigate to the Slack application and use your credentials to authenticate and create or use an existing app you are a member of. \n\n2. Within this app, we will need to create a bot token that will hold the authentication API key to send messages to the corresponding channel in the Slack app created. Please note that you will need to add as many authorization scopes on your token as you need, but the bare minimum is to add the `chat:write` scope to allow your app to post messages.\n\nA full guide on how to get these two can be found in the Slack official documentation.\n\nFirst, we will perform the logic with the received object to create a message adapted to the event that occurred. 
\n\n```javascript\nvar message = \"\";\nif (arg.operationType == 'insert') {\n message += `A new document with id \`${arg.docId}\` has been inserted`;\n} else if (arg.operationType == 'update') {\n message += `The document \`${arg.docId}\` has been updated.`;\n if (arg.updatedFields && Object.keys(arg.updatedFields).length > 0) {\n message += ` The fields ${JSON.stringify(arg.updatedFields)} have been modified.`;\n }\n if (arg.removedFields && arg.removedFields.length > 0) {\n message += ` The fields ${JSON.stringify(arg.removedFields)} have been removed.`;\n }\n} else {\n message += `An unexpected operation affecting document \`${arg.docId}\` occurred`;\n}\n```\n\nOnce we have the library, we use it to create a `WebClient` instance that we will use later to call the methods we need. \n\n```javascript\n const { WebClient } = require('@slack/web-api');\n // Read the token from the App Services values\n const token = context.values.get('SLACK_TOKEN');\n // Initialize\n const app = new WebClient(token);\n```\n\nFinally, we can send our message with: \n\n```javascript\ntry {\n // Call the chat.postMessage method using the WebClient\n const result = await app.chat.postMessage({\n channel: channelId,\n text: `New Event: ${message}`\n });\n\n console.log(result);\n}\ncatch (error) {\n console.error(error);\n}\n```\n\nThe full function code is as follows:\n\n```javascript\nexports = async function(arg){\n\n const { WebClient } = require('@slack/web-api');\n // Read the token from the App Services values\n const token = context.values.get('SLACK_TOKEN');\n const channelId = context.values.get('CHANNEL_ID');\n // Initialize\n const app = new WebClient(token);\n\n var message = \"\";\n if (arg.operationType == 'insert') {\n message += `A new document with id \`${arg.docId}\` has been inserted`;\n } else if (arg.operationType == 'update') {\n message += `The document \`${arg.docId}\` has been updated.`;\n if (arg.updatedFields && Object.keys(arg.updatedFields).length > 0) {\n message += ` The fields ${JSON.stringify(arg.updatedFields)} have been modified.`;\n }\n if (arg.removedFields && arg.removedFields.length > 0) {\n message += ` The fields ${JSON.stringify(arg.removedFields)} have been removed.`;\n }\n } else {\n message += `An unexpected operation affecting document \`${arg.docId}\` occurred`;\n }\n\n try {\n // Call the chat.postMessage method using the WebClient\n const result = await app.chat.postMessage({\n channel: channelId,\n text: `New Event: ${message}`\n });\n console.log(result);\n }\n catch (error) {\n console.error(error);\n }\n\n};\n```\n\nNote: The bot token we use must have the minimum permissions to send messages to a certain channel. We must also have the application created in Slack added to the channel where we want to receive the messages.\n\nIf everything is properly configured, every change in the collection for the monitored operations will be received in the Slack channel.\n\nYou can also add a \"$match\" expression to the trigger to only detect certain changes and then adapt the change event to only receive certain fields with a \"$project\".\n\n## Conclusion\n\nIn this tutorial, we've learned how to seamlessly integrate your database with Slack using Atlas Triggers and the Slack API. This integration allows you to send real-time notifications to your Slack channels, keeping your team informed about important operations within your database collections.\n\nWe started by creating a new application in Atlas and then set up a database trigger that reacts to specific collection operations. 
We explored the `processEvent` function, which processes change events and prepares the data for Slack notifications. Through a step-by-step process, we demonstrated how to create a message and use the Slack API to post it to a specific channel.\n\nNow that you've grasped the basics, it's time to take your integration skills to the next level. Here are some steps you can follow:\n\n- **Explore advanced use cases**: Consider how you can adapt the principles you've learned to more complex scenarios within your organization. Whether it's custom notifications or handling specific database events, there are countless possibilities.\n- **Dive into the Slack API documentation**: For a deeper understanding of what's possible with Slack's API, explore their official documentation. This will help you harness the full potential of Slack's features.\n\nBy taking these steps, you'll be well on your way to creating powerful, customized integrations that can streamline your workflow and keep your team in the loop with real-time updates. Good luck with your integration journey!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8fcfb82094f04d75/653816cde299fbd2960a4695/image2.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc7874f54dc0cd8be/653816e70d850608a2f05bb9/image3.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt99aaf337d37c41ae/653816fd2c35813636b3a54d/image1.png", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "Learn how to use triggers in MongoDB Atlas to send information about changes to a document to Slack.", "contentType": "Tutorial"}, "title": "How to Send MongoDB Document Changes to a Slack Channel", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/doc-modeling-vector-search", "action": "created", "body": "# How to Model Your Documents for Vector Search\n\nAtlas Vector Search was recently released, so let\u2019s dive into a tutorial on how to properly model your documents when utilizing vector search to revolutionize your querying capabilities!\n\n## Data modeling normally in MongoDB\n\nVector search is new, so let\u2019s first go over the basic ways of modeling your data in a MongoDB document before continuing on into how to incorporate vector embeddings. \n\nData modeling in MongoDB revolves around organizing your data into documents within various collections. Varied projects or organizations will require different ways of structuring data models due to the fact that successful data modeling depends on the specific requirements of each application, and for the most part, no one document design can be applied for every situation. There are some commonalities, though, that can guide the user. These are:\n\n 1. Choosing whether to embed or reference your related data. \n 2. Using arrays in a document.\n 3. Indexing your documents (finding fields that are frequently used and applying the appropriate indexing, etc.).\n\nFor a more in-depth explanation and a comprehensive guide of data modeling with MongoDB, please check out our data modeling article.\n\n## Setting up an example data model\n\nWe are going to be building our vector embedding example using a MongoDB document for our MongoDB TV series. Here, we have a single MongoDB document representing our MongoDB TV show, without any embeddings in place. We have a nested array featuring our array of seasons, and within that, our array of different episodes. 
This way, in our document, we are capable of seeing exactly which season each episode is a part of, along with the episode number, the title, the description, and the date: \n\n```\n{\n \"_id\": ObjectId(\"238478293\"),\n \"title\": \"MongoDB TV\",\n \"description\": \"All your MongoDB updates, news, videos, and podcast episodes, straight to you!\",\n \"genre\": [\"Programming\", \"Database\", \"MongoDB\"],\n \"seasons\": [\n {\n \"seasonNumber\": 1,\n \"episodes\": [\n {\n \"episodeNumber\": 1,\n \"title\": \"EASY: Build Generative AI Applications\",\n \"description\": \"Join Jesse Hall\u2026.\",\n \"date\": ISODate(\"Oct 5, 2023\")\n },\n {\n \"episodeNumber\": 2,\n \"title\": \"RAG Architecture & MongoDB: The Future of Generative AI Apps\",\n \"description\": \"Join Prakul Agarwal\u2026\",\n \"date\": ISODate(\"Oct 4, 2023\")\n }\n ]\n },\n {\n \"seasonNumber\": 2,\n \"episodes\": [\n {\n \"episodeNumber\": 1,\n \"title\": \"Cloud Connect - Harness the Power of AI/ML and Generative AI on AWS with MongoDB Atlas\",\n \"description\": \"Join Igor Alekseev\u2026.\",\n \"date\": ISODate(\"Oct 3, 2023\")\n },\n {\n \"episodeNumber\": 2,\n \"title\": \"The Index: Here\u2019s what you missed last week\u2026\",\n \"description\": \"Join Megan Grant\u2026\",\n \"date\": ISODate(\"Oct 2, 2023\")\n }\n ]\n }\n ]\n}\n```\n\nNow that we have our example set up, let\u2019s incorporate vector embeddings and discuss the proper techniques to set you up for success.\n\n## Integrating vector embeddings for vector search in our data model \n\nLet\u2019s first understand exactly what vector search is: Vector search is the way to search based on *meaning* rather than specific words. This comes in handy when querying using similarities rather than searching based on keywords. When using vector search, you can query using a question or a phrase rather than just a word. In a nutshell, vector search is great for when you can\u2019t think of *exactly* that book or movie, but you remember the plot or the climax. \n\nThis process happens when text, video, or audio is transformed via an encoder into vectors. With MongoDB, we can do this using OpenAI, Hugging Face, or other natural language processing models. Once we have our vectors, we can upload them at the base of our document and conduct vector search using them. Please keep in mind the current limitations of vector search and how to properly embed your vectors. \n\nYou can store your vector embeddings alongside other data in your document, or you can store them in a new collection. It is really up to the user and the project goals. 
Let\u2019s go over what a document with vector embeddings can look like when you incorporate them into your data model, using the same example from above: \n\n```\n{\n \"_id\": ObjectId(\"238478293\"),\n \"title\": \"MongoDB TV\",\n \"description\": \"All your MongoDB updates, news, videos, and podcast episodes, straight to you!\",\n \"genre\": [\"Programming\", \"Database\", \"MongoDB\"],\n \"vectorEmbeddings\": [ 0.25, 0.5, 0.75, 0.1, 0.1, 0.8, 0.2, 0.6, 0.6, 0.4, 0.9, 0.3, 0.2, 0.7, 0.5, 0.8, 0.1, 0.8, 0.2, 0.6 ],\n \"seasons\": [\n {\n \"seasonNumber\": 1,\n \"episodes\": [\n {\n \"episodeNumber\": 1,\n \"title\": \"EASY: Build Generative AI Applications\",\n \"description\": \"Join Jesse Hall\u2026.\",\n \"date\": ISODate(\"Oct 5, 2023\")\n },\n {\n \"episodeNumber\": 2,\n \"title\": \"RAG Architecture & MongoDB: The Future of Generative AI Apps\",\n \"description\": \"Join Prakul Agarwal\u2026\",\n \"date\": ISODate(\"Oct 4, 2023\")\n }\n ]\n },\n {\n \"seasonNumber\": 2,\n \"episodes\": [\n {\n \"episodeNumber\": 1,\n \"title\": \"Cloud Connect - Harness the Power of AI/ML and Generative AI on AWS with MongoDB Atlas\",\n \"description\": \"Join Igor Alekseev\u2026.\",\n \"date\": ISODate(\"Oct 3, 2023\")\n },\n {\n \"episodeNumber\": 2,\n \"title\": \"The Index: Here\u2019s what you missed last week\u2026\",\n \"description\": \"Join Megan Grant\u2026\",\n \"date\": ISODate(\"Oct 2, 2023\")\n }\n ]\n }\n ]\n}\n```\nHere, you have your vector embeddings at the base of your document. Currently, there is a limitation where vector embeddings cannot be nested in an array in your document. Please ensure your document has your embeddings at the base. There are various tutorials on our Developer Center, alongside our YouTube account and our documentation, that can help you figure out how to embed these vectors into your document and how to acquire the necessary vectors in the first place. \n\n## Extras: Indexing with vector search\n\nWhen you\u2019re using vector search, it is necessary to create a search index so you\u2019re able to be successful with your semantic search. To do this, please view our Vector Search documentation. Here is the skeleton code provided by our documentation:\n\n```\n{\n \"fields\": [\n {\n \"type\": \"vector\",\n \"path\": \"<field-to-index>\",\n \"numDimensions\": <number-of-dimensions>,\n \"similarity\": \"euclidean | cosine | dotProduct\"\n },\n {\n \"type\": \"filter\",\n \"path\": \"<field-to-index>\"\n },\n ...\n ]\n}\n```\n\nWhen setting up your search index, you want to change the \u201c<field-to-index>\u201d value under \u201cpath\u201d to be your vector path. In our case, it would be \u201cvectorEmbeddings\u201d. \u201ctype\u201d can stay the way it is. For \u201cnumDimensions\u201d, please match the dimensions of the model you\u2019ve chosen. This is just the number of vector dimensions, and the value cannot be greater than 4096. This limitation comes from the base embedding model that is being used, so please ensure you\u2019re using a supported LLM (large language model) such as OpenAI or Hugging Face. When using one of these, there won\u2019t be any issues running into vector dimensions. For \u201csimilarity\u201d, please pick which vector function you want to use to search for the top K-nearest neighbors. \n\n## Extras: Querying with vector search\n\nWhen you\u2019re ready to query and find results from your embedded documents, it\u2019s time to create an aggregation pipeline on your embedded vector data. To do this, you can use the \u201c$vectorSearch\u201d operator, which is a new aggregation stage in Atlas. It helps execute an Approximate Nearest Neighbor query. 
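\n\nTo make that concrete, here is a minimal sketch of what such a pipeline could look like for our example. This is an illustrative sketch rather than code from the tutorial: it assumes a collection named \"shows\", an Atlas Vector Search index named \"vector_index\" defined on the \"vectorEmbeddings\" field, and a query vector produced by the same embedding model used for the documents (we reuse the 20-dimension example vector from above).\n\n```\ndb.shows.aggregate([\n {\n \"$vectorSearch\": {\n \"index\": \"vector_index\", // name of the Atlas Vector Search index (assumed)\n \"path\": \"vectorEmbeddings\", // field holding the embeddings at the base of the document\n \"queryVector\": [ 0.25, 0.5, 0.75, 0.1, 0.1, 0.8, 0.2, 0.6, 0.6, 0.4, 0.9, 0.3, 0.2, 0.7, 0.5, 0.8, 0.1, 0.8, 0.2, 0.6 ], // embedding of the search phrase\n \"numCandidates\": 100, // number of nearest neighbors to consider\n \"limit\": 5 // number of documents to return\n }\n },\n {\n \"$project\": {\n \"title\": 1,\n \"description\": 1,\n \"score\": { \"$meta\": \"vectorSearchScore\" } // surfaces the relevance score of each match\n }\n }\n])\n```\n\nThe \"numCandidates\" and \"limit\" values control how many nearest neighbors are considered and how many documents are returned, and they can be tuned to balance recall against latency.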
It helps execute an Approximate Nearest Neighbor query. \n\nFor more information on this step, please check out the tutorial on Developer Center about [building generative AI applications, and our YouTube video on vector search.\n\n", "format": "md", "metadata": {"tags": ["MongoDB", "AI"], "pageDescription": "Follow along with this comprehensive tutorial on how to properly model your documents for MongoDB Vector Search.", "contentType": "Tutorial"}, "title": "How to Model Your Documents for Vector Search", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/python/dog-care-example-app", "action": "created", "body": "# Example Application for Dog Care Providers (DCP)\n\n## Creator\nRadvile Razmute contributed this project.\n\n## About the project\n\nMy project explores how to use MongoDB Shell, MongoDB Atlas, and MongoDB Compass. This project aimed to develop a database for dog care providers and demonstrate how this data can be manipulated in MongoDB. The Dog Welfare Federation (DWF) is concerned that some providers who provide short/medium term care for dogs when the owner is unable to \u2013 e.g., when away on holidays, may not be delivering the service they promise. Up to now, the DWF has managed the data using a SQL database. As the scale of its operations expanded, the organization needed to invest in a cloud database application. As an alternative to the relational SQL database, the Dog Welfare Federation decided to look at the database development using MongoDB services.\n\nThe Dog database uses fictitious data that I have created myself. The different practical stages of the project have been documented in my project report and may guide the beginners taking their first steps into MongoDB. \n\n## Inspiration\n\nThe assignment was given to me by my lecturer. And when he was deciding on the topics for the project, he knew that I love dogs. And that's why my project was all about the dogs. Even though the lecturer gave me the assignment, it was my idea to prepare this project in a way that does not only benefit me. \n\nWhen I followed courses via MongoDB University, I noticed that these courses gave me a flavor of MongoDB, but not the basic concepts. I wanted to turn a database development project into a kind of a guide for somebody who never used MongoDB and who actually can take the project say: \"Okay, these are the basic concepts, this is what happens when you run the query, this is the result of what you get, and this is how you can validate that your result and your query is correct.\" So that's how the whole MongoDB project for beginners was born. \n\nMy guide tells you how to use MongoDB, what steps you need to follow to create an application, upload data, use the data, etc. It's one thing to know what those operators are doing, but it's an entirely different thing to understand how they connect and what impact they make. \n\n## Why MongoDB?\n \nMy lecturer Noel Tierney, a lecturer in Computer Applications in Athlone Institute of Technology, Ireland, gave me the assignment to use MongoDB. He gave them instructions on the project and what kind of outcome he would like to see. I was asked to use MongoDB, and I decided to dive deeper into everything the platform offers. Besides that, as I mentioned briefly in the introduction: the organization DWF was planning on scaling and expanding their business, and they wanted to look into database development with MongoDB. 
This was a good chance for me to learn everything about NoSQL. \n\n \n ## How it works\n \nThe project teaches you how to set up a MongoDB database for dog care providers. It includes three main sections, including MongoDB Shell, MongoDB Atlas, and MongoDB Compass. The MongoDB Shell section demonstrates how the data can be manipulated using simple queries and the aggregation method. I'm discussing how to import data into a local cluster, create queries, and retrieve & update queries. The other two areas include an overview of MongoDB Atlas and MongoDB Compass; I also discuss querying and the aggregation framework per topic. Each section shows step-by-step instructions on how to set up the application and how also to include some data manipulation examples. As mentioned above, I created all the sample data myself, which was a ton of work! I made a spreadsheet with 2000 different lines of sample data. To do that, I had to Google dog breeds, dog names, and their temperaments. I wanted it to be close to reality. \n\n \n## Challenges and learning\n\nWhen I started working with MongoDB, the first big thing that I had to get over was the braces everywhere. So it was quite challenging for me to understand where the query finishes. But I\u2019ve been reading a lot of documentation, and creating this guide gave me quite a good understanding of the basics of MongoDB. I learned a lot about the technical side of databases because I was never familiar with them; I even had no idea how it works. Using MongoDB and learning about MongoDB, and using MongoDB was a great experience. When I had everything set up: the MongoDB shell, Compass, and Atlas, I could see how that information is moving between all these different environments, and that was awesome. I think it worked quite well. I hope that my guide will be valuable for new learners. It demonstrates that users like me, who had no prior skills in using MongoDB, can quickly become MongoDB developers.\n\nAccess the complete report, which includes the queries you need - here.\n", "format": "md", "metadata": {"tags": ["Python", "MongoDB"], "pageDescription": " Learn MongoDB by creating a database for dog care providers!", "contentType": "Code Example"}, "title": "Example Application for Dog Care Providers (DCP)", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/leafsteroidsresources", "action": "created", "body": "# Leafsteroid Resources\n\n \nLeafsteroids is a MongoDB Demo showing the following services and integrations\n------------------------------------------------------------------------\n\n**Atlas App Services** \nAll in one backend. Atlas App Services offers a full-blown REST service using Atlas Functions and HTTPS endpoints. \n\n**Atlas Search** \nUsed to find the player nickname in the Web UI. \n\n**Atlas Charts** \nEvent & personalized player dashboards accessible over the web. Built-in visualization right with your data. No additional tools required. \n\n**Document Model** \nEvery game run is a single document demonstrating rich documents and \u201cdata that works together lives together\u201d, while other data entities are simple collections (configuration). \n\n**AWS Beanstalk** Hosts the Blazor Server Application (website). \n\n**AWS EC2** \nUsed internally by AWS Beanstalk. Used to host our Python game server. \n\n**AWS S3** \nUsed internally by AWS Beanstalk. \n\n**AWS Private Cloud** \nPrivate VPN connection between AWS and MongoDB. 
\n\n \n\n**At a MongoDB .local Event and want to register to play Leafsteroids? Register Here**\n\nYou can build & play Leafsteroids yourself with the following links\n\n## Development Resources \n|Resource| Link|\n|---|---|\n|Github Repo |Here|\n|MongoDB TV Livestream\n|Here|\n|MongoDB & AWS |Here|\n|MongoDB on the AWS Marketplace\n|Here|\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "", "contentType": "Tutorial"}, "title": "Leafsteroid Resources", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/create-first-stream-processor", "action": "created", "body": "# Get Started with Atlas Stream Processing: Creating Your First Stream Processor\n\n>Atlas Stream Processing is now available. Learn more about it here.\n\nIf you're not already familiar, Atlas Stream Processing enables processing high-velocity streams of complex data using the same data model and Query API that's used in MongoDB Atlas databases. Streaming data is increasingly critical to building responsive, event-driven experiences for your customers. Stream processing is a fundamental building block powering these applications, by helping to tame the firehouse of data coming from many sources, by finding important events in a stream, or by combining data in motion with data in rest. \n\nIn this tutorial, we will create a stream processor that uses sample data included in Atlas Stream Processing. By the end of the tutorial, you will have an operational Stream Processing Instance (SPI) configured with a stream processor. This environment can be used for further experimentation and Atlas Stream Processing tutorials in the future. \n\n### Tutorial Prerequisites \nThis is what you'll need to follow along:\n* An Atlas user with atlasAdmin permission. For the purposes of this tutorial, we'll have the user \"tutorialuser\". \n* MongoDB shell (Mongosh) version 2.0+\n\n## Create the Stream Processing Instance \n\nLet's first create a Stream Processing Instance (SPI). Think of an SPI as a logical grouping of one or more stream processors. When created, the SPI has a connection string similar to a typical MongoDB Atlas cluster. \n\nUnder the Services tab in the Atlas Project click, \"Stream Processing\". Then click the \"Create Instance\" button. \n\nThis will launch the Create Instance dialog. \n\nEnter your desired cloud provider and region, and then click \"Create\". You will receive a confirmation dialog upon successful creation. \n\n## Configure the connection registry \n\nThe connection registry stores connection information to the external data sources you wish to use within a stream processor. In this example, we will use a sample data generator that is available without any extra configuration, but typically you would connect to either Kafka or an Atlas database as a source. \n\nTo manage the connection registry, click on \"Configure\" to navigate to the configuration screen. \n\nOnce on the configuration screen, click on the \"Connection Registry\" tab. \n\nNext, click on the \"Add Connection\" button. This will launch the Add Connection dialog. \n\nFrom here, you can add connections to Kafka, other Atlas clusters within the project, or a sample stream. In this tutorial, we will use the Sample Stream connection. Click on \"Sample Stream\" and select \"sample_stream_solar\" from the list of available sample streams. Then, click \"Add Connection\". \n\nThe new \"sample_stream_solar\" will show up in the list of connections. 
\n\n## Connect to the Stream Processing Instance (SPI)\n\nNow that we have both created the SPI and configured the connection in the connection registry, we can create a stream processor. First, we need to connect to the SPI that we created previously. This can be done using the MongoDB Shell (mongosh). \n\nTo obtain the connection string to the SPI, return to the main Stream Processing page by clicking on the \"Stream Processing\" menu under the Services tab. \n\nNext, locate the \"Tutorial\" SPI we just created and click on the \"Connect\" button. This will present a connection dialog similar to what is found when connecting to MongoDB Atlas clusters. \n\nFor connecting, we'll need to add a connection IP address and create a database user, if we haven't already. \n\nThen we'll choose our connection method. If you do not already have mongosh installed, install it using the instructions provided in the dialog. \n\nOnce mongosh is installed, copy the connection string from the \"I have the MongoDB Shell installed\" view and run it in your terminal. \n\n```\nCommand Terminal > mongosh <> --tls --authenticationDatabase admin --username tutorialuser\n\nEnter password: *******************\n\nCurrent Mongosh Log ID: 64e9e3bf025581952de31587\nConnecting to: mongodb://*****\nUsing MongoDB: 6.2.0\nUsing Mongosh: 2.0.0\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\nAtlasStreamProcessing>\n\n```\nTo confirm your sample_stream_solar is added as a connection, issue `sp.listConnections()`. Our connection to sample_stream_solar is shown as expected.\n\n```\nAtlasStreamProcessing> sp.listConnections()\n{\n ok: 1,\n connections: \n {\n name: 'sample_stream_solar',\n type: 'inmemory',\n createdAt: ISODate(\"2023-08-26T18:42:48.357Z\")\n } \n ]\n}\n```\n\n## Create a stream processor\nIf you are reading through this post as a prerequisite to another tutorial, you can return to that tutorial now to continue.\n\nIn this section, we will wrap up by creating a simple stream processor to process the sample_stream_solar source that we have used throughout this tutorial. This sample_stream_solar source represents the observed energy production of different devices (unique solar panels). Stream processing could be helpful in measuring characteristics such as panel efficiency or when replacement is required for a device that is no longer producing energy at all.\n\nFirst, let's define a [$source stage to describe where Atlas Stream Processing will read the stream data from. \n\n```\nvar solarstream={$source:{\"connectionName\": \"sample_stream_solar\"}}\n```\nNow we will issue .process to view the contents of the stream in the console. \n`sp.process(solarstream])`\n\n.process lets us sample our source data and quickly test the stages of a stream processor to ensure that it is set up as intended. A sample of this data is as follows:\n\n```\n{\n device_id: 'device_2',\n group_id: 3,\n timestamp: '2023-08-27T13:51:53.375+00:00',\n max_watts: 250,\n event_type: 0,\n obs: {\n watts: 168,\n temp: 15\n },\n _ts: ISODate(\"2023-08-27T13:51:53.375Z\"),\n _stream_meta: {\n sourceType: 'sampleData',\n timestamp: ISODate(\"2023-08-27T13:51:53.375Z\")\n }\n}\n```\n## Wrapping up\n\nIn this tutorial, we started by introducing Atlas Stream Processing and why stream processing is a building block for powering modern applications. 
We then walked through the basics of creating a stream processor \u2013 we created a Stream Processing Instance, configured a source in our connection registry using sample solar data (included in Atlas Stream Processing), connected to a Stream Processing Instance, and finally tested our first stream processor using .process. You are now ready to explore Atlas Stream Processing and create your own stream processors, adding advanced functionality like windowing and validation.\n\nIf you enjoyed this tutorial and would like to learn more check out the [MongoDB Atlas Stream Processing announcement blog post. For more on stream processors in Atlas Stream Processing, visit our documentation. \n\n### Learn more about MongoDB Atlas Stream Processing\n\nFor more on managing stream processors in Atlas Stream Processing, visit our documentation. \n\n>Log in today to get started. Atlas Stream Processing is now available to all developers in Atlas. Give it a try today!", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to create a stream processor end-to-end using MongoDB Atlas Stream Processing.", "contentType": "Tutorial"}, "title": "Get Started with Atlas Stream Processing: Creating Your First Stream Processor", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/instant-graphql-apis-mongodb-grafbase", "action": "created", "body": "# Instant GraphQL APIs for MongoDB with Grafbase\n\n# Instant GraphQL APIs for MongoDB with Grafbase\n\nIn the ever-evolving landscape of web development, efficient data management and retrieval are paramount for creating dynamic and responsive applications. MongoDB, a versatile NoSQL database, and GraphQL, a powerful query language for APIs, have emerged as a dynamic duo that empowers developers to build robust, flexible, and high-performance applications.\n\nWhen combined, MongoDB and GraphQL offer a powerful solution for front-end developers, especially when used at the edge.\n\nYou may be curious about the synergy between an unstructured database and a structured query language. Fortunately, Grafbase offers a solution that seamlessly combines both by leveraging its distinctive connector schema transformations.\n\n## Prerequisites\n\nIn this tutorial, you\u2019ll see how easy it is to get set up with MongoDB and Grafbase, simplifying the introduction of GraphQL into your applications. \n\nYou will need the following to get started:\n\n- An account with Grafbase\n- An account with MongoDB Atlas\n- A database with data API access enabled\n\n## Enable data API access\n\nYou will need a database with MongoDB Atlas to follow along \u2014 create one now!\n\nFor the purposes of this tutorial, I\u2019ve created a free shared cluster with a single database deployment. 
We\u2019ll refer to this instance as your \u201cData Source\u201d later.\n\n through the `g.datasource(mongodb)` call.\n\n## Create models for data\n\nThe MongoDB connector empowers developers to organize their MongoDB collections in a manner that allows Grafbase to autonomously generate the essential queries and mutations for document creation, retrieval, update, and deletion within these collections.\n\nWithin Grafbase, each configuration for a collection is referred to as a \"model,\" and you have the flexibility to employ the supported GraphQL Scalars to represent data within the collection(s).\n\nIt's important to consider that in cases where you possess pre-existing documents in your collection, not all fields are applicable to every document.\n\nLet\u2019s work under the assumption that you have no existing documents and want to create a new collection for `users`. Using the Grafbase TypeScript SDK, we can write the schema for each user model. It looks something like this:\n\n```ts\nconst address = g.type('Address', {\n street: g.string().mapped('street_name')\n})\n\nmongodb\n .model('User', {\n name: g.string(),\n email: g.string().optional(),\n address: g.ref(address)\n })\n .collection('users')\n```\n\nThis schema will generate a fully working GraphQL API with queries and mutations as well as all input types for pagination, ordering, and filtering:\n\n- `userCreate` \u2013 Create a new user\n- `userCreateMany` \u2013 Batch create new users\n- `userUpdate` \u2013 Update an existing user\n- `userUpdateMany` \u2013 Batch update users\n- `userDelete` \u2013 Delete a user\n- `userDeleteMany` \u2013 Batch delete users\n- `user` \u2013 Fetch a single user record\n- `userCollection` \u2013 Fetch multiple users from a collection\n\nMongoDB automatically generates collections when you first store data, so there\u2019s no need to manually create a collection for users at this step.\n\nWe\u2019re now ready to start the Grafbase development server using the CLI:\n\n```bash\nnpx grafbase dev\n```\n\nThis command runs the entire Grafbase GraphQL API locally that you can use when developing your front end. 
The Grafbase API communicates directly with your Atlas Data API.\n\nOnce the command is running, you\u2019ll be able to visit http://127.0.0.1:4000 and explore the GraphQL API.\n\n## Insert users with GraphQL to MongoDB instance\n\nLet\u2019s test out creating users inside our MongoDB collection using the generated `userCreate` mutation that was provided to us by Grafbase.\n\nUsing Pathfinder at http://127.0.0.1:4000, execute the following mutation:\n\n```\nmutation {\n mongo {\n userCreate(input: {\n name: \"Jamie Barton\",\n email: \"jamie@grafbase.com\",\n age: 40\n }) {\n insertedId\n }\n }\n}\n```\n\nIf everything is hooked up correctly, you should see a response that looks something like this:\n\n```json\n{\n \"data\": {\n \"mongo\": {\n \"userCreate\": {\n \"insertedId\": \"65154a3d4ddec953105be188\"\n }\n }\n }\n}\n```\n\nYou should repeat this step a few times to create multiple users.\n\n## Update user by ID\n\nNow we\u2019ve created some users in our MongoDB collection, let\u2019s try updating a user by `insertedId`:\n\n```\nmutation {\n mongo {\n userUpdate(by: {\n id: \"65154a3d4ddec953105be188\"\n }, input: {\n age: {\n set: 35\n }\n }) {\n modifiedCount\n }\n }\n}\n```\n\nUsing the `userUpdate` mutation above, we `set` a new `age` value for the user where the `id` matches that of the ObjectID we passed in.\n\nIf everything was successful, you should see something like this:\n\n```json\n{\n \"data\": {\n \"mongo\": {\n \"userUpdate\": {\n \"modifiedCount\": 1\n }\n }\n }\n}\n```\n\n## Delete user by ID\n\nDeleting users is similar to the create and update mutations above, but we don\u2019t need to provide any additional `input` data since we\u2019re deleting only:\n\n```\nmutation {\n mongo {\n userDelete(by: {\n id: \"65154a3d4ddec953105be188\"\n }) {\n deletedCount\n }\n }\n}\n```\n\nIf everything was successful, you should see something like this:\n\n```json\n{\n \"data\": {\n \"mongo\": {\n \"userDelete\": {\n \"deletedCount\": 1\n }\n }\n }\n}\n```\n\n## Fetch all users\n\nGrafbase generates the query `userCollection` that you can use to fetch all users. Grafbase requires a `first` or `last` pagination value with a max value of `100`:\n\n```\nquery {\n mongo {\n userCollection(first: 100) {\n edges {\n node {\n id\n name\n email\n age\n }\n }\n }\n }\n}\n```\n\nHere we are fetching the `first` 100 users from the collection. You can also pass a filter and order argument to tune the results:\n\n```\nquery {\n mongo {\n userCollection(first: 100, filter: {\n age: {\n gt: 30\n }\n }, orderBy: {\n age: ASC\n }) {\n edges {\n node {\n id\n name\n email\n age\n }\n }\n }\n }\n}\n```\n\n## Fetch user by ID\n\nUsing the same GraphQL API, we can fetch a user by the object ID. Grafbase automatically generates the query `user` where we can pass the `id` to the `by` input type:\n\n```\nquery {\n mongo {\n user(\n by: {\n id: \"64ee1cfbb315482287acea78\"\n }\n ) {\n id\n name\n email\n age\n }\n }\n}\n```\n\n## Enable faster responses with GraphQL Edge Caching\n\nEvery request we make so far to our GraphQL API makes a round trip to the MongoDB database. This is fine, but we can improve response times even further by enabling GraphQL Edge Caching for GraphQL queries.\n\nTo enable GraphQL Edge Caching, inside `grafbase/grafbase.config.ts`, add the following to the `config` export:\n\n```ts\nexport default config({\n schema: g,\n cache: {\n rules: \n {\n types: 'Query',\n maxAge: 60\n }\n ]\n }\n})\n```\n\nThis configuration will cache any query. 
If you only want to disable caching on some collections, you can do that too. [Learn more about GraphQL Edge Caching.\n\n## Deploy to the edge\n\nSo far, we\u2019ve been working with Grafbase locally using the CLI, but now it\u2019s time to deploy this around the world to the edge with GitHub.\n\nIf you already have an existing GitHub repository, go ahead and commit the changes we\u2019ve made so far. If you don\u2019t already have a GitHub repository, you will need to create one, commit this code, and push it to GitHub.\n\nNow, create a new project with Grafbase and connect your GitHub account. You\u2019ll need to permit Grafbase to read your repository contents, so make sure you select the correct repository and allow that.\n\nBefore you click **Deploy**, make sure to insert the environment variables obtained previously in the tutorial. Grafbase also supports environment variables for preview environments, so if you want to use a different MongoDB database for any Grafbase preview deployment, you can configure that later.\n\n, URQL, and Houdini.\n\nIf you have questions or comments, continue the conversation over in the MongoDB Developer Community.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt86a1fb09aa5e51ae/65282bf00749064f73257e71/image6.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt67f4040e41799bbc/65282c10814c6c262bc93103/image1.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt75ca38cd9261e241/65282c30ff3bbd5d44ad0aa3/image4.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltaf2a2af39e731dbe/65282c54391807638d3b0e1d/image5.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0c9563b3fdbf34fd/65282c794824f57358f273cf/image3.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt731c99011d158491/65282ca631f9bbb92a9669ad/image2.png", "format": "md", "metadata": {"tags": ["Atlas", "TypeScript", "GraphQL"], "pageDescription": "Learn how to quickly and easily create a GraphQL API from your MongoDB data with Grafbase.", "contentType": "Tutorial"}, "title": "Instant GraphQL APIs for MongoDB with Grafbase", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/exploring-window-operators-atlas-stream-processing", "action": "created", "body": "# Exploring Window Operators in Atlas Stream Processing\n\n> Atlas Stream Processing is now available. Learn more about it here.\n\nIn our previous post on windowing, we introduced window operators available in Atlas Stream Processing. Window operators are one of the most commonly used operations to effectively process streaming data. Atlas Stream Processing provides two window operators: $tumblingWindow and $hoppingWindow. In this tutorial, we will explore both of these operators using the sample solar data generator provided within Atlas Stream Processing.\n\n## Getting started\n\nBefore we begin creating stream processors, make sure you have a database user who has \u201catlasAdmin\u201d access to the Atlas Project. Also, if you do not already have a Stream Processing Instance created with a connection to the sample_stream_solar data generator, please follow the instructions in Get Started with Atlas Stream Processing: Creating Your First Stream Processor and then continue on.\n\n## View the solar stream sample data\n\nFor this tutorial, we will be using the MongoDB shell. 
\n\nFirst, confirm sample_stream_solar is added as a connection by issuing `sp.listConnections()`.\n\n```\nAtlasStreamProcessing> sp.listConnections()\n{\n ok: 1,\n connections: \n {\n name: 'sample_stream_solar',\n type: 'inmemory',\n createdAt: ISODate(\"2023-08-26T18:42:48.357Z\")\n } \n ]\n}\n```\n\nNext, let\u2019s define a **$source** stage to describe where Atlas Stream Processing will read the stream data from.\n\n```\nvar solarstream={ $source: { \"connectionName\": \"sample_stream_solar\" } }\n```\n\nThen, issue a **.process** command to view the contents of the stream on the console.\n\n```\nsp.process([solarstream])\n```\n\nYou will see the stream of solar data printed on the console. A sample of this data is as follows:\n\n```json\n{\n device_id: 'device_2',\n group_id: 3,\n timestamp: '2023-08-27T13:51:53.375+00:00',\n max_watts: 250,\n event_type: 0,\n obs: {\n watts: 168,\n temp: 15\n },\n _ts: ISODate(\"2023-08-27T13:51:53.375Z\"),\n _stream_meta: {\n sourceType: 'sampleData',\n timestamp: ISODate(\"2023-08-27T13:51:53.375Z\")\n }\n}\n```\n\n## Create a tumbling window query\n\nA tumbling window is a fixed-size window that moves forward in time at regular intervals. In Atlas Stream Processing, you use the [$tumblingWindow operator. In this example, let\u2019s use the operator to compute the average watts over one-minute intervals.\n\nRefer back to the schema from the sample stream solar data. To create a tumbling window, let\u2019s create a variable and define our tumbling window stage. \n\n```javascript\nvar Twindow= { \n $tumblingWindow: { \n interval: { size: NumberInt(1), unit: \"minute\" },\n pipeline: \n { \n $group: { \n _id: \"$device_id\", \n max: { $max: \"$obs.watts\" }, \n avg: { $avg: \"$obs.watts\" } \n }\n }\n ]\n } \n}\n```\n\nWe are calculating the maximum value and average over the span of one-minute, non-overlapping intervals. Let\u2019s use the `.process` command to run the streaming query in the foreground and view our results in the console.\n\n```\nsp.process([solarstream,Twindow])\n```\n\nHere is an example output of the statement:\n\n```json\n{\n _id: 'device_4',\n max: 236,\n avg: 95,\n _stream_meta: {\n sourceType: 'sampleData',\n windowStartTimestamp: ISODate(\"2023-08-27T13:59:00.000Z\"),\n windowEndTimestamp: ISODate(\"2023-08-27T14:00:00.000Z\")\n }\n}\n{\n _id: 'device_2',\n max: 211,\n avg: 117.25,\n _stream_meta: {\n sourceType: 'sampleData',\n windowStartTimestamp: ISODate(\"2023-08-27T13:59:00.000Z\"),\n windowEndTimestamp: ISODate(\"2023-08-27T14:00:00.000Z\")\n }\n}\n```\n\n## Exploring the window operator pipeline\n\nThe pipeline that is used within a window function can include blocking stages and non-blocking stages. \n\n[Accumulator operators such as `$avg`, `$count`, `$sort`, and `$limit` can be used within blocking stages. Meaningful data returned from these operators are obtained when run over a series of data versus a single data point. This is why they are considered blocking. \n\nNon-blocking stages do not require multiple data points to be meaningful, and they include operators such as `$addFields`, `$match`, `$project`, `$set`, `$unset`, and `$unwind`, to name a few. You can use non-blocking before, after, or within the blocking stages. To illustrate this, let\u2019s create a query that shows the average, maximum, and delta (the difference between the maximum and average). 
We will use a non-blocking **$match** to show only the results from device_1, calculate the tumblingWindow showing maximum and average, and then include another non-blocking `$addFields`. \n\n```\nvar m= { '$match': { device_id: 'device_1' } }\n```\n\n```javascript\nvar Twindow= {\n '$tumblingWindow': {\n interval: { size: Int32(1), unit: 'minute' },\n pipeline: \n {\n '$group': {\n _id: '$device_id',\n max: { '$max': '$obs.watts' },\n avg: { '$avg': '$obs.watts' }\n }\n }\n ]\n }\n}\n\nvar delta = { '$addFields': { delta: { '$subtract': ['$max', '$avg'] } } }\n```\n\nNow we can use the .process command to run the stream processor in the foreground and view our results in the console.\n\n```\nsp.process([solarstream,m,Twindow,delta])\n```\n\nThe results of this query will be similar to the following:\n\n```json\n{\n _id: 'device_1',\n max: 238,\n avg: 75.3,\n _stream_meta: {\n sourceType: 'sampleData',\n windowStartTimestamp: ISODate(\"2023-08-27T19:11:00.000Z\"),\n windowEndTimestamp: ISODate(\"2023-08-27T19:12:00.000Z\")\n },\n delta: 162.7\n}\n{\n _id: 'device_1',\n max: 220,\n avg: 125.08333333333333,\n _stream_meta: {\n sourceType: 'sampleData',\n windowStartTimestamp: ISODate(\"2023-08-27T19:12:00.000Z\"),\n windowEndTimestamp: ISODate(\"2023-08-27T19:13:00.000Z\")\n },\n delta: 94.91666666666667\n}\n{\n _id: 'device_1',\n max: 238,\n avg: 119.91666666666667,\n _stream_meta: {\n sourceType: 'sampleData',\n windowStartTimestamp: ISODate(\"2023-08-27T19:13:00.000Z\"),\n windowEndTimestamp: ISODate(\"2023-08-27T19:14:00.000Z\")\n },\n delta: 118.08333333333333\n}\n```\n\nNotice the time segments and how they align on the minute. \n\n![Time segments aligned on the minute][1]\n\nAdditionally, notice that the output includes the difference between the calculated values of maximum and average for each window.\n\n## Create a hopping window\n\nA hopping window, sometimes referred to as a sliding window, is a fixed-size window that moves forward in time at overlapping intervals. In Atlas Stream Processing, you use the `$hoppingWindow` operator. 
In this example, let\u2019s use the operator to see the average.\n\n```javascript\nvar Hwindow = {\n '$hoppingWindow': {\n interval: { size: 1, unit: 'minute' },\n hopSize: { size: 30, unit: 'second' },\n pipeline: [\n {\n '$group': {\n _id: '$device_id',\n max: { '$max': '$obs.watts' },\n avg: { '$avg': '$obs.watts' }\n }\n }\n ]\n }\n}\n```\n\nTo help illustrate the start and end time segments, let's create a filter to only return device_1.\n\n```\nvar m = { '$match': { device_id: 'device_1' } }\n```\n\nNow let\u2019s issue the `.process` command to view the results in the console.\n\n```\nsp.process([solarstream,m,Hwindow])\n```\n\nAn example result is as follows:\n\n```json\n{\n _id: 'device_1',\n max: 238,\n avg: 76.625,\n _stream_meta: {\n sourceType: 'sampleData',\n windowStartTimestamp: ISODate(\"2023-08-27T19:37:30.000Z\"),\n windowEndTimestamp: ISODate(\"2023-08-27T19:38:30.000Z\")\n }\n}\n{\n _id: 'device_1',\n max: 238,\n avg: 82.71428571428571,\n _stream_meta: {\n sourceType: 'sampleData',\n windowStartTimestamp: ISODate(\"2023-08-27T19:38:00.000Z\"),\n windowEndTimestamp: ISODate(\"2023-08-27T19:39:00.000Z\")\n }\n}\n{\n _id: 'device_1',\n max: 220,\n avg: 105.54545454545455,\n _stream_meta: {\n sourceType: 'sampleData',\n windowStartTimestamp: ISODate(\"2023-08-27T19:38:30.000Z\"),\n windowEndTimestamp: ISODate(\"2023-08-27T19:39:30.000Z\")\n }\n}\n```\n\nNotice the time segments.\n\n![Overlapping time segments][2]\n\nThe time segments are overlapping by 30 seconds as was defined by the hopSize option. Hopping windows are useful to capture short-term patterns in data. \n\n## Summary\n\nBy continuously processing data within time windows, you can generate real-time insights and metrics, which can be crucial for applications like monitoring, fraud detection, and operational analytics. Atlas Stream Processing provides both tumbling and hopping window operators. Together these operators enable you to perform various aggregation operations such as sum, average, min, and max over a specific window of data. In this tutorial, you learned how to use both of these operators with solar sample data. \n\n### Learn more about MongoDB Atlas Stream Processing\n\nCheck out the [MongoDB Atlas Stream Processing announcement blog post. For more on window operators in Atlas Stream Processing, learn more in our documentation. \n\n>Log in today to get started. Atlas Stream Processing is available to all developers in Atlas. Give it a try today!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt73ff54f0367cad3b/650da3ef69060a5678fc1242/image1.jpg\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt833bc1a824472d14/650da41aa5f15dea3afc5b55/image3.jpg", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to use the various window operators such as tumbling window and hopping window with MongoDB Atlas Stream Processing.", "contentType": "Tutorial"}, "title": "Exploring Window Operators in Atlas Stream Processing", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/python-quickstart-fastapi", "action": "created", "body": "# Getting Started with MongoDB and FastAPI\n\nFastAPI is a modern, high-performance, easy-to-learn, fast-to-code, production-ready, Python 3.6+ framework for building APIs based on standard Python type hints. 
While it might not be as established as some other Python frameworks such as Django, it is already in production at companies such as Uber, Netflix, and Microsoft.\n\nFastAPI is async, and as its name implies, it is super fast; so, MongoDB is the perfect accompaniment. In this quick start, we will create a CRUD (Create, Read, Update, Delete) app showing how you can integrate MongoDB with your FastAPI projects.\n\n## Prerequisites\n\n- Python 3.9.0\n- A MongoDB Atlas cluster. Follow the \"Get Started with Atlas\" guide to create your account and MongoDB cluster. Keep a note of your username, password, and connection string as you will need those later.\n\n## Running the Example\n\nTo begin, you should clone the example code from GitHub.\n\n``` shell\ngit clone git@github.com:mongodb-developer/mongodb-with-fastapi.git\n```\n\nYou will need to install a few dependencies: FastAPI, Motor, etc. I always recommend that you install all Python dependencies in a virtualenv for the project. Before running pip, ensure your virtualenv is active.\n\n``` shell\ncd mongodb-with-fastapi\npip install -r requirements.txt\n```\n\nIt may take a few moments to download and install your dependencies. This is normal, especially if you have not installed a particular package before.\n\nOnce you have installed the dependencies, you need to create an environment variable for your MongoDB connection string.\n\n``` shell\nexport MONGODB_URL=\"mongodb+srv://:@/?retryWrites=true&w=majority\"\n```\n\nRemember, anytime you start a new terminal session, you will need to set this environment variable again. I use direnv to make this process easier.\n\nThe final step is to start your FastAPI server.\n\n``` shell\nuvicorn app:app --reload\n```\n\nOnce the application has started, you can view it in your browser at .\n\nOnce you have had a chance to try the example, come back and we will walk through the code.\n\n## Creating the Application\n\nAll the code for the example application is within `app.py`. I'll break it down into sections and walk through what each is doing.\n\n### Connecting to MongoDB\n\nOne of the very first things we do is connect to our MongoDB database.\n\n``` python\nclient = motor.motor_asyncio.AsyncIOMotorClient(os.environ\"MONGODB_URL\"])\ndb = client.get_database(\"college\")\nstudent_collection = db.get_collection(\"students\")\n```\n\nWe're using the async [motor driver to create our MongoDB client, and then we specify our database name `college`.\n\n### The \\_id Attribute and ObjectIds\n\n``` python\n# Represents an ObjectId field in the database.\n# It will be represented as a `str` on the model so that it can be serialized to JSON.\nPyObjectId = Annotatedstr, BeforeValidator(str)]\n```\n\nMongoDB stores data as [BSON. FastAPI encodes and decodes data as JSON strings. BSON has support for additional non-JSON-native data types, including `ObjectId` which can't be directly encoded as JSON. Because of this, we convert `ObjectId`s to strings before storing them as the `id` field.\n\n### Database Models\n\nMany people think of MongoDB as being schema-less, which is wrong. MongoDB has a flexible schema. That is to say that collections do not enforce document structure by default, so you have the flexibility to make whatever data-modelling choices best match your application and its performance requirements. So, it's not unusual to create models when working with a MongoDB database. 
Our application has three models, the `StudentModel`, the `UpdateStudentModel`, and the `StudentCollection`.\n\n``` python\nclass StudentModel(BaseModel):\n \"\"\"\n Container for a single student record.\n \"\"\"\n\n # The primary key for the StudentModel, stored as a `str` on the instance.\n # This will be aliased to `_id` when sent to MongoDB,\n # but provided as `id` in the API requests and responses.\n id: OptionalPyObjectId] = Field(alias=\"_id\", default=None)\n name: str = Field(...)\n email: EmailStr = Field(...)\n course: str = Field(...)\n gpa: float = Field(..., le=4.0)\n model_config = ConfigDict(\n populate_by_name=True,\n arbitrary_types_allowed=True,\n json_schema_extra={\n \"example\": {\n \"name\": \"Jane Doe\",\n \"email\": \"jdoe@example.com\",\n \"course\": \"Experiments, Science, and Fashion in Nanophotonics\",\n \"gpa\": 3.0,\n }\n },\n )\n```\n\nThis is the primary model we use as the [response model for the majority of our endpoints.\n\nI want to draw attention to the `id` field on this model. MongoDB uses `_id`, but in Python, underscores at the start of attributes have special meaning. If you have an attribute on your model that starts with an underscore, pydantic\u2014the data validation framework used by FastAPI\u2014will assume that it is a private variable, meaning you will not be able to assign it a value! To get around this, we name the field `id` but give it an alias of `_id`. You also need to set `populate_by_name` to `True` in the model's `model_config`\n\nWe set this `id` value automatically to `None`, so you do not need to supply it when creating a new student.\n\n``` python\nclass UpdateStudentModel(BaseModel):\n \"\"\"\n A set of optional updates to be made to a document in the database.\n \"\"\"\n\n name: Optionalstr] = None\n email: Optional[EmailStr] = None\n course: Optional[str] = None\n gpa: Optional[float] = None\n model_config = ConfigDict(\n arbitrary_types_allowed=True,\n json_encoders={ObjectId: str},\n json_schema_extra={\n \"example\": {\n \"name\": \"Jane Doe\",\n \"email\": \"jdoe@example.com\",\n \"course\": \"Experiments, Science, and Fashion in Nanophotonics\",\n \"gpa\": 3.0,\n }\n },\n )\n```\n\nThe `UpdateStudentModel` has two key differences from the `StudentModel`:\n\n- It does not have an `id` attribute as this cannot be modified.\n- All fields are optional, so you only need to supply the fields you wish to update.\n\nFinally, `StudentCollection` is defined to encapsulate a list of `StudentModel` instances. 
In theory, the endpoint could return a top-level list of StudentModels, but there are some vulnerabilities associated with returning JSON responses with top-level lists.\n\n```python\nclass StudentCollection(BaseModel):\n \"\"\"\n A container holding a list of `StudentModel` instances.\n\n This exists because providing a top-level array in a JSON response can be a [vulnerability\n \"\"\"\n\n students: ListStudentModel]\n```\n\n### Application Routes\n\nOur application has five routes:\n\n- POST /students/ - creates a new student.\n- GET /students/ - view a list of all students.\n- GET /students/{id} - view a single student.\n- PUT /students/{id} - update a student.\n- DELETE /students/{id} - delete a student.\n\n#### Create Student Route\n\n``` python\n@app.post(\n \"/students/\",\n response_description=\"Add new student\",\n response_model=StudentModel,\n status_code=status.HTTP_201_CREATED,\n response_model_by_alias=False,\n)\nasync def create_student(student: StudentModel = Body(...)):\n \"\"\"\n Insert a new student record.\n\n A unique `id` will be created and provided in the response.\n \"\"\"\n new_student = await student_collection.insert_one(\n student.model_dump(by_alias=True, exclude=[\"id\"])\n )\n created_student = await student_collection.find_one(\n {\"_id\": new_student.inserted_id}\n )\n return created_student\n```\n\nThe `create_student` route receives the new student data as a JSON string in a `POST` request. We have to decode this JSON request body into a Python dictionary before passing it to our MongoDB client.\n\nThe `insert_one` method response includes the `_id` of the newly created student (provided as `id` because this endpoint specifies `response_model_by_alias=False` in the `post` decorator call. After we insert the student into our collection, we use the `inserted_id` to find the correct document and return this in our `JSONResponse`.\n\nFastAPI returns an HTTP `200` status code by default; but in this instance, a `201` created is more appropriate.\n\n##### Read Routes\n\nThe application has two read routes: one for viewing all students and the other for viewing an individual student.\n\n``` python\n@app.get(\n \"/students/\",\n response_description=\"List all students\",\n response_model=StudentCollection,\n response_model_by_alias=False,\n)\nasync def list_students():\n \"\"\"\n List all of the student data in the database.\n\n The response is unpaginated and limited to 1000 results.\n \"\"\"\n return StudentCollection(students=await student_collection.find().to_list(1000))\n```\n\nMotor's `to_list` method requires a max document count argument. For this example, I have hardcoded it to `1000`; but in a real application, you would use the [skip and limit parameters in `find` to paginate your results.\n\n``` python\n@app.get(\n \"/students/{id}\",\n response_description=\"Get a single student\",\n response_model=StudentModel,\n response_model_by_alias=False,\n)\nasync def show_student(id: str):\n \"\"\"\n Get the record for a specific student, looked up by `id`.\n \"\"\"\n if (\n student := await student_collection.find_one({\"_id\": ObjectId(id)})\n ) is not None:\n return student\n\n raise HTTPException(status_code=404, detail=f\"Student {id} not found\")\n```\n\nThe student detail route has a path parameter of `id`, which FastAPI passes as an argument to the `show_student` function. We use the `id` to attempt to find the corresponding student in the database. 
The conditional in this section is using an assignment expression, an addition to Python 3.8 and often referred to by the cute sobriquet \"walrus operator.\"\n\nIf a document with the specified `_id` does not exist, we raise an `HTTPException` with a status of `404`.\n\n##### Update Route\n\n``` python\n@app.put(\n \"/students/{id}\",\n response_description=\"Update a student\",\n response_model=StudentModel,\n response_model_by_alias=False,\n)\nasync def update_student(id: str, student: UpdateStudentModel = Body(...)):\n \"\"\"\n Update individual fields of an existing student record.\n\n Only the provided fields will be updated.\n Any missing or `null` fields will be ignored.\n \"\"\"\n student = {\n k: v for k, v in student.model_dump(by_alias=True).items() if v is not None\n }\n\n if len(student) >= 1:\n update_result = await student_collection.find_one_and_update(\n {\"_id\": ObjectId(id)},\n {\"$set\": student},\n return_document=ReturnDocument.AFTER,\n )\n if update_result is not None:\n return update_result\n else:\n raise HTTPException(status_code=404, detail=f\"Student {id} not found\")\n\n # The update is empty, but we should still return the matching document:\n if (existing_student := await student_collection.find_one({\"_id\": id})) is not None:\n return existing_student\n\n raise HTTPException(status_code=404, detail=f\"Student {id} not found\")\n```\n\nThe `update_student` route is like a combination of the `create_student` and the `show_student` routes. It receives the `id` of the document to update as well as the new data in the JSON body. We don't want to update any fields with empty values; so, first of all, we iterate over all the items in the received dictionary and only add the items that have a value to our new document.\n\nIf, after we remove the empty values, there are no fields left to update, we instead look for an existing record that matches the `id` and return that unaltered. However, if there are values to update, we use find_one_and_update to $set the new values, and then return the updated document.\n\nIf we get to the end of the function and we have not been able to find a matching document to update or return, then we raise a `404` error again.\n\n##### Delete Route\n\n``` python\n@app.delete(\"/students/{id}\", response_description=\"Delete a student\")\nasync def delete_student(id: str):\n \"\"\"\n Remove a single student record from the database.\n \"\"\"\n delete_result = await student_collection.delete_one({\"_id\": ObjectId(id)})\n\n if delete_result.deleted_count == 1:\n return Response(status_code=status.HTTP_204_NO_CONTENT)\n\n raise HTTPException(status_code=404, detail=f\"Student {id} not found\")\n```\n\nOur final route is `delete_student`. Again, because this is acting upon a single document, we have to supply an `id` in the URL. If we find a matching document and successfully delete it, then we return an HTTP status of `204` or \"No Content.\" In this case, we do not return a document as we've already deleted it! However, if we cannot find a student with the specified `id`, then instead we return a `404`.\n\n## Our New FastAPI App Generator\n\nIf you're excited to build something more production-ready with FastAPI, React & MongoDB, head over to the Github repository for our new FastAPI app generator and start transforming your web development experience.\n\n## Wrapping Up\n\nI hope you have found this introduction to FastAPI with MongoDB useful. 
If you would like to learn more, check out my post introducing the FARM stack (FastAPI, React and MongoDB) as well as the FastAPI documentation and this awesome list.\n\n>If you have questions, please head to our developer community website where MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Python", "MongoDB", "Django", "FastApi"], "pageDescription": "Getting started with MongoDB and FastAPI", "contentType": "Quickstart"}, "title": "Getting Started with MongoDB and FastAPI", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/deploy-mongodb-atlas-aws-cloudformation", "action": "created", "body": "# How to Deploy MongoDB Atlas with AWS CloudFormation\n\nMongoDB Atlas is the multi-cloud developer data platform that provides an integrated suite of cloud database and data services. We help to accelerate and simplify how you build resilient and performant global applications on the cloud provider of your choice.\n\nAWS CloudFormation lets you model, provision, and manage AWS and third-party resources like MongoDB Atlas by treating infrastructure as code (IaC). CloudFormation templates are written in either JSON or YAML. \n\nWhile there are multiple ways to use CloudFormation to provision and manage your Atlas clusters, such as with Partner Solution Deployments or the AWS CDK, today we\u2019re going to go over how to create your first YAML CloudFormation templates to deploy Atlas clusters with CloudFormation.\n\nThese pre-made templates directly leverage MongoDB Atlas resources from the CloudFormation Public Registry and execute via the AWS CLI/AWS Management Console. Using these is best for users who seek to be tightly integrated into AWS with fine-grained access controls. \n\nLet\u2019s get started! \n\n*Prerequisites:* \n\n- Install and configure an AWS Account and the AWS CLI.\n- Install and configure the MongoDB Atlas CLI (optional but recommended). \n\n## Step 1: Create a MongoDB Atlas account\n\nSign up for a free MongoDB Atlas account, verify your email address, and log into your new account.\n\nAlready have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\nand contact AWS support directly, who can help confirm the CIDR range to be used in your Atlas PAK IP Whitelist. \n\n on MongoDB Atlas.\n\n). You can set this up with AWS IAM (Identity and Access Management). You can find that in the navigation bar of your AWS. You can find the ARN in the user information in the \u201cRoles\u201d button. Once there, find the role whose ARN you want to use and add it to the Extension Details in CloudFormation. Learn how to create user roles/permissions in the IAM. \n\n required from our GitHub repo. It\u2019s important that you use an ARN with sufficient permissions each time it\u2019s asked for.\n\n. \n\n## Step 7: Deploy the CloudFormation template\n\nIn the AWS management console, go to the CloudFormation tab. Then, in the left-hand navigation, click on \u201cStacks.\u201d In the window that appears, hit the \u201cCreate Stack\u201d drop-down. Select \u201cCreate new stack with existing resources.\u201d \n\nNext, select \u201ctemplate is ready\u201d in the \u201cPrerequisites\u201d section and \u201cUpload a template\u201d in the \u201cSpecify templates\u201d section. 
From here, you will choose the YAML (or JSON) file containing the MongoDB Atlas deployment that you created in the prior step.\n\n. \n\nThe fastest way to get started is to create a MongoDB Atlas account from the AWS Marketplace. \n\nAdditionally, you can watch our demo to learn about the other ways to get started with MongoDB Atlas and CloudFormation\n\nGo build with MongoDB Atlas and AWS CloudFormation today!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6a7a0aace015cbb5/6504a623a8cf8bcfe63e171a/image4.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt471e37447cf8b1b1/6504a651ea4b5d10aa5135d6/image8.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3545f9cbf7c8f622/6504a67ceb5afe6d504a833b/image13.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3582d0a3071426e3/6504a69f0433c043b6255189/image12.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb4253f96c019874e/6504a6bace38f40f4df4cddf/image1.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2840c92b6d1ee85d/6504a6d7da83c92f49f9b77e/image7.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd4a32140ddf600fc/6504a700ea4b5d515f5135db/image5.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt49dabfed392fa063/6504a73dbb60f713d4482608/image9.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt592e3f129fe1304b/6504a766a8cf8b5ba23e1723/image11.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbff284987187ce16/6504a78bb8c6d6c2d90e6e22/image10.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0ae450069b31dff9/6504a7b99bf261fdd46bddcf/image3.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7f24645eefdab69c/6504a7da9aba461d6e9a55f4/image2.png\n [13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7e1c20eba155233a/6504a8088606a80fe5c87f31/image6.png", "format": "md", "metadata": {"tags": ["Atlas", "AWS"], "pageDescription": "Learn how to quickly and easily deploy MongoDB Atlas instances with Amazon Web Services (AWS) CloudFormation.", "contentType": "Tutorial"}, "title": "How to Deploy MongoDB Atlas with AWS CloudFormation", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/nextjs-with-mongodb", "action": "created", "body": "# How to Integrate MongoDB Into Your Next.js App\n\n> This tutorial uses the Next.js Pages Router instead of the App Router which was introduced in Next.js version 13. The Pages Router is still supported and recommended for production environments.\n\nAre you building your next amazing application with Next.js? Do you wish you could integrate MongoDB into your Next.js app effortlessly? Do you need this done before your coffee has finished brewing? If you answered yes to these three questions, I have some good news for you. We have created a Next.js<>MongoDB integration that will have you up and running in minutes, and you can consider this tutorial your official guide on how to use it. \n\nIn this tutorial, we'll take a look at how we can use the **with-mongodb** example to create a new Next.js application that follows MongoDB best practices for connectivity, connection pool monitoring, and querying. We'll also take a look at how to use MongoDB in our Next.js app with things like serverSideProps and APIs. 
Finally, we'll take a look at how we can easily deploy and host our application on Vercel, the official hosting platform for Next.js applications. If you already have an existing Next.js app, not to worry. Simply drop the MongoDB utility file into your existing project and you are good to go. We have a lot of exciting stuff to cover, so let's dive right in!\n\n## Next.js and MongoDB with one click\nOur app is now deployed and running in production. If you weren't following along with the tutorial and just want to quickly start your Next.js application with MongoDB, you could always use the `with-mongodb` starter found on GitHub, but I\u2019ve got an even better one for you.\n\nVisit Vercel and you'll be off to the races in creating and deploying the official Next.js with the MongoDB integration, and all you'll need to provide is your connection string.\n\n## Prerequisites\nFor this tutorial, you'll need:\n\n - MongoDB Atlas (sign up for free).\n - A Vercel account (sign up for free).\n - NodeJS 18+.\n - npm and npx.\n\nTo get the most out of this tutorial, you need to be familiar with React and Next.js. I will cover unique Next.js features with enough details to still be valuable to a newcomer.\n\n## What is Next.js?\nIf you're not already familiar with it, Next.js is a React-based framework for building modern web applications. The framework adds a lot of powerful features \u2014 such as server-side rendering, automatic code splitting, and incremental static regeneration \u2014 that make it easy to build, scalable, and production-ready apps.\n\n. You can use a local MongoDB installation if you have one, but if you're just getting started, MongoDB Atlas is a great way to get up and running without having to install or manage your MongoDB instance. MongoDB Atlas has a forever free tier that you can sign up for as well as get the sample data that we'll be using for the rest of this tutorial. \n\nTo get our MongoDB URI, in our MongoDB Atlas dashboard: \n\n 1. Hit the **Connect** button. \n 2. Then, click the **Connect to your application** button, and here you'll see a string that contains your **URI** that will look like this:\n\n```\nmongodb+srv://:@cluster0..mongodb.net/?retryWrites=true&w=majority\n```\nIf you are new to MongoDB Atlas, you'll need to go to the **Database Access** section and create a username and password, as well as the **Network Access** tab to ensure your IP is allowed to connect to the database. However, if you already have a database user and network access enabled, you'll just need to replace the `` and `` fields with your information.\n\nFor the ``, we'll load the MongoDB Atlas sample datasets and use one of those databases.\n\n, and we'll help troubleshoot.\n\n## Querying MongoDB with Next.js\nNow that we are connected to MongoDB, let's discuss how we can query our MongoDB data and bring it into our Next.js application. Next.js supports multiple ways to get data. We can create API endpoints, get data by running server-side rendered functions for a particular page, and even generate static pages by getting our data at build time. We'll look at all three examples.\n\n## Example 1: Next.js API endpoint with MongoDB\nThe first example we'll look at is building and exposing an API endpoint in our Next.js application. 
To create a new API endpoint route, we will first need to create an `api` directory in our `pages` directory, and then every file we create in this `api` directory will be treated as an individual API endpoint.\n\nLet's go ahead and create the `api` directory and a new file in this `directory` called `movies.tsx`. This endpoint will return a list of 20 movies from our MongoDB database. The implementation for this route is as follows:\n\n```\nimport clientPromise from \"../../lib/mongodb\";\nimport { NextApiRequest, NextApiResponse } from 'next';\n\nexport default async (req: NextApiRequest, res: NextApiResponse) => {\n try {\n const client = await clientPromise;\n const db = client.db(\"sample_mflix\");\n const movies = await db\n .collection(\"movies\")\n .find({})\n .sort({ metacritic: -1 })\n .limit(10)\n .toArray();\n res.json(movies);\n } catch (e) {\n console.error(e);\n }\n}\n```\n\nTo explain what is going on here, we'll start with the import statement. We are importing our `clientPromise` method from the `lib/mongodb` file. This file contains all the instructions on how to connect to our MongoDB Atlas cluster. Additionally, within this file, we cache the instance of our connection so that subsequent requests do not have to reconnect to the cluster. They can use the existing connection. All of this is handled for you!\n\nNext, our API route handler has the signature of `export default async (req, res)`. If you're familiar with Express.js, this should look very familiar. This is the function that gets run when the `localhost:3000/api/movies` route is called. We capture the request via `req` and return the response via the `res` object.\n\nOur handler function implementation calls the `clientPromise` function to get the instance of our MongoDB database. Next, we run a MongoDB query using the MongoDB Node.js driver to get the top 20 movies out of our **movies** collection based on their **metacritic** rating sorted in descending order.\n\nFinally, we call the `res.json` method and pass in our array of movies. This serves our movies in JSON format to our browser. If we navigate to `localhost:3000/api/movies`, we'll see a result that looks like this:\n\n to capture the `id`. So, if a user calls `http://localhost:3000/api/movies/573a1394f29313caabcdfa3e`, the movie that should be returned is Seven Samurai. **Another tip**: The `_id` property for the `sample_mflix` database in MongoDB is stored as an ObjectID, so you'll have to convert the string to an ObjectID. If you get stuck, create a thread on the MongoDB Community forums and we'll solve it together! Next, we'll take a look at how to access our MongoDB data within our Next.js pages.\n\n## Example 2: Next.js pages with MongoDB\nIn the last section, we saw how we can create an API endpoint and connect to MongoDB with it. In this section, we'll get our data directly into our Next.js pages. We'll do this using the getServerSideProps() method that is available to Next.js pages.\n\nThe `getServerSideProps()` method forces a Next.js page to load with server-side rendering. What this means is that every time this page is loaded, the `getServerSideProps()` method runs on the back end, gets data, and sends it into the React component via props. The code within `getServerSideProps()` is never sent to the client. This makes it a great place to implement our MongoDB queries.\n\nLet's see how this works in practice. Let's create a new file in the `pages` directory, and we'll call it `movies.tsx`. 
In this file, we'll add the following code:\n\n```\nimport clientPromise from \"../lib/mongodb\";\nimport { GetServerSideProps } from 'next';\n\ninterface Movie {\n _id: string;\n title: string;\n metacritic: number;\n plot: string;\n}\n\ninterface MoviesProps {\n movies: Movie];\n}\n\nconst Movies: React.FC = ({ movies }) => {\n return (\n \n\n \n\nTOP 20 MOVIES OF ALL TIME\n\n \n\n (According to Metacritic)\n \n\n \n\n {movies.map((movie) => (\n \n\n \n\n{MOVIE.TITLE}\n\n \n\n{MOVIE.METACRITIC}\n\n \n\n{movie.plot}\n\n \n ))}\n \n\n \n\n );\n};\n\nexport default Movies;\n\nexport const getServerSideProps: GetServerSideProps = async () => {\n try {\n const client = await clientPromise;\n const db = client.db(\"sample_mflix\");\n const movies = await db\n .collection(\"movies\")\n .find({})\n .sort({ metacritic: -1 })\n .limit(20)\n .toArray();\n return {\n props: { movies: JSON.parse(JSON.stringify(movies)) },\n };\n } catch (e) {\n console.error(e);\n return { props: { movies: [] } };\n }\n};\n```\nAs you can see from the example above, we are importing the same `clientPromise` utility class, and our MongoDB query is exactly the same within the `getServerSideProps()` method. The only thing we really needed to change in our implementation is how we parse the response. We need to stringify and then manually parse the data, as Next.js is strict.\n\nOur page component called `Movies` gets the props from our `getServerSideProps()` method, and we use that data to render the page showing the top movie title, metacritic rating, and plot. Your result should look something like this:\n\n![Top 20 movies][6]\n\nThis is great. We can directly query our MongoDB database and get all the data we need for a particular page. The contents of the `getServerSideProps()` method are never sent to the client, but the one downside to this is that this method runs every time we call the page. Our data is pretty static and unlikely to change all that often. What if we pre-rendered this page and didn't have to call MongoDB on every refresh? We'll take a look at that next!\n\n## Example 3: Next.js static generation with MongoDB\nFor our final example, we'll take a look at how static page generation can work with MongoDB. Let's create a new file in the `pages` directory and call it `top.tsx`. For this page, what we'll want to do is render the top 1,000 movies from our MongoDB database.\n\nTop 1,000 movies? Are you out of your mind? That'll take a while, and the database round trip is not worth it. Well, what if we only called this method once when we built the application so that even if that call takes a few seconds, it'll only ever happen once and our users won't be affected? They'll get the top 1,000 movies delivered as quickly as or even faster than the 20 using `serverSideProps()`. 
The magic lies in the `getStaticProps()` method, and our implementation looks like this:\n\n```\nimport { ObjectId } from \"mongodb\";\nimport clientPromise from \"../lib/mongodb\";\nimport { GetStaticProps } from \"next\";\n\ninterface Movie {\n _id: ObjectId;\n title: string;\n metacritic: number;\n plot: string;\n}\n\ninterface TopProps {\n movies: Movie[];\n}\n\nexport default function Top({ movies }: TopProps) {\n return (\n \n\n \n\nTOP 1000 MOVIES OF ALL TIME\n\n \n\n (According to Metacritic)\n \n\n \n\n {movies.map((movie) => (\n \n\n \n\n{MOVIE.TITLE}\n\n \n\n{MOVIE.METACRITIC}\n\n \n\n{movie.plot}\n\n \n ))}\n \n\n \n\n );\n}\n\nexport const getStaticProps: GetStaticProps = async () => {\n try {\n const client = await clientPromise;\n\n const db = client.db(\"sample_mflix\");\n\n const movies = await db\n .collection(\"movies\")\n .find({})\n .sort({ metacritic: -1 })\n .limit(1000)\n .toArray();\n\n return {\n props: { movies: JSON.parse(JSON.stringify(movies)) },\n };\n } catch (e) {\n console.error(e);\n return {\n props: { movies: [] },\n };\n }\n};\n```\nAt a glance, this looks very similar to the `movies.tsx` file we created earlier. The only significant changes we made were changing our `limit` from `20` to `1000` and our `getServerSideProps()` method to `getStaticProps()`. If we navigate to `localhost:3000/top` in our browser, we'll see a long list of movies.\n\n![Top 1000 movies][7]\n\nLook at how tiny that scrollbar is. Loading this page took about 3.79 seconds on my machine, as opposed to the 981-millisecond response time for the `/movies` page. The reason it takes this long is that in development mode, the `getStaticProps()` method is called every single time (just like the `getServerSideProps()` method). But if we switch from development mode to production mode, we'll see the opposite. The `/top` page will be pre-rendered and will load almost immediately, while the `/movies` and `/api/movies` routes will run the server-side code each time.\n\nLet's switch to production mode. In your terminal window, stop the current app from running. To run our Next.js app in production mode, we'll first need to build it. Then, we can run the `start` command, which will serve our built application. In your terminal window, run the following commands:\n\n```\nnpm run build\nnpm run start\n```\nWhen you run the `npm run start` command, your Next.js app is served in production mode. The `getStaticProps()` method will not be run every time you hit the `/top` route as this page will now be served statically. We can even see the pre-rendered static page by navigating to the `.next/server/pages/top.html` file and seeing the 1,000 movies listed in plain HTML.\n\nNext.js can even update this static content without requiring a rebuild with a feature called [Incremental Static Regeneration, but that's outside of the scope of this tutorial. Next, we'll take a look at deploying our application on Vercel.\n\n## Deploying your Next.js app on Vercel\nThe final step in our tutorial today is deploying our application. We'll deploy our Next.js with MongoDB app to Vercel. I have created a GitHub repo that contains all of the code we have written today. Feel free to clone it, or create your own.\n\nNavigate to Vercel and log in. 
Once you are on your dashboard, click the **Import Project** button, and then **Import Git Repository**.\n\n, https://nextjs-with-mongodb-mauve.vercel.app/api/movies, and https://nextjs-with-mongodb-mauve.vercel.app/top routes.\n\n## Putting it all together\nIn this tutorial, we walked through the official Next.js with MongoDB example. I showed you how to connect your MongoDB database to your Next.js application and run queries in multiple ways. Then, we deployed our application using Vercel.\n\nIf you have any questions or feedback, reach out through the MongoDB Community forums and let me know what you build with Next.js and MongoDB.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt572f8888407a2777/65de06fac7f05b1b2f8674cc/vercel-homepage.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt833e93bc334716a5/65de07c677ae451d96b0ec98/server-error.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltad2329fe1bb44d8f/65de1b020f1d350dd5ca42a5/database-deployments.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt798b7c3fe361ccbd/65de1b917c85267d37234400/welcome-nextjs.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta204dc4bce246ac6/65de1ff8c7f05b0b4b86759a/json-format.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt955fc3246045aa82/65de2049330e0026817f6094/top-20-movies.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfb7866c7c87e81ef/65de2098ae62f777124be71d/top-1000-movie.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc89beb7757ffec1e/65de20e0ee3a13755fc8e7fc/importing-project-vercel.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0022681a81165d94/65de21086c65d7d78887b5ff/configuring-project.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7b00b1cfe190a7d4/65de212ac5985207f8f6b232/congratulations.png", "format": "md", "metadata": {"tags": ["JavaScript", "Next.js"], "pageDescription": "Learn how to easily integrate MongoDB into your Next.js application with the official MongoDB package.", "contentType": "Tutorial"}, "title": "How to Integrate MongoDB Into Your Next.js App", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/build-go-web-application-gin-mongodb-help-ai", "action": "created", "body": "# How to Build a Go Web Application with Gin, MongoDB, and with the Help of AI\n\nBuilding applications with Go provides many advantages. The language is fast, simple, and lightweight while supporting powerful features like concurrency, strong typing, and a robust standard library. In this tutorial, we\u2019ll use the popular Gin web framework along with MongoDB to build a Go-based web application.\n\nGin is a minimalist web framework for Golang that provides an easy way to build web servers and APIs. It is fast, lightweight, and modular, making it ideal for building microservices and APIs, but can be easily extended to build full-blown applications.\n\nWe'll use Gin to build a web application with three endpoints that connect to a MongoDB database. MongoDB is a popular document-oriented NoSQL database that stores data in JSON-like documents. MongoDB is a great fit for building modern applications.\n\nRather than building the entire application by hand, we\u2019ll leverage a coding AI assistant by Sourcegraph called Cody to help us build our Go application. 
Cody is the only AI assistant that knows your entire codebase and can help you write, debug, test, and document your code. We\u2019ll use many of these features as we build our application today.\n\n## Prerequisites\n\nBefore you begin, you\u2019ll need:\n\n- Go installed on your development machine. Download it on their website.\n- A MongoDB Atlas account. Sign up for free.\n- Basic familiarity with Go and MongoDB syntax.\n- Sourcegraph Cody installed in your favorite IDE. (For this tutorial, we'll be using VS Code). Get it for free.\n\nOnce you meet the prerequisites, you\u2019re ready to build. Let\u2019s go.\n\n## Getting started\n\nWe'll start by creating a new Go project for our application. For this example, we\u2019ll name the project **mflix**, so let\u2019s go ahead and create the project directory and navigate into it:\n\n```bash\nmkdir mflix\ncd mflix\n```\n\nNext, initialize a new Go module, which will manage dependencies for our project:\n\n```bash\ngo mod init mflix\n```\n\nNow that we have our Go module created, let\u2019s install the dependencies for our project. We\u2019ll keep it really simple and just install the `gin` and `mongodb` libraries.\n\n```bash\ngo get github.com/gin-gonic/gin\ngo get go.mongodb.org/mongo-driver/mongo\n```\n\nWith our dependencies fetched and installed, we\u2019re ready to start building our application.\n\n## Gin application setup with Cody\n\nTo start building our application, let\u2019s go ahead and create our entry point into the app by creating a **main.go** file. Next, while we can set up our application manually, we\u2019ll instead leverage Cody to build out our starting point. In the Cody chat window, we can ask Cody to create a basic Go Gin application.\n\n guide. The database that we will work with is called `sample_mflix` and the collection in that database we\u2019ll use is called `movies`. This dataset contains a list of movies with various information like the plot, genre, year of release, and much more.\n\n on the movies collection. Aggregation operations process multiple documents and return computed results. So with this endpoint, the end user could pass in any valid MongoDB aggregation pipeline to run various analyses on the `movies` collection.\n\nNote that aggregations are very powerful and in a production environment, you probably wouldn\u2019t want to enable this level of access through HTTP request payloads. But for the sake of the tutorial, we opted to keep it in. As a homework assignment for further learning, try using Cody to limit the number of stages or the types of operations that the end user can perform on this endpoint.\n\n```go\n// POST /movies/aggregations - Run aggregations on movies\nfunc aggregateMovies(c *gin.Context) {\n // Get aggregation pipeline from request body\n var pipeline interface{}\n if err := c.ShouldBindJSON(&pipeline); err != nil {\n c.JSON(http.StatusBadRequest, gin.H{\"error\": err.Error()})\n return\n }\n \n // Run aggregations\n cursor, err := mongoClient.Database(\"sample_mflix\").Collection(\"movies\").Aggregate(context.TODO(), pipeline)\n if err != nil {\n c.JSON(http.StatusInternalServerError, gin.H{\"error\": err.Error()})\n return\n }\n\n // Map results\n var result ]bson.M\n if err = cursor.All(context.TODO(), &result); err != nil {\n c.JSON(http.StatusInternalServerError, gin.H{\"error\": err.Error()})\n return\n }\n\n // Return result\n c.JSON(http.StatusOK, result)\n}\n```\n\nNow that we have our endpoints implemented, let\u2019s add them to our router so that we can call them. 
Here again, we can use another feature of Cody, called autocomplete, to intelligently give us statement completions so that we don\u2019t have to write all the code ourselves. \n\n![Cody AI Autocomplete with Go][6]\n\nOur `main` function should now look like:\n\n```go\nfunc main() {\nr := gin.Default()\nr.GET(\"/\", func(c *gin.Context) {\nc.JSON(200, gin.H{\n\"message\": \"Hello World\",\n})\n})\nr.GET(\"/movies\", getMovies)\nr.GET(\"/movies/:id\", getMovieByID)\nr.POST(\"/movies/aggregations\", aggregateMovies)\n\nr.Run()\n}\n```\n\nNow that we have our routes set up, let\u2019s test our application to make sure everything is working well. Restart the server and navigate to **localhost:8080/movies**. If all goes well, you should see a large list of movies returned in JSON format in your browser window. If you do not see this, check your IDE console to see what errors are shown.\n\n![Sample Output for the Movies Endpoint][7]\n\nLet\u2019s test the second endpoint. Pick any `id` from the movies collection and navigate to **localhost:8080/movies/{id}** \u2014 so for example, **localhost:8080/movies/573a1390f29313caabcd42e8**. If everything goes well, you should see that single movie listed. But if you\u2019ve been following this tutorial, you actually won\u2019t see the movie.\n\n![String to Object ID Results Error][8]\n\nThe issue is that in our `getMovie` function implementation, we are accepting the `id` value as a `string`, while the data type in our MongoDB database is an `ObjectID`. So when we run the `FindOne` method and try to match the string value of `id` to the `ObjectID` value, we don\u2019t get a match. \n\nLet\u2019s ask Cody to help us fix this by converting the string input we get to an `ObjectID`.\n\n![Cody AI MongoDB String to ObjectID][9]\n\nOur updated `getMovieByID` function is as follows:\n\n```go\nfunc getMovieByID(c *gin.Context) {\n\n// Get movie ID from URL\nidStr := c.Param(\"id\")\n\n// Convert id string to ObjectId\nid, err := primitive.ObjectIDFromHex(idStr)\nif err != nil {\nc.JSON(http.StatusBadRequest, gin.H{\"error\": err.Error()})\nreturn\n}\n\n// Find movie by ObjectId\nvar movie bson.M\nerr = mongoClient.Database(\"sample_mflix\").Collection(\"movies\").FindOne(context.TODO(), bson.D{{\"_id\", id}}).Decode(&movie)\nif err != nil {\nc.JSON(http.StatusInternalServerError, gin.H{\"error\": err.Error()})\nreturn\n}\n\n// Return movie\nc.JSON(http.StatusOK, movie)\n}\n```\n\nDepending on your IDE, you may need to add the `primitive` dependency in your import statement. The final import statement looks like:\n\n```go\nimport (\n\"context\"\n\"log\"\n\"net/http\"\n\n\"github.com/gin-gonic/gin\"\n\"go.mongodb.org/mongo-driver/bson\"\n\"go.mongodb.org/mongo-driver/bson/primitive\"\n\"go.mongodb.org/mongo-driver/mongo\"\n\"go.mongodb.org/mongo-driver/mongo/options\"\n)\n```\n\nIf we examine the new code that Cody provided, we can see that we are now getting the value from our `id` parameter and storing it into a variable named `idStr`. We then use the primitive package to try and convert the string to an `ObjectID`. If the `idStr` is a valid string that can be converted to an `ObjectID`, then we are good to go and we use the new `id` variable when doing our `FindOne` operation. 
If not, then we get an error message back.\n\nRestart your server and now try to get a single movie result by navigating to **localhost:8080/movies/{id}**.\n\n![Single Movie Response Endpoint][10]\n\nFor our final endpoint, we are allowing the end user to provide an aggregation pipeline that we will execute on the `mflix` collection. The user can provide any aggregation they want. To test this endpoint, we\u2019ll make a POST request to **localhost:8080/movies/aggregations**. In the body of the request, we\u2019ll include our aggregation pipeline.\n\n![Postman Aggregation Endpoint in MongoDB][11]\n\nLet\u2019s run an aggregation to return a count of comedy movies, grouped by year, in descending order. Again, remember aggregations are very powerful and can be abused. You normally would not want to give direct access to the end user to write and run their own aggregations ad hoc within an HTTP request, unless it was for something like an internal tool. Our aggregation pipeline will look like the following:\n\n```json\n[\n {\"$match\": {\"genres\": \"Comedy\"}},\n {\"$group\": {\n \"_id\": \"$year\", \n \"count\": {\"$sum\": 1}\n }},\n {\"$sort\": {\"count\": -1}}\n]\n```\n\nRunning this aggregation, we\u2019ll get a result set that looks like this:\n\n```json\n[\n {\n \"_id\": 2014,\n \"count\": 287\n },\n {\n \"_id\": 2013,\n \"count\": 286\n },\n {\n \"_id\": 2009,\n \"count\": 268\n },\n {\n \"_id\": 2011,\n \"count\": 263\n },\n {\n \"_id\": 2006,\n \"count\": 260\n },\n ...\n]\n```\n\nIt seems 2014 was a big year for comedy. If you are not familiar with how aggregations work, you can check out the following resources:\n\n- [Introduction to the MongoDB Aggregation Framework\n- MongoDB Aggregation Pipeline Queries vs SQL Queries\n- A Better MongoDB Aggregation Experience via Compass\n\nAdditionally, you can ask Cody for a specific explanation about how our `aggregateMovies` function works to help you further understand how the code is implemented using the Cody `/explain` command.\n\n. \n\nAnd if you have any questions or comments, let\u2019s continue the conversation in our developer forums!\n\nThe entire code for our application is above, so there is no GitHub repo for this simple application. 
Happy coding.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt123181346af4c7e6/65148770b25810649e804636/eVB87PA.gif\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3df7c0149a4824ac/6514820f4f2fa85e60699bf8/image4.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6a72c368f716c7c2/65148238a5f15d7388fc754a/image2.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta325fcc27ed55546/651482786fefa7183fc43138/image7.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc8029e22c4381027/6514880ecf50bf3147fff13f/A7n71ej.gif\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt438f1d659d2f1043/6514887b27287d9b63bf9215/6O8d6cR.gif\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd6759b52be548308/651482b2d45f2927c800b583/image3.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfc8ea470eb6585bd/651482da69060a5af7fc2c40/image5.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte5d9fb517f22f08f/651488d82a06d70de3f4faf9/Y2HuNHe.gif\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc2467265b39e7d2b/651483038f0457d9df12aceb/image6.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt972b959f5918c282/651483244f2fa81286699c09/image1.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9c888329868b60b6/6514892c2a06d7d0a6f4fafd/g4xtxUg.gif", "format": "md", "metadata": {"tags": ["MongoDB", "Go"], "pageDescription": "Learn how to build a web application with the Gin framework for Go and MongoDB using the help of Cody AI from Sourcegraph.", "contentType": "Tutorial"}, "title": "How to Build a Go Web Application with Gin, MongoDB, and with the Help of AI", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/time-series-data-pymongoarrow", "action": "created", "body": "# Analyze Time-Series Data with Python and MongoDB Using PyMongoArrow and Pandas\n\nIn today\u2019s data-centric world, time-series data has become indispensable for driving key organizational decisions, trend analyses, and forecasts. This kind of data is everywhere \u2014 from stock markets and IoT sensors to user behavior analytics. But as these datasets grow in volume and complexity, so does the challenge of efficiently storing and analyzing them. Whether you\u2019re an IoT developer or a data analyst dealing with time-sensitive information, MongoDB offers a robust ecosystem tailored to meet both your storage and analytics needs for complex time-series data. \n\nMongoDB has built-in support to store time-series data in a special type of collection called a time-series collection. Time-series collections are different from the normal collections. Time-series collections use an underlying columnar storage format and store data in time-order with an automatically created clustered index. 
The columnar storage format provides the following benefits:\n* Reduced complexity: The columnar format is tailored for time-series data, making it easier to manage and query.\n* Query efficiency: MongoDB automatically creates an internal clustered index on the time field which improves query performance.\n* Disk usage: This storage approach uses disk space more efficiently compared to traditional collections.\n* I/O optimization: The read operations require fewer input/output operations, improving the overall system performance.\n* Cache usage: The design allows for better utilization of the WiredTiger cache, further enhancing query performance.\n\nIn this tutorial, we will create a time-series collection and then store some time-series data into it. We will see how you can query it in MongoDB as well as how you can read that data into pandas DataFrame, run some analytics on it, and write the modified data back to MongoDB. This tutorial is meant to be a complete deep dive into working with time-series data in MongoDB.\n\n### Tutorial Prerequisites \nWe will be using the following tools/frameworks:\n* MongoDB Atlas database, to store our time-series data. If you don\u2019t already have an Atlas cluster created, go ahead and create one, set up a user, and add your connection IP address to your IP access list. \n* PyMongo driver(to connect to your MongoDB Atlas database, see the installation instructions).\n* Jupyter Notebook (to run the code, see the installation instructions).\n\n>Note: Before running any code or installing any Python packages, we strongly recommend setting up a separate Python environment. This helps to isolate dependencies, manage packages, and avoid conflicts that may arise from different package versions. Creating an environment is an optional but highly recommended step.\n\nAt this point, we are assuming that you have an Atlas cluster created and ready to be used, and PyMongo and Jupyter Notebook installed. Let\u2019s go ahead and launch Jupyter Notebook by running the following command in the terminal:\n```\nJupyter Notebook\n```\n\nOnce you have the Jupyter Notebook up and running, let\u2019s go ahead and fetch the connection string of your MongoDB Atlas cluster and store that as an environment variable, which we will use later to connect to our database. After you have done that, let\u2019s go ahead and connect to our Atlas cluster by running the following commands:\n\n```\nimport pymongo\nimport os\n\nfrom pymongo import MongoClient\n\nMONGO_CONN_STRING = os.environ.get(\"MONGODB_CONNECTION_STRING\")\n\nclient = MongoClient(MONGO_CONN_STRING)\n```\n\n## Creating a time-series collection\n\nNext, we are going to create a new database and a collection in our cluster to store the time-series data. We will call this database \u201cstock_data\u201d and the collection \u201cstocks\u201d. \n\n```\n# Let's create a new database called \"stock data\"\ndb = client.stock_data\n\n# Let's create a new time-series collection in the \"stock data\" database called \"stocks\"\n\ncollection = db.create_collection('stocks', timeseries={\n\n timeField: \"timestamp\",\n metaField: \"metadata\",\n granularity: \"hours\"\n\n})\n```\nHere, we used the db.create_collection() method to create a time-series collection called \u201cstock\u201d. In the example above, \u201ctimeField\u201d, \u201cmetaField\u201d, and \u201cgranularity\u201d are reserved fields (for more information on what these are, visit our documentation). 
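\n\nOne thing to watch if you are copying this into Python directly: the `timeseries` options are passed as a plain dictionary, so the keys need to be quoted strings. A runnable version of the call above (same database and collection names) looks like this:\n\n```\n# Create the time-series collection; the option keys are string literals\ncollection = db.create_collection(\n    \"stocks\",\n    timeseries={\n        \"timeField\": \"timestamp\",\n        \"metaField\": \"metadata\",\n        \"granularity\": \"hours\",\n    },\n)\n```\n\nThe sample documents inserted in the next step also use `datetime`, so make sure it is imported first with `from datetime import datetime`.\n\n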
The \u201ctimeField\u201d option specifies the name of the field in your collection that will contain the date in each time-series document. \n\nThe \u201cmetaField\u201d option specifies the name of the field in your collection that will contain the metadata in each time-series document. \n\nFinally, the \u201cgranularity\u201d option specifies how frequently data will be ingested in your time-series collection. \n\nNow, let\u2019s insert some stock-related information into our collection. We are interested in storing and analyzing the stock of a specific company called \u201cXYZ\u201d which trades its stock on \u201cNASDAQ\u201d. \n\nWe are storing some price metrics of this stock at an hourly interval and for each time interval, we are storing the following information:\n\n* **open:** the opening price at which the stock traded when the market opened\n* **close:** the final price at which the stock traded when the trading period ended\n* **high:** the highest price at which the stock traded during the trading period\n* **low:** the lowest price at which the stock traded during the trading period\n* **volume:** the total number of shares traded during the trading period\n\nNow that we have become an expert on stock trading and terminology (sarcasm), we will now insert some documents into our time-series collection. Here we have four sample documents. The data points are captured at an interval of one hour. \n\n```\n# Create some sample data\n\ndata = \n{\n \"metadata\": {\n \"stockSymbol\": \"ABC\",\n \"exchange\": \"NASDAQ\"\n },\n \"timestamp\": datetime(2023, 9, 12, 15, 19, 48),\n \"open\": 54.80,\n \"high\": 59.20,\n \"low\": 52.60,\n \"close\": 53.50,\n \"volume\": 18000\n},\n\n{\n \"metadata\": {\n \"stockSymbol\": \"ABC\",\n \"exchange\": \"NASDAQ\"\n },\n \"timestamp\": datetime(2023, 9, 12, 16, 19, 48),\n \"open\": 51.00,\n \"high\": 54.30,\n \"low\": 50.50,\n \"close\": 51.80,\n \"volume\": 12000\n},\n\n{\n \"metadata\": {\n \"stockSymbol\": \"ABC\",\n \"exchange\": \"NASDAQ\"\n },\n \"timestamp\":datetime(2023, 9, 12, 17, 19, 48),\n \"open\": 52.00,\n \"high\": 53.10,\n \"low\": 50.50,\n \"close\": 52.90,\n \"volume\": 10000\n},\n\n{\n \"metadata\": {\n \"stockSymbol\": \"ABC\",\n \"exchange\": \"NASDAQ\"\n },\n \"timestamp\":datetime(2023, 9, 12, 18, 19, 48),\n \"open\": 52.80,\n \"high\": 60.20,\n \"low\": 52.60,\n \"close\": 55.50,\n \"volume\": 30000\n}\n]\n\n# insert the data into our collection\n\ncollection.insert_many(data)\n\n```\n\nNow, let\u2019s run a find query on our collection to retrieve data at a specific timestamp. Run this query in the Jupyter Notebook after the previous script. \n\n```\ncollection.find_one({'timestamp': datetime(2023, 9, 12, 15, 19, 48)})\n```\n\n//OUTPUT\n![Output of find_one() command\n\nAs you can see from the output, we were able to query our time-series collection and retrieve data points at a specific timestamp. \n\nSimilarly, you can run more powerful queries on your time-series collection by using the aggregation pipeline. For the scope of this tutorial, we won\u2019t be covering that. But, if you want to learn more about it, here is where you can go: \n\n 1. MongoDB Aggregation Learning Byte\n 2. MongoDB Aggregation in Python Learning Byte\n 3. MongoDB Aggregation Documentation\n 4. 
Practical MongoDB Aggregation Book\n\n## Analyzing the data with a pandas DataFrame\n\nNow, let\u2019s see how you can move your time-series data into pandas DataFrame to run some analytics operations.\n\nMongoDB has built a tool just for this purpose called PyMongoArrow. PyMongoArrow is a Python library that lets you move data in and out of MongoDB into other data formats such as pandas DataFrame, Numpy array, and Arrow Table. \n\nLet\u2019s quickly install PyMongoArrow using the pip command in your terminal. We are assuming that you already have pandas installed on your system. If not, you can use the pip command to install it too.\n\n```\npip install pymongoarrow\n```\n\nNow, let\u2019s import all the necessary libraries. We are going to be using the same file or notebook (Jupyter Notebook) to run the codes below. \n\n```\nimport pymongoarrow\nimport pandas as pd\n\n# pymongoarrow.monkey module provided an interface to patch pymongo, in place, and add pymongoarrow's functionality directly to collection instance. \n\nfrom pymongoarrow.monkey import patch_all\npatch_all()\n\n# Let's use the pymongoarrow's find_pandas_all() function to read MongoDB query result sets into \n\ndf = collection.find_pandas_all({})\n```\n\nNow, we have read all of our stock data stored in the \u201cstocks\u201d collection into a pandas DataFrame \u2018df\u2019.\n\nLet\u2019s quickly print the value stored in the \u2018df\u2019 variable to verify it.\n\n```\nprint(df)\n\nprint(type(df))\n```\n\n//OUTPUT\n\nHurray\u2026congratulations! As you can see, we have successfully read our MongoDB data into pandas DataFrame. \n\nNow, if you are a stock market trader, you would be interested in doing a lot of analysis on this data to get meaningful insights. But for this tutorial, we are just going to calculate the hourly percentage change in the closing prices of the stock. This will help us understand the daily price movements in terms of percentage gains or losses. \n\nWe will add a new column in our \u2018df\u2019 DataFrame called \u201cdaily_pct_change\u201d. \n\n```\ndf = df.sort_values('timestamp')\n\ndf'daily_pct_change'] = df['close'].pct_change() * 100\n\n# print the dataframe to see the modified data\nprint(df)\n```\n\n//OUTPUT\n![Output of modified DataFrame\n\nAs you can see, we have successfully added a new column to our DataFrame. \n\nNow, we would like to persist the modified DataFrame data into a database so that we can run more analytics on it later. So, let\u2019s write this data back to MongoDB using PyMongoArrow\u2019s write function. \n\nWe will just create a new collection called \u201cmy_new_collection\u201d in our database to write the modified DataFrame back into MongoDB, ensuring data persistence. \n\n```\nfrom pymongoarrow.api import write\n\ncoll = db.my_new_collection\n\n# write data from pandas into MongoDB collection called 'coll'\nwrite(coll, df)\n\n# Now, let's verify that the modified data has been written into our collection\n\nprint(coll.find_one({}))\n```\n\nCongratulations on successfully completing this tutorial. \n\n## Conclusion\n\nIn this tutorial, we covered how to work with time-series data using MongoDB and Python. We learned how to store stock market data in a MongoDB time-series collection, and then how to perform simple analytics using a pandas DataFrame. We also explored how PyMongoArrow makes it easy to move data between MongoDB and pandas. Finally, we saved our analyzed data back into MongoDB. 
This guide provides a straightforward way to manage, analyze, and store time-series data. Great job if you\u2019ve followed along \u2014 you\u2019re now ready to handle time-series data in your own projects.\n\nIf you want to learn more about PyMongoArrow, check out some of these additional resources:\n\n 1. Video tutorial on PyMongoArrow\n 2. PyMongoArrow article\n\n ", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to create and query a time-series collection in MongoDB, and analyze the data using PyMongoArrow and pandas.", "contentType": "Tutorial"}, "title": "Analyze Time-Series Data with Python and MongoDB Using PyMongoArrow and Pandas", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/storing-binary-data-mongodb-cpp", "action": "created", "body": "# Storing Binary Data with MongoDB and C++\n\nIn modern applications, storing and retrieving binary files efficiently is a crucial requirement. MongoDB enables this with binary data type in the BSON which is a binary serialization format used to store documents in MongoDB. A BSON binary value is a byte array and has a subtype (like generic binary subtype, UUID, MD5, etc.) that indicates how to interpret the binary data. See BSON Types \u2014 MongoDB Manual for more information.\n\nIn this tutorial, we will write a console application in C++, using the MongoDB C++ driver to upload and download binary data. \n\n**Note**: \n\n- When using this method, remember that the BSON document size limit in MongoDB is 16 MB. If your binary files are larger than this limit, consider using GridFS for more efficient handling of large files. See GridFS example in C++ for reference.\n- Developers often weigh the trade-offs and strategies when storing binary data in MongoDB. It's essential to ensure that you have also considered different strategies to optimize your data management approach.\n\n## Prerequisites\n\n1. MongoDB Atlas account with a cluster created.\n2. IDE (like Microsoft Visual Studio or Microsoft Visual Studio Code) setup with the MongoDB C and C++ Driver installed. Follow the instructions in Getting Started with MongoDB and C++ to install MongoDB C/C++ drivers and set up the dev environment in Visual Studio. Installation instructions for other platforms are available.\n3. Compiler with C++17 support (for using `std::filesystem` operations).\n4. Your machine\u2019s IP address whitelisted. Note: You can add *0.0.0.0/0* as the IP address, which should allow access from any machine. This setting is not recommended for production use.\n\n## Building the application\n\n> Source code available **here**.\n\nAs part of the different BSON types, the C++ driver provides the b_binary struct that can be used for storing binary data value in a BSON document. See the API reference.\n\nWe start with defining the structure of our BSON document. We have defined three keys: `name`, `path`, and `data`. These contain the name of the file being uploaded, its full path from the disk, and the actual file data respectively. 
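\n\nFor illustration only (the complete upload and download helpers are in the source code linked above), assembling such a document with the driver's `b_binary` type could look roughly like this, assuming the file contents have already been read into a `std::vector<std::uint8_t>`:\n\n```cpp\n#include <cstdint>\n#include <string>\n#include <vector>\n\n#include <bsoncxx/builder/basic/document.hpp>\n#include <bsoncxx/builder/basic/kvp.hpp>\n#include <bsoncxx/types.hpp>\n\n// Build a { name, path, data } document where 'data' holds the raw file bytes.\nbsoncxx::document::value makeFileDocument(const std::string& name,\n                                          const std::string& path,\n                                          const std::vector<std::uint8_t>& bytes)\n{\n    using bsoncxx::builder::basic::kvp;\n    using bsoncxx::builder::basic::make_document;\n\n    bsoncxx::types::b_binary binaryData{};\n    binaryData.sub_type = bsoncxx::binary_sub_type::k_binary;\n    binaryData.size = static_cast<std::uint32_t>(bytes.size());\n    binaryData.bytes = bytes.data();\n\n    // make_document copies the bytes into the resulting BSON value.\n    return make_document(kvp(\"name\", name), kvp(\"path\", path), kvp(\"data\", binaryData));\n}\n```\n\n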
See a sample document below:\n\n (URI), update it to `mongoURIStr`, and set the different path and filenames to the ones on your disk.\n\n```cpp\nint main()\n{\n try\n {\n auto mongoURIStr = \"\";\n static const mongocxx::uri mongoURI = mongocxx::uri{ mongoURIStr };\n \n // Create an instance.\n mongocxx::instance inst{};\n \n mongocxx::options::client client_options;\n auto api = mongocxx::options::server_api{ mongocxx::options::server_api::version::k_version_1 };\n client_options.server_api_opts(api);\n mongocxx::client conn{ mongoURI, client_options};\n \n const std::string dbName = \"fileStorage\";\n const std::string collName = \"files\";\n \n auto fileStorageDB = conn.database(dbName);\n auto filesCollection = fileStorageDB.collection(collName);\n // Drop previous data.\n filesCollection.drop();\n\n // Upload all files in the upload folder.\n const std::string uploadFolder = \"/Users/bishtr/repos/fileStorage/upload/\";\n for (const auto & filePath : std::filesystem::directory_iterator(uploadFolder))\n {\n if(std::filesystem::is_directory(filePath))\n continue;\n\n if(!upload(filePath.path().string(), filesCollection))\n {\n std::cout << \"Upload failed for: \" << filePath.path().string() << std::endl;\n }\n }\n\n // Download files to the download folder.\n const std::string downloadFolder = \"/Users/bishtr/repos/fileStorage/download/\";\n \n // Search with specific filenames and download it.\n const std::string fileName1 = \"image-15.jpg\", fileName2 = \"Hi Seed Shaker 120bpm On Accents.wav\";\n for ( auto fileName : {fileName1, fileName2} )\n {\n if (!download(fileName, downloadFolder, filesCollection))\n {\n std::cout << \"Download failed for: \" << fileName << std::endl;\n } \n }\n \n // Download all files in the collection.\n auto cursor = filesCollection.find({});\n for (auto&& doc : cursor) \n {\n auto fileName = std::string(docFILE_NAME].get_string().value);\n if (!download(fileName, downloadFolder, filesCollection))\n {\n std::cout << \"Download failed for: \" << fileName << std::endl;\n } \n }\n }\n catch(const std::exception& e)\n {\n std::cout << \"Exception encountered: \" << e.what() << std::endl;\n }\n\n return 0;\n}\n```\n\n## Application in action\n\nBefore executing this application, add some files (like images or audios) under the `uploadFolder` directory. \n\n![Files to be uploaded from local disk to MongoDB.][2]\n\nExecute the application and you\u2019ll observe output like this, signifying that the files are successfully uploaded and downloaded.\n\n![Application output showing successful uploads and downloads.][3]\n\nYou can see the collection in [Atlas or MongoDB Compass reflecting the files uploaded via the application.\n\n, offer a powerful solution for handling file storage in C++ applications. We can't wait to see what you build next! 
Share your creation with the community and let us know how it turned out!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt24f4df95c9cee69a/6504c0fd9bcd1b134c1d0e4b/image1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7c530c1eb76f566c/6504c12df4133500cb89250f/image3.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt768d2c8c6308391e/6504c153b863d9672da79f4c/image5.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8c199ec2272f2c4f/6504c169a8cf8b4b4a3e1787/image2.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt78bb48b832d91de2/6504c17fec9337ab51ec845e/image4.png", "format": "md", "metadata": {"tags": ["Atlas", "C++"], "pageDescription": "Learn how to store binary data to MongoDB using the C++ driver.", "contentType": "Tutorial"}, "title": "Storing Binary Data with MongoDB and C++", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/realm-web-sdk", "action": "created", "body": "\n\nMY MOVIES\n\n \n \n \n \n\n", "format": "md", "metadata": {"tags": ["JavaScript", "Realm"], "pageDescription": "Send MongoDB Atlas queries directly from the web browser with the Realm Web SDK.", "contentType": "Quickstart"}, "title": "Realm Web SDK Tutorial", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/bson-data-types-date", "action": "created", "body": "# Quick Start: BSON Data Types - Date\n\n \n\nDates and times in programming can be a challenge. Which Time Zone is the event happening in? What date format is being used? Is it `MM/DD/YYYY` or `DD/MM/YYYY`? Settling on a standard is important for data storage and then again when displaying the date and time. The recommended way to store dates in MongoDB is to use the BSON Date data type.\n\nThe BSON Specification refers to the `Date` type as the *UTC datetime* and is a 64-bit integer. It represents the number of milliseconds since the Unix epoch, which was 00:00:00 UTC on 1 January 1970. This provides a lot of flexibilty in past and future dates. With a 64-bit integer in use, we are able to represent dates *roughly* 290 million years before and after the epoch. As a signed 64-bit integer we are able to represent dates *prior* to 1 Jan 1970 with a negative number and positive numbers represent dates *after* 1 Jan 1970.\n\n## Why & Where to Use\n\nYou'll want to use the `Date` data type whenever you need to store date and/or time values in MongoDB. You may have seen a `timestamp` data type as well and thought \"Oh, that's what I need.\" However, the `timestamp` data type should be left for **internal** usage in MongoDB. The `Date` type is the data type we'll want to use for application development.\n\n## How to Use\n\nThere are some benefits to using the `Date` data type in that it comes with some handy features and methods. Need to assign a `Date` type to a variable? We have you covered there:\n\n``` javascript\nvar newDate = new Date();\n```\n\nWhat did that create exactly?\n\n``` none\n> newDate;\nISODate(\"2020-05-11T20:14:14.796Z\")\n```\n\nVery nice, we have a date and time wrapped as an ISODate. 
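\n\nThe constructor also accepts a specific moment, such as an ISO 8601 string or a millisecond value, which is what you will typically use when inserting and querying documents. Here is a quick sketch, assuming an `events` collection:\n\n``` javascript\n// Store an explicit point in time, then query a date range\ndb.events.insertOne({ title: \"launch\", when: new Date(\"2020-05-11T20:14:14.796Z\") });\ndb.events.find({ when: { $gte: new Date(\"2020-01-01\") } });\n```\n\n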
If we need that printed in a `string` format, we can use the `toString()` method.\n\n``` none\n> newDate.toString();\nMon May 11 2020 13:14:14 GMT-0700 (Pacific Daylight Time)\n```\n\n## Wrap Up\n\n>Get started exploring BSON types, like Date, with MongoDB Atlas today!\n\nThe `date` field is the recommended data type to use when you want to store date and time information in MongoDB. It provides the flexibility to store date and time values in a consistent format that can easily be stored and retrieved by your application. Give the BSON `Date` data type a try for your applications.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Working with dates and times can be a challenge. The Date BSON data type is an unsigned 64-bit integer with a UTC (Universal Time Coordinates) time zone.", "contentType": "Quickstart"}, "title": "Quick Start: BSON Data Types - Date", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-vector-search-openai-filtering", "action": "created", "body": "# Leveraging OpenAI and MongoDB Atlas for Improved Search Functionality\n\nSearch functionality is a critical component of many modern web applications. Providing users with relevant results based on their search queries and additional filters dramatically improves their experience and satisfaction with your app.\n\nIn this article, we'll go over an implementation of search functionality using OpenAI's GPT-4 model and MongoDB's \nAtlas Vector search. We've created a request handler function that not only retrieves relevant data based on a user's search query but also applies additional filters provided by the user.\n\nEnriching the existing documents data with embeddings is covered in our main Vector Search Tutorial. \n\n## Search in the Airbnb app context ##\n\nConsider a real-world scenario where we have an Airbnb-like app. Users can perform a free text search for listings and also filter results based on certain criteria like the number of rooms, beds, or the capacity of people the property can accommodate.\n\nTo implement this functionality, we use MongoDB's full-text search capabilities for the primary search, and OpenAI's GPT-4 model to create embeddings that contain the semantics of the data and use Vector Search to find relevant results.\n\nThe code to the application can be found in the following GitHub repository.\n\n## The request handler\nFor the back end, we have used Atlas app services with a simple HTTPS \u201cGET\u201d endpoint.\n\nOur function is designed to act as a request handler for incoming search requests.\nWhen a search request arrives, it first extracts the search terms and filters from the query parameters. If no search term is provided, it returns a random sample of 30 listings from the database.\n\nIf a search term is present, the function makes a POST request to OpenAI's API, sending the search term and asking for an embedded representation of it using a specific model. This request returns a list of \u201cembeddings,\u201d or vector representations of the search term, which is then used in the next step.\n\n```javascript\n\n// This function is the endpoint's request handler. \n// It interacts with MongoDB Atlas and OpenAI API for embedding and search functionality.\nexports = async function({ query }, response) {\n // Query params, e.g. 
'?search=test&beds=2' => {search: \"test\", beds: \"2\"}\n const { search, beds, rooms, people, maxPrice, freeTextFilter } = query;\n\n // MongoDB Atlas configuration.\n const mongodb = context.services.get('mongodb-atlas');\n const db = mongodb.db('sample_airbnb'); // Replace with your database name.\n const listingsAndReviews = db.collection('listingsAndReviews'); // Replace with your collection name.\n\n // If there's no search query, return a sample of 30 random documents from the collection.\n if (!search || search === \"\") {\n return await listingsAndReviews.aggregate({$sample: {size: 30}}]).toArray();\n }\n\n // Fetch the OpenAI key stored in the context values.\n const openai_key = context.values.get(\"openAIKey\");\n\n // URL to make the request to the OpenAI API.\n const url = 'https://api.openai.com/v1/embeddings';\n\n // Call OpenAI API to get the embeddings.\n let resp = await context.http.post({\n url: url,\n headers: {\n 'Authorization': [`Bearer ${openai_key}`],\n 'Content-Type': ['application/json']\n },\n body: JSON.stringify({\n input: search,\n model: \"text-embedding-ada-002\"\n })\n });\n\n // Parse the JSON response\n let responseData = EJSON.parse(resp.body.text());\n\n // Check the response status.\n if(resp.statusCode === 200) {\n console.log(\"Successfully received embedding.\");\n\n // Fetch a random sample document.\n \n\n const embedding = responseData.data[0].embedding;\n console.log(JSON.stringify(embedding))\n\n let searchQ = {\n \"index\": \"default\",\n \"queryVector\": embedding,\n \"path\": \"doc_embedding\",\n \"k\": 100,\n \"numCandidates\": 1000\n }\n\n // If there's any filter in the query parameters, add it to the search query.\n if (freeTextFilter){\n // Turn free text search using GPT-4 into filter\n const sampleDocs = await listingsAndReviews.aggregate([\n { $sample: { size: 1 }},\n { $project: {\n _id: 0,\n bedrooms: 1,\n beds: 1,\n room_type: 1,\n property_type: 1,\n price: 1,\n accommodates: 1,\n bathrooms: 1,\n review_scores: 1\n }}\n ]).toArray();\n \n const filter = await context.functions.execute(\"getSearchAIFilter\",sampleDocs[0],freeTextFilter );\n searchQ.filter = filter;\n }\nelse if(beds || rooms) {\n let filter = { \"$and\" : []} \n \n if (beds) {\n filter.$and.push({\"beds\" : {\"$gte\" : parseInt(beds) }})\n }\n if (rooms)\n {\n filter.$and.push({\"bedrooms\" : {\"$gte\" : parseInt(rooms) }})\n }\n searchQ.filter = filter;\n}\n\n // Perform the search with the defined query and limit the result to 50 documents.\n let docs = await listingsAndReviews.aggregate([\n { \"$vectorSearch\": searchQ },\n { $limit : 50 }\n ]).toArray();\n\n return docs;\n } else {\n console.error(\"Failed to get embeddings\");\n return [];\n }\n};\n```\nTo cover the filtering part of the query, we are using embedding and building a filter query to cover the basic filters that a user might request \u2014 in the presented example, two rooms and two beds in each.\n\n```js\nelse if(beds || rooms) {\n let filter = { \"$and\" : []} \n \n if (beds) {\n filter.$and.push({\"beds\" : {\"$gte\" : parseInt(beds) }})\n }\n if (rooms)\n {\n filter.$and.push({\"bedrooms\" : {\"$gte\" : parseInt(rooms) }})\n }\n searchQ.filter = filter;\n}\n```\n## Calling OpenAI API\n![AI Filter\n\nLet's consider a more advanced use case that can enhance our filtering experience. 
In this example, we are allowing a user to perform a free-form filtering that can provide sophisticated sentences, such as, \u201cMore than 1 bed and rating above 91.\u201d\n\nWe call the OpenAI API to interpret the user's free text filter and translate it into something we can use in a MongoDB query. We send the API a description of what we need, based on the document structure we're working with and the user's free text input. This text is fed into the GPT-4 model, which returns a JSON object with 'range' or 'equals' operators that can be used in a MongoDB search query.\n\n### getSearchAIFilter function\n\n```javascript\n// This function is the endpoint's request handler. \n// It interacts with OpenAI API for generating filter JSON based on the input.\nexports = async function(sampleDoc, search) {\n // URL to make the request to the OpenAI API.\n const url = 'https://api.openai.com/v1/chat/completions';\n\n // Fetch the OpenAI key stored in the context values.\n const openai_key = context.values.get(\"openAIKey\");\n\n // Convert the sample document to string format.\n let syntDocs = JSON.stringify(sampleDoc);\n console.log(syntDocs);\n\n // Prepare the request string for the OpenAI API.\n const reqString = `Convert programmatic command to Atlas $search filter only for range and equals JS:\\n\\nExample: Based on document structure {\"siblings\" : '...', \"dob\" : \"...\"} give me the filter of all people born 2015 and siblings are 3 \\nOutput: {\"filter\":{ \"compound\" : { \"must\" : [ {\"range\": {\"gte\": 2015, \"lte\" : 2015,\"path\": \"dob\"} },{\"equals\" : {\"value\" : 3 , path :\"siblings\"}}]}}} \\n\\n provide the needed filter to accomodate ${search}, pick a path from structure ${syntDocs}. Need just the json object with a range or equal operators. No explanation. No 'Output:' string in response. Valid JSON.`;\n console.log(`reqString: ${reqString}`);\n\n // Call OpenAI API to get the response.\n let resp = await context.http.post({\n url: url,\n headers: {\n 'Authorization': `Bearer ${openai_key}`,\n 'Content-Type': 'application/json'\n },\n body: JSON.stringify({\n model: \"gpt-4\",\n temperature: 0.1,\n messages: [\n {\n \"role\": \"system\",\n \"content\": \"Output filter json generator follow only provided rules\"\n },\n {\n \"role\": \"user\",\n \"content\": reqString\n }\n ]\n })\n });\n\n // Parse the JSON response\n let responseData = JSON.parse(resp.body.text());\n\n // Check the response status.\n if(resp.statusCode === 200) {\n console.log(\"Successfully received code.\");\n console.log(JSON.stringify(responseData));\n\n const code = responseData.choices[0].message.content;\n let parsedCommand = EJSON.parse(code);\n console.log('parsed' + JSON.stringify(parsedCommand));\n\n // If the filter exists and it's not an empty object, return it.\n if (parsedCommand.filter && Object.keys(parsedCommand.filter).length !== 0) {\n return parsedCommand.filter;\n }\n \n // If there's no valid filter, return an empty object.\n return {};\n\n } else {\n console.error(\"Failed to generate filter JSON.\");\n console.log(JSON.stringify(responseData));\n return {};\n }\n};\n```\n\n## MongoDB search and filters\n\nThe function then constructs a MongoDB search query using the embedded representation of the search term and any additional filters provided by the user. 
This query is sent to MongoDB, and the function returns the results as a response \u2014something that looks like the following for a search of \u201cNew York high floor\u201d and \u201cMore than 1 bed and rating above 91.\u201d\n\n```javascript\n{$vectorSearch:{\n \"index\": \"default\",\n \"queryVector\": embedding,\n \"path\": \"doc_embedding\",\n \"filter\" : { \"$and\" : [{\"beds\": {\"$gte\" : 1}} , \"score\": {\"$gte\" : 91}}]},\n \"k\": 100,\n \"numCandidates\": 1000\n }\n}\n```\n\n## Conclusion\nThis approach allows us to leverage the power of OpenAI's GPT-4 model to interpret free text input and MongoDB's full-text search capability to return highly relevant search results. The use of natural language processing and AI brings a level of flexibility and intuitiveness to the search function that greatly enhances the user experience.\n\nRemember, however, this is an advanced implementation. Ensure you have a good understanding of how MongoDB and OpenAI operate before attempting to implement a similar solution. Always take care to handle sensitive data appropriately and ensure your AI use aligns with OpenAI's use case policy.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Node.js", "AI"], "pageDescription": "This article delves into the integration of search functionality in web apps using OpenAI's GPT-4 model and MongoDB's Atlas Vector search. By harnessing the capabilities of AI and database management, we illustrate how to create a request handler that fetches data based on user queries and applies additional filters, enhancing user experience.", "contentType": "Tutorial"}, "title": "Leveraging OpenAI and MongoDB Atlas for Improved Search Functionality", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/document-enrichment-and-schema-updates", "action": "created", "body": "# Document Enrichment and Schema Updates\n\nSo your business needs have changed and there\u2019s additional data that needs to be stored within an existing dataset. Fear not! With MongoDB, this is no sweat.\n\n> In this article, I\u2019ll show you how to quickly add and populate additional fields into an existing database collection.\n\n## The Scenario\n\nLet\u2019s say you have a \u201cNetflix\u201d type application and you want to allow users to see which movies they have watched. We\u2019ll use the sample\\_mflix database from the sample datasets available in a MongoDB Atlas cluster.\n\nHere is the existing schema for the user collection in the sample\\_mflix database:\n\n``` js\n{\n _id: ObjectId(),\n name: ,\n email: ,\n password: \n}\n```\n\n## The Solution\n\nThere are a few ways we could go about this. Since MongoDB has a flexible data model, we can just add our new data into existing documents.\n\nIn this example, we are going to assume that we know the user ID. We\u2019ll use `updateOne` and the `$addToSet` operator to add our new data.\n\n``` js\nconst { db } = await connectToDatabase();\nconst collection = await db.collection(\u201cusers\u201d).updateOne(\n { _id: ObjectID(\u201c59b99db9cfa9a34dcd7885bf\u201d) },\n {\n $addToSet: {\n moviesWatched: {\n ,\n ,\n \n }\n }\n }\n);\n```\n\nThe `$addToSet` operator adds a value to an array avoiding duplicates. If the field referenced is not present in the document, `$addToSet` will create the array field and enter the specified value. 
If the value is already present in the field, `$addToSet` will do nothing.\n\nUsing `$addToSet` will prevent us from duplicating movies when they are watched multiple times.\n\n## The Result\n\nNow, when a user goes to their profile, they will see their watched movies.\n\nBut what if the user has not watched any movies? The user will simply not have that field in their document.\n\nI\u2019m using Next.js for this application. I simply need to check to see if a user has watched any movies and display the appropriate information accordingly.\n\n``` js\n{ moviesWatched\n ? \"Movies I've Watched\"\n : \"I have not watched any movies yet :(\"\n}\n```\n\n## Conclusion\n\nBecause of MongoDB\u2019s flexible data model, we can have multiple schemas in one collection. This allows you to easily update data and fields in existing schemas.\n\nIf you would like to learn more about schema validation, take a look at the Schema Validation documentation.\n\nI\u2019d love to hear your feedback or questions. Let\u2019s chat in the MongoDB Community.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "So your business needs have changed and there\u2019s additional data that needs to be stored within an existing dataset. Fear not! With MongoDB, this is no sweat. In this article, I\u2019ll show you how to quickly add and populate additional fields into an existing database collection.", "contentType": "Tutorial"}, "title": "Document Enrichment and Schema Updates", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/serverless-instances-billing-optimize-bill-indexing", "action": "created", "body": "# How to Optimize Your Serverless Instance Bill with Indexing\n\nServerless solutions are quickly gaining traction among developers and organizations alike as a means to move fast, minimize overhead, and optimize costs. But shifting from a traditional pre-provisioned and predictable monthly bill to a consumption or usage-based model can sometimes result in confusion around how that bill is generated. In this article, we\u2019ll take you through the basics of our serverless billing model and give you tips on how to best optimize your serverless database for cost efficiency.\n\n## What are serverless instances?\n\nMongoDB Atlas serverless instances, recently announced as generally available, provide an on-demand serverless endpoint for your application with no sizing required. You simply choose a cloud provider and region to get started, and as your app grows, your serverless database will seamlessly scale based on demand and only charge for the resources you use.\n\nUnlike our traditional clusters, serverless instances offer a fundamentally different pricing model that is primarily metered on reads, writes, and storage with automatic tiered discounts on reads as your usage scales. 
So, you can start small without any upfront commitments and never worry about paying for unused resources if your workload is idle.\n\n### Serverless Database Pricing\n\nPay only for the operations you run.\n\n| Item | Description | Pricing |\n| ---- | ----------- | ------- |\n| Read Processing Unit (RPU) | Number of read operations and documents scanned* per operation\n\n*\\*Number of documents read in 4KB chunks and indexes read in 256 byte chunks* | $0.10/million for the first 50 million per day\\*\n\n*\\*Daily RPU tiers: Next 500 million: $0.05/million Reads thereafter: $0.01/million* |\n| Write Processing Unit (WPU) | Number of write operations\\* to the database\n\n\\*Number of documents and indexes written in 1KB chunks | $1.00/million |\n| Storage | Data and indexes stored on the database | $0.25/GB-month |\n| Standard Backup | Download and restore of backup snapshots\\*\n\n\\*2 free daily snapshots included per serverless instance* | $2.50/hour\\*\n\n\\*To download or restore the data* |\n| Serverless Continuous Backup | 35-day backup retention for daily snapshots | $0.20/GB-month |\n| Data Transfer | Inbound/outbound data to/from the database | $0.015 - $0.10/GB\\*\n\n\\**Depending on traffic source and destination* |\n\nAt first glance, read processing units (RPU) and write processing units (WPU) might be new units to you, so let\u2019s quickly dig into what they mean. We use RPUs and WPUs to quantify the amount of work the database has to do to service a query, or to perform a write. To put it simply, a read processing unit (RPU) refers to the read operations to the database and is calculated based on the number of operations run and documents scanned per operation. Similarly, a write processing unit (WPU) is a write operation to the database and is calculated based on the number of bytes written to each document or index. For further explanation of cost units, please refer to our documentation.\n\nNow that you have a basic understanding of the pricing model, let\u2019s go through an example to provide more context and tips on how to ensure your operations are best optimized to minimize costs.\n\nFor this example, we\u2019ll be using the sample dataset in Atlas. To use sample data, simply go to your serverless instance deployment and select \u201cLoad Sample Dataset\u201d from the dropdown as seen below.\n\nThis will load a few collections, such as weather data and Airbnb listing data. Note that loading the sample dataset will consume approximately one million WPUs (less than $1 in most supported regions), and you will be\u00a0billed accordingly.\u00a0\n\nNow, let\u2019s take a look at what happens when we interact with our data and do some search queries.\n\n## Scenario 1: Query on unindexed fields\n\nFor this exercise, I chose the sample\\_weatherdata collection. While looking at the data in the Atlas Collections view, it\u2019s clear that the weather data collection has information from various places and that most locations have a call letter code as a convenient way to identify where this weather reading data was taken.\n\nFor this example, let\u2019s simulate what would happen if a user comes to your weather app and does a lookup by a geographic location. In this weather data collection, geographic locations can be identified by callLetters, which are specific codes for various weather stations across the world. 
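In query terms, such a lookup is a simple equality match on the `callLetters` field. Here is a minimal sketch of that query from the MongoDB Shell (the same filter is used by the test script later in this section; the station code is just a sample value):

```javascript
// Equality lookup on the callLetters field in the sample weather data.
// stationCode can be any station's call letters.
const weatherDb = db.getSiblingDB("sample_weatherdata");
const stationCode = "ESVJ";
weatherDb.data.find({ callLetters: stationCode });
```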
I arbitrarily picked station code \u201cESVJ,\u201d which is a weather buoy in the Atlantic Ocean.\u00a0\n\nHere is what we see when we run this query in Atlas Data Explorer:\u00a0\n\nWe can see this query returns three records. Now, let\u2019s take a look at how many RPUs this query would cost me. We should remember that RPUs are calculated based on the number of read operations and the number of documents scanned per operation.\n\nTo execute the previous query, a full collection scan is required, which results in approximately 1,000 RPUs. \n\nI took this query and ran this nearly 3,000 times through a shell script. This will simulate around 3,000 users coming to an app to check the weather in a day. Here is the code behind the script:\n\n```\nweatherRPUTest.sh\n\nfor ((i=0; i<=3000; i++)); do\n\n\u00a0\u00a0\u00a0\u00a0echo testing $i\n\n\u00a0\u00a0\u00a0\u00a0mongosh \"mongodb+srv://vishalserverless1.qdxrf.mongodb.net/sample_weatherdata\" --apiVersion 1 --username vishal --password ******** < mongoTest.js\n\ndone\n\nmongoTest.js\n\ndb.data.find({callLetters: \"ESVJ\"})\n\n```\n\nAs expected, 3,000 iterations will be 1,000 * 3,000 = 3,000,000 RPUs = 3MM RPUs = $0.30. \n\nBased on this, the cost per user for this application would be $0.01 per user (calculated as: 3,000,000 / 3,000 = 1,000 RPUs = $0.01).\n\nThe cost of $0.01 per user seems to be very high for a database lookup, because if this weather app were to scale to reach a similar level of activity to Accuweather, who sees about 9.5B weather requests in a day, you\u2019d be paying close to around $1 million in database costs per day. By leaving your query this way, it\u2019s likely that you\u2019d be faced with an unexpectedly high bill as your usage scales \u2014 falling into a common trap that many new serverless users face.\n\nTo avoid this problem, we recommend that you follow MongoDB best practices and\u00a0index your data\u00a0to optimize your queries for both performance and cost.\u00a0Indexes\u00a0are special data structures that store a small portion of the collection's data set in an easy-to-traverse form.\n\nWithout indexes, MongoDB must perform a collection scan\u2014i.e., scan every document in a collection\u2014to select those documents that match the query statement (something you just saw in the example above). By adding an index to appropriate queries, you can limit the number of documents it must inspect, significantly reducing the operations you are charged for.\n\nLet\u2019s look at how indexing can help you reduce your RPUs significantly.\n\n## Scenario two: Querying with indexed fields\n\nFirst, let\u2019s create a simple index on the field \u2018callLetters\u2019:\n\nThis operation will typically finish within 2-3 seconds. For reference, we can see the size of the index created on the index tab:\n\nDue to the data structure of the index, the exact number of index reads is hard to compute. However, we can run the same script again for 3,000 iterations and compare the number of RPUs.\n\nThe 3,000 queries on the indexed field now result in approximately 6,500 RPUs in contrast to the 3 million RPUs from the un-indexed query, which is a **99.8% reduction in RPUs**. 
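For reference, the index behind this comparison is a single-field index on `callLetters`. The article creates it through the Atlas UI; an equivalent sketch from the MongoDB Shell would be:

```javascript
// Create a single-field index on callLetters so the equality lookup above
// no longer needs a full collection scan (and therefore consumes far fewer RPUs).
db.getSiblingDB("sample_weatherdata").data.createIndex({ callLetters: 1 });
```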
\n\nWe can see that by simply adding the above index, we were able to reduce the cost per user to roughly $0.000022 (calculated as: 6,500/3,000 = 2.2 RPUs = $0.000022), which is a huge cost saving compared to the previous cost of $0.01 per user.\n\nTherefore, indexing not only helps with improving the performance and scale of your queries, but it can also reduce your consumed RPUs significantly, which reduces your costs. Note that there can be rare scenarios where this is not true (where the size of the index is much larger than the number of documents). However, in most cases, you should see a significant reduction in cost and an improvement in performance.\n\n## Take action to optimize your costs today\n\nAs you can see, adopting a usage-based pricing model can sometimes require you to be extra diligent in ensuring your data structure and queries are optimized. But when done correctly, the time spent to do those optimizations often pays off in more ways than one.\u00a0\n\nIf you\u2019re unsure of where to start, we have\u00a0built-in monitoring tools\u00a0available in the Atlas UI that can help you. The\u00a0performance advisor\u00a0automatically monitors your database for slow-running queries and will suggest new indexes to help improve query performance. Or, if you\u2019re looking to investigate slow-running queries further, you can use\u00a0query profiler\u00a0to view a breakdown of all slow-running queries that occurred in the last 24 hours. If you prefer a terminal experience, you can also analyze your\u00a0query performance\u00a0in the MongoDB Shell or in MongoDB Compass.\u00a0\n\nIf you need further assistance, you can always contact our support team via chat or the\u00a0MongoDB support portal.\u00a0", "format": "md", "metadata": {"tags": ["Atlas", "Serverless"], "pageDescription": "Shifting from a pre-provisioned to a serverless database can be challenging. Learn how to optimize your database and save money with these best practices.", "contentType": "Article"}, "title": "How to Optimize Your Serverless Instance Bill with Indexing", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/unique-indexes-quirks-unique-documents-array-documents", "action": "created", "body": "# Unique Indexes Quirks and Unique Documents in an Array of Documents\n\nWe are developing an application to summarize a user's financial situation. 
The main page of this application shows us the user's identification and the balances on all banking accounts synced with our application.\n\nAs we've seen in blog posts and recommendations of how to get the most out of MongoDB, \"Data that is accessed together should be stored together.\" We thought of the following document/structure to store the data used on the main page of the application:\n\n```javascript\nconst user = {\n _id: 1,\n name: { first: \"john\", last: \"smith\" },\n accounts: \n { balance: 500, bank: \"abc\", number: \"123\" },\n { balance: 2500, bank: \"universal bank\", number: \"9029481\" },\n ],\n};\n```\n\nBased on the functionality of our application, we determined the following rules:\n\n- A user can register in the application and not sync a bank account.\n- An account is identified by its `bank` and `number` fields.\n- The same account shouldn't be registered for two different users.\n- The same account shouldn't be registered multiple times for the same user.\n\nTo enforce what was presented above, we decided to create an index with the following characteristics:\n\n- Given that the fields `bank` and `number` must not repeat, this index must be set as [Unique.\n- Since we are indexing more than one field, it'll be of type Compound.\n- Since we are indexing documents inside of an array, it'll also be of type Multikey.\n\nAs a result of that, we have a `Compound Multikey Unique Index` with the following specification and options:\n\n```javascript\nconst specification = { \"accounts.bank\": 1, \"accounts.number\": 1 };\nconst options = { name: \"Unique Account\", unique: true };\n```\n\nTo validate that our index works as we intended, we'll use the following data on our tests:\n\n```javascript\nconst user1 = { _id: 1, name: { first: \"john\", last: \"smith\" } };\nconst user2 = { _id: 2, name: { first: \"john\", last: \"appleseed\" } };\nconst account1 = { balance: 500, bank: \"abc\", number: \"123\" };\n```\n\nFirst, let's add the users to the collection:\n\n```javascript\ndb.users.createIndex(specification, options); // Unique Account\n\ndb.users.insertOne(user1); // { acknowledged: true, insertedId: 1)}\ndb.users.insertOne(user2); // MongoServerError: E11000 duplicate key error collection: test.users index: Unique Account dup key: { accounts.bank: null, accounts.number: null }\n```\n\nPretty good. We haven't even started working with the accounts, and we already have an error. Let's see what is going on.\n\nAnalyzing the error message, it says we have a duplicate key for the index `Unique Account` with the value of `null` for the fields `accounts.bank` and `accounts.number`. This is due to how indexing works in MongoDB. When we insert a document in an indexed collection, and this document doesn't have one or more of the fields specified in the index, the value of the missing fields will be considered `null`, and an entry will be added to the index.\n\nUsing this logic to analyze our previous test, when we inserted `user1`, it didn't have the fields `accounts.bank` and `accounts.number` and generated an entry in the index `Unique Account` with the value of `null` for both. When we tried to insert the `user2` in the collection, we had the same behavior, and another entry in the index `Unique Account` would have been created if we hadn't specified this index as `unique`. More info about missing fields and unique indexes can be found in our docs.\n\nThe solution for this issue is to only index documents with the fields `accounts.bank` and `accounts.number`. 
To accomplish that, we can specify a partial filter expression on our index options to accomplish that. Now we have a `Compound Multikey Unique Partial Index` (fancy name, hum, who are we trying to impress here?) with the following specification and options:\n\n```javascript\nconst specification = { \"accounts.bank\": 1, \"accounts.number\": 1 };\nconst optionsV2 = {\n name: \"Unique Account V2\",\n partialFilterExpression: {\n \"accounts.bank\": { $exists: true },\n \"accounts.number\": { $exists: true },\n },\n unique: true,\n};\n```\n\nBack to our tests:\n\n```javascript\n// Cleaning our environment\ndb.users.drop({}); // Delete documents and indexes definitions\n\n/* Tests */\ndb.users.createIndex(specification, optionsV2); // Unique Account V2\ndb.users.insertOne(user1); // { acknowledged: true, insertedId: 1)}\ndb.users.insertOne(user2); // { acknowledged: true, insertedId: 2)}\n```\n\nOur new index implementation worked, and now we can insert those two users without accounts. Let's test account duplication, starting with the same account for two different users:\n\n```javascript\n// Cleaning the collection\ndb.users.deleteMany({}); // Delete only documents, keep indexes definitions\ndb.users.insertMany(user1, user2]);\n\n/* Test */\ndb.users.updateOne({ _id: user1._id }, { $push: { accounts: account1 } }); // { ... matchedCount: 1, modifiedCount: 1 ...}\n\ndb.users.updateOne({ _id: user2._id }, { $push: { accounts: account1 } }); // MongoServerError: E11000 duplicate key error collection: test.users index: Unique Account V2 dup key: { accounts.bank: \"abc\", accounts.number: \"123\" }\n```\n\nWe couldn't insert the same account into different users as we expected. Now, we'll try the same account for the same user.\n\n```javascript\n// Cleaning the collection\ndb.users.deleteMany({}); // Delete only documents, keep indexes definitions\ndb.users.insertMany([user1, user2]);\n\n/* Test */\ndb.users.updateOne({ _id: user1._id }, { $push: { accounts: account1 } }); // { ... matchedCount: 1, modifiedCount: 1 ...}\n\ndb.users.updateOne({ _id: user1._id }, { $push: { accounts: account1 } }); // { ... matchedCount: 1, modifiedCount: 1 ...}\n\ndb.users.findOne({ _id: user1._id }); /*{\n _id: 1,\n name: { first: 'john', last: 'smith' },\n accounts: [\n { balance: 500, bank: 'abc', number: '123' },\n { balance: 500, bank: 'abc', number: '123' }\n ]\n}*/\n```\n\nWhen we don't expect things to work, they do. Again, another error was caused by not knowing or considering how indexes work on MongoDB. Reading about [unique constraints in the MongoDB documentation, we learn that MongoDB indexes don't duplicate strictly equal entries with the same key values pointing to the same document. Considering this, when we inserted `account1` for the second time on our user, an index entry wasn't created. With that, we don't have duplicate values on it.\n\nSome of you more knowledgeable on MongoDB may think that using $addToSet instead of $push would resolve our problem. Not this time, young padawan. The `$addToSet` function would consider all the fields in the account's document, but as we specified at the beginning of our journey, an account must be unique and identifiable by the fields `bank` and `number`.\n\nOkay, what can we do now? 
Our index has a ton of options and compound names, and our application doesn't behave as we hoped.\n\nA simple way out of this situation is to change how our update function is structured, changing its filter parameter to match only the user's documents where the account we want to insert isn't in the `accounts` array.\n\n```javascript\n// Cleaning the collection\ndb.users.deleteMany({}); // Delete only documents, keep indexes definitions\ndb.users.insertMany(user1, user2]);\n\n/* Test */\nconst bankFilter = { \n $not: { $elemMatch: { bank: account1.bank, number: account1.number } } \n};\n\ndb.users.updateOne(\n { _id: user1._id, accounts: bankFilter },\n { $push: { accounts: account1 } }\n); // { ... matchedCount: 1, modifiedCount: 1 ...}\n\ndb.users.updateOne(\n { _id: user1._id, accounts: bankFilter },\n { $push: { accounts: account1 } }\n); // { ... matchedCount: 0, modifiedCount: 0 ...}\n\ndb.users.findOne({ _id: user1._id }); /*{\n _id: 1,\n name: { first: 'john', last: 'smith' },\n accounts: [ { balance: 500, bank: 'abc', number: '123' } ]\n}*/\n```\n\nProblem solved. We tried to insert the same account for the same user, and it didn't insert, but it also didn't error out.\n\nThis behavior doesn't meet our expectations because it doesn't make it clear to the user that this operation is prohibited. Another point of concern is that this solution considers that every time a new account is inserted in the database, it'll use the correct update filter parameters.\n\nWe've worked in some companies and know that as people come and go, some knowledge about the implementation is lost, interns will try to reinvent the wheel, and some nasty shortcuts will be taken. We want a solution that will error out in any case and stop even the most unscrupulous developer/administrator who dares to change data directly on the production database \ud83d\ude31.\n\n[MongoDB schema validation for the win.\n\nA quick note before we go down this rabbit role. MongoDB best practices recommend implementing schema validation on the application level and using MongoDB schema validation as a backstop. \n\nIn MongoDB schema validation, it's possible to use the operator `$expr` to write an aggregation expression to validate the data of a document when it has been inserted or updated. With that, we can write an expression to verify if the items inside an array are unique.\n\nAfter some consideration, we get the following expression:\n\n```javascript\nconst accountsSet = { \n $setIntersection: { \n $map: { \n input: \"$accounts\", \n in: { bank: \"$$this.bank\", number: \"$$this.number\" } \n },\n },\n};\n\nconst uniqueAccounts = {\n $eq: { $size: \"$accounts\" }, { $size: accountsSet }],\n};\n\nconst accountsValidator = {\n $expr: {\n $cond: {\n if: { $isArray: \"$accounts\" },\n then: uniqueAccounts,\n else: true,\n },\n },\n};\n```\n\nIt can look a little scary at first, but we can go through it.\n\nThe first operation we have inside of [$expr is a $cond. When the logic specified in the `if` field results in `true`, the logic within the field `then` will be executed. When the result is `false`, the logic within the `else` field will be executed.\n\nUsing this knowledge to interpret our code, when the accounts array exists in the document, `{ $isArray: \"$accounts\" }`, we will execute the logic within`uniqueAccounts`. When the array doesn't exist, we return `true` signaling that the document passed the schema validation. \n\nInside the `uniqueAccounts` variable, we verify if the $size of two things is $eq. 
The first thing is the size of the array field `$accounts`, and the second thing is the size of `accountsSet` that is generated by the $setIntersection function. If the two arrays have the same size, the logic will return `true`, and the document will pass the validation. Otherwise, the logic will return `false`, the document will fail validation, and the operation will error out.\n\nThe $setIntersenction function will perform a set operation on the array passed to it, removing duplicate entries. The array passed to `$setIntersection` will be generated by a $map function, which maps each account in `$accounts` to only have the fields `bank` and `number`.\n\nLet's see if this is witchcraft or science:\n\n```javascript\n// Cleaning the collection\ndb.users.drop({}); // Delete documents and indexes definitions\ndb.createCollection(\"users\", { validator: accountsValidator });\ndb.users.createIndex(specification, optionsV2);\ndb.users.insertMany([user1, user2]);\n\n/* Test */\ndb.users.updateOne({ _id: user1._id }, { $push: { accounts: account1 } }); // { ... matchedCount: 1, modifiedCount: 1 ...}\n\ndb.users.updateOne(\n { _id: user1._id },\n { $push: { accounts: account1 } }\n); /* MongoServerError: Document failed validation\nAdditional information: {\n failingDocumentId: 1,\n details: {\n operatorName: '$expr',\n specifiedAs: {\n '$expr': {\n '$cond': {\n if: { '$and': '$accounts' },\n then: { '$eq': [ [Object], [Object] ] },\n else: true\n }\n }\n },\n reason: 'expression did not match',\n expressionResult: false\n }\n}*/\n```\n\nMission accomplished! Now, our data is protected against those who dare to make changes directly in the database. \n\nTo get to our desired behavior, we reviewed MongoDB indexes with the `unique` option, how to add safety guards to our collection with a combination of parameters in the filter part of an update function, and how to use MongoDB schema validation to add an extra layer of security to our data. ", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn about how to handle unique documents in an array and some of the surrounding MongoDB unique index quirks.", "contentType": "Tutorial"}, "title": "Unique Indexes Quirks and Unique Documents in an Array of Documents", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/zero-hero-mrq", "action": "created", "body": "# From Zero to Hero with MrQ\n\n> The following content is based on a recent episode from the MongoDB Podcast. Want to hear the full conversation? Head over the to episode page!\n\nWhen you think of online gambling, what do you imagine? Big wins? Even bigger losses? Whatever view you might have in your mind, MrQ is here to revolutionize the industry.\n\nMrQ is redefining the online casino gaming industry. The data-driven technology company saw its inception in September 2015 and officially launched in 2018. CTO Iulian Dafinoiu speaks of humble beginnings for MrQ \u2014 it was bootstrapped, with no external investment. And to this day, the company maintains a focus on building a culture of value- and vision-led teams.\n\nMrQ wanted to become, in a sense, the Netflix of casinos. Perfecting a personalized user experience is at the heart of everything they do. The idea is to give players as much data as possible to make the right decisions. MrQ\u2019s games don\u2019t promise life-changing wins. As Dafinoiu puts it, you win some and you lose some. 
In fact, you might win, but you\u2019ll definitely lose.\n\nGambling is heavily commoditized, and players expect to play the same games each time \u2014 ones that they have a personal connection with. MrQ aims to keep it all fun for their players with an extensive gaming catalog of player favorites, shifting the perception of what gambling should always be: enjoyable. But they\u2019re realists and know that this can happen only if players are in control and everything is transparent.\n\nAt the same time, they had deeper goals around the data they were using.\n\n>\u201dThe mindset was always to not be an online casino, but actually be a kind of data-driven technology company that operates in the gambling space.\u201d\n\n## The challenge\n\nIn the beginning, MrQ struggled with the availability of player data and real-time events. There was a poor back office system and technical implementations. The option to scale quickly and seamlessly was a must, especially as the UK-based company strives to expand into other countries and markets, within a market that\u2019s heavily regulated, which can be a hindrance to compliance.\n\nBehind the curtains, Dafinoiu started with Postgres but quickly realized this wasn\u2019t going to give MrQ the freedom to scale how they wanted to.\n\n>\u201dI couldn\u2019t dedicate a lot of time to putting servers together, managing the way they kind of scale, creating replica sets or even shards, which was almost impossible for MariaDB or Postgres, at the time. I couldn\u2019t invest a lot of time into that.\"\n\n## The solution\n\nAfter realizing the shortcomings of Postgres, MrQ switched to MongoDB due to its ease and scalability. In the beginning, it was just Dafinoiu managing everything. He needed something that could almost do it for him. Thus, MongoDB became their primary database technology. It\u2019s their primary source of truth and can scale horizontally without blinking twice. Dafinoiu saw that the schema flexibility is a good fit and the initial performance was strong. Initially, they used it on-premise but then migrated to Atlas, our multi-cloud database service.\n\nAside from MongoDB, MrQ uses Java and Kotlin for their backend system, React and JSON for the front end, and Kafka for real-time events.\n\nWith a tech stack that allows for more effortless growth, MrQ is looking toward a bright future.\n\n## Next steps for MrQ\n\nDafinoiu came to MrQ with 13 years of experience as a software engineer. More than seven years into his journey with the company, he\u2019s looking to take their more than one million players, 700 games, and 40 game providers to the next level. They\u2019re actively working on moving into other territories and have a goal of going global this year, with MrQ+.\n\n>\u201dThere\u2019s a lot of compliance and regulations around it because you need to acquire new licenses for almost every new market that you want to go into.\"\n\nInternally, the historically small development studio will continue to prioritize slow but sustainable growth, with workplace culture always at the forefront. For their customers, MrQ plans to continue using the magic of machine learning to provide a stellar experience. They want to innovate by creating their own games and even move into the Bingo space, making it a social experience for all ages with a chat feature and different versions, iterations, and interpretations of the long-time classic. Payments will also be faster and more stable. 
Overall, players can expect MrQ to continue reinforcing its place as one of the top destinations for online casino gaming.\n\nWant to hear more from Iulian Dafinoiu about his journey with MrQ and how the platform interacts with MongoDB? Head over to our podcast and listen to the full episode.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "MrQ is redefining the online casino gaming industry. Learn more about where the company comes from and where it's going, from CTO Iulian Dafinoiu.", "contentType": "Article"}, "title": "From Zero to Hero with MrQ", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/serverless-development-aws-lambda-mongodb-atlas-using-java", "action": "created", "body": "# Serverless Development with AWS Lambda and MongoDB Atlas Using Java\n\nSo you need to build an application that will scale with demand and a database to scale with it? It might make sense to explore serverless functions, like those offered by AWS Lambda, and a cloud database like MongoDB Atlas.\n\nServerless functions are great because you can implement very specific logic in the form of a function and the infrastructure will scale automatically to meet the demand of your users. This will spare you from having to spend potentially large amounts of money on always on, but not always needed, infrastructure. Pair this with an elastically scalable database like MongoDB Atlas, and you've got an amazing thing in the works.\n\nIn this tutorial, we're going to explore how to create a serverless function with AWS Lambda and MongoDB, but we're going to focus on using Java, one of the available AWS Lambda runtimes.\n\n## The requirements\n\nTo be successful with this tutorial, there are a few requirements that must be met prior to continuing.\n\n- Must have an AWS Lambda compatible version of Java installed and configured on your local computer.\n- Must have a MongoDB Atlas instance deployed and configured.\n- Must have an Amazon Web Services (AWS) account.\n- Must have Gradle or Maven, but Gradle will be the focus for dependency management.\n\nFor the sake of this tutorial, the instance size or tier of MongoDB Atlas is not too important. In fact, an M0 instance, which is free, will work fine. You could also use a serverless instance which pairs nicely with the serverless architecture of AWS Lambda. Since the Atlas configuration is out of the scope of this tutorial, you'll need to have your user rules and network access rules in place already. 
If you need help configuring MongoDB Atlas, consider checking out the getting started guide.\n\nGoing into this tutorial, you might start with the following boilerplate AWS Lambda code for Java:\n\n```java\npackage example;\n\nimport com.amazonaws.services.lambda.runtime.Context;\nimport com.amazonaws.services.lambda.runtime.RequestHandler;\n\npublic class Handler implements RequestHandler, Void>{\n\n @Override\n public void handleRequest(Map event, Context context) {\n // Code will be in here...\n return null;\n }\n}\n```\n\nYou can use a popular development IDE like IntelliJ, but it doesn't matter, as long as you have access to Gradle or Maven for building your project.\n\nSpeaking of Gradle, the following can be used as boilerplate for our tasks and dependencies:\n\n```groovy\nplugins {\n id 'java'\n}\n\ngroup = 'org.example'\nversion = '1.0-SNAPSHOT'\n\nrepositories {\n mavenCentral()\n}\n\ndependencies {\n testImplementation platform('org.junit:junit-bom:5.9.1')\n testImplementation 'org.junit.jupiter:junit-jupiter'\n implementation 'com.amazonaws:aws-lambda-java-core:1.2.2'\n implementation 'com.amazonaws:aws-lambda-java-events:3.11.1'\n implementation 'org.slf4j:slf4j-log4j12:1.7.36'\n runtimeOnly 'com.amazonaws:aws-lambda-java-log4j2:1.5.1'\n}\n\ntest {\n useJUnitPlatform()\n}\n\ntask buildZip(type: Zip) {\n into('lib') {\n from(jar)\n from(configurations.runtimeClasspath)\n }\n}\n\nbuild.dependsOn buildZip\n```\n\nTake note that we do have our AWS Lambda dependencies included as well as a task for bundling everything into a ZIP archive when we build.\n\nWith the baseline AWS Lambda function in place, we can focus on the MongoDB development side of things.\n\n## Installing, configuring, and connecting to MongoDB Atlas with the MongoDB driver for Java\n\nTo get started, we're going to need the MongoDB driver for Java available to us. This dependency can be added to our project's **build.gradle** file:\n\n```groovy\ndependencies {\n // Previous boilerplate dependencies ...\n implementation 'org.mongodb:bson:4.10.2'\n implementation 'org.mongodb:mongodb-driver-sync:4.10.2'\n}\n```\n\nThe above two lines indicate that we want to use the driver for interacting with MongoDB and we also want to be able to interact with BSON.\n\nWith the driver and related components available to us, let's revisit the Java code we saw earlier. 
In this particular example, the Java code will be found in a **src/main/java/example/Handler.java** file.\n\n```java\npackage example;\n\nimport com.amazonaws.services.lambda.runtime.Context;\nimport com.amazonaws.services.lambda.runtime.RequestHandler;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport com.mongodb.client.model.Filters;\nimport org.bson.BsonDocument;\nimport org.bson.Document;\nimport org.bson.conversions.Bson;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Map;\n\npublic class Handler implements RequestHandler, Void>{\n\n private final MongoClient mongoClient;\n\n public Handler() {\n mongoClient = MongoClients.create(System.getenv(\"MONGODB_ATLAS_URI\"));\n }\n\n @Override\n public void handleRequest(Map event, Context context) {\n MongoDatabase database = mongoClient.getDatabase(\"sample_mflix\");\n MongoCollection collection = database.getCollection(\"movies\");\n\n // More logic here ...\n\n return null;\n }\n}\n```\n\nIn the above code, we've imported a few classes, but we've also made some changes pertaining to how we plan to interact with MongoDB.\n\nThe first thing you'll notice is our use of the `Handler` constructor method:\n\n```java\npublic Handler() {\n mongoClient = MongoClients.create(System.getenv(\"MONGODB_ATLAS_URI\"));\n}\n```\n\nWe're establishing our client, not our connection, outside of the handler function itself. We're doing this so our connections can be reused and not established on every invocation, which would potentially overload us with too many concurrent connections. We're also referencing an environment variable for our MongoDB Atlas URI string. This will be set later within the AWS Lambda portal.\n\nIt's bad practice to hard-code your URI string into your application. Use a configuration file or environment variable whenever possible.\n\nNext up, we have the function logic where we grab a reference to our database and collection:\n\n```java\n@Override\npublic void handleRequest(Map event, Context context) {\n MongoDatabase database = mongoClient.getDatabase(\"sample_mflix\");\n MongoCollection collection = database.getCollection(\"movies\");\n\n // More logic here ...\n\n return null;\n}\n```\n\nBecause this example was meant to only be enough to get you going, we're using the sample datasets that are available for MongoDB Atlas users. It doesn't really matter what you use for this example as long as you've got a collection with some data.\n\nWe're on our way to being successful with MongoDB and AWS Lambda!\n\n## Querying data from MongoDB when the serverless function is invoked\n\nWith the client configuration in place, we can focus on interacting with MongoDB. Before we do that, a few things need to change to the design of our function:\n\n```java\npublic class Handler implements RequestHandler, List>{\n\n private final MongoClient mongoClient;\n\n public Handler() {\n mongoClient = MongoClients.create(System.getenv(\"MONGODB_ATLAS_URI\"));\n }\n\n @Override\n public List handleRequest(Map event, Context context) {\n MongoDatabase database = mongoClient.getDatabase(\"sample_mflix\");\n MongoCollection collection = database.getCollection(\"movies\");\n\n // More logic here ...\n\n return null;\n }\n}\n```\n\nNotice that the implemented `RequestHandler` now uses `List` instead of `Void`. 
The return type of the `handleRequest` function has also been changed from `void` to `List` to support us returning an array of documents back to the requesting client.\n\nWhile you could do a POJO approach in your function, we're going to use `Document` instead.\n\nIf we want to query MongoDB and return the results, we could do something like this:\n\n```java\n@Override\npublic List handleRequest(Map event, Context context) {\n MongoDatabase database = mongoClient.getDatabase(\"sample_mflix\");\n MongoCollection collection = database.getCollection(\"movies\");\n\n Bson filter = new BsonDocument();\n\n if(event.containsKey(\"title\") && !event.get(\"title\").isEmpty()) {\n filter = Filters.eq(\"title\", event.get(\"title\"));\n }\n\n List results = new ArrayList<>();\n collection.find(filter).limit(5).into(results);\n\n return results;\n}\n```\n\nIn the above example, we are checking to see if the user input data `event` contains a property \"title\" and if it does, use it as part of our filter. Otherwise, we're just going to return everything in the specified collection.\n\nSpeaking of returning everything, the sample data set is rather large, so we're actually going to limit the results to five documents or less. Also, instead of using a cursor, we're going to dump all the results from the `find` operation into a `List` which we're going to return back to the requesting client.\n\nWe didn't do much in terms of data validation, and our query was rather simple, but it is a starting point for bigger and better things.\n\n## Deploy the Java application to AWS Lambda\n\nThe project for this example is complete, so it is time to get it bundled and ready to go for deployment within the AWS cloud.\n\nSince we're using Gradle for this project and we have a task defined for bundling, execute the build script doing something like the following:\n\n```bash\n./gradlew build\n```\n\nIf everything built properly, you should have a **build/distributions/\\*.zip** file. The name of that file will depend on all the naming you've used throughout your project.\n\nWith that file in hand, go to the AWS dashboard for Lambda and create a new function.\n\nThere are three things you're going to want to do for a successful deployment:\n\n1. Add the environment variable for the MongoDB Atlas URI.\n2. Upload the ZIP archive.\n3. Rename the \"Handler\" information to reflect your actual project.\n\nWithin the AWS Lambda dashboard for your new function, click the \"Configuration\" tab followed by the \"Environment Variables\" navigation item. Add your environment variable information and make sure the key name matches the name you used in your code.\n\nWe used `MONGODB_ATLAS_URI` in the code, and the actual value would look something like this:\n\n```\nmongodb+srv://:@examples.170lwj0.mongodb.net/?retryWrites=true&w=majority\n```\n\nJust remember to use your actual username, password, and instance URL.\n\nNext, you can upload your ZIP archive from the \"Code\" tab of the dashboard.\n\nWhen the upload completes, on the \"Code\" tab, look for \"Runtime Settings\" section and choose to edit it. In our example, the package name was **example**, the Java file was named **Handler**, and the function with the logic was named **handleRequest**. With this in mind, our \"Handler\" should be **example.Handler::handleRequest**. 
If you're using something else for your naming, make sure it reflects appropriately, otherwise Lambda won't know what to do when invoked.\n\nTake the function for a spin!\n\nUsing the \"Test\" tab, try invoking the function with no user input and then invoke it using the following:\n\n```json\n{\n \"title\": \"Batman\"\n}\n```\n\nYou should see different results reflecting what was added in the code.\n\n## Conclusion\n\nYou just saw how to create a serverless function with AWS Lambda that interacts with MongoDB. In this particular example, Java was the star of the show, but similar logic and steps can be applied for any of the other supported AWS Lambda runtimes or MongoDB drivers.\n\nIf you have questions or want to see how others are using MongoDB Atlas with AWS Lambda, check out the MongoDB Community Forums.\n\n", "format": "md", "metadata": {"tags": ["Atlas", "Java", "Serverless"], "pageDescription": "Learn how to build and deploy a serverless function to AWS Lambda that communicates with MongoDB using the Java programming language.", "contentType": "Tutorial"}, "title": "Serverless Development with AWS Lambda and MongoDB Atlas Using Java", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/full-text-search-mobile-app-mongodb-realm", "action": "created", "body": "# How to Do Full-Text Search in a Mobile App with MongoDB Realm\n\nFull-text search is an important feature in modern mobile applications, as it allows you to quickly and efficiently access information within large text datasets. This is fundamental for certain app categories that deal with large amounts of text documents, like news and magazines apps and chat and email applications. \n\nWe are happy to introduce full-text search (FTS) support for Realm \u2014 a feature long requested by our developers. While traditional search with string matching returns exact occurrences, FTS returns results that contain the words from the query, but respecting word boundaries. For example, looking for the word \u201ccat\u201d with FTS will return only text containing exactly that word, while a traditional search will return also text containing words like \u201ccatalog\u201d and \u201cadvocating\u201d. Additionally, it\u2019s also possible to specify words that should *not* be present in the result texts. Another important addition with the Realm-provided FTS is speed: As the index is created beforehand, searches on it are very fast compared to pure string matching.\n\nIn this tutorial, we are giving examples using FTS with the .NET SDK, but FTS is also available in the Realm SDK for Kotlin, Dart, and JS, and will soon be available for Swift and Obj-C. \n\nLater, we will show a practical example, but for now, let us take a look at what you need in order to use the new FTS search with the .NET Realm SDK:\n\n1. Add the `Indexed(IndexType.FullText)]` attribute on the string property to create an index for searching.\n2. Running queries\n 1. To run Language-Integrated Query (LINQ) queries, use `QueryMethods.FullTextSearch`. For example: `realm.All().Where(b => QueryMethods.FullTextSearch(b.Summary, \"fantasy novel\")`\n 2. To run `Filter` queries, use the `TEXT` operator. For example: `realm.All().Filter(\"Summary TEXT $0\", \"fantasy novel\");`\n\nAdditionally, words in the search phrase can be prepended with a \u201c-\u201d to indicate that certain words should not occur. 
For example: `realm.All().Where(b => QueryMethods.FullTextSearch(b.Summary, \"fantasy novel -rings\")`\n\n## Search example\n\nIn this example, we will be creating a realm with book summaries indexed and searchable by the full-text search. First, we\u2019ll create the object schema for the books and index on the summary property:\n\n```csharp\npublic partial class Book : IRealmObject\n{\n [PrimaryKey]\n public string Name { get; set; } = null!;\n\n [Indexed(IndexType.FullText)]\n public string Summary { get; set; } = null!;\n}\n```\n\nNext, we\u2019ll define a few books with summaries and add those to the realm:\n\n```csharp\n// ..\nvar animalFarm = new Book\n{\n Name = \"Animal Farm\",\n Summary = \"Animal Farm is a novel that tells the story of a group of farm animals who rebel against their human farmer, hoping to create a society where the animals can be equal, free, and happy. Ultimately, the rebellion is betrayed, and the farm ends up in a state as bad as it was before.\"\n};\n\nvar lordOfTheRings = new Book\n{\n Name = \"Lord of the Rings\",\n Summary = \"The Lord of the Rings is an epic high-fantasy novel by English author and scholar J. R. R. Tolkien. Set in Middle-earth, the story began as a sequel to Tolkien's 1937 children's book The Hobbit, but eventually developed into a much larger work.\"\n};\n\nvar lordOfTheFlies = new Book\n{\n Name = \"Lord of the Flies\",\n Summary = \"Lord of the Flies is a novel that revolves around a group of British boys who are stranded on an uninhabited island and their disastrous attempts to govern themselves.\"\n};\n\nvar realm = Realm.GetInstance();\n\nrealm.Write(() =>\n{\n realm.Add(animalFarm);\n realm.Add(lordOfTheFlies);\n realm.Add(lordOfTheRings);\n});\n```\n\nAnd finally, we are ready for searching the summaries as follows:\n\n```csharp\nvar books = realm.All();\n\n// Returns all books with summaries containing both \"novel\" and \"lord\"\nvar result = books.Where(b => QueryMethods.FullTextSearch(b.Summary, \"novel lord\"));\n\n// Equivalent query using `Filter`\nresult = books.Filter(\"Summary TEXT $0\", \"novel lord\");\n\n// Returns all books with summaries containing both \"novel\" and \"lord\", but not \"rings\"\nresult = books.Where(b => QueryMethods.FullTextSearch(b.Summary, \"novel -rings\"));\n```\n\n## Additional information \n\nA few important things to keep in mind when using full-text search:\n\n- Only string properties are valid for an FTS index, also on embedded objects. A collection of strings cannot be indexed. \n- Indexes spanning multiple properties are not supported. For example, if you have a `Book` object, with `Name` and `Summary` properties, you cannot declare a single index that covers both, but you can have one index per property. \n- Doing an FTS lookup for a phrase across multiple properties must be done using a combination of two expressions (i.e., trying to find `red ferrari` where `red` appears in property A and `ferrari` in property B must be done with `(A TEXT 'red') AND (B TEXT 'ferrari'))`.\n- FTS only supports languages that use ASCII and Latin-1 character sets (most western languages). Only sequences of (alphanumeric) characters from these sets will be tokenized and indexed. All others will be considered white space.\n- Searching is case- and diacritics-insensitive, so \u201cGarcon\u201d matches \u201cgar\u00e7on\u201d.\n\nWe understand there are additional features to FTS we could work to add. 
Please give us feedback and head over to our [community forums!", "format": "md", "metadata": {"tags": ["Realm", "C#"], "pageDescription": "Learn how to add Full-Text Search (FTS) to your mobile applications using C# with Realm and MongoDB.", "contentType": "Tutorial"}, "title": "How to Do Full-Text Search in a Mobile App with MongoDB Realm", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/leverage-event-driven-architecture-mongodb-databricks", "action": "created", "body": "# How to Leverage an Event-Driven Architecture with MongoDB and Databricks\n\nFollow along with this tutorial to get a detailed view of how to leverage MongoDB Atlas App Services in addition to Databricks model building and deployment capabilities to fuel data-driven strategies with real-time events data. Let\u2019s get started! \n\n## The basics\n\nWe\u2019re going to use a MongoDB Atlas M10 cluster as the backend service for the solution. If you are not familiar with MongoDB Atlas yet, you can follow along with the Introduction to MongoDB course to start with the basics of cluster configuration and management.\n\n## Data collection\n\nThe solution is based on data that mimics a collection from an event-driven architecture ingestion from an e-commerce website storefront. We\u2019re going to use a synthetic dataset to represent what we would receive in our cloud database coming from a Kafka stream of events. The data source can be found on Kaggle.\n\nThe data is in a tabular format. When converted into an object suitable for MongoDB, it will look like this: \n\n```json\n{\n \"_id\": {\n \"$oid\": \"63c557ddcc552f591375062d\"\n },\n \"event_time\": {\n \"$date\": {\n \"$numberLong\": \"1572566410000\"\n }\n },\n \"event_type\": \"view\",\n \"product_id\": \"5837166\",\n \"category_id\": \"1783999064103190764\",\n \"brand\": \"pnb\",\n \"price\": 22.22,\n \"user_id\": \"556138645\",\n \"user_session\": \"57ed222e-a54a-4907-9944-5a875c2d7f4f\"\n}\n```\n\nThe event-driven architecture is very simple. It is made up of only four different events that a user can perform on the e-commerce site: \n\n| **event_type** | **description** |\n| ------------------ | --------------------------------------------------------- |\n| \"view\" | A customer views a product on the product detail page. |\n| \"cart\" | A customer adds a product to the cart. |\n| \"remove_from_cart\" | A customer removes a product from the cart. |\n| \"purchase\" | A customer completes a transaction of a specific product. |\n\nThe data in the Kaggle dataset is made of 4.6 million documents, which we will store in a database named **\"ecom_events\"** and under the collection **\"cosmetics\".** This collection represents all the events happening in a multi-category store during November 2019. \n\nWe\u2019ve chosen this date specifically because it will contain behavior corresponding to Black Friday promotions, so it will surely showcase price changes and thus, it will be more interesting to evaluate the price elasticity of products during this time.\n\n## Aggregate data in MongoDB\n\nUsing the powerful MongoDB Atlas Aggregation Pipeline, you can shape your data any way you need. We will shape the events in an aggregated view that will give us a \u201cpurchase log\u201d so we can have historical prices and total quantities sold by product. This way, we can feed a linear regression model to get the best possible fit of a line representing the relationship between price and units sold. 
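Before walking through each stage, note that the whole transformation is simply an ordered array of stages passed to `aggregate()` on the `cosmetics` collection. A skeleton sketch (each stage document is spelled out in the walkthrough that follows):

```javascript
// Skeleton of the purchase-log pipeline; the stage documents are detailed below.
db.getSiblingDB("ecom_events").cosmetics.aggregate([
  { $match: { event_type: "purchase" } }, // 1. keep only purchase events
  // 2. $group   – daily units sold per product at each price point
  // 3. $project – flatten the grouped _id back into top-level fields
  // 4. $group + $sort + $project – build each product's sales history and revenue
  // 5. $out     – write the result into the purchase_log collection
]);
```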
\n\nBelow, you\u2019ll find the different stages of the aggregation pipeline: \n\n1. **Match**: We are only interested in purchasing events, so we run a match stage for the event_type key having the value 'purchase'.\n\n ```json\n {\n '$match': {\n 'event_type': 'purchase'\n }\n }\n ```\n\n2. **Group**: We are interested in knowing how many times a particular product was bought in a day and at what price. Therefore, we group by all the relevant keys, while we also do a data type transformation for the \u201cevent_time\u201d, and we compute a new field, \u201ctotal_sales\u201d, to achieve daily total sales at a specific price point.\n\n ```json\n {\n '$group': {\n '_id': {\n 'event_time': {\n '$dateToString': {\n 'format': '%Y-%m-%d', \n 'date': '$event_time'\n }\n }, \n 'product_id': '$product_id', \n 'price': '$price', \n 'brand': '$brand', \n 'category_code': '$category_code'\n }, \n 'total_sales': {\n '$sum': 1\n }\n }\n }\n ```\n\n3. **Project**: Next, we run a project stage to get rid of the object nesting resulting after the group stage. (Check out the MongoDB Compass Aggregation Pipeline Builder as you will be able to see the result of each one of the stages you add in your pipeline!) \n\n ```json\n {\n '$project': {\n 'total_sales': 1,\n 'event_time': '$_id.event_time',\n 'product_id': '$_id.product_id',\n 'price': '$_id.price',\n 'brand': '$_id.brand',\n 'category_code': '$_id.category_code',\n '_id': 0\n }\n }\n ```\n\n4. **Group, Sort, and Project:** We need just one object that will have the historic sales of a product during the time, a sort of time series data log computing aggregates over time. Notice how we will also run a data transformation on the \u2018$project\u2019 stage to get the \u2018revenue\u2019 generated by that product on that specific day. To achieve this, we need to group, sort, and project as such:\n\n ```json\n {\n '$group': {\n '_id': '$product_id', \n 'sales_history': {\n '$push': '$$ROOT'\n }\n }\n }, \n {\n '$sort': {\n 'sales_history': -1\n }\n }, \n {\n '$project': {\n 'product_id': '$_id', \n 'event_time': '$sales_history.event_time', \n 'price': '$sales_history.price', \n 'brand': '$sales_history.brand', \n 'category_code': '$sales_history.category_code', \n 'total_sales': '$sales_history.total_sales', \n 'revenue': {\n '$map': {\n 'input': '$sales_history', \n 'as': 'item', \n 'in': {\n '$multiply': \n '$$item.price', '$$item.total_sales'\n ]\n }\n }\n }\n }\n }\n ```\n\n5. **Out**: The last stage of the pipeline is to push our properly shaped objects to a new collection called \u201cpurchase_log\u201d. This collection will serve as the base to feed our model, and the aggregation pipeline will be the baseline of a trigger function further along to automate the generation of such log every time there\u2019s a purchase, but in that case, we will use a $merge stage.\n\n ```json\n {\n '$out': 'purchase_log'\n }\n ```\n\nWith this aggregation pipeline, we are effectively transforming our data to the needed purchase log to understand the historic sales by the price of each product and start building our dashboard for category leads to understand product sales and use that data to compute the price elasticity of demand of each one of them.\n\n## Intelligence layer: Building your model and deploying it to a Databricks endpoint\n\nThe goal of this stage is to be able to compute the price elasticity of demand of each product in real-time. 
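As a reminder of what the model needs to produce: price elasticity of demand relates the relative change in units sold to the relative change in price. The custom Databricks model shown below estimates it from the purchase log as the least-squares slope of daily sales against price, scaled by the ratio of mean price to mean sales. Here is a small sketch of that same calculation (helper names are illustrative, not part of the model code):

```javascript
// Sketch of the elasticity formula implemented by the model below:
// slope of the least-squares fit of units sold (Q) against price (P),
// multiplied by mean(P) / mean(Q).
const mean = (xs) => xs.reduce((sum, x) => sum + x, 0) / xs.length;

function priceElasticity(prices, sales) {
  const slope =
    (mean(prices.map((p, i) => p * sales[i])) - mean(prices) * mean(sales)) /
    (mean(prices.map((p) => p * p)) - mean(prices) ** 2);
  return slope * (mean(prices) / mean(sales));
}
```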
Using Databricks, you can easily start up a [cluster and attach your model-building Notebook to it.\n\nOn your Notebook, you can import MongoDB data using the MongoDB Connector for Spark, and you can also take advantage of the MlFlow custom Python module library to write your Python scripts, as this one below:\n\n```python\n# define a custom model\nclass MyModel(mlflow.pyfunc.PythonModel):\n \n def predict(self, context, model_input):\n return self.my_custom_function(model_input)\n \n def my_custom_function(self, model_input):\n import json\n import numpy as np\n import pandas as pd\n from pandas import json_normalize\n \n #transforming data from JSON to pandas dataframe\n\n data_frame = pd.json_normalize(model_input)\n data_frame = data_frame.explode(\"event_time\", \"price\", \"total_sales\"]).drop([\"category_code\", \"brand\"], axis=1)\n data_frame = data_frame.reset_index(drop=True)\n \n #Calculating slope\n slope = ( (data_frame.price*data_frame.total_sales).mean() - data_frame.price.mean()*data_frame.total_sales.mean() ) / ( (((data_frame.price)**2).mean()) - (data_frame.price.mean())**2)\n price_elasticity = (slope)*(data_frame.price.mean()/data_frame.total_sales.mean())\n\n return price_elasticity\n```\n\nBut also, you could log the experiments and then register them as models so they can be then served as endpoints in the UI:\n\nLogging the model as experiment directly from the Notebook:\n\n```python\n#Logging model as a experiment \nmy_model = MyModel()\nwith mlflow.start_run():\n model_info = mlflow.pyfunc.log_model(artifact_path=\"model\", python_model=my_model)\n```\n\n![Check the logs of all the experiments associated with a certain Notebook.\n\nFrom the model page, you can click on \u201cdeploy model\u201d and you\u2019ll get an endpoint URL.\n\nOnce you have tested your model endpoint, it\u2019s time to orchestrate your application to achieve real-time analytics.\n\n## Orchestrating your application\n\nFor this challenge, we\u2019ll use MongoDB Triggers and Functions to make sure that we aggregate the data only of the last bought product every time there\u2019s a purchase event and we recalculate its price elasticity by passing its purchase log in an HTTP post call to the Databricks endpoint.\n\n### Aggregating data after each purchase\n\nFirst, you will need to set up an event stream that can capture changes in consumer behavior and price changes in real-time, so it will aggregate and update your purchase_log data. \n\nBy leveraging MongoDB App Services, you can build event-driven applications and integrate services in the cloud. So for this use case, we would like to set up a **Trigger** that will \u201clisten\u201d for any new \u201cpurchase\u201d event in the cosmetics collection, such as you can see in the below screenshots. To get you started on App Services, you can check out the documentation.\n\nAfter clicking on \u201cAdd Trigger,\u201d you can configure it to execute only when there\u2019s a new insert in the collection:\n\nScrolling down the page, you can also configure the function that will be triggered:\n\nSuch functions can be defined (and tested) in the function editor. 
The function we\u2019re using simply retrieves data from the cosmetics collection, performs some data processing on the information, and saves the result in a new collection.\n\n```javascript\nexports = async function() {\n const collection = context.services.get(\"mongodb-atlas\").db('ecom_events').collection('cosmetics');\n \n // Retrieving the last purchase event document\n let lastItemArr = ];\n \n try {\n lastItemArr = await collection.find({event_type: 'purchase'}, { product_id: 1 }).sort({ _id: -1 }).limit(1).toArray();\n } \n catch (error) {\n console.error('An error occurred during find execution:', error);\n }\n console.log(JSON.stringify(lastItemArr));\n \n // Defining the product_id of the last purchase event document \n var lastProductId = lastItemArr.length > 0 ? lastItemArr[0].product_id : null; \n console.log(JSON.stringify(lastProductId));\n console.log(typeof lastProductId);\n if (!lastProductId) {\n return null; \n }\n \n // Filtering the collection to get only the documents that match the same product_id as the last purchase event\n let lastColl = [];\n lastColl = await collection.find({\"product_id\": lastProductId}).toArray();\n console.log(JSON.stringify(lastColl));\n \n \n // Defining the aggregation pipeline for modeling a purchase log triggered by the purchase events.\n const agg = [\n {\n '$match': {\n 'event_type': 'purchase',\n 'product_id': lastProductId\n }\n }, {\n '$group': {\n '_id': {\n 'event_time': '$event_time', \n 'product_id': '$product_id', \n 'price': '$price', \n 'brand': '$brand', \n 'category_code': '$category_code'\n }, \n 'total_sales': {\n '$sum': 1\n }\n }\n }, {\n '$project': {\n 'total_sales': 1, \n 'event_time': '$_id.event_time', \n 'product_id': '$_id.product_id', \n 'price': '$_id.price', \n 'brand': '$_id.brand', \n 'category_code': '$_id.category_code', \n '_id': 0\n }\n }, {\n '$group': {\n '_id': '$product_id', \n 'sales_history': {\n '$push': '$$ROOT'\n }\n }\n }, {\n '$sort': {\n 'sales_history': -1\n }\n }, {\n '$project': {\n 'product_id': '$_id', \n 'event_time': '$sales_history.event_time', \n 'price': '$sales_history.price', \n 'brand': '$sales_history.brand', \n 'category_code': '$sales_history.category_code', \n 'total_sales': '$sales_history.total_sales', \n 'revenue': {\n '$map': {\n 'input': '$sales_history', \n 'as': 'item', \n 'in': {\n '$multiply': [\n '$$item.price', '$$item.total_sales'\n ]\n }\n }\n }\n }\n }\n, {\n '$merge': {\n 'into': 'purchase_log',\n 'on': '_id',\n 'whenMatched': 'merge',\n 'whenNotMatched': 'insert'\n }\n }\n ];\n \n // Running the aggregation\n const purchaseLog = await collection.aggregate(agg);\n const log = await purchaseLog.toArray();\n return log;\n};\n```\n\nThe above function is meant to shape the data from the last product_id item purchased into the historic purchase_log needed to compute the price elasticity. 
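While testing the trigger in the function editor, you can confirm that the log was written by querying the new collection directly from `mongosh` — for example, using the `ecom_events` database and the `product_id` that appears in the sample output below:

```javascript
use ecom_events
db.purchase_log.findOne({ product_id: 5837183 })
```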
As you can see in the code below, the result creates a document with historical price and total purchase data:\n\n```json\n{\n \"_id\": {\n \"$numberInt\": \"5837183\"\n },\n \"product_id\": {\n \"$numberInt\": \"5837183\"\n },\n \"event_time\": [\n \"2023-05-17\"\n ],\n \"price\": [\n {\n \"$numberDouble\": \"6.4\"\n }\n ],\n \"brand\": [\n \"runail\"\n ],\n \"category_code\": [],\n \"total_sales\": [\n {\n \"$numberLong\": \"101\"\n }\n ],\n \"revenue\": [\n {\n \"$numberDouble\": \"646.4000000000001\"\n }\n ]\n}\n```\n\nNote how we implement the **$merge** stage so we make sure to not overwrite the previous collection and just upsert the data corresponding to the latest bought item.\n\n### Computing the price elasticity\n\nThe next step is to process the event stream and calculate the price elasticity of demand for each product. For this, you may set up a trigger so that every time there\u2019s an insert or replace in the \u201cpurchase_log\u201d collection, we will do a post-HTTP request for retrieving the price elasticity.\n\n![Configuring the tigger to execute every time the collection has an insert or replace of documents\n\nThe trigger will execute a function such as the one below:\n\n```javascript\nexports = async function(changeEvent) {\n \n // Defining a variable for the full document of the last purchase log in the collection\n const { fullDocument } = changeEvent; \n console.log(\"Received doc: \" + fullDocument.product_id);\n \n // Defining the collection to get\n const collection = context.services.get(\"mongodb-atlas\").db(\"ecom_events\").collection(\"purchase_log\");\n console.log(\"It passed test 1\");\n \n // Fail proofing\n if (!fullDocument) {\n throw new Error('Error: could not get fullDocument from context');\n }\n console.log(\"It passed test 2\");\n \n if (!collection) {\n throw new Error('Error: could not get collection from context');\n }\n\n console.log(\"It passed test 3\");\n \n //Defining the connection variables\n const ENDPOINT_URL = \"YOUR_ENDPOINT_URL\";\n const AUTH_TOKEN = \"BASIC_TOKEN\";\n \n // Defining data to pass it into Databricks endpoint\n const data = {\"inputs\": fullDocument]};\n \n console.log(\"It passed test 4\");\n \n // Fetching data to the endpoint using http.post to get price elasticity of demand\n try {\n const res = await context.http.post({\n \"url\": ENDPOINT_URL,\n \"body\": JSON.stringify(data),\n \"encodeBodyAsJSON\": false,\n \"headers\": {\n \"Authorization\": [AUTH_TOKEN],\n \"Content-Type\": [\"application/json\"]\n }\n \n });\n \n console.log(\"It passed test 5\");\n \n if (res.statusCode !== 200) {\n throw new Error(`Failed to fetch data. 
Status code: ${res.statusCode}`);\n }\n \n console.log(\"It passed test 6\");\n \n // Logging response test\n const responseText = await res.body.text();\n console.log(\"Response body:\", responseText);\n\n // Parsing response from endpoint\n const responseBody = JSON.parse(responseText);\n const price_elasticity = responseBody.predictions;\n \n console.log(\"It passed test 7 with price elasticity: \" + price_elasticity);\n \n //Updating price elasticity of demand for specific document on the purchase log collection\n \n await collection.updateOne({\"product_id\": fullDocument.product_id}, {$push:{\"price_elasticity\": price_elasticity}} );\n console.log(\"It updated the product_id \" + fullDocument.product_id + \"successfully, adding price elasticity \" + price_elasticity ); \n } \n \n catch (err) {\n console.error(err);\n throw err;\n }\n};\n```\n\n## Visualize data with MongoDB Charts\n\nFinally, you will need to visualize the data to make it easier for stakeholders to understand the price elasticity of demand for each product. You can use a visualization tool like [MongoDB Charts to create dashboards and reports that show the price elasticity of demand over time and how it is impacted by changes in price, product offerings, and consumer behavior.\n\n## Evolving your apps\n\nThe new variable \u201cprice_elasticity\u201d can be easily passed to the collections that nurture your PIMS, allowing developers to build another set of rules based on these values to automate a full-fledged dynamic pricing tool. \n\nIt can also be embedded into your applications. Let\u2019s say an e-commerce CMS system used by your category leads to manually adjusting the prices of different products. Or in this case, to build different rules based on the price elasticity of demand to automate price setting. \n\nThe same data can be used as a feature for forecasting total sales and creating a recommended price point for net revenue.\n\nIn conclusion, this framework might be used to create any kind of real-time analytics use case you might think of in combination with any of the diverse use cases you\u2019ll find where machine learning could be used as a source of intelligent and automated decision-making processes.\n\nFind all the code used in the GitHub repository and drop by the Community Forum for any further questions, comments or feedback!! ", "format": "md", "metadata": {"tags": ["MongoDB", "Python", "JavaScript", "Spark"], "pageDescription": "Learn how to develop using an event-driven architecture that leverages MongoDB Atlas and Databricks.", "contentType": "Tutorial"}, "title": "How to Leverage an Event-Driven Architecture with MongoDB and Databricks", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/mongodb-bigquery-pipeline-using-confluent", "action": "created", "body": "# Streaming Data from MongoDB to BigQuery Using Confluent Connectors\n\nMany enterprise customers of MongoDB and Google Cloud have the core operation workload running on MongoDB and run their analytics on BigQuery. To make it seamless to move the data between MongoDB and BigQuery, MongoDB introduced Google Dataflow templates. Though these templates cater to most of the common use cases, there is still some effort required to set up the change stream (CDC) Dataflow template. Setting up the CDC requires users to create their own custom code to monitor the changes happening on their MongoDB Atlas collection. 
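For example, a hand-rolled CDC process typically boils down to a long-running change stream watcher that you host and operate yourself; a rough sketch in Python with PyMongo (connection string and namespace are placeholders) might look like this:

```
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@yourcluster.mongodb.net")
collection = client["Sample_company"]["Sample_employee"]

# Resume tokens, error handling, redelivery, scaling, and shipping the events
# on to BigQuery are all left for you to build, monitor, and maintain.
with collection.watch(full_document="updateLookup") as stream:
    for change in stream:
        print(change["operationType"], change.get("fullDocument"))
```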
Developing custom codes is time-consuming and requires a lot of time for development, support, management, and operations.\n\nOvercoming the additional effort required to set up CDCs for MongoDB to BigQuery Dataflow templates can be achieved using Confluent Cloud. Confluent is a full-scale data platform capable of continuous, real-time processing, integration, and data streaming across any infrastructure. Confluent provides pluggable, declarative data integration through its connectors. With Confluent\u2019s MongoDB source connectors, the process of creating and deploying a module for CDCs can be eliminated. Confluent Cloud provides a MongoDB Atlas source connector that can be easily configured from Confluent Cloud, which will read the changes from the MongoDB source and publish those changes to a topic. Reading from MongoDB as source is the part of the solution that is further enhanced with a Confluent BigQuery sink connector to read changes that are published to the topic and then writing to the BigQuery table. \n\nThis article explains how to set up the MongoDB cluster, Confluent cluster, and Confluent MongoDB Atlas source connector for reading changes from your MongoDB cluster, BigQuery dataset, and Confluent BigQuery sink connector.\n\nAs a prerequisite, we need a MongoDB Atlas cluster, Confluent Cloud cluster, and Google Cloud account. If you don\u2019t have the accounts, the next sections will help you understand how to set them up.\n\n### Set up your MongoDB Atlas cluster\nTo set up your first MongoDB Atlas cluster, you can register for MongoDB either from Google Marketplace or from the registration page. Once registered for MongoDB Atlas, you can set up your first free tier Shared M0 cluster. Follow the steps in the MongoDB documentation to configure the database user and network settings for your cluster. \n\nOnce the cluster and access setup is complete, we can load some sample data to the cluster. Navigate to \u201cbrowse collection\u201d from the Atlas homepage and click on \u201cCreate Database.\u201d Name your database \u201cSample_company\u201d and collection \u201cSample_employee.\u201d\n\nInsert your first document into the database:\n\n```\n{\n\"Name\":\"Jane Doe\",\n\"Address\":{\n\"Phone\":{\"$numberLong\":\"999999\"},\n\"City\":\"Wonderland\"\n}\n}\n}\n```\n\n## Set up a BigQuery dataset on Google Cloud\nAs a prerequisite for setting up the pipeline, we need to create a dataset in the same region as that of the Confluent cluster. Please go through the Google documentation to understand how to create a dataset for your project. Name your dataset \u201cSample_Dataset.\u201d\n\n## Set up the Confluent Cloud cluster and connectors\nAfter setting up the MongoDB and BigQuery datasets, Confluent will be the platform to build the data pipeline between these platforms. \n\nTo sign up using Confluent Cloud, you can either go to the Confluent website or register from Google Marketplace. New signups receive $400 to spend during their first 30 days and a credit card is not required. To create the cluster, you can follow the first step in the documentation. 
**One important thing to consider is that the region of the cluster should be the same region of the GCP BigQuery cluster.**\n\n### Set up your MongoDB Atlas source connector on Confluent\nDepending on the settings, it may take a few minutes to provision your cluster, but once the cluster has provisioned, we can get the sample data from MongoDB cluster to the Confluent cluster.\n\nConfluent\u2019s MongoDB Atlas Source connector helps to read the change stream data from the MongoDB database and write it to the topic. This connector is fully managed by Confluent and you don\u2019t need to operate it. To set up a connector, navigate to Confluent Cloud and search for the MongoDB Atlas source connector under \u201cConnectors.\u201d The connector documentation provides the steps to provision the connector. \n\nBelow is the sample configuration for the MongoDB source connector setup.\n\n1. For **Topic selection**, leave the prefix empty.\n2. Generate **Kafka credentials** and click on \u201cContinue.\u201d\n3. Under Authentication, provide the details:\n 1. Connection host: Only provide the MongoDB Hostname in format \u201cmongodbcluster.mongodb.net.\u201d\n 2. Connection user: MongoDB connection user name.\n 3. Connection password: Password of the user being authenticated.\n 4. Database name: **sample_database** and collection name: **sample_collection**.\n4. Under configuration, select the output Kafka record format as **JSON_SR** and click on \u201cContinue.\u201d\n5. Leave sizing to default and click on \u201cContinue.\u201d\n6. Review and click on \u201cContinue.\u201d\n\n```\n{\n \"name\": \"MongoDbAtlasSourceConnector\",\n \"config\": {\n \"connector.class\": \"MongoDbAtlasSource\",\n \"name\": \"MongoDbAtlasSourceConnector\",\n \"kafka.auth.mode\": \"KAFKA_API_KEY\",\n \"kafka.api.key\": \"****************\",\n \"kafka.api.secret\": \"****************************************************************\",\n \"connection.host\": \"mongodbcluster.mongodb.net\",\n \"connection.user\": \"testuser\",\n \"connection.password\": \"*********\",\n \"database\": \"Sample_Company\",\n \"collection\": \"Sample_Employee\",\n \"output.data.format\": \"JSON_SR\",\n \"publish.full.document.only\": \"true\",\n \"tasks.max\": \"1\"\n }\n}\n```\n\n### Set up Confluent Cloud: BigQuery sink connector\nAfter setting up our BigQuery, we need to provision a sink connector to sink the data from Confluent Cluster to Google BigQuery. The Confluent Cloud to BigQuery Sink connector can stream table records from Kafka topics to Google BigQuery. The table records are streamed at high throughput rates to facilitate analytical queries in real time.\n\nTo set up the Bigquery sink connector, follow the steps in their documentation. 
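Before configuring the sink, it helps to keep in mind what actually flows through the topic: because the source connector above was created with `publish.full.document.only` set to `true`, the value of each record published to the topic is just the changed MongoDB document (in the JSON_SR format), along the lines of the sample document inserted earlier:

```
{
  "Name": "Jane Doe",
  "Address": {
    "Phone": { "$numberLong": "999999" },
    "City": "Wonderland"
  }
}
```

Below is a sample configuration for the BigQuery sink connector: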
\n\n```\n{\n \"name\": \"BigQuerySinkConnector_0\",\n \"config\": {\n \"topics\": \"AppEngineTest.emp\",\n \"input.data.format\": \"JSON_SR\",\n \"connector.class\": \"BigQuerySink\",\n \"name\": \"BigQuerySinkConnector_0\",\n \"kafka.auth.mode\": \"KAFKA_API_KEY\",\n \"kafka.api.key\": \"****************\",\n \"kafka.api.secret\": \"****************************************************************\",\n \"keyfile\": \"******************************************************************************\n\u2014--\n***************************************\",\n \"project\": \"googleproject-id\",\n \"datasets\": \"Sample_Dataset\",\n \"auto.create.tables\": \"true\",\n \"auto.update.schemas\": \"true\",\n \"tasks.max\": \"1\"\n }\n}\n```\n\nTo see the data being loaded to BigQuery, make some changes on the MongoDB collection. Any inserts and updates will be recorded from MongoDB and pushed to BigQuery. \n\nInsert below document to your MongoDB collection using MongoDB Atlas UI. (Navigate to your collection and click on \u201cINSERT DOCUMENT.\u201d)\n\n```\n{\n\"Name\":\"John Doe\",\n\"Address\":{\n\"Phone\":{\"$numberLong\":\"8888888\"},\n\"City\":\"Narnia\"\n}\n}\n}\n```\n\n## Summary\nMongoDB and Confluent are positioned at the heart of many modern data architectures that help developers easily build robust and reactive data pipelines that stream events between applications and services in real time. In this example, we provided a template to build a pipeline from MongoDB to Bigquery on Confluent Cloud. Confluent Cloud provides more than 200 connectors to build such pipelines between many solutions. Although the solutions change, the general approach is using those connectors to build pipelines.\n\n### What's next?\n\n1. To understand the features of Confluent Cloud managed MongoDB sink and source connectors, you can watch this webinar. \n2. Learn more about the Bigquery sink connector.\n3. A data pipeline for MongoDB Atlas and BigQuery using Dataflow.\n4. Set up your first MongoDB cluster using Google Marketplace.\n5. Run analytics using BigQuery using BigQuery ML.\n\n", "format": "md", "metadata": {"tags": ["Atlas", "Google Cloud", "AI"], "pageDescription": "Learn how to set up a data pipeline from your MongoDB database to BigQuery using the Confluent connector.", "contentType": "Tutorial"}, "title": "Streaming Data from MongoDB to BigQuery Using Confluent Connectors", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/cheat-sheet", "action": "created", "body": "# MongoDB Cheat Sheet\n\nFirst steps in the MongoDB World? 
This cheat sheet is filled with some handy tips, commands, and quick references to get you connected and CRUD'ing in no time!\n\n- Get a free MongoDB cluster in MongoDB Atlas.\n- Follow a course in MongoDB University.\n\n## Updates\n\n- September 2023: Updated for MongoDB 7.0.\n\n## Table of Contents\n\n- Connect MongoDB Shell\n- Helpers\n- CRUD\n- Databases and Collections\n- Indexes\n- Handy commands\n- Change Streams\n- Replica Set\n- Sharded Cluster\n- Wrap-up\n\n## Connect via `mongosh`\n\n``` bash\nmongosh # connects to mongodb://127.0.0.1:27017 by default\nmongosh --host --port --authenticationDatabase admin -u -p # omit the password if you want a prompt\nmongosh \"mongodb://:@192.168.1.1:27017\"\nmongosh \"mongodb://192.168.1.1:27017\"\nmongosh \"mongodb+srv://cluster-name.abcde.mongodb.net/\" --apiVersion 1 --username # MongoDB Atlas\n```\n\n- mongosh documentation.\n\n\ud83d\udd1d Table of Contents \ud83d\udd1d\n\n## Helpers\n\n### Show Databases\n\n``` javascript\nshow dbs\ndb // prints the current database\n```\n\n### Switch Database\n\n``` javascript\nuse \n```\n\n### Show Collections\n\n``` javascript\nshow collections\n```\n\n### Run JavaScript File\n\n``` javascript\nload(\"myScript.js\")\n```\n\n\ud83d\udd1d Table of Contents \ud83d\udd1d\n\n## CRUD\n\n### Create\n\n``` javascript\ndb.coll.insertOne({name: \"Max\"})\ndb.coll.insertMany({name: \"Max\"}, {name:\"Alex\"}]) // ordered bulk insert\ndb.coll.insertMany([{name: \"Max\"}, {name:\"Alex\"}], {ordered: false}) // unordered bulk insert\ndb.coll.insertOne({date: ISODate()})\ndb.coll.insertOne({name: \"Max\"}, {\"writeConcern\": {\"w\": \"majority\", \"wtimeout\": 5000}})\n```\n\n### Read\n\n``` javascript\ndb.coll.findOne() // returns a single document\ndb.coll.find() // returns a cursor - show 20 results - \"it\" to display more\ndb.coll.find().pretty()\ndb.coll.find({name: \"Max\", age: 32}) // implicit logical \"AND\".\ndb.coll.find({date: ISODate(\"2020-09-25T13:57:17.180Z\")})\ndb.coll.find({name: \"Max\", age: 32}).explain(\"executionStats\") // or \"queryPlanner\" or \"allPlansExecution\"\ndb.coll.distinct(\"name\")\n\n// Count\ndb.coll.countDocuments({age: 32}) // alias for an aggregation pipeline - accurate count\ndb.coll.estimatedDocumentCount() // estimation based on collection metadata\n\n// Comparison\ndb.coll.find({\"year\": {$gt: 1970}})\ndb.coll.find({\"year\": {$gte: 1970}})\ndb.coll.find({\"year\": {$lt: 1970}})\ndb.coll.find({\"year\": {$lte: 1970}})\ndb.coll.find({\"year\": {$ne: 1970}})\ndb.coll.find({\"year\": {$in: [1958, 1959]}})\ndb.coll.find({\"year\": {$nin: [1958, 1959]}})\n\n// Logical\ndb.coll.find({name:{$not: {$eq: \"Max\"}}})\ndb.coll.find({$or: [{\"year\" : 1958}, {\"year\" : 1959}]})\ndb.coll.find({$nor: [{price: 1.99}, {sale: true}]})\ndb.coll.find({\n $and: [\n {$or: [{qty: {$lt :10}}, {qty :{$gt: 50}}]},\n {$or: [{sale: true}, {price: {$lt: 5 }}]}\n ]\n})\n\n// Element\ndb.coll.find({name: {$exists: true}})\ndb.coll.find({\"zipCode\": {$type: 2 }})\ndb.coll.find({\"zipCode\": {$type: \"string\"}})\n\n// Aggregation Pipeline\ndb.coll.aggregate([\n {$match: {status: \"A\"}},\n {$group: {_id: \"$cust_id\", total: {$sum: \"$amount\"}}},\n {$sort: {total: -1}}\n])\n\n// Text search with a \"text\" index\ndb.coll.find({$text: {$search: \"cake\"}}, {score: {$meta: \"textScore\"}}).sort({score: {$meta: \"textScore\"}})\n\n// Regex\ndb.coll.find({name: /^Max/}) // regex: starts by letter \"M\"\ndb.coll.find({name: /^Max$/i}) // regex case insensitive\n\n// Array\ndb.coll.find({tags: {$all: 
[\"Realm\", \"Charts\"]}})\ndb.coll.find({field: {$size: 2}}) // impossible to index - prefer storing the size of the array & update it\ndb.coll.find({results: {$elemMatch: {product: \"xyz\", score: {$gte: 8}}}})\n\n// Projections\ndb.coll.find({\"x\": 1}, {\"actors\": 1}) // actors + _id\ndb.coll.find({\"x\": 1}, {\"actors\": 1, \"_id\": 0}) // actors\ndb.coll.find({\"x\": 1}, {\"actors\": 0, \"summary\": 0}) // all but \"actors\" and \"summary\"\n\n// Sort, skip, limit\ndb.coll.find({}).sort({\"year\": 1, \"rating\": -1}).skip(10).limit(3)\n\n// Read Concern\ndb.coll.find().readConcern(\"majority\")\n```\n\n- [db.collection.find()\n- Query and Projection Operators\n- BSON types\n- Read Concern\n\n### Update\n\n``` javascript\ndb.coll.updateOne({\"_id\": 1}, {$set: {\"year\": 2016, name: \"Max\"}})\ndb.coll.updateOne({\"_id\": 1}, {$unset: {\"year\": 1}})\ndb.coll.updateOne({\"_id\": 1}, {$rename: {\"year\": \"date\"} })\ndb.coll.updateOne({\"_id\": 1}, {$inc: {\"year\": 5}})\ndb.coll.updateOne({\"_id\": 1}, {$mul: {price: NumberDecimal(\"1.25\"), qty: 2}})\ndb.coll.updateOne({\"_id\": 1}, {$min: {\"imdb\": 5}})\ndb.coll.updateOne({\"_id\": 1}, {$max: {\"imdb\": 8}})\ndb.coll.updateOne({\"_id\": 1}, {$currentDate: {\"lastModified\": true}})\ndb.coll.updateOne({\"_id\": 1}, {$currentDate: {\"lastModified\": {$type: \"timestamp\"}}})\n\n// Array\ndb.coll.updateOne({\"_id\": 1}, {$push :{\"array\": 1}})\ndb.coll.updateOne({\"_id\": 1}, {$pull :{\"array\": 1}})\ndb.coll.updateOne({\"_id\": 1}, {$addToSet :{\"array\": 2}})\ndb.coll.updateOne({\"_id\": 1}, {$pop: {\"array\": 1}}) // last element\ndb.coll.updateOne({\"_id\": 1}, {$pop: {\"array\": -1}}) // first element\ndb.coll.updateOne({\"_id\": 1}, {$pullAll: {\"array\" :3, 4, 5]}})\ndb.coll.updateOne({\"_id\": 1}, {$push: {\"scores\": {$each: [90, 92]}}})\ndb.coll.updateOne({\"_id\": 2}, {$push: {\"scores\": {$each: [40, 60], $sort: 1}}}) // array sorted\ndb.coll.updateOne({\"_id\": 1, \"grades\": 80}, {$set: {\"grades.$\": 82}})\ndb.coll.updateMany({}, {$inc: {\"grades.$[]\": 10}})\ndb.coll.updateMany({}, {$set: {\"grades.$[element]\": 100}}, {multi: true, arrayFilters: [{\"element\": {$gte: 100}}]})\n\n// FindOneAndUpdate\ndb.coll.findOneAndUpdate({\"name\": \"Max\"}, {$inc: {\"points\": 5}}, {returnNewDocument: true})\n\n// Upsert\ndb.coll.updateOne({\"_id\": 1}, {$set: {item: \"apple\"}, $setOnInsert: {defaultQty: 100}}, {upsert: true})\n\n// Replace\ndb.coll.replaceOne({\"name\": \"Max\"}, {\"firstname\": \"Maxime\", \"surname\": \"Beugnet\"})\n\n// Write concern\ndb.coll.updateMany({}, {$set: {\"x\": 1}}, {\"writeConcern\": {\"w\": \"majority\", \"wtimeout\": 5000}})\n```\n\n### Delete\n\n``` javascript\ndb.coll.deleteOne({name: \"Max\"})\ndb.coll.deleteMany({name: \"Max\"}, {\"writeConcern\": {\"w\": \"majority\", \"wtimeout\": 5000}})\ndb.coll.deleteMany({}) // WARNING! Deletes all the docs but not the collection itself and its index definitions\ndb.coll.findOneAndDelete({\"name\": \"Max\"})\n```\n\n\ud83d\udd1d [Table of Contents \ud83d\udd1d\n\n## Databases and Collections\n\n### Drop\n\n``` javascript\ndb.coll.drop() // removes the collection and its index definitions\ndb.dropDatabase() // double check that you are *NOT* on the PROD cluster... 
:-)\n```\n\n### Create Collection\n\n``` javascript\n// Create collection with a $jsonschema\ndb.createCollection(\"contacts\", {\n validator: {$jsonSchema: {\n bsonType: \"object\",\n required: \"phone\"],\n properties: {\n phone: {\n bsonType: \"string\",\n description: \"must be a string and is required\"\n },\n email: {\n bsonType: \"string\",\n pattern: \"@mongodb\\.com$\",\n description: \"must be a string and match the regular expression pattern\"\n },\n status: {\n enum: [ \"Unknown\", \"Incomplete\" ],\n description: \"can only be one of the enum values\"\n }\n }\n }}\n})\n```\n\n### Other Collection Functions\n\n``` javascript\ndb.coll.stats()\ndb.coll.storageSize()\ndb.coll.totalIndexSize()\ndb.coll.totalSize()\ndb.coll.validate({full: true})\ndb.coll.renameCollection(\"new_coll\", true) // 2nd parameter to drop the target collection if exists\n```\n\n\ud83d\udd1d [Table of Contents \ud83d\udd1d\n\n## Indexes\n\n### List Indexes\n\n``` javascript\ndb.coll.getIndexes()\ndb.coll.getIndexKeys()\n```\n\n### Create Indexes\n\n``` javascript\n// Index Types\ndb.coll.createIndex({\"name\": 1}) // single field index\ndb.coll.createIndex({\"name\": 1, \"date\": 1}) // compound index\ndb.coll.createIndex({foo: \"text\", bar: \"text\"}) // text index\ndb.coll.createIndex({\"$**\": \"text\"}) // wildcard text index\ndb.coll.createIndex({\"userMetadata.$**\": 1}) // wildcard index\ndb.coll.createIndex({\"loc\": \"2d\"}) // 2d index\ndb.coll.createIndex({\"loc\": \"2dsphere\"}) // 2dsphere index\ndb.coll.createIndex({\"_id\": \"hashed\"}) // hashed index\n\n// Index Options\ndb.coll.createIndex({\"lastModifiedDate\": 1}, {expireAfterSeconds: 3600}) // TTL index\ndb.coll.createIndex({\"name\": 1}, {unique: true})\ndb.coll.createIndex({\"name\": 1}, {partialFilterExpression: {age: {$gt: 18}}}) // partial index\ndb.coll.createIndex({\"name\": 1}, {collation: {locale: 'en', strength: 1}}) // case insensitive index with strength = 1 or 2\ndb.coll.createIndex({\"name\": 1 }, {sparse: true})\n```\n\n### Drop Indexes\n\n``` javascript\ndb.coll.dropIndex(\"name_1\")\n```\n\n### Hide/Unhide Indexes\n\n``` javascript\ndb.coll.hideIndex(\"name_1\")\ndb.coll.unhideIndex(\"name_1\")\n```\n\n- Indexes documentation\n\n\ud83d\udd1d Table of Contents \ud83d\udd1d\n\n## Handy commands\n\n``` javascript\nuse admin\ndb.createUser({\"user\": \"root\", \"pwd\": passwordPrompt(), \"roles\": \"root\"]})\ndb.dropUser(\"root\")\ndb.auth( \"user\", passwordPrompt() )\n\nuse test\ndb.getSiblingDB(\"dbname\")\ndb.currentOp()\ndb.killOp(123) // opid\n\ndb.fsyncLock()\ndb.fsyncUnlock()\n\ndb.getCollectionNames()\ndb.getCollectionInfos()\ndb.printCollectionStats()\ndb.stats()\n\ndb.getReplicationInfo()\ndb.printReplicationInfo()\ndb.hello()\ndb.hostInfo()\n\ndb.shutdownServer()\ndb.serverStatus()\n\ndb.getProfilingStatus()\ndb.setProfilingLevel(1, 200) // 0 == OFF, 1 == ON with slowms, 2 == ON\n\ndb.enableFreeMonitoring()\ndb.disableFreeMonitoring()\ndb.getFreeMonitoringStatus()\n\ndb.createView(\"viewName\", \"sourceColl\", [{$project:{department: 1}}])\n```\n\n\ud83d\udd1d [Table of Contents \ud83d\udd1d\n\n## Change Streams\n\n``` javascript\nwatchCursor = db.coll.watch( { $match : {\"operationType\" : \"insert\" } } ] )\n\nwhile (!watchCursor.isExhausted()){\n if (watchCursor.hasNext()){\n print(tojson(watchCursor.next()));\n }\n}\n```\n\n\ud83d\udd1d [Table of Contents \ud83d\udd1d\n\n## Replica Set\n\n``` javascript\nrs.status()\nrs.initiate({\"_id\": \"RS1\",\n members: \n { _id: 0, host: \"mongodb1.net:27017\" },\n 
{ _id: 1, host: \"mongodb2.net:27017\" },\n { _id: 2, host: \"mongodb3.net:27017\" }]\n})\nrs.add(\"mongodb4.net:27017\")\nrs.addArb(\"mongodb5.net:27017\")\nrs.remove(\"mongodb1.net:27017\")\nrs.conf()\nrs.hello()\nrs.printReplicationInfo()\nrs.printSecondaryReplicationInfo()\nrs.reconfig(config)\nrs.reconfigForPSASet(memberIndex, config, { options } )\ndb.getMongo().setReadPref('secondaryPreferred')\nrs.stepDown(20, 5) // (stepDownSecs, secondaryCatchUpPeriodSecs)\n```\n\n\ud83d\udd1d [Table of Contents \ud83d\udd1d\n\n## Sharded Cluster\n\n``` javascript\ndb.printShardingStatus()\n\nsh.status()\nsh.addShard(\"rs1/mongodb1.example.net:27017\")\nsh.shardCollection(\"mydb.coll\", {zipcode: 1})\n\nsh.moveChunk(\"mydb.coll\", { zipcode: \"53187\" }, \"shard0019\")\nsh.splitAt(\"mydb.coll\", {x: 70})\nsh.splitFind(\"mydb.coll\", {x: 70})\n\nsh.startBalancer()\nsh.stopBalancer()\nsh.disableBalancing(\"mydb.coll\")\nsh.enableBalancing(\"mydb.coll\")\nsh.getBalancerState()\nsh.setBalancerState(true/false)\nsh.isBalancerRunning()\n\nsh.startAutoMerger()\nsh.stopAutoMerger()\nsh.enableAutoMerger()\nsh.disableAutoMerger()\n\nsh.updateZoneKeyRange(\"mydb.coll\", {state: \"NY\", zip: MinKey }, { state: \"NY\", zip: MaxKey }, \"NY\")\nsh.removeRangeFromZone(\"mydb.coll\", {state: \"NY\", zip: MinKey }, { state: \"NY\", zip: MaxKey })\nsh.addShardToZone(\"shard0000\", \"NYC\")\nsh.removeShardFromZone(\"shard0000\", \"NYC\")\n```\n\n\ud83d\udd1d Table of Contents \ud83d\udd1d\n\n## Wrap-up\n\nI hope you liked my little but - hopefully - helpful cheat sheet. Of course, this list isn't exhaustive at all. There are a lot more commands, but I'm sure you will find them in the MongoDB documentation.\n\nIf you feel like I forgot a critical command in this list, please send me a tweet and I will make sure to fix it.\n\nCheck out our free courses on MongoDB University if you are not too sure what some of the above commands are doing.\n\n>\n>\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n>\n>\n\n\ud83d\udd1d Table of Contents \ud83d\udd1d\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "MongoDB Cheat Sheet by MongoDB for our awesome MongoDB Community <3.", "contentType": "Quickstart"}, "title": "MongoDB Cheat Sheet", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/databricks-atlas-vector-search", "action": "created", "body": "# How to Implement Databricks Workflows and Atlas Vector Search for Enhanced Ecommerce Search Accuracy\n\nIn the vast realm of Ecommerce, customers' ability to quickly and accurately search through an extensive range of products is paramount. Atlas Vector Search is emerging as a turning point in this space, offering a refined approach to search that goes beyond mere keyword matching. 
Let's delve into its implementation using MongoDB Atlas, Atlas Vector Search, and Databricks.\n\n### Prerequisites\n\n* MongoDB Atlas cluster\n* Databricks cluster\n* python>=3.7\n* pip3\n* Node.js and npm\n* GitHub repo for AI-enhanced search and vector search (code is bundled up for clarity)\n\nIn a previous tutorial, Learn to Build AI-Enhanced Retail Search Solutions with MongoDB and Databricks, we showcased how the integration of MongoDB and Databricks provides a comprehensive solution for the retail industry by combining real-time data processing, workflow orchestration, machine learning, custom data functions, and advanced search capabilities as a way to optimize product catalog management and enhance customer interactions.\n\nIn this tutorial, we are going to be building the Vector Search solution on top of the codebase from the previous tutorial. Please check out the Github repository for the full solution.\n\nThe diagram below represents the Databricks workflow for indexing data from the atp (available to promise), images, prd_desc (product discount), prd_score (product score), and price collections. These collections are also part of the previously mentioned tutorial, so please refer back if you need to access them.\n\nWithin the MongoDB Atlas platform, we can use change streams and the MongoDB Connector for Spark to move data from the collections into a new collection called Catalog. From there, we will use a text transformer to create the **`Catalog Final Collection`**. This will enable us to create a corpus of indexed and vector embedded data that will be used later as the search dictionary. We\u2019ll call this collection **`catalog_final_myn`**. This will be shown further along after we embed the product names.\n\nThe catalog final collection will include the available to promise status for each product, its images, the product discount, product relevance score, and price, along with the vectorized or embedded product name that we\u2019ll point our vector search engine at.\n\nWith the image below, we explain what the Databricks workflow looks like. It consists of two jobs that are separated in two notebooks respectively. We\u2019ll go over each of the notebooks below.\n\n## Indexing and merging several collections into one catalog\n\nThe first step is to ingest data from the previously mentioned collections using the spark.readStream method from the MongoDB Connector for Spark. The code below is part of the notebook we\u2019ll set as a job using Databricks Workflows. You can learn more about Databricks notebooks by following their tutorial. 
\n\n```\natp = spark.readStream.format(\"mongodb\").\\ option('spark.mongodb.connection.uri', MONGO_CONN).\\ option('spark.mongodb.database', \"search\").\\ option('spark.mongodb.collection', \"atp_status_myn\").\\ option('spark.mongodb.change.stream.publish.full.document.only','true').\\ option('spark.mongodb.aggregation.pipeline',]).\\ option(\"forceDeleteTempCheckpointLocation\", \"true\").load() atp = atp.drop(\"_id\") atp.writeStream.format(\"mongodb\").\\ option('spark.mongodb.connection.uri', MONGO_CONN).\\ option('spark.mongodb.database', \"search\").\\ option('spark.mongodb.collection', \"catalog_myn\").\\ option('spark.mongodb.operationType', \"update\").\\ option('spark.mongodb.upsertDocument', True).\\ option('spark.mongodb.idFieldList', \"id\").\\ option(\"forceDeleteTempCheckpointLocation\", \"true\").\\ option(\"checkpointLocation\", \"/tmp/retail-atp-myn4/_checkpoint/\").\\ outputMode(\"append\").\\ start()\n```\nThis part of the notebook reads data changes from the atp_status_myn collection in the search database, drops the _id field, and then writes (or updates) the processed data to the catalog_myn collection in the same database. \n\nNotice how it\u2019s reading from the `atp_status_myn` collection, which already has the one hot encoding (boolean values if the product is available or not) from the [previous tutorial. This way, we make sure that we only embed the data from the products that are available in our stock.\n\nPlease refer to the full notebook in our Github repository if you want to learn more about all the data ingestion and transformations conducted during this stage. \n\n## Encoding text as vectors and building the final catalog collection\n\nUsing a combination of Python libraries and PySpark operations to process data from the Catalog MongoDB collection, we\u2019ll transform it, vectorize it, and write the transformed data back to the Final Catalog collection. On top of this, we\u2019ll build our application search business logic.\n\nWe start by using the %pip magic command, which is specific to Jupyter notebooks and IPython environments. The necessary packages are:\n* **pymongo:** A Python driver for MongoDB.\n* **tqdm:** A library to display progress bars.\n* **sentence-transformers:** A library for state-of-the-art sentence, text, and image embeddings.\n\nFirst, let\u2019s use pip to install these packages in our Databricks notebook:\n\n```\n%pip install pymongo tqdm sentence-transformers\n```\n\nWe continue the notebook with the following code: \n\n```\nmodel = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')\n```\n\nHere we load a pre-trained model from the sentence-transformers library. This model will be used to convert text into embeddings or vectors. \n\nThe next step is to bring the data from the MongoDB Atlas catalog and search collections. This as a continuation of the same notebook:\n```\ncatalog_status = spark.readStream.format(\"mongodb\").\\\n option('spark.mongodb.connection.uri', MONGO_CONN).\\\n option('spark.mongodb.database', \"search\").\\\n option('spark.mongodb.collection', \"catalog_myn\").\\\noption('spark.mongodb.change.stream.publish.full.document.only','true').\\\n option('spark.mongodb.aggregation.pipeline',]).\\\n option(\"forceDeleteTempCheckpointLocation\", \"true\").load()\n```\nWith this code, we set up a structured streaming read from the **`catalog_myn`** collection in the **`search`** database of MongoDB. The resulting data is stored in the **`catalog_status`** DataFrame in Spark. 
The read operation is configured to fetch the full document from MongoDB's change stream and does not apply any aggregation.\n\nThe notebook code continues with: \n```\n#Calculating new column called discounted price using the F decorator\n\ncatalog_status = catalog_status.withColumn(\"discountedPrice\", F.col(\"price\") * F.col(\"pred_price\"))\n\n#One hot encoding of the atp_status column \n\ncatalog_status = catalog_status.withColumn(\"atp\", (F.col(\"atp\").cast(\"boolean\") & F.lit(1).cast(\"boolean\")).cast(\"integer\"))\n\n#Running embeddings of the product titles with the get_vec function\n\ncatalog_status.withColumn(\"vec\", get_vec(\"title\"))\n\n#Dropping _id column and creating a new final catalog collection with checkpointing\n\ncatalog_status = catalog_status.drop(\"_id\")\ncatalog_status.writeStream.format(\"mongodb\").\\\n option('spark.mongodb.connection.uri', MONGO_CONN).\\\n option('spark.mongodb.database', \"search\").\\\n option('spark.mongodb.collection', \"catalog_final_myn\").\\\n option('spark.mongodb.operationType', \"update\").\\\n option('spark.mongodb.idFieldList', \"id\").\\\n option(\"forceDeleteTempCheckpointLocation\", \"true\").\\\n option(\"checkpointLocation\", \"/tmp/retail-atp-myn5/_checkpoint/\").\\\n outputMode(\"append\").\\\n start()\n```\n\nWith this last part of the code, we calculate a new column called discountedPrice as the product of the predicted price. Then, we perform [one-hot encoding on the atp status column, vectorize the title of the product, and merge everything back into a final catalog collection.\n\nNow that we have our catalog collection with its proper embeddings, it\u2019s time for us to build the Vector Search Index using MongoDB Atlas Search. \n\n## Configuring the Atlas Vector Search index \n\nHere we\u2019ll define how data should be stored and indexed for efficient searching. To configure the index, you can insert the snippet in MongoDB Atlas by browsing to your cluster splash page and clicking on the \u201cSearch\u201d tab:\n\nNext, you can click over \u201cCreate Index.\u201d Make sure you select \u201cJSON Editor\u201d:\n\nPaste the JSON snippet from below into the JSON Editor. Make sure you select the correct database and collection! In our case, the collection name is **`catalog_final_myn`**. Please refer to the full code in the repository to see how the full index looks and how you can bring it together with the rest of parameters for the AI-enhanced search tutorial.\n\n```\n{\n \"mappings\": {\n \"fields\": {\n \"vec\": \n {\n \"dimensions\": 384,\n \"similarity\": \"cosine\",\n \"type\": \"knnVector\"\n }\n ]\n }\n }\n}\n```\n\nIn the code above, the vec field is of type [knnVector, designed for vector search. It indicates that each vector has 384 dimensions and uses cosine similarity to determine vector closeness. This is crucial for semantic search, where the goal is to find results that are contextually or semantically related.\n\nBy implementing these indexing parameters, we speed up retrieval times. Especially with high-dimensional vector data, as raw vectors can consume a significant amount of storage and reduce the computational cost of operations like similarity calculations. \n\nInstead of comparing a query vector with every vector in the dataset, indexing allows the system to compare with a subset, saving computational resources.\n\n## A quick example of improved search results\n\nBrowse over to our LEAFYY Ecommerce website, in which we will perform a search for the keywords ``tan bags``. 
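Before looking at the results, it's worth seeing roughly what the application runs against the index above when its vector mode is enabled (we'll toggle that mode in a moment). The snippet below is a simplified sketch rather than the app's exact code: it assumes the search index is named `default` and embeds the query text with the same `all-MiniLM-L6-v2` model used for the catalog.

```
from pymongo import MongoClient
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
query_vector = model.encode("tan bags").tolist()  # 384-dimensional embedding

db = MongoClient("<your-atlas-connection-string>")["search"]
results = db["catalog_final_myn"].aggregate([
    {"$search": {
        "index": "default",  # assumption: the name given to the index created above
        "knnBeta": {"vector": query_vector, "path": "vec", "k": 20}
    }},
    {"$project": {"title": 1, "price": 1, "score": {"$meta": "searchScore"}}}
])
for doc in results:
    print(doc["title"], doc["score"])
```

Now, back to the search bar on the site — run the plain keyword search first.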
You\u2019ll get these results: \n\nAs you can see, you\u2019ll first get results that match the specific tokenized keywords \u201ctan\u201d and \u201cbags\u201d. As a result, this will give you any product that contains any or both of those keywords in the product catalog collection documents. \n\nHowever, not all the results are bags or of tan color. You can see shoes, wallets, a dress, and a pair of pants. This could be frustrating as a customer, prompting them to leave the site.\n\nNow, enable vector search by clicking on the checkbox on the left of the magnifying glass icon in the search bar, and re-run the query \u201ctan bags\u201d. The results you get are in the image below: \n\nAs you can see from the screenshot, the results became more relevant for a consumer. Our search engine is able to identify similar products by understanding the context that \u201cbeige\u201d is a similar color to \u201ctan\u201d, and therefore showcase these products as alternatives.\n\n## Conclusion\n\nBy working with MongoDB Atlas and Databricks, we can create real-time data transformation pipelines. We achieve this by leveraging the MongoDB Connector for Spark to prepare our operational data for vectorization, and store it back into our MongoDB Atlas collections. This approach allows us to develop the search logic for our Ecommerce app with minimal operational overhead.\n\nOn top of that, Atlas Vector Search provides a robust solution for implementing advanced search features, making it easy to deliver a great search user experience for your customers. By understanding and integrating these tools, developers can create search experiences that are fast, relevant, and user-friendly. \n\nMake sure to review the full code in our GitHub repository. Contact us to get a deeper understanding of how to build advanced search solutions for your Ecommerce business. \n\n ", "format": "md", "metadata": {"tags": ["Atlas", "Python", "Node.js"], "pageDescription": "Learn how to implement Databricks Workflows and Atlas Vector Search for your Ecommerce accuracy.", "contentType": "Tutorial"}, "title": "How to Implement Databricks Workflows and Atlas Vector Search for Enhanced Ecommerce Search Accuracy", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/ruby/getting-started-atlas-ruby-on-rails", "action": "created", "body": "\n\n <%= yield %>\n\n", "format": "md", "metadata": {"tags": ["Ruby", "Atlas"], "pageDescription": "A tutorial showing how to get started with MongoDB Atlas and Ruby on Rails using the Mongoid driver", "contentType": "Tutorial"}, "title": "Getting Started with MongoDB Atlas and Ruby on Rails", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/atlas-databricks-pyspark-demo", "action": "created", "body": "# Utilizing PySpark to Connect MongoDB Atlas with Azure Databricks\n\nData processing is no easy feat, but with the proper tools, it can be simplified and can enable you to make the best data-driven decisions possible. In a world overflowing with data, we need the best methods to derive the most useful information. \n\nThe combination of MongoDB Atlas with Azure Databricks makes an efficient choice for big data processing. By connecting Atlas with Azure Databricks, we can extract data from our Atlas cluster, process and analyze the data using PySpark, and then store the processed data back in our Atlas cluster. 
Using Azure Databricks to analyze your Atlas data allows for access to Databricks\u2019 wide range of advanced analytics capabilities, which include machine learning, data science, and areas of artificial intelligence like natural language processing! Processing your Atlas data with these advanced Databricks tools allows us to be able to handle any amount of data in an efficient and scalable way, making it easier than ever to gain insights into our data sets and enable us to make the most effective data-driven decisions. \n\nThis tutorial will show you how to utilize PySpark to connect Atlas with Databricks so you can take advantage of both platforms. \n\nMongoDB Atlas is a scalable and flexible storage solution for your data while Azure Databricks provides the power of Apache Spark to work with the security and collaboration features that are available with a Microsoft Azure subscription. Apache Spark provides the Python interface for working with Spark, PySpark, which allows for an easy-to-use interface for developing in Python. To properly connect PySpark with MongoDB Atlas, the MongoDB Spark Connector is utilized. This connector ensures for seamless compatibility, as you will see below in the tutorial. \n\nOur tutorial to combine the above platforms will consist of viewing and manipulating an Atlas cluster and visualizing our data from the cluster back in our PySpark console. We will be setting up both Atlas and Azure Databricks clusters, connecting our Databricks cluster to our IDE, and writing scripts to view and contribute to the cluster in our Atlas account. Let\u2019s get started!\n\n### Requirements\nIn order to successfully recreate this project, please ensure you have everything in the following list: \n\n* MongoDB Atlas account.\n\n* Microsoft Azure subscription (two-week free tier trial). \n\n* Python 3.8+.\n\n* GitHub Repository.\n\n* Java on your local machine.\n\n## Setting up a MongoDB Atlas cluster\nOur first step is to set up a MongoDB Atlas cluster. Access the Atlas UI and follow these steps. For this tutorial, a free \u201cshared\u201d cluster is perfect. Create a database and name it \u201cbookshelf\u201d with a collection inside named \u201cbooks\u201d. To ensure ease for this tutorial, please allow for a connection from anywhere within your cluster\u2019s network securities.\n\nOnce properly provisioned, your cluster will look like this:\n\nNow we can set up our Azure Databricks cluster. \n\n## Setting up an Azure Databricks cluster\n\nAccess the Azure Databricks page, sign in, and access the Azure Databricks tab. This is where you\u2019ll create an Azure Databricks workspace. \n\nFor our Databricks cluster, a free trial works perfectly for this tutorial. Once the cluster is provisioned, you\u2019ll only have two weeks to access it before you need to upgrade. \n\nHit \u201cReview and Create\u201d at the bottom. Once your workspace is validated, click \u201cCreate.\u201d Once your deployment is complete, click on \u201cGo to Resource.\u201d You\u2019ll be taken to your workspace overview. Click on \u201cLaunch Workspace\u201d in the middle of the page. \n\nThis will direct you to the Microsoft Azure Databricks UI where we can create the Databricks cluster. On the left-hand of the screen, click on \u201cCreate a Cluster,\u201d and then click \u201cCreate Compute\u201d to access the correct form. \n\nWhen creating your cluster, pay close attention to what your \u201cDatabricks runtime version\u201d is. Continue through the steps to create your cluster. 
\n\nWe\u2019re now going to install the libraries we need in order to connect to our MongoDB Atlas cluster. Head to the \u201cLibraries\u201d tab of your cluster, click on \u201cInstall New,\u201d and select \u201cMaven.\u201d Hit \u201cSearch Packages\u201d next to \u201cCoordinates.\u201d Search for `mongo` and select the `mongo-spark` package. Do the same thing with `xml` and select the `spark-xml` package. When done, your library tab will look like this: \n\n## Utilizing Databricks-Connect\nNow that we have our Azure Databricks cluster ready, we need to properly connect it to our IDE. We can do this through a very handy configuration named Databricks Connect. Databricks Connect allows for Azure Databricks clusters to connect seamlessly to the IDE of your choosing.\n\n### Databricks configuration essentials\n\nBefore we establish our connection, let\u2019s make sure we have our configuration essentials. This is available in the Databricks Connect tutorial on Microsoft\u2019s website under \u201cStep 2: Configure connection properties.\u201d Please note these properties down in a safe place, as you will not be able to connect properly without them.\n\n### Databricks-Connect configuration\nAccess the Databricks Connect page linked above to properly set up `databricks-connect` on your machine. Ensure that you are downloading the `databricks-connect` version that is compatible with your Python version and is the same as the Databricks runtime version in your Azure cluster. \n\n>Please ensure prior to installation that you are working with a virtual environment for this project. Failure to use a virtual environment may cause PySpark package conflicts in your console. \n\nVirtual environment steps in Python:\n```\npython3 -m venv name \n```\nWhere the `name` is the name of your environment, so truly you can call it anything. \n\nOur second step is to activate our virtual environment:\n```\nsource name/bin/activate \n```\nAnd that\u2019s it. We are now in our Python virtual environment. You can see that you\u2019re in it when the little (name) or whatever you named it shows up.\n\n* * *\n\nContinuing on...for our project, use this installation command:\n```\npip install -U \u201cdatabricks-connect==10.4.*\u201d \n```\nOnce fully downloaded, we need to set up our cluster configuration. Use the configure command and follow the instructions. This is where you will input your configuration essentials from our \u201cDatabricks configuration essentials\u201d section.\n\nOnce finished, use this command to check if you\u2019re connected to your cluster:\n```\ndatabricks-connect test\n```\n\nYou\u2019ll know you\u2019re correctly configured when you see an \u201cAll tests passed\u201d in your console. \nNow, it\u2019s time to set up our SparkSessions and connect them to our Atlas cluster.\n\n## SparkSession + Atlas configuration\nThe creation of a SparkSession object is crucial for our tutorial because it provides a way to access all important PySpark features in one place. These features include: reading data, creating data frames, and managing the overall configuration of PySpark applications. Our SparkSession will enable us to read and write to our Atlas cluster through the data frames we create.\n\nThe full code is on our Github account, so please access it there if you would like to replicate this exact tutorial. We will only go over the code for some of the essentials of the tutorial below.\n\nThis is the SparkSession object we need to include. 
We are going to use a basic structure where we describe the application name, configure our \u201cread\u201d and \u201cwrite\u201d connectors to our `connection_string` (our MongoDB cluster connection string that we have saved safely as an environment variable), and configure our `mongo-spark-connector`. Make sure to use the correct `mongo-spark-connector` for your environment. For ours, it is version 10.0.3. Depending on your Python version, the `mongo-spark-connector` version might be different. To find which version is compatible with your environment, please refer to the MVN Repository documents. \n\n```\n# use environment variable for uri \nload_dotenv()\nconnection_string: str = os.environ.get(\"CONNECTION_STRING\")\n\n# Create a SparkSession. Ensure you have the mongo-spark-connector included.\nmy_spark = SparkSession \\\n .builder \\\n .appName(\"tutorial\") \\\n .config(\"spark.mongodb.read.connection.uri\", connection_string) \\\n .config(\"spark.mongodb.write.connection.uri\", connection_string) \\\n .config(\"spark.jars.packages\", \"org.mongodb.spark:mongo-spark-connector:10.0.3\") \\\n .getOrCreate()\n\n```\n\nFor more help on how to create a SparkSession object with MongoDB and for more details on the `mongo-spark-connector`, please view the documentation.\n\nOur next step is to create two data frames, one to `write` a book to our Atlas cluster, and a second to `read` back all the books in our cluster. These data frames are essential; make sure to use the proper format or else they will not properly connect to your cluster. \n\nData frame to `write` a book:\n```\nadd_books = my_spark \\\n .createDataFrame((\"\", \"\", )], [\"title\", \"author\", \"year\"])\n\nadd_books.write \\\n .format(\"com.mongodb.spark.sql.DefaultSource\") \\\n .option('uri', connection_string) \\\n .option('database', 'bookshelf') \\\n .option('collection', 'books') \\\n .mode(\"append\") \\\n .save() \n\n```\n[Data frame to `read` back our books:\n```\n# Create a data frame so you can read in your books from your bookshelf.\nreturn_books = my_spark.read.format(\"com.mongodb.spark.sql.DefaultSource\") \\\n .option('uri', connection_string) \\\n .option('database', 'bookshelf') \\\n .option('collection', 'books') \\\n .load()\n\n# Show the books in your PySpark shell.\nreturn_books.show()\n\n```\n\nAdd in the book of your choosing under the `add_books` dataframe. Here, exchange the title, author, and year for the areas with the `< >` brackets. Once you add in your book and run the file, you\u2019ll see that the logs are telling us we\u2019re connecting properly and we can see the added books in our PySpark shell. This demo script was run six separate times to add in six different books. A picture of the console is below:\n\nWe can double-check our cluster in Atlas to ensure they match up: \n\n## Conclusion\nCongratulations! We have successfully connected our MongoDB Atlas cluster to Azure Databricks through PySpark, and we can `read` and `write` data straight to our Atlas cluster. \n\nThe skills you\u2019ve learned from this tutorial will allow you to utilize Atlas\u2019s scalable and flexible storage solution while leveraging Azure Databricks\u2019 advanced analytics capabilities. This combination can allow developers to handle any amount of data in an efficient and scalable manner, while allowing them to gain insights into complex data sets to make exciting data-driven decisions! \n\nQuestions? Comments? 
Let\u2019s continue the conversation over at the MongoDB Developer Community!", "format": "md", "metadata": {"tags": ["Python", "MongoDB", "Spark"], "pageDescription": "This tutorial will show you how to connect MongoDB Atlas to Azure Databricks using PySpark. \n", "contentType": "Tutorial"}, "title": "Utilizing PySpark to Connect MongoDB Atlas with Azure Databricks", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/utilizing-collection-globbing-provenance-data-federation", "action": "created", "body": "# Utilizing Collection Globbing and Provenance in Data Federation\n\nA common pattern for users of MongoDB running multi-tenant services is to model your data by splitting your different customers into different databases. This is an excellent strategy for keeping your various customers\u2019 data separate from one another, as well as helpful for scaling in the future. But one downside to this strategy is that you can end up struggling to get a holistic view of your data across all of your customers. There are many ways to mitigate this challenge, and one of the primary ones is to copy and transform your data into another storage solution. However, this can lead to some unfortunate compromises. For example, you are now paying more to store your data twice. You now need to manage the copy and transformation process, which can become onerous as you add more customers. And lastly, and perhaps most importantly, you are now looking at a delayed state of your data.\n\nTo solve these exact challenges, we\u2019re thrilled to announce two features that will completely transform how you use your cluster data and the ease with which you can remodel it. The first feature is called Provenance. This functionality allows you to tell Data Federation to inject fields into your documents during query time that indicate where they are coming from. For example, you can add the source collection on the Atlas cluster when federating across clusters or you can add the path from your AWS S3 bucket where the data is being read. The great thing is that you can now also query on these fields to only get data from the source of your choice!\n\nThe other feature we\u2019re adding is a bit nuanced, and we are calling it \u201cglobbing.\u201d For those of you familiar with Atlas Data Federation, you probably know about our \u201cwildcard collections.\u201d This functionality allows you to generate collection names based on the collections that exist in your underlying Atlas clusters or based on sections of paths to your files in S3. This is a handy feature to avoid having to explicitly define everything in your storage configuration. \u201cGlobbing\u201d is somewhat similar, except that instead of dynamically generating new collections for each collection in your cluster, it will dynamically merge collections to give you a \u201cglobal\u201d view of your data automatically. To help illustrate this, I\u2019m going to walk you through an example.\n\nImagine you are running a successful travel agency on top of MongoDB. For various reasons, you have chosen to store your customers data in different databases based on their location. 
(Maybe you are going to shard based on this and will have different databases in different regions for compliance purposes.)\n\nThis has worked well, but now you\u2019d like to query your data based on this information and get a holistic view of your data across geographies in real time (without impacting your operational workloads). So let\u2019s discuss how to solve this challenge!\n\n## Prerequisites\nIn order to follow along with this tutorial yourself, you will need the following:\n1. Experience with Atlas Data Federation.\n2. An Atlas cluster with the sample data in it. \n\nHere is how the data is modeled in my cluster (data in your cluster can be spread out among collections however your application requires):\n\n* Cluster: MongoTravelServices\n * Database: ireland\n * Collection: user_feedback (8658 Documents)\n * Collection: passengers\n * Collection: flights\n * Database: israel\n * Collection: user_feedback (8658 Documents)\n * Collection: passengers\n * Collection: flights\n * Database: usa\n * Collection: user_feedback (8660 Documents)\n * Collection: passengers\n * Collection: flights\n\nThe goal here is to consolidate this data into one database, and then have each of the collections for user feedback, passengers, and flights represent the data stored in the collections from each database on the cluster. Lastly, we also want to be able to query on the \u201cdatabase\u201d name as if it were part of our documents.\n\n## Create a Federated Database instance\n\n* The first thing you\u2019ll need to do is navigate to the \u201cData Federation\u201d tab on the left-hand side of your Atlas dashboard and then click \u201cset up manually\u201d in the \"create new federated database\" dropdown in the top right corner of the UI.\n\n* Then, for this example, we\u2019re going to manually edit the storage configuration as these capabilities are not yet available in the UI editor. 
\n\n```\n{\n \"databases\": \n {\n \"name\": \"GlobalVirtualDB\",\n \"collections\": [\n {\n \"name\": \"user_feedback\",\n \"dataSources\": [\n {\n \"collection\": \"user_feedback\",\n \"databaseRegex\": \".*\", // This syntax triggers the globbing or combination of each collection named user_feedback in each database of the MongoTravelServices cluster.\n \"provenanceFieldName\": \"_provenance_data\", // The name of the field where provenance data will be added.\n \"storeName\": \"MongoTravelServices\"\n }\n ]\n }\n ],\n \"views\": []\n }\n ],\n \"stores\": [\n {\n \"clusterName\": \"MongoTravelServices\",\n \"name\": \"MongoTravelServices\",\n \"projectId\": \"5d9b6aba014b768e8241d442\",\n \"provider\": \"atlas\",\n \"readPreference\": {\n \"mode\": \"secondary\",\n \"tagSets\": []\n }\n }\n ]\n}\n```\n\nNow when you connect, you will see:\n\n```\nAtlasDataFederation GlobalVirtualDB> show dbs\nGlobalVirtualDB 0 B\nAtlasDataFederation GlobalVirtualDB> use GlobalVirtualDB\nalready on db GlobalVirtualDB\nAtlasDataFederation GlobalVirtualDB> show tables\nuser_feedback\nAtlasDataFederation GlobalVirtualDB>\n```\n\nAnd a simple count results in the count of all three collections globbed together:\n\n```\nAtlasDataFederation GlobalVirtualDB> db.user_feedback.countDocuments()\n25976\nAtlasDataFederation GlobalVirtualDB>\n```\n\n25976 is the sum of 8660 feedback documents from the USA, 8658 from Israel, and 8658 from Ireland.\n\nAnd lastly, I can query on the provenance metadata using the field *\u201cprovenancedata.databaseName\u201d*:\n\n```\nAtlasDataFederation GlobalVirtualDB> db.user_feedback.findOne({\"_provenance_data.databaseName\": \"usa\"})\n{\n _id: ObjectId(\"63a471e1bb988608b5740f65\"),\n 'id': 21037,\n 'Gender': 'Female',\n 'Customer Type': 'Loyal Customer',\n 'Age': 44,\n 'Type of Travel': 'Business travel',\n 'Class': 'Business',\n \u2026\n 'Cleanliness': 1,\n 'Departure Delay in Minutes': 50,\n 'Arrival Delay in Minutes': 55,\n 'satisfaction': 'satisfied',\n '_provenance_data': {\n 'provider': 'atlas',\n 'clusterName': 'MongoTravelServices',\n 'databaseName': 'usa',\n 'collectionName': 'user_feedback'\n }\n}\nAtlasDataFederation GlobalVirtualDB>\n```\n\n## In review\nSo, what have we done and what have we learned?\n\n1. We saw how quickly and easily you can create a Federated Database in MongoDB Atlas.\n2. We learned how you can easily combine and reshape data from your underlying Atlas clusters inside of Atlas Data Federation with Collection Globbing. Now, you can easily query one user_feedback collection and have it query data in the user_feedback collections in each database.\n3. We saw how to add provenance data to our documents and query it.\n\n### A couple of things to remember about Atlas Data Federation\n1. Collection globbing is a new feature that applies to Atlas cluster sources and allows dynamic manipulation of source collections similar to \u201cwildcard collections.\u201d\n2. Provenance allows you to include additional metadata with your documents. You can indicate that data federation should include additional attributes such as source cluster, database, collection, the source path in S3, and more.\n3. Currently, this is only supported in the Data Federation JSON editor or via setting the Storage Configuration in the shell, not the visual storage configuration editor.\n4. 
This is particularly powerful for multi-tenant implementations done in MongoDB.\n\nTo learn more about [Atlas Data Federation and whether it would be the right solution for you, check out our documentation and tutorials or get started today.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to model and transform your MongoDB Atlas Cluster data for real-time query-ability with Data Federation.", "contentType": "Tutorial"}, "title": "Utilizing Collection Globbing and Provenance in Data Federation", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/deploy-mongodb-atlas-aws-cdk-typescript", "action": "created", "body": "# How to Deploy MongoDB Atlas with AWS CDK in TypeScript\n\nMongoDB Atlas, the industry\u2019s leading developer data platform, simplifies application development and working with data for a wide variety of use cases, scales globally, and optimizes for price/performance as your data needs evolve over time. With Atlas, you can address the needs of modern applications faster to accelerate your go-to-market timelines, all while reducing data infrastructure complexity. Atlas offers a variety of features such as cloud backups, search, and easy integration with other cloud services. \n\nAWS Cloud Development Kit (CDK) is a tool provided by Amazon Web Services (AWS) that allows you to define infrastructure as code using familiar programming languages such as TypeScript, JavaScript, Python, Java, Go, and C#. \n\nMongoDB recently announced the GA for Atlas Integrations for CDK. This is an ideal use case for teams that want to leverage the TypeScript ecosystem and no longer want to manually provision AWS CloudFormation templates in YAML or JSON. Not a fan of TypeScript? No worries! MongoDB Atlas CDK Integrations also now support Python, Java, C#, and Go.\n\nIn this step-by-step guide, we will walk you through the entire process. Let's get started! \n\n## Setup\n\nBefore we start, you will need to do the following:\n\n- Open a MongoDB Atlas account \n\u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\n- Create a MongoDB Atlas Programmatic API Key (PAK)\n\n- Install and configure an AWS Account + AWS CLI\n\n- Store your MongoDB Atlas PAK in AWS Secret Manager \n\n- Activate the below CloudFormation resources in the AWS region of your choice \n\n - MongoDB::Atlas::Project\n - MongoDB::Atlas::Cluster\n - MongoDB::Atlas::DatabaseUser\n - MongoDB::Atlas::ProjectIpAccessList\n\n## Step 1: Install AWS CDK\n\nThe AWS CDK is an open-source software (OSS) development framework for defining cloud infrastructure as code and provisioning it through AWS CloudFormation. It provides high-level components that preconfigure cloud resources with proven defaults, so you can build cloud applications without needing to be an expert. You can install it globally using npm:\n\n```bash\nnpm install -g aws-cdk\n```\n\nThis command installs AWS CDK. The optional -g flag allows you to use it globally anywhere on your machine.\n\n## Step 2: Bootstrap CDK\n\nNext, we need to bootstrap our AWS environment to create the necessary resources to manage the CDK apps. 
The `cdk bootstrap` command creates an Amazon S3 bucket for storing files and a CloudFormation stack to manage the resources.\n\n```bash\ncdk bootstrap aws://ACCOUNT_NUMBER/REGION\n```\n\nReplace ACCOUNT_NUMBER with your AWS account number, and REGION with the AWS region you want to use.\n\n## Step 3: Initialize a New CDK app\n\nNow we can initialize a new CDK app using TypeScript. This is done using the `cdk init` command:\n\n```bash\ncdk init app --language typescript\n```\n\nThis command initializes a new CDK app in TypeScript language. It creates a new directory with the necessary files and directories for a CDK app.\n\n## Step 4: Install MongoDB Atlas CDK\n\nTo manage MongoDB Atlas resources, we will need a specific CDK module called awscdk-resources-mongodbatlas (see more details on this package on our Construct Hub page). Let's install it:\n\n```bash\nnpm install awscdk-resources-mongodbatlas\n```\n\nThis command installs the MongoDB Atlas CDK module, which will allow us to define and manage MongoDB Atlas resources in our CDK app.\n\n## Step 5: Replace the generated file with AtlasBasic CDK L3 repo example\n\nFeel free to start coding if you are familiar with CDK already or if it\u2019s easier, you can leverage the AtlasBasic CDK resource example in our repo (also included below). This is a simple CDK Level 3 resource that deploys a MongoDB Atlas project, cluster, database user, and project IP access List resources on your behalf. All you need to do is paste this in your \u201clib/YOUR_FILE.ts\u201d directory, making sure to replace the generated file that is already there (which was created in Step 3). \n\nPlease make sure to replace the `export class CdkTestingStack extends cdk.Stack` line with the specific folder name used in your specific environment. No other changes are required. \n\n```javascript\n// This CDK L3 example creates a MongoDB Atlas project, cluster, databaseUser, and projectIpAccessList\n\nimport * as cdk from 'aws-cdk-lib';\nimport { Construct } from 'constructs';\nimport { AtlasBasic } from 'awscdk-resources-mongodbatlas';\n\ninterface AtlasStackProps {\n readonly orgId: string;\n readonly profile: string;\n readonly clusterName: string;\n readonly region: string;\n readonly ip: string;\n}\n\n//Make sure to replace \"CdkTestingStack\" with your specific folder name used \nexport class CdkTestingStack extends cdk.Stack {\n\n constructor(scope: Construct, id: string, props?: cdk.StackProps) {\n super(scope, id, props);\n\n const atlasProps = this.getContextProps();\n const atlasBasic = new AtlasBasic(this, 'AtlasBasic', {\n clusterProps: {\n name: atlasProps.clusterName, \n replicationSpecs: \n {\n numShards: 1,\n advancedRegionConfigs: [\n {\n analyticsSpecs: {\n ebsVolumeType: \"STANDARD\",\n instanceSize: \"M10\",\n nodeCount: 1\n },\n electableSpecs: {\n ebsVolumeType: \"STANDARD\",\n instanceSize: \"M10\",\n nodeCount: 3\n },\n priority: 7,\n regionName: atlasProps.region,\n }]\n }] \n },\n projectProps: {\n orgId: atlasProps.orgId,\n },\n ipAccessListProps: {\n accessList:[\n { ipAddress: atlasProps.ip, comment: 'My first IP address' }\n ]\n },\n profile: atlasProps.profile,\n });\n }\n\n getContextProps(): AtlasStackProps {\n const orgId = this.node.tryGetContext('orgId');\n\n if (!orgId){\n throw \"No context value specified for orgId. Please specify via the cdk context.\"\n }\n\n const profile = this.node.tryGetContext('profile') ?? 'default';\n const clusterName = this.node.tryGetContext('clusterName') ?? 
'test-cluster';\n const region = this.node.tryGetContext('region') ?? \"US_EAST_1\";\n const ip = this.node.tryGetContext('ip');\n \n if (!ip){\n throw \"No context value specified for ip. Please specify via the cdk context.\"\n }\n \n return {\n orgId,\n profile,\n clusterName,\n region,\n ip\n }\n }\n}\n```\n\n## Step 6: Compare the deployed stack with the current state\n\nIt's always a good idea to check what changes the CDK will make before actually deploying the stack. Use `cdk diff` command to do so:\n\n```bash\ncdk diff --context orgId=\"YOUR_ORG\" --context ip=\"YOUR_IP\"\n```\n\nReplace YOUR_ORG with your MongoDB Atlas organization ID and YOUR_IP with your IP address. This command shows the proposed changes to be made in your infrastructure between the deployed stack and the current state of your app, notice highlights for any resources to be created, deleted, or modified. This is for review purposes only. No changes will be made to your infrastructure. \n\n## Step 7: Deploy the app\n\nFinally, if everything is set up correctly, you can deploy the app:\n\n```bash\ncdk deploy --context orgId=\"YOUR_ORG\" --context ip=\"YOUR_IP\"\n```\n\nAgain, replace YOUR_ORG with your MongoDB Atlas organization ID and YOUR_IP with your IP address. This command deploys your app using AWS CloudFormation.\n\n## (Optional) Step 8: Clean up the deployed resources\n\nOnce you're finished with your MongoDB Atlas setup, you might want to clean up the resources you've provisioned to avoid incurring unnecessary costs. You can destroy the resources you've created using the cdk destroy command:\n\n```bash\ncdk destroy --context orgId=\"YOUR_ORG\" --context ip=\"YOUR_IP\"\n```\n\nThis command will destroy the CloudFormation stack associated with your CDK app, effectively deleting all the resources that were created during the deployment process.\n\nCongratulations! You have just deployed MongoDB Atlas with AWS CDK in TypeScript. Next, head to YouTube for a [full video step-by-step walkthrough and demo.\n\nThe MongoDB Atlas CDK resources are open-sourced under the Apache-2.0 license and we welcome community contributions. To learn more, see our contributing guidelines. \n\nThe fastest way to get started is to create a MongoDB Atlas account from the AWS Marketplace. Go build with MongoDB Atlas and the AWS CDK today!", "format": "md", "metadata": {"tags": ["Atlas", "TypeScript", "AWS"], "pageDescription": "Learn how to quickly and easily deploy a MongoDB Atlas instance using AWS CDK with TypeScript.", "contentType": "Tutorial"}, "title": "How to Deploy MongoDB Atlas with AWS CDK in TypeScript", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/query-multiple-databases-with-atlas-data-federation", "action": "created", "body": "# How to Query from Multiple MongoDB Databases Using MongoDB Atlas Data Federation\n\nHave you ever needed to make queries across databases, clusters, data centers, or even mix it with data stored in an AWS S3 blob? You probably haven't had to do all of these at once, but I'm guessing you've needed to do at least one of these at some point in your career. I'll also bet that you didn't know that this is possible (and easy) to do with MongoDB Atlas Data Federation! 
These allow you to configure multiple remote MongoDB deployments, and enable federated queries across all the configured deployments.\n\n**MongoDB Atlas Data Federation** allows you to perform queries across many MongoDB systems, including Clusters, Databases, and even AWS S3 buckets. Here's how **MongoDB Atlas Data Federation** works in practice.\n\nNote: In this post, we will be demoing how to query from two separate databases. However, if you want to query data from two separate collections that are in the same database, I would personally recommend that you use the $lookup (aggregation pipeline) query. $lookup performs a left outer join to an unsharded collection in the same database to filter documents from the \"joined\" collection for processing. In this scenario, using a federated database instance is not necessary.\n\ntl;dr: In this post, I will guide you through the process of creating and connecting to a virtual database in MongoDB Atlas, configuring paths to collections in two separate MongoDB databases stored in separate datacenters, and querying data from both databases using only a single query.\n\n## Prerequisites\n\nIn order to follow along this tutorial, you need to:\n\n- Create at least two M10 clusters in MongoDB Atlas. For this demo, I have created two databases deployed to separate Cloud Providers (AWS and GCP). Click here for information on setting up a new MongoDB Atlas cluster.\n\n\u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\n- Ensure that each database has been seeded by loading sample data into our Atlas cluster.\n- Have a Mongo Shell installed.\n\n## Deploy a Federated Database Instance\n\nFirst, make sure you are logged into MongoDB\nAtlas. Next, select the Data Federation option on the left-hand navigation.\n\nCreate a Virtual Database\n- Click \u201cset up manually\u201d in the \"create new federated database\" dropdown in the top right corner of the UI.\n\nClick **Add Data Source** on the Data Federation Configuration page, and select **MongoDB Atlas Cluster**. Select your first cluster, input `sample_mflix` as the database and `theaters` as the collection. Do this again for your second cluster and input `sample_restaurants` as the database and `restaurants` as the collection. For this tutorial, we will be analyzing restaurant data and some movie theater sample data to determine the number of theaters and restaurants in each zip code.\n\nRepeat the steps above to connect the data for your other cluster and data source.\n\nNext, drag these new data stores into your federated database instance and click **save**. It should look like this.\n\n## Connect to Your Federated Database Instance\n\nThe next thing we are going to need to do after setting up our federated database instance is to connect to it so we can start running queries on all of our data. First, click connect in the first box on the data federation overview page.\n\nClick Add Your Current IP Address. Enter your IP address and an optional description, then click **Add IP Address**. In the **Create a MongoDB User** step of the dialog, enter a Username and a Password for your database user. (Note: You'll use this username and password combination to access data on your cluster.)\n\n## Run Queries Against Your Virtual Database\n\nYou can run your queries any way you feel comfortable. 
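For example, if you'd rather query from application code than from a GUI, your federated database instance accepts standard driver connections. Here's a minimal sketch using PyMongo, assuming the connection string you copied from your federated database instance and the default `VirtualDatabase0` / `VirtualCollection0` names (adjust these if you renamed yours):

```python
from pymongo import MongoClient

# Placeholder: paste the connection string for your federated database instance,
# along with the database user you authorized above.
client = MongoClient("<your-federated-database-connection-string>")

# Default names assigned by MongoDB Atlas Data Federation.
collection = client["VirtualDatabase0"]["VirtualCollection0"]

# A single query reads across both underlying clusters transparently.
print(collection.find_one())
```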
You can use MongoDB Compass, the MongoDB Shell, connect to an application, or anything you see fit. For this demo, I'm going to be running my queries using MongoDB Visual Studio Code plugin and leveraging its\nPlaygrounds feature. For more information on using this plugin, check out this post on our Developer Hub.\n\nMake sure you are using the connection string for your federated database instance and not for your individual MongoDB databases. To get the connection string for your new federated database instance, click the connect button on the MongoDB Atlas Data Federation overview page. Then click on Connect using **MongoDB Compass**. Copy this connection string to your clipboard. Note: You will need to add the password of the user that you authorized to access your virtual database here.\n\nYou're going to paste this connection string into the MongoDB Visual Studio Code plugin when you add a new connection.\n\nNote: If you need assistance with getting started with the MongoDB Visual Studio Code Plugin, be sure to check out my post, How To Use The MongoDB Visual Studio Code Plugin, and the official documentation.\n\nYou can run operations using the MongoDB Query Language (MQL) which includes most, but not all, standard server commands. To learn which MQL operations are supported, see the MQL Support documentation.\n\nThe following queries use the paths that you added to your Federated Database Instance during deployment.\n\nFor this query, I wanted to construct a unique aggregation that could only be used if both sample datasets were combined using federated query and MongoDB Atlas Data Federation. For this example, we will run a query to determine the number of theaters and restaurants in each zip code, by analyzing the `sample_restaurants.restaurants` and the `sample_mflix.theaters` datasets that were entered above in our clusters. \n\nI want to make it clear that these data sources are still being stored in different MongoDB databases in completely different datacenters, but by leveraging MongoDB Atlas Data Federation, we can query all of our databases at once as if all of our data is in a single collection! The following query is only possible using federated search! How cool is that?\n\n``` javascript\n// MongoDB Playground\n\n// Select the database to use. VirtualDatabase0 is the default name for a MongoDB Atlas Data Federation database. If you renamed your database, be sure to put in your virtual database name here.\nuse('VirtualDatabase0');\n\n// We are connecting to `VirtualCollection0` since this is the default collection that MongoDB Atlas Data Federation calls your collection. If you renamed it, be sure to put in your virtual collection name here.\ndb.VirtualCollection0.aggregate(\n\n // In the first stage of our aggregation pipeline, we extract and normalize the dataset to only extract zip code data from our dataset.\n {\n '$project': {\n 'restaurant_zipcode': '$address.zipcode',\n 'theater_zipcode': '$location.address.zipcode',\n 'zipcode': {\n '$ifNull': [\n '$address.zipcode', '$location.address.zipcode'\n ]\n }\n }\n },\n\n // In the second stage of our aggregation, we group the data based on the zip code it resides in. We also push each unique restaurant and theater into an array, so we can get a count of the number of each in the next stage.\n // We are calculating the `total` number of theaters and restaurants by using the aggregator function on $group. 
This sums all the documents that share a common zip code.\n {\n '$group': {\n '_id': '$zipcode',\n 'total': {\n '$sum': 1\n },\n 'theaters': {\n '$push': '$theater_zipcode'\n },\n 'restaurants': {\n '$push': '$restaurant_zipcode'\n }\n }\n },\n\n // In the third stage, we get the size or length of the `theaters` and `restaurants` array from the previous stage. This gives us our totals for each category.\n {\n '$project': {\n 'zipcode': '$_id',\n 'total': '$total',\n 'total_theaters': {\n '$size': '$theaters'\n },\n 'total_restaurants': {\n '$size': '$restaurants'\n }\n }\n },\n\n // In our final stage, we sort our data in descending order so that the zip codes with the most number of restaurants and theaters are listed at the top.\n {\n '$sort': {\n 'total': -1\n }\n }\n])\n```\n\nThis outputs the zip codes with the most theaters and restaurants.\n\n``` json\n[\n {\n \"_id\": \"10003\",\n \"zipcode\": \"10003\",\n \"total\": 688,\n \"total_theaters\": 2,\n \"total_restaurants\": 686\n },\n {\n \"_id\": \"10019\",\n \"zipcode\": \"10019\",\n \"total\": 676,\n \"total_theaters\": 1,\n \"total_restaurants\": 675\n },\n {\n \"_id\": \"10036\",\n \"zipcode\": \"10036\",\n \"total\": 611,\n \"total_theaters\": 0,\n \"total_restaurants\": 611\n },\n {\n \"_id\": \"10012\",\n \"zipcode\": \"10012\",\n \"total\": 408,\n \"total_theaters\": 1,\n \"total_restaurants\": 407\n },\n {\n \"_id\": \"11354\",\n \"zipcode\": \"11354\",\n \"total\": 379,\n \"total_theaters\": 1,\n \"total_restaurants\": 378\n },\n {\n \"_id\": \"10017\",\n \"zipcode\": \"10017\",\n \"total\": 378,\n \"total_theaters\": 1,\n \"total_restaurants\": 377\n }\n ]\n```\n\n## Wrap-Up\n\nCongratulations! You just set up an Federated Database Instance that contains databases being run in different cloud providers. Then, you queried both databases using the MongoDB Aggregation pipeline by leveraging Atlas Data Federation and federated queries. This allows us to more easily run queries on data that is stored in multiple MongoDB database deployments across clusters, data centers, and even in different formats, including S3 blob storage.\n\n![Screenshot from the MongoDB Atlas Data Federation overview page showing the information for our new virtual database.\n\nScreenshot from the MongoDB Atlas Data Federation overview page showing the information for our new Virtual Database.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n\n## Additional Resources\n\n- Getting Started with MongoDB Atlas Data Federation Docs\n- Tutorial Federated Queries and $out to AWS\n S3", "format": "md", "metadata": {"tags": ["Atlas", "AWS"], "pageDescription": "Learn how to query from multiple MongoDB databases using MongoDB Atlas Data Federation.", "contentType": "Tutorial"}, "title": "How to Query from Multiple MongoDB Databases Using MongoDB Atlas Data Federation", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/delivering-near-real-time-single-view-customers-federated-database", "action": "created", "body": "# Delivering a Near Real-Time Single View into Customers with a Federated Database\n\nSo the data within your organization spans across multiple databases, database platforms, and even storage types, but you need to bring it together and make sense of the data that's dispersed. 
This is referred to as a Single View application and it is a common need for many organizations, so you're not alone!\n\nWith MongoDB Data Federation, you can seamlessly query, transform, and aggregate your data from one or more locations, such as within a MongoDB database, AWS S3 buckets, and even HTTP API endpoints. In other words, with Data Federation, you can use the MongoDB Query API to work with your data even if it doesn't exist within MongoDB.\n\nWhat's a scenario where this might make sense?\n\nLet's say you're in the automotive or supply chain industries. You have customer data that might exist within MongoDB, but your parts vendors run their own businesses external to yours. However, there's a need to pair the parts data with transactions for any particular customer. In this scenario, you might want to be able to create queries or views that bring each of these pieces together.\n\nIn this tutorial, we're going to see how quick and easy it is to work with MongoDB Data Federation to create custom views that might aid your sales and marketing teams.\n\n## The prerequisites\n\nTo be successful with this tutorial, you should have the following or at least an understanding of the following:\n\n- A MongoDB Atlas instance, M0 or better.\n- An external data source, accessible within an AWS S3 bucket or an HTTP endpoint.\n- Node.js 18+.\n\nWhile you could have data ready to go for this tutorial, we're going to assume you need a little bit of help. With Node.js, we can get a package that will allow us to generate fake data. This fake data will act as our customer data within MongoDB Atlas. The external data source will contain our vendor data, something we need to access, but ultimately don't own.\n\nTo get down to the specifics, we'll be referencing Carvana data because it is available as a dataset on AWS. If you want to follow along exactly, load that dataset into your AWS S3 bucket. You can either expose the S3 bucket to the public, or configure access specific for MongoDB. For this example, we'll just be exposing the bucket to the public so we can use HTTP.\n\n## Understanding the Carvana dataset within AWS S3\n\nIf you choose to play around with the Carvana dataset that is available within the AWS marketplace, you'll notice that you're left with a CSV that looks like the following:\n\n- vechicle_id\n- stock_number\n- year\n- make\n- model\n- miles\n- trim\n- sold_price\n- discounted_sold_price\n- partnered_dealership\n- delivery_fee\n- earliest_delivery_date\n- sold_date\n\nSince this example is supposed to get you started, much of the data isn't too important to us, but the theme is. The most important data to us will be the **vehicle_id** because it should be a unique representation for any particular vehicle. The **vehicle_id** will be how we connect a customer to a particular vehicle.\n\nWith the Carvana data in mind, we can continue towards generating fake customer data.\n\n## Generate fake customer data for MongoDB\n\nWhile we could connect the Carvana data to a MongoDB federated database and perform queries, the example isn't particularly exciting until we add a different data source.\n\nTo populate MongoDB with fake data that makes sense and isn't completely random, we're going to use a tool titled mgeneratejs which can be installed with NPM.\n\nIf you don't already have it installed, execute the following from a command prompt:\n\n```bash\nnpm install -g mgeneratejs\n```\n\nWith the generator installed, we're going to need to draft a template of how the data should look. 
You can do this directly in the command line, but it might be easier just to create a shell file for it.\n\nCreate a **generate_data.sh** file and include the following:\n\n```bash\nmgeneratejs '{ \n\"_id\": \"$oid\",\n \"name\": \"$name\",\n \"location\": {\n \"address\": \"$address\",\n \"city\": {\n \"$choose\": {\n \"from\": \"Tracy\", \"Palo Alto\", \"San Francsico\", \"Los Angeles\" ]\n }\n },\n \"state\": \"CA\"\n },\n \"payment_preference\": {\n \"$choose\": {\n \"from\": [\"Credit Card\", \"Banking\", \"Cash\", \"Bitcoin\" ]\n }\n },\n \"transaction_history\": {\n \"$array\": {\n \"of\": {\n \"$choose\": {\n \"from\": [\"2270123\", \"2298228\", \"2463098\", \"2488480\", \"2183400\", \"2401599\", \"2479412\", \"2477865\", \"2296988\", \"2415845\", \"2406021\", \"2471438\", \"2284073\", \"2328898\", \"2442162\", \"2467207\", \"2388202\", \"2258139\", \"2373216\", \"2285237\", \"2383902\", \"2245879\", \"2491062\", \"2481293\", \"2410976\", \"2496821\", \"2479193\", \"2129703\", \"2434249\", \"2459973\", \"2468197\", \"2451166\", \"2451181\", \"2276549\", \"2472323\", \"2436171\", \"2475436\", \"2351149\", \"2451184\", \"2470487\", \"2475571\", \"2412684\", \"2406871\", \"2458189\", \"2450423\", \"2493361\", \"2431145\", \"2314101\", \"2229869\", \"2298756\", \"2394023\", \"2501380\", \"2431582\", \"2490094\", \"2388993\", \"2489033\", \"2506533\", \"2411642\", \"2429795\", \"2441783\", \"2377402\", \"2327280\", \"2361260\", \"2505412\", \"2253805\", \"2451233\", \"2461674\", \"2466434\", \"2287125\", \"2505418\", \"2478740\", \"2366998\", \"2171300\", \"2431678\", \"2359605\", \"2164278\", \"2366343\", \"2449257\", \"2435175\", \"2413261\", \"2368558\", \"2088504\", \"2406398\", \"2362833\", \"2393989\", \"2178198\", \"2478544\", \"2290107\", \"2441142\", \"2287235\", \"2090225\", \"2463293\", \"2458539\", \"2328519\", \"2400013\", \"2506801\", \"2454632\", \"2386676\", \"2487915\", \"2495358\", \"2353712\", \"2421438\", \"2465682\", \"2483923\", \"2449799\", \"2492327\", \"2484972\", \"2042273\", \"2446226\", \"2163978\", \"2496932\", \"2136162\", \"2449304\", \"2149687\", \"2502682\", \"2380738\", \"2493539\", \"2235360\", \"2423807\", \"2403760\", \"2483944\", \"2253657\", \"2318369\", \"2468266\", \"2435881\", \"2510356\", \"2434007\", \"2030813\", \"2478191\", \"2508884\", \"2383725\", \"2324734\", \"2477641\", \"2439767\", \"2294898\", \"2022930\", \"2129990\", \"2448650\", \"2438041\", \"2261312\", \"2418766\", \"2495220\", \"2403300\", \"2323337\", \"2417618\", \"2451496\", \"2482895\", \"2356295\", \"2189971\", \"2253113\", \"2444116\", \"2378270\", \"2431210\", \"2470691\", \"2460896\", \"2426935\", \"2503476\", \"2475952\", \"2332775\", \"2453908\", \"2432284\", \"2456026\", \"2209392\", \"2457841\", \"2066544\", \"2450290\", \"2427091\", \"2426772\", \"2312503\", \"2402615\", \"2452975\", \"2382964\", \"2396979\", \"2391773\", \"2457692\", \"2158784\", \"2434491\", \"2237533\", \"2474056\", \"2474203\", \"2450595\", \"2393747\", \"2497077\", \"2459487\", \"2494952\"]\n }\n },\n \"number\": {\n \"$integer\": {\n \"min\": 1,\n \"max\": 3\n }\n },\n \"unique\": true\n }\n }\n}\n' -n 50 \n```\n\nSo what's happening in the above template?\n\nIt might be easier to have a look at a completed document based on the above template:\n\n```json\n{\n \"_id\": ObjectId(\"64062d2db97b8ab3a8f20f8d\"),\n \"name\": \"Amanda Vega\",\n \"location\": {\n \"address\": \"1509 Fuvzu Circle\",\n \"city\": \"Tracy\",\n \"state\": \"CA\"\n },\n \"payment_preference\": \"Credit Card\",\n 
\"transaction_history\": [\n \"2323337\"\n ]\n}\n```\n\nThe script will create 50 documents. Many of the fields will be randomly generated with the exception of the `city`, `payment_preference`, and `transaction_history` fields. While these fields will be somewhat random, we're sandboxing them to a particular set of options.\n\nCustomers need to be linked to actual vehicles found in the Carvana data. The script adds one to three actual id values to each document. To narrow the scope, we'll imagine that the customers are locked to certain regions.\n\nImport the output into MongoDB. You might consider creating a **carvana** database and a **customers** collection within MongoDB for this data to live.\n\n## Create a multiple datasource federated database within MongoDB Atlas\n\nIt's time for the fun part! We need to create a federated database to combine both customer data that already lives within MongoDB and the Carvana data that lives on AWS S3.\n\nWithin MongoDB Atlas, click the **Data Federation** Tab.\n\n![MongoDB Atlas Federated Databases\n\nClick \u201cset up manually\u201d in the \"create new federated database\" dropdown in the top right corner of the UI.\n\nThen, add your data sources. Whether the Carvana data source comes directly from an AWS S3 integration or a public HTTP endpoint, it is up to you. The end result will be the same.\n\nWith the data sources available, create a database within your federated instance. Since the theme of this example is Carvana, it might make sense to create a **carvana** database and give each data source a proper collection name. The data living on AWS S3 might be called **sales** or **transactions** and the customer data might have a **customers** name.\n\nWhat you name everything is up to you. When connecting to this federated instance, you'll only ever see the federated database name and federated collection names. Looking in, you won't notice any difference from connecting to any other MongoDB instance.\n\nYou can connect to your federated instance using the connection string it provides. It will look similar to a standard MongoDB Atlas connection string.\n\nThe above image was captured with MongoDB Compass. Notice the **sales** collection is the Carvana data on AWS S3 and it looks like any other MongoDB document?\n\n## Create a single view report with a MongoDB aggregation pipeline\n\nHaving all the data sources accessible from one location with Data Federation is great, but we can do better by providing users a single view that might make sense for their reporting needs.\n\nA little imagination will need to be used for this example, but let's say we want a report that shows the amount of car types sold for every city. 
For this, we're going to need data from both the **customers** collection as well as the **carvana** collection.\n\nLet's take a look at the following aggregation pipeline:\n\n```json\n\n {\n \"$lookup\": {\n \"from\": \"sales\",\n \"localField\": \"transaction_history\",\n \"foreignField\": \"vehicle_id\",\n \"as\": \"transaction_history\"\n }\n },\n {\n \"$unwind\": {\n \"path\": \"$transaction_history\"\n }\n },\n {\n \"$group\": {\n \"_id\": {\n \"city\": \"$location.city\",\n \"vehicle\": \"$transaction_history.make\"\n },\n \"total_transactions\": {\n \"$sum\": 1\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"city\": \"$_id.city\",\n \"vehicle\": \"$_id.vehicle\",\n \"total_transactions\": 1\n }\n }\n]\n```\n\nThere are four stages in the above pipeline.\n\nIn the first stage, we want to expand the vehicle id values that are found in **customers** documents. Reference values are not particularly useful to us standalone so we do a join operation using the `$lookup` operator between collections. This leaves us with all the details for every vehicle alongside the customer information.\n\nThe next stage flattens the array of vehicle information using the `$unwind` operation. By the end of this, all results are flat and we're no longer working with arrays.\n\nIn the third stage we group the data. In this example, we are grouping the data based on the city and vehicle type and counting how many of those transactions occurred. By the end of this stage, the results might look like the following:\n\n```json\n{\n \"_id\": {\n \"city\": \"Tracy\",\n \"vehicle\": \"Honda\"\n },\n \"total_transactions\": 4\n}\n```\n\nIn the final stage, we format the data into something a little more attractive using a `$project` operation. This leaves us with data that looks like the following:\n\n```json\n[\n {\n \"city\": \"Tracy\",\n \"vehicle\": \"Honda\",\n \"total_transactions\": 4\n },\n {\n \"city\": \"Tracy\",\n \"vehicle\": \"Toyota\",\n \"total_transactions\": 12\n }\n]\n```\n\nThe data can be manipulated any way we want, but for someone running a report of what city sells the most of a certain type of vehicle, this might be useful.\n\nThe aggregation pipeline above can be used in MongoDB Compass and would be nearly identical using several of the MongoDB drivers such as Node.js and Python. To get an idea of what it would look like in another language, here is an example of Java:\n\n```java\nArrays.asList(new Document(\"$lookup\", \n new Document(\"from\", \"sales\")\n .append(\"localField\", \"transaction_history\")\n .append(\"foreignField\", \"vehicle_id\")\n .append(\"as\", \"transaction_history\")), \n new Document(\"$unwind\", \"$transaction_history\"), \n new Document(\"$group\", \n new Document(\"_id\", \n new Document(\"city\", \"$location.city\")\n .append(\"vehicle\", \"$transaction_history.make\"))\n .append(\"total_transactions\", \n new Document(\"$sum\", 1L))), \n new Document(\"$project\", \n new Document(\"_id\", 0L)\n .append(\"city\", \"$_id.city\")\n .append(\"vehicle\", \"$_id.vehicle\")\n .append(\"total_transactions\", 1L)))\n```\n\nWhen using MongoDB Compass, aggregation pipelines can be output automatically to any supported driver language you want.\n\nThe person generating the report probably won't want to deal with aggregation pipelines or application code. Instead, they'll want to look at a view that is always up to date in near real-time.\n\nWithin the MongoDB Atlas dashboard, go back to the configuration area for your federated instance. 
You'll want to create a view, similar to how you created a federated database and federated collection.\n\n![MongoDB Atlas Federated Database View\n\nGive the view a name and paste the aggregation pipeline into the box when prompted.\n\nRefresh MongoDB Compass or whatever tool you're using and you should see the view. When you load the view, it should show your data as if you ran a pipeline \u2014 however, this time without running anything. \n\nIn other words, you\u2019d be interacting with the view like you would any other collection \u2014 no queries or aggregations to constantly run or keep track of.\n\nThe view is automatically kept up to date behind the scenes using the pipeline you used to create it.\n\n## Conclusion\n\nWith MongoDB Data Federation, you can combine data from numerous data sources and interact with it using standard MongoDB queries and aggregation pipelines. This allows you to create views and run reports in near real-time regardless where your data might live.\n\nHave a question about Data Federation or aggregations? Check out the MongoDB Community Forums and learn how others are using them.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to bring data together from different datasources for a near-realtime view into customer data using the MongoDB Federated Database feature.", "contentType": "Tutorial"}, "title": "Delivering a Near Real-Time Single View into Customers with a Federated Database", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/automated-continuous-data-copying-from-mongodb-to-s3", "action": "created", "body": "# How to Automate Continuous Data Copying from MongoDB to S3\n\nModern always-on applications rely on automatic failover capabilities and real-time data access. MongoDB Atlas already supports automatic backups out of the box, but you might still want to copy your data into another location to run advanced analytics on your data or isolate your operational workload. For this reason, it can be incredibly useful to set up automatic continuous replication of your data for your workload.\n\nIn this post, we are going to set up a way to continuously copy data from a MongoDB database into an AWS S3 bucket in the Parquet data format by using MongoDB Atlas Database Triggers. We will first set up a Federated Database Instance using MongoDB Atlas Data Federation to consolidate a MongoDB database and our AWS S3 bucket. Next, we will set up a Trigger to automatically add a new document to a collection every minute, and another Trigger to automatically copy our data to our S3 bucket. Then, we will run a test to ensure that our data is being continuously copied into S3 from MongoDB. Finally, we\u2019ll cover some items you\u2019ll want to consider when building out something like this for your application.\n\nNote: The values we use for certain parameters in this blog are for demonstration and testing purposes. If you plan on utilizing this functionality, we recommend you look at the \u201cProduction Considerations\u201d section and adjust based on your needs.\n\n## What is Parquet?\n\nFor those of you not familiar with Parquet, it's an amazing file format that does a lot of the heavy lifting to ensure blazing fast query performance on data stored in files. 
This is a popular file format in the Data Warehouse and Data Lake space as well as for a variety of machine learning tasks.\n\nOne thing we frequently see users struggle with is getting NoSQL data into Parquet as it is a columnar format. Historically, you would have to write some custom code to get the data out of the database, transform it into an appropriate structure, and then probably utilize a third-party library to write it to Parquet. Fortunately, with MongoDB Atlas Data Federation's $out to S3, you can now convert MongoDB Data into Parquet with little effort.\n\n## Prerequisites\n\nIn order to follow along with this tutorial yourself, you will need to\ndo the following:\n\n1. Create a MongoDB Atlas account, if you do not have one already.\n2. Create an AWS account with privileges to create IAM Roles and S3 Buckets (to give Data Federation access to write data to your S3 bucket). Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n3. Install the AWS CLI. 4. Configure the AWS CLI.\n5. *Optional*: Set up unified AWS access.\n\n## Create a Federated Database Instance and Connect to S3\n\nWe need to set up a Federated Database Instance to copy our MongoDB data and utilize MongoDB Atlas Data Federation's $out to S3 to convert our MongoDB Data into Parquet and land it in an S3 bucket.\n\nThe first thing you'll need to do is navigate to \"Data Federation\" on the left-hand side of your Atlas Dashboard and then click \u201cset up manually\u201d in the \"create new federated database\" dropdown in the top right corner of the UI.\n\nThen, you need to go ahead and connect your S3 bucket to your Federated Database Instance. This is where we will write the Parquet files. The setup wizard should guide you through this pretty quickly, but you will need access to your credentials for AWS.\n\n>Note: For more information, be sure to refer to the documentation on deploying a Federated Database Instance for a S3 data store. (Be sure to give Atlas Data Federation \"Read and Write\" access to the bucket, so it can write the Parquet files there).\n\nSelect an AWS IAM role for Atlas.\n\n- If you created a role that Atlas is already authorized to read and write to your S3 bucket, select this user.\n- If you are authorizing Atlas for an existing role or are creating a new role, be sure to refer to the documentation for how to do this.\n\nEnter the S3 bucket information.\n\n- Enter the name of your S3 bucket. 
I named my bucket `mongodb-data-lake-demo`.\n- Choose Read and write, to be able to write documents to your S3 bucket.\n\nAssign an access policy to your AWS IAM role.\n\n- Follow the steps in the Atlas user interface to assign an access policy to your AWS IAM role.\n- Your role policy for read-only or read and write access should look similar to the following:\n\n``` json\n{\n \"Version\": \"2012-10-17\",\n \"Statement\": \n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"s3:ListBucket\",\n \"s3:GetObject\",\n \"s3:GetObjectVersion\",\n \"s3:GetBucketLocation\"\n ],\n \"Resource\": [\n \n ]\n }\n ]\n}\n```\n\n- Define the path structure for your files in the S3 bucket and click Next.\n- Once you've connected your S3 bucket, we're going to create a simple data source to query the data in S3, so we can verify we've written the data to S3 at the end of this tutorial.\n\n## Connect Your MongoDB Database to Your Federated Database Instance\n\nNow, we're going to connect our Atlas Cluster, so we can write data from it into the Parquet files on S3. This involves picking the cluster from a list of clusters in your Atlas project and then selecting the databases and collections you'd like to create Data Sources from and dragging them into your Federated Database Instance.\n\n![Screenshot of the Add Data Source modal with collections selected\n\n## Create a MongoDB Atlas Trigger to Create a New Document Every Minute\n\nNow that we have all of our data sources set up in our brand new Federated Database Instance, we can now set up a MongoDB Database Trigger to automatically generate new documents every minute for our continuous replication demo. **Triggers** allow you to execute server-side logic in response to database events or according to a schedule. Atlas provides two kinds of Triggers: **Database** and **Scheduled** triggers. We will use a **Scheduled** trigger to ensure that these documents are automatically archived in our S3 bucket.\n\n1. Click the Atlas tab in the top navigation of your screen if you have not already navigated to Atlas.\n2. Click Triggers in the left-hand navigation.\n3. On the Overview tab of the Triggers page, click Add Trigger to open the trigger configuration page.\n4. Enter these configuration values for our trigger:\n\nAnd our Trigger function looks like this:\n\n``` javascript\nexports = function () {\n\n const mongodb = context.services.get(\"NAME_OF_YOUR_ATLAS_SERVICE\");\n const db = mongodb.db(\"NAME_OF_YOUR DATABASE\")\n const events = db.collection(\"NAME_OF_YOUR_COLLECTION\");\n\n const event = events.insertOne(\n {\n time: new Date(),\n aNumber: Math.random() * 100,\n type: \"event\"\n }\n );\n\n return JSON.stringify(event);\n\n};\n```\n\nLastly, click Run and check that your database is getting new documents inserted into it every 60 seconds.\n\n## Create a MongoDB Atlas Trigger to Copy New MongoDB Data into S3 Every Minute\n\nAlright, now is the fun part. We are going to create a new MongoDB Trigger that copies our MongoDB data every 60 seconds utilizing MongoDB Atlas Data Federation's $out to S3 aggregation pipeline. Create a new Trigger and use these configuration settings.\n\nYour Trigger function will look something like this. But there's a lot going on, so let's break it down.\n\n* First, we are going to connect to our new Federated Database Instance. This is different from the previous Trigger that connected to our Atlas database. Be sure to put your virtual database name in for `context.services.get`. 
You must connect to your Federated Database Instance to use $out to S3.\n* Next, we are going to create an aggregation pipeline function to first query our MongoDB data that's more than 60 seconds old.\n* Then, we will utilize the $out aggregate operator to replicate the data from our previous aggregation stage into S3.\n* In the format, we're going to specify *parquet* and determine a maxFileSize and maxRowGroupSize.\n * *maxFileSize* is going to determine the maximum size each\n partition will be.\n *maxRowGroupSize* is going to determine how records are grouped inside of the parquet file in \"row groups\" which will impact performance querying your Parquet files similarly to file size.\n* Lastly, we\u2019re going to set our S3 path to match the value of the data.\n\n``` javascript\nexports = function () {\n\n const service = context.services.get(\"NAME_OF_YOUR_FEDERATED_DATA_SERVICE\");\n const db = service.db(\"NAME_OF_YOUR_VIRTUAL_DATABASE\")\n const events = db.collection(\"NAME_OF_YOUR_VIRTUAL_COLLECTION\");\n\n const pipeline = \n {\n $match: {\n \"time\": {\n $gt: new Date(Date.now() - 60 * 60 * 1000),\n $lt: new Date(Date.now())\n }\n }\n }, {\n \"$out\": {\n \"s3\": {\n \"bucket\": \"mongodb-federated-data-demo\",\n \"region\": \"us-east-1\",\n \"filename\": \"events\",\n \"format\": {\n \"name\": \"parquet\",\n \"maxFileSize\": \"10GB\",\n \"maxRowGroupSize\": \"100MB\"\n }\n }\n }\n }\n ];\n\n return events.aggregate(pipeline);\n};\n```\n\nIf all is good, you should see your new Parquet document in your S3 bucket. I've enabled the AWS GUI to show you the versions so that you can see how it is being updated every 60 seconds automatically.\n\n![Screenshot from AWS S3 management console showing the new events.parquet document that was generated by our $out trigger function.\n\n## Production Considerations\n\nSome of the configurations chosen above were done so to make it easy to set up and test, but if you\u2019re going to use this in production, you\u2019ll want to adjust them.\n\nFirstly, this blog was setup with a \u201cdeltas\u201d approach. This means that we are only copying the new documents from our collection into our Parquet files. Another approach would be to do a full snapshot, i.e., copying the entire collection into Parquet each time. The approach you\u2019re taking should depend on how much data is in your collection and what\u2019s required by the downstream consumer.\n\nSecondly, regardless of how much data you\u2019re copying, ideally you want Parquet files to be larger, and for them to be partitioned based on how you\u2019re going to query. Apache recommends row group sizes of 512MB to 1GB. You can go smaller depending on your requirements, but as you can see, you want larger files. The other consideration is if you plan to query this data in the parquet format, you should partition it so that it aligns with your query pattern. If you\u2019re going to query on a date field, for instance, you might want each file to have a single day's worth of data.\n\nLastly, depending on your needs, it may be appropriate to look into an alternative scheduling device to triggers, like Temporal or Apache Airflow.\n\n## Wrap Up\n\nIn this post, we walked through how to set up an automated continuous replication from a MongoDB database into an AWS S3 bucket in the Parquet data format by using MongoDB Atlas Data Federation and MongoDB Atlas Database Triggers. First, we set up a new Federated Database Instance to consolidate a MongoDB database and our AWS S3 bucket. 
Then, we set up a Trigger to automatically add a new document to a collection every minute, and another Trigger to automatically back up these new automatically generated documents into our S3 bucket.\n\nWe also discussed how Parquet is a great format for your MongoDB data when you need to use columnar-oriented tools like Tableau for visualizations or Machine Learning frameworks that use Data Frames. Parquet can be quickly and easily converted into Pandas Data Frames in Python.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n\nAdditional Resources:\n\n- Data Federation: Getting Started Documentation\n- $out S3 Data Lake Documentation", "format": "md", "metadata": {"tags": ["Atlas", "Parquet", "AWS"], "pageDescription": "Learn how to set up a continuous copy from MongoDB into an AWS S3 bucket in Parquet.", "contentType": "Tutorial"}, "title": "How to Automate Continuous Data Copying from MongoDB to S3", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/mongodb-apache-airflow", "action": "created", "body": "# Using MongoDB with Apache Airflow\n\nWhile writing cron jobs to execute scripts is one way to accomplish data movement, as workflows become more complex, managing job scheduling becomes very difficult and error-prone. This is where Apache Airflow shines. Airflow is a workflow management system originally designed by\u00a0Airbnb\u00a0and\u00a0open sourced\u00a0in 2015. With Airflow, you can programmatically author, schedule, and monitor complex data pipelines. Airflow is used in many use cases with MongoDB, including:\n\n* Machine learning pipelines.\n* Automating database administration operations.\n* Batch movement of data.\n\nIn this post, you will learn the basics of how to leverage MongoDB within an Airflow pipeline.\n\n## Getting started\n\nApache Airflow consists of a number of\u00a0installation steps, including installing a database and webserver. While it\u2019s possible to follow the installation script and configure the database and services, the easiest way to get started with Airflow is to use\u00a0Astronomer CLI. This CLI stands up a complete Airflow docker environment from a single command line.\n\nLikewise, the easiest way to stand up a MongoDB cluster is with\u00a0MongoDB Atlas. Atlas is not just a hosted MongoDB cluster. Rather, it\u2019s an integrated suite of cloud database and data services that enable you to quickly build your applications. One service,\u00a0Atlas Data Federation, is a cloud-native query processing service that allows users to create a virtual collection from heterogeneous data sources such as Amazon S3 buckets, MongoDB clusters, and HTTP API endpoints. Once defined, the user simply issues a query to obtain data combined from these sources.\n\nFor example, consider a scenario where you were moving data with an Airflow DAG into MongoDB and wanted to join cloud object storage - Amazon S3 or Microsoft Azure Blob Storage data with MongoDB as part of a data analytics application.\u00a0 Using MongoDB Atlas Data Federation, you create a virtual collection that contains a MongoDB cluster and a cloud object storage collection. Now, all your application needs to do is issue a single query and Atlas takes care of joining heterogeneous data. 
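As a quick illustration (a hypothetical sketch, separate from the DAG we will build below), querying a virtual collection from Python looks exactly like querying any other MongoDB collection; the federation happens server-side. The connection string, database, and collection names here are placeholders for this example only:

```python
from pymongo import MongoClient

# Placeholder connection string for a federated database instance; the
# "VirtualDatabase0" database and "sales" collection names are assumptions
# made for this illustration.
client = MongoClient("<your-federated-database-connection-string>")
sales = client["VirtualDatabase0"]["sales"]

# One ordinary query; Atlas Data Federation combines the MongoDB cluster
# and cloud object storage sources behind the scenes.
for doc in sales.find().limit(5):
    print(doc)
```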
This feature and others like\u00a0MongoDB Charts, which we will see later in this post, will increase your productivity and enhance your Airflow solution. To learn more about MongoDB Atlas Data Federation, check out the MongoDB.live webinar on YouTube,\u00a0Help You Data Flow with Atlas Data Lake.\u00a0 For an overview of MongoDB Atlas, check out\u00a0Intro to MongoDB Atlas in 10 mins | Jumpstart, available on YouTube.\n\n## Currency over time\n\nIn this post, we will create an Airflow workflow that queries an HTTP endpoint for a historical list of currency values versus the Euro. The data will then be inserted into MongoDB using the MongoHook and a chart will be created using MongoDB Charts. In Airflow, a\u00a0hook\u00a0is an interface to an external platform or database such as MongoDB. The MongoHook wraps the PyMongo Python Driver for MongoDB, unlocking all the capabilities of the driver within an Airflow workflow.\n\n### Step 1: Spin up the Airflow environment\n\nIf you don\u2019t have an Airflow environment already available, install the\u00a0Astro CLI. Once it\u2019s installed, create a directory for the project called \u201ccurrency.\u201d\n\n**mkdir currency && cd currency**\n\nNext, create the Airflow environment using the Astro CLI.\n\n**astro dev init**\n\nThis command will create a folder structure that includes a folder for DAGs, a Dockerfile, and other support files that are used for customizations.\n\n### Step 2: Install the MongoDB Airflow provider\n\nProviders help Airflow interface with external systems. To add a provider, modify the requirements.txt file and add the MongoDB provider.\n\n**echo \u201capache-airflow-providers-mongo==3.0.0\u201d >> requirements.txt**\n\nFinally, start the Airflow project.\n\n**astro dev start**\n\nThis simple command will start and configure the four docker containers needed for Airflow: a webserver, scheduler, triggerer, and Postgres database, respectively.\n\n**Astro dev restart**\n\nNote: You can also manually install the MongoDB Provider using\u00a0PyPi\u00a0if you are not using the Astro CLI.\n\nNote: The HTTP provider is already installed as part of the Astro runtime. If you did not use Astro, you will need to install the\u00a0HTTP provider.\n\n### Step 3: Creating the DAG workflow\n\nOne of the components that is installed with Airflow is a webserver. This is used as the main operational portal for Airflow workflows. To access, open a browser and navigate to\u00a0http://localhost:8080. Depending on how you installed Airflow, you might see example DAGs already populated. Airflow workflows are referred to as DAGs (Directed Acyclic Graphs) and can be anything from the most basic job scheduling pipelines to more complex ETL, machine learning, or predictive data pipeline workflows such as fraud detection. These DAGs are Python scripts that give developers complete control of the workflow. DAGs can be triggered manually via an API call or the web UI. 
DAGs can also be scheduled for execution one time, recurring, or in any\u00a0cron-like configuration.\n\nLet\u2019s get started exploring Airflow by creating a Python file, \u201ccurrency.py,\u201d within the\u00a0**dags**\u00a0folder using your favorite editor.\n\nThe following is the complete source code for the DAG.\n\n```\nimport os\nimport json\nfrom airflow import DAG\nfrom airflow.operators.python import PythonOperator\nfrom airflow.operators.bash import BashOperator\nfrom airflow.providers.http.operators.http import SimpleHttpOperator\nfrom airflow.providers.mongo.hooks.mongo import MongoHook\nfrom datetime import datetime, timedelta\n\ndef on_failure_callback(**context):\n    print(f\"Task {context['task_instance_key_str']} failed.\")\n\ndef uploadtomongo(ti, **context):\n    try:\n        hook = MongoHook(mongo_conn_id='mongoid')\n        client = hook.get_conn()\n        db = client.MyDB\n        currency_collection = db.currency_collection\n        print(f\"Connected to MongoDB - {client.server_info()}\")\n        d = json.loads(context[\"result\"])\n        currency_collection.insert_one(d)\n    except Exception as e:\n        print(f\"Error connecting to MongoDB -- {e}\")\n\nwith DAG(\n    dag_id=\"load_currency_data\",\n    schedule_interval=None,\n    start_date=datetime(2022,10,28),\n    catchup=False,\n    tags=[\"currency\"],\n    default_args={\n        \"owner\": \"Rob\",\n        \"retries\": 2,\n        \"retry_delay\": timedelta(minutes=5),\n        'on_failure_callback': on_failure_callback\n    }\n) as dag:\n\n    t1 = SimpleHttpOperator(\n        task_id='get_currency',\n        method='GET',\n        endpoint='2022-01-01..2022-06-30',\n        headers={\"Content-Type\": \"application/json\"},\n        do_xcom_push=True,\n        dag=dag)\n\n    t2 = PythonOperator(\n        task_id='upload-mongodb',\n        python_callable=uploadtomongo,\n        op_kwargs={\"result\": t1.output},\n        dag=dag\n    )\n\n    t1 >> t2\n```\n\n### Step 4: Configure connections\n\nWhen you look at the code, notice there are no connection strings within the Python file.\u00a0Connection identifiers, as shown in the code snippet below, are placeholders for connection strings.\n\n`hook = MongoHook(mongo_conn_id='mongoid')`\n\nConnection identifiers and the connection configurations they represent are defined within the Connections tab of the Admin menu in the Airflow UI.\n\nIn this example, since we are connecting to MongoDB and an HTTP API, we need to define two connections. First, let\u2019s create the MongoDB connection by clicking the \u201cAdd a new record\u201d button.\n\nThis will present a page where you can fill out connection information. Select \u201cMongoDB\u201d from the Connection Type drop-down and fill out the following fields:\n\n| | |\n| --- | --- |\n| Connection Id | mongoid |\n| Connection Type | MongoDB |\n| Host | XXXX.mongodb.net *(Place your MongoDB Atlas hostname here)* |\n| Schema | MyDB *(e.g., the database in MongoDB)* |\n| Login | *(Place your database username here)* |\n| Password | *(Place your database password here)* |\n| Extra | {\"srv\": true} |\n\nClick \u201cSave\u201d and \u201cAdd a new record\u201d to create the HTTP API connection.\n\nSelect \u201cHTTP\u201d for the Connection Type and fill out the following fields:\n\n| | |\n| --- | --- |\n| Connection Id | http\\_default |\n| Connection Type | HTTP |\n| Host | api.frankfurter.app |\n\nNote: Connection strings can also be stored in environment variables or stored securely using an\u00a0external secrets back end, such as HashiCorp Vault or AWS SSM Parameter Store.\n\n### Step 5: The DAG workflow\n\nClick on the DAGs menu and then \u201cload\\_currency\\_data.\u201d\u00a0 You\u2019ll be presented with a number of sub-items that address the workflow, such as the Code menu that shows the Python code that makes up the DAG.\n\nClicking on Graph will show a visual representation of the DAG parsed from the Python code.\n\nIn our example, \u201cget\\_currency\u201d uses the\u00a0SimpleHttpOperator\u00a0to obtain a historical list of currency values versus the Euro.\n\n```\nt1 = SimpleHttpOperator(\n    task_id='get_currency',\n    method='GET',\n    endpoint='2022-01-01..2022-06-30',\n    headers={\"Content-Type\": \"application/json\"},\n    do_xcom_push=True,\n    dag=dag)\n```\n\nAirflow passes information between tasks using\u00a0XComs. In this example, we store the return data from the API call to XCom. The next operator, \u201cupload-mongodb,\u201d uses the\u00a0PythonOperator\u00a0to call a Python function, \u201cuploadtomongo.\u201d\n\n```\nt2 = PythonOperator(\n    task_id='upload-mongodb',\n    python_callable=uploadtomongo,\n    op_kwargs={\"result\": t1.output},\n    dag=dag\n)\n```\n\nThis function accesses the data stored in XCom and uses MongoHook to insert the data obtained from the API call into a MongoDB cluster.\n\n```\ndef uploadtomongo(ti, **context):\n    try:\n        hook = MongoHook(mongo_conn_id='mongoid')\n        client = hook.get_conn()\n        db = client.MyDB\n        currency_collection = db.currency_collection\n        print(f\"Connected to MongoDB - {client.server_info()}\")\n        d = json.loads(context[\"result\"])\n        currency_collection.insert_one(d)\n    except Exception as e:\n        print(f\"Error connecting to MongoDB -- {e}\")\n```\n\nOur example workflow is simple: execute one task and then another.\n\n```\nt1 >> t2\n```\n\nAirflow overloads the \u201c>>\u201d bitwise operator to describe the flow of tasks. For more information, see \u201cBitshift Composition.\u201d\n\nAirflow can enable more complex workflows, such as the following:\n\nTask execution can be conditional with multiple execution paths.\n\n### Step 6: Scheduling the DAG\n\nAirflow is best known for its workflow scheduling capabilities, and these are defined as part of the DAG definition.\n\n```\nwith DAG(\n    dag_id=\"load_currency_data\",\n    schedule=None,\n    start_date=datetime(2022,10,28),\n    catchup=False,\n    tags=[\"currency\"],\n    default_args={\n        \"owner\": \"Rob\",\n        \"retries\": 2,\n        \"retry_delay\": timedelta(minutes=5),\n        'on_failure_callback': on_failure_callback\n    }\n) as dag:\n```\n\nThe\u00a0scheduling interval\u00a0can be defined using a cron expression, a timedelta, or one of Airflow\u2019s presets, such as the one used in this example, \u201cNone.\u201d\n\nDAGs can be scheduled to start at a date in the past. If you\u2019d like Airflow to catch up and execute the DAG as many times as would have been done between the start date and now, you can set the \u201ccatchup\u201d property.
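\n\nFor instance, a variation of this DAG (illustrative only, not part of the tutorial) could run every day at 06:00 UTC and backfill any runs missed since the start date:\n\n```\nwith DAG(\n    dag_id=\"load_currency_data_daily\",   # hypothetical name\n    schedule_interval=\"0 6 * * *\",       # cron expression; a timedelta or preset also works\n    start_date=datetime(2022,10,28),\n    catchup=True,                        # backfill every missed interval between start_date and now\n) as dag:\n    ...\n```\n\n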
Note: \u201cCatchup\u201d defaults to \u201cTrue,\u201d so make sure you set the value accordingly.\n\nFrom our example, you can see just some of the configuration options available.\n\n### Step 7: Running the DAG\n\nYou can\u00a0execute a DAG\u00a0ad-hoc through the web UI using the \u201cplay\u201d button under the action column.\n\nOnce it\u2019s executed, you can click on the DAG and then the Grid menu item to display the runtime status of the DAG.\n\nIn the example above, the DAG was run four times, all with success. You can view the log of each step by clicking on the task and then \u201cLog\u201d from the menu.\n\nThe log is useful for troubleshooting the task. Here we can see our output from the `print(f\"Connected to MongoDB - {client.server_info()}\")` command within the PythonOperator.\n\n### Step 8: Exploring the data in MongoDB Atlas\n\nOnce we run the DAG, the data will be in the MongoDB Atlas cluster. Navigating to the cluster, we can see the \u201ccurrency\\_collection\u201d was created and populated with currency data.\n\n### Step 9: Visualizing the data using MongoDB Charts\n\nNext, we can visualize the data by using MongoDB Charts.\n\nNote that the data stored in MongoDB from the API contains a subdocument for every day of the given period. A sample of this data is as follows:\n\n```\n{\n  _id: ObjectId(\"635b25bdcef2d967af053e2c\"),\n  amount: 1,\n  base: 'EUR',\n  start_date: '2022-01-03',\n  end_date: '2022-06-30',\n  rates: {\n    '2022-01-03': {\n      AUD: 1.5691,\n      BGN: 1.9558,\n      BRL: 6.3539,\n      \u2026\n    },\n    '2022-01-04': {\n      AUD: 1.5682,\n      BGN: 1.9558,\n      BRL: 6.4174,\n      \u2026\n    },\n    \u2026\n  }\n}\n```\n\nWith MongoDB Charts, we can define an aggregation pipeline filter to transform the data into a format that will be optimized for chart creation. For example, consider the following aggregation pipeline filter:\n\n```\n[\n  { $project: { rates: { $objectToArray: \"$rates\" } } },\n  { $unwind: \"$rates\" },\n  { $project: { _id: 0, \"date\": \"$rates.k\", \"Value\": \"$rates.v\" } }\n]\n```\n\nThis transforms the data into documents that each have two key-value pairs: the date and the corresponding rates.\n\n```\n{\n  date: '2022-01-03',\n  Value: {\n    AUD: 1.5691,\n    BGN: 1.9558,\n    BRL: 6.3539,\n    \u2026\n  }\n},\n{\n  date: '2022-01-04',\n  Value: {\n    AUD: 1.5682,\n    BGN: 1.9558,\n    BRL: 6.4174,\n    \u2026\n  }\n}\n```\n\nWe can add this aggregation pipeline filter into Charts and build out a chart comparing the US dollar (USD) to the Euro (EUR) over this time period.\n\nFor more information on MongoDB Charts, check out the YouTube video\u00a0\u201cIntro to MongoDB Charts (demo)\u201d\u00a0for a walkthrough of the feature.\n\n## Summary\n\nAirflow is an open-source workflow scheduler used by many enterprises throughout the world.\u00a0 Integrating MongoDB with Airflow is simple using the MongoHook. Astronomer makes it easy to quickly spin up a local Airflow deployment.
Astronomer also has a\u00a0registry\u00a0that provides a central place for Airflow operators, including the MongoHook and MongoSensor.\u00a0\n\n## Useful resources\nLearn more about\u00a0Astronomer, and check out the\u00a0MongoHook\u00a0documentation.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to integrate MongoDB within your Airflow DAGs.", "contentType": "Tutorial"}, "title": "Using MongoDB with Apache Airflow", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/best-practices-google-cloud-functions-atlas", "action": "created", "body": "# Best Practices and a Tutorial for Using Google Cloud Functions with MongoDB Atlas\n\nServerless applications are becoming increasingly popular among developers. They provide a cost-effective and efficient way to handle application logic and data storage. Two of the most popular technologies that can be used together to build serverless applications are Google Cloud Functions and MongoDB Atlas.\n\nGoogle Cloud Functions allows developers to run their code in response to events, such as changes in data or HTTP requests, without having to manage the underlying infrastructure. This makes it easy to build scalable and performant applications. MongoDB Atlas, on the other hand, provides a fully-managed, globally-distributed, and highly-available data platform. This makes it easy for developers to store and manage their data in a reliable and secure way.\n\nIn this article, we'll discuss three best practices for working with databases in Google Cloud Functions. First, we'll explore the benefits of opening database connections in the global scope. Then, we'll cover how to make your database operations idempotent to ensure data consistency in event-driven functions. Finally, we'll discuss how to set up a secure network connection to protect your data from unauthorized access. By following these best practices, you can build more reliable and secure event-driven functions that work seamlessly with your databases.\n\n## Prerequisites\n\nThe minimal requirements for following this tutorial are:\n\n* A MongoDB Atlas database with a database user and appropriate network configuration.\n* A Google Cloud account with billing enabled.\n* Cloud Functions, Cloud Build, Artifact Registry, Cloud Run, Logging, and Pub/Sub APIs enabled. Follow this link to enable the required APIs.\n\nYou can try the experiments shown in this article yourself. Both MongoDB Atlas and Cloud Functions offer a free tier which are sufficient for the first two examples. The final example \u2014 setting up a VPC network or Private Service Connect \u2014 requires setting up a paid, dedicated Atlas database and using paid Google Cloud features. \n\n## Open database connections in the global scope\n\nLet\u2019s say that we\u2019re building a traditional, self-hosted application that connects to MongoDB. We could open a new connection every time we need to communicate with the database and then immediately close that connection. But opening and closing connections adds an overhead both to the database server and to our app. It\u2019s far more efficient to reuse the same connection every time we send a request to the database. Normally, we\u2019d connect to the database using a MongoDB driver when we start the app, save the connection to a globally accessible variable, and use it to send requests. As long as the app is running, the connection will remain open. 
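\n\nAs a minimal sketch of that traditional pattern (shown here in Python with PyMongo for brevity; the Cloud Functions examples later in this article use Node.js), the client is created once at startup and reused by every request handler:\n\n```python\nimport os\nfrom pymongo import MongoClient\n\n# Created once when the process starts; the driver maintains a connection pool behind it.\nclient = MongoClient(os.environ[\"ATLAS_URI\"])\n\ndef handle_request(doc):\n    # Reuses the long-lived client instead of opening a new connection per request.\n    client[\"test\"][\"documents\"].insert_one(doc)\n```\n\n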
\n\nTo be more precise, when we connect, the MongoDB driver creates a connection pool. This allows for concurrent requests to communicate with the database. The driver will automatically manage the connections in the pool, creating new ones when needed and closing them when they\u2019re idle. The pooling also limits the number of connections that can come from a single application instance (100 connections is the default).\n\nOn the other hand, Cloud Functions are serverless. They\u2019re very efficient at automatically scaling up when multiple concurrent requests come in, and down when the demand decreases. \n\nBy default, each function instance can handle only one request at a time. However, with Cloud Functions 2nd gen, you can configure your functions to handle concurrent requests. For example, if you set the concurrency parameter to 10, a single function instance will be able to work on a max of 10 requests at the same time. If we\u2019re careful about how we connect to the database, the requests will take advantage of the connection pool created by the MongoDB driver. In this section, we\u2019ll explore specific strategies for reusing connections.\n\nBy default, Cloud Functions can spin up to 1,000 new instances. However, each function instance runs in its own isolated execution context. This means that instances can\u2019t share a database connection pool. That\u2019s why we need to pay attention to the way we open database connections. If we have our concurrency parameter set to 1 and we open a new connection with each request, we will cause unnecessary overhead to the database or even hit the maximum connections limit.\n\nThat looks very inefficient! Thankfully, there\u2019s a better way to do it. We can take advantage of the way Cloud Functions reuses already-started instances.\n\nWe mentioned earlier that Cloud Functions scale by spinning up new instances to handle incoming requests. Creating a brand new instance is called a \u201ccold start\u201d and involves the following steps:\n\n1. Loading the runtime environment.\n2. Executing the global (instance-wide) scope of the function.\n3. Executing the body of the function defined as an \u201centry point.\u201d\n\nWhen the instance handles the request, it\u2019s not closed down immediately. If we get another request in the next few minutes, chances are high it will be routed to the same, already \u201cwarmed\u201d instance. But this time, only the \u201centry point\u201d function will be invoked. And what\u2019s more important is that the function will be invoked in the same execution environment. Practically, this means that everything we defined in the global scope can be reused \u2014 including a database connection! This will reduce the overhead of opening a new connection with every function invocation. \n\nWhile we can take advantage of the global scope for storing a reusable connection, there is no guarantee that a reusable connection will be used.\n\nLet\u2019s test this theory! We\u2019ll do the following experiment:\n\n1. We\u2019ll create two Cloud Functions that insert a document into a MongoDB Atlas database. We\u2019ll also attach an event listener that logs a message every time a new database connection is created.\n 1. The first function will connect to Atlas in the function scope.\n 2. The second function will connect to Atlas in the global scope.\n2. We\u2019ll send 50 concurrent requests to each function and wait for them to complete. 
In theory, after spinning up a few instances, Cloud Functions will reuse them to handle some of the requests.\n3. Finally, we\u2019ll inspect the logs to see how many database connections were created in each case.\n\nBefore starting, go back to your Atlas deployment and locate your connection string. Also, make sure you\u2019ve allowed access from anywhere in the network settings. Instead of this, we strongly recommend establishing a secure connection. \n\n### Creating the Cloud Function with function-scoped database connection\n\nWe\u2019ll use the Google Cloud console to conduct our experiment. Navigate to the Cloud Functions page and make sure you\u2019ve logged in, selected a project, and enabled all required APIs. Then, click on **Create function** and enter the following configuration:\n\n* Environment: **2nd gen**\n* Function name: **create-document-function-scope**\n* Region: **us-central-1**\n* Authentication: **Allow unauthenticated invocations**\n\nExpand the **Runtime, build, connections and security settings** section and under **Runtime environment variables**, add a new variable **ATLAS_URI** with your MongoDB Atlas connection string. Don\u2019t forget to replace the username and password placeholders with the credentials for your database user.\n\n> Instead of adding your credentials as environment variables in clear text, you can easily store them as secrets in Secret Manager. Once you do that, you\u2019ll be able to access them from your Cloud Functions.\n\nClick **Next**. It\u2019s time to add the implementation of the function. Open the `package.json` file from the left pane and replace its contents with the following:\n\n```json\n{\n \"dependencies\": {\n \"@google-cloud/functions-framework\": \"^3.0.0\",\n \"mongodb\": \"latest\"\n }\n}\n```\n\nWe\u2019ve added the `mongodb` package as a dependency. The package is used to distribute the MongoDB Node.js driver that we\u2019ll use to connect to the database.\n\nNow, switch to the **`index.js`** file and replace the default code with the following:\n\n```javascript\n// Global (instance-wide) scope\n// This code runs once (at instance cold-start)\nconst { http } = require('@google-cloud/functions-framework');\nconst { MongoClient } = require('mongodb');\n\nhttp('createDocument', async (req, res) => {\n // Function scope\n // This code runs every time this function is invoked\n const client = new MongoClient(process.env.ATLAS_URI);\n client.on('connectionCreated', () => {\n console.log('New connection created!');\n });\n\n // Connect to the database in the function scope\n try {\n await client.connect();\n\n const collection = client.db('test').collection('documents');\n\n const result = await collection.insertOne({ source: 'Cloud Functions' });\n\n if (result) {\n console.log(`Document ${result.insertedId} created!`);\n return res.status(201).send(`Successfully created a new document with id ${result.insertedId}`);\n } else {\n return res.status(500).send('Creating a new document failed!');\n }\n } catch (error) {\n res.status(500).send(error.message);\n }\n});\n```\n\nMake sure the selected runtime is **Node.js 16** and for entry point, replace **helloHttp** with **createDocument**. \n\nFinally, hit **Deploy**.\n\n### Creating the Cloud Function with globally-scoped database connection\n\nGo back to the list with functions and click **Create function** again. Name the function **create-document-global-scope**. The rest of the configuration should be exactly the same as in the previous function. 
Don\u2019t forget to add an environment variable called **ATLAS_URI** for your connection string. Click **Next** and replace the **`package.json`** contents with the same code we used in the previous section. Then, open **`index.js`** and add the following implementation:\n\n```javascript\n// Global (instance-wide) scope\n// This code runs once (at instance cold-start)\nconst { http } = require('@google-cloud/functions-framework');\nconst { MongoClient } = require('mongodb');\n\n// Use lazy initialization to instantiate the MongoDB client and connect to the database\nlet client;\nasync function getConnection() {\n if (!client) {\n client = new MongoClient(process.env.ATLAS_URI);\n client.on('connectionCreated', () => {\n console.log('New connection created!');\n });\n\n // Connect to the database in the global scope\n await client.connect();\n }\n\n return client;\n}\n\nhttp('createDocument', async (req, res) => {\n // Function scope\n // This code runs every time this function is invoked\n const connection = await getConnection();\n const collection = connection.db('test').collection('documents');\n\n try {\n const result = await collection.insertOne({ source: 'Cloud Functions' });\n\n if (result) {\n console.log(`Document ${result.insertedId} created!`);\n return res.status(201).send(`Successfully created a new document with id ${result.insertedId}`);\n } else {\n return res.status(500).send('Creating a new document failed!');\n }\n } catch (error) {\n res.status(500).send(error.message);\n }\n});\n```\n\nChange the entry point to **createDocument** and deploy the function.\n\nAs you can see, the only difference between the two implementations is where we connect to the database. To reiterate:\n\n* The function that connects in the function scope will create a new connection on every invocation.\n* The function that connects in the global scope will create new connections only on \u201ccold starts,\u201d allowing for some connections to be reused.\n\nLet\u2019s run our functions and see what happens! Click **Activate Cloud Shell** at the top of the Google Cloud console. Execute the following command to send 50 requests to the **create-document-function-scope** function:\n\n```shell\nseq 50 | xargs -Iz -n 1 -P 50 \\\n gcloud functions call \\\n create-document-function-scope \\\n --region us-central1 \\\n --gen2\n ```\n \nYou\u2019ll be prompted to authorize Cloud Shell to use your credentials when executing commands. Click **Authorize**. After a few seconds, you should start seeing logs in the terminal window about documents being created. Wait until the command stops running \u2014 this means all requests were sent.\n\nThen, execute the following command to get the logs from the function:\n\n```shell\ngcloud functions logs read \\\n create-document-function-scope \\\n --region us-central1 \\\n --gen2 \\\n --limit 500 \\\n | grep \"New connection created\"\n ```\n \n We\u2019re using `grep` to filter only the messages that are logged whenever a new connection is created. You should see that a whole bunch of new connections were created!\n \n \n \n We can count them with the `wc -l` command:\n \n ```shell\n gcloud functions logs read \\\n create-document-function-scope \\\n --region us-central1 \\\n --gen2 \\\n --limit 500 \\\n | grep \"New connection created\" \\\n | wc -l\n ```\n \nYou should see the number 50 printed in the terminal window. 
This confirms our theory that a connection is created for each request.\n\nLet\u2019s repeat the process for the **create-document-global-scope** function.\n\n```shell\nseq 50 | xargs -Iz -n 1 -P 50 \\\n gcloud functions call \\\n create-document-global-scope \\\n --region us-central1 \\\n --gen2\n ```\n \nYou should see log messages about created documents again. When the command\u2019s finished, run:\n \n```shell\ngcloud functions logs read \\\n create-document-global-scope \\\n --region us-central1 \\\n --gen2 \\\n --limit 500 \\\n | grep \"New connection created\"\n ```\n \n This time, you should see significantly fewer new connections. You can count them again with `wc -l`. We have our proof that establishing a database connection in the global scope is more efficient than doing it in the function scope.\n\nWe noted earlier that increasing the number of concurrent requests for a Cloud Function can help alleviate the database connections issue. Let\u2019s expand a bit more on this.\n\n### Concurrency with Cloud Functions 2nd gen and Cloud Run\n\nBy default, Cloud Functions can only process one request at a time. However, Cloud Functions 2nd gen are executed in a Cloud Run container. Among other benefits, this allows us to configure our functions to handle multiple concurrent requests. Increasing the concurrency capacity brings Cloud Functions closer to a way traditional server applications communicate with a database. \n\nIf your function instance supports concurrent requests, you can also take advantage of connection pooling. As a reminder, the MongoDB driver you\u2019re using will automatically create and maintain a pool with connections that concurrent requests will use.\n\nDepending on the use case and the amount of work your functions are expected to do, you can adjust:\n\n* The concurrency settings of your functions.\n* The maximum number of function instances that can be created.\n* The maximum number of connections in the pool maintained by the MongoDB driver.\n\nAnd as we proved, you should always declare your database connection in the global scope to persist it between invocations.\n\n## Make your database operations idempotent in event-driven functions\n\nYou can enable retrying for your event-driven functions. If you do that, Cloud Functions will try executing your function again and again until it completes successfully or the retry period ends. \n\nThis functionality can be useful in many cases, namely when dealing with intermittent failures. However, if your function contains a database operation, executing it more than once can create duplicate documents or other undesired results. \n\nLet\u2019s consider the following example: The function **store-message-and-notify** is executed whenever a message is published to a specified Pub/Sub topic. The function saves the received message as a document in MongoDB Atlas and then uses a third-party service to send an SMS. However, the SMS service provider frequently fails and the function throws an error. We have enabled retries, so Cloud Functions tries executing our function again. If we weren\u2019t careful with the implementation, we could duplicate the message in our database.\n\nHow do we handle such scenarios? How do we make our functions safe to retry? We have to ensure that the function is idempotent. Idempotent functions produce exactly the same result regardless of whether they were executed once or multiple times. 
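\n\nTo illustrate the idea in isolation (a Python/PyMongo sketch with hypothetical field names; the tutorial below solves this differently, with an event ID and a unique index), compare a plain insert with an upsert keyed on a unique identifier:\n\n```python\nimport os\nfrom pymongo import MongoClient\n\nmessages = MongoClient(os.environ[\"ATLAS_URI\"])[\"test\"][\"messages\"]\n\ndef store_non_idempotent(message):\n    # Every retry inserts another copy of the same message.\n    messages.insert_one({\"text\": message})\n\ndef store_idempotent(event_id, message):\n    # Keyed on the event ID: retries match the same document instead of adding new ones.\n    messages.update_one(\n        {\"event_id\": event_id},\n        {\"$setOnInsert\": {\"event_id\": event_id, \"text\": message}},\n        upsert=True,\n    )\n```\n\n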
If we insert a database document without a uniqueness check, we make the function non-idempotent.\n\nLet\u2019s give this scenario a try.\n\n### Creating the event-driven non-idempotent Cloud Function\n\nGo to Cloud Functions and start configuring a new function:\n\n* Environment: **2nd gen**\n* Function name: **store-message-and-notify**\n* Region: **us-central-1**\n* Authentication: **Require authentication**\n\nThen, click on **Add Eventarc Trigger** and select the following in the opened dialog:\n\n* Event provider: **Cloud Pub/Sub**\n* Event: **google.cloud.pubsub.topic.v1.messagePublished**\n\nExpand **Select a Cloud Pub/Sub topic** and then click **Create a topic**. Enter **test-topic** for the topic ID, and then **Create topic**.\n\nFinally, enable **Retry on failure** and click **Save trigger**. Note that the function will always retry on failure even if the failure is caused by a bug in the implementation.\n\nAdd a new environment variable called **ATLAS_URI** with your connection string and click **Next**. \n\nReplace the **`package.json`** with the one we used earlier and then, replace the **`index.js`** file with the following implementation:\n\n```javascript\nconst { cloudEvent } = require('@google-cloud/functions-framework');\nconst { MongoClient } = require('mongodb');\n\n// Use lazy initialization to instantiate the MongoDB client and connect to the database\nlet client;\nasync function getConnection() {\n if (!client) {\n client = new MongoClient(process.env.ATLAS_URI);\n await client.connect();\n }\n\n return client;\n}\n\ncloudEvent('processMessage', async (cloudEvent) => {\n let message;\n try {\n const base64message = cloudEvent?.data?.message?.data;\n message = Buffer.from(base64message, 'base64').toString();\n } catch (error) {\n console.error('Invalid message', cloudEvent.data);\n return Promise.resolve();\n }\n\n try {\n await store(message);\n } catch (error) {\n console.error(error.message);\n throw new Error('Storing message in the database failed.');\n }\n\n if (!notify()) {\n throw new Error('Notification service failed.');\n }\n});\n\nasync function store(message) {\n const connection = await getConnection();\n const collection = connection.db('test').collection('messages');\n await collection.insertOne({\n text: message\n });\n}\n\n// Simulate a third-party service with a 50% fail rate\nfunction notify() {\n return Math.floor(Math.random() * 2);\n}\n```\n\nThen, navigate to the Pub/Sub topic we just created and go to the **Messages** tab. Publish a few messages with different message bodies.\n\nNavigate back to your Atlas deployments. You can inspect the messages stored in the database by clicking **Browse Collections** in your cluster tile and then selecting the **test** database and the **messages** collection. You\u2019ll notice that some of the messages you just published are duplicated. This is because when the function is retried, we store the same message again.\n\nOne obvious way to try to fix the idempotency of the function is to switch the two operations. We could execute the `notify()` function first and then, if it succeeds, store the message in the database. But what happens if the database operation fails? If that was a real implementation, we wouldn\u2019t be able to unsend an SMS notification. So, the function is still non-idempotent. 
Let\u2019s look for another solution.\n\n### Using the event ID and unique index to make the Cloud Function idempotent\n\nEvery time the function is invoked, the associated event is passed as an argument together with an unique ID. The event ID remains the same even when the function is retried. We can store the event ID as a field in the MongoDB document. Then, we can create a unique index on that field. That way, storing a message with a duplicate event ID will fail.\n\nConnect to your database from the MongoDB Shell and execute the following command to create a unique index:\n\n```shell\ndb.messages.createIndex({ \"event_id\": 1 }, { unique: true })\n```\n\nThen, click on **Edit** in your Cloud Function and replace the implementation with the following:\n\n```javascript\nconst { cloudEvent } = require('@google-cloud/functions-framework');\nconst { MongoClient } = require('mongodb');\n\n// Use lazy initialization to instantiate the MongoDB client and connect to the database\nlet client;\nasync function getConnection() {\n if (!client) {\n client = new MongoClient(process.env.ATLAS_URI);\n await client.connect();\n }\n\n return client;\n}\n\ncloudEvent('processMessage', async (cloudEvent) => {\n let message;\n try {\n const base64message = cloudEvent?.data?.message?.data;\n message = Buffer.from(base64message, 'base64').toString();\n } catch (error) {\n console.error('Invalid message', cloudEvent.data);\n return Promise.resolve();\n }\n\n try {\n await store(cloudEvent.id, message);\n } catch (error) {\n // The error E11000: duplicate key error for the 'event_id' field is expected when retrying\n if (error.message.includes('E11000') && error.message.includes('event_id')) {\n console.log('Skipping retrying because the error is expected...');\n return Promise.resolve();\n }\n \n console.error(error.message);\n throw new Error('Storing message in the database failed.');\n }\n\n if (!notify()) {\n throw new Error('Notification service failed.');\n }\n});\n\nasync function store(id, message) {\n const connection = await getConnection();\n const collection = connection.db('test').collection('messages');\n await collection.insertOne({\n event_id: id,\n text: message\n });\n}\n\n// Simulate a third-party service with a 50% fail rate\nfunction notify() {\n return Math.floor(Math.random() * 2);\n}\n```\n\nGo back to the Pub/Sub topic and publish a few more messages. Then, inspect your data in Atlas, and you\u2019ll see the new messages are not getting duplicated anymore.\n\nThere isn\u2019t a one-size-fits-all solution to idempotency. For example, if you\u2019re using update operations instead of insert, you might want to check out the `upsert` option and the `$setOnInsert` operator.\n\n## Set up a secure network connection\n\nTo ensure maximum security for your Atlas cluster and Google Cloud Functions, establishing a secure connection is imperative. Fortunately, you have several options available through Atlas that allow us to configure private networking.\n\nOne such option is to set up Network Peering between the MongoDB Atlas database and Google Cloud. Alternatively, you can create a private endpoint utilizing Private Service Connect. Both of these methods provide robust solutions for securing the connection.\n\nIt is important to note, however, that these features are not available for use with the free Atlas M0 cluster. 
To take advantage of these enhanced security measures, you will need to upgrade to a dedicated cluster at the M10 tier or higher.\n\n## Wrap-up\n\nIn conclusion, Cloud Functions and MongoDB Atlas are a powerful combination for building efficient, scalable, and cost-effective applications. By following the best practices outlined in this article, you can ensure that your application is robust, performant, and able to handle any amount of traffic. From using proper indexes to securing your network, these tips will help you make the most of these two powerful tools and build applications that are truly cloud-native. So start implementing these best practices today and take your cloud development to the next level! If you haven\u2019t already, you can subscribe to MongoDB Atlas and create your first free cluster right from the Google Cloud marketplace.\n", "format": "md", "metadata": {"tags": ["Atlas", "Google Cloud"], "pageDescription": "In this article, we'll discuss three best practices for working with databases in Google Cloud Functions.", "contentType": "Article"}, "title": "Best Practices and a Tutorial for Using Google Cloud Functions with MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/stitch-aws-rekognition-images", "action": "created", "body": "# Using AWS Rekognition to Analyse and Tag Uploaded Images\n\n>Please note: This article discusses Stitch. Stitch is now MongoDB Realm. All the same features and functionality, now with a new name. Learn more here. We will be updating this article in due course.\n\nComputers can now look at a video or image and know what's going on and, sometimes, who's in it. Amazon Web Service Rekognition gives your applications the eyes it needs to label visual content. In the following, you can see how to use Rekognition along with MongoDB Stitch to supplement new content with information as it is inserted into the database.\n\nYou can easily detect labels or faces in images or videos in your MongoDB application using the built-in AWS service. Just add the AWS service and use the Stitch client to execute the AWS SES request right from your React.js application or create a Stitch function and Trigger. In a recent Stitchcraft live coding session on my Twitch channel, I wanted to tag an image using label detection. I set up a trigger that executed a function after an image was uploaded to my S3 bucket and its metadata was inserted into a collection.\n\n``` javascript\nexports = function(changeEvent) {\n const aws = context.services.get('AWS');\n const mongodb = context.services.get(\"mongodb-atlas\");\n const insertedPic = changeEvent.fullDocument;\n\n const args = {\n Image: {\n S3Object: {\n Bucket: insertedPic.s3.bucket,\n Name: insertedPic.s3.key\n }\n },\n MaxLabels: 10,\n MinConfidence: 75.0\n };\n\n return aws.rekognition()\n .DetectLabels(args)\n .then(result => {\n return mongodb\n .db('data')\n .collection('picstream')\n .updateOne({_id: insertedPic._id}, {$set: {tags: result.Labels}});\n });\n};\n```\n\nWith just a couple of service calls, I was able to take an image, stored in S3, analyse it with Rekognition, and add the tags to its document. Want to see how it all came together? Watch the recording on YouTube with the Github repo in the description. Follow me on Twitch to join me and ask questions live.\n\n\u2705 Already have an AWS account? 
Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript", "AWS"], "pageDescription": "Use MongoDB with AWS Rekognition to tag and analyse images.", "contentType": "Article"}, "title": "Using AWS Rekognition to Analyse and Tag Uploaded Images", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/designing-strategy-develop-game-unity-mongodb", "action": "created", "body": "# Designing a Strategy to Develop a Game with Unity and MongoDB\n\nWhen it comes to game development, you should probably have some ideas written down before you start writing code or generating assets. The same could probably be said about any kind of development, unless of course you're just messing around and learning something new.\n\nSo what should be planned before developing your next game?\n\nDepending on the type of game, you're probably going to want a playable frontend, otherwise known as the game itself, some kind of backend if you want an online component such as multiplayer, leaderboards, or similar, and then possibly a web-based dashboard to get information at a glance if you're on the operational side of the game and not a player.\n\nAdrienne Tacke, Karen Huaulme, and myself (Nic Raboy) are in the process of building a game. We think Fall Guys: Ultimate Knockout is a very well-made game and thought it'd be interesting to create a tribute game that is a little more on the retro side, but with a lot of the same features. The game will be titled, Plummeting People. This article explores the planning, design, and development process!\n\nTake a look at the Jamboard we've created so far:\n\nThe above Jamboard was created during a planning stream on Twitch where the community participated. The content that follows is a summary of each of the topics discussed and helpful information towards planning the development of a game.\n\n## Planning the Game Experience with a Playable Frontend\n\nThe game is what most will see and what most will ever care about. It should act as the driver to every other component that operates behind the scenes.\n\nRather than try to invade the space of an already great game that we enjoy (Fall Guys), we wanted to put our own spin on things by making it 2D rather than 3D. With Fall Guys being the basic idea behind what we wanted to accomplish, we needed to further break down what the game would need. We came to a few conclusions.\n\n**Levels / Arenas**\n\nWe need a few arenas to be able to call it a game worth playing, but we didn't want it to be as thought out as the game that inspired our idea. At the end of the day, we wanted to focus more on the development journey than making a blockbuster hit.\n\nFall Guys, while considered a battle royale, is still a racing game at its core. So what kind of arenas would make sense in a 2D setting?\n\nOur plan is to start with the simplest level concepts to save us from complicated game physics and engineering. There are two levels in particular that have basic collisions as the emphasis in Fall Guys. These levels include \"Door Dash\" and \"Tip Toe\" which focus on fake doors and disappearing floor tiles. 
Both of which have no rotational physics and nothing beyond basic collisions and randomization.\n\nWhile we could just stick with two basic levels as our proof of concept, we have a goal for a team arena such as scoring goals at soccer (football).\n\n**Assets**\n\nThe arena concepts are important, but in order to execute, game assets will be necessary.\n\nWe're considering the following game assets a necessary part of our game:\n\n- Arena backgrounds\n- Obstacle images\n- Player images\n- Sound effects\n- Game music\n\nTo maintain the spirit of the modern battle royale game, we thought player customizations were a necessary component. This means we'll need customized sprites with different outfits that can be unlocked throughout the gameplay experience.\n\n**Gameplay Physics and Controls**\n\nLevel design and game assets are only part of a game. They are quite meaningless unless paired with the user interaction component. The user needs to be able to control the player, interact with other players, and interact with obstacles in the arena. For this we'll need to create our own gameplay logic using the assets that we create.\n\n## Maintaining an Online, Multiplayer Experience with a Data Driven Backend\n\nWe envision the bulk of our work around this tribute game will be on the backend. Moving around on the screen and interacting with obstacles is not too difficult of a task as demonstrated in a previous tutorial that I wrote.\n\nInstead, the online experience will require most of our attention. Our first round of planning came to the following conclusions:\n\n**Real-Time Interaction with Sockets**\n\nWhen the player does anything in the game, it needs to be processed by the server and broadcasted to other players in the game. This needs to be real-time and sockets is probably the only logical solution to this. If the server is managing the sockets, data can be stored in the database about the players, and the server can also validate interactions to prevent cheating.\n\n**Matchmaking Players with Games**\n\nWhen the game is live, there will be simultaneous games in operation, each with their own set of players. We'll need to come up with a matchmaking solution so that players can only be added to a game that is accepting players and these players must fit certain criteria.\n\nThe matchmaking process might serve as a perfect opportunity to use aggregation\npipelines within MongoDB. For example, let's say that you have 5 wins and 1000 losses. You're not a very good player, so you probably shouldn't end up in a match with a player that has 1000 wins and 5 losses. These are things that we can plan for from a database level.\n\n**User Profile Stores**\n\nUser profile stores are one of the most common components for any online game. These store information about the player such as the name and billing information for the player as well as gaming statistics. Just imagine that everything you do in a game will end up in a record for your player.\n\nSo what might we store in a user profile store? What about the following?:\n\n- Unlocked player outfits\n- Wins, losses, experience points\n- Username\n- Play time\n\nThe list could go on endlessly.\n\nThe user profile store will have to be carefully planned because it is the baseline for anything data related in the game. 
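\n\nAs a purely hypothetical sketch of what a player profile document might look like, based on the fields listed above (the real design of the user profile store comes later in this series, and the connection string, database, and collection names here are placeholders):\n\n```python\nfrom pymongo import MongoClient\n\nprofiles = MongoClient(\"<ATLAS_URI>\")[\"plummeting_people\"][\"player_profiles\"]\n\nprofiles.insert_one({\n    \"username\": \"player_one\",\n    \"stats\": {\"wins\": 5, \"losses\": 1000, \"experience_points\": 12500},\n    \"play_time_minutes\": 840,\n    \"unlocked_outfits\": [\"default\", \"crown\"],\n})\n```\n\n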
It will affect the matchmaking process, leaderboards, historical data, and so much more.\n\nTo get an idea of what we're putting into the user profile store, check out a recorded Twitch stream we did on the topic.\n\n**Leaderboards**\n\nSince this is a competitive game, it makes sense to have a leaderboard. However this leaderboard can be a little more complicated than just your name and your experience points. What if we wanted to track who has the most wins, losses, steps, play time, etc.? What if we wanted to break it down further to see who was the leader in North America, Europe, or Asia? We could use MongoDB geospatial queries around the location of players.\n\nAs long as we're collecting game data for each player, we can come up with some interesting leaderboard ideas.\n\n**Player Statistics**\n\nWe know we're going to want to track wins and losses for each player, but we might want to track more. For example, maybe we want to track how many steps a player took in a particular arena, or how many times they fell. This information could be later passed through an aggregation pipeline in MongoDB to determine a rank or level which could be useful for matchmaking and leaderboards.\n\n**Player Chat**\n\nWould it be an online multiplayer game without some kind of chat? We were thinking that while a player was in matchmaking, they could chat with each other until the game started. This chat data would be stored in MongoDB and we could implement Atlas Search functionality to look for signs of abuse, foul language, etc., that might appear throughout the chat.\n\n## Generating Reports and Logical Metrics with an Admin Dashboard\n\nAs an admin of the game, we're going to want to collect information to make the game better. Chances are we're not going to want to analyze that information from within the game itself or with raw queries against the database.\n\nFor this, we're probably going to want to create dashboards, reports, and other useful tools to work with our data on a regular basis. Here are some things that we were thinking about doing:\n\n**MongoDB Atlas Charts**\n\nIf everything has been running smooth with the game and the data-collection of the backend, we've got data, so we just need to visualize it. MongoDB Atlas Charts can take that data and help us make sense of it. Maybe we want to show a heatmap at different hours of the day for different regions around the world, or maybe we want to show a bar graph around player experience points. Whatever the reason may be, Atlas Charts would make sense in an admin dashboard setting.\n\n**Offloading Historical Data**\n\nDepending on the popularity of the game, data will be coming into MongoDB like a firehose. 
To help with scaling and pricing, it will make sense to offload historical data from our cluster to a cloud object storage in order to save on costs and improve our cluster's performance by removing historical data.\n\nIn MongoDB Atlas, the best way to do this is to enable Online Archive which allows you to set rules to automatically archive your data to a fully-managed cloud storage while retaining access to query that data.\n\nYou can also leverage MongoDB Atlas Data Lake to connect your own cloud storage - Amazon S3 of Microsoft Blob Storage buckets and run Federated Queries to access your entire data set using MQL and the Aggregation Framework.\n\n## Conclusion\n\nLike previously mentioned, this article is a starting point for a series of articles that are coming from Adrienne Tacke, Karen\nHuaulme, and myself (Nic Raboy), around a Fall Guys tribute game that we're calling Plummeting People. Are we trying to compete with Fall Guys? Absolutely not! We're trying to show the thought process around designing and developing a game that leverages MongoDB and since Fall Guys is such an awesome game, we wanted to pay tribute to it.\n\nThe next article in the series will be around designing and developing the user profile store for the game. It will cover the data model, queries, and some backend server code for managing the future interactions between the game and the server.\n\nWant to discuss this planning article or the Twitch stream that went with it? Join us in the community thread that we created.", "format": "md", "metadata": {"tags": ["C#", "Unity"], "pageDescription": "Learn how to design a strategy towards developing the next big online game that uses MongoDB.", "contentType": "Tutorial"}, "title": "Designing a Strategy to Develop a Game with Unity and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/upgrade-fearlessly-stable-api", "action": "created", "body": "# Upgrade Fearlessly with the MongoDB Stable API\n\nDo you hesitate to upgrade MongoDB, for fear the new database will be incompatible with your existing code?\n\nOnce you've written and deployed your MongoDB application, you want to be able to upgrade your MongoDB database at will, without worrying that a behavior change will break your application. In the past, we've tried our best to ensure each database release is backward-compatible, while also adding new features. But sometimes we've had to break compatibility, because there was no other way to fix an issue or improve behavior. Besides, we didn't have a single definition of backward compatibility.\n\nSolving this problem is more important now: We're releasing new versions four times a year instead of one, and we plan to go faster in the future. We want to help you upgrade frequently and take advantage of new features, but first you must feel confident you can upgrade safely. Ideally, you could immediately upgrade all your applications to the latest MongoDB whenever we release.\n\nThe MongoDB Stable API is how we will make this possible. The Stable API encompasses the subset of MongoDB commands that applications commonly use to read and write data, create collections and indexes, and so on. We commit to keeping these commands backward-compatible in new MongoDB versions. We can add new features (such as new command parameters, new aggregation operators, new commands, etc.) 
to the Stable API, but only in backward-compatible ways.\n\nWe follow this principle:\n\n> For any API version V, if an application declares API version V and uses only behaviors in V, and it is deployed along with a specific version of an official driver, then it will experience no semantically significant behavior changes resulting from database upgrades so long as the new database supports V.\n\n(What's a semantically **insignificant** behavior change? Examples include the text of some error message, the order of a query result if you **don't** explicitly sort it, or the performance of a particular query. Behaviors like these, which are not documented and don't affect correctness, may change from version to version.)\n\nTo use the Stable API, upgrade to the latest driver and create your application's MongoClient like this:\n\n```js\nclient = MongoClient(\n \"mongodb://host/\",\n api={\"version\": \"1\", \"strict\": True})\n ```\n\nFor now, \"1\" is the only API version. Passing \"strict\": True means the database will reject all commands that aren't in the Stable API. For example, if you call replSetGetStatus, which isn't in the Stable API, you'll receive an error:\n\n```js\n{\n \"ok\" : 0,\n \"errmsg\" : \"Provided apiStrict:true, but replSetGetStatus is not in API Version 1\",\n \"code\" : 323,\n \"codeName\" : \"APIStrictError\"\n}\n```\n\nRun your application's test suite with the new MongoClient options, see what commands and features you're using that are outside the Stable API, and migrate to versioned alternatives. For example, \"mapreduce\" is not in the Stable API but \"aggregate\" is. Once your application uses only the Stable API, you can redeploy it with the new MongoClient options, and be confident that future database upgrades won't affect your application.\n\nThe mongosh shell now supports the Stable API too:\n\n```bash\nmongosh --apiVersion 1 --apiStrict\n```\n\nYou may need to use unversioned features in some part of your application, perhaps temporarily while you are migrating to the Stable API, perhaps permanently. The **escape hatch** is to create a non-strict MongoClient and use it just for using unversioned features:\n\n```PYTHON\n# Non-strict client.\nclient = MongoClient(\n \"mongodb://host/\",\n api={\"version\": \"1\", \"strict\": False})\n\nclient.admin.command({\"replSetGetStatus\": 1})\n```\n\nThe \"strict\" option is false by default, I'm just being explicit here. Use this non-strict client for the few unversioned commands your application needs. Be aware that we occasionally make backwards-incompatible changes in these commands.\n\nThe only API version that exists today is \"1\", but in the future we'll release new API versions. This is exciting for us: MongoDB has a few warts that we had to keep for compatibility's sake, but the Stable API gives us a safe way to remove them. Consider the following:\n\n```PYTHON\nclient = MongoClient(\"mongodb://host\")\nclient.test.collection.insertOne({\"a\": 1]})\n\n# Strangely, this matches the document above.\nresult = client.test.collection.findOne(\n {\"a.b\": {\"$ne\": null}})\n ```\n\nIt's clearly wrong that `{\"a\": [1]}` matches the query `{\"a.b\": {\"$ne\": null}}`, but we can't fix this behavior, for fear that users' applications rely on it. The Stable API gives us a way to safely fix this. 
We can provide cleaner query semantics in Version 2:\n\n```PYTHON\n# Explicitly opt in to new behavior.\nclient = MongoClient(\n \"mongodb://host/\",\n api={\"version\": \"2\", \"strict\": True})\n\nclient.test.collection.insertOne({\"a\": [1]})\n\n# New behavior: doesn't match document above.\nresult = client.test.collection.findOne(\n {\"a.b\": {\"$ne\": null}})\n ```\n \nFuture versions of MongoDB will support **both** Version 1 and 2, and we'll maintain Version 1 for many years. Applications requesting the old or new versions can run concurrently against the same database. The default behavior will be Version 1 (for compatibility with old applications that don't request a specific version), but new applications can be written for Version 2 and get the new, obviously more sensible behavior.\n\nOver time we'll deprecate some Version 1 features. That's a signal that when we introduce Version 2, those features won't be included. (Future MongoDB releases will support both Version 1 with deprecated features, and Version 2 without them.) When the time comes for you to migrate an existing application from Version 1 to 2, your first step will be to find all the deprecated features it uses:\n\n```PYTHON\n# Catch uses of features deprecated in Version 1.\nclient = MongoClient(\n \"mongodb://host/\",\n api={\"version\": \"1\",\n \"strict\": True,\n \"deprecationErrors\": True})\n``` \n\nThe database will return an APIDeprecationError whenever your code tries to use a deprecated feature. Once you've run your tests and fixed all the errors, you'll be ready to test your application with Version 2.\n\nVersion 2 might be a long way off, though. Until then, we're continuing to add features and make improvements in Version 1. We'll introduce new commands, new options, new aggregation operators, and so on. Each change to Version 1 will be an **extension** of the existing API, and it will never affect existing application code. With quarterly releases, we can improve MongoDB faster than ever before. Once you've upgraded to 5.0 and migrated your app to the Stable API, you can always use the latest release fearlessly.\n\nYou can try out the Stable API with the MongoDB 5.0 Release Candidate, which is available now from our [Download Center. \n\n## Appendix\n\nHere's a list of commands included in API Version 1 in MongoDB 5.0. You can call these commands with version \"1\" and strict: true. (But of course, you can also call them without configuring your MongoClient's API version at all, just like before.) We won't make backwards-incompatible changes to any of these commands. In future releases, we may add features to these commands, and we may add new commands to Version 1.\n\n* abortTransaction\n* aggregate\n* authenticate\n* collMod\n* commitTransaction\n* create\n* createIndexes\n* delete\n* drop\n* dropDatabase\n* dropIndexes\n* endSessions\n* explain (we won't make incompatible changes to this command's input parameters, although its output format may change arbitrarily)\n* find\n* findAndModify\n* getMore\n* hello\n* insert\n* killCursors\n* listCollections\n* listDatabases\n* listIndexes\n* ping\n* refreshSessions\n* saslContinue\n* saslStart\n* update\n\n## Safe Harbor\n\nThe development, release, and timing of any features or functionality described for our products remains at our sole discretion. 
This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision nor is this a commitment, promise or legal obligation to deliver any material, code, or functionality.\n", "format": "md", "metadata": {"tags": ["MongoDB", "Python"], "pageDescription": "With the Stable API, you can upgrade to the latest MongoDB releases without introducing backward-breaking app changes. Learn what it is and how to use it.", "contentType": "Tutorial"}, "title": "Upgrade Fearlessly with the MongoDB Stable API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/influence-search-result-ranking-function-scores-atlas-search", "action": "created", "body": "# Influence Search Result Ranking with Function Scores in Atlas Search\n\nWhen it comes to natural language searching, it's useful to know how the order of the results for a query were determined. Exact matches might be obvious, but what about situations where not all the results were exact matches due to a fuzzy parameter, the `$near` operator, or something else?\n\nThis is where the document score becomes relevant.\n\nEvery document returned by a `$search` query in MongoDB Atlas Search is assigned a score based on relevance, and the documents included in a result set are returned in order from highest score to lowest.\n\nYou can choose to rely on the scoring that Atlas Search determines based on the query operators, or you can customize its behavior using function scoring and optimize it towards your needs. In this tutorial, we're going to see how the `function` option in Atlas Search can be used to rank results in an example.\n\nPer the documentation, the `function` option allows the value of a numeric field to alter the final score of the document. You can specify the numeric field for computing the final score through an expression. With this in mind, let's look at a few scenarios where this could be useful.\n\nLet's say that you have a review system like Yelp where the user needs to provide some search criteria such as the type of food they want to eat. By default, you're probably going to get results based on relevance to your search term as well as the location that you defined. In the examples below, I\u2019m using the sample restaurants data available in MongoDB Atlas.\n\nThe `$search` query (expressed as an aggregation pipeline) to make this search happen in MongoDB might look like the following:\n\n```json\n\n {\n \"$search\": {\n \"text\": {\n \"query\": \"korean\",\n \"path\": [ \"cuisine\" ],\n \"fuzzy\": {\n \"maxEdits\": 2\n }\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"name\": 1,\n \"cuisine\": 1,\n \"location\": 1,\n \"rating\": 1,\n \"score\": {\n \"$meta\": \"searchScore\"\n }\n }\n }\n]\n```\n\nThe above query is a two-stage aggregation pipeline in MongoDB. The first stage is searching for \"korean\" in the \"cuisine\" document path. A fuzzy factor is applied to the search so spelling mistakes are allowed. The document results from the first stage might be quite large, so in the second stage, we're specifying which fields to return for every document. 
This includes a search score that is not part of the original document, but part of the search results.\n\nAs a result, you might end up with the following results:\n\n```json\n[\n {\n \"location\": \"Jfk International Airport\",\n \"cuisine\": \"Korean\",\n \"name\": \"Korean Lounge\",\n \"rating\": 2,\n \"score\": 3.5087265968322754\n },\n {\n \"location\": \"Broadway\",\n \"cuisine\": \"Korean\",\n \"name\": \"Mill Korean Restaurant\",\n \"rating\": 4,\n \"score\": 2.995847225189209\n },\n {\n \"location\": \"Northern Boulevard\",\n \"cuisine\": \"Korean\",\n \"name\": \"Korean Bbq Restaurant\",\n \"rating\": 5,\n \"score\": 2.995847225189209\n }\n]\n```\n\nThe default ordering of the documents returned is based on the `score` value in descending order. The higher the score, the closer your match.\n\nIt's very unlikely that you're going to want to eat at the restaurants that have a rating below your threshold, even if they match your search term and are within the search location. With the `function` option, we can assign a point system to the rating and perform some arithmetic to give better rated restaurants a boost in your results.\n\nLet's modify the search query to look like the following:\n\n```json\n[\n {\n \"$search\": {\n \"text\": {\n \"query\": \"korean\",\n \"path\": [ \"cuisine\" ],\n \"fuzzy\": {\n \"maxEdits\": 2\n },\n \"score\": {\n \"function\": {\n \"multiply\": [\n {\n \"score\": \"relevance\"\n },\n {\n \"path\": {\n \"value\": \"rating\",\n \"undefined\": 1\n }\n }\n ]\n }\n }\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"name\": 1,\n \"cuisine\": 1,\n \"location\": 1,\n \"rating\": 1,\n \"score\": {\n \"$meta\": \"searchScore\"\n }\n }\n }\n]\n```\n\nIn the above two-stage aggregation pipeline, the part to pay attention to is the following:\n\n```json\n\"score\": {\n \"function\": {\n \"multiply\": [\n {\n \"score\": \"relevance\"\n },\n {\n \"path\": {\n \"value\": \"rating\",\n \"undefined\": 1\n }\n }\n ]\n }\n}\n```\n\nWhat we're saying in this part of the `$search` query is that we want to take the relevance score that we had already seen in the previous example and multiply it by whatever value is in the `rating` field of the document. This means that the score will potentially be higher if the rating of the restaurant is higher. If the restaurant does not have a rating, then we use a default multiplier value of 1.\n\nIf we run this query on the same data as before, we might now get results that look like this:\n\n```json\n[\n {\n \"location\": \"Northern Boulevard\",\n \"cuisine\": \"Korean\",\n \"name\": \"Korean Bbq Restaurant\",\n \"rating\": 5,\n \"score\": 14.979236125946045\n },\n {\n \"location\": \"Broadway\",\n \"cuisine\": \"Korean\",\n \"name\": \"Mill Korean Restaurant\",\n \"rating\": 4,\n \"score\": 11.983388900756836\n },\n {\n \"location\": \"Jfk International Airport\",\n \"cuisine\": \"Korean\",\n \"name\": \"Korean Lounge\",\n \"rating\": 2,\n \"score\": 7.017453193664551\n }\n]\n```\n\nSo now, while \"Korean BBQ Restaurant\" might be further in terms of location, it appears higher in our result set because the rating of the restaurant is higher.\n\nIncreasing the score based on rating is just one example. Another scenario could be to give search result priority to restaurants that are sponsors. A `function` multiplier could be used based on the sponsorship level.\n\nLet's look at a different use case. Say you have an e-commerce website that is running a sale. 
To push search products that are on sale higher in the list than items that are not on sale, you might use a `constant` score in combination with a relevancy score.\n\nAn aggregation that supports the above example might look like the following:\n\n```\ndb.products.aggregate([\n {\n \"$search\": {\n \"compound\": { \n \"should\": [\n { \n \"text\": { \n \"path\": \"promotions\", \n \"query\": \"July4Sale\", \n \"score\": { \n \"constant\": { \n \"value\": 1 \n }\n }\n }\n }\n ],\n \"must\": [ \n { \n \"text\": { \n \"path\": \"name\", \n \"query\": \"bose headphones\"\n }\n }\n ]\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"name\": 1,\n \"promotions\": 1,\n \"score\": { \"$meta\": \"searchScore\" }\n }\n }\n]);\n```\n\nTo get into the nitty gritty of the above two-stage pipeline, the first stage uses the [compound operator for searching. We're saying that the search results `must` satisfy \"bose headphones\" and if the result-set `should` contain \"July4Sale\" in the `promotions` path, then add a `constant` of one to the score for that particular result item to boost its ranking.\n\nThe `should` operator doesn't require its contents to be satisfied, so you could end up with headphone results that are not part of the \"July4Sale.\" Those result items just won't have their score increased by any value, and therefore would show up lower down in the list. The second stage of the pipeline just defines which fields should exist in the response.\n\n## Conclusion\n\nBeing able to customize how search result sets are scored can help you deliver more relevant content to your users. While we looked at a couple examples around the `function` option with the `multiply` operator, there are other ways you can use function scoring, like replacing the value of a missing field with a constant value or boosting the results of documents with search terms found in a specific path. You can find more information in the Atlas Search documentation.\n\nDon't forget to check out the MongoDB Community Forums to learn about what other developers are doing with Atlas Search.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "Learn how to influence the score of your Atlas Search results using a variety of operators and options.", "contentType": "Tutorial"}, "title": "Influence Search Result Ranking with Function Scores in Atlas Search", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/javascript/locator-app-code-example", "action": "created", "body": "# Find our Devices - A locator app built using Realm\n\nINTRODUCTION\n\nThis Summer, MongoDB hosted 112 interns, spread across departments such as MongoDB Cloud, Atlas, and Realm. These interns have worked on a vast array of projects using the MongoDB platform and technologies. One such project was created by two Software Engineering interns, Jos\u00e9 Pedro Martins and Linnea Jansson, on the MongoDB Realm team. \n\nUsing MongoDB Realm and React Native, they built an app to log and display the location and movement of a user\u2019s devices in real-time on a map. Users can watch as their device\u2019s position on the map updates in response to how its physical location changes in real life. Additionally, users can join groups and view the live location of devices owned by other group members. 
\n\nIn this article, I look forward to demonstrating the app\u2019s features, discussing how it uses MongoDB Realm, and reviewing some noteworthy scenarios which arose during its development.\n\nAPP OVERVIEW\n\nThe project, called *Find Our Devices*, is an app for iOS and Android which allows users to view the live location of their devices on a map. The demo video above demonstrates some key features and shows off the intuitive UI. Users can track multiple devices by installing the app, logging in with their email, and adding the current device to their account. \n\nFor each device, a new pin is added to the map to indicate the device\u2019s location. This feature is perfect if one of your devices has been lost or stolen, as you can easily track the location of your iOS and Android devices from one app. Instead of using multiple apps to track devices on android and iOS, the user can focus on retrieving their device. Indeed, if you\u2019re only interested in the location of one device, you can instantly find its location by selecting it from a dropdown menu. \n\nAdditionally, users can create groups with other users. In these groups, users can see both the location of their devices and the location of other group members' devices. Group members can also invite other users by inputting their email. If a user accepts an invitation, their devices' locations begin to sync to the map. They can also view the live location of other members\u2019 devices on the group map. \n\nThis feature is fantastic for families or groups of friends travelling abroad. If somebody gets lost, their location is still visible to everyone in the group, provided they have network connectivity. Alternatively, logistics companies could use the app to track their fleets. If each driver installs the app, HQ could quickly find the location of any vehicle in the fleet and predict delays or suggest alternative routes to drivers. If users want privacy, they can disable location sharing at any time, or leave the group.\n\nUSES OF REALM\n\nThis app was built using the MongoDB RealmJS SDK and React-Native and utilises many of Realm\u2019s features. For example, the authentication process of registration, logging in, and logging out is handled using Realm Email/Password authentication. Additionally, Realm enables a seamless data flow while updating device locations in groups, as demonstrated by the diagram below: \n\nAs a device moves, Realm writes the location to Atlas, provided the device has network connectivity. If the device doesn\u2019t have network connectivity, Realm will sync the data into Atlas when the device is back online. Once the data is in Atlas, Realm will propagate the changes to the other users in the group. Upon receiving the new data, a change listener in the app is notified of this update in the device's location. As a result, the pin\u2019s position on the map will update and users in the group can see the device\u2019s new location.\n\nAnother feature of Realm used in this project is shared realms. In the Realm task tracker tutorial, available here, all users in a group have read/write permission to the group partition. The developers allowed this, as group members were trusted to change any data in the group\u2019s shared resources. Indeed, this was encouraged, as it allowed team members to edit tasks created by other team members and mark them as completed. In this app, users couldn't have write permissions to the shared realm, as group members could modify other users' locations with write permission. 
The solution to this problem is shown in the diagram below. Group members only have read permissions for the shared realm, allowing them to read others' locations, but not edit them. You can learn more about Realm partitioning strategies here.\n\nFIXING A SECURITY VULNERABILITY\n\nSeveral difficult scenarios and edge cases came up during the development process. For example, in the initial version, users could write to the *Group Membership*(https://github.com/realm/FindOurDevices/blob/0b118053a3956d4415d40d9c059f6802960fc484/app/models/GroupMembership.js) class. The intention was that this permission would allow members to join new groups and write their new membership to Atlas from Realm. Unfortunately, this permission also created a security vulnerability, as the client could edit the *GroupMembership.groupId* value to anything they wanted. If they edited this value to another group\u2019s ID value, this change would be synced to Atlas, as the user had write permission to this class. Malicious users could use this vulnerability to join a group without an invitation and snoop on the group members' locations.\n\nDue to the serious ethical issues posed by this vulnerability, a fix needed to be found. Ultimately, the solution was to split the Device partition from the User partition and retract write permissions from the User class, as shown in the diagram below. Thanks to this amendment, users could no longer edit their *GroupMembership.groupId* value. As such, malicious actors could no longer join groups for which they had no invitation. Additionally, each device is now responsible for updating its location, as the Device partition is now separate from the User partition, with write permissions.\n\nCONCLUSION\n\nIn this blog post, we discussed a fascinating project built by two Realm interns this year. More specifically, we explored the functionality and use cases of the project, looked at how the project used MongoDB Realm, and examined a noteworthy security vulnerability that arose during development. \n\nIf you want to learn more about the project or dive into the code, you can check out the backend repository here and the frontend repository here. You can also build the project yourself by following the instructions in the ReadMe files in the two repositories. Alternatively, if you'd like to learn more about MongoDB, you can visit our community forums, sign up for MongoDB University, or sign up for the MongoDB newsletter!", "format": "md", "metadata": {"tags": ["JavaScript", "Realm", "iOS", "Android"], "pageDescription": "Build an example mobile application using realm for iOS and Android", "contentType": "Code Example"}, "title": "Find our Devices - A locator app built using Realm", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-hackathon-experience", "action": "created", "body": "# The MongoDB Realm Hackathon Experience\n\nWith Covid19 putting an end to in-person events, we wanted to engage directly with developers utilizing the recently announced MongoDB Realm public preview, and so the Realm Hackathon was conceived. This would be MongoDB's first digital Hackathon and we were delighted with the response. In the end, we ended up with nearly 300 registrations, which culminated in 23 teams coming together over the course of a day and half of learning, experimenting, and above all, having fun! 
The teams were predominantly European given the timezone of the Hackathon, but we did have participants from the US and also the Asia Pacific region, too.\n\nDuring the Hackathon, we engaged in \n- Team forming\n- Idea pitching\n- Q&A with the Realm Enginnering team behind many of the Realm SDKs\n- and of course, developing!\n\nWith 23 teams, there was a huge variation in concepts and ideas put forward for the Hackathon. From Covid19-influenced apps to chatbots to inventory tracking apps, the variety was superb. On the final morning, all teams had an opportunity to pitch their apps competitively and we (the judges) were highly impressed with the ingenuity, use of Realm, and the scope of what the teams accomplished in a 24-hour period. In the end, there can only be one winner, and we were delighted to award that title to Team PurpleBlack.\n\nTeam PurpleBlack created a MongoDB Realm-based mobile asset maintenance solution. Effective asset maintenance is critical to the success of any utility company. The solution included an offline iOS app for field technicians, a MongoDB Charts dashboard, and email notifications for administrators. Santaneel and Srinivas impressed with their grasp of Realm and their ambition to build a solution leveraging not only Realm but MongoDB Atlas, Charts, and Triggers. So, we asked Team PurpleBlack to share their experience in their own words, and we're thrilled to share this with you.\n\n>Guest post - by Santaneel Pyne of Team PurpleBlack - The MongoDB Realm Hackathon Experience!\n\n## THE MOTIVATION\n\nHackathons are always a fantastic experience. They are fun, exciting, and enriching all at the same time. This July, I participated in the first Realm Hackathon organised by MongoDB. Earlier in the year, while I was going through a list of upcoming Hackathons, I came across the Realm Hackathon. I was keen on participating in this hackathon as this was about building offline mobile apps. I am a Solution Architect working with On Device Solutions, and enterprise mobile apps are a key focus area for me. For the hackathon, I had teamed up with Srinivas Divakarla from Cognizant Technology Solutions. He is a technical lead and an experienced Swift developer. We named our team PurpleBlack. It is just another random name. Neither of us had any experience with MongoDB Realm. This was going to be our opportunity to learn. We went ahead with an open mind without too many expectations.\n\n## THE 'VIRTUAL' EXPERIENCE\n\nThis was our first fully online hackathon experience. The hackathon was spread across two days and it was hosted entirely on Zoom. The first day was the actual hack day and the next day was for presentations and awards. There were a couple of introductory sessions held earlier in the week to provide all participants a feel of the online hackathon. After the first session, we created our accounts in cloud.mongodb.com and made sure we had access to all the necessary tools and SDKs as mentioned during the introductory session. On the day of the hackathon, we joined the Zoom meeting and were greeted by the MongoDB team. As with any good hackathon, a key takeaway is interaction with the experts. It was no different in this case. We met the Realm experts - Kraen Hansen, Eduardo Lopez, Lee Maguire, Andrew Morgan, and Franck Franck. They shared their experience and answered questions from the participants.\n\nBy the end of the expert sessions, all participants were assigned a team. Each team was put into a private Zoom breakout room. 
The organisers and the Realm experts were in the Main Zoom room. We could toggle between the breakout room and the Main room when needed. It took us some time to get used to this. We started our hacking session with an end-to-end plan and distributed the work between ourselves. I took the responsibility of configuring the back-end components of our solution, like the cluster, collections, Realm app configurations, user authentication, functions, triggers, and charts. Srinivas was responsible for building the iOS app using the iOS SDK. Before we started working on our solution, we had allocated some time to understand the end-to-end architecture and underlying concepts. We achieved this by following the task tracker iOS app tutorial. We had spent a lot of time on this tutorial, but it was worth it as we were able to re-use several components from the task tracker app. After completing the tutorial, we felt confident working on our solution. We were able to quickly complete all the backend components and then\nstarted working on the iOS application. Once we were able to sync data between the app and the MongoDB collections, we were like, \"BINGO!\" We then added two features that we had not planned for earlier. These features were the email notifications and the embedded charts. We rounded-off Day 1 by providing finishing touches to our presentation.\n\nDay 2 started with the final presentations and demos from all the teams. Everyone was present in the Main Zoom room. Each team had five minutes to present. The presentations and demos from all the teams were great. This added a bit of pressure on us as we were slotted to present at the end. When our turn finally arrived, I breezed through the presentation and then the demo. The demo went smoothly and I was able to showcase all the features we had built.\n\nNext was the countdown to the award ceremony. The panel of judges went into a breakout room to select the winner. When the judges were back, they announced PurpleBlack as the winner of the first MongoDB Realm Hackathon!!\n\n## OUR IDEA\n\nTeam PurpleBlack created a MongoDB Realm-based mobile asset maintenance solution. Effective asset maintenance is critical to the success of any utility company. The solution included an offline iOS app for field technicians, a MongoDB Charts dashboard, and email notifications for Maintenance Managers or Administrators. Field technicians will download all relevant asset data into the mobile app during the initial synchronization. Later, when they are in a remote area without connectivity, they can scan a QR code fixed to an asset to view the asset details. Once the asset details are confirmed, an issue can be created against the identified asset. Finally, when the technicians are back online, the Realm mobile app will automatically synchronize all new issues with MongoDB Atlas. Functions and triggers help to send email notifications to an Administrator in case any high-priority issue is created. 
Administrators can view the charts dashboard to keep track of all issues created and take follow-up actions.\n\nTo summarise, our solution included the following features: \n- iOS app based on Realm iOS SDK\n- Secure user authentication using email-id and password\n- MongoDB Atlas as the cloud datastore\n- MongoDB Charts and embedded charts using the embedding SDK\n- Email notifications via the SendGrid API using Realm functions and triggers\n\nA working version of our iOS project can be found in our GitHub\nrepo.\n\nThis project is based on the Task Tracker app with some tweaks that helped us build the features we wanted. In our app, we wanted to download two objects into the same Realm - Assets and Issues. This means when a user successfully logs into the app, all assets and issues available in MongoDB Atlas will be downloaded to the client. Initially, a list of issues is displayed.\n\nFrom the issue list screen, the user can create a new issue by tapping the + button. Upon clicking this button, the app opens the camera to scan a barcode/QR code. The code will be the same as the asset ID of an asset. If the user scans an asset that is available in the Realm, then there is a successful match and the user can proceed to the next screen to create an asset. We illustrate how this is accomplished with the code below:\n\n``` Swift\nfunc scanCompleted(code: String)\n {\n currentBarcode = code\n // pass the scanned barcode to the CreateIssueViewController and Query MongoDB Realm\n let queryStr: String = \"assetId == '\"+code+\"'\";\n print(queryStr);\n print(\"issues that contain assetIDs: \\(assets.filter(queryStr).count)\");\n if(assets.filter(queryStr).count > 0 ){\n scanner?.requestCaptureSessionStopRunning()\n self.navigationController!.pushViewController(CreateIssueViewController(code: currentBarcode!, configuration: realm.configuration), animated: true);\n } else {\n self.showToast(message: \"No Asset found for the scanned code\", seconds: 0.6)\n }\n\n }\n```\n\nIn the next screen, the user can create a new issue against the identified asset.\n\nTo find out the asset details, the Asset object from Realm must be queried with the asset ID:\n\n``` Swift\nrequired init(with code: String, configuration: Realm.Configuration) {\n\n // Ensure the realm was opened with sync.\n guard let syncConfiguration = configuration.syncConfiguration else {\n fatalError(\"Sync configuration not found! Realm not opened with sync?\");\n }\n\n let realm = try! Realm(configuration: configuration)\n let queryStr: String = \"assetId == '\"+code+\"'\";\n scannedAssetCode = code\n assets = realm.objects(Asset.self).filter(queryStr)\n\n // Partition value must be of string type.\n partitionValue = syncConfiguration.partitionValue.stringValue!\n\n super.init(nibName: nil, bundle: nil)\n}\n```\n\nOnce the user submits the new issue, it is then written to the Realm:\n\n``` Swift\nfunc submitDataToRealm(){\n print(form.values())\n\n // Create a new Issue with the text that the user entered.\n let issue = Issue(partition: self.partitionValue)\n let createdByRow: TextRow? = form.rowBy(tag: \"createdBy\")\n let descriptionRow: TextRow? = form.rowBy(tag: \"description\")\n let priorityRow: SegmentedRow? = form.rowBy(tag: \"priority\")\n let issueIdRow: TextRow? = form.rowBy(tag: \"issueId\")\n\n issue.issueId = issueIdRow?.value ?? \"\"\n issue.createdBy = createdByRow?.value ?? \"\"\n issue.desc = descriptionRow?.value ?? \"\"\n issue.priority = priorityRow?.value ?? 
\"Low\"\n issue.status = \"Open\"\n issue.assetId = self.scannedAssetCode\n\n try! self.realm.write {\n // Add the Issue to the Realm. That's it!\n self.realm.add(issue)\n }\n\n self.navigationController!.pushViewController(TasksViewController( assetRealm: self.realm), animated: true);\n\n}\n```\n\nThe new entry is immediately synced with MongoDB Atlas and is available in the Administrator dashboard built using MongoDB Charts.\n\n## WRAPPING UP\n\nWinning the first MongoDB Realm hackathon was a bonus for us. We had registered for this hackathon just to experience the app-building process with Realm. Both of us had our share of the \"wow\" moments throughout the hackathon. What stood out at the end was the ease with which we were able to build new features once we understood the underlying concepts. We want to continue this learning journey and explore MongoDB Realm further.\n\nFollow these links to learn more - \n- GitHub Repo for Project\n- Realm Tutorial\n- Charts Examples\n- Sending Emails with MongoDB Stitch and SendGrid\n\nTo learn more, ask questions, leave feedback, or simply connect with other MongoDB developers, visit our community forums. Come to learn. Stay to connect.\n\n>Getting started with Atlas is easy. Sign up for a free MongoDB Atlas account to start working with all the exciting new features of MongoDB, including Realm and Charts, today!", "format": "md", "metadata": {"tags": ["Realm"], "pageDescription": "In July, MongoDB ran its first digital hackathon for Realm. Our winners, team \"PurpleBlack,\" share their experience of the Hackathon in this guest post.", "contentType": "Article"}, "title": "The MongoDB Realm Hackathon Experience", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/building-generative-ai-applications-vector-search-open-source-models", "action": "created", "body": "# Building Generative AI Applications Using MongoDB: Harnessing the Power of Atlas Vector Search and Open Source Models\n\nArtificial intelligence is at the core of what's being heralded as the fourth industrial revolution. There is a fundamental change happening in the way we live and the way we work, and it's happening right now. While AI and its applications across businesses are not new, recently, generative AI has become a hot topic worldwide with the incredible success of ChatGPT, the popular chatbot from OpenAI. It reached 100 million monthly active users in two months, becoming the fastest-growing consumer application. \n\nIn this blog, we will talk about how you can leverage the power of large language models (LLMs), the transformative technology powering ChatGPT, on your private data to build transformative AI-powered applications using MongoDB and Atlas Vector Search. We will also walk through an example of building a semantic search using Python, machine learning models, and Atlas Vector Search for finding movies using natural language queries. 
For instance, to find \u201cFunny movies with lead characters that are not human\u201d would involve performing a semantic search that understands the meaning and intent behind the query to retrieve relevant movie recommendations, and not just the keywords present in the dataset.\n\nUsing vector embeddings, you can leverage the power of LLMs for use cases like semantic search, a recommendation system, anomaly detection, and a customer support chatbot that are grounded in your private data.\n\n## What are vector embeddings?\n\nA vector is a list of floating point numbers (representing a point in an n-dimensional embedding space) and captures semantic information about the text it represents. For instance, an embedding for the string \"MongoDB is awesome\" using an open source LLM model called `all-MiniLM-L6-v2` would consist of 384 floating point numbers and look like this:\n\n```\n-0.018378766253590584, -0.004090079106390476, -0.05688102915883064, 0.04963553324341774, \u2026..\n\n....\n0.08254531025886536, -0.07415960729122162, -0.007168072275817394, 0.0672200545668602]\n```\n\nNote: Later in the tutorial, we will cover the steps to obtain vector embeddings like this.\n\n## What is vector search?\n\nVector search is a capability that allows you to find related objects that have a semantic similarity. This means searching for data based on meaning rather than the keywords present in the dataset. \n\nVector search uses machine learning models to transform unstructured data (like text, audio, and images) into numeric representation (called vector embeddings) that captures the intent and meaning of that data. Then, it finds related content by comparing the distances between these vector embeddings, using approximate k nearest neighbor (approximate KNN) algorithms. The most commonly used method for finding the distance between these vectors involves calculating the cosine similarity between two vectors. \n\n## What is Atlas Vector Search?\n\n[Atlas Vector Search is a fully managed service that simplifies the process of effectively indexing high-dimensional vector data within MongoDB and being able to perform fast vector similarity searches. With Atlas Vector Search, you can use MongoDB as a standalone vector database for a new project or augment your existing MongoDB collections with vector search functionality. \n\nHaving a single solution that can take care of your operational application data as well as vector data eliminates the complexities of using a standalone system just for vector search functionality, such as data transfer and infrastructure management overhead. With Atlas Vector Search, you can use the powerful capabilities of vector search in any major public cloud (AWS, Azure, GCP) and achieve massive scalability and data security out of the box while being enterprise-ready with provisions like SoC2 compliance.\n\n## Semantic search for movie recommendations\n\nFor this tutorial, we will be using a movie dataset containing over 23,000 documents in MongoDB. We will be using the `all-MiniLM-L6-v2` model from HuggingFace for generating the vector embedding during the index time as well as query time. But you can apply the same concepts by using a dataset and model of your own choice, as well. 
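\n\nBefore we start building, here is a tiny illustration of the similarity calculation mentioned earlier. Atlas Vector Search performs this comparison for you at query time, so the snippet below exists purely to make the idea of cosine similarity concrete; the two toy vectors are made up and far shorter than real embeddings.\n\n```python\nimport math\n\ndef cosine_similarity(a: list[float], b: list[float]) -> float:\n    # Dot product of the two vectors divided by the product of their magnitudes.\n    dot = sum(x * y for x, y in zip(a, b))\n    norm_a = math.sqrt(sum(x * x for x in a))\n    norm_b = math.sqrt(sum(y * y for y in b))\n    return dot / (norm_a * norm_b)\n\n# Toy 3-dimensional \"embeddings\" (real embeddings have hundreds of dimensions).\nprint(cosine_similarity([0.1, 0.9, 0.2], [0.11, 0.85, 0.25]))  # close to 1.0 -> similar meaning\nprint(cosine_similarity([0.1, 0.9, 0.2], [-0.9, 0.1, -0.4]))   # much lower -> unrelated meaning\n```\n\n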
You will need a Python notebook or IDE, a MongoDB Atlas account, and a HuggingFace account for an hands-on experience.\n\nFor a movie database, various kinds of content \u2014 such as the movie description, plot, genre, actors, user comments, and the movie poster \u2014 can be easily converted into vector embeddings. In a similar manner, the user query can be converted into vector embedding, and then the vector search can find the most relevant results by finding the nearest neighbors in the embedding space.\n\n### Step 1: Connect to your MongoDB instance\n\nTo create a MongoDB Atlas cluster, first, you need to create a MongoDB Atlas account if you don't already have one. Visit the MongoDB Atlas website and click on \u201cRegister.\u201d\n\nFor this tutorial, we will be using the sample data pertaining to movies. The \u201csample_mflix\u201d database contains a \u201cmovies\u201d collection where each document contains fields like title, plot, genres, cast, directors, etc.\n\nYou can also connect to your own collection if you have your own data that you would like to use. \n\nYou can use an IDE of your choice or a Python notebook for following along. You will need to install the `pymongo` package prior to executing this code, which can be done via `pip install pymongo`.\n\n```python\nimport pymongo\n\nclient = pymongo.MongoClient(\"\")\ndb = client.sample_mflix\ncollection = db.movies\n```\n\nNote: In production environments, it is not recommended to hard code your database connection string in the way shown, but for the sake of a personal demo, it is okay.\n\nYou can check your dataset in the Atlas UI.\n\n### Step 2: Set up the embedding creation function\n\nThere are many options for creating embeddings, like calling a managed API, hosting your own model, or having the model run locally. \n\nIn this example, we will be using the HuggingFace inference API to use a model called all-MiniLM-L6-v2. HuggingFace is an open-source platform that provides tools for building, training, and deploying machine learning models. We are using them as they make it easy to use machine learning models via APIs and SDKs.\n\nTo use open-source models on Hugging Face, go to https://huggingface.co/. Create a new account if you don\u2019t have one already. Then, to retrieve your Access token, go to Settings > \u201cAccess Tokens.\u201d Once in the \u201cAccess Tokens\u201d section, create a new token by clicking on \u201cNew Token\u201d and give it a \u201cread\u201d right. Then, you can get the token to authenticate to the Hugging Face inference API:\n\nYou can now define a function that will be able to generate embeddings. Note that this is just a setup and we are not running anything yet. 
\n\n```python\nimport requests\n\nhf_token = \"\"\nembedding_url = \"https://api-inference.huggingface.co/pipeline/feature-extraction/sentence-transformers/all-MiniLM-L6-v2\"\n\ndef generate_embedding(text: str) -> listfloat]:\n\nresponse = requests.post(\nembedding_url,\nheaders={\"Authorization\": f\"Bearer {hf_token}\"},\njson={\"inputs\": text})\n\nif response.status_code != 200:\nraise ValueError(f\"Request failed with status code {response.status_code}: {response.text}\")\n\nreturn response.json()\n```\n\nNow you can test out generating embeddings using the function we defined above.\n\n```python\ngenerate_embedding(\"MongoDB is awesome\")\n```\n\nThe output of this function will look like this:\n\n![Verify the output of the generate_embedding function\n\nNote: HuggingFace Inference API is free (to begin with) and is meant for quick prototyping with strict rate limits. You can consider setting up a paid \u201cHuggingFace Inference Endpoints\u201d using the steps described in the Bonus Suggestions. This will create a private deployment of the model for you.\n\n### Step 3: Create and store embeddings\n\nNow, we will execute an operation to create a vector embedding for the data in the \"plot\" field in our movie documents and store it in the database. As described in the introduction, creating vector embeddings using a machine learning model is necessary for performing a similarity search based on intent. \n\nIn the code snippet below, we are creating vector embeddings for 50 documents in our dataset, that have the field \u201cplot.\u201d We will be storing the newly created vector embeddings in a field called \"plot_embedding_hf,\" but you can name this anything you want.\n\nWhen you are ready, you can execute the code below.\n\n```python\nfor doc in collection.find({'plot':{\"$exists\": True}}).limit(50):\ndoc'plot_embedding_hf'] = generate_embedding(doc['plot'])\ncollection.replace_one({'_id': doc['_id']}, doc)\n```\n\nNote: In this case, we are storing the vector embedding in the original collection (that is alongside the application data). This could also be done in a separate collection.\n\nOnce this step completes, you can verify in your database that a new field \u201cplot_embedding_hf\u201d has been created for some of the collections.\n\nNote: We are restricting this to just 50 documents to avoid running into rate-limits on the HuggingFace inference API. If you want to do this over the entire dataset of 23,000 documents in our sample_mflix database, it will take a while, and you may need to create a paid \u201cInference Endpoint\u201d as described in the optional setup above.\n\n### Step 4: Create a vector search index\n\nNow, we will head over to Atlas Search and create an index. First, click the \u201csearch\u201d tab on your cluster and click on \u201cCreate Search Index.\u201d\n\n![Search tab within the Cluster page with a focus on \u201cCreate Search Index\u201d][1]\n\nThis will lead to the \u201cCreate a Search Index\u201d configuration page. Select the \u201cJSON Editor\u201d and click \u201cNext.\u201d\n\n![Search tab \u201cCreate Search Index\u201d experience with a focus on \u201cJSON Editor\u201d][2]\n\nNow, perform the following three steps on the \"JSON Editor\" page:\n\n1. Select the database and collection on the left. For this tutorial, it should be sample_mflix/movies.\n2. Enter the Index Name. For this tutorial, we are choosing to call it `PlotSemanticSearch`.\n3. Enter the configuration JSON (given below) into the text editor. 
The field name should match the name of the embedding field created in Step 3 (for this tutorial it should be `plot_embedding_hf`), and the dimensions match those of the chosen model (for this tutorial it should be 384). The chosen value for the \"similarity\" field (of \u201cdotProduct\u201d) represents cosine similarity, in our case.\n\nFor a description of the other fields in this configuration, you can check out our [Vector Search documentation.\n\nThen, click \u201cNext\u201d and click \u201cCreate Search Index\u201d button on the review page.\n\n``` json\n{\n \"type\": \"vectorSearch,\n \"fields\": {\n \"path\": \"plot_embedding_hf\",\n \"dimensions\": 384,\n \"similarity\": \"dotProduct\",\n \"type\": \"vector\"\n }]\n}\n```\n\n![Search Index Configuration JSON Editor with arrows pointing at the database and collection name, as well as the JSON editor][3]\n\n### Step 5: Query your data\n\nOnce the index is created, you can query it using the \u201c$vectorSearch\u201d stage in the MQL workflow.\n\n> Support for the '$vectorSearch' aggregation pipeline stage is available with MongoDB Atlas 6.0.11 and 7.0.2.\n\nIn the query below, we will search for four recommendations of movies whose plots matches the intent behind the query \u201cimaginary characters from outer space at war\u201d.\n\nExecute the Python code block described below, in your chosen IDE or notebook.\n\n```python\nquery = \"imaginary characters from outer space at war\"\n\nresults = collection.aggregate([\n {\"$vectorSearch\": {\n \"queryVector\": generate_embedding(query),\n \"path\": \"plot_embedding_hf\",\n \"numCandidates\": 100,\n \"limit\": 4,\n \"index\": \"PlotSemanticSearch\",\n }}\n});\n\nfor document in results:\n print(f'Movie Name: {document[\"title\"]},\\nMovie Plot: {document[\"plot\"]}\\n')\n```\n\nThe output will look like this:\n\n![The output of Vector Search query\n\nNote: To find out more about the various parameters (like \u2018$vectorSearch\u2019, \u2018numCandidates\u2019, and \u2018k\u2019), you can check out the Atlas Vector Search documentation. \n\nThis will return the movies whose plots most closely match the intent behind the query \u201cimaginary characters from outer space at war.\u201d \n\n**Note:** As you can see, the results above need to be more accurate since we only embedded 50 movie documents. If the entire movie dataset of 23,000+ documents were embedded, the query \u201cimaginary characters from outer space at war\u201d would result in the below. The formatted results below show the title, plot, and rendering of the image for the movie poster.\n\n### Conclusion\n\nIn this tutorial, we demonstrated how to use HuggingFace Inference APIs, how to generate embeddings, and how to use Atlas Vector search. We also learned how to build a semantic search application to find movies whose plots most closely matched the intent behind a natural language query, rather than searching based on the existing keywords in the dataset. We also demonstrated how efficient it is to bring the power of machine learning models to your data using the Atlas Developer Data Platform.\n\n> If you prefer learning by watching, check out the video version of this article!\n\n:youtube]{vid=wOdZ1hEWvjU}\n\n## Bonus Suggestions\n\n### HuggingFace Inference Endpoints\n\n\u201c[HuggingFace Inference Endpoints\u201d is the recommended way to easily create a private deployment of the model and use it for production use case. 
As we discussed before \u2018HuggingFace Inference API\u2019 is meant for quick prototyping and has strict rate limits. \n\nTo create an \u2018Inference Endpoint\u2019 for a model on HuggingFace, follow these steps:\n\n1. On the model page, click on \"Deploy\" and in the dropdown choose \"Inference Endpoints.\"\n\n2. Select the Cloud Provider of choice and the instance type on the \"Create a new Endpoint\" page. For this tutorial, you can choose the default of AWS and Instance type of CPU small]. This would cost about $0.06/hour.\n![Create a new endpoint\n\n3. Now click on the \"Advanced configuration\" and choose the task type to \"Sentence Embedding.\" This configuration is necessary to ensure that the endpoint returns the response from the model that is suitable for the embedding creation task.\n\nOptional] you can set the \u201cAutomatic Scale-to-Zero\u201d to \u201cAfter 15 minutes with no activity\u201d to ensure your endpoint is paused after a period of inactivity and you are not charged. Setting this configuration will, however, mean that the endpoint will be unresponsive after it\u2019s been paused. It will take some time to return online after you send requests to it again.\n\n![Selecting a supported tasks\n\n4. After this, you can click on \u201cCreate endpoint\" and you can see the status as \"Initializing.\"\n\n5. Use the following Python function to generate embeddings.\n Notice the difference in response format from the previous usage of \u201cHuggingFace Inference API.\u201d\n\n ```python\n import requests\n \n hf_token = \"\"\n embedding_url = \"\"\n \n def generate_embedding(text: str) -> listfloat]:\n \n response = requests.post(\n embedding_url,\n headers={\"Authorization\": f\"Bearer {hf_token}\"},\n json={\"inputs\": text})\n \n if response.status_code != 200:\n \nraise ValueError(f\"Request failed with status code {response.status_code}: {response.text}\")\n \n return response.json()[\"embeddings\"]\n ```\n\n### OpenAI embeddings\n\nTo use OpenAI for embedding generation, you can use the package (install using `pip install openai`).\n\nYou\u2019ll need your OpenAI API key, which you can [create on their website. Click on the account icon on the top right and select \u201cView API keys\u201d from the dropdown. Then, from the API keys, click on \"Create new secret key.\"\n\nTo generate the embeddings in Python, install the openAI package (`pip install openai`) and use the following code.\n\n```python\nopenai.api_key = os.getenv(\"OPENAI_API_KEY\")\n\nmodel = \"text-embedding-ada-002\"\n\ndef generate_embedding(text: str) -> listfloat]:\nresp = openai.Embedding.create(\ninput=[text], \nmodel=model)\n\nreturn resp[\"data\"][0][\"embedding\"] \n```\n\n### Azure OpenAI embedding endpoints\n\nYou can use Azure OpenAI endpoints by creating a deployment in your Azure account and using:\n\n```python\ndef generate_embedding(text: str) -> list[float]:\n\n embeddings = \n resp = openai.Embedding.create\n (deployment_id=deployment_id,\n input=[text])\n \n return resp[\"data\"][0][\"embedding\"] \n```\n\n### Model input size limitations \n\nModels have a limitation on the number of input tokens that they can handle. The limitation for OpenAI's `text-embedding-ada-002` model is 8,192 tokens. Splitting the original text into smaller chunks becomes necessary when creating embeddings for the data that exceeds the model's limit.\n\n## Get started today\n\nGet started by [creating a MongoDB Atlas account if you don't already have one. 
Just click on \u201cRegister.\u201d MongoDB offers a free-forever Atlas cluster in the public cloud service of your choice.\n\nTo learn more about Atlas Vector Search, visit the product page or the documentation for creating a vector search index or running vector search queries.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta6bbbb7c921bb08c/65a1b3ecd2ebff119d6f491d/atlas-search-create-search-index.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte848f96fae511855/65a1b7cb1f2d0f12aead1547/atlas-vector-search-create-index-json.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt698150f3ea6e10f0/65a1b85eecc34e813110c5b2/atlas-search-vector-search-json-editor.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI"], "pageDescription": "Learn how to build generative AI (GenAI) applications by harnessing the power of MongoDB Atlas and Vector Search.", "contentType": "Tutorial"}, "title": "Building Generative AI Applications Using MongoDB: Harnessing the Power of Atlas Vector Search and Open Source Models", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/go/golang-multi-document-acid-transactions", "action": "created", "body": "# Multi-Document ACID Transactions in MongoDB with Go\n\nThe past few months have been an adventure when it comes to getting started with MongoDB using the Go programming language (Golang). We've explored everything from create, retrieve, update, and delete (CRUD) operations, to data modeling, and to change streams. To bring this series to a solid finish, we're going to take a look at a popular requirement that a lot of organizations need, and that requirement is transactions.\n\nSo why would you want transactions?\n\nThere are some situations where you might need atomicity of reads and writes to multiple documents within a single collection or multiple collections. This isn't always a necessity, but in some cases, it might be.\n\nTake the following for example.\n\nLet's say you want to create documents in one collection that depend on documents in another collection existing. Or let's say you have schema validation rules in place on your collection. In the scenario that you're trying to create documents and the related document doesn't exist or your schema validation rules fail, you don't want the operation to proceed. Instead, you'd probably want to roll back to before it happened.\n\nThere are other reasons that you might use transactions, but you can use your imagination for those.\n\nIn this tutorial, we're going to look at what it takes to use transactions with Golang and MongoDB. Our example will rely more on schema validation rules passing, but it isn't a limitation.\n\n## Understanding the Data Model and Applying Schema Validation\n\nSince we've continued the same theme throughout the series, I think it'd be a good idea to have a refresher on the data model that we'll be using for this example.\n\nIn the past few tutorials, we've explored working with potential podcast data in various collections. 
For example, our Go data model looks something like this:\n\n``` go\ntype Episode struct {\n ID primitive.ObjectID `bson:\"_id,omitempty\"`\n Podcast primitive.ObjectID `bson:\"podcast,omitempty\"`\n Title string `bson:\"title,omitempty\"`\n Description string `bson:\"description,omitempty\"`\n Duration int32 `bson:\"duration,omitempty\"`\n}\n```\n\nThe fields in the data structure are mapped to MongoDB document fields through the BSON annotations. You can learn more about using these annotations in the previous tutorial I wrote on the subject.\n\nWhile we had other collections, we're going to focus strictly on the `episodes` collection for this example.\n\nRather than coming up with complicated code for this example to demonstrate operations that fail or should be rolled back, we're going to go with schema validation to force fail some operations. Let's assume that no episode should be less than two minutes in duration, otherwise it is not valid. Rather than implementing this, we can use features baked into MongoDB.\n\nTake the following schema validation logic:\n\n``` json\n{\n \"$jsonSchema\": {\n \"additionalProperties\": true,\n \"properties\": {\n \"duration\": {\n \"bsonType\": \"int\",\n \"minimum\": 2\n }\n }\n }\n}\n```\n\nThe above logic would be applied using the MongoDB CLI or with Compass, but we're essentially saying that our schema for the `episodes` collection can contain any fields in a document, but the `duration` field must be an integer and it must be at least two. Could our schema validation be more complex? Absolutely, but we're all about simplicity in this example. If you want to learn more about schema validation, check out this awesome tutorial on the subject.\n\nNow that we know the schema and what will cause a failure, we can start implementing some transaction code that will commit or roll back changes.\n\n## Starting and Committing Transactions\n\nBefore we dive into starting a session for our operations and committing transactions, let's establish a base point in our project. Let's assume that your project has the following boilerplate MongoDB with Go code:\n\n``` go\npackage main\n\nimport (\n \"context\"\n \"fmt\"\n \"os\"\n\n \"go.mongodb.org/mongo-driver/bson/primitive\"\n \"go.mongodb.org/mongo-driver/mongo\"\n \"go.mongodb.org/mongo-driver/mongo/options\"\n)\n\n// Episode represents the schema for the \"Episodes\" collection\ntype Episode struct {\n ID primitive.ObjectID `bson:\"_id,omitempty\"`\n Podcast primitive.ObjectID `bson:\"podcast,omitempty\"`\n Title string `bson:\"title,omitempty\"`\n Description string `bson:\"description,omitempty\"`\n Duration int32 `bson:\"duration,omitempty\"`\n}\n\nfunc main() {\n client, err := mongo.Connect(context.TODO(), options.Client().ApplyURI(os.Getenv(\"ATLAS_URI\")))\n if err != nil {\n panic(err)\n }\n defer client.Disconnect(context.TODO())\n\n database := client.Database(\"quickstart\")\n episodesCollection := database.Collection(\"episodes\")\n\n database.RunCommand(context.TODO(), bson.D{{\"create\", \"episodes\"}})\n}\n```\n\nThe collection must exist prior to working with transactions. When using the `RunCommand`, if the collection already exists, an error will be returned. 
For this example, the error is not important to us since we just want the collection to exist, even if that means creating it.\n\nNow let's assume that you've correctly included the MongoDB Go driver as seen in a previous tutorial titled, How to Get Connected to Your MongoDB Cluster with Go.\n\nThe goal here will be to try to insert a document that complies with our schema validation as well as a document that doesn't so that we have a commit that doesn't happen.\n\n``` go\n// ...\n\nfunc main() {\n // ...\n\n wc := writeconcern.New(writeconcern.WMajority())\n rc := readconcern.Snapshot()\n txnOpts := options.Transaction().SetWriteConcern(wc).SetReadConcern(rc)\n\n session, err := client.StartSession()\n if err != nil {\n panic(err)\n }\n defer session.EndSession(context.Background())\n\n err = mongo.WithSession(context.Background(), session, func(sessionContext mongo.SessionContext) error {\n if err = session.StartTransaction(txnOpts); err != nil {\n return err\n }\n result, err := episodesCollection.InsertOne(\n sessionContext,\n Episode{\n Title: \"A Transaction Episode for the Ages\",\n Duration: 15,\n },\n )\n if err != nil {\n return err\n }\n fmt.Println(result.InsertedID)\n result, err = episodesCollection.InsertOne(\n sessionContext,\n Episode{\n Title: \"Transactions for All\",\n Duration: 1,\n },\n )\n if err != nil {\n return err\n }\n if err = session.CommitTransaction(sessionContext); err != nil {\n return err\n }\n fmt.Println(result.InsertedID)\n return nil\n })\n if err != nil {\n if abortErr := session.AbortTransaction(context.Background()); abortErr != nil {\n panic(abortErr)\n }\n panic(err)\n }\n}\n```\n\nIn the above code, we start by defining the read and write concerns that will give us the desired level of isolation in our transaction. To learn more about the available read and write concerns, check out the documentation.\n\nAfter defining the transaction options, we start a session which will encapsulate everything we want to do with atomicity. After, we start a transaction that we'll use to commit everything in the session.\n\nA `Session` represents a MongoDB logical session and can be used to enable casual consistency for a group of operations or to execute operations in an ACID transaction. More information on how they work in Go can be found in the documentation.\n\nInside the session, we are doing two `InsertOne` operations. The first would succeed because it doesn't violate any of our schema validation rules. It will even print out an object id when it's done. However, the second operation will fail because it is less than two minutes. The `CommitTransaction` won't ever succeed because of the error that the second operation created. When the `WithSession` function returns the error that we created, the transaction is aborted using the `AbortTransaction` function. For this reason, neither of the `InsertOne` operations will show up in the database.\n\n## Using a Convenient Transactions API\n\nStarting and committing transactions from within a logical session isn't the only way to work with ACID transactions using Golang and MongoDB. 
Instead, we can use what might be thought of as a more convenient transactions API.\n\nTake the following adjustments to our code:\n\n``` go\n// ...\n\nfunc main() {\n // ...\n\n wc := writeconcern.New(writeconcern.WMajority())\n rc := readconcern.Snapshot()\n txnOpts := options.Transaction().SetWriteConcern(wc).SetReadConcern(rc)\n\n session, err := client.StartSession()\n if err != nil {\n panic(err)\n }\n defer session.EndSession(context.Background())\n\n callback := func(sessionContext mongo.SessionContext) (interface{}, error) {\n result, err := episodesCollection.InsertOne(\n sessionContext,\n Episode{\n Title: \"A Transaction Episode for the Ages\",\n Duration: 15,\n },\n )\n if err != nil {\n return nil, err\n }\n result, err = episodesCollection.InsertOne(\n sessionContext,\n Episode{\n Title: \"Transactions for All\",\n Duration: 2,\n },\n )\n if err != nil {\n return nil, err\n }\n return result, err\n }\n\n _, err = session.WithTransaction(context.Background(), callback, txnOpts)\n if err != nil {\n panic(err)\n }\n}\n```\n\nInstead of using `WithSession`, we are now using `WithTransaction`, which handles starting a transaction, executing some application code, and then committing or aborting the transaction based on the success of that application code. Not only that, but retries can happen for specific errors if certain operations fail.\n\n## Conclusion\n\nYou just saw how to use transactions with the MongoDB Go driver. While in this example we used schema validation to determine if a commit operation succeeds or fails, you could easily apply your own application logic within the scope of the session.\n\nIf you want to catch up on other tutorials in the getting started with Golang series, you can find some below:\n\n- How to Get Connected to Your MongoDB Cluster with Go\n- Creating MongoDB Documents with Go\n- Retrieving and Querying MongoDB Documents with Go\n- Updating MongoDB Documents with Go\n- Deleting MongoDB Documents with Go\n- Modeling MongoDB Documents with Native Go Data Structures\n- Performing Complex MongoDB Data Aggregation Queries with Go\n- Reacting to Database Changes with MongoDB Change Streams and Go\n\nSince transactions brings this tutorial series to a close, make sure you keep a lookout for more tutorials that focus on more niche and interesting topics that apply everything that was taught while getting started.", "format": "md", "metadata": {"tags": ["Go"], "pageDescription": "Learn how to accomplish ACID transactions and logical sessions with MongoDB and the Go programming language (Golang).", "contentType": "Quickstart"}, "title": "Multi-Document ACID Transactions in MongoDB with Go", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/awslambda-pymongo", "action": "created", "body": "# How to Use PyMongo to Connect MongoDB Atlas with AWS Lambda\n\nPicture a developer\u2019s paradise: a world where instead of fussing over hardware complexities, we are free to focus entirely on running and executing our applications. With the combination of AWS Lambda and MongoDB Atlas, this vision becomes a reality. \n\nArmed with AWS Lambda\u2019s pay-per-execution structure and MongoDB Atlas\u2019 unparalleled scalability, developers will truly understand what it means for their applications to thrive without the hardware limitations they might be used to. 
\n\nThis tutorial will take you through how to properly set up an Atlas cluster, connect it to AWS Lambda using MongoDB\u2019s Python Driver, write an aggregation pipeline on our data, and return our wanted information. Let\u2019s get started. \n\n### Prerequisites for success\n* MongoDB Atlas Account\n* AWS Account; Lambda access is necessary\n* GitHub repository\n* Python 3.8+\n\n## Create an Atlas Cluster\nOur first step is to create an Atlas cluster. Log into the Atlas UI and follow the steps to set it up. For this tutorial, the free tier is recommended, but any tier will work! \n\nPlease ensure that the cloud provider picked is AWS. It\u2019s also necessary to pick a secure username and password so that we will have the proper authorization later on in this tutorial, along with proper IP address access.\n\nAlready have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\nOnce your cluster is up and running, click the ellipses next to the Browse Collections button and download the `sample dataset`. Your finished cluster will look like this:\n\nOnce our cluster is provisioned, let\u2019s set up our AWS Lambda function. \n\n## Creating an AWS Lambda function\nSign into your AWS account and search for \u201cLambda\u201d in the search bar. Hit the orange \u201cCreate function\u201d button at the top right side of the screen, and you\u2019ll be taken to the image below. Here, make sure to first select the \u201cAuthor from scratch\u201d option. Then, we want to select a name for our function (AWSLambdaDemo), the runtime (3.8), and our architecture (x86_64). \n\nHit the orange \u201cCreate function\u201d button on the bottom right to continue. Once your function is created, you\u2019ll see a page with your function overview above and your code source right below. \n\nNow, we are ready to set up our connection from AWS Lambda to our MongoDB cluster.\n\nTo make things easier for ourselves because we are going to be using Pymongo, a dependency, instead of editing directly in the code source, we will be using Visual Studio Code. AWS Lambda has a limited amount of pre-installed libraries and dependencies, so in order to get around this and incorporate Pymongo, we will need to package our code in a special way. Due to this \u201cworkaround,\u201d this will not be a typical tutorial with testing at every step. We will first have to download our dependencies and upload our code to Lambda prior to ensuring our code works instead of using a typical `requirements.txt` file. More on that below. \n\n## AWS Lambda and MongoDB cluster connection\n\nNow we are ready to establish a connection between AWS Lambda and our MongoDB cluster! \n\nCreate a new directory on your local machine and name it\n `awslambda-demo`.\n\n Let\u2019s install `pymongo`. As said above, Lambda doesn\u2019t have every library available. So, we need to download `pymongo` at the root of our project. We can do it by working with .zip file archives:\nIn the terminal, enter our `awslambda-demo` directory:\n \n cd awslambda-demo\n\nCreate a new directory where your dependencies will live:\n\n mkdir dependencies\n\nInstall `pymongo` directly in your `dependencies` package:\n\n pip install --target ./dependencies pymongo\n\nOpen Visual Studio Code, open the `awslambda-demo` directory, and create a new Python file named `lambda_function.py`. This is where the heart of our connection will be. 
\n\nInsert the code below in our `lambda_function.py`. Here, we are setting up our console to check that we are able to connect to our Atlas cluster. Please keep in mind that since we are incorporating our environment variables in a later step, you will not be able to connect just yet. We have copied the `lambda_handler` definition from our Lambda code source and have edited it to insert one document stating my full name into a new \u201ctest\u201d database and \u201ctest\u201d collection. It is best practice to construct our MongoClient outside of our `lambda_handler` because establishing a connection and performing authentication is relatively expensive, and Lambda will reuse this instance across invocations.\n\n```\nimport os\nfrom pymongo import MongoClient\n\nclient = MongoClient(host=os.environ.get(\"ATLAS_URI\"))\n\ndef lambda_handler(event, context):\n    # Name of database\n    db = client.test\n\n    # Name of collection\n    collection = db.test\n\n    # Document to add inside\n    document = {\"first name\": \"Anaiya\", \"last name\": \"Raisinghani\"}\n\n    # Insert document\n    result = collection.insert_one(document)\n\n    if result.inserted_id:\n        return \"Document inserted successfully\"\n    else:\n        return \"Failed to insert document\"\n```\nIf this is properly inserted in AWS Lambda, we will see \u201cDocument inserted successfully\u201d and in MongoDB Atlas, we will see the creation of our \u201ctest\u201d database and collection along with the single document holding the name \u201cAnaiya Raisinghani.\u201d Please keep in mind we will not see this yet since we haven\u2019t configured our environment variables; we will do that a couple of steps down. \n\nNow, we need to create a .zip file so we can upload it to our Lambda function and execute our code. Create a .zip file at the root:\n\n cd dependencies\n zip -r ../deployment.zip *\nThis creates a `deployment.zip` file in your project directory.\n\nNow, we need to add our `lambda_function.py` file to the root of our .zip file:\n\n cd ..\n zip deployment.zip lambda_function.py\n\nOnce you have your .zip file, access your AWS Lambda function screen, click the \u201cUpload from\u201d button, and select \u201c.zip file\u201d on the right-hand side of the page:\n\nUpload your .zip file and you should see the code from your `lambda_function.py` in your \u201cCode Source\u201d:\n\nLet\u2019s configure our environment variables. Select the \u201cConfiguration\u201d tab and then select the \u201cEnvironment Variables\u201d tab. Here, put in your \u201cATLAS_URI\u201d string. To access your connection string, please follow the instructions in our docs.\n\nOnce you have your Environment Variables in place, we are ready to run our code and see if our connection works. Hit the \u201cTest\u201d button. If it\u2019s the first time you\u2019re hitting it, you\u2019ll need to name your event. Keep everything else on the default settings. You should see this page with our \u201cExecution results.\u201d Our document has been inserted!\n\nWhen we double-check in Atlas, we can see that our new database \u201ctest\u201d and collection \u201ctest\u201d have been created, along with our document with \u201cAnaiya Raisinghani.\u201d\n\nThis means our connection works and we are capable of inserting documents from AWS Lambda to our MongoDB cluster.
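\n\nIf you would like to confirm the write programmatically as well, a quick read-back works (a minimal sketch run from any environment that has `pymongo` installed and the same `ATLAS_URI` exported; the database, collection, and field names match the document we inserted above):\n\n```\nimport os\nfrom pymongo import MongoClient\n\nclient = MongoClient(host=os.environ.get(\"ATLAS_URI\"))\n\n# Read back the document the Lambda function just inserted into the \"test\" database and collection\nprint(client.test.test.find_one({\"first name\": \"Anaiya\"}))\n```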
Now, we can take things a step further and input a simple aggregation pipeline!\n\n## Aggregation pipeline example\n\nFor our pipeline, let\u2019s change our code to connect to our `sample_restaurants` database and `restaurants` collection. We are going to be incorporating our aggregation pipeline to find a sample size of five American cuisine restaurants that are located in Brooklyn, New York. Let\u2019s dive right in! \n\nSince we have our `pymongo` dependency downloaded, we can directly incorporate our aggregation pipeline into our code source. Change your `lambda_function.py` to look like this:\n\n```\nimport os\nfrom pymongo import MongoClient\n\nconnect = MongoClient(host=os.environ.get(\"ATLAS_URI\"))\n\ndef lambda_handler(event, context):\n    # Choose our \"sample_restaurants\" database and our \"restaurants\" collection\n    database = connect.sample_restaurants\n    collection = database.restaurants\n\n    # This is our aggregation pipeline\n    pipeline = [\n\n        # We are finding American restaurants in Brooklyn\n        {\"$match\": {\"borough\": \"Brooklyn\", \"cuisine\": \"American\"}},\n\n        # We only want 5 out of our over 20k+ documents\n        {\"$limit\": 5},\n\n        # We don't want all the details, project what you need\n        {\"$project\": {\"_id\": 0, \"name\": 1, \"borough\": 1, \"cuisine\": 1}}\n    ]\n\n    # This will run our pipeline\n    result = list(collection.aggregate(pipeline))\n\n    # Print the result\n    for restaurant in result:\n        print(restaurant)\n```\nHere, we are using `$match` to find all the American cuisine restaurants located in Brooklyn. We are then using `$limit` to return only five documents out of our database. Next, we are using `$project` to only show the fields we want. We are going to include \u201cborough\u201d, \u201ccuisine\u201d, and the \u201cname\u201d of the restaurant. Then, we are executing our pipeline and printing out our results. \n\nClick on \u201cDeploy\u201d to ensure our changes have been deployed to the code environment. After the changes are deployed, hit \u201cTest.\u201d We will get a sample size of five Brooklyn American restaurants as the result in our console:\n![results from our aggregation pipeline shown in AWS Lambda\n\nOur aggregation pipeline was successful!\n\n## Conclusion\n\nThis tutorial provided you with hands-on experience connecting a MongoDB Atlas database to AWS Lambda. We also got an inside look at how to write to a cluster from Lambda, how to read back information from an aggregation pipeline, and how to properly configure our dependencies when using Lambda. Hopefully now, you are ready to take advantage of AWS Lambda and MongoDB to create the best applications without worrying about external infrastructure. \n\nIf you enjoyed this tutorial and would like to learn more, please check out our MongoDB Developer Center and YouTube channel.\n", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AWS", "Serverless"], "pageDescription": "Learn how to leverage the power of AWS Lambda and MongoDB Atlas in your applications. ", "contentType": "Tutorial"}, "title": "How to Use PyMongo to Connect MongoDB Atlas with AWS Lambda", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-schema-migration", "action": "created", "body": "# Migrating Your iOS App's Realm Schema in Production\n\n## Introduction\n\nMurphy's law dictates that as soon as your mobile app goes live, you'll receive a request to add a new feature. Then another.
Then another.\n\nThis is fine if these features don't require any changes to your data schema. But, that isn't always the case.\n\nFortunately, Realm has built-in functionality to make schema migration easier.\n\nThis tutorial will step you through updating an existing mobile app to add some new features that require changes to the schema. In particular, we'll look at the Realm migration code that ensures that no existing data is lost when the new app versions are rolled out to your production users.\n\nWe'll use the Scrumdinger app that I modified in a previous post to show how Apple's sample Swift app could be ported to Realm. The starting point for the app can be found in this branch of our Scrumdinger repo and the final version is in this branch.\n\nNote that the app we're using for this post doesn't use Atlas Device Sync. If it did, then the schema migration process would be very different\u2014that's covered in Migrating Your iOS App's **Synced** Realm Schema in Production.\n\n## Prerequisites\n\nThis tutorial has a dependency on Realm-Cocoa 10.13.0+.\n\n## Baseline App/Realm Schema\n\nAs a reminder, the starting point for this tutorial is the \"realm\" branch of the Scrumdinger repo.\n\nThere are two Realm model classes that we'll extend to add new features to Scrumdinger. The first, DailyScrum, represents one scrum:\n\n``` swift\nclass DailyScrum: Object, ObjectKeyIdentifiable {\n @Persisted var title = \"\"\n @Persisted var attendeeList = RealmSwift.List()\n @Persisted var lengthInMinutes = 0\n @Persisted var colorComponents: Components?\n @Persisted var historyList = RealmSwift.List()\n\n var color: Color { Color(colorComponents ?? Components()) }\n var attendees: String] { Array(attendeeList) }\n var history: [History] { Array(historyList) }\n ...\n}\n```\n\nThe second, [History, represents the minutes of a meeting from one of the user's scrums:\n\n``` swift\nclass History: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var date: Date?\n @Persisted var attendeeList = List()\n @Persisted var lengthInMinutes: Int = 0\n @Persisted var transcript: String?\n var attendees: String] { Array(attendeeList) }\n ...\n}\n```\n\nWe can use [Realm Studio to examine the contents of our Realm database after the `DailyScrum` and `History` objects have been created:\n\nAccessing Realm Data on iOS Using Realm Studio explains how to locate and open the Realm files from your iOS simulator.\n\n## Schema Change #1\u2014Mark Scrums as Public/Private\n\nThe first new feature we've been asked to add is a flag to indicate whether each scrum is public or private:\n\nThis feature requires the addition of a new `Bool` named `isPublic` to DailyScrum:\n\n``` swift\nclass DailyScrum: Object, ObjectKeyIdentifiable {\n @Persisted var title = \"\"\n @Persisted var attendeeList = RealmSwift.List()\n @Persisted var lengthInMinutes = 0\n @Persisted var isPublic = false\n @Persisted var colorComponents: Components?\n @Persisted var historyList = RealmSwift.List()\n\n var color: Color { Color(colorComponents ?? Components()) }\n var attendees: String] { Array(attendeeList) }\n var history: [History] { Array(historyList) }\n ...\n}\n```\n\nRemember that our original version of Scrumdinger is already in production, and the embedded Realm database is storing instances of `DailyScrum`. We don't want to lose that data, and so we must migrate those objects to the new schema when the app is upgraded.\n\nFortunately, Realm has built-in functionality to automatically handle the addition and deletion of fields. 
When adding a field, Realm will use a default value (e.g., `0` for an `Int`, and `false` for a `Bool`).\n\nIf we simply upgrade the installed app with the one using the new schema, then we'll get a fatal error. That's because we need to tell Realm that we've updated the schema. We do that by setting the schema version to 1 (the version defaulted to 0 for the original schema):\n\n``` swift\n@main\nstruct ScrumdingerApp: SwiftUI.App {\n var body: some Scene {\n WindowGroup {\n NavigationView {\n ScrumsView()\n .environment(\\.realmConfiguration,\n Realm.Configuration(schemaVersion: 1))\n }\n }\n }\n}\n```\n\nAfter upgrading the app, we can use [Realm Studio to confirm that our `DailyScrum` object has been updated to initialize `isPublic` to `false`:\n\n## Schema Change #2\u2014Store The Number of Attendees at Each Meeting\n\nThe second feature request is to show the number of attendees in the history from each meeting:\n\nWe could calculate the count every time that it's needed, but we've decided to calculate it just once and then store it in our History object in a new field named `numberOfAttendees`:\n\n``` swift\nclass History: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var date: Date?\n @Persisted var attendeeList = List()\n @Persisted var numberOfAttendees = 0\n @Persisted var lengthInMinutes: Int = 0\n @Persisted var transcript: String?\n var attendees: String] { Array(attendeeList) }\n ...\n}\n```\n\nWe increment the schema version to 2. Note that the schema version applies to all Realm objects, and so we have to set the version to 2 even though this is the first time that we've changed the schema for `History`.\n\nIf we leave it to Realm to initialize `numberOfAttendees`, then it will set it to 0\u2014which is not what we want. Instead, we provide a `migrationBlock` which initializes new fields based on the old schema version:\n\n``` swift\n@main\nstruct ScrumdingerApp: SwiftUI.App {\n var body: some Scene {\n WindowGroup {\n NavigationView {\n ScrumsView()\n .environment(\\.realmConfiguration, Realm.Configuration(\n schemaVersion: 2,\n migrationBlock: { migration, oldSchemaVersion in\n if oldSchemaVersion < 1 {\n // Could init the `DailyScrum.isPublic` field here, but the default behavior of setting\n // it to `false` is what we want.\n }\n if oldSchemaVersion < 2 {\n migration.enumerateObjects(ofType: History.className()) { oldObject, newObject in\n let attendees = oldObject![\"attendeeList\"] as? RealmSwift.List\n newObject![\"numberOfAttendees\"] = attendees?.count ?? 0\n }\n }\n if oldSchemaVersion < 3 {\n // TODO: This is where you'd add you're migration code to go from version\n // to version 3 when you next modify the schema\n }\n }\n ))\n }\n }\n }\n}\n```\n\nNote that all other fields are migrated automatically.\n\nIt's up to you how you use data from the previous schema to populate fields in the new schema. E.g., if you wanted to combine `firstName` and `lastName` from the previous schema to populate a `fullName` field in the new schema, then you could do so like this:\n\n``` swift\nmigration.enumerateObjects(ofType: Person.className()) { oldObject, newObject in\n let firstName = oldObject![\"firstName\"] as! String\n let lastName = oldObject![\"lastName\"] as! 
String\n newObject![\"fullName\"] = \"\\(firstName) \\(lastName)\"\n}\n```\n\nWe can't know what \"old version\" of the schema will be already installed on a user's device when it's upgraded to the latest version (some users may skip some versions,) and so the `migrationBlock` must handle all previous versions. Best practice is to process the incremental schema changes sequentially:\n\n* `oldSchemaVersion < 1` : Process the delta between v0 and v1\n* `oldSchemaVersion < 2` : Process the delta between v1 and v2\n* `oldSchemaVersion < 3` : Process the delta between v2 and v3\n* ...\n\nRealm Studio shows that our code has correctly initialized `numberOfAttendees`:\n\n![Realm Studio showing that the numberOfAttendees field has been set to 2 \u2013\u00a0matching the number of attendees in the meeting history\n\n## Conclusion\n\nIt's almost inevitable that any successful mobile app will need some schema changes after it's gone into production. Realm makes adapting to those changes simple, ensuring that users don't lose any of their existing data when upgrading to new versions of the app.\n\nFor changes such as adding or removing fields, all you need to do as a developer is to increment the version with each new deployed schema. For more complex changes, you provide code that computes the values for fields in the new schema using data from the old schema.\n\nThis tutorial stepped you through adding two new features that both required schema changes. You can view the final app in the new-schema branch of the Scrumdinger repo.\n\n## Next Steps\n\nThis post focussed on schema migration for an iOS app. You can find some more complex examples in the repo.\n\nIf you're working with an app for a different platform, then you can find instructions in the docs:\n\n* Node.js\n* Android\n* iOS\n* .NET\n* React Native\n\nIf you've any questions about schema migration, or anything else related to Realm, then please post them to our community forum.", "format": "md", "metadata": {"tags": ["Realm", "Swift", "iOS"], "pageDescription": "Learn how to safely update your iOS app's Realm schema to support new functionality\u2014without losing any existing data", "contentType": "Tutorial"}, "title": "Migrating Your iOS App's Realm Schema in Production", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/node-crud-tutorial", "action": "created", "body": "# MongoDB and Node.js Tutorial - CRUD Operations\n\n \n\nIn the first post in this series, I walked you through how to connect to a MongoDB database from a Node.js script, retrieve a list of databases, and print the results to your console. If you haven't read that post yet, I recommend you do so and then return here.\n\n>\n>\n>This post uses MongoDB 4.4, MongoDB Node.js Driver 3.6.4, and Node.js 14.15.4.\n>\n>Click here to see a previous version of this post that uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.\n>\n>\n\nNow that we have connected to a database, let's kick things off with the CRUD (create, read, update, and delete) operations.\n\nIf you prefer video over text, I've got you covered. Check out the video\nin the section below. :-)\n\n>\n>\n>Get started with an M0 cluster on Atlas today. 
It's free forever, and it's the easiest way to try out the steps in this blog series.\n>\n>\n\nHere is a summary of what we'll cover in this post:\n\n- Learn by Video\n- How MongoDB Stores Data\n- Setup\n- Create\n- Read\n- Update\n- Delete\n- Wrapping Up\n\n## Learn by Video\n\nI created the video below for those who prefer to learn by video instead of text. You might also find this video helpful if you get stuck while trying the steps in the text-based instructions below.\n\nHere is a summary of what the video covers:\n\n- How to connect to a MongoDB database hosted on MongoDB Atlas from inside of a Node.js script (01:00)\n- How MongoDB stores data in documents and collections (instead of rows and tables) (08:22)\n- How to create documents using `insertOne()` and `insertMany()` (11:47)\n- How to read documents using `findOne()` and `find()` (17:16)\n- How to update documents using `updateOne()` with and without `upsert` as well as `updateMany()` (24:46\n)\n- How to delete documents using `deleteOne()` and `deleteMany()` (35:58)\n\n:youtube]{vid=fbYExfeFsI0}\n\nBelow are the links I mentioned in the video.\n\n- [GitHub Repo\n- Back to Basics Webinar Recording\n\n## How MongoDB Stores Data\n\nBefore we go any further, let's take a moment to understand how data is stored in MongoDB.\n\nMongoDB stores data in BSON documents. BSON is a binary representation of JSON (JavaScript Object Notation) documents. When you read MongoDB documentation, you'll frequently see the term \"document,\" but you can think of a document as simply a JavaScript object. For those coming from the SQL world, you can think of a document as being roughly equivalent to a row.\n\nMongoDB stores groups of documents in collections. For those with a SQL background, you can think of a collection as being roughly equivalent to a table.\n\nEvery document is required to have a field named `_id`. The value of `_id` must be unique for each document in a collection, is immutable, and can be of any type other than an array. MongoDB will automatically create an index on `_id`. You can choose to make the value of `_id` meaningful (rather than a somewhat random ObjectId) if you have a unique value for each document that you'd like to be able to quickly search.\n\nIn this blog series, we'll use the sample Airbnb listings dataset. The `sample_airbnb` database contains one collection: `listingsAndReviews`. This collection contains documents about Airbnb listings and their reviews.\n\nLet's take a look at a document in the `listingsAndReviews` collection. Below is part of an Extended JSON representation of a BSON document:\n\n``` json\n{\n \"_id\": \"10057447\",\n \"listing_url\": \"https://www.airbnb.com/rooms/10057447\",\n \"name\": \"Modern Spacious 1 Bedroom Loft\",\n \"summary\": \"Prime location, amazing lighting and no annoying neighbours. 
Good place to rent if you want a relaxing time in Montreal.\",\n \"property_type\": \"Apartment\",\n \"bedrooms\": {\"$numberInt\":\"1\"},\n \"bathrooms\": {\"$numberDecimal\":\"1.0\"},\n \"amenities\": \"Internet\",\"Wifi\",\"Kitchen\",\"Heating\",\"Family/kid friendly\",\"Washer\",\"Dryer\",\"Smoke detector\",\"First aid kit\",\"Safety card\",\"Fire extinguisher\",\"Essentials\",\"Shampoo\",\"24-hour check-in\",\"Hangers\",\"Iron\",\"Laptop friendly workspace\"],\n}\n```\n\nFor more information on how MongoDB stores data, see the [MongoDB Back to Basics Webinar that I co-hosted with Ken Alger.\n\n## Setup\n\nTo make following along with this blog post easier, I've created a starter template for a Node.js script that accesses an Atlas cluster.\n\n1. Download a copy of template.js.\n2. Open `template.js` in your favorite code editor.\n3. Update the Connection URI to point to your Atlas cluster. If you're not sure how to do that, refer back to the first post in this series.\n4. Save the file as `crud.js`.\n\nYou can run this file by executing `node crud.js` in your shell. At this point, the file simply opens and closes a connection to your Atlas cluster, so no output is expected. If you see DeprecationWarnings, you can ignore them for the purposes of this post.\n\n## Create\n\nNow that we know how to connect to a MongoDB database and we understand how data is stored in a MongoDB database, let's create some data!\n\n### Create One Document\n\nLet's begin by creating a new Airbnb listing. We can do so by calling Collection's insertOne(). `insertOne()` will insert a single document into the collection. The only required parameter is the new document (of type object) that will be inserted. If our new document does not contain the `_id` field, the MongoDB driver will automatically create an `_id` for the document.\n\nOur function to create a new listing will look something like the following:\n\n``` javascript\nasync function createListing(client, newListing){\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").insertOne(newListing);\n console.log(`New listing created with the following id: ${result.insertedId}`);\n}\n```\n\nWe can call this function by passing a connected MongoClient as well as an object that contains information about a listing.\n\n``` javascript\nawait createListing(client,\n {\n name: \"Lovely Loft\",\n summary: \"A charming loft in Paris\",\n bedrooms: 1,\n bathrooms: 1\n }\n );\n```\n\nThe output would be something like the following:\n\n``` none\nNew listing created with the following id: 5d9ddadee415264e135ccec8\n```\n\nNote that since we did not include a field named `_id` in the document, the MongoDB driver automatically created an `_id` for us. The `_id` of the document you create will be different from the one shown above. For more information on how MongoDB generates `_id`, see Quick Start: BSON Data Types - ObjectId.\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n### Create Multiple Documents\n\nSometimes you will want to insert more than one document at a time. You could choose to repeatedly call `insertOne()`. The problem is that, depending on how you've structured your code, you may end up waiting for each insert operation to return before beginning the next, resulting in slow code.\n\nInstead, you can choose to call Collection's insertMany(). 
`insertMany()` will insert an array of documents into your collection.\n\nOne important option to note for `insertMany()` is `ordered`. If `ordered` is set to `true`, the documents will be inserted in the order given in the array. If any of the inserts fail (for example, if you attempt to insert a document with an `_id` that is already being used by another document in the collection), the remaining documents will not be inserted. If `ordered` is set to `false`, the documents may not be inserted in the order given in the array. MongoDB will attempt to insert all of the documents in the given array\u2014regardless of whether any of the other inserts fail. By default, `ordered` is set to `true`.\n\nLet's write a function to create multiple Airbnb listings.\n\n``` javascript\nasync function createMultipleListings(client, newListings){\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").insertMany(newListings);\n\n console.log(`${result.insertedCount} new listing(s) created with the following id(s):`);\n console.log(result.insertedIds); \n}\n```\n\nWe can call this function by passing a connected MongoClient and an array of objects that contain information about listings.\n\n``` javascript\nawait createMultipleListings(client, [\n {\n name: \"Infinite Views\",\n summary: \"Modern home with infinite views from the infinity pool\",\n property_type: \"House\",\n bedrooms: 5,\n bathrooms: 4.5,\n beds: 5\n },\n {\n name: \"Private room in London\",\n property_type: \"Apartment\",\n bedrooms: 1,\n bathroom: 1\n },\n {\n name: \"Beautiful Beach House\",\n summary: \"Enjoy relaxed beach living in this house with a private beach\",\n bedrooms: 4,\n bathrooms: 2.5,\n beds: 7,\n last_review: new Date()\n }\n]);\n```\n\nNote that the documents do not all have the same fields, which is perfectly OK. (I'm guessing that those who come from the SQL world will find this incredibly uncomfortable, but it really will be OK \ud83d\ude0a.) When you use MongoDB, you get a lot of flexibility in how to structure your documents. If you later decide you want to add schema validation rules so you can guarantee your documents have a particular structure, you can.\n\nThe output of calling `createMultipleListings()` would be something like the following:\n\n``` none\n3 new listing(s) created with the following id(s):\n{ \n '0': 5d9ddadee415264e135ccec9,\n '1': 5d9ddadee415264e135cceca,\n '2': 5d9ddadee415264e135ccecb \n}\n```\n\nJust like the MongoDB Driver automatically created the `_id` field for us when we called `insertOne()`, the Driver has once again created the `_id` field for us when we called `insertMany()`.\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n## Read\n\nNow that we know how to **create** documents, let's **read** one!\n\n### Read One Document\n\nLet's begin by querying for an Airbnb listing in the listingsAndReviews collection by name.\n\nWe can query for a document by calling Collection's findOne(). `findOne()` will return the first document that matches the given query. Even if more than one document matches the query, only one document will be returned.\n\n`findOne()` has only one required parameter: a query of type object. The query object can contain zero or more properties that MongoDB will use to find a document in the collection.
If you want to query all documents in a collection without narrowing your results in any way, you can simply send an empty object.\n\nSince we want to search for an Airbnb listing with a particular name, we will include the name field in the query object we pass to `findOne()`:\n\n``` javascript\nfindOne({ name: nameOfListing })\n```\n\nOur function to find a listing by querying the name field could look something like the following:\n\n``` javascript\nasync function findOneListingByName(client, nameOfListing) {\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").findOne({ name: nameOfListing });\n\n if (result) {\n console.log(`Found a listing in the collection with the name '${nameOfListing}':`);\n console.log(result);\n } else {\n console.log(`No listings found with the name '${nameOfListing}'`);\n }\n}\n```\n\nWe can call this function by passing a connected MongoClient as well as the name of a listing we want to find. Let's search for a listing named \"Infinite Views\" that we created in an earlier section.\n\n``` javascript\nawait findOneListingByName(client, \"Infinite Views\");\n```\n\nThe output should be something like the following.\n\n``` none\nFound a listing in the collection with the name 'Infinite Views':\n{ \n _id: 5da9b5983e104518671ae128,\n name: 'Infinite Views',\n summary: 'Modern home with infinite views from the infinity pool',\n property_type: 'House',\n bedrooms: 5,\n bathrooms: 4.5,\n beds: 5 \n}\n```\n\nNote that the `_id` of the document in your database will not match the `_id` in the sample output above.\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n### Read Multiple Documents\n\nNow that you know how to query for one document, let's discuss how to query for multiple documents at a time. We can do so by calling Collection's find().\n\nSimilar to `findOne()`, the first parameter for `find()` is the query object. You can include zero to many properties in the query object.\n\nLet's say we want to search for all Airbnb listings that have minimum numbers of bedrooms and bathrooms. We could do so by making a call like the following:\n\n``` javascript\nclient.db(\"sample_airbnb\").collection(\"listingsAndReviews\").find(\n {\n bedrooms: { $gte: minimumNumberOfBedrooms },\n bathrooms: { $gte: minimumNumberOfBathrooms }\n }\n );\n```\n\nAs you can see above, we have two properties in our query object: one for bedrooms and one for bathrooms. We can leverage the $gte comparison query operator to search for documents that have bedrooms greater than or equal to a given number. We can do the same to satisfy our minimum number of bathrooms requirement. MongoDB provides a variety of other comparison query operators that you can utilize in your queries. See the official documentation for more details.\n\nThe query above will return a Cursor. A Cursor allows traversal over the result set of a query.\n\nYou can also use Cursor's functions to modify what documents are included in the results. For example, let's say we want to sort our results so that those with the most recent reviews are returned first. We could use Cursor's sort() function to sort the results using the `last_review` field. We could sort the results in descending order (indicated by passing -1 to `sort()`) so that listings with the most recent reviews will be returned first. 
We can now update our existing query to look like the following.\n\n``` javascript\nconst cursor = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").find(\n {\n bedrooms: { $gte: minimumNumberOfBedrooms },\n bathrooms: { $gte: minimumNumberOfBathrooms }\n }\n ).sort({ last_review: -1 });\n```\n\nThe above query matches 192 documents in our collection. Let's say we don't want to process that many results inside of our script. Instead, we want to limit our results to a smaller number of documents. We can chain another of `sort()`'s functions to our existing query: limit(). As the name implies, `limit()` will set the limit for the cursor. We can now update our query to only return a certain number of results.\n\n``` javascript\nconst cursor = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").find(\n {\n bedrooms: { $gte: minimumNumberOfBedrooms },\n bathrooms: { $gte: minimumNumberOfBathrooms }\n }\n ).sort({ last_review: -1 })\n .limit(maximumNumberOfResults);\n```\n\nWe could choose to iterate over the cursor to get the results one by one. Instead, if we want to retrieve all of our results in an array, we can call Cursor's toArray() function. Now our code looks like the following:\n\n``` javascript\nconst cursor = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").find(\n {\n bedrooms: { $gte: minimumNumberOfBedrooms },\n bathrooms: { $gte: minimumNumberOfBathrooms }\n }\n ).sort({ last_review: -1 })\n .limit(maximumNumberOfResults);\nconst results = await cursor.toArray();\n```\n\nNow that we have our query ready to go, let's put it inside an asynchronous function and add functionality to print the results.\n\n``` javascript\nasync function findListingsWithMinimumBedroomsBathroomsAndMostRecentReviews(client, {\n minimumNumberOfBedrooms = 0,\n minimumNumberOfBathrooms = 0,\n maximumNumberOfResults = Number.MAX_SAFE_INTEGER\n} = {}) {\n const cursor = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").find(\n {\n bedrooms: { $gte: minimumNumberOfBedrooms },\n bathrooms: { $gte: minimumNumberOfBathrooms }\n }\n ).sort({ last_review: -1 })\n .limit(maximumNumberOfResults);\n\n const results = await cursor.toArray();\n\n if (results.length > 0) {\n console.log(`Found listing(s) with at least ${minimumNumberOfBedrooms} bedrooms and ${minimumNumberOfBathrooms} bathrooms:`);\n results.forEach((result, i) => {\n date = new Date(result.last_review).toDateString();\n\n console.log();\n console.log(`${i + 1}. name: ${result.name}`);\n console.log(` _id: ${result._id}`);\n console.log(` bedrooms: ${result.bedrooms}`);\n console.log(` bathrooms: ${result.bathrooms}`);\n console.log(` most recent review date: ${new Date(result.last_review).toDateString()}`);\n });\n } else {\n console.log(`No listings found with at least ${minimumNumberOfBedrooms} bedrooms and ${minimumNumberOfBathrooms} bathrooms`);\n }\n}\n```\n\nWe can call this function by passing a connected MongoClient as well as an object with properties indicating the minimum number of bedrooms, the minimum number of bathrooms, and the maximum number of results.\n\n``` javascript\nawait findListingsWithMinimumBedroomsBathroomsAndMostRecentReviews(client, {\n minimumNumberOfBedrooms: 4,\n minimumNumberOfBathrooms: 2,\n maximumNumberOfResults: 5\n});\n```\n\nIf you've created the documents as described in the earlier section, the output would be something like the following:\n\n``` none\nFound listing(s) with at least 4 bedrooms and 2 bathrooms:\n\n1. 
name: Beautiful Beach House\n _id: 5db6ed14f2e0a60683d8fe44\n bedrooms: 4\n bathrooms: 2.5\n most recent review date: Mon Oct 28 2019\n\n2. name: Spectacular Modern Uptown Duplex\n _id: 582364\n bedrooms: 4\n bathrooms: 2.5\n most recent review date: Wed Mar 06 2019\n\n3. name: Grace 1 - Habitat Apartments\n _id: 29407312\n bedrooms: 4\n bathrooms: 2.0\n most recent review date: Tue Mar 05 2019\n\n4. name: 6 bd country living near beach\n _id: 2741869\n bedrooms: 6\n bathrooms: 3.0\n most recent review date: Mon Mar 04 2019\n\n5. name: Awesome 2-storey home Bronte Beach next to Bondi!\n _id: 20206764\n bedrooms: 4\n bathrooms: 2.0\n most recent review date: Sun Mar 03 2019\n```\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n## Update\n\nWe're halfway through the CRUD operations. Now that we know how to **create** and **read** documents, let's discover how to **update** them.\n\n### Update One Document\n\nLet's begin by updating a single Airbnb listing in the listingsAndReviews collection.\n\nWe can update a single document by calling Collection's updateOne(). `updateOne()` has two required parameters:\n\n1. `filter` (object): the Filter used to select the document to update. You can think of the filter as essentially the same as the query param we used in findOne() to search for a particular document. You can include zero properties in the filter to search for all documents in the collection, or you can include one or more properties to narrow your search.\n2. `update` (object): the update operations to be applied to the document. MongoDB has a variety of update operators you can use such as `$inc`, `$currentDate`, `$set`, and `$unset` among others. See the official documentation for a complete list of update operators and their descriptions.\n\n`updateOne()` also has an optional `options` param. See the updateOne() docs for more information on these options.\n\n`updateOne()` will update the first document that matches the given query. Even if more than one document matches the query, only one document will be updated.\n\nLet's say we want to update an Airbnb listing with a particular name. We can use `updateOne()` to achieve this. We'll include the name of the listing in the filter param. We'll use the $set update operator to set new values for new or existing fields in the document we are updating. When we use `$set`, we pass a document that contains fields and values that should be updated or created. 
The document that we pass to `$set` will not replace the existing document; any fields that are part of the original document but not part of the document we pass to `$set` will remain as they are.\n\nOur function to update a listing with a particular name would look like the following:\n\n``` javascript\nasync function updateListingByName(client, nameOfListing, updatedListing) {\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\")\n .updateOne({ name: nameOfListing }, { $set: updatedListing });\n\n console.log(`${result.matchedCount} document(s) matched the query criteria.`);\n console.log(`${result.modifiedCount} document(s) was/were updated.`);\n}\n```\n\nLet's say we want to update our Airbnb listing that has the name \"Infinite Views.\" We created this listing in an earlier section.\n\n``` javascript\n{ \n _id: 5db6ed14f2e0a60683d8fe42,\n name: 'Infinite Views',\n summary: 'Modern home with infinite views from the infinity pool',\n property_type: 'House',\n bedrooms: 5,\n bathrooms: 4.5,\n beds: 5 \n}\n```\n\nWe can call `updateListingByName()` by passing a connected MongoClient, the name of the listing, and an object containing the fields we want to update and/or create.\n\n``` javascript\nawait updateListingByName(client, \"Infinite Views\", { bedrooms: 6, beds: 8 });\n```\n\nExecuting this command results in the following output.\n\n``` none\n1 document(s) matched the query criteria.\n1 document(s) was/were updated.\n```\n\nNow our listing has an updated number of bedrooms and beds.\n\n``` json\n{ \n _id: 5db6ed14f2e0a60683d8fe42,\n name: 'Infinite Views',\n summary: 'Modern home with infinite views from the infinity pool',\n property_type: 'House',\n bedrooms: 6,\n bathrooms: 4.5,\n beds: 8 \n}\n```\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n### Upsert One Document\n\nOne of the options you can choose to pass to `updateOne()` is upsert. Upsert is a handy feature that allows you to update a document if it exists or insert a document if it does not.\n\nFor example, let's say you wanted to ensure that an Airbnb listing with a particular name had a certain number of bedrooms and bathrooms. Without upsert, you'd first use `findOne()` to check if the document existed. If the document existed, you'd use `updateOne()` to update the document. If the document did not exist, you'd use `insertOne()` to create the document. When you use upsert, you can combine all of that functionality into a single command.\n\nOur function to upsert a listing with a particular name can be basically identical to the function we wrote above with one key difference: We'll pass `{upsert: true}` in the `options` param for `updateOne()`.\n\n``` javascript\nasync function upsertListingByName(client, nameOfListing, updatedListing) {\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\")\n .updateOne({ name: nameOfListing }, \n { $set: updatedListing }, \n { upsert: true });\n console.log(`${result.matchedCount} document(s) matched the query criteria.`);\n\n if (result.upsertedCount > 0) {\n console.log(`One document was inserted with the id ${result.upsertedId._id}`);\n } else {\n console.log(`${result.modifiedCount} document(s) was/were updated.`);\n }\n}\n```\n\nLet's say we aren't sure if a listing named \"Cozy Cottage\" is in our collection or, if it does exist, if it holds old data. 
Either way, we want to ensure the listing that exists in our collection has the most up-to-date data. We can call `upsertListingByName()` with a connected MongoClient, the name of the listing, and an object containing the up-to-date data that should be in the listing.\n\n``` javascript\nawait upsertListingByName(client, \"Cozy Cottage\", { name: \"Cozy Cottage\", bedrooms: 2, bathrooms: 1 });\n```\n\nIf the document did not previously exist, the output of the function would be something like the following:\n\n``` none\n0 document(s) matched the query criteria.\nOne document was inserted with the id 5db9d9286c503eb624d036a1\n```\n\nWe have a new document in the listingsAndReviews collection:\n\n``` json\n{ \n _id: 5db9d9286c503eb624d036a1,\n name: 'Cozy Cottage',\n bathrooms: 1,\n bedrooms: 2 \n}\n```\n\nIf we discover more information about the \"Cozy Cottage\" listing, we can use `upsertListingByName()` again.\n\n``` javascript\nawait upsertListingByName(client, \"Cozy Cottage\", { beds: 2 });\n```\n\nAnd we would see the following output.\n\n``` none\n1 document(s) matched the query criteria.\n1 document(s) was/were updated.\n```\n\nNow our document has a new field named \"beds.\"\n\n``` json\n{ \n _id: 5db9d9286c503eb624d036a1,\n name: 'Cozy Cottage',\n bathrooms: 1,\n bedrooms: 2,\n beds: 2 \n}\n```\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n### Update Multiple Documents\n\nSometimes you'll want to update more than one document at a time. In this case, you can use Collection's updateMany(). Like `updateOne()`, `updateMany()` requires that you pass a filter of type object and an update of type object. You can choose to include options of type object as well.\n\nLet's say we want to ensure that every document has a field named `property_type`. We can use the $exists query operator to search for documents where the `property_type` field does not exist. Then we can use the $set update operator to set the `property_type` to \"Unknown\" for those documents. Our function will look like the following.\n\n``` javascript\nasync function updateAllListingsToHavePropertyType(client) {\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\")\n .updateMany({ property_type: { $exists: false } }, \n { $set: { property_type: \"Unknown\" } });\n console.log(`${result.matchedCount} document(s) matched the query criteria.`);\n console.log(`${result.modifiedCount} document(s) was/were updated.`);\n}\n```\n\nWe can call this function with a connected MongoClient.\n\n``` javascript\nawait updateAllListingsToHavePropertyType(client);\n```\n\nBelow is the output from executing the previous command.\n\n``` none\n3 document(s) matched the query criteria.\n3 document(s) was/were updated.\n```\n\nNow our \"Cozy Cottage\" document and all of the other documents in the Airbnb collection have the `property_type` field.\n\n``` json\n{ \n _id: 5db9d9286c503eb624d036a1,\n name: 'Cozy Cottage',\n bathrooms: 1,\n bedrooms: 2,\n beds: 2,\n property_type: 'Unknown' \n}\n```\n\nListings that contained a `property_type` before we called `updateMany()` remain as they were. 
For example, the \"Spectacular Modern Uptown Duplex\" listing still has `property_type` set to `Apartment`.\n\n``` json\n{ \n _id: '582364',\n listing_url: 'https://www.airbnb.com/rooms/582364',\n name: 'Spectacular Modern Uptown Duplex',\n property_type: 'Apartment',\n room_type: 'Entire home/apt',\n bedrooms: 4,\n beds: 7\n ...\n}\n```\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n## Delete\n\nNow that we know how to **create**, **read**, and **update** documents, let's tackle the final CRUD operation: **delete**.\n\n### Delete One Document\n\nLet's begin by deleting a single Airbnb listing in the listingsAndReviews collection.\n\nWe can delete a single document by calling Collection's deleteOne(). `deleteOne()` has one required parameter: a filter of type object. The filter is used to select the document to delete. You can think of the filter as essentially the same as the query param we used in findOne() and the filter param we used in updateOne(). You can include zero properties in the filter to search for all documents in the collection, or you can include one or more properties to narrow your search.\n\n`deleteOne()` also has an optional `options` param. See the deleteOne() docs for more information on these options.\n\n`deleteOne()` will delete the first document that matches the given query. Even if more than one document matches the query, only one document will be deleted. If you do not specify a filter, the first document found in natural order will be deleted.\n\nLet's say we want to delete an Airbnb listing with a particular name. We can use `deleteOne()` to achieve this. We'll include the name of the listing in the filter param. We can create a function to delete a listing with a particular name.\n\n``` javascript\nasync function deleteListingByName(client, nameOfListing) {\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\")\n .deleteOne({ name: nameOfListing });\n console.log(`${result.deletedCount} document(s) was/were deleted.`);\n}\n```\n\nLet's say we want to delete the Airbnb listing we created in an earlier section that has the name \"Cozy Cottage.\" We can call `deleteListingsByName()` by passing a connected MongoClient and the name \"Cozy Cottage.\"\n\n``` javascript\nawait deleteListingByName(client, \"Cozy Cottage\");\n```\n\nExecuting the command above results in the following output.\n\n``` none\n1 document(s) was/were deleted.\n```\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n### Deleting Multiple Documents\n\nSometimes you'll want to delete more than one document at a time. In this case, you can use Collection's deleteMany(). Like `deleteOne()`, `deleteMany()` requires that you pass a filter of type object. You can choose to include options of type object as well.\n\nLet's say we want to remove documents that have not been updated recently. We can call `deleteMany()` with a filter that searches for documents that were scraped prior to a particular date. 
Our function will look like the following.\n\n``` javascript\nasync function deleteListingsScrapedBeforeDate(client, date) {\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\")\n .deleteMany({ \"last_scraped\": { $lt: date } });\n console.log(`${result.deletedCount} document(s) was/were deleted.`);\n}\n```\n\nTo delete listings that were scraped prior to February 15, 2019, we can call `deleteListingsScrapedBeforeDate()` with a connected MongoClient and a Date instance that represents February 15.\n\n``` javascript\nawait deleteListingsScrapedBeforeDate(client, new Date(\"2019-02-15\"));\n```\n\nExecuting the command above will result in the following output.\n\n``` none\n606 document(s) was/were deleted.\n```\n\nNow only recently scraped documents are in our collection.\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n## Wrapping Up\n\nWe covered a lot today! Let's recap.\n\nWe began by exploring how MongoDB stores data in documents and collections. Then we learned the basics of creating, reading, updating, and deleting data.\n\nContinue on to the next post in this series, where we'll discuss how you can analyze and manipulate data using the aggregation pipeline.\n\nComments? Questions? We'd love to chat with you in the MongoDB Community.\n", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB"], "pageDescription": "Learn how to execute the CRUD (create, read, update, and delete) operations in MongoDB using Node.js in this step-by-step tutorial.", "contentType": "Quickstart"}, "title": "MongoDB and Node.js Tutorial - CRUD Operations", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/5-year-atlas-anniversary-episode-1-on-ramp", "action": "created", "body": "# Atlas 5-Year Anniversary Podcast Series Episode 1 - Onramp to Atlas\n\nMy name is Michael Lynn, and I\u2019m a developer advocate at MongoDB.\n\nI\u2019m excited to welcome you to this, the first in a series of episodes created to celebrate the five year anniversary of the launch of MongoDB Atlas, our Database as a Service Platform.\n\nIn this series, my co-hosts, Jesse Hall, and Nic Raboy will talk with some of the people responsible for building, and launching the platform that helped to transform MongoDB as a company.\n\nbeginning with Episode 1, the On ramp to Atlas talking with Sahir Azam, Chief Product Officer, and Andrew Davidson, VP of Product about the strategic shift from a software company to a software as a service business.\n\nIn episode 2, Zero to Database as a Service, we\u2019ll chat with Cailin Nelson, SVP of Engineering, and Cory Mintz, VP of Engineering - about Atlas as a product and how it was built and launched.\n\nIn episode 3, we\u2019ll Go Mobile, talking with Alexander Stigsen, Founder of the Realm Mobile Database which has become a part of the Atlas Platform. \n\nIn episode 4, we\u2019ll wrap the series up with a panel discussion and review some of our valued customer comments about the platform. \n\nThanks so much for tuning in and reading, please take a moment to subscribe for more episodes and if you enjoy what you hear, please don\u2019t forget to provide a comment, and a rating to help us continue to improve.\n\nWithout further adue, here is the transcript from episode one of this series.\n\nSahir: [00:00:00] Hi Everyone. My name is Sahir Azam and I'm the chief product officer at Mongo DB. 
Welcome to the Mongo DB podcast. \n\nMike: [00:00:07] Okay. Today, we're going to be talking about Mongo to be Atlas and the journey that has taken place to bring us to this point, the five-year anniversary of MongoDB Atlas of a launch of MongoDB Atlas. And I'm joined in the studio today by a couple of guests. And we'll start by introducing Sahir Azam chief product officer at Mongo DB.\nSahir, welcome to the show. It's great to have you on the podcast.\n\nSahir: [00:00:31] Hey, Hey Mike. Great to be here.\n\nMike: [00:00:33] Terrific. And we're also joined by Andrew Davidson. Andrew is vice-president of product cloud products at Mongo DB. Is it, do I have that right?\n\nAndrew: [00:00:41] That's right? Good to be here, Mike. How you doin? \n\nMike: [00:00:44] Doing great. It's great to have you on the show. And of course are my co-hosts for the day. Is Jesse Hall also known as codeSTACKr. Welcome back to the show, Jesse.\nIt's great to have you on\n\nJesse: [00:00:54] I'm fairly new here. So I'm excited to hear about the, history of Atlas\n\nMike: [00:00:58] fantastic. Yeah. W we're gonna, we're gonna get into that. But before we do so here, I guess we'll maybe introduce yourself to the audience, talk a little bit about who you are and what you.\n\nSahir: [00:01:09] Yeah. So, I mentioned earlier, I run the product organization at Mongo and as part of my core focus, I think about the products we build the roadmaps of those products and how they serve customers and ultimately help us grow our business. And I've been with the company for about five years.\nCoincidentally I was recruited to lead basically the transition of the business. Open source enterprise software company to becoming a SAS vendor. And so I came on right before the launch of Atlas, Andrew on the line here certainly has the history of how Atlas came to be even prior to me joining.\nBut, uh, it's been a heck of a ride.\n\nMike: [00:01:46] Fantastic. Well, Andrew, that brings us to you once, yet, let folks know who you are and what you do.\n\nAndrew: [00:01:52] sure. Yeah. Similar to Sahir, I focus on product management, but a really more specifically focused on our cloud product suite. And if you think about it, that was something that five years ago, when we first launched Atlas was just an early kernel, a little bit of a startup inside of our broader company.\nAnd so before that time, I was very focused on our traditional more private cloud management experience from marketing the and it's really just been this amazing journey to really transform this company with Sahir and so many others into being a cloud company. So really excited to be here on this milestone. \n\nMike: [00:02:25] Fantastic. And Jesse, so you've been with Mongo to be, I guess, relatively the least amount of time among the four of us, but maybe talk about your experience with Mongo to be and cloud in general.\n\nJesse: [00:02:36] Yeah. So I've used it several times in some tutorials that I've created on the Atlas portion of it. Going through the onboarding experience and\nlearning how it actually works, how the command line and all of that was amazing to understand it from that perspective as well.\nSo, yeah, I'm excited to see how you took it from that to the cloud.\n\nMike: [00:02:58] Yeah. Yeah, me too. And if you think about the journey I'm going to be was a successful open source product. I was a project that was largely used by developers. To increase agility. It represented a different way to store data and it wasn't a perfect journey. 
There were some challenges early on, specifically around the uniqueness of the mechanism that it's using to store data is different from traditional forms.\nAnd. So I guess Andrew you've been here the longest over eight years. Talk about the challenges of transitioning from a software product to an online database, as a service.\n\nAndrew: [00:03:37] Yeah. Sure. When you think back to where we were, say eight and a half years ago, to your point, we had this kind of almost new category of data experience for developers that gave them this natural way to interface with data in a way that was totally reflective of the way they wanted to think about their data, the objects in there. And we came in and revolutionized the world with this way of interfacing with data. And that's what led to them. I'm going to be just exploding in popularity. It was just mind boggling to see millions of people every month, experiencing MongoDB for the first time as pure open source software on their laptops.\nBut as we move forward over the years, we realized. We could be this phenomenal database that gave developers exactly the way they want to interface with data. We could be incredibly scalable. We could go up to any level of scale with vertical and horizontal kind of linear cost economics, really built for cloud.\nWe could do all of that, but if our customers continued to have to self manage all of this software at scale, we realized, frankly, we might get left behind in the end. We might get beaten by databases that weren't as good. But then we're going to be delivered at a higher level of abstraction, fully managed service.\nSo we went all in as a company recognizing we need to make this just so easy for people to get started and to go up to any level of scale. And that's really what Atlas was about. It was all about democratizing this incredible database, which had already democratize a new data model, but making it accessible for production use cases in the cloud, anywhere in the room.\nAnd I think when you see what's happened today with just millions of people who have now used Atlas, the same magnitude of number of have had used our self-managed software. It's just amazing to see how far. \n\nMike: [00:05:21] Yeah. Yeah. It's been quite a ride and it is interesting timing. So here, so you joined right around the same time. I think it was, I think a couple of months prior to the launch of Atlas. Tell us about like your role early.\n\nSahir: [00:05:36] Yeah, I think what attracted me to Mongo DB in the first place, certainly the team, I knew there was a strong team here and I absolutely knew of the sort of popularity and. Just disruption that the open source technology and database had created in the market just as, somebody being an it and technology.\nAnd certainly it'd be hard to miss. So I had a very kind of positive impression overall of the business, but the thing that really did it for me was the fact that the company was embarking on this strategic expansion to become a SAS company and deliver this database as a service with Atlas, because I had certainly built. In my own mind sort of conviction that for open source companies, the right business model that would ultimately be most successful was distributing tech technology as a matter of service so that it can get the reach global audiences and, really democratize that experiences as Andrew mentioned.\nSo that was the most interesting challenge. And when I joined the company, I think. 
Part of everyone understands is okay, it's a managed version of bongo DB, and there's a whole bunch of automation, elasticity and pay as you go pricing and all of the things that you would expect in the early days from a managed service.\nBut the more interesting thing that I think is sometimes hidden away is how much it's really transformed Mongo DB. The company's go to market strategy. As well, it's allowed us to really reach, tens of thousands of customers and millions of developers worldwide. And that's a function of the fact that it's just so easy to get started.\nYou can start off on our free tier or as you start building your application and it scales just get going on a credit card and then ultimately engaged and, in a larger level with our organization, as you start to get to mission criticality and scale. That's really hard to do in a, a traditional sort of enterprise software model.\nIt's easy to do for large customers. It's not easy to do for the broad base of them. Mid-market and the SMB and the startups and the ecosystem. And together with the team, we put a lot of focus into thinking about how do we make sure we widen the funnel as much as possible and get as many developers to try Atlas as the default experience we're using Mongo DB, because we felt a, it was definitely the best way to use the technology, but also for us as a company, it was the most powerful way for us to scale our overall operations.\n\nMike: [00:07:58] Okay.\n\nJesse: [00:08:00] Okay. \n\nMike: [00:08:00] So obviously there's going to be some challenges early on in the minds of the early adopters. Now we've had some relatively large names. I don't know if we can say any names of customers that were early adopters, but there were obviously challenges around that. What are some of the challenges that were particularly difficult when you started to talk to some of these larger name companies?\nWhat are some of the things that. Really concerned about early \non. \n\nSahir: [00:08:28] Yeah I'll try them a little bit. And Andrew, I'm sure you have thoughts on this as well. So I think in the, when we phased out sort of the strategy for Atlas in the early years, when we first launched, it's funny to think back. We were only on AWS and I think we were in maybe four or five regions at the time if I remember correctly and the first kind of six to 12 months was really optimized for. Let's call it lower end use cases where you could come in. You didn't necessarily have high-end requirements around security or compliance guarantees. And so I think the biggest barrier to entry for larger customers or more mission critical sort of sensitive applications was. We as ourselves had not yet gotten our own third-party compliance certifications, there were certain enterprise level security capabilities like encryption, bring your own key encryption things like, private networking with with peering on the cloud providers that we just hadn't built yet on our roadmap.\nAnd we wanted to make sure we prioritize correctly. So I think that was the. Internal factor. The external factor was, five years ago. It wasn't so obvious that for the large enterprise, that databases of service would be the default way to consume databases in the cloud. Certainly there was some of that traction happening, but if you look at it compared to today, it was still early days.\nAnd I laugh because early on, we probably got positively surprised by some early conservative enterprise names. Maybe Thermo Fisher was one of them. 
We had I want to say AstraZeneca, perhaps a couple of like really established brand names who are, bullish on the cloud, believed in Mongo DB as a key enabling technology.\nAnd in many ways where those early partners with us in the enterprise segment were to help develop the maturity we needed to scale over time.\n\nMike: [00:10:23] Yeah, \n\nAndrew: [00:10:23] I remember the, these this kind of wake up call moment where you realized the pace of being a cloud company is just so much higher than what we had traditionally been before, where it was, a bit more of a slow moving enterprise type of sales motion, where you have a very big, POC phase and a bunch of kind of setup time and months of delivery.\nThat whole model though, was changing. The whole idea of Atlas was to enable our customer to very rapidly and self-service that service matter build amazing applications. And so you had people come in the matter of hours, started to do really cool, amazing stuff. And sometimes we weren't even ready for that.\nWe weren't even ready to be responsive enough for them. So we had to develop these new muscles. Be on the pulse of what this type of new speed of customer expected. I remember in one of our earliest large-scale customers who would just take us to the limits, it was, we had, I think actually funny enough, multiple cricket league, fantasy sports apps out of India, they were all like just booming and popularity during the India premier league. \n\nMike: [00:11:25] Okay. \n\nAndrew: [00:11:26] Cricket competition. And it was just like so crazy how many people were storming into this application, the platform at the same time and realizing that we had a platform that could, actually scale to their needs was amazing, but it was also this constant realization that every new level of scale, every kind of new rung is going to require us to build out new operational chops, new muscles, new maturity, and we're still, it's an endless journey, a customer today.\nA thousand times bigger than what we could accommodate at that time. But I can imagine that the customers of, five years from now will be a, yet another couple of order magnitude, larger or orders meant to larger. And it's just going to keep challenging us. But now we're in this mindset of expecting that and always working to get that next level, which is exciting. \n\nMike: [00:12:09] Yeah. I'm sure it hasn't always been a smooth ride. I'm sure there were some hiccups along the way. And maybe even around scale, you mentioned, we got surprised. Do you want to talk a little bit about maybe some of that massive uptake. Did we have trouble offering this product as a service?\nJust based on the number of customers that we were able to sign up?\n\nSahir: [00:12:30] I'd say by and large, it's been a really smooth ride. I think one of the ones, the surprises that kind of I think is worth sharing \nis we have. I think just under or close to 80 regions now in Atlas and the promise of the cloud at least on paper is endless scale and availability of resources, whether that be compute or networking or storage. That's largely true for most customers in major regions where the cloud providers are. But if you're in a region that's not a primary region or you've got a massive rollout where you need a lot of compute capacity, a lot of network capacity it's not suddenly available for you on demand all the time. 
There are supply chain data center or, resources backing all of this and our partners, do a really great job, obviously staying ahead of that demand, but there are sometimes constraints.\nAnd so I think we reached a certain scale inflection point where we were consistently bumping up. The infrastructure cloud providers limits in terms of availability of capacity. And, we've worked with them on making sure our quotas were set properly and that we were treated in a special case, but there were definitely a couple of times where, we had a new application launching for a customer. It's not like it was a quota we were heading there literally was just not there weren't enough VMs and underlying physical infrastructure is set up and available in those data centers. And so we had some teething pains like working with our cloud provider friends to make sure that we were always projecting ahead with more and more I think, of a forward look to them so that we can, make sure we're not blocking our customers. Funny cloud learnings, I would say.\n\nMike: [00:14:18] Well, I guess that answers that, I was going to ask the question, why not? Build our own cloud, why not build, a massive data center and try and meet the demands with something like, an ops manager tool and make that a service offering. But I guess that really answers the question that the demand, the level of demand around the world would be so difficult.\nWas that ever a consideration though? Building our own\n\nSahir: [00:14:43] so ironically, we actually did run our own infrastructure in the early days for our cloud backup service. So we had spinning disks and\nphysical devices, our own colo space, and frankly, we just outgrew it. I think there's two factors for us. One, the database is pretty. Low in the stack, so to speak.\nSo it needs to, and as an operational transactional service, We need to be really close to where the application actually runs. And the power of what the hyperscale cloud providers has built is just immense reach. So now any small company can stand up a local site or a point of presence, so to speak in any part of the world, across those different regions that they have.\nAnd so the idea that. Having a single region that we perhaps had the economies of scale in just doesn't make sense. We're very dispersed because of all the different regions we support across the major cloud providers and the need to be close to where the application is. So just given the dynamic of running a database, the service, it is really important that we sit in those public major public cloud providers, right by the side, those those customers, the other.\nIs really just that we benefit from the innovation that the hyperscale cloud providers put out in the market themselves. Right. There's higher levels of abstraction. We don't want to be sitting there. We have limited resources like any company, would we rather spend the dollars on racking and stacking hardware and, managing our own data center footprint and networking stack and all of that, or would we rather spend those reasons?\nConsuming as a service and then building more value for our customers. So the same thing we, we just engage with customers and why they choose Atlas is very much true to us as we build our cloud platforms.\n\nAndrew: [00:16:29] Yeah. I If you think about it, I'm going to be is really the only company that's giving developers this totally native data model. That's so easy to get started with at the prototyping phase. 
They can go up to any level of scale from there that can read and write across 80 regions across the big three cloud providers all over the world.\nAnd for us to not stay laser-focused on that level. Making developers able to build incredible global applications would just be to pull our focus away from really the most important thing for us, which is to be obsessed with that customer experience rather than the infrastructure building blocks in the backend, which of course we do optimize them in close partnership with our cloud provider partners to Sahir's point.. . \n\nJesse: [00:17:09] So along with all of these challenges to scale over time, there was also other competitors trying to do the same thing. So how does Mongo DB, continue to have a competitive advantage?\n\nSahir: [00:17:22] Yeah, I think it's a consistent investment in engineering, R and D and innovation, right? If you look at the capabilities we've released, the core of the database surrounding the database and Atlas, the new services that integrated simplify the architecture for applications, some of the newer things we have, like search or realm or what we're doing with analytics with that was data lake.\nI'll put our ability to push out more value and capability to customers against any competitor in the world. I think we've got a pretty strong track record there, but at a more kind of macro level. If you went back kind of five years ago to the launch of Atlas, most customers and developers, how to trade off to make you either go with a technology that's very deep on functionality and best of breed.\nSo to speak in a particular domain. Like a Mongo DB, then you have to, that's typically all software, so you tend to have to operate it yourself, learn how to manage and scale and monitor and all those different things. Or you want to choose a managed service experience where you get, the ease of use of just getting started and scaling and having all the pay as you go kind of consumption models.\nBut those databases are nowhere close to as capable as the best of breed players. That was the state of the mark. Five years ago, but now, fast forward to 2021 and going forward customers no longer have to make that trade. You have multicloud and sort of database and service offerings analytics as a service offerings, which you learning players that have not only the best of breed capability, that's a lot deeper than the first party services that are managed by the cloud providers, but are also delivered in this really amazing, scalable manner.\nConsumption-based model so that trade-off is no longer there. And I think that's a key part of what drives our success is the fact that, we have the best capabilities. That's the features and the costs that at the cost of developers and organizations want. We deliver it as a really fluid elastic managed service.\nAnd then guess what, for enterprises, especially multicloud is an increasingly strategic sort of characteristic they look for in their major providers, especially their data providers. And we're available on all three of the major public clouds with Atlas. That's a very unique proposition. No one else can offer that.\nAnd so that's the thing that really drives in this\n\nMike: [00:19:38] Yeah.\n\nSahir: [00:19:39] powering, the acceleration of the Atlas business.\n\nMike: [00:19:42] Yeah. 
And so, Andrew, I wonder if for the folks that are not familiar with Atlas, the architecture you want to just give an overview of how Atlas works and leverages the multiple cloud providers behind the scenes.\nAndrew: [00:19:56] Yeah, sure. Look, anyone who's not used not going to be Atlas, I encourage you just, sign up right away. It's the kind of thing where in just a matter of five minutes, you can deploy a free sandbox cluster and really start building your hello world. Experience your hello world application on top of MongoDB to be the way Atlas really works is essentially we try and make it as simple as possible.\nYou sign up. Then you decide which cloud provider and which region in that cloud provider do I want to launch my database cluster into, and you can choose between those 80 regions to hear mentioned or you can do more advanced stuff, you can decide to go multi-region, you can decide to go even multicloud all within the same database cluster.\nAnd the key thing is that you can decide to start really tiny, even at the free level or at our dedicated cluster, starting at $60. Or you can go up to just massive scale sharded clusters that can power millions of concurrent users. And what's really exciting is you can transition those clusters between those states with no downtime.\nAt any time you can start single region and small and scale up or scale to multiple regions or scale to multiple clouds and each step of the way you're meeting whatever your latest business objectives are or whatever the needs of your application are. But in general, you don't have to completely reinvent the wheel and rearchitect your app each step of the way.\nThat's where MongoDB makes it just so as you to start at that prototyping level and then get up to the levels of scale. Now on the backend, Atlas does all of this with of course, huge amount of sophistication. There's dedicated virtual, private clouds per customer, per region for a dedicated clusters.\nYou can connect into those clusters using VPC, Piering, or private link, offering a variety of secure ways to connect without having to deal with public IP access lists. You can also use the. We have a wide variety of authentication and authorization options, database auditing, like Sahir mentioned, bring your own key encryption and even client-side field level encryption, which allows you to encrypt data before it even goes into the database for the subsets of your schema at the highest classification level.\nSo we make it, the whole philosophy here is to democratize making it easy to build applications in a privacy optimized way to really ultimately make it possible, to have millions of end consumers have a better experience. And use all this wonderful digital experiences that everyone's building out there. \n\nJesse: [00:22:09] So here we talked about how just the Mongo DB software, there was a steady growth, right. But once we went to the cloud \nwith Atlas, the success of that, how did that impact our business?\n\nSahir: [00:22:20] Yeah, I think it's been obviously Quite impactful in terms of just driving the acceleration of growth and continued success of MongoDB. We were fortunate, five, six years ago when Atlas was being built and launched that, our business was quite healthy. We were about a year out from IPO.\nWe had many enterprise customers that were choosing our commercial technology to power, their mission, critical applications. That continues through today. 
So the idea of launching outlets was although certainly strategic and, had we saw where the market was going. And we knew this would in many ways, be the flagship product for the company in the term, it was done out of sort of an offensive view to getting to market.\nAnd so if you look at it now, Atlas is about 51% of our revenue. It's, the fastest growing product in our portfolio, Atlas is no longer just a database. It's a whole data platform where we've collapsed a bunch of other capabilities in the architecture of an application. So it's much simpler for developers.\nAnd over time we expected that 51% number is only going to continue to be, a larger percentage of our business, but it's important to know. Making sure that we deliver a powerful open source database to the market, that we have an enterprise version of the software for customers who aren't for applications or customers that aren't yet in the crowd, or may never go to the cloud for certain workloads is super critical.\nThis sort of idea of run anywhere. And the reason why is, oftentimes. Timeline for modernizing an application. Let's say you're a large insurance provider or a bank or something. You've got thousands of these applications on legacy databases. There's an intense need to monitor modernize.\nThose that save costs to unlock developer, agility, that timeline of choosing a database. First of all, it's a decision that lasts typically seven to 10 years. So it's a long-term investment decision, but it's not always timed with a cloud model. So the idea that if you're on premises, that you can modernize to an amazing database, like Mongo DB, perhaps run it in Kubernetes, run it in virtual machines in your own data center.\nBut then, two years later, if that application needs to move to the cloud, it's just a seamless migration into Atlas on any cloud provider you choose. That's a very unique and powerful, compelling story for, especially for large organizations, because what they don't want is to modernize or rewrite an application twice, once to get the value on pro-business and then have to think about it again later, if the app moves to the cloud, it's one seamless journey and that hybrid model.\nOf moving customers to words outlets over time is really been a cohesive strategies. It's not just Atlas, it's open source and the enterprise version all seamlessly playing in a uniform model.\n\nMike: [00:25:04] Hmm. Fantastic. And, I love that, the journey that. Atlas has been on it's really become a platform. It's no longer just a database as a service. It's really, an indispensable tool that developers can use to increase agility. And, I'm just looking back at the kind of steady drum beat of additional features that have been added to, to really transform Atlas into a platform starting with free tier and increasing the regions and the coverage and.\nClient side field level encryption. And just the list of features that have been added is pretty incredible. I think I would be remiss if I didn't ask both of you to maybe talk a little bit about the future. Obviously there's things like, I don't know, like invisibility of the service and AI and ML and what are some of the things that you're thinking about, I guess, without, tipping your cards too much.\nTalk about what's interesting to you in the future of cloud.\n\nAndrew: [00:25:56] I'll take a quick pass. Just I love the question to me, the most important thing for us to be just laser focused on always going forward. 
Is to deliver a truly integrated, elegant experience for our end customers that is just differentiated from essentially a user experience perspective from everything else that's out there.\nAnd the platform is such a fundamental part of that, being a possibility, it starts with that document data model, which is this super set data model that can express within it, everything from key value to, essentially relational and object and. And then behind making it possible to access all of those different data models through a single developer native interface, but then making it possible to drive different physical workloads on the backend of that.\nAnd what by workloads, I mean, different ways of storing the data in different algorithms used to analyze that data, making it possible to do everything from operational transactional to those search use cases to here mentioned a data lake and mobile synchronization. Streaming, et cetera, making all of that easily accessible through that single elegant interface.\nThat is something that requires just constant focus on not adding new knobs, not adding new complex service area, not adding a millions of new permutations, but making it elegant and accessible to do all of these wonderful data models and workload types and expanding out from there. So you'll just see us keep, I think focusing. Yeah. \n\nMike: [00:27:15] Fantastic. I'll just give a plug. This is the first in the series that we're calling on ramp to Mongo to be Atlas. We're going to go a little bit deeper into the architecture. We're going to talk with some engineering folks. Then we're going to go into the mobile space and talk with Alexander Stevenson and and talk a little bit about the realm story.\nAnd then we're going to wrap it up with a panel discussion where we'll actually have some customer comments and and we'll provide a little bit. Detail into what the future might look like in that round table discussion with all of the guests. I just want to thank both of you for taking the time to chat with us and I'll give you a space to, to mention anything else you'd like to talk about before we wrap the episode up. Sahir, anything?\n\nSahir: [00:27:54] Nothing really to add other than just a thank you. And it's been humbling to think about the fact that this product is growing so fast in five years, and it feels like we're just getting started. I would encourage everyone to keep an eye out for our annual user conference next month.\nAnd some of the exciting announcements we have and Atlas and across the portfolio going forward, certainly not letting off the gas.\n\nMike: [00:28:15] Great. Any final words Andrew? \n\nAndrew: [00:28:18] yeah, I'll just say, mom going to be very much a big ten community. Over a hundred thousand people are signing up for Atlas every month. We invest so much in making it easy to absorb, learn, dive into to university courses, dive into our wonderful documentation and build amazing things on us.\nWe're here to help and we look forward to seeing you on the platform. \n\nMike: [00:28:36] Fantastic. Jesse, any final words?\n\nJesse: [00:28:38] No. I want just want to thank both of you for joining us. 
It's been very great to hear about how it got started and look forward to the next episodes.\n\nMike: [00:28:49] right.\n\nSahir: [00:28:49] Thanks guys.\n\nMike: [00:28:50] Thank you.\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "My name is Michael Lynn, and I\u2019m a developer advocate at MongoDB.\n\nI\u2019m excited to welcome you to this, the first in a series of episodes created to celebrate the five year anniversary of the launch of MongoDB Atlas, our Database as a Service Platform.\n\nIn this series, my co-hosts, Jesse Hall, and Nic Raboy will talk with some of the people responsible for building, and launching the platform that helped to transform MongoDB as a company.\n\nbeginning with Episode 1, the On ramp to Atlas talking with Sahir Azam, Chief Product Officer, and Andrew Davidson, VP of Product about the strategic shift from a software company to a software as a service business.\n\nIn episode 2, Zero to Database as a Service, we\u2019ll chat with Cailin Nelson, SVP of Engineering, and Cory Mintz, VP of Engineering - about Atlas as a product and how it was built and launched.\n\nIn episode 3, we\u2019ll Go Mobile, talking with Alexander Stigsen, Founder of the Realm Mobile Database which has become a part of the Atlas Platform. \n\nIn episode 4, we\u2019ll wrap the series up with a panel discussion and review some of our valued customer comments about the platform. \n\nThanks so much for tuning in, please take a moment to subscribe for more episodes and if you enjoy what you hear, please don\u2019t forget to provide a comment, and a rating to help us continue to improve.\n", "contentType": "Podcast"}, "title": "Atlas 5-Year Anniversary Podcast Series Episode 1 - Onramp to Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/mongodb-charts-embedding-sdk-react", "action": "created", "body": "# MongoDB Charts Embedding SDK with React\n\n## Introduction\n\nIn the previous blog post of this series, we created a React website that was retrieving a list of countries using Axios and a REST API hosted in MongoDB Realm.\n\nIn this blog post, we will continue to build on this foundation and create a dashboard with COVID-19 charts, built with MongoDB Charts and embedded in a React website with the MongoDB Charts Embedding SDK.\n\nTo add some spice in the mix, we will use our list of countries to create a dynamic filter so we can filter all the COVID-19 charts by country.\n\nYou can see the **final result here** that I hosted in a MongoDB Realm application using the static hosting feature available.\n\n## Prerequisites\n\nThe code of this project is available on GitHub in this repository.\n\n```shell\ngit clone git@github.com:mongodb-developer/mongodb-charts-embedded-react.git\n```\n\nTo run this project, you will need `node` and `npm` in a recent version. Here is what I'm currently using:\n\n```shell\n$ node -v \nv14.17.1\n$ npm -v\n8.0.0\n```\n\nYou can run the project locally like so:\n\n```sh\n$ cd mongodb-realm-react-charts\n$ npm install\n$ npm start\n```\n\nIn the next sections of this blog post, I will explain what we need to do to make this project work.\n\n## Create a MongoDB Charts Dashboard\n\nBefore we can actually embed our charts in our custom React website, we need to create them in MongoDB Charts.\n\nHere is the link to the dashboard I created for this website. 
It looks like this.\n\nIf you want to use the same data as me, check out this blog post about the Open Data COVID-19 Project and especially this section to duplicate the data in your own cluster in MongoDB Atlas.\n\nAs you can see in the dashboard, my charts are not filtered by country here. You can find the data of all the countries in the four charts I created.\n\n## Enable the Filtering and the Embedding\n\nTo enable the filtering when I'm embedding my charts in my website, I must tell MongoDB Charts which field(s) I will be able to filter by, based on the fields available in my collection. Here, I chose to filter by a single field, `country`, and I chose to enable unauthenticated access for this public blog post (see below).\n\nIn the `User Specified Filters` field, I added `country` and chose to use the JavaScript SDK option instead of the iFrame alternative, which is less convenient to use for a React website with dynamic filters.\n\nFor each of the four charts, I need to retrieve the `Charts Base URL` (unique for a dashboard) and the `Charts IDs`.\n\nNow that we have everything we need, we can go into the React code.\n\n## React Website\n\n### MongoDB Charts Embedding SDK\n\nFirst things first: We need to install the MongoDB Charts Embedding SDK in our project.\n\n```shell\nnpm i @mongodb-js/charts-embed-dom\n```\n\nIt's already done in the project I provided above, but it's not if you are following along from the first blog post.\n\n### React Project\n\nMy React project is made with just two function components: `Dashboard` and `Chart`.\n\nThe `index.js` root of the project is just calling the `Dashboard` function component.\n\n```js\nimport React from 'react';\nimport ReactDOM from 'react-dom';\nimport Dashboard from \"./Dashboard\";\n\nReactDOM.render(\n <Dashboard/>\n, document.getElementById('root'));\n```\n\nThe `Dashboard` is the central piece of the project: \n\n```js\nimport './Dashboard.css';\nimport {useEffect, useState} from \"react\";\nimport axios from \"axios\";\nimport Chart from \"./Chart\";\n\nconst Dashboard = () => {\n const url = 'https://webhooks.mongodb-stitch.com/api/client/v2.0/app/covid-19-qppza/service/REST-API/incoming_webhook/metadata';\n const [countries, setCountries] = useState([]);\n const [selectedCountry, setSelectedCountry] = useState(\"\");\n const [filterCountry, setFilterCountry] = useState({});\n\n function getRandomInt(max) {\n return Math.floor(Math.random() * max);\n }\n\n useEffect(() => {\n axios.get(url).then(res => {\n setCountries(res.data.countries);\n const randomCountryNumber = getRandomInt(res.data.countries.length);\n let randomCountry = res.data.countries[randomCountryNumber];\n setSelectedCountry(randomCountry);\n setFilterCountry({\"country\": randomCountry});\n })\n }, [])\n\n useEffect(() => {\n if (selectedCountry !== \"\") {\n setFilterCountry({\"country\": selectedCountry});\n }\n }, [selectedCountry])\n\n return \n MongoDB Charts\n COVID-19 Dashboard with Filters\n \n {countries.map(c => \n setSelectedCountry(c)} checked={c === selectedCountry}/>\n {c}\n )}\n \n \n \n \n \n \n \n \n};\n\nexport default Dashboard;\n```\n\nIt's responsible for a few things:\n\n- Line 17 - Retrieve the list of countries from the REST API using Axios (cf. the previous blog post).\n- Lines 18-22 - Select a random country in the list for the initial value.\n- Lines 22 & 26 - Update the filter when a new value is selected (randomly or manually).\n- Line 32 `countries.map(...)` - Use the list of countries to build a list of radio buttons to update the filter.\n- Line 32 `<Chart/>` x4 - Call the `Chart` component one time for each chart with the appropriate props, including the filter and the Chart ID.
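\n\nIf you are not building with React, the same embedding flow works with the plain JavaScript SDK. Here is a minimal sketch (the base URL and chart ID are placeholders you would copy from your own dashboard's embed options):\n\n```js\nimport ChartsEmbedSDK from \"@mongodb-js/charts-embed-dom\";\n\nconst sdk = new ChartsEmbedSDK({baseUrl: \"<your Charts Base URL>\"});\nconst chart = sdk.createChart({chartId: \"<your Chart ID>\", theme: \"dark\"});\n\n// Render into any DOM element, then filter with a regular MQL query document\n// on the field(s) allowed in `User Specified Filters` (here: country).\nchart.render(document.getElementById(\"chart\"))\n .then(() => chart.setFilter({\"country\": \"France\"}))\n .catch(err => console.log(\"Error while rendering or filtering.\", err));\n```\n\nThe filter document has the same shape whether you call `setFilter` from plain JavaScript or from the React `Chart` component shown below.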
\n\nAs you may have noticed here, I'm using the same filter `filterCountry` for all the Charts, but nothing prevents me from using a custom filter for each Chart.\n\nYou may also have noticed a very minimalistic CSS file `Dashboard.css`. Here it is: \n\n```css\n.title {\n text-align: center;\n}\n\n.form {\n border: solid black 1px;\n}\n\n.elem {\n overflow: hidden;\n display: inline-block;\n width: 150px;\n height: 20px;\n}\n\n.charts {\n text-align: center;\n}\n\n.chart {\n border: solid #589636 1px;\n margin: 5px;\n display: inline-block;\n}\n```\n\nThe `Chart` component looks like this:\n\n```js\nimport React, {useEffect, useRef, useState} from 'react';\nimport ChartsEmbedSDK from \"@mongodb-js/charts-embed-dom\";\n\nconst Chart = ({filter, chartId, height, width}) => {\n const sdk = new ChartsEmbedSDK({baseUrl: 'https://charts.mongodb.com/charts-open-data-covid-19-zddgb'});\n const chartDiv = useRef(null);\n const [rendered, setRendered] = useState(false);\n const [chart] = useState(sdk.createChart({chartId: chartId, height: height, width: width, theme: \"dark\"}));\n\n useEffect(() => {\n chart.render(chartDiv.current).then(() => setRendered(true)).catch(err => console.log(\"Error during Charts rendering.\", err));\n }, [chart]);\n\n useEffect(() => {\n if (rendered) {\n chart.setFilter(filter).catch(err => console.log(\"Error while filtering.\", err));\n }\n }, [chart, filter, rendered]);\n\n return <div className={'chart'} ref={chartDiv}/>;\n};\n\nexport default Chart;\n```\n\nThe `Chart` component isn't doing much. It's just responsible for rendering the chart **once** when the page is loaded, and for reloading the chart if the filter is updated, so that it displays the correct data (thanks to React).\n\nNote that the second useEffect (with the `chart.setFilter(filter)` call) shouldn't be executed until the chart is done rendering. That's why it's protected by the `rendered` state, which is only set to `true` once the chart has been rendered on the screen.\n\nAnd voil\u00e0! If everything went as planned, you should end up with a (not very) beautiful website like this one.\n\n## Conclusion\n\nIn this blog post, you learned how to embed MongoDB Charts into a React website using the MongoDB Charts Embedding SDK.\n\nWe also learned how to create dynamic filters for the charts using `useEffect()`.\n\nWe didn't learn how to secure the Charts with an authentication token, but you can learn how to do that in this documentation. \n\nIf you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Atlas", "React"], "pageDescription": "In this blog post, we are creating a dynamic dashboard using React and the MongoDB Charts Embedding SDK with filters.", "contentType": "Tutorial"}, "title": "MongoDB Charts Embedding SDK with React", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/paginations-time-series-collections-in-five-minutes", "action": "created", "body": "# Paginations 1.0: Time Series Collections in five minutes\n\n# Paginations 1.0: Time-Series Collections in 5 Minutes\n\nAs someone who loves to constantly measure myself and everything around me, I was excited to see MongoDB add dedicated time-series collections in MongoDB 5.0. 
Previously, MongoDB had been great for handling time-series data, but only if you were prepared to write some fairly complicated insert and update code and use a complex schema. In 5.0, all the hard work is done for you, including lots of behind-the-scenes optimization.\n\nWorking with time-series data brings some interesting technical challenges for databases. Let me explain.\n\n## What is time-series data?\n\nTime-series data is where we have multiple related data points that have a time, a source, and one or more values. For example, I might be recording my speed on my bike and the gradient of the road, so I have the time, the source (me on that bike), and two data values (speed and gradient). The source would change if it was a different bike or another person riding it.\n\nTime-series data is not simply any data that has a date component, but specifically data where we want to look at how values change over a period of time and so need to compare data for a given time window or windows. On my bike, am I slowing down over time on a ride? Or does my speed vary with the road gradient?\n\nThis means when we store time-series data, we usually want to retrieve or work with all data points for a time period, or all data points for a time period for one or more specific sources.\n\nThese data points tend to be small. A time is usually eight bytes, an identifier is normally only (at most) a dozen bytes, and a data point is more often than not one or more eight-byte floating point numbers. So, each \"record\" we need to store and access is perhaps 50 or 100 bytes in length.\n\n## Why time-series data needs special handling\n\nThis is where dealing with time-series data gets interesting\u2014at least, I think it's interesting. Most databases, MongoDB included, store data on disks, and those are read and written by the underlying hardware in blocks of typically 4, 8, or 32 KB at a time. Because of these disk blocks, the layers on top of the physical disks\u2014virtual memory, file systems, operating systems, and databases\u2014work in blocks of data too. MongoDB, like all databases, uses blocks of records when reading, writing, and caching. Unfortunately, this can make reading and writing these tiny little time-series records much less efficient.\n\nThis animation shows what happens when these records are simply inserted into a general purpose database such as MongoDB or an RDBMS.\n\nAs each record is received, it is stored sequentially in a block on the disk. To allow us to access them, we use two indexes: one with the unique record identifier, which is required for replication, and the other with the source and timestamp to let us find everything for a specific device over a time period.\n\nThis is fine for writing data. We have quick sequential writing and we can amortise disk flushes of blocks to get a very high write speed.\n\nThe issue arises when we read. In order to find the data about one device over a time period, we need to fetch many of these small records. Due to the way they were stored, the records we want are spread over multiple database blocks and disk blocks. For each block we have to read, we pay a penalty of having to read and process the whole block, using database cache space equivalent to the block size. This is a lot of wasted compute resources.
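\n\nTo make that access pattern concrete, the read we typically care about looks something like the following sketch (using a `readings` collection and the `timestamp` and `deviceId` fields from the next section; the device name is just an example value), and it is exactly this query shape that suffers when the matching records are scattered across many blocks:\n\n```\ndb.readings.find({\n \"deviceId\" : \"bike-1\",\n \"timestamp\" : { $gte: ISODate(\"2021-06-01T00:00:00Z\"),\n $lt: ISODate(\"2021-06-02T00:00:00Z\") }\n}).sort({ \"timestamp\" : 1 })\n```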
\n\n## Time-series specific collections\n\nMongoDB 5.0 has specialized time-series collections optimized for this type of data, which we can use simply by adding two parameters when creating a collection.\n\n```\n db.createCollection(\"readings\",\n { \"timeseries\" : { \"timeField\" : \"timestamp\",\n \"metaField\" : \"deviceId\" } })\n```\n \nWe don't need to change the code we use for reading or writing at all. MongoDB takes care of everything for us behind the scenes. This second animation shows how.\n\nWith a time-series collection, MongoDB organizes the writes so that data for the same source is stored in the same block, alongside other data points from a similar point in time. The blocks are limited in size (because so are disk blocks) and once we have enough data in a block, we will automatically create another one. The important point is that each block will cover one source and one span of time, and we have an index for each block to help us find that span.\n\nDoing this means we can have much smaller indexes as we only have one unique identifier per block. We also only have one index entry per block, typically for the source and time range. This results in an overall reduction in index size of hundreds of times.\n\nNot only that, but by storing data like this, MongoDB is better able to apply compression. Over time, data for a source will not change randomly, so we can compress the changes in values that are co-located. This makes for a data size improvement of at least three to five times.\n\nAnd when we come to read it, we can read it several times faster, as we no longer need to read data that is not relevant to our query just to get to the data we want.\n\n## Summing up time-series collections\n\nAnd that, in a nutshell, is MongoDB time-series collections. I can just specify the time and source fields when creating a collection and MongoDB will reorganise my cycling data to make it three to five times smaller, as well as faster, to read and analyze.\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "A brief, animated introduction to what Time-Series data is, why it is challenging for traditional database structures and how MongoDB Time-Series Collections are specially adapted to managing this sort of data.", "contentType": "Article"}, "title": "Paginations 1.0: Time Series Collections in five minutes", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/triggers-tricks-preimage-cass", "action": "created", "body": "# Triggers Treats and Tricks: Cascade Document Delete Using Triggers Preimage\n\nIn this blog series, we are trying to inspire you with some reactive Realm trigger use cases. We hope these will help you bring your application pipelines to the next level.\n\nEssentially, triggers are components in our Atlas projects/Realm apps that allow a user to define a custom function to be invoked on a specific event.\n\n* **Database triggers:** We have triggers that can be triggered based on database events\u2014like ``deletes``, ``inserts``, ``updates``, and ``replaces``\u2014called database triggers.\n* **Scheduled triggers**: We can schedule a trigger based on a ``cron`` expression via scheduled triggers.\n* **Authentication triggers**: These triggers are only relevant for Realm authentication. 
They are triggered by one of the Realm auth providers' authentication events and can be configured only via a Realm application.\n\nRelationships are an important part of any data design. Relational databases use primary and foreign key concepts to form those relationships when normalizing the data schema. Using those concepts, it allows a \u201ccascading'' delete, which means a primary key parent delete will delete the related siblings.\n\nMongoDB allows you to form relationships in different ways\u2014for example, by embedding documents or arrays inside a parent document. This allows the document to contain all of its relationships within itself and therefore it does the cascading delete out of the box. Consider the following example between a user and the assigned tasks of the user:\n\n``` js\n{\nuserId : \"abcd\",\nusername : \"user1@example.com\" \nTasks : \n { taskId : 1, \n Details : [\"write\",\"print\" , \"delete\"]\n },\n { taskId : 1, \n Details : [\"clean\",\"cook\" , \"eat\"]\n }\n}\n```\n\nDelete of this document will delete all the tasks.\n\nHowever, in some design cases, we will want to separate the data of the relationship into Parent and Sibling collections\u2014for example, ``games`` collection holding data for a specific game including ids referencing a ``quests`` collection holding a per game quest. As amount of quest data per game can be large and complex, we\u2019d rather not embed it in ``games`` but reference:\n\n**Games collection**\n\n``` js\n{\n _id: ObjectId(\"60f950794a61939b6aac12a4\"),\n userId: 'xxx',\n gameId: 'abcd-wxyz',\n gameName: 'Crash',\n quests: [\n {\n startTime: ISODate(\"2021-01-01T22:00:00.000Z\"),\n questId: ObjectId(\"60f94b7beb7f78709b97b5f3\")\n },\n {\n questId: ObjectId(\"60f94bbfeb7f78709b97b5f4\"),\n startTime: ISODate(\"2021-01-02T02:00:00.000Z\")\n }\n ]\n }\n```\n\nEach game has a quest array with a start time of this quest and a reference to the quests collection where the quest data reside.\n\n**Quests collection**\n\n``` js\n{\n _id: ObjectId(\"60f94bbfeb7f78709b97b5f4\"),\n questName: 'War of fruits ',\n userId: 'xxx',\n details: {\n lastModified: ISODate(\"2021-01-01T23:00:00.000Z\"),\n currentState: 'in-progress'\n },\n progressRounds: [ 'failed', 'failed', 'in-progress' ]\n},\n{\n _id: ObjectId(\"60f94b7beb7f78709b97b5f3\"),\n questName: 'War of vegetable ',\n userId: 'xxx',\n details: {\n lastModified: ISODate(\"2021-01-01T22:00:00.000Z\"),\n currentState: 'failed'\n },\n progressRounds: [ 'failed', 'failed', 'failed' ]\n}\n```\n\nWhen a game gets deleted, we would like to purge the relevant quests in a cascading delete. This is where the **Preimage** trigger feature comes into play.\n\n## Preimage Trigger Option\n\nThe Preimage option allows the trigger function to receive a snapshot of the deleted/modified document just before the change that triggered the function. 
This feature is enabled by enriching the oplog of the underlying replica set to store this snapshot as part of the change.\nRead more on our [documentation.\n\nIn our case, we will use this feature to capture the parent deleted document full snapshot (games) and delete the related relationship documents in the sibling collection (quests).\n\n## Building the Trigger\n\nWhen we define the database trigger, we will point it to the relevant cluster and parent namespace to monitor and trigger when a document is deleted\u2014in our case, ``GamesDB.games``.\n\nTo enable the \u201cPreimage\u201d feature, we will toggle Document Preimage to \u201cON\u201d and specify our function to handle the cascade delete logic.\n\n**deleteCascadingQuests - Function**\n\n``` js\nexports = async function(changeEvent) {\n\n // Get deleted document preImage using \"fullDocumentBeforeChange\"\n var deletedDocument = changeEvent.fullDocumentBeforeChange;\n\n // Get sibling collection \"quests\"\n const quests = context.services.get(\"mongodb-atlas\").db(\"GamesDB\").collection(\"quests\");\n\n // Delete all relevant quest documents.\n deletedDocument.quests.map( async (quest) => {\n await quests.deleteOne({_id : quest.questId});\n })\n};\n```\n\nAs you can see, the function gets the fully deleted \u201cgames\u201d document present in \u201cchangeEvent.fullDocumentBeforeChange\u201d and iterates over the \u201cquests\u201d array. For each of those array elements, the function runs a \u201cdeleteOne\u201d on the \u201cquests\u201d collection to delete the relevant quests documents.\n\n## Deleting the Parent Document\n\nNow let's put our trigger to the test by deleting the game from the \u201cgames\u201d collection:\n\nOnce the document was deleted, our trigger was fired and now the \u201cquests\u201d collection is empty as it had only quests related to this deleted game:\n\nOur cascade delete works thanks to triggers \u201cPreimages.\u201d\n\n## Wrap Up\n\nThe ability to get a modified or deleted full document opens a new world of opportunities for trigger use cases and abilities. We showed here one option to use this new feature but this can be used for many other scenarios, like tracking complex document state changes for auditing or cleanup images storage using the deleted metadata documents.\n\nWe suggest that you try this new feature considering your use case and look forward to the next trick along this blog series.\n\nWant to keep going? Join the conversation over at our community forums!", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "In this article, we will show you how to use a preimage feature to perform cascading relationship deletes via a trigger - based on the deleted parent document.", "contentType": "Article"}, "title": "Triggers Treats and Tricks: Cascade Document Delete Using Triggers Preimage", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/5-ways-reduce-costs-atlas", "action": "created", "body": "# 5 Ways to Reduce Costs With MongoDB Atlas\n\nNow more than ever, businesses are looking for ways to reduce or eliminate costs wherever possible. As a cloud service, MongoDB Atlas is a platform that enables enhanced scalability and reduces dependence on the kind of fixed costs businesses experience when they deploy on premises instances of MongoDB. 
This article will help you understand ways you can reduce costs with your MongoDB Atlas deployment.\n\n## #1 Pause Your Cluster\n\nPausing a cluster essentially brings the cluster down so if you still have active applications depending on this cluster, it's probably not a good idea. However, pausing the cluster leaves the infrastructure and data in place so that it's available when you're ready to return to business. You can pause a cluster for up to 30 days but if you do not resume the cluster within 30 days, Atlas automatically resumes the cluster. Clusters that have been paused are billed at a different, lower rate than active clusters. Read more about pausing clusters in our documentation, or check out this great article by Joe Drumgoole, on automating the process of pausing and restarting your clusters.\n\n## #2 Scale Your Cluster Down\n\nMongoDB Atlas was designed with scalability in mind and while scaling down is probably the last thing on our minds as we prepare for launching a Startup or a new application, it's a reality that we must all face.\n\nFortunately, the engineers at MongoDB that created MongoDB Atlas, our online database as a service, created the solution with bidirectional scalability in mind. The process of scaling a MongoDB Cluster will change the underlying infrastructure associated with the hosts on which your database resides. Scaling up to larger nodes in a cluster is the very same process as scaling down to smaller clusters.\n\n## #3 Enable Elastic Scalability\n\nAnother great feature of MongoDB Atlas is the ability to programmatically control the size of your cluster based on its use. MongoDB Atlas offers scalability of various components of the platform including Disk, and Compute. With compute auto-scaling, you have the ability to configure your cluster with a maximum and minimum cluster size. You can enable compute auto-scaling through either the UI or the public API. Auto-scaling is available on all clusters M10 and higher on Azure and GCP, and on all \"General\" class clusters M10 and higher on AWS. To enable auto-scaling from the UI, select the Auto-scale \"Cluster tier\" option, and choose a maximum cluster size from the available options.\n\nAtlas analyzes the following cluster metrics to determine when to scale a cluster, and whether to scale the cluster tier up or down:\n\n- CPU Utilization\n- Memory Utilization\n\nTo learn more about how to monitor cluster metrics, see View Cluster Metrics.\n\nOnce you configure auto-scaling with both a minimum and a maximum cluster size, Atlas checks that the cluster would not be in a tier outside of your specified Cluster Size range. If the next lowest cluster tier is within your Minimum Cluster Size range, Atlas scales the cluster down to the next lowest tier if both of the following are true:\n\n- The average CPU Utilization and Memory Utilization over the past 72 hours is below 50%, and\n- The cluster has not been scaled down (manually or automatically) in the past 72 hours.\n\nTo learn more about downward auto-scaling behavior, see Considerations for Downward Auto-Scaling.\n\n## #4 Cleanup and Optimize\n\nYou may also be leveraging old datasets that you no longer need. Conduct a thorough analysis of your clusters, databases, and collections to remove any duplicates, and old, outdated data. Also, remove sample datasets if you're not using them. Many developers will load these to explore and then leave them.\n\n## #5 Terminate Your Cluster\n\nAs a last resort, you may want to remove your cluster by terminating it. 
Please be aware that terminating a cluster is a destructive operation -once you terminate a cluster, it is gone. If you want to get your data back online and available, you will need to restore it from a backup. You can restore backups from cloud provider snapshots or from continuous backups.\n\nBe sure you download and secure your backups before terminating as you will no longer have access to them once you terminate.\n\nI hope you found this information valuable and that it helps you reduce or eliminate unnecessary expenses. If you have questions, please feel free to reach out. You will find me in the MongoDB Community or on Twitter @mlynn. Please let me know if I can help in any way.\n\n>\n>\n>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.\n>\n>\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Explore five ways to reduce MongoDB Atlas costs.", "contentType": "Article"}, "title": "5 Ways to Reduce Costs With MongoDB Atlas", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/rag-atlas-vector-search-langchain-openai", "action": "created", "body": "# RAG with Atlas Vector Search, LangChain, and OpenAI\n\nWith all the recent developments (and frenzy!) around generative AI, there has been a lot of focus on LLMs, in particular. However, there is also another emerging trend that many are unaware of: the rise of vector stores. Vector stores or vector databases play a crucial role in building LLM applications. This puts Atlas Vector Search in the vector store arena that has a handful of contenders.\n\nThe goal of this tutorial is to provide an overview of the key-concepts of Atlas Vector Search as a vector store, and LLMs and their limitations. We\u2019ll also look into an upcoming paradigm that is gaining rapid adoption called \"retrieval-augmented generation\" (RAG). We will also briefly discuss the LangChain framework, OpenAI models, and Gradio. Finally, we will tie everything together by actually using these concepts + architecture + components in a real-world application. By the end of this tutorial, readers will leave with a high-level understanding of the aforementioned concepts, and a renewed appreciation for Atlas Vector Search!\n\n## **LLMs and their limitations**\n\n**Large language models (LLMs)** are a class of deep neural network models that have been trained on vast amounts of text data, which enables them to understand and generate human-like text. LLMs have revolutionized the field of natural language processing, but they do come with certain limitations:\n\n1. **Hallucinations**: LLMs sometimes generate factually inaccurate or ungrounded information, a phenomenon known as \u201challucinations.\u201d\n2. **Stale data**: LLMs are trained on a static dataset that was current only up to a certain point in time. This means they might not have information about events or developments that occurred after their training data was collected.\n3. **No access to users\u2019 local data**: LLMs don\u2019t have access to a user\u2019s local data or personal databases. They can only generate responses based on the knowledge they were trained on, which can limit their ability to provide personalized or context-specific responses.\n4. **Token limits**: LLMs have a maximum limit on the number of tokens (pieces of text) they can process in a single interaction. 
Tokens in LLMs are the basic units of text that the models process and generate. They can represent individual characters, words, subwords, or even larger linguistic units. For example, the token limit for OpenAI\u2019s *gpt-3.5-turbo* is 4096.\n\n**Retrieval-augmented generation (RAG)**\n\nThe **retrieval-augmented generation (RAG)** architecture was developed to address these issues. RAG uses vector search to retrieve relevant documents based on the input query. It then provides these retrieved documents as context to the LLM to help generate a more informed and accurate response. That is, instead of generating responses purely from patterns learned during training, RAG uses those relevant retrieved documents to help generate a more informed and accurate response. This helps address the above limitations in LLMs. Specifically:\n\n- RAGs minimize hallucinations by grounding the model\u2019s responses in factual information.\n- By retrieving information from up-to-date sources, RAG ensures that the model\u2019s responses reflect the most current and accurate information available.\n- While RAG does not directly give LLMs access to a user\u2019s local data, it does allow them to utilize external databases or knowledge bases, which can be updated with user-specific information.\n- Also, while RAG does not increase an LLM\u2019s token limit, it does make the model\u2019s use of tokens more efficient by retrieving *only the most relevant documents* for generating a response.\n\nThis tutorial demonstrates how the RAG architecture can be leveraged with Atlas Vector Search to build a question-answering application against your own data.\n\n## **Application architecture**\n\nThe architecture of the application looks like this:\n\n.\n\n1. Install the following packages:\n\n ```bash\n pip3 install langchain pymongo bs4 openai tiktoken gradio requests lxml argparse unstructured\n ```\n\n2. Create the OpenAI API key. This requires a paid account with OpenAI, with enough credits. OpenAI API requests stop working if credit balance reaches $0.\n\n 1. Save the OpenAI API key in the *key_param.py* file. The filename is up to you.\n 2. Optionally, save the MongoDB URI in the file, as well.\n\n3. Create two Python scripts:\n\n 1. load_data.py: This script will be used to load your documents and ingest the text and vector embeddings, in a MongoDB collection.\n 2. extract_information.py: This script will generate the user interface and will allow you to perform question-answering against your data, using Atlas Vector Search and OpenAI.\n\n4. Import the following libraries:\n\n ```python\n from pymongo import MongoClient\n from langchain.embeddings.openai import OpenAIEmbeddings\n from langchain.vectorstores import MongoDBAtlasVectorSearch\n from langchain.document_loaders import DirectoryLoader\n from langchain.llms import OpenAI\n from langchain.chains import RetrievalQA\n import gradio as gr\n from gradio.themes.base import Base\n import key_param\n ```\n\n**Sample documents**\n\nIn this tutorial, we will be loading three text files from a directory using the DirectoryLoader. These files should be saved to a directory named **sample_files.** The contents of these text files are as follows *(none of these texts contain PII or CI)*:\n\n1. 
log_example.txt\n\n ```\n 2023-08-16T16:43:06.537+0000 I MONGOT 63528f5c2c4f78275d37902d-f5-u6-a0 BufferlessChangeStreamApplier] [63528f5c2c4f78275d37902d-f5-u6-a0 BufferlessChangeStreamApplier] Starting change stream from opTime=Timestamp{value=7267960339944178238, seconds=1692203884, inc=574}2023-08-16T16:43:06.543+0000 W MONGOT [63528f5c2c4f78275d37902d-f5-u6-a0 BufferlessChangeStreamApplier] [c.x.m.r.m.common.SchedulerQueue] cancelling queue batches for 63528f5c2c4f78275d37902d-f5-u6-a02023-08-16T16:43:06.544+0000 E MONGOT [63528f5c2c4f78275d37902d-f5-u6-a0 InitialSyncManager] [BufferlessInitialSyncManager 63528f5c2c4f78275d37902d-f5-u6-a0] Caught exception waiting for change stream events to be applied. Shutting down.com.xgen.mongot.replication.mongodb.common.InitialSyncException: com.mongodb.MongoCommandException: Command failed with error 286 (ChangeStreamHistoryLost): 'Executor error during getMore :: caused by :: Resume of change stream was not possible, as the resume point may no longer be in the oplog.' on server atlas-6keegs-shard-00-01.4bvxy.mongodb.net:27017.2023-08-16T16:43:06.545+0000 I MONGOT [indexing-lifecycle-3] [63528f5c2c4f78275d37902d-f5-u6-a0 ReplicationIndexManager] Transitioning from INITIAL_SYNC to INITIAL_SYNC_BACKOFF.2023-08-16T16:43:18.068+0000 I MONGOT [config-monitor] [c.x.m.config.provider.mms.ConfCaller] Conf call response has not changed. Last update date: 2023-08-16T16:43:18Z.2023-08-16T16:43:36.545+0000 I MONGOT [indexing-lifecycle-2] [63528f5c2c4f78275d37902d-f5-u6-a0 ReplicationIndexManager] Transitioning from INITIAL_SYNC_BACKOFF to INITIAL_SYNC.\n ```\n\n2. chat_conversation.txt\n\n ```\n Alfred: Hi, can you explain to me how compression works in MongoDB? Bruce: Sure! MongoDB supports compression of data at rest. It uses either zlib or snappy compression algorithms at the collection level. When data is written, MongoDB compresses and stores it compressed. When data is read, MongoDB uncompresses it before returning it. Compression reduces storage space requirements. Alfred: Interesting, that's helpful to know. Can you also tell me how indexes are stored in MongoDB? Bruce: MongoDB indexes are stored in B-trees. The internal nodes of the B-trees contain keys that point to children nodes or leaf nodes. The leaf nodes contain references to the actual documents stored in the collection. Indexes are stored in memory and also written to disk. The in-memory B-trees provide fast access for queries using the index.Alfred: Ok that makes sense. Does MongoDB compress the indexes as well?Bruce: Yes, MongoDB also compresses the index data using prefix compression. This compresses common prefixes in the index keys to save space. However, the compression is lightweight and focused on performance vs storage space. Index compression is enabled by default.Alfred: Great, that's really helpful context on how indexes are handled. One last question - when I query on a non-indexed field, how does MongoDB actually perform the scanning?Bruce: MongoDB performs a collection scan if a query does not use an index. It will scan every document in the collection in memory and on disk to select the documents that match the query. This can be resource intensive for large collections without indexes, so indexing improves query performance.Alfred: Thank you for the detailed explanations Bruce, I really appreciate you taking the time to walk through how compression and indexes work under the hood in MongoDB. Very helpful!Bruce: You're very welcome! 
I'm glad I could explain the technical details clearly. Feel free to reach out if you have any other MongoDB questions.\n ```\n\n3. aerodynamics.txt\n\n ```\n Boundary layer control, achieved using suction or blowing methods, can significantly reduce the aerodynamic drag on an aircraft's wing surface.The yaw angle of an aircraft, indicative of its side-to-side motion, is crucial for stability and is controlled primarily by the rudder.With advancements in computational fluid dynamics (CFD), engineers can accurately predict the turbulent airflow patterns around complex aircraft geometries, optimizing their design for better performance.\n ```\n\n**Loading the documents**\n\n1. Set the MongoDB URI, DB, Collection Names:\n\n ```python\n client = MongoClient(key_param.MONGO_URI)\n dbName = \"langchain_demo\"\n collectionName = \"collection_of_text_blobs\"\n collection = client[dbName][collectionName]\n ```\n\n2. Initialize the DirectoryLoader:\n\n ```python\n loader = DirectoryLoader( './sample_files', glob=\"./*.txt\", show_progress=True)\n data = loader.load()\n ```\n\n3. Define the OpenAI Embedding Model we want to use for the source data. The embedding model is different from the language generation model:\n\n ```python\n embeddings = OpenAIEmbeddings(openai_api_key=key_param.openai_api_key)\n ```\n\n4. Initialize the VectorStore. Vectorise the text from the documents using the specified embedding model, and insert them into the specified MongoDB collection.\n\n ```python\n vectorStore = MongoDBAtlasVectorSearch.from_documents( data, embeddings, collection=collection )\n ```\n\n5. Create the following Atlas Search index on the collection, please ensure the name of your index is set to `default`:\n\n```json\n{\n \"fields\": [{\n \"path\": \"embedding\",\n \"numDimensions\": 1536,\n \"similarity\": \"cosine\",\n \"type\": \"vector\"\n }]\n}\n```\n\n**Performing vector search using Atlas Vector Search**\n\n1. Set the MongoDB URI, DB, and Collection Names:\n\n ```python\n client = MongoClient(key_param.MONGO_URI)\n dbName = \"langchain_demo\"\n collectionName = \"collection_of_text_blobs\"\n collection = client[dbName][collectionName]\n ```\n\n2. Define the OpenAI Embedding Model we want to use. The embedding model is different from the language generation model:\n\n ```python\n embeddings = OpenAIEmbeddings(openai_api_key=key_param.openai_api_key)\n ```\n\n3. Initialize the Vector Store:\n\n ```python\n vectorStore = MongoDBAtlasVectorSearch( collection, embeddings )\n ```\n\n4. 
Define a function that **a) performs semantic similarity search using Atlas Vector Search** **(note that I am including this step only to highlight the differences between output of only semantic search** **vs** **output generated with RAG architecture using RetrievalQA)**:\n\n    ```python\n    def query_data(query):\n        # Convert question to vector using OpenAI embeddings\n        # Perform Atlas Vector Search using Langchain's vectorStore\n        # similarity_search returns MongoDB documents most similar to the query\n\n        docs = vectorStore.similarity_search(query, k=1)\n        as_output = docs[0].page_content\n    ```\n\n    and, **b) uses a retrieval-based augmentation to perform question-answering on the data:**\n\n    ```python\n        # Leveraging Atlas Vector Search paired with Langchain's RetrievalQA\n\n        # Define the LLM that we want to use -- note that this is the Language Generation Model and NOT an Embedding Model\n        # If it's not specified (for example like in the code below),\n        # then the default OpenAI model used in LangChain is OpenAI GPT-3.5-turbo, as of August 30, 2023\n\n        llm = OpenAI(openai_api_key=key_param.openai_api_key, temperature=0)\n\n        # Get VectorStoreRetriever: Specifically, Retriever for MongoDB VectorStore.\n        # Implements _get_relevant_documents which retrieves documents relevant to a query.\n        retriever = vectorStore.as_retriever()\n\n        # Load "stuff" documents chain. Stuff documents chain takes a list of documents,\n        # inserts them all into a prompt and passes that prompt to an LLM.\n\n        qa = RetrievalQA.from_chain_type(llm, chain_type="stuff", retriever=retriever)\n\n        # Execute the chain\n\n        retriever_output = qa.run(query)\n\n        # Return Atlas Vector Search output, and output generated using RAG Architecture\n        return as_output, retriever_output\n    ```\n\n5. Create a web interface for the app using Gradio:\n\n    ```python\n    with gr.Blocks(theme=Base(), title="Question Answering App using Vector Search + RAG") as demo:\n        gr.Markdown(\n            """\n            # Question Answering App using Atlas Vector Search + RAG Architecture\n            """)\n        textbox = gr.Textbox(label="Enter your Question:")\n        with gr.Row():\n            button = gr.Button("Submit", variant="primary")\n        with gr.Column():\n            output1 = gr.Textbox(lines=1, max_lines=10, label="Output with just Atlas Vector Search (returns text field as is):")\n            output2 = gr.Textbox(lines=1, max_lines=10, label="Output generated by chaining Atlas Vector Search to Langchain's RetrievalQA + OpenAI LLM:")\n\n        # Call query_data function upon clicking the Submit button\n\n        button.click(query_data, textbox, outputs=[output1, output2])\n\n    demo.launch()\n    ```\n\n## **Sample outputs**\n\nThe following screenshots show the outputs generated for various questions asked. 
Note that a purely semantic-similarity search returns the text contents of the source documents as is, while the output from the question-answering app using the RAG architecture generates precise answers to the questions asked.\n\n**Log analysis example**\n\n![Log analysis example][4]\n\n**Chat conversation example**\n\n![Chat conversion example][6]\n\n**Sentiment analysis example**\n\n![Sentiment analysis example][7]\n\n**Precise answer retrieval example**\n\n![Precise answer retrieval example][8]\n\n## **Final thoughts**\n\nIn this tutorial, we have seen how to build a question-answering app to converse with your private data, using Atlas Vector Search as a vector store, while leveraging the retrieval-augmented generation architecture with LangChain and OpenAI.\n\nVector stores or vector databases play a crucial role in building LLM applications, and retrieval-augmented generation (RAG) is a significant advancement in the field of AI, particularly in natural language processing. By pairing these together, it is possible to build powerful AI-powered applications for various use-cases. \n\nIf you have questions or comments, join us in the [developer forums to continue the conversation!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb482d06c8f1f0674/65398a092c3581197ab3b07f/image3.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5f69c39c41bd7f0a/653a87b2b78a75040aa24c50/table1-largest.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta74135e3423e8b54/653a87c9dc41eb04079b5fee/table2-largest.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta4386370772f61ee/653ac0875887ca040ac36fdb/logQA.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb6e727cbcd4b9e83/653ac09f9d1704040afd185d/chat_convo.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7e035f322fe53735/653ac88e5e9b4a0407a4d319/chat_convo-1.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc220de3c036fdda5/653ac0b7e47ab5040a0f43bb/sentiment_analysis.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt828a1fe4be4a6d52/653ac0cf5887ca040ac36fe0/precise_info_retrieval.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI"], "pageDescription": "Learn about Vector Search with MongoDB, LLMs, and OpenAI with the Python programming language.", "contentType": "Tutorial"}, "title": "RAG with Atlas Vector Search, LangChain, and OpenAI", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/attribute-pattern", "action": "created", "body": "# Building with Patterns: The Attribute Pattern\n\nWelcome back to the Building with Patterns series. Last time we looked\nat the Polymorphic Pattern which covers\nsituations when all documents in a collection are of similar, but not\nidentical, structure. In this post, we'll take a look at the Attribute\nPattern.\n\nThe Attribute Pattern is particularly well suited when:\n\n- We have big documents with many similar fields but there is a subset of fields that share common characteristics and we want to sort or query on that subset of fields, *or*\n- The fields we need to sort on are only found in a small subset of documents, *or*\n- Both of the above conditions are met within the documents.\n\nFor performance reasons, to optimize our search we'd likely need many indexes to account for all of the subsets. 
Creating all of these indexes could reduce performance. The Attribute Pattern provides a good solution for these cases.\n\n## The Attribute Pattern\n\nLet's think about a collection of movies. The documents will likely have similar fields involved across all of the documents: title, director,\nproducer, cast, etc. Let's say we want to search on the release date. A\nchallenge that we face when doing so, is *which* release date? Movies\nare often released on different dates in different countries.\n\n``` javascript\n{\n title: \"Star Wars\",\n director: \"George Lucas\",\n ...\n release_US: ISODate(\"1977-05-20T01:00:00+01:00\"),\n release_France: ISODate(\"1977-10-19T01:00:00+01:00\"),\n release_Italy: ISODate(\"1977-10-20T01:00:00+01:00\"),\n release_UK: ISODate(\"1977-12-27T01:00:00+01:00\"),\n ...\n}\n```\n\nA search for a release date will require looking across many fields at\nonce. In order to quickly do searches for release dates, we'd need\nseveral indexes on our movies collection:\n\n``` javascript\n{release_US: 1}\n{release_France: 1}\n{release_Italy: 1}\n...\n```\n\nBy using the Attribute Pattern, we can move this subset of information into an array and reduce the indexing needs. We turn this information into an array of key-value pairs:\n\n``` javascript\n{\n title: \"Star Wars\",\n director: \"George Lucas\",\n ...\n releases: \n {\n location: \"USA\",\n date: ISODate(\"1977-05-20T01:00:00+01:00\")\n },\n {\n location: \"France\",\n date: ISODate(\"1977-10-19T01:00:00+01:00\")\n },\n {\n location: \"Italy\",\n date: ISODate(\"1977-10-20T01:00:00+01:00\")\n },\n {\n location: \"UK\",\n date: ISODate(\"1977-12-27T01:00:00+01:00\")\n },\n ...\n ],\n ...\n}\n```\n\nIndexing becomes much more manageable by creating one index on the\nelements in the array:\n\n``` javascript\n{ \"releases.location\": 1, \"releases.date\": 1}\n```\n\nBy using the Attribute Pattern, we can add organization to our documents for common characteristics and account for rare/unpredictable fields. For example, a movie released in a new or small festival. Further, moving to a key/value convention allows for the use of non-deterministic naming and the easy addition of qualifiers. For example, if our data collection was on bottles of water, our attributes might look something like:\n\n``` javascript\n\"specs\": [\n { k: \"volume\", v: \"500\", u: \"ml\" },\n { k: \"volume\", v: \"12\", u: \"ounces\" }\n]\n```\n\nHere we break the information out into keys and values, \"k\" and \"v,\" and add in a third field, \"u,\" which allows for the units of measure to be stored separately.\n\n``` javascript\n{\"specs.k\": 1, \"specs.v\": 1, \"specs.u\": 1}\n```\n\n## Sample use case\n\nThe Attribute Pattern is well suited for schemas that have sets of fields that have the same value type, such as lists of dates. It also works well when working with the characteristics of products. Some products, such as clothing, may have sizes that are expressed in small, medium, or large. Other products in the same collection may be expressed in volume. Yet others may be expressed in physical dimensions or weight.\n\nA customer in the domain of asset management recently deployed their solution using the Attribute Pattern. The customer uses the pattern to store all characteristics of a given asset. These characteristics are seldom common across the assets or are simply difficult to predict at design time. 
Relational models typically use a complicated design process to express the same idea in the form of [user-defined fields.\n\nWhile many of the fields in the product catalog are similar, such as name, vendor, manufacturer, country of origin, etc., the specifications, or attributes, of the item may differ. If your application and data access patterns rely on searching through many of these different fields at once, the Attribute Pattern provides a good structure for the data.\n\n## Conclusion\n\nThe Attribute Pattern provides for easier indexing the documents, targeting many similar fields per document. By moving this subset of data into a key-value sub-document, we can use non-deterministic field names, add additional qualifiers to the information, and more clearly state the relationship of the original field and value. When we use the Attribute Pattern, we need fewer indexes, our queries become simpler to write, and our queries become faster.\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Over the course of this blog post series, we'll take a look at twelve common Schema Design Patterns that work well in MongoDB.", "contentType": "Tutorial"}, "title": "Building with Patterns: The Attribute Pattern", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/kotlin/splash-screen-android", "action": "created", "body": "# Building Splash Screen Natively, Android 12, Kotlin\n\n> In this article, we will explore and learn how to build a splash screen with SplashScreen API, which was introduced in Android 12.\n\n## What is a Splash Screen?\n\nIt is the first view that is shown to a user as soon as you tap on the app icon. If you notice a blank white screen (for\na short moment) after tapping on your favourite app, it means it doesn't have a splash screen.\n\n## Why/When Do I Need It?\n\nOften, the splash screen is seen as a differentiator between normal and professional apps. Some use cases where a splash\nscreen fits perfectly are:\n\n* When we want to download data before users start using the app.\n* If we want to promote app branding and display your logo for a longer period of time, or just have a more immersive\n experience that smoothly takes you from the moment you tap on the icon to whatever the app has to offer.\n\nUntil now, creating a splash screen was never straightforward and always required some amount of boilerplate code added\nto the application, like creating SplashActivity with no view, adding a timer for branding promotion purposes, etc. With\nSplashScreen API, all of this is set to go.\n\n## Show Me the Code\n\n### Step 1: Creating a Theme\n\nEven for the new `SplashScreen` API, we need to create a theme but in the `value-v31` folder as a few parameters are\nsupported only in **Android 12**. 
Therefore, create a folder named `value-v31` under `res` folder and add `theme.xml`\nto it.\n\nAnd before that, let\u2019s break our splash screen into pieces for simplification.\n\n* Point 1 represents the icon of the screen.\n* Point 2 represents the background colour of the splash screen icon.\n* Point 3 represents the background colour of the splash screen.\n* Point 4 represents the space for branding logo if needed.\n\nNow, let's assign some values to the corresponding keys that describe the different pieces of the splash screen.\n\n```xml\n\n \n #FFFFFF\n\n \n #000000\n\n \n @drawable/ic_realm_logo_250\n\n \n @drawable/relam_horizontal\n\n```\n\nIn case you want to use an app icon (or don't have a separate icon) as `windowSplashScreenAnimatedIcon`, you ignore this\nparameter and by default, it will take your app icon.\n\n> **Tips & Tricks**: If your drawable icon is getting cropped on the splash screen, create an app icon from the image\n> and then replace the content of `windowSplashScreenAnimatedIcon` drawable with the `ic_launcher_foreground.xml`.\n>\n> For `windowSplashScreenBrandingImage`, I couldn't find any alternative. Do share in the comments if you find one.\n\n### Step 2: Add the Theme to Activity\n\nOpen AndroidManifest file and add a theme to the activity.\n\n``` xml\n\n```\n\nIn my view, there is no need for a new `activity` class for the splash screen, which traditionally was required. And now\nwe are all set for the new **Android 12** splash screen.\n\nAdding animation to the splash screen is also a piece of cake. Just update the icon drawable with\n`AnimationDrawable` and `AnimatedVectorDrawable` drawable and custom parameters for the duration of the animation.\n\n```xml\n\n1000\n```\n\nEarlier, I mentioned that the new API helps with the initial app data download use case, so let's see that in action.\n\nIn the splash screen activity, we can register for `addOnPreDrawListener` listener which will help to hold off the first\ndraw on the screen, until data is ready.\n\n``` Kotlin\n private val viewModel: MainViewModel by viewModels()\n\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n addInitialDataListener()\n loadAppView()\n }\n\n private fun addInitialDataListener() {\n val content: View = findViewById(android.R.id.content)\n // This would be called until true is not returned from the condition\n content.viewTreeObserver.addOnPreDrawListener {\n return@addOnPreDrawListener viewModel.isAppReady.value ?: false\n }\n }\n\n private fun loadAppView() {\n binding = ActivityMainBinding.inflate(layoutInflater)\n setContentView(binding.root)\n```\n\n> **Tips & Tricks**: While developing Splash screen you can return `false` for `addOnPreDrawListener`, so the next screen is not rendered and you can validate the splash screen easily.\n\n### Summary\n\nI really like the new `SplashScreen` API, which is very clean and easy to use, getting rid of SplashScreen activity\naltogether. There are a few things I disliked, though.\n\n1. The splash screen background supports only single colour. We're waiting for support of vector drawable backgrounds.\n2. There is no design spec available for icon and branding images, which makes for more of a hit and trial game. I still\n couldn't fix the banding image, in my example.\n3. 
Last but not least, SplashScreen UI side feature(`theme.xml`) is only supported from Android 12 and above, so we\n can't get rid of the old code for now.\n\nYou can also check out the complete working example from my GitHub repo. Note: Just running code on the device will show\nyou white. To see the example, close the app recent tray and then click on the app icon again.\n\nGithub Repo link\n\nHope this was informative and enjoyed reading it.\n\n", "format": "md", "metadata": {"tags": ["Kotlin", "Realm", "Android"], "pageDescription": "In this article, we will explore and learn how to build a splash screen with SplashScreen API, which was introduced in Android 12.", "contentType": "Code Example"}, "title": "Building Splash Screen Natively, Android 12, Kotlin", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/migrate-azure-cosmosdb-mongodb-atlas-apache-kafka", "action": "created", "body": "# Migrate from Azure CosmosDB to MongoDB Atlas Using Apache Kafka\n\n## Overview\nWhen you are the best of breed, you have many imitators. MongoDB is no different in the database world. If you are reading this blog, you are most likely an Azure customer that ended up using CosmosDB. \n\nYou needed a database that could handle unstructured data in Azure and eventually realized CosmosDB wasn\u2019t the best fit. Perhaps you found that it is too expensive for your workload or not performing well or simply have no confidence in the platform. You also might have tried using the MongoDB API and found that the queries you wanted to use simply don\u2019t work in CosmosDB because it fails 67% of the compatibility tests. \n\nWhatever the path you took to CosmosDB, know that you can easily migrate your data to MongoDB Atlas while still leveraging the full power of Azure. With MongoDB Atlas in Azure, there are no more failed queries, slow performance, and surprise bills from not optimizing your RDUs. MongoDB Atlas in Azure also gives you access to the latest releases of MongoDB and the flexibility to leverage any of the three cloud providers if your business needs change.\n\nNote: When you originally created your CosmosDB, you were presented with these API options:\n\nIf you created your CosmosDB using Azure Cosmos DB API for MongoDB, you can use mongo tools such as mongodump, mongorestore, mongoimport, and mongoexport to move your data. The Azure CosmosDB Connector for Kafka Connect does not work with CosmosDB databases that were created for the Azure Cosmos DB API for MongoDB.\n\nIn this blog post, we will cover how to leverage Apache Kafka to move data from Azure CosmosDB Core (Native API) to MongoDB Atlas. While there are many ways to move data, using Kafka will allow you to not only perform a one-time migration but to stream data from CosmosDB to MongoDB. This gives you the opportunity to test your application and compare the experience so that you can make the final application change to MongoDB Atlas when you are ready. The complete example code is available in this GitHub repository.\n\n## Getting started\nYou\u2019ll need access to an Apache Kafka cluster. There are many options available to you, including Confluent Cloud, or you can deploy your own Apache Kafka via Docker as shown in this blog. Microsoft Azure also includes an event messaging service called Azure Event Hubs. This service provides a Kafka endpoint that can be used as an alternative to running your own Kafka cluster. 
Azure Event Hubs exposes the same Kafka Connect API, enabling the use of the MongoDB connector and Azure CosmosDB DB Connector with the Event Hubs service.\n\nIf you do not have an existing Kafka deployment, perform these steps. You will need docker installed on your local machine:\n```\ngit clone https://github.com/RWaltersMA/CosmosDB2MongoDB.git\n```\nNext, build the docker containers.\n```\ndocker-compose up -d --build\n```\n\nThe docker compose script (docker-compose.yml) will stand up all the components you need, including Apache Kafka and Kafka Connect. Install the CosmosDB and MongoDB connectors.\n## Configuring Kafka Connect\nModify the **cosmosdb-source.json** file and replace the placeholder values with your own.\n```\n{\n \"name\": \"cosmosdb-source\",\n \"config\": {\n \"connector.class\": \"com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector\",\n \"tasks.max\": \"1\",\n \"key.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"value.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"connect.cosmos.task.poll.interval\": \"100\",\n \"connect.cosmos.connection.endpoint\": \n\"https://****.documents.azure.com:443/\",\n \"connect.cosmos.master.key\": **\"\",**\n \"connect.cosmos.databasename\": **\"\",**\n \"connect.cosmos.containers.topicmap\": **\"#\u201d,**\n \"connect.cosmos.offset.useLatest\": false,\n \"value.converter.schemas.enable\": \"false\",\n \"key.converter.schemas.enable\": \"false\"\n }\n}\n\n```\nModify the **mongo-sink.json** file and replace the placeholder values with your own.\n```\n{\"name\": \"mongo-sink\",\n \"config\": {\n \"connector.class\":\"com.mongodb.kafka.connect.MongoSinkConnector\",\n \"tasks.max\":\"1\",\n \"topics\":\"\",\n \"connection.uri\":\"\",\n \"database\":\"\",\n \"collection\":\"\",\n \"key.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"value.converter\":\"org.apache.kafka.connect.json.JsonConverter\",\n \"value.converter.schemas.enable\": \"false\",\n \"key.converter.schemas.enable\": \"false\"\n \n }}\n\n```\nNote: Before we configure Kafka Connect, make sure that your network settings on both CosmosDB and MongoDB Atlas will allow communication between these two services. In CosmosDB, select the Firewall and Virtual Networks. While the easiest configuration is to select \u201cAll networks,\u201d you can provide a more secure connection by specifying the IP range from the Firewall setting in the Selected networks option. MongoDB Atlas Network access also needs to be configured to allow remote connections. By default, MongoDB Atlas does not allow any external connections. See Configure IP Access List for more information.\n\nTo configure our two connectors, make a REST API call to the Kafka Connect service:\n\n```\ncurl -X POST -H \"Content-Type: application/json\" -d @cosmosdb-source.json http://localhost:8083/connectors\n\ncurl -X POST -H \"Content-Type: application/json\" -d @mongodb-sink.json http://localhost:8083/connectors\n\n```\nThat\u2019s it!\n\nProvided the network and database access was configured properly, data from your CosmosDB should begin to flow into MongoDB Atlas. 
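\n\nBefore troubleshooting anything, a quick sanity check is to connect to the Atlas cluster with `mongosh` and confirm documents have landed. This is only a sketch; the connection string, database, and collection names below are placeholders for the values you configured in the sink connector:\n\n```\nmongosh "mongodb+srv://user:password@yourcluster.mongodb.net"\nuse yourDatabase\ndb.yourCollection.countDocuments()\ndb.yourCollection.findOne()\n```\n\n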
If you don\u2019t see anything, here are some troubleshooting tips:\n\n* Try connecting to your MongoDB Atlas cluster using the mongosh tool from the server running the docker container.\n* View the docker logs for the Kafka Connect service.\n* Verify that you can connect to the CosmosDB instance using the Azure CLI from the server running the docker container.\n\n**Summary**\nIn this post, we explored how to move data from CosmosDB to MongoDB using Apache Kafka. If you\u2019d like to explore this method and other ways to migrate data, check out the 2021 MongoDB partner of the year award winner, Peerslands', five-part blog post on CosmosDB migration.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Kafka"], "pageDescription": "Learn how to migrate your data in Azure CosmosDB to MongoDB Atlas using Apache Kafka.", "contentType": "Tutorial"}, "title": "Migrate from Azure CosmosDB to MongoDB Atlas Using Apache Kafka", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/paginations-why-choose-mongodb", "action": "created", "body": "# Paginations 2.0: Why I Would Choose MongoDB\n\n# Paginations 2.0: Why I Would Choose MongoDB\n\nI've been writing and designing large scale, multi-user, applications with database backends since 1995, as lead architect for intelligence management systems, text mining, and analytics platforms, and as a consultant working in retail and investment banking, mobile games, connected-car IoT projects, and country scale document management. It's fair to say I've seen how a lot of applications are put together.\n\nNow it's also reasonable to assume that as I work for MongoDB, I have some bias, but MongoDB isn't my first database, or even my first document database, and so I do have a fairly broad perspective. I'd like to share with you three features of MongoDB that would make it my first choice for almost all large, multi-user database applications.\n\n## The Document Model\n\nThe Document model is a fundamental aspect of MongoDB. All databases store records\u2014information about things that have named attributes and values for those attributes. Some attributes might have multiple values. In a tabular database, we break the record into multiple rows with a single scalar value for each attribute and have a way to relate those rows together to access the record.\n\nThe difference in a Document database is when we have multiple values for an attribute, we can retain those as part of a single record, storing access and manipulating them together. We can also group attributes together to compare and refer to them as a group. For example, all the parts of an address can be accessed as a single address field or individually.\n\nWhy does this matter? Well, being able to store an entire record co-located on disk and in memory has some huge advantages.\n\nBy having these larger, atomic objects to work with, there are some oft quoted benefits like making it easier for OO developers and reducing the computational overheads of accessing the whole record, but this misses a third, even more important benefit.\n\nWith the correct schema, documents reduce each database write operation to single atomic changes of one piece of data. This has two huge and related benefits.\n\nBy only requiring one piece of data to be examined for its current state and changed to a new state at a time, the period of time where the database state is unresolved is reduced to almost nothing. 
Effectively, there is no interaction between multiple writes to the database and none have to wait for another to complete, at least not beyond a single change to a single document.\n\nIf we have to use traditional transactions, whether in an RDBMS or MongoDB, to perform a change then all records concerned remain effectively locked until the transaction is complete. This greatly widens the window for contention and delay. Using the document model instead, you can remove all contention in your database and achieve far higher 'transactional' throughput in a multi-user system.\n\nThe second part of this is that when each write to the database can be treated as an independent operation, it makes it easy to horizontally scale the database to support large workloads as the state of a document on one server has no impact on your ability to change a document on another. Every operation can be parallelised.\n\nDoing this does require you to design your schema correctly, though. Document databases are far from schemaless (a term MongoDB has not used for many years). In truth, it makes schema design even more important than in an RDBMS.\n\n## Highly Available as standard\n\nThe second reason I would choose to use MongoDB is that high-availability is at the heart of the database. MongoDB is designed so that a server can be taken offline instantly, at any time and there is no loss of service or data. This is absolutely fundamental to how all of MongoDB is designed. It doesn't rely on specialist hardware, third-party software, or add-ons. It allows for replacement of servers, operating systems, and even database versions invisibly to the end user, and even mostly to the developer. This goes equally for Atlas, where MongoDB can provide a multi-cloud database service at any scale that is resilient to the loss of an entire cloud provider, whether it\u2019s Azure, Google, or Amazon. This level of uptime is unprecedented.\n\nSo, if I plan to develop a large, multi-user application I just want to know the database will always be there, zero downtime, zero data loss, end of story. \n\n## Smart Update Capability\n\nThe third reason I would choose MongoDB is possibly the most surprising. Not all document databases are the same, and allow you to realise all the benefits of a document versus relational model, some are simply JSON stores or Key/Value stores where the value is some form of document.\n\nMongoDB has the powerful, specialised update operators capable of doing more than simply replacing a document or a value in the database. 
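\n\nTo make that concrete, here is an illustrative sketch (the collection and field names are hypothetical, not taken from any particular application) of a single shell statement that verifies current state, computes a new value, and truncates an array, all in one atomic operation on one document:\n\n``` javascript\ndb.accounts.findOneAndUpdate(\n  { _id: "acct-42", balance: { $gte: 100 } },   // proceed only if the current state allows it\n  {\n    $inc: { balance: -100 },                    // compute the new value from the existing one\n    $push: {                                    // append to an array while keeping only the last 10 entries\n      recentDebits: { $each: [ { amount: 100, at: new Date() } ], $slice: -10 }\n    }\n  },\n  { returnNewDocument: true }\n)\n```\n\n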
With MongoDB, you can, as part of a single atomic operation, verify the state of values in the document, compute the new value for any field based on it and any other fields, sort and truncate arrays when adding to them and, should you require it automatically, create a new document rather than modify an existing one.\n\nIt is this \"smart\" update capability that makes MongoDB capable of being a principal, \"transactional\" database in large, multi-user systems versus a simple store of document shaped data.\n\nThese three features, at the heart of an end-to-end data platform, are what genuinely make MongoDB my personal first choice when I want to build a system to support many users with a snappy user experience, 24 hours a day, 365 days a year.\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Distinguished Engineer and 25 year NoSQL veteran John Page explains in 5 minutes why MongoDB would be his first choice for building a multi-user application.", "contentType": "Article"}, "title": "Paginations 2.0: Why I Would Choose MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/bson-data-types-objectid", "action": "created", "body": "# Quick Start: BSON Data Types - ObjectId\n\n \n\nIn the database world, it is frequently important to have unique identifiers associated with a record. In a legacy, tabular database, these unique identifiers are often used as primary keys. In a modern database, such as MongoDB, we need a unique identifier in an `_id` field as a primary key as well. MongoDB provides an automatic unique identifier for the `_id` field in the form of an `ObjectId` data type.\n\nFor those that are familiar with MongoDB Documents you've likely come across the `ObjectId` data type in the `_id` field. For those unfamiliar with MongoDB Documents, the ObjectId datatype is automatically generated as a unique document identifier if no other identifier is provided. But what is an `ObjectId` field? What makes them unique? This post will unveil some of the magic behind the BSON ObjectId data type. First, though, what is BSON?\n\n## Binary JSON (BSON)\n\nMany programming languages have JavaScript Object Notation (JSON) support or similar data structures. MongoDB uses JSON documents to store records. However, behind the scenes, MongoDB represents these documents in a binary-encoded format called BSON. BSON provides additional data types and ordered fields to allow for efficient support across a variety of languages. One of these additional data types is ObjectId.\n\n## Makeup of an ObjectId\n\nLet's start with an examination of what goes into an ObjectId. If we take a look at the construction of the ObjectId value, in its current implementation, it is a 12-byte hexadecimal value. This 12-byte configuration is smaller than a typical universally unique identifier (UUID), which is, typically, 128-bits. Beginning in MongoDB 3.4, an ObjectId consists of the following values:\n\n- 4-byte value representing the seconds since the Unix epoch,\n- 5-byte random value, and\n- 3-byte counter, starting with a random value.\n\nWith this makeup, ObjectIds are *likely* to be globally unique and unique per collection. Therefore, they make a good candidate for the unique requirement of the `_id` field. While the `_id` in a collection can be an auto-assigned `ObjectId`, it can be user-defined as well, as long as it is unique within a collection. 
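\n\nBecause the leading four bytes of an `ObjectId` encode its creation time, the shell lets you read that back directly; a minimal sketch:\n\n``` javascript\nconst oid = new ObjectId()   // auto-generated 12-byte value, displayed as 24 hex characters\noid.getTimestamp()           // creation time derived from the 4-byte seconds-since-epoch prefix\n```\n\n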
Remember that if you aren't using a MongoDB generated `ObjectId` for the `_id` field, the application creating the document will have to ensure the value is unique.\n\n## History of ObjectId\n\nThe makeup of the ObjectId has changed over time. Through version 3.2, it consisted of the following values:\n\n- 4-byte value representing the seconds since the Unix epoch,\n- 3-byte machine identifier,\n- 2-byte process id, and\n- 3-byte counter, starting with a random value.\n\nThe change from including a machine-specific identifier and process id to a random value increased the likelihood that the `ObjectId` would be globally unique. These machine-specific 5-bytes of information became less likely to be random with the prevalence of Virtual Machines (VMs) that had the same MAC addresses and processes that started in the same order. While it still isn't guaranteed, removing machine-specific information from the `ObjectId` increases the chances that the same machine won't generate the same `ObjectId`.\n\n## ObjectId Odds of Uniqueness\n\nThe randomness of the last eight bytes in the current implementation makes the likelihood of the same ObjectId being created pretty small. How small depends on the number of inserts per second that your application does. Let's do some quick math and look at the odds.\n\nIf we do one insert per second, the first four bytes of the ObjectId would change so we can't have a duplicate ObjectId. What are the odds though when multiple documents are inserted in the same second that *two* ObjectIds are the same? Since there are *eight* bits in a byte, and *eight* random bytes in our Object Id (5 random + 3 random starting values), the denominator in our odds ratio would be 2^(8\\*8), or 1.84467441x10'^19. For those that have forgotten scientific notation, that's 18,446,744,100,000,000,000. Yes, that's correct, 18 quintillion and change. As a bit of perspective, the odds of being struck by lightning in the U.S. in a given year are 1 in 700,000, according to National Geographic. The odds of winning the Powerball Lottery jackpot are 1 in 292,201,338. The numerator in our odds equation is the number of documents per second. Even in a write-heavy system with 250 million writes/second, the odds are, while not zero, pretty good against duplicate ObjectIds being generated.\n\n## Wrap Up\n\n>Get started exploring BSON types, like ObjectId, with MongoDB Atlas today!\n\nObjectId is one data type that is part of the BSON Specification that MongoDB uses for data storage. It is a binary representation of JSON and includes other data types beyond those defined in JSON. It is a powerful data type that is incredibly useful as a unique identifier in MongoDB Documents.", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript"], "pageDescription": "MongoDB provides an automatic unique identifier for the _id field in the form of an ObjectId data type.", "contentType": "Quickstart"}, "title": "Quick Start: BSON Data Types - ObjectId", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/hidden-indexes", "action": "created", "body": "# Optimize and Tune MongoDB Performance with Hidden Indexes\n\nMongoDB 4.4 is the biggest release of MongoDB to date and is available in beta right now. You can try out it out in MongoDB Atlas or download the development release. 
There is so much new stuff to talk about ranging from new features like custom aggregation expressions, improvements to existing functionality like refinable shard keys, and much more.\n\nIn this post, we are going to look at a new feature coming to MongoDB 4.4 that will help you better optimize and fine-tune the performance of your queries as your application evolves called hidden indexes.\n\nHidden indexes, as the name implies, allows you to hide an index from the query planner without removing it, allowing you to assess the impact of not using that specific index.\n\n## Prerequisites\n\nFor this tutorial you'll need:\n\n- MongoDB 4.4\n\n## Hidden Indexes in MongoDB 4.4\n\nMost database technologies, and MongoDB is no different, rely on indexes to speed up performance and efficiently execute queries. Without an index, MongoDB would have to perform a collection scan, meaning scanning every document in a collection to filter out the ones the query asked for.\n\nWith an index, and often times with a correct index, this process is greatly sped up. But choosing the right data to index is an art and a science of its own. If you'd like to learn a bit more about indexing best practices, check out this blog post. Building, maintaining, and dropping indexes can be resource-intensive and time-consuming, especially if you're working with a large dataset.\n\nHidden indexes is a new feature coming to MongoDB 4.4 that allows you to easily measure the impact an index has on your queries without actually deleting it and having to rebuild it if you find that the index is in fact required and improves performance.\n\nThe awesome thing about hidden indexes is that besides being hidden from the query planner, meaning they won't be used in the execution of the query, they behave exactly like a normal index would. This means that hidden indexes are still updated and maintained even while hidden (but this also means that a hidden index continues to consume disk space and memory so if you find that hiding an index does not have an impact on performance, consider dropping it), hidden unique indexes still apply the unique constraint to documents, and hidden TTL indexes still continue to expire documents.\n\nThere are some limitations on hidden indexes. The first is that you cannot hide the default `_id` index. The second is that you cannot perform a cursor.hint() on a hidden index to force MongoDB to use the hidden index.\n\n## Creating Hidden Indexes in MongoDB\n\nTo create a hidden index in MongoDB 4.4 you simply pass a `hidden` parameter and set the value to `true` within the `db.collection.createIndex()` options argument. For a more concrete example, let's assume we have a `movies` collection that stores documents on individual films. 
The documents in this collection may look something like this:\n\n``` \n{\n \"_id\": ObjectId(\"573a13b2f29313caabd3ac0d\"),\n \"title\": \"Toy Story 3\",\n \"plot\": \"The toys are mistakenly delivered to a day-care center instead of the attic right before Andy leaves for college, and it's up to Woody to convince the other toys that they weren't abandoned and to return home.\",\n \"genres\": \"Animation\", \"Adventure\", \"Comedy\"],\n \"runtime\": 103,\n \"metacritic\": 92,\n \"rated\": \"G\",\n \"cast\": [\"Tom Hanks\", \"Tim Allen\", \"Joan Cusack\", \"Ned Beatty\"],\n \"directors\": [\"Lee Unkrich\"],\n \"poster\": \"https://m.media-amazon.com/images/M/MV5BMTgxOTY4Mjc0MF5BMl5BanBnXkFtZTcwNTA4MDQyMw@@._V1_SY1000_SX677_AL_.jpg\",\n \"year\": 2010,\n \"type\": \"movie\"\n}\n```\n\nNow let's assume we wanted to create a brand new index on the title of the movie and we wanted it to be hidden by default. To do this, we'd execute the following command:\n\n``` bash\ndb.movies.createIndex( { title: 1 }, { hidden: true })\n```\n\nThis command will create a new index that will be hidden by default. This means that if we were to execute a query such as `db.movies.find({ \"title\" : \"Toy Story 3\" })` the query planner would perform a collection scan. Using [MongoDB Compass, I'll confirm that that's what happens.\n\nFrom the screenshot, we can see that `collscan` was used and that the actual query execution time took 8ms. If we navigate to the Indexes tab in MongoDB Compass, we can also confirm that we do have a `title_1` index created, that's consuming 315.4kb, and has been used 0 times.\n\nThis is the expected behavior as we created our index as hidden from the get-go. Next, we'll learn how to unhide the index we created and see if we get improved performance.\n\n## Unhiding Indexes in MongoDB 4.4\n\nTo measure the impact an index has on our query performance, we'll unhide it. We have a couple of different options on how to accomplish this. We can, of course, use `db.runCommand()` in conjunction with `collMod`, but we also have a number of mongo shell helpers that I think are much easier and less verbose to work with. In this section, we'll use the latter.\n\nTo unhide an index, we can use the `db.collection.unhideIndex()` method passing in either the name of the index, or the index keys. Let's unhide our title index using the index keys. To do this we'll execute the following command:\n\n``` bash\ndb.movies.unhideIndex({title: 1}) \n```\n\nOur response will look like this:\n\nIf we were to execute our query to find **Toy Story 3** in MongoDB Compass now and view the Explain Plan, we'd see that instead of a `collscan` or collection scan our query will now use the `ixscan` or index scan, meaning it's going to use the index. We get the same results back, but now our actual query execution time is 0ms.\n\nAdditionally, if we look at our Indexes tab, we'll see that our `title_1` index was used one time.\n\n## Working with Existing Indexes in MongoDB 4.4\n\nWhen you create an index in MongoDB 4.4, by default it will be created with the `hidden` property set to false, which can be overwritten to create a hidden index from the get-go as we did in this tutorial. But what about existing indexes? Can you hide and unhide those? You betcha!\n\nJust like the `db.collection.unhideIndex()` helper method, there is a `db.collection.hideIndex()` helper method, and it allows you to hide an existing index via its name or index keys. Or you can use the `db.runCommand()` in conjunction with `collMod`. 
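\n\nFor reference, the helper method accepts either form; hiding our title index by its keys or by its name would look like this:\n\n``` bash\ndb.movies.hideIndex({ title: 1 })\n// or, equivalently, by index name\ndb.movies.hideIndex("title_1")\n```\n\n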
Let's hide our title index, this time using the `db.runCommand()`.\n\n``` bash\ndb.runCommand({\n collMod : \"movies\"\n index: {\n keyPattern: {title:1},\n hidden: true\n }\n})\n```\n\nExecuting this command will once again hide our `title_1` index from the query planner so when we execute queries and search for movies by their title, MongoDB will perform the much slower `collscan` or collection scan.\n\n## Conclusion\n\nHidden indexes in MongoDB 4.4 make it faster and more efficient for you to tune performance as your application evolves. Getting indexes right is one-half art, one-half science, and with hidden indexes you can make better and more informed decisions much faster.\n\nRegardless of whether you use the hidden indexes feature or not, please be sure to create and use indexes in your collections as they will have a significant impact on your query performance. Check out the free M201 MongoDB University course to learn more about MongoDB performance and indexes.\n\n>**Safe Harbor Statement**\n>\n>The development, release, and timing of any features or functionality\n>described for MongoDB products remains at MongoDB's sole discretion.\n>This information is merely intended to outline our general product\n>direction and it should not be relied on in making a purchasing decision\n>nor is this a commitment, promise or legal obligation to deliver any\n>material, code, or functionality. Except as required by law, we\n>undertake no obligation to update any forward-looking statements to\n>reflect events or circumstances after the date of such statements.\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to optimize and fine tune your MongoDB performance with hidden indexes.", "contentType": "Tutorial"}, "title": "Optimize and Tune MongoDB Performance with Hidden Indexes", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/designing-developing-2d-game-levels-unity-csharp", "action": "created", "body": "# Designing and Developing 2D Game Levels with Unity and C#\n\nIf you've been keeping up with the game development series that me (Nic Raboy) and Adrienne Tacke have been creating, you've probably seen how to create a user profile store for a game and move a player around on the screen with Unity.\n\nTo continue with the series, which is also being streamed on Twitch, we're at a point where we need to worry about designing a level for gameplay rather than just exploring a blank screen.\n\nIn this tutorial, we're going to see how to create a level, which can also be referred to as a map or world, using simple C# and the Unity Tilemap Editor.\n\nTo get a better idea of what we plan to accomplish, take a look at the following animated image.\n\nYou'll notice that we're moving a non-animated sprite around the screen. You might think at first glance that the level is one big image, but it is actually many tiles placed carefully within Unity. The edge tiles have collision boundaries to prevent the player from moving off the screen.\n\nIf you're looking at the above animated image and wondering where MongoDB fits into this, the short answer is that it doesn't. The game that Adrienne and I are building will leverage MongoDB, but some parts of the game development process such as level design won't need a database. 
We're attempting to tell a story with this series.\n\n## Using the Unity Tilemap Editor to Draw 2D Game Levels\n\nThere are many ways to create a level for a game, but as previously mentioned, we're going to be using tilemaps. Unity makes this easy for us because the software provides a paint-like experience where we can draw tiles on the canvas using any available images that we load into the project.\n\nFor this example, we're going to use the following texture sheet:\n\nRather than creating a new project and repeating previously explained steps, we're going to continue where we left off from the previous tutorial. The **doordash-level.png** file should be placed in the **Assets/Textures** directory of the project.\n\nWhile we won't be exploring animations in this particular tutorial, if you want the spritesheet used in the animated image, you can download it below:\n\nThe **plummie.png** file should be added to the project's **Assets/Textures** directory. To learn how to animate the spritesheet, take a look at a previous tutorial I wrote on the topic.\n\nInside the Unity editor, click on the **doordash-level.png** file that was added. We're going to want to do a few things before we can work with each tile as independent images.\n\n- Change the sprite mode to **Multiple**.\n- Define the actual **Pixels Per Unit** of the tiles in the texture packed image.\n- Split the tiles using the **Sprite Editor**.\n\nIn the above image, you might notice that the **Pixels Per Unit** value is **255** while the actual tiles are **256**. By defining the tiles as one pixel smaller, we're attempting to remove any border between the tile images that might make the level look weird due to padding.\n\nWhen using the **Sprite Editor**, make sure to slice the image by the cell size using the correct width and height dimensions of the tiles. For clarity, the tiles that I attached are 256x256 in resolution.\n\nIf you plan to use the spritesheet for the Plummie character, make sure to repeat the same steps for that spritesheet as well. It is important we have access to the individual images in a spritesheet rather than treating all the images as one single image.\n\nWith the images ready for use, let's focus on drawing the level.\n\nWithin the Unity menu, choose **Component -> Tilemap -> Tilemap** to add a new tilemap and parent grid object to the scene. To get the best results, we're going to want to layer multiple tilemaps on our scene. Right click on the **Grid** object in the scene and choose **2D Object -> Tilemap**. You'll want three tilemaps in total for this particular example.\n\nWe want multiple tilemap layers because it will add depth to the scene and more control. For example, one layer will represent the furthest part of our background, maybe dirt or floors. Another layer will represent any kind of decoration that will sit on top of the floors \u2014 aay, for example, arrows. Then, the final tilemap layer might represent our walls or obstacles.\n\nTo make sure the layers get rendered in the correct order, the **Tilemap Renderer** for each tilemap should have a properly defined **Sorting Layer**. If continuing from the previous tutorial, you'll remember we had created a **Background** layer and a **GameObject** layer. These can be used, or you can continue to create and assign more. 
Just remember that the render order of the sorting layers is top to bottom, the opposite of what you'd experience in photo editing software like Adobe Photoshop.\n\nThe next step is to open the **Tile Palette** window within Unity. From the menu, choose **Window -> 2D -> Tile Palette**. The palette will be empty to start, but you'll want to drag your images either one at a time or multiple at a time into the window.\n\nWith images in the tile palette, they can be drawn on the scene like painting on a canvas. First click on the tile image you want to use and then choose the painting tool you want to use. You can paint on a tile-by-tile basis or paint multiple tiles at a time.\n\nIt is important that you have the proper **Active Tilemap** selected when drawing your tiles. This is important because of the order that each tile renders and any collision boundaries we add later.\n\nTake a look at the following possible result:\n\nRemember, we're designing a level, so this means that your tiles can exceed the view of the camera. Use your tiles to make your level as big and extravagant as you'd like.\n\nAssuming we kept the same logic from the previous tutorial, Getting Started with Unity for Creating a 2D Game, we can move our player around in the level, but the player can exceed the screen. The player may still be a white box or the Plummie sprite depending on what you've chosen to do. Regardless, we want to make sure our layer that represents the boundaries acts as a boundary with collision.\n\n## Adding Collision Boundaries to Specific Tiles and Regions on a Level\n\nAdding collision boundaries to tiles in a tilemap is quite easy and doesn't require more than a few clicks.\n\nSelect the tilemap that represents our walls or boundaries and choose to **Add Component** in the inspector. You'll want to add both a **Tilemap Collider 2D** as well as a **Rigidbody 2D**. The **Body Type** of the **Rigidbody 2D** should be static so that gravity and other physics-related events are not applied.\n\nAfter doing these short steps, the player should no longer be able to go beyond the tiles for this layer.\n\nWe can improve things!\n\nRight now, every tile that is part of our tilemap with the **Tilemap Collider 2D** and **Rigidbody 2D** component has a full collision area around the tile. This is true even if the tiles are adjacent and parts of the tile can never be reached by the player. Imagine having four tiles creating a large square. Of the possible 16 collision regions, only eight can ever be interacted with. We're going to change this, which will greatly improve performance.\n\nOn the tilemap with the **Tilemap Collider 2D** and **Rigidbody 2D** components, add a **Composite Collider 2D** component. After adding, enable the **Used By Composite** field in the **Tilemap Collider 2D** component.\n\nJust like that, there are fewer regions that are tracking collisions, which will boost performance.\n\n## Following the Player While Traversing the 2D Game Level using C#\n\nAs of right now, we have our player, which might be a Plummie or might be a white pixel, and we have our carefully crafted level made from tiles. The problem is that our camera can only fit so much into view, which probably isn't the full scope of our level.\n\nWhat we can do as part of the gameplay experience is have the camera follow the player as it traverses the level. We can do this with C#.\n\nSelect the **Main Camera** within the current scene. 
We're going to want to add a new script component.\n\nWithin the C# script that you'll need to attach, include the following code:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class CameraPosition : MonoBehaviour\n{\n\n public Transform player;\n\n void Start() {}\n\n void Update()\n {\n transform.position = new Vector3(player.position.x, 0, -10);\n }\n}\n```\n\nIn the above code, we are looking at the transform of another unrelated game object. We'll attach that game object in just a moment. Every time the frame updates, the position of the camera is updated to match the position of the player in the x-axis. In this example, we are fixing the y-axis and z-axis so we are only following the player in the left and right direction. Depending on how you've created your level, this might need to change.\n\nRemember, this script should be attached to the **Main Camera** or whatever your camera is for the scene.\n\nRemember the `player` variable in the script? You'll find it in the inspector for the camera. Drag your player object from the project hierarchy into this field and that will be the object that is followed by the camera.\n\nRunning the game will result in the camera being centered on the player. As the player moves through the tilemap level, so will the camera. If the player tries to collide with any of the tiles that have collision boundaries, motion will stop.\n\n## Conclusion\n\nYou just saw how to create a 2D world in Unity using tile images and the Unity Tilemap Editor. This is a very powerful tool because you don't have to create massive images to represent worlds and you don't have to worry about creating worlds with massive amounts of game objects.\n\nThe assets we used in this tutorial are based around a series that myself (Nic Raboy) and Adrienne Tacke are building titled Plummeting People. This series is on the topic of building a multiplayer game with Unity that leverages MongoDB. While this particular tutorial didn't include MongoDB, plenty of other tutorials in the series will.\n\nIf you feel like this tutorial skipped a few steps, it did. I encourage you to read through some of the previous tutorials in the series to catch up.\n\nIf you want to build Plummeting People with us, follow us on Twitch where we work toward building it live, every other week.\n", "format": "md", "metadata": {"tags": ["C#", "Unity"], "pageDescription": "Learn how to use Unity tilemaps to create complex 2D worlds for your game.", "contentType": "Tutorial"}, "title": "Designing and Developing 2D Game Levels with Unity and C#", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/generate-mql-with-mongosh-and-openai", "action": "created", "body": "# Generating MQL Shell Commands Using OpenAI and New mongosh Shell\n\n# Generating MQL Shell Commands Using OpenAI and New mongosh Shell\n\nOpenAI is a fascinating and growing AI platform sponsored by Microsoft, allowing you to digest text cleverly to produce AI content with stunning results considering how small the \u201clearning data set\u201d you actually provide is.\n\nMongoDB\u2019s Query Language (MQL) is an intuitive language for developers to interact with MongoDB Documents. For this reason, I wanted to put OpenAI to the test of quickly learning the MongoDB language and using its overall knowledge to build queries from simple sentences. The results were more than satisfying to me. 
Github is already working on a project called Github copilot which uses the same OpenAI engine to code.\n\nIn this article, I will show you my experiment, including the game-changing capabilities of the new MongoDB Shell (`mongosh`) which can extend scripting with npm modules integrations.\n\n## What is OpenAI and How Do I Get Access to It?\n\nOpenAI is a unique project aiming to provide an API for many AI tasks built mostly on Natural Language Processing today. You can read more about their projects in this blog.\n\nThere are a variety of examples for its\u00a0text processing capabilities.\n\nIf you want to use OpenAI, you will need to get a trial API key first by joining the waitlist on their main page. Once you are approved to get an API key, you will be granted about $18 for three months of testing. Each call in OpenAI is billed and this is something to consider when using in production. For our purposes, $18 is more than enough to test the most expensive engine named \u201cdavinci.\u201d\n\nOnce you get the API key, you can use various clients to run their AI API from your script/application. \n\nSince we will be using the new `mongosh` shell, I have used the\n JS API.\n\n## Preparing the mongosh to Use OpenAI\n\nFirst, we need to install the new shell, if you haven\u2019t done it so far. On my Mac laptop, I just issued:\n\n``` bash\nbrew install mongosh\n```\n\nWindows users should download the MSI installer from our download page and follow the Windows instructions.\n\nOnce my mongosh is ready, I can start using it, but before I do so, let\u2019s install OpenAI JS, which we will import in the shell later on:\n\n``` bash\n$ mkdir openai-test\n$ cd openai-test\nOpenai-test $ npm i openai-api\n```\n\nI\u2019ve decided to use the Questions and Answers pattern, in the form of `Q: ` and `A: `, provided to the text to command completion API to provide the learning material about MongoDB queries for the AI engine. 
To better feed it, I placed the training questions and answers in a file called `AI-input.txt` and its content:\n\n```\nQ: What is the query syntax?\nA: db.collection.find(, , )\nQ: Query users collection for username with value \"boy\"\nA: db.users.find({\"username\" : \"boy\"})\nQ: Query users collection for username with value \"girl\"A: db.users.find({\"username\" : \"girl\"})\nQ: Query users collection for username with age bigger than 16\nA: db.users.find({\"age\" : {$gt : 16}})n;\nQ: Query author collection for username with value \"boy\"\nA: db.authors.find({\"username\" : \"boy\"})\nQ:Query author collection for age lower than 7\nA: db.authors.find({\"age\" : {$lt : 7}});\n\nQ:insert a json document into collection authors with username equal to \"girl\"\nA: db.authors.insert({\"username\" : \"girl\"}, {\"age\" : 10, \"gender\" : \"female\"})\nQ: insert many documents into users collections\nA: db.users.insertMany({username : \"john doe\"}, {username : \"joe doe\"}]);\nQ: The following aggregation uses $geoNear to find documents with a location at most 2 meters from the center [ -73.99279 , 40.719296 ] and a category equal to Parks.\nA: db.places.aggregate([{ $geoNear: {near: { type: \"Point\", coordinates: [ -73.99279 , 40.719296 ] },distanceField: \"dist.calculated\",maxDistance: 2, query: { category: \"Parks\" },includeLocs: \"dist.location\", spherical: true}}])\nQ: How to build atlas $search text query?\nA: db.collection.aggregate({$search: {\"index\": \"text\": {\"query\": \"\",\"path\": \"\", \"fuzzy\": ,\"score\": } }})\n\nQ: What is the aggregate syntax?\nA: db.collection.aggregate([,,], , );\nQ: aggregate users collection to calculate salary sum per user\nA: db.users.aggregate([{$group : { _id : \"$username\" , salary_sum : { $sum : \"$salary\" }}}]);\nQ: aggregate person collection to calculate salary sum per person\nA: db.persons.aggregate([{$group : { _id : \"$person\" , salary_sum : { $sum : \"$salary\" }}}]);\nQ: Lookup users and orders collection\nA: db.users.aggregate([{$lookup : {from: 'orders', localField : \"_id\", foreignField : \"userId\", as : \"result\" }} ]);\n\nQ: What is the update syntax?\nA:db.collection.update(query, update, options)\nQ: How to edit collection sports where sportname is 'football' and match is 'england vs portugal' to score of '3-3' and date to current date?\nA: db.sports.update({ sportname: \"football\", match: \"england vs portugal\"} , {$set : {score: \"3-3\" , date : new Date()}} })\nQ: Query and atomically update collection zoo where animal is \"bear\" with a counter increment on eat field, if the data does not exist user upsert\nA: db.zoo.findOneAndUpdate({animal : \"bear\"}, {$inc: { eat : 1 }} , {upsert : true})\n```\n\nWe will use this file later in our code.\n\nThis way, the completion will be based on a similar pattern.\n\n### Prepare Your Atlas Cluster\n\n[MongoDB Atlas, the database-as-a-platform service, is a great way to have a running cluster in seconds with a sample dataset already there for our test. To prepare it, please use the following steps:\n\n1. Create an Atlas account (if you don\u2019t have one already) and use/start a cluster. For detailed steps, follow this documentation.\n2. Load the sample data set.\n3. Get your connection string.\n\nUse the copied connection string, providing it to the `mongosh` binary to connect to the pre-populated Atlas cluster with sample data. 
Then, switch to `sample_restaurants`\ndatabase.\n\n``` js\nmongosh \"mongodb+srv://:\n\n@/sample_restaurants\"\nUsing Mongosh : X.X.X\nUsing MongoDB: X.X.X\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\nATLAS atlas-ugld61-shard-0 primary]> use sample_restaurants;\n```\n\n## Using OpenAI Inside the mongosh Shell\n\nNow, we can build our `textToMql` function by pasting it into the `mongosh`. The function will receive a text sentence, use our generated OpenAI API key, and will try to return the best MQL command for it:\n\n``` js\nasync function textToMql(query){\n\nconst OpenAI = require('openai-api');\nconst openai-client = new OpenAI(\"\");\n\nconst fs = require('fs');\n\nvar data = await fs.promises.readFile('AI-input.txt', 'utf8');\n\nconst learningPath = data;\n\nvar aiInput = learningPath + \"Q:\" + query + \"\\nA:\";\n\n const gptResponse = await openai-client.complete({\n engine: 'davinci',\n prompt: aiInput,\n \"temperature\": 0.3,\n \"max_tokens\": 400,\n \"top_p\": 1,\n \"frequency_penalty\": 0.2,\n \"presence_penalty\": 0,\n \"stop\": [\"\\n\"]\n });\n\n console.log(gptResponse.data.choices[0].text);\n}\n```\n\nIn the above function, we first load the OpenAI npm module and initiate a client with the relevant API key from OpenAI. \n\n``` js\nconst OpenAI = require('openai-api');\nconst openai-client = new OpenAI(\"\");\n\nconst fs = require('fs');\n```\n\nThe new shell allows us to import built-in and external [modules to produce an unlimited flexibility with our scripts.\n\nThen, we read the learning data from our `AI-input.txt` file. Finally we add our `Q: ` input to the end followed by the `A:` value which tells the engine we expect an answer based on the provided learningPath and our query. \n\nThis data will go over to an OpenAI API call:\n\n``` js\n const gptResponse = await openai.complete({\n engine: 'davinci',\n prompt: aiInput,\n \"temperature\": 0.3,\n \"max_tokens\": 400,\n \"top_p\": 1,\n \"frequency_penalty\": 0.2,\n \"presence_penalty\": 0,\n \"stop\": \"\\n\"]\n });\n```\n\nThe call performs a completion API and gets the entire initial text as a `prompt` and receives some additional parameters, which I will elaborate on:\n\n* `engine`: OpenAI supports a few AI engines which differ in quality and purpose as a tradeoff for pricing. The \u201cdavinci\u201d engine is the most sophisticated one, according to OpenAI, and therefore is the most expensive one in terms of billing consumption.\n* `temperature`: How creative will the AI be compared to the input we gave it? It can be between 0-1. 0.3 felt like a down-to-earth value, but you can play with it.\n* `Max_tokens`: Describes the amount of data that will be returned.\n* `Stop`: List of characters that will stop the engine from producing further content. 
Since we need to produce MQL statements, it will be one line based and \u201c\\n\u201d is a stop character.\n\nOnce the content is returned, we parse the returned JSON and print it with `console.log`.\n\n### Lets Put OpenAI to the Test with MQL\n\nOnce we have our function in place, we can try to produce a simple query to test it:\n\n``` js\nAtlas atlas-ugld61-shard-0 [primary] sample_restaurants> textToMql(\"query all restaurants where cuisine is American and name starts with 'Ri'\")\n db.restaurants.find({cuisine : \"American\", name : /^Ri/})\n\nAtlas atlas-ugld61-shard-0 [primary] sample_restaurants> db.restaurants.find({cuisine : \"American\", name : /^Ri/})\n[\n {\n _id: ObjectId(\"5eb3d668b31de5d588f4292a\"),\n address: {\n building: '2780',\n coord: [ -73.98241999999999, 40.579505 ],\n street: 'Stillwell Avenue',\n zipcode: '11224'\n },\n borough: 'Brooklyn',\n cuisine: 'American',\n grades: [\n {\n date: ISODate(\"2014-06-10T00:00:00.000Z\"),\n grade: 'A',\n score: 5\n },\n {\n date: ISODate(\"2013-06-05T00:00:00.000Z\"),\n grade: 'A',\n score: 7\n },\n {\n date: ISODate(\"2012-04-13T00:00:00.000Z\"),\n grade: 'A',\n score: 12\n },\n {\n date: ISODate(\"2011-10-12T00:00:00.000Z\"),\n grade: 'A',\n score: 12\n }\n ],\n name: 'Riviera Caterer',\n restaurant_id: '40356018'\n }\n...\n```\n\nNice! We never taught the engine about the `restaurants` collection or how to filter with [regex operators but it still made the correct AI decisions. \n\nLet's do something more creative.\n\n``` js\nAtlas atlas-ugld61-shard-0 primary] sample_restaurants> textToMql(\"Generate an insert many command with random fruit names and their weight\")\n db.fruits.insertMany([{name: \"apple\", weight: 10}, {name: \"banana\", weight: 5}, {name: \"grapes\", weight: 15}])\nAtlas atlas-ugld61-shard-0 [primary]sample_restaurants> db.fruits.insertMany([{name: \"apple\", weight: 10}, {name: \"banana\", weight: 5}, {name: \"grapes\", weight: 15}])\n{\n acknowledged: true,\n insertedIds: {\n '0': ObjectId(\"60e55621dc4197f07a26f5e1\"),\n '1': ObjectId(\"60e55621dc4197f07a26f5e2\"),\n '2': ObjectId(\"60e55621dc4197f07a26f5e3\")\n }\n}\n```\n\nOkay, now let's put it to the ultimate test: [aggregations!\n\n``` js\nAtlas atlas-ugld61-shard-0 primary] sample_restaurants> use sample_mflix;\nAtlas atlas-ugld61-shard-0 [primary] sample_mflix> textToMql(\"Aggregate the count of movies per year (sum : 1) on collection movies\")\n db.movies.aggregate([{$group : { _id : \"$year\", count : { $sum : 1 }}}]);\n\nAtlas atlas-ugld61-shard-0 [primary] sample_mflix> db.movies.aggregate([{$group : { _id : \"$year\", count : { $sum : 1 }}}]);\n[\n { _id: 1967, count: 107 },\n { _id: 1986, count: 206 },\n { _id: '2006\u00e82012', count: 2 },\n { _id: 2004, count: 741 },\n { _id: 1918, count: 1 },\n { _id: 1991, count: 252 },\n { _id: 1968, count: 112 },\n { _id: 1990, count: 244 },\n { _id: 1933, count: 27 },\n { _id: 1997, count: 458 },\n { _id: 1957, count: 89 },\n { _id: 1931, count: 24 },\n { _id: 1925, count: 13 },\n { _id: 1948, count: 70 },\n { _id: 1922, count: 7 },\n { _id: '2005\u00e8', count: 2 },\n { _id: 1975, count: 112 },\n { _id: 1999, count: 542 },\n { _id: 2002, count: 655 },\n { _id: 2015, count: 484 }\n]\n```\n\nNow *that* is the AI power of MongoDB pipelines!\n\n## DEMO\n\n[![asciicast](https://asciinema.org/a/424297)\n\n## Wrap-Up\n\nMongoDB's new shell allows us to script with enormous power like never before by utilizing npm external packages. 
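One small gotcha when assembling the snippets above into a single script: `openai-client` is not a valid JavaScript identifier, so give the client a name like `openaiClient`. If you want the helper available in every session, you can keep it in a file and pull it in with `load()` (or from `~/.mongoshrc.js`), starting `mongosh` from the directory where you ran `npm i openai-api`. Here is a consolidated sketch of the function — the API key and the path to `AI-input.txt` are placeholders you need to fill in:

``` js
// textToMql.js — run load("textToMql.js") inside mongosh,
// or reference it from ~/.mongoshrc.js so it loads on startup.
async function textToMql(query) {
  const OpenAI = require('openai-api');            // installed earlier with: npm i openai-api
  const openaiClient = new OpenAI('<YOUR-OPENAI-API-KEY>');
  const fs = require('fs');

  // The few-shot "learning" material in the Q:/A: format shown above.
  const learningPath = await fs.promises.readFile('AI-input.txt', 'utf8');
  const aiInput = learningPath + 'Q: ' + query + '\nA:';

  const gptResponse = await openaiClient.complete({
    engine: 'davinci',
    prompt: aiInput,
    temperature: 0.3,
    max_tokens: 400,
    top_p: 1,
    frequency_penalty: 0.2,
    presence_penalty: 0,
    stop: ['\n']                                   // MQL answers are single-line
  });

  console.log(gptResponse.data.choices[0].text);
}
```

With that in place, `textToMql("...")` is ready as soon as the shell starts.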
Together with the power of OpenAI sophisticated AI patterns, we were able to teach the shell how to prompt text to accurate complex MongoDB commands, and with further learning and tuning, we can probably get much better results.\n\nTry this today using the new MongoDB shell.", "format": "md", "metadata": {"tags": ["MongoDB", "AI"], "pageDescription": "Learn how new mongosh external modules can be used to generate MQL language via OpenAI engine. Transform simple text sentences into sophisticated queries. ", "contentType": "Article"}, "title": "Generating MQL Shell Commands Using OpenAI and New mongosh Shell", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/introduction-realm-sdk-android", "action": "created", "body": "# Introduction to the Realm SDK for Android\n\nThis is a beginner article where we introduce you to the Realm Android SDK, dive through its features, and illustrate development of the process with a demo application to get you started quickly.\n\nIn this article, you will learn how to set up an Android application with the Realm Android SDK, write basic queries to manipulate data, and you'll receive an introduction to Realm Studio, a tool designed to view the local Realm database.\n\n>\n>\n>Pre-Requisites: You have created at least one app using Android Studio.\n>\n>\n\n>\n>\n>**What is Realm?**\n>\n>Realm is an object database that is simple to embed in your mobile app. Realm is a developer-friendly alternative to mobile databases such as SQLite and CoreData.\n>\n>\n\nBefore we start, create an Android application. Feel free to skip the step if you already have one.\n\n**Step 0**: Open Android Studio and then select Create New Project. For more information, you can visit the official Android website.\n\nNow, let's get started on how to add the Realm SDK to your application.\n\n**Step 1**: Add the gradle dependency to the **project** level **build.gradle** file:\n\n``` kotlin\ndependencies {\n classpath \"com.android.tools.build:gradle:$gradle_version\"\n classpath \"org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version\"\n classpath \"io.realm:realm-gradle-plugin:10.4.0\" // add this line\n}\n```\n\nAlso, add **mavenCentral** as our dependency, which was previously **jCenter** for Realm 10.3.x and below.\n\n``` kotlin\nrepositories {\n google()\n mavenCentral() // add this line\n}\n```\n\n``` kotlin\nallprojects {\n repositories {\n google()\n mavenCentral() // add this line\n }\n}\n```\n\n**Step 2**: Add the Realm plugin to the **app** level **build.gradle** file:\n\n``` kotlin\nplugins {\n id 'com.android.application'\n id 'kotlin-android'\n id 'kotlin-kapt' // add this line\n id 'realm-android' // add this line\n}\n```\n\nKeep in mind that order matters. You should add the **realm-android** plugin after **kotlin-kapt**.\n\nWe have completed setting up Realm in the project. Sync Gradle so that we can move to the next step.\n\n**Step 3**: Initialize and create our first database:\n\nThe Realm SDK needs to be initialized before use. 
This can be done anywhere (application class, activity, or fragment) but to keep it simple, we recommend doing it in the application class.\n\n``` kotlin\n// Ready our SDK\nRealm.init(this)\n// Creating our db with custom properties\nval config = RealmConfiguration.Builder()\n .name(\"test.db\")\n .schemaVersion(1)\n .build()\nRealm.setDefaultConfiguration(config)\n```\n\nNow that we have the Realm SDK added to our project, let's explore basic CRUD (Create, Read, Update, Delete) operations. To do this, we'll create a small application, building on MVVM design principles.\n\nThe application counts the number of times the app has been opened, which has been manipulated to give an illustration of CRUD operation.\n\n1. Create app view object when opened the first time\u200a\u2014\u200a**C** R U D\n2. Read app viewed counts\u2014C **R** U D\n3. Update app viewed counts\u2014C R **U** D\n4. Delete app viewed counts\u2014\u200aC R U **D**\n\n \n\nOnce you have a good understanding of the basic operations, then it is fairly simple to apply this to complex data transformation as, in the end, they are nothing but collections of CRUD operations.\n\nBefore we get down to the actual task, it's nice to have background knowledge on how Realm works. Realm is built to help developers avoid common pitfalls, like heavy lifting on the main thread, and follow best practices, like reactive programming.\n\nThe default configuration of the Realm allows programmers to read data on any thread and write only on the background thread. This configuration can be overwritten with:\n\n``` kotlin\nRealm.init(this)\nval config = RealmConfiguration.Builder()\n .name(\"test.db\")\n .allowQueriesOnUiThread(false)\n .schemaVersion(1)\n .deleteRealmIfMigrationNeeded()\n .build()\nRealm.setDefaultConfiguration(config)\n```\n\nIn this example, we keep `allowQueriesOnUiThread(true)` which is the default configuration.\n\nLet's get started and create our object class `VisitInfo` which holds the visit count:\n\n``` kotlin\nopen class VisitInfo : RealmObject() {\n\n @PrimaryKey\n var id = UUID.randomUUID().toString()\n\n var visitCount: Int = 0\n\n}\n```\n\nIn the above snippet, you will notice that we have extended the class with `RealmObject`, which allows us to directly save the object into the Realm.\n\nWe can insert it into the Realm like this:\n\n``` kotlin\nval db = Realm.getDefaultInstance()\ndb.executeTransactionAsync {\n val info = VisitInfo().apply {\n visitCount = count\n }\n it.insert(info)\n}\n```\n\nTo read the object, we write our query as:\n\n``` kotlin\nval db = Realm.getDefaultInstance()\nval visitInfo = db.where(VisitInfo::class.java).findFirst()\n```\n\nTo update the object, we use:\n\n``` kotlin\nval db = Realm.getDefaultInstance()\nval visitInfo = db.where(VisitInfo::class.java).findFirst()\n\ndb.beginTransaction()\nvisitInfo.apply {\n visitCount += count\n}\n\ndb.commitTransaction()\n```\n\nAnd finally, to delete the object:\n\n``` kotlin\nval visitInfo = db.where(VisitInfo::class.java).findFirst()\nvisitInfo?.deleteFromRealm()\n```\n\nSo now, you will have figured out that it's very easy to perform any operation with Realm. You can also check out the Github repo for the complete application.\n\nThe next logical step is how to view data in the database. For that, let's introduce Realm Studio.\n\n*Realm Studio is a developer tool for desktop operating systems that allows you to manage Realm database instances.*\n\nRealm Studio is a very straightforward tool that helps you view your local Realm database file. 
You can install Realm Studio on any platform from .\n\nLet's grab our database file from our emulator or real device.\n\nDetailed steps are as follows:\n\n**Step 1**: Go to Android Studio, open \"Device File Explorer\" from the right-side panel, and then select your emulator.\n\n \n\n**Step 2**: Get the Realm file for our app. For this, open the folder named **data** as highlighted above, and then go to the **data** folder again. Next, look for the folder with your package name. Inside the **files** folder, look for the file named after the database you set up through the Realm SDK. In my case, it is **test.db**.\n\n**Step 3**: To export, right-click on the file and select \"Save As,\" and\nthen open the file in Realm Studio.\n\nNotice the visit count in the `VisitInfo` class (AKA table) which is equivalent to the visit count of the application. That's all, folks. Hope it helps to solve the last piece of the puzzle.\n\nIf you're an iOS developer, please check out Accessing Realm Data on iOS Using Realm Studio.\n\n>\n>\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n>\n>\n", "format": "md", "metadata": {"tags": ["Realm", "Kotlin", "Android"], "pageDescription": "Learn how to use the Realm SDK with Android.", "contentType": "Tutorial"}, "title": "Introduction to the Realm SDK for Android", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/computed-pattern", "action": "created", "body": "# Building with Patterns: The Computed Pattern\n\nWe've looked at various ways of optimally storing data in the **Building\nwith Patterns** series. Now, we're going to look at a different aspect\nof schema design. Just storing data and having it available isn't,\ntypically, all that useful. The usefulness of data becomes much more\napparent when we can compute values from it. What's the total sales\nrevenue of the latest Amazon Alexa? How many viewers watched the latest\nblockbuster movie? These types of questions can be answered from data\nstored in a database but must be computed.\n\nRunning these computations every time they're requested though becomes a\nhighly resource-intensive process, especially on huge datasets. CPU\ncycles, disk access, memory all can be involved.\n\nThink of a movie information web application. Every time we visit the\napplication to look up a movie, the page provides information about the\nnumber of cinemas the movie has played in, the total number of people\nwho've watched the movie, and the overall revenue. If the application\nhas to constantly compute those values for each page visit, it could use\na lot of processing resources on popular movies\n\nMost of the time, however, we don't need to know those exact numbers. We\ncould do the calculations in the background and update the main movie\ninformation document once in a while. These **computations** then allow\nus to show a valid representation of the data without having to put\nextra effort on the CPU.\n\n## The Computed Pattern\n\nThe Computed Pattern is utilized when we have data that needs to be\ncomputed repeatedly in our application. 
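To make the movie example concrete, the background job could roll the screening data up with an aggregation and write the totals back onto each movie document. This is only a sketch — the `screenings` and `movies` collections and their field names are illustrative, not a prescribed schema:

``` js
// Periodically compute per-movie totals and store them with the movie itself.
db.screenings.aggregate([
  { $group: {
      _id: "$movie_id",                     // one result document per movie
      num_screenings: { $sum: 1 },          // how many screenings were recorded
      total_viewers: { $sum: "$viewers" },  // summed viewer counts
      total_revenue: { $sum: "$revenue" }   // summed revenue
  } },
  { $merge: {                               // write the computed values onto the movie documents
      into: "movies",
      on: "_id",
      whenMatched: "merge",
      whenNotMatched: "discard"
  } }
]);
```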
The Computed Pattern is also\nutilized when the data access pattern is read intensive; for example, if\nyou have 1,000,000 reads per hour but only 1,000 writes per hour, doing\nthe computation at the time of a write would divide the number of\ncalculations by a factor 1000.\n\nIn our movie database example, we can do the computations based on all\nof the screening information we have on a particular movie, compute the\nresult(s), and store them with the information about the movie itself.\nIn a low write environment, the computation could be done in conjunction\nwith any update of the source data. Where there are more regular writes,\nthe computations could be done at defined intervals - every hour for\nexample. Since we aren't interfering with the source data in the\nscreening information, we can continue to rerun existing calculations or\nrun new calculations at any point in time and know we will get correct\nresults.\n\nOther strategies for performing the computation could involve, for\nexample, adding a timestamp to the document to indicate when it was last\nupdated. The application can then determine when the computation needs\nto occur. Another option might be to have a queue of computations that\nneed to be done. Selecting the update strategy is best left to the\napplication developer.\n\n## Sample Use Case\n\nThe **Computed Pattern** can be utilized wherever calculations need to\nbe run against data. Datasets that need sums, such as revenue or\nviewers, are a good example, but time series data, product catalogs,\nsingle view applications, and event sourcing are prime candidates for\nthis pattern too.\n\nThis is a pattern that many customers have implemented. For example, a\ncustomer does massive aggregation queries on vehicle data and store the\nresults for the server to show the info for the next few hours.\n\nA publishing company compiles all kind of data to create ordered lists\nlike the \"100 Best...\". Those lists only need to be regenerated once in\na while, while the underlying data may be updated at other times.\n\n## Conclusion\n\nThis powerful design pattern allows for a reduction in CPU workload and\nincreased application performance. It can be utilized to apply a\ncomputation or operation on data in a collection and store the result in\na document. This allows for the avoidance of the same computation being\ndone repeatedly. Whenever your system is performing the same\ncalculations repeatedly and you have a high read to write ratio,\nconsider the **Computed Pattern**.\n\nWe're over a third of the way through this **Building with Patterns**\nseries. Next time we'll look at the features and benefits of the Subset\nPattern\nand how it can help with memory shortage issues.\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Over the course of this blog post series, we'll take a look at twelve common Schema Design Patterns that work well in MongoDB.", "contentType": "Tutorial"}, "title": "Building with Patterns: The Computed Pattern", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-sync-in-use-with-swiftui-chat-app-meetup", "action": "created", "body": "# Realm Sync in Use \u2014 Building and Architecting a Mobile Chat App Meetup\n\nDidn't get a chance to attend the Realm Sync in use - building and architecting a Mobile Chat App Meetup? 
Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up.\n\n>Realm Sync in Use - Building and Architecting a Mobile Chat App\n>\n>:youtube]{vid=npglFqQqODk}\n\nIn this meetup, Andrew Morgan, a Staff Engineer at MongoDB, will walk you through the thinking, architecture and design patterns used in building a Mobile Chat App on iOS using MongoDB Realm Sync. The Chat app is used as an example, but the principles can be applied to any mobile app where sync is required. Andrew will focus on the data architecture, both the schema and the partitioning strategy used and after this session, you will come away with the knowledge needed to design an efficient, performant, and robust data architecture for your own mobile app.\n\nIn this 70-minute recording, in the first 50 minutes or so, Andrew covers: \n- Demo of the RChat App\n- System/Network Architecture\n- Data Modelling & Partitioning\n- The Code - Integrating synced Realms in your SwiftUI App\n\nAnd then we have about 20 minutes of live Q&A with our Community. For those of you who prefer to read, below we have a full transcript of the meetup too. As this is verbatim, please excuse any typos or punctuation errors!\n\nThroughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our [Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.\n\nTo learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.\n\n## Transcript\n\n**Shane McAllister**: Hello, and welcome to the meetup. And we're really, really delighted that you could all join us here and we're giving people time to get on board. And so we have enough of a quorum of people here that we can get started. So first things first introductions. My name is Shane McAllister, and I'm a lead on the Developer Advocacy team here, particularly for Realm. And I'm joined today as, and we'll do introductions later by Andrew Morgan as well. Who's a staff engineer on the Developer Advocacy team, along with me. So, today we're doing this meetup, but it's the start of a series of meetups that we're doing, particularly in COVID, where everything's gone online. We understand that there's lots of events and lots of time pressures for people. We want to reach our core developer audience as easily as possible. So this is only our second meetup using our new live platform that we have.\n\n**Shane McAllister**: And so very much thank you for coming. Thank you for registering and thank you for being here. But if you have registered and you certainly joined the Realm Global Community, it means that you will get notified of these future events instantly via email, as soon as we add them. So we have four more of these events coming over the next four weeks, four to six weeks as well too. And we'll discuss those a little bit at the end of the presentation. With regards to this platform, you're used to online platforms, this is a little bit different. We have chat over on the right-hand side of your window. 
Please use that throughout.\n\n**Shane McAllister**: I will be monitoring that while Andrew is presenting and I will be trying to answer as much as I can in there, but we will using that as a function to go and do our Q&A at the end. And indeed, if you are up to it, we'd more than welcome to have you turn on your camera, turn on your mic and join us for that Q&A at the end of our sessions as well too. So just maybe mute your microphones if they're not already muted. We'll take care of that at the end. We'll open it out and everyone can get involved as well, too. So without further ado, let's get started. I'm delighted to hand over to Andrew Morgan.\n\n**Andrew Morgan**: I'm going to talk through how you actually build a mobile app using Realm and in particular MongoDB Realm Sync. To make it a bit less dry. We're going to use an example app, which is called RChat, which is a very simple chat application. And if you like, it's a very simple version of WhatsApp or Slack. So the app is built for iOS using SwiftUI, but if you're building on Android, a lot of what I'm going to cover is still going to apply. So all the things about the data modeling the partitioning strategy, setting up the back end, when you should open and close Realms, et cetera, they're the same. Looking at the agenda. We're going to start off with a very quick demo of the app itself. So you understand what we're looking at. When we look at the code and the data model, we'll look at the components to make it up both the front end and the back end.\n\n**Andrew Morgan**: One of the most important is how we came up with the data model and the partitioning strategy. So how partitioning works with Realm Sync? Why we use it? And how you actually come up with a strategy that's going to work for your app? Then we'll get to the code itself, both the front end iOS code, but also some stored procedures or triggers that we're running in the back end to stitch all of the data together. And finally, I promise I'll keep some time at the end so we can have a bit of interactive Q&A. Okay, so let's start with the demo.\n\n**Andrew Morgan**: And so what we've got is a fairly simplistic chat app along the lines of WhatsApp or Slack, where you have the concept of users, chat rooms, and then that messages within those. So we've got three devices connected. You can register new users through the app, but using that radio button at the bottom, but for this, I'm going to use what I've created earlier. This will now authenticate the user with Realm back end. And you may notice as Rod came online, the presence updated in the other apps. So for example, in the buds here, you can see that the first two members are online currently, the middle one is the one I just logged in. And then the third one is still offline. You'll see later, but all of these interactions and what you're seeing there, they're all being implemented through the data changing. So we're not sending any rest messages around like that to say that this user is logged in, it's just the data behind the scenes changes and that gets reflected in the UI. And so we can create a new chat room.\n\n**Andrew Morgan**: You can search for users, we've only got a handful here, so I'll just add Zippy and Jane. And then as I save, you should see that chat appear in their windows too. See they're now part of this group. And we can go in here and can send those messages and it will update the status. And then obviously you can go into there and people can send messages back. Okay. 
So it's a chat app, as you'd expect you can also do things like attach photos. Apologies, this is the Xcode beta the simulator is a little bit laggy on this. And then you can also share your location. So we'll do the usual kind of things you'd want to do in a chat room and then dive into the maps, et cetera. Okay. So with that, I think I can switch back to the slides and take a very quick look at what's going on behind the scenes from an architectural perspective.\n\n**Andrew Morgan**: Let me get a pointer. So, this is the chat app you were seeing, the only time and so we've got the chat app, we've got the Realm embedded mobile database. We've got MongoDB Realm, which is the back end service. And then we've got MongoDB Atlas, which is the back end data store. So the only time the application interacts directly with Realm, the Realm service in the back end is when the users logging in or logging out or you're registering another user, the rest of the time, the application is just interacting with the local Realm database. And then that Realm database is synchronizing via the Realm service with other instances of the application. So for example, when I sent a message to Rod that just adds a chat message to the Realm database that synchronizes via MongoDB Realm Sync, and then that same day to get sent to the other Realm database, as well as a copy gets written to Atlas.\n\n**Andrew Morgan**: So it's all data-driven. What we could do, which we haven't done yet is that same synchronization can also synchronize with Android applications. And also because the data is stored in Atlas, you can get at that same data through a web application, for example. So you only have to write your back end once and then all of these different platforms, your application can be put into those and work as it is.\n\n**Andrew Morgan**: So the data model and partitioning. So first of all, Shane and I were laughing at this diagram earlier, trying to figure out how many years this picture has been in used by the Realm Team.\n\n**Shane McAllister**: It's one of the evergreen ones I think Andrew. I think nobody wants to redesign it just yet. So we do apologize for the clip art nature of this slide.\n\n**Andrew Morgan**: Yeah. Right. So, the big database cylinder, you can see here is MongoDB Atlas. And within there you have collections. If you're new to MongoDB, then a collection is analogous to a table in a relational database. And so in our shapes database, we've got collections for circles, stars, and triangles. And then each of those shapes within those collections, they've got an attribute called color. And what we've decided to do in this case is to use the color attribute as our partitioning key.\n\n**Andrew Morgan**: So what that means is that every one of these collections, if they're going to be synced, they have to have a key called color. And when someone connects their Realm database and saying, they want to sync, they get to specify the value for the partitioning key. So for example, the top one specified that they want all of the blue objects. And so that means that they get, regardless of which collection they're in, they get all of the blue shapes. And so you don't have any control over whether you just synced the stars or just the triangles. You get all of the shapes because the partition is set to just the color. The other limitation or feature of this is that you don't get to choose that certain parts of the circle gets synchronized, but others don't. So it's all or nothing. 
You're either syncing all of the red objects in their entirety or you're not sinking the red objects in their entirety.\n\n**Andrew Morgan**: So, why do we do this partitioning rather than just syncing everything to the mobile Realm database? One reason is space. You've obviously got constraints on how much storage you've got in your mobile device. And if, for example, you partitioned on user and you had a million users, you don't want every user's device to have to store data, all of those million users. And so you use the partitioning key to limit how much storage and network you're using for each of those devices. And the other important aspect is security. I don't necessarily want every other user to be able to see everything that's in my objects. And so this way you can control it, that on the server side, you make sure that when someone's logged in that they can only synchronize the objects that they're entitled to see. So, that's the abstract idea.\n\n**Andrew Morgan**: Let's get back to our chat application use case, and we've got three top-level objects that we want to synchronize. The first one is the User. And so if I'm logged in as me, I want to be able to see all of the data. And I also want to be able to update it. So this data will include things like my avatarImage. It will include my userName. It will include a list of the conversations or the chat rooms that I'm currently a member of. So no one else needs to see all of that data. There's some of that data that I would like other people to be able to at least see, say, for example, my displayName and my avatarImage. I'd like other people to be able to see that. But if you think back to how the partitioning works and that it's all or nothing, I can either sync the entire User object or none of it at all.\n\n**Andrew Morgan**: So what we have is, we have another representation of the User, which is called the Chatster. And it's basically a mirror of a subset of the data from the User. So it does include my avatar, for example, it does include my displayName. But what it won't include is things like the complete list of all of the chat rooms that I'm a member of, because other people have no business knowing that. And so for this one, I want the syncing rule to be that, anyone can read that data, but no one can update it.\n\n**Andrew Morgan**: And then finally, we've got the ChatMessages themselves, and this has got a different access rules again, because we want all of the members within a chat room to be able to read them and also write new ones. And so we've got three top-level objects and they all have different access rules. But remember we can only have a single partitioning key. And that partitioning key has to be either a String, an objectID or a Long. And so to be able to get a bit more sophisticated in what we synchronized to which users, we actually cheat a little and instead, so a partitioning key, it's an attribute that we call partition. And within that, we basically have key value pairs. So for each of those types of objects, we can use a different key and value.\n\n**Andrew Morgan**: So, for example, for the user objects or the user collection, we use the String, user=, and then \\_id. So the \\_id is what uniquely identifies the object or the document within the collection. So this way we can have it, that the rules on the server side will say that this partition will only sync if the currently logged in user has got the \\_id that matches. For the Chatster it's a very simple rule. 
So we're effectively hard coding this to say, all-users equals all-the-users, but this could be anything. So this is just a string that if you see this the back ends knows that it can synchronize everything. And then for the ChatMessages the key is conversation and then the value is the conversation-id.\n\n**Andrew Morgan**: I'll show you in code how that comes together. So this is what our data model looks like. As I said, we've got the three top-level objects. User, Chatster and ChatMessage. And if we zoom in you'll see that a User is actually, its got a bunch of attributes in the top-level object, but then it's got sub-objects, or when sorting MongoDB sub-documents. So it's got sub-objects. So, the users got a list of conversations. The conversation contains a list of members and a UserPreferences or their avatarImage, the displayName, and know that they do have an attribute called partition. And it's only the top level object that needs to have the partition attributes because everything else is a sub-object and it just gets dragged in.\n\n**Andrew Morgan**: I would, we also have a UserPreference contains a Photo, which is a photo object. And then Chatster, which is our read-only publicly visible object. We've got the partition and every time we try and open a Realm for the Chatster objects, we just set it to the String, all-users equals all-the-users. So it's very similar, but it's a subset of the data that I'm happy to share with everyone. And then finally we have the ChatMessage which again, you can see it's a top-level object, so it has to have the partition attribute.\n\n**Andrew Morgan**: So how do we enforce that people or the application front end only tries to open Realms for the partitions that it's enabled that ought to? We can do that through the Realm UI in the back end. We do it by specifying a rule for read-only Realms and read-write Realms. And so in each case, all I'm doing here is I'm saying that I'm going to call a Realm function. And when that functions is called, it's going to be given passed a parameter, which is the partition that they're trying to access. And then that's just the name of the function.\n\n**Andrew Morgan**: And I'm not going to go through this in great detail, but this is a simplified version of the canWritePartition. So this is what the sides, if the application is asking to open a Realm to make changes to it, this is how I check if they're allowed access to that partition. So the first thing we do is we take the partition, which remember is that Key Value string. And we split it to get the Key and the Value for that Key. Then we just do a switch based on what has been used as the Key. If it's the \"user\" then we check that the partitionValue matches the \\_id of the currently logged in user. And so that'll return true or false. The conversation is the most complex one. And for that one, it actually goes and reads the userDoc for this \"user\" and then checks whether this conversation.id is one that \"user\" is a member of. So, that's the most complex one. And then all users, so this is remember for the Chatster object, that always returns false, because the application is never allowed to make changes to those objects.\n\n**Andrew Morgan**: So now we're looking at some of the Swift code, and this is the first of the classes that the Realm mobile database is using. So this is the top-level class for the Chatster object. The main things to note in here is so we're importing RealmSwift, which is the Realm Cocoa SDK. 
The Chatser it conforms to the object protocol, and that's actually RealmSwift.object. So, that's telling Realm that this is a class where the objects can be managed by the Realm mobile database. And for anyone who's used a SwiftUI, ObjectKeyIdentifiable protocol that's taking the place of identifiable. So it just gives each of these objects... But it means that Realm would automatically give each of these objects, an \\_id that can be used by Swift UI when it's rendering views.\n\n**Andrew Morgan**: And then the other thing to notice is for the partition, we're hard coding it to always be all-users equals all-the-users, because remember everyone can read all Chatster objects, and then we set things up. We've got the photo objects, for example which is an EmbeddedObject. So all of these things in there. And for doing the sync, you also have to provide a primary key. So again, that's something that Realm insists on. If you're saying that you've implemented the object protocol, taking a look at one of the EmbeddedObjects instead of being object, you implement, you conform to the embedded object protocol. So, that means two things. It means that when you're synchronizing objects, this is just synchronized within a top-level object. And the other nice thing is, this is the way that we implement cascading deletes. So if you deleted a Chatster object, then it will automatically delete all of the embedded photo objects. So that, that makes things a lot simpler.\n\n**Andrew Morgan**: And we'll look quickly at the other top-level objects. We've got the User class. We give ourselves just a reminder that when we're working with this, we should set the partition to user equals. And then the value of this \\_id field. And again, it's got userPreferences, which is an EmbeddedObject. Conversations are a little bit different because that's a List. So again, this is a Realm Cocoa List. So we could say RealmSwift.list here. So we've got a list of conversation objects. And then again, those conversation objects little displayName, unreadCount, and members is a List of members and so on and so on. And then finally just for the complete desk here, we've got the ChatMessage objects.\n\n**Andrew Morgan**: Okay. So those of us with objects, but now we'll take a quick look at how you actually use the Realm Cocoa SDK from your Swift application code. As I said before the one interaction that the application has directly with the Realm back end is when you're logging in or logging out or registering a new user. And so that's what we're seeing here. So couple of things to note again, we are using Realm Cocoa we're using Combine, which for people not familiar with iOS development, it's the Swift event framework. So it's what you can use to have pipelines of operations where you're doing asynchronous work. So when I log in function, yes, the first thing we do is we actually create an instance of our Realm App. So this id that's something that you get from the Realm UI when you create your application.\n\n**Andrew Morgan**: So that's just telling the front end application what back end application it's connecting to. So we can connect to Realm, we then log in, in this case, we're using email or username, password authentication. There's also anonymous, or you can use Java UTs there as well. So once this is successfully logged the user in, then if everything's been successful, then we actually send an event to a loginPublisher. So, that means that another, elsewhere we can listen to that Publisher. 
And when we're told someone's logged in, we can take on other actions. So what we're doing here is we're sending in the parameter that was passed into this stage, which in this case is going to be the user that's just logged in.\n\n**Andrew Morgan**: Okay, and I just take a break now, because there's two ways or two main ways that you can open a Realm. And this is the existing way that up until the start of this week, you'd have to use all of the time, but it's, I've included here because it's still a useful way of doing it because this is still the way that you open Realm. If you're not doing it from within SwiftUI view. So this is the Publisher, we just saw the loginPublisher. So it receives the user. And when it receives the user it creates the configuration where it's setting up the partitionValue. So this is one that's going to match the partition attribute and we create the user equals, so a string with user equals and then the user.id.\n\n**Andrew Morgan**: And then we use that to open a new Realm. And again, this is asynchronous. And so we send that Realm and it's been opened to this userRealmPublisher, which is yet another publisher that Combine will pass in that Realm once it's available. And then in here we store a copy of the user. So, this is actually in our AppState. So we create a copy of the user that we can use within the application. And that's actually the first user. So, when we created that Realm, it's on the users because we use the partition key that only matches a single user. There's only actually going to be one user object in this Realm. So we just say .first to receive that.\n\n**Andrew Morgan**: Then, because we want to store this, and this is an object that's being managed by Realm. We create a Realm, transaction and store, update the user object to say that this user is now online. And so when I logged in, that's what made the little icon turn from red to green. It's the fact that I updated this, which is then synchronized back to the Realm back end and reflected in all the other Realm databases that are syncing.\n\n**Andrew Morgan**: Okay. So there is now also asynchronous mode of opening it, that was how we had to open it all the way through our Swift code previously. But as of late on Monday, we actually have a new way of doing it. And I'm going to show you that here, which is a much more Swift UI friendly way of doing it. So, anyone who went to Jason's session a couple of weeks ago. This is using the functionality that he was describing there. Although if you're very observant, you may know that some of the names have been changed. So the syntax isn't exactly the same as you just described. So let's give a generic example of how you'd use this apologies to people who may be not familiar with Swift or Swift UI, but these are Swift UI views.\n\n**Andrew Morgan**: So within our view, we're going to call a ChildView. So it's a sub view. And in there we pass through the environment, a realmConfiguration, and that configuration is going to be based on a partition. So we're going to give a string in this case, which is going to be the partition that we want to open, and then synchronize. In this case, the ChildView doesn't do anything interesting. All it does is called the GrandChildView, but it's important to note that how the environments work with Swift UI is they automatically get passed down the view hierarchy. 
So even though we're not actually passing into the environment for GrandChildView, it is inheriting it automatically.\n\n**Andrew Morgan**: So within GrandChildView, we have an annotation so observed results. And what we're doing here is saying for the Realm that's been passed in, I want items to represent the results for all items. So item is a class. So all objects of the class item that are stored in those results. I want to store those as the items results set, and also I'm able to get to the Realm itself, and then we can pass those into, so we can iterate over all of those items and then call the NameView for each of those items. And it's been a long way getting to here, but this is where we can finally actually start using that item. So when we called NameView, we passed in the instance of item and we use this Realm annotation to say that it's an ObservedRealmObject when it's received in the NameView, and why that's important is it means that we don't have to explicitly open Realm transactions when we're working with that item in this View.\n\n**Andrew Morgan**: So the TextField View, it takes the label, which is just the label of the TextField and binding to the $items.name. So it takes a binding to a string. So, TextField can actually update the data. It's not just displaying it, it lets the user input stuff. And so we can pass in a binding to our item. And so text fields can now update that without worrying about having to explicitly open transactions.\n\n**Andrew Morgan**: So let's turn to our actual chat application. And so a top-level view is ContentView, and we do different things depending on whether you're logged in yet, but if you are logged in yet, then we call the ConversationListView, and we pass in a realmConfiguration where the partition is set user equals and then the \\_id of the user. Then within the ConversationListView, which is represents what you see from the part of the application here. We've got a couple of things. The first is, so what have we got here? Yeah. So we do some stuff with the data. So, we display each of these cards for the conversations, with a bit I wanted to highlight is that when someone clicks on one of these cards, it actually follows a link to a ChatRoomView. And again, with the ChatRoomView, we pass in the configuration to say that it's this particular partition that we want to open a Realm for.\n\n**Andrew Morgan**: And so once we're in there we, we get a copy of the userRealm and the reason we need a copy of the userRealm is because we're going to explicitly upgrade, update the unreadCount. We're going to set it to zero. So when we opened the conversation, we'll mark all of the messages as read. And because we're doing this explicitly rather than doing it via View, we do still need to do the transaction here. So that's why we received that. And then for each of these, so each of these is a ChatRoomBubble. So because we needed the userRealm in this View, we couldn't inject the ChatMessage View or the ChatMessage Realm into here. And so instead, rather than working with the ChatMessages in here, we have to pass, we have to have another subview where that subview is really just there to be able to pass in another partitionValue. So in this case, we're passing in the conversation equals then the id of the conversation.\n\n**Andrew Morgan**: And so that means that in our ChatRoomBubblesView, we're actually going to receive all of the objects of type ChatMessage, which are in that partition. 
And the other thing we're doing differently in here is that when we get those results, we can also do things like sorting on them, which we do here. Or you can also add a filter on here, if you don't want this view to work with every single one of those chatMessages, but in our case, all of those chatMessages for this particular conversation. And so we do want to work with all of them, but for example, you could have a filter here that says, don't include any messages that are more older than five months, for example. And then we can loop over those chatMessages, pass them to the ChatBubbleView which is one of these.\n\n**Andrew Morgan**: And the other thing we can do is you can actually observe those results. So when another user has a chatMessage, that will automatically appear in here because this result set automatically gets updated by Realm Sync. So the back end changes, there's another chatMessage it'll be added to that partition. So it appears in this Realm results set. And so this list will automatically be updated. So we don't have to do anything to make that happen. But what I do want to do is I want to scroll to the bottom of the list of these messages when that happens. So I explicitly set a NotificationToken to observe thosechatMessages. And so whenever it changes, I just scroll to the bottom.\n\n**Andrew Morgan**: Then the other thing I can do from this view is, I can send new messages. And so when I do that, I just create, we received a new chatMessage and I just make sure that check a couple of things. Very importantly, I set the conversation id to the current conversation. So the chatMessages tag to say, it's part of this particular conversation. And then I just need to append it to those same results. So, that's the chats results set that we had at the top. So by appending it to that List, Realm will automatically update the local Realm database, which automatically synchronizes with the back end.\n\n**Andrew Morgan**: Okay. So we've seen what's happening in the front end. But what we haven't seen is how was that user document or that user object traits in the first place? How was the Chatster object created when I have a new chatMessage? How do I update the unreadCount in all of the user objects? So that's all being done by Realm in the back end. So we've got a screen capture here of the Realm UI, and we're using Realm Triggers. So we've got three Realm Triggers. The first one is based on authentication. So when the user first logs in, so how it works is the user will register initially. And then they log in. And when they log in for the very first time, this Trigger will be hit and this code will be run. So this is a subset of the JavaScript code that makes up the Realm function.\n\n**Andrew Morgan**: And all it's really doing here is, it is creating a new userDoc based on the user.id that just logged in, set stuff up in there and including setting that they're offline and that they've got no conversations. And then it inserts that into the userCollection. So now we have a userDoc in the userCollection and that user they'll also receive that user object in their application because they straight away after logging in, they opened up that user Realm. And so they'll now have a copy of their own userDoc.\n\n**Andrew Morgan**: Then we've got a couple of database Triggers. So this one is hit every time a new chatMessage is added. And when that happens, that function will search for all of the users that have that conversation id in their list of conversations. 
And then it will increment the unreadCount value for that particular conversation within that particular user's document. And then finally, we've got the one that creates the Chatster document. So whenever a user document is created or updated, then this function will run and it will update the Chatser document. So it also always provides that read-only copy of a subset of the data. And the other thing that it does is that when a conversation has been added to a particular user, this function will go and update all of the other users that are part of that conversation. So that those user documents also reflect the fact that they're a part of that document.\n\n**Andrew Morgan**: Okay. Um, so that was all the material I was going to go through. We've got a bunch of links here. So the application itself is available within the Realm Organization on GitHub. So that includes the back end Realm application as well as the iOS app. And it will also include the Android app once we've written it. And then there's the Realm Cocoa SDK the docs, et cetera. And if you do want to know more about implementing that application, then there's a blog post you can read. So that's the end-to-end instructions one, but that blog also refers to one that focuses specifically on the data model and partitioning. And then as Shane said, we've got the community forums, which we'd hope everyone would sign up for.\n\n**Shane McAllister**: Super. Thank you, Andrew. I mean, it's amazing to see that in essence, this is something that, WhatsApp, entire companies are building, and we're able to put it demo app to show how it works under the hood. So really, really appreciate that. There are some questions Andrew, in the Q&A channel. There's some interesting conversations there. I'm going to take them, I suppose, as they came in. I'll talk through these unless anybody wants to open their mics and have a chat. There's been good, interesting conversations there. We'd go back to, I suppose, the first ones was that in essence, Richard brought this one up about presence. So you had a presence status icon there on the members of the chat, and how did that work was that that the user was logged in and the devices online or that the user is available? How were you managing that Andrew?\n\n**Andrew Morgan**: Yeah. So, how it works is when a user logs in, we set it that that user is online. And so that will update the user document. That will then get synchronized through Realm Sync to the back end. And when it's received by the back end, it'll be written to the Atlas database and the database trigger will run. And so that database trigger will then replicate that present state to the users Chatser document. And then now going in the other direction, now that documents has changed in Atlas, Realm Sync will push that change to every instance of the application. And so the Swift UI and Realm code, when Realm is updated in the mobile app, that will automatically update the UI. 
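A rough sketch of what such a database trigger function can look like — this is not the exact code from the RChat backend, and the database, collection, and field names here are assumptions:

``` js
exports = async function (changeEvent) {
  // Fires when a User document is inserted or updated; copy the changed
  // presence state onto the matching Chatster document so every user can see it.
  const user = changeEvent.fullDocument;   // requires "Full Document" enabled on the trigger

  const chatster = context.services
    .get("mongodb-atlas")                  // default linked-cluster service name
    .db("RChat")                           // illustrative database name
    .collection("Chatster");

  await chatster.updateOne(
    { _id: user._id },
    { $set: { presenceState: user.presenceState } }
  );
};
```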
And that's one of the beauties of working with Swift UI and Realm Cocoa: when you update the data model in the local Realm database, that will automatically get reflected in the UI.\n\n**Andrew Morgan**: So you don't have to have any event code saying, \"When you receive this message or when you see this data change, make this change to the UI.\" It happens automatically, because the Realm objects really live within the application, and because of the clever work that's been done in the Realm Cocoa SDK, when those changes are applied to the local copy of the data, it also notifies Swift UI that the views have to be updated to reflect the change. And then, in terms of when you go offline: if you explicitly log out, it will set it to offline and you get the same process going through again. If you stay logged in but you've had the app in the background for eight hours (you can actually configure how long), then you'll get a notification saying, \"Do you want to remain logged in, or do you want to log out?\"\n\n**Andrew Morgan**: The bit I haven't added, which would be needed in production, is that when you force quit or the app crashes, then before you shut things down, you just go and update the presence state. And the other presence thing you could do as well is, in the back end, you could have a scheduled trigger so that if someone has silently died somewhere, if they've been online eight hours or something, you just mark them to show they're offline.\n\n**Shane McAllister**: Yeah. I mean, presence is important, but I think the key thing for me is how much Realm does under the hood on your behalf \\[inaudible 00:43:14\\] jumping on a little bit.\n\n**Andrew Morgan**: With that particular one, I can do the demo. So for example, let's go on this window. You can see this is Zippy, the puppet. So if you monitor Zippy, then... this is actually, I'll move this over, because I need to expand this a little.\n\n**Shane McAllister**: I have to point out that Andrew is in Maidenhead in England, and for this demo, for those of you not familiar, there was a children's TV program sometime in the late '70s, early '80s \\[inaudible 00:43:54\\]. So the characters from this TV program are all the members in this chat app.\n\n**Andrew Morgan**: Yeah. And I think in real life, there's actually a bit of a love triangle between three of them as well.\n\n**Shane McAllister**: We won't go there. We won't go there.\n\n**Andrew Morgan**: So, yeah. This is the data that's stored in Atlas in the back end, and I'll zoom in a little bit. So if I manually go in, and you want to monitor Zippy's status in the iPhone app, if I change that presence state in the back end, then we should see that Zippy goes offline. So, again, there's no code in there at all. All I've had to do is bind that presence state into the Swift UI view.\n\n**Shane McAllister**: That's a really good example. I don't think there could be any stronger example of doing something in the back end and having it immediately reflected in the UI. I think it works really well. Kurt had a question with regard to the partition, Andrew. So, that's all of the users in one partition. This is a demo, so we don't have a lot of users, but in essence, if this was a real app, we could have 10 million user objects. How would we manage that? How would we go about that?\n\n**Andrew Morgan**: Yeah.
**Andrew Morgan**: So, the reason I've done it like that, where it's literally all users, is because I want you to be able to search. I want you to be able to create a new chat room from the mobile app and be able to search through all of the users that are registered in the system. That's another reason why we don't want the Chatster object to contain everything about a user: we want it to be fairly compact, so that it doesn't matter if you are storing a million of them. So ideally, we just have the userName and the avatar in there. If you wanted to go a step further, we could have another Chatster object with just the username. And if it really did get to the stage where you've got hundreds of millions of users, or maybe, for example, in a Slack-type environment where you want to have organizations, then instead of the partition being all the users, you could actually have something like org equals orgName as your partition key.\n\n**Andrew Morgan**: So you could just synchronize your organization rather than absolutely everything. If there really were too many users and you didn't want them all in the front end, at that point you'd start having to involve the back end when you wanted to add a new user to a chat room. And so you could call a Realm function, for example, to do a query on the database to get that information.\n\n**Shane McAllister**: Sure. Yeah, that makes sense. Okay, in terms of the chat, this is our demo, so we're not taking it to that scale, but in essence, these are the things that you would have to think about if you were planning to do something for yourself in this area. The other thing, Andrew, was that you showed at the very start that you're using embedded data for the photos at the moment in the app. There's another way, which is what we did in our O-FISH app as well.\n\n**Andrew Morgan**: Sorry. There was a bit of an echo, because I think when I have my mic on and you're talking, I will mute it.\n\n**Shane McAllister**: I'll repeat the question. It was actually Richard who raised this, and it was regarding the photos shared in the chat, Andrew. They're shared as embedded data, as opposed to, say, how we did it in our O-FISH open source app with an Amazon S3 routine, essentially a trigger that ran in the background: we passed it the picture and it just presented back a URL with the thumbnail.\n\n**Andrew Morgan**: Yeah. In this one, I was again being a little bit lazy, and we're actually storing the binary images within the objects and the documents. What we did with the O-FISH application is that the original photo was uploaded to S3 and we replaced it with an S3 link in the documents instead. And you can do that again through a Realm trigger. So every time a new photo document was added (sorry, in this case it would be a sub-document within the ChatMessage, for example), the Realm trigger could, when you receive a new ChatMessage, go and upload that image to S3 and then just replace it with the URL.\n\n**Andrew Morgan**: And to be honest, that's why in the photo I actually have a thumbnail as well as the full-size image. The idea is that you move the full-size image to S3 and replace it with a link, but it can be handy to have the thumbnails so that you can still see those images when you're offline, because obviously, for the front end application, if it's offline, then an S3 link isn't much use to you. You can't go and fetch it.
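\n\n(That trigger isn't shown in the session either, but a rough JavaScript sketch of the idea might look like the following. The collection and field names are assumptions, and the actual upload to S3 is left as a placeholder rather than a real API call.)\n\n```javascript\n// Hypothetical sketch of a trigger that offloads full-size images to object storage.\n// Collection and field names are assumptions; the upload step is a placeholder.\nexports = async function (changeEvent) {\n  const chatMessage = changeEvent.fullDocument;\n  const image = chatMessage.image;          // assumed embedded photo sub-document\n  if (!image || !image.picture) { return; } // nothing to offload\n\n  // Placeholder URL: in a real trigger you would upload image.picture to S3 here\n  // (via an HTTP call or a cloud SDK) and use the URL that the upload returns.\n  const url = \"https://example-bucket.s3.amazonaws.com/\" + chatMessage._id;\n\n  const messages = context.services.get(\"mongodb-atlas\")\n    .db(\"RChat\")                            // assumed database name\n    .collection(\"ChatMessage\");\n\n  // Keep the thumbnail for offline use; swap the heavy binary for a link.\n  await messages.updateOne(\n    { _id: chatMessage._id },\n    { $set: { \"image.url\": url }, $unset: { \"image.picture\": \"\" } }\n  );\n};\n```\n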
**Andrew Morgan**: So by having the thumbnail as well as the full-size image, you've got that option of effectively archiving one, but not the thumbnail.\n\n**Shane McAllister**: Perfect. Yeah, that makes a lot of sense. In a similar vein, about being logged in, et cetera, there was a question with regard to that: if there's a user Realm that is open as long as you're logged in, and then you pass in an environment Realm partition, are they both open in that view?\n\n**Andrew Morgan**: I think it'll still be... I believe it'll be one. Oh, yes, both Realms. So if you, for example, open the user Realm and the chats Realm, then yes, both of those Realms would be open simultaneously.\n\n**Shane McAllister**: Okay. Okay, perfect. And fair play to Ian for writing this one in. It's a long, long question, so I do appreciate the time and effort, and I hope I catch everything. And Ian, if you want me to open your mic so you can chime in and ask this question yourself, by all means just let me know in the chat and I'll happily do that. So perhaps, Andrew, you might scroll back up to the question as well. It was regarding fetching one object type across many partitions, many partition keys, actually. Ian has reminder lists, each shared with a different person. All the reminders in each list have a partition key that's unique for that share, and he wants to show the top level of that. So we're just wondering how we would go about that, putting you on the spot here now, Andrew. How would we manage that? Nope, you're muted again because of my feedback. Apologies.\n\n**Andrew Morgan**: Okay. So yeah, I think that's another case where there is the functionality-slash-limitation that when you open a Realm, you can only open it specifying a single value for the partition key. And so if you wanted to display a bunch of objects from 50 different partitions, then the brute-force way is to go and open 50 Realms, each with a different partition id. But that's another example where you may make a compromise on the back end and decide you want to duplicate some data. In a similar way to how we have the Chatster objects all visible in a single partition, you could also have a partition which just contains the list of lists.\n\n**Andrew Morgan**: So whenever someone creates a new list object, you could go and add that to another document that has the complete list of all of the lists. But yeah, this is why, when you're using Realm Sync, figuring out your data model and your partitioning strategy is one of the first things to do. As soon as you've figured out the customer story for what you want the app to do, the next thing you want to do is figure out the data model and your partitioning strategy, because it will make a big difference in terms of how much storage you use and how performant it's going to be.\n\n**Shane McAllister**: So Ian, your mic is open if you want to chime in on this. Or did we cover it? You good? Maybe it's not open, that's the joy of this.\n\n**Ian**: Do you hear me now?\n\n**Shane McAllister**: Yes, we can.\n\n**Ian**: Yeah. I need to go think about the answer. Because I was used to using Realm before Realm Sync, where you didn't have any sharing, but you could fetch all the reminders that you wanted from any of the lists and just show them in a big list. I need to go think about the answer. How about \\[inaudible 00:54:06\\].\n\n**Andrew Morgan**: Yeah, actually, there's a third option that I didn't mention, which is that Realm has functions.
**Andrew Morgan**: So, the triggers we looked at are actually implemented as Realm functions, which are a very simple, very lightweight equivalent to AWS Lambda functions. And you can invoke those from the front end application. So if you wanted to, you could have a function which queries the Atlas database to get a list of all of the lists. Then it would be a single call from the front end application to a function that runs in the back end, and that function could go and fetch whatever data you wanted from the database and send it back as a result.\n\n**Ian**: But that wouldn't work if you're trying to be offline-first, for example.\n\n**Andrew Morgan**: Yeah, that relies on online functionality, which is why I always try to do it based on the data in Realm as much as possible, just because of that. That's the only way you get the offline-first functionality.\n\n**Ian**: Cool. I'll just think about it. Thank you.\n\n**Shane McAllister**: Perfect. Thank you, Ian. Were there any other follow-ups, Ian?\n\n**Andrew Morgan**: Actually, there's one more hack I just thought of. You can only have a single partition key for a given Realm app, but I don't think there's any reason why you couldn't have multiple Realm apps accessing the same Atlas database. And so if you had the front end app actually open multiple Realm apps, then each of those Realm apps could use a different attribute for partitioning.\n\n**Shane McAllister**: Great. Lets-\n\n**Andrew Morgan**: So it's a bit hacky, but that might work.\n\n**Shane McAllister**: No worries. I'm throwing the floor open to Richard, if you're up to it, Richard. I've enabled host for you. You had a number of questions there. Richard from \\[inaudible 00:56:19\\] is a longtime friend of Realm. Do you want to jump on, Richard, and go through those yourself, or will I vocalize them for you? Oh, you're still muted, Richard.\n\n**Richard**: Okay. Can you hear me now?\n\n**Shane McAllister**: We can indeed.\n\n**Richard**: Okay. I think you answered the question very well about the image stuff. We've actually been playing around with the Amazon S3 snippets, and it's a great way of doing it, because often we need URLs for images, and the other big problem with storing images directly is that you're limited to four megabytes, which seems to be the limit for any data object in Realm. But Andrew had a great pointer, which is to store your avatars, because then you can get them in offline mode. That's actually been a problem with using Amazon S3. But what were the other questions I had? So, are you guys going to deprecate asyncOpen? Because we've noticed some problems with it lately.\n\n**Andrew Morgan**: Not that I'm aware of.\n\n**Richard**: Okay.\n\n**Andrew Morgan**: I think there are still use cases for it. For example, when a user logs in, I'm updating their presence outside of a view, so it doesn't inherit the Realm Cocoa magic that's going on when it's integrated with Swift UI. So I still have that use case, and there may be a way around it. And as I say, the new version of Realm Cocoa only went live late on Monday.\n\n**Richard**: Okay.\n\n**Andrew Morgan**: So I've updated most things, but that's the one thing where I still needed to use asyncOpen. When things have quietened down, I need to have a chat with Jason to see if there's an alternate way of doing it.
**Andrew Morgan**: So I don't think asyncOpen is going away, as far as I know. Partly, of course, because not everyone uses Swift UI. We have to have options for UIKit as well.\n\n**Richard**: Yeah. Well, I think everybody's starting to move there, because Apple's just pushing it. Well, the one last thing I was going to say about presence: before I was a Realm programmer, and that was three years ago (I actually adopted Realm Sync very early, when it just came out in 2017), I was a Firebase programmer for about three years. And the one thing Firebase did handle well was this presence idea, because you could basically attach yourself to, like, a Boolean in the database and say, as long as I'm present, that thing says true, but the minute I disconnect, it goes false. And then the other people could read that and see whether it's connected or not connected. I can implement that with a set of timers, where the client says I'm present every 30 seconds and that timer updates.\n\n**Richard**: And then there's a back end service function that clears a flag, but it's a little bit hacky. It would be nice if in Realm there was something where you could say, attach yourself to an object, and then, if the device wasn't present, which I think you could detect pretty easily, Realm would automatically just change the state from true to false. And then the other people could see that that device had actually gone offline. So, I don't know if that's something you guys are thinking of in a future release.\n\n**Andrew Morgan**: Yeah. I'm just checking in the triggers exactly what we can trigger on.\n\n**Richard**: Because somebody might be logged in, but it doesn't necessarily mean that they are at the other end.\n\n**Andrew Morgan**: Yeah. So, on the device side, one thing I was hoping to do, but haven't had a chance to, is that you can tell when the application is minimized. At the moment, when the user minimizes the app, they get a reminder in X hours saying, are you sure you still want to remain logged in? But instead of asking them, that could automatically just go and update their status to say they're offline. So you can do it, but I'm not aware of anything where, for example, Realm realizes that the session has timed out.\n\n**Richard**: I personally could get on an airplane, and the flight attendants say, okay, put everything in airplane mode. So you just do that, and all of a sudden you're out, and it doesn't have time to go and update. If you put the burden on the app, then there are a lot of scenarios where the server is going to think it's connected.\n\n**Andrew Morgan**: I think it's every 30 minutes that the user token is refreshed between the front end and the back end. So yeah, we could hook something into that, so the back end could say that if this user hasn't refreshed their token in 31 minutes, then they're actually offline.\n\n**Richard**: Yeah. But it'd be nice... with Firebase, you could tell within, I can't remember the time exactly, it was like three minutes. It would eventually signal, okay, this guy's not here anymore, after he turned off the iPhone.\n\n**Andrew Morgan**: Yeah, that's the thing going on.\n\n**Richard**: Yeah, that was also my question.\n\n**Andrew Morgan**: You could implement that ping from the app.
**Andrew Morgan**: So, even when it's in the background, you can have it wake up every five minutes and call the Realm function, and the Realm function just updates the last-seen-at value.\n\n**Richard**: Excellent. Well, that's what we're doing now; we're doing this weird handshake. Yeah, but this is a great, great demo. It's a lot more compelling than a task list. I think this should be your flagship demo, not the tasks app. That's what I was hoping.\n\n**Andrew Morgan**: Yeah. The thing I wrote before this was a task list, and I think the task list is a good hello world. But once you've done the hello world, you need to figure out how you do the tougher stuff.\n\n**Richard**: Great. Yeah. About five months ago, I ended up writing a paper on Medium about how to do a simple Realm chat. I called it Simple Realm Chat; it was just one chat thread where you could log in and everybody could chat on the same thread. But I was amazed that, and this was about six months ago, you could write a chat app on Realm which was no more than 150 lines of code, basically. Try and do that in anything like XAMPP; you'd be 5,000 lines of code before you got anything displayed. So Realm is really powerful that way. It's amazing: you're sitting on the Rosetta Stone for communication and collaborative apps. I think this is one of the most seminal technologies in the world for that right now.\n\n**Shane McAllister**: Thank you, Richard. We appreciate that. That's very-\n\n**Richard**: You're commodifying it. I mean, you're doing for collaboration what Windows did for desktop programming, like, 20 years ago, but you've really solved that problem. Anyway, that's my two cents. I don't have any more questions.\n\n**Shane McAllister**: Perfect. Thank you. No, thank you for your contribution. And Kurt, you had a couple of questions, so I've opened you up to come on and ask them yourself here as well. Hey Kurt, how are you?\n\n**Kurt**: Hey, I'm good. Can you hear me?\n\n**Shane McAllister**: We can indeed, loud and clear.\n\n**Kurt**: All right. Yeah. So I've been digging into this stuff since Jason released this new Realm Cocoa merge that happened on Monday, 10.6 I think it is. So this .environment Realm: you're basically saying, with the ChatBubbles thing, inside this view we're going to need this partition, so we're going to pass that in as .environment. And I'm wondering, and part of my misunderstanding I think is because I came from the old Realm and I'm trying to make that work here. So it opens that, and you go into this conversation that has these ChatBubbles with this environment. And then when you leave, does that close it? Do you have to open and close things, or is everything handled inside that .environment?\n\n**Andrew Morgan**: Everything should be handled in there, in terms of closing. So, for the top-level view that's had that environment passed in, I think when that view is closed, then the Realm should close itself.\n\n**Kurt**: So, when you go back up and you're no longer accessing the ChatBubblesView that has the .environment appended to it, it's just going to close.\n\n**Andrew Morgan**: Yeah. So let me switch to share screen again. So, for example, here, when I open up this chat room, it's passed in the configuration for the ChatMessages Realm.\n\n**Kurt**: Right. Because it's got the conversation id, showing the conversation equals that. And so, yeah.\n\n**Andrew Morgan**: Yeah.
**Andrew Morgan**: So, I've just opened a Realm for that particular partition, and when I go back, that Realm-\n\n**Kurt**: As soon as you hit chats, just the fact that it's not in the view anymore, it's going to go away.\n\n**Andrew Morgan**: Yeah, exactly. And then I open another chat room and it opens another Realm for a different partition.\n\n**Kurt**: That's a lot of boilerplate code that's gone, plus the observing, man, that's really good. Okay. And then my only other question was, because I've gone over this quite a few times, you answered one of my questions on the forum with a link to this, so I've been going through it. So are you going to update the... You've been updating the code to go with this new version, so now you're going to go back and update the blog post to show all that stuff?\n\n**Andrew Morgan**: Yeah. So, the current plan is to write a new blog post that explains what it takes to take advantage of the new features that were added on Monday. Because the other stuff still works; there's nothing wrong with the other stuff. And if, for example, you were using UIKit rather than Swift UI, it is probably more useful than the current version of the app. We may change our mind at some point, but the current thinking is, let's have a new post that explains how to go from the old world to the new world.\n\n**Kurt**: Okay. Great. Well, looking forward to it.\n\n**Shane McAllister**: Super, Kurt, thanks so much for jumping in on that as well. We do appreciate it. I don't think I've missed any questions in the general chat; please shout or drop them in there if I have. We really do appreciate everybody's time. I know we're coming up on time here now, and the key thing for me to point out is that this is going to be regular. We want to try and connect with our developer community as much as possible, and this is a very simple and easy way to get that set up: part Q&A, then jumping back in to show demos and how we're doing things, and back out again, et cetera. So this has been very interactive, and we do appreciate that. I think the key thing for us is that you join, and you probably already have because you're here, the Realm global community, but please share that with any other developers and any other friends you have who are looking to join and know what we're doing in Realm.\n\n**Shane McAllister**: There's our Twitter handle, @realm, and we're answering a lot of questions in our community forums as well. So post any technical questions that you might have in there; both the advocacy team and, more importantly, the Realm engineering team man those forums quite regularly. So there's plenty to go on there. And thank you so much, Andrew, you've just put up the slide I was trying to get to, with the next events. Coming up, we have Nicola talking about Realm .NET for Xamarin, best practices and roadmap, and that's next week, so we really are trying to do this quite regularly. And then in March, we've got Jason back again, talking about Realm Swift UI, once again, on property wrappers and the MVI architecture there as well. And you have a second slide, Andrew, with the next ones too.\n\n**Shane McAllister**: Moving further into March, there's another Android talk, Kotlin multi-platform for modern mobile apps, on the 24th, and then moving into April, but we will probably intersperse these with others.
So just sign up for Realm global community on live.mongodb.com, and you will get emails as soon as we add any of these new media events. Above all, I firstly, I'd like to say, thank you for Andrew for all his hard work and most importantly, then thank you to all of you for your attendance. And don't forget to fill in the swag form. We will get some swag out to you shortly, obviously shipping during COVID, et cetera, takes a little longer. So please be patient with us if you can, as well too. So, thank you everybody. We very much appreciate it. Thank you, Andrew, and look out for more meetups and events in the global Realm community coming up.\n\n**Andrew Morgan**: Thanks everyone.\n\n**Shane McAllister**: Take care everyone. Thank you. Bye-bye.", "format": "md", "metadata": {"tags": ["Realm", "Swift"], "pageDescription": "Missed Realm Sync in use \u2014 building and architecting a Mobile Chat App meetup event? Don't worry, you can catch up here.", "contentType": "Tutorial"}, "title": "Realm Sync in Use \u2014 Building and Architecting a Mobile Chat App Meetup", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/java-change-streams", "action": "created", "body": "# Java - Change Streams\n\n## Updates\n\nThe MongoDB Java quickstart repository is available on GitHub.\n\n### February 28th, 2024\n\n- Update to Java 21\n- Update Java Driver to 5.0.0\n- Update `logback-classic` to 1.2.13\n\n### November 14th, 2023\n\n- Update to Java 17\n- Update Java Driver to 4.11.1\n- Update mongodb-crypt to 1.8.0\n\n### March 25th, 2021\n\n- Update Java Driver to 4.2.2.\n- Added Client Side Field Level Encryption example.\n\n### October 21st, 2020\n\n- Update Java Driver to 4.1.1.\n- The Java Driver logging is now enabled via the popular SLF4J API, so I added logback in the `pom.xml` and a configuration file `logback.xml`.\n\n## Introduction\n\n \n\nChange Streams were introduced in MongoDB 3.6. They allow applications to access real-time data changes without the complexity and risk of tailing the oplog.\n\nApplications can use change streams to subscribe to all data changes on a single collection, a database, or an entire deployment, and immediately react to them. Because change streams use the aggregation framework, an application can also filter for specific changes or transform the notifications at will.\n\nIn this blog post, as promised in the first blog post of this series, I will show you how to leverage MongoDB Change Streams using Java.\n\n## Getting Set Up\n\nI will use the same repository as usual in this series. If you don't have a copy of it yet, you can clone it or just update it if you already have it:\n\n``` sh\ngit clone https://github.com/mongodb-developer/java-quick-start\n```\n\n>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.\n\n## Change Streams\n\nIn this blog post, I will be working on the file called `ChangeStreams.java`, but Change Streams are **super** easy to work with.\n\nI will show you 5 different examples to showcase some features of the Change Streams. For the sake of simplicity, I will only show you the pieces of code related to the Change Streams directly. 
You can find the entire code sample at the bottom of this blog post or in the GitHub repository.\n\nFor each example, you will need to start two Java programs in the correct order if you want to reproduce my examples.\n\n- The first program is always the one that contains the Change Streams code.\n- The second one will be one of the Java programs we already used in this Java blog post series. You can find them in the GitHub repository. They will generate MongoDB operations that we will observe in the Change Streams output.\n\n### A simple Change Stream without filters\n\nLet's start with the simplest Change Stream we can make:\n\n``` java\nMongoCollection<Grade> grades = db.getCollection(\"grades\", Grade.class);\nChangeStreamIterable<Grade> changeStream = grades.watch();\nchangeStream.forEach((Consumer<ChangeStreamDocument<Grade>>) System.out::println);\n```\n\nAs you can see, all we need is `myCollection.watch()`! That's it.\n\nThis returns a `ChangeStreamIterable` which, as indicated by its name, can be iterated to return our change events. Here, I'm iterating over my Change Stream to print my change event documents in the Java standard output.\n\nI can also simplify this code like this:\n\n``` java\ngrades.watch().forEach(printEvent());\n\nprivate static Consumer<ChangeStreamDocument<Grade>> printEvent() {\n    return System.out::println;\n}\n```\n\nI will reuse this functional interface in my following examples to ease the reading.\n\nTo run this example:\n\n- Uncomment only the example 1 from the `ChangeStreams.java` file and start it in your IDE or a dedicated console using Maven in the root of your project.\n\n``` bash\nmvn compile exec:java -Dexec.mainClass=\"com.mongodb.quickstart.ChangeStreams\" -Dmongodb.uri=\"mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority\"\n```\n\n- Start `MappingPOJO.java` in another console or in your IDE.\n\n``` bash\nmvn compile exec:java -Dexec.mainClass=\"com.mongodb.quickstart.MappingPOJO\" -Dmongodb.uri=\"mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority\"\n```\n\nIn MappingPOJO, we are doing 4 MongoDB operations:\n\n- I'm creating a new `Grade` document with the `insertOne()` method,\n- I'm searching for this `Grade` document using the `find()` method,\n- I'm entirely replacing this `Grade` using the `findOneAndReplace()` method,\n- and finally, I'm deleting this `Grade` using the `deleteOne()` method.\n\nThis is confirmed in the standard output from `MappingPOJO`:\n\n``` javascript\nGrade inserted.\nGrade found: Grade{id=5e2b4a28c9e9d55e3d7dbacf, student_id=10003.0, class_id=10.0, scores=[Score{type='homework', score=50.0}]}\nGrade replaced: Grade{id=5e2b4a28c9e9d55e3d7dbacf, student_id=10003.0, class_id=10.0, scores=[Score{type='homework', score=50.0}, Score{type='exam', score=42.0}]}\nGrade deleted: AcknowledgedDeleteResult{deletedCount=1}\n```\n\nLet's check what we have in the standard output from `ChangeStreams.java` (prettified):\n\n``` javascript\nChangeStreamDocument{\n operationType=OperationType{ value='insert' },\n resumeToken={ \"_data\":\"825E2F3E40000000012B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E2F3E400C47CF19D59361620004\" },\n namespace=sample_training.grades,\n destinationNamespace=null,\n fullDocument=Grade{\n id=5e2f3e400c47cf19d5936162,\n student_id=10003.0,\n class_id=10.0,\n scores=[ Score { type='homework', score=50.0 } ]\n },\n documentKey={ \"_id\":{ \"$oid\":\"5e2f3e400c47cf19d5936162\" } },\n clusterTime=Timestamp{\n value=6786711608069455873,\n seconds=1580154432,\n inc=1\n },\n updateDescription=null,
txnNumber=null,\n lsid=null\n}\nChangeStreamDocument{ operationType=OperationType{ value= 'replace' },\n resumeToken={ \"_data\":\"825E2F3E40000000032B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E2F3E400C47CF19D59361620004\" },\n namespace=sample_training.grades,\n destinationNamespace=null,\n fullDocument=Grade{\n id=5e2f3e400c47cf19d5936162,\n student_id=10003.0,\n class_id=10.0,\n scores=[ Score{ type='homework', score=50.0 }, Score{ type='exam', score=42.0 } ]\n },\n documentKey={ \"_id\":{ \"$oid\":\"5e2f3e400c47cf19d5936162\" } },\n clusterTime=Timestamp{\n value=6786711608069455875,\n seconds=1580154432,\n inc=3\n },\n updateDescription=null,\n txnNumber=null,\n lsid=null\n}\nChangeStreamDocument{\n operationType=OperationType{ value='delete' },\n resumeToken={ \"_data\":\"825E2F3E40000000042B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E2F3E400C47CF19D59361620004\" },\n namespace=sample_training.grades,\n destinationNamespace=null,\n fullDocument=null,\n documentKey={ \"_id\":{ \"$oid\":\"5e2f3e400c47cf19d5936162\" } },\n clusterTime=Timestamp{\n value=6786711608069455876,\n seconds=1580154432,\n inc=4\n },\n updateDescription=null,\n txnNumber=null,\n lsid=null\n}\n```\n\nAs you can see, only 3 operations appear in the Change Stream:\n\n- insert,\n- replace,\n- delete.\n\nIt was expected because the `find()` operation is just a reading document from MongoDB. It's not changing anything thus not generating an event in the Change Stream.\n\nNow that we are done with the basic example, let's explore some features of the Change Streams.\n\nTerminate the Change Stream program we started earlier and let's move on.\n\n### A simple Change Stream filtering on the operation type\n\nNow let's do the same thing but let's imagine that we are only interested in insert and delete operations.\n\n``` java\nList pipeline = List.of(match(in(\"operationType\", List.of(\"insert\", \"delete\"))));\ngrades.watch(pipeline).forEach(printEvent());\n```\n\nAs you can see here, I'm using the aggregation pipeline feature of Change Streams to filter down the change events I want to process.\n\nUncomment the example 2 in `ChangeStreams.java` and execute the program followed by `MappingPOJO.java`, just like we did earlier.\n\nHere are the change events I'm receiving.\n\n``` json\nChangeStreamDocument {operationType=OperationType {value= 'insert'},\n resumeToken= {\"_data\": \"825E2F4983000000012B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E2F4983CC1D2842BFF555640004\"},\n namespace=sample_training.grades,\n destinationNamespace=null,\n fullDocument=Grade\n {\n id=5e2f4983cc1d2842bff55564,\n student_id=10003.0,\n class_id=10.0,\n scores= [ Score {type= 'homework', score=50.0}]\n },\n documentKey= {\"_id\": {\"$oid\": \"5e2f4983cc1d2842bff55564\" }},\n clusterTime=Timestamp {value=6786723990460170241, seconds=1580157315, inc=1 },\n updateDescription=null,\n txnNumber=null,\n lsid=null\n}\n\nChangeStreamDocument { operationType=OperationType {value= 'delete'},\n resumeToken= {\"_data\": \"825E2F4983000000042B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E2F4983CC1D2842BFF555640004\"},\n namespace=sample_training.grades,\n destinationNamespace=null,\n fullDocument=null,\n documentKey= {\"_id\": {\"$oid\": \"5e2f4983cc1d2842bff55564\"}},\n clusterTime=Timestamp {value=6786723990460170244, seconds=1580157315, inc=4},\n updateDescription=null,\n txnNumber=null,\n lsid=null\n }\n]\n```\n\nThis time, I'm only getting 2 events `insert` and 
`delete`. The `replace` event has been filtered out compared to the first example.\n\n### Change Stream default behavior with update operations\n\nSame as earlier, I'm filtering my change stream to keep only the update operations this time.\n\n``` java\nList pipeline = List.of(match(eq(\"operationType\", \"update\")));\ngrades.watch(pipeline).forEach(printEvent());\n```\n\nThis time, follow these steps.\n\n- uncomment the example 3 in `ChangeStreams.java`,\n- if you never ran `Create.java`, run it. We are going to use these new documents in the next step.\n- start `Update.java` in another console.\n\nIn your change stream console, you should see 13 update events. Here is the first one:\n\n``` json\nChangeStreamDocument {operationType=OperationType {value= 'update'},\n resumeToken= {\"_data\": \"825E2FB83E000000012B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCCE74AA51A0486763FE0004\"},\n namespace=sample_training.grades,\n destinationNamespace=null,\n fullDocument=null,\n documentKey= {\"_id\": {\"$oid\": \"5e27bcce74aa51a0486763fe\"}},\n clusterTime=Timestamp {value=6786845739898109953, seconds=1580185662, inc=1},\n updateDescription=UpdateDescription {removedFields= [], updatedFields= {\"comments.10\": \"You will learn a lot if you read the MongoDB blog!\"}},\n txnNumber=null,\n lsid=null\n}\n```\n\nAs you can see, we are retrieving our update operation in the `updateDescription` field, but we are only getting the difference with the previous version of this document.\n\nThe `fullDocument` field is `null` because, by default, MongoDB only sends the difference to avoid overloading the change stream with potentially useless information.\n\nLet's see how we can change this behavior in the next example.\n\n### Change Stream with \"Update Lookup\"\n\nFor this part, uncomment the example 4 from `ChangeStreams.java` and execute the programs as above.\n\n``` java\nList pipeline = List.of(match(eq(\"operationType\", \"update\")));\ngrades.watch(pipeline).fullDocument(UPDATE_LOOKUP).forEach(printEvent());\n```\n\nI added the option `UPDATE_LOOKUP` this time, so we can also retrieve the entire document during an update operation.\n\nLet's see again the first update in my change stream:\n\n``` json\nChangeStreamDocument {operationType=OperationType {value= 'update'},\n resumeToken= {\"_data\": \"825E2FBBC1000000012B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCCE74AA51A0486763FE0004\"},\n namespace=sample_training.grades,\n destinationNamespace=null,\n fullDocument=Grade\n {\n id=5e27bcce74aa51a0486763fe,\n student_id=10002.0,\n class_id=10.0,\n scores=null\n },\n documentKey= {\"_id\": {\"$oid\": \"5e27bcce74aa51a0486763fe\" }},\n clusterTime=Timestamp {value=6786849601073709057, seconds=1580186561, inc=1 },\n updateDescription=UpdateDescription {removedFields= [], updatedFields= {\"comments.11\": \"You will learn a lot if you read the MongoDB blog!\"}},\n txnNumber=null,\n lsid=null\n}\n```\n\n>Note: The `Update.java` program updates a made-up field \"comments\" that doesn't exist in my POJO `Grade` which represents the original schema for this collection. 
Thus, the field doesn't appear in the output as it's not mapped.\n\nIf I want to see this `comments` field, I can use a `MongoCollection` not mapped automatically to my `Grade.java` POJO.\n\n``` java\nMongoCollection grades = db.getCollection(\"grades\");\nList pipeline = List.of(match(eq(\"operationType\", \"update\")));\ngrades.watch(pipeline).fullDocument(UPDATE_LOOKUP).forEach((Consumer>) System.out::println);\n```\n\nThen this is what I get in my change stream:\n\n``` json\nChangeStreamDocument {operationType=OperationType {value= 'update'},\n resumeToken= {\"_data\": \"825E2FBD89000000012B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCCE74AA51A0486763FE0004\"},\n namespace=sample_training.grades,\n destinationNamespace=null,\n fullDocument=Document {\n {\n _id=5e27bcce74aa51a0486763fe,\n class_id=10.0,\n student_id=10002.0,\n comments= [ You will learn a lot if you read the MongoDB blog!, [...], You will learn a lot if you read the MongoDB blog!]\n }\n },\n documentKey= {\"_id\": {\"$oid\": \"5e27bcce74aa51a0486763fe\"}},\n clusterTime=Timestamp {value=6786851559578796033, seconds=1580187017, inc=1},\n updateDescription=UpdateDescription {removedFields= [], updatedFields= {\"comments.13\": \"You will learn a lot if you read the MongoDB blog!\"}},\n txnNumber=null,\n lsid=null\n}\n```\n\nI have shortened the `comments` field to keep it readable but it contains 14 times the same comment in my case.\n\nThe full document we are retrieving here during our update operation is the document **after** the update has occurred. Read more about this in [our documentation.\n\n### Change Streams are resumable\n\nIn this final example 5, I have simulated an error and I'm restarting my Change Stream from a `resumeToken` I got from a previous operation in my Change Stream.\n\n>It's important to note that a change stream will resume itself automatically in the face of an \"incident\". Generally, the only reason that an application needs to restart the change stream manually from a resume token is if there is an incident in the application itself rather than the change stream (e.g. an operator has decided that the application needs to be restarted).\n\n``` java\nprivate static void exampleWithResumeToken(MongoCollection grades) {\n List pipeline = List.of(match(eq(\"operationType\", \"update\")));\n ChangeStreamIterable changeStream = grades.watch(pipeline);\n MongoChangeStreamCursor> cursor = changeStream.cursor();\n System.out.println(\"==> Going through the stream a first time & record a resumeToken\");\n int indexOfOperationToRestartFrom = 5;\n int indexOfIncident = 8;\n int counter = 0;\n BsonDocument resumeToken = null;\n while (cursor.hasNext() && counter != indexOfIncident) {\n ChangeStreamDocument event = cursor.next();\n if (indexOfOperationToRestartFrom == counter) {\n resumeToken = event.getResumeToken();\n }\n System.out.println(event);\n counter++;\n }\n System.out.println(\"==> Let's imagine something wrong happened and I need to restart my Change Stream.\");\n System.out.println(\"==> Starting from resumeToken=\" + resumeToken);\n assert resumeToken != null;\n grades.watch(pipeline).resumeAfter(resumeToken).forEach(printEvent());\n}\n```\n\nFor this final example, the same as earlier. 
Uncomment the part 5 (which is just calling the method above) and start `ChangeStreams.java` then `Update.java`.\n\nThis is the output you should get:\n\n``` json\n==> Going through the stream a first time & record a resumeToken\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC276000000012B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCCE74AA51A0486763FE0004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcce74aa51a0486763fe\"}}, clusterTime=Timestamp{value=6786856975532556289, seconds=1580188278, inc=1}, updateDescription=UpdateDescription{removedFields=], updatedFields={\"comments.14\": \"You will learn a lot if you read the MongoDB blog!\"}}, txnNumber=null, lsid=null}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC276000000022B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBA0004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcc9f94b5117d894cbba\"}}, clusterTime=Timestamp{value=6786856975532556290, seconds=1580188278, inc=2}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"comments.15\": \"You will learn a lot if you read the MongoDB blog!\"}}, txnNumber=null, lsid=null}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC276000000032B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBB0004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcc9f94b5117d894cbbb\"}}, clusterTime=Timestamp{value=6786856975532556291, seconds=1580188278, inc=3}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"comments.14\": \"You will learn a lot if you read the MongoDB blog!\"}}, txnNumber=null, lsid=null}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC276000000042B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBC0004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcc9f94b5117d894cbbc\"}}, clusterTime=Timestamp{value=6786856975532556292, seconds=1580188278, inc=4}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"comments.14\": \"You will learn a lot if you read the MongoDB blog!\"}}, txnNumber=null, lsid=null}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC276000000052B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBD0004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcc9f94b5117d894cbbd\"}}, clusterTime=Timestamp{value=6786856975532556293, seconds=1580188278, inc=5}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"comments.14\": \"You will learn a lot if you read the MongoDB blog!\"}}, txnNumber=null, lsid=null}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC276000000062B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBE0004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, 
documentKey={\"_id\": {\"$oid\": \"5e27bcc9f94b5117d894cbbe\"}}, clusterTime=Timestamp{value=6786856975532556294, seconds=1580188278, inc=6}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"comments.14\": \"You will learn a lot if you read the MongoDB blog!\"}}, txnNumber=null, lsid=null}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC276000000072B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBF0004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcc9f94b5117d894cbbf\"}}, clusterTime=Timestamp{value=6786856975532556295, seconds=1580188278, inc=7}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"comments.14\": \"You will learn a lot if you read the MongoDB blog!\"}}, txnNumber=null, lsid=null}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC276000000082B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBC00004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcc9f94b5117d894cbc0\"}}, clusterTime=Timestamp{value=6786856975532556296, seconds=1580188278, inc=8}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"comments.14\": \"You will learn a lot if you read the MongoDB blog!\"}}, txnNumber=null, lsid=null}\n==> Let's imagine something wrong happened and I need to restart my Change Stream.\n==> Starting from resumeToken={\"_data\": \"825E2FC276000000062B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBE0004\"}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC276000000072B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBF0004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcc9f94b5117d894cbbf\"}}, clusterTime=Timestamp{value=6786856975532556295, seconds=1580188278, inc=7}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"comments.14\": \"You will learn a lot if you read the MongoDB blog!\"}}, txnNumber=null, lsid=null}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC276000000082B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBC00004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcc9f94b5117d894cbc0\"}}, clusterTime=Timestamp{value=6786856975532556296, seconds=1580188278, inc=8}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"comments.14\": \"You will learn a lot if you read the MongoDB blog!\"}}, txnNumber=null, lsid=null}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC276000000092B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBC10004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcc9f94b5117d894cbc1\"}}, clusterTime=Timestamp{value=6786856975532556297, seconds=1580188278, inc=9}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"comments.14\": \"You will learn a lot if you read the MongoDB blog!\"}}, 
txnNumber=null, lsid=null}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC2760000000A2B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBC20004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcc9f94b5117d894cbc2\"}}, clusterTime=Timestamp{value=6786856975532556298, seconds=1580188278, inc=10}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"comments.14\": \"You will learn a lot if you read the MongoDB blog!\"}}, txnNumber=null, lsid=null}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC2760000000B2B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBC30004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcc9f94b5117d894cbc3\"}}, clusterTime=Timestamp{value=6786856975532556299, seconds=1580188278, inc=11}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"comments.14\": \"You will learn a lot if you read the MongoDB blog!\"}}, txnNumber=null, lsid=null}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC2760000000D2B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC8F94B5117D894CBB90004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcc8f94b5117d894cbb9\"}}, clusterTime=Timestamp{value=6786856975532556301, seconds=1580188278, inc=13}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"scores.0.score\": 904745.0267635228, \"x\": 150}}, txnNumber=null, lsid=null}\nChangeStreamDocument{ operationType=OperationType{value='update'}, resumeToken={\"_data\": \"825E2FC2760000000F2B022C0100296E5A100496C525567BB74BD28BFD504F987082C046645F696400645E27BCC9F94B5117D894CBBA0004\"}, namespace=sample_training.grades, destinationNamespace=null, fullDocument=null, documentKey={\"_id\": {\"$oid\": \"5e27bcc9f94b5117d894cbba\"}}, clusterTime=Timestamp{value=6786856975532556303, seconds=1580188278, inc=15}, updateDescription=UpdateDescription{removedFields=[], updatedFields={\"scores.0.score\": 2126144.0353088505, \"x\": 150}}, txnNumber=null, lsid=null}\n```\n\nAs you can see here, I was able to stop reading my Change Stream and, from the `resumeToken` I collected earlier, I can start a new Change Stream from this point in time.\n\n## Final Code\n\n`ChangeStreams.java` ([code):\n\n``` java\npackage com.mongodb.quickstart;\n\nimport com.mongodb.ConnectionString;\nimport com.mongodb.MongoClientSettings;\nimport com.mongodb.client.*;\nimport com.mongodb.client.model.changestream.ChangeStreamDocument;\nimport com.mongodb.quickstart.models.Grade;\nimport org.bson.BsonDocument;\nimport org.bson.codecs.configuration.CodecRegistry;\nimport org.bson.codecs.pojo.PojoCodecProvider;\nimport org.bson.conversions.Bson;\n\nimport java.util.List;\nimport java.util.function.Consumer;\n\nimport static com.mongodb.client.model.Aggregates.match;\nimport static com.mongodb.client.model.Filters.eq;\nimport static com.mongodb.client.model.Filters.in;\nimport static com.mongodb.client.model.changestream.FullDocument.UPDATE_LOOKUP;\nimport static org.bson.codecs.configuration.CodecRegistries.fromProviders;\nimport static org.bson.codecs.configuration.CodecRegistries.fromRegistries;\n\npublic class 
ChangeStreams {\n\n    public static void main(String[] args) {\n        ConnectionString connectionString = new ConnectionString(System.getProperty(\"mongodb.uri\"));\n        CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());\n        CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);\n        MongoClientSettings clientSettings = MongoClientSettings.builder()\n                                                                .applyConnectionString(connectionString)\n                                                                .codecRegistry(codecRegistry)\n                                                                .build();\n\n        try (MongoClient mongoClient = MongoClients.create(clientSettings)) {\n            MongoDatabase db = mongoClient.getDatabase(\"sample_training\");\n            MongoCollection<Grade> grades = db.getCollection(\"grades\", Grade.class);\n            List<Bson> pipeline;\n\n            // Only uncomment one example at a time. Follow instructions for each individually then kill all remaining processes.\n\n            /** => Example 1: print all the write operations.\n             * => Start \"ChangeStreams\" then \"MappingPOJOs\" to see some change events.\n             */\n            grades.watch().forEach(printEvent());\n\n            /** => Example 2: print only insert and delete operations.\n             * => Start \"ChangeStreams\" then \"MappingPOJOs\" to see some change events.\n             */\n//            pipeline = List.of(match(in(\"operationType\", List.of(\"insert\", \"delete\"))));\n//            grades.watch(pipeline).forEach(printEvent());\n\n            /** => Example 3: print only updates without fullDocument.\n             * => Start \"ChangeStreams\" then \"Update\" to see some change events (start \"Create\" before if not done earlier).\n             */\n//            pipeline = List.of(match(eq(\"operationType\", \"update\")));\n//            grades.watch(pipeline).forEach(printEvent());\n\n            /** => Example 4: print only updates with fullDocument.\n             * => Start \"ChangeStreams\" then \"Update\" to see some change events.\n             */\n//            pipeline = List.of(match(eq(\"operationType\", \"update\")));\n//            grades.watch(pipeline).fullDocument(UPDATE_LOOKUP).forEach(printEvent());\n\n            /**\n             * => Example 5: iterating using a cursor and a while loop + remembering a resumeToken then restart the Change Streams.\n             * => Start \"ChangeStreams\" then \"Update\" to see some change events.\n             */\n//            exampleWithResumeToken(grades);\n        }\n    }\n\n    private static void exampleWithResumeToken(MongoCollection<Grade> grades) {\n        List<Bson> pipeline = List.of(match(eq(\"operationType\", \"update\")));\n        ChangeStreamIterable<Grade> changeStream = grades.watch(pipeline);\n        MongoChangeStreamCursor<ChangeStreamDocument<Grade>> cursor = changeStream.cursor();\n        System.out.println(\"==> Going through the stream a first time & record a resumeToken\");\n        int indexOfOperationToRestartFrom = 5;\n        int indexOfIncident = 8;\n        int counter = 0;\n        BsonDocument resumeToken = null;\n        while (cursor.hasNext() && counter != indexOfIncident) {\n            ChangeStreamDocument<Grade> event = cursor.next();\n            if (indexOfOperationToRestartFrom == counter) {\n                resumeToken = event.getResumeToken();\n            }\n            System.out.println(event);\n            counter++;\n        }\n        System.out.println(\"==> Let's imagine something wrong happened and I need to restart my Change Stream.\");\n        System.out.println(\"==> Starting from resumeToken=\" + resumeToken);\n        assert resumeToken != null;\n        grades.watch(pipeline).resumeAfter(resumeToken).forEach(printEvent());\n    }\n\n    private static Consumer<ChangeStreamDocument<Grade>> printEvent() {\n        return System.out::println;\n    }\n}\n```\n\n>Remember to uncomment only one Change Stream example at a time.\n\n## Wrapping Up\n\nChange Streams are very easy to use and set up in MongoDB.
They are the key to any real-time processing system.\n\nThe only remaining problem here is how to get this in production correctly. Change Streams are basically an infinite loop, processing an infinite stream of events. Multiprocessing is, of course, a must-have for this kind of setup, especially if your processing time is greater than the time separating 2 events.\n\nScaling up correctly a Change Stream data processing pipeline can be tricky. That's why you can implement this easily using [MongoDB Triggers in MongoDB Realm.\n\nYou can check out my MongoDB Realm sample application if you want to see a real example with several Change Streams in action.\n\n>If you want to learn more and deepen your knowledge faster, I recommend you check out the M220J: MongoDB for Java Developers training available for free on MongoDB University.\n\nIn the next blog post, I will show you multi-document ACID transactions in Java.\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB"], "pageDescription": "Learn how to use the Change Streams using the MongoDB Java Driver.", "contentType": "Quickstart"}, "title": "Java - Change Streams", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/psc-interconnect-and-global-access", "action": "created", "body": "# Introducing PSC Interconnect and Global Access for MongoDB Atlas\n\nIn an era of widespread digitalization, businesses operating in critical sectors such as healthcare, banking and finance, and government face an ever-increasing threat of data breaches and cyber-attacks. Ensuring the security of data is no longer just a matter of compliance but has become a top priority for businesses to safeguard their reputation, customer trust, and financial stability. However, maintaining the privacy and security of sensitive data while still enabling seamless access to services within a virtual private cloud (VPC) is a complex challenge that requires a robust solution. That\u2019s where MongoDB\u2019s Private Service Connect (PSC) on Google Cloud comes in. As a cloud networking solution, it provides secure access to services within a VPC using private IP addresses. PSC is also a powerful tool to protect businesses from the ever-evolving threat landscape of data security. \n\n## What is PSC (Private Service Connect)?\n\nPSC simplifies how services are being securely and privately consumed. It allows easy implementation of private endpoints for the service consumers to connect privately to service producers across organizations and eliminates the need for virtual private cloud peering. The effort needed to set up private connectivity between MongoDB and Google consumer project is reduced with the PSC.\n\nMongoDB announced the support for Google Cloud Private Service Connect (PSC) in November 2021. PSC was added as a new option to access MongoDB securely from Google Cloud without exposing the customer traffic to the public internet. With PSC, customers will be able to achieve one-way communication with MongoDB. In this article, we are going to introduce the new features of PSC and MongoDB integration.\n\n## PSC Interconnect support\n\nConnecting MongoDB from the on-prem machines is made easy using PSC Interconnect support. PSC Interconnect allows traffic from on-prem devices to reach PSC endpoints in the same region as the Interconnect. This is also a transparent update with no API changes.\n\nThere are no additional actions required by the customer to start using their Interconnect with PSC. 
Once Interconnect support has been rolled out to the customer project, then traffic from the Interconnect will be able to reach PSC endpoints and in turn access the data from MongoDB using service attachments.\n\n## Google Cloud multi-region support\n\nPrivate Service Connect now provides multi-region support for MongoDB Atlas clusters, enabling customers to connect to MongoDB instances in different regions securely. With this feature, customers can ensure high availability even in case of a regional failover. To achieve this, customers need to set up the service attachments in all the regions that the cluster will have its nodes on. Each of these service attachments are in turn connected to Google Cloud service endpoints.\n\n## MongoDB multi-cloud support\n\nCustomers who have their deployment on multiple regions spread across multiple clouds can now utilize MongoDB PSC to connect to the Google Cloud nodes in their deployment. The additional requirement is to set up the private link for the other nodes to make sure that the connection could be made to the other nodes from their respective cloud targets. \n\n## Wrap-up\n\nIn conclusion, Private Service Connect has come a long way from its initial release. Now, PSC on MongoDB supports connection from on-prem using Interconnect and also connects to multiple regions across MongoDB clusters spread across Google Cloud regions or multi-cloud clusters securely using Global access.\n\n1. Learn how to set up PSC multi region for MongoDB Atlas with codelabs tutorials.\n2. You can subscribe to MongoDB Atlas using Google Cloud Marketplace.\n3. You can sign up for MongoDB using the registration page.\n4. Learn more about Private Service Connect.\n5. Read the PSC announcement for MongoDB.", "format": "md", "metadata": {"tags": ["Atlas", "Google Cloud"], "pageDescription": "PSC is a cloud networking solution that provides secure access to services within a VPC. Read about the newly announced support for PSC Interconnect and Global access for MongoDB.", "contentType": "Article"}, "title": "Introducing PSC Interconnect and Global Access for MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/nairobi-stock-exchange-web-scrapper", "action": "created", "body": "# Nairobi Stock Exchange Web Scraper\n\nLooking to build a web scraper using Python and MongoDB for the Nairobi Stock Exchange? Our comprehensive tutorial provides a step-by-step guide on how to set up the development environment, create a Scrapy spider, parse the website, and store the data in MongoDB. \n\nWe also cover best practices for working with MongoDB and tips for troubleshooting common issues. Plus, get a sneak peek at using MongoDB Atlas Charts for data visualization. Finally, enable text notifications using Africas Talking API (feel free to switch to your preferred provider). Get all the code on GitHub and streamline your workflow today!\n\n## Prerequisites\n\nThe prerequisites below are verified to work on Linux. Implementation on other operating systems may differ. Kindly check installation instructions.\n\n* Python 3.7 or higher and pip installed.\n* A MongoDB Atlas account.\n* Git installed.\n* GitHub account.\n* Code editor of your choice. 
I will be using Visual Studio Code.\n* An Africas Talking account, if you plan to implement text notifications.\n\n## Table of contents\n\n- What is web scraping?\n- Project layout\n- Project setup\n- Starting a Scrapy project\n- Creating a spider\n- Running the scraper\n- Enabling text alerts\n- Data in MongoDB Atlas\n- Charts in MongoDB Atlas\n- CI/CD with GitHub Actions\n- Conclusion\n\n## What is web scraping?\n\nWeb scraping is the process of extracting data from websites. It\u2019s a form of data mining, which automates the retrieval of data from the web. Web scraping is a technique to automatically access and extract large amounts of information from a website or platform, which can save a huge amount of time and effort. You can save this data locally on your computer or to a database in the cloud.\n\n### What is Scrapy?\n\nScrapy is a free and open-source web-crawling framework written in Python. It extracts the data you need from websites in a fast and simple yet extensible way. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.\n\n### What is MongoDB Atlas?\n\nMongoDB Atlas is a fully managed cloud database platform that hosts your data on AWS, Google Cloud, or Azure. It\u2019s a fully managed database as a service (DBaaS) that provides a highly available, globally distributed, and scalable database infrastructure. Read our tutorial to get started with a free instance of MongoDB Atlas.\n\nYou can also head to our docs to learn about limiting access to your cluster to specified IP addresses. This step enhances security by following best practices.\n\n## Project layout\n\nBelow is a diagram that provides a high-level overview of the project.\n\nThe diagram above shows how the project runs as well as the overall structure. Let's break it down: \n\n* The Scrapy project (spiders) crawls the data from the afx (data portal for stock data) website. \n* Since Scrapy is a full framework, we use it to extract and clean the data. \n* The data is sent to MongoDB Atlas for storage. \n* From here, we can easily connect it to MongoDB Charts for visualizations.\n* We package our web scraper using Docker for easy deployment to the cloud. \n* The code is hosted on GitHub and we create a CI/CD pipeline using GitHub Actions.\n* Finally, we have a text notification script that runs once the set conditions are met.\n\n## Project setup\n\nLet's set up our project. First, we'll create a new directory for our project. Open your terminal and navigate to the directory where you want to create the project. Then, run the following command to create a new directory and change into it.\n\n```bash\nmkdir nse-stock-scraper && cd nse-stock-scraper\n```\nNext, we'll create a virtual environment for our project. This will help us isolate our project dependencies from the rest of our system. Run the following command to create a virtual environment. We are using the built-in Python module ``venv`` to create the virtual environment. Activate the virtual environment by running the ``activate`` script in the ``bin`` directory.\n\n```bash\npython3 -m venv venv\nsource venv/bin/activate\n```\n\nNow, we'll install the required dependencies. We'll use ``pip`` to install the dependencies. Run the following command to install the required dependencies:\n\n```bash\npip install scrapy pymongo[srv] dnspython python-dotenv beautifulsoup4\npip freeze > requirements.txt\n```\n\n## Starting a Scrapy project\n\nScrapy is a full framework.
Thus, it has an opinionated view on the structure of its projects. It comes with a CLI tool to get started quickly. Now, we'll start a new Scrapy project. Run the following command.\n\n```bash\nscrapy startproject nse_scraper .\n```\n\nThis will create a new directory with the name `nse_scraper` and a few files. The ``nse_scraper`` directory is the actual Python package for our project. The files are as follows:\n\n* ``items.py`` \u2014 This file contains the definition of the items that we will be scraping.\n* ``middlewares.py`` \u2014 This file contains the definition of the middlewares that we will be using.\n* ``pipelines.py`` \u2014 This contains the definition of the pipelines that we will be using.\n* ``settings.py`` \u2014 This contains the definition of the settings that we will be using.\n* ``spiders`` \u2014 This directory contains the spiders that we will be using.\n* ``scrapy.cfg`` \u2014 This file contains the configuration of the project.\n\n## Creating a spider\n\nA spider is a class that defines how a certain site will be scraped. It must subclass ``scrapy.Spider`` and define the initial requests to make \u2014 and optionally, how to follow links in the pages and parse the downloaded page content to extract data.\n\nWe'll create a spider to scrape the [afx website. Run the following command to create a spider. Change into the ``nse_scraper`` folder that is inside our root folder. \n\n```bash\ncd nse_scraper\nscrapy genspider afx_scraper afx.kwayisi.org\n```\n\nThis will create a new file ``afx_scraper.py`` in the ``spiders`` directory. Open the file and **replace the contents** with the following code:\n\n```\nfrom scrapy.settings.default_settings import CLOSESPIDER_PAGECOUNT, DEPTH_LIMIT\nfrom scrapy.spiders import CrawlSpider, Rule\nfrom bs4 import BeautifulSoup\nfrom scrapy.linkextractors import LinkExtractor\n\nclass AfxScraperSpider(CrawlSpider):\n name = 'afx_scraper'\n allowed_domains = 'afx.kwayisi.org']\n start_urls = ['https://afx.kwayisi.org/nse/']\n user_agent = 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36'\n custom_settings = {\n DEPTH_LIMIT: 1,\n CLOSESPIDER_PAGECOUNT: 1\n }\n\n rules = (\n Rule(LinkExtractor(deny='.html', ), callback='parse_item', follow=False),\n Rule(callback='parse_item'),\n )\n\n def parse_item(self, response, **kwargs):\n print(\"Processing: \" + response.url)\n # Extract data using css selectors\n row = response.css('table tbody tr ')\n # use XPath and regular expressions to extract stock name and price\n raw_ticker_symbol = row.xpath('td[1]').re('[A-Z].*')\n raw_stock_name = row.xpath('td[2]').re('[A-Z].*')\n raw_stock_price = row.xpath('td[4]').re('[0-9].*')\n raw_stock_change = row.xpath('td[5]').re('[0-9].*')\n\n # create a function to remove html tags from the returned list\n def clean_stock_symbol(raw_symbol):\n clean_symbol = BeautifulSoup(raw_symbol, \"lxml\").text\n clean_symbol = clean_symbol.split('>')\n if len(clean_symbol) > 1:\n return clean_symbol[1]\n else:\n return None\n\n def clean_stock_name(raw_name):\n clean_name = BeautifulSoup(raw_name, \"lxml\").text\n clean_name = clean_name.split('>')\n if len(clean_name[0]) > 2:\n return clean_name[0]\n else:\n return None\n\n def clean_stock_price(raw_price):\n clean_price = BeautifulSoup(raw_price, \"lxml\").text\n return clean_price\n\n # Use list comprehension to unpack required values\n stock_name = [clean_stock_name(r_name) for r_name in raw_stock_name]\n stock_price = [clean_stock_price(r_price) for 
r_price in raw_stock_price]\n ticker_symbol = [clean_stock_symbol(r_symbol) for r_symbol in raw_ticker_symbol]\n stock_change = [clean_stock_price(raw_change) for raw_change in raw_stock_change]\n if ticker_symbol is not None:\n cleaned_data = zip(ticker_symbol, stock_name, stock_price)\n for item in cleaned_data:\n scraped_data= {\n 'ticker_symbol': item[0],\n 'stock_name': item[1],\n 'stock_price': item[2],\n 'stock_change': stock_change }\n # yield info to scrapy\n yield scraped_data\n```\n\nLet's break down the code above. First, we import the required modules and classes. In our case, we'll be using _CrawlSpider _and Rule from _scrapy.spiders_ and _LinkExtractor_ from _scrapy.linkextractors_. We'll also be using BeautifulSoup from bs4 to clean the scraped data.\n\nThe `AfxScraperSpider` class inherits from CrawlSpider, which is a subclass of Spider. The Spider class is the core of Scrapy. It defines how a certain site (or a group of sites) will be scraped. It contains an initial list of URLs to download, and rules to follow links in the pages and extract data from them. In this case, we'll be using CrawlSpider to crawl the website and follow links to the next page.\n\nThe name attribute defines the name of the spider. This name must be unique within a project \u2014 that is, you can\u2019t set the same name for different spiders. It will be used to identify the spider when you run it from the command line.\n\nThe allowed_domains attribute is a list of domains that this spider is allowed to crawl. If it isn\u2019t specified, no domain restrictions will be in place. This is useful if you want to restrict the crawling to a particular domain (or subdomain) while scraping multiple domains in the same project. You can also use it to avoid crawling the same domain multiple times when using multiple spiders.\n\nThe start_urls attribute is a list of URLs where the spider will begin to crawl from. When no start_urls are defined, the start URLs are read from the sitemap.xml file (if it exists) of the first domain in the allowed_domains list. If you don\u2019t want to start from a sitemap, you can define an initial URL in this attribute. This attribute is optional and can be omitted.\n\nThe user_agent attribute is used to set the user agent for the spider. This is useful when you want to scrape a website that blocks spiders that don't have a user agent. In this case, we'll be using a user agent for Chrome. We can also set the user agent in the settings.py file. This is key to giving the target website the illusion that we are a real browser.\n\nThe custom_settings attribute is used to set custom settings for the spider. In this case, we'll be setting the _DEPTH_LIMIT _to 1 and_ CLOSESPIDER_PAGECOUNT_ to 1. The DEPTH_LIMIT attribute limits the maximum depth that will be allowed to crawl for any site. Depth refers to the number of page(s) the spider is allowed to crawl. The CLOSESPIDER_PAGECOUNT attribute is used to close the spider after crawling the specified number of pages.\n\nThe rules attribute defines the rules for the spider. We'll be using the Rule class to define the rules for extracting links from a page and processing them with a callback, or following them and scraping them using another spider. \n\nThe Rule class takes a LinkExtractor object as its first argument. The LinkExtractor class is used to extract links from web pages. It can extract links matching specific regular expressions or using specific attributes, such as href or src. 
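\n\nAs a small, self-contained illustration of how a Rule and a LinkExtractor combine (the patterns below are hypothetical and not the ones used in our spider):\n\n```\nfrom scrapy.linkextractors import LinkExtractor\nfrom scrapy.spiders import Rule\n\n# Extract only anchor href links under /nse/, skip anything ending in .html,\n# and hand each matched page to parse_item without crawling any further.\nexample_rule = Rule(\n    LinkExtractor(allow=r'/nse/', deny=r'\\.html$', attrs=('href',)),\n    callback='parse_item',\n    follow=False,\n)\n```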
\n\nThe deny argument prevents the extraction of links that match the specified regular expression, while the callback argument specifies the callback function to be called on the response of each extracted link. \n\nThe follow argument specifies whether the extracted links should be followed or not. In our spider, callback points every matched response at `parse_item`, and **follow** is set so the crawler does not keep following links beyond that.\n\nWe then define a `parse_item` function that takes response as an argument. The `parse_item` function parses the response and extracts the required data. We'll use the `xpath` method to extract the required data. The `xpath` method extracts data using XPath expressions. \n\nWe get xpath expressions by inspecting the target website. Basically, we right-click on the element we want to extract data from and click on `inspect`. This will open the developer tools. We then click on the `copy` button and select `copy xpath`. Paste the xpath expression in the `xpath` method.\n\nThe `re` method extracts data using regular expressions. We then use the `clean_stock_symbol`, `clean_stock_name`, and `clean_stock_price` functions to clean the extracted data. Use the `zip` function to combine the extracted data into a single list. Then, use a `for` loop to iterate through the list and yield the data to Scrapy.\n\nThe clean_stock_symbol, clean_stock_name, and clean_stock_price functions are used to clean the extracted data. The clean_stock_symbol function takes the raw symbol as an argument. The _BeautifulSoup_ class cleans the raw symbol. It then uses the split method to split the cleaned symbol into a list. An if statement checks if the length of the list is greater than 1. If it is, it returns the second item in the list. If it isn't, it returns None. \n\nThe clean_stock_name function takes the raw name as an argument. It uses the BeautifulSoup class to clean the raw name. It then uses the split method to split the cleaned name into a list. An if statement then checks whether the first item in the list is longer than two characters. If it is, it returns that first item. If it isn't, it returns None. The clean_stock_price function takes the raw price as an argument. It then uses the BeautifulSoup class to clean the raw price and return the cleaned price.\n\nThe raw change values are cleaned with the same _clean_stock_price_ helper, which strips the markup and returns the cleaned data.\n\n### Updating the items.py file\n\nInside the root of our project, we have the ``items.py`` file. An item is a container which will be loaded with the scraped data. It works similarly to a dictionary with additional features like declaring its fields and customizing its export. We'll be using the Item class to create our items. The Item class is the base class for all items. It provides the general mechanisms for handling data from scraped pages. It\u2019s an abstract class and cannot be instantiated directly. We'll be using the Field class to create our fields.\n\nAdd the following code to the _nse_scraper/items.py_ file:\n\n```\nfrom scrapy.item import Item, Field\n\nclass NseScraperItem(Item):\n # define the fields for your item here like:\n ticker_symbol = Field()\n stock_name = Field()\n stock_price = Field()\n stock_change = Field()\n```\n\nThe NseScraperItem class creates our item.
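\n\nAs a quick, hypothetical illustration of how these declared fields behave at runtime (the values below are made up):\n\n```\nfrom nse_scraper.items import NseScraperItem\n\nitem = NseScraperItem(\n    ticker_symbol='SCOM',\n    stock_name='Safaricom PLC',\n    stock_price='17.05',\n    stock_change='0.15',\n)\nprint(item['stock_price'])   # items support dict-style access: prints 17.05\n# item['currency'] = 'KES'   # would raise KeyError: only declared fields are allowed\n```\n\n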
The ticker_symbol, stock_name, stock_price, and stock_change fields store the ticker symbol, stock name, stock price, and stock change respectively. Read more on items here.\n\n### Updating the pipelines.py file\n\nInside the root of our project, we have the ``pipelines.py`` file. A pipeline is a component which processes the items scraped from the spiders. It can clean, validate, and store the scraped data in a database. We'll use the Pipeline class to create our pipelines. The Pipeline class is the base class for all pipelines. It provides the general methods and properties that the pipeline will use.\n\nAdd the following code to the ``pipelines.py`` file:\n\n```\n# pipelines.py\n# useful for handling different item types with a single interface\nimport pymongo\nfrom scrapy.exceptions import DropItem\n\nfrom .items import NseScraperItem\n\nclass NseScraperPipeline:\n collection = \"stock_data\"\n\n def __init__(self, mongodb_uri, mongo_db):\n self.db = None\n self.client = None\n self.mongodb_uri = mongodb_uri\n self.mongo_db = mongo_db\n if not self.mongodb_uri:\n raise ValueError(\"MongoDB URI not set\")\n if not self.mongo_db:\n raise ValueError(\"Mongo DB not set\")\n\n @classmethod\n def from_crawler(cls, crawler):\n return cls(\n mongodb_uri=crawler.settings.get(\"MONGODB_URI\"),\n mongo_db=crawler.settings.get('MONGO_DATABASE', 'nse_data')\n )\n\n def open_spider(self, spider):\n self.client = pymongo.MongoClient(self.mongodb_uri)\n self.db = self.clientself.mongo_db]\n\n def close_spider(self, spider):\n self.client.close()\n \n def clean_stock_data(self,item):\n if item['ticker_symbol'] is None:\n raise DropItem('Missing ticker symbol in %s' % item)\n elif item['stock_name'] == 'None':\n raise DropItem('Missing stock name in %s' % item)\n elif item['stock_price'] == 'None':\n raise DropItem('Missing stock price in %s' % item)\n else:\n return item\n\n def process_item(self, item, spider):\n \"\"\"\n process item and store to database\n \"\"\"\n \n clean_stock_data = self.clean_stock_data(item)\n data = dict(NseScraperItem(clean_stock_data))\n print(data)\n # print(self.db[self.collection].insert_one(data).inserted_id)\n self.db[self.collection].insert_one(data)\n\n return item\n```\n\nFirst, we import the _pymongo_ module. We then import the DropItem class from the _scrapy.exceptions_ module. Next, import the **NseScraperItem** class from the items module.\n\nThe _NseScraperPipeline_ class creates our pipeline. The _collection_ variable store the name of the collection we'll be using. The __init__ method initializes the pipeline. It takes the mongodb_uri and mongo_db as arguments. It then uses an if statement to check if the mongodb_uri is set. If it isn't, it raises a ValueError. Next, it uses an if statement to check if the mongo_db is set. If it isn't, it raises a ValueError. \n\nThe from_crawler method creates an instance of the pipeline. It takes the crawler as an argument. It then returns an instance of the pipeline. The open_spider method opens the spider. It takes the spider as an argument. It then creates a MongoClient instance and stores it in the client variable. It uses the client instance to connect to the database and stores it in the db variable.\n\nThe close_spider method closes the spider. It takes the spider as an argument. It then closes the client instance. The clean_stock_data method cleans the scraped data. It takes the item as an argument. It then uses an if statement to check if the _ticker_symbol_ is None. If it is, it raises a DropItem. 
Next, it uses an if statement to check if the _stock_name_ is None. If it is, it raises a DropItem. It then uses an if statement to check if the _stock_price_ is None. If it is, it raises a _DropItem_. If none of the if statements are true, it returns the item. \n\nThe _process_item_ method processes the scraped data. It takes the item and spider as arguments. It then uses the _clean_stock_data_ method to clean the scraped data. It uses the dict function to convert the item to a dictionary. Next, it prints the data to the console. It then uses the db instance to insert the data into the database. It returns the item.\n\n### Updating the `settings.py` file\n\nInside the root of our project, we have the `settings.py` file. This file is used to stores our project settings. Add the following code to the `settings.py` file:\n\n```\n# settings.py\nimport os\nfrom dotenv import load_dotenv\n\nload_dotenv()\nBOT_NAME = 'nse_scraper'\n\nSPIDER_MODULES = ['nse_scraper.spiders']\nNEWSPIDER_MODULE = 'nse_scraper.spiders'\n\n# MONGODB SETTINGS\nMONGODB_URI = os.getenv(\"MONGODB_URI\")\nMONGO_DATABASE = os.getenv(\"MONGO_DATABASE\")\n\nITEM_PIPELINES = {\n 'nse_scraper.pipelines.NseScraperPipeline': 300,\n}\nLOG_LEVEL = \"INFO\"\n\n# USER_AGENT = 'nse_scraper (+http://www.yourdomain.com)'\n\n# Obey robots.txt rules\nROBOTSTXT_OBEY = False\n\n# Override the default request headers:\nDEFAULT_REQUEST_HEADERS = {\n 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',\n 'Accept-Language': 'en',\n}\n\n# Enable and configure HTTP caching (disabled by default)\n# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings\nHTTPCACHE_ENABLED = True\nHTTPCACHE_EXPIRATION_SECS = 360\nHTTPCACHE_DIR = 'httpcache'\n# HTTPCACHE_IGNORE_HTTP_CODES = []\nHTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'\n```\n\nFirst, we import the `os` and `load_dotenv` modules. We then call the `load_dotenv` function. It takes no arguments. This function loads the environment variables from the `.env` file.\n\n`nse_scraper.spiders`. We append the `MONGODB_URI` variable and set it to the `MONGODB_URI` environment variable. Next, we create the `MONGODB_DATABASE` variable and set it to the `MONGO_DATABASE` environment variable.\n\nAfter, we create the `ITEM_PIPELINES` variable and set it to `nse_scraper.pipelines.NseScraperPipeline`. We then create the `LOG_LEVEL` variable and set it to `INFO`. The `DEFAULT_REQUEST_HEADERS` variable is set to a dictionary. Next, we create the `HTTPCACHE_ENABLED` variable and set it to `True`. \n\nChange the `HTTPCACHE_EXPIRATION_SECS` variable and set it to `360`. Create the `HTTPCACHE_DIR` variable and set it to `httpcache`. 
Finally, create the `HTTPCACHE_STORAGE` variable and set it to `scrapy.extensions.httpcache.FilesystemCacheStorage`.\n\n## Project structure\n\nThe project structure is as follows:\n\n```\n\u251c nse_stock_scraper\n \u251c nse_scraper\n \u251c\u2500\u2500 __init__.py\n \u2502 \u251c\u2500\u2500 items.py\n \u2502 \u251c\u2500\u2500 middlewares.py\n \u2502 \u251c\u2500\u2500 pipelines.py\n \u2502 \u251c\u2500\u2500 settings.py\n \u251c\u2500 stock_notification.py \n \u2502 \u2514\u2500\u2500 spiders\n \u2502 \u251c\u2500\u2500 __init__.py\n \u2502 \u2514\u2500\u2500 afx_scraper.py\n \u251c\u2500\u2500 README.md\n \u251c\u2500\u2500 LICENSE\n \u251c\u2500\u2500 requirements.txt\n \u2514\u2500\u2500 scrapy.cfg\n \u251c\u2500\u2500 .gitignore\n \u251c\u2500\u2500 .env\n```\n\n## Running the scraper\n\nTo run the scraper, we'll need to open a terminal and navigate to the project directory. We'll then need to activate the virtual environment if it's not already activated. We can do this by running the following command:\n\n```bash\nsource venv/bin/activate\n```\n\nCreate a `.env` file in the root of the project (in /nse_scraper/). Add the following code to the `.env` file:\n\n```\nMONGODB_URI=mongodb+srv://\nMONGODB_DATABASE=\nat_username=\nat_api_key=\nmobile_number=\n```\n\nAdd your **MongoDB URI**, database name, Africas Talking username, API key, and mobile number to the `.env` file for your MongoDB URI. You can use the free tier of MongoDB Atlas. Get your URI over on the Atlas dashboard, under the `connect` [button. It should look something like this:\n\n```\nmongodb+srv://:@.mongodb.net/?retryWrites=true&w=majority\n```\n\nWe need to run the following command to run the scraper while in the project folder:\n\n (**/nse_scraper /**):\n\n```\nscrapy crawl afx_scraper\n```\n\n## Enabling text alerts (using Africas Talking)\n\nInstall the `africastalking` module by running the following command in the terminal:\n\n```\npip install africastalking\n```\n\nCreate a new file called `stock_notification.py` in the `nse_scraper` directory. 
Add the following code to the stock_notification.py file:\n\n```\n# stock_notification.py\nimport africastalking as at\nimport os\nfrom dotenv import load_dotenv\nimport pymongo\n\nload_dotenv()\n\nat_username = os.getenv(\"at_username\")\nat_api_key = os.getenv(\"at_api_key\")\nmobile_number = os.getenv(\"mobile_number\")\nmongo_uri = os.getenv(\"MONGODB_URI\")\n\n# Initialize the Africas sdk py passing the api key and username from the .env file\nat.initialize(at_username, at_api_key)\nsms = at.SMS\naccount = at.Application\n\nticker_data = ]\n\n# Create a function to send a message containing the stock ticker and price\ndef stock_notification(message: str, number: int):\n try:\n response = sms.send(message, [number])\n print(account.fetch_application_data())\n print(response)\n except Exception as e:\n print(f\" Houston we have a problem: {e}\")\n\n# create a function to query mongodb for the stock price of Safaricom\ndef stock_query():\n client = pymongo.MongoClient(mongo_uri)\n db = client[\"nse_data\"]\n collection = db[\"stock_data\"]\n # print(collection.find_one())\n ticker_data = collection.find_one({\"ticker\": \"BAT\"})\n print(ticker_data)\n stock_name = ticker_data[\"name\"]\n stock_price = ticker_data[\"price\"]\n sms_data = { \"stock_name\": stock_name, \"stock_price\": stock_price }\n print(sms_data)\n\n message = f\"Hello the current stock price of {stock_name} is {stock_price}\"\n # check if Safaricom share price is more than Kes 39 and send a notification.\n if int(float(stock_price)) >= 38:\n # Call the function passing the message and mobile_number as a arguments\n print(message)\n stock_notification(message, mobile_number)\n else:\n print(\"No notification sent\")\n \n client.close()\n\n return sms_data\n\nstock_query()\n```\n\nThe code above imports the `africastalking` module. Import the `os` and `load_dotenv` modules. We proceed to call the `load_dotenv` function. It takes no arguments. This function loads the environment variables from the `.env` file.\n\n* We create the `at_username` variable and set it to the `at_username` environment variable. We then create the `at_api_key` variable and set it to the `at_api_key` environment variable. Create the `mobile_number` variable and set it to the `mobile_number` environment variable. And create the `mongo_uri` variable and set it to the `MONGODB_URI` environment variable.\n* We initialize the `africastalking` module by passing the `at_username` and `at_api_key` variables as arguments. Create the `sms` variable and set it to `at.SMS`. Create the `account` variable and set it to `at.Application`.\n* Create the `ticker_data` variable and set it to an empty list. Create the `stock_notification` function. It takes two arguments: `message` and `number`. We then try to send the message to the number and print the response. Look for any exceptions and display them.\n* We created the `stock_query` function. We then create the `client` variable and set it to a `pymongo.MongoClient` object. Create the `db` variable and set it to the `nse_data` database. Then, create the `collection` variable and set it to the `stock_data` collection, and create the `ticker_data` variable and set it to the `collection.find_one` method. It takes a dictionary as an argument.\n\nThe `stock_name` variable is set to the `name` key in the `ticker_data` dictionary. Create the `stock_price` variable and set it to the `price` key in the `ticker_data` dictionary. Create the `sms_data` variable and set it to a dictionary. 
It contains the `stock_name` and `stock_price` variables.\n\nThe `message` variable is set to a string containing the stock name and price. We check if the stock price is greater than or equal to 38. If it is, we call the `stock_notification` function and pass the `message` and `mobile_number` variables as arguments. If it isn't, we print a message to the console.\n\nClose the connection to the database and return the `sms_data` variable. Call the `stock_query` function.\n\nWe need to add the following code to the `afx_scraper.py` file:\n\n```\n# afx_scraper.py\nfrom nse_scraper.stock_notification import stock_query\n\n# ...\n\n# Add the following code to the end of the file\nstock_query()\n```\n\nIf everything is set up correctly, you should see something like this:\n\n## Data in MongoDB Atlas\n\nWe need to create a new cluster in MongoDB Atlas. We can do this by: \n\n* Clicking on the `Build a Cluster` button. \n* Selecting the `Shared Clusters` option. \n* Selecting the `Free Tier` option. \n* Selecting the `Cloud Provider & Region` option. \n* Selecting the `AWS` option. (I selected the AWS Cape Town option.) \n* Selecting the `Cluster Name` option.\n* Giving the cluster a name. (We can call it `nse_data`.)\n\nLet\u2019s configure a user to access the cluster by following the steps below: \n\n* Select the `Database Access` option. \n* Click on the `Add New User` option. \n* Give the user a username. (I used `nse_user`.)\n* Give the user a password. (I used `nse_password`.)\n* Select the `Network Access` option. \n* Select the `Add IP Address` option. \n* Select the `Allow Access from Anywhere` option. \n* Select the `Cluster` option. We'll then need to select the `Create Cluster` option.\n\nClick on the `Collections` option and then on the `+ Create Database` button. Give the database a name. We can call it `nse_data`. Click on the `+ Create Collection` button. Give the collection a name. We can call it `stock_data`. If everything is set up correctly, you should see something like this:\n\n![Database records displayed in MongoDB Atlas\n\nIf you see an empty collection, rerun the project in the terminal to populate the values in MongoDB. In case of an error, read through the terminal output. Common issues could be:\n\n* The IP address was not added in the dashboard.\n* A lack of (or incorrect) credentials in your _.env_ file.\n* A syntax error in your code.\n* A poor internet connection.\n* A lack of appropriate permissions for your user.\n\n## Metrics in MongoDB Atlas\n\nLet's go through how to view metrics related to our database(s).\n\n* Click on the `Metrics` option. \n* Click on the `+ Add Metric` button.\n* Select the `Database` option.\n* Select the `nse_data` option. \n* Select the `Collection` option. \n* Select the `stock_data` option. \n* Select the `Metric` option.\n* Select the `Documents` option.\n* Select the `Time Range` option. \n* Select the `Last 24 Hours` option. \n* Select the `Granularity` option. \n* Select the `1 Hour` option. \n* Click on the `Add Metric` button. \n\nIf everything is set up correctly, it will look like this:\n\n## Charts in MongoDB Atlas\n\nMongoDB Atlas offers charts that can be used to visualize the data in the database. Click on the `Charts` option. Then, click on the `+ Add Chart` button. Select the `Database` option. Below is a screenshot of sample charts for NSE data:\n\n## Version control with Git and GitHub\n\nEnsure you have Git installed on your machine, along with a GitHub account.
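\n\nIf you want to confirm Git is available and that your commits will be attributed correctly before you start, a quick check looks like this (substitute your own name and email):\n\n```bash\ngit --version\ngit config --global user.name \"Your Name\"\ngit config --global user.email \"you@example.com\"\n```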
\n\nRun the following command in your terminal to initialize a git repository:\n\n```\ngit init\n```\n\nCreate a `.gitignore` file. We can do this by running the following command in our terminal:\n\n```\ntouch .gitignore\n```\n\nLet\u2019s add the .env file to the .gitignore file. Add the following code to the `.gitignore` file:\n\n```\n# .gitignore\n.env\n```\n\nAdd the files to the staging area by running the following command in our terminal:\n\n```\ngit add .\n```\n\nCommit the files to the repository by running the following command in our terminal:\n\n```\ngit commit -m \"Initial commit\"\n```\n\nCreate a new repository on GitHub by clicking on the `+` icon on the top right of the page and selecting `New repository`. Give the repository a name. We can call it `nse-stock-scraper`. Select `Public` as the repository visibility. Select `Add a README file` and `Add .gitignore` and select `Python` from the dropdown. Click on the `Create repository` button.\n\nAdd the remote repository to our local repository by running the following command in your terminal:\n\n```\ngit remote add origin\n```\n\nPush the files to the remote repositor by running the following command in your terminal:\n\n```\ngit push -u origin master\n```\n\n### CI/CD with GitHub Actions\n\nCreate a new folder \u2014 `.github` \u2014 and a `workflows` folder inside, in the root directory of the project. We can do this by running the following command in our terminal. Inside the `workflows file`, we'll need to create a new file called `scraper-test.yml`. We can do this by running the following command in our terminal:\n\n```\ntouch .github/workflows/scraper-test.yml\n```\n\nInside the scraper-test.yml file, we'll need to add the following code:\n\n```\nname: Scraper test with MongoDB\n\non: push]\n\njobs:\n build:\n\n runs-on: ubuntu-latest\n strategy:\n matrix:\n python-version: [3.8, 3.9, \"3.10\"]\n mongodb-version: ['4.4', '5.0', '6.0']\n\n steps:\n - uses: actions/checkout@v2\n - name: Set up Python ${{ matrix.python-version }}\n uses: actions/setup-python@v1\n with:\n python-version: ${{ matrix.python-version }}\n - name: Set up MongoDB ${{ matrix.mongodb-version }}\n uses: supercharge/mongodb-github-action@1.8.0\n with:\n mongodb-version: ${{ matrix.mongodb-version }}\n - name: Install dependencies\n run: |\n python -m pip install --upgrade pip\n pip install -r requirements.txt\n - name: Lint with flake8\n run: |\n pip install flake8\n # stop the build if there are Python syntax errors or undefined names\n flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics\n # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide\n flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics\n - name: scraper-test\n run: |\n cd nse_scraper\n export MONGODB_URI=mongodb://localhost:27017\n export MONGO_DATABASE=nse_data\n scrapy crawl afx_scraper -a output_format=csv -a output_file=afx.csv\n scrapy crawl afx_scraper -a output_format=json -a output_file=afx.json\n```\n\nLet's break down the code above. We create a new workflow called `Scraper test with MongoDB`. We then set the `on` event to `push`. Create a new job called `build`. Set the `runs-on` to `ubuntu-latest`. Set the `strategy` to a matrix. It contains the `python-version` and `mongodb-version` variables. Set the `python-version` to `3.8`, `3.9`, and `3.10`. Set the `mongodb-version` to `4.4`, `5.0`, and `6.0`.\n\nCreate a new step called `Checkout`. Set the `uses` to `actions/checkout@v2`. 
Create a new step called `Set up Python ${{ matrix.python-version }}` and set the `uses` to `actions/setup-python@v1`. Set the `python-version` to `${{ matrix.python-version }}`. Create a new step called `Set up MongoDB ${{ matrix.mongodb-version }}`. This sets up different Python versions and MongoDB versions for testing.\n\nThe `Install dependencies` step installs the dependencies. Create a new step called `Lint with flake8`. This step lints the code. Create a new step called `scraper-test`. This step runs the scraper and tests it.\n\nCommit the changes to the repository by running the following command in your terminal:\n\n```\ngit add .\ngit commit -m \"Add GitHub Actions\"\ngit push\n```\n\nGo to the `Actions` tab on your repository. You should see something like this:\n\n![Displaying the build process\n\n## Conclusion\n\nIn this tutorial, we built a stock price scraper using Python and Scrapy. We then used MongoDB to store the scraped data. We used Africas Talking to send SMS notifications. Finally, we implemented a CI/CD pipeline using GitHub Actions.\n\nThere are definite improvements that can be made to this project. For example, we can add more stock exchanges. We can also add more notification channels. This project should serve as a good starting point.\n\nThank you for reading through this far., I hope you have gained insight or inspiration for your next project with MongoDB Atlas. Feel free to comment below or reach out for further improvements. We\u2019d love to hear from you! This project is oOpen sSource and available on GitHub \u2014, clone or fork it!, I\u2019m excited to see what you build.", "format": "md", "metadata": {"tags": ["Atlas", "Python"], "pageDescription": "A step-by-step guide on how to set up the development environment, create a Scrapy spider, parse the website, and store the data in MongoDB", "contentType": "Tutorial"}, "title": "Nairobi Stock Exchange Web Scraper", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/introducing-realm-flutter-sdk", "action": "created", "body": "# Introducing the Realm Flutter SDK\n\n> This article discusses the alpha version of the Realm Flutter SDK which is now in public preview with more features and functionality. Learn more here.\"\n\nToday, we are pleased to announce the next installment of the Realm Flutter SDK \u2013 now with support for Windows, macOS, iOS, and Android. This release gives you the ability to use Realm in any of your Flutter or Dart projects regardless of the version. \n\nRealm is a simple super-fast, object-oriented database for mobile applications that does not require an ORM layer or any glue code to work with your data layer. With Realm, working with your data is as simple as interacting with objects from your data model. Any updates to the underlying data store will automatically update your objects as soon as the state on disk has changed, enabling you to automatically refresh the view via StatefulWidgets and Streams.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n\n## Introduction \n\nFlutter has been a boon to developers across the globe, a framework designed for all platforms: iOS, Android, server and desktop. 
It enables a developer to write code once and deploy to multiple platforms. Optimizing for performance across multiple platforms that use different rendering engines and creating a single hot reload that work across all platforms is not an easy feat, but the Flutter and Dart teams have done an amazing job. It\u2019s not surprising therefore that Flutter support is our top request on Github.\n\nRealm\u2019s Core Database is platform independent meaning it is easily transferable to another environment which has enabled us to build SDKs for the most popular mobile development frameworks: iOS with Swift, Android with Kotlin, React Native, Xamarin, Unity, and now Flutter.\n\nOur initial version of the Flutter SDK was tied to a custom-built Flutter engine. It was version-specific and shipped as a means to gather feedback from the community on our Realm APIs. With this latest version, we worked closely with the Flutter and Dart team to integrate with the Dart FFI APIs. Now, developers can use Realm with any version of their Dart or Flutter projects. More importantly though, this official integration will form the underpinning of all our future releases and includes full support from Dart\u2019s null safety functionality. Moving forward, we will continue to closely partner with the Flutter and Dart team to follow best practices and ensure version compatibility. \n\n## Why Realm\n\nAll of Realm\u2019s SDKs are built on three core concepts:\n\n* An object database that infers the schema from the developers\u2019 class structure \u2013 making working with objects as easy as interacting with their data layer. No conversion code necessary\n* Live objects so the developer has a simple way to update their UI \u2013 integrated with StatefulWidgets and Streams\n* A columnar store so that query results return in lightning speed and directly integrate with an idiomatic query language the developer prefers\n\nRealm is a database designed for mobile applications as a replacement for SQLite. It was written from the ground up in C++, so it is not a wrapper around SQLite or any other relational datastore. Designed with the mobile environment in mind, it is lightweight and optimizes for constraints like compute, memory, bandwidth, and battery that do not exist on the server side. Realm uses lazy loading and memory mapping with each object reference pointing directly to the location on disk where the state is stored. This exponentially increases lookup and query speed as it eliminates the loading of state pages of disk space into memory to perform calculations. It also reduces the amount of memory pressure on the device while working with the data layer. \n\n## Realm for Flutter Developers\n\nSince Realm is an object database, your schema is defined in the same way you define your object classes. Additionally, Realm delivers a simple and intuitive string-based query system that will feel natural to Flutter developers. No more context switching to SQL to instantiate your schema or looking behind the curtain when an ORM fails to translate your calls into SQL. And because Realm object\u2019s are memory-mapped, a developer can bind an object or query directly to the UI. As soon as changes are made to the state store, they are immediately reflected in the UI. No need to write complex logic to continually recheck whether a state change affects objects or queries bound to the UI and therefore refresh the UI. 
Realm updates the UI for you.\n\n```cs \n// Import the Realm package and set your app file name\nimport 'package:realm_dart/realm.dart';\n\npart 'test.g.dart'; // if this is test.dart\n\n// Set your schema\n@RealmModel()\nclass _Task {\n late String name;\n late String owner;\n late String status;\n}\n\nvoid main(List arguments) {\n // Open a realm database instance. Be sure to run the Realm generator to generate your schema\n var config = Configuration(Task.schema]);\n var realm = Realm(config);\n\n // Create an instance of your Tasks object and persist it to disk\n var task = Task(\"Ship Flutter\", \"Lubo\", \"InProgress\");\n realm.write(() {\n realm.add(task);\n });\n\n // Use a string to based query language to query the data\n var myTasks = realm.all().query(\"status == 'InProgress'\");\n\n var newTask = Task(\"Write Blog\", \"Ian\", \"InProgress\");\n realm.write(() {\n realm.add(newTask);\n });\n\n // Queries are kept live and auto-updating - the length here is now 2\n myTasks.length;\n}\n```\n## Looking Ahead\n\nThe Realm Flutter SDK is free, open source and available for you to try out today. While this release is still in Alpha, our development team has done a lot of the heavy lifting to set a solid foundation \u2013 with a goal of moving rapidly into public preview and GA later this year. We will look to bring new notification APIs, a migration API, solidify our query system, helper functions for Streams integration, and of course Atlas Device Sync to automatically replicate data to MongoDB Atlas. \n\nGive it a try today and let us know what you [think! Check out our samples, read our docs, and follow our repo.\n", "format": "md", "metadata": {"tags": ["Realm", "Dart", "Flutter"], "pageDescription": "Announcing the next installment of the Realm Flutter SDK \u2013 now with support for Windows, macOS, iOS, and Android.", "contentType": "News & Announcements"}, "title": "Introducing the Realm Flutter SDK", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/streaming-data-apache-spark-mongodb", "action": "created", "body": "# Streaming Data with Apache Spark and MongoDB\n\nMongoDB has released a version 10 of the MongoDB Connector for Apache Spark that leverages the new Spark Data Sources API V2 with support for Spark Structured Streaming.\n\n## Why a new version?\n\nThe current version of the MongoDB Spark Connector was originally written in 2016 and is based upon V1 of the Spark Data Sources API. While this API version is still supported, Databricks has released an updated version of the API, making it easier for data sources like MongoDB to work with Spark. By having the MongoDB Spark Connector use V2 of the API, an immediate benefit is a tighter integration with Spark Structured Streaming.\n\n*Note: With respect to the previous version of the MongoDB Spark Connector that supported the V1 API, MongoDB will continue to support this release until such a time as Databricks depreciates V1 of the Data Source API. 
While no new features will be implemented, upgrades to the connector will include bug fixes and support for the current versions of Spark only.*\n\n## What version should I use?\n\nThe new MongoDB Spark Connector release (Version 10.1) is not intended to be a direct replacement for your applications that use the previous version of MongoDB Spark Connector.\n\nThe new Connector uses a different namespace with a short name, \u201cmongodb\u201d (full path is \u201ccom.mongodb.spark.sql.connector.MongoTableProvider\u201d), versus \u201cmongo\u201d (full path of \u201ccom.mongodb.spark.DefaultSource\u201d). Having a different namespace makes it possible to use both versions of the connector within the same Spark application! This is helpful in unit testing your application with the new Connector and making the transition on your timeline. \n\nAlso, we are changing how we version the MongoDB Spark Connector. The previous versions of the MongoDB Spark Connector aligned with the version of Spark that was supported\u2014e.g., Version 2.4 of the MongoDB Spark Connector works with Spark 2.4. Keep in mind that going forward, this will not be the case. The MongoDB documentation will make this clear as to which versions of Spark the connector supports.\n\n## Structured Streaming with MongoDB using continuous mode\n\nApache Spark comes with a stream processing engine called Structured Streaming, which is based on Spark's SQL engine and DataFrame APIs. Spark Structured Streaming treats each incoming stream of data as a micro-batch, continually appending each micro-batch to the target dataset. This makes it easy to convert existing Spark batch jobs into a streaming job. Structured Streaming has evolved over Spark releases and in Spark 2.3 introduced Continuous Processing mode, which took the micro-batch latency from over 100ms to about 1ms. Note this feature is still in experimental mode according to the official Spark Documentation. In the following example, we\u2019ll show you how to stream data between MongoDB and Spark using Structured Streams and continuous processing. First, we\u2019ll look at reading data from MongoDB.\n\n### Reading streaming data from MongoDB\n\nYou can stream data from MongoDB to Spark using the new Spark Connector. Consider the following example that streams stock data from a MongoDB Atlas cluster. A sample document in MongoDB is as follows:\n\n```\n{\n _id: ObjectId(\"624767546df0f7dd8783f300\"),\n company_symbol: 'HSL',\n company_name: 'HUNGRY SYNDROME LLC',\n price: 45.74,\n tx_time: '2022-04-01T16:57:56Z'\n}\n```\nIn this code example, we will use the new MongoDB Spark Connector and read from the StockData collection. When the Spark Connector opens a streaming read connection to MongoDB, it opens the connection and creates a MongoDB Change Stream for the given database and collection. A change stream is used to subscribe to changes in MongoDB. As data is inserted, updated, and deleted, change stream events are created. It\u2019s these change events that are passed back to the client in this case the Spark application. There are configuration options that can change the structure of this event message. 
For example, if you want to return just the document itself and not include the change stream event metadata, set \u201cspark.mongodb.change.stream.publish.full.document.only\u201d to true.\n\n```\nfrom pyspark import SparkContext\nfrom pyspark.streaming import StreamingContext\nfrom pyspark.sql import SparkSession\nfrom pyspark.sql.functions import *\n\nspark = SparkSession.\\\n builder.\\\n appName(\"streamingExampleRead\").\\\n config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector_2.12::10.1.1').\\\n getOrCreate()\n\nquery=(spark.readStream.format(\"mongodb\")\n.option('spark.mongodb.connection.uri', '')\n .option('spark.mongodb.database', 'Stocks') \\\n .option('spark.mongodb.collection', 'StockData') \\\n.option('spark.mongodb.change.stream.publish.full.document.only','true') \\\n .option(\"forceDeleteTempCheckpointLocation\", \"true\") \\\n .load())\n\nquery.printSchema()\n```\n\nThe schema is inferred from the MongoDB collection. You can see from the printSchema command that our document structure is as follows:\n\n| root: | | | |\n| --- | --- | --- | --- |\n| |_id |string |(nullable=true) |\n| |company_name| string | (nullable=true) |\n| | company_symbol | string | (nullable=true) |\n| | price | double | (nullable=true) |\n| | tx_time | string | (nullable=true) |\n\nWe can verify that the dataset is streaming with the isStreaming command.\n\n```\nquery.isStreaming\n```\n\nNext, let\u2019s read the data on the console as it gets inserted into MongoDB.\n\n```\nquery2=(query.writeStream \\\n .outputMode(\"append\") \\\n .option(\"forceDeleteTempCheckpointLocation\", \"true\") \\\n .format(\"console\") \\\n .trigger(continuous=\"1 second\")\n .start().awaitTermination());\n```\n\nWhen the above code was run through spark-submit, the output resembled the following:\n\n\u2026 removed for brevity \u2026\n\n-------------------------------------------\nBatch: 2\n-------------------------------------------\n+--------------------+--------------------+--------------+-----+-------------------+\n| _id| company_name|company_symbol|price| tx_time|\n+--------------------+--------------------+--------------+-----+-------------------+\n|62476caa6df0f7dd8...| HUNGRY SYNDROME LLC| HSL|45.99|2022-04-01 17:20:42|\n|62476caa6df0f7dd8...|APPETIZING MARGIN...| AMP|12.81|2022-04-01 17:20:42|\n|62476caa6df0f7dd8...|EMBARRASSED COCKT...| ECC|38.18|2022-04-01 17:20:42|\n|62476caa6df0f7dd8...|PERFECT INJURY CO...| PIC|86.85|2022-04-01 17:20:42|\n|62476caa6df0f7dd8...|GIDDY INNOVATIONS...| GMI|84.46|2022-04-01 17:20:42|\n+--------------------+--------------------+--------------+-----+-------------------+\n\n\u2026 removed for brevity \u2026\n\n-------------------------------------------\n\nBatch: 3\n-------------------------------------------\n+--------------------+--------------------+--------------+-----+-------------------+\n| _id| company_name|company_symbol|price| tx_time|\n+--------------------+--------------------+--------------+-----+-------------------+\n|62476cab6df0f7dd8...| HUNGRY SYNDROME LLC| HSL|46.04|2022-04-01 17:20:43|\n|62476cab6df0f7dd8...|APPETIZING MARGIN...| AMP| 12.8|2022-04-01 17:20:43|\n|62476cab6df0f7dd8...|EMBARRASSED COCKT...| ECC| 38.2|2022-04-01 17:20:43|\n|62476cab6df0f7dd8...|PERFECT INJURY CO...| PIC|86.85|2022-04-01 17:20:43|\n|62476cab6df0f7dd8...|GIDDY INNOVATIONS...| GMI|84.46|2022-04-01 17:20:43|\n+--------------------+--------------------+--------------+-----+-------------------+\n \n### Writing streaming data to MongoDB\n\nNext, let\u2019s consider an 
example where we stream data from Apache Kafka to MongoDB. Here the source is a kafka topic \u201cstockdata.Stocks.StockData.\u201d As data arrives in this topic, it\u2019s run through Spark with the message contents being parsed, transformed, and written into MongoDB. Here is the code listing with comments in-line:\n\n```\nfrom pyspark import SparkContext\nfrom pyspark.streaming import StreamingContext\nfrom pyspark.sql import SparkSession\nfrom pyspark.sql import functions as F\nfrom pyspark.sql.functions import *\nfrom pyspark.sql.types import StructType,TimestampType, DoubleType, StringType, StructField\n\nspark = SparkSession.\\\n builder.\\\n appName(\"streamingExampleWrite\").\\\n config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector:10.1.1').\\\n config('spark.jars.packages', 'org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.0').\\\n getOrCreate()\n\ndf = spark \\\n .readStream \\\n .format(\"kafka\") \\\n .option(\"startingOffsets\", \"earliest\") \\\n .option(\"kafka.bootstrap.servers\", \"KAFKA BROKER HOST HERE\") \\\n .option(\"subscribe\", \"stockdata.Stocks.StockData\") \\\n .load()\n\nschemaStock = StructType( \\\n StructField(\"_id\",StringType(),True), \\\n StructField(\"company_name\",StringType(), True), \\\n StructField(\"company_symbol\",StringType(), True), \\\n StructField(\"price\",StringType(), True), \\\n StructField(\"tx_time\",StringType(), True)])\n\nschemaKafka = StructType([ \\\n StructField(\"payload\",StringType(),True)])\n```\n\nNote that Kafka topic message arrives in this format -> key (binary), value (binary), topic (string), partition (int), offset (long), timestamp (long), timestamptype (int). See [Structured Streaming + Kafka Integration Guide (Kafka broker version 0.10.0 or higher) for more information on the Kafka and Spark integration.\n\nTo process the message for consumption into MongoDB, we want to pick out the value which is in binary format and convert it to JSON.\n\n```\nstockDF=df.selectExpr(\"CAST(value AS STRING)\")\n```\n\nFor reference, here is an example of an event (the value converted into a string) that is on the Kafka topic:\n\n```\n{\n \"schema\": {\n \"type\": \"string\",\n \"optional\": false\n },\n \"payload\": \"{\\\"_id\\\": {\\\"$oid\\\": \\\"6249f8096df0f7dd8785d70a\\\"}, \\\"company_symbol\\\": \\\"GMI\\\", \\\"company_name\\\": \\\"GIDDY INNOVATIONS\\\", \\\"price\\\": 87.57, \\\"tx_time\\\": \\\"2022-04-03T15:39:53Z\\\"}\"\n}\n```\n\nWe want to isolate the payload field and convert it to a JSON representation leveraging the shcemaStock defined above. For clarity, we have broken up the operation into multiple steps to explain the process. First, we want to convert the value into JSON.\n\n```\nstockDF=stockDF.select(from_json(col('value'),schemaKafka).alias(\"json_data\")).selectExpr('json_data.*')\n```\n\nThe dataset now contains data that resembles\n\n```\n\u2026\n {\n _id: ObjectId(\"624c6206e152b632f88a8ee2\"),\n payload: '{\"_id\": {\"$oid\": \"6249f8046df0f7dd8785d6f1\"}, \"company_symbol\": \"GMI\", \"company_name\": \"GIDDY MONASTICISM INNOVATIONS\", \"price\": 87.62, \"tx_time\": \"2022-04-03T15:39:48Z\"}'\n }, \u2026\n```\n\nNext, we want to capture just the value of the payload field and convert that into JSON since it\u2019s stored as a string.\n\n```\nstockDF=stockDF.select(from_json(col('payload'),schemaStock).alias(\"json_data2\")).selectExpr('json_data2.*')\n```\n\nNow we can do whatever transforms we would like to do on the data. 
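\n\nFor instance, one optional transform (an illustration on top of this walkthrough, not a required step) is to cast `price` to a double, since `schemaStock` above parses it as a string:\n\n```\n# price arrives as a string per schemaStock; cast it before writing it out\nstockDF = stockDF.withColumn(\"price\", col(\"price\").cast(\"double\"))\n```\n\n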
In this case, let\u2019s convert the tx_time into a timestamp.\n\n```\nstockDF=stockDF.withColumn(\"tx_time\",col(\"tx_time\").cast(\"timestamp\"))\n```\n\nThe Dataset is in a format that\u2019s ready for consumption into MongoDB, so let\u2019s stream it out to MongoDB. To do this, use the writeStream method. Keep in mind there are various options to set. For example, when present, the \u201ctrigger\u201d option processes the results in batches. In this example, it\u2019s every 10 seconds. Removing the trigger field will result in continuous writing. For more information on options and parameters, check out the Structured Streaming Guide.\n\n```\ndsw = (\n stockDF.writeStream\n .format(\"mongodb\")\n .queryName(\"ToMDB\")\n .option(\"checkpointLocation\", \"/tmp/pyspark7/\")\n .option(\"forceDeleteTempCheckpointLocation\", \"true\")\n .option('spark.mongodb.connection.uri', \u2018')\n .option('spark.mongodb.database', 'Stocks')\n .option('spark.mongodb.collection', 'Sink')\n .trigger(continuous=\"10 seconds\")\n .outputMode(\"append\")\n .start().awaitTermination());\n```\n\n## Structured Streaming with MongoDB using Microbatch mode\nWhile continuous mode offers a lot of promise in terms of the latency and performance characteristics, the support for various popular connectors like AWS S3 for example is non-existent. Thus, you might end up using microbatch mode within your solution. The key difference between the two is how spark handles obtaining the data from the stream. As mentioned previously, the data is batched and processed versus using a continuous append to a table. The noticeable difference is the advertised latency of microbatch around 100ms which for most workloads might not be an issue.\n### Reading streaming data from MongoDB using microbatch\n\nUnlike when we specify a write, when we read from MongoDB, there is no special configuration to tell Spark to use microbatch or continuous. This behavior is determined only when you write. Thus, in our code example, to read from MongoDB is the same in both cases, e.g.:\n\n```\nquery=(spark.readStream.format(\"mongodb\").\\\noption('spark.mongodb.connection.uri', '<>').\\\noption('spark.mongodb.database', 'Stocks').\\\noption('spark.mongodb.collection', 'StockData').\\\noption('spark.mongodb.change.stream.publish.full.document.only','true').\\\noption(\"forceDeleteTempCheckpointLocation\", \"true\").\\\nload())\n```\n\nRecall from the previous discussion on reading MongoDB data, when using `spark.readStream.format(\"mongodb\")`, MongoDB opens a change stream and subscribes to changes as they occur in the database. With microbatch each microbatch event opens a new change stream cursor making this form of microbatch streaming less efficient than continuous streams. 
That said, some consumers of streaming data such as AWS S3 only support data from microbatch streams.\n\n### Writing streaming data to MongoDB using microbatch\nConsider the previous writeStream example code:\n\n```\ndsw = (\n stockDF.writeStream\n .format(\"mongodb\")\n .queryName(\"ToMDB\")\n .option(\"checkpointLocation\", \"/tmp/pyspark7/\")\n .option(\"forceDeleteTempCheckpointLocation\", \"true\")\n .option('spark.mongodb.connection.uri', '<>')\n .option('spark.mongodb.database', 'Stocks')\n .option('spark.mongodb.collection', 'Sink')\n .trigger(continuous=\"10 seconds\")\n .outputMode(\"append\")\n .start().awaitTermination());\n```\n\nHere the .trigger parameter was used to tell Spark to use Continuous mode streaming, to use microbatch simply remove the .trigger parameter.\n\n## Go forth and stream!\n\nStreaming data is a critical component of many types of applications. MongoDB has evolved over the years, continually adding features and functionality to support these types of workloads. With the MongoDB Spark Connector version 10.1, you can quickly stream data to and from MongoDB with a few lines of code.\n\nFor more information and examples on the new MongoDB Spark Connector version 10.1, check out the online documentation. Have questions about the connector or MongoDB? Post a question in the MongoDB Developer Community Connectors & Integrations forum.", "format": "md", "metadata": {"tags": ["Python", "Connectors", "Spark", "AI"], "pageDescription": "MongoDB has released a new spark connector, MongoDB Spark Connector V10. In this article, learn how to read from and write to MongoDB through Spark Structured Streaming.", "contentType": "Article"}, "title": "Streaming Data with Apache Spark and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/announcing-realm-cplusplus-sdk-alpha", "action": "created", "body": "# Announcing the Realm C++ SDK Alpha\n\nToday, we are excited to announce the Realm C++ SDK Alpha and the continuation of the work toward a private preview. Our C++ SDK was built to address increasing demand \u2014 for seamless data management and on-device data storage solutions \u2014 from our developer community in industries such as automotive, healthcare, and retail. This interest tracks with the continued popularity of C++ as illustrated in the recent survey by Tiobe and the Language of the Year 2022 status by Tiobe.\n\nThis SDK was developed in collaboration with the Qt Company. Their example application showcases the functionality of Atlas Device Sync and Realm in an IoT scenario. Take a look at the companion blog post by the Qt Company.\n\nThe Realm C++ SDK allows developers to easily store data on devices for offline availability \u2014 and automatically sync data to and from the cloud \u2014 in an idiomatic way within their C++ applications. Realm is a modern data store, an alternative to SQLite, which is simple to use because it is an object-oriented database and does not require a separate mapping layer or ORM. 
In line with the mission of MongoDB\u2019s developer data platform \u2014 designing technologies to make the development process for developers seamless \u2014 networking retry logic and sophisticated conflict merging functionality is built right into this technology, eliminating the need to write and maintain a large volume of code that would traditionally be required.\n\n## Why Realm C++ SDK?\n\nWe consider the Realm C++ SDK to be especially well suited for areas such as embedded devices, IoT, and cross-platform applications:\n\n1. Realm is a fully fledged object-oriented persistence layer for edge, mobile, and embedded devices that comes with out-of-the-box support for synchronizing to the MongoDB Atlas cloud back end. As devices become increasingly \u201csmart\u201d and connected, they require more data, such as historical data enabling automated decision making, and necessitate efficient persistence layer and real-time cloud-syncing technologies.\n2. Realm is mature, feature-rich and enterprise-ready, with over 10 years of history. The technology is integrated with tens of thousands of applications in Google Play and the Apple App Store that have been downloaded by billions of users in the past six months alone.\n3. Realm is designed and developed for resource constrained environments \u2014 it is lightweight and optimizes for constraints like compute, memory, bandwidth, and battery.\n4. Realm can be embedded in the application code and does not require any additional deployment tasks or activities.\n5. Realm is fully object-oriented, which makes data modeling straightforward and idiomatic. Alternative technologies like SQLite require an object-relational mapping library, which adds complexity and makes future development, maintenance, and debugging painful.\n6. Updates to the underlying data store in Realm are reflected instantly in the objects which help drive reactive UI layers in different environments.\n\nLet\u2019s dive deeper into a concrete example of using Realm.\n\n## Realm quick start example\n\nThe following Todo list example is borrowed from the quick start documentation. We start by showing how Realm infers the data schema directly from the class structure with no conversion code necessary:\n\n```\n#include \n\nstruct Todo : realm::object {\n realm::persisted _id{realm::object_id::generate()};\n realm::persisted name;\n realm::persisted status;\n\n static constexpr auto schema = realm::schema(\"Todo\",\n realm::property<&Todo::_id, true>(\"_id\"),\n realm::property<&Todo::name>(\"name\"),\n realm::property<&Todo::status>(\"status\"),\n};\n```\n\nNext, we\u2019ll open a local Realm and store an object in it:\n\n```\nauto realm = realm::open();\n\nauto todo = Todo {\n .name = \"Create my first todo item\",\n .status = \"In Progress\"\n};\n\nrealm.write(&realm, &todo] {\n realm.add(todo);\n});\n```\n\nWith the object stored, we are ready to fetch the object back from Realm and modify it:\n\n```\n// Fetch all Todo objects\nauto todos = realm.objects();\n\n// Filter as per object state\nauto todosInProgress = todos.where([ {\n return todo.status == \"In Progress\";\n});\n\n// Mark a Todo item as complete\nauto todoToUpdate = todosInProgress0];\nrealm.write([&realm, &todoToUpdate] {\n todoToUpdate.status = \"Complete\";\n});\n\n// Delete the Todo item\nrealm.write([&realm, &todoToUpdate] {\n realm.remove(todo);\n});\n```\n\nWhile the above query examples are simple, [Realm\u2019s rich query language enables developers to easily express queries even for complex use cases. 
Realm uses lazy loading and memory mapping with each object reference pointing directly to the location on disk where the state is stored. This increases lookup and query speed performance as it eliminates the loading of pages of state into memory to perform calculations. It also reduces the amount of memory pressure on the device while working with the data layer.\n\nThe complete Realm C++ SDK documentation provides more complex examples for filtering and querying the objects and shows how to register an object change listener, which enables the developer to react to state changes automatically, something we leverage in the Realm with Qt and Atlas Device Sync example application.\n\n## Realm with Qt and Atlas Device Sync\n\nFirst a brief introduction to Qt:\n\n*The Qt framework contains a comprehensive set of highly intuitive and modularized C++ libraries and cross-platform APIs to simplify UI application development. Qt produces highly readable, easily maintainable, and reusable code with high runtime performance and small footprint.*\n\nThe example provided together with Qt is a smart coffee machine application. We have integrated Realm and Atlas Device Sync into the coffee machine application by extending the existing coffee selection and brewing menu, and by adding local data storage and cloud-syncing \u2014 essentially turning the coffee machine into a fleet of machines. The image below clarifies:\n\nThis fleet could be operated and controlled remotely by an operator and could include separate applications for the field workers maintaining the machines. Atlas Device Sync makes it easy for developers to build reactive applications for multi-device scenarios by sharing the state in real-time with the cloud and local devices. \n\nThis is particularly compelling when combined with a powerful GUI framework such as Qt. The slots and signals mechanism in Qt sits naturally with Realm\u2019s Object Change Listeners, emitting signals of changes to data from Atlas Device Sync so integration is a breeze.\n\nIn the coffee machine example, we integrated functionality such as configuring drink recipes in cloud, out of order sensing, and remote control logic. With Realm with Atlas Device Sync, we also get the resiliency for dropped network connections out of the box. \n\nThe full walkthrough of the example application is outside of this blog post and we point to the full source code and the more detailed walkthrough in our repository.\n\n## Looking ahead\n\nWe are working hard to improve the Realm C++ SDK and will be moving quickly to private preview. We look forward to hearing feedback from our users and partners on applications they are looking to build and how the SDK might be extended to support their use case. In the private preview phase, we hope to deliver Windows support and package managers such as Conan, as well as continuing to close the gap when compared to other Realm SDKs. While we don\u2019t anticipate major breaking changes, the API may change based on feedback from our community. We expect the ongoing private preview phase to finalize in the next few quarters and we are closely monitoring the feedback from the users via the GitHub project.\n\n> **Want more information?**\n> Interested in learning more before trying the product? 
Submit your information to get in touch.\n> \n> **Ready to get started now?**\n> Use the C++ SDK by installing the SDK, read our docs, and follow our repo.\n> \n> Then, register for Atlas to connect to Atlas Device Sync, a fully-managed mobile backend as a service. Leverage out-of-the-box infrastructure, data synchronization capabilities, network handling, and much more to quickly launch enterprise-grade mobile apps.\n> \n> Finally, let us know what you think and get involved in our forums. See you there!", "format": "md", "metadata": {"tags": ["Realm", "C++"], "pageDescription": "Today, we are excited to announce the Realm C++ SDK Alpha and the continuation of the work toward a private preview.", "contentType": "Article"}, "title": "Announcing the Realm C++ SDK Alpha", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/guide-working-esg-data", "action": "created", "body": "# The 5-Minute Guide to Working with ESG Data on MongoDB\n\nMongoDB makes it incredibly easy to work with environmental, social, and corporate governance (ESG) data from multiple providers, analyze that data, and then visualize it.\n\nIn this quick guide, we will show you how MongoDB can:\n\n* Move ESG data from different data sources to the document model. \n* Easily incorporate new ESG source feeds to the document data model.\n* Run advanced, aggregated queries on ESG data.\n* Visualize ESG data.\n* Manage different data types in a single document.\n* Integrate geospatial data.\n\nThroughout this guide, we have sourced ESG data from MSCI.\n\n>NOTE: An MSCI account and login is required to download the datasets linked to in this article. Dataset availability is dependent on MSCI product availability. \n\nOur examples are drawn from real-life work with MongoDB clients in the financial services\n industry. Screenshots (apart from code snippets) are taken from MongoDB Compass, MongoDB\u2019s GUI for querying, optimizing, and analyzing data.\n\n## Importing data into MongoDB\nThe first step is to download the MSCI dataset, and import the MSCI .csv file (Figure 1) into MongoDB.\n\nEven though MSCI\u2019s data is in tabular format, MongoDB\u2019s document data model allows you to import the data directly into a database collection and apply the data types as needed.\n\n*Figure 1. Importing the data using MongoDB\u2019s Compass GUI*\n\nWith the MSCI data imported into MongoDB, we can start discovering, querying, and visualizing it.\n## Scenario 1: Basic gathering and querying of ESG data using basic aggregations\n**Source Data Set**: *MSCI ESG Accounting Governance Risk (AGR)* \n**Collection**: `accounting_governance_risk_agr_ratings `\n\nFrom MSCI - *\u201c**ESG AGR** uses a quantitative approach to identify risks in the financial reporting practices and accounting governance of publicly listed companies. 
Metrics contributing to the score include traditional fundamental ratios used to evaluate corporate strength and profitability, as well as forensic ratios.\u201d*\n\n**Fields/Data Info:**\n\n* **The AGR (Accounting & Governance Risk) Rating** consists of four groupings based on the AGR Percentile: Very Aggressive (1-10), Aggressive (11-35), Average (36-85), Conservative (86-100).\n* **The AGR (Accounting & Governance Risk) Percentile** ranges from 1-100, with lower values representing greater risks.\n\n### Step 1: Match and group AGR ratings per country of interest \nIn this example, we will count the number of AGR rated companies in Japan belonging to each AGR rating group (i.e., Very Aggressive, Aggressive, Average, and Conservative). To do this, we will use MongoDB\u2019s aggregation pipeline to process multiple documents and return the results we\u2019re after. \n\nThe aggregation pipeline presents a powerful abstraction for working with and analyzing data stored in the MongoDB database. The composability of the aggregation pipeline is one of the keys to its power. The design was actually modeled on the Unix pipeline, which allows developers to string together a series of processes that work together. This helps to simplify their application code by reducing logic, and when applied appropriately, a single aggregation pipeline can replace many queries and their associated network round trip times.\n\nWhat aggregation stages will we use?\n\n* The **$match** operator in MongoDB works as a filter. It filters the documents to pass only the documents that match the specified condition(s).\n* The **$group** stage separates documents into groups according to a \"group key,\" which, in this case, is the value of Agr_Rating.\n* Additionally, at this stage, we can summarize the total count of those entities.\n\nCombining the first two aggregation stages, we can filter the Issuer_Cntry_Domicile field to be equal to Japan \u2014 i.e., \u201dJP\u201d \u2014 and group the AGR ratings. \n\nAs a final step, we will also sort the output of the total_count in descending order (hence the -1) and merge the results into another collection in the database of our choice, with the **$merge** operator.\n\n```\n{\n $match: {\n Issuer_Cntry_Domicile: 'JP'\n }\n}, {\n $group: {\n _id: '$Agr_Rating',\n total_count: {\n $sum: 1\n },\n country: {\n $first: '$Issuer_Cntry_Domicile'\n }\n }\n}, {\n $sort: {\n total_count: -1\n }\n}, {\n $merge: {\n into: {\n db: 'JP_DB',\n coll: 'jp_agr_risk_ratings'\n },\n on: '_id',\n whenMatched: 'merge',\n whenNotMatched: 'insert'\n }\n}]\n```\nThe result and output collection `'jp_agr_risk_ratings'` can be seen below.\n\n![result and output collection\n\n### Step 2: Visualize the output with MongoDB Charts\nNext, let\u2019s visualize the results of Step 1 with MongoDB Charts, which is integrated into MongoDB. With Charts, there\u2019s no need for developers to worry about finding a compatible data visualization tool, dealing with data movement, or data duplication when creating or sharing data visualizations.\n\nUsing MongoDB Charts, in a few clicks we can visualize the results of our data in Figure 2.\n\n*Figure 2. Distribution of AGR rating in Japan*\n\n### Step 3: Visualize the output for multiple countries\nLet\u2019s go a step further and group the results for multiple countries. We can add more countries \u2014 for instance, Japan and Hong Kong \u2014 and then $group and $count the results for them in Figure 3.\n\n*Figure 3. 
$match stage run in MongoDB Compass*\n\nMoving back to Charts, we can easily display the results comparing governance risks for Hong Kong and Japan, as shown in Figure 4.\n\n*Figure 4. Compared distribution of AGR ratings - Japan vs Hong Kong*\n\n## Scenario 2: Joins and data analysis using an aggregation pipeline\n\n**Source Data Set**: AGR Ratings \n**Collection**: `accounting_governance_risk_agr_ratings`\n\n**Data Set**: Country Fundamental Risk Indicators\n**Collection**: `focus_risk_scores`\n\nFrom MSCI - *\u201c**GeoQuant's Country Fundamental Risk Indicators** fuses political and computer science to measure and predict political risk. GeoQuant's machine-learning software scrapes the web for large volumes of reputable data, news, and social media content. \u201c*\n\n**Fields/Data Info:**\n\n* **Health (Health Risk)** - Quality of/access to health care, resilience to disease \n* **IR (International Relations Risk)** - Prevalence/likelihood of diplomatic, military, and economic conflict with other countries \n* **PolViol (Political Violence Risk)** - Prevalence/likelihood of civil war, insurgency, terrorism\n\nWith the basics of MongoDB\u2019s query framework understood, let\u2019s move on to more complex queries, again using MongoDB\u2019s aggregation pipeline capabilities.\n\nWith MongoDB\u2019s document data model, we can nest documents within a parent document. In addition, we are able to perform query operations over those nested fields.\n\nImagine a scenario where we have two separate collections of ESG data, and we want to combine information from one collection into another, fetch that data into the result array, and further filter and transform the data.\n\nWe can do this using an aggregation pipeline.\n\nLet\u2019s say we want more detailed results for companies located in a particular country \u2014 for instance, by combining data from `focus_risk_scores` with our primary collection: `accounting_governance_risk_agr_ratings`.\n\n*Figure 5. accounting_governance_risk_agr_ratings collection in MongoDB Compass*\n\n*Figure 6. focus_risk_scores collection in MongoDB Compass*\n\nIn order to do that, we use the **$lookup** stage, which adds a new array field to each input document. It contains the matching documents from the \"joined\" collection. This is similar to the joins used in relational databases. You may ask, \"What is $lookup syntax?\"\n\nTo perform an equality match between a field from the input documents with a field from the documents of the \"joined\" collection, the $lookup stage has this syntax:\n\n```\n{\n $lookup:\n {\n from: ,\n localField: ,\n foreignField: ,\n as: \n }\n}\n```\nIn our case, we want to join and match the value of **Issuer_Cntry_Domicile** from the collection **accounting_governance_risk_agr_ratings** with the value of **Country** field from the collection **focus_risk_scores**, as shown in Figure 7.\n\n*Figure 7. $lookup stage run in MongoDB Compass*\n\nAfter performing the $lookup operation, we receive the data into the \u2018result\u2019 array field. \n\nImagine that at this point, we decide only to display **Issuer_Name** and **Issuer_Cntry_Domicle** from the first collection. We can do so with the $project operator and define the fields that we want to be visible for us in Figure 8.\n\n*Figure 8. $project stage run in MongoDB Compass*\n\nAdditionally, we remove the **result_.id** field that comes from the original document from the other collection as we do not need it at this stage. Here comes the handy **$unset** stage.\n\n*Figure 9. 
$unset stage run in MongoDB Compass*\n\nWith our data now cleaned up and viewable in one collection, we can go further and edit the data set with new custom fields and categories.\n\n**Updating fields**\n\nLet\u2019s say we would like to set up new fields that categorize Health, IR, and PolViol lists separately.\n\nTo do so, we can use the $set operator. We use it to create new fields \u2014 health_risk, politcial_violance_risk, international_relations_risk \u2014 where each of the respective fields will consist of an array with only those elements that match the condition specified in $filter operator. \n\n**$filter** has the following syntax:\n\n```\n{\n $filter:\n {\n input: ,\n as: ,\n cond: \n }\n}\n```\n\n**input** \u2014 An expression that resolves to an array.\n\n**as** \u2014 A name for the variable that represents each individual element of the input array. \n\n**cond** \u2014 An expression that resolves to a boolean value used to determine if an element should be included in the output array. The expression references each element of the input array individually with the variable name specified in as.\n\nIn our case, we perform the $filter stage where the input we specify as \u201c$result\u201d array.\n\nWhy dollar sign and field name?\n\nThis prefixed field name with a dollar sign $ is used in aggregation expressions to access fields in the input documents (the ones from the previous stage and its result field).\n\nFurther, we name every individual element from that $result field as \u201cmetric\u201d.\n\nTo resolve the boolean we define conditional expression, in our case, we want to run an equality match for a particular metric \"$$metric.Risk\" (following the \"$$.\" syntax that accesses a specific field in the metric object).\n\nAnd define and filter those elements to the appropriate value (\u201cHealth\u201d, \u201cPolViol\u201d, \u201cIR\u201d).\n\n```\n cond: {\n $eq: \"$$metric.Risk\", \"Health\"],\n }\n```\nThe full query can be seen below in Figure 10.\n\n![$set stage and $filter operator run in MongoDB Compass\n*Figure 10. $set stage and $filter operator run in MongoDB Compass*\n\nAfter we consolidate the fields that are interesting for us, we can remove redundant result array and use **$unset** operator once again to remove **result** field.\n\n*Figure 11. $unset stage run in MongoDB Compass*\n\nThe next step is to calculate the average risk of every category (Health, International Relations, Political Violence) between country of origin where Company resides (\u201cCountry\u201d field) and other countries (\u201cPrimary_Countries\u201d field) with $avg operator within $set stage (as seen in Figure 12).\n\n*Figure 12. $set stage run in MongoDB Compass*\n\nAnd display only the companies whose average values are greater than 0, with a simple $match operation Figure 13.\n\n*Figure 13. $match stage run in MongoDB Compass*\n\nSave the data (merge into) and display the results in the chart.\n\nOnce again, we can use the $merge operator to save the result of the aggregation and then visualize it using MongoDB Charts Figure 14.\n\n*Figure 14. $merge stage run in MongoDB Compass*\n\nLet\u2019s take our data set and create a chart of the Average Political Risk for each company, as displayed in Figure 15. \n\n*Figure 15. Average Political Risk per Company in MongoDB Atlas Charts*\n\nWe can also create Risk Charts per category of risk, as seen in Figure 16.\n\n*Figure 16. average international risk per company in MongoDB Atlas Charts*\n\n*Figure 17. 
average health risk per company in MongoDB Atlas Charts*\n\nBelow is a snippet with all the aggregation operators mentioned in Scenario 2:\n\n```\n\n {\n $lookup: {\n from: \"focus_risk_scores\",\n localField: \"Issuer_Cntry_Domicile\",\n foreignField: \"Country\",\n as: \"result\",\n },\n },\n {\n $project: {\n _id: 1,\n Issuer_Cntry_Domicile: 1,\n result: 1,\n Issuer_Name: 1,\n },\n },\n {\n $unset: \"result._id\",\n },\n {\n $set: {\n health_risk: {\n $filter: {\n input: \"$result\",\n as: \"metric\",\n cond: {\n $eq: [\"$$metric.Risk\", \"Health\"],\n },\n },\n },\n political_violence_risk: {\n $filter: {\n input: \"$result\",\n as: \"metric\",\n cond: {\n $eq: [\"$$metric.Risk\", \"PolViol\"],\n },\n },\n },\n international_relations_risk: {\n $filter: {\n input: \"$result\",\n as: \"metric\",\n cond: {\n $eq: [\"$$metric.Risk\", \"IR\"],\n },\n },\n },\n },\n },\n {\n $unset: \"result\",\n },\n {\n $set: {\n health_risk_avg: {\n $avg: \"$health_risk.risk_values\",\n },\n political_risk_avg: {\n $avg: \"$political_violence_risk.risk_values\",\n },\n international_risk_avg: {\n $avg: \"$international_relations_risk.risk_values\",\n },\n },\n },\n {\n $match: {\n health_risk_avg: {\n $gt: 0,\n },\n political_risk_avg: {\n $gt: 0,\n },\n international_risk_avg: {\n $gt: 0,\n },\n },\n },\n {\n $merge: {\n into: {\n db: \"testDB\",\n coll: \"agr_avg_risks\",\n },\n on: \"_id\",\n },\n },\n]\n```\n\n## Scenario 3: Environmental indexes \u2014 integrating geospatial ESG data \n**Data Set**: [Supply Chain Risks\n**Collection**: `supply_chain_risk_metrics`\n\nFrom MSCI - *\u201cElevate\u2019s Supply Chain ESG Risk Ratings aggregates data from its verified audit database to the country level. The country risk assessment includes an overall score as well as 38 sub-scores organized under labor, health and safety, environment, business ethics, and management systems.\u201d*\n\nESG data processing requires the handling of a variety of structured and unstructured data consisting of financial, non-financial, and even climate-related geographical data. In this final scenario, we will combine data related to environmental scoring \u2014 especially wastewater, air, environmental indexes, and geo-locations data \u2014 and present them in a geo-spatial format to help business users quickly identify the risks.\n\nMongoDB provides a flexible and powerful multimodel data management approach and includes the support of storing and querying geospatial data using GeoJSON objects or as legacy coordinate pairs. We shall see in this example how this can be leveraged for handling the often complex ESG data. \n\nFirstly, let\u2019s filter and group the data. Using $match and $group operators, we can filter and group the country per country and province, as shown in Figure 15 and Figure 16.\n\n*Figure 18. $match stage run in MongoDB Compass*\n\n*Figure 19. $group stage run in MongoDB Compass*\n\nNow that we have the data broken out by region and country, in this case Vietnam, let\u2019s display the information on a map.\n\nIt doesn\u2019t matter that the original ESG data did not include comprehensive geospatial data or data in GeoJSON format, as we can simply augment our data set with the latitude and longitude for each region.\n\nUsing the $set operator, we can apply the logic for all regions of the data, as shown in Figure 20.\n\nLeveraging the $switch operator, we evaluate a series of case expressions and set the coordinates of longitude and latitude for the particular province in Vietnam.\n\n*Figure 20. 
$set stage and $switch operator run in MongoDB Compass*\n\nUsing MongoDB Charts\u2019 built-in heatmap feature, we can now display the maximum air emission, environment management, and water waste metrics data for Vietnamese regions as a color-coded heat map.\n\n*Figure 21. heatmaps of Environment, Air Emission, Water Waste Indexes in Vietnam in MongoDB Atlas Charts*\n\nBelow is a snippet with all the aggregation operators mentioned in Scenario 3:\n\n```\n{\n $match: {\n Country: {\n $ne: 'null'\n },\n Province: {\n $ne: 'All'\n }\n }\n}, {\n $group: {\n _id: {\n country: '$Country',\n province: '$Province'\n },\n environment_management: {\n $max: '$Environment_Management_Index_Elevate'\n },\n air_emssion_index: {\n $max: '$Air_Emissions_Index_Elevate'\n },\n water_waste_index: {\n $max: '$Waste_Management_Index_Elevate'\n }\n }\n}, {\n $project: {\n country: '$_id.country',\n province: '$_id.province',\n environment_management: 1,\n air_emssion_index: 1,\n water_waste_index: 1,\n _id: 0\n }\n}, {\n $set: {\n loc: {\n $switch: {\n branches: [\n {\n 'case': {\n $eq: [\n '$province',\n 'Southeast'\n ]\n },\n then: {\n type: 'Point',\n coordinates: [\n 105.8,\n 21.02\n ]\n }\n },\n {\n 'case': {\n $eq: [\n '$province',\n 'North Central Coast'\n ]\n },\n then: {\n type: 'Point',\n coordinates: [\n 105.54,\n 18.2\n ]\n }\n },\n {\n 'case': {\n $eq: [\n '$province',\n 'Northeast'\n ]\n },\n then: {\n type: 'Point',\n coordinates: [\n 105.51,\n 21.01\n ]\n }\n },\n {\n 'case': {\n $eq: [\n '$province',\n 'Mekong Delta'\n ]\n },\n then: {\n type: 'Point',\n coordinates: [\n 105.47,\n 10.02\n ]\n }\n },\n {\n 'case': {\n $eq: [\n '$province',\n 'Central Highlands'\n ]\n },\n then: {\n type: 'Point',\n coordinates: [\n 108.3,\n 12.4\n ]\n }\n },\n {\n 'case': {\n $eq: [\n '$province',\n 'Northwest'\n ]\n },\n then: {\n type: 'Point',\n coordinates: [\n 103.1,\n 21.23\n ]\n }\n },\n {\n 'case': {\n $eq: [\n '$province',\n 'South Central Coast'\n ]\n },\n then: {\n type: 'Point',\n coordinates: [\n 109.14,\n 13.46\n ]\n }\n },\n {\n 'case': {\n $eq: [\n '$province',\n 'Red River Delta'\n ]\n },\n then: {\n type: 'Point',\n coordinates: [\n 106.3,\n 21.11\n ]\n }\n }\n ],\n 'default': null\n }\n }\n }\n}]\n```\n\n## Speed, performance, and flexibility\nAs we can see from the scenarios above, MongoDB\u2019s out-of-the box tools and capabilities \u2014 including a powerful aggregation pipeline framework for simple or complex data processing, Charts for data visualization, geospatial data management, and native drivers \u2014 can easily and quickly combine different ESG-related resources and produce actionable insights.\n\nMongoDB has a distinct advantage over relational databases when it comes to handling ESG data, negating the need to produce the ORM mapping for each data set. 
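As an illustration of how little glue code is needed, a pipeline designed visually in Compass can be exported and run unchanged from application code with any native driver. Below is a minimal sketch using the Node.js driver that re-runs the Scenario 1 stages (the connection string and database name are placeholders, and the `$merge` stage is dropped so the results come straight back to the application):

```
const { MongoClient } = require("mongodb");

async function countAgrRatings() {
  // Placeholder URI: replace with your own Atlas connection string.
  const client = new MongoClient("mongodb+srv://<user>:<password>@<cluster>/");
  try {
    const collection = client
      .db("esg") // assumption: the database holding the imported MSCI data
      .collection("accounting_governance_risk_agr_ratings");

    // Same $match/$group/$sort stages as Scenario 1.
    const results = await collection.aggregate([
      { $match: { Issuer_Cntry_Domicile: "JP" } },
      { $group: { _id: "$Agr_Rating", total_count: { $sum: 1 } } },
      { $sort: { total_count: -1 } }
    ]).toArray();

    console.log(results);
  } finally {
    await client.close();
  }
}

countAgrRatings().catch(console.error);
```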
\n\nImport any type of ESG data, model the data to fit your specific use case, and perform tests and analytics on that data with only a few commands.\n\nTo learn more about how MongoDB can help with your ESG needs, please visit our [dedicated solution page.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "In this quick guide, we will show you how MongoDB can move ESG data from different data sources to the document model, and more!\n", "contentType": "Tutorial"}, "title": "The 5-Minute Guide to Working with ESG Data on MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/node-aggregation-framework-3-3-2", "action": "created", "body": "# Aggregation Framework with Node.js 3.3.2 Tutorial\n\nWhen you want to analyze data stored in MongoDB, you can use MongoDB's powerful aggregation framework to do so. Today, I'll give you a high-level overview of the aggregation framework and show you how to use it.\n\n>This post uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.\n>\n>Click here to see a newer version of this post that uses MongoDB 4.4, MongoDB Node.js Driver 3.6.4, and Node.js 14.15.4.\n\nIf you're just joining us in this Quick Start with MongoDB and Node.js series, welcome! So far, we've covered how to connect to MongoDB and perform each of the CRUD (Create, Read, Update, and Delete) operations. The code we write today will use the same structure as the code we built in the first post in the series; so, if you have any questions about how to get started or how the code is structured, head back to that first post.\n\nAnd, with that, let's dive into the aggregation framework!\n\n>If you are more of a video person than an article person, fear not. I've made a video just for you! The video below covers the same content as this article.\n>\n>:youtube]{vid=iz37fDe1XoM}\n>\n>Get started with an M0 cluster on [Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.\n\n## What is the Aggregation Framework?\n\nThe aggregation framework allows you to analyze your data in real time. Using the framework, you can create an aggregation pipeline that consists of one or more stages. Each stage transforms the documents and passes the output to the next stage.\n\nIf you're familiar with the Linux pipe ( `|` ), you can think of the aggregation pipeline as a very similar concept. Just as output from one command is passed as input to the next command when you use piping, output from one stage is passed as input to the next stage when you use the aggregation pipeline.\n\nThe aggregation framework has a variety of stages available for you to use. Today, we'll discuss the basics of how to use $match, $group, $sort, and $limit. Note that the aggregation framework has many other powerful stages including $count, $geoNear, $graphLookup, $project, $unwind, and others.\n\n## How Do You Use the Aggregation Framework?\n\nI'm hoping to visit the beautiful city of Sydney, Australia soon. Sydney is a huge city with many suburbs, and I'm not sure where to start looking for a cheap rental. I want to know which Sydney suburbs have, on average, the cheapest one-bedroom Airbnb listings.\n\nI could write a query to pull all of the one-bedroom listings in the Sydney area and then write a script to group the listings by suburb and calculate the average price per suburb. Or, I could write a single command using the aggregation pipeline. 
Let's use the aggregation pipeline.\n\nThere is a variety of ways you can create aggregation pipelines. You can write them manually in a code editor or create them visually inside of MongoDB Atlas or MongoDB Compass. In general, I don't recommend writing pipelines manually as it's much easier to understand what your pipeline is doing and spot errors when you use a visual editor. Since you're already setup to use MongoDB Atlas for this blog series, we'll create our aggregation pipeline in Atlas.\n\n### Navigate to the Aggregation Pipeline Builder in Atlas\n\nThe first thing we need to do is navigate to the Aggregation Pipeline Builder in Atlas.\n\n1. Navigate to Atlas and authenticate if you're not already authenticated.\n2. In the **Organizations** menu in the upper-left corner, select the organization you are using for this Quick Start series.\n3. In the **Projects** menu (located beneath the Organizations menu), select the project you are using for this Quick Start series.\n4. In the right pane for your cluster, click **COLLECTIONS**.\n5. In the list of databases and collections that appears, select **listingsAndReviews**.\n6. In the right pane, select the **Aggregation** view to open the Aggregation Pipeline Builder.\n\nThe Aggregation Pipeline Builder provides you with a visual representation of your aggregation pipeline. Each stage is represented by a new row. You can put the code for each stage on the left side of a row, and the Aggregation Pipeline Builder will automatically provide a live sample of results for that stage on the right side of the row.\n\n## Build an Aggregation Pipeline\n\nNow we are ready to build an aggregation pipeline.\n\n### Add a $match Stage\n\nLet's begin by narrowing down the documents in our pipeline to one-bedroom listings in the Sydney, Australia market where the room type is \"Entire home/apt.\" We can do so by using the $match stage.\n\n1. On the row representing the first stage of the pipeline, choose **$match** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$match` operator in the code box for the stage.\n\n \n\n2. Now we can input a query in the code box. The query syntax for `$match` is the same as the `findOne()` syntax that we used in a previous post. Replace the code in the `$match` stage's code box with the following:\n\n``` json\n{\n bedrooms: 1,\n \"address.country\": \"Australia\",\n \"address.market\": \"Sydney\",\n \"address.suburb\": { $exists: 1, $ne: \"\" },\n room_type: \"Entire home/apt\"\n}\n```\n\nNote that we will be using the `address.suburb` field later in the pipeline, so we are filtering out documents where `address.suburb` does not exist or is represented by an empty string.\n\nThe Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 20 documents that will be included in the results after the `$match` stage is executed.\n\n### Add a $group Stage\n\nNow that we have narrowed our documents down to one-bedroom listings in the Sydney, Australia market, we are ready to group them by suburb. We can do so by using the $group stage.\n\n1. Click **ADD STAGE**. A new stage appears in the pipeline.\n2. On the row representing the new stage of the pipeline, choose **$group** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$group` operator in the code box for the stage.\n\n \n\n3. Now we can input code for the `$group` stage. 
We will provide an `_id`, which is the field that the Aggregation Framework will use to create our groups. In this case, we will use `$address.suburb` as our `_id`. Inside of the $group stage, we will also create a new field named `averagePrice`. We can use the $avg aggregation pipeline operator to calculate the average price for each suburb. Replace the code in the $group stage's code box with the following:\n\n``` json\n{\n _id: \"$address.suburb\",\n averagePrice: {\n \"$avg\": \"$price\"\n }\n}\n```\n\nThe Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 20 documents that will be included in the results after the `$group` stage is executed. Note that the documents have been transformed. Instead of having a document for each listing, we now have a document for each suburb. The suburb documents have only two fields: `_id` (the name of the suburb) and `averagePrice`.\n\n### Add a $sort Stage\n\nNow that we have the average prices for suburbs in the Sydney, Australia market, we are ready to sort them to discover which are the least expensive. We can do so by using the $sort stage.\n\n1. Click **ADD STAGE**. A new stage appears in the pipeline.\n2. On the row representing the new stage of the pipeline, choose **$sort** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$sort` operator in the code box for the stage.\n\n \n\n3. Now we are ready to input code for the `$sort` stage. We will sort on the `$averagePrice` field we created in the previous stage. We will indicate we want to sort in ascending order by passing `1`. Replace the code in the `$sort` stage's code box with the following:\n\n``` json\n{\n \"averagePrice\": 1\n}\n```\n\nThe Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 20 documents that will be included in the results after the `$sort` stage is executed. Note that the documents have the same shape as the documents in the previous stage; the documents are simply sorted from least to most expensive.\n\n### Add a $limit Stage\n\nNow we have the average prices for suburbs in the Sydney, Australia market sorted from least to most expensive. We may not want to work with all of the suburb documents in our application. Instead, we may want to limit our results to the 10 least expensive suburbs. We can do so by using the $limit stage.\n\n1. Click **ADD STAGE**. A new stage appears in the pipeline.\n2. On the row representing the new stage of the pipeline, choose **$limit** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$limit` operator in the code box for the stage.\n\n \n\n3. Now we are ready to input code for the `$limit` stage. Let's limit our results to 10 documents. Replace the code in the $limit stage's code box with the following:\n\n``` json\n10\n```\n\nThe Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 10 documents that will be included in the results after the `$limit` stage is executed. 
Note that the documents have the same shape as the documents in the previous stage; we've simply limited the number of results to 10.\n\n## Execute an Aggregation Pipeline in Node.js\n\nNow that we have built an aggregation pipeline, let's execute it from inside of a Node.js script.\n\n### Get a Copy of the Node.js Template\n\nTo make following along with this blog post easier, I've created a starter template for a Node.js script that accesses an Atlas cluster.\n\n1. Download a copy of template.js.\n2. Open `template.js` in your favorite code editor.\n3. Update the Connection URI to point to your Atlas cluster. If you're not sure how to do that, refer back to the first post in this series.\n4. Save the file as `aggregation.js`.\n\nYou can run this file by executing `node aggregation.js` in your shell. At this point, the file simply opens and closes a connection to your Atlas cluster, so no output is expected. If you see DeprecationWarnings, you can ignore them for the purposes of this post.\n\n### Create a Function\n\nLet's create a function whose job it is to print the cheapest suburbs for a given market.\n\n1. Continuing to work in `aggregation.js`, create an asynchronous function named `printCheapestSuburbs` that accepts a connected MongoClient, a country, a market, and the maximum number of results to print as parameters.\n\n ``` js\n async function printCheapestSuburbs(client, country, market, maxNumberToPrint) {\n }\n ```\n\n2. We can execute a pipeline in Node.js by calling\n Collection's\n aggregate().\n Paste the following in your new function:\n\n ``` js\n const pipeline = ];\n\n const aggCursor = client.db(\"sample_airbnb\")\n .collection(\"listingsAndReviews\")\n .aggregate(pipeline);\n ```\n\n3. The first param for `aggregate()` is a pipeline of type object. We could manually create the pipeline here. Since we've already created a pipeline inside of Atlas, let's export the pipeline from there. Return to the Aggregation Pipeline Builder in Atlas. Click the **Export pipeline code to language** button.\n\n ![Export pipeline in Atlas\n\n4. The **Export Pipeline To Language** dialog appears. In the **Export Pipleine To** selection box, choose **NODE**.\n5. In the Node pane on the right side of the dialog, click the **copy** button.\n6. Return to your code editor and paste the `pipeline` in place of the empty object currently assigned to the pipeline constant.\n\n ``` js\n const pipeline = \n {\n '$match': {\n 'bedrooms': 1,\n 'address.country': 'Australia', \n 'address.market': 'Sydney', \n 'address.suburb': {\n '$exists': 1, \n '$ne': ''\n }, \n 'room_type': 'Entire home/apt'\n }\n }, {\n '$group': {\n '_id': '$address.suburb', \n 'averagePrice': {\n '$avg': '$price'\n }\n }\n }, {\n '$sort': {\n 'averagePrice': 1\n }\n }, {\n '$limit': 10\n }\n ];\n ```\n\n7. This pipeline would work fine as written. However, it is hardcoded to search for 10 results in the Sydney, Australia market. We should update this pipeline to be more generic. Make the following replacements in the pipeline definition:\n 1. Replace `'Australia'` with `country`\n 2. Replace `'Sydney'` with `market`\n 3. Replace `10` with `maxNumberToPrint`\n\n8. `aggregate()` will return an [AggregationCursor, which we are storing in the `aggCursor` constant. An AggregationCursor allows traversal over the aggregation pipeline results. We can use AggregationCursor's forEach() to iterate over the results. 
Paste the following inside `printCheapestSuburbs()` below the definition of `aggCursor`.\n\n``` js\nawait aggCursor.forEach(airbnbListing => {\n console.log(`${airbnbListing._id}: ${airbnbListing.averagePrice}`);\n});\n```\n\n### Call the Function\n\nNow we are ready to call our function to print the 10 cheapest suburbs in the Sydney, Australia market. Add the following call in the `main()` function beneath the comment that says `Make the appropriate DB calls`.\n\n``` js\nawait printCheapestSuburbs(client, \"Australia\", \"Sydney\", 10);\n```\n\nRunning aggregation.js results in the following output:\n\n``` json\nBalgowlah: 45.00\nWilloughby: 80.00\nMarrickville: 94.50\nSt Peters: 100.00\nRedfern: 101.00\nCronulla: 109.00\nBellevue Hill: 109.50\nKingsgrove: 112.00\nCoogee: 115.00\nNeutral Bay: 119.00\n```\n\nNow I know what suburbs to begin searching as I prepare for my trip to Sydney, Australia.\n\n## Wrapping Up\n\nThe aggregation framework is an incredibly powerful way to analyze your data. Learning to create pipelines may seem a little intimidating at first, but it's worth the investment. The aggregation framework can get results to your end-users faster and save you from a lot of scripting.\n\nToday, we only scratched the surface of the aggregation framework. I highly recommend MongoDB University's free course specifically on the aggregation framework: M121: The MongoDB Aggregation Framework. The course has a more thorough explanation of how the aggregation framework works and provides detail on how to use the various pipeline stages.\n\nThis post included many code snippets that built on code written in the first post of this MongoDB and Node.js Quick Start series. To get a full copy of the code used in today's post, visit the Node.js Quick Start GitHub Repo.\n\nNow you're ready to move on to the next post in this series all about change streams and triggers. In that post, you'll learn how to automatically react to changes in your database.\n\nQuestions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB"], "pageDescription": "Discover how to analyze your data using MongoDB's Aggregation Framework and Node.js.", "contentType": "Quickstart"}, "title": "Aggregation Framework with Node.js 3.3.2 Tutorial", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/crypto-news-website", "action": "created", "body": "# Building a Crypto News Website in C# Using the Microsoft Azure App Service and MongoDB Atlas\n\nWho said creating a website has to be hard?\n\nWriting the code, persisting news, hosting the website. A decade ago, this might have been a lot of work. These days, thanks to Microsoft Blazor, Microsoft Azure App Service, and MongoDB Atlas, you can get started in minutes. And finish it equally fast!\n\nIn this tutorial, I will walk you through:\n\n* Setting up a new Blazor project.\n* Creating a new page with a simple UI.\n* Creating data in MongoDB Atlas.\n* Showing those news on the website.\n* Making the website available by using Azure App Service to host it.\n\nAll you need is this tutorial and the following pre-requisites, but if you prefer to just read along for now, check out the GitHub repository for this tutorial where you can find the code and the tutorial.\n\n## Pre-requisites for this tutorial\n\nBefore we get started, here is a list of everything you need while working through the tutorial. 
I recommend getting everything set up first so that you can seamlessly follow along.\n\n* Download and install the .NET framework.\n For this tutorial, I am using .NET 7.0.102 for Windows, but any .NET 6.0 or higher should do.\n* Download and install Visual Studio.\n I am using the 2022 Community edition, version 17.4.4, but any 2019 or 2022 edition will be okay. Make sure to install the `Azure development` workload as we will be deploying with this later. If you already have an installed version of Visual Studio, go into the Installer and click `modify` to find it.\n* Sign up for a free Microsoft Azure account.\n* Sign up for a free MongoDB Atlas account.\n\n## Creating a new Microsoft Blazor project that will contain our crypto news website\n\nNow that the pre-requisites are out of the way, let's start by creating a new project.\n\nI have recently discovered Microsoft Blazor and I absolutely love it. Such an easy way to create websites quickly and easily. And you don't even have to write any JavaScript or PHP! Let's use it for this tutorial, as well. Search for `Blazor Server App` and click `Next`.\n\nChoose a `Project name` and `Location` of you liking. I like to have the solution and project in the same directory but you don't have to.\n\nChoose your currently installed .NET framework (as described in `Pre-requisites`) and leave the rest on default.\n\nHit `Create` and you are good to go!\n\n## Adding the MongoDB driver to the project to connect to the database\n\nBefore we start getting into the code, we need to add one NuGet package to the project: the MongoDB driver. The driver is a library that lets you easily access your MongoDB Atlas cluster and work with your database. Click on `Project` -> `Manage NuGet Packages...` and search for `MongoDB.Driver`.\n\nDuring that process, you might have to install additional components, like the ones shown in the following screenshot. Confirm this installation as we will need some of those, as well.\n\nAnother message you come across might be the following license agreements, which you need to accept to be able to work with those libraries.\n\n## Creating a new MongoDB Atlas cluster and database to host our crypto news\n\nNow that we've installed the driver, let's go ahead and create a cluster and database to connect to.\n\nWhen you register a new account, you will be presented with the selection of a cloud database to deploy. Open the `Advanced Configuration Options`.\nFor this tutorial, we only need the forever-free shared tier. Since the website will later be deployed to Azure, we also want the Atlas cluster deployed in Azure. And we also want both to reside in the same region. This way, we decrease the chance of having an additional latency as much as possible.\n\nHere, you can choose any region. Just make sure to chose the same one later on when deploying the website to Azure. The remaining options can be left on their defaults.\n\nThe final step of creating a new cluster is to think about security measures by going through the `Security Quickstart`.\n\nChoose a `Username` and `Password` for the database user that will access this cluster during the tutorial. For the `Access List`, we need add `0.0.0.0/0` since we do not know the IP address of our Azure deployment yet. This is okay for development purposes and testing, but in production, you should restrict the access to the specific IPs accessing Atlas.\n\nAtlas also supports the use of network peering and private connections using the major cloud providers. 
This includes Azure Private Link or Azure Virtual Private Connection (VPC), if you are using an M10 or above cluster.\n\nNow hit `Finish and Close`.\n\nCreating a new shared cluster happens very, very fast and you should be able to start within minutes. As soon as the cluster is created, you'll see it in your list of `Database Deployments`.\n\nLet's add some sample data for our website! Click on `Browse Collections` now.\n\nIf you've never worked with Atlas before, here are some vocabularies to get your started:\n\n- A cluster consists of multiple nodes (for redundancy).\n- A cluster can contain multiple databases (which are replicated onto all nodes).\n- Each database can contain many collections, which are similar to tables in a relational database.\n- Each collection can then contain many documents. Think rows, just better!\n- Documents are super-flexible because each document can have its own set of properties. They are easy to read and super flexible to work with JSON-like structures that contain our data.\n\n## Creating some test data in Atlas\n\nSince there is no data yet, you will see an empty list of databases and collections. Click on `Add My Own Data` to add the first entry.\n\nThe database name and collection name can be anything, but to be in line with the code we'll see later, call them `crypto-news-website` and `news` respectively, and hit `Create`.\n\nThis should lead to a new entry that looks like this:\n\nNext, click on `INSERT DOCUMENT`.\n\nThere are a couple things going on here. The `_id` has already been created automatically. Each document contains one of those and they are of type `ObjectId`. It uniquely identifies the document.\n\nBy hovering over the line count on the left, you'll get a pop-op to add more fields. Add one called `title` and set its value to whatever you like. The screenshot shows an example you can use. Choose `String` as the type on the right. Next, add a `date` and choose `Date` as the type on the right.\n\nRepeat the above process a couple times to get as much example data in there as you like. You may also just continue with one entry, though, if you like, and fill up your news when you are done.\n\n## Creating a connection string to access your MongoDB Atlas cluster\n\nThe final step within MongoDB Atlas is to actually create access to this database so that the MongoDB driver we installed into the project can connect to it. This is done by using a connection string.\nA connection string is a URI that contains username, password, and the host address of the database you want to connect to.\n\nClick on `Databases` on the left to get back to the cluster overview.\n\nThis time, hit the `Connect` button and then `Connect Your Application`.\nIf you haven't done so already, choose a username and password for the database user accessing this cluster during the tutorial. Also, add `0.0.0.0/0` as the IP address so that the Azure deployment can access the cluster later on.\n\nCopy the connection string that is shown in the pop-up.\n\n## Creating a new Blazor page\n\nIf you have never used Blazor before, just hit the `Run` button and have a look at the template that has been generated. It's a great start, and we will be reusing some parts of it later on.\n\nLet's add our own page first, though. In your Solution Explorer, you'll see a `Pages` folder. Right-click it and add a `Razor Component`. Those are files that combine the HTML of your page with C# code.\n\nNow, replace the content of the file with the following code. 
Explanations can be read inline in the code comments.\n\n```csharp\n@* The `page` attribute defines how this page can be opened. *@\n@page \"/news\"\n\n@* The `MongoDB` driver will be used to connect to your Atlas cluster. *@\n@using MongoDB.Driver\n@* `BSON` is a file format similar to JSON. MongoDB Atlas documents are BSON documents. *@\n@using MongoDB.Bson\n@* You need to add the `Data` folder as well. This is where the `News` class resides. *@\n@using CryptoNewsApp.Data\n@using Microsoft.AspNetCore.Builder\n\n@* The page title is what your browser tab will be called. *@\nNews\n\n@* Let's add a header to the page. *@\n\nNEWS\n\n@* And then some data. *@\n@* This is just a simple table contains news and their date. *@\n@if (_news != null)\n{\n \n \n \n News\n Date\n \n \n \n @* Blazor takes this data from the `_news` field that we will fill later on. *@\n @foreach (var newsEntry in _news)\n {\n \n @newsEntry.Title\n @newsEntry.Date\n \n }\n \n \n}\n\n@* This part defines the code that will be run when the page is loaded. It's basically *@\n@* what would usually be PHP in a non-Blazor environment. *@\n@code {\n \n // The `_news` field will hold all our news. We will have a look at the `News`\n // class in just a moment.\n private List? _news;\n\n // `OnInitializedAsync()` gets called when the website is loaded. Our data\n // retrieval logic has to be placed here.\n protected override async Task OnInitializedAsync()\n {\n // First, we need to create a `MongoClient` which is what we use to\n // connect to our cluster.\n // The only argument we need to pass on is the connection string you\n // retrieved from Atlas. Make sure to replace the password placeholder with your password.\n var mongoClient = new MongoClient(\"YOUR_CONNECTION_STRING\");\n // Using the `mongoCLient` we can now access the database.\n var cryptoNewsDatabase = mongoClient.GetDatabase(\"crypto-news-database\");\n // Having a handle to the database we can furthermore get the collection data.\n // Note that this is a generic function that takes `News` as it's parameter\n // to define who the documents in this collection look like.\n var newsCollection = cryptoNewsDatabase.GetCollection(\"news\");\n // Having access to the collection, we issue a `Find` call to find all documents.\n // A `Find` takes a filter as an argument. This filter is written as a `BsonDocument`.\n // Remember, `BSON` is really just a (binary) JSON.\n // Since we don't want to filter anything and get all the news, we pass along an\n // empty / new `BsonDocument`. The result is then transformed into a list with `ToListAsync()`.\n _news = await newsCollection.Find(new BsonDocument()).Limit(10).ToListAsync();\n // And that's it! 
It's as easy as that using the driver to access the data\n // in your MongoDB Atlas cluster.\n }\n\n}\n```\n\nAbove, you'll notice the `News` class, which still needs to be created.\nIn the `Data` folder, add a new C# class, call it `News`, and use the following code.\n\n```csharp\nusing MongoDB.Bson;\nusing MongoDB.Bson.Serialization.Attributes;\n\nnamespace CryptoNewsApp.Data\n{\n public class News\n {\n // The attribute `BsonId` signals the MongoDB driver that this field \n // should used to map the `_id` from the Atlas document.\n // Remember to use the type `ObjectId` here as well.\n BsonId] public ObjectId Id { get; set; }\n\n // The two other fields in each news are `title` and `date`.\n // Since the C# coding style differs from the Atlas naming style, we have to map them.\n // Thankfully there is another handy attribute to achieve this: `BsonElement`.\n // It takes the document field's name and maps it to the classes field name.\n [BsonElement(\"title\")] public String Title { get; set; }\n [BsonElement(\"date\")] public DateTime Date { get; set; }\n }\n}\n```\n\nNow it's time to look at the result. Hit `Run` again.\n\nThe website should open automatically. Just add `/news` to the URL to see your new News page.\n\n![Local Website showing news\n\nIf you want to learn more about how to add the news page to the menu on the left, you can have a look at more of my Blazor-specific tutorials.\n\n## Deploying the website to Azure App Service\n\nSo far, so good. Everything is running locally. Now to the fun part: going live!\n\nVisual Studio makes this super easy. Just click onto your project and choose `Publish...`.\n\nThe `Target` is `Azure`, and the `Specific target` is `Azure App Service (Windows)`.\n\nWhen you registered for Azure earlier, a free subscription should have already been created and chosen here. By clicking on `Create new` on the right, you can now create a new App Service.\n\nThe default settings are all totally fine. You can, however, choose a different region here if you want to. Finally, click `Create` and then `Finish`.\n\nWhen ready, the following pop-up should appear. By clicking `Publish`, you can start the actual publishing process. It eventually shows the result of the publish.\n\nThe above summary will also show you the URL that was created for the deployment. My example: https://cryptonewsapp20230124021236.azurewebsites.net/\n\nAgain, add `/news` to it to get to the News page.\n\n## What's next?\n\nGo ahead and add some more data. Add more fields or style the website a bit more than this default table.\n\nThe combination of using Microsoft Azure and MongoDB Atlas makes it super easy and fast to create websites like this one. But it is only the start. 
You can learn more about Azure on the Learn platform and about Atlas on the MongoDB University.\n\nAnd if you have any questions, please reach out to us at the MongoDB Forums or tweet @dominicfrei.", "format": "md", "metadata": {"tags": ["C#", "MongoDB", ".NET", "Azure"], "pageDescription": "This article by Dominic Frei will lead you through creating your first Microsoft Blazor server application and deploying it to Microsoft Azure.", "contentType": "Tutorial"}, "title": "Building a Crypto News Website in C# Using the Microsoft Azure App Service and MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/building-remix-applications", "action": "created", "body": "# Building Remix Applications with the MongoDB Stack\n\nThe JavaScript ecosystem has stabilized over the years. There isn\u2019t a new framework every other day, but some interesting projects are still emerging. Remix is one of those newer projects that is getting a lot of traction in the developer communities. Remix is based on top of React and lets you use the same code base between your back end and front end. The pages are server-side generated but also dynamically updated without full page reloads. This makes your web application much faster and even lets it run without JavaScript enabled. In this tutorial, you will learn how to use it with MongoDB using the new MongoDB-Remix stack.\n\n## Requirements\nFor this tutorial, you will need:\n\n* Node.js.\n* A MongoDB (free) cluster with the sample data loaded.\n\n## About the MongoDB-Remix stack\nRemix uses stacks of technology to help you get started with your projects. This stack, similar to others provided by the Remix team, includes React, TypeScript, and Tailwind. As for the data persistence layer, it uses MongoDB with the native JavaScript driver.\n\n## Getting started\nStart by initializing a new project. This can be done with the `create-remix` tool, which can be launched with npx. Answer the questions, and you will have the basic scaffolding for your project. Notice how we use the `--template` parameter to load the MongoDB Remix stack (`mongodb-developer/remix`) from Github. The second parameter specifies the folder in which you want to create this project.\n\n```\nnpx create-remix --template mongodb-developer/remix remix-blog\n```\n\nThis will start downloading the necessary packages for your application. Once everything is downloaded, you can `cd` into that directory and do a first build.\n\n```\ncd remix-blog\nnpm run build\n```\n\nYou\u2019re almost ready to start your application. Go to your MongoDB Atlas cluster (loaded with the sample data), and get your connection string.\n\nAt the root of your project, create a `.env` file with the `CONNECTION_STRING` variable, and paste your connection string. Your file should look like this.\n\n```\nCONNECTION_STRING=mongodb+srv://user:pass@cluster0.abcde.mongodb.net\n```\n\nAt this point, you should be able to point your browser to http://localhost:3000 and see the application running. \n\nVoil\u00e0! You\u2019ve got a Remix application that connects to your MongoDB database. You can see the movie list, which fetches data from the `sample_mflix` database. Clicking on a movie title will bring you to the movie details page, which shows the plot. 
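\n\nIf nothing is being served at that address yet, start the app first. The exact script names are defined in the template's `package.json`, but in a typical Remix project they look like this (an illustrative example rather than something specific to this template):\n\n```\nnpm run dev   # development server with live reload\nnpm start     # serve the production build created by npm run build\n```\n\n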
You can even add new movies to the collection if you want.\n\n## Exploring the application\nYou now have a running application, but you will likely want to connect to a database that shows something other than sample data. In this section, we describe the various moving parts of the sample application and how you can edit them for your purposes.\n\n### Database connection\nThe database connection is handled for you in the `/app/utils/db.server.ts` file. If you\u2019ve used other Remix stacks in the past, you will find this code very familiar. The MongoDB driver used here will manage the pool of connections. The connection string is read from an environment variable, so there isn\u2019t much you need to do here.\n\n### Movie list\nIn the sample code, we connect to the `sample_mflix` database and get the first 10 results from the collection. If you are familiar with Remix, you might already know that the code for this page is located in the `/app/routes/movies/index.tsx` file. The sample app uses the default naming convention from the Remix nested routes system.\n\nIn that file, you will see a loader at the top. This loader is used for the list of movies and the search bar on that page. \n\n```\nexport async function loader({ request }: LoaderArgs) {\n const url = new URL(request.url);\n\n let db = await mongodb.db(\"sample_mflix\");\n let collection = await db.collection(\"movies\");\n let movies = await collection.find({}).limit(10).toArray();\n\n // \u2026\n\n return json({movies, searchedMovies});\n}\n```\n\nYou can see that the application connects to the `sample_mflix` database and the `movies` collection. From there, it uses the find method to retrieve some records. It queries the collection with an empty/unfiltered request object with a limit of 10 to fetch the databases' first 10 documents. The MongoDB Query API provides many ways to search and retrieve data.\n\nYou can change these to connect to your own database and see the result. You will also need to change the `MovieComponent` (`/app/components/movie.tsx`) to accommodate the documents you fetch from your database.\n\n### Movie details\nThe movie details page can be found in `/app/routes/movies/$movieId.tsx`. In there, you will find similar code, but this time, it uses the findOne method to retrieve only a specific movie.\n\n```\nexport async function loader({ params }: LoaderArgs) {\n const movieId = params.movieId;\n\n let db = await mongodb.db(\"sample_mflix\");\n let collection = await db.collection(\"movies\");\n let movie = await collection.findOne({_id: new ObjectId(movieId)});\n\n return json(movie);\n}\n```\n\nAgain, this code uses the Remix routing standards to pass the `movieId` to the loader function.\n\n### Add movie\nYou might have noticed the _Add_ link on the left menu. This lets you create a new document in your collection. The code for adding the document can be found in the `/app/routes/movies/add.tsx` file. In there, you will see an action function. This function will get executed when the form is submitted. 
This is thanks to the Remix Form component that we use here.\n\n```\nexport async function action({ request }: ActionArgs) {\n const formData = await request.formData();\n const movie = {\n title: formData.get(\"title\"),\n year: formData.get(\"year\")\n }\n const db = await mongodb.db(\"sample_mflix\");\n const collection = await db.collection(\"movies\");\n const result = await collection.insertOne(movie);\n return redirect(`/movies/${result.insertedId}`);\n}\n```\n\nThe code retrieves the form data to build the new document and uses the insertOne method from the driver to add this movie to the collection. You will notice the redirect utility at the end. This will send the users to the newly created movie page after the entry was successfully created.\n\n## Next steps\nThat\u2019s it! You have a running application and know how to customize it to connect to your database. If you want to learn more about using the native driver, use the link on the left navigation bar of the sample app or go straight to the documentation. Try adding pages to update and delete an entry from your collection. It should be using the same patterns as you see in the template. If you need help with the template, please ask in our community forums; we\u2019ll gladly help.", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript"], "pageDescription": "In this tutorial, you will learn how to use Remix with MongoDB using the new MongoDB-Remix stack.", "contentType": "Article"}, "title": "Building Remix Applications with the MongoDB Stack", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/intro-to-realm-sdk-for-unity3d", "action": "created", "body": "# Introduction to the Realm SDK for Unity3D\n\nIn this video, Dominic Frei, iOS engineer on the Realm team, will\nintroduce you to the Realm SDK for Unity3D. He will be showing you how\nto integrate and use the SDK based on a Unity example created during the\nvideo so that you can follow along.\n\nThe video is separated into the following sections: \n- What is Realm and where to download it?\n- Creating an example project\n- Adding Realm to your project\n- Executing simple CRUD operations\n- Recap / Summary\n\n>\n>\n>Introduction to the Realm SDK for Unity3D\n>\n>:youtube]{vid=8jo_S02HLkI}\n>\n>\n\nFor those of you who prefer to read, below we have a full transcript of\nthe video too. Please be aware that this is verbatim and it might not be\nsufficient to understand everything without the supporting video.\n\n>\n>\n>If you have questions, please head to our [developer community\n>website where the MongoDB engineers and\n>the MongoDB community will help you build your next big idea with\n>MongoDB.\n>\n>\n\n## Transcript\n\nHello and welcome to a new tutorial! Today we're not talking about\nplaying games but rather how to make them. More specifically: how to use\nthe Realm SDK to persist your data in Unity. I will show you where to\ndownload Realm, how to add it to your project and how to write the\nnecessary code to save and load your data. Let's get started!\n\nRealm is an open-source, cross-platform database available for many\ndifferent platforms. Since we will be working with Unity we'll be using\nthe Realm .NET SDK. This is not yet available through the Unity package\nmanager so we need to download it from the Github repository directly. I\nwill put a link to it into the description. When you go to Releases you\ncan find the latest release right on the top of the page. 
Within the\nassets you'll find two Unity files. Make sure to choose the file that\nsays 'unity.bundle' and download it.\n\nBefore we actually start integrating Realm into our Unity project let's\ncreate a small example. I'll be creating it from scratch so that you can\nfollow along all the way and see how easy it is to integrate Realm into\nyour project. We will be using the Unity Editor version 2021.2.0a10. We\nneed to use this alpha version because there is a bug in the current\nUnity LTS version preventing Realm from working properly. I'll give it a\nname and choose a 2D template for this example.\n\nWe won't be doing much in the Unity Editor itself, most of the example\nwill take place in code. All we need to do here is to add a Square\nobject. The Square will change its color when we click on it and - as\nsoon as we add Realm to the project - the color will be persisted to the\ndatabase and loaded again when we start the game again. The square needs\nto be clickable, therefore we need to add a collider. I will choose a\n'Box Collider 2D' in this case. Finally we'll add a script to the\nsquare, call it 'Square' and open the script.\n\nThe first thing we're going to do before we actually implement the\nsquare's behaviour is to add another class which will hold our data, the\ncolor of our square. We'll call this 'ColorEntity'. All we need for now\nare three properties for the colors red, green and blue. They will be of\ntype float to match the UnityEngine's color properties and we'll default\nthem to 0, giving us an initial black color. Back in the Square\nMonoBehaviour I'll add a ColorEntity property since we'll need that in\nseveral locations. During the Awake of the Square we'll create a new\nColorEntity instance and then set the color of the square to this newly\ncreated ColorEntity by accessing it's SpriteRenderer and setting it's\ncolor. When we go back to the Unity Editor and enter Play Mode we'll see\na black square.\n\nOk, let's add the color change. Since we added a collider to the square\nwe can use the OnMouseDown event to listen for mouse clicks. All we want\nto do here is to assign three random values to our ColorEntity. We'll\nuse Random.Range and clamp it between 0 and 1. Finally we need to update\nthe square with these colors. To avoid duplicated code I'll grab the\nline from Awake where we set the color and put it in it's own function.\nNow we just call it in Awake and after every mouse click. Let's have a\nlook at the result.\n\nInitially we get our default black color. And with every click the color\nchanges. When I stop and start again, we'll of course end up with the\ninitial default again since the color is not yet saved. Let's do that\nnext!\n\nWe go to Window, Package Manager. From here we click on the plus icon\nand choose 'Add package from tarball'. Now you just have to choose the\ntarball downloaded earlier. Keep in mind that Unity does not import the\npackage and save it within your project but uses exactly this file\nwherever it is. If you move it, your project won't work anymore. I\nrecommend moving this file from your Downloads to the project folder\nfirst. As soon as it is imported you should see it in the Custom section\nof your package list. That's all we need to do in the Unity Editor,\nlet's get back to Visual Studio.\n\nLet's start with our ColorEntity. First, we want to import the Realm\npackage by adding 'using Realms'. 
The way Realm knows which objects are\nmeant to be saved in the database is by subclassing 'RealmObjects'.\nThat's all we have to do here really. To make our life a little bit\neasier though I'll also add some more things. First, we want to have a\nprimary key by which we can later find the object we're looking for\neasily. We'll just use the 'ObjectName' for that and add an attribute on\ntop of it, called 'PrimaryKey'. Next we add a default initialiser to\ncreate a new Realm object for this class and a convenience initialiser\nthat sets the ObjectName right away. Ok, back to our Square. We need to\nimport Realm here as well. Then we'll create a property for the Realm\nitself. This will later let us access our database. And all we need to\ndo to get access to it is to instantiate it. We'll do this during awake\nas well, since it only needs to be done once.\n\nNow that we're done setting up our Realm we can go ahead and look at how\nto perform some simple CRUD operations. First, we want to actually\ncreate the object in the database. We do this by calling add. Notice\nthat I have put this into a block that is passed to the write function.\nWe need to do this to tell our Realm that we are about to change data.\nIf another process was changing data at the same time we could end up in\na corrupt database. The write function makes sure that every other\nprocess is blocked from writing to the database at the time we're\nperforming this change.\n\nAnother thing I'd like to add is a check if the ColorEntity we just\ncreated already exists. If so, we don't need to create it again and in\nfact can't since primary keys have to be unique. We do this by asking\nour Realm for the ColorEntity we're looking for, identified by it's\nprimary key. I'll just call it 'square' for now. Now I check if the\nobject could be found and only if not, we'll be creating it with exactly\nthe same primary key. Whenever we update the color and therefore update\nthe properties of our ColorEntity we change data in our database.\nTherefore we also need to wrap our mouse click within a write block.\nLet's see how that looks in Unity. When we start the game we still see\nthe initial black state. We can still randomly update the color by\nclicking on the square. And when we stop and start Play Mode again, we\nsee the color persists now.\n\nLet's quickly recap what we've done. We added the Realm package in Unity\nand imported it in our script. We added the superclass RealmObject to\nour class that's supposed to be saved. And then all we need to do is to\nmake sure we always start a write transaction when we're changing data.\nNotice that we did not need any transaction to actually read the data\ndown here in the SetColor function.\n\nAlright, that's it for this tutorial. 
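\n\nBecause the transcript above is verbatim and the on-screen code is not reproduced in this article, here is a rough sketch of the two classes it describes. Property and object names are guesses based only on the transcript, so treat it as an approximation rather than the exact code from the video:\n\n```csharp\nusing Realms;\nusing UnityEngine;\n\n// The RealmObject described in the transcript, holding the square's color.\npublic class ColorEntity : RealmObject\n{\n    [PrimaryKey]\n    public string ObjectName { get; set; }\n\n    public float Red { get; set; }\n    public float Green { get; set; }\n    public float Blue { get; set; }\n\n    public ColorEntity() { }\n\n    public ColorEntity(string objectName) : this()\n    {\n        ObjectName = objectName;\n    }\n}\n\n// The MonoBehaviour attached to the square (a separate script in the project).\npublic class Square : MonoBehaviour\n{\n    private Realm _realm;\n    private ColorEntity _colorEntity;\n\n    private void Awake()\n    {\n        _realm = Realm.GetInstance();\n        _colorEntity = _realm.Find<ColorEntity>(\"square\");\n        if (_colorEntity == null)\n        {\n            // Creating the object counts as a change, so it happens inside a write transaction.\n            _realm.Write(() =>\n            {\n                _colorEntity = _realm.Add(new ColorEntity(\"square\"));\n            });\n        }\n        SetColor();\n    }\n\n    private void OnMouseDown()\n    {\n        // Updating properties is also a change and needs a write transaction.\n        _realm.Write(() =>\n        {\n            _colorEntity.Red = Random.Range(0f, 1f);\n            _colorEntity.Green = Random.Range(0f, 1f);\n            _colorEntity.Blue = Random.Range(0f, 1f);\n        });\n        SetColor();\n    }\n\n    private void SetColor()\n    {\n        // Reading does not require a transaction.\n        GetComponent<SpriteRenderer>().color =\n            new Color(_colorEntity.Red, _colorEntity.Green, _colorEntity.Blue);\n    }\n}\n```\n\n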
I hope you've learned how to use\nRealm in your Unity project to save and load data.\n\n>\n>\n>If you have questions, please head to our developer community\n>website where the MongoDB engineers and\n>the MongoDB community will help you build your next big idea with\n>MongoDB.\n>\n>\n\n", "format": "md", "metadata": {"tags": ["Realm", "Unity"], "pageDescription": "In this video, Dominic Frei, iOS engineer on the Realm team, will introduce you to the Realm SDK for Unity3D", "contentType": "News & Announcements"}, "title": "Introduction to the Realm SDK for Unity3D", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/go/get-hyped-using-docker-go-mongodb", "action": "created", "body": "# Get Hyped: Using Docker + Go with MongoDB\n\nIn the developer community, ensuring your projects run accurately regardless of the environment can be a pain. Whether it\u2019s trying to recreate a demo from an online tutorial or working on a code review, hearing the words, \u201cWell, it works on my machine\u2026\u201d can be frustrating. Instead of spending hours debugging, we want to introduce you to a platform that will change your developer experience: Docker. \n\nDocker is a great tool to learn because it provides developers with the ability for their applications to be used easily between environments, and it's resource-efficient in comparison to virtual machines. This tutorial will gently guide you through how to navigate Docker, along with how to integrate Go on the platform. We will be using this project to connect to our previously built MongoDB Atlas Search Cluster made for using Synonyms in Atlas Search. Stay tuned for a fun read on how to learn all the above while also expanding your Gen-Z slang knowledge from our synonyms cluster. Get hyped! \n\n## The Prerequisites\n\nThere are a few requirements that must be met to be successful with this tutorial.\n\n- A M0 or better MongoDB Atlas cluster\n- Docker Desktop\n\nTo use MongoDB with the Golang driver, you only need a free M0 cluster. To create this cluster, follow the instructions listed on the MongoDB documentation. However, we\u2019ll be making many references to a previous tutorial where we used Atlas Search with custom synonyms.\n\nSince this is a Docker tutorial, you\u2019ll need Docker Desktop. You don\u2019t actually need to have Golang configured on your host machine because Docker can take care of this for us as we progress through the tutorial.\n\n## Building a Go API with the MongoDB Golang Driver \n\nLike previously mentioned, you don\u2019t need Go installed and configured on your host computer to be successful. However, it wouldn\u2019t hurt to have it in case you wanted to test things prior to creating a Docker image.\n\nOn your computer, create a new project directory, and within that project directory, create a **src** directory with the following files:\n\n- go.mod\n- main.go\n\nThe **go.mod** file is our dependency management file for Go modules. It could easily be created manually or by using the following command:\n\n```bash\ngo mod init\n```\n\nThe **main.go** file is where we\u2019ll keep all of our project code.\n\nStarting with the **go.mod** file, add the following lines:\n\n```\nmodule github.com/mongodb-developer/docker-golang-example\ngo 1.15\nrequire go.mongodb.org/mongo-driver v1.7.0\nrequire github.com/gorilla/mux v1.8.0\n```\n\nEssentially, we\u2019re defining what version of Go to use and the modules that we want to use. 
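\n\nIf you would rather not type the `require` lines by hand and you already have Go installed locally, the same dependencies can be added from the **src** directory with `go get` (an optional alternative; the resulting **go.mod** should match the file above):\n\n```bash\ngo get go.mongodb.org/mongo-driver/mongo@v1.7.0\ngo get github.com/gorilla/mux@v1.8.0\n```\n\n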
For this project, we\u2019ll be using the MongoDB Go driver as well as the Gorilla Web Toolkit.\n\nThis brings us into the building of our simple API.\n\nWithin the **main.go** file, add the following code:\n\n```golang\npackage main\n\nimport (\n\"context\"\n\"encoding/json\"\n\"fmt\"\n\"net/http\"\n\"os\"\n\"time\"\n\n\"github.com/gorilla/mux\"\n\"go.mongodb.org/mongo-driver/bson\"\n\"go.mongodb.org/mongo-driver/mongo\"\n\"go.mongodb.org/mongo-driver/mongo/options\"\n)\n\nvar client *mongo.Client\nvar collection *mongo.Collection\n\ntype Tweet struct {\nID int64 `json:\"_id,omitempty\" bson:\"_id,omitempty\"`\nFullText string `json:\"full_text,omitempty\" bson:\"full_text,omitempty\"`\nUser struct {\nScreenName string `json:\"screen_name\" bson:\"screen_name\"`\n} `json:\"user,omitempty\" bson:\"user,omitempty\"`\n}\n\nfunc GetTweetsEndpoint(response http.ResponseWriter, request *http.Request) {}\nfunc SearchTweetsEndpoint(response http.ResponseWriter, request *http.Request) {}\n\nfunc main() {\nfmt.Println(\"Starting the application...\")\nctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)\ndefer cancel()\nclient, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv(\"MONGODB_URI\")))\ndefer func() {\nif err = client.Disconnect(ctx); err != nil {\npanic(err)\n}\n}()\ncollection = client.Database(\"synonyms\").Collection(\"tweets\")\nrouter := mux.NewRouter()\nrouter.HandleFunc(\"/tweets\", GetTweetsEndpoint).Methods(\"GET\")\nrouter.HandleFunc(\"/search\", SearchTweetsEndpoint).Methods(\"GET\")\nhttp.ListenAndServe(\":12345\", router)\n}\n```\n\nThere\u2019s more to the code, but before we see the rest, let\u2019s start breaking down what we have above to make sense of it.\n\nYou\u2019ll probably notice our `Tweets` data structure:\n\n```golang\ntype Tweet struct {\nID int64 `json:\"_id,omitempty\" bson:\"_id,omitempty\"`\nFullText string `json:\"full_text,omitempty\" bson:\"full_text,omitempty\"`\nUser struct {\nScreenName string `json:\"screen_name\" bson:\"screen_name\"`\n} `json:\"user,omitempty\" bson:\"user,omitempty\"`\n}\n```\n\nEarlier in the tutorial, we mentioned that this example is heavily influenced by a previous tutorial that used Twitter data. We highly recommend you take a look at it. This data structure has some of the fields that represent a tweet that we scraped from Twitter. We didn\u2019t map all the fields because it just wasn\u2019t necessary for this example.\n\nNext, you\u2019ll notice the following:\n\n```golang\nfunc GetTweetsEndpoint(response http.ResponseWriter, request *http.Request) {}\nfunc SearchTweetsEndpoint(response http.ResponseWriter, request *http.Request) {}\n```\n\nThese will be the functions that hold our API endpoint logic. We\u2019re going to skip these for now and focus on understanding the connection and configuration logic.\n\nAs of now, most of what we\u2019re interested in is happening in the `main` function.\n\nThe first thing we\u2019re doing is connecting to MongoDB:\n\n```golang\nctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)\ndefer cancel()\nclient, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv(\"MONGODB_URI\")))\ndefer func() {\nif err = client.Disconnect(ctx); err != nil {\npanic(err)\n}\n}()\ncollection = client.Database(\"synonyms\").Collection(\"tweets\")\n```\n\nYou\u2019ll probably notice the `MONGODB_URI` environment variable in the above code. It\u2019s not a good idea to hard-code the MongoDB connection string in the application. 
This prevents us from being flexible and it could be a security risk. Instead, we\u2019re using environment variables that we\u2019ll pass in with Docker when we deploy our containers.\n\nYou can visit the MongoDB Atlas dashboard for your URI string.\n\nThe database we plan to use is `synonyms` and we plan to use the `tweets` collection, both of which we talked about in that previous tutorial.\n\nAfter connecting to MongoDB, we focus on configuring the Gorilla Web Toolkit:\n\n```golang\nrouter := mux.NewRouter()\nrouter.HandleFunc(\"/tweets\", GetTweetsEndpoint).Methods(\"GET\")\nrouter.HandleFunc(\"/search\", SearchTweetsEndpoint).Methods(\"GET\")\nhttp.ListenAndServe(\":12345\", router)\n```\n\nIn this code, we are defining which endpoint path should route to which function. The functions are defined, but we haven\u2019t yet added any logic to them. The application itself will be serving on port 12345.\n\nAs of now, the application has the necessary basic connection and configuration information. Let\u2019s circle back to each of those endpoint functions.\n\nWe\u2019ll start with the `GetTweetsEndpoint` because it will work fine with an M0 cluster:\n\n```golang\nfunc GetTweetsEndpoint(response http.ResponseWriter, request *http.Request) {\nresponse.Header().Set(\"content-type\", \"application/json\")\nvar tweets ]Tweet\nctx, _ := context.WithTimeout(context.Background(), 30*time.Second)\ncursor, err := collection.Find(ctx, bson.M{})\nif err != nil {\nresponse.WriteHeader(http.StatusInternalServerError)\nresponse.Write([]byte(`{ \"message\": \"` + err.Error() + `\" }`))\nreturn\n}\nif err = cursor.All(ctx, &tweets); err != nil {\nresponse.WriteHeader(http.StatusInternalServerError)\nresponse.Write([]byte(`{ \"message\": \"` + err.Error() + `\" }`))\nreturn\n}\njson.NewEncoder(response).Encode(tweets)\n}\n```\n\nIn the above code, we\u2019re saying that we want to use the `Find` operation on our collection for all documents in that collection, hence the empty filter object.\n\nIf there were no errors, we can get all the results from our cursor, load them into a `Tweet` slice, and then JSON encode that slice for sending to the client. The client will receive JSON data as a result.\n\nNow we can look at the more interesting endpoint function.\n\n```golang\nfunc SearchTweetsEndpoint(response http.ResponseWriter, request *http.Request) {\nresponse.Header().Set(\"content-type\", \"application/json\")\nqueryParams := request.URL.Query()\nvar tweets []Tweet\nctx, _ := context.WithTimeout(context.Background(), 30*time.Second)\nsearchStage := bson.D{\n{\"$search\", bson.D{\n{\"index\", \"synsearch\"},\n{\"text\", bson.D{\n{\"query\", queryParams.Get(\"q\")},\n{\"path\", \"full_text\"},\n{\"synonyms\", \"slang\"},\n}},\n}},\n}\ncursor, err := collection.Aggregate(ctx, mongo.Pipeline{searchStage})\nif err != nil {\nresponse.WriteHeader(http.StatusInternalServerError)\nresponse.Write([]byte(`{ \"message\": \"` + err.Error() + `\" }`))\nreturn\n}\nif err = cursor.All(ctx, &tweets); err != nil {\nresponse.WriteHeader(http.StatusInternalServerError)\nresponse.Write([]byte(`{ \"message\": \"` + err.Error() + `\" }`))\nreturn\n}\njson.NewEncoder(response).Encode(tweets)\n}\n```\n\nThe idea behind the above function is that we want to use an aggregation pipeline for Atlas Search. 
It does use the synonym information that we outlined in the [previous tutorial.\n\nThe first important thing in the above code to note is the following:\n\n```golang\nqueryParams := request.URL.Query()\n```\n\nWe\u2019re obtaining the query parameters passed with the HTTP request. We\u2019re expecting a `q` parameter to exist with the search query to be used.\n\nTo keep things simple, we make use of a single stage for the MongoDB aggregation pipeline:\n\n```golang\nsearchStage := bson.D{\n{\"$search\", bson.D{\n{\"index\", \"synsearch\"},\n{\"text\", bson.D{\n{\"query\", queryParams.Get(\"q\")},\n{\"path\", \"full_text\"},\n{\"synonyms\", \"slang\"},\n}},\n}},\n}\n```\n\nIn this stage, we are doing a text search with a specific index and a specific set of synonyms. The query that we use for our text search comes from the query parameter of our HTTP request.\n\nAssuming that everything went well, we can load all the results from the cursor into a `Tweet` slice, JSON encode it, and return it to the client that requested it.\n\nIf you have Go installed and configured on your computer, go ahead and try to run this application. Just don\u2019t forget to add the `MONGODB_URI` to your environment variables prior.\n\nIf you want to learn more about API development with the Gorilla Web Toolkit and MongoDB, check out this tutorial on the subject.\n\n## Configuring a Docker Image for Go with MongoDB\n\nLet\u2019s get started with Docker! If it\u2019s a platform you\u2019ve never used before, it might seem a bit daunting at first, but let us guide you through it, step by step. We will be showing you how to download Docker and get started with setting up your first Dockerfile to connect to our Gen-Z Synonyms Atlas Cluster. \n\nFirst things first. Let\u2019s download Docker. This can be done through their website in just a couple of minutes. \n\nOnce you have that up and running, it\u2019s time to create your very first Dockerfile. \n\nAt the root of your project folder, create a new **Dockerfile** file with the following content:\n\n```\n#get a base image\nFROM golang:1.16-buster\n\nMAINTAINER anaiya raisinghani \n\nWORKDIR /go/src/app\nCOPY ./src .\n\nRUN go get -d -v\nRUN go build -v\n\nCMD \"./docker-golang-example\"]\n```\n\nThis format is what many Dockerfiles are composed of, and a lot of it is heavily customizable and can be edited to fit your project's needs. \n\nThe first step is to grab a base image that you\u2019re going to use to build your new image. You can think of using Dockerfiles as layers to a cake. There are a multitude of different base images out there, or you can use `FROM scratch` to start from an entirely blank image. Since this project is using the programming language Go, we chose to start from the `golang` base image and add the tag `1.16` to represent the version of Go that we plan to use. Whenever you include a tag next to your base image, be sure to set it up with a colon in between, just like this: `golang:1.16`. To learn more about which tag will benefit your project the best, check out [Docker\u2019s documentation on the subject.\n\nThis site holds a lot of different tags that can be used on a Golang base image. Tags are important because they hold very valuable information about the base image you\u2019re using such as software versions, operating system flavor, etc. 
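\n\nAs a purely illustrative comparison of what a tag encodes (both tags below exist on Docker Hub, and this tutorial sticks with the Buster variant shown above):\n\n```\nFROM golang:1.16-buster   # Go 1.16 on a Debian Buster base (what this tutorial uses)\nFROM golang:1.16-alpine   # the same Go version on a much smaller Alpine Linux base\n```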
\n\nLet\u2019s run through the rest of what will happen in this Dockerfile!\n\nIt's optional to include a `MAINTAINER` for your image, but it\u2019s good practice so that people viewing your Dockerfile can know who created it. It's not necessary, but it\u2019s helpful to include your full name and your email address in the file. \n\nThe `WORKDIR /go/src/app` command is crucial to include in your Dockerfile since `WORKDIR` specifies which working directory you\u2019re in. All the commands after will be run through whichever directory you choose, so be sure to be aware of which directory you\u2019re currently in.\n\nThe `COPY ./src .` command allows you to copy whichever files you want from the specified location on the host machine into the Docker image. \n\nNow, we can use the `RUN` command to set up exactly what we want to happen at image build time before deploying as a container. The first command we have is `RUN go get -d -v`, which will download all of the Go dependencies listed in the **go.mod** file that was copied into the image.. \n\nOur second `RUN` command is `RUN go build -v`, which will build our project into an executable binary file. \n\nThe last step of this Dockerfile is to use a `CMD` command, `CMD \u201c./docker-golang-example\u201d]`. This command will define what is run when the container is deployed rather than when the image is built. Essentially we\u2019re saying that we want the built Go application to be run when the container is deployed.\n\nOnce you have this Dockerfile set up, you can build and execute your project using your entire MongoDB URI link:\n\nTo build the Docker image and deploy the container, execute the following from the command line:\n\n```bash\ndocker build -t docker-syn-image .\ndocker run -d -p 12345:12345 -e \u201cMONGODB_URI=YOUR_URI_HERE\u201d docker-syn-image\n```\n\nFollowing these instructions will allow you to run the project and access it from http://localhost:12345. **But**! It\u2019s so tedious. What if we told you there was an easier way to run your application without having to write in the entire URI link? There is! All it takes is one extra step: setting up a Docker Compose file. \n\n## Setting Up a Docker Compose File to Streamline Deployments\n\nA Docker Compose file is a nice little step to run all your container files and dependencies through a simple command: `docker compose up`.\n\nIn order to set up this file, you need to establish a YAML configuration file first. Do this by creating a new file in the root of your project folder, naming it **docker-compose**, and adding **.yml** at the end. You can name it something else if you like, but this is the easiest since when running the `docker compose up` command, you won\u2019t need to specify a file name. Once that is in your project folder, follow the steps below.\n\nThis is what your Docker Compose file will look like once you have it all set up: \n\n```yaml\nversion: \"3.9\" \nservices:\n web:\n build: .\n ports:\n - \"12345:12345\"\n environment:\n MONGODB_URI: your_URI_here\n```\n\nLet\u2019s run through it!\n\nFirst things first. Determine which schema version you want to be running. You should be using the most recent version, and you can find this out through [Docker\u2019s documentation.\n\nNext, define which services, otherwise known as containers, you want to be running in your project. We have included `web` since we are attaching to our Atlas Search cluster. The name isn\u2019t important and it acts more as an identifier for that particular service. 
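\n\nOne optional variation on the `environment` section shown above (a sketch, assuming you keep a local `.env` file that is not committed to version control) is to let Compose read the URI from that file instead of writing it into the YAML:\n\n```yaml\nversion: \"3.9\"\nservices:\n  web:\n    build: .\n    ports:\n      - \"12345:12345\"\n    env_file:\n      - .env   # contains a line like MONGODB_URI=your_URI_here\n```\n\n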
Next, specify that you are building your application, and put in your `ports` information in the correct spot. For the next step, we can set up our `environment` as our MongoDB URI and we\u2019re done! \n\nNow, run the command `docker compose up` and watch the magic happen. Your container should build, then run, and you\u2019ll be able to connect to your port and see all the tweets!\n\n## Conclusion\n\nThis tutorial has now left you equipped with the knowledge you need to build a Go API with the MongoDB Golang driver, create a Dockerfile, create a Docker Compose file, and connect your newly built container to a MongoDB Atlas Cluster. \n\nUsing these new platforms will allow you to take your projects to a whole new level. \n\nIf you\u2019d like to take a look at the code used in our project, you can access it on GitHub.\n\nUsing Docker or Go, but have a question? Check out the MongoDB Community Forums!", "format": "md", "metadata": {"tags": ["Go", "Docker"], "pageDescription": "Learn how to create and deploy Golang-powered micro-services that interact with MongoDB using Docker.", "contentType": "Tutorial"}, "title": "Get Hyped: Using Docker + Go with MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/ionic-realm-web-app-convert-to-mobile-app", "action": "created", "body": "# Let\u2019s Give Your Realm-Powered Ionic Web App the Native Treatment on iOS and Android!\n\nRealm is an open-source, easy-to-use local database that helps mobile developers to build better apps, faster. It offers a data synchronization service\u2014MongoDB Realm Sync\u2014that makes it simple to move data between the client and MongoDB Atlas on the back end. Using Realm can save you from writing thousands of lines of code, and offers an intuitive way to work with your data.\n\nThe Ionic team posted a fantastic article on how you can use Ionic with Realm to build a React Web app quickly, taking advantage of Realm to easily persist your data in a MongoDB Atlas Database.\n\nAfter cloning the repo and running `ionic serve`, you'll have a really simple task management web application. You can register (using any user/password combination, Realm takes care of your onboarding needs). You can log in, have a look at your tasks, and add new tasks.\n\n| Login in the Web App | Browsing Tasks | \n|--------------|-----------|\n| | | \n\nLet\u2019s build on what the Ionic team created for the web, and expand it by building a mobile app for iOS and Android using one of the best features Ionic has: the _\u201cWrite Once, Run Anywhere\u201d_ approach to coding. I\u2019ll start with an iOS app.\n\n## Prerequisites\n\nTo follow along this post, you\u2019ll need five things:\n\n* A macOS-powered computer running Xcode (to develop for iOS). I\u2019m using Xcode 13 Beta. You don\u2019t have to risk your sanity.\n* Ionic installed. 
You can follow the instructions here, but TL;DR it\u2019s `npm install -g @ionic/cli`\n* Clone the repo with the Ionic React Web App that we\u2019ll turn into mobile.\n* As we need an Atlas Database to store our data in the cloud, and a Realm app to make it easy to work with Atlas from mobile, set up a Free Forever MongoDB cluster and create and import a Realm app schema so everything is ready server-side.\n* Once you have your Realm app created, copy the Realm app ID from the MongoDB admin interface for Realm, and paste it into `src/App.tsx`, in the line:\n\n`export const APP_ID = '';`\n\nOnce your `APP_ID` is set, run:\n\n```\n$ npm run build\n```\n\n## The iOS app\n\nTo add iOS capabilities to our existing app, we need to open a terminal and run:\n\n```bash\n$ ionic cap add ios\n``` \n\nThis will create the iOS Xcode Project native developers know and love, with the code from our Ionic app. I ran into a problem doing that and it was that the version of Capacitor used in the repo was 3.1.2, but for iOS, I needed at least 3.2.0. So, I just changed `package.json` and ran `npm install` to update Capacitor.\n\n`package.json` fragment:\n\n```\n...\n\"dependencies\": {\n\n \"@apollo/client\": \"^3.4.5\",\n \"@capacitor/android\": \"3.2.2\",\n \"@capacitor/app\": \"1.0.2\",\n \"@capacitor/core\": \"3.2.0\",\n \"@capacitor/haptics\": \"1.0.2\",\n \"@capacitor/ios\": \"3.2.2\",\n...\n```\n\nNow we have a new `ios` directory. If we enter that folder, we\u2019ll see an `App` directory that has a CocoaPods-powered iOS app. To run this iOS app, we need to:\n\n* Change to that directory with `cd ios`. You\u2019ll find an `App` directory. `cd App`\n* Install all CocoaPods with `pod repo update && pod install`, as usual in a native iOS project. This updates all libraries\u2019 caches for CocoaPods and then installs the required libraries and dependencies in your project.\n* Open the generated `App.xcworkspace` file with Xcode. From Terminal, you can just type `open App.xcworkspace`.\n* Run the app from Xcode.\n\n| Login in the iOS App | Browsing Tasks | \n|--------------|-----------|\n|| |\n\nThat\u2019s it. Apart from updating Capacitor, we only needed to run one command to get our Ionic web project running on iOS!\n\n## The Android App\n\nHow hard can it be to build our Ionic app for Android now that we have done it for iOS? Well, it turns out to be super-simple. Just `cd` back to the root of the project and type in a terminal:\n\n```\n ionic cap android\n```\n\nThis will create the Android project. Once has finished, launch your app using:\n\n```\nionic capacitor run android -l --host=10.0.1.81\n```\n\nIn this case, `10.0.1.81` is my own IP address. As you can see, if you have more than one Emulator or even a plugged-in Android phone, you can select where you want to run the Ionic app.\n\nOnce running, you can register, log in, and add tasks in Android, just like you can do in the web and iOS apps.\n\n| Adding a task in Android | Browsing Tasks in Android | \n|--------------|-----------|\n|||\n\nThe best part is that thanks to the synchronization happening in the MongoDB Realm app, every time we add a new task locally, it gets uploaded to the cloud to a MongoDB Atlas database behind the scenes. And **all other apps accessing the same MongoDB Realm app can show that data**! \n\n## Automatically refreshing tasks\n\nRealm SDKs are well known for their syncing capabilities. 
You change something in the server, or in one app, and other users with access to the same data will see the changes almost immediately. You don\u2019t have to worry about invalidating caches, writing complex networking/multithreading code that runs in the background, listening to silent push notifications, etc. MongoDB Realm takes care of all that for you.\n\nBut in this example, we access data using the Apollo GraphQL Client for React. Using this client, we can log into our Realm app and run GraphQL Queries\u2014although as designed for the web, we don\u2019t have access to the hard drive to store a .realm file. It\u2019s just a simpler way to use the otherwise awesome Apollo GraphQL Client with Realm, so we don\u2019t have synchronization implemented. But luckily, Apollo GraphQL queries can automatically refresh themselves just passing a `pollInterval` argument. I told you it was awesome. You set the time interval in milliseconds to refresh the data.\n\nSo, in `useTasks.ts`, our function to get all tasks will look like this, auto-refreshing our data every half second.\n\n```typescript\nfunction useAllTasksInProject(project: any) {\n const { data, loading, error } = useQuery(\n gql`\n query GetAllTasksForProject($partition: String!) {\n tasks(query: { _partition: $partition }) {\n _id\n name\n status\n }\n }\n `,\n { variables: { partition: project.partition }, pollInterval: 500 }\n );\n if (error) {\n throw new Error(`Failed to fetch tasks: ${error.message}`);\n }\n\n // If the query has finished, return the tasks from the result data\n // Otherwise, return an empty list\n const tasks = data?.tasks ?? ];\n return { tasks, loading };\n}\n```\n\n![Now we can sync our actions. Adding a task in the Android Emulator gets propagated to the iOS and Web versions\n\n## Pull to refresh\n\nAdding automatic refresh is nice, but in mobile apps, we\u2019re used to also refreshing lists of data just by pulling them. To get this, we\u2019ll need to add the Ionic component `IonRefresher` to our Home component:\n\n```html\n\n \n \n Tasks\n \n \n \n \n \n \n \n \n \n \n \n \n \n Tasks\n \n \n \n {loading ? : null}\n {tasks.map((task: any) => (\n \n ))}\n \n \n \n```\n\nAs we can see, an `IonRefresher` component will add the pull-to-refresh functionality with an included loading indicator tailored for each platform.\n\n```html\n\n \n\n```\n\nTo refresh, we call `doRefresh` and there, we just reload the whole page.\n\n```typescript\n const doRefresh = (event: CustomEvent) => {\n window.location.reload(); // reload the whole page\n event.detail.complete(); // we signal the loading indicator to hide\n };\n```\n\n## Deleting tasks\n\nRight now, we can swipe tasks from right to left to change the status of our tasks. But I wanted to also add a left to right swipe so we can delete tasks. We just need to add the swiping control to the already existing `IonItemSliding` control. In this case, we want a swipe from the _start_ of the control. This way, we avoid any ambiguities with right-to-left vs. left-to-right languages. 
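\n\nFor reference, the sliding item described in this section is built from Ionic's `IonItemSliding`, `IonItemOptions`, and `IonItemOption` components. Here is a rough sketch of that markup, with props approximated from the prose rather than copied from the repo:\n\n```html\n<IonItemSliding ref={slidingRef}>\n  {/* Swiping from the start reveals the new Delete option. */}\n  <IonItemOptions side=\"start\">\n    <IonItemOption color=\"danger\" onClick={deleteTaskSelected}>Delete</IonItemOption>\n  </IonItemOptions>\n\n  <IonItem>{task.name}</IonItem>\n\n  {/* Swiping from the end reveals the existing Status option. */}\n  <IonItemOptions side=\"end\">\n    <IonItemOption>Status</IonItemOption>\n  </IonItemOptions>\n</IonItemSliding>\n```\n\n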
When the user taps on the new \u201cDelete\u201d button (which will appear red as we\u2019re using the _danger_ color), `deleteTaskSelected` is called.\n\n```html\n\n \n {task.name}\n \n \n Status\n \n \n Delete\n \n \n```\n\nTo delete the task, we use a GraphQL mutation defined in `useTaskMutations.ts`:\n\n```typescript\nconst deleteTaskSelected = () => {\n slidingRef.current?.close(); // close sliding menu\n deleteTask(task); // delete task\n };\n```\n\n## Recap\n\nIn this post, we\u2019ve seen how easy it is to start with an Ionic React web application and, with only a few lines of code, turn it into a mobile app running on iOS and Android. Then, we easily added some functionality to the three apps at the same time. Ionic makes it super simple to run your Realm-powered apps everywhere!\n\nYou can check out the code from this post in this branch of the repo, just by typing:\n\n```\n$ git clone https://github.com/mongodb-developer/ionic-realm-demo\n$ git checkout observe-changes\n```\n\nBut this is not the only way to integrate Realm in your Ionic apps. Using Capacitor and our native SDKs, we\u2019ll show you how to use Realm from Ionic in a future follow-up post. \n\n", "format": "md", "metadata": {"tags": ["Realm", "JavaScript", "GraphQL", "React"], "pageDescription": "We can convert a existing Ionic React Web App that saves data in MongoDB Realm using Apollo GraphQL into an iOS and Android app using a couple commands, and the three apps will share the same MongoDB Realm backend. Also, we can easily add functionality to all three apps, just modifying one code base.\n", "contentType": "Tutorial"}, "title": "Let\u2019s Give Your Realm-Powered Ionic Web App the Native Treatment on iOS and Android!", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-flexible-sync-preview", "action": "created", "body": "# A Preview of Flexible Sync\n\n> Atlas Device Sync's flexible sync mode is now GA. Learn more here.\n\n## Introduction\n\nWhen MongoDB acquired Realm in 2019, we knew we wanted to give developers the easiest and fastest way to synchronize data on-device with a backend in the cloud.\n\n:youtube]{vid=6WrQ-f0dcIA}\n\nIn an offline-first environment, edge-to-cloud data sync typically requires thousands of lines of complex conflict resolution and networking code, and leaves developers with code bloat that slows the development of new features in the long-term. MongoDB\u2019s Atlas Device Sync simplifies moving data between the Realm Mobile Database and MongoDB Atlas. With huge amounts of boilerplate code eliminated, teams are able to focus on the features that drive 5-star app reviews and happy users. \n\nSince bringing Atlas Device Sync GA in February 2021, we\u2019ve seen it transform the way developers are building data synchronization into their mobile applications. But we\u2019ve also seen developers creating workarounds for complex sync use cases. With that in mind, we\u2019ve been hard at work building the next iteration of Sync, which we\u2019re calling Flexible Sync.\n\nFlexible Sync takes into account a year\u2019s worth of user feedback on partition-based sync, and aims to make syncing data to MongoDB Atlas a simple and idiomatic process by using a client-defined query to define the data synced to user applications.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. 
Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. [Get started now by build: Deploy Sample for Free!\n\n## How Flexible Sync Works\n\nFlexible Sync lets developers start writing code that syncs data more quickly \u2013 allowing you to choose which data is synced via a language-native query and to change the queries that define your syncing data at any time.\n\nWith Flexible Sync, developers can enable devices to define a query on the client side using the Realm SDK\u2019s query-language, which will execute on MongoDB Atlas to identify the set of data to Sync. Any documents that match the query will be translated to Realm Objects and saved to the client device\u2019s local disk. The query will be maintained on the server, which will check in real-time to identify if new document insertions, updates, or deletions on Atlas change the query results. Relevant changes on the server-side will be replicated down to the client in real-time, and any changes from the client will be similarly replicated to Atlas.\n\n## New Capabilities\n\nFlexible Sync is distinctly different from the partition-based sync used by Device Sync today. \n\nWith partition-based sync, developers must configure a partition field for their Atlas database. This partition field lives on each document within the Atlas database that the operator wants to sync. Clients can then request access to different partitions of the Atlas database, using the different values of the partition key field. When a client opens a synchronized Realm they pass in the partition key value as a parameter. The sync server receives the value from the client, and sends any documents down to the client that match the partition key value. These documents are automatically translated as Realm Objects and stored on the client\u2019s disk for offline access. \n\nPartition-based sync works well for applications where data is static and compartmentalized, and where permissions models rarely need to change. With Flexible Sync, we\u2019re making fine-grained and flexible permissioning possible, and opening up new app use cases through simplifying the syncing of data that requires ranged or dynamic queries.\n\n## Flexible Permissions\n\nUnlike with partition-based sync, Flexible Sync makes it seamless to implement the document-level permission model when syncing data - meaning synced fields can be limited based on a user\u2019s role. We expect this to be available at preview, and with field-level permissions coming after that.\n\nConsider a healthcare app, with different field-level permissions for Patients, Doctors, and Administrative staff using the application. A patient collection contains user data about the patient, their health history, procedures undergone, and prognosis. The patient accessing the app would only be able to see their full healthcare history, along with their own personal information. Meanwhile, a doctor using the app would be able to see any patients assigned to their care, along with healthcare history and prognosis. But doctors viewing patient data would be unable to view certain personal identifying information, like social security numbers. Administrative staff who handle billing would have another set of field-level permissions, seeing only the data required to successfully bill the patient. 
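\n\nTo make this a little more concrete: the client API was still being finalized at the time of this preview, but a flexible-sync subscription in the JavaScript SDK roughly takes the shape below. Treat the schema, field names, and query as hypothetical illustrations of the idea rather than final syntax:\n\n```javascript\n// Rough illustration only; names and the exact API shape are hypothetical for this preview.\n// `app` is an initialized Realm.App with a logged-in user.\nconst realm = await Realm.open({\n  schema: [PatientSchema],\n  sync: { user: app.currentUser, flexible: true },\n});\n\nawait realm.subscriptions.update((mutableSubs) => {\n  // A doctor's device only subscribes to the patients assigned to them.\n  // Server-side permissions still strip fields (like social security numbers) this role cannot see.\n  mutableSubs.add(\n    realm.objects(\"Patient\").filtered(\"assignedDoctorId == $0\", app.currentUser.id)\n  );\n});\n```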
\n\nUnder the hood, this is made possible when Flexible Sync runs the query sent by the client, obtains the result set, and then subtracts any data from the result set sent down to the client based on the permissions. The server guards against clients receiving data they aren\u2019t allowed to see, and developers can trust that the server will enforce compliance, even if a query is written with mistakes. In this way, Flexible Sync simplifies sharing subsets of data across groups of users and makes it easier for your application's permissions to mirror complex organizations and business requirements.\n\nFlexible Sync also allows clients to share some documents but not others, based on the ResultSet of their query. Consider a company where teams typically share all the data within their respective teams, but not across teams. When a new project requires teams to collaborate, Flexible Sync makes this easy. The shared project documents could have a field called allowedTeams: marketing, sales]. Each member of the team would have a client-side query, searching for all documents on allowedTeams matching marketing or sales using an $in operator, depending on what team that user was a member of.\n\n## Ranged & Dynamic Queries\n\nOne of Flexible Sync's primary benefits is that it allows for simple synchronization of data that falls into a range \u2013 such as a time window \u2013 and automatically adds and removes documents as they fall in and out of range. \n\nConsider an app used by a company\u2019s workforce, where the users only need to see the last seven days of work orders. With partition-based sync, a time-based trigger needed to fire daily to move work orders in and out of the relevant partition. With Flexible Sync, a developer can write a ranged query that automatically includes and removes data as time passes and the 7-day window changes. By adding a time based range component to the query, code is streamlined. The sync resultset gets a built-in TTL, which previously had to be implemented by the operator on the server-side. \n\nFlexible Sync also enables much more dynamic queries, based on user inputs. Consider a shopping app with millions of products in its Inventory collection. As users apply filters in the app \u2013 viewing only pants that are under $30 dollars and size large \u2013 the query parameters can be combined with logical ANDs and ORs to produce increasingly complex queries, and narrow down the search result even further. All of these query results are combined into a single realm file on the client\u2019s device, which significantly simplifies code required on the client-side. \n\n## Looking Ahead\n\nUltimately, our decision to build Flexible Sync is driven by the Realm team\u2019s desire to eliminate every possible piece of boilerplate code for developers. We\u2019re motivated by delivering a sync service that can fit any use case or schema design pattern you can imagine, so that you can spend your time building features rather than implementing workarounds. \n\nThe Flexible Sync project represents the next evolution of Atlas Device Sync. We\u2019re working hard to get to a public preview by the end of 2021, and believe this query-based sync has the potential to become the standard for Sync-enabled applications. We won\u2019t have every feature available on day one, but iterative releases over the course of 2022 will continuously bring you more query operators and permissions integrations.\n\nInterested in joining the preview program? 
[Sign-up here and we\u2019ll let you know when Flexible Sync is available in preview. \n\n", "format": "md", "metadata": {"tags": ["Realm", "React Native", "Mobile"], "pageDescription": "Flexible Sync lets developers start writing code that syncs data more quickly \u2013 allowing you to choose which data is synced via a language-native query and to change the queries that define your syncing data at any time.", "contentType": "Article"}, "title": "A Preview of Flexible Sync", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/python/python-quickstart-tornado", "action": "created", "body": "# Getting Started with MongoDB and Tornado\n\n \n\nTornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed. Because Tornado uses non-blocking network I/O, it is ideal for long polling, WebSockets, and other applications that require a long-lived connection to each user.\n\nTornado also makes it very easy to create JSON APIs, which is how we're going to be using it in this example. Motor, the Python async driver for MongoDB, comes with built-in support for Tornado, making it as simple as possible to use MongoDB in Tornado regardless of the type of server you are building.\n\nIn this quick start, we will create a CRUD (Create, Read, Update, Delete) app showing how you can integrate MongoDB with your Tornado projects.\n\n## Prerequisites\n\n- Python 3.9.0\n- A MongoDB Atlas cluster. Follow the \"Get Started with Atlas\" guide to create your account and MongoDB cluster. Keep a note of your username, password, and connection string as you will need those later.\n\n## Running the Example\n\nTo begin, you should clone the example code from GitHub.\n\n``` shell\ngit clone git@github.com:mongodb-developer/mongodb-with-tornado.git\n```\n\nYou will need to install a few dependencies: Tornado, Motor, etc. I always recommend that you install all Python dependencies in a virtualenv for the project. Before running pip, ensure your virtualenv is active.\n\n``` shell\ncd mongodb-with-tornado\npip install -r requirements.txt\n```\n\nIt may take a few moments to download and install your dependencies. This is normal, especially if you have not installed a particular package before.\n\nOnce you have installed the dependencies, you need to create an environment variable for your MongoDB connection string.\n\n``` shell\nexport DB_URL=\"mongodb+srv://:@/?retryWrites=true&w=majority\"\n```\n\nRemember, anytime you start a new terminal session, you will need to set this environment variable again. I use direnv to make this process easier.\n\nThe final step is to start your Tornado server.\n\n``` shell\npython app.py\n```\n\nTornado does not output anything in the terminal when it starts, so as long as you don't have any error messages, your server should be running.\n\nOnce the application has started, you can view it in your browser at . There won't be much to see at the moment as you do not have any data! 
We'll look at each of the end-points a little later in the tutorial, but if you would like to create some data now to test, you need to send a `POST` request with a JSON body to the local URL.\n\n``` shell\ncurl -X \"POST\" \"http://localhost:8000/\" \\\n -H 'Accept: application/json' \\\n -H 'Content-Type: application/json; charset=utf-8' \\\n -d $'{\n \"name\": \"Jane Doe\",\n \"email\": \"jdoe@example.com\",\n \"gpa\": \"3.9\"\n }'\n```\n\nTry creating a few students via these `POST` requests, and then refresh your browser.\n\n## Creating the Application\n\nAll the code for the example application is within `app.py`. I'll break it down into sections and walk through what each is doing.\n\n### Connecting to MongoDB\n\nOne of the very first things we do is connect to our MongoDB database.\n\n``` python\nclient = motor.motor_tornado.MotorClient(os.environ\"MONGODB_URL\"])\ndb = client.college\n```\n\nWe're using the async motor driver to create our MongoDB client, and then we specify our database name `college`.\n\n### Application Routes\n\nOur application has four routes:\n\n- POST / - creates a new student.\n- GET / - view a list of all students or a single student.\n- PUT /{id} - update a student.\n- DELETE /{id} - delete a student.\n\nEach of the routes corresponds to a method on the `MainHandler` class. Here is what that class looks like if we only show the method stubs:\n\n``` python\nclass MainHandler(tornado.web.RequestHandler):\n\n async def get(self, **kwargs):\n pass\n\n async def post(self):\n pass\n\n async def put(self, **kwargs):\n pass\n\n async def delete(self, **kwargs):\n pass\n```\n\nAs you can see, the method names correspond to the different `HTTP` methods. Let's walk through each method in turn.\n\n#### POST - Create Student\n\n``` python\nasync def post(self):\n student = tornado.escape.json_decode(self.request.body)\n student[\"_id\"] = str(ObjectId())\n\n new_student = await self.settings[\"db\"][\"students\"].insert_one(student)\n created_student = await self.settings[\"db\"][\"students\"].find_one(\n {\"_id\": new_student.inserted_id}\n )\n\n self.set_status(201)\n return self.write(created_student)\n```\n\nNote how I am converting the `ObjectId` to a string before assigning it as the `_id`. MongoDB stores data as [BSON, but we're encoding and decoding our data from JSON strings. BSON has support for additional non-JSON-native data types, including `ObjectId`, but JSON does not. Because of this, for simplicity, we convert ObjectIds to strings before storing them.\n\nThe route receives the new student data as a JSON string in the body of the `POST` request. We decode this string back into a Python object before passing it to our MongoDB client. Our client is available within the settings dictionary because we pass it to Tornado when we create the app. You can see this towards the end of the `app.py`.\n\n``` python\napp = tornado.web.Application(\n \n (r\"/\", MainHandler),\n (r\"/(?P\\w+)\", MainHandler),\n ],\n db=db,\n)\n```\n\nThe `insert_one` method response includes the `_id` of the newly created student. After we insert the student into our collection, we use the `inserted_id` to find the correct document and write it to our response. 
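\n\nFor the example `curl` request shown earlier, the body written back is the stored document, so the response looks something like this (your `_id` value will differ):\n\n``` json\n{\"_id\": \"64a1f2e9c4b5d6a7e8f90123\", \"name\": \"Jane Doe\", \"email\": \"jdoe@example.com\", \"gpa\": \"3.9\"}\n```\n\n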
By default, Tornado will return an HTTP `200` status code, but in this instance, a `201` created is more appropriate, so we change the HTTP response status code with `set_status`.\n\n##### GET - View Student Data\n\nWe have two different ways we may wish to view student data: either as a list of all students or a single student document. The `get` method handles both of these functions.\n\n``` python\nasync def get(self, student_id=None):\n if student_id is not None:\n if (\n student := await self.settings[\"db\"][\"students\"].find_one(\n {\"_id\": student_id}\n )\n ) is not None:\n return self.write(student)\n else:\n raise tornado.web.HTTPError(404)\n else:\n students = await self.settings[\"db\"][\"students\"].find().to_list(1000)\n return self.write({\"students\": students})\n```\n\nFirst, we check to see if the URL provided a path parameter of `student_id`. If it does, then we know that we are looking for a specific student document. We look up the corresponding student with `find_one` and the specified `student_id`. If we manage to locate a matching record, then it is written to the response as a JSON string. Otherwise, we raise a `404` not found error.\n\nIf the URL does not contain a `student_id`, then we return a list of all students.\n\nMotor's `to_list` method requires a max document count argument. For this example, I have hardcoded it to `1000`; but in a real application, you would use the [skip and limit parameters in find to paginate your results.\n\nIt's worth noting that as a defence against JSON hijacking, Tornado will not allow you to return an array as the root element. Most modern browsers have patched this vulnerability, but Tornado still errs on the side of caution. So, we must wrap the students array in a dictionary before we write it to our response.\n\n##### PUT - Update Student\n\n``` python\nasync def put(self, student_id):\n student = tornado.escape.json_decode(self.request.body)\n await self.settings\"db\"][\"students\"].update_one(\n {\"_id\": student_id}, {\"$set\": student}\n )\n\n if (\n updated_student := await self.settings[\"db\"][\"students\"].find_one(\n {\"_id\": student_id}\n )\n ) is not None:\n return self.write(updated_student)\n\n raise tornado.web.HTTPError(404)\n```\n\nThe update route is like a combination of the create student and the student detail routes. It receives the id of the document to update `student_id` as well as the new data in the JSON body.\n\nWe attempt to `$set` the new values in the correct document with `update_one`, and then check to see if it correctly modified a single document. If it did, then we find that document that was just updated and return it.\n\nIf the `modified_count` is not equal to one, we still check to see if there is a document matching the id. A `modified_count` of zero could mean that there is no document with that id, but it could also mean that the document does exist, but it did not require updating because the current values are the same as those supplied in the `PUT` request.\n\nOnly after that final find fails, we raise a `404` Not Found exception.\n\n##### DELETE - Remove Student\n\n``` python\nasync def delete(self, student_id):\n delete_result = await db[\"students\"].delete_one({\"_id\": student_id})\n\n if delete_result.deleted_count == 1:\n self.set_status(204)\n return self.finish()\n\n raise tornado.web.HTTPError(404)\n```\n\nOur final route is `delete`. Again, because this is acting upon a single document, we have to supply an id, `student_id` in the URL. 
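For example, you could exercise this route end to end with a short script like the one below. It is just a sketch that uses Python's `requests` library and assumes the server from this tutorial is running locally on port 8000; the script itself is illustrative and not part of the example repository.

``` python
# A sketch for trying out the delete endpoint; not part of the example repo.
# Assumes the Tornado server above is running on http://localhost:8000.
import requests

BASE_URL = "http://localhost:8000"

# Create a student first so we have an _id to delete.
created = requests.post(
    BASE_URL + "/",
    json={"name": "Jane Doe", "email": "jdoe@example.com", "gpa": "3.9"},
).json()
student_id = created["_id"]

# A successful delete returns 204 with an empty body.
print(requests.delete(f"{BASE_URL}/{student_id}").status_code)  # 204

# Deleting the same document again returns 404, as it no longer exists.
print(requests.delete(f"{BASE_URL}/{student_id}").status_code)  # 404
```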
If we find a matching document and successfully delete it, then we return an HTTP status of `204` or No Content. In this case, we do not return a document as we've already deleted it! However, if we cannot find a student with the specified `student_id`, then instead, we return a `404`.\n\n## Wrapping Up\n\nI hope you have found this introduction to Tornado with MongoDB useful. Now is a fascinating time for Python developers as more and more frameworks\u2014both new and old\u2014begin taking advantage of async.\n\nIf you would like to know more about how you can use MongoDB with Tornado and WebSockets, please read my other tutorial, [Subscribe to MongoDB Change Streams Via WebSockets.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Python"], "pageDescription": "Getting Started with MongoDB and Tornado", "contentType": "Code Example"}, "title": "Getting Started with MongoDB and Tornado", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/querying-price-book-data-federation", "action": "created", "body": "# Querying the MongoDB Atlas Price Book with Atlas Data Federation\n\nAs a DevOps engineer or team, keeping up with the cost changes of a continuously evolving cloud service like MongoDB Atlas database can be a daunting task. Manual monitoring of pricing information can be laborious, prone to mistakes, and may result in delays in strategic decisions. In this article, we will demonstrate how to leverage Atlas Data Federation to query and visualize the MongoDB Atlas price book as a real-time data source that can be incorporated into your DevOps processes and application infrastructure.\n\nAtlas Data Federation is a distributed query engine that allows users to combine, transform, and move data across multiple data sources without complex integrations. Users can efficiently and cost-effectively query data from different sources, such as your Atlas clusters, cloud object storage buckets, Atlas Data Lake datasets, and HTTP endpoints with the MongoDB Query Language and the aggregation framework, as if it were all in the same place and format.\n\nWhile using HTTP endpoints as a data source in Atlas Data Federation may not be suitable for large-scale production workloads, it\u2019s a great option for small businesses or startups that want a quick and easy way to analyze pricing data or to use for testing, development, or small-scale analysis. In this guide, we will use the JSON returned by https://cloud.mongodb.com/billing/pricing?product=atlas as an HTTP data source for a federated database.\n\n## Step 1: Create a new federated database\n\nLet's create a new federated database in MongoDB Atlas by clicking on Data Federation in the left-hand navigation and clicking \u201cset up manually\u201d in the \"create new federated database\" dropdown in the top right corner of the UI. A federated database is a virtual database that enables you to combine and query data from multiple sources. \n\n## Step 2: Add a new HTTP data source\n\nThe HTTP data source allows you to query data from any web API that returns data in JSON, BSON, CSV, TSV, Avro, Parquet, and ORC formats, such as the MongoDB Atlas price book. 
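Before wiring the endpoint into a federated database, it can be helpful to glance at the raw payload it returns. The snippet below is only a quick sanity check using Python's standard library; it assumes the pricing endpoint is publicly reachable from your machine and that the price items live in a top-level `resource` array, which is what the aggregation pipeline in step 4 unwinds.

``` python
# Optional sanity check of the Atlas price book payload; not required for the setup below.
# Assumes the response is a JSON object with a top-level "resource" array of price items.
import json
import urllib.request

URL = "https://cloud.mongodb.com/billing/pricing?product=atlas"

with urllib.request.urlopen(URL) as response:
    price_book = json.load(response)

# Show the top-level keys and one raw price item so you know what the
# federated collection will contain before any transformation.
print(list(price_book.keys()))
print(json.dumps(price_book.get("resource", [])[:1], indent=2))
```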
\n\n## Step 3: Drag and drop the source into the right side, rename as desired\n\nCreate a mapping between the HTTP data source and your federated database instance by dragging and dropping the HTTP data source into your federated database. Then, rename the cluster, database, and collection as desired by using the pencil icon.\n\n## Step 4: Add a view to transform the response into individual documents\n\nAtlas Data Federation allows you to transform the raw source data by using the powerful MongoDB Aggregation Framework. We\u2019ll create a view that will reshape the price book into individual documents, each to represent a single price item. \n\nFirst, create a view:\n\nThen, name the view and paste the following pipeline:\n\n```\n \n {\n \"$unwind\": {\n \"path\": \"$resource\"\n }\n }, {\n \"$replaceRoot\": {\n \"newRoot\": \"$resource\"\n }\n }\n]\n```\n\nThis pipeline will unwind the \"resource\" field, which contains an array of pricing data, and replace the root document with the contents of the \"resource\" array.\n\n## Step 5: Save and copy the connection string\n\nNow, let's save the changes and copy the connection string for our federated database instance. This connection string will allow you to connect to your federated database.\n\n![Select 'Connect' to connect to your federated database.\n\nAtlas Data Federation supports connection methods varying from tools like MongoDB Shell and Compass, any application supporting MongoDB connection, and even a SQL connection using Atlas SQL. \n\n## Step 6: Connect using Compass\n\nLet\u2019s now connect to the federated database instance using MongoDB Compass. By connecting with Compass, we will then be able to use the MongoDB Query Language and aggregation framework to start querying and analyzing the pricing data, if desired. \n\n## Step 7: Visualize using charts\n\nWe\u2019ll use MongoDB Atlas Charts for visualization of the Atlas price book. Atlas Charts allows you to create interactive charts and dashboards that can be embedded in your applications or shared with your team. \n\nOnce in Charts, you can create new dashboards and add a chart. Then, select the view we created as a data source:\n\nAs some relevant data fields are embedded within the sku field, such as NDS_AWS_INSTANCE_M50, we can use calculated fields to help us extract those, such as provider and instanceType:\n\nUse the following value expression:\n\n - Provider\n\n `{$arrayElemAt: {$split: [\"$sku\", \"_\"]}, 1]}`\n\n - InstanceType\n\n `{$arrayElemAt: [{$split: [\"$sku\", \"_\"]}, 3]}`\n\n - additonalProperty\n\n `{$arrayElemAt: [{$split: [\"$sku\", \"_\"]}, 4]}`\n\nNow, by using Charts like a heatmap, we can visualize the different pricing items in a color-coded format:\n\n 1. Drag and drop the \u201csku\u201d field to the X axis of the chart.\n 2. Drag and drop the \u201cpricing.region\u201d to the Y axis (choose \u201cUnwind array\u201d for array reduction).\n 3. Drag and drop the \u201cpricing.unitPrice\u201d to Intensity (choose \u201cUnwind array\u201d for array reduction).\n 4. Drag and drop the \u201cprovider\u201d, \u201cinstanceType\u201d, and \u201cadditionalProperty\u201d fields to filter and choose the desired values.\n\nThe final result: A heatmap showing the pricing data for the selected providers, instance types, and additional properties, broken down by region. Hovering over each of the boxes will present its exact price using a tooltip. 
Thanks to the fact that our federated database is composed from an HTTP data source, the data visualized is the actual live prices returned from the HTTP endpoint, and not subjected to any ETL delay.\n\n![A heatmap showing the pricing data for the selected providers, instance types, and additional properties, broken down by region.\n\n## Summary\n\nWith Atlas Data Federation DevOps teams, developers and data engineers can generate insights to power real-time applications or downstream analytics. Incorporating live data from sources such as HTTP, MongoDB Clusters, or Cloud Object Storage reduces the effort, time-sink, and complexity of pipelines and ETL tools. \n\nHave questions or comments? Visit our Community Forums. \nReady to get started? Try Atlas Data Federation today!", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "In this article, we will demonstrate how to leverage Atlas Data Federation to query and visualize the MongoDB Atlas price book as a real-time data source.", "contentType": "Article"}, "title": "Querying the MongoDB Atlas Price Book with Atlas Data Federation", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/automate-automation-mongodb-atlas", "action": "created", "body": "# Automate the Automation on MongoDB Atlas\n\nMongoDB Atlas is an awesome Cloud Data Platform providing an immense amount of automation to set up your databases, data lakes, charts, full-text search indexes, and more across all major cloud providers around the globe. Through the MongoDB Atlas GUI, you can easily deploy a fully scalable global cluster across several regions and even across different cloud providers in a matter of minutes. That's what I call automation. Using the MongoDB GUI is super intuitive and great, but how can I manage all these features in my own way?\n\nThe answer is simple and you probably already know it\u2026.**APIs**!\n\nMongoDB Atlas has a full featured API which allows users to programmatically manage all Atlas has to offer.\n\nThe main idea is to enable users to integrate Atlas with all other aspects of your Software Development Life Cycle (SDLC), giving the ability for your DevOps team to create automation on their current processes across all their environments (Dev, Test/QA, UAT, Prod).\n\nOne example would be the DevOps teams leveraging APIs on the creation of ephemeral databases to run their CI/CD processes in lower environments for test purposes. Once it is done, you would just terminate the database deployment.\n\nAnother example we have seen DevOps teams using is to incorporate the creation of the databases needed into their Developers Portals. The idea is to give developers a self-service experience, where they can start a project by using a portal to provide all project characteristics (tech stack according to their coding language, app templates, etc.), and the portal will create all the automation to provide all aspects needed, such as a new code repo, CI/CD job template, Dev Application Servers, and a MongoDB database. So, they can start coding as soon as possible!\n\nEven though the MongoDB Atlas API Resources documentation is great with lots of examples using cURL, we thought developers would appreciate it if they could also have all these in one of their favorite tools to work with APIs. I am talking about Postman, an API platform for building and using APIs. So, we did it! 
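To give you a flavor of what these API calls look like outside of Postman, here is a minimal sketch in Python using HTTP digest authentication against the v1.0 "Get All Clusters" endpoint. The key values and project ID are placeholders that you will create in the steps below; treat the snippet as illustrative rather than a finished tool.

``` python
# A minimal sketch of the "Get All Clusters" call; the keys and project ID are placeholders.
import requests
from requests.auth import HTTPDigestAuth

PUBLIC_KEY = "<your-public-key>"    # acts as the username
PRIVATE_KEY = "<your-private-key>"  # acts as the password
PROJECT_ID = "<your-project-id>"

response = requests.get(
    f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{PROJECT_ID}/clusters",
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
)
response.raise_for_status()

# Print the name and state of each cluster (database deployment) in the project.
for cluster in response.json().get("results", []):
    print(cluster["name"], cluster.get("stateName"))
```

That said, the rest of this article focuses on the Postman collection itself.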
Below you will find step-by-step instructions on how to use it.\n\n### Step 1: Configure your workstation/laptop\n\n* Download and install Postman on your workstation/laptop.\n* Training on Postman is available if you need a refresher on how to use it.\n\n### Step 2: Configure MongoDB Atlas\n\n* Create a free MongoDB Atlas account to have access to a free cluster to play around in. Make sure you create an organization and a project. Don't skip that step. Here is a coupon code\u2014**GOATLAS10**\u2014for some credits to explore more features (valid as of August 2021). Watch this video to learn how to add these credits to your account.\n* Create an API key with Organization Owner privileges and save the public/private key to use when calling APIs. Also, don't forget to add your laptop/workstation IP to the API access list.\n* Create a database deployment (cluster) via the Atlas UI or the MongoDB CLI (check out the MongoDB CLI Atlas Quick Start for detailed instructions). Note that a free database deployment will allow you to run most of the API calls. Use an M10 database deployment or higher if you want to have full access to all of the APIs. Feel free to explore all of the other database deployment options, but the default options should be fine for this example.\n* Navigate to your Project Settings and retrieve your Project ID so it can be used in one of our examples below.\n\n### Step 3: Configure and use Postman\n\n* Fork or Import the MongoDB Atlas Collection to your Postman Workspace: \n ![Run in Postman](https://god.gw.postman.com/run-collection/17637161-25049d75-bcbc-467b-aba0-82a5c440ee02?action=collection%2Ffork&collection-url=entityId%3D17637161-25049d75-bcbc-467b-aba0-82a5c440ee02%26entityType%3Dcollection%26workspaceId%3D8355a86e-dec2-425c-9db0-cb5e0c3cec02#?env%5BAtlas%5D=W3sia2V5IjoiYmFzZV91cmwiLCJ2YWx1ZSI6Imh0dHBzOi8vY2xvdWQubW9uZ29kYi5jb20iLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6InZlcnNpb24iLCJ2YWx1ZSI6InYxLjAiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlByb2plY3RJRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJDTFVTVEVSLU5BTUUiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiZGF0YWJhc2VOYW1lIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6ImRiVXNlciIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJPUkctSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiQVBJLWtleS1wd2QiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiQVBJLWtleS11c3IiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiSU5WSVRBVElPTl9JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJJTlZPSUNFLUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlBST0pFQ1RfTkFNRSIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJURUFNLUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlVTRVItSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiUFJPSi1JTlZJVEFUSU8tSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiVEVBTS1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlNBTVBMRS1EQVRBU0VULUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkNMT1VELVBST1ZJREVSIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkNMVVNURVItVElFUiIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJJTlNUQU5DRS1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkFMRVJULUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkFMRVJULUNPTkZJRy1JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJEQVRBQkFTRS1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkNPTExFQ1RJT04tTkFNRSIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJJTkRFWC1JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJTTkFQU0hPVC1JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJKT0ItSUQiLCJ2YWx1ZSI6IiIsImVu
YWJsZWQiOnRydWV9LHsia2V5IjoiUkVTVE9SRS1KT0ItSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoicmVzdG9yZUpvYklkIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlRBUkdFVC1DTFVTVEVSLU5BTUUiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiVEFSR0VULUdST1VQLUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6InRhcmdldEdyb3VwSWQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiY2x1c3Rlck5hbWUiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiUkVTVE9SRS1JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJBUkNISVZFLUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkNPTlRBSU5FUi1JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJQRUVSLUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkVORFBPSU5ULVNFUlZJQ0UtSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiRU5EUE9JTlQtSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiQVBJLUtFWS1JRCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJBQ0NFU1MtTElTVC1FTlRSWSIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJJUC1BRERSRVNTIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlBST0NFU1MtSE9TVCIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJQUk9DRVNTLVBPUlQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiRElTSy1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkhPU1ROQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkxPRy1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlVTRVItTkFNRSIsInZhbHVlIjoiIiwiZW5hYmxlZCI6dHJ1ZX0seyJrZXkiOiJST0xFLUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkVWRU5ULUlEIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IkRBVEEtTEFLRS1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfSx7ImtleSI6IlZBTElEQVRJT04tSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiTElWRS1NSUdSQVRJT04tSUQiLCJ2YWx1ZSI6IiIsImVuYWJsZWQiOnRydWV9LHsia2V5IjoiUk9MRS1OQU1FIiwidmFsdWUiOiIiLCJlbmFibGVkIjp0cnVlfV0=)\n* Click on the MongoDB Atlas Collection. Under the Authorization tab, choose the Digest Auth Type and use the *public key* as the *user* and the *private key* as your *password*.\n\n* Open up the **Get All Clusters** API call under the cluster folder.\n\n* Make sure you select the Atlas environment variables and update the Postman variable ProjectID value to your **Project ID** captured in the previous steps.\n\n* Execute the API call by hitting the Send button and you should get a response containing a list of all your clusters (database deployments) alongside the cluster details, like whether backup is enabled or the cluster is running.\n\nNow explore all the APIs available to create your own automation.\n\nOne last tip: Once you have tested all your API calls to build your automation, Postman allows you to export that in code snippets in your favorite programming language.\n\nPlease always refer to the online documentation for any changes or new resources. Also, feel free to make pull requests to update the project with new API resources, fixes, and enhancements.\n\nHope you enjoyed it! Please share this with your team and community. 
It might be really helpful for everyone!\n\nHere are some other great posts related to this subject:\n\n* Programmatic API Management of Your MongoDB Atlas Database Clusters\n* Programmatic API Management of Your MongoDB Atlas Database Clusters - Part II\n* Calling the MongoDB Atlas API - How to Do it from Node, Python, and Ruby\n\n\\**A subset of API endpoints are supported in (free) M0, M2, and M5 clusters.*\n\nPublic Repo - https://github.com/cassianobein/mongodb-atlas-api-resources \nAtlas API Documentation - https://docs.atlas.mongodb.com/api/ \nPostman MongoDB Public Workspace - https://www.postman.com/mongodb-devrel/workspace/mongodb-public/overview ", "format": "md", "metadata": {"tags": ["Atlas", "Postman API"], "pageDescription": "Build your own automation with MongoDB Atlas API resources.", "contentType": "Article"}, "title": "Automate the Automation on MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/bucket-pattern", "action": "created", "body": "# Building with Patterns: The Bucket Pattern\n\nIn this edition of the *Building with Patterns* series, we're going to\ncover the Bucket Pattern. This pattern is particularly effective when\nworking with Internet of Things (IoT), Real-Time Analytics, or\nTime-Series data in general. By *bucketing* data together we make it\neasier to organize specific groups of data, increasing the ability to\ndiscover historical trends or provide future forecasting and optimize\nour use of storage.\n\n## The Bucket Pattern\n\nWith data coming in as a stream over a period of time (time series data)\nwe may be inclined to store each measurement in its own document.\nHowever, this inclination is a very relational approach to handling the\ndata. If we have a sensor taking the temperature and saving it to the\ndatabase every minute, our data stream might look something like:\n\n``` javascript\n{\n sensor_id: 12345,\n timestamp: ISODate(\"2019-01-31T10:00:00.000Z\"),\n temperature: 40\n}\n\n{\n sensor_id: 12345,\n timestamp: ISODate(\"2019-01-31T10:01:00.000Z\"),\n temperature: 40\n}\n\n{\n sensor_id: 12345,\n timestamp: ISODate(\"2019-01-31T10:02:00.000Z\"),\n temperature: 41\n}\n```\n\nThis can pose some issues as our application scales in terms of data and\nindex size. For example, we could end up having to index `sensor_id` and\n`timestamp` for every single measurement to enable rapid access at the\ncost of RAM. By leveraging the document data model though, we can\n\"bucket\" this data, by time, into documents that hold the measurements\nfrom a particular time span. We can also programmatically add additional\ninformation to each of these \"buckets\".\n\nBy applying the Bucket Pattern to our data model, we get some benefits\nin terms of index size savings, potential query simplification, and the\nability to use that pre-aggregated data in our documents. 
Taking the\ndata stream from above and applying the Bucket Pattern to it, we would\nwind up with:\n\n``` javascript\n{\n sensor_id: 12345,\n start_date: ISODate(\"2019-01-31T10:00:00.000Z\"),\n end_date: ISODate(\"2019-01-31T10:59:59.000Z\"),\n measurements: \n {\n timestamp: ISODate(\"2019-01-31T10:00:00.000Z\"),\n temperature: 40\n },\n {\n timestamp: ISODate(\"2019-01-31T10:01:00.000Z\"),\n temperature: 40\n },\n ...\n {\n timestamp: ISODate(\"2019-01-31T10:42:00.000Z\"),\n temperature: 42\n }\n ],\n transaction_count: 42,\n sum_temperature: 2413\n}\n```\n\nBy using the Bucket Pattern, we have \"bucketed\" our data to, in this\ncase, a one hour bucket. This particular data stream would still be\ngrowing as it currently only has 42 measurements; there's still more\nmeasurements for that hour to be added to the \"bucket\". When they are\nadded to the `measurements` array, the `transaction_count` will be\nincremented and `sum_temperature` will also be updated.\n\nWith the pre-aggregated `sum_temperature` value, it then becomes\npossible to easily pull up a particular bucket and determine the average\ntemperature (`sum_temperature / transaction_count`) for that bucket.\nWhen working with time-series data it is frequently more interesting and\nimportant to know what the average temperature was from 2:00 to 3:00 pm\nin Corning, California on 13 July 2018 than knowing what the temperature\nwas at 2:03 pm. By bucketing and doing pre-aggregation we're more able\nto easily provide that information.\n\nAdditionally, as we gather more and more information we may determine\nthat keeping all of the source data in an archive is more effective. How\nfrequently do we need to access the temperature for Corning from 1948,\nfor example? Being able to move those buckets of data to a data archive\ncan be a large benefit.\n\n## Sample Use Case\n\nOne example of making time-series data valuable in the real world comes\nfrom an [IoT implementation by\nBosch. They are using MongoDB\nand time-series data in an automotive field data app. The app captures\ndata from a variety of sensors throughout the vehicle allowing for\nimproved diagnostics of the vehicle itself and component performance.\n\nOther examples include major banks that have incorporated this pattern\nin financial applications to group transactions together.\n\n## Conclusion\n\nWhen working with time-series data, using the Bucket Pattern in MongoDB\nis a great option. It reduces the overall number of documents in a\ncollection, improves index performance, and by leveraging\npre-aggregation, it can simplify data access.\n\nThe Bucket Design pattern works great for many cases. But what if there\nare outliers in our data? 
That's where the next pattern we'll discuss,\nthe Outlier Design\nPattern, comes into play.\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Over the course of this blog post series, we'll take a look at twelve common Schema Design Patterns that work well in MongoDB.", "contentType": "Tutorial"}, "title": "Building with Patterns: The Bucket Pattern", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/python/song-recommendations-example-app", "action": "created", "body": "# A Spotify Song and Playlist Recommendation Engine\n\n## Creators\nLucas De Oliveira, Chandrish Ambati, and Anish Mukherjee from University of San Francisco contributed this amazing project.\n\n## Background to the Project\nIn 2018, Spotify organized an Association for Computing Machinery (ACM) RecSys Challenge where they posted a dataset of one million playlists, challenging participants to recommend a list of 500 songs given a user-created playlist.\n\nAs both music lovers and data scientists, we were naturally drawn to this challenge. Right away, we agreed that combining song embeddings with some nearest-neighbors method for recommendation would likely produce very good results. Importantly, we were curious about how we could solve this recommendation task at scale with over 4 billion user-curated playlists on Spotify, where this number keeps growing. This realization raised serious questions about how to train a decent model since all that data would likely not fit in memory or a single server.\n\n## What We Built\nThis project resulted in a scalable ETL pipeline utilizing\n* Apache Spark\n* MongoDB\n* Amazon S3\n* Databricks (PySpark)\n\nThese were used to train a deep learning Word2Vec model to build song and playlist embeddings for recommendation. We followed up with data visualizations we created on Tensorflow\u2019s Embedding Projector.\n\n## The Process\n### Collecting Lyrics\nThe most tedious task of this project was collecting as many lyrics for the songs in the playlists as possible. We began by isolating the unique songs in the playlist files by their track URI; in total we had over 2 million unique songs. Then, we used the track name and artist name to look up the lyrics on the web. Initially, we used simple Python requests to pull in the lyrical information but this proved too slow for our purposes. We then used asyncio, which allowed us to make requests concurrently. This sped up the process significantly, reducing the downloading time of lyrics for 10k songs from 15 mins to under a minute. Ultimately, we were only able to collect lyrics for 138,000 songs.\n\n### Pre-processing\nThe original dataset contains 1 million playlists spread across 1 thousand JSON files totaling about 33 GB of data. We used PySpark in Databricks to preprocess these separate JSON files into a single SparkSQL DataFrame and then joined this DataFrame with the lyrics we saved. \n\nWhile the aforementioned data collection and preprocessing steps are time-consuming, the model also needs to be re-trained and re-evaluated often, so it is critical to store data in a scalable database. In addition, we\u2019d like to consider a database that is schemaless for future expansion in data sets and supports various data types. 
Considering our needs, we concluded that MongoDB would be the optimal solution as a data and feature store.\n\nCheck out the Preprocessing.ipynb notebook to see how we preprocessed the data.\n\n### Training Song Embeddings\nFor our analyses, we read our preprocessed data from MongoDB into a Spark DataFrame and grouped the records by playlist id (pid), aggregating all of the songs in a playlist into a list under the column song_list. \nUsing the Word2Vec model in Spark MLlib we trained song embeddings by feeding lists of track IDs from a playlist into the model much like you would send a list of words from a sentence to train word embeddings. As shown below, we trained song embeddings in only 3 lines of PySpark code:\n```\nfrom pyspark.ml.feature import Word2Vec\nword2Vec = Word2Vec(vectorSize=32, seed=42, inputCol=\"song_list\").setMinCount(1)\nword2Vec.sexMaxIter(10)\nmodel = word2Vec.fit(df_play)\n```\n\nWe then saved the song embeddings down to MongoDB for later use. Below is a snapshot of the song embeddings DataFrame that we saved:\n\nCheck out the Song_Embeddings.ipynb notebook to see how we train song embeddings.\n\n### Training Playlists Embeddings\nFinally, we extended our recommendation task beyond simple song recommendations to recommending entire playlists. Given an input playlist, we would return the k closest or most similar playlists. We took a \u201ccontinuous bag of songs\u201d approach to this problem by calculating playlist embeddings as the average of all song embeddings in that playlist.\n\nThis workflow started by reading back the song embeddings from MongoDB into a SparkSQL DataFrame. Then, we calculated a playlist embedding by taking the average of all song embeddings in that playlist and saved them in MongoDB.\n\nCheck out the Playlist_Embeddings.ipynb notebook to see how we did this.\n\n### Training Lyrics Embeddings\nAre you still reading? Whew!\n\nWe trained lyrics embeddings by loading in a song's lyrics, separating the words into lists, and feeding those words to a Word2Vec model to produce 32-dimensional vectors for each word. We then took the average embedding across all words as that song's lyrical embedding. Ultimately, our analytical goal here was to determine whether users create playlists based on common lyrical themes by seeing if the pairwise song embedding distance and the pairwise lyrical embedding distance between two songs were correlated. Unsurprisingly, it appears they are not.\n\nCheck out the Lyrical_Embeddings.ipynb notebook to see our analysis.\n\n## Notes on our Approach\nYou may be wondering why we used a language model (Word2Vec) to train these embeddings. Why not use a Pin2Vec or custom neural network model to predict implicit ratings? For practical reasons, we wanted to work exclusively in the Spark ecosystem and deal with the data in a distributed fashion. This was a constraint set on the project ahead of time and challenged us to think creatively.\n\nHowever, we found Word2Vec an attractive candidate model for theoretical reasons as well. The Word2Vec model uses a word\u2019s context to train static embeddings by training the input word\u2019s embeddings to predict its surrounding words. In essence, the embedding of any word is determined by how it co-occurs with other words. This had a clear mapping to our own problem: by using a Word2Vec model the distance between song embeddings would reflect the songs\u2019 co-occurrence throughout 1M playlists, making it a useful measure for a distance-based recommendation (nearest neighbors). 
It would effectively model how people grouped songs together, using user behavior as the determinant factor in similarity.\n\nAdditionally, the Word2Vec model accepts input in the form of a list of words. For each playlist we had a list of track IDs, which made working with the Word2Vec model not only conceptually but also practically appealing.\n\n## Data Visualizations with Tensorflow and MongoDB\nAfter all of that, we were finally ready to visualize our results and make some interactive recommendations. We decided to represent our embedding results visually using Tensorflow\u2019s Embedding Projector which maps the 32-dimensional song and playlist embeddings into an interactive visualization of a 3D embedding space. You have the choice of using PCA or tSNE for dimensionality reduction and cosine similarity or Euclidean distance for measuring distances between vectors.\n\nClick here for the song embeddings projector for the full 2 million songs, or here for a less crowded version with a random sample of 100k songs (shown below):\n\nThe neat thing about using Tensorflow\u2019s projector is that it gives us a beautiful visualization tool and distance calculator all in one. Try searching on the right panel for a song and if the song is part of the original dataset, you will see the \u201cmost similar\u201d songs appear under it.\n\n## Using MongoDB for ML/AI\nWe were impressed by how easy it was to use MongoDB to reliably store and load our data. Because we were using distributed computing, it would have been infeasible to run our pipeline from start to finish any time we wanted to update our code or fine-tune the model. MongoDB allowed us to save our incremental results for later processing and modeling, which collectively saved us hours of waiting for code to re-run.\n\nIt worked well with all the tools we use everyday and the tooling we chose - we didn't have any areas of friction. \n\nWe were shocked by how this method of training embeddings actually worked. While the 2 million song embedding projector is crowded visually, we see that the recommendations it produces are actually quite good at grouping songs together.\n\nConsider the embedding recommendation for The Beatles\u2019 \u201cA Day In The Life\u201d:\n\nOr the recommendation for Jay Z\u2019s \u201cHeart of the City (Ain\u2019t No Love)\u201d:\n\nFan of Taylor Swift? Here are the recommendations for \u201cNew Romantics\u201d:\n\nWe were delighted to find naturally occurring clusters in the playlist embeddings. Most notably, we see a cluster containing mostly Christian rock, one with Christmas music, one for reggaeton, and one large cluster where genres span its length rather continuously and intuitively.\n\nNote also that when we select a playlist, we have many recommended playlists with the same names. This in essence validates our song embeddings. Recall that playlist embeddings were created by taking the average embedding of all its songs; the name of the playlists did not factor in at all. The similar names only conceptually reinforce this fact.\n\n## Next Steps?\nWe felt happy with the conclusion of this project but there is more that could be done here.\n\n1. We could use these trained song embeddings in other downstream tasks and see how effective these are. Also, you could download the song embeddings we here: Embeddings | Meta Info\n2. 
We could look at other methods of training these embeddings using some recurrent neural networks and enhanced implementation of this Word2Vec model.\n\n", "format": "md", "metadata": {"tags": ["Python", "MongoDB", "Spark", "AI"], "pageDescription": "Python code example application for Spotify playlist and song recommendations using spark and tensorflow", "contentType": "Code Example"}, "title": "A Spotify Song and Playlist Recommendation Engine", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/farm-stack-fastapi-react-mongodb", "action": "created", "body": "# Introducing FARM Stack - FastAPI, React, and MongoDB\n\nWhen I got my first ever programming job, the LAMP (Linux, Apache, MySQL, PHP) stack\u2014and its variations#Variants)\u2014ruled supreme. I used WAMP at work, DAMP at home, and deployed our customers to SAMP. But now all the stacks with memorable acronyms seem to be very JavaScript forward. MEAN (MongoDB, Express, Angular, Node.js), MERN (MongoDB, Express, React, Node.js), MEVN (MongoDB, Express, Vue, Node.js), JAM (JavaScript, APIs, Markup), and so on.\n\nAs much as I enjoy working with React and Vue, Python is still my favourite language for building back end web services. I wanted the same benefits I got from MERN\u2014MongoDB, speed, flexibility, minimal boilerplate\u2014but with Python instead of Node.js. With that in mind, I want to introduce the FARM stack; FastAPI, React, and MongoDB.\n\n## What is FastAPI?\n\nThe FARM stack is in many ways very similar to MERN. We've kept MongoDB and React, but we've replaced the Node.js and Express back end with Python and FastAPI. FastAPI is a modern, high-performance, Python 3.6+ web framework. As far as web frameworks go, it's incredibly new. The earliest git commit I could find is from December 5th, 2018, but it is a rising star in the Python community. It is already used in production by the likes of Microsoft, Uber, and Netflix.\n\nAnd it is speedy. Benchmarks show that it's not as fast as golang's chi or fasthttp, but it's faster than all the other Python frameworks tested and beats out most of the Node.js ones too.\n\n## Getting Started\n\nIf you would like to give the FARM stack a try, I've created an example TODO application you can clone from GitHub.\n\n``` shell\ngit clone git@github.com:mongodb-developer/FARM-Intro.git\n```\n\nThe code is organised into two directories: back end and front end. The back end code is our FastAPI server. The code in this directory interacts with our MongoDB database, creates our API endpoints, and thanks to OAS3 (OpenAPI Specification 3). It also generates our interactive documentation.\n\n## Running the FastAPI Server\n\nBefore I walk through the code, try running the FastAPI server for yourself. You will need Python 3.8+ and a MongoDB database. A free Atlas Cluster will be more than enough. 
Make a note of your MongoDB username, password, and connection string as you'll need those in a moment.\n\n### Installing Dependencies\n\n``` shell\ncd FARM-Intro/backend\npip install -r requirements.txt\n```\n\n### Configuring Environment Variables\n\n``` shell\nexport DEBUG_MODE=True\nexport DB_URL=\"mongodb+srv://:@/?retryWrites=true&w=majority\"\nexport DB_NAME=\"farmstack\"\n```\n\nOnce you have everything installed and configured, you can run the server with `python main.py` and visit in your browser.\n\nThis interactive documentation is automatically generated for us by FastAPI and is a great way to try your API during development. You can see we have the main elements of CRUD covered. Try adding, updating, and deleting some Tasks and explore the responses you get back from the FastAPI server.\n\n## Creating a FastAPI Server\n\nWe initialise the server in `main.py`; this is where we create our app.\n\n``` python\napp = FastAPI()\n```\n\nAttach our routes, or API endpoints.\n\n``` python\napp.include_router(todo_router, tags=\"tasks\"], prefix=\"/task\")\n```\n\nStart the async event loop and ASGI server.\n\n``` python\nif __name__ == \"__main__\":\n uvicorn.run(\n \"main:app\",\n host=settings.HOST,\n reload=settings.DEBUG_MODE,\n port=settings.PORT,\n )\n```\n\nAnd it is also where we open and close our connection to our MongoDB server.\n\n``` python\n@app.on_event(\"startup\")\nasync def startup_db_client():\n app.mongodb_client = AsyncIOMotorClient(settings.DB_URL)\n app.mongodb = app.mongodb_client[settings.DB_NAME]\n\n@app.on_event(\"shutdown\")\nasync def shutdown_db_client():\n app.mongodb_client.close()\n```\n\nBecause FastAPI is an async framework, we're using Motor to connect to our MongoDB server. [Motor is the officially maintained async Python driver for MongoDB.\n\nWhen the app startup event is triggered, I open a connection to MongoDB and ensure that it is available via the app object so I can access it later in my different routers.\n\n### Defining Models\n\nMany people think of MongoDB as being schema-less, which is wrong. MongoDB has a flexible schema. That is to say that collections do not enforce document structure by default, so you have the flexibility to make whatever data-modelling choices best match your application and its performance requirements. So, it's not unusual to create models when working with a MongoDB database.\n\nThe models for the TODO app are in `backend/apps/todo/models.py`, and it is these models which help FastAPI create the interactive documentation.\n\n``` python\nclass TaskModel(BaseModel):\n id: str = Field(default_factory=uuid.uuid4, alias=\"_id\")\n name: str = Field(...)\n completed: bool = False\n\n class Config:\n allow_population_by_field_name = True\n schema_extra = {\n \"example\": {\n \"id\": \"00010203-0405-0607-0809-0a0b0c0d0e0f\",\n \"name\": \"My important task\",\n \"completed\": True,\n }\n }\n```\n\nI want to draw attention to the `id` field on this model. MongoDB uses `_id`, but in Python, underscores at the start of attributes have special meaning. If you have an attribute on your model that starts with an underscore, pydantic\u2014the data validation framework used by FastAPI\u2014will assume that it is a private variable, meaning you will not be able to assign it a value! To get around this, we name the field `id` but give it an `alias` of `_id`. You also need to set `allow_population_by_field_name` to `True` in the model's `Config` class.\n\nYou may notice I'm not using MongoDB's ObjectIds. 
You can use ObjectIds with FastAPI; there is just more work required during serialisation and deserialisation. Still, for this example, I found it easier to generate the UUIDs myself, so they're always strings.\n\n``` python\nclass UpdateTaskModel(BaseModel):\n name: Optionalstr]\n completed: Optional[bool]\n\n class Config:\n schema_extra = {\n \"example\": {\n \"name\": \"My important task\",\n \"completed\": True,\n }\n }\n```\n\nWhen users are updating tasks, we do not want them to change the id, so the `UpdateTaskModel` only includes the name and completed fields. I've also made both fields optional so that you can update either of them independently. Making both of them optional did mean that all fields were optional, which caused me to spend far too long deciding on how to handle a `PUT` request (an update) where the user did not send any fields to be changed. We'll see that next when we look at the routers.\n\n### FastAPI Routers\n\nThe task routers are within `backend/apps/todo/routers.py`.\n\nTo cover the different CRUD (Create, Read, Update, and Delete) operations, I needed the following endpoints:\n\n- POST /task/ - creates a new task.\n- GET /task/ - view all existing tasks.\n- GET /task/{id}/ - view a single task.\n- PUT /task/{id}/ - update a task.\n- DELETE /task/{id}/ - delete a task.\n\n#### Create\n\n``` python\n@router.post(\"/\", response_description=\"Add new task\")\nasync def create_task(request: Request, task: TaskModel = Body(...)):\n task = jsonable_encoder(task)\n new_task = await request.app.mongodb[\"tasks\"].insert_one(task)\n created_task = await request.app.mongodb[\"tasks\"].find_one(\n {\"_id\": new_task.inserted_id}\n )\n\n return JSONResponse(status_code=status.HTTP_201_CREATED, content=created_task)\n```\n\nThe create_task router accepts the new task data in the body of the request as a JSON string. We write this data to MongoDB, and then we respond with an HTTP 201 status and the newly created task.\n\n#### Read\n\n``` python\n@router.get(\"/\", response_description=\"List all tasks\")\nasync def list_tasks(request: Request):\n tasks = []\n for doc in await request.app.mongodb[\"tasks\"].find().to_list(length=100):\n tasks.append(doc)\n return tasks\n```\n\nThe list_tasks router is overly simplistic. In a real-world application, you are at the very least going to need to include pagination. 
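For instance, a hand-rolled approach might accept `skip` and `limit` query parameters and pass them straight through to Motor. The snippet below is only a sketch of how the `list_tasks` router above could be extended; it is not part of the example application.

``` python
# A sketch of list_tasks with basic skip/limit pagination; reuses the imports in routers.py.
@router.get("/", response_description="List tasks, paginated")
async def list_tasks(request: Request, skip: int = 0, limit: int = 20):
    # Cap the page size so a single request cannot pull back an unbounded result set.
    limit = min(limit, 100)
    cursor = request.app.mongodb["tasks"].find().skip(skip).limit(limit)
    return [doc async for doc in cursor]
```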
Thankfully, there are [packages for FastAPI which can simplify this process.\n\n``` python\n@router.get(\"/{id}\", response_description=\"Get a single task\")\nasync def show_task(id: str, request: Request):\n if (task := await request.app.mongodb\"tasks\"].find_one({\"_id\": id})) is not None:\n return task\n\n raise HTTPException(status_code=404, detail=f\"Task {id} not found\")\n```\n\nWhile FastAPI supports Python 3.6+, it is my use of assignment expressions in routers like this one, which is why this sample application requires Python 3.8+.\n\nHere, I'm raising an exception if we cannot find a task with the correct id.\n\n#### Update\n\n``` python\n@router.put(\"/{id}\", response_description=\"Update a task\")\nasync def update_task(id: str, request: Request, task: UpdateTaskModel = Body(...)):\n task = {k: v for k, v in task.dict().items() if v is not None}\n\n if len(task) >= 1:\n update_result = await request.app.mongodb[\"tasks\"].update_one(\n {\"_id\": id}, {\"$set\": task}\n )\n\n if update_result.modified_count == 1:\n if (\n updated_task := await request.app.mongodb[\"tasks\"].find_one({\"_id\": id})\n ) is not None:\n return updated_task\n\n if (\n existing_task := await request.app.mongodb[\"tasks\"].find_one({\"_id\": id})\n ) is not None:\n return existing_task\n\n raise HTTPException(status_code=404, detail=f\"Task {id} not found\")\n```\n\nWe don't want to update any of our fields to empty values, so first of all, we remove those from the update document. As mentioned above, because all values are optional, an update request with an empty payload is still valid. After much deliberation, I decided that in that situation, the correct thing for the API to do is to return the unmodified task and an HTTP 200 status.\n\nIf the user has supplied one or more fields to be updated, we attempt to `$set` the new values with `update_one`, before returning the modified document. However, if we cannot find a document with the specified id, our router will raise a 404.\n\n#### Delete\n\n``` python\n@router.delete(\"/{id}\", response_description=\"Delete Task\")\nasync def delete_task(id: str, request: Request):\n delete_result = await request.app.mongodb[\"tasks\"].delete_one({\"_id\": id})\n\n if delete_result.deleted_count == 1:\n return JSONResponse(status_code=status.HTTP_204_NO_CONTENT)\n\n raise HTTPException(status_code=404, detail=f\"Task {id} not found\")\n```\n\nThe final router does not return a response body on success, as the requested document no longer exists as we have just deleted it. Instead, it returns an HTTP status of 204 which means that the request completed successfully, but the server doesn't have any data to give you.\n\n## The React Front End\n\nThe React front end does not change as it is only consuming the API and is therefore somewhat back end agnostic. It is mostly the standard files generated by `create-react-app`. 
So, to start our React front end, open a new terminal window\u2014keeping your FastAPI server running in the existing terminal\u2014and enter the following commands inside the front end directory.\n\n``` shell\nnpm install\nnpm start\n```\n\nThese commands may take a little while to complete, but afterwards, it should open a new browser window to .\n\n![Screenshot of Timeline in browser\n\nThe React front end is just a view of our task list, but you can update\nyour tasks via the FastAPI documentation and see the changes appear in\nReact!\n\nThe bulk of our front end code is in `frontend/src/App.js`\n\n``` javascript\nuseEffect(() => {\n const fetchAllTasks = async () => {\n const response = await fetch(\"/task/\")\n const fetchedTasks = await response.json()\n setTasks(fetchedTasks)\n }\n\n const interval = setInterval(fetchAllTasks, 1000)\n\n return () => {\n clearInterval(interval)\n }\n}, ])\n```\n\nWhen our component mounts, we start an interval which runs each second and gets the latest list of tasks before storing them in our state. The function returned at the end of the hook will be run whenever the component dismounts, cleaning up our interval.\n\n``` javascript\nuseEffect(() => {\n const timelineItems = tasks.reverse().map((task) => {\n return task.completed ? (\n }\n color=\"green\"\n style={{ textDecoration: \"line-through\", color: \"green\" }}\n >\n {task.name} ({task._id})\n \n ) : (\n }\n color=\"blue\"\n style={{ textDecoration: \"initial\" }}\n >\n {task.name} ({task._id})\n \n )\n })\n\n setTimeline(timelineItems)\n}, [tasks])\n```\n\nThe second hook is triggered whenever the task list in our state changes. This hook creates a `Timeline Item` component for each task in our list.\n\n``` javascript\n<>\n \n \n {timeline}\n \n \n\n```\n\nThe last part of `App.js` is the markup to render the tasks to the page. If you have worked with MERN or another React stack before, this will likely seem very familiar.\n\n## Wrapping Up\n\nI'm incredibly excited about the FARM stack, and I hope you are now too. We're able to build highly performant, async, web applications using my favourite technologies! In my next article, we'll look at how you can add authentication to your FARM applications.\n\nIn the meantime, check out the [FastAPI and Motor documentation, as well as the other useful packages and links in this Awesome FastAPI list.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Python", "JavaScript", "FastApi"], "pageDescription": "Introducing FARM Stack - FastAPI, React, and MongoDB", "contentType": "Article"}, "title": "Introducing FARM Stack - FastAPI, React, and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/improve-your-apps-search-results-with-auto-tuning", "action": "created", "body": "# Improve Your App's Search Results with Auto-Tuning\n\nHistorically, the only way to improve your app\u2019s search query relevance is through manual intervention. For example, you can introduce score boosting to multiply a base relevance score in the presence of particular fields. This ensures that searches where a key present in some fields weigh higher than others. This is, however, fixed by nature. 
The results are dynamic but the logic itself doesn\u2019t change.\n\nThe following project will showcase how to leverage synonyms to create a feedback loop that is self-tuning, in order to deliver incrementally more relevant search results to your users\u2014*all without complex machine learning models!*\n\n## Example\n\nWe have a food search application where a user searches for \u201cRomanian Food.\u201d Assuming that we\u2019re logging every user's clickstream data (their step-by-step interaction with our application), we can take a look at this \u201csequence\u201d and compare it to other results that have yielded a strong CTA (call-to-action): a successful checkout.\n\nAnother user searched for \u201cGerman Cuisine\u201d and that had a very similar clickstream sequence. Well, we can build a script that analyzes both these users\u2019 (and other users\u2019) clickstreams, identify similarities, we can tell the script to append it to a synonyms document that contains \u201cGerman,\u201d \u201cRomanian,\u201d and other more common cuisines, like \u201cHungarian.\u201d\n\nHere\u2019s a workflow of what we\u2019re looking to accomplish:\n\n## Tutorial\n\n### Step 1: Log user\u2019s clickstream activity\n\nIn our app tier, as events are fired, we log them to a clickstreams collection, like:\n\n```\n{\n\"session_id\": \"1\",\n\"event_id\": \"search_query\",\n\"metadata\": {\n\"search_value\": \"romanian food\"\n},\n\"timestamp\": \"1\"\n},\n{\n\"session_id\": \"1\",\n\"event_id\": \"add_to_cart\",\n\"product_category\":\"eastern european cuisine\",\n\"timestamp\": \"2\"\n},\n{\n\"session_id\": \"1\",\n\"event_id\": \"checkout\",\n\"timestamp\": \"3\"\n},\n{\n\"session_id\": \"1\",\n\"event_id\": \"payment_success\",\n\"timestamp\": \"4\"\n},\n{\n\"session_id\": \"2\",\n\"event_id\": \"search_query\",\n\"metadata\": {\n\"search_value\": \"hungarian food\"\n},\n\"timestamp\": \"1\"\n},\n{\n\"session_id\": \"2\",\n\"event_id\": \"add_to_cart\",\n\"product_category\":\"eastern european cuisine\",\n\"timestamp\": \"2\"\n}\n]\n\n```\n\nIn this simplified list of events, we can conclude that {\"session_id\":\"1\"} searched for \u201cromanian food,\u201d which led to a higher conversion rate, payment_success, compared to {\"session_id\":\"2\"}, who searched \u201chungarian food\u201d and stalled after the add_to_cart event.\nYou can import this data yourself using [sample_data.json.\n\nLet\u2019s prepare the data for our search_tuner script.\n\n### Step 2: Create a view that groups by session_id, then filters on the presence of searches\n\nBy the way, it\u2019s no problem that only some documents have a metadata field. 
Our $group operator can intelligently identify the ones that do vs don\u2019t.\n\n```\n\n # first we sort by timestamp to get everything in the correct sequence of events,\n # as that is what we'll be using to draw logical correlations\n {\n '$sort': {\n 'timestamp': 1\n }\n },\n # next, we'll group by a unique session_id, include all the corresponding events, and begin\n # the filter for determining if a search_query exists\n {\n '$group': {\n '_id': '$session_id',\n 'events': {\n '$push': '$$ROOT'\n },\n 'isSearchQueryPresent': {\n '$sum': {\n '$cond': [\n {\n '$eq': [\n '$event_id', 'search_query'\n ]\n }, 1, 0\n ]\n }\n }\n }\n },\n # we hide session_ids where there is no search query\n # then create a new field, an array called searchQuery, which we'll use to parse\n {\n '$match': {\n 'isSearchQueryPresent': {\n '$gte': 1\n }\n }\n },\n {\n '$unset': 'isSearchQueryPresent'\n },\n {\n '$set': {\n 'searchQuery': '$events.metadata.search_value'\n }\n }\n]\n\n```\n\nLet\u2019s create the view by building the query, then going into Compass and adding it as a new collection called group_by_session_id_and_search_query:\n\n![screenshot of creating a view in compass\n\nHere\u2019s what it will look like:\n\n```\n\n {\n \"session_id\": \"1\",\n \"events\": [\n {\n \"event_id\": \"search_query\",\n \"search_value\": \"romanian food\"\n },\n {\n \"event_id\": \"add_to_cart\",\n \"context\": {\n \"cuisine\": \"eastern european cuisine\"\n }\n },\n {\n \"event_id\": \"checkout\"\n },\n {\n \"event_id\": \"payment_success\"\n }\n ],\n \"searchQuery\": \"romanian food\"\n }, {\n \"session_id\": \"2\",\n \"events\": [\n {\n \"event_id\": \"search_query\",\n \"search_value\": \"hungarian food\"\n },\n {\n \"event_id\": \"add_to_cart\",\n \"context\": {\n \"cuisine\": \"eastern european cuisine\"\n }\n },\n {\n \"event_id\": \"checkout\"\n }\n ],\n \"searchQuery\": \"hungarian food\"\n },\n {\n \"session_id\": \"3\",\n \"events\": [\n {\n \"event_id\": \"search_query\",\n \"search_value\": \"italian food\"\n },\n {\n \"event_id\": \"add_to_cart\",\n \"context\": {\n \"cuisine\": \"western european cuisine\"\n }\n }\n ],\n \"searchQuery\": \"sad food\"\n }\n]\n\n```\n\n### Step 3: Build a scheduled job that compares similar clickstreams and pushes the resulting synonyms to the synonyms collection\n\n```\n// Provide a success indicator to determine which session we want to\n// compare any incomplete sessions with\nconst successIndicator = \"payment_success\"\n\n// what percentage similarity between two sets of click/event streams\n// we'd accept to be determined as similar enough to produce a synonym\n// relationship\nconst acceptedConfidence = .9\n\n// boost the confidence score when the following values are present\n// in the eventstream\nconst eventBoosts = {\n successIndicator: .1\n}\n\n/**\n * Enrich sessions with a flattened event list to make comparison easier.\n * Determine if the session is to be considered successful based on the success indicator.\n * @param {*} eventList List of events in a session.\n * @returns {any} Calculated values used to determine if an incomplete session is considered to\n * be related to a successful session.\n */\nconst enrichEvents = (eventList) => {\n return {\n eventSequence: eventList.map(event => { return event.event_id }).join(';'),\n isSuccessful: eventList.some(event => { return event.event_id === successIndicator })\n }\n}\n\n/**\n * De-duplicate common tokens in two strings\n * @param {*} str1\n * @param {*} str2\n * @returns Returns an array with the 
provided strings with the common tokens removed\n */\nconst dedupTokens = (str1, str2) => {\n const splitToken = ' '\n const tokens1 = str1.split(splitToken)\n const tokens2 = str2.split(splitToken)\n const dupedTokens = tokens1.filter(token => { return tokens2.includes(token)});\n const dedupedStr1 = tokens1.filter(token => { return !dupedTokens.includes(token)});\n const dedupedStr2 = tokens2.filter(token => { return !dupedTokens.includes(token)});\n\n return [ dedupedStr1.join(splitToken), dedupedStr2.join(splitToken) ]\n}\n\nconst findMatchingIndex = (synonyms, results) => {\n let matchIndex = -1\n for(let i = 0; i < results.length; i++) {\n for(const synonym of synonyms) {\n if(results[i].synonyms.includes(synonym)){\n matchIndex = i;\n break;\n }\n }\n }\n return matchIndex;\n}\n/**\n * Inspect the context of two matching sessions.\n * @param {*} successfulSession\n * @param {*} incompleteSession\n */\nconst processMatch = (successfulSession, incompleteSession, results) => {\n console.log(`=====\\nINSPECTING POTENTIAL MATCH: ${ successfulSession.searchQuery} = ${incompleteSession.searchQuery}`);\n let contextMatch = true;\n\n // At this point we can assume that the sequence of events is the same, so we can\n // use the same index when comparing events\n for(let i = 0; i < incompleteSession.events.length; i++) {\n // if we have a context, let's compare the kv pairs in the context of\n // the incomplete session with the successful session\n if(incompleteSession.events[i].context){\n const eventWithContext = incompleteSession.events[i]\n const contextKeys = Object.keys(eventWithContext.context)\n\n try {\n for(const key of contextKeys) {\n if(successfulSession.events[i].context[key] !== eventWithContext.context[key]){\n // context is not the same, not a match, let's get out of here\n contextMatch = false\n break;\n }\n }\n } catch (error) {\n contextMatch = false;\n console.log(`Something happened, probably successful session didn't have a context for an event.`);\n }\n }\n }\n\n // Update results\n if(contextMatch){\n console.log(`VALIDATED`);\n const synonyms = dedupTokens(successfulSession.searchQuery, incompleteSession.searchQuery, true)\n const existingMatchingResultIndex = findMatchingIndex(synonyms, results)\n if(existingMatchingResultIndex >= 0){\n const synonymSet = new Set([...synonyms, ...results[existingMatchingResultIndex].synonyms])\n results[existingMatchingResultIndex].synonyms = Array.from(synonymSet)\n }\n else{\n const result = {\n \"mappingType\": \"equivalent\",\n \"synonyms\": synonyms\n }\n results.push(result)\n }\n\n }\n else{\n console.log(`NOT A MATCH`);\n }\n\n return results;\n}\n\n/**\n * Compare the event sequence of incomplete and successful sessions\n * @param {*} successfulSessions\n * @param {*} incompleteSessions\n * @returns\n */\nconst compareLists = (successfulSessions, incompleteSessions) => {\n let results = []\n for(const successfulSession of successfulSessions) {\n for(const incompleteSession of incompleteSessions) {\n // if the event sequence is the same, let's inspect these sessions\n // to validate that they are a match\n if(successfulSession.enrichments.eventSequence.includes(incompleteSession.enrichments.eventSequence)){\n processMatch(successfulSession, incompleteSession, results)\n }\n }\n }\n return results\n}\n\nconst processSessions = (sessions) => {\n // console.log(`Processing the following list:`, JSON.stringify(sessions, null, 2));\n // enrich sessions for processing\n const enrichedSessions = sessions.map(session => {\n return { 
...session, enrichments: enrichEvents(session.events)}\n })\n // separate successful and incomplete sessions\n const successfulEvents = enrichedSessions.filter(session => { return session.enrichments.isSuccessful})\n const incompleteEvents = enrichedSessions.filter(session => { return !session.enrichments.isSuccessful})\n\n return compareLists(successfulEvents, incompleteEvents);\n}\n\n/**\n * Main Entry Point\n */\nconst main = () => {\n const results = processSessions(eventsBySession);\n console.log(`Results:`, results);\n}\n\nmain();\n\nmodule.exports = processSessions;\n\n```\n\nRun [the script yourself.\n\n### Step 4: Enhance our search query with the newly appended synonyms\n\n```\n\n {\n '$search': {\n 'index': 'synonym-search',\n 'text': {\n 'query': 'hungarian',\n 'path': 'cuisine-type'\n },\n 'synonyms': 'similarCuisines'\n }\n }\n]\n\n```\n\nSee [the synonyms tutorial.\n\n## Next Steps\n\nThere you have it, folks. We\u2019ve taken raw data recorded from our application server and put it to use by building a feedback that encourages positive user behavior.\n\nBy measuring this feedback loop against your KPIs, you can build a simple A/B test against certain synonyms and user patterns to optimize your application!", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "This blog will cover how to leverage synonyms to create a feedback loop that is self-tuning, in order to deliver incrementally more relevant search results to your users.", "contentType": "Tutorial"}, "title": "Improve Your App's Search Results with Auto-Tuning", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/time-series-candlestick-sma-ema", "action": "created", "body": "# Currency Analysis with Time Series Collections #2 \u2014 Simple Moving Average and Exponential Moving Average Calculation\n\n## Introduction\n\nIn the previous post, we learned how to group currency data based on given time intervals to generate candlestick charts to perform trend analysis. In this article, we\u2019ll learn how the moving average can be calculated on time-series data.\n\nMoving average is a well-known financial technical indicator that is commonly used either alone or in combination with other indicators. Additionally, the moving average is included as a parameter of other financial technical indicators like MACD. The main reason for using this indicator is to smooth out the price updates to reflect recent price changes accordingly. There are many types of moving averages but here we\u2019ll focus on two of them: Simple Moving Average (SMA) and Exponential Moving Average (EMA).\n\n## Simple Moving Average (SMA)\n\nThis is the average price value of a currency/stock within a given period. \n\nLet\u2019s calculate the SMA for the BTC-USD currency over the last three data intervals, including the current data. Remember that each stick in the candlestick chart represents five-minute intervals. 
Therefore, for every interval, we would look for the previous three intervals.\n\nFirst we\u2019ll group the BTC-USD currency data for five-minute intervals: \n\n```js\ndb.ticker.aggregate(\n {\n $match: {\n symbol: \"BTC-USD\",\n },\n },\n {\n $group: {\n _id: {\n symbol: \"$symbol\",\n time: {\n $dateTrunc: {\n date: \"$time\",\n unit: \"minute\",\n binSize: 5\n },\n },\n },\n high: { $max: \"$price\" },\n low: { $min: \"$price\" },\n open: { $first: \"$price\" },\n close: { $last: \"$price\" },\n },\n },\n {\n $sort: {\n \"_id.time\": 1,\n },\n },\n]);\n```\n\nAnd, we will have the following candlestick chart:\n\n![Candlestick chart\n\nWe have four metrics for each interval and we will choose the close price as the numeric value for our moving average calculation. We are only interested in `_id` (a nested field that includes the symbol and time information) and the close price. Therefore, since we are not interested in high, low, open prices for SMA calculation, we will exclude it from the aggregation pipeline with the `$project` aggregation stage:\n\n```js\n{\n $project: {\n _id: 1,\n price: \"$close\",\n },\n}\n```\n\nAfter we grouped and trimmed, we will have the following dataset:\n\n```js\n{\"_id\": {\"time\": ISODate(\"20210101T17:00:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35050}\n{\"_id\": {\"time\": ISODate(\"20210101T17:05:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35170}\n{\"_id\": {\"time\": ISODate(\"20210101T17:10:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35280}\n{\"_id\": {\"time\": ISODate(\"20210101T17:15:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 34910}\n{\"_id\": {\"time\": ISODate(\"20210101T17:20:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35060}\n{\"_id\": {\"time\": ISODate(\"20210101T17:25:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35150}\n{\"_id\": {\"time\": ISODate(\"20210101T17:30:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35350}\n```\n\nOnce we have the above dataset, we want to enrich our data with the simple moving average indicator as shown below. Every interval in every symbol will have one more field (sma) to represent the SMA indicator by including the current and last three intervals:\n\n```js\n{\"_id\": {\"time\": ISODate(\"20210101T17:00:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35050, \"sma\": ?}\n{\"_id\": {\"time\": ISODate(\"20210101T17:05:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35170, \"sma\": ?}\n{\"_id\": {\"time\": ISODate(\"20210101T17:10:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35280, \"sma\": ?}\n{\"_id\": {\"time\": ISODate(\"20210101T17:15:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 34910, \"sma\": ?}\n{\"_id\": {\"time\": ISODate(\"20210101T17:20:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35060, \"sma\": ?}\n{\"_id\": {\"time\": ISODate(\"20210101T17:25:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35150, \"sma\": ?}\n{\"_id\": {\"time\": ISODate(\"20210101T17:30:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35350, \"sma\": ?}\n```\n\nHow is it calculated? For the time, `17:00:00`, the calculation of SMA is very simple. Since we don\u2019t have the three previous data points, we can take the existing price (35050) at that time as average. If we don\u2019t have three previous data points, we can get all the available possible price information and divide by the number of price data. \n\nThe harder part comes when we have more than three previous data points. If we have more than three previous data points, we need to remove the older ones. 
And, we have to keep doing this as we have more data for a single symbol. Therefore, we will calculate the average by considering only up to three previous data points. The below table represents the calculation step by step for every interval:\n\n| Time | SMA Calculation for the window (3 previous + current data points) |\n| --- | --- |\n| 17:00:00 | 35050/1 |\n| 17:05:00 | (35050+35170)/2 |\n| 17:10:00 | (35050+35170+35280)/3 |\n| 17:15:00 | (35050+35170+35280+34910)/4 |\n| 17:20:00 | (35170+35280+34910+35060)/4 \n*oldest price data (35050) discarded from the calculation |\n| 17:25:00 | (35280+34910+35060+35150)/4 \n*oldest price data (35170) discarded from the calculation |\n| 17:30:00 | (34190+35060+35150+35350)/4 \n*oldest price data (35280) discarded from the calculation |\n\nAs you see above, the window for the average calculation is moving as we have more data. \n\n## Window Functions\n\nUntil now, we learned the theory of moving average calculation. How can we use MongoDB to do this calculation for all of the currencies?\n\nMongoDB 5.0 introduced a new aggregation stage, `$setWindowFields`, to perform operations on a specified range of documents (window) in the defined partitions. Because it also supports average calculation on a window through `$avg` operator, we can easily use it to calculate Simple Moving Average:\n\n```js\n{\n $setWindowFields: {\n partitionBy: \"_id.symbol\",\n sortBy: { \"_id.time\": 1 },\n output: {\n sma: {\n $avg: \"$price\",\n window: { documents: -3, 0] },\n },\n },\n },\n}\n\n```\n\nWe chose the symbol field as partition key. For every currency, we have a partition, and each partition will have its own window to process that specific currency data. Therefore, when we\u2019d like to process sequential data of a single currency, we will not mingle the other currency\u2019s data.\n\nAfter we set the partition field, we apply sorting to process the data in an ordered way. The partition field provides processing of single currency data together. However, we want to process data as ordered by time. As we see in how SMA is calculated on the paper, the order of the data matters and therefore, we need to specify the field for ordering. \n\nAfter partitions are set and sorted, then we can process the data for each partition. We generate one more field, \u201c`sma`\u201d, and we define the calculation method of this derived field. Here we set three things:\n\n- The operator that is going to be executed (`$avg`).\n- The field (`$price`) where the operator is going to be executed on.\n- The boundaries of the window (`[-3,0]`).\n- `[-3`: \u201cstart from 3 previous data points\u201d.\n- `0]`: \u201cend up with including current data point\u201d.\n - We can also set the second parameter of the window as \u201c`current`\u201d to include the current data point rather than giving numeric value.\n\nMoving the window on the partitioned and sorted data will look like the following. For every symbol, we\u2019ll have a partition, and all the records belonging to that partition will be sorted by the time information:\n\n![Calculation process\n\nThen we will have the `sma` field calculated for every document in the input stream. 
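\n\nTo make this concrete, here is roughly what those enriched documents would look like for the sample BTC-USD prices above, with each `sma` computed over the current and up to three previous close prices (the repeating decimal at 17:10 is shown rounded to two places for readability):\n\n```js\n{\"_id\": {\"time\": ISODate(\"20210101T17:00:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35050, \"sma\": 35050}\n{\"_id\": {\"time\": ISODate(\"20210101T17:05:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35170, \"sma\": 35110}\n{\"_id\": {\"time\": ISODate(\"20210101T17:10:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35280, \"sma\": 35166.67}\n{\"_id\": {\"time\": ISODate(\"20210101T17:15:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 34910, \"sma\": 35102.5}\n{\"_id\": {\"time\": ISODate(\"20210101T17:20:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35060, \"sma\": 35105}\n{\"_id\": {\"time\": ISODate(\"20210101T17:25:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35150, \"sma\": 35100}\n{\"_id\": {\"time\": ISODate(\"20210101T17:30:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35350, \"sma\": 35117.5}\n```\n\n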
You can apply `$round` operator to trim to the specified decimal place in a `$set` aggregation stage:\n\n```js\n{\n $set: {\n sma: { $round: \"$sma\", 2] },\n },\n}\n```\n\nIf we bring all the aggregation stages together, we will end-up with this aggregation pipeline:\n\n```js\ndb.ticker.aggregate([\n {\n $match: {\n symbol: \"BTC-USD\",\n },\n },\n {\n $group: {\n _id: {\n symbol: \"$symbol\",\n time: {\n $dateTrunc: {\n date: \"$time\",\n unit: \"minute\",\n binSize: 5,\n },\n },\n },\n high: { $max: \"$price\" },\n low: { $min: \"$price\" },\n open: { $first: \"$price\" },\n close: { $last: \"$price\" },\n },\n },\n {\n $sort: {\n \"_id.time\": 1,\n },\n },\n {\n $project: {\n _id: 1,\n price: \"$close\",\n },\n },\n {\n $setWindowFields: {\n partitionBy: \"_id.symbol\",\n sortBy: { \"_id.time\": 1 },\n output: {\n sma: {\n $avg: \"$price\",\n window: { documents: [-3, 0] },\n },\n },\n },\n },\n {\n $set: {\n sma: { $round: [\"$sma\", 2] },\n },\n },\n]);\n```\n\nYou may want to add more calculated fields with different options. For example, you can have two SMA calculations with different parameters. One of them could include the last three points as we have done already, and the other one could include the last 10 points, and you may want to compare both. Find the query below:\n\n```js\n{\n $setWindowFields: {\n partitionBy: \"_id.symbol\",\n sortBy: { \"_id.time\": 1 },\n output: {\n sma_3: {\n $avg: \"$price\",\n window: { documents: [-3, 0] },\n },\n sma_10: {\n $avg: \"$price\",\n window: { documents: [-10, 0] },\n },\n },\n },\n }\n\n```\n\nHere in the above code, we set two derived fields. The `sma_3` field represents the moving average for the last three data points, and the `sma_10` field represents the moving average for the 10 last data points. Furthermore, you can compare these two moving averages to take a position on the currency or use it for a parameter for your own technical indicator.\n\nThe below chart shows two moving average calculations. The line with blue color represents the simple moving average with the window `[-3,0]`. The line with the turquoise color represents the simple moving average with the window `[-10,0]`. As you can see, when the window is bigger, reaction to price change gets slower:\n\n![Candlestick chart\n\nYou can even enrich it further with the additional operations such as covariance, standard deviation, and so on. Check the full supported options here. We will cover the Exponential Moving Average here as an additional operation.\n\n## Exponential Moving Average (EMA)\n\nEMA is a kind of moving average. However, it weighs the recent data higher. In the calculation of the Simple Moving Average, we equally weight all the input parameters. However, in the Exponential Moving Average, based on the given parameter, recent data gets more important. Therefore, Exponential Moving Average reacts faster than Simple Moving Average to recent price updates within the similar size window.\n\n`$expMovingAvg` has been introduced in MongoDB 5.0. It takes two parameters: the field name that includes numeric value for the calculation, and `N` or `alpha` value. We\u2019ll set the parameter `N` to specify how many previous data points need to be evaluated while calculating the moving average and therefore, recent records within the `N` data points will have more weight than the older data. 
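\n\nAs a rough sketch of that weighting (illustrative only, not necessarily the exact server-side implementation), an exponential moving average follows a recurrence in which each new price is blended with the previous average, using a smoothing factor commonly derived from `N` as `alpha = 2 / (N + 1)`:\n\n```js\n// Illustrative EMA recurrence: alpha = 2 / (N + 1), and the first price seeds the average\nconst ema = (prices, N) => {\n  const alpha = 2 / (N + 1);\n  return prices.reduce(\n    (acc, price, i) =>\n      acc.concat(i === 0 ? price : alpha * price + (1 - alpha) * acc[i - 1]),\n    []\n  );\n};\n\n// e.g., ema([35050, 35170, 35280, 34910, 35060], 5) weights the latest price (35060) the most\n```\n\n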
You can refer to the documentation for more information:\n\n```js\n{\n $expMovingAvg: {\n input: \"$price\",\n N: 5\n }\n}\n```\n\nIn the below diagram, SMA is represented with the blue line and EMA is represented with the red line, and both are calculated by five recent data points. You can see how the Simple Moving Average reacts slower to the recent price updates than the Exponential Moving Average even though they both have the same records in the calculation:\n\n## Conclusion\n\nMongoDB 5.0, with the introduction of Windowing Function, makes calculations much easier over a window. There are many aggregation operators that can be executed over a window, and we have seen `$avg` and `$expMovingAvg` in this article. \n\nHere in the given examples, we set the window boundaries by including the positional documents. In other words, we start to include documents from three previous data points to current data point (`documents: -3,0]`). You can also set a range of documents rather than defining position. \n\nFor example, if the window is sorted by time, you can include the last 30 minutes of data (whatever number of documents you have) by specifying the range option as follows: `range: [-30,0], unit: \"minute\". `Now, we may have hundreds of documents in the window but we know that we only include the documents that are not older than 30 minutes than the current data.\n\nYou can also materialize the query output into another collection through [`$out` or `$merge` aggregation stages. And furthermore, you can enable change streams or Database Triggers on the materialized view to automatically trigger buy/sell actions based on the result of technical indicator changes.", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript"], "pageDescription": "Time series collections part 2: How to calculate Simple Moving Average and Exponential Moving Average \n\n", "contentType": "Tutorial"}, "title": "Currency Analysis with Time Series Collections #2 \u2014 Simple Moving Average and Exponential Moving Average Calculation", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/auto-pausing-inactive-clusters", "action": "created", "body": "# Auto Pausing Inactive Clusters\n\n# Auto Pausing Inactive Clusters\n## Introduction\n\nA couple of years ago I wrote an article on how to pause and/or scale clusters using scheduled triggers. This article represents a twist on that concept, adding a wrinkle that will pause clusters across an entire organization based on inactivity. Specifically, I\u2019m looking at the Database Access History to determine activity.\n\nIt is important to note this logging limitation: \n\n_If a cluster experiences an activity spike and generates an extremely large quantity of log messages, Atlas may stop collecting and storing new logs for a period of time._\n\nTherefore, this script could get a false positive that a cluster is inactive when indeed quite the opposite is happening. Given, however, that the intent of this script is for managing lower, non-production environments, I don\u2019t see the false positives as a big concern.\n\n## Architecture\n\nThe implementation uses a Scheduled Trigger. 
The trigger calls a series of App Services Functions, which use the Atlas Administration APIs to iterate over the organization\u2019s projects and their associated clusters, testing the cluster inactivity (as explained in the introduction) and finally pausing the cluster if it is indeed inactive.\n\n \n\n## API Keys\nIn order to call the Atlas Administrative APIs, you'll first need an API Key with the Organization Owner role. API Keys are created in the Access Manager, which you'll find in the Organization menu on the left:\n\n \n\nor the menu bar at the top:\n\n \n\n \n\nClick **Create API Key**. Give the key a description and be sure to set the permissions to **Organization Owner**:\n\n \n\nWhen you click **Next**, you'll be presented with your Public and Private keys. **Save your private key as Atlas will never show it to you again**. \n\nAs an extra layer of security, you also have the option to set an IP Access List for these keys. I'm skipping this step, so my key will work from anywhere.\n\n \n\n## Deployment\n\n### Create a Project for Automation\nSince this solution works across your entire Atlas organization, I like to host it in its own dedicated Atlas Project. \n\n \n\n### Create a App Services Application\nAtlas App Services provide a powerful application development backend as a service. To begin using it, just click the App Services tab.\n\n \n\n You'll see that App Services offers a bunch of templates to get you started. For this use case, just select the first option to **Build your own App**:\n \n \n \n\nYou'll then be presented with options to link a data source, name your application and choose a deployment model. The current iteration of this utility doesn't use a data source, so you can ignore that step (App Services will create a free cluster for you). You can also leave the deployment model at its default (Global), unless you want to limit the application to a specific region. \n\nI've named the application **Atlas Cluster Automation**: \n\n \n \n \n\nAt this point in our journey, you have two options:\n\n1. Simply import the App Services application and adjust any of the functions to fit your needs.\n2. Build the application from scratch (skip to the next section). \n\n## Import Option\n\n### Step 1: Store the API Secret Key.\nThe extract has a dependency on the API Secret Key, thus the import will fail if it is not configured beforehand.\n\nUse the `Values` menu on the left to Create a Secret named `AtlasPrivateKeySecret` containing the private key you created earlier (the secret is not in quotes): \n\n \n \n\n### Step 1: Install the Atlas App Services CLI (realm-cli)\n\nRealm CLI is available on npm. To install version 2 of the Realm CLI on your system, ensure that you have Node.js installed and then run the following command in your shell:\n\n```npm install -g mongodb-realm-cli```\n\n### Step 2: Extract the Application Archive\nDownload and extract the AtlasClusterAutomation.zip.\n\n### Step 3: Log into Atlas\nTo configure your app with realm-cli, you must log in to Atlas using your API keys:\n\n```zsh\n\u2717 realm-cli login --api-key=\"\" --private-api-key=\"\"\nSuccessfully logged in\n```\n\n### Step 4: Get the App Services Application ID\nSelect the `App Settings` menu and copy your Application ID:\n\n### Step 5: Import the Application\nRun the following `realm-cli push` command from the directory where you extracted the export:\n\n```zsh\nrealm-cli push --remote=\"\"\n\n...\nA summary of changes\n...\n\n? 
Please confirm the changes shown above Yes\nCreating draft\nPushing changes\nDeploying draft\nDeployment complete\nSuccessfully pushed app up:\n```\nAfter the import, replace the `AtlasPublicKey` with your API public key value.\n\n \n \n\n### Review the Imported Application\nThe imported application includes 5 Atlas Functions:\n\n \n \n\nAnd the Scheduled Trigger which calls the **pauseInactiveClusters** function:\n\n \n \n\nThe trigger is scheduled to fire every 30 minutes. Note, the **pauseInactiveClusters** function that the trigger calls currently only logs cluster activity. This is so you can monitor and verify that the function behaves as you desire. When ready, uncomment the line that calls the **pauseCluster** function:\n\n```Javascript\n if (!is_active) {\n console.log(`Pausing ${project.name}:${cluster.name} because it has been inactive for more than ${minutesInactive} minutes`); \n //await context.functions.execute(\"pauseCluster\", project.id, cluster.name, pause);\n```\n\nIn addition, the **pauseInactiveClusters** function can be configured to exclude projects (such as those dedicated to production workloads):\n\n```javascript\n /*\n * These project names are just an example. \n * The same concept could be used to exclude clusters or even \n * configure different inactivity intervals by project or cluster.\n * These configuration options could also be stored and read from \n * an Atlas database.\n */\n excludeProjects = ['PROD1', 'PROD2']; \n```\n\nNow that you have reviewed the draft, as a final step go ahead and deploy the App Services application. \n\n![Review Draft & Deploy\n## Build it Yourself Option\nTo understand what's included in the application, here are the steps to build it yourself from scratch. \n\n### Step 1: Store the API Keys\n\nThe functions we need to create will call the Atlas Administration API, so we need to store our API Public and Private Keys, which we will do using Values & Secrets. The sample code I provide references these values as `AtlasPublicKey` and `AtlasPrivateKey`, so use those same names unless you want to change the code where they\u2019re referenced.\n\nYou'll find `Values` under the Build menu:\n\n \n\nFirst, create a Value, `AtlasPublicKey`, for your public key (note, the key is in quotes): \n\n \n\nCreate a Secret, `AtlasPrivateKeySecret`, containing your private key (the secret is not in quotes): \n\n \n\nThe Secret cannot be accessed directly, so create a second Value, `AtlasPrivateKey`, that links to the secret: \n\n \n\n \n\n### Step 2: Create the Functions\n\nThe four functions that need to be created are pretty self-explanatory, so I\u2019m not going to provide a bunch of additional explanations here. \n#### getProjects\n\nThis standalone function can be test run from the App Services console to see the list of all the projects in your organization. 
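\n\nIf you'd rather exercise it from another function, a call is a one-liner (this is the same `context.functions.execute` pattern the scheduled trigger's function uses below):\n\n```Javascript\n// Minimal sketch: invoke getProjects from any other App Services function\nconst projects = await context.functions.execute(\"getProjects\");\nconsole.log(projects.map(project => project.name).join(\", \"));\n```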
\n\n```Javascript\n/*\n * Returns an array of the projects in the organization\n * See https://docs.atlas.mongodb.com/reference/api/project-get-all/\n *\n * Returns an array of objects, e.g.\n *\n * {\n * \"clusterCount\": {\n * \"$numberInt\": \"1\"\n * },\n * \"created\": \"2021-05-11T18:24:48Z\",\n * \"id\": \"609acbef1b76b53fcd37c8e1\",\n * \"links\": \n * {\n * \"href\": \"https://cloud.mongodb.com/api/atlas/v1.0/groups/609acbef1b76b53fcd37c8e1\",\n * \"rel\": \"self\"\n * }\n * ],\n * \"name\": \"mg-training-sample\",\n * \"orgId\": \"5b4e2d803b34b965050f1835\"\n * }\n *\n */\nexports = async function() {\n \n // Get stored credentials...\n const username = await context.values.get(\"AtlasPublicKey\");\n const password = await context.values.get(\"AtlasPrivateKey\");\n \n const arg = { \n scheme: 'https', \n host: 'cloud.mongodb.com', \n path: 'api/atlas/v1.0/groups', \n username: username, \n password: password,\n headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']}, \n digestAuth:true,\n };\n \n // The response body is a BSON.Binary object. Parse it and return.\n response = await context.http.get(arg);\n\n return EJSON.parse(response.body.text()).results; \n};\n\n```\n#### getProjectClusters\n\nAfter `getProjects` is called, the trigger iterates over the results, passing the `projectId` to this `getProjectClusters` function. \n\n_To test this function, you need to supply a `projectId`. By default, the Console supplies \u2018Hello world!\u2019, so I test for that input and provide some default values for easy testing._\n\n```Javascript\n/*\n * Returns an array of the clusters for the supplied project ID.\n * See https://docs.atlas.mongodb.com/reference/api/clusters-get-all/\n *\n * Returns an array of objects. See the API documentation for details.\n * \n */\nexports = async function(project_id) {\n \n if (project_id == \"Hello world!\") { // Easy testing from the console\n project_id = \"5e8f8268d896f55ac04969a1\"\n }\n \n // Get stored credentials...\n const username = await context.values.get(\"AtlasPublicKey\");\n const password = await context.values.get(\"AtlasPrivateKey\");\n \n const arg = { \n scheme: 'https', \n host: 'cloud.mongodb.com', \n path: `api/atlas/v1.0/groups/${project_id}/clusters`, \n username: username, \n password: password,\n headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']}, \n digestAuth:true,\n };\n \n // The response body is a BSON.Binary object. Parse it and return.\n response = await context.http.get(arg);\n\n return EJSON.parse(response.body.text()).results; \n};\n\n```\n\n#### clusterIsActive\n\nThis function contains the logic that determines if the cluster can be paused. \n\nMost of the work in this function is manipulating the timestamp in the database access log so it can be compared to the current time and lookback window. 
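\n\nStripped of the logging, that comparison boils down to something like this simplified sketch of the logic in the full function below:\n\n```Javascript\n// Simplified sketch: is the most recent access newer than the start of the lookback window?\nconst MS_PER_MINUTE = 60000;\nconst lookbackMinutes = 60; // example value; the real function receives this as a parameter\nconst idleStartTime = Date.now() - (lookbackMinutes * MS_PER_MINUTE);\n\n// log.timestamp arrives as a string, so Date.parse() converts it to milliseconds for comparison\nconst logTime = Date.parse(\"2021-11-03T19:48:45Z\");\nconst isActive = logTime > idleStartTime;\n```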
\n\nIn addition to returning true (active) or false (inactive), the function logs it\u2019s findings, for example: \\\n \\\n`Checking if cluster 'SA-SHARED-DEMO' has been active in the last 60 minutes`\n\n```ZSH\n Wed Nov 03 2021 19:52:31 GMT+0000 (UTC) - job is being run\n Wed Nov 03 2021 18:52:31 GMT+0000 (UTC) - cluster inactivity before this time will be reported inactive\n Wed Nov 03 2021 19:48:45 GMT+0000 (UTC) - last logged database access\nCluster is Active: Username 'brian' was active in cluster 'SA-SHARED-DEMO' 4 minutes ago.\n```\n\nLike `getClusterProjects`, there\u2019s a block you can use to provide some test project ID and cluster names for easy testing from the App Services console.\n\n```Javascript\n/*\n * Used the database access history to determine if the cluster is in active use.\n * See https://docs.atlas.mongodb.com/reference/api/access-tracking-get-database-history-clustername/\n * \n * Returns true (active) or false (inactive)\n * \n */\nexports = async function(project_id, clusterName, minutes) {\n \n if (project_id == 'Hello world!') { // We're testing from the console\n project_id = \"5e8f8268d896f55ac04969a1\";\n clusterName = \"SA-SHARED-DEMO\";\n minutes = 60;\n } /*else {\n console.log (`project_id: ${project_id}, clusterName: ${clusterName}, minutes: ${minutes}`)\n }*/\n \n // Get stored credentials...\n const username = await context.values.get(\"AtlasPublicKey\");\n const password = await context.values.get(\"AtlasPrivateKey\");\n \n const arg = { \n scheme: 'https', \n host: 'cloud.mongodb.com', \n path: `api/atlas/v1.0/groups/${project_id}/dbAccessHistory/clusters/${clusterName}`, \n //query: {'authResult': \"true\"},\n username: username, \n password: password,\n headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']}, \n digestAuth:true,\n };\n \n // The response body is a BSON.Binary object. 
Parse it and return.\n response = await context.http.get(arg);\n \n accessLogs = EJSON.parse(response.body.text()).accessLogs; \n\n now = Date.now();\n const MS_PER_MINUTE = 60000;\n var durationInMinutes = (minutes < 30) ? 30 : minutes; // The log granularity is 30 minutes.\n var idleStartTime = now - (durationInMinutes * MS_PER_MINUTE);\n \n nowString = new Date(now).toString();\n idleStartTimeString = new Date(idleStartTime).toString();\n console.log(`Checking if cluster '${clusterName}' has been active in the last ${durationInMinutes} minutes`)\n console.log(` ${nowString} - job is being run`);\n console.log(` ${idleStartTimeString} - cluster inactivity before this time will be reported inactive`);\n \n clusterIsActive = false;\n \n accessLogs.every(log => {\n if (log.username != 'mms-automation' && log.username != 'mms-monitoring-agent') {\n \n // Convert string log date to milliseconds \n logTime = Date.parse(log.timestamp);\n\n logTimeString = new Date(logTime);\n console.log(` ${logTimeString} - last logged database access`);\n \n var elapsedTimeMins = Math.round((now - logTime)/MS_PER_MINUTE);\n \n if (logTime > idleStartTime ) {\n console.log(`Cluster is Active: Username '${log.username}' was active in cluster '${clusterName}' ${elapsedTimeMins} minutes ago.`);\n clusterIsActive = true;\n return false;\n } else {\n // The first log entry is older than our inactive window\n console.log(`Cluster is Inactive: Username '${log.username}' was active in cluster '${clusterName}' ${elapsedTimeMins} minutes ago.`);\n clusterIsActive = false;\n return false;\n }\n }\n return true;\n\n });\n\n return clusterIsActive;\n\n};\n\n```\n\n#### pauseCluster\n\nFinally, if the cluster is inactive, we pass the project ID and cluster name to `pauseCluster`. This function can also resume a cluster, although that feature is not utilized for this use case.\n\n```Javascript\n/*\n * Pauses the named cluster \n * See https://docs.atlas.mongodb.com/reference/api/clusters-modify-one/\n *\n */\nexports = async function(projectID, clusterName, pause) {\n \n // Get stored credentials...\n const username = await context.values.get(\"AtlasPublicKey\");\n const password = await context.values.get(\"AtlasPrivateKey\");\n \n const body = {paused: pause};\n \n const arg = { \n scheme: 'https', \n host: 'cloud.mongodb.com', \n path: `api/atlas/v1.0/groups/${projectID}/clusters/${clusterName}`, \n username: username, \n password: password,\n headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']}, \n digestAuth:true,\n body: JSON.stringify(body)\n };\n \n // The response body is a BSON.Binary object. Parse it and return.\n response = await context.http.patch(arg);\n\n return EJSON.parse(response.body.text()); \n};\n```\n\n### pauseInactiveClusters\n\nThis function will be called by a trigger. As it's not possible to pass a parameter to a scheduled trigger, it uses a hard-coded lookback window of 60 minutes that you can change to meet your needs. You could even store the value in an Atlas database and build a UI to manage its setting :-).\n\nThe function will evaluate all projects and clusters in the organization where it\u2019s hosted. Understanding that there are likely projects or clusters that you never want paused, the function also includes an excludeProjects array, where you can specify a list of project names to exclude from evaluation.\n\nFinally, you\u2019ll notice the call to `pauseCluster` is commented out. 
I suggest you run this function for a couple of days and review the Trigger logs to verify it behaves as you\u2019d expect.\n\n```Javascript\n/*\n * Iterates over the organizations projects and clusters, \n * pausing clusters inactive for the configured minutes.\n */\nexports = async function() {\n \n minutesInactive = 60;\n \n /*\n * These project names are just an example. \n * The same concept could be used to exclude clusters or even \n * configure different inactivity intervals by project or cluster.\n * These configuration options could also be stored and read from \n * and Atlas database.\n */\n excludeProjects = ['PROD1', 'PROD2']; \n \n const projects = await context.functions.execute(\"getProjects\");\n \n projects.forEach(async project => {\n \n if (excludeProjects.includes(project.name)) {\n console.log(`Project '${project.name}' has been excluded from pause.`)\n } else {\n \n console.log(`Checking project '${project.name}'s clusters for inactivity...`);\n\n const clusters = await context.functions.execute(\"getProjectClusters\", project.id);\n \n clusters.forEach(async cluster => {\n \n if (cluster.providerSettings.providerName != \"TENANT\") { // It's a dedicated cluster than can be paused\n \n if (cluster.paused == false) {\n \n is_active = await context.functions.execute(\"clusterIsActive\", project.id, cluster.name, minutesInactive);\n \n if (!is_active) {\n console.log(`Pausing ${project.name}:${cluster.name} because it has been inactive for more then ${minutesInactive} minutes`); \n //await context.functions.execute(\"pauseCluster\", project.id, cluster.name, true);\n } else {\n console.log(`Skipping pause for ${project.name}:${cluster.name} because it has active database users in the last ${minutesInactive} minutes.`);\n }\n }\n }\n });\n }\n });\n\n return true;\n};\n```\n\n### Step 3: Create the Scheduled Trigger\n\nYes, we\u2019re still using a [scheduled trigger, but this time the trigger will run periodically to check for cluster inactivity. Now, your developers working late into the night will no longer have the cluster paused underneath them. \n\n \n\n### Step 4: Deploy\n\nAs a final step you need to deploy the App Services application. \n\n \n\n## Summary\n\nThe genesis for this article was a customer, when presented my previous article on scheduling cluster pauses, asked if the same could be achieved based on inactivity. It\u2019s my belief that with the Atlas APIs, anything could be achieved. The only question was what constitutes inactivity? Given the heartbeat and replication that naturally occurs, there\u2019s always some \u201cactivity\u201d on the cluster. Ultimately, I settled on database access as the guide. Over time, that metric may be combined with some additional metrics or changed to something else altogether, but the bones of the process are here.\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "One of Atlas' many great features is that it provides you the ability to pause clusters that are not currently needed, which primarily includes non-prod environments. 
This article shows you how to automatically pause clusters that go unused for a any period of time that you desire.", "contentType": "Article"}, "title": "Auto Pausing Inactive Clusters", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/document-swift-powered-frameworks-using-docc", "action": "created", "body": "# Document our Realm-Powered Swift Frameworks using DocC\n\n## Introduction\n\nIn the previous post of this series we added Realm to a really simple Binary Tree library. The idea was to create a Package that, using Realm, allowed you to define binary trees and store them locally.\n\nNow we have that library, but how will anyone know how to use it? We can write a manual, a blog post or FAQs, but luckily Xcode allowed us to add Documentation Comments since forever. And in WWDC 21 Apple announced the new Documentation Compiler, DocC, which takes all our documentation comments and creates a nice, powerful documentation site for our libraries and frameworks.\n\nLet\u2019s try it documenting our library!\n\n## Documentation Comments\n\nComments are part of any language. You use regular comments to explain some specially complicated piece of code, to leave a reason about why some code was constructed in a certain way or just to organise a really big file or function (By the way, if this happens, split it, it\u2019s better). These comments start with `//` for single line comments or with `/*` for block comments.\n\nAnd please, please please don't use comments for things like\n\n```swift\n\n// i is now 0\n\ni = 0\n```\n\nFor example, these are line comments from `Package.swift`:\n\n```swift\n\n// swift-tools-version:5.5\n\n// The swift-tools-version declares the minimum version of Swift required to build this package.\n\n```\n\nWhile this is a regular block comment:\n\n```swift\n\n/*\n File.swift\n \n Created by The Realm Team on 16/6/21.\n*/\n\n```\n\nWe\u2019ve had Documentation Comments in Swift (and Objective C) since forever. This is how they look:\n\n```swift\n/// Single-line documentation comment starts with three /\n```\n\n```swift\n/**\n Multi-line documentation\n Comment\n Block starts with /**\n*/\n```\n\nThese are similar in syntax, although have two major differences:\n\n* You can write Markup in documentation comments and Xcode will render it\n* These comments explain _what_ something is and _how it\u2019s used_, not how it\u2019s coded.\n\nDocumentation comments are perfect to explain what that class does, or how to use this function. If you can\u2019t put it in plain words, probably you don\u2019t understand what they do and need to think about it a bit more. Also, having clearly stated what a function receives as parameters, what returns, edge cases and possible side effects helps you a lot while writing unit tests. Is simple to test something that you\u2019ve just written how it should be used, what behaviour will exhibit and which values should return. \n\n## DocC\n\nApple announced the Documentation Compiler, DocC, during WWDC21. This new tool, integrated with Xcode 13 allows us to generate a Documentation bundle that can be shared, with beautiful web pages containing all our symbols (classes, structs, functions, etc.)\n\nWith DocC we can generate documentation for our libraries and frameworks. 
It won\u2019t work for Apps, as the idea of these comments is to explain how to use a piece of code and that works perfectly with libraries.\n\nDocC allows for much more than just generating a web site from our code. It can host tutorials, and any pages we want to add. Let\u2019s try it!\n\n## Generating Documentation with DocC\n\nFirst, grab the code for the Realm Binary Tree library from this repository. In order to do that, run the following commands from a Terminal:\n\n```bash\n$ git clone https://github.com/mongodb-developer/realm-binary-tree\n$ cd realm-binary-tree\n```\n\nIf you want to follow along and make these changes, just checkout the tag `initial-state` with `git checkout initial-state`.\n\nThen open the project by double clicking on the `Package.swift` file. Once Xcode ends getting all necessary dependencies (`Realm-Swift` is the main one) we can generate the documentation clicking in the menu option `Product > Build Documentation` or the associated keyboard shortcut `\u2303\u21e7\u2318D`. This will open the Documentation Browser with our library\u2019s documentation in it.\n\nAs we can see, all of our public symbols (in this case the `BinaryTree` class and `TreeTraversable` protocol are there, with their documentation comments nicely showing. This is how it looks for `TreeTraversable::mapInOrder(tree:closure:)`\n\n## Adding an About Section and Articles\n\nThis is nice, but Xcode 13 now allows us to create a new type of file: a **Documentation Catalog**. This can host Articles, Tutorials and Images. Let\u2019s start by selecting the `Sources > BinaryTree` folder and typing \u2318N to add a new File. Then scroll down to the Documentation section and select `Documentation Catalog`. Give it the name `BinaryTree.docc`. We can rename this resource later as any other file/group in Xcode. We want a name that identifies it clearly when we create an exported documentation package.\n\nLet\u2019s start by renaming the `Documentation.md` file into `BinaryTree.md`. As this has the same name as our Doc Package, everything we put inside this file will appear in the Documentation node of the Framework itself.\n\nWe can add images to our Documentation Catalog simply by dragging them into `Resources`. Then, we can reference those images using the usual Markdown syntax ``. This is how our framework\u2019s main page look like now:\n\nInside this documentation package we can add Articles. Articles are just Markdown pages where we can explain a subject in written longform. Select the Documentation Package `BinaryTree.docc` and add a new file, using \u2318N. Choose `Article File` from `Documentation`. A new Markdown file will be created. Now write your awesome content to explain how your library works, some concepts you need to know before using it, etc.\n\n## Tutorials\n\nTutorials are step by step instructions on how to use your library or framework. Here you can explain, for example, how to initialize a class that needs several parameters injected when calling the `init` method, or how a certain threading problem can be handled. \n\nIn our case, we want to explain how we can create a Tree, and how we can traverse it.\n\nSo first we need a Tutorial File. Go to your `Tutorials` folder and create a new File. Select Documentation > Tutorial File. A Tutorial file describes the steps in a tutorial, so while you scroll through it related code appears, as you can see here in action.\n\nWe need two things: our code snippets and the tutorial file. 
The tutorial file looks like this:\n\n```swift\n@Tutorial(time: 5) {\n @Intro(title: \"Creating Trees\") {\n How to create Trees\n\n @Image(source: seed-tree.jpg, alt: \"This is an image of a Tree\")\n }\n\n @Section(title: \"Creating trees\") {\n @ContentAndMedia() {\n Let's create some trees\n }\n\n @Steps {\n @Step {\n Import `BinaryTree` \n @Code(name: \"CreateTree.swift\", file: 01-create-tree.swift)\n }\n\n @Step {\n Create an empty Tree object\n @Code(name: \"CreateTree.swift\", file: 02-create-tree.swift)\n }\n\n @Step {\n Add left and right children. These children are also of type `RealmBinaryTree`\n @Code(name: \"CreateTree.swift\", file: 03-create-tree.swift)\n }\n }\n }\n}\n``` \n\nAs you can see, we have a first `@Tutorial(time: 5)` line where we put the estimated time to complete this tutorial. Then some introduction text and images, and one `@Section`. We can create as many sections as we need. When the documentation is rendered they\u2019ll correspond to a new page of the tutorial and can be selected from a dropdown picker. As a tutorial is a step-by-step explanation, we now add each and every step, that will have some text that will tell you what the code will do and the code itself you need to enter. \n\nThat code is stored in Resources > code as regular Swift files. So if you have 5 steps you\u2019ll need five files. Each step will show what\u2019s in the associated snippet file, so to make it appear as you advance one step should include the previous step\u2019s code. My approach to code snippets is to do it backwards: first I write the final snippet with the complete sample code, then I duplicate it as many times as steps I have in this tutorial, finally I delete code in each file as needed.\n\n## Recap\n\nIn this post we\u2019ve seen how to add developer documentation to our code, how to generate a DocC package including sample code and tutorials.\n\nThis will help us explain to others how to use our code, how to test it, its limitations and a better understanding of our own code. Explaining how something works is the quickest way to master it.\n\nIn the next post we\u2019ll have a look at how we can host this package online!\n\nIf you have any questions or comments on this post (or anything else Realm-related), then please raise them on our community forum\\(https://www.mongodb.com/community/forums/c/realm-sdks/58). To keep up with the latest Realm news, follow [@realm on Twitter and join the Realm global community.\n\n## Reference Materials\n\n### Sample Repo\n\nSource code repo: https://github.com/mongodb-developer/realm-binary-tree \n\n### Apple DocC documentation\n\nDocumentation about DocC\n\n### WWDC21 Videos\n\n* Meet DocC documentation in Xcode\n* Build interactive tutorials using DocC\n* Elevate your DocC documentation in Xcode\n* Host and automate your DocC documentation\n", "format": "md", "metadata": {"tags": ["Realm", "Swift"], "pageDescription": "Learn how to use the new Documentation Compiler from Apple, DocC, to create outstanding tutorials, how-tos and explain how your Frameworks work.", "contentType": "Article"}, "title": "Document our Realm-Powered Swift Frameworks using DocC", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-javascript-v12", "action": "created", "body": "# What to Expect from Realm JavaScript v12\n\nThe Realm JavaScript team has been working on Realm JavaScript version 12 for a while. 
We have released a number of prereleases to gain confidence in our approach, and we will continue to do so as we uncover and fix issues. We cannot give a date for when we will have the final release, but we would like to give you a brief introduction to what to expect.\n\n## Changes to the existing API\n\nYou will continue to see version 11 releases as bugs are fixed in Realm Core \u2014 our underlying database. All of our effort is now focused on version 12, so we don\u2019t expect to fix any more SDK bugs on version 11, and all new bug reports will be verified against version 12. Moreover, we do not plan any new functionality on version 11.\n\nYou might expect many breaking changes as we are bumping the major version, but we are actually planning to have as few breaking changes as possible. The reason is that the next major version is more breaking for us than you. In reality, it is a complete rewrite of the SDK internals.\n\nWe are changing our collection classes a bit. Today, they derive from a common Collection class that is modeled over ReadonlyArray. It is problematic for Realm.Dictionary as there is no natural ordering. Furthermore, we are deprecating our namespaced API since we find it out of touch with modern TypeScript and JavaScript development. We are dropping support for Altas push notifications (they have been deprecated some time ago). Other changes might come along during the development process and we will document them carefully.\n\nThe goal of the rewrite is to keep the public API as it is, and change the internal implementation. To ensure that we are keeping the API mostly untouched, we are either reusing or rewriting the tests we have written over the years. We implemented the ported tests in JavaScript and rewrote them in TypeScript to help us verify the new TypeScript types. \n\n## Issues with the old architecture\n\nRealm JavaScript has historically been a mixture of C++ and vanilla JavaScript. TypeScript definitions and API documentation have been added on the side. A good portion of the API does not touch a single line of JavaScript code but goes directly to an implementation in C++. This makes it difficult to quickly add new functionality, as you have to decide if it can be implemented in JavaScript, C++, or a mixture of both. Moreover, you need to remember to update TypeScript definitions and API documentation. Consequently, over the years, we have seen issues where either API documentation or TypeScript definitions are not consistent with the implementation.\n\n## Our new architecture\n\nRealm JavaScript builds on Realm Core, which is composed of a storage engine, query engine, and sync client connecting your client device with MongoDB Atlas. Realm Core is a C++ library, and the vast majority of Realm JavaScript\u2019s C++ code in our old architecture calls into Realm Core. Another large portion of our old C++ code is interfacing with the different JavaScript engines we are supporting (currently using NAPI Node.js and Electron] and JSI [JavaScriptCore and Hermes]).\n\nOur rewrite will create two separated layers: i) a handcrafted SDK layer and ii) a generated binding layer. The binding layer is interfacing the JavaScript engines and Realm Core. It is generated code, and our code generator (or binding generator) will read a specification of the Realm Core API and generate C++ code and TypeScript definitions. The generated C++ code can be called from JavaScript or TypeScript.\n\nOn top of the binding layer, we implement a hand-crafted SDK layer. 
It is an implementation of the Realm JavaScript API as you know it. It is implemented by using classes and methods in the binding layer as building blocks. We have chosen to use TypeScript as the implementation language.\n\n![The new architecture of the Realm JavaScript SDK\n\nWe see a number of benefits from this rewrite:\n\n**Deliver new features faster**\n\nFirst, our hypothesis is that we are able to deliver new functionality faster. We don\u2019t have to write so much C++ boilerplate code as we have done in the past.\n\n**Provide a TypeScript-first experience**\n\nSecond, we are implementing the SDK in TypeScript, which guarantees that the TypeScript definitions will be accurate and consistent with the implementation. If you are a TypeScript developer, this is for you. Likely, your editor will guide you through integrating with Realm, and it will be possible to do static type checking and analysis before deploying your app in production. We are also moving from JSDoc to TSDoc so the API documentation will coexist with the SDK implementation. Again, it will help you and your editor in your day-to-day work, as well as eliminating the previously seen inconsistencies between the API documentation and TypeScripts definitions.\n\n**Facilitate community contributions**\n\nThird, we are lowering the bar for you to contribute. In the past, you likely had to have a good understanding of C++ to open a pull request with either a bug fix or a new feature. Many features can now be implemented in TypeScript alone by using the building blocks found in the binding layer. We are looking forward to seeing contributions from you.\n\n**Generate more optimal code**\n\nLast but not least, we hope to be able to generate more optimal code for the supported JavaScript engines. In the past, we had to write C++ code which was working across multiple JavaScript engines. Our early measurements indicate that many parts of the API will be a little faster, and in a few places, it will be much faster.\n\n## New features\n\nAs mentioned earlier, all new functionality will only be released on version 12 and above. Some new functionality has already been merged and released, and more will follow. Let us briefly introduce some highlights to you.\n\nFirst, a new unified logging mechanism has been introduced. It means that you can get more insights into what the storage engine, query engine, and sync client are doing. The goal is to make it easier for you to debug. You provide a callback function to the global logger, and log messages will be captured by calling your function. \n\n```typescript\ntype Log = {\n message: string;\n level: string;\n};\nconst logs: Log] = [];\n\nRealm.setLogger((level, message) => {\n logs.push({ level, message });\n});\n\nRealm.setLogLevel(\"all\");\n```\nSecond, full-text search will be supported. You can mark a string property to be indexed for full-text search, and Realm Query Language allows you to query your Realm. Currently, the feature is limited to European alphabets. Advanced functionality like stemming and spanning across properties will be added later.\n\n```typescript\ninterface IStory {\n title: string;\n content?: string;\n}\nclass Story extends Realm.Object implements IStory {\n title: string;\n content?: string;\n\n static schema: ObjectSchema = {\n name: \"Story\",\n properties: {\n title: { type: \"string\" },\n content: { type: \"string\", indexed: \"full-text\", optional: true },\n },\n primaryKey: \"title\",\n };\n}\n\n// ... 
initialize your app and open your Realm\n\nlet amazingStories = realm.objects(Story).filtered(\"content TEXT 'amazing'\");\n```\nLast, a new subscription API for flexible sync will be added. The aim is to make it easier to subscribe and unsubscribe by providing `subscribe()` and `unsubscribe()` methods directly on the query result.\n\n```typescript\nconst peopleOver20 = await realm\n .objects(\"Person\")\n .filtered(\"age > 20\")\n .subscribe({\n name: \"peopleOver20\",\n behavior: WaitForSync.FirstTime, // Default\n timeout: 2000,\n });\n\n// \u2026\n\npeopleOver20.unsubscribe();\n```\n## A better place\n\nWhile Realm JavaScript version 12 will not bring major changes for you as a developer, we believe that the code base will be at a better place. The code base is easier to work with, and it is an open invitation to you to contribute.\n\nThe new features are additive, and we hope that they will be useful for you. Logging is likely most useful while developing your app, and full-text search can be useful in many use cases. The new flexible sync subscription API is experimental, and we might change it as we get [feedback from you.", "format": "md", "metadata": {"tags": ["Realm", "TypeScript", "JavaScript"], "pageDescription": "The Realm JavaScript team has been working on Realm JavaScript version 12 for a while, and we'd like to give you a brief introduction to what to expect.", "contentType": "Article"}, "title": "What to Expect from Realm JavaScript v12", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/build-your-own-function-retry-mechanism-with-realm", "action": "created", "body": "# Build Your Own Function Retry Mechanism with Realm\n\n## What is Realm?\n \nWondering what it's all about? Realm is an object-oriented data model database that will persist data on disk, doesn\u2019t need an ORM, and lets you write less code with full offline capabilities\u2026 but Realm is also a fully-managed back-end service that helps you deliver best-in-class apps across Android, iOS, and Web. \n \nTo leverage the full BaaS capabilities, Functions allow you to define and execute server-side logic for your application. You can call functions from your client applications as well as from other functions and in JSON expressions throughout Realm. \n \nFunctions are written in modern JavaScript (ES6+) and execute in a serverless manner. When you call a function, you can dynamically access components of the current application as well as information about the request to execute the function and the logged-in user that sent the request. \n \nBy default, Realm Functions have no Node.js modules available for import. If you would like to make use of any such modules, you can upload external dependencies to make them available to import into your Realm Functions. \n \n## Motivation \n \nThis tutorial is born to show how we can create a retry mechanism for our functions. We have to keep in mind that triggers have their own internal automatic retry mechanism that ensures they are executed. However, functions lack such a mechanism. Realm functions are executed as HTTP requests, so it is our responsibility to create a mechanism to retry if they fail. \n \nNext, we will show how we can achieve this mechanism in a simple way that could be applied to any project. \n \n## Flow Diagram\n \nThe main basis of this mechanism will be based on states. In this way, we will be able to contemplate **four different states**. 
Thus, we will have: \n \n* **0: Not tried**: Initial state. When creating a new event that will need to be processed, it will be assigned the initial status **0**. \n* **1: Success**: Successful status. When an event is successfully executed through our function, it will be assigned this status so that it will not be necessary to retry again. \n* **2: Failed**: Failed status. When, after executing an event, it results in an error, it will be necessary to retry and therefore it will be assigned a status **2 or failed**. \n* **3: Error**: It is important to note that we cannot always retry. We must have a limit of retries. When this limit is exhausted, the status will change to **error or 3**. \n \nThe algorithm that will define the passage between states will be the following: \n \n \n \nFlow diagram \n \n## System Architecture\n \nThe system is based on two collections and a trigger. The trigger will be defined as a **database trigger** that will react each time there is an insert or update in a specific collection. The collection will keep track of the events that need to be processed. Each time this trigger is activated, the event is processed in a function linked to it. The function, when processing the event, may or may not fail, and we need to capture the failure to retry. \n \nWhen the function fails, the event state is updated in the event collection, and as the trigger reacts on inserts and updates, it will call the function again to reprocess the same. \n \nA maximum number of retries will be defined so that, once exhausted, the event will not be reprocessed and will be marked as an error in the **error** collection. \n \n## Sequence Diagram\n \nThe following diagram shows the three use cases contemplated for this scenario. \n \n## Use Case 1:\n \nA new document is inserted in the collection of events to be processed. Its initial state is **0 (new)** and the number of retries is **0**. The trigger is activated and executes the function for this event. The function is executed successfully and the event status is updated to **1 (success).** \n \n## Use Case 2:\n \nA new document is inserted into the collection of events to be processed. Its initial state is **0 (new)** and the number of retries is **0.** The trigger is activated and executes the function for this event. The function fails and the event status is updated to **2 (failed)** and the number of retries is increased to **1**. \n \n## Use Case 3:\n \nA document is updated in the collection of events to be processed. Its initial status is **2 (failed)** and the number of retries is less than the maximum allowed. The trigger is activated and executes the function for this event. The function fails, the status remains at **2 (failed),** and the counter increases. If the counter for retries is greater than the maximum allowed, the event is sent to the **error** collection and deleted from the event collection. \n \n## Use Case 4:\n \nA document is updated in the event collection to be processed. Its initial status is **2 (failed)** and the number of retries is less than the maximum allowed. The trigger is activated and executes the function for this event. The function is executed successfully, and the status changes to **1 (success).** \n \n \n \nSequence Diagram \n \n## Project Example Repository\n \nWe can find a simple project that illustrates the above here. \n \nThis project uses a trigger, **newEventsGenerator**, to generate a new document every two minutes through a cron job in the **Events** collection. 
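For orientation, a scheduled trigger function along these lines could produce those documents. This is only a sketch: the linked data source name ("mongodb-atlas" is the common default), the database name, and the field names are assumptions you would adapt to your own project.

```javascript
// Hypothetical sketch of a scheduled (cron) trigger function that seeds the
// Events collection. The database and field names are illustrative only.
exports = async function() {
  const events = context.services
    .get("mongodb-atlas")
    .db("retry_demo")
    .collection("Events");

  // New events start in state 0 (not tried) with zero retries,
  // matching the state machine described above.
  await events.insertOne({
    createdAt: new Date(),
    status: 0,
    retries: 0
  });
};
```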
This will simulate the creation of events to be processed. \n \nThe trigger **eventsProcessor** will be in charge of processing the events inserted or updated in the **Events** collection. To simulate a failure, a function is used that generates a random number and returns whether it is divisible or not by two. In this way, both states can be simulated. \n \n``` \nfunction getFailOrSuccess() { \n // Random number between 1 and 10 \n const number = Math.floor(Math.random() * 10) + 1; \n return ((number % 2) === 0);\n} \n``` \n \n## Conclusion\n \nThis tutorial illustrates in a simple way how we can create our own retry mechanism to increase the reliability of our application. Realm allows us to create our application completely serverless, and thanks to the Realm functions, we can define and execute the server-side logic for our application in the cloud. \n \nWe can use the functions to handle low-latency, short-lived connection logic, and other server-side interactions. Functions are especially useful when we want to work with multiple services, behave dynamically based on the current user, or abstract the implementation details of our client applications. \n \nThis retry mechanism we have just created will allow us to handle interaction with other services in a more robust way, letting us know that the action will be reattempted in case of failure.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "This tutorial is born to show how we can create a retry mechanism for our functions. Realm Functions allow you to define and execute server-side logic for your application. You can call functions from your client applications as well as from other functions and in JSON expressions throughout Realm. ", "contentType": "Tutorial"}, "title": "Build Your Own Function Retry Mechanism with Realm", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/farm-stack-authentication", "action": "created", "body": "# Adding Authentication to Your FARM Stack App\n\n>If you have not read my Introduction to FARM stack tutorial, I would urge you to do that now and then come back. This guide assumes you have already read and understood the previous article so some things might be confusing or opaque if you have not.\n\nAn important part of many web applications is user management, which can be complex with lots of different scenarios to cover: registration, logging in, logging out, password resets, protected routes, and so on. In this tutorial, we will look at how you can integrate the FastAPI Users package into your FARM stack.\n\n## Prerequisites\n\n- Python 3.9.0\n- A MongoDB Atlas cluster. Follow the \"Get Started with Atlas\" guide to create your account and MongoDB cluster. Keep a note of your database username, password, and connection string as you will need those later.\n- A MongoDB Realm App connected to your cluster. Follow the \"Create a Realm App (Realm UI)\" guide and make a note of your Realm App ID.\n\n## Getting Started\n\nLet's begin by cloning the sample code source from GitHub\n\n``` shell\ngit clone git@github.com:mongodb-developer/FARM-Auth.git\n```\n\nOnce you have cloned the repository, you will need to install the dependencies. I always recommend that you install all Python dependencies in a virtualenv for the project. Before running pip, ensure your virtualenv is active. 
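If you don't have a virtualenv for the project yet, one common way to create and activate one on macOS or Linux is shown below (on Windows, the activation script lives under `venv\Scripts` instead):

``` shell
python3 -m venv venv
source venv/bin/activate
```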
The requirements.txt file is within the back end folder.\n\n``` shell\ncd FARM-Auth/backend\npip install -r requirements.txt\n```\n\nIt may take a few moments to download and install your dependencies. This is normal, especially if you have not installed a particular package before.\n\nYou'll need two new configuration values for this tutorial. To get them, log into Atlas and create a new Realm App by selecting the Realm tab at the top of the page, and then clicking on \"Create a New App\" on the top-right of the page.\n\nConfigure the Realm app to connect to your existing cluster:\n\nYou should see your Realm app's ID at the top of the page. Copy it and keep it somewhere safe. It will be used for your application's `REALM_APP_ID` value.\n\n \n\nClick on the \"Authentication\" option on the left-hand side of the page. Then select the \"Edit\" button next to \"Custom JWT Authentication\". Ensure the first option, \"Provider Enabled\" is set to \"On\". Check that the Signing Algorithm is set to \"HS256\". Now you need to create a signing key, which is just a set of 32 random bytes. Fortunately, Python has a quick way to securely create random bytes! In your console, run the following:\n\n``` shell\npython -c 'import secrets; print(secrets.token_hex(32))'\n```\n\nRunning that line of code will print out some random characters to the console. Type \"signing_key\" into the \"Signing Key (Secret Name)\" text box and then click \"Create 'signing_key'\" in the menu that appears underneath. A new text box will appear for the actual key bytes. Paste in the random bytes you generated above. Keep the random bytes safe for the moment. You'll need them for your application's \"JWT_SECRET_KEY\" configuration value.\n\n \n\nNow you have all your configuration values, you need to set the following environment variables (make sure that you substitute your actual credentials).\n\n``` shell\nexport DEBUG_MODE=True\nexport DB_URL=\"mongodb+srv://:@/?retryWrites=true&w=majority\"\nexport DB_NAME=\"farmstack\"\nexport JWT_SECRET_KEY=\"\"\nexport REALM_APP_ID=\"\"\n```\n\nSet these values appropriately for your environment, ensuring that `REALM_APP_ID` and `JWT_SECRET_KEY` use the values from above. Remember, anytime you start a new terminal session, you will need to set these environment variables again. I use direnv to make this process easier. Storing and loading these values from a .env file is another popular alternative.\n\nThe final step is to start your FastAPI server.\n\n``` shell\nuvicorn main:app --reload\n```\n\nOnce the application has started, you can view it in your browser at .\n\n \n\nYou may notice that we now have a lot more endpoints than we did in the FARM stack Intro. These routes are all provided by the FastAPI `Users` package. I have also updated the todo app routes so that they are protected. This means that you can no longer access these routes, unless you are logged in.\n\nIf you try to access the `List Tasks` route, for example, it will fail with a 401 Unauthorized error. In order to access any of the todo app routes, we need to first register as a new user and then authenticate. Try this now. Use the `/auth/register` and `/auth/jwt/login` routes to create and authenticate as a new user. Once you are successfully logged in, try accessing the `List Tasks` route again. It should now grant you access and return an HTTP status of 200. 
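If you prefer to exercise this flow from a script rather than the interactive docs, a rough sketch using the `requests` library might look like the following. The base URL, credentials, and todo route path are assumptions for a local run, and the exact request shapes can vary slightly between FastAPI Users versions:

``` python
import requests

BASE_URL = "http://localhost:8000"  # assumes uvicorn is running locally
CREDENTIALS = {"email": "test@example.com", "password": "change-me-please"}

# Register a new user; FastAPI Users expects a JSON body here.
requests.post(f"{BASE_URL}/auth/register", json=CREDENTIALS)

# Log in; this endpoint typically takes form-encoded OAuth2 credentials
# and returns a JWT access token.
response = requests.post(
    f"{BASE_URL}/auth/jwt/login",
    data={"username": CREDENTIALS["email"], "password": CREDENTIALS["password"]},
)
token = response.json()["access_token"]

# Call a protected todo route with the bearer token.
# Replace "/task/" with whatever prefix your todo router actually uses.
tasks = requests.get(f"{BASE_URL}/task/", headers={"Authorization": f"Bearer {token}"})
print(tasks.status_code, tasks.json())
```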
Use the Atlas UI to check the new `farmstack.users` collection and you'll see that there's now a document for your new user.\n\n## Integrating FastAPI Users\n\nThe routes and models for our users are within the `/backend/apps/user` folder. Lets walk through what it contains.\n\n### The User Models\n\nThe FastAPI `Users` package includes some basic `User` mixins with the following attributes:\n\n- `id` (`UUID4`) \u2013 Unique identifier of the user. Default to a UUID4.\n- `email` (`str`) \u2013 Email of the user. Validated by `email-validator`.\n- `is_active` (`bool`) \u2013 Whether or not the user is active. If not, login and forgot password requests will be denied. Default to `True`.\n- `is_superuser` (`bool`) \u2013 Whether or not the user is a superuser. Useful to implement administration logic. Default to `False`.\n\n``` python\nfrom fastapi_users.models import BaseUser, BaseUserCreate, BaseUserUpdate, BaseUserDB\n\nclass User(BaseUser):\n pass\n\nclass UserCreate(BaseUserCreate):\n pass\n\nclass UserUpdate(User, BaseUserUpdate):\n pass\n\nclass UserDB(User, BaseUserDB):\n pass\n```\n\nYou can use these as-is for your User models, or extend them with whatever additional properties you require. I'm using them as-is for this example.\n\n### The User Routers\n\nThe FastAPI Users routes can be broken down into four sections:\n\n- Registration\n- Authentication\n- Password Reset\n- User CRUD (Create, Read, Update, Delete)\n\n``` python\ndef get_users_router(app):\n users_router = APIRouter()\n\n def on_after_register(user: UserDB, request: Request):\n print(f\"User {user.id} has registered.\")\n\n def on_after_forgot_password(user: UserDB, token: str, request: Request):\n print(f\"User {user.id} has forgot their password. Reset token: {token}\")\n\n users_router.include_router(\n app.fastapi_users.get_auth_router(jwt_authentication),\n prefix=\"/auth/jwt\",\n tags=\"auth\"],\n )\n users_router.include_router(\n app.fastapi_users.get_register_router(on_after_register),\n prefix=\"/auth\",\n tags=[\"auth\"],\n )\n users_router.include_router(\n app.fastapi_users.get_reset_password_router(\n settings.JWT_SECRET_KEY, after_forgot_password=on_after_forgot_password\n ),\n prefix=\"/auth\",\n tags=[\"auth\"],\n )\n users_router.include_router(\n app.fastapi_users.get_users_router(), prefix=\"/users\", tags=[\"users\"]\n )\n\n return users_router\n```\n\nYou can read a detailed description of each of the routes in the [FastAPI Users' documentation, but there are a few interesting things to note in this code.\n\n#### The on_after Functions\n\nThese functions are called after a new user registers and after the forgotten password endpoint is triggered.\n\nThe `on_after_register` is a convenience function allowing you to send a welcome email, add the user to your CRM, notify a Slack channel, and so on.\n\nThe `on_after_forgot_password` is where you would send the password reset token to the user, most likely via email. The FastAPI Users package does not send the token to the user for you. You must do that here yourself.\n\n#### The get_users_router Wrapper\n\nIn order to create our routes we need access to the `fastapi_users` object, which is part of our `app` object. Because app is defined in `main.py`, and `main.py` imports these routers, we wrap them within a `get_users_router` function to avoid creating a cyclic import.\n\n## Creating a Custom Realm JWT\n\nCurrently, Realm's user management functionality is only supported in the various JavaScript SDKs. 
However, Realm does support custom JWTs for authentication, allowing you to use the over the wire protocol support in the Python drivers to interact with some Realm services.\n\nThe available Realm services, as well as how you would interact with them via the Python driver, are out of scope for this tutorial, but you can read more in the documentation for Users & Authentication, Custom JWT Authentication, and MongoDB Wire Protocol.\n\nRealm expects the custom JWT tokens to be structured in a certain way. To ensure the JWT tokens we generate with FastAPI Users are structured correctly, within `backend/apps/user/auth.py` we define `MongoDBRealmJWTAuthentication` which inherits from the FastAPI Users' `CookieAuthentication` class.\n\n``` python\nclass MongoDBRealmJWTAuthentication(CookieAuthentication):\n def __init__(self, *args, **kwargs):\n super(MongoDBRealmJWTAuthentication, self).__init__(*args, **kwargs)\n self.token_audience = settings.REALM_APP_ID\n\n async def _generate_token(self, user):\n data = {\n \"user_id\": str(user.id),\n \"sub\": str(user.id),\n \"aud\": self.token_audience,\n \"external_user_id\": str(user.id),\n }\n return generate_jwt(data, self.lifetime_seconds, self.secret, JWT_ALGORITHM)\n```\n\nMost of the authentication code stays the same. However we define a new `_generate_token` method which includes the additional data Realm expects.\n\n## Protecting the Todo App Routes\n\nNow we have our user models, routers, and JWT token ready, we can modify the todo routes to restrict access only to authenticated and active users.\n\nThe todo app routers are defined in `backend/apps/todo/routers.py` and are almost identical to those found in the Introducing FARM Stack tutorial, with one addition. Each router now depends upon `app.fastapi_users.get_current_active_user`.\n\n``` python\n@router.post(\n \"/\",\n response_description=\"Add new task\",\n)\nasync def create_task(\n request: Request,\n user: User = Depends(app.fastapi_users.get_current_active_user),\n task: TaskModel = Body(...),\n):\n task = jsonable_encoder(task)\n new_task = await request.app.db\"tasks\"].insert_one(task)\n created_task = await request.app.db[\"tasks\"].find_one(\n {\"_id\": new_task.inserted_id}\n )\n\n return JSONResponse(status_code=status.HTTP_201_CREATED, content=created_task)\n```\n\nBecause we have declared this as a dependency, if an unauthenticated or inactive user attempts to access any of these URLs, they will be denied. This does mean, however, that our todo app routers now must also have access to the app object, so as we did with the user routers we wrap it in a function to avoid cyclic imports.\n\n## Creating Our FastAPI App and Including the Routers\n\nThe FastAPI app is defined within `backend/main.py`. This is the entry point to our FastAPI server and has been quite heavily modified from the example in the previous FARM stack tutorial, so let's go through it section by section.\n\n``` python\n@app.on_event(\"startup\")\nasync def configure_db_and_routes():\n app.mongodb_client = AsyncIOMotorClient(\n settings.DB_URL, uuidRepresentation=\"standard\"\n )\n app.db = app.mongodb_client[settings.DB_NAME]\n\n user_db = MongoDBUserDatabase(UserDB, app.db[\"users\"])\n\n app.fastapi_users = FastAPIUsers(\n user_db,\n [jwt_authentication],\n User,\n UserCreate,\n UserUpdate,\n UserDB,\n )\n\n app.include_router(get_users_router(app))\n app.include_router(get_todo_router(app))\n```\n\nThis function is called whenever our FastAPI application starts. 
Here, we connect to our MongoDB database, configure FastAPI Users, and include our routers. Your application won't start receiving requests until this event handler has completed.\n\n``` python\n@app.on_event(\"shutdown\")\nasync def shutdown_db_client():\n app.mongodb_client.close()\n```\n\nThe shutdown event handler does not change. It is still responsible for closing the connection to our database.\n\n## Wrapping Up\n\nIn this tutorial we have covered one of the ways you can add user authentication to your [FARM stack application. There are several other packages available which you might also want to try. You can find several of them in the awesome FastAPI list.\n\nOr, for a more in-depth look at the FastAPI Users package, please check their documentation.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Python", "JavaScript", "FastApi"], "pageDescription": "Adding Authentication to a FARM stack application", "contentType": "Tutorial"}, "title": "Adding Authentication to Your FARM Stack App", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/building-space-shooter-game-syncs-unity-mongodb-realm", "action": "created", "body": "# Building a Space Shooter Game in Unity that Syncs with Realm and MongoDB Atlas\n\nWhen developing a game, in most circumstances you're going to need to store some kind of data. It could be the score, it could be player inventory, it could be where they are located on a map. The possibilities are endless and it's more heavily dependent on the type of game.\n\nNeed to sync that data between devices and your remote infrastructure? That is a whole different scenario.\n\nIf you managed to catch MongoDB .Live 2021, you'll be familiar that the first stable release of the Realm .NET SDK for Unity was made available. This means that you can use Realm in your Unity game to store and sync data with only a few lines of code.\n\nIn this tutorial, we're going to build a nifty game that explores some storage and syncing use-cases.\n\nTo get a better idea of what we plan to accomplish, take a look at the following animated image:\n\nIn the above example, we have a space shooter style game. Waves of enemies are coming at you and as you defeat them your score increases. In addition to keeping track of score, the player has a set of enabled blasters. What you don't see in the above example is what's happening behind the scenes. The score is synced to and from the cloud and likewise are the blasters.\n\n## The Requirements\n\nThere are a lot of moving pieces for this particular gaming example. To be successful with this tutorial, you'll need to have the following ready to go:\n\n- Unity 2021.2.0b3 or newer\n- A MongoDB Atlas M0 cluster or better\n- A web application pointed at the Atlas cluster\n- Game media assets\n\nThis is heavily a Unity example. While older or newer versions of Unity might work, I was personally using 2021.2.0b3 when I developed it. You can check to see what version of Unity is available to you using the Unity Hub software.\n\nBecause we are going to be introducing a synchronization feature to the game, we're going to need an Atlas cluster as well as an Atlas App Services application. Both of these can be configured for free here. 
Don't worry about the finer details of the configuration because we'll get to those as we progress in the tutorial.\n\nAs much as I'd like to take credit for the space shooter assets used within this game, I can't. I actually downloaded them from the Unity Asset Store. Feel free to download what I used or create your own.\n\nIf you're looking for a basic getting started tutorial for Unity with Realm, check out my previous tutorial on the subject.\n\n## Designing the Scenes and Interfaces for the Unity Game\n\nThe game we're about to build is not a small and quick project. There will be many game objects and a few scenes that we have to configure, but none of it is particularly difficult.\n\nTo get an idea of what we need to create, make note of the following breakdown:\n\n- LoginScene\n - Camera\n - LoginController\n - RealmController\n - Canvas\n - UsernameField\n - PasswordField\n - LoginButton\n- MainScene\n - GameController\n - RealmController\n - Background\n - Player\n - Canvas\n - HighScoreText\n - ScoreText\n - BlasterEnabled\n - SparkBlasterEnabled\n - CrossBlasterEnabled\n - Blaster\n - CrossBlast\n - Enemy\n - SparkBlast\n\nThe above list represents our two scenes with each of the components that live within the scene.\n\nLet's start by configuring the **LoginScene** with each of the components. Don't worry, we'll explore the logic side of things for this scene later.\n\nWithin the Unity IDE, create a **LoginScene** and within the **Hierarchy** choose to create a new **UI -> Input Field**. You'll need to do this twice because this is how we're going to create the **UsernameField** and the **PasswordField** that we defined in the list above. You're also going to want to create a **UI -> Button** which will represent our **LoginButton** to submit the form.\n\nFor each of the UI game objects, position them on the screen how you want them. Mine looks like the following:\n\nWithin the **Hierarchy** of your scene, create two empty game objects. The first game object, **LoginController**, will eventually hold a script for managing the user input and interactions with the UI components we had just created. The second game object, **RealmController**, will eventually have a script that contains any Realm interactions. For now, we're going to leave these as empty game objects and move on.\n\nNow let's move onto our next scene.\n\nCreate a **MainScene** if you haven't already and start adding **UI -> Text** to represent the current score and the high score.\n\nSince we probably don't want a solid blue background in our game, we should add a background image. Add an empty game object to the **Hierarch** and then add a **Sprite Renderer** component to that object using the inspector. Add whatever image you want to the **Sprite** field of the **Sprite Renderer** component.\n\nSince we're going to give the player a few different blasters to choose from, we want to show them which blasters they have at any given time. For this, we should add some simple sprites with blaster images on them.\n\nCreate three empty game objects and add a **Sprite Renderer** component to each of them. For each **Sprite** field, add the image that you want to use. Then position the sprites to a section on the screen that you're comfortable with.\n\nIf you've made it this far, you might have a scene that looks like the following:\n\nThis might be hard to believe, but the visual side of things is almost complete. 
With just a few more game objects, we can move onto the more exciting logic things.\n\nLike with the **LoginScene**, the **GameController** and **RealmController** game objects will remain empty. There's a small change though. Even though the **RealmController** will eventually exist in the **MainScene**, we're not going to create it manually. Instead, just create an empty **GameController** game object.\n\nThis leaves us with the player, enemies, and various blasters.\n\nStarting with the player, create an empty game object and add a **Sprite Renderer**, **Rigidbody 2D**, and **Box Collider 2D** component to the game object. For the **Sprite Renderer**, add the graphic you want to use for your ship. The **Rigidbody 2D** and **Box Collider 2D** have to do with physics and collisions. We're not going to burden ourselves with gravity for this example, so make sure the **Body Type** for the **Rigidbody 2D** is **Kinematic** and the **Is Trigger** for the **Box Collider 2D** is enabled. Within the inspector, tag the player game object as \"Player.\"\n\nThe blasters and enemies will have the same setup as our player. Create new game objects for each, just like you did the player, only this time select a different graphic for them and give them the tags of \"Weapon\" or \"Enemy\" in the inspector.\n\nThis is where things get interesting.\n\nWe know that there will be more than one enemy in circulation and likewise with your blaster bullets. Rather than creating a bunch of each, take the game objects you used for the blasters and enemies and drag them into your **Assets** directory. This will convert the game objects into prefabs that can be recycled as many times as you want. Once the prefabs are created, the objects can be removed from the **Hierarchy** section of your scene. As we progress, we'll be instantiating these prefabs through code.\n\nWe're ready to start writing code to give our game life.\n\n## Configuring MongoDB Atlas and Atlas Device Sync for Data Synchronization\n\nFor this game, we're going to rely on a cloud and synchronization aspect, so there is some additional configuration that we'll need to take care of. However, before we worry about the cloud configurations, let's install the Realm .NET SDK for Unity.\n\nWithin Unity, select **Window -> Package Manager** and then click the little cog icon to find the **Advanced Project Settings** area.\n\nHere you're going to want to add a new registry with the following information:\n\n```\nname: NPM\nurl: https://registry.npmjs.org\nscope(s): io.realm.unity\n```\n\nEven though we're working with Unity, the best way to get the Realm SDK is through NPM, hence the custom registry that we're going to use.\n\nWith the registry added, we can add an entry for Realm in the project's **Packages/manifest.json** file. Within the **manifest.json** file, add the following to the `dependencies` object:\n\n```\n\"io.realm.unity\": \"10.3.0\"\n```\n\nYou can swap the version of Realm with whatever you plan to use.\n\nFrom a Unity perspective, Realm is ready to be used. Now we just need to configure Device Sync and Atlas in the cloud.\n\nWithin MongoDB Atlas, assuming you already have a cluster to work with, click the **App Services** tab and then **Create a New App** to create a new application.\n\nName the application whatever you'd like. The MongoDB Atlas cluster requires no special configuration to work with App Services, only that such a cluster exists. 
App Services will create the necessary databases and collections when the time comes.\n\nBefore we start configuring your app, take note of your **App ID** in the top left corner of the screen:\n\nThe **App ID** will be very important within the Unity project because it tells the SDK where to sync and authenticate with.\n\nNext you'll want to define what kind of authentication is allowed for your Unity game and the users that are allowed to authenticate. Within the dashboard, click the **Authentication** tab followed by the **Authentication Providers** tab. Enable **Email / Password** if it isn't already enabled. After email and password authentication is enabled for your application, click the **Users** tab and choose to **Add New User** with the email and password information of your choice.\n\nThe users can be added through an API request, but for this example we're just going to focus on adding them manually.\n\nWith the user information added, we need to define the collections and schemas to sync with our game. Click the **Schema** tab within the dashboard and choose to create a new database and collection if you don't already have a **space_shooter** database and a **PlayerProfile** collection.\n\nThe schema for the **PlayerProfile** collection should look like the following:\n\n```json\n{\n \"title\": \"PlayerProfile\",\n \"bsonType\": \"object\",\n \"required\": \n \"high_score\",\n \"spark_blaster_enabled\",\n \"cross_blaster_enabled\",\n \"score\",\n \"_partition\"\n ],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"_partition\": {\n \"bsonType\": \"string\"\n },\n \"high_score\": {\n \"bsonType\": \"int\"\n },\n \"score\": {\n \"bsonType\": \"int\"\n },\n \"spark_blaster_enabled\": {\n \"bsonType\": \"bool\"\n },\n \"cross_blaster_enabled\": {\n \"bsonType\": \"bool\"\n }\n }\n}\n```\n\nIn the above schema, we're saying that we are going to have five fields with the types defined. These fields will eventually be mapped to C# objects within the Unity game. The one field to pay the most attention to is the `_partition` field. The `_partition` field will be the most valuable when it comes to sync because it will represent which data is synchronized rather than attempting to synchronize the entire MongoDB Atlas collection.\n\nIn our example, the `_partition` field should hold user email addresses because they are unique and the user will provide them when they log in. With this we can specify that we only want to sync data based on the users email address.\n\nWith the schema defined, now we can enable Atlas Device Sync.\n\nWithin the dashboard, click on the **Sync** tab. Specify the cluster and the field to be used as the partition key. You should specify `_partition` as the partition key in this example, although the actual field name doesn't matter if you wanted to call it something else. Leaving the permissions as the default will give users read and write permissions.\n\n> Atlas Device Sync will only sync collections that have a defined schema. You could have other collections in your MongoDB Atlas cluster, but they won't sync automatically unless you have schemas defined for them.\n\nAt this point, we can now focus on the actual game development.\n\n## Defining the Data Model and Usage Logic\n\nWhen it comes to data, your Atlas App Services app is going to manage all of it. 
We need to create a data model that matches the schema that we had just created for synchronization and we need to create the logic for our **RealmController** game object.\n\nLet's start by creating the model to be used.\n\nWithin the **Assets** folder of your project, create a **Scripts** folder with a **PlayerProfile.cs** script in it. The **PlayerProfile.cs** script should contain the following C# code:\n\n```csharp\nusing Realms;\nusing Realms.Sync;\n\npublic class PlayerProfile : RealmObject {\n\n [PrimaryKey]\n [MapTo(\"_id\")]\n public string UserId { get; set; }\n\n [MapTo(\"high_score\")]\n public int HighScore { get; set; }\n\n [MapTo(\"score\")]\n public int Score { get; set; }\n\n [MapTo(\"spark_blaster_enabled\")]\n public bool SparkBlasterEnabled { get; set; }\n\n [MapTo(\"cross_blaster_enabled\")]\n public bool CrossBlasterEnabled { get; set; }\n\n public PlayerProfile() {}\n\n public PlayerProfile(string userId) {\n this.UserId = userId;\n this.HighScore = 0;\n this.Score = 0;\n this.SparkBlasterEnabled = false;\n this.CrossBlasterEnabled = false;\n }\n\n}\n```\n\nWhat we're doing is we are defining object fields and how they map to a remote document in a MongoDB collection. While our C# object looks like the above, the BSON that we'll see in MongoDB Atlas will look like the following:\n\n```json\n{\n \"_id\": \"12345\",\n \"high_score\": 1337,\n \"score\": 0,\n \"spark_blaster_enabled\": false,\n \"cross_blaster_enabled\": false\n}\n```\n\nIt's important to note that the documents in Atlas might have more fields than what we see in our game. We'll only be able to use the mapped fields in our game, so if we have for example an email address in our document, we won't see it in the game because it isn't mapped.\n\nWith the model in place, we can focus on syncing, querying, and writing our data.\n\nWithin the **Assets/Scripts** directory, add a **RealmController.cs** script. This script should contain the following C# code:\n\n```csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\nusing Realms;\nusing Realms.Sync;\nusing Realms.Sync.Exceptions;\nusing System.Threading.Tasks;\n\npublic class RealmController : MonoBehaviour {\n\n public static RealmController Instance;\n\n public string RealmAppId = \"YOUR_REALM_APP_ID_HERE\";\n\n private Realm _realm;\n private App _realmApp;\n private User _realmUser;\n\n void Awake() {\n DontDestroyOnLoad(gameObject);\n Instance = this;\n }\n\n void OnDisable() {\n if(_realm != null) {\n _realm.Dispose();\n }\n }\n\n public async Task Login(string email, string password) {}\n\n public PlayerProfile GetPlayerProfile() {}\n\n public void IncreaseScore() {}\n\n public void ResetScore() {}\n\n public bool IsSparkBlasterEnabled() {}\n\n public bool IsCrossBlasterEnabled() {}\n\n}\n```\n\nThe above code is incomplete, but it gives you an idea of where we are going.\n\nFirst, take notice of the `AppId` variable. You're going to want to use your App Services application so sync can happen based on how you've configured everything. This also applies to the authentication rules that are in place for your particular application.\n\nThe `RealmController` class is going to be used as a singleton object between scenes. The goal is to make sure it cannot be destroyed and everything we do is through a static instance of itself.\n\nIn the `Awake` method, we are saying that the game object that the script is attached to should not be destroyed and that we are setting the static variable to itself. 
In the `OnDisable`, we are doing cleanup which should really only happen when the game is closed.\n\nMost of the magic will happen in the `Login` function:\n\n```csharp\npublic async Task Login(string email, string password) {\n if(email != \"\" && password != \"\") {\n _realmApp = App.Create(new AppConfiguration(RealmAppId) {\n MetadataPersistenceMode = MetadataPersistenceMode.NotEncrypted\n });\n try {\n if(_realmUser == null) {\n _realmUser = await _realmApp.LogInAsync(Credentials.EmailPassword(email, password));\n _realm = await Realm.GetInstanceAsync(new SyncConfiguration(email, _realmUser));\n } else {\n _realm = Realm.GetInstance(new SyncConfiguration(email, _realmUser));\n }\n } catch (ClientResetException clientResetEx) {\n if(_realm != null) {\n _realm.Dispose();\n }\n clientResetEx.InitiateClientReset();\n }\n return _realmUser.Id;\n }\n return \"\";\n}\n```\n\nIn the above code, we are defining our application based on the application ID. Next we are attempting to log into the application using email and password authentication, something we had previously configured in the web dashboard. If successful, we are getting an instance of our Realm to work with going forward. The data to be synchronized is based on our partition field which in this case is the email address. This means we're only synchronizing data for this particular email address.\n\nIf all goes smooth with the login, the ID for the user is returned.\n\nAt some point in time, we're going to need to load the player data. This is where the `GetPlayerProfile` function comes in:\n\n```csharp\npublic PlayerProfile GetPlayerProfile() {\n PlayerProfile _playerProfile = _realm.Find(_realmUser.Id);\n if(_playerProfile == null) {\n _realm.Write(() => {\n _playerProfile = _realm.Add(new PlayerProfile(_realmUser.Id));\n });\n }\n return _playerProfile;\n}\n```\n\nWhat we're doing is we're taking the current instance and we're finding a particular player profile based on the id. If one does not exist, then we create one using the current ID. In the end, we're returning a player profile, whether it be one that we had been using or a fresh one.\n\nWe know that we're going to be working with score data in our game. We need to be able to increase the score, reset the score, and calculate the high score for a player.\n\nStarting with the `IncreaseScore`, we have the following:\n\n```csharp\npublic void IncreaseScore() {\n PlayerProfile _playerProfile = GetPlayerProfile();\n if(_playerProfile != null) {\n _realm.Write(() => {\n _playerProfile.Score++;\n });\n }\n}\n```\n\nFirst we get the player profile and then we take whatever score is associated with it and increase it by one. With Realm we can work with our objects like native C# objects. The exception is that when we want to write, we have to wrap it in a `Write` block. Reads we don't have to.\n\nNext let's look at the `ResetScore` function:\n\n```csharp\npublic void ResetScore() {\n PlayerProfile _playerProfile = GetPlayerProfile();\n if(_playerProfile != null) {\n _realm.Write(() => {\n if(_playerProfile.Score > _playerProfile.HighScore) {\n _playerProfile.HighScore = _playerProfile.Score;\n }\n _playerProfile.Score = 0;\n });\n }\n}\n```\n\nIn the end we want to zero out the score, but we also want to see if our current score is the highest score before we do. 
We can do all this within the `Write` block and it will synchronize to the server.\n\nFinally we have our two functions to tell us if a certain blaster is available to us:\n\n```csharp\npublic bool IsSparkBlasterEnabled() {\n PlayerProfile _playerProfile = GetPlayerProfile();\n return _playerProfile != null ? _playerProfile.SparkBlasterEnabled : false;\n}\n```\n\nThe reason our blasters are data dependent is because we may want to unlock them based on points or through a micro-transaction. In this case, maybe Realm Sync takes care of it.\n\nThe `IsCrossBlasterEnabled` function isn't much different:\n\n```csharp\npublic bool IsCrossBlasterEnabled() {\n PlayerProfile _playerProfile = GetPlayerProfile();\n return _playerProfile != null ? _playerProfile.CrossBlasterEnabled : false;\n}\n```\n\nThe difference is we are using a different field from our data model.\n\nWith the Realm logic in place for the game, we can focus on giving the other game objects life through scripts.\n\n## Developing the Game-Play Logic Scripts for the Space Shooter Game Objects\n\nAlmost every game object that we've created will be receiving a script with logic. To keep the flow appropriate, we're going to add logic in a natural progression. This means we're going to start with the **LoginScene** and each of the game objects that live in it.\n\nFor the **LoginScene**, only two game objects will be receiving scripts:\n\n- LoginController\n- RealmController\n\nSince we already have a **RealmController.cs** script file, go ahead and attach it to the **RealmController** game object as a component.\n\nNext up, we need to create an **Assets/Scripts/LoginController.cs** file with the following C# code:\n\n```csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\nusing UnityEngine.UI;\nusing UnityEngine.SceneManagement;\n\npublic class LoginController : MonoBehaviour {\n\n public Button LoginButton;\n public InputField UsernameInput;\n public InputField PasswordInput;\n\n void Start() {\n UsernameInput.text = \"nic.raboy@mongodb.com\";\n PasswordInput.text = \"password1234\";\n LoginButton.onClick.AddListener(Login);\n }\n\n async void Login() {\n if(await RealmController.Instance.Login(UsernameInput.text, PasswordInput.text) != \"\") {\n SceneManager.LoadScene(\"MainScene\");\n }\n }\n\n void Update() {\n if(Input.GetKey(\"escape\")) {\n Application.Quit();\n }\n }\n\n}\n```\n\nThere's not a whole lot going on since the backbone of this script is in the **RealmController.cs** file.\n\nWhat we're doing in the **LoginController.cs** file is we're defining the UI components which we'll link through the Unity IDE. When the script starts, we're going to default the values of our input fields and we're going to assign a click event listener to the button.\n\nWhen the button is clicked, the `Login` function from the **RealmController.cs** file is called and we pass the provided email and password. If we get an id back, we know we were successful so we can switch to the next scene.\n\nThe `Update` method isn't a complete necessity, but if you want to be able to quit the game with the escape key, that is what this particular piece of logic does.\n\nAttach the **LoginController.cs** script to the **LoginController** game object as a component and then drag each of the corresponding UI game objects into the script via the game object inspector. Remember, we defined public variables for each of the UI components. 
We just need to tell Unity what they are by linking them in the inspector.\n\nThe **LoginScene** logic is complete. Can you believe it? This is because the Realm .NET SDK for Unity is doing all the heavy lifting for us.\n\nThe **MainScene** has a lot more going on, but we'll break down what's happening.\n\nLet's start with something you don't actually see but that controls all of our prefab instances. I'm talking about the object pooling script.\n\nIn short, creating and destroying game objects on-demand is resource intensive. Instead, we should create a fixed amount of game objects when the game loads and hide them or show them based on when they are needed. This is what an object pool does.\n\nCreate an **Assets/Scripts/ObjectPool.cs** file with the following C# code:\n\n```csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class ObjectPool : MonoBehaviour\n{\n\n public static ObjectPool SharedInstance;\n\n private List pooledEnemies;\n private List pooledBlasters;\n private List pooledCrossBlasts;\n private List pooledSparkBlasts;\n public GameObject enemyToPool;\n public GameObject blasterToPool;\n public GameObject crossBlastToPool;\n public GameObject sparkBlastToPool;\n public int amountOfEnemiesToPool;\n public int amountOfBlastersToPool;\n public int amountOfCrossBlastsToPool;\n public int amountOfSparkBlastsToPool;\n\n void Awake() {\n SharedInstance = this;\n }\n\n void Start() {\n pooledEnemies = new List();\n pooledBlasters = new List();\n pooledCrossBlasts = new List();\n pooledSparkBlasts = new List();\n GameObject tmpEnemy;\n GameObject tmpBlaster;\n GameObject tmpCrossBlast;\n GameObject tmpSparkBlast;\n for(int i = 0; i < amountOfEnemiesToPool; i++) {\n tmpEnemy = Instantiate(enemyToPool);\n tmpEnemy.SetActive(false);\n pooledEnemies.Add(tmpEnemy);\n }\n for(int i = 0; i < amountOfBlastersToPool; i++) {\n tmpBlaster = Instantiate(blasterToPool);\n tmpBlaster.SetActive(false);\n pooledBlasters.Add(tmpBlaster);\n }\n for(int i = 0; i < amountOfCrossBlastsToPool; i++) {\n tmpCrossBlast = Instantiate(crossBlastToPool);\n tmpCrossBlast.SetActive(false);\n pooledCrossBlasts.Add(tmpCrossBlast);\n }\n for(int i = 0; i < amountOfSparkBlastsToPool; i++) {\n tmpSparkBlast = Instantiate(sparkBlastToPool);\n tmpSparkBlast.SetActive(false);\n pooledSparkBlasts.Add(tmpSparkBlast);\n }\n }\n\n public GameObject GetPooledEnemy() {\n for(int i = 0; i < amountOfEnemiesToPool; i++) {\n if(pooledEnemies[i].activeInHierarchy == false) {\n return pooledEnemies[i];\n }\n }\n return null;\n }\n\n public GameObject GetPooledBlaster() {\n for(int i = 0; i < amountOfBlastersToPool; i++) {\n if(pooledBlasters[i].activeInHierarchy == false) {\n return pooledBlasters[i];\n }\n }\n return null;\n }\n\n public GameObject GetPooledCrossBlast() {\n for(int i = 0; i < amountOfCrossBlastsToPool; i++) {\n if(pooledCrossBlasts[i].activeInHierarchy == false) {\n return pooledCrossBlasts[i];\n }\n }\n return null;\n }\n\n public GameObject GetPooledSparkBlast() {\n for(int i = 0; i < amountOfSparkBlastsToPool; i++) {\n if(pooledSparkBlasts[i].activeInHierarchy == false) {\n return pooledSparkBlasts[i];\n }\n }\n return null;\n }\n \n}\n```\n\nThe above object pooling logic is not code optimized because I wanted to keep it readable. 
If you want to see an optimized version, check out a [previous tutorial I wrote on the subject.\n\nSo let's break down what we're doing in this object pool.\n\nWe have four different game objects to pool:\n\n- Enemies\n- Spark Blasters\n- Cross Blasters\n- Regular Blasters\n\nThese need to be pooled because there could be more than one of the same object at any given time. We're using public variables for each of the game objects and quantities so that we can properly link them to actual game objects in the Unity IDE.\n\nLike with the **RealmController.cs** script, this script will also act as a singleton to be used as needed.\n\nIn the `Start` method, we are instantiating a game object, as per the quantities defined through the Unity IDE, and adding them to a list. Ideally the linked game object should be one of the prefabs that we previously defined. The list of instantiated game objects represent our pools. We have four object pools to pull from.\n\nPulling from the pool is as simple as creating a function for each pool and seeing what's available. Take the `GetPooledEnemy` function for example:\n\n```csharp\npublic GameObject GetPooledEnemy() {\n for(int i = 0; i < amountOfEnemiesToPool; i++) {\n if(pooledEnemiesi].activeInHierarchy == false) {\n return pooledEnemies[i];\n }\n }\n return null;\n}\n```\n\nIn the above code, we loop through each object in our pool, in this case enemies. If an object is inactive it means we can pull it and use it. If our pool is depleted, then we either defined too small of a pool or we need to wait until something is available.\n\nI like to pool about 50 of each game object even if I only ever plan to use 10. Doesn't hurt to have excess as it's still less resource-heavy than creating and destroying game objects as needed.\n\nThe **ObjectPool.cs** file should be attached as a component to the **GameController** game object. After attaching, make sure you assign your prefabs and the pooled quantities using the game object inspector within the Unity IDE.\n\nThe **ObjectPool.cs** script isn't the only script we're going to attach to the **GameController** game object. We need to create a script that will control the flow of our game. 
Create an **Assets/Scripts/GameController.cs** file with the following C# code:\n\n```csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\nusing UnityEngine.UI;\n\npublic class GameController : MonoBehaviour {\n\n public float timeUntilEnemy = 1.0f;\n public float minTimeUntilEnemy = 0.25f;\n public float maxTimeUntilEnemy = 2.0f;\n\n public GameObject SparkBlasterGraphic;\n public GameObject CrossBlasterGraphic;\n\n public Text highScoreText;\n public Text scoreText;\n\n private PlayerProfile _playerProfile;\n\n void OnEnable() {\n _playerProfile = RealmController.Instance.GetPlayerProfile();\n highScoreText.text = \"HIGH SCORE: \" + _playerProfile.HighScore.ToString();\n scoreText.text = \"SCORE: \" + _playerProfile.Score.ToString();\n }\n\n void Update() {\n highScoreText.text = \"HIGH SCORE: \" + _playerProfile.HighScore.ToString();\n scoreText.text = \"SCORE: \" + _playerProfile.Score.ToString();\n timeUntilEnemy -= Time.deltaTime;\n if(timeUntilEnemy <= 0) {\n GameObject enemy = ObjectPool.SharedInstance.GetPooledEnemy();\n if(enemy != null) {\n enemy.SetActive(true);\n }\n timeUntilEnemy = Random.Range(minTimeUntilEnemy, maxTimeUntilEnemy);\n }\n if(_playerProfile != null) {\n SparkBlasterGraphic.SetActive(_playerProfile.SparkBlasterEnabled);\n CrossBlasterGraphic.SetActive(_playerProfile.CrossBlasterEnabled);\n }\n if(Input.GetKey(\"escape\")) {\n Application.Quit();\n }\n }\n\n}\n```\n\nThere's a diverse set of things happening in the above script, so let's break them down.\n\nYou'll notice the following public variables:\n\n```csharp\npublic float timeUntilEnemy = 1.0f;\npublic float minTimeUntilEnemy = 0.25f;\npublic float maxTimeUntilEnemy = 2.0f;\n```\n\nWe're going to use these variables to define when a new enemy should be activated.\n\nThe `timeUntilEnemy` represents how much actual time from the current time until a new enemy should be pulled from the object pool. The `minTimeUntilEnemy` and `maxTimeUntilEnemy` will be used for randomizing what the `timeUntilEnemy` value should become after an enemy is pooled. It's boring to have all enemies appear after a fixed amount of time, so the minimum and maximum values keep things interesting.\n\n```csharp\npublic GameObject SparkBlasterGraphic;\npublic GameObject CrossBlasterGraphic;\n\npublic Text highScoreText;\npublic Text scoreText;\n```\n\nRemember those UI components and sprites to represent enabled blasters we had created earlier in the Unity IDE? When we attach this script to the **GameController** game object, you're going to want to assign the other components in the game object inspector.\n\nThis brings us to the `OnEnable` method:\n\n```csharp\nvoid OnEnable() {\n _playerProfile = RealmController.Instance.GetPlayerProfile();\n highScoreText.text = \"HIGH SCORE: \" + _playerProfile.HighScore.ToString();\n scoreText.text = \"SCORE: \" + _playerProfile.Score.ToString();\n}\n```\n\nThe `OnEnable` method is where we're going to get our current player profile and then update the score values visually based on the data stored in the player profile. 
The `Update` method will continuously update those score values for as long as the scene is showing.\n\n```csharp\nvoid Update() {\n highScoreText.text = \"HIGH SCORE: \" + _playerProfile.HighScore.ToString();\n scoreText.text = \"SCORE: \" + _playerProfile.Score.ToString();\n timeUntilEnemy -= Time.deltaTime;\n if(timeUntilEnemy <= 0) {\n GameObject enemy = ObjectPool.SharedInstance.GetPooledEnemy();\n if(enemy != null) {\n enemy.SetActive(true);\n }\n timeUntilEnemy = Random.Range(minTimeUntilEnemy, maxTimeUntilEnemy);\n }\n if(_playerProfile != null) {\n SparkBlasterGraphic.SetActive(_playerProfile.SparkBlasterEnabled);\n CrossBlasterGraphic.SetActive(_playerProfile.CrossBlasterEnabled);\n }\n if(Input.GetKey(\"escape\")) {\n Application.Quit();\n }\n}\n```\n\nIn the `Update` method, every time it's called, we subtract the delta time from our `timeUntilEnemy` variable. When the value is zero, we attempt to get a new enemy from the object pool and then reset the timer. Outside of the object pooling, we're also checking to see if the other blasters have become enabled. If they have been, we can update the game object status for our sprites. This will allow us to easily show and hide these sprites.\n\nIf you haven't already, attach the **GameController.cs** script to the **GameController** game object. Remember to update any values for the script within the game object inspector.\n\nIf we were to run the game, every enemy would have the same position and they would not be moving. We need to assign logic to the enemies.\n\nCreate an **Assets/Scripts/Enemy.cs** file with the following C# code:\n\n```csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class Enemy : MonoBehaviour {\n\n public float movementSpeed = 5.0f;\n\n void OnEnable() {\n float randomPositionY = Random.Range(-4.0f, 4.0f);\n transform.position = new Vector3(10.0f, randomPositionY, 0);\n }\n\n void Update() {\n transform.position += Vector3.left * movementSpeed * Time.deltaTime;\n if(transform.position.x < -10.0f) {\n gameObject.SetActive(false);\n }\n }\n\n void OnTriggerEnter2D(Collider2D collider) {\n if(collider.tag == \"Weapon\") {\n gameObject.SetActive(false);\n RealmController.Instance.IncreaseScore();\n }\n }\n\n}\n```\n\nWhen the enemy is pulled from the object pool, the game object becomes enabled. So the `OnEnable` method picks a random y-axis position for the game object. For every frame, the `Update` method will move the game object along the x-axis. If the game object goes off the screen, we can safely add it back into the object pool.\n\nThe `OnTriggerEnter2D` method is for our collision detection. We're not doing physics collisions so this method just tells us if the objects have touched. If the current game object, in this case the enemy, has collided with a game object tagged as a weapon, then add the enemy back into the queue and increase the score.\n\nAttach the **Enemy.cs** script to your enemy prefab.\n\nBy now, your game probably looks something like this, minus the animations:\n\n![Space Shooter Enemies\n\nWe won't be worrying about animations in this tutorial. Consider that part of your extracurricular challenge after completing this tutorial.\n\nSo we have a functioning enemy pool. 
Let's look at the blaster logic since it is similar.\n\nCreate an **Assets/Scripts/Blaster.cs** file with the following C# logic:\n\n```csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class Blaster : MonoBehaviour {\n\n public float movementSpeed = 5.0f;\n public float decayRate = 2.0f;\n\n private float timeToDecay;\n\n void OnEnable() {\n timeToDecay = decayRate;\n }\n\n void Update() {\n timeToDecay -= Time.deltaTime;\n transform.position += Vector3.right * movementSpeed * Time.deltaTime;\n if(transform.position.x > 10.0f || timeToDecay <= 0) {\n gameObject.SetActive(false);\n }\n }\n\n void OnTriggerEnter2D(Collider2D collider) {\n if(collider.tag == \"Enemy\") {\n gameObject.SetActive(false);\n }\n }\n\n}\n```\n\nLook mildly familiar to the enemy? It is similar.\n\nWe need to first define how fast each blaster should move and how quickly the blaster should disappear if it hasn't hit anything.\n\nIn the `Update` method will subtract the current time from our blaster decay time. The blaster will continue to move along the x-axis until it has either gone off screen or it has decayed. In this scenario, the blaster is added back into the object pool. If the blaster collides with a game object tagged as an enemy, the blaster is also added back into the pool. Remember, the blaster will likely be tagged as a weapon so the **Enemy.cs** script will take care of adding the enemy back into the object pool.\n\nAttach the **Blaster.cs** script to your blaster prefab and apply any value settings as necessary with the Unity IDE in the inspector.\n\nTo make the game interesting, we're going to add some very slight differences to the other blasters.\n\nCreate an **Assets/Scripts/CrossBlast.cs** script with the following C# code:\n\n```csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class CrossBlast : MonoBehaviour {\n\n public float movementSpeed = 5.0f;\n\n void Update() {\n transform.position += Vector3.right * movementSpeed * Time.deltaTime;\n if(transform.position.x > 10.0f) {\n gameObject.SetActive(false);\n }\n }\n\n void OnTriggerEnter2D(Collider2D collider) { }\n\n}\n```\n\nAt a high level, this blaster behaves the same. However, if it collides with an enemy, it keeps going. It only goes back into the object pool when it goes off the screen. So there is no decay and it isn't a one enemy per blast weapon.\n\nLet's look at an **Assets/Scripts/SparkBlast.cs** script:\n\n```csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class SparkBlast : MonoBehaviour {\n\n public float movementSpeed = 5.0f;\n\n void Update() {\n transform.position += Vector3.right * movementSpeed * Time.deltaTime;\n if(transform.position.x > 10.0f) {\n gameObject.SetActive(false);\n }\n }\n\n void OnTriggerEnter2D(Collider2D collider) {\n if(collider.tag == \"Enemy\") {\n gameObject.SetActive(false);\n }\n }\n\n}\n```\n\nThe minor difference in the above script is that it has no decay, but it can only ever destroy one enemy.\n\nMake sure you attach these scripts to the appropriate blaster prefabs.\n\nWe're almost done! 
We have one more script and that's for the actual player!\n\nCreate an **Assets/Scripts/Player.cs** file and add the following code:\n\n```csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class Player : MonoBehaviour\n{\n\n public float movementSpeed = 5.0f;\n public float respawnSpeed = 8.0f;\n public float weaponFireRate = 0.5f;\n\n private float nextBlasterTime = 0.0f;\n private bool isRespawn = true;\n\n void Update() {\n if(isRespawn == true) {\n transform.position = Vector2.MoveTowards(transform.position, new Vector2(-6.0f, -0.25f), respawnSpeed * Time.deltaTime);\n if(transform.position == new Vector3(-6.0f, -0.25f, 0.0f)) {\n isRespawn = false;\n }\n } else {\n if(Input.GetKey(KeyCode.UpArrow) && transform.position.y < 4.0f) {\n transform.position += Vector3.up * movementSpeed * Time.deltaTime;\n } else if(Input.GetKey(KeyCode.DownArrow) && transform.position.y > -4.0f) {\n transform.position += Vector3.down * movementSpeed * Time.deltaTime;\n }\n if(Input.GetKey(KeyCode.Space) && Time.time > nextBlasterTime) {\n nextBlasterTime = Time.time + weaponFireRate;\n GameObject blaster = ObjectPool.SharedInstance.GetPooledBlaster();\n if(blaster != null) {\n blaster.SetActive(true);\n blaster.transform.position = new Vector3(transform.position.x + 1, transform.position.y);\n }\n }\n if(RealmController.Instance.IsCrossBlasterEnabled()) {\n if(Input.GetKey(KeyCode.B) && Time.time > nextBlasterTime) {\n nextBlasterTime = Time.time + weaponFireRate;\n GameObject crossBlast = ObjectPool.SharedInstance.GetPooledCrossBlast();\n if(crossBlast != null) {\n crossBlast.SetActive(true);\n crossBlast.transform.position = new Vector3(transform.position.x + 1, transform.position.y);\n }\n }\n }\n if(RealmController.Instance.IsSparkBlasterEnabled()) {\n if(Input.GetKey(KeyCode.V) && Time.time > nextBlasterTime) {\n nextBlasterTime = Time.time + weaponFireRate;\n GameObject sparkBlast = ObjectPool.SharedInstance.GetPooledSparkBlast();\n if(sparkBlast != null) {\n sparkBlast.SetActive(true);\n sparkBlast.transform.position = new Vector3(transform.position.x + 1, transform.position.y);\n }\n }\n }\n }\n }\n\n void OnTriggerEnter2D(Collider2D collider) {\n if(collider.tag == \"Enemy\" && isRespawn == false) {\n RealmController.Instance.ResetScore();\n transform.position = new Vector3(-10.0f, -0.25f, 0.0f);\n isRespawn = true;\n }\n }\n\n}\n```\n\nLooking at the above script, we have a few variables to keep track of:\n\n```csharp\npublic float movementSpeed = 5.0f;\npublic float respawnSpeed = 8.0f;\npublic float weaponFireRate = 0.5f;\n\nprivate float nextBlasterTime = 0.0f;\nprivate bool isRespawn = true;\n```\n\nWe want to define how fast the player can move, how long it takes for the respawn animation to happen, and how fast you're allowed to fire blasters.\n\nIn the `Update` method, we first check to see if we are currently respawning:\n\n```csharp\ntransform.position = Vector2.MoveTowards(transform.position, new Vector2(-6.0f, -0.25f), respawnSpeed * Time.deltaTime);\nif(transform.position == new Vector3(-6.0f, -0.25f, 0.0f)) {\n isRespawn = false;\n}\n```\n\nIf we are respawning, then we need to smoothly move the player game object towards a particular coordinate position. 
When the game object has reached that new position, then we can disable the respawn indicator that prevents us from controlling the player.\n\nIf we're not respawning, we can check to see if the movement keys were pressed:\n\n```csharp\nif(Input.GetKey(KeyCode.UpArrow) && transform.position.y < 4.0f) {\n transform.position += Vector3.up * movementSpeed * Time.deltaTime;\n} else if(Input.GetKey(KeyCode.DownArrow) && transform.position.y > -4.0f) {\n transform.position += Vector3.down * movementSpeed * Time.deltaTime;\n}\n```\n\nWhen pressing a key, as long as we haven't moved outside our y-axis boundary, we can adjust the position of the player. Since this is in the `Update` method, the movement should be smooth for as long as you are holding a key.\n\nUsing a blaster isn't too different:\n\n```csharp\nif(Input.GetKey(KeyCode.Space) && Time.time > nextBlasterTime) {\n nextBlasterTime = Time.time + weaponFireRate;\n GameObject blaster = ObjectPool.SharedInstance.GetPooledBlaster();\n if(blaster != null) {\n blaster.SetActive(true);\n blaster.transform.position = new Vector3(transform.position.x + 1, transform.position.y);\n }\n}\n```\n\nIf the particular blaster key is pressed and our rate limit isn't exceeded, we can update our `nextBlasterTime` based on the rate limit, pull a blaster from the object pool, and let the blaster do its magic based on the **Blaster.cs** script. All we're doing in the **Player.cs** script is checking to see if we're allowed to fire and if we are pull from the pool.\n\nThe data dependent spark and cross blasters follow the same rules, the exception being that we first check to see if they are enabled in our player profile.\n\nFinally, we have our collisions:\n\n```csharp\nvoid OnTriggerEnter2D(Collider2D collider) {\n if(collider.tag == \"Enemy\" && isRespawn == false) {\n RealmController.Instance.ResetScore();\n transform.position = new Vector3(-10.0f, -0.25f, 0.0f);\n isRespawn = true;\n }\n}\n```\n\nIf our player collides with a game object tagged as an enemy and we're not currently respawning, then we can reset the score and trigger the respawn.\n\nMake sure you attach this **Player.cs** script to your **Player** game object.\n\nIf everything worked out, the game should be functional at this point. If something isn't working correctly, double check the following:\n\n- Make sure each of your game objects is properly tagged.\n- Make sure the scripts are attached to the proper game object or prefab.\n- Make sure the values on the scripts have been defined through the Unity IDE inspector.\n\nPlay around with the game and setting values within MongoDB Atlas.\n\n## Conclusion\n\nYou just saw how to create a space shooter type game with Unity that syncs with MongoDB Atlas by using the Realm .NET SDK for Unity and Atlas Device Sync. Realm only played a small part in this game because that is the beauty of Realm. You can get data persistence and sync with only a few lines of code.\n\nWant to give this project a try? I've uploaded all of the source code to GitHub. You just need to clone the project, replace my App ID with yours, and build the project. 
Of course you'll still need to have properly configured Atlas and Device Sync in the cloud.\n\nIf you're looking for a slightly slower introduction to Realm with Unity, check out a previous tutorial that I wrote on the subject.\n\nIf you'd like to connect with us further, don't forget to visit the community forums.", "format": "md", "metadata": {"tags": ["Realm", "C#", "Unity", ".NET"], "pageDescription": "Learn how to build a space shooter game that synchronizes between clients and the cloud using MongoDB, Unity, and Atlas Device Sync.", "contentType": "Tutorial"}, "title": "Building a Space Shooter Game in Unity that Syncs with Realm and MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/migrate-to-realm-kotlin-sdk", "action": "created", "body": "# Migrating Android Apps from Realm Java SDK to Kotlin SDK\n\n## Introduction\n\nSo, it's here! The engineering team has released a major milestone of the Kotlin\nSDK. The preview is available to\nyou to try it and make comments and suggestions.\n\nUntil now, if you were using Realm in Android, you were using the Java version of the SDK. The purpose of the Realm\nKotlin SDK is to be the evolution of the Java one and eventually replace it. So, you might be wondering if and when you\nshould migrate to it. But even more important for your team and your app is what it provides that the Java SDK\ndoesn't. The Kotlin SDK has been written from scratch to combine what the engineering team has learned through years of\nSDK development, with the expressivity and fluency of the Kotlin language. They have been successful at that and the\nresulting SDK provides a first-class experience that I would summarize in the following points:\n\n- The Kotlin SDK allows you to use expressions that are Kotlin idiomatic\u2014i.e., more natural to the language.\n- It uses Kotlin coroutines and flows to make concurrency easier and more efficient.\n- It has been designed and developed with Kotlin Multiplatform in mind.\n- It has removed the thread-confinement restriction on Java and it directly integrates with the Android lifecycle hooks\n so the developer doesn't have to spin up and tear down a realm instance on every activity lifecycle.\n- It's the way forward. MongoDB is not discontinuing the Java SDK anytime soon, but Kotlin provides the engineering\n team more resources to implement cooler things going forward. A few of them have been implemented already. Why wouldn't\n you want to benefit from them?\n\nAre you on board? I hope you are, because through the rest of this article, I'm going to tell you how to upgrade your\nprojects to use the Realm Kotlin SDK and take advantage of those benefits that I have just mentioned and some more. You\ncan also find a complete code example in this repo.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n\n## Gradle build files\n\nFirst things first. You need to make some changes to your `build.gradle` files to get access to the Realm Kotlin SDK\nwithin your project, instead of the Realm Java SDK that you were using. 
The Realm Kotlin SDK uses a gradle plugin that\nhas been published in the Gradle Plugin Portal, so the prefered way\nto add it to your project is using the plugins section of the build configuration of the module \u2014i.e.,\n`app/build.gradle`\u2014 instead of the legacy method of declaring the dependency in the `buildscript` block of the\nproject `build.gradle`.\n\nAfter replacing the plugin in the module configuration with the Kotlin SDK one, you need to add an implementation\ndependency to your module. If you want to use Sync with your MongoDB\ncluster, then you should use `'io.realm.kotlin:library-sync'`, but if you just want to have local persistence, then\n`'io.realm.kotlin:library-base'` should be enough. Also, it's no longer needed to have a `realm` dsl section in the\n`android` block to enable sync.\n\n### `build.gradle` Comparison\n\n#### Java SDK\n\n```kotlin\nbuildscript {\n// ...\n dependencies {\n classpath \"com.android.tools.build:gradle:$agp_version\"\n classpath \"org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version\"\n\n // Realm Plugin\n classpath \"io.realm:realm-gradle-plugin:10.10.1\"\n }\n}\n```\n\n#### Kotlin SDK\n\n```kotlin\nbuildscript {\n// ...\n dependencies {\n classpath \"com.android.tools.build:gradle:$agp_version\"\n classpath \"org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version\"\n }\n}\n```\n\n### `app/build.gradle` Comparison\n\n#### Java SDK\n\n```kotlin\nplugins {\n id 'com.android.application'\n id 'org.jetbrains.kotlin.android'\n id 'kotlin-kapt'\n id 'realm-android'\n}\nandroid {\n// ...\n realm {\n syncEnabled = true\n }\n}\ndependencies {\n// ...\n}\n```\n\n#### Kotlin SDK\n\n```kotlin\nplugins {\n id 'com.android.application'\n id 'org.jetbrains.kotlin.android'\n id 'kotlin-kapt'\n id 'io.realm.kotlin' version '0.10.0'\n}\nandroid {\n// ...\n}\ndependencies {\n// ...\n implementation(\"io.realm.kotlin:library-base:0.10.0\")\n}\n```\n\nIf you have more than one module in your project and want to pin the version number of the plugin for all of them, you\ncan define the plugin in the project `build.gradle` with the desired version and the attribute `apply false`. Then, the\n`build.gradle` files of the modules should use the same plugin id, but without the version attribute.\n\n### Multimodule Configuration\n\n#### Project build.gradle\n\n```kotlin\nplugins {\n// ...\n id 'io.realm.kotlin' version '0.10.0' apply false\n}\n```\n\n#### Modules build.gradle\n\n```kotlin\nplugins {\n// ...\n id 'io.realm.kotlin'\n}\n```\n\n## Model classes\n\nKotlin scope functions (i.e., `apply`, `run`, `with`, `let`, and `also`) make object creation and manipulation easier.\nThat was already available when using the Java SDK from kotlin, because they are provided by the Kotlin language itself.\n\nDefining a model class is even easier with the Realm Kotlin SDK. You are not required to make the model class `open`\nanymore. The Java SDK was using the Kotlin Annotation Processing Tool to derive proxy classes that took care of\ninteracting with the persistence. Instead, the Kotlin SDK uses the `RealmObject` interface as a marker for the plugin.\nIn the construction process, the plugin identifies the objects that are implementing the marker interface and injects the\nrequired functionality to interact with the persistence. So, that's another change that you have to put in place:\ninstead of making your model classes extend \u2014i.e., inherit\u2014 from `RealmObject`, you just have to implement the interface\nwith the same name. 
In practical terms, this means using `RealmObject` in the class declaration without parentheses.\n\n### Model Class Definition\n\n#### Java SDK\n\n```kotlin\nopen class ExpenseInfo : RealmObject() {\n @PrimaryKey\n var expenseId: String = UUID.randomUUID().toString()\n var expenseName: String = \"\"\n var expenseValue: Int = 0\n}\n```\n\n#### Kotlin SDK\n\n```kotlin\nclass ExpenseInfo : RealmObject {\n @PrimaryKey\n var expenseId: String = UUID.randomUUID().toString()\n var expenseName: String = \"\"\n var expenseValue: Int = 0\n}\n```\n\nThere are also changes in terms of the type system. `RealmList` that was used in the Java SDK to model one-to-many\nrelationships is extended in the Kotlin SDK to benefit from the typesystem and allow expressing nullables in those\nrelationships. So, now you can go beyond `RealmList` and use `RealmList`. You will get all the\nbenefits of the syntax sugar to mean that the strings the object is related to, might be null. You can check this and\nthe rest of the supported types in the documentation of the Realm Kotlin SDK.\n\n## Opening (and closing) the realm\n\nUsing Realm is even easier now. The explicit initialization of the library that was required by the Realm Java SDK is\nnot needed for the Realm Kotlin SDK. You did that invoking `Realm.init()` explicitly. And in order to ensure that you\ndid that once and at the beginning of the execution of your app, you normally put that line of code in the `onCreate()`\nmethod of the `Application` subclass. You can forget about that chore for good.\n\nThe configuration of the Realm in the Kotlin SDK requires passing the list of object model classes that conform the\nschema, so the `builder()` static method has that as the argument. The Realm Kotlin SDK also allows setting the logging\nlevel per configuration, should you use more than one. The rest of the configuration options remain the same.\n\nIt's also different the way you get an instance of a Realm when you have defined the configuration that you want to\nuse. With the Java SDK, you had to get access to a thread singleton using one of the static methods\n`Realm.getInstance()` or `Realm.getDefaultInstance()` (the latter when a default configuration was being set and used).\nIn most cases, that instance was used and released, by invoking its `close()` method, at the end of the\nActivity/Fragment lifecycle. The Kotlin SDK allows you to use the static method `open()` to get a single instance of a\nRealm per configuration. Then you can inject it and use it everywhere you need it. This change takes the burden of\nRealm lifecycle management off from the shoulders of the developer. That is huge! 
Lifecycle management is often\npainful and sometimes difficult to get right.\n\n### Realm SDK Initialization\n\n#### Java SDK\n\n```kotlin\nclass ExpenseApplication : Application() {\n override fun onCreate() {\n super.onCreate()\n\n Realm.init(this)\n\n val config = RealmConfiguration.Builder()\n .name(\"expenseDB.db\")\n .schemaVersion(1)\n .deleteRealmIfMigrationNeeded()\n .build()\n\n Realm.setDefaultConfiguration(config)\n // Realms can now be obtained with Realm.getDefaultInstance()\n }\n}\n```\n\n#### Kotlin SDK\n\n```kotlin\nclass ExpenseApplication : Application() {\n lateinit var realm: Realm\n\n override fun onCreate() {\n super.onCreate()\n\n val config = RealmConfiguration\n .Builder(schema = setOf(ExpenseInfo::class))\n .name(\"expenseDB.db\")\n .schemaVersion(1)\n .deleteRealmIfMigrationNeeded()\n .log(LogLevel.ALL)\n .build()\n\n realm = Realm.open(configuration = config)\n // This realm can now be injected everywhere\n }\n}\n```\n\nObjects in the Realm Kotlin SDK are now frozen to directly integrate seamlessly into Kotlin coroutine and flows. That\nmeans that they are not live as they used to be in the Realm Java SDK and don't update themselves when they get changed\nin some other part of the application or even in the cloud. Instead, you have to modify them within a write\ntransaction, i.e., within a `write` or `writeBlocking` block. When the scope of the block ends, the objects are frozen\nagain.\n\nEven better, the realms aren't confined to a thread. No more thread singletons. Instead, realms are thread-safe, so\nthey can safely be shared between threads. That means that you don't need to be opening and closing realms for the\npurpose of using them within a thread. Get your Realm and use it everywhere in your app. Say goodbye to all those\nlifecycle management operations for the realms!\n\nFinally, if you are injecting dependencies of your application, with the Realm Kotlin SDK, you can have a singleton for\nthe Realm and let the dependency injection framework do its magic and inject it in every view-model. That's much easier\nand more efficient than having to create one each time \u2014using a factory, for example\u2014 and ensuring that the\nclose method was called wherever it was injected.\n\n## Writing data\n\nIt took a while, but Kotlin brought coroutines to Android and we have learned to use them and enjoy how much easier\nthey make doing asynchronous things. Now, it seems that coroutines are _the way_ to do those things and we would like\nto use them to deal with operations that might affect the performance of our apps, such as dealing with the persistence\nof our data.\n\nSupport for coroutines and flows is built-in in the Realm Kotlin SDK as a first-class citizen of the API. You no longer\nneed to insert write operations in suspending functions to benefit from coroutines. The `write {}` method of a realm is\na suspending method itself and can only be invoked from within a coroutine context. No worries here, since the compiler\nwill complain if you try to do it outside of a context. But with no extra effort on your side, you will be performing\nall those expensive IO operations asynchronously. Ain't that nice?\n\nYou can still use the `writeBlocking {}` of a realm, if you need to perform a synchronous operation. But, beware that,\nas the name states, the operation will block the current thread. 
Android might not be very forgiving if you block the\nmain thread for a few seconds, and it'll present the user with the undesirable \"Application Not Responding\" dialog. Please,\nbe mindful and use this **only when you know it is safe**.\n\nAnother additional advantage of the Realm Kotlin SDK is that, thanks to having the objects frozen in the realm, we can\nmake asynchronous transactions easier. In the Java SDK, we had to find again the object we wanted to modify inside of\nthe transaction block, so it was obtained from the realm that we were using on that thread. The Kotlin SDK makes that\nmuch simpler by using `findLatest()` with that object to get its instance in the mutable realm and then apply the\nchanges to it.\n\n### Asynchronous Transaction Comparison\n\n#### Java SDK\n\n```kotlin\nrealm.executeTransactionAsync { bgRealm ->\n val result = bgRealm.where(ExpenseInfo::class.java)\n .equalTo(\"expenseId\", expenseInfo.expenseId)\n .findFirst()\n\n result?.let {\n result.deleteFromRealm()\n }\n}\n```\n\n#### Kotlin SDK\n\n```kotlin\nviewModelScope.launch(Dispatchers.IO) {\n realm.write {\n findLatest(expenseInfo)?.also {\n delete(it)\n }\n }\n}\n```\n\n## Queries and listening to updates\n\nOne thing where Realm shines is when you have to retrieve information from it. Data is obtained concatenating three\noperations:\n\n1. Creating a RealmQuery for the object class that you are interested in.\n2. Optionally adding constraints to that query, like expected values or acceptable ranges for some attributes.\n3. Executing the query to get the results from the realm. Those results can be actual objects from the realm, or\n aggregations of them, like the number of matches in the realm that you get when you use `count()`.\n\nThe Realm Kotlin SDK offers you a new query system where each of those steps has been simplified.\n\nThe queries in the Realm Java SDK used filters on the collections returned by the `where` method. The Kotlin SDK offers\nthe `query` method instead. This method takes a type parameter using generics, instead of the explicit type parameter\ntaken as an argument of `where` method. That is easier to read and to write.\n\nThe constraints that allow you to narrow down the query to the results you care about are implemented using a predicate\nas the optional argument of the `query()` method. That predicate can have multiple constraints concatenated with\nlogical operators like `AND` or `OR` and even subqueries that are a mayor superpower that will boost your ability to\nquery the data.\n\nFinally, you will execute the query to get the data. In most cases, you will want that to happen in the background so\nyou are not blocking the main thread. If you also want to be aware of changes on the results of the query, not just the\ninitial results, it's better to get a flow. That required two steps in the Java SDK. First, you had to use\n`findAllAsync()` on the query, to get it to work in the background, and then convert the results into a flow with the\n`toFlow()` method. The new system simplifies things greatly, providing you with the `asFlow()` method that is a\nsuspending function of the query. There is no other step. 
Coroutines and flows are built-in from the beginning in the\nnew query system.\n\n### Query Comparison\n\n#### Java SDK\n\n```kotlin\nprivate fun getAllExpense(): Flow> =\n realm.where(ExpenseInfo::class.java).greaterThan(\"expenseValue\", 0).findAllAsync().toFlow()\n```\n\n#### Kotlin SDK\n\n```kotlin\nprivate fun getAllExpense(): Flow> =\n realm.query(\"expenseValue > 0\").asFlow()\n```\n\nAs it was the case when writing to the Realm, you can also use blocking operations when you need them, invoking `find()`\non the query. And also in this case, use it **only when you know it is safe**.\n\n## Conclusion\n\nYou're probably not reading this, because if I were you, I would be creating a branch in my project and trying the\nRealm Kotlin SDK already and benefiting from all these wonderful changes. But just in case you are, let me summarize the\nmost relevant changes that the Realm Kotlin SDK provides you with:\n\n- The configuration of your project to use the Realm Kotlin SDK is easier, uses more up-to-date mechanisms, and is more\n explicit.\n- Model classes are simpler to define and more idiomatic.\n- Working with the realm is much simpler because it requires less ceremonial steps that you have to worry about and\n plays better with coroutines.\n- Working with the objects is easier even when doing things asynchronously, because they're frozen, and that helps you\n to do things safely.\n- Querying is enhanced with simpler syntax, predicates, and suspending functions and even flows.\n\nTime to code!\n", "format": "md", "metadata": {"tags": ["Realm", "Kotlin"], "pageDescription": "This is a guide to help you migrate you apps that are using the Realm Java SDK to the newer Realm Kotlin SDK. It covers the most important changes that you need to put in place to use the Kotlin SDK.", "contentType": "Article"}, "title": "Migrating Android Apps from Realm Java SDK to Kotlin SDK", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/hapijs-nodejs-driver", "action": "created", "body": "# Build a RESTful API with HapiJS and MongoDB\n\nWhile JAMStack, static site generators, and serverless functions continue to be all the rage in 2020, traditional frameworks like Express.js and Hapi.js remain the go-to solution for many developers. These frameworks are battle-tested, reliable, and scalable, so while they may not be the hottest tech around, you can count on them to get the job done.\n\nIn this post, we're going to build a web application with Hapi.js and MongoDB. If you would like to follow along with this tutorial, you can get the code from this GitHub repo. Also, be sure to sign up for a free MongoDB Atlas account to make sure you can implement all of the code in this tutorial.\n\n## Prerequisites\n\nFor this tutorial you'll need:\n\n- Node.js\n- npm\n- MongoDB\n\nYou can download Node.js here, and it will come with the latest version of npm. For MongoDB, use MongoDB Atlas for free. While you can use a local MongoDB install, you will not be able to implement some of the functionality that relies on MongoDB Atlas Search, so I encourage you to give Atlas a try. All other required items will be covered in the article.\n\n## What is Hapi.js\n\nHapi.js or simply Hapi is a Node.js framework for \"building powerful, scalable applications, with minimal overhead and full out-of-the-box functionality\". Originally developed for Walmart's e-commerce platform, the framework has been adopted by many enterprises. 
In my personal experience, I've worked with numerous companies who heavily relied on Hapi.js for their most critical infrastructure ranging from RESTful APIs to traditional web applications.\n\nFor this tutorial, I'll assume that you are already familiar with JavaScript and Node.js. If not, I would suggest checking out the Nodejs.dev website which offers an excellent introduction to Node.js and will get you up and running in no time.\n\n## What We're Building: RESTful Movie Database\n\nThe app that we're going to build today is going to expose a series of RESTful endpoints for working with a movies collection. The dataset we'll be relying on can be accessed by loading sample datasets into your MongoDB Atlas cluster. In your MongoDB dashboard, navigate to the **Clusters** tab. Click on the ellipses (...) button on the cluster you wish to use and select the **Load Sample Dataset** option. Within a few minutes, you'll have a series of new databases created and the one we'll work with is called `sample_mflix`.\n\nWe will not build a UI as part of this tutorial, instead, we'll focus on getting the most out of our Hapi.js backend.\n\n## Setting up a Hapi.js Application\n\nLike with any Node.js application, we'll start off our project by installing some packages from the node package manager or npm. Navigate to a directory where you would like to store your application and execute the following commands:\n\n``` bash\nnpm init\n\nnpm install @hapi/hapi --save\n```\n\nExecuting `npm init` will create a `package.json` file where we can store our dependencies. When you run this command you'll be asked a series of questions that will determine how the file gets populated. It's ok to leave all the defaults as is. The `npm install @hapi/hapi --save` command will pull down the latest\nversion of the Hapi.js framework and save a reference to this version in the newly created `package.json` file. When you've completed this step, create an `index.js` file in the root directory and open it up.\n\nMuch like Express, Hapi.js is not a very prescriptive framework. What I mean by this is that we as the developer have the total flexibility to decide how we want our directory structure to look. We could have our entire application in a single file, or break it up into hundreds of components, Hapi.js does not care. To make sure our install was successful, let's write a simple app to display a message in our browser. The code will look like this:\n\n``` javascript\nconst Hapi = require('@hapi/hapi');\n\nconst server = Hapi.server({\n port: 3000,\n host: 'localhost'\n});\n\nserver.route({\n method: 'GET',\n path: '/',\n handler: (req, h) => {\n\n return 'Hello from HapiJS!';\n }\n});\n\nserver.start();\nconsole.log('Server running on %s', server.info.uri);\n```\n\nLet's go through the code above to understand what is going on here. At the start of our program, we are requiring the hapi package which imports all of the Hapi.js API's and makes them available in our app. We then use the `Hapi.server` method to create an instance of a Hapi server and pass in our parameters. Now that we have a server, we can add routes to it, and that's what we do in the subsequent section. We are defining a single route for our homepage, saying that this route can only be accessed via a **GET** request, and the handler function is just going to return the message **\"Hello from HapiJS!\"**. Finally, we start the Hapi.js server and display a message to the console that tells us the server is running. 
To start the server, execute the following command in your terminal window:\n\n``` bash\nnode index.js\n```\n\nIf we navigate to `localhost:3000` in our web browser of choice, our result will look as follows:\n\nIf you see the message above in your browser, then you are ready to proceed to the next section. If you run into any issues, I would first ensure that you have the latest version of Node.js installed and that you have a `@hapi/hapi` folder inside of your `node_modules` directory.\n\n## Building a RESTful API with Hapi.js\n\nNow that we have the basics down, let's go ahead and create the actual routes for our API. The API routes that we'll need to create are as follows:\n\n- Get all movies\n- Get a single movie\n- Insert a movie\n- Update a movie\n- Delete a movie\n- Search for a movie\n\nFor the most part, we just have traditional CRUD operations that you are likely familiar with. But, our final route is a bit more advanced. This route is going to implement search functionality and allow us to highlight some of the more advanced features of both Hapi.js and MongoDB. Let's update our `index.js` file with the routes we need.\n\n``` javascript\nconst Hapi = require('@hapi/hapi');\n\nconst server = Hapi.server({\n port: 3000,\n host: 'localhost'\n});\n\n// Get all movies\nserver.route({\n method: 'GET',\n path: '/movies',\n handler: (req, h) => {\n\n return 'List all the movies';\n }\n});\n\n// Add a new movie to the database\nserver.route({\n method: 'POST',\n path: '/movies',\n handler: (req, h) => {\n\n return 'Add new movie';\n }\n});\n\n// Get a single movie\nserver.route({\n method: 'GET',\n path: '/movies/{id}',\n handler: (req, h) => {\n\n return 'Return a single movie';\n }\n});\n\n// Update the details of a movie\nserver.route({\n method: 'PUT',\n path: '/movies/{id}',\n handler: (req, h) => {\n\n return 'Update a single movie';\n }\n});\n\n// Delete a movie from the database\nserver.route({\n method: 'DELETE',\n path: '/movies/{id}',\n handler: (req, h) => {\n\n return 'Delete a single movie';\n }\n});\n\n// Search for a movie\nserver.route({\n method: 'GET',\n path: '/search',\n handler: (req, h) => {\n\n return 'Return search results for the specified term';\n }\n});\n\nserver.start();\nconsole.log('Server running on %s', server.info.uri);\n```\n\nWe have created our routes, but currently, all they do is return a string saying what the route is meant to do. That's no good. Next, we'll connect our Hapi.js app to our MongoDB database so that we can return actual data. We'll use the MongoDB Node.js Driver to accomplish this.\n\n>If you are interested in learning more about the MongoDB Node.js Driver through in-depth training, check out the MongoDB for JavaScript Developers course on MongoDB University. It's free and will teach you all about reading and writing data with the driver, using the aggregation framework, and much more.\n\n## Connecting Our Hapi.js App to MongoDB\n\nConnecting a Hapi.js backend to a MongoDB database can be done in multiple ways. We could use the traditional method of just bringing in the MongoDB Node.js Driver via npm, we could use an ODM library like Mongoose, but I believe there is a better way to do it. The way we're going to connect to our MongoDB database in our Atlas cluster is using a Hapi.js plugin.\n\nHapi.js has many excellent plugins for all your development needs. Whether that need is authentication, logging, localization, or in our case data access, the Hapi.js plugins page provides many options. 
The plugin we're going to use is called `hapi-mongodb`. Let's install this package by running:\n\n``` bash\nnpm install hapi-mongodb --save \n```\n\nWith the package installed, let's go back to our `index.js` file and\nconfigure the plugin. The process for this relies on the `register()`\nmethod provided in the Hapi API. We'll register our plugin like so:\n\n``` javascript\nserver.register({\n plugin: require('hapi-mongodb'),\n options: {\n uri: 'mongodb+srv://{YOUR-USERNAME}:{YOUR-PASSWORD}@main.zxsxp.mongodb.net/sample_mflix?retryWrites=true&w=majority',\n settings : {\n useUnifiedTopology: true\n },\n decorate: true\n }\n});\n```\n\nWe would want to register this plugin before our routes. For the options object, we are passing our MongoDB Atlas service URI as well as the name of our database, which in this case will be `sample_mflix`. If you're working with a different database, make sure to update it accordingly. We'll also want to make one more adjustment to our entire code base before moving on. If we try to run our Hapi.js application now, we'll get an error saying that we cannot start our server before plugins are finished registering. The register method will take some time to run and we'll have to wait on it. Rather than deal with this in a synchronous fashion, we'll wrap an async function around our server instantiation. This will make our code much cleaner and easier to reason about. The final result will look like this:\n\n``` javascript\nconst Hapi = require('@hapi/hapi');\n\nconst init = async () => {\n\n const server = Hapi.server({\n port: 3000,\n host: 'localhost'\n });\n\n await server.register({\n plugin: require('hapi-mongodb'),\n options: {\n url: 'mongodb+srv://{YOUR-USERNAME}:{YOUR-PASSWORD}@main.zxsxp.mongodb.net/sample_mflix?retryWrites=true&w=majority',\n settings: {\n useUnifiedTopology: true\n },\n decorate: true\n }\n });\n\n // Get all movies\n server.route({\n method: 'GET',\n path: '/movies',\n handler: (req, h) => {\n\n return 'List all the movies';\n }\n });\n\n // Add a new movie to the database\n server.route({\n method: 'POST',\n path: '/movies',\n handler: (req, h) => {\n\n return 'Add new movie';\n }\n });\n\n // Get a single movie\n server.route({\n method: 'GET',\n path: '/movies/{id}',\n handler: (req, h) => {\n\n return 'Return a single movie';\n }\n });\n\n // Update the details of a movie\n server.route({\n method: 'PUT',\n path: '/movies/{id}',\n handler: (req, h) => {\n\n return 'Update a single movie';\n }\n });\n\n // Delete a movie from the database\n server.route({\n method: 'DELETE',\n path: '/movies/{id}',\n handler: (req, h) => {\n\n return 'Delete a single movie';\n }\n });\n\n // Search for a movie\n server.route({\n method: 'GET',\n path: '/search',\n handler: (req, h) => {\n\n return 'Return search results for the specified term';\n }\n });\n\n await server.start();\n console.log('Server running on %s', server.info.uri);\n}\n\ninit(); \n```\n\nNow we should be able to restart our server and it will register the plugin properly and work as intended. To ensure that our connection to the database does work, let's run a sample query to return just a single movie when we hit the `/movies` route. We'll do this with a `findOne()` operation. The `hapi-mongodb` plugin is just a wrapper for the official MongoDB Node.js driver so all the methods work exactly the same. Check out the official docs for details on all available methods. 
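One quick note on the `decorate: true` option we passed during registration: it's what attaches the plugin directly to the request object, which is why the handlers that follow can reach the database as `req.mongo.db`. The sketch below contrasts the two access styles; the route path is made up for illustration, and the non-decorated form is based on the plugin's documented defaults, so double-check it against the `hapi-mongodb` docs before relying on it.\n\n``` javascript\n// A throwaway route just to confirm database access (hypothetical path)\nserver.route({\n method: 'GET',\n path: '/ping-db',\n handler: async (req, h) => {\n // With decorate: true (what this tutorial uses), the plugin lives on the request\n const db = req.mongo.db;\n // Without decorate, the same handle is typically reached through the server's plugins namespace:\n // const db = req.server.plugins['hapi-mongodb'].db;\n return { collections: (await db.listCollections().toArray()).length };\n }\n});\n```\n\nThat's enough of a detour; back to our sample query.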
Let's use the `findOne()` method to return a single movie from the database.\n\n``` javascript\n// Get all movies\nserver.route({\n method: 'GET',\n path: '/movies',\n handler: async (req, h) => {\n\n const movie = await req.mongo.db.collection('movies').findOne({})\n\n return movie;\n }\n});\n```\n\nWe'll rely on the async/await pattern in our handler functions as well to keep our code clean and concise. Notice how our MongoDB database is now accessible through the `req` or request object. We didn't have to pass in an instance of our database, the plugin handled all of that for us, all we have to do was decide what our call to the database was going to be. If we restart our server and navigate to `localhost:3000/movies` in our browser we should see the following response:\n\nIf you do get the JSON response, it means your connection to the database is good and your plugin has been correctly registered with the Hapi.js application. If you see any sort of error, look at the above instructions carefully. Next, we'll implement our actual database calls to our routes.\n\n## Implementing the RESTful Routes\n\nWe have six API routes to implement. We'll tackle each one and introduce new concepts for both Hapi.js and MongoDB. We'll start with the route that gets us all the movies.\n\n### Get All Movies\n\nThis route will retrieve a list of movies. Since our dataset contains thousands of movies, we would not want to return all of them at once as this would likely cause the user's browser to crash, so we'll limit the result set to 20 items at a time. We'll allow the user to pass an optional query parameter that will give them the next 20 results in the set. My implementation is below.\n\n``` javascript\n// Get all movies\nserver.route({\n method: 'GET',\n path: '/movies',\n handler: async (req, h) => {\n\n const offset = Number(req.query.offset) || 0;\n\n const movies = await req.mongo.db.collection('movies').find({}).sort({metacritic:-1}).skip(offset).limit(20).toArray();\n\n return movies;\n }\n});\n```\n\nIn our implementation, the first thing we do is sort our collection to ensure we get a consistent order of documents. In our case, we're sorting by the `metacritic` score in descending order, meaning we'll get the highest rated movies first. Next, we check to see if there is an `offset` query parameter. If there is one, we'll take its value and convert it into an integer, otherwise, we'll set the offset value to 0. Next, when we make a call to our MongoDB database, we are going to use that `offset` value in the `skip()` method which will tell MongoDB how many documents to skip. Finally, we'll use the `limit()` method to limit our results to 20 records and the `toArray()` method to turn the cursor we get back into an object.\n\nTry it out. Restart your Hapi.js server and navigate to `localhost:3000/movies`. Try passing an offset query parameter to see how the results change. For example try `localhost:3000/movies?offset=500`. Note that if you pass a non-integer value, you'll likely get an error. We aren't doing any sort of error handling in this tutorial but in a real-world application, you should handle all errors accordingly. Next, let's implement the method to return a single movie.\n\n### Get Single Movie\n\nThis route will return the data on just a single movie. For this method, we'll also play around with projection, which will allow us to pick and choose which fields we get back from MongoDB. 
Here is my implementation:\n\n``` javascript\n// Get a single movie\nserver.route({\n method: 'GET',\n path: '/movies/{id}',\n handler: async (req, h) => {\n const id = req.params.id\n const ObjectID = req.mongo.ObjectID;\n\n const movie = await req.mongo.db.collection('movies').findOne({_id: new ObjectID(id)},{projection:{title:1,plot:1,cast:1,year:1, released:1}});\n\n return movie;\n }\n});\n```\n\nIn this implementation, we're using the `req.params` object to get the dynamic value from our route. We're also making use of the `req.mongo.ObjectID` method, which allows us to transform the string id into an ObjectID, the unique identifier we use in the MongoDB database. We have to convert our string to an ObjectID; otherwise, our `findOne()` method would not work, as our `_id` field is not stored as a string. We're also using a projection to return only the `title`, `plot`, `cast`, `year`, and `released` fields. The result is below.\n\nA quick tip on projection. In the above example, we used the `{ fieldName: 1 }` format, which told MongoDB to return only this specific field. If instead we only wanted to omit a few fields, we could have used the inverse `{ fieldName: 0 }` format. This would send us all fields except the ones named and given a value of zero in the projection option. Note that you can't mix and match the 1 and 0 formats; you have to pick one. The only exception is the `_id` field: if you don't want it, you can pass `{ _id: 0 }`.\n\n### Add A Movie\n\nThe next route we'll implement will be our insert operation and will allow us to add a document to our collection. The implementation looks like this:\n\n``` javascript\n// Add a new movie to the database\nserver.route({\n method: 'POST',\n path: '/movies',\n handler: async (req, h) => {\n\n const payload = req.payload\n\n const status = await req.mongo.db.collection('movies').insertOne(payload);\n\n return status;\n }\n});\n```\n\nThe payload that we are going to submit to this endpoint will look like this:\n\n``` json\n{\n \"title\": \"Avengers: Endgame\",\n \"plot\": \"The avengers save the day\",\n \"cast\": [\"Robert Downey Jr.\", \"Chris Evans\", \"Scarlett Johansson\", \"Samuel L. Jackson\"],\n \"year\": 2019\n}\n```\n\nIn our implementation, we're again using the `req` object, but this time we're using the `payload` sub-object to get the data that is sent to the endpoint. To test that our endpoint works, we'll use Postman to send the request. Our response will give us a lot of info on what happened with the operation, so for educational purposes we'll just return the entire result. In a real-world application, you would just send back a `{message: \"ok\"}` or similar statement. If we look at the response, we'll find a field titled `insertedCount: 1`, which tells us that our document was successfully inserted.\n\nIn this route, we added the functionality to insert a brand-new document; in the next route, we'll update an existing one.\n\n### Update A Movie\n\nUpdating a movie works much the same way that adding a new movie does. I do want to introduce a new Hapi.js concept here, though: validation. Hapi.js can help us easily validate data before our handler function is called. To do this, we'll import a package that is maintained by the Hapi.js team called Joi.
To work with Joi, we'll first need to install the package and include it in our `index.js` file.\n\n``` bash\nnpm install @hapi/joi --save\nnpm install joi-objectid --save\n```\n\nNext, let's take a look at our implementation of the update route and then I'll explain how it all ties together.\n\n``` javascript\n// Add this below the @hapi/hapi require statement\nconst Joi = require('@hapi/joi');\nJoi.objectId = require('joi-objectid')(Joi)\n\n// Update the details of a movie\nserver.route({\n method: 'PUT',\n path: '/movies/{id}',\n options: {\n validate: {\n params: Joi.object({\n id: Joi.objectId()\n })\n }\n },\n handler: async (req, h) => {\n const id = req.params.id\n const ObjectID = req.mongo.ObjectID;\n\n const payload = req.payload\n\n const status = await req.mongo.db.collection('movies').updateOne({_id: ObjectID(id)}, {$set: payload});\n\n return status;\n\n }\n});\n```\n\nWith this route, we are really starting to show the strength of Hapi.js. In this implementation, we added an `options` object and passed in a `validate` object. From here, we validated that the `id` parameter matches what we'd expect an ObjectID string to look like. If it did not, our handler function would never be called; instead, the request would short-circuit and we'd get an appropriate error message. Joi can be used to validate not only the defined parameters but also query parameters, payload, and even headers. We've barely scratched the surface.\n\nThe rest of the implementation had us executing an `updateOne()` method, which updates an existing document with the new data. Again, we're returning the entire status object here for educational purposes, but in a real-world application, you wouldn't want to send that raw data.\n\n### Delete A Movie\n\nDeleting a movie will simply remove the record from our collection. There isn't a whole lot of new functionality to showcase here, so let's get right into the implementation.\n\n``` javascript\n// Delete a movie from the database\nserver.route({\n method: 'DELETE',\n path: '/movies/{id}',\n options: {\n validate: {\n params: Joi.object({\n id: Joi.objectId()\n })\n }\n },\n handler: async (req, h) => {\n const id = req.params.id\n const ObjectID = req.mongo.ObjectID;\n\n const status = await req.mongo.db.collection('movies').deleteOne({_id: ObjectID(id)});\n\n return status;\n\n }\n});\n```\n\nIn our delete route implementation, we continue to use the Joi library to validate that the parameter to delete is an actual ObjectID. To remove a document from our collection, we'll use the `deleteOne()` method and pass in the ObjectID to delete.\n\nImplementing this route concludes our discussion on the basic CRUD operations. To close out this tutorial, we'll implement one final route that will allow us to search our movie database.\n\n### Search For A Movie\n\nTo conclude our routes, we'll add the ability for a user to search for a movie. To do this, we'll rely on a MongoDB Atlas feature called Atlas Search. Before we can implement this functionality on our backend, we'll first need to enable Atlas Search and create an index within our MongoDB Atlas dashboard. Navigate to your dashboard, and locate the `sample_mflix` database. Select the `movies` collection and click on the **Search (Beta)** tab.\n\nClick the **Create Search Index** button, and for this tutorial, we can leave the field mappings in their default dynamic state, so just hit the **Create Index** button.
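In case you're curious, an index created with dynamic field mappings corresponds to a JSON definition along these lines (shown for reference only; the exact document in your Atlas UI may include a few additional defaults):\n\n``` json\n{\n \"mappings\": {\n \"dynamic\": true\n }\n}\n```\n\nDynamic mappings tell Atlas Search to automatically index every field it encounters, which is convenient while experimenting; for production workloads, you would typically map only the fields you actually intend to search.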
While our index is built, we can go ahead and implement our backend functionality. The implementation will look like this:\n\n``` javascript\n// Search for a movie\nserver.route({\n method: 'GET',\n path: '/search',\n handler: async(req, h) => {\n const query = req.query.term;\n\n const results = await req.mongo.db.collection(\"movies\").aggregate(\n {\n $searchBeta: {\n \"search\": {\n \"query\": query,\n \"path\":\"title\"\n }\n }\n },\n {\n $project : {title:1, plot: 1}\n },\n { \n $limit: 10\n }\n ]).toArray()\n\n return results;\n }\n});\n```\n\nOur `search` route has us using the extremely powerful MongoDB aggregation pipeline. In the first stage of the pipeline, we are using the `$searchBeta` attribute and passing along our search term. In the next stage of the pipeline, we run a `$project` to only return specific fields, in our case the `title` and `plot` of the movie. Finally, we limit our search results to ten items and convert the cursor to an array and send it to the browser. Let's try to run a search query against our movies collection. Try search for `localhost:3000/search?term=Star+Wars`. Your results will look like this:\n\n![Atlas Search Results\n\nMongoDB Atlas Search is very powerful and provides all the tools to add superb search functionality for your data without relying on external APIs. Check out the documentation to learn more about how to best leverage it in your applications.\n\n## Putting It All Together\n\nIn this tutorial, I showed you how to create a RESTful API with Hapi.js and MongoDB. We scratched the surface of the capabilities of both, but I hope it was a good introduction and gives you an idea of what's possible. Hapi.js has an extensive plug-in system that will allow you to bring almost any functionality to your backend with just a few lines of code. Integrating MongoDB into Hapi.js using the `hapi-mongo` plugin allows you to focus on building features and functionality rather than figuring out best practices and how to glue everything together. Speaking of glue, Hapi.js has a package called glue that makes it easy to break your server up into multiple components, we didn't need to do that in our tutorial, but it's a great next step for you to explore.\n\n>If you'd like to get the code for this tutorial, you can find it here. If you want to give Atlas Search a try, sign up for MongoDB Atlas for free.\n\nHappy, er.. Hapi coding!", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB", "Node.js"], "pageDescription": "Learn how to build an API with HapiJS and MongoDB.", "contentType": "Tutorial"}, "title": "Build a RESTful API with HapiJS and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/python-change-streams", "action": "created", "body": "# MongoDB Change Streams with Python\n\n## Introduction\n\n \n\nChange streams allow you to listen to changes that occur in your MongoDB database. On MongoDB 3.6 or above, this functionality allows you to build applications that can immediately respond to real time data changes. In this tutorial, we'll show you how to use change streams with Python. In particular you will:\n\n- Learn about change streams\n- Create a program that listens to inserts\n- Change the program to listen to other event types\n- Change the program to listen to specific notifications\n\nTo follow along, you can create a test environment using the steps below. 
This is optional but highly encouraged as it will allow you to test usage of the change stream functionality with the examples provided. You will be given all commands, but some familiarity with MongoDB is needed.\n\n## Learn about Change Streams\n\nThe ability to listen to specific changes in the data allows an application to be much faster in responding to change. If a user of your system updates their information, the system can listen to and propagate these changes right away. For example, this could mean users no longer have to click refresh to see when changes have been applied. Or if a user's changes in one system need approval by someone, another system could listen to changes and send notifications requesting approvals instantaneously.\n\nBefore change streams, applications that needed to know about the addition of new data in real-time had to continuously poll data or rely on other update mechanisms. One common, if complex, technique for monitoring changes was tailing MongoDB's Operation Log (Oplog). The Oplog is part of the replication system of MongoDB and as such already tracks modifications to the database but is not easy to use for business logic. Change streams are built on top of the Oplog but they provide a native API that improves efficiency and usability. Note that you cannot open a change stream against a collection in a standalone MongoDB server because the feature relies on the Oplog which is only used on replica sets.\n\nWhen registering a change stream you need to specify the collection and what types of changes you want to listen to. You can do this by using the `$match` and a few other aggregation pipeline stages which limit the amount of data you will receive. If your database enforces authentication and authorization, change streams provide the same access control as for normal queries.\n\n## Test the Change Stream Features\n\nThe best way to understand how change streams operate is to work with them. In the next section, we'll show you how to set up a server and scripts. After completing the setup, you will get two scripts: One Python script will listen to notifications from the change stream and print them. The other script will mimic an application by performing insert, update, replace, and delete operations so that you can see the notifications in the output of the first script. You will also learn how to limit the notifications to the ones you are interested in.\n\n## Set up PyMongo\n\nTo get started, set up a virtual environment using Virtualenv. Virtualenv allows you to isolate dependencies of your project from other projects. Create a directory for this project and copy the following into a file called requirements.txt in your new directory:\n\n``` none\npymongo==3.8.0\ndnspython\n```\n\nTo create and activate your virtual environment, run the following commands in your terminal:\n\n``` bash\nvirtualenv venv # sets up the environment\nsource venv/bin/activate # activates the environment\npip3 install -r requirements.txt # installs our dependencies\n```\n\n>For ease of reading, we assume you are running Python 3 with the python3 and pip3 commands. If you are running Python 2.7, substitute python and pip for those commands.\n\n## Set up your Cluster\n\nWe will go through two options for setting up a test MongoDB Replica Set for us to connect to. 
If you have MongoDB 3.6 or later installed and are comfortable making changes to your local setup choose this option and follow the guide in the appendix and skip to the next section.\n\nIf you do not have MongoDB installed, would prefer not to mess with your local setup or if you are fairly new to MongoDB then we recommend that you set up a MongoDB Atlas cluster; there's a free tier which gives you a three node replica set which is ideal for experimenting and learning with. Simply follow these steps until you get the URI connection string in step 8. Take that URI connection string, insert the password where it says ``, and add it to your environment by running\n\n``` bash\nexport CHANGE_STREAM_DB=\"mongodb+srv://user:@example-xkfzv.mongodb.net/test?retryWrites=true\"\n```\n\nin your terminal. The string you use as a value will be different.\n\n## Listen to Inserts from an Application\n\nBefore continuing, quickly test your setup. Create a file `test.py` with the following contents:\n\n``` python\nimport os\nimport pymongo\n\nclient = pymongo.MongoClient(os.environ'CHANGE_STREAM_DB'])\nprint(client.changestream.collection.insert_one({\"hello\": \"world\"}).inserted_id)\n```\n\nWhen you run `python3 test.py` you should see an `ObjectId` being printed.\n\nNow that you've confirmed your setup, let's create the small program that will listen to changes in the database using a change stream. Create a different file `change_streams.py` with the following content:\n\n``` python\nimport os\nimport pymongo\nfrom bson.json_util import dumps\n\nclient = pymongo.MongoClient(os.environ['CHANGE_STREAM_DB'])\nchange_stream = client.changestream.collection.watch()\nfor change in change_stream:\n print(dumps(change))\n print('') # for readability only\n```\n\nGo ahead and run `python3 change_streams.py`, you will notice that the program doesn't print anything and just waits for operations to happen on the specified collection. While keeping the `change_streams` program running, open up another terminal window and run `python3 test.py`. You will have to run the same export command you ran in the *Set up your Cluster* section to add the environment variable to the new terminal window.\n\nChecking the terminal window that is running the `change_streams` program, you will see that the insert operation was logged. It should look like the output below but with a different `ObjectId` and with a different value for `$binary`.\n\n``` json\n\u279c python3 change_streams.py\n{\"_id\": {\"_data\": {\"$binary\": \"glsIjGUAAAABRmRfaWQAZFsIjGXiJuWPOIv2PgBaEAQIaEd7r8VFkazelcuRgfgeBA==\", \"$type\": \"00\"}}, \"operationType\": \"insert\", \"fullDocument\": {\"_id\": {\"$oid\": \"5b088c65e226e58f388bf63e\"}, \"hello\": \"world\"}, \"ns\": {\"db\": \"changestream\", \"coll\": \"collection\"}, \"documentKey\": {\"_id\": {\"$oid\": \"5b088c65e226e58f388bf63e\"}}}\n```\n\n## Listen to Different Event Types\n\nYou can listen to four types of document-based events:\n\n- Insert\n- Update\n- Replace\n- Delete\n\nDepending on the type of event the document structure you will receive will differ slightly but you will always receive the following:\n\n``` json\n{\n _id: ,\n operationType: \"\",\n ns: {db: \"\", coll: \"\"},\n documentKey: { }\n}\n```\n\nIn the case of inserts and replace operations the `fullDocument` is provided by default as well. In the case of update operations the extra field provided is `updateDescription` and it gives you the document delta (i.e. 
the difference between the document before and after the operation). By default update operations only include the delta between the document before and after the operation. To get the full document with each update you can [pass in \"updateLookup\" to the full document option. If an update operation ends up changing multiple documents, there will be one notification for each updated document. This transformation occurs to ensure that statements in the oplog are idempotent.\n\nThere is one further type of event that can be received which is the invalidate event. This tells the driver that the change stream is no longer valid. The driver will then close the stream. Potential reasons for this include the collection being dropped or renamed.\n\nTo see this in action update your `test.py` and run it while also running the `change_stream` program:\n\n``` python\nimport os\nimport pymongo\n\nclient = pymongo.MongoClient(os.environ'CHANGE_STREAM_DB'])\nclient.changestream.collection.insert_one({\"_id\": 1, \"hello\": \"world\"})\nclient.changestream.collection.update_one({\"_id\": 1}, {\"$set\": {\"hello\": \"mars\"}})\nclient.changestream.collection.replace_one({\"_id\": 1} , {\"bye\": \"world\"})\nclient.changestream.collection.delete_one({\"_id\": 1})\nclient.changestream.collection.drop()\n```\n\nThe output should be similar to:\n\n``` json\n\u279c python3 change_streams.py\n{\"fullDocument\": {\"_id\": 1, \"hello\": \"world\"}, \"documentKey\": {\"_id\": 1}, \"_id\": {\"_data\": {\"$binary\": \"glsIjuEAAAABRh5faWQAKwIAWhAECGhHe6/FRZGs3pXLkYH4HgQ=\", \"$type\": \"00\"}}, \"ns\": {\"coll\": \"collection\", \"db\": \"changestream\"}, \"operationType\": \"insert\"}\n\n{\"documentKey\": {\"_id\": 1}, \"_id\": {\"_data\": {\"$binary\": \"glsIjuEAAAACRh5faWQAKwIAWhAECGhHe6/FRZGs3pXLkYH4HgQ=\", \"$type\": \"00\"}}, \"updateDescription\": {\"removedFields\": [], \"updatedFields\": {\"hello\": \"mars\"}}, \"ns\": {\"coll\": \"collection\", \"db\": \"changestream\"}, \"operationType\": \"update\"}\n\n{\"fullDocument\": {\"bye\": \"world\", \"_id\": 1}, \"documentKey\": {\"_id\": 1}, \"_id\": {\"_data\": {\"$binary\": \"glsIjuEAAAADRh5faWQAKwIAWhAECGhHe6/FRZGs3pXLkYH4HgQ=\", \"$type\": \"00\"}}, \"ns\": {\"coll\": \"collection\", \"db\": \"changestream\"}, \"operationType\": \"replace\"}\n\n{\"documentKey\": {\"_id\": 1}, \"_id\": {\"_data\": {\"$binary\": \"glsIjuEAAAAERh5faWQAKwIAWhAECGhHe6/FRZGs3pXLkYH4HgQ=\", \"$type\": \"00\"}}, \"ns\": {\"coll\": \"collection\", \"db\": \"changestream\"}, \"operationType\": \"delete\"}\n\n{\"_id\": {\"_data\": {\"$binary\": \"glsIjuEAAAAFFFoQBAhoR3uvxUWRrN6Vy5GB+B4E\", \"$type\": \"00\"}}, \"operationType\": \"invalidate\"}\n```\n\n## Listen to Specific Notifications\n\nSo far, your program has been listening to *all* operations. In a real application this would be overwhelming and often unnecessary as each part of your application will generally want to listen only to specific operations. To limit the amount of operations, you can use certain aggregation stages when setting up the stream. These stages are: `$match`, `$project`, `$addfields`, `$replaceRoot`, and `$redact`. All other aggregation stages are not available.\n\nYou can test this functionality by changing your `change_stream.py` file with the code below and running the `test.py` script. 
The output should now only contain insert notifications.\n\n``` python\nimport os\nimport pymongo\nfrom bson.json_util import dumps\n\nclient = pymongo.MongoClient(os.environ['CHANGE_STREAM_DB'])\nchange_stream = client.changestream.collection.watch([{\n '$match': {\n 'operationType': { '$in': ['insert'] }\n }\n}])\n\nfor change in change_stream:\n print(dumps(change))\n print('')\n```\n\nYou can also *match* on document fields and thus limit the stream to certain `DocumentIds` or to documents that have a certain document field, etc.\n\n## Resume your Change Streams\n\nNo matter how good your network, there will be situations when connections fail. To make sure that no changes are missed in such cases, you need to add some code for storing and handling `resumeToken`s. Each event contains a `resumeToken`, for example:\n\n``` json\n\"_id\": {\"_data\": {\"$binary\": \"glsIj84AAAACRh5faWQAKwIAWhAEvyfcy4djS8CUKRZ8tvWuOgQ=\", \"$type\": \"00\"}}\n```\n\nWhen a failure occurs, the driver should automatically make one attempt to reconnect. The application has to handle further retries as needed. This means that the application should take care of always persisting the `resumeToken`.\n\nTo retry connecting, the `resumeToken` has to be passed into the optional field resumeAfter when creating the new change stream. This does not guarantee that we can always resume the change stream. MongoDB's oplog is a capped collection that keeps a rolling record of the most recent operations. Resuming a change stream is only possible if the oplog has not rolled yet (that is if the changes we are interested in are still in the oplog).\n\n## Caveats\n\n- **Change Streams in Production**: If you plan to use change streams in production, please read [MongoDB's recommendations.\n- **Ordering and Rollbacks**: MongoDB guarantees that the received events will be in the order they occurred (thus providing a total ordering of changes across shards if you use shards). On top of that only durable, i.e. majority committed changes will be sent to listeners. This means that listeners do not have to consider rollbacks in their applications.\n- **Reading from Secondaries**: Change streams can be opened against any data-bearing node in a cluster regardless whether it's primary or secondary. However, it is generally not recommended to read from secondaries as failovers can lead to increased load and failures in this setup.\n- **Updates with the fullDocument Option**: The fullDocument option for Update Operations does not guarantee the returned document does not include further changes. In contrast to the document deltas that are guaranteed to be sent in order with update notifications, there is no guarantee that the *fullDocument* returned represents the document as it was exactly after the operation. `updateLookup` will poll the current version of the document. If changes happen quickly it is possible that the document was changed before the `updateLookup` finished. This means that the fullDocument might not represent the document at the time of the event thus potentially giving the impression events took place in a different order.\n- **Impact on Performance**: Up to 1,000 concurrent change streams to each node are supported with negligible impact on the overall performance. However, on sharded clusters, the guarantee of total ordering could cause response times of the change stream to be slower.\n- **WiredTiger**: Change streams are a MongoDB 3.6 and later feature. 
It is not available for older versions, MMAPv1 storage or pre pv1 replications.\n\n## Learn More\n\nTo read more about this check out the Change Streams documentation.\n\nIf you're interested in more MongoDB tips, follow us on Twitter @mongodb.\n\n## Appendix\n\n### How to set up a Cluster in the Cloud\n\n>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.\n\n### How to set up a Local Cluster\n\nBefore setting up the instances please confirm that you are running version 3.6 or later of the MongoDB Server (mongod) and the MongoDB shell (mongo). You can do this by running `mongod --version` and `mongo --version`. If either of these do not satisfy our requirements, please upgrade to a more recent version before continuing.\n\nIn the following you will set up a single-node replica-set named `test-change-streams`. For a production replica-set, at least three nodes are recommended.\n\n1. Run the following commands in your terminal to create a directory for the database files and start the mongod process on port `27017`:\n\n ``` bash\n mkdir -p /data/test-change-streams\n mongod --replSet test-change-streams --logpath \"mongodb.log\" --dbpath /data/test-change-streams --port 27017 --fork\n ```\n\n2. Now open up a mongo shell on port `27017`:\n\n ``` bash\n mongo --port 27017\n ```\n\n3. Within the mongo shell you just opened, configure your replica set:\n\n ``` javascript\n config = {\n _id: \"test-change-streams\",\n members: { _id : 0, host : \"localhost:27017\"}]\n };\n rs.initiate(config);\n ```\n\n4. Still within the mongo shell, you can now check that your replica set is working by running: `rs.status();`. The output should indicate that your node has become primary. It may take a few seconds to show this so if you are not seeing this immediately, run the command again after a few seconds.\n\n5. Run\n\n ``` bash\n export CHANGE_STREAM_DB=mongodb://localhost:27017\n ```\n\n in your shell and [continue.", "format": "md", "metadata": {"tags": ["Python", "MongoDB"], "pageDescription": "Change streams allow you to listen to changes that occur in your MongoDB database.", "contentType": "Quickstart"}, "title": "MongoDB Change Streams with Python", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/introduction-aggregation-framework", "action": "created", "body": "# Introduction to the MongoDB Aggregation Framework\n\n \n\nOne of the difficulties when storing any data is knowing how it will be accessed in the future. What reports need to be run on it? What information is \"hidden\" in there that will allow for meaningful insights for your business? After spending the time to design your data schema in an appropriate fashion for your application, one needs to be able to retrieve it. In MongoDB, there are two basic ways that data retrieval can be done: through queries with the find() command, and through analytics using the aggregation framework and the aggregate() command.\n\n`find()` allows for the querying of data based on a condition. One can filter results, do basic document transformations, sort the documents, limit the document result set, etc. The `aggregate()` command opens the door to a whole new world with the aggregation framework. 
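\n\nTo make that contrast concrete, here is a minimal shell sketch (illustrative only, using the `routes` collection from the Atlas sample data that the worked example later in this post uses):\n\n``` javascript\n// find(): filter and shape documents with a single query\ndb.routes.find({ \"stops\": 0 }, { \"airline.name\": 1 })\n\n// aggregate(): chain stages into a pipeline that transforms the documents\ndb.routes.aggregate([\n    { $match: { \"stops\": 0 } },\n    { $group: { _id: \"$airline.name\", count: { $sum: 1 } } },\n    { $sort: { count: -1 } }\n])\n```\n\n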
In this series of posts, I'll take a look at some of the reasons why using the aggregation framework is so powerful, and how to harness that power.\n\n## Why Aggregate with MongoDB?\n\nA frequently asked question is why do aggregation inside MongoDB at all? From the MongoDB documentation:\n\n>\n>\n>Aggregation operations process data records and return computed results. Aggregation operations group values from multiple documents together, and can perform a variety of operations on the grouped data to return a single result.\n>\n>\n\nBy using the built-in aggregation operators available in MongoDB, we are able to do analytics on a cluster of servers we're already using without having to move the data to another platform, like Apache Spark or Hadoop. While those, and similar, platforms are fast, the data transfer from MongoDB to them can be slow and potentially expensive. By using the aggregation framework the work is done inside MongoDB and then the final results can be sent to the application typically resulting in a smaller amount of data being moved around. It also allows for the querying of the **LIVE** version of the data and not an older copy of data from a batch.\n\nAggregation in MongoDB allows for the transforming of data and results in a more powerful fashion than from using the `find()` command. Through the use of multiple stages and expressions, you are able to build a \"pipeline\" of operations on your data to perform analytic operations. What do I mean by a \"pipeline\"? The aggregation framework is conceptually similar to the `*nix` command line pipe, `|`. In the `*nix` command line pipeline, a pipe transfers the standard output to some other destination. The output of one command is sent to another command for further processing.\n\nIn the aggregation framework, we think of stages instead of commands. And the stage \"output\" is documents. Documents go into a stage, some work is done, and documents come out. From there they can move onto another stage or provide output.\n\n## Aggregation Stages\n\nAt the time of this writing, there are twenty-eight different aggregation stages available. These different stages provide the ability to do a wide variety of tasks. For example, we can build an aggregation pipeline that *matches* a set of documents based on a set of criteria, *groups* those documents together, *sorts* them, then returns that result set to us.\n\nOr perhaps our pipeline is more complicated and the document flows through the `$match`, `$unwind`, `$group`, `$sort`, `$limit`, `$project`, and finally a `$skip` stage.\n\nThis can be confusing and some of these concepts are worth repeating. Therefore, let's break this down a bit further:\n\n- A pipeline starts with documents\n- These documents come from a collection, a view, or a specially designed stage\n- In each stage, documents enter, work is done, and documents exit\n- The stages themselves are defined using the document syntax\n\nLet's take a look at an example pipeline. Our documents are from the Sample Data that's available in MongoDB Atlas and the `routes` collection in the `sample_training` database. 
Here's a sample document:\n\n``` json\n{\n\"_id\":{\n \"$oid\":\"56e9b39b732b6122f877fa31\"\n},\n\"airline\":{\n \"id\":{\n \"$numberInt\":\"410\"\n },\n \"name\":\"Aerocondor\"\n ,\"alias\":\"2B\"\n ,\"iata\":\"ARD\"\n},\n\"src_airport\":\"CEK\",\n\"dst_airport\":\"KZN\",\n\"Codeshare\":\"\",\n\"stops\":{\n \"$numberInt\":\"0\"\n},\n\"airplane\":\"CR2\"\n}\n```\n\n>\n>\n>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.\n>\n>\n\nFor this example query, let's find the top three airlines that offer the most direct flights out of the airport in Portland, Oregon, USA (PDX). To start with, we'll do a `$match` stage so that we can concentrate on doing work only on those documents that meet a base of conditions. In this case, we'll look for documents with a `src_airport`, or source airport, of PDX and that are direct flights, i.e. that have zero stops.\n\n``` javascript\n{\n $match: {\n \"src_airport\": \"PDX\",\n \"stops\": 0\n }\n}\n```\n\nThat reduces the number of documents in our pipeline down from 66,985 to 113. Next, we'll group by the airline name and count the number of flights:\n\n``` javascript\n{\n $group: {\n _id: {\n \"airline name\": \"$airline.name\"\n },\n count: {\n $sum: 1\n }\n }\n}\n```\n\nWith the addition of the `$group` stage, we're down to 16 documents. Let's sort those with a `$sort` stage and sort in descending order:\n\n``` javascript\n{\n $sort: {\n count: -1\n}\n```\n\nThen we can add a `$limit` stage to just have the top three airlines that are servicing Portland, Oregon:\n\n``` javascript\n{\n $limit: 3\n}\n```\n\nAfter putting the documents in the `sample_training.routes` collection through this aggregation pipeline, our results show us that the top three airlines offering non-stop flights departing from PDX are Alaska, American, and United Airlines with 39, 17, and 13 flights, respectively.\n\nHow does this look in code? It's fairly straightforward with using the `db.aggregate()` function. For example, in Python you would do something like:\n\n``` python\nfrom pymongo import MongoClient\n\n# Requires the PyMongo package.\n# The dnspython package is also required to use a mongodb+src URI string\n# https://api.mongodb.com/python/current\n\nclient = MongoClient('YOUR-ATLAS-CONNECTION-STRING')\nresult = client['sample_training']['routes'].aggregate([\n {\n '$match': {\n 'src_airport': 'PDX',\n 'stops': 0\n }\n }, {\n '$group': {\n '_id': {\n 'airline name': '$airline.name'\n },\n 'count': {\n '$sum': 1\n }\n }\n }, {\n '$sort': {\n 'count': -1\n }\n }, {\n '$limit': 3\n }\n])\n```\n\nThe aggregation code is pretty similar in other languages as well.\n\n## Wrap Up\n\nThe MongoDB aggregation framework is an extremely powerful set of tools. The processing is done on the server itself which results in less data being sent over the network. In the example used here, instead of pulling **all** of the documents into an application and processing them in the application, the aggregation framework allows for only the three documents we wanted from our query to be sent back to the application.\n\nThis was just a brief introduction to some of the operators available. Over the course of this series, I'll take a closer look at some of the most popular aggregation framework operators as well as some interesting, but less used ones. 
I'll also take a look at performance considerations of using the aggregation framework.\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn about MongoDB's aggregation framework and aggregation operators.", "contentType": "Quickstart"}, "title": "Introduction to the MongoDB Aggregation Framework", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/bson-data-types-decimal128", "action": "created", "body": "# Quick Start: BSON Data Types - Decimal128\n\n \n\nThink back to when you were first introduced to the concept of decimals in numerical calculations. Doing math problems along the lines of 3.231 / 1.28 caused problems when starting out because 1.28 doesn't go into 3.231 evenly. This causes a long string of numbers to be created to provide a more precise answer. In programming languages, we must choose which number format is correct depending on the amount of precision we need. When one needs high precision when working with BSON data types, the `decimal128` is the one to use.\n\nAs the name suggests, decimal128 provides 128 bits of decimal representation for storing really big (or really small) numbers when rounding decimals exactly is important. Decimal128 supports 34 decimal digits of precision, or significand along with an exponent range of -6143 to +6144. The significand is not normalized in the decimal128 standard allowing for multiple possible representations: 10 x 10^-1 = 1 x 10^0 = .1 x 10^1 = .01 x 10^2 and so on. Having the ability to store maximum and minimum values in the order of 10^6144 and 10^-6143, respectively, allows for a lot of precision.\n\n## Why & Where to Use\n\nSometimes when doing mathematical calculations in a programmatic way, results are unexpected. For example in Node.js:\n\n``` bash\n> 0.1\n0.1\n> 0.2\n0.2\n> 0.1 * 0.2\n0.020000000000000004\n> 0.1 + 0.1\n0.010000000000000002\n```\n\nThis issue is not unique to Node.js, in Java:\n\n``` java\nclass Main {\n public static void main(String] args) {\n System.out.println(\"0.1 * 0.2:\");\n System.out.println(0.1 * 0.2);\n }\n}\n```\n\nProduces an output of:\n\n``` bash\n0.1 * 0.2:\n0.020000000000000004\n```\n\nThe same computations in Python, Ruby, Rust, and others produce the same results. What's going on here? Are these languages just bad at math? Not really, binary floating-point numbers just aren't great at representing base 10 values. For example, the `0.1` used in the above examples is represented in binary as `0.0001100110011001101`.\n\nFor many situations, this isn't a huge issue. However, in monetary applications precision is very important. Who remembers the [half-cent issue from Superman III? When precision and accuracy are important for computations, decimal128 should be the data type of choice.\n\n## How to Use\n\nIn MongoDB, storing data in decimal128 format is relatively straight forward with the NumberDecimal() constructor:\n\n``` bash\nNumberDecimal(\"9823.1297\")\n```\n\nPassing in the decimal value as a string, the value gets stored in the database as:\n\n``` bash\nNumberDecimal(\"9823.1297\")\n```\n\nIf values are passed in as `double` values:\n\n``` bash\nNumberDecimal(1234.99999999999)\n```\n\nLoss of precision can occur in the database:\n\n``` bash\nNumberDecimal(\"1234.50000000000\")\n```\n\nAnother consideration, beyond simply the usage in MongoDB, is the usage and support your programming has for decimal128. 
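\n\nFor instance, with PyMongo you can round-trip an exact value through the database using the `bson.decimal128` helper (a minimal sketch; the connection URI is a placeholder you would adjust for your own deployment):\n\n``` python\nfrom decimal import Decimal\n\nfrom bson.decimal128 import Decimal128\nfrom pymongo import MongoClient\n\n# Placeholder URI - point this at your own local or Atlas deployment.\nclient = MongoClient(\"mongodb://localhost:27017\")\nprices = client.test.prices\n\n# Stored as decimal128, so no binary floating-point rounding occurs.\nprices.insert_one({\"item\": \"cake\", \"price\": Decimal128(\"9823.1297\")})\n\ndoc = prices.find_one({\"item\": \"cake\"})\nprint(doc[\"price\"].to_decimal() + Decimal(\"0.0003\"))  # exactly 9823.1300\n```\n\n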
Many languages don't natively support this feature and will require a plugin or additional package to get the functionality. Some examples...\n\nPython: The `decimal.Decimal` module can be used for floating-point arithmetic.\n\nJava: The Java BigDecimal class provides support for decimal128 numbers.\n\nNode.js: There are several packages that provide support, such as js-big-decimal or node.js bigdecimal available on npm.\n\n## Wrap Up\n\n>Get started exploring BSON types, like decimal128, with MongoDB Atlas today!\n\nThe `decimal128` field came about in August 2009 as part of the IEEE 754-2008 revision of floating points. MongoDB 3.4 is when support for decimal128 first appeared and to use the `decimal` data type with MongoDB, you'll want to make sure you use a driver version that supports this great feature. Decimal128 is great for huge (or very tiny) numbers and for when precision in those numbers is important.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Working with decimal numbers can be a challenge. The Decimal128 BSON data type allows for high precision options when working with numbers.", "contentType": "Quickstart"}, "title": "Quick Start: BSON Data Types - Decimal128", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/getting-started-kotlin-driver", "action": "created", "body": "# Getting Started with the MongoDB Kotlin Driver\n\n> This is an introductory article on how to build an application in Kotlin using MongoDB Atlas and\n> the MongoDB Kotlin driver, the latest addition to our list of official drivers.\n> Together, we'll build a CRUD application that covers the basics of how to use MongoDB as a database, while leveraging the benefits of Kotlin as a\n> programming language, like data classes, coroutines, and flow.\n\n## Prerequisites\n\nThis is a getting-started article. Therefore, not much is needed as a prerequisite, but familiarity with Kotlin as a programming language will be\nhelpful.\n\nAlso, we need an Atlas account, which is free forever. Create an account if you haven't got one. This\nprovides MongoDB as a cloud database and much more. Later in this tutorial, we'll use this account to create a new cluster, load a dataset, and\neventually query against it.\n\nIn general, MongoDB is an open-source, cross-platform, and distributed document database that allows building apps with flexible schema. In case you\nare not familiar with it or would like a quick recap, I recommend exploring\nthe MongoDB Jumpstart series to get familiar with MongoDB and\nits various services in under 10 minutes. Or if you prefer to read, then you can follow\nour guide.\n\nAnd last, to aid our development activities, we will be using Jetbrains IntelliJ IDEA (Community Edition),\nwhich has default support for the Kotlin language.\n\n## MongoDB Kotlin driver vs MongoDB Realm Kotlin SDK\n\nBefore we start, I would like to touch base on Realm Kotlin SDK, one of the SDKs used to create\nclient-side mobile applications using the MongoDB ecosystem. It shouldn't be confused with\nthe MongoDB Kotlin driver for server-side programming.\nThe MongoDB Kotlin driver, a language driver, enables you to seamlessly interact\nwith Atlas, a cloud database, with the benefits of the Kotlin language paradigm. It's appropriate to create\nbackend apps, scripts, etc.\n\nTo make learning more meaningful and practical, we'll be building a CRUD application. 
Feel free to check out our\nGithub repo if you would like to follow along together. So, without further ado,\nlet's get started.\n\n## Create a project\n\nTo create the project, we can use the project wizard, which can be found under the `File` menu options. Then, select `New`, followed by `Project`.\nThis will open the `New Project` screen, as shown below, then update the project and language to Kotlin.\n\nAfter the initial Gradle sync, our project is ready to run. So, let's give it a try using the run icon in the menu bar, or simply press CTRL + R on\nMac. Currently, our project won't do much apart from printing `Hello World!` and arguments supplied, but the `BUILD SUCCESSFUL` message in the run\nconsole is what we're looking for, which tells us that our project setup is complete.\n\nNow, the next step is to add the Kotlin driver to our project, which allows us to interact\nwith MongoDB Atlas.\n\n## Adding the MongoDB Kotlin driver\n\nAdding the driver to the project is simple and straightforward. Just update the `dependencies` block with the Kotlin driver dependency in the build\nfile \u2014 i.e., `build.gradle`.\n\n```groovy\ndependencies {\n // Kotlin coroutine dependency\n implementation(\"org.jetbrains.kotlinx:kotlinx-coroutines-core:1.6.4\")\n \n // MongoDB Kotlin driver dependency\n implementation(\"org.mongodb:mongodb-driver-kotlin-coroutine:4.10.1\")\n}\n```\n\nAnd now, we are ready to connect with MongoDB Atlas using the Kotlin driver.\n\n## Connecting to the database\n\nTo connect with the database, we first need the `Connection URI` that can be found by pressing `connect to cluster` in\nour Atlas account, as shown below.\n\nFor more details, you can also refer to our documentation.\n\nWith the connection URI available, the next step is to create a Kotlin file. `Setup.kt` is where we write the code for connecting\nto MongoDB Atlas.\n\nConnection with our database can be split into two steps. First, we create a MongoClient instance using `Connection URI`.\n\n```kotlin\nval connectionString = \"mongodb+srv://:@cluster0.sq3aiau.mongodb.net/?retryWrites=true&w=majority\"\nval client = MongoClient.create(connectionString = connectString)\n```\n\nAnd second, use client to connect with the database, `sample_restaurants`, which is a sample dataset for\nrestaurants. A sample dataset is a great way to explore the platform and build a more realistic POC\nto validate your ideas. To learn how to seed your first Atlas database with sample\ndata, visit the docs.\n\n```kotlin\nval databaseName = \"sample_restaurants\"\nval db: MongoDatabase = client.getDatabase(databaseName = databaseName)\n```\n\nHardcoding `connectionString` isn't a good approach and can lead to security risks or an inability to provide role-based access. To avoid such issues\nand follow the best practices, we will be using environment variables. Other common approaches are the use of Vault, build configuration variables,\nand CI/CD environment variables.\n\nTo add environment variables, use `Modify run configuration`, which can be found by right-clicking on the file.\n\nTogether with code to access the environment variable, our final code looks like this.\n\n```kotlin\nsuspend fun setupConnection(\n databaseName: String = \"sample_restaurants\",\n connectionEnvVariable: String = \"MONGODB_URI\"\n): MongoDatabase? 
{\n val connectString = if (System.getenv(connectionEnvVariable) != null) {\n System.getenv(connectionEnvVariable)\n } else {\n \"mongodb+srv://:@cluster0.sq3aiau.mongodb.net/?retryWrites=true&w=majority\"\n }\n\n val client = MongoClient.create(connectionString = connectString)\n val database = client.getDatabase(databaseName = databaseName)\n\n return try {\n // Send a ping to confirm a successful connection\n val command = Document(\"ping\", BsonInt64(1))\n database.runCommand(command)\n println(\"Pinged your deployment. You successfully connected to MongoDB!\")\n database\n } catch (me: MongoException) {\n System.err.println(me)\n null\n }\n}\n```\n\n> In the code snippet above, we still have the ability to use a hardcoded string. This is only done for demo purposes, allowing you to use a\n> connection URI directly for ease and to run this via any online editor. But it is strongly recommended to avoid hardcoding a connection URI.\n\nWith the `setupConnection` function ready, let's test it and query the database for the collection count and name.\n\n```kotlin\nsuspend fun listAllCollection(database: MongoDatabase) {\n\n val count = database.listCollectionNames().count()\n println(\"Collection count $count\")\n\n print(\"Collection in this database are -----------> \")\n database.listCollectionNames().collect { print(\" $it\") }\n}\n```\n\nUpon running that code, our output looks like this:\n\nBy now, you may have noticed that we are using the `suspend` keyword with `listAllCollection()`. `listCollectionNames()` is an asynchronous function\nas it interacts with the database and therefore would ideally run on a different thread. And since the MongoDB Kotlin driver\nsupports Coroutines, the\nnative Kotlin asynchronous language paradigm, we can benefit from it by using `suspend`\nfunctions.\n\nSimilarly, to drop collections, we use the `suspend` function.\n\n```kotlin\nsuspend fun dropCollection(database: MongoDatabase) {\n database.getCollection(collectionName = \"restaurants\").drop()\n}\n```\n\nWith this complete, we are all set to start working on our CRUD application. So to start with, we need to create a `data` class that represents\nrestaurant information that our app saves into the database.\n\n```kotlin\ndata class Restaurant(\n @BsonId\n val id: ObjectId,\n val address: Address,\n val borough: String,\n val cuisine: String,\n val grades: List,\n val name: String,\n @BsonProperty(\"restaurant_id\")\n val restaurantId: String\n)\n\ndata class Address(\n val building: String,\n val street: String,\n val zipcode: String,\n val coord: List\n)\n\ndata class Grade(\n val date: LocalDateTime,\n val grade: String,\n val score: Int\n)\n```\n\nIn the above code snippet, we used two annotations:\n\n1. `@BsonId`, which represents the unique identity or `_id` of a document.\n2. `@BsonProperty`, which creates an alias for keys in the document \u2014 for example, `restaurantId` represents `restaurant_id`.\n\n> Note: Our `Restaurant` data class here is an exact replica of a restaurant document in the sample dataset, but a few fields can be skipped or marked\n> as optional \u2014 e.g., `grades` and `address` \u2014 while maintaining the ability to perform CRUD operations. We are able to do so, as MongoDB\u2019s document\n> model allows flexible schema for our data.\n\n## Create\n\nWith all the heavy lifting done (10 lines of code for connecting), adding a new document to the database is really simple and can be done with one\nline of code using `insertOne`. 
So, let's create a new file called `Create.kt`, which will contain all the create operations.\n\n```kotlin\nsuspend fun addItem(database: MongoDatabase) {\n\n val collection = database.getCollection(collectionName = \"restaurants\")\n val item = Restaurant(\n id = ObjectId(),\n address = Address(\n building = \"Building\", street = \"street\", zipcode = \"zipcode\", coord =\n listOf(Random.nextDouble(), Random.nextDouble())\n ),\n borough = \"borough\",\n cuisine = \"cuisine\",\n grades = listOf(\n Grade(\n date = LocalDateTime.now(),\n grade = \"A\",\n score = Random.nextInt()\n )\n ),\n name = \"name\",\n restaurantId = \"restaurantId\"\n )\n\n collection.insertOne(item).also {\n println(\"Item added with id - ${it.insertedId}\")\n }\n}\n```\n\nWhen we run it, the output on the console is:\n\n> Again, don't forget to add an environment variable again for this file, if you had trouble while running it.\n\nIf we want to add multiple documents to the collection, we can use `insertMany`, which is recommended over running `insertOne` in a loop.\n\n```kotlin\nsuspend fun addItems(database: MongoDatabase) {\n val collection = database.getCollection(collectionName = \"restaurants\")\n val newRestaurants = collection.find().first().run {\n listOf(\n this.copy(\n id = ObjectId(), name = \"Insert Many Restaurant first\", restaurantId = Random\n .nextInt().toString()\n ),\n this.copy(\n id = ObjectId(), name = \"Insert Many Restaurant second\", restaurantId = Random\n .nextInt().toString()\n )\n )\n }\n\n collection.insertMany(newRestaurants).also {\n println(\"Total items added ${it.insertedIds.size}\")\n }\n}\n\n```\n\nWith these outputs on the console, we can say that the data has been added successfully.\n\nBut what if we want to see the object in the database? One way is with a read operation, which we would do shortly or\nuse MongoDB Compass to view the information.\n\nMongoDB Compass is a free, interactive GUI tool for querying, optimizing, and analyzing the MongoDB data\nfrom your system. To get started, download the tool and use the `connectionString` to connect with the\ndatabase.\n\n## Read\n\nTo read the information from the database, we can use the `find` operator. Let's begin by reading any document.\n\n```kotlin\nval collection = database.getCollection(collectionName = \"restaurants\")\ncollection.find().limit(1).collect {\n println(it)\n}\n```\n\nThe `find` operator returns a list of results, but since we are only interested in a single document, we can use the `limit` operator in conjunction\nto limit our result set. In this case, it would be a single document.\n\nIf we extend this further and want to read a specific document, we can add filter parameters over the top of it:\n\n```kotlin\nval queryParams = Filters\n .and(\n listOf(\n eq(\"cuisine\", \"American\"),\n eq(\"borough\", \"Queens\")\n )\n )\n```\n\nOr, we can use any of the operators from our list. The final code looks like this.\n\n```kotlin\nsuspend fun readSpecificDocument(database: MongoDatabase) {\n val collection = database.getCollection(collectionName = \"restaurants\")\n val queryParams = Filters\n .and(\n listOf(\n eq(\"cuisine\", \"American\"),\n eq(\"borough\", \"Queens\")\n )\n )\n\n collection\n .find(queryParams)\n .limit(2)\n .collect {\n println(it)\n }\n\n}\n```\n\nFor the output, we see this:\n\n> Don't forget to add the environment variable again for this file, if you had trouble while running it.\n\nAnother practical use case that comes with a read operation is how to add pagination to the results. 
This can be done with the `limit` and `offset`\noperators.\n\n```kotlin\nsuspend fun readWithPaging(database: MongoDatabase, offset: Int, pageSize: Int) {\n val collection = database.getCollection(collectionName = \"restaurants\")\n val queryParams = Filters\n .and(\n listOf(\n eq(Restaurant::cuisine.name, \"American\"),\n eq(Restaurant::borough.name, \"Queens\")\n )\n )\n\n collection\n .find(queryParams)\n .limit(pageSize)\n .skip(offset)\n .collect {\n println(it)\n }\n}\n```\n\nBut with this approach, often, the query response time increases with value of the `offset`. To overcome this, we can benefit by creating an `Index`,\nas shown below.\n\n```kotlin\nval collection = database.getCollection(collectionName = \"restaurants\")\nval options = IndexOptions().apply {\n this.name(\"restaurant_id_index\")\n this.background(true)\n}\n\ncollection.createIndex(\n keys = Indexes.ascending(\"restaurant_id\"),\n options = options\n)\n```\n\n## Update\n\nNow, let's discuss how to edit/update an existing document. Again, let's quickly create a new Kotlin file, `Update.Kt`.\n\nIn general, there are two ways of updating any document:\n\n* Perform an **update** operation, which allows us to update specific fields of the matching documents without impacting the other fields.\n* Perform a **replace** operation to replace the matching document with the new document.\n\nFor this exercise, we'll use the document we created earlier with the create operation `{restaurant_id: \"restaurantId\"}` and update\nthe `restaurant_id` with a more realistic value. Let's split this into two sub-tasks for clarity.\n\nFirst, using `Filters`, we query to filter the document, similar to the read operation earlier.\n\n```kotlin\nval collection = db.getCollection(\"restaurants\")\nval queryParam = Filters.eq(\"restaurant_id\", \"restaurantId\")\n```\n\nThen, we can set the `restaurant_id` with a random integer value using `Updates`.\n\n```kotlin\nval updateParams = Updates.set(\"restaurant_id\", Random.nextInt().toString())\n```\n\nAnd finally, we use `updateOne` to update the document in an atomic operation.\n\n```kotlin\ncollection.updateOne(filter = queryParam, update = updateParams).also {\n println(\"Total docs modified ${it.matchedCount} and fields modified ${it.modifiedCount}\")\n}\n```\n\nIn the above example, we were already aware of which document we wanted to update \u2014 the restaurant with an id `restauratantId` \u2014 but there could be a\nfew use cases where that might not be the situation. In such cases, we would first look up the document and then update it. `findOneAndUpdate` can be\nhandy. It allows you to combine both of these processes into an atomic operation, unlocking additional performance.\n\nAnother variation of the same could be updating multiple documents with one call. 
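\n\nBefore moving on to multi-document updates, here is a rough sketch of the `findOneAndUpdate` variant described above (illustrative only, reusing the same `restaurants` collection, filter, and update helpers):\n\n```kotlin\nsuspend fun updateAndReturnRestaurant(db: MongoDatabase) {\n    val collection = db.getCollection(\"restaurants\")\n    val queryParam = Filters.eq(\"restaurant_id\", \"restaurantId\")\n    val updateParams = Updates.set(\"restaurant_id\", Random.nextInt().toString())\n\n    // Filters, updates, and returns the matching document in a single atomic call.\n    collection.findOneAndUpdate(queryParam, updateParams)?.also {\n        println(\"Updated document: $it\")\n    }\n}\n```\n\n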
`updateMany` is useful for such use cases \u2014 for example, if we want\nto update the `cuisine` of all restaurants to your favourite type of cuisine and `borough` to Brooklyn.\n\n```kotlin\nsuspend fun updateMultipleDocuments(db: MongoDatabase) {\n val collection = db.getCollection(\"restaurants\")\n val queryParam = Filters.eq(Restaurant::cuisine.name, \"Chinese\")\n val updateParams = Updates.combine(\n Updates.set(Restaurant::cuisine.name, \"Indian\"),\n Updates.set(Restaurant::borough.name, \"Brooklyn\")\n )\n\n collection.updateMany(filter = queryParam, update = updateParams).also {\n println(\"Total docs matched ${it.matchedCount} and modified ${it.modifiedCount}\")\n }\n}\n```\n\nIn these examples, we used `set` and `combine` with `Updates`. But there are many more types of update operator to explore that allow us to do many\nintuitive operations, like set the currentDate or timestamp, increase or decrease the value of the field, and so on. To learn more about the different\ntypes of update operators you can perform with Kotlin and MongoDB, refer to\nour docs.\n\n## Delete\n\nNow, let's explore one final CRUD operation: delete. We'll start by exploring how to delete a single document. To do this, we'll\nuse `findOneAndDelete` instead of `deleteOne`. As an added benefit, this also returns the deleted document as output. In our example, we delete the\nrestaurant:\n\n```kotlin\nval collection = db.getCollection(collectionName = \"restaurants\")\nval queryParams = Filters.eq(\"restaurant_id\", \"restaurantId\")\n\ncollection.findOneAndDelete(filter = queryParams).also {\n it?.let {\n println(it)\n }\n}\n```\n\nTo delete multiple documents, we can use `deleteMany`. We can, for example, use this to delete all the data we created earlier with our create\noperation.\n\n```kotlin\nsuspend fun deleteRestaurants(db: MongoDatabase) {\n val collection = db.getCollection(collectionName = \"restaurants\")\n\n val queryParams = Filters.or(\n listOf(\n Filters.regex(Restaurant::name.name, Pattern.compile(\"^Insert\")),\n Filters.regex(\"restaurant_id\", Pattern.compile(\"^restaurant\"))\n )\n )\n collection.deleteMany(filter = queryParams).also {\n println(\"Document deleted : ${it.deletedCount}\")\n }\n}\n```\n\n## Summary\n\nCongratulations! You now know how to set up your first Kotlin application with MongoDB and perform CRUD operations. The complete source code of the\napp can be found on GitHub.\n\nIf you have any feedback on your experience working with the MongoDB Kotlin driver, please submit a comment in our\nuser feedback portal or reach out to me on Twitter: @codeWithMohit.", "format": "md", "metadata": {"tags": ["MongoDB", "Kotlin"], "pageDescription": "This is an introductory article on how to build an application in Kotlin using MongoDB Atlas and the MongoDB Kotlin driver, the latest addition to our list of official drivers.", "contentType": "Tutorial"}, "title": "Getting Started with the MongoDB Kotlin Driver", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/python-quickstart-starlette", "action": "created", "body": "# Getting Started with MongoDB and Starlette\n\n \n\nStarlette is a lightweight ASGI framework/toolkit, which is ideal for building high-performance asyncio services. It provides everything you need to create JSON APIs, with very little boilerplate. 
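\n\nTo give a sense of just how little boilerplate that is, a complete \"hello world\" JSON endpoint fits in a few lines (a standalone sketch, separate from the CRUD app we build in this quick start):\n\n``` python\nfrom starlette.applications import Starlette\nfrom starlette.responses import JSONResponse\nfrom starlette.routing import Route\n\n\nasync def homepage(request):\n    # Route handlers are async functions that receive a request and return a response.\n    return JSONResponse({\"hello\": \"world\"})\n\n\napp = Starlette(routes=[Route(\"/\", homepage)])\n\n# Run with: uvicorn hello:app --reload  (assuming this file is saved as hello.py)\n```\n\n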
However, if you would prefer an async web framework that is a bit more \"batteries included,\" be sure to read my tutorial on Getting Started with MongoDB and FastAPI.\n\nIn this quick start, we will create a CRUD (Create, Read, Update, Delete) app showing how you can integrate MongoDB with your Starlette projects.\n\n## Prerequisites\n\n- Python 3.9.0\n- A MongoDB Atlas cluster. Follow the \"Get Started with Atlas\" guide to create your account and MongoDB cluster. Keep a note of your username, password, and connection string as you will need those later.\n\n## Running the Example\n\nTo begin, you should clone the example code from GitHub.\n\n``` shell\ngit clone git@github.com:mongodb-developer/mongodb-with-starlette.git\n```\n\nYou will need to install a few dependencies: Starlette, Motor, etc. I always recommend that you install all Python dependencies in a virtualenv for the project. Before running pip, ensure your virtualenv is active.\n\n``` shell\ncd mongodb-with-starlette\npip install -r requirements.txt\n```\n\nIt may take a few moments to download and install your dependencies. This is normal, especially if you have not installed a particular package before.\n\nOnce you have installed the dependencies, you need to create an environment variable for your MongoDB connection string.\n\n``` shell\nexport MONGODB_URL=\"mongodb+srv://:@/?retryWrites=true&w=majority\"\n```\n\nRemember, anytime you start a new terminal session, you will need to set this environment variable again. I use direnv to make this process easier.\n\nThe final step is to start your Starlette server.\n\n``` shell\nuvicorn app:app --reload\n```\n\nOnce the application has started, you can view it in your browser at . There won't be much to see at the moment as you do not have any data! We'll look at each of the end-points a little later in the tutorial; but if you would like to create some data now to test, you need to send a `POST` request with a JSON body to the local URL.\n\n``` shell\ncurl -X \"POST\" \"http://localhost:8000/\" \\\n -H 'Accept: application/json' \\\n -H 'Content-Type: application/json; charset=utf-8' \\\n -d '{\n \"name\": \"Jane Doe\",\n \"email\": \"jdoe@example.com\",\n \"gpa\": \"3.9\"\n }'\n```\n\nTry creating a few students via these `POST` requests, and then refresh your browser.\n\n## Creating the Application\n\nAll the code for the example application is within `app.py`. I'll break it down into sections and walk through what each is doing.\n\n### Connecting to MongoDB\n\nOne of the very first things we do is connect to our MongoDB database.\n\n``` python\nclient = motor.motor_asyncio.AsyncIOMotorClient(os.environ\"MONGODB_URL\"])\ndb = client.college\n```\n\nWe're using the async motor driver to create our MongoDB client, and then we specify our database name `college`.\n\n### Application Routes\n\nOur application has five routes:\n\n- POST / - creates a new student.\n- GET / - view a list of all students.\n- GET /{id} - view a single student.\n- PUT /{id} - update a student.\n- DELETE /{id} - delete a student.\n\n#### Create Student Route\n\n``` python\nasync def create_student(request):\n student = await request.json()\n student[\"_id\"] = str(ObjectId())\n new_student = await db[\"students\"].insert_one(student)\n created_student = await db[\"students\"].find_one({\"_id\": new_student.inserted_id})\n return JSONResponse(status_code=201, content=created_student)\n```\n\nNote how I am converting the `ObjectId` to a string before assigning it as the `_id`. 
MongoDB stores data as [BSON; Starlette encodes and decodes data as JSON strings. BSON has support for additional non-JSON-native data types, including `ObjectId`, but JSON does not. Fortunately, MongoDB `_id` values don't need to be ObjectIDs. Because of this, for simplicity, we convert ObjectIds to strings before storing them.\n\nThe `create_student` route receives the new student data as a JSON string in a `POST` request. The `request.json` function converts this JSON string back into a Python dictionary which we can then pass to our MongoDB client.\n\nThe `insert_one` method response includes the `_id` of the newly created student. After we insert the student into our collection, we use the `inserted_id` to find the correct document and return this in our `JSONResponse`.\n\nStarlette returns an HTTP `200` status code by default, but in this instance, a `201` created is more appropriate.\n\n##### Read Routes\n\nThe application has two read routes: one for viewing all students and the other for viewing an individual student.\n\n``` python\nasync def list_students(request):\n students = await db\"students\"].find().to_list(1000)\n return JSONResponse(students)\n```\n\nMotor's `to_list` method requires a max document count argument. For this example, I have hardcoded it to `1000`, but in a real application, you would use the [skip and limit parameters in find to paginate your results.\n\n``` python\nasync def show_student(request):\n id = request.path_params\"id\"]\n if (student := await db[\"students\"].find_one({\"_id\": id})) is not None:\n return JSONResponse(student)\n\n raise HTTPException(status_code=404, detail=f\"Student {id} not found\")\n```\n\nThe student detail route has a path parameter of `id`, which Starlette passes as an argument to the `show_student` function. We use the id to attempt to find the corresponding student in the database. The conditional in this section is using an [assignment expression, a recent addition to Python (introduced in version 3.8) and often referred to by the incredibly cute sobriquet \"walrus operator.\"\n\nIf a document with the specified `id` does not exist, we raise an `HTTPException` with a status of `404`.\n\n##### Update Route\n\n``` python\nasync def update_student(request):\n id = request.path_params\"id\"]\n student = await request.json()\n update_result = await db[\"students\"].update_one({\"_id\": id}, {\"$set\": student})\n\n if update_result.modified_count == 1:\n if (updated_student := await db[\"students\"].find_one({\"_id\": id})) is not None:\n return JSONResponse(updated_student)\n\n if (existing_student := await db[\"students\"].find_one({\"_id\": id})) is not None:\n return JSONResponse(existing_student)\n\n raise HTTPException(status_code=404, detail=f\"Student {id} not found\")\n```\n\nThe `update_student` route is like a combination of the `create_student` and the `show_student` routes. It receives the id of the document to update as well as the new data in the JSON body.\n\nWe attempt to `$set` the new values in the correct document with `update_one`, and then check to see if it correctly modified a single document. If it did, then we find that document that was just updated and return it.\n\nIf the `modified_count` is not equal to one, we still check to see if there is a document matching the id. 
A `modified_count` of zero could mean that there is no document with that id, but it could also mean that the document does exist, but it did not require updating because the current values are the same as those supplied in the `PUT` request.\n\nIt is only after that final find fails that we raise a `404` Not Found exception.\n\n##### Delete Route\n\n``` python\nasync def delete_student(request):\n id = request.path_params[\"id\"]\n delete_result = await db[\"students\"].delete_one({\"_id\": id})\n\n if delete_result.deleted_count == 1:\n return JSONResponse(status_code=204)\n\n raise HTTPException(status_code=404, detail=f\"Student {id} not found\")\n```\n\nOur last route is `delete_student`. Again, because this is acting upon a single document, we have to supply an id in the URL. If we find a matching document and successfully delete it, then we return an HTTP status of `204` or \"No Content.\" In this case, we do not return a document as we've already deleted it! However, if we cannot find a student with the specified id, then instead we return a `404`.\n\n### Creating the Starlette App\n\n``` python\napp = Starlette(\n debug=True,\n routes=[\n Route(\"/\", create_student, methods=[\"POST\"]),\n Route(\"/\", list_students, methods=[\"GET\"]),\n Route(\"/{id}\", show_student, methods=[\"GET\"]),\n Route(\"/{id}\", update_student, methods=[\"PUT\"]),\n Route(\"/{id}\", delete_student, methods=[\"DELETE\"]),\n ],\n)\n```\n\nThe final piece of code creates an instance of Starlette and includes each of the routes we defined. You can see that many of the routes share the same URL but use different HTTP methods. For example, a `GET` request to `/{id}` will return the corresponding student document for you to view, whereas a `DELETE` request to the same URL will delete it. So, be very thoughtful about the which HTTP method you use for each request!\n\n## Wrapping Up\n\nI hope you have found this introduction to Starlette with MongoDB useful. Now is a fascinating time for Python developers as more and more frameworks\u2014both new and old\u2014begin taking advantage of async.\n\nIf you would like to learn more and take your MongoDB and Starlette knowledge to the next level, check out Ado's very in-depth tutorial on how to [Build a Property Booking Website with Starlette, MongoDB, and Twilio. Also, if you're interested in FastAPI (a web framework built upon Starlette), you should view my tutorial on getting started with the FARM stack: FastAPI, React, & MongoDB.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Python", "MongoDB"], "pageDescription": "Getting Started with MongoDB and Starlette", "contentType": "Quickstart"}, "title": "Getting Started with MongoDB and Starlette", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/creating-multiplayer-drawing-game-phaser", "action": "created", "body": "# Creating a Multiplayer Drawing Game with Phaser and MongoDB\n\nWhen it comes to MongoDB, an often overlooked industry that it works amazingly well in is gaming. It works great in gaming because of its performance, but more importantly its ability to store whatever complex data the game throws at it.\n\nLet's say you wanted to create a drawing game like Pictionary. 
I know what you're thinking: why would I ever want to create a Pictionary game with MongoDB integration? Well, what if you wanted to be able to play with friends remotely? In this scenario, you could store your brushstrokes in MongoDB and load those brushstrokes on your friend's device. These brushstrokes can be pretty much anything. They could be images, vector data, or something else entirely.\n\nA drawing game is just one of many possible games that would pair well with MongoDB.\n\nIn this tutorial, we're going to create a drawing game using Phaser. The data will be stored and synced with MongoDB and be visible on everyone else's device whether that is desktop or mobile.\n\nTake the following animated image for example:\n\nIn the above example, I have my MacBook as well as my iOS device in the display. I'm drawing on my iOS device, on the right, and after the brushstrokes are considered complete, they are sent to MongoDB and the other clients, such as the MacBook. This is why the strokes are not instantly available as the strokes are in progress.\n\n## The Tutorial Requirements\n\nThere are a few requirements that must be met prior to starting this\ntutorial:\n\n- A MongoDB Atlas free tier cluster or better must be available.\n- A MongoDB Realm application configured to use the Atlas cluster.\n\nThe heavy lifting of this example will be with Phaser, MongoDB\nAtlas, and MongoDB Realm.\n\n>MongoDB Atlas has a forever FREE tier that can be configured in the MongoDB Cloud.\n\nThere's no account requirement or downloads necessary when it comes to building Phaser games. These games are both web and mobile compatible.\n\n## Drawing with Phaser, HTML, and Simple JavaScript\n\nWhen it comes to Phaser, you can do everything within a single HTML\nfile. This file must be served rather than opened from the local\nfilesystem, but nothing extravagant needs to be done with the project.\n\nLet's start by creating a project somewhere on your computer with an\n**index.html** file and a **game.js** file. We're going to add some\nboilerplate code to our **index.html** prior to adding our game logic to\nthe **game.js** file.\n\nWithin the **index.html** file, add the following:\n\n``` xml\n\n \n \n \n \n \n \n \n \n \n \n \n\n```\n\nIn the above HTML, we've added scripts for both Phaser and MongoDB\nRealm. We've also defined an HTML container `\n` element, as seen by\nthe `game` id, to hold our game when the time comes.\n\nWe could add all of our Phaser and MongoDB logic into the unused\n`\n```\n\nIn the above code, we're defining that our game should be rendered in the HTML element with the `game` id. We're also saying that it should take the full width and height that's available to us in the browser. This full width and height works for both computers and mobile devices.\n\nNow we can take a look at each of our scenes in the **game.js** file, starting with the `initScene` function:\n\n``` javascript\nasync initScene(data) {\n this.strokes = ];\n this.isDrawing = false;\n}\n```\n\nFor now, the `initScene` function will remain short. This is because we are not going to worry about initializing any database information yet. When it comes to `strokes`, this will represent independent collections of points. A brushstroke is just a series of connected points, so we want to maintain them. 
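\n\nFor intuition, the game document we will eventually maintain in MongoDB has roughly the following shape (an illustrative sketch only; each entry in `strokes` is whatever `Phaser.Curves.Path.toJSON()` produces):\n\n``` javascript\n// Illustrative only: roughly one document per game.\nconst exampleGameDocument = {\n    _id: \"my-game-id\",         // the game id players type in to join\n    owner_id: \"realm-auth-id\", // only the owner's strokes get written\n    strokes: [\n        // each element is a serialized Phaser path (the output of Path.toJSON())\n    ]\n};\n```\n\n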
We need to be able to determine when a stroke starts and finishes, so we can use `isDrawing` to determine if we've lifted our cursor or pencil.\n\nNow let's have a look at the `createScene` function:\n\n``` javascript\nasync createScene() {\n this.graphics = this.add.graphics();\n this.graphics.lineStyle(4, 0x00aa00);\n}\n```\n\nLike with the `initScene`, this function will change as we add the database functionality. For now, we're initializing the graphics layer in our scene and defining the line size and color that should be rendered. This is a simple game so all lines will be 4 pixels in size and the color green.\n\nThis brings us into the most extravagant of the scenes. Let's take a look at the `updateScene` function:\n\n``` javascript\nasync updateScene() {\n if(!this.input.activePointer.isDown && this.isDrawing) {\n this.isDrawing = false;\n } else if(this.input.activePointer.isDown) {\n if(!this.isDrawing) {\n this.path = new Phaser.Curves.Path(this.input.activePointer.position.x - 2, this.input.activePointer.position.y - 2);\n this.isDrawing = true;\n } else {\n this.path.lineTo(this.input.activePointer.position.x - 2, this.input.activePointer.position.y - 2);\n }\n this.path.draw(this.graphics);\n }\n}\n```\n\nThe `updateScene` function is responsible for continuously rendering things to the screen. It is constantly run, unlike the `createScene` which is only ran once. When updating, we want to check to see if we are either drawing or not drawing.\n\nIf the `activePointer` is not down, it means we are not drawing. If we are not drawing, we probably want to indicate so with the `isDrawing` variable. This condition will get more advanced when we start adding database logic.\n\nIf the `activePointer` is down, it means we are drawing. In Phaser, to draw a line, we need a starting point and then a series of points we can render as a path. If we're starting the brushstroke, we should probably create a new path. Because we set our line to be 4 pixels, if we want the line to draw at the center of our cursor, we need to use half the size for the x and y position.\n\nWe're not ever clearing the canvas, so we don't actually need to draw the path unless the pointer is active. When the pointer is active, whatever was previously drawn will stay on the screen. This saves us some processing resources.\n\nWe're almost at a point where we can test our offline game!\n\nThe scenes are good, even though we haven't added MongoDB logic to them. We need to actually create the game so the scenes can be used. Within the **game.js** file, update the following function:\n\n``` javascript\nasync createGame(id, authId) {\n this.game = new Phaser.Game(this.phaserConfig);\n this.game.scene.start(\"default\", {});\n}\n```\n\nThe above code will take the Phaser configuration that we had set in the `constructor` method and start the `default` scene. As of right now we aren't passing any data to our scenes, but we will in the future.\n\nWith the `createGame` function available, we need to make use of it. Within the **index.html** file, add the following line to your `\n \n \n \n \n \n \n \n Create / Join\n \n\n \n Game ID: \n \n \n \n Not in a game...\n \n \n \n \n\n```\n\nThe above code has a little more going on now, but don't forget to use your own application ids, database names, and collections. 
You'll start by probably noticing the following markup:\n\n``` xml\n\n \n \n Create / Join\n \n \n Game ID: \n \n\n Not in a game...\n\n```\n\nNot all of it was absolutely necessary, but it does give our game a better look and feel. Essentially now we have an input field. When the input field is submitted, whether that be with keypress or click, the `joinOrCreateGame` function is called The `keyCode == 13` represents that the enter key was pressed. The function isn't called directly, but the wrapper functions call it. The game id is extracted from the input, and the HTML components are transformed based on the information about the game.\n\nTo summarize what happens, the user submits a game id. The game id floats on top of the game scene as well as information regarding if you're the owner of the game or not.\n\nThe markup looks worse than it is.\n\nNow that we can create or join games both from a UX perspective and a logic perspective, we need to change what happens when it comes to interacting with the game itself. We need to be able to store our brush strokes in MongoDB. To do this, we're going to revisit the `updateScene` function:\n\n``` javascript\nupdateScene() {\n if(this.authId == this.ownerId) {\n if(!this.input.activePointer.isDown && this.isDrawing) {\n this.collection.updateOne(\n { \n \"owner_id\": this.authId,\n \"_id\": this.gameId\n },\n {\n \"$push\": {\n \"strokes\": this.path.toJSON()\n }\n }\n ).then(result => console.log(result));\n this.isDrawing = false;\n } else if(this.input.activePointer.isDown) {\n if(!this.isDrawing) {\n this.path = new Phaser.Curves.Path(this.input.activePointer.position.x - 2, this.input.activePointer.position.y - 2);\n this.isDrawing = true;\n } else {\n this.path.lineTo(this.input.activePointer.position.x - 2, this.input.activePointer.position.y - 2);\n }\n this.path.draw(this.graphics);\n }\n }\n}\n```\n\nRemember, this time around we have access to the game id and the owner id information. It was passed into the scene when we created or joined a game.\n\nWhen it comes to actually drawing, nothing is going to change. However, when we aren't drawing, we want to update the game document to push our new strokes. Phaser makes it easy to convert our line information to JSON which inserts very easily into MongoDB. Remember earlier when I said accepting flexible data was a huge benefit for gaming?\n\nSo we are pushing these brushstrokes to MongoDB. We need to be able to load them from MongoDB.\n\nLet's update our `createScene` function:\n\n``` javascript\nasync createScene() {\n this.graphics = this.add.graphics();\n this.graphics.lineStyle(4, 0x00aa00);\n this.strokes.forEach(stroke => {\n this.path = new Phaser.Curves.Path();\n this.path.fromJSON(stroke);\n this.path.draw(this.graphics);\n });\n}\n```\n\nWhen the `createScene` function executes, we are taking the `strokes` array that was provided by the `createGame` and `joinGame` functions and looping over it. Remember, in the `updateScene` function we are storing the exact path. This means we can load the exact path and draw it.\n\nThis is great, but the users on the other end will only see the brush strokes when they first launch the game. We need to make it so they get new brushstrokes as they are pushed into our document. 
We can do this with [change streams in Realm.\n\nLet's update our `createScene` function once more:\n\n``` javascript\nasync createScene() {\n this.graphics = this.add.graphics();\n this.graphics.lineStyle(4, 0x00aa00);\n this.strokes.forEach(stroke => {\n this.path = new Phaser.Curves.Path();\n this.path.fromJSON(stroke);\n this.path.draw(this.graphics);\n });\n const stream = await this.collection.watch({ \"fullDocument._id\": this.gameId });\n stream.onNext(event => {\n let updatedFields = event.updateDescription.updatedFields;\n if(updatedFields.hasOwnProperty(\"strokes\")) {\n updatedFields = [updatedFields.strokes[\"0\"]];\n }\n for(let strokeNumber in updatedFields) {\n let changeStreamPath = new Phaser.Curves.Path();\n changeStreamPath.fromJSON(updatedFields[strokeNumber]);\n changeStreamPath.draw(this.graphics);\n }\n });\n}\n```\n\nWe're now watching our collection for documents that have an `_id` field that matches our game id. Remember, we're in a game, we don't need to watch documents that are not our game. When a new document comes in, we can look at the updated fields and render the new strokes to the scene.\n\nSo why are we not using `path` like all the other areas of the code?\n\nYou don't know when new strokes are going to come in. If you're using the same global variable between the active drawing canvas and the change stream, there's a potential for the strokes to merge together given certain race conditions. It's just easier to let the change stream make its own path.\n\nAt this point in time, assuming your cluster is available and the configurations were made correctly, any drawing you do will be added to MongoDB and essentially synchronized to other computers and devices watching the document.\n\n## Conclusion\n\nYou just saw how to make a simple drawing game with Phaser and MongoDB. Given the nature of Phaser, this game is compatible on desktops as well as mobile devices, and, given the nature of MongoDB and Realm, anything you add to the game will sync across devices and platforms as well.\n\nThis is just one of many possible gaming examples that could use MongoDB, and these interactive applications don't even need to be a game. You could be creating the next Photoshop application and you want every brushstroke, every layer, etc., to be synchronized to MongoDB. What you can do is limitless.", "format": "md", "metadata": {"tags": ["JavaScript", "Realm"], "pageDescription": "Learn how to build a drawing game with Phaser that synchronizes with MongoDB Realm for multiplayer.", "contentType": "Article"}, "title": "Creating a Multiplayer Drawing Game with Phaser and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/maintaining-geolocation-specific-game-leaderboard-phaser-mongodb", "action": "created", "body": "\n \n \n \n \n \n \n ", "format": "md", "metadata": {"tags": ["JavaScript", "Atlas"], "pageDescription": "Learn how to create a game with a functioning leaderboard using Phaser, JavaScript, and MongoDB.", "contentType": "Tutorial"}, "title": "Maintaining a Geolocation Specific Game Leaderboard with Phaser and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/build-infinite-runner-game-unity-realm-unity-sdk", "action": "created", "body": "# Build an Infinite Runner Game with Unity and the Realm Unity SDK\n\n> The Realm .NET SDK for Unity is now in GA. 
Learn more here.\n> \n\nDid you know that MongoDB has a Realm SDK for the Unity game development framework that makes working with game data effortless when making mobile games, PC games, and similar? It's currently an alpha release, but you can already start using it to build persistence into your cross platform gaming projects.\n\nA popular game template for the past few years has been in infinite runner style games. Great games such as\u00a0Temple Run\u00a0and\u00a0Subway Surfers\u00a0have had many competitors, each with their own spin on the subject. If you're unfamiliar with the infinite runner concept, the idea is that you have a player that can move horizontally to fixed positions. As the game progresses, obstacles and rewards enter the scene. The player must dodge or obtain depending on the object and this happens until the player collides with an obstacle. As time progresses, the game generally speeds up to make things more difficult.\n\nWhile the game might sound complicated, there's actually a lot of repetition.\n\nIn this tutorial, we'll look at how to make a game in Unity and C#, particularly our own infinite runner 2d game. We'll look at important concepts such as object pooling and collision, as well as data persistence using the Realm SDK for Unity.\n\nTo get an idea of what we want to build, check out the following animated image:\n\nAs you can see in the above image, we have simple shapes as well as cake. The score increases as time increases or when cake is obtained. The level restarts when you collide with an obstacle and depending on what your score was, it could now be the new high score.\n\n## The Requirements\n\nThere are a few requirements, some of which will change once the Realm SDK for Unity becomes a stable release.\n\n- Unity 2020.2.4f1 or newer\n- The Realm SDK for Unity, 10.1.1 or newer\n\nThis tutorial might work with earlier versions of the Unity editor. However, 2020.2.4f1 is the version that I'm using. As of right now, the Realm SDK for Unity is only available as a tarball through GitHub rather than through the Unity Asset Store. For now, you'll have to dig through the releases on GitHub.\n\n## Creating the Game Objects for the Player, Obstacles, and Rewards\n\nEven though there are a lot of visual components moving around on the screen, there's not a lot happening behind the scenes in terms of the Unity project. There are three core visual objects that make up this game example.\n\nWe have the player, the obstacles, and the rewards, which we're going to interchangeably call cake. Each of the objects will have the same components, but different scripts. We'll add the components here, but create the scripts later.\n\nWithin your project, create the three different game objects in the Unity editor. To start, each will be an empty game object.\n\nRather than working with all kinds of fancy graphics, create a 1x1 pixel image that is white. We're going to use it for all of our game objects, just giving them a different color or size. If you'd prefer the fancy graphics, consider checking out the Unity Asset Store for more options.\n\nEach game object should have a\u00a0**Sprite Renderer**,\u00a0**Rigidbody 2D**, and a\u00a0**Box Collider 2D**\u00a0component attached. The\u00a0**Sprite Renderer**\u00a0component can use the 1x1 pixel graphic or one of your choosing. For the\u00a0**Rigidbody 2D**, make sure the\u00a0**Body Type\u00a0is\u00a0Kinematic**\u00a0on all game objects because we won't be using things like gravity. 
Likewise, make sure the\u00a0**Is Trigger**\u00a0is enabled for each of the\u00a0**Box Collider 2D**\u00a0components.\n\nWe'll be adding more as we go along, but for now, we have a starting point.\n\n## Creating an Object to Represent the Game Controller\n\nThere are a million different ways to create a great game with Unity. However, for this game, we're going to not rely on any particular visually rendered object for managing the game itself. Instead, we're going to create a game object responsible for game management.\n\nAdd an empty game object to your scene titled\u00a0**GameController**. While we won't be doing anything with it now, we'll be attaching scripts to it for managing the object pool and the score.\n\n## Adding Logic to the Game Objects Within the Scene with C# Scripts\n\nWith the three core game objects (player, obstacle, reward) in the scene, we need to give each of them some game logic. Let's start with the logic for the obstacle and reward since they are similar.\n\nThe idea behind the obstacle and reward is that they are constantly moving down from the top of the screen. As they become visible, the position along the x-axis is randomized. As they fall off the screen, the object is disabled and eventually reset.\n\nCreate an\u00a0**Obstacle.cs**\u00a0file with the following C# code:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class Obstacle : MonoBehaviour {\n\n public float movementSpeed;\n\n private float] _fixedPositionX = new float[] { -8.0f, 0.0f, 8.0f };\n\n void OnEnable() {\n int randomPositionX = Random.Range(0, 3);\n transform.position = new Vector3(_fixedPositionX[randomPositionX], 6.0f, 0);\n }\n\n void Update() {\n transform.position += Vector3.down * movementSpeed * Time.deltaTime;\n if(transform.position.y < -5.25) {\n gameObject.SetActive(false);\n }\n }\n\n}\n```\n\nIn the above code, we have fixed position possibilities. When the game object is enabled, we randomly choose from one of the possible fixed positions and update the overall position of the game object.\n\nFor every frame of the game, the position of the game object falls down on the y-axis. If the object reaches a certain position, it is then disabled.\n\nSimilarly, create a\u00a0**Cake.cs**\u00a0file with the following code:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class Cake : MonoBehaviour {\n\n public float movementSpeed;\n\n private float[] _fixedPositionX = new float[] { -8.0f, 0.0f, 8.0f };\n\n void OnEnable() {\n int randomPositionX = Random.Range(0, 3);\n transform.position = new Vector3(_fixedPositionX[randomPositionX], 6.0f, 0);\n }\n\n void Update() {\n transform.position += Vector3.down * movementSpeed * Time.deltaTime;\n if (transform.position.y < -5.25) {\n gameObject.SetActive(false);\n }\n }\n\n void OnTriggerEnter2D(Collider2D collider) {\n if (collider.gameObject.tag == \"Player\") {\n gameObject.SetActive(false);\n }\n }\n\n}\n```\n\nThe above code should look the same with the exception of the\u00a0`OnTriggerEnter2D`\u00a0function. In the\u00a0`OnTriggerEnter2D`\u00a0function, we have the following code:\n\n``` csharp\nvoid OnTriggerEnter2D(Collider2D collider) {\n if (collider.gameObject.tag == \"Player\") {\n gameObject.SetActive(false);\n }\n}\n```\n\nIf the current reward game object collides with another game object and that other game object is tagged as being a \"Player\", then the reward object is disabled. 
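For that check to ever succeed, the player's game object must actually have the "Player" tag assigned in the Unity editor. If you prefer, the same guard can be written with Unity's `CompareTag`, which skips building a string just for the comparison; here is a sketch of the equivalent check, not a change the tutorial requires:

``` csharp
void OnTriggerEnter2D(Collider2D collider) {
    // Equivalent to comparing collider.gameObject.tag, but without the
    // string allocation, and it complains if the tag isn't defined.
    if (collider.CompareTag("Player")) {
        gameObject.SetActive(false);
    }
}
```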
We'll handle the score keeping of the consumed reward elsewhere.\n\nMake sure to attach the\u00a0`Obstacle`\u00a0and\u00a0`Cake`\u00a0scripts to the appropriate game objects within your scene.\n\nWith the obstacles and rewards out of the way, let's look at the logic for the player. Create a\u00a0**Player.cs**\u00a0file with the following code:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\nusing UnityEngine.SceneManagement;\n\npublic class Player : MonoBehaviour {\n\n public float movementSpeed;\n\n void Update() {\n if(Input.GetKey(KeyCode.LeftArrow)) {\n transform.position += Vector3.left * movementSpeed * Time.deltaTime;\n } else if(Input.GetKey(KeyCode.RightArrow)) {\n transform.position += Vector3.right * movementSpeed * Time.deltaTime;\n }\n }\n\n void OnTriggerEnter2D(Collider2D collider) {\n if(collider.gameObject.tag == \"Obstacle\") {\n // Handle Score Here\n SceneManager.LoadScene(SceneManager.GetActiveScene().buildIndex);\n } else if(collider.gameObject.tag == \"Cake\") {\n // Handle Score Here\n }\n }\n\n}\n```\n\nThe\u00a0**Player.cs**\u00a0file will change in the future, but for now, we can move the player around based on the arrow keys on the keyboard. We are also looking at collisions with other objects. If the player object collides with an object tagged as being an obstacle, then the goal is to change the score and restart the scene. Otherwise, if the player object collides with an object tagged as being \"Cake\", which is a reward, then the goal is to just change the score.\n\nMake sure to attach the\u00a0`Player`\u00a0script to the appropriate game object within your scene.\n\n## Pooling Obstacles and Rewards with Performance-Maximizing Object Pools\n\nAs it stands, when an obstacle falls off the screen, it becomes disabled. As a reward is collided with or as it falls off the screen, it becomes disabled. In an infinite runner, we need those obstacles and rewards to be constantly resetting to look infinite. While we could just destroy and instantiate as needed, that is a performance-heavy task. Instead, we should make use of an object pool.\n\nThe idea behind an object pool is that you instantiate objects when the game starts. The number you instantiate is up to you. Then, while the game is being played, objects are pulled from the pool if they are available and when they are done, they are added back to the pool. Remember the enabling and disabling of our objects in the obstacle and reward scripts? That has to do with pooling.\n\nAges ago, I had\u00a0[written a tutorial\u00a0around object pooling, but we'll explore it here as a refresher. 
Create an\u00a0**ObjectPool.cs**\u00a0file with the following code:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class ObjectPool : MonoBehaviour\n{\n\n public static ObjectPool SharedInstance;\n\n private List pooledObstacles;\n private List pooledCake;\n public GameObject obstacleToPool;\n public GameObject cakeToPool;\n public int amountToPool;\n\n void Awake() {\n SharedInstance = this;\n }\n\n void Start() {\n pooledObstacles = new List();\n pooledCake = new List();\n GameObject tmpObstacle;\n GameObject tmpCake;\n for(int i = 0; i < amountToPool; i++) {\n tmpObstacle = Instantiate(obstacleToPool);\n tmpObstacle.SetActive(false);\n pooledObstacles.Add(tmpObstacle);\n tmpCake = Instantiate(cakeToPool);\n tmpCake.SetActive(false);\n pooledCake.Add(tmpCake);\n }\n }\n\n public GameObject GetPooledObstacle() {\n for(int i = 0; i < amountToPool; i++) {\n if(pooledObstaclesi].activeInHierarchy == false) {\n return pooledObstacles[i];\n }\n }\n return null;\n }\n\n public GameObject GetPooledCake() {\n for(int i = 0; i < amountToPool; i++) {\n if(pooledCake[i].activeInHierarchy == false) {\n return pooledCake[i];\n }\n }\n return null;\n }\n\n}\n```\n\nIf the code looks a little familiar, a lot of it was taken from the Unity educational resources, particularly\u00a0[Introduction to Object Pooling.\n\nThe\u00a0`ObjectPool`\u00a0class is meant to be a singleton instance, meaning that we want to use the same pool regardless of where we are and not accidentally create numerous pools. We start by initializing each pool, which in our example is a pool of obstacles and a pool of rewards. For each object in the pool, we initialize them as disabled. The instantiation of our objects will be done with prefabs, but we'll get to that soon.\n\nWith the pool initialized, we can make use of the\u00a0GetPooledObstacle\u00a0or\u00a0GetPooledCake\u00a0methods to pull from the pool. Remember, items in the pool should be disabled. Otherwise, they are considered to be in use. We loop through our pools to find the first object that is disabled and if none exist, then we return null.\n\nAlright, so we have object pooling logic and need to fill the pool. This is where the object prefabs come in.\n\nAs of right now, you should have an\u00a0**Obstacle**\u00a0game object and a\u00a0**Cake**\u00a0game object in your scene. These game objects should have various physics and collision-related components attached, as well as the logic scripts. Create a\u00a0**Prefabs**\u00a0directory within your\u00a0**Assets**\u00a0directory and then drag each of the two game objects into that directory. Doing this will convert them from a game object in the scene to a reusable prefab.\n\nWith the prefabs in your\u00a0**Prefabs**\u00a0directory, delete the obstacle and reward game objects from your scene. We're going to add them to the scene via our object pooling script, not through the Unity UI.\n\nYou should have the\u00a0`ObjectPool`\u00a0script completed. Make sure you attach this script to the\u00a0**GameController**\u00a0game object. Then, drag each of your prefabs into the public variables of that script in the inspector for the\u00a0**GameController**\u00a0game object.\n\nJust like that, your prefabs will be pooled at the start of your game. However, just because we are pooling them doesn't mean we are using them. 
We need to create another script to take objects from the pool.\n\nCreate a\u00a0**GameController.cs**\u00a0file and include the following C# code:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class GameController : MonoBehaviour {\n\n public float obstacleTimer = 2;\n public float timeUntilObstacle = 1;\n public float cakeTimer = 1;\n public float timeUntilCake = 1;\n\n void Update() {\n timeUntilObstacle -= Time.deltaTime;\n timeUntilCake -= Time.deltaTime;\n if(timeUntilObstacle <= 0) {\n GameObject obstacle = ObjectPool.SharedInstance.GetPooledObstacle();\n if(obstacle != null) {\n obstacle.SetActive(true);\n }\n timeUntilObstacle = obstacleTimer;\n }\n if(timeUntilCake <= 0) {\n GameObject cake = ObjectPool.SharedInstance.GetPooledCake();\n if(cake != null) {\n cake.SetActive(true);\n }\n timeUntilCake = cakeTimer;\n }\n }\n}\n```\n\nIn the above code, we are making use of a few timers. We're creating timers to determine how frequently an object should be taken from the object pool.\n\nWhen the timer indicates we are ready to take from the pool, we use the\u00a0`GetPooledObstacle`\u00a0or\u00a0`GetPooledCake`\u00a0methods, set the object taken as enabled, and then reset the timer. Each instantiated prefab has the logic script attached, so once the object is enabled, it will start falling from the top of the screen.\n\nTo activate this script, make sure to attach it to the\u00a0**GameController**\u00a0game object within the scene.\n\n## Persisting Game Scores with the Realm SDK for Unity\n\nIf you ran the game as of right now, you'd be able to move your player around and collide with obstacles or rewards that continuously fall from the top of the screen. There's no concept of score-keeping or data persistence in the game up until this point.\n\nIncluding Realm in the game can be broken into two parts. For now, it is three parts due to needing to manually add the dependency to your project, but two parts will be evergreen.\n\nFrom the Realm .NET releases, find the latest release that includes Unity. For this tutorial, I'm using the **realm.unity.bundle-10.1.1.tgz** file.\n\nIn Unity, click **Window -> Package Manager** and choose to **Add package from tarball...**, then find the Realm SDK that you had just downloaded.\n\nIt may take a few minutes to import the SDK, but once it's done, we can start using it.\n\nBefore we start adding code, we need to be able to display our score information to the user. In your Unity scene, add three\u00a0**Text**\u00a0game objects: one for the high score, one for the current score, and one for the amount of cake or rewards obtained. We'll be using these game objects soon.\n\nLet's create a\u00a0**PlayerStats.cs**\u00a0file and add the following C# code:\n\n``` csharp\nusing Realms;\n\npublic class PlayerStats : RealmObject {\n\n PrimaryKey]\n public string Username { get; set; }\n\n public RealmInteger Score { get; set; }\n\n public PlayerStats() {}\n\n public PlayerStats(string Username, int Score) {\n this.Username = Username;\n this.Score = Score;\n }\n\n}\n```\n\nThe above code represents an object within our Realm data store. For our example, we want the high score for any given player to be in our Realm. While we won't have multiple users in our example, the foundation is there.\n\nTo use the above\u00a0`RealmObject`, we'll want to create another script. 
Create a\u00a0**Score.cs**\u00a0file and add the following code:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\nusing UnityEngine.UI;\nusing Realms;\n\npublic class Score : MonoBehaviour {\n\n private Realm _realm;\n private PlayerStats _playerStats;\n private int _cake;\n\n public Text highScoreText;\n public Text currentScoreText;\n public Text cakeText;\n\n void Start() {\n _realm = Realm.GetInstance();\n _playerStats = _realm.Find(\"nraboy\");\n if(_playerStats is null) {\n _realm.Write(() => {\n _playerStats = _realm.Add(new PlayerStats(\"nraboy\", 0));\n });\n }\n highScoreText.text = \"HIGH SCORE: \" + _playerStats.Score.ToString();\n _cake = 0;\n }\n\n void OnDisable() {\n _realm.Dispose();\n }\n\n void Update() {\n currentScoreText.text = \"SCORE: \" + (Mathf.Floor(Time.timeSinceLevelLoad) + _cake).ToString();\n cakeText.text = \"CAKE: \" + _cake;\n }\n\n public void CalculateHighScore() {\n int snapshotScore = (int)Mathf.Floor(Time.timeSinceLevelLoad) + _cake;\n if(_playerStats.Score < snapshotScore) {\n _realm.Write(() => {\n _playerStats.Score = snapshotScore;\n });\n }\n }\n\n public void AddCakeToScore() {\n _cake++;\n }\n\n}\n```\n\nIn the above code, when the\u00a0`Start`\u00a0method is called, we get the Realm instance and do a find for a particular user. If the user doesn't exist, we create a new one, at which point we can use our Realm like any other object in our application.\n\nWhen we decide to call the\u00a0`CalculateHighScore`\u00a0method, we do a check to see if the new score should be saved. In this example, we are using the rewards as a multiplier to the score.\n\nIf you've never used Realm before, the Realm SDK for Unity uses the same API as the .NET SDK. You can learn more about how to use it in the\u00a0[getting started guide. You can also swing by the\u00a0community\u00a0to get additional help.\n\nSo, we have the\u00a0`Score`\u00a0class. This script should be attached to the\u00a0**GameController**\u00a0game object and each of the\u00a0**Text**\u00a0game objects should be dragged into the appropriate areas using the inspector.\n\nWe're not done yet. Remember, our\u00a0**Player.cs**\u00a0file needed to update the score. Before we open our class, make sure to drag the\u00a0**GameController**\u00a0into the appropriate area of the\u00a0**Player**\u00a0game object using the Unity inspector.\n\nOpen the\u00a0**Player.cs**\u00a0file and add the following to the\u00a0`OnTriggerEnter2D`\u00a0method:\n\n``` csharp\nvoid OnTriggerEnter2D(Collider2D collider) {\n if(collider.gameObject.tag == \"Obstacle\") {\n score.CalculateHighScore();\n SceneManager.LoadScene(SceneManager.GetActiveScene().buildIndex);\n } else if(collider.gameObject.tag == \"Cake\") {\n score.AddCakeToScore();\n }\n}\n```\n\nWhen running the game, not only will we have something playable, but the score should update and persist depending on if we've failed at the level or not.\n\nThe above image is a reminder of what we've built, minus the graphic for the cake.\n\n## Conclusion\n\nYou just saw how to create an infinite runner type game with\u00a0Unity\u00a0and C# that uses the MongoDB Realm SDK for Unity when it comes to data persistence. Like previously mentioned, the Realm SDK is currently an alpha release, so it isn't a flawless experience and there are features missing. However, it works great for a simple game example like we saw here.\n\nIf you're interested in checking out this project, it can be found on\u00a0GitHub. 
There's also a video version of this tutorial, an on-demand live-stream, which can be found below.\n\nAs a fun fact, this infinite runner example wasn't my first attempt at one. I\u00a0built something similar\u00a0a long time ago and it was quite fun. Check it out and continue your journey as a game developer.\n\n>If you have questions, please head to our\u00a0developer community website\u00a0where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Realm", "C#", "Unity"], "pageDescription": "Learn how to use Unity and the Realm SDK for Unity to build an infinite runner style game.", "contentType": "Tutorial"}, "title": "Build an Infinite Runner Game with Unity and the Realm Unity SDK", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-how-to-add-realm-to-your-unity-project", "action": "created", "body": "# Persistence in Unity Using Realm\n\nWhen creating a game with Unity, we often reach the point where we need to save data that we need at a later point in time. This could be something simple, like a table of high scores, or a lot more complex, like the state of the game that got paused and now needs to be resumed exactly the way the user left it when they quit it earlier. Maybe you have tried this before using `PlayerPrefs` but your data was too complex to save it in there. Or you have tried SQL only to find it to be very complicated and cumbersome to use.\n\nRealm can help you achieve this easily and quickly with just some minor adjustments to your code.\n\nThe goal of this article is to show you how to add Realm to your Unity game and make sure your data is persisted. The Realm Unity SDK is part of our Realm .NET SDK. The documentation for the Realm .NET SDK will help you get started easily.\n\nThe first part of this tutorial will describe the example itself. If you are already familiar with Unity or really just want to see Realm in action, you can also skip it and jump straight to the second part.\n\n## Example game\n\nWe will be using a simple 3D chess game for demonstration purposes. Creating this game itself will not be part of this tutorial. However, this section will provide you with an overview so that you can follow along and add Realm to the game. This example can be found in our Unity examples repository.\n\nThe final implementation of the game including the usage of Realm is also part of the example repository.\n\nTo make it easy to find your way around this example, here are some notes to get you started:\n\nThe interesting part in the `MainScene` to look at is the `Board` which is made up of `Squares` and `Pieces`. The `Squares` are just slightly scaled and colored default `Cube` objects which we utilize to visualize the `Board` but also detect clicks for moving `Pieces` by using its already attached `Box Collider` component.\n\nThe `Pieces` have to be activated first, which happens by making them clickable as well. `Pieces` are not initially added to the `Board` but instead will be spawned by the `PieceSpawner`. 
You can find them in the `Prefabs` folder in the `Project` hierarchy.\n\nThe important part to look for here is the `Piece` script which detects clicks on this `Piece` (3) and offers a color change via `Select()` (1) and `Deselect()` (2) to visualize if a `Piece` is active or not.\n\n```cs\nusing UnityEngine;\n\npublic class Piece : MonoBehaviour\n{\n private Events events = default;\n private readonly Color selectedColor = new Color(1, 0, 0, 1);\n private readonly Color deselectedColor = new Color(1, 1, 1, 1);\n\n // 1\n public void Select()\n {\n gameObject.GetComponent().material.color = selectedColor;\n }\n\n // 2\n public void Deselect()\n {\n gameObject.GetComponent().material.color = deselectedColor;\n }\n\n // 3\n private void OnMouseDown()\n {\n events.PieceClickedEvent.Invoke(this);\n }\n\n private void Awake()\n {\n events = FindObjectOfType();\n }\n}\n\n```\n\nWe use two events to actually track the click on a `Piece` (1) or a `Square` (2):\n\n```cs\nusing UnityEngine;\nusing UnityEngine.Events;\n\npublic class PieceClickedEvent : UnityEvent { }\npublic class SquareClickedEvent : UnityEvent { }\n\npublic class Events : MonoBehaviour\n{\n // 1\n public readonly PieceClickedEvent PieceClickedEvent = new PieceClickedEvent();\n // 2\n public readonly SquareClickedEvent SquareClickedEvent = new SquareClickedEvent();\n}\n```\n\nThe `InputListener` waits for those events to be invoked and will then notify other parts of our game about those updates. Pieces need to be selected when clicked (1) and deselected if another one was clicked (2).\n\nClicking a `Square` while a `Piece` is selected will send a message (3) to the `GameState` to update the position of this `Piece`.\n\n```cs\nusing UnityEngine;\n\npublic class InputListener : MonoBehaviour\n{\n SerializeField] private Events events = default;\n [SerializeField] private GameState gameState = default;\n\n private Piece activePiece = default;\n\n private void OnEnable()\n {\n events.PieceClickedEvent.AddListener(OnPieceClicked);\n events.SquareClickedEvent.AddListener(OnSquareClicked);\n }\n\n private void OnDisable()\n {\n events.PieceClickedEvent.RemoveListener(OnPieceClicked);\n events.SquareClickedEvent.RemoveListener(OnSquareClicked);\n }\n\n private void OnPieceClicked(Piece piece)\n {\n if (activePiece != null)\n {\n // 2\n activePiece.Deselect();\n }\n // 1\n activePiece = piece;\n activePiece.Select();\n }\n\n private void OnSquareClicked(Vector3 position)\n {\n if (activePiece != null)\n {\n // 3\n gameState.MovePiece(activePiece, position);\n activePiece.Deselect();\n activePiece = null;\n }\n }\n}\n\n```\n\nThe actual movement as well as controlling the spawning and destroying of pieces is done by the `GameState`, in which all the above information eventually comes together to update `Piece` positions and possibly destroy other `Piece` objects. 
Whenever we move a `Piece` (1), we not only update its position (2) but also need to check if there is a `Piece` in that position already (3) and if so, destroy it (4).\n\nIn addition to updating the game while it is running, the `GameState` offers two more functionalities:\n- set up the initial board (5)\n- reset the board to its initial state (6)\n\n```cs\nusing System.Linq;\nusing UnityEngine;\n\npublic class GameState : MonoBehaviour\n{\n [SerializeField] private PieceSpawner pieceSpawner = default;\n [SerializeField] private GameObject pieces = default;\n\n // 1\n public void MovePiece(Piece movedPiece, Vector3 newPosition)\n {\n // 3\n // Check if there is already a piece at the new position and if so, destroy it.\n var attackedPiece = FindPiece(newPosition);\n if (attackedPiece != null)\n {\n // 4\n Destroy(attackedPiece.gameObject);\n }\n\n // 2\n // Update the movedPiece's GameObject.\n movedPiece.transform.position = newPosition;\n }\n\n // 6\n public void ResetGame()\n {\n // Destroy all GameObjects.\n foreach (var piece in pieces.GetComponentsInChildren())\n {\n Destroy(piece.gameObject);\n }\n\n // Recreate the GameObjects.\n pieceSpawner.CreateGameObjects(pieces);\n }\n\n private void Awake()\n {\n // 5\n pieceSpawner.CreateGameObjects(pieces);\n }\n\n private Piece FindPiece(Vector3 position)\n {\n return pieces.GetComponentsInChildren()\n .FirstOrDefault(piece => piece.transform.position == position);\n }\n}\n```\n\nGo ahead and try it out yourself if you like. You can play around with the board and pieces and reset if you want to start all over again.\n\nTo make sure the example is not overly complex and easy to follow, there are no rules implemented. You can move the pieces however you want. Also, the game is purely local for now and will be expanded using our Sync component in a later article to be playable online with others.\n\nIn the following section, I will explain how to make sure that the current game state gets saved and the players can resume the game at any state.\n\n## Adding Realm to your project\n\nThe first thing we need to do is to import the Realm framework into Unity.\nThe easiest way to do this is by using NPM.\n\nYou'll find it via `Windows` \u2192 `Package Manager` \u2192 cogwheel in the top right corner \u2192 `Advanced Project Settings`:\n\n![\n\nWithin the `Scoped Registries`, you can add the `Name`, `URL`, and `Scope` as follows:\n\nThis adds `NPM` as a source for libraries. The final step is to tell the project which dependencies to actually integrate into the project. This is done in the `manifest.json` file which is located in the `Packages` folder of your project.\n\nHere you need to add the following line to the `dependencies`:\n\n```json\n\"io.realm.unity\": \"\"\n```\n\nReplace `` with the most recent Realm version found in https://github.com/realm/realm-dotnet/releases and you're all set.\n\nThe final `manifest.json` should look something like this:\n\n```json\n{\n \"dependencies\": {\n ...\n \"io.realm.unity\": \"10.3.0\"\n },\n \"scopedRegistries\": \n {\n \"name\": \"NPM\",\n \"url\": \"https://registry.npmjs.org/\",\n \"scopes\": [\n \"io.realm.unity\"\n ]\n }\n ]\n}\n```\n\nWhen you switch back to Unity, it will reload the dependencies. 
If you then open the `Package Manager` again, you should see `Realm` as a new entry in the list on the left:\n\n![\n\nWe can now start using Realm in our Unity project.\n\n## Top-down or bottom-up?\n\nBefore we actually start adding Realm to our code, we need to think about how we want to achieve this and how the UI and database will interact with each other.\n\nThere are basically two options we can choose from: top-down or bottom-up.\n\nThe top-down approach would be to have the UI drive the changes. The `Piece` would know about its database object and whenever a `Piece` is moved, it would also update the database with its new position.\n\nThe preferred approach would be bottom-up, though. Changes will be applied to the Realm and it will then take care of whatever implications this has on the UI by sending notifications.\n\nLet's first look into the initial setup of the board.\n\n## Setting up the board\n\nThe first thing we want to do is to define a Realm representation of our piece since we cannot save the `MonoBehaviour` directly in Realm. Classes that are supposed to be saved in Realm need to subclass `RealmObject`. The class `PieceEntity` will represent such an object. Note that we cannot just duplicate the types from `Piece` since not all of them can be saved in Realm, like `Vector3` and `enum`.\n\nAdd the following scripts to the project:\n\n```cs\nusing Realms;\nusing UnityEngine;\n\npublic class PieceEntity : RealmObject\n{\n // 1\n public PieceType PieceType\n {\n get => (PieceType)Type;\n private set => Type = (int)value;\n }\n\n // 2\n public Vector3 Position\n {\n get => PositionEntity.ToVector3();\n set => PositionEntity = new Vector3Entity(value);\n }\n\n // 3\n private int Type { get; set; }\n private Vector3Entity PositionEntity { get; set; }\n\n // 4\n public PieceEntity(PieceType type, Vector3 position)\n {\n PieceType = type;\n Position = position;\n }\n\n // 5\n protected override void OnPropertyChanged(string propertyName)\n {\n if (propertyName == nameof(PositionEntity))\n {\n RaisePropertyChanged(nameof(Position));\n }\n }\n\n // 6\n private PieceEntity()\n {\n }\n}\n```\n\n```cs\nusing Realms;\nusing UnityEngine;\n\npublic class Vector3Entity : EmbeddedObject // 7\n{\n public float X { get; private set; }\n public float Y { get; private set; }\n public float Z { get; private set; }\n\n public Vector3Entity(Vector3 vector) // 8\n {\n X = vector.x;\n Y = vector.y;\n Z = vector.z;\n }\n\n public Vector3 ToVector3() => new Vector3(X, Y, Z); // 9\n\n private Vector3Entity() // 10\n {\n }\n}\n```\n\nEven though we cannot save the `PieceType` (1) and the position (2) directly in the Realm, we can still expose them using backing variables (3) to make working with this class easier while still fulfilling the requirements for saving data in Realm.\n\nAdditionally, we provide a convenience constructor (4) for setting those two properties. A default constructor (6) also has to be provided for every `RealmObject`. Since we are not going to use it here, though, we can set it to `private`.\n\nNote that one of these backing variables is a `RealmObject` itself, or rather a subclass of it: `EmbeddedObject` (7). By extracting the position to a separate class `Vector3Entity` the `PieceEntity` is more readable. Another plus is that we can use the `EmbeddedObject` to represent a 1:1 relationship. 
Every `PieceEntity` can only have one `Vector3Entity` and even more importantly, every `Vector3Entity` can only belong to one `PieceEntity` because there can only ever be one `Piece` on any given `Square`.\n\nThe `Vector3Entity`, like the `PieceEntity`, has some convenience functionality like a constructor that takes a `Vector3` (8), the `ToVector3()` function (9) and the private, mandatory default constructor (10) like `PieceEntity`.\n\nLooking back at the `PieceEntity`, you will notice one more function: `OnPropertyChanged` (5). Realm sends notifications for changes to fields saved in the database. Since we expose those fields using `PieceType` and `Position`, we need to make sure those notifications are passed on. This is achieved by calling `RaisePropertyChanged(nameof(Position));` whenever `PositionEntity` changes.\n\nThe next step is to add some way to actually add `Pieces` to the `Realm`. The current database state will always represent the current state of the board. When we create a new `PieceEntity`\u2014for example, when setting up the board\u2014the `GameObject` for it (`Piece`) will be created. If a `Piece` gets moved, the `PieceEntity` will be updated by the `GameState` which then leads to the `Piece`'s `GameObject` being updated using above mentioned notifications.\n\nFirst, we will need to set up the board. To achieve this using the bottom-up approach, we adjust the `PieceSpawner` as follows:\n\n```cs\nusing Realms;\nusing UnityEngine;\n\npublic class PieceSpawner : MonoBehaviour\n{\n SerializeField] private Piece prefabBlackBishop = default;\n [SerializeField] private Piece prefabBlackKing = default;\n [SerializeField] private Piece prefabBlackKnight = default;\n [SerializeField] private Piece prefabBlackPawn = default;\n [SerializeField] private Piece prefabBlackQueen = default;\n [SerializeField] private Piece prefabBlackRook = default;\n\n [SerializeField] private Piece prefabWhiteBishop = default;\n [SerializeField] private Piece prefabWhiteKing = default;\n [SerializeField] private Piece prefabWhiteKnight = default;\n [SerializeField] private Piece prefabWhitePawn = default;\n [SerializeField] private Piece prefabWhiteQueen = default;\n [SerializeField] private Piece prefabWhiteRook = default;\n\n public void CreateNewBoard(Realm realm)\n {\n realm.Write(() =>\n {\n // 1\n realm.RemoveAll();\n\n // 2\n realm.Add(new PieceEntity(PieceType.WhiteRook, new Vector3(1, 0, 1)));\n realm.Add(new PieceEntity(PieceType.WhiteKnight, new Vector3(2, 0, 1)));\n realm.Add(new PieceEntity(PieceType.WhiteBishop, new Vector3(3, 0, 1)));\n realm.Add(new PieceEntity(PieceType.WhiteQueen, new Vector3(4, 0, 1)));\n realm.Add(new PieceEntity(PieceType.WhiteKing, new Vector3(5, 0, 1)));\n realm.Add(new PieceEntity(PieceType.WhiteBishop, new Vector3(6, 0, 1)));\n realm.Add(new PieceEntity(PieceType.WhiteKnight, new Vector3(7, 0, 1)));\n realm.Add(new PieceEntity(PieceType.WhiteRook, new Vector3(8, 0, 1)));\n\n realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(1, 0, 2)));\n realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(2, 0, 2)));\n realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(3, 0, 2)));\n realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(4, 0, 2)));\n realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(5, 0, 2)));\n realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(6, 0, 2)));\n realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(7, 0, 2)));\n realm.Add(new PieceEntity(PieceType.WhitePawn, new Vector3(8, 0, 2)));\n\n 
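            // Black side: mirror of the white layout, with pawns on rank 7 and the back row on rank 8.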
realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(1, 0, 7)));\n realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(2, 0, 7)));\n realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(3, 0, 7)));\n realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(4, 0, 7)));\n realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(5, 0, 7)));\n realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(6, 0, 7)));\n realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(7, 0, 7)));\n realm.Add(new PieceEntity(PieceType.BlackPawn, new Vector3(8, 0, 7)));\n\n realm.Add(new PieceEntity(PieceType.BlackRook, new Vector3(1, 0, 8)));\n realm.Add(new PieceEntity(PieceType.BlackKnight, new Vector3(2, 0, 8)));\n realm.Add(new PieceEntity(PieceType.BlackBishop, new Vector3(3, 0, 8)));\n realm.Add(new PieceEntity(PieceType.BlackQueen, new Vector3(4, 0, 8)));\n realm.Add(new PieceEntity(PieceType.BlackKing, new Vector3(5, 0, 8)));\n realm.Add(new PieceEntity(PieceType.BlackBishop, new Vector3(6, 0, 8)));\n realm.Add(new PieceEntity(PieceType.BlackKnight, new Vector3(7, 0, 8)));\n realm.Add(new PieceEntity(PieceType.BlackRook, new Vector3(8, 0, 8)));\n });\n }\n\n public void SpawnPiece(PieceEntity pieceEntity, GameObject parent)\n {\n var piecePrefab = pieceEntity.PieceType switch\n {\n PieceType.BlackBishop => prefabBlackBishop,\n PieceType.BlackKing => prefabBlackKing,\n PieceType.BlackKnight => prefabBlackKnight,\n PieceType.BlackPawn => prefabBlackPawn,\n PieceType.BlackQueen => prefabBlackQueen,\n PieceType.BlackRook => prefabBlackRook,\n PieceType.WhiteBishop => prefabWhiteBishop,\n PieceType.WhiteKing => prefabWhiteKing,\n PieceType.WhiteKnight => prefabWhiteKnight,\n PieceType.WhitePawn => prefabWhitePawn,\n PieceType.WhiteQueen => prefabWhiteQueen,\n PieceType.WhiteRook => prefabWhiteRook,\n _ => throw new System.Exception(\"Invalid piece type.\")\n };\n\n var piece = Instantiate(piecePrefab, pieceEntity.Position, Quaternion.identity, parent.transform);\n piece.Entity = pieceEntity;\n }\n}\n```\n\nThe important change here is `CreateNewBoard`. Instead of spawning the `Piece`s, we now add `PieceEntity` objects to the Realm. When we look at the changes in `GameState`, we will see how this actually creates a `Piece` per `PieceEntity`.\n\nHere we just wipe the database (1) and then add new `PieceEntity` objects (2). Note that this is wrapped by a `realm.write` block. Whenever we want to change the database, we need to enclose it in a write transaction. This makes sure that no other piece of code can change the database at the same time since transactions block each other.\n\nThe last step to create a new board is to update the `GameState` to make use of the new `PieceSpawner` and the `PieceEntity` that we just created.\n\nWe'll go through these changes step by step. First we also need to import Realm here as well:\n\n```cs\nusing Realms;\n```\n\nThen we add a private field to save our `Realm` instance to avoid creating it over and over again. We also create another private field to save the collection of pieces that are on the board and a notification token which we need for above mentioned notifications:\n\n```cs\nprivate Realm realm;\nprivate IQueryable pieceEntities;\nprivate IDisposable notificationToken;\n```\n\nIn `Awake`, we do need to get access to the `Realm`. 
This is achieved by opening an instance of it (1) and then asking it for all `PieceEntity` objects currently saved using `realm.All` (2) and assigning them to our `pieceEntities` field:\n\n```cs\nprivate void Awake()\n{\n realm = Realm.GetInstance(); // 1\n pieceEntities = realm.All(); // 2\n\n // 3\n notificationToken = pieceEntities.SubscribeForNotifications((sender, changes, error) =>\n {\n // 4\n if (error != null)\n {\n Debug.Log(error.ToString());\n return;\n }\n\n // 5\n // Initial notification\n if (changes == null)\n {\n // Check if we actually have `PieceEntity` objects in our Realm (which means we resume a game).\n if (sender.Count > 0)\n {\n // 6\n // Each `RealmObject` needs a corresponding `GameObject` to represent it.\n foreach (PieceEntity pieceEntity in sender)\n {\n pieceSpawner.SpawnPiece(pieceEntity, pieces);\n }\n }\n else\n {\n // 7\n // No game was saved, create a new board.\n pieceSpawner.CreateNewBoard(realm);\n }\n return;\n }\n\n // 8\n foreach (var index in changes.InsertedIndices)\n {\n var pieceEntity = sender[index];\n pieceSpawner.SpawnPiece(pieceEntity, pieces);\n }\n });\n}\n```\n\nNote that collections are live objects. This has two positive implications: Every access to the object reference always returns an updated representation of said object. Because of this, every subsequent change to the object will be visible any time the object is accessed again. We also get notifications for those changes if we subscribed to them. This can be done by calling `SubscribeForNotifications` on a collection (3).\n\nApart from an error object that we need to check (4), we also receive the `changes` and the `sender` (the updated collection itself) with every notification. For every new collection of objects, an initial notification is sent that does not include any `changes` but gives us the opportunity to do some initial setup work (5).\n\nIn case we resume a game, we'll already see `PieceEntity` objects in the database even for the initial notification. We need to spawn one `Piece` per `PieceEntity` to represent it (6). We make use of the `SpawnPiece` function in `PieceSpawner` to achieve this. In case the database does not have any objects yet, we need to create the board from scratch (7). Here we use the `CreateNewBoard` function we added earlier to the `PieceSpawner`.\n\nOn top of the initial notification, we also expect to receive a notification every time a `PieceEntity` is inserted into the Realm. This is where we continue the `CreateNewBoard` functionality we started in the `PieceSpawner` by adding new objects to the database. After those changes happen, we end up with `changes` (8) inside the notifications. Now we need to iterate over all new `PieceEntity` objects in the `sender` (which represents the `pieceEntities` collection) and add a `Piece` for each new `PieceEntity` to the board.\n\nApart from inserting new pieces when the board gets set up, we also need to take care of movement and pieces attacking each other. This will be explained in the next section.\n\n## Updating the position of a PieceEntity\n\nWhenever we receive a click on a `Square` and therefore call `MovePiece` in `GameState`, we need to update the `PieceEntity` instead of directly moving the corresponding `GameObject`. 
The movement of the `Piece` will then happen via the `PropertyChanged` notifications as we saw earlier.\n\n```cs\npublic void MovePiece(Vector3 oldPosition, Vector3 newPosition)\n{\n realm.Write(() =>\n {\n // 1\n var attackedPiece = FindPieceEntity(newPosition);\n if (attackedPiece != null)\n {\n realm.Remove(attackedPiece);\n }\n\n // 2\n var movedPieceEntity = FindPieceEntity(oldPosition);\n movedPieceEntity.Position = newPosition;\n });\n}\n\n// 3\nprivate PieceEntity FindPieceEntity(Vector3 position)\n{\n return pieceEntities\n .Filter(\"PositionEntity.X == $0 && PositionEntity.Y == $1 && PositionEntity.Z == $2\",\n position.x, position.y, position.z)\n .FirstOrDefault();\n}\n```\n\nBefore actually moving the `PieceEntity`, we do need to check if there is already a `PieceEntity` at the desired position and if so, destroy it. To find a `PieceEntity` at the `newPosition` and also to find the `PieceEntity` that needs to be moved from `oldPosition` to `newPosition`, we can use queries on the `pieceEntities` collection (3).\n\nBy querying the collection (calling `Filter`), we can look for one or multiple `RealmObject`s with specific characteristics. In this case, we're interested in the `RealmObject` that represents the `Piece` we are looking for. Note that when using a `Filter` we can only filter using the Realm properties saved in the database, not the exposed properties (`Position` and `PieceType`) exposed for convenience by the `PieceEntity`.\n\nIf there is an `attackedPiece` at the target position, we need to delete the corresponding `PieceEntity` for this `GameObject` (1). After the `attackedPiece` is updated, we can then also update the `movedPiece` (2).\n\nLike the initial setup of the board, this has to be called within a write transaction to make sure no other code is changing the database at the same time.\n\nThis is all we had to do to update and persist the position. Go ahead and start the game. Stop and start it again and you should now see the state being persisted.\n\n## Resetting the board\n\nThe final step will be to also update our `ResetGame` button to update (or rather, wipe) the `Realm`. 
At the moment, it does not update the state in the database and just recreates the `GameObject`s.\n\nResetting works similar to what we do in `Awake` in case there were no entries in the database\u2014for example, when starting the game for the first time.\n\nWe can reuse the `CreateNewBoard` functionality here since it includes wiping the database before actually re-creating it:\n\n```cs\npublic void ResetGame()\n{\n pieceSpawner.CreateNewBoard(realm);\n}\n```\n\nWith this change, our game is finished and fully functional using a local `Realm` to save the game's state.\n\n## Recap and conclusion\n\nIn this tutorial, we have seen that saving your game and resuming it later can be easily achieved by using `Realm`.\n\nThe steps we needed to take:\n\n- Add `Realm` via NPM as a dependency.\n- Import `Realm` in any class that wants to use it by calling `using Realms;`.\n- Create a new `Realm` instance via `Realm.GetInstance()` to get access to the database.\n- Define entites by subclassing `RealmObject` (or any of its subclasses):\n - Fields need to be public and primitive values or lists.\n - A default constructor is mandatory.\n - A convenience constructor and additional functions can be defined.\n- Write to a `Realm` using `realm.Write()` to avoid data corruption.\n- CRUD operations (need to use a `write` transaction):\n - Use `realm.Add()` to `Create` a new object.\n - Use `realm.Remove()` to `Delete` an object.\n - `Read` and `Update` can be achieved by simply `getting` and `setting` the `public fields`.\n\nWith this, you should be ready to use Realm in your games.\n\nIf you have questions, please head to our [developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB and Realm.", "format": "md", "metadata": {"tags": ["Realm", "C#", "Unity"], "pageDescription": "This article shows how to integrate the Realm Unity SDK into your Unity game. We will cover everything you need to know to get started: installing the SDK, defining your models, and connecting the database to your GameObjects.", "contentType": "Tutorial"}, "title": "Persistence in Unity Using Realm", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/python-quickstart-aggregation", "action": "created", "body": "# Getting Started with Aggregation Pipelines in Python\n\n \n\nMongoDB's aggregation pipelines are one of its most powerful features. They allow you to write expressions, broken down into a series of stages, which perform operations including aggregation, transformations, and joins on the data in your MongoDB databases. This allows you to do calculations and analytics across documents and collections within your MongoDB database.\n\n## Prerequisites\n\nThis quick start is the second in a series of Python posts. I *highly* recommend you start with my first post, Basic MongoDB Operations in Python, which will show you how to get set up correctly with a free MongoDB Atlas database cluster containing the sample data you'll be working with here. Go read it and come back. I'll wait. Without it, you won't have the database set up correctly to run the code in this quick start guide.\n\nIn summary, you'll need:\n\n- An up-to-date version of Python 3. I wrote the code in this tutorial in Python 3.8, but it should run fine in version 3.6+.\n- A code editor of your choice. 
I recommend either PyCharm or the free VS Code with the official Python extension.\n- A MongoDB cluster containing the `sample_mflix` dataset. You can find instructions to set that up in the first blog post in this series.\n\n## Getting Started\n\nMongoDB's aggregation pipelines are very powerful and so they can seem a little overwhelming at first. For this reason, I'll start off slowly. First, I'll show you how to build up a pipeline that duplicates behaviour that you can already achieve with MQL queries, using PyMongo's `find()` method, but instead using an aggregation pipeline with `$match`, `$sort`, and `$limit` stages. Then, I'll show how to make queries that go beyond MQL, demonstrating using `$lookup` to include related documents from another collection. Finally, I'll put the \"aggregation\" into \"aggregation pipeline\" by showing you how to use `$group` to group together documents to form new document summaries.\n\n>All of the sample code for this quick start series can be found on GitHub. I recommend you check it out if you get stuck, but otherwise, it's worth following the tutorial and writing the code yourself!\n\nAll of the pipelines in this post will be executed against the sample_mflix database's `movies` collection. It contains documents that look like this:\n\n``` python\n{\n '_id': ObjectId('573a1392f29313caabcdb497'),\n 'awards': {'nominations': 7,\n 'text': 'Won 1 Oscar. Another 2 wins & 7 nominations.',\n 'wins': 3},\n 'cast': 'Janet Gaynor', 'Fredric March', 'Adolphe Menjou', 'May Robson'],\n 'countries': ['USA'],\n 'directors': ['William A. Wellman', 'Jack Conway'],\n 'fullplot': 'Esther Blodgett is just another starry-eyed farm kid trying to '\n 'break into the movies. Waitressing at a Hollywood party, she '\n 'catches the eye of alcoholic star Norman Maine, is given a test, '\n 'and is caught up in the Hollywood glamor machine (ruthlessly '\n 'satirized). She and her idol Norman marry; but his career '\n 'abruptly dwindles to nothing',\n 'genres': ['Drama'],\n 'imdb': {'id': 29606, 'rating': 7.7, 'votes': 5005},\n 'languages': ['English'],\n 'lastupdated': '2015-09-01 00:55:54.333000000',\n 'plot': 'A young woman comes to Hollywood with dreams of stardom, but '\n 'achieves them only with the help of an alcoholic leading man whose '\n 'best days are behind him.',\n 'poster': 'https://m.media-amazon.com/images/M/MV5BMmE5ODI0NzMtYjc5Yy00MzMzLTk5OTQtN2Q3MzgwOTllMTY3XkEyXkFqcGdeQXVyNjc0MzMzNjA@._V1_SY1000_SX677_AL_.jpg',\n 'rated': 'NOT RATED',\n 'released': datetime.datetime(1937, 4, 27, 0, 0),\n 'runtime': 111,\n 'title': 'A Star Is Born',\n 'tomatoes': {'critic': {'meter': 100, 'numReviews': 11, 'rating': 7.4},\n 'dvd': datetime.datetime(2004, 11, 16, 0, 0),\n 'fresh': 11,\n 'lastUpdated': datetime.datetime(2015, 8, 26, 18, 58, 34),\n 'production': 'Image Entertainment Inc.',\n 'rotten': 0,\n 'viewer': {'meter': 79, 'numReviews': 2526, 'rating': 3.6},\n 'website': 'http://www.vcientertainment.com/Film-Categories?product_id=73'},\n 'type': 'movie',\n 'writers': ['Dorothy Parker (screen play)',\n 'Alan Campbell (screen play)',\n 'Robert Carson (screen play)',\n 'William A. 
Wellman (from a story by)',\n 'Robert Carson (from a story by)'],\n 'year': 1937}\n```\n\nThere's a lot of data there, but I'll be focusing mainly on the `_id`, `title`, `year`, and `cast` fields.\n\n## Your First Aggregation Pipeline\n\nAggregation pipelines are executed by PyMongo using Collection's [aggregate() method.\n\nThe first argument to `aggregate()` is a sequence of pipeline stages to be executed. Much like a query, each stage of an aggregation pipeline is a BSON document, and PyMongo will automatically convert a `dict` into a BSON document for you.\n\nAn aggregation pipeline operates on *all* of the data in a collection. Each stage in the pipeline is applied to the documents passing through, and whatever documents are emitted from one stage are passed as input to the next stage, until there are no more stages left. At this point, the documents emitted from the last stage in the pipeline are returned to the client program, in a similar way to a call to `find()`.\n\nIndividual stages, such as `$match`, can act as a filter, to only pass through documents matching certain criteria. Other stage types, such as `$project`, `$addFields`, and `$lookup` will modify the content of individual documents as they pass through the pipeline. Finally, certain stage types, such as `$group`, will create an entirely new set of documents based on the documents passed into it taken as a whole. None of these stages change the data that is stored in MongoDB itself. They just change the data before returning it to your program! There *is* a stage, $set, which can save the results of a pipeline back into MongoDB, but I won't be covering it in this quick start.\n\nI'm going to assume that you're working in the same environment that you used for the last post, so you should already have PyMongo and python-dotenv installed, and you should have a `.env` file containing your `MONGODB_URI` environment variable.\n\n### Finding and Sorting\n\nFirst, paste the following into your Python code:\n\n``` python\nimport os\nfrom pprint import pprint\n\nimport bson\nfrom dotenv import load_dotenv\nimport pymongo\n\n# Load config from a .env file:\nload_dotenv(verbose=True)\nMONGODB_URI = os.environ\"MONGODB_URI\"]\n\n# Connect to your MongoDB cluster:\nclient = pymongo.MongoClient(MONGODB_URI)\n\n# Get a reference to the \"sample_mflix\" database:\ndb = client[\"sample_mflix\"]\n\n# Get a reference to the \"movies\" collection:\nmovie_collection = db[\"movies\"]\n```\n\nThe above code will provide a global variable, a Collection object called `movie_collection`, which points to the `movies` collection in your database.\n\nHere is some code which creates a pipeline, executes it with `aggregate`, and then loops through and prints the detail of each movie in the results. Paste it into your program.\n\n``` python\npipeline = [\n {\n \"$match\": {\n \"title\": \"A Star Is Born\"\n }\n }, \n {\n \"$sort\": {\n \"year\": pymongo.ASCENDING\n }\n },\n]\nresults = movie_collection.aggregate(pipeline)\nfor movie in results:\n print(\" * {title}, {first_castmember}, {year}\".format(\n title=movie[\"title\"],\n first_castmember=movie[\"cast\"][0],\n year=movie[\"year\"],\n ))\n```\n\nThis pipeline has two stages. The first is a [$match stage, which is similar to querying a collection with `find()`. It filters the documents passing through the stage based on an MQL query. Because it's the first stage in the pipeline, its input is all of the documents in the `movie` collection. 
The MQL query for the `$match` stage filters on the `title` field of the input documents, so the only documents that will be output from this stage will have a title of \"A Star Is Born.\"\n\nThe second stage is a `$sort` stage. Only the documents for the movie \"A Star Is Born\" are passed to this stage, so the result will be all of the movies called \"A Star Is Born,\" now sorted by their year field, with the oldest movie first.\n\nCalls to `aggregate()` return a cursor pointing to the resulting documents. The cursor can be looped through like any other sequence. The code above loops through all of the returned documents and prints a short summary, consisting of the title, the first actor in the `cast` array, and the year the movie was produced.\n\nExecuting the code above results in:\n\n``` none\n* A Star Is Born, Janet Gaynor, 1937\n* A Star Is Born, Judy Garland, 1954\n* A Star Is Born, Barbra Streisand, 1976\n```\n\n### Refactoring the Code\n\nIt is possible to build up whole aggregation pipelines as a single data structure, as in the example above, but it's not necessarily a good idea. Pipelines can get long and complex. For this reason, I recommend you build up each stage of your pipeline as a separate variable, and then combine the stages into a pipeline at the end, like this:\n\n``` python\n# Match title = \"A Star Is Born\":\nstage_match_title = {\n \"$match\": {\n \"title\": \"A Star Is Born\"\n }\n}\n\n# Sort by year, ascending:\nstage_sort_year_ascending = {\n \"$sort\": { \"year\": pymongo.ASCENDING }\n}\n\n# Now the pipeline is easier to read:\npipeline = [\n stage_match_title, \n stage_sort_year_ascending,\n]\n```\n\n### Limit the Number of Results\n\nImagine I wanted to obtain the most recent production of \"A Star Is Born\" from the movies collection.\n\nThis can be thought of as three stages, executed in order:\n\n1. Obtain the movie documents for \"A Star Is Born.\"\n2. Sort by year, descending.\n3. Discard all but the first document.\n\nThe first stage is already the same as `stage_match_title` above. The second stage is the same as `stage_sort_year_ascending`, but with `pymongo.ASCENDING` changed to `pymongo.DESCENDING`. The third stage is a `$limit` stage.\n\nThe **modified and new** code looks like this:\n\n``` python\n# Sort by year, descending:\nstage_sort_year_descending = {\n \"$sort\": { \"year\": pymongo.DESCENDING }\n}\n\n# Limit to 1 document:\nstage_limit_1 = { \"$limit\": 1 }\n\npipeline = [\n stage_match_title, \n stage_sort_year_descending,\n stage_limit_1,\n]\n```\n\nIf you make the changes above and execute your code, then you should see just the following line:\n\n``` none\n* A Star Is Born, Barbra Streisand, 1976\n```\n\n>Wait a minute! Why isn't there a document for the amazing production with Lady Gaga and Bradley Cooper?\n>\n>Hold on there! You'll find the answer to this mystery, and more, later on in this blog post.\n\nOkay, so now you know how to filter, sort, and limit the contents of a collection using an aggregation pipeline. But these are just operations you can already do with `find()`! Why would you want to use these complex, new-fangled aggregation pipelines?\n\nRead on, my friend, and I will show you the *true power* of MongoDB aggregation pipelines.\n\n## Look Up Related Data in Other Collections\n\nThere's a dirty secret, hiding in the `sample_mflix` database. As well as the `movies` collection, there's also a collection called `comments`. 
Documents in the `comments` collection look like this:\n\n``` python\n{\n '_id': ObjectId('5a9427648b0beebeb69579d3'),\n 'movie_id': ObjectId('573a1390f29313caabcd4217'),\n 'date': datetime.datetime(1983, 4, 27, 20, 39, 15),\n 'email': 'cameron_duran@fakegmail.com',\n 'name': 'Cameron Duran',\n 'text': 'Quasi dicta culpa asperiores quaerat perferendis neque. Est animi '\n 'pariatur impedit itaque exercitationem.'}\n```\n\nIt's a comment for a movie. I'm not sure why people are writing Latin comments for these movies, but let's go with it. The second field, `movie_id`, corresponds to the `_id` value of a document in the `movies` collection.\n\nSo, it's a comment *related* to a movie!\n\nDoes MongoDB enable you to query movies and embed the related comments, like a JOIN in a relational database? *Yes it does!* With the `$lookup` stage.\n\nI'll show you how to obtain related documents from another collection, and embed them in the documents from your primary collection. First, create a new pipeline from scratch, and start with the following:\n\n``` python\n# Look up related documents in the 'comments' collection:\nstage_lookup_comments = {\n \"$lookup\": {\n \"from\": \"comments\", \n \"localField\": \"_id\", \n \"foreignField\": \"movie_id\", \n \"as\": \"related_comments\",\n }\n}\n\n# Limit to the first 5 documents:\nstage_limit_5 = { \"$limit\": 5 }\n\npipeline = [\n stage_lookup_comments,\n stage_limit_5,\n]\n\nresults = movie_collection.aggregate(pipeline)\nfor movie in results:\n pprint(movie)\n```\n\nThe stage I've called `stage_lookup_comments` is a `$lookup` stage. It will look up documents from the `comments` collection that have the same movie id. The matching comments will be stored in an array field named `related_comments`, containing all of the comments that have this movie's `_id` value as their `movie_id`.\n\nI've added a `$limit` stage just to ensure that there's a reasonable amount of output without being overwhelming.\n\nNow, execute the code.\n\n>You may notice that the pipeline above runs pretty slowly! There are two reasons for this:\n>\n>- There are 23.5k movie documents and 50k comments.\n>- There's a missing index on the `comments` collection. It's missing on purpose, to teach you about indexes! \n>\n>I'm not going to show you how to fix the index problem right now. I'll write about that in a later post in this series, focusing on indexes. Instead, I'll show you a trick for working with slow aggregation pipelines while you're developing.\n>\n>Working with slow pipelines is a pain while you're writing and testing the pipeline. *But*, if you put a temporary `$limit` stage at the *start* of your pipeline, it will make the query faster (although the results may be different because you're not running on the whole dataset).\n>\n>When I was writing this pipeline, I had a first stage of `{ \"$limit\": 1000 }`.\n>\n>When you have finished crafting the pipeline, you can comment out the first stage so that the pipeline will now run on the whole collection. **Don't forget to remove the first stage, or you're going to get the wrong results!**\n\nThe aggregation pipeline above will print out all of the contents of five movie documents. 
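To make that trick concrete, here's a sketch of what the temporary stage looks like in this pipeline (the `stage_limit_development` name is just one I'm using for illustration):\n\n``` python\n# Temporary stage to speed up development runs -- remove it when the pipeline is finished!\nstage_limit_development = { \"$limit\": 1000 }\n\npipeline = [\n    stage_limit_development,  # TODO: remove before running on the whole collection\n    stage_lookup_comments,\n    stage_limit_5,\n]\n```\n\nWith or without that temporary stage, running the pipeline prints each of the five movie documents in full.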
It's quite a lot of data, but if you look carefully, you should see that there's a new field in each document that looks like this:\n\n``` python\n'related_comments': []\n```\n\n### Matching on Array Length\n\nIf you're *lucky*, you may have some documents in the array, but it's unlikely, as most of the movies have no comments. Now, I'll show you how to add some stages to match only movies which have more than two comments.\n\nIdeally, you'd be able to add a single `$match` stage which obtained the length of the `related_comments` field and matched it against the expression `{ \"$gt\": 2 }`. In this case, it's actually two steps:\n\n- Add a field (I'll call it `comment_count`) containing the length of the `related_comments` field.\n- Match where the value of `comment_count` is greater than two.\n\nHere is the code for the two stages:\n\n``` python\n# Calculate the number of comments for each movie:\nstage_add_comment_count = {\n \"$addFields\": {\n \"comment_count\": {\n \"$size\": \"$related_comments\"\n }\n } \n}\n\n# Match movie documents with more than 2 comments:\nstage_match_with_comments = {\n \"$match\": {\n \"comment_count\": {\n \"$gt\": 2\n }\n } \n}\n```\n\nThe two stages go after the `$lookup` stage, and before the `stage_limit_5` stage:\n\n``` python\npipeline = [\n stage_lookup_comments,\n stage_add_comment_count,\n stage_match_with_comments,\n stage_limit_5,\n]\n```\n\nWhile I'm here, I'm going to clean up the output of this code by printing a short summary for each movie, instead of using `pprint`:\n\n``` python\nresults = movie_collection.aggregate(pipeline)\nfor movie in results:\n print(movie[\"title\"])\n print(\"-\" * len(movie[\"title\"]))\n print(\"Comment count:\", movie[\"comment_count\"])\n\n # Loop through the first 5 comments and print the name and text:\n for comment in movie[\"related_comments\"][:5]:\n print(\" * {name}: {text}\".format(\n name=comment[\"name\"],\n text=comment[\"text\"]))\n```\n\n*Now* when you run this code, you should see something more like this:\n\n``` none\nFootsteps in the Fog\n--------------------\nComment count: 3\n* Sansa Stark: Error ex culpa dignissimos assumenda voluptates vel. Qui inventore quae quod facere veniam quaerat quibusdam. Accusamus ab deleniti placeat non.\n* Theon Greyjoy: Animi dolor minima culpa sequi voluptate. Possimus necessitatibus voluptatem hic cum numquam voluptates.\n* Donna Smith: Et esse nulla ducimus tempore aliquid. Suscipit iste dignissimos voluptate velit. Laboriosam sequi quae fugiat similique alias. 
Corporis cumque labore veniam dignissimos.\n```\n\nIt's good to see Sansa Stark from Game of Thrones really knows her Latin, isn't it?\n\nNow I've shown you how to work with lookups in your pipelines, I'll show you how to use the `$group` stage to do actual *aggregation*.\n\n## Grouping Documents with `$group`\n\nI'll start with a new pipeline again.\n\nThe `$group` stage is one of the more difficult stages to understand, so I'll break this down slowly.\n\nStart with the following code:\n\n``` python\n# Group movies by year, producing 'year-summary' documents that look like:\n# {\n# '_id': 1917,\n# }\nstage_group_year = {\n \"$group\": {\n \"_id\": \"$year\",\n }\n}\n\npipeline = [\n stage_group_year,\n]\nresults = movie_collection.aggregate(pipeline)\n\n# Loop through the 'year-summary' documents:\nfor year_summary in results:\n pprint(year_summary)\n```\n\nExecute this code, and you should see something like this:\n\n``` none\n{'_id': 1978}\n{'_id': 1996}\n{'_id': 1931}\n{'_id': '2000\u00e8'}\n{'_id': 1960}\n{'_id': 1972}\n{'_id': 1943}\n{'_id': '1997\u00e8'}\n{'_id': 2010}\n{'_id': 2004}\n{'_id': 1947}\n{'_id': '1987\u00e8'}\n{'_id': 1954}\n...\n```\n\nEach line is a document emitted from the aggregation pipeline. But you're not looking at *movie* documents any more. The `$group` stage groups input documents by the specified `_id` expression and output one document for each unique `_id` value. In this case, the expression is `$year`, which means one document will be emitted for each unique value of the `year` field. Each document emitted can (and usually will) also contain values generated from aggregating data from the grouped documents.\n\nChange the stage definition to the following:\n\n``` python\nstage_group_year = {\n \"$group\": {\n \"_id\": \"$year\",\n # Count the number of movies in the group:\n \"movie_count\": { \"$sum\": 1 }, \n }\n}\n```\n\nThis will add a `movie_count` field, containing the result of adding `1` for every document in the group. In other words, it counts the number of movie documents in the group. If you execute the code now, you should see something like the following:\n\n``` none\n{'_id': '1997\u00e8', 'movie_count': 2}\n{'_id': 2010, 'movie_count': 970}\n{'_id': 1947, 'movie_count': 38}\n{'_id': '1987\u00e8', 'movie_count': 1}\n{'_id': 2012, 'movie_count': 1109}\n{'_id': 1954, 'movie_count': 64}\n...\n```\n\nThere are a number of [accumulator operators, like `$sum`, that allow you to summarize data from the group. If you wanted to build an array of all the movie titles in the emitted document, you could add `\"movie_titles\": { \"$push\": \"$title\" },` to the `$group` stage. In that case, you would get documents that look like this:\n\n``` python\n{\n '_id': 1917,\n 'movie_count': 3,\n 'movie_titles': \n 'The Poor Little Rich Girl',\n 'Wild and Woolly',\n 'The Immigrant'\n ]\n}\n```\n\nSomething you've probably noticed from the output above is that some of the years contain the \"\u00e8\" character. This database has some messy values in it. In this case, there's only a small handful of documents, and I think we should just remove them. 
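If you're curious about those messy documents before filtering them out, a quick query sketch (using the same `movie_collection` as before) will show a few of them:\n\n``` python\n# Peek at a few documents whose 'year' is stored as a string instead of a number:\nfor movie in movie_collection.find(\n    {\"year\": {\"$type\": \"string\"}},\n    {\"title\": 1, \"year\": 1},\n).limit(5):\n    print(movie)\n```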
Add the following two stages to only match documents with a numeric `year` value, and to sort the results:\n\n``` python\nstage_match_years = {\n \"$match\": {\n \"year\": {\n \"$type\": \"number\",\n }\n }\n}\n\nstage_sort_year_ascending = {\n \"$sort\": {\"_id\": pymongo.ASCENDING}\n}\n\npipeline = [\n stage_match_years, # Match numeric years\n stage_group_year,\n stage_sort_year_ascending, # Sort by year\n]\n```\n\nNote that the `$match` stage is added to the start of the pipeline, and the `$sort` is added to the end. A general rule is that you should filter documents out early in your pipeline, so that later stages have fewer documents to deal with. It also ensures that the pipeline is more likely to be able to take advantages of any appropriate indexes assigned to the collection.\n\n>\n>\n>Remember, all of the sample code for this quick start series can be found [on GitHub.\n>\n>\n\nAggregations using `$group` are a great way to discover interesting things about your data. In this example, I'm illustrating the number of movies made each year, but it would also be interesting to see information about movies for each country, or even look at the movies made by different actors.\n\n## What Have You Learned?\n\nYou've learned how to construct aggregation pipelines to filter, group, and join documents with other collections. You've hopefully learned that putting a `$limit` stage at the start of your pipeline can be useful to speed up development (but should be removed before going to production). You've also learned some basic optimization tips, like putting filtering expressions towards the start of your pipeline instead of towards the end.\n\nAs you've gone through, you'll probably have noticed that there's a *ton* of different stage types, operators, and accumulator operators. Learning how to use the different components of aggregation pipelines is a big part of learning to use MongoDB effectively as a developer.\n\nI love working with aggregation pipelines, and I'm always surprised at what you can do with them!\n\n## Next Steps\n\nAggregation pipelines are super powerful, and because of this, they're a big topic to cover. Check out the full documentation to get a better idea of their full scope.\n\nMongoDB University also offers a *free* online course on The MongoDB Aggregation Framework.\n\nNote that aggregation pipelines can also be used to generate new data and write it back into a collection, with the $out stage.\n\nMongoDB provides a *free* GUI tool called Compass. It allows you to connect to your MongoDB cluster, so you can browse through databases and analyze the structure and contents of your collections. It includes an aggregation pipeline builder which makes it easier to build aggregation pipelines. I highly recommend you install it, or if you're using MongoDB Atlas, use its similar aggregation pipeline builder in your browser. I often use them to build aggregation pipelines, and they include export buttons which will export your pipeline as Python code.\n\nI don't know about you, but when I was looking at some of the results above, I thought to myself, \"It would be fun to visualise this with a chart.\" MongoDB provides a hosted service called Charts which just *happens* to take aggregation pipelines as input. So, now's a good time to give it a try!\n\nI consider aggregation pipelines to be one of MongoDB's two \"power tools,\" along with Change Streams. 
If you want to learn more about change streams, check out this blog post by my awesome colleague, Naomi Pentrel.", "format": "md", "metadata": {"tags": ["Python", "MongoDB"], "pageDescription": "Query, group, and join data in MongoDB using aggregation pipelines with Python.", "contentType": "Quickstart"}, "title": "Getting Started with Aggregation Pipelines in Python", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/integration-test-atlas-serverless-apps", "action": "created", "body": "# How to Write Integration Tests for MongoDB Atlas Functions\n\n> As of June 2022, the functionality previously known as MongoDB Realm is now named Atlas App Services. Atlas App Services refers to the cloud services that simplify building applications with Atlas \u2013 Atlas Data API, Atlas GraphQL API, Atlas Triggers, and Atlas Device Sync. Realm will continue to be used to refer to the client-side database and SDKs. Some of the naming or references in this article may be outdated.\n\nIntegration tests are vital for apps built with a serverless architecture. Unfortunately, figuring out how to build integration tests for serverless apps can be challenging.\n\nToday, I'll walk you through how to write integration tests for apps built with MongoDB Atlas Functions.\n\nThis is the second post in the *DevOps + MongoDB Atlas Functions = \ud83d\ude0d* blog series. Throughout this series, I'm explaining how I built automated tests and a CI/CD pipeline for the Social Stats app. In the first post, I explained what the Social Stats app does and how I architected it. Then I walked through how I wrote unit tests for the app's serverless functions. If you haven't read the first post, I recommend starting there to understand what is being tested and then returning to this post.\n\n>Prefer to learn by video? Many of the concepts I cover in this series are available in this video.\n\n## Integration Testing MongoDB Atlas Functions\n\nToday we'll focus on the middle layer of the testing pyramid: integration tests.\n\nIntegration tests are designed to test the integration of two or more components that work together as part of the application. A component could be a piece of the code base. A component could also exist outside of the code base. For example, an integration test could check that a function correctly saves information in a database. An integration test could also test that a function is correctly interacting with an external API.\n\nWhen following the traditional test pyramid, a developer will write significantly more unit tests than integration tests. When testing a serverless app, developers tend to write nearly as many (or sometimes more!) integration tests as unit tests. Why?\n\nServerless apps rely on integrations. Serverless functions tend to be small pieces of code that interact with other services. Testing these interactions is vital to ensure the application is functioning as expected.\n\n### Example Integration Test\n\nLet's take a look at how I tested the integration between the `storeCsvInDb` Atlas Function, the `removeBreakingCharacters` Atlas Function, and the MongoDB database hosted on Atlas. (I discuss what these functions do and how they interact with each other and the database in my previous post.)\n\nI decided to build my integration tests using Jest since I was already using Jest for my unit tests. 
You can use whatever testing framework you prefer; the principles described below will still apply.\n\nLet's focus on one test case: storing the statistics about a single Tweet.\n\nAs we discussed in the previous post, the storeCsvInDb function completes the following:\n\n- Calls the `removeBreakingCharacters` function to remove breaking characters like emoji.\n- Converts the Tweets in the CSV to JSON documents.\n- Loops through the JSON documents to clean and store each one in the database.\n- Returns an object that contains a list of Tweets that were inserted, updated, or unable to be inserted or updated.\n\nWhen I wrote unit tests for this function, I created mocks to simulate the `removeBreakingCharacters` function and the database.\n\nWe won't use any mocks in the integration tests. Instead, we'll let the `storeCsvInDb` function call the `removeBreakingCharacters` function and the database.\n\nThe first thing I did was import `MongoClient` from the `mongodb` module. We will use MongoClient later to connect to the MongoDB database hosted on Atlas.\n\n``` javascript\nconst { MongoClient } = require('mongodb');\n```\n\nNext, I imported several constants from `constants.js`. I created the `constants.js` file to store constants I found myself using in several test files.\n\n``` javascript\nconst { TwitterStatsDb, statsCollection, header, validTweetCsv, validTweetJson, validTweetId, validTweetUpdatedCsv, validTweetUpdatedJson, emojiTweetId, emojiTweetCsv, emojiTweetJson, validTweetKenId, validTweetKenCsv, validTweetKenJson } = require('../constants.js');\n```\n\nNext, I imported the `realm-web` SDK. I'll be able to use this module to call the Atlas Functions.\n\n``` javascript\nconst RealmWeb = require('realm-web');\n```\n\nThen I created some variables that I'll set later.\n\n``` javascript\nlet collection;\nlet mongoClient;\nlet app;\n```\n\nNow that I had all of my prep work completed, I was ready to start setting up my test structure. I began by implementing the beforeAll() function. Jest runs `beforeAll()` once before any of the tests in the file are run. Inside of `beforeAll()` I connected to a copy of the App Services app I'm using for testing. I also connected to the test database hosted on Atlas that is associated with that App Services app. Note that this database is NOT my production database. (We'll explore how I created Atlas App Services apps for development, staging, and production later in this series.)\n\n``` javascript\nbeforeAll(async () => {\n // Connect to the App Services app\n app = new RealmWeb.App({ id: `${process.env.REALM_APP_ID}` });\n\n // Login to the app with anonymous credentials\n await app.logIn(RealmWeb.Credentials.anonymous());\n\n // Connect directly to the database\n const uri = `mongodb+srv://${process.env.DB_USERNAME}:${process.env.DB_PASSWORD}@${process.env.CLUSTER_URI}/test?retryWrites=true&w=majority`;\n mongoClient = new MongoClient(uri);\n await mongoClient.connect();\n collection = mongoClient.db(TwitterStatsDb).collection(statsCollection);\n});\n```\n\nI chose to use the same app with the same database for all of my tests. As a result, these tests cannot be run in parallel as they could interfere with each other.\n\nMy app is architected in a way that it cannot be spun up completely using APIs and command line interfaces. Manual intervention is required to get the app configured correctly. 
If your app is architected in a way that you can completely generate your app using APIs and/or command line interfaces, you could choose to spin up a copy of your app with a new database for every test case or test file. This would allow you to run your test cases or test files in parallel.\n\nI wanted to ensure I always closed the connection to my database, so I added a call to do so in the afterAll() function.\n\n``` javascript\nafterAll(async () => {\n await mongoClient.close();\n})\n```\n\nI also wanted to ensure each test started with clean data since all of my tests are using the same database. In the beforeEach() function, I added a call to delete all documents from the collection the tests will be using.\n\n``` javascript\nbeforeEach(async () => {\n await collection.deleteMany({});\n});\n```\n\nNow that my test infrastructure was complete, I was ready to start writing a test case that focuses on storing a single valid Tweet.\n\n``` javascript\ntest('Single tweet', async () => {\n\n expect(await app.functions.storeCsvInDb(header + \"\\n\" + validTweetCsv)).toStrictEqual({\n newTweets: validTweetId],\n tweetsNotInsertedOrUpdated: [],\n updatedTweets: []\n });\n\n const tweet = await collection.findOne({ _id: validTweetId });\n expect(tweet).toStrictEqual(validTweetJson);\n});\n```\n\nThe test begins by calling the `storeCsvInDb` Atlas Function just as application code would. The test simulates the contents of a Twitter statistics CSV file by concatenating a valid header, a new line character, and the statistics for a Tweet with standard characters.\n\nThe test then asserts that the function returns an object that indicates the Tweet statistics were successfully saved.\n\nFinally, the test checks the database directly to ensure the Tweet statistics were stored correctly.\n\nAfter I finished this integration test, I wrote similar tests for Tweets that contain emoji as well as for updating statistics for Tweets already stored in the database.\n\nYou can find the complete set of integration tests in [storeCsvInDB.test.js.\n\n## Wrapping Up\n\nIntegration tests are especially important for apps built with a serverless architecture. The tests ensure that the various components that make up the app are working together as expected.\n\nThe Social Stats application source code and associated test files are available in a GitHub repo: . 
The repo's readme has detailed instructions on how to execute the test files.\n\nBe on the lookout for the next post in this series where I'll walk you through how to write end-to-end tests (sometimes referred to as UI tests) for serverless apps.\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- GitHub Repository: Social Stats\n- Video: DevOps + MongoDB Atlas Functions = \ud83d\ude0d\n- Documentation: MongoDB Atlas Functions\n- MongoDB Atlas\n- MongoDB Charts\n", "format": "md", "metadata": {"tags": ["Realm", "Serverless"], "pageDescription": "Learn how to write integration tests for MongoDB Atlas Functions.", "contentType": "Tutorial"}, "title": "How to Write Integration Tests for MongoDB Atlas Functions", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/java-client-side-field-level-encryption", "action": "created", "body": "# Java - Client Side Field Level Encryption\n\n## Updates\n\nThe MongoDB Java quickstart repository is available on GitHub.\n\n### February 28th, 2024\n\n- Update to Java 21\n- Update Java Driver to 5.0.0\n- Update `logback-classic` to 1.2.13\n\n### November 14th, 2023\n\n- Update to Java 17\n- Update Java Driver to 4.11.1\n- Update mongodb-crypt to 1.8.0\n\n### March 25th, 2021\n\n- Update Java Driver to 4.2.2.\n- Added Client Side Field Level Encryption example.\n\n### October 21st, 2020\n\n- Update Java Driver to 4.1.1.\n- The Java Driver logging is now enabled via the popular SLF4J API, so I added logback in\n the `pom.xml` and a configuration file `logback.xml`.\n\n## What's the Client Side Field Level Encryption?\n\n \n\nThe Client Side Field Level Encryption (CSFLE\nfor short) is a new feature added in MongoDB 4.2 that allows you to encrypt some fields of your MongoDB documents prior\nto transmitting them over the wire to the cluster for storage.\n\nIt's the ultimate piece of security against any kind of intrusion or snooping around your MongoDB cluster. Only the\napplication with the correct encryption keys can decrypt and read the protected data.\n\nLet's check out the Java CSFLE API with a simple example.\n\n## Video\n\nThis content is also available in video format.\n\n:youtube]{vid=tZSH--qwdcE}\n\n## Getting Set Up\n\nI will use the same repository as usual in this series. If you don't have a copy of it yet, you can clone it or just\nupdate it if you already have it:\n\n``` sh\ngit clone git@github.com:mongodb-developer/java-quick-start.git\n```\n\n> If you didn't set up your free cluster on MongoDB Atlas, now is great time to do so. You have all the instructions in\n> this [post.\n\nFor this CSFLE quickstart post, I will only use the Community Edition of MongoDB. As a matter of fact, the only part of\nCSFLE that is an enterprise-only feature is the\nautomatic encryption of fields\nwhich is supported\nby mongocryptd\nor\nthe Automatic Encryption Shared Library for Queryable Encryption.\n\n> `Automatic Encryption Shared Library for Queryable Encryption` is a replacement for `mongocryptd` and should be the\n> preferred solution. They are both optional and part of MongoDB Enterprise.\n\nIn this tutorial, I will be using the explicit (or manual) encryption of fields which doesn't require `mongocryptd`\nor the `Automatic Encryption Shared Library` and the enterprise edition of MongoDB or Atlas. 
If you would like to\nexplore\nthe enterprise version of CSFLE with Java, you can find out more\nin this documentation or in\nmy more recent\npost: How to Implement Client-Side Field Level Encryption (CSFLE) in Java with Spring Data MongoDB.\n\n> Do not confuse `mongocryptd` or the `Automatic Encryption Shared Library` with the `libmongocrypt` library which is\n> the companion C library used by the drivers to\n> encrypt and decrypt your data. We *need* this library to run CSFLE. I added it in the `pom.xml` file of this project.\n\n``` xml\n\n org.mongodb\n mongodb-crypt\n 1.8.0\n\n```\n\nTo keep the code samples short and sweet in the examples below, I will only share the most relevant parts. If you want\nto see the code working with all its context, please check the source code in the github repository in\nthe csfle package\ndirectly.\n\n## Run the Quickstart Code\n\nIn this quickstart tutorial, I will show you the CSFLE API using the MongoDB Java Driver. I will show you how to:\n\n- create and configure the MongoDB connections we need.\n- create a master key.\n- create Data Encryption Keys (DEK).\n- create and read encrypted documents.\n\nTo run my code from the above repository, check out\nthe README.\n\nBut for short, the following command should get you up and running in no time:\n\n``` shell\nmvn compile exec:java -Dexec.mainClass=\"com.mongodb.quickstart.csfle.ClientSideFieldLevelEncryption\" -Dmongodb.uri=\"mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority\" -Dexec.cleanupDaemonThreads=false\n```\n\nThis is the output you should get:\n\n``` none\n**************\n* MASTER KEY *\n**************\nA new Master Key has been generated and saved to file \"master_key.txt\".\nMaster Key: 100, 82, 127, -61, -92, -93, 0, -11, 41, -96, 89, -39, -26, -25, -33, 37, 85, -50, 64, 70, -91, 99, -44, -57, 18, 105, -101, -111, -67, -81, -19, 56, -112, 62, 11, 106, -6, 85, -125, 49, -7, -49, 38, 81, 24, -48, -6, -15, 21, -120, -37, -5, 65, 82, 74, -84, -74, -65, -43, -15, 40, 80, -23, -52, -114, -18, -78, -64, -37, -3, -23, -33, 102, -44, 32, 65, 70, -123, -97, -49, -13, 126, 33, -63, -75, -52, 78, -5, -107, 91, 126, 103, 118, 104, 86, -79]\n\n******************\n* INITIALIZATION *\n******************\n=> Creating local Key Management System using the master key.\n=> Creating encryption client.\n=> Creating MongoDB client with automatic decryption.\n=> Cleaning entire cluster.\n\n*************************************\n* CREATE KEY ALT NAMES UNIQUE INDEX *\n*************************************\n\n*******************************\n* CREATE DATA ENCRYPTION KEYS *\n*******************************\nCreated Bobby's data key ID: 668a35af-df8f-4c41-9493-8d09d3d46d3b\nCreated Alice's data key ID: 003024b3-a3b6-490a-9f31-7abb7bcc334d\n\n************************************************\n* INSERT ENCRYPTED DOCUMENTS FOR BOBBY & ALICE *\n************************************************\n2 docs have been inserted.\n\n**********************************\n* FIND BOBBY'S DOCUMENT BY PHONE *\n**********************************\nBobby document found by phone number:\n{\n \"_id\": {\n \"$oid\": \"60551bc8dd8b737958e3733f\"\n },\n \"name\": \"Bobby\",\n \"age\": 33,\n \"phone\": \"01 23 45 67 89\",\n \"blood_type\": \"A+\",\n \"medical_record\": [\n {\n \"test\": \"heart\",\n \"result\": \"bad\"\n }\n ]\n}\n\n****************************\n* READING ALICE'S DOCUMENT *\n****************************\nBefore we remove Alice's key, we can read her document.\n{\n \"_id\": {\n \"$oid\": 
\"60551bc8dd8b737958e37340\"\n },\n \"name\": \"Alice\",\n \"age\": 28,\n \"phone\": \"09 87 65 43 21\",\n \"blood_type\": \"O+\"\n}\n\n***************************************************************\n* REMOVE ALICE's KEY + RESET THE CONNECTION (reset DEK cache) *\n***************************************************************\nAlice key is now removed: 1 key removed.\n=> Creating MongoDB client with automatic decryption.\n\n****************************************\n* TRY TO READ ALICE DOC AGAIN BUT FAIL *\n****************************************\nWe get a MongoException because 'libmongocrypt' can't decrypt these fields anymore.\n```\n\nLet's have a look in depth to understand what is happening.\n\n## How it Works\n\n![CSFLE diagram with master key and DEK vault\n\nCSFLE looks complicated, like any security and encryption feature, I guess. Let's try to make it simple in a few words.\n\n1. We need\n a master key\n which unlocks all\n the Data Encryption Keys (\n DEK for short) that we can use to encrypt one or more fields in our documents.\n2. You can use one DEK for our entire cluster or a different DEK for each field of each document in your cluster. It's\n up to you.\n3. The DEKs are stored in a collection in a MongoDB cluster which does **not** have to be the same that contains the\n encrypted data. The DEKs are stored **encrypted**. They are useless without the master key which needs to be\n protected.\n4. You can use the manual (community edition) or the automated (enterprise advanced or Atlas) encryption of fields.\n5. The decryption can be manual or automated. Both are part of the community edition of MongoDB. In this post, I will\n use manual encryption and automated decryption to stick with the community edition of MongoDB.\n\n## GDPR Compliance\n\nEuropean laws enforce data protection and privacy. Any oversight can result in massive fines.\n\nCSFLE is a great way to save millions of dollars/euros.\n\nFor example, CSFLE could be a great way to enforce\nthe \"right-to-be-forgotten\" policy of GDPR. If a user asks to be removed from your\nsystems, the data must be erased from your production cluster, of course, but also the logs, the dev environment, and\nthe backups... And let's face it: Nobody will ever remove this user's data from the backups. And if you ever restore or\nuse these backups, this can cost you millions of dollars/euros.\n\nBut now... encrypt each user's data with a unique Data Encryption Key (DEK) and to \"forget\" a user forever, all you have\nto do is lose the key. So, saving the DEKs on a separated cluster and enforcing a low retention policy on this cluster\nwill ensure that a user is truly forgotten forever once the key is deleted.\n\nKenneth White, Security Principal at MongoDB who worked on CSFLE, explains this\nperfectly\nin this answer\nin the MongoDB Community Forum.\n\n> If the primary motivation is just to provably ensure that deleted plaintext user records remain deleted no matter\n> what, then it becomes a simple timing and separation of concerns strategy, and the most straight-forward solution is\n> to\n> move the keyvault collection to a different database or cluster completely, configured with a much shorter backup\n> retention; FLE does not assume your encrypted keyvault collection is co-resident with your active cluster or has the\n> same access controls and backup history, just that the client can, when needed, make an authenticated connection to\n> that\n> keyvault database. 
Important to note though that with a shorter backup cycle, in the event of some catastrophic data\n> corruption (malicious, intentional, or accidental), all keys for that db (and therefore all encrypted data) are only\n> as\n> recoverable to the point in time as the shorter keyvault backup would restore.\n\nMore trivial, but in the event of an intrusion, any stolen data will be completely worthless without the master key and\nwould not result in a ruinous fine.\n\n## The Master Key\n\nThe master key is an array of 96 bytes. It can be stored in a Key Management Service in a cloud provider or can be\nlocally\nmanaged (documentation).\nOne way or another, you must secure it from any threat.\n\nIt's as simple as that to generate a new one:\n\n``` java\nfinal byte] masterKey = new byte[96];\nnew SecureRandom().nextBytes(masterKey);\n```\n\nBut you most probably just want to do this once and then reuse the same one each time you restart your application.\n\nHere is my implementation to store it in a local file the first time and then reuse it for each restart.\n\n``` java\nimport java.io.FileInputStream;\nimport java.io.FileOutputStream;\nimport java.io.IOException;\nimport java.security.SecureRandom;\nimport java.util.Arrays;\n\npublic class MasterKey {\n\n private static final int SIZE_MASTER_KEY = 96;\n private static final String MASTER_KEY_FILENAME = \"master_key.txt\";\n\n public static void main(String[] args) {\n new MasterKey().tutorial();\n }\n\n private void tutorial() {\n final byte[] masterKey = generateNewOrRetrieveMasterKeyFromFile(MASTER_KEY_FILENAME);\n System.out.println(\"Master Key: \" + Arrays.toString(masterKey));\n }\n\n private byte[] generateNewOrRetrieveMasterKeyFromFile(String filename) {\n byte[] masterKey = new byte[SIZE_MASTER_KEY];\n try {\n retrieveMasterKeyFromFile(filename, masterKey);\n System.out.println(\"An existing Master Key was found in file \\\"\" + filename + \"\\\".\");\n } catch (IOException e) {\n masterKey = generateMasterKey();\n saveMasterKeyToFile(filename, masterKey);\n System.out.println(\"A new Master Key has been generated and saved to file \\\"\" + filename + \"\\\".\");\n }\n return masterKey;\n }\n\n private void retrieveMasterKeyFromFile(String filename, byte[] masterKey) throws IOException {\n try (FileInputStream fis = new FileInputStream(filename)) {\n fis.read(masterKey, 0, SIZE_MASTER_KEY);\n }\n }\n\n private byte[] generateMasterKey() {\n byte[] masterKey = new byte[SIZE_MASTER_KEY];\n new SecureRandom().nextBytes(masterKey);\n return masterKey;\n }\n\n private void saveMasterKeyToFile(String filename, byte[] masterKey) {\n try (FileOutputStream fos = new FileOutputStream(filename)) {\n fos.write(masterKey);\n } catch (IOException e) {\n e.printStackTrace();\n }\n }\n}\n```\n\n> This is nowhere near safe for a production environment because leaving the `master_key.txt` directly in the\n> application folder on your production server is like leaving the vault combination on a sticky note. 
Secure that file\n> or\n> please consider using\n> a [KMS in production.\n\nIn this simple quickstart, I will only use a single master key, but it's totally possible to use multiple master keys.\n\n## The Key Management Service (KMS) Provider\n\nWhichever solution you choose for the master key, you need\na KMS provider to set\nup the `ClientEncryptionSettings` and the `AutoEncryptionSettings`.\n\nHere is the configuration for a local KMS:\n\n``` java\nMap> kmsProviders = new HashMap>() {{\n put(\"local\", new HashMap() {{\n put(\"key\", localMasterKey);\n }});\n}};\n```\n\n## The Clients\n\nWe will need to set up two different clients:\n\n- The first one \u2500 `ClientEncryption` \u2500 will be used to create our Data Encryption Keys (DEK) and encrypt our fields\n manually.\n- The second one \u2500 `MongoClient` \u2500 will be the more conventional MongoDB connection that we will use to read and write\n our documents, with the difference that it will be configured to automatically decrypt the encrypted fields.\n\n### ClientEncryption\n\n``` java\nConnectionString connection_string = new ConnectionString(\"mongodb://localhost\");\nMongoClientSettings kvmcs = MongoClientSettings.builder().applyConnectionString(connection_string).build();\n\nClientEncryptionSettings ces = ClientEncryptionSettings.builder()\n .keyVaultMongoClientSettings(kvmcs)\n .keyVaultNamespace(\"csfle.vault\")\n .kmsProviders(kmsProviders)\n .build();\n\nClientEncryption encryption = ClientEncryptions.create(ces);\n```\n\n### MongoClient\n\n``` java\nAutoEncryptionSettings aes = AutoEncryptionSettings.builder()\n .keyVaultNamespace(\"csfle.vault\")\n .kmsProviders(kmsProviders)\n .bypassAutoEncryption(true)\n .build();\n\nMongoClientSettings mcs = MongoClientSettings.builder()\n .applyConnectionString(connection_string)\n .autoEncryptionSettings(aes)\n .build();\n\nMongoClient client = MongoClients.create(mcs);\n```\n\n> `bypassAutoEncryption(true)` is the ticket for the Community Edition. Without it, `mongocryptd` or\n> the `Automatic Encryption Shared Library` would rely on the JSON schema that you would have to provide to encrypt\n> automatically the documents. See\n> this example in the documentation.\n\n> You don't have to reuse the same connection string for both connections. It would actually be a lot more\n> \"GDPR-friendly\" to use separated clusters, so you can enforce a low retention policy on the Data Encryption Keys.\n\n## Unique Index on Key Alternate Names\n\nThe first thing you should do before you create your first Data Encryption Key is to create a unique index on the key\nalternate names to make sure that you can't reuse the same alternate name on two different DEKs.\n\nThese names will help you \"label\" your keys to know what each one is used for \u2500 which is still totally up to you.\n\n``` java\nMongoCollection vaultColl = client.getDatabase(\"csfle\").getCollection(\"vault\");\nvaultColl.createIndex(ascending(\"keyAltNames\"),\n new IndexOptions().unique(true).partialFilterExpression(exists(\"keyAltNames\")));\n```\n\nIn my example, I choose to use one DEK per user. I will encrypt all the fields I want to secure in each user document\nwith the same key. If I want to \"forget\" a user, I just need to drop that key. In my example, the names are unique so\nI'm using this for my `keyAltNames`. It's a great way to enforce GDPR compliance.\n\n## Create Data Encryption Keys\n\nLet's create two Data Encryption Keys: one for Bobby and one for Alice. 
Each will be used to encrypt all the fields I\nwant to keep safe in my respective user documents.\n\n``` java\nBsonBinary bobbyKeyId = encryption.createDataKey(\"local\", keyAltName(\"Bobby\"));\nBsonBinary aliceKeyId = encryption.createDataKey(\"local\", keyAltName(\"Alice\"));\n```\n\nWe get a little help from this private method to make my code easier to read:\n\n``` java\nprivate DataKeyOptions keyAltName(String altName) {\n return new DataKeyOptions().keyAltNames(List.of(altName));\n}\n```\n\nHere is what Bobby's DEK looks like in my `csfle.vault` collection:\n\n``` json\n{\n \"_id\" : UUID(\"aaa2e53d-875e-49d8-9ce0-dec9a9658571\"),\n \"keyAltNames\" : \"Bobby\" ],\n \"keyMaterial\" : BinData(0,\"/ozPZBMNUJU9udZyTYe1hX/KHqJJPrjdPads8UNjHX+cZVkIXnweZe5pGPpzcVcGmYctTAdxB3b+lmY5ONTzEZkqMg8JIWenIWQVY5fogIpfHDJQylQoEjXV3+e3ZY1WmWJR8mOp7pMoTyoGlZU2TwyqT9fcN7E5pNRh0uL3kCPk0sOOxLT/ejQISoY/wxq2uvyIK/C6/LrD1ymIC9w6YA==\"),\n \"creationDate\" : ISODate(\"2021-03-19T16:16:09.800Z\"),\n \"updateDate\" : ISODate(\"2021-03-19T16:16:09.800Z\"),\n \"status\" : 0,\n \"masterKey\" : {\n \"provider\" : \"local\"\n }\n}\n```\n\nAs you can see above, the `keyMaterial` (the DEK itself) is encrypted by the master key. Without the master key to\ndecrypt it, it's useless. Also, you can identify that it's Bobby's key in the `keyAltNames` field.\n\n## Create Encrypted Documents\n\nNow that we have an encryption key for Bobby and Alice, I can create their respective documents and insert them into\nMongoDB like so:\n\n``` java\nprivate static final String DETERMINISTIC = \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\";\nprivate static final String RANDOM = \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\";\n\nprivate Document createBobbyDoc(ClientEncryption encryption) {\n BsonBinary phone = encryption.encrypt(new BsonString(\"01 23 45 67 89\"), deterministic(BOBBY));\n BsonBinary bloodType = encryption.encrypt(new BsonString(\"A+\"), random(BOBBY));\n BsonDocument medicalEntry = new BsonDocument(\"test\", new BsonString(\"heart\")).append(\"result\", new BsonString(\"bad\"));\n BsonBinary medicalRecord = encryption.encrypt(new BsonArray(List.of(medicalEntry)), random(BOBBY));\n return new Document(\"name\", BOBBY).append(\"age\", 33)\n .append(\"phone\", phone)\n .append(\"blood_type\", bloodType)\n .append(\"medical_record\", medicalRecord);\n}\n\nprivate Document createAliceDoc(ClientEncryption encryption) {\n BsonBinary phone = encryption.encrypt(new BsonString(\"09 87 65 43 21\"), deterministic(ALICE));\n BsonBinary bloodType = encryption.encrypt(new BsonString(\"O+\"), random(ALICE));\n return new Document(\"name\", ALICE).append(\"age\", 28).append(\"phone\", phone).append(\"blood_type\", bloodType);\n}\n\nprivate EncryptOptions deterministic(String keyAltName) {\n return new EncryptOptions(DETERMINISTIC).keyAltName(keyAltName);\n}\n\nprivate EncryptOptions random(String keyAltName) {\n return new EncryptOptions(RANDOM).keyAltName(keyAltName);\n}\n\nprivate void createAndInsertBobbyAndAlice(ClientEncryption encryption, MongoCollection usersColl) {\n Document bobby = createBobbyDoc(encryption);\n Document alice = createAliceDoc(encryption);\n int nbInsertedDocs = usersColl.insertMany(List.of(bobby, alice)).getInsertedIds().size();\n System.out.println(nbInsertedDocs + \" docs have been inserted.\");\n}\n```\n\nHere is what Bobby and Alice documents look like in my `encrypted.users` collection:\n\n**Bobby**\n\n``` json\n{\n \"_id\" : ObjectId(\"6054d91c26a275034fe53300\"),\n \"name\" : \"Bobby\",\n \"age\" : 33,\n 
\"phone\" : BinData(6,\"ATKkRdZWR0+HpqNyYA7zgIUCgeBE4SvLRwaXz/rFl8NPZsirWdHRE51pPa/2W9xgZ13lnHd56J1PLu9uv/hSkBgajE+MJLwQvJUkXatOJGbZd56BizxyKKTH+iy+8vV7CmY=\"),\n \"blood_type\" : BinData(6,\"AjKkRdZWR0+HpqNyYA7zgIUCUdc30A8lTi2i1pWn7CRpz60yrDps7A8gUJhJdj+BEqIIx9xSUQ7xpnc/6ri2/+ostFtxIq/b6IQArGi+8ZBISw==\"),\n \"medical_record\" : BinData(6,\"AjKkRdZWR0+HpqNyYA7zgIUESl5s4tPPvzqwe788XF8o91+JNqOUgo5kiZDKZ8qudloPutr6S5cE8iHAJ0AsbZDYq7XCqbqiXvjQobObvslR90xJvVMQidHzWtqWMlzig6ejdZQswz2/WT78RrON8awO\")\n}\n```\n\n**Alice**\n\n``` json\n{\n \"_id\" : ObjectId(\"6054d91c26a275034fe53301\"),\n \"name\" : \"Alice\",\n \"age\" : 28,\n \"phone\" : BinData(6,\"AX7Xd65LHUcWgYj+KbUT++sCC6xaCZ1zaMtzabawAgB79quwKvld8fpA+0m+CtGevGyIgVRjtj2jAHAOvREsoy3oq9p5mbJvnBqi8NttHUJpqooUn22Wx7o+nlo633QO8+c=\"),\n \"blood_type\" : BinData(6,\"An7Xd65LHUcWgYj+KbUT++sCTyp+PJXudAKM5HcdX21vB0VBHqEXYSplHdZR0sCOxzBMPanVsTRrOSdAK5yHThP3Vitsu9jlbNo+lz5f3L7KYQ==\")\n}\n```\n\nClient Side Field Level Encryption currently\nprovides [two different algorithms\nto encrypt the data you want to secure.\n\n### AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\n\nWith this algorithm, the result of the encryption \u2500 given the same inputs (value and DEK) \u2500\nis deterministic. This means that we have a greater support\nfor read operations, but encrypted data with low cardinality is susceptible\nto frequency analysis attacks.\n\nIn my example, if I want to be able to retrieve users by phone numbers, I must use the deterministic algorithm. As a\nphone number is likely to be unique in my collection of users, it's safe to use this algorithm here.\n\n### AEAD_AES_256_CBC_HMAC_SHA_512-Random\n\nWith this algorithm, the result of the encryption is\n*always* different. That means that it provides the strongest\nguarantees of data confidentiality, even when the cardinality is low, but prevents read operations based on these\nfields.\n\nIn my example, the blood type has a low cardinality and it doesn't make sense to search in my user collection by blood\ntype anyway, so it's safe to use this algorithm for this field.\n\nAlso, Bobby's medical record must be very safe. So, the entire subdocument containing all his medical records is\nencrypted with the random algorithm as well and won't be used to search Bobby in my collection anyway.\n\n## Read Bobby's Document\n\nAs mentioned in the previous section, it's possible to search documents by fields encrypted with the deterministic\nalgorithm.\n\nHere is how:\n\n``` java\nBsonBinary phone = encryption.encrypt(new BsonString(\"01 23 45 67 89\"), deterministic(BOBBY));\nString doc = usersColl.find(eq(\"phone\", phone)).first().toJson();\n```\n\nI simply encrypt again, with the same key, the phone number I'm looking for, and I can use this `BsonBinary` in my query\nto find Bobby.\n\nIf I output the `doc` string, I get:\n\n``` none\n{\n \"_id\": {\n \"$oid\": \"6054d91c26a275034fe53300\"\n },\n \"name\": \"Bobby\",\n \"age\": 33,\n \"phone\": \"01 23 45 67 89\",\n \"blood_type\": \"A+\",\n \"medical_record\": \n {\n \"test\": \"heart\",\n \"result\": \"bad\"\n }\n ]\n}\n```\n\nAs you can see, the automatic decryption worked as expected, I can see my document in clear text. To find this document,\nI could use the `_id`, the `name`, the `age`, or the phone number, but not the `blood_type` or the `medical_record`.\n\n## Read Alice's Document\n\nNow let's put CSFLE to the test. 
I want to be sure that if Alice's DEK is destroyed, Alice's document is lost forever\nand can never be restored, even from a backup that could be restored. That's why it's important to keep the DEKs and the\nencrypted documents in two different clusters that don't have the same backup retention policy.\n\nLet's retrieve Alice's document by name, but let's protect my code in case something \"bad\" has happened to her key...\n\n``` java\nprivate void readAliceIfPossible(MongoCollection usersColl) {\n try {\n String aliceDoc = usersColl.find(eq(\"name\", ALICE)).first().toJson();\n System.out.println(\"Before we remove Alice's key, we can read her document.\");\n System.out.println(aliceDoc);\n } catch (MongoException e) {\n System.err.println(\"We get a MongoException because 'libmongocrypt' can't decrypt these fields anymore.\");\n }\n}\n```\n\nIf her key still exists in the database, then I can decrypt her document:\n\n``` none\n{\n \"_id\": {\n \"$oid\": \"6054d91c26a275034fe53301\"\n },\n \"name\": \"Alice\",\n \"age\": 28,\n \"phone\": \"09 87 65 43 21\",\n \"blood_type\": \"O+\"\n}\n```\n\nNow, let's remove her key from the database:\n\n``` java\nvaultColl.deleteOne(eq(\"keyAltNames\", ALICE));\n```\n\nIn a real-life production environment, it wouldn't make sense to read her document again; and because we are all\nprofessional and organised developers who like to keep things tidy, we would also delete Alice's document along with her\nDEK, as this document is now completely worthless for us anyway.\n\nIn my example, I want to try to read this document anyway. But if I try to read it immediately after deleting her\ndocument, there is a great chance that I will still able to do so because of\nthe [60 seconds Data Encryption Key Cache\nthat is managed by `libmongocrypt`.\n\nThis cache is very important because, without it, multiple back-and-forth would be necessary to decrypt my document.\nIt's critical to prevent CSFLE from killing the performances of your MongoDB cluster.\n\nSo, to make sure I'm not using this cache anymore, I'm creating a brand new `MongoClient` (still with auto decryption\nsettings) for the sake of this example. 
But of course, in production, it wouldn't make sense to do so.\n\nNow if I try to access Alice's document again, I get the following `MongoException`, as expected:\n\n``` none\ncom.mongodb.MongoException: not all keys requested were satisfied\n at com.mongodb.MongoException.fromThrowableNonNull(MongoException.java:83)\n at com.mongodb.client.internal.Crypt.fetchKeys(Crypt.java:286)\n at com.mongodb.client.internal.Crypt.executeStateMachine(Crypt.java:244)\n at com.mongodb.client.internal.Crypt.decrypt(Crypt.java:128)\n at com.mongodb.client.internal.CryptConnection.command(CryptConnection.java:121)\n at com.mongodb.client.internal.CryptConnection.command(CryptConnection.java:131)\n at com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:345)\n at com.mongodb.internal.operation.CommandOperationHelper.executeCommand(CommandOperationHelper.java:336)\n at com.mongodb.internal.operation.CommandOperationHelper.executeCommandWithConnection(CommandOperationHelper.java:222)\n at com.mongodb.internal.operation.FindOperation$1.call(FindOperation.java:658)\n at com.mongodb.internal.operation.FindOperation$1.call(FindOperation.java:652)\n at com.mongodb.internal.operation.OperationHelper.withReadConnectionSource(OperationHelper.java:583)\n at com.mongodb.internal.operation.FindOperation.execute(FindOperation.java:652)\n at com.mongodb.internal.operation.FindOperation.execute(FindOperation.java:80)\n at com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:170)\n at com.mongodb.client.internal.FindIterableImpl.first(FindIterableImpl.java:200)\n at com.mongodb.quickstart.csfle.ClientSideFieldLevelEncryption.readAliceIfPossible(ClientSideFieldLevelEncryption.java:91)\n at com.mongodb.quickstart.csfle.ClientSideFieldLevelEncryption.demo(ClientSideFieldLevelEncryption.java:79)\n at com.mongodb.quickstart.csfle.ClientSideFieldLevelEncryption.main(ClientSideFieldLevelEncryption.java:41)\nCaused by: com.mongodb.crypt.capi.MongoCryptException: not all keys requested were satisfied\n at com.mongodb.crypt.capi.MongoCryptContextImpl.throwExceptionFromStatus(MongoCryptContextImpl.java:145)\n at com.mongodb.crypt.capi.MongoCryptContextImpl.throwExceptionFromStatus(MongoCryptContextImpl.java:151)\n at com.mongodb.crypt.capi.MongoCryptContextImpl.completeMongoOperation(MongoCryptContextImpl.java:93)\n at com.mongodb.client.internal.Crypt.fetchKeys(Crypt.java:284)\n ... 17 more\n```\n\n## Wrapping Up\n\nIn this quickstart tutorial, we have discovered how to use Client Side Field Level Encryption using the MongoDB Java\nDriver, using only the community edition of MongoDB. You can learn more about\nthe automated encryption in our\ndocumentation.\n\nCSFLE is the ultimate security feature to ensure the maximal level of security for your cluster. Not even your admins\nwill be able to access the data in production if they don't have access to the master keys.\n\nBut it's not the only security measure you should use to protect your cluster. 
Preventing access to your cluster is, of\ncourse, the first security measure that you should enforce\nby enabling the authentication\nand limit network exposure.\n\nIn doubt, check out the security checklist before\nlaunching a cluster in production to make sure that you didn't overlook any of the security options MongoDB has to offer\nto protect your data.\n\nThere is a lot of flexibility in the implementation of CSFLE: You can choose to use one or multiple master keys, same\nfor the Data Encryption Keys. You can also choose to encrypt all your phone numbers in your collection with the same DEK\nor use a different one for each user. It's really up to you how you will organise your encryption strategy but, of\ncourse, make sure it fulfills all your legal obligations. There are multiple right ways to implement CSFLE, so make sure\nto find the most suitable one for your use case.\n\n> If you have questions, please head to our developer community website where the\n> MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n\n### Documentation\n\n- GitHub repository with all the Java Quickstart examples of this series\n- MongoDB CSFLE Doc\n- MongoDB Java Driver CSFLE Doc\n- MongoDB University CSFLE implementation example\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB"], "pageDescription": "Learn how to use the client side field level encryption using the MongoDB Java Driver.", "contentType": "Quickstart"}, "title": "Java - Client Side Field Level Encryption", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/introduction-to-linked-lists-and-mongodb", "action": "created", "body": "# A Gentle Introduction to Linked Lists With MongoDB\n\nAre you new to data structures and algorithms? In this post, you will learn about one of the most important data structures in Computer Science, the Linked List, implemented with a MongoDB twist. This post will cover the fundamentals of the linked list data structure. It will also answer questions like, \"How do linked lists differ from arrays?\" and \"What are the pros and cons of using a linked list?\"\n\n## Intro to Linked Lists\n\nDid you know that linked lists are one of the foundational data structures in Computer Science? If you are like many devs that are self-taught or you graduated from a developer boot camp, then you might need a little lesson in how this data structure works. Or, if you're like me, you might need a refresher if it's been a couple of years since your last Computer Science lecture on data structures and algorithms. In this post, I will be walking through how to implement a linked list from scratch using Node.js and MongoDB. This is also a great place to start for getting a handle on the basics of MongoDB CRUD operations and this legendary data structure. Let's get started with the basics.\n\nDiagram of a singly linked list\n\nA linked list is a data structure that contains a list of nodes that are connected using references or pointers. A node is an object in memory. It usually contains at most two pieces of information, a data value, and a pointer to next node in the linked list. Linked lists also have separate pointer references to the head and the tail of the linked list. 
The head is the first node in the list, while the tail is the last object in the list.\n\nA node that does NOT link to another node\n\n``` json\n{\n \"data\": \"Cat\",\n \"next\": null\n}\n```\n\nA node that DOES link to another node\n\n``` json\n{\n \"data\": \"Cat\",\n \"next\": {\n \"data\": \"Dog\",\n \"next\": {\n \"data\": \"Bird\",\n \"next\": null\n }\n } // these are really a reference to an object in memory\n}\n```\n\n## Why Use a Linked List?\n\nThere are a lot of reasons why linked lists are used, as opposed to other data structures like arrays (more on that later). However, we use linked lists in situations where we don't know the exact size of the data structure but anticipate that the list could potentially grow to large sizes. Often, linked lists are used when we think that the data structure might grow larger than the available memory of the computer we are working with. Linked lists are also useful if we still need to preserve order AND anticipate that order will change over time.\n\nLinked lists are just objects in memory. One object holds a reference to another object, or one node holds a pointer to the next node. In memory, a linked list looks like this:\n\nDiagram that demonstrates how linked lists allocate use pointers to link data in memory\n\n### Advantages of Linked Lists\n\n- Linked lists are dynamic in nature, which allocates the memory when required.\n- Insertion and deletion operations can be easily implemented.\n- Stacks and queues can be easily executed using a linked list.\n\n### Disadvantages of Linked Lists\n\n- Memory is wasted as pointers require extra memory for storage.\n- No element can be accessed randomly; it has to access each node sequentially starting from the head.\n- Reverse traversing is difficult in a singly linked list.\n\n## Comparison Between Arrays and Linked Lists\n\nNow, you might be thinking that linked lists feel an awful lot like arrays, and you would be correct! They both keep track of a sequence of data, and they both can be iterated and looped over. Also, both data structures preserve sequence order. However, there are some key differences.\n\n### Advantages of Arrays\n\n- Arrays are simple and easy to use.\n- They offer faster access to elements (O(1) or constant time).\n- They can access elements by any index without needing to iterate through the entire data set from the beginning.\n\n### Disadvantages of Arrays\n\n- Did you know that arrays can waste memory? This is because typically, compilers will preallocate a sequential block of memory when a new array is created in order to make super speedy queries. Therefore, many of these preallocated memory blocks may be empty.\n- Arrays have a fixed size. If the preallocated memory block is filled to capacity, the code compiler will allocate an even larger memory block, and it will need to copy the old array over to the new array memory block before new array operations can be performed. This can be expensive with both time and space.\n\nDiagram that demonstrates how arrays allocate continuous blocks of memory space\n\nDiagram that demonstrates how linked lists allocate memory for new linked list nodes\n\n- To insert an element at a given position, operation is complex. 
We may need to shift the existing elements to create vacancy to insert the new element at desired position.\n\n## Other Types of Linked Lists\n\n### Doubly Linked List\n\nA doubly linked list is the same as a singly linked list with the exception that each node also points to the previous node as well as the next node.\n\nDiagram of a doubly linked list\n\n### Circular Linked List\n\nA circular linked list is the same as a singly linked list with the exception that there is no concept of a head or tail. All nodes point to the next node circularly. There is no true start to the circular linked list.\n\nDiagram of a circular linked list\n\n## Let's Code A Linked List!\n\n### First, Let's Set Up Our Coding Environment\n\n#### Creating A Cluster On Atlas\n\nFirst thing we will need to set up is a MongoDB Atlas account. And don't worry, you can create an M0 MongoDB Atlas cluster for free. No credit card is required to get started! To get up and running with a free M0 cluster, follow the MongoDB Atlas Getting Started guide.\n\nAfter signing up for Atlas, we will then need to deploy a free MongoDB cluster. Note, you will need to add a rule to allow the IP address of the computer we are connecting to MongoDB Atlas Custer too, and you will need to create a database user before you are able to connect to your new cluster. These are security features that are put in place to make sure bad actors cannot access your database.\n\nIf you have any issues connecting or setting up your free MongoDB Atlas cluster, be sure to check out the MongoDB Community Forums to get help.\n\n#### Connect to VS Code MongoDB Plugin\n\nNext, we are going to connect to our new MongoDB Atlas database cluster using the Visual Studio Code MongoDB Plugin. The MongoDB extension allow us to:\n\n- Connect to a MongoDB or Atlas cluster, navigate through your databases and collections, get a quick overview of your schema, and see the documents in your collections.\n- Create MongoDB Playgrounds, the fastest way to prototype CRUD operations and MongoDB commands.\n- Quickly access the MongoDB Shell, to launch the MongoDB Shell from the command palette and quickly connect to the active cluster.\n\nTo install MongoDB for VS Code, simply search for it in the Extensions list directly inside VS Code or head to the \"MongoDB for VS Code\" homepage in the VS Code Marketplace.\n\n#### Navigate Your MongoDB Data\n\nMongoDB for VS Code can connect to MongoDB standalone instances or clusters on MongoDB Atlas or self-hosted. Once connected, you can **browse databases**, **collections**, and **read-only views** directly from the tree view.\n\nFor each collection, you will see a list of sample documents and a **quick overview of the schema**. This is very useful as a reference while writing queries and aggregations.\n\nOnce installed, there will be a new MongoDB tab that we can use to add our connections by clicking \"Add Connection.\" If you've used MongoDB Compass before, then the form should be familiar. You can enter your connection details in the form or use a connection string. I went with the latter, as my database is hosted on MongoDB Atlas.\n\nTo obtain your connection string, navigate to your \"Clusters\" page and select \"Connect.\"\n\nChoose the \"Connect using MongoDB Compass\" option and copy the connection string. Make sure to add your username and password in their respective places before entering the string in VS Code.\n\nOnce you've connected successfully, you should see an alert. 
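As an optional sanity check, you can also open a new MongoDB Playground from the MongoDB tab and run a couple of commands against the cluster. This is only an illustrative snippet; the database and collection names below are placeholders for whatever exists in your own cluster:

``` javascript
// MongoDB Playground sketch: confirm the new connection works.
// Replace 'sample_training' and 'grades' with a database/collection from your cluster.
use('sample_training');      // select a database
db.getCollectionNames();     // list its collections
db.grades.findOne();         // peek at a single document
```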
At this point, you can explore the data in your cluster, as well as your schemas.\n\n#### Creating Functions to Initialize the App\n\nAlright, now that we have been able to connect to our MongoDB Atlas database, let's write some code to allow our linked list to connect to our database and to do some cleaning while we are developing our linked list.\n\nThe general strategy for building our linked lists with MongoDB will be as follows. We are going to use a MongoDB document to keep track of meta information, like the head and tail location. We will also use a unique MongoDB document for each node in our linked list. We will be using the unique IDs that are automatically generated by MongoDB to simulate a pointer. So the *next* value of each linked list node will store the ID of the next node in the linked list. That way, we will be able to iterate through our Linked List.\n\nSo, in order to accomplish this, the first thing that we are going to do is set up our linked list class.\n\n``` javascript\nconst MongoClient = require(\"mongodb\").MongoClient;\n\n// Define a new Linked List class\nclass LinkedList {\n\n constructor() {}\n\n // Since the constructor cannot be an asynchronous function,\n // we are going to create an async `init` function that connects to our MongoDB \n // database.\n // Note: You will need to replace the URI here with the one\n // you get from your MongoDB Cluster. This is the same URI\n // that you used to connect the MongoDB VS Code plugin to our cluster.\n async init() {\n const uri = \"PASTE YOUR ATLAS CLUSTER URL HERE\";\n this.client = new MongoClient(uri, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n });\n\n try {\n await this.client.connect();\n console.log(\"Connected correctly to server\");\n this.col = this.client\n .db(\"YOUR DATABASE NAME HERE\")\n .collection(\"YOUR COLLECTION NAME HERE\");\n } catch (err) {\n console.log(err.stack);\n }\n }\n}\n\n// We are going to create an immediately invoked function expression (IFEE)\n// in order for us to immediately test and run the linked list class defined above.\n(async function () {\n try {\n const linkedList = new LinkedList();\n await linkedList.init();\n linkedList.resetMeta();\n linkedList.resetData();\n } catch (err) {\n // Good programmers always handle their errors\n console.log(err.stack);\n }\n})();\n```\n\nNext, let's create some helper functions to reset our DB every time we run the code so our data doesn't become cluttered with old data.\n\n``` javascript\n// This function will be responsible for cleaning up our metadata\n// function everytime we reinitialize our app.\nasync resetMeta() {\n await this.col.updateOne(\n { meta: true },\n { $set: { head: null, tail: null } },\n { upsert: true }\n );\n}\n\n// Function to clean up all our Linked List data\nasync resetData() {\n await this.col.deleteMany({ value: { $exists: true } });\n}\n```\n\nNow, let's write some helper functions to help us query and update our meta document.\n\n``` javascript\n// This function will query our collection for our single\n// meta data document. 
This document will be responsible\n// for tracking the location of the head and tail documents\n// in our Linked List.\nasync getMeta() {\n const meta = await this.col.find({ meta: true }).next();\n return meta;\n}\n\n// points to our head\nasync getHeadID() {\n const meta = await this.getMeta();\n return meta.head;\n}\n\n// Function allows us to update our head in the\n// event that the head is changed\nasync setHead(id) {\n const result = await this.col.updateOne(\n { meta: true },\n { $set: { head: id } }\n );\n return result;\n}\n\n// points to our tail\nasync getTail(data) {\n const meta = await this.getMeta();\n return meta.tail;\n}\n\n// Function allows us to update our tail in the\n// event that the tail is changed\nasync setTail(id) {\n const result = await this.col.updateOne(\n { meta: true },\n { $set: { tail: id } }\n );\n return result;\n}\n\n// Create a brand new linked list node\nasync newNode(value) {\n const newNode = await this.col.insertOne({ value, next: null });\n return newNode;\n}\n```\n\n### Add A Node\n\nThe steps to add a new node to a linked list are:\n\n- Add a new node to the current tail.\n- Update the current tails next to the new node.\n- Update your linked list to point tail to the new node.\n\n``` javascript\n// Takes a new node and adds it to our linked lis\nasync add(value) {\n const result = await this.newNode(value);\n const insertedId = result.insertedId;\n\n // If the linked list is empty, we need to initialize an empty linked list\n const head = await this.getHeadID();\n if (head === null) {\n this.setHead(insertedId);\n } else {\n // if it's not empty, update the current tail's next to the new node\n const tailID = await this.getTail();\n await this.col.updateOne({ _id: tailID }, { $set: { next: insertedId } });\n }\n // Update your linked list to point tail to the new node\n this.setTail(insertedId);\n return result;\n}\n```\n\n### Find A Node\n\nIn order to traverse a linked list, we must start at the beginning of the linked list, also known as the head. Then, we follow each *next* pointer reference until we come to the end of the linked list, or the node we are looking for. It can be implemented by using the following steps:\n\n- Start at the head node of your linked list.\n- Check if the value matches what you're searching for. If found, return that node.\n- If not found, move to the next node via the current node's next property.\n- Repeat until next is null (tail/end of list).\n\n``` javascript\n// Reads through our list and returns the node we are looking for\nasync get(index) {\n // If index is less than 0, return false\n if (index <= -1) {\n return false;\n }\n let headID = await this.getHeadID();\n let postion = 0;\n let currNode = await this.col.find({ _id: headID }).next();\n\n // Loop through the nodes starting from the head\n while (postion < index) {\n // Check if we hit the end of the linked list\n if (currNode.next === null) {\n return false;\n }\n\n // If another node exists go to next node\n currNode = await this.col.find({ _id: currNode.next }).next();\n postion++;\n }\n return currNode;\n}\n```\n\n### Delete A Node\n\nNow, let's say we want to remove a node in our linked list. In order to do this, we must again keep track of the previous node so that we can update the previous node's *next* pointer reference to the node that is being deleted *next* value is pointing to. 
Or to put it another way:\n\n- Find the node you are searching for and keep track of the previous node.\n- When found, update the previous nodes next to point to the next node referenced by the node to be deleted.\n- Delete the found node from memory.\n\nDiagram that demonstrates how linked lists remove a node from a linked list by moving pointer references\n\n``` javascript\n// reads through our list and removes desired node in the linked list\nasync remove(index) {\n const currNode = await this.get(index);\n const prevNode = await this.get(index - 1);\n\n // If index not in linked list, return false\n if (currNode === false) {\n return false;\n }\n\n // If removing the head, reassign the head to the next node\n if (index === 0) {\n await this.setHead(currNode.next);\n\n // If removing the tail, reassign the tail to the prevNode\n } else if (currNode.next === null) {\n await this.setTail(prevNode._id);\n await this.col.updateOne(\n { _id: prevNode._id },\n { $set: { next: currNode.next } }\n );\n\n // update previous node's next to point to the next node referenced by node to be deleted\n } else {\n await this.col.updateOne(\n { _id: prevNode._id },\n { $set: { next: currNode.next } }\n );\n }\n\n // Delete found node from memory\n await this.col.deleteOne({\n _id: currNode._id,\n });\n\n return true;\n}\n```\n\n### Insert A Node\n\nThe following code inserts a node after an existing node in a singly linked list. Inserting a new node before an existing one cannot be done directly; instead, one must keep track of the previous node and insert a new node after it. We can do that by following these steps:\n\n- Find the position/node in your linked list where you want to insert your new node after.\n- Update the next property of the new node to point to the node that the target node currently points to.\n- Update the next property of the node you want to insert after to point to the new node.\n\nDiagram that demonstrates how a linked list inserts a new node by moving pointer references\n\n``` javascript\n// Inserts a new node at the deisred index in the linked list\nasync insert(value, index) {\n const currNode = await this.get(index);\n const prevNode = await this.get(index - 1);\n const result = await this.newNode(value);\n const node = result.ops0];\n\n // If the index is not in the linked list, return false\n if (currNode === false) {\n return false;\n }\n\n // If inserting at the head, reassign the head to the new node\n if (index === 0) {\n await this.setHead(node._id);\n await this.col.updateOne(\n { _id: node._id },\n { $set: { next: currNode.next } }\n );\n } else {\n // If inserting at the tail, reassign the tail\n if (currNode.next === null) {\n await this.setTail(node._id);\n }\n\n // Update the next property of the new node\n // to point to the node that the target node currently points to\n await this.col.updateOne(\n { _id: prevNode._id },\n { $set: { next: node._id } }\n );\n\n // Update the next property of the node you\n // want to insert after to point to the new node\n await this.col.updateOne(\n { _id: node._id },\n { $set: { next: currNode.next } }\n );\n }\n return node;\n}\n```\n\n## Summary\n\nMany developers want to learn the fundamental Computer Science data structures and algorithms or get a refresher on them. In this author's humble opinion, the best way to learn data structures is by implementing them on your own. 
This exercise is a great way to learn data structures as well as learn the fundamentals of MongoDB CRUD operations.\n\n>When you're ready to implement your own linked list in MongoDB, check out [MongoDB Atlas, MongoDB's fully managed database-as-a-service. Atlas is the easiest way to get started with MongoDB and has a generous, forever-free tier.\n\nIf you want to learn more about linked lists and MongoDB, be sure to check out these resources.\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- Want to see me implement a Linked List using MongoDB? You check out this recording of the MongoDB Twitch Stream\n- Source Code\n- Want to learn more about MongoDB? Be sure to take a class on the MongoDB University\n- Have a question, feedback on this post, or stuck on something be sure to check out and/or open a new post on the MongoDB Community Forums:\n- Quick Start: Node.js:\n- Want to check out more cool articles about MongoDB? Be sure to check out more posts like this on the MongoDB Developer Hub\n- For additional information on Linked Lists, be sure to check out the Wikipedia article", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB"], "pageDescription": "Want to learn about one of the most important data structures in Computer Science, the Linked List, implemented with a MongoDB twist? Click here for more!", "contentType": "Tutorial"}, "title": "A Gentle Introduction to Linked Lists With MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-data-architecture-ofish-app", "action": "created", "body": "# Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps\n\nIn 2020, MongoDB partnered with the WildAid Marine Protection Program to create a mobile app for officers to use while out at sea patrolling Marine Protected Areas (MPAs) worldwide. We implemented apps for iOS, Android, and web, where they all share the same Realm back end, schema, and sync strategy. This article explains the data architecture, schema, and partitioning strategy we used. If you're developing a mobile app with Realm, this post will help you design and implement your data architecture.\n\nMPAs\u2014like national parks on land\u2014set aside dedicated coastal and marine environments for conservation. WildAid helps enable local agencies to protect their MPAs. We developed the O-FISH app for enforcement officers to search and create boarding reports while they're out at sea, patrolling the MPAs and boarding vessels for inspection.\n\nO-FISH needed to be a true offline-first application as officers will typically be without network access when they're searching and creating boarding reports. It's a perfect use case for the Realm mobile database and MongoDB Realm Sync.\n\nThis video gives a great overview of the WildAid Marine Program, the requirements for the O-FISH app, and the technologies behind the app:\n\n:youtube]{vid=54E9wfFHjiw}\n\nThis article is broken down into these sections:\n\n- [The O-FISH Application\n- System Architecture\n- Data Partitioning\n- Data Schema\n- Handling images\n- Summary\n- Resources\n\n## The O-FISH Application\n\nThere are three frontend applications.\n\nThe two mobile apps (iOS and Android) provide the same functionality. An officer logs in and can search existing boarding reports, for example, to check on past reports for a vessel before boarding it. After boarding the boat, the officer uses the app to create a new boarding report. 
The report contains information about the vessel, equipment, crew, catch, and any laws they're violating.\n\nCrucially, the mobile apps need to allow users to view and create reports even when there is no network coverage (which is the norm while at sea). Data is synchronized with other app instances and the backend database when it regains network access.\n\n \n iOS O-FISH App in Action\n\nThe web app also allows reports to be viewed and edited. It provides dashboards to visualize the data, including plotting boardings on a map. User accounts are created and managed through the web app.\n\nAll three frontend apps share a common backend Realm application. The Realm app is responsible for authenticating users, controlling what data gets synced to each mobile app instance, and persisting the data to MongoDB Atlas. Multiple \"agencies\" share the same frontend and backend apps. An officer should have access to the reports belonging to their agency. An agency is an authority responsible for enforcing the rules for one or more regional MPAs. Agencies are often named after the country they operate in. Examples of agencies would be Galapogas or Tanzania.\n\n## System Architecture\n\nThe iOS and Android mobile apps both contain an embedded Realm mobile database. The app reads and writes data to that Realm database-whether the device is connected to the network or not. Whenever the device has network coverage, Realm synchronizes the data with other devices via the Realm backend service.\n\n \n O-FISH System Architecture\n\nThe Realm database is embedded within the mobile apps, each instance storing a partition of the O-FISH data. We also need a consolidated view of all of the data that the O-FISH web app can access, and we use MongoDB Atlas for that. MongoDB Realm is also responsible for synchronizing the data with the MongoDB Atlas database.\n\nThe web app is stateless, accessing data from Atlas as needed via the Realm SDK.\n\nMongoDB Charts dashboards are embedded in the web app to provide richer, aggregated views of the data.\n\n## Data Partitioning\n\nMongoDB Realm Sync uses partitions to control what data it syncs to instances of a mobile app. You typically partition data to limit the amount of space used on the device and prevent users from accessing information they're not entitled to see or change.\n\nWhen a mobile app opens a synced Realm, it can provide a partition value to specify what data should be synced to the device.\n\nAs a developer, you must specify an attribute to use as your partition key. The rules for partition keys have some restrictions:\n\n- All synced collections use the same attribute name and type for the partition key.\n- The key can be a `string`, `objectId`, or a `long`.\n- When the app provides a partition key, only documents that have an exact match will be synced. For example, the app can't specify a set or range of partition key values.\n\nA common use case would be to use a string named \"username\" as the partition key. The mobile app would then open a Realm by setting the partition to the current user's name, ensuring that the user's data is available (but no data for other users).\n\nIf you want to see an example of creating a sophisticated partitioning strategy, then Building a Mobile Chat App Using Realm \u2013 Data Architecture describes RChat's approach (RChat is a reference mobile chat app built on Realm and MongoDB Realm). O-FISH's method is straightforward in comparison.\n\nWildAid works with different agencies around the world. 
Each officer within an agency needs access to any boarding report created by other officers in the same agency. Photos added to the app by one officer should be visible to the other officers. Officers should be offered menu options tailored to their agency\u2014an agency operating in the North Sea would want cod to be in the list of selectable species, but including clownfish would clutter the menu.\n\nWe use a string attribute named `agency` as the partitioning key to meet those requirements.\n\nAs an extra level of security, we want to ensure that an app doesn't open a Realm for the wrong partition. This could result from a coding error or because someone hacks a version of the app. When enabling Realm Sync, we can provide expressions to define whether the requesting user should be able to access a partition or not.\n\n \n Expression to Limit Sync Access to Partitions\n\nFor O-FISH, the rule is straightforward. We compare the logged-in user's agency name with the partition they're requesting to access. The Realm will be synced if and only if they're the same:\n\n``` json\n{\n \"%%user.custom_data.agency.name\": \"%%partition\"\n}\n```\n\n## Data Schema\n\nAt the highest level, the O-FISH schema is straightforward with four Realms (each with an associated MongoDB Atlas collection):\n\n- `DutyChange` records an officer going on-duty or back off-duty.\n- `Report` contains all of the details associated with the inspection of a vessel.\n- `Photo` represents a picture (either of one of the users or a photo that was taken to attach to a boarding report).\n- `MenuData` contains the agency-specific values that officers can select through the app's menus.\n\n \n\nYou might want to right-click that diagram so that you can open it in a new tab!\n\nLet's take a look at each of those four objects.\n\n### DutyChange\n\nThe app creates a `DutyChange` object when a user toggles a switch to flag that they are going on or off duty (at sea or not).\n\n \n\nThese are the Swift and Kotlin versions of the `DutyChange` class:\n\n::::tabs\n:::tab]{tabid=\"Swift\"}\n``` swift\nimport RealmSwift\n\nclass DutyChange: Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n @objc dynamic var user: User? = User()\n @objc dynamic var date = Date()\n @objc dynamic var status = \"\"\n\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}\n```\n:::\n:::tab[]{tabid=\"Kotlin\"}\n``` kotlin\nimport io.realm.RealmObject\nimport io.realm.annotations.PrimaryKey\nimport io.realm.annotations.RealmClass\nimport org.bson.types.ObjectId\nimport java.util.Date\n\n@RealmClass\nopen class DutyChange : RealmObject() {\n @PrimaryKey\n var _id: ObjectId = ObjectId.get()\n var user: User? = User()\n var date: Date = Date()\n var status: String = \"\"\n}\n```\n:::\n::::\n\nOn iOS, `DutyChange` inherits from the Realm `Object` class, and the attributes need to be made accessible to the Realm SDK by making them `dynamic` and adding the `@objc` annotation. The Kotlin app uses the `@RealmClass` annotation and inheritance from `RealmObject`.\n\nNote that there is no need to include the partition key as an attribute.\n\nIn addition to primitive attributes, `DutyChange` contains `user` which is of type `User`:\n\n::::tabs\n:::tab[]{tabid=\"Swift\"}\n``` swift\nimport RealmSwift\n\nclass User: EmbeddedObject, ObservableObject {\n @objc dynamic var name: Name? 
= Name()\n @objc dynamic var email = \"\"\n}\n```\n:::\n:::tab[]{tabid=\"Kotlin\"}\n``` kotlin\nimport io.realm.RealmObject\nimport io.realm.annotations.RealmClass\n\n@RealmClass(embedded = true)\nopen class User : RealmObject() {\n var name: Name? = Name()\n var email: String = \"\"\n}\n```\n:::\n::::\n\n`User` objects are always embedded in higher-level objects rather than being top-level Realm objects. So, the class inherits from `EmbeddedObject` rather than `Object` in Swift. The Kotlin app extends the `@RealmClass` annotation to include `(embedded = true)`.\n\nWhether created in the iOS or Android app, the `DutyChange` object is synced to MongoDB Atlas as a single `DutyChange` document that contains a `user` sub-document:\n\n``` json\n{\n \"_id\" : ObjectId(\"6059c9859a545bbceeb9e881\"),\n \"agency\" : \"Ecuadorian Galapagos\",\n \"date\" : ISODate(\"2021-03-23T10:57:09.777Z\"),\n \"status\" : \"At Sea\",\n \"user\" : {\n \"email\" : \"global-admin@clusterdb.com\",\n \"name\" : {\n \"first\" : \"Global\",\n \"last\" : \"Admin\"\n }\n }\n}\n```\n\nThere's a [Realm schema associated with each collection that's synced with Realm Sync. The schema can be viewed and managed through the Realm UI:\n\n \n\n### Report\n\nThe `Report` object is at the heart of the O-FISH app. A report is what an officer reviews for relevant data before boarding a boat. A report is where the officer records all of the details when they've boarded a vessel for an inspection.\n\nIn spite of appearances, it pretty straightforward. It looks complex because there's a lot of information that an officer may need to include in their report.\n\nStarting with the top-level object - `Report`:\n\n::::tabs\n:::tab]{tabid=\"Swift\"}\n``` swift\nimport RealmSwift\n\nclass Report: Object, Identifiable {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n let draft = RealmOptional()\n @objc dynamic var reportingOfficer: User? = User()\n @objc dynamic var timestamp = NSDate()\n let location = List()\n @objc dynamic var date: NSDate? = NSDate()\n @objc dynamic var vessel: Boat? = Boat()\n @objc dynamic var captain: CrewMember? = CrewMember()\n let crew = List()\n let notes = List()\n @objc dynamic var inspection: Inspection? = Inspection()\n\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}\n```\n:::\n:::tab[]{tabid=\"Kotlin\"}\n``` kotlin\n@RealmClass\nopen class Report : RealmObject() {\n @PrimaryKey\n var _id: ObjectId = ObjectId.get()\n var reportingOfficer: User? = User()\n var timestamp: Date = Date()\n @Required\n var location: RealmList = RealmList() // In order longitude, latitude\n var date: Date? = Date()\n var vessel: Boat? = Boat()\n var captain: CrewMember? = CrewMember()\n var crew: RealmList = RealmList()\n var notes: RealmList = RealmList()\n var draft: Boolean? = false\n var inspection: Inspection? = Inspection()\n}\n```\n:::\n::::\n\nThe `Report` class contains Realm `List` s (`RealmList` in Kotlin) to store lists of instances of classes such as `CrewMember`.\n\nSome of the classes embedded in `Report` contain further embedded classes. There are 19 classes in total that make up a `Report`. 
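To give a sense of what one of those embedded classes looks like, here is a deliberately simplified sketch written in the same style as the app's other classes. The attributes shown are illustrative assumptions rather than the actual O-FISH definition:

``` swift
import RealmSwift

// Illustrative sketch only: a trimmed-down embedded class in the style of CrewMember.
// The real definitions (and all of their attributes) live in the O-FISH repos.
class CrewMember: EmbeddedObject, ObservableObject {
    @objc dynamic var name = ""
    @objc dynamic var license = ""
    // Embedded classes can themselves embed other classes,
    // such as the Attachments class shown later in this article.
    @objc dynamic var attachments: Attachments? = Attachments()
}
```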
You can view all of the component classes in the [iOS and Android repos.\n\nOnce synced to Atlas, the Report is represented as a single `BoardingReports` document (the name change is part of the schema definition):\n\n \n\nNote that Realm lists are mapped to JSON/BSON arrays.\n\n### Photo\n\nA single boarding report could contain many large photographs, and so we don't want to embed those within the `Report` object (as an object could grow very large and even exceed MongoDB's 16 MB document limit). Instead, the `Report` object (and its embedded objects) store references to `Photo` objects. Each photo is represented by a top-level `Photo` Realm object. As an example, `Attachments` contains a Realm `List` of strings, each of which identifies a `Photo` object. Handling images will step through how we implemented this.\n\n::::tabs\n:::tab]{tabid=\"Swift\"}\n``` swift\nclass Attachments: EmbeddedObject, ObservableObject {\n let notes = List()\n let photoIDs = List()\n}\n```\n:::\n:::tab[]{tabid=\"Kotlin\"}\n``` kotlin\n@RealmClass(embedded = true)\nopen class Attachments : RealmObject() {\n @Required\n var notes: RealmList = RealmList()\n\n @Required\n var photoIDs: RealmList = RealmList()\n}\n```\n:::\n::::\n\nThe general rule is that it isn't the best practice to store images in a database as they consume a lot of valuable storage space. A typical solution is to keep the image in some store with plentiful, cheap capacity (e.g., a block store such as cloud storage - Amazon S3 of Microsoft Blob Storage.) The O-FISH app's issue is that it's probable that the officer's phone has no internet connectivity when they create the boarding report and attach photos, so uploading them to cloud object storage can't be done at that time. As a compromise, O-FISH stores the image in the `Photo` object, but when the device has internet access, that image is uploaded to cloud object storage, removed from the `Photo` object and replaced with the S3 link. This is why the `Photo` includes both an optional binary `picture` attribute and a `pictureURL` field for the S3 link:\n\n::::tabs\n:::tab[]{tabid=\"Swift\"}\n``` swift\nclass Photo: Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n @objc dynamic var thumbNail: NSData?\n @objc dynamic var picture: NSData?\n @objc dynamic var pictureURL = \"\"\n @objc dynamic var referencingReportID = \"\"\n @objc dynamic var date = NSDate()\n\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}\n```\n:::\n:::tab[]{tabid=\"Kotlin\"}\n``` kotlin\nopen class Photo : RealmObject() {\n @PrimaryKey\n var _id: ObjectId = ObjectId.get()\n var thumbNail: ByteArray? = null\n var picture: ByteArray? = null\n var pictureURL: String = \"\"\n var referencingReportID: String = \"\"\n var date: Date = Date()\n}\n```\n:::\n::::\n\nNote that we include the `referencingReportID` attribute to make it easy to delete all `Photo` objects associated with a `Report`.\n\nThe officer also needs to review past boarding reports (and attached photos), and so the `Photo` object also includes a thumbnail image for off-line use.\n\n### MenuData\n\nEach agency needs the ability to customize what options are added in the app's menus. For example, agencies operating in different countries will need to define the list of locally applicable laws. 
Each agency has a `MenuData` instance with a list of strings for each of the customizable menus:\n\n::::tabs\n:::tab[]{tabid=\"Swift\"}\n``` swift\nclass MenuData: Object {\n @objc dynamic var _id = ObjectId.generate()\n let countryPickerPriorityList = List()\n let ports = List()\n let fisheries = List()\n let species = List()\n let emsTypes = List()\n let activities = List()\n let gear = List()\n let violationCodes = List()\n let violationDescriptions = List()\n\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}\n```\n:::\n:::tab[]{tabid=\"Kotlin\"}\n``` kotlin\n@RealmClass\nopen class MenuData : RealmModel {\n @PrimaryKey\n var _id: ObjectId = ObjectId.get()\n @Required\n var countryPickerPriorityList: RealmList = RealmList()\n @Required\n var ports: RealmList = RealmList()\n @Required\n var fisheries: RealmList = RealmList()\n @Required\n var species: RealmList = RealmList()\n @Required\n var emsTypes: RealmList = RealmList()\n @Required\n var activities: RealmList = RealmList()\n @Required\n var gear: RealmList = RealmList()\n @Required\n var violationCodes: RealmList = RealmList()\n @Required\n var violationDescriptions: RealmList = RealmList()\n}\n```\n:::\n::::\n\n## Handling images\n\nWhen MongoDB Realm Sync writes a new `Photo` document to Atlas, it contains the full-sized image in the `picture` attribute. It consumes space that we want to free up by moving that image to Amazon S3 and storing the resulting S3 location in `pictureURL`. Those changes are then synced back to the mobile apps, which can then decide how to get an image to render using this algorithm:\n\n1. If `picture` contains an image, use it.\n2. Else, if `pictureURL` is set and the device is connected to the internet, then fetch the image from cloud object storage and use the returned image.\n3. Else, use the `thumbNail`.\n\n \n\nWhen the `Photo` document is written to Atlas, the `newPhoto` database trigger fires, which invokes a function named `newPhoto` function.\n\n \n\nThe trigger passes the `newPhoto` Realm function the `changeEvent`, which contains the new `Photo` document. 
The function invokes the `uploadImageToS3` Realm function and then updates the `Photo` document by removing the image and setting the URL:\n\n``` javascript\nexports = function(changeEvent){\nconst fullDocument = changeEvent.fullDocument;\nconst image = fullDocument.picture;\nconst agency = fullDocument.agency;\nconst id = fullDocument._id;\nconst imageName = `${id}`;\n\nif (typeof image !== 'undefined') {\n console.log(`Requesting upload of image: ${imageName}`);\n context.functions.execute(\"uploadImageToS3\", imageName, image)\n .then (() => {\n console.log('Uploaded to S3');\n const bucketName = context.values.get(\"photoBucket\");\n const imageLink = `https://${bucketName}.s3.amazonaws.com/${imageName}`;\n const collection = context.services.get('mongodb-atlas').db(\"wildaid\").collection(\"Photo\");\n collection.updateOne({\"_id\": fullDocument._id}, {$set: {\"pictureURL\": imageLink}, $unset: {picture: null}});\n },\n (error) => {\n console.error(`Failed to upload image to S3: ${error}`);\n });\n} else {\n console.log(\"No new photo to upload this time\");\n}\n};\n```\n\n`uploadImageToS3` uses Realm's AWS integration to upload the image:\n\n``` javascript\nexports = function(name, image) {\n const s3 = context.services.get('AWS').s3(context.values.get(\"awsRegion\"));\n const bucket = context.values.get(\"photoBucket\");\n console.log(`Bucket: ${bucket}`);\n return s3.PutObject({\n \"Bucket\": bucket,\n \"Key\": name,\n \"ACL\": \"public-read\",\n \"ContentType\": \"image/jpeg\",\n \"Body\": image\n });\n};\n```\n\n## Summary\n\nWe've covered the common data model used across the iOS, Android, and backend Realm apps. (The [web app also uses it, but that's beyond the scope of this article.)\n\nThe data model is deceptively simple. There's a lot of nested information that can be captured in each boarding report, resulting in 20+ classes, but there are only four top-level classes in the app, with the rest accounted for by embedding. The only other type of relationship is the references to instances of the `Photo` class from other classes (required to prevent the `Report` objects from growing too large).\n\nThe partitioning strategy is straightforward. Partitioning for every class is based on the name of the user's agency. That pattern is going to appear in many apps\u2014just substitute \"agency\" with \"department,\" \"team,\" \"user,\" \"country,\" ...\n\nSuppose you determine that your app needs a different partitioning strategy for different classes. In that case, you can implement a more sophisticated partitioning strategy by encoding a key-value pair in a string partition key.\n\nFor example, if we'd wanted to partition the reports by username (each officer can only access reports they created) and the menu items by agency, then you could partition on a string attribute named `partition`. For the `Report` objects, it would be set to pairs such as `partition = \"user=bill@some-domain.com\"` whereas for a `MenuData` object it might be set to `partition = \"agency=Galapagos\"`. 
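To make that concrete, here is a minimal sketch (not code from the O-FISH backend) of how a sync permission function could parse such a key-value partition string and check it against the logged-in user's custom data:

``` javascript
// Hypothetical sketch: decide whether a user may sync a "key=value" partition.
// The custom_data attribute names are assumptions for illustration only.
exports = function(partition) {
  const user = context.user;
  const [key, value] = partition.split("=");

  switch (key) {
    case "user":
      // Report partitions: only the officer who created them
      return user.custom_data.email === value;
    case "agency":
      // MenuData partitions: any officer in the same agency
      return user.custom_data.agency.name === value;
    default:
      // Unknown partition key: reject the sync request
      return false;
  }
};
```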
Building a Mobile Chat App Using Realm \u2013 Data Architecture steps through designing these more sophisticated strategies.\n\n## Resources\n\n- O-FISH GitHub repos\n - iOS.\n - Android.\n - Web.\n - Realm Backend.\n- Read Building a Mobile Chat App Using Realm \u2013 Data Architecture to understand the data model and partitioning strategy behind the RChat app-an example of a more sophisticated partitioning strategy.\n- If you're building your first SwiftUI/Realm app, then check out Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n", "format": "md", "metadata": {"tags": ["Realm"], "pageDescription": "Understand the data model and partitioning scheme used for WildAid's O-FISH app and how you can adapt them for your own mobile apps.", "contentType": "Tutorial"}, "title": "Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/php/laravel-mongodb-tutorial", "action": "created", "body": "# How To Build a Laravel + MongoDB Back End Service\n\nLaravel is a leading PHP framework that vastly increases the productivity of PHP developers worldwide. I come from a WordPress background, but when asked to build a web service for a front end app, Laravel and MongoDB come to mind, especially when combined with the MongoDB Atlas developer data platform.\n\nThis Laravel MongoDB tutorial addresses prospective and existing Laravel developers considering using MongoDB as a database. \n\nLet's create a simple REST back end for a front-end app and go over aspects of MongoDB that might be new. Using MongoDB doesn't affect the web front-end aspect of Laravel, so we'll use Laravel's built-in API routing in this article.\n\nMongoDB support in Laravel is provided by the official mongodb/laravel-mongodb package, which extends Eloquent, Laravel's built-in ORM. \n\nFirst, let's establish a baseline by creating a default Laravel app. We'll mirror some of the instructions provided on our MongoDB Laravel Integration page, which is the primary entry point for all things Laravel at MongoDB. Any Laravel environment should work, but we'll be using some Linux commands under Ubuntu in this article.\n\n> Laravel MongoDB Documentation\n\n### Prerequisites\n\n- A MongoDB Atlas cluster\n - Create a free cluster and load our sample data.\u00a0\n- A code editor\n - **Optional**: We have a MongoDB VS Code extension that makes it very easy to browse the database(s).\n\n## Getting the Laravel web server up and running\n\n**Note**: We'll go over creating the Laravel project with Composer but the article's code repository is available.\n\nThe \"Setting Up and Configuring Your Laravel Project\" instructions in the MongoDB and Laravel Integration show how to configure a Laravel-MongoDB development environment. We'll cover the Laravel application creation and the MongoDB configuration below.\n\nHere are handy links, just in case:\u00a0\n\n- Official Laravel installation instructions (10.23.0 here)\n- Official PHP installation instructions (PHP 8.1.6+ here)\n- Install Composer (Composer 2.3.5 here)\n- The MongoDB PHP extension (1.13.0 here)\n\n## Create a Laravel project\n\nWith our development environment working, let's create a Laravel project by creating a Laravel project directory. 
From inside that new directory, create a new Laravel project called `laraproject` by running the command, which specifies using Laravel:\n\n`composer create-project laravel/laravel laraproject`\n\nAfter that, your directory structure should look like this:\n\n```.\n\u2514\u2500\u2500 ./laraproject\n\u00a0\u00a0\u00a0\u00a0\u251c\u2500\u2500 ./app\n\u00a0\u00a0\u00a0\u00a0\u251c\u2500\u2500 ./artisan\n\u00a0\u00a0\u00a0\u00a0\u251c\u2500\u2500 ./bootstrap\n\u00a0\u00a0\u00a0\u00a0\u251c\u2500\u2500 ...\n```\n\nOnce our development environment is properly configured, we can browse to the Laravel site (likely 'localhost', for most people) and view the homepage:\n\n## Add a Laravel to MongoDB connection\n\nCheck if the MongoPHP driver is installed and running\nTo check the MongoDB driver is up and running in our web server, we can add a webpage to our Laravel website. in the code project, open `/routes/web.php` and add a route as follows:\n\n Route::get('/info', function () {\n phpinfo();\n });\n\nSubsequently visit the web page at localhost/info/ and we should see the PHPinfo page. Searching for the MongoDB section in the page, we should see something like the below. It means the MongoDB PHP driver is loaded and ready. If there are experience errors, our MongoDB PHP error handling goes over typical issues.\n\nWe can use Composer to add the Laravel MongoDB package to the application. In the command prompt, go to the project's directory and run the command below to add the package to the `/vendor/` directory.\n\n`composer require mongodb/laravel-mongodb:4.0.0`\n\nNext, update the database configuration to add a MongoDB connection string and credentials. Open the `/config/database.php` file and update the 'connection' array as follows:\n\n 'connections' => \n 'mongodb' => [\n 'driver' => 'mongodb',\n 'dsn' => env('MONGODB_URI'),\n 'database' => 'YOUR_DATABASE_NAME',\n ],\n\n`env('MONGODB_URI')` refers to the content of the default `.env` file of the project. Make sure this file does not end up in the source control. Open the `/.env` file and add the DB_URI environment variable with the connection string and credentials in the form:\n\n`MONGODB_URI=mongodb+srv://USERNAME:PASSWORD@clustername.subdomain.mongodb.net/?retryWrites=true&w=majority`\n\nYour connection string may look a bit different. Learn [how to get the connection string in Atlas. Remember to allow the web server's IP address to access the MongoDB cluster. Most developers will add their current IP address to the cluster.\n\nIn `/config/database.php`, we can optionally set the default database connection. At the top of the file, change 'default' to this:\n\n` 'default' => env('DB_CONNECTION', 'mongodb'),`\n\nOur Laravel application can connect to our MongoDB database. Let's create an API endpoint that pings it. In `/routes/api.php`, add the route below, save, and visit `localhost/api/ping/`. The API should return the object {\"msg\": \"MongoDB is accessible!\"}. If there's an error message, it's probably a configuration issue. Here are some general PHP MongoDB error handling tips.\n\n Route::get('/ping', function (Request $request) { \n \u00a0\u00a0\u00a0\u00a0$connection = DB::connection('mongodb');\n \u00a0\u00a0\u00a0\u00a0$msg = 'MongoDB is accessible!';\n \u00a0\u00a0\u00a0\u00a0try { \n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0$connection->command('ping' => 1]); \n } catch (\\Exception $e) { \n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0$msg = 'MongoDB is not accessible. Error: ' . 
$e->getMessage();\n \u00a0\u00a0\u00a0\u00a0}\n \u00a0\u00a0\u00a0\u00a0return ['msg' => $msg];\n });\n\n## Create data with Laravel's Eloquent\u00a0\n\nLaravel comes with [Eloquent, an ORM that abstracts the database back end so users can use different databases utilizing a common interface. Thanks to the Laravel MongoDB package, developers can opt for a MongoDB database to benefit from a flexible schema, excellent performance, and scalability.\n\nEloquent has a \"Model\" class, the interface between our code and a specific database table (or \"collection,\" in MongoDB terminology). Instances of the Model classes represent rows of tables in relational databases.\n\nIn MongoDB, they are documents in the collection. In relational databases, we can set values only for columns defined in the database, but MongoDB allows any field to be set.\n\nThe models can define fillable fields if we want to enforce a document schema in our application and prevent errors like name typos. This is not required if we want full flexibility of being schemaless to be faster.\n\nFor new Laravel developers, there are many Eloquent features and philosophies. The official Eloquent documentation is the best place to learn more about that. For now, **we will highlight the most important aspects** of using MongoDB with Eloquent. We can use both MongoDB and an SQL database in the same Laravel application. Each model is associated with one connection or the other.\n\n### Classic Eloquent model\n\nFirst, we create a classic model with its associated migration code by running the command:\n\n`php artisan make:model CustomerSQL --migration`\n\nAfter execution, the command created two files, `/app/Models/CustomerSQL.php` and `/database/migrations/YY_MM_DD_xxxxxx_create_customer_s_q_l_s_table.php`. The migration code is meant to be executed once in the prompt to initialize the table and schema. In the extended Migration class, check the code in the `up()` function.\n\nWe'll edit the migration's `up()` function to build a simple customer schema like this:\n\n public function up()\n {\n Schema::connection('mysql')->create('customer_sql', function (Blueprint $table) {\n $table->id();\n $table->uuid('guid')->unique();\n $table->string('first_name');\n $table->string('family_name');\n $table->string('email');\n $table->text('address');\n $table->timestamps();\n });\n }\n\nOur migration code is ready, so let's execute it to build the table and index associated with our Eloquent model.\n\n`php artisan migrate --path=/database/migrations/YY_MM_DD_xxxxxx_create_customer_s_q_l_s_table.php`\n\nIn the MySQL database, the migration created a 'customer_sql' table with the required schema, along with the necessary indexes. Laravel keeps track of which migrations have been executed in the 'migrations' table.\n\nNext, we can modify the model code in `/app/Models/CustomerSQL.php` to match our schema. 
\n\n // This is the standard Eloquent Model\n use Illuminate\\Database\\Eloquent\\Model;\n class CustomerSQL extends Model\n {\n \u00a0\u00a0\u00a0\u00a0use HasFactory;\n \u00a0\u00a0\u00a0\u00a0// the selected database as defined in /config/database.php\n \u00a0\u00a0\u00a0\u00a0protected $connection = 'mysql';\n \u00a0\u00a0\u00a0\u00a0// the table as defined in the migration\n \u00a0\u00a0\u00a0\u00a0protected $table= 'customer_sql';\n \u00a0\u00a0\u00a0\u00a0// our selected primary key for this model\n \u00a0\u00a0\u00a0\u00a0protected $primaryKey = 'guid';\n \u00a0\u00a0\u00a0\u00a0//the attributes' names that match the migration's schema\n \u00a0\u00a0\u00a0\u00a0protected $fillable = 'guid', 'first_name', 'family_name', 'email', 'address'];\n }\n\n### MongoDB Eloquent model\n\nLet's create an Eloquent model for our MongoDB database named \"CustomerMongoDB\" by running this Laravel prompt command from the project's directory\"\n\n`php artisan make:model CustomerMongoDB`\n\nLaravel creates a `CustomerMongoDB` class in the file `\\models\\CustomerMongoDB.php` shown in the code block below. By default, models use the 'default' database connection, but we can specify which one to use by adding the `$connection` member to the class. Likewise, it is possible to specify the collection name via a `$collection` member.\n\nNote how the base model class is replaced in the 'use' statement. This is necessary to set \"_id\" as the primary key and profit from MongoDB's advanced features like array push/pull.\n\n //use Illuminate\\Database\\Eloquent\\Model;\n use MongoDB\\Laravel\\Eloquent\\Model;\n \n class CustomerMongoDB extends Model\n {\n \u00a0\u00a0\u00a0\u00a0use HasFactory;\n \n \u00a0\u00a0\u00a0\u00a0// the selected database as defined in /config/database.php\n protected $connection = 'mongodb';\n \n \u00a0\u00a0\u00a0\u00a0// equivalent to $table for MySQL\n \u00a0\u00a0\u00a0\u00a0protected $collection = 'laracoll';\n \n \u00a0\u00a0\u00a0\u00a0// defines the schema for top-level properties (optional).\n \u00a0\u00a0\u00a0\u00a0protected $fillable = ['guid', 'first_name', 'family_name', 'email', 'address'];\n }\n\nThe extended class definition is nearly identical to the default Laravel one. Note that `$table` is replaced by `$collection` to use MongoDB's naming. That's it.\n\nWe can still use Eloquent Migrations with MongoDB (more on that below), but defining the schema and creating a collection with a Laravel-MongoDB Migration is optional because of MongoDB's flexible schema. At a high level, each document in a MongoDB collection can have a different schema.\n\nIf we want to [enforce a schema, we can! MongoDB has a great schema validation mechanism that works by providing a validation document when manually creating the collection using db.createcollection(). We'll cover this in an upcoming article.\n\n## CRUD with Eloquent\n\nWith the models ready, creating data for a MongoDB back end isn't different, and that's what we expect from an ORM.\\\n\nBelow, we can compare the `/api/create_eloquent_mongo/` and `/api/create_eloquent_sql/` API endpoints. 
The code is identical, except for the different `CustomerMongoDB` and `CustomerSQL` model names.\n\n Route::get('/create_eloquent_sql/', function (Request $request) {\n \u00a0\u00a0\u00a0\u00a0$success = CustomerSQL::create(\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'guid'=> 'cust_0000',\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'first_name'=> 'John',\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'family_name' => 'Doe',\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'email' => 'j.doe@gmail.com',\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'address' => '123 my street, my city, zip, state, country'\n \u00a0\u00a0\u00a0\u00a0]);\n \n \u00a0\u00a0\u00a0\u00a0...\n });\n \n Route::get('/create_eloquent_mongo/', function (Request $request) {\n \u00a0\u00a0\u00a0\u00a0$success = CustomerMongoDB::create([\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'guid'=> 'cust_1111',\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'first_name'=> 'John',\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'family_name' => 'Doe',\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'email' => 'j.doe@gmail.com',\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'address' => '123 my street, my city, zip, state, country'\n \u00a0\u00a0\u00a0\u00a0]);\n \n \u00a0\u00a0\u00a0\u00a0...\n });\n\nAfter adding the document, we can retrieve it using Eloquent's \"where\" function as follows:\n\n Route::get('/find_eloquent/', function (Request $request) {\n \u00a0\u00a0\u00a0\u00a0$customer = CustomerMongoDB::where('guid', 'cust_1111')->get();\n \u00a0\u00a0\u00a0\u00a0...\n });\n\nEloquent allows developers to find data using complex queries with multiple matching conditions, and there's more to learn by studying both Eloquent and the MongoDB Laravel extension. The [Laravel MongoDB query tests are an excellent place to look for additional syntax examples and will be kept up-to-date.\n\nOf course, we can also **Update** and **Delete** records using Eloquent as shown in the code below:\n\n Route::get('/update_eloquent/', function (Request $request) {\n $result = CustomerMongoDB::where('guid', 'cust_1111')->update( 'first_name' => 'Jimmy'] );\n \u00a0\u00a0\u00a0\u00a0...\n });\n \n Route::get('/delete_eloquent/', function (Request $request) {\n \u00a0\u00a0\u00a0\u00a0$result = CustomerMongoDB::where('guid', 'cust_1111')->delete();\n \u00a0\u00a0\u00a0\u00a0...\n });\n\nEloquent is an easy way to start with MongoDB, and things work very much like one would expect. Even with a simple schema, developers can benefit from great scalability, high data reliability, and cluster availability with MongoDB Atlas' fully-managed [clusters and sharding.\n\nAt this point, our MongoDB-connected back-end service is up and running, and this could be the end of a typical \"CRUD\" article. However, MongoDB is capable of much more, so keep reading.\n\n## Unlock the full power of MongoDB\n\nTo extract the full power of MongoDB, it's best to fully utilize its document model and native Query API.\n\nThe document model is conceptually like a JSON object, but it is based on BSON (a binary representation with more fine-grained typing) and backed by a high-performance storage engine. Document supports complex BSON types, including object, arrays, and regular expressions. 
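As an illustration (with made-up values, in the shape of the customer documents created later in this tutorial), a single document can nest an object and an array directly:

``` json
{
  "guid": "cust_demo",
  "first_name": "Jane",
  "family_name": "Doe",
  "email": ["jane@example.com", "jane@work.com"],
  "address": {
    "street": "1 demo street",
    "city": "demo city",
    "zip": "12345"
  }
}
```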
Its native Query API can efficiently access and process such data.\n\n### Why is the document model great?\n\nLet's discuss a few benefits of the document model.\n\n#### It reduces or eliminates joins\n\nEmbedded documents and arrays paired with data modeling allow developers to avoid expensive database \"join\" operations, especially on the most critical workloads, queries, and huge collections. If needed, MongoDB does support join-like operations with the $lookup operator, but the document model lets developers keep such operations to a minimum or get rid of them entirely. Reducing joins also makes it easier to shard collections across multiple servers to increase capacity.\n\n#### It reduces workload costs\n\nThis NoSQL strategy is critical to increasing **database workload efficiency**, to **reduce billing**. That's why Amazon eliminated most of its internal relational database workloads years ago. Learn more by watching Rick Houlihan, who led this effort at Amazon, tell that story on YouTube, or read about it on our blog. He is now MongoDB's Field CTO for Strategic Accounts.\n\n#### It helps avoid downtime during schema updates\n\nMongoDB documents are contained within \"collections\" (tables, in SQL parlance). The big difference between SQL and MongoDB is that each document in a collection can have a different schema. We could store completely different schemas in the same collection. This enables strategies like schema versioning to **avoid downtime during schema updates** and more!\n\nData modeling goes beyond the scope of this article, but it is worth spending 15 minutes watching the Principles of Data Modeling for MongoDB video featuring Daniel Coupal, the author of MongoDB Data Modeling and Schema Design, a book that many of us at MongoDB have on our desks. At the very least, read this short 6 Rules of Thumb for MongoDB Schema article.\n\n## CRUD with nested data\n\nThe Laravel MongoDB Eloquent extension does offer MongoDB-specific operations for nested data. However, adding nested data is also very intuitive without using the embedsMany() and embedsOne() methods provided by the extension.\n\nAs shown earlier, it is easy to define the top-level schema attributes with Eloquent. However, it is more tricky to do so when using arrays and embedded documents.\n\nFortunately, we can intuitively create the Model's data structures in PHP. In the example below, the 'address' field has gone from a string to an object type. The 'email' field went from a string to an array of strings] type. 
Arrays and objects are not supported types in MySQL.\n\n Route::get('/create_nested/', function (Request $request) {\n \u00a0\u00a0\u00a0\u00a0$message = \"executed\";\n \u00a0\u00a0\u00a0\u00a0$success = null;\n \n \u00a0\u00a0\u00a0\u00a0$address = new stdClass;\n \u00a0\u00a0\u00a0\u00a0$address->street = '123 my street name';\n \u00a0\u00a0\u00a0\u00a0$address->city = 'my city';\n \u00a0\u00a0\u00a0\u00a0$address->zip= '12345';\n \u00a0\u00a0\u00a0\u00a0$emails = ['j.doe@gmail.com', 'j.doe@work.com'];\n \n \u00a0\u00a0\u00a0\u00a0try {\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0$customer = new CustomerMongoDB();\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0$customer->guid = 'cust_2222';\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0$customer->first_name = 'John';\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0$customer->family_name= 'Doe';\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0$customer->email= $emails;\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0$customer->address= $address;\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0$success = $customer->save(); // save() returns 1 or 0\n \u00a0\u00a0\u00a0\u00a0}\n \u00a0\u00a0\u00a0\u00a0catch (\\Exception $e) {\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0$message = $e->getMessage();\n \u00a0\u00a0\u00a0\u00a0}\n \u00a0\u00a0\u00a0\u00a0return ['msg' => $message, 'data' => $success];\n });\n\nIf we run the `localhost/api/create_nested/` API endpoint, it will create a document as the JSON representation below shows. The `updated_at` and `created_at` datetime fields are automatically added by Eloquent, and it is possible to disable this Eloquent feature (check the [Timestamps in the official Laravel documentation).\n\n## Introducing the MongoDB Query API\n\nMongoDB has a native query API optimized to manipulate and transform complex data. There's also a powerful aggregation framework with which we can pipe data from one stage to another, making it intuitive for developers to create very complex aggregations. The native query is accessible via the MongoDB \"collection\" object.\n\n### Eloquent and \"raw queries\"\n\nEloquent has an intelligent way of exposing the full capabilities of the underlying database by using \"raw queries,\" which are sent \"as is\" to the database without any processing from the Eloquent Query Builder, thus exposing the native query API. Read about raw expressions in the official Laravel documentation.\n\nWe can perform a raw native MongoDB query from the model as follows, and the model will return an Eloquent collection\n\n $mongodbquery = 'guid' => 'cust_1111'];\n \n // returns a \"Illuminate\\Database\\Eloquent\\Collection\" Object\n $results = CustomerMongoDB::whereRaw( $mongodbquery )->get();\n\nIt's also possible to obtain the native MongoDB collection object and perform a query that will return objects such as native MongoDB documents or cursors:\n\n $mongodbquery = ['guid' => 'cust_1111', ];\n \n $mongodb_native_collection = DB::connection('mongodb')->getCollection('laracoll');\n \n $document = $mongodb_native_collection->findOne( $mongodbquery ); \n $cursor = $mongodb_native_collection->find( $mongodbquery ); \n\nUsing the MongoDB collection directly is the sure way to access all the MongoDB features. Typically, people start using the native collection.insert(), collection.find(), and collection.update() first.\n\nCommon MongoDB Query API functions work using a similar logic and require matching conditions to identify documents for selection or deletion. 
An optional projection defines which fields we want in the results.\n\nWith Laravel, there are several ways to query data, and the /find_native/ API endpoint below shows how to use whereRaw(). Additionally, we can use MongoDB's findOne() and find() collection methods that return a document and a cursor, respectively.\n\n /*\n Find records using a native MongoDB Query\n 1 - with Model->whereRaw()\n 2 - with native Collection->findOne()\n 3 - with native Collection->find()\n */\n \n Route::get('/find_native/', function (Request $request) {\n     // a simple MongoDB query that looks for a customer based on the guid\n     $mongodbquery = ['guid' => 'cust_2222'];\n \n     // Option #1\n     //==========\n     // use Eloquent's whereRaw() function. This is the easiest way to stay close to the Laravel paradigm\n     // returns a \"Illuminate\\Database\\Eloquent\\Collection\" Object\n \n     $results = CustomerMongoDB::whereRaw( $mongodbquery )->get();\n \n     // Option #2 & #3\n     //===============\n     // use the native MongoDB driver Collection object. with it, you can use the native MongoDB Query API\n     $mdb_collection = DB::connection('mongodb')->getCollection('laracoll');\n \n     // find the first document that matches the query\n     $mdb_bsondoc= $mdb_collection->findOne( $mongodbquery ); // returns a \"MongoDB\\Model\\BSONDocument\" Object\n \n     // if we want to convert the MongoDB Document to a Laravel Model, use the Model's newFromBuilder() method\n     $cust= new CustomerMongoDB();\n     $one_doc = $cust->newFromBuilder((array) $mdb_bsondoc);\n \n     // find all documents that matches the query\n     // Note: we're using find without any arguments, so ALL documents will be returned\n \n     $mdb_cursor = $mdb_collection->find( ); // returns a \"MongoDB\\Driver\\Cursor\" object\n     $cust_array = array();\n     foreach ($mdb_cursor->toArray() as $bson) {\n         $cust_array[] = $cust->newFromBuilder( $bson );\n     }\n \n     return ['msg' => 'executed', 'whereraw' => $results, 'document' => $one_doc, 'cursor_array' => $cust_array];\n });\n\nUpdating documents is done by providing a list of updates in addition to the matching criteria. Here's an example using updateOne(), but updateMany() works similarly. 
updateOne() returns a document that contains information about how many documents were matched and how many were actually modified.\n\n /*\n \u00a0Update a record using a native MongoDB Query\n */\n Route::get('/update_native/', function (Request $request) {\n \u00a0\u00a0\u00a0\u00a0$mdb_collection = DB::connection('mongodb')->getCollection('laracoll');\n \u00a0\u00a0\u00a0\u00a0$match = 'guid' => 'cust_2222'];\n \u00a0\u00a0\u00a0\u00a0$update = ['$set' => ['first_name' => 'Henry', 'address.street' => '777 new street name'] ];\n \u00a0\u00a0\u00a0\u00a0$result = $mdb_collection->updateOne($match, $update );\n \u00a0\u00a0\u00a0\u00a0return ['msg' => 'executed', 'matched_docs' => $result->getMatchedCount(), 'modified_docs' => $result->getModifiedCount()];\n });\n\nDeleting documents is as easy as finding them. Again, there's a matching criterion, and the API returns a document indicating the number of deleted documents.\n\n Route::get('/delete_native/', function (Request $request) {\n \u00a0\u00a0\u00a0\u00a0$mdb_collection = DB::connection('mongodb')->getCollection('laracoll');\n \u00a0\u00a0\u00a0\u00a0$match = ['guid' => 'cust_2222'];\n \u00a0\u00a0\u00a0\u00a0$result = $mdb_collection->deleteOne($match );\n \u00a0\u00a0\u00a0\u00a0return ['msg' => 'executed', 'deleted_docs' => $result->getDeletedCount() ];\n });\n\n### Aggregation pipeline\n\nSince we now have access to the MongoDB native API, let's introduce the [aggregation pipeline. An aggregation pipeline is a task in MongoDB's aggregation framework. Developers use the aggregation framework to perform various tasks, from real-time dashboards to \"big data\" analysis.\n\nWe will likely use it to query, filter, and sort data at first. The aggregations introduction of the free online book Practical MongoDB Aggregations by Paul Done gives a good overview of what can be done with it.\n\nAn aggregation pipeline consists of multiple stages where the output of each stage is the input of the next, like piping in Unix.\n\nWe will use the \"sample_mflix\" sample database that should have been loaded when creating our Atlas cluster. Laravel lets us access multiple MongoDB databases in the same app, so let's add the sample_mflix database (to `database.php`):\n\n 'mongodb_mflix' => \n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'driver' => 'mongodb',\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'dsn' => env('DB_URI'),\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'database' => 'sample_mflix',\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0],\n\nNext, we can build an /aggregate/ API endpoint and define a three-stage aggregation pipeline to fetch data from the \"movies\" collection, compute the average movie rating per genre, and return a list. 
More details about this movie ratings aggregation can be found in the MongoDB aggregation documentation.\n\n Route::get('/aggregate/', function (Request $request) {\n     $mdb_collection = DB::connection('mongodb_mflix')->getCollection('movies');\n \n     $stage0 = ['$unwind' => ['path' => '$genres']];\n     $stage1 = ['$group' => ['_id' => '$genres', 'averageGenreRating' => ['$avg' => '$imdb.rating']]];\n     $stage2 = ['$sort' => ['averageGenreRating' => -1]];\n \n     $aggregation = [$stage0, $stage1, $stage2];\n     $mdb_cursor = $mdb_collection->aggregate( $aggregation );\n     return ['msg' => 'executed', 'data' => $mdb_cursor->toArray() ];\n });\n\nThis shows how easy it is to compose several stages to group, compute, transform, and sort data. This is the preferred method to perform aggregation operations, and it's even possible to output a document, which is subsequently used by the updateOne() method. There's a whole aggregation course.\n\n### Don't forget to index\n\nWe now know how to perform CRUD operations, native queries, and aggregations. However, don't forget about indexing to increase performance. MongoDB indexing strategies and best practices are beyond the scope of this article, but let's look at how we can create indexes.\n\n#### Option #1: Create indexes with Eloquent's Migrations\n\nFirst, we can use Eloquent's Migrations. Even though we could do without Migrations because we have a flexible schema, they could be a vessel to store how indexes are defined and created.\\\nSince we have not used the --migration option when creating the model, we can always create the migration later. In this case, we can run this command:\n\n`php artisan make:migration create_customer_mongo_db_table`\n\nIt will create a Migration located at `/database/migrations/YYYY_MM_DD_xxxxxx_create_customer_mongo_db_table.php`.\n\nWe can update the code of our up() function to create an index for our collection. For example, we'll create an index for our 'guid' field, and make it a unique constraint. By default, MongoDB always has an _id primary key field initialized with an ObjectId. We can provide our own unique identifier in place of MongoDB's default ObjectId.\n\n public function up() {\n     Schema::connection('mongodb')->create('laracoll', function ($collection) {\n         $collection->unique('guid'); // Ensure the guid is unique since it will be used as a primary key.\n     });\n }\n\nAs previously, this migration `up()` function can be executed using the command:\n\n`php artisan migrate --path=/database/migrations/2023_08_09_051124_create_customer_mongo_db_table.php`\n\nIf the 'laracoll' collection does not exist, it is created and an index is created for the 'guid' field. In the Atlas GUI, it looks like this:\n\n#### Option #2: Create indexes with MongoDB's native API\n\nThe second option is to use the native MongoDB createIndex() function, which might have new options not yet covered by the Laravel MongoDB package. 
Here's a simple example that creates an index with the 'guid' field as the unique constraint.\n\n Route::get('/create_index/', function (Request $request) {\n \n \u00a0\u00a0\u00a0\u00a0$indexKeys = \"guid\" => 1];\n \u00a0\u00a0\u00a0\u00a0$indexOptions = [\"unique\" => true];\n \u00a0\u00a0\u00a0\u00a0$result = DB::connection('mongodb')->getCollection('laracoll')->createIndex($indexKeys, $indexOptions);\n \n \u00a0\u00a0\u00a0\u00a0return ['msg' => 'executed', 'data' => $result ];\n });\n\n#### Option #3: Create indexes with the Atlas GUI\n\nFinally, we can also [create an Index in the web Atlas GUI interface, using a visual builder or from JSON. The GUI interface is handy for experimenting. The same is true inside MongoDB Compass, our MongoDB GUI application.\n\n## Conclusion\n\nThis article covered creating a back-end service with PHP/Laravel, powered by MongoDB, for a front-end web application. We've seen how easy it is for Laravel developers to leverage their existing skills with a MongoDB back end.\n\nIt also showed why the document model, associated with good data modeling, leads to higher database efficiency and scalability. We can fully use it with the native MongoDB Query API to unlock the full power of MongoDB to create better apps with less downtime.\n\nLearn more about the Laravel MongoDB extension syntax by looking at the official documentation and repo's example tests on GitHub. For plain PHP MongoDB examples, look at the example tests of our PHP Library.\n\nConsider taking the free Data Modeling course at MongoDB University or the overall PHP/MongoDB course, although it's not specific to Laravel.\n\nWe will build more PHP/Laravel content, so subscribe to our various channels, including YouTube and LinkedIn. Finally, join our official community forums! There's a PHP tag where fellow developers and MongoDB engineers discuss all things data and PHP.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8df3f279bdc52b8e/64dea7310349062efa866ae4/laravel-9-mongodb-tutorial_02_laravel-homepage.png", "format": "md", "metadata": {"tags": ["PHP"], "pageDescription": "A tutorial on how to use MongoDB with Laravel Eloquent, but also with the native MongoDB Query API and Aggregation Pipeline, to access the new MongoDB features.", "contentType": "Tutorial"}, "title": "How To Build a Laravel + MongoDB Back End Service", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/global-read-write-concerns", "action": "created", "body": "# Set Global Read and Write Concerns in MongoDB 4.4\n\nMongoDB is very flexible when it comes to both reading and writing data. When it comes to writing data, a MongoDB write concern allows you to set the level of acknowledgment for a desired write operation. Likewise, the read concern allows you to control the consistency and isolation properties of the data read from your replica sets. Finding the right values for the read and write concerns is pivotal as your application evolves and with the latest release of MongoDB adding global read isolation and write durability defaults is now possible.\n\nMongoDB 4.4 is available in beta right now. You can try it out in MongoDB Atlas or download the development release. 
In this post, we are going to look at how we can set our read isolation and write durability defaults globally and also how we can override these global settings on a per client or per operation basis when needed.\n\n## Prerequisites\n\nFor this tutorial, you'll need:\n\n- MongoDB 4.4\n- MongoDB shell\n\n>Setting global read and write concerns is currently unavailable on MongoDB Atlas. If you wish to follow along with this tutorial, you'll need your own instance of MongoDB 4.4 installed.\n\n## Read and Write Concerns\n\nBefore we get into how we can set these features globally, let's quickly examine what it is they actually do, what benefits they provide, and why we should even care.\n\nWe'll start with the MongoDB write concern functionality. By default, when you send a write operation to a MongoDB database, it has a write concern of `w:1`. What this means is that the write operation will be acknowledged as successful when the primary in a replica set has successfully executed the write operation.\n\nLet's assume you're working with a 3-node replicate set, which is the default when you create a free MongoDB Atlas cluster. Sending a write command such as `db.collection('test').insertOne({name:\"Ado\"})` will be deemed successful when the primary has acknowledged the write. This ensures that the data doesn't violate any database constraints and has successfully been written to the database in memory. We can improve this write concern durability, by increasing the number of nodes we want to acknowledge the write.\n\nInstead of `w:1`, let's say we set it to `w:2`. Now when we send a write operation to the database, we wouldn't hear back until both the primary, and one of the two secondary nodes acknowledged the write operation was successful. Likewise, we could also set the acknowledgement value to 0, i.e `w:0`, and in this instance we wouldn't ask for acknowledgement at all. I wouldn't recommend using `w:0` for any important data, but in some instances it can be a valid option. Finally, if we had a three member replica set and we set the w value to 3, i.e `w:3`, now the primary and both of the secondary nodes would need to acknowledge the write. I wouldn't recommend this approach either, because if one of the secondary members become unavailable, we wouldn't be able to acknowledge write operations, and our system would no longer be highly available.\n\nAdditionally, when it comes to write concern, we aren't limited to setting a numeric value. We can set the value of w to \"majority\" for example, which will wait for the write operation to propagate to a majority of the nodes or even write our own custom write concern.\n\nMongoDB read concern allows you to control the consistency and isolation properties of the data read from replica sets and replica set shards. Essentially what this means is that when you send a read operation to the database such as a db.collection.find(), you can specify how durable the data that is returned must be. Note that read concern should not be confused with read preference, which specifies which member of a replica set you want to read from.\n\nThere are multiple levels of read concern including local, available, majority, linearizable, and snapshot. Each level is complex enough that it can be an article itself, but the general idea is similar to that of the write concern. Setting a read concern level will allow you to control the type of data read. Defaults for read concerns can vary and you can find what default is applied when here. 
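To make these acknowledgment and isolation levels concrete, here is a minimal sketch of how a driver expresses them. The rest of this post uses the shell and the Node.js driver; purely as an illustration, this snippet uses the MongoDB Java sync driver instead, and the connection string, database, and collection names are placeholders.\n\n``` java\nimport com.mongodb.ReadConcern;\nimport com.mongodb.WriteConcern;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport org.bson.Document;\n\npublic class ConcernLevels {\n    public static void main(String[] args) {\n        // Placeholder: point this at your own replica set.\n        try (MongoClient client = MongoClients.create(\"mongodb://localhost:27017/?replicaSet=rs0\")) {\n            // w:2 -> the primary plus one secondary must acknowledge each write.\n            MongoCollection<Document> twoAcks = client.getDatabase(\"test\")\n                    .getCollection(\"people\")\n                    .withWriteConcern(WriteConcern.W2);\n            twoAcks.insertOne(new Document(\"name\", \"Ado\"));\n\n            // \"majority\" write durability and \"majority\" read isolation on the same collection handle.\n            MongoCollection<Document> majority = client.getDatabase(\"test\")\n                    .getCollection(\"people\")\n                    .withWriteConcern(WriteConcern.MAJORITY)\n                    .withReadConcern(ReadConcern.MAJORITY);\n            majority.find().first();\n        }\n    }\n}\n```\n\n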
Default read concern reads the most recent data, rather than data that's been majority committed.\n\nThrough the effective use of write concerns and read\nconcerns, you can adjust the level of consistency and availability defaults as appropriate for your application.\n\n## Setting Global Read and Write Concerns\n\nSo now that we know a bit more about why these features exist and how they work, let's see how we can change the defaults globally. In MongoDB 4.4, we can use the db.adminCommand() to configure our isolation and durability defaults.\n\n>Setting global read and write concerns is currently unavailable on MongoDB Atlas. If you wish to follow along with this tutorial, you'll need your own instance of MongoDB 4.4 installed.\n\nWe'll use the `db.adminCommand()` to set a default read and write concern of majority. In the MongoDB shell, execute the following command:\n\n``` bash\ndb.adminCommand({\n setDefaultRWConcern: 1,\n defaultReadConcern: { level : \"majority\" },\n defaultWriteConcern: { w: \"majority\" }\n})\n```\n\nNote that to execute this command you need to have a replica set and the command will need to be sent to the primary node. Additionally, if you have a sharded cluster, the command will need to be run on the `mongos`. If you have a standalone node, you'll get an error. The final requirement to be able to execute the `setDefaultRWConcern` command is having the correct privilege.\n\nWhen setting default read and write concerns, you don't have to set both a default read concern and a default write concern, you are allowed to set only a default read concern or a default write concern as you see fit. For example, say we only wanted to set a default write concern, it would look something like this:\n\n``` bash\ndb.adminCommand({\n setDefaultRWConcern: 1,\n defaultWriteConcern: { w: 2 }\n})\n```\n\nThe above command would set just a default write concern of 2, meaning that the write would succeed when the primary and one secondary node acknowledged the write.\n\nWhen it comes to default write concerns, in addition to specifying the acknowledgment, you can also set a `wtimeout` period for how long an operation has to wait for an acknowledgement. To set this we can do this:\n\n``` bash\ndb.adminCommand({\n setDefaultRWConcern: 1,\n defaultWriteConcern: { w: 2, wtimeout: 5000 }\n})\n```\n\nThis will set a timeout of 5000ms so if we don't get an acknowledgement within 5 seconds, the write operation will return an `writeConcern` timeout error.\n\nTo unset either a default read or write concern, you can simply pass into it an empty object.\n\n``` bash\ndb.adminCommand({\n setDefaultRWConcern: 1,\n defaultReadConcern: { },\n defaultWriteConcern: { }\n})\n```\n\nThis will return the read concern and the write concern to their MongoDB defaults. You can also easily check and see what defaults are currently set for your global read and write concerns using the getDefaultRWConcern command. When you run this command against the `admin` database like so:\n\n``` bash\ndb.adminCommand({\n getDefaultRWConcern: 1\n})\n```\n\nYou will get a response like the one below showing you your global settings:\n\n``` \n{\n \"defaultWriteConcern\" : {\n \"w\" : \"majority\"\n },\n \"defaultReadConcern\" : {\n \"level\" : \"majority\"\n },\n \"updateOpTime\" : Timestamp(1586290895, 1),\n \"updateWallClockTime\" : ISODate(\"2020-04-07T20:21:41.849Z\"),\n \"localUpdateWallClockTime\" : ISODate(\"2020-04-07T20:21:41.862Z\"),\n \"ok\" : 1,\n \"$clusterTime\" : { ... 
}\n \"operationTime\" : Timestamp(1586290925, 1)\n}\n```\n\nIn the next section, we'll take a look at how we can override these global settings when needed.\n\n## Overriding Global Read and Write Concerns\n\nMongoDB is a very flexible database. The default read and write concerns allow you to set reasonable defaults for how clients interact with your database cluster-wide, but as your application evolves a specific client may need a different read isolation or write durability default. This can be accomplished using any of the MongoDB drivers.\n\nWe can override read and write concerns at:\n\n- the client connection layer when connecting to the MongoDB database,\n- the database level,\n- the collection level,\n- an individual operation or query.\n\nHowever, note that MongoDB transactions can span multiple databases and collections, and since all operations within a transaction must use the same write concern, transactions have their own hierarchy of:\n\n- the client connection layer,\n- the session level,\n- the transaction level.\n\nA diagram showing this inheritance is presented below to help you understand what read and write concern takes precedence when multiple are declared:\n\nWe'll take a look at a couple of examples where we override the read and write concerns. For our examples we'll use the Node.js Driver.\n\nLet's see an example of how we would overwrite our read and write concerns in our Node.js application. The first example we'll look at is how to override read and write concerns at the database level. To do this our code will look like this:\n\n``` js\nconst MongoClient = require('mongodb').MongoClient;\nconst uri = \"{YOUR-CONNECTION-STRING}\";\nconst client = new MongoClient(uri, { useNewUrlParser: true });\n\nclient.connect(err => {\n const options = {w:\"majority\", readConcern: {level: \"majority\"}};\n\n const db = client.db(\"test\", options);\n});\n```\n\nWhen we specify the database we want to connect to, in this case the database is called `test`, we also pass an `options` object with the read and write concerns we wish to use. For our first example, we are using the **majority** concern for both read and write operations.\n\nIf we already set defaults globally, then overriding in this way may not make sense, but we may still run into a situation where we want a specific collection to execute read and write operations at a specific read or write concern. Let's declare a collection with a **majority** write concern and a read concern \"majority.\"\n\n``` js\nconst options = {w:\"majority\", readConcern: {level: \"majority\"}};\n\nconst collection = db.collection('documents', options);\n```\n\nLikewise we can even scope it down to a specific operation. In the following example we'll use the **majority** read concern for just one specific query.\n\n``` js\nconst collection = db.collection('documents');\n\ncollection.insertOne({name:\"Ado Kukic\"}, {w:\"majority\", wtimeout: 5000})\n```\n\nThe code above will execute a write query and try to insert a document that has one field titled **name**. 
For the query to be successful, the write operation will have to be acknowledged by the primary and one secondary, assuming we have a three member replica set.\n\nBeing able to set the default read and write concerns is important to providing developers the ability to set defaults that make sense for their use case, but also the flexibility to easily override those defaults when needed.\n\n## Conclusion\n\nGlobal read or write concerns allow developers to set default read isolation and write durability defaults for their database cluster-wide. As your application evolves, you are able to override the global read and write concerns at the client level ensuring you have flexibility when you need it and customized defaults when you don't. It is available in MongoDB 4.4, which is available in beta today.\n\n>**Safe Harbor Statement**\n>\n>The development, release, and timing of any features or functionality described for MongoDB products remains at MongoDB's sole discretion. This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision nor is this a commitment, promise or legal obligation to deliver any material, code, or functionality. Except as required by law, we undertake no obligation to update any forward-looking statements to reflect events or circumstances after the date of such statements.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to set global read isolation and write durability defaults in MongoDB 4.4.", "contentType": "Article"}, "title": "Set Global Read and Write Concerns in MongoDB 4.4", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/location-geofencing-stitch-mapbox", "action": "created", "body": "# Location Geofencing with MongoDB, Stitch, and Mapbox\n\n>\n>\n>Please note: This article discusses Stitch. Stitch is now MongoDB Realm.\n>All the same features and functionality, now with a new name. Learn more\n>here. We will be updating this article\n>in due course.\n>\n>\n\nFor a lot of organizations, when it comes to location, geofencing is\noften a very desirable or required feature. In case you're unfamiliar, a\ngeofence can be thought of as a virtual perimeter for a geographic area.\nOften, you'll want to know when something enters or exits that geofence\nso that you can apply your own business logic. Such logic might include\nsending a notification or updating something in your database.\n\nMongoDB supports GeoJSON data and offers quite a few operators that make\nworking the location data easy.\n\nWhen it comes to geofencing, why would you want to use a database like\nMongoDB rather than defining boundaries directly within your\nclient-facing application? Sure, it might be easy to define and manage\none or two boundaries, but when you're working at scale, checking to see\nif something has exited or entered one of many boundaries could be a\nhassle.\n\nIn this tutorial, we're going to explore the\n$near\nand\n$geoIntersects\noperators within MongoDB to define geofences and see if we're within the\nfences. For the visual aspect of things, we're going to make use of\nMapbox for showing our geofences and our\nlocation.\n\nTo get an idea of what we're going to build, take a look at the\nfollowing animated image:\n\nWe're going to implement functionality where a map is displayed and\npolygon shapes are rendered based on data from within MongoDB. 
When we\nmove the marker around on the map to simulate actual changes in\nlocation, we're going to determine whether or not we've entered or\nexited a geofence.\n\n## The Requirements\n\nThere are a few moving pieces for this particular tutorial, so it is\nimportant that the prerequisites are met prior to starting:\n\n- Must have a Mapbox account with an access token generated.\n- Must have a MongoDB Atlas cluster available.\n\nMapbox is a service, not affiliated with MongoDB. To render a map along\nwith shapes and markers, an account is necessary. For this example,\neverything can be accomplished within the Mapbox free tier.\n\nBecause we'll be using MongoDB Stitch in connection with Mapbox, we'll\nneed to be using MongoDB Atlas.\n\n>\n>\n>MongoDB Atlas can be used to deploy an M0\n>sized cluster of MongoDB for FREE.\n>\n>\n\nThe MongoDB Atlas cluster should have a **location_services** database\nwith a **geofences** collection.\n\n## Understanding the GeoJSON Data to Represent Fenced Regions\n\nTo use the geospatial functionality that MongoDB offers, the data stored\nwithin MongoDB must be valid GeoJSON data. At the end of the day,\nGeoJSON is still JSON, which plays very nicely with MongoDB, but there\nis a specific schema that must be followed. To learn more about GeoJSON,\nvisit the specification documentation.\n\nFor our example, we're going to be working with Polygon and Point data.\nTake the following document model:\n\n``` json\n{\n \"_id\": ObjectId(),\n \"name\": string,\n \"region\": {\n \"type\": string,\n \"coordinates\": \n [\n [double]\n ]\n ]\n }\n}\n```\n\nIn the above example, the `region` represents our GeoJSON data and\neverything above it such as `name` represents any additional data that\nwe want to store for the particular document. A realistic example to the\nabove model might look something like this:\n\n``` json\n{\n \"_id\": ObjectId(\"5ebdc11ab96302736c790694\"),\n \"name\": \"tracy\",\n \"region\": {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [\n [-121.56115581054638, 37.73644193427164],\n [-121.33868266601519, 37.59729761382843],\n [-121.31671000976553, 37.777700170855454],\n [-121.56115581054638, 37.73644193427164]\n ]\n ]\n }\n}\n```\n\nWe're naming any of our possible fenced regions. This could be useful to\na lot of organizations. For example, maybe you're a business with\nseveral franchise locations. You could geofence the location and name it\nsomething like the address, store number, etc.\n\nTo get the performance we need from our geospatial data and to be able\nto use certain operators, we're going to need to create an index on our\ncollection. The index looks something like the following:\n\n``` javascript\ndb.geofences.createIndex({ region: \"2dsphere\" })\n```\n\nThe index can be created through Atlas, Compass, and with the CLI. The\ngoal here is to make sure the `region` field is a `2dsphere` index.\n\n## Configuring MongoDB Stitch for Client-Facing Application Interactions\n\nRather than creating a backend application to interact with the\ndatabase, we're going to make use of MongoDB Stitch. Essentially, the\nclient-facing application will use the Stitch SDK to authenticate before\ninteracting with the data.\n\nWithin the [MongoDB Cloud, choose to create\na new Stitch application if you don't already have one that you wish to\nuse. Make sure that the application is using the cluster that has your\ngeofencing data.\n\nWithin the Stitch dashboard, choose the **Rules** tab and create a new\nset of permissions for the **geofences** collection. 
For this particular\nexample, the **Users can only read all data** permission template is\nfine.\n\nNext, we'll want to choose an authentication mechanism. In the **Users**\ntab, choose **Providers**, and enable the anonymous authentication\nprovider. In a more realistic production scenario, you'll likely want to\ncreate geofences that have stricter users and rules design.\n\nBefore moving onto actually creating an application, make note of your\n**App ID** within Stitch, as it will be necessary for connecting.\n\n## Interacting with the Geofences using Mapbox and MongoDB Geospatial Queries\n\nWith all the configuration out of the way, we can move into the fun part\nof creating an attractive client-facing application that queries the\ngeospatial data in MongoDB and renders it on a map.\n\nOn your computer, create an **index.html** file with the following\nboilerplate code:\n\n``` xml\n\n \n \n \n\n \n \n \n \n\n```\n\nIn the above HTML, we're importing the Mapbox and MongoDB Stitch SDKs,\nand we are defining an HTML container to hold our interactive map.\nInteracting with MongoDB and the map will be done in the `", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to use MongoDB geospatial queries and GeoJSON with Mapbox to create dynamic geofences.", "contentType": "Tutorial"}, "title": "Location Geofencing with MongoDB, Stitch, and Mapbox", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-cocoa-data-types", "action": "created", "body": "# New Realm Cocoa Data Types\n\nIn this blog post we will discover the new data types that Realm has to offer.\n\nOver the past year we have worked hard to bring three new datatypes to the Realm SDK: `MutableSet`, `Map`, and `AnyRealmValue`.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n\n## MutableSet\n\n`MutableSet` allows you to store a collection of unique values in an unordered fashion. This is different to `List` which allows you to store duplicates and persist the order of items.\n\n`MutableSet` has some methods that many will find useful for data manipulation and storage:\n\n- `Intersect`\n - Gets the common items between two `MutableSet`s.\n- `Union`\n - Combines elements from two `MutableSet`s, removing any duplicates.\n- `Subtract`\n - Removes elements from one `MutableSet` that are present in another given `MutableSet`.\n- `isSubset`\n - Checks to see if the elements in a `MutableSet` are children of a given super `MutableSet`.\n\nSo why would you use a `MutableSet` over a `List`?\n- You require a distinct collection of elements.\n- You do not rely on the order of items.\n- You need to perform mathematical operations such as `Intersect`, `Union`, and `Subtract`.\n- You need to test for membership in other Set collections using `isSubset` or `intersects`.\n\n### Practical example\n\nUsing our Movie object, we want to store and sync certain properties that will never contain duplicates and we don't care about ordering. 
Let's take a look below:\n\n```swift\nclass Movie: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var _partitionKey: String\n // we will want to keep the order of the cast, so we will use a `List`\n @Persisted var cast: List\n @Persisted var countries: MutableSet\n @Persisted var genres: MutableSet\n @Persisted var languages: MutableSet\n @Persisted var writers: MutableSet\n}\n```\n\nStraight away you can see the use case, we never want to have duplicate elements in the `countries`, `genres`, `languages`, and `writers` collections, nor do we care about their stored order. `MutableSet` does support sorting so you do have the ability to rearrange the order at runtime, but you can't persist the order.\n\nYou query a `MutableSet` the same way you would with List:\n```swift\nlet danishMovies = realm.objects(Movie.self).filter(\"'Danish' IN languages\")\n```\n### Under the hood\n\n`MutableSet` is based on the `NSSet` type found in Foundation. From the highest level we mirror the `NSMutableSet / Set API` on `RLMSet / MutableSet`. \n\nWhen a property is unmanaged the underlying storage type is deferred to `NSMutableSet`.\n\n## Map\n\nOur new `Map` data type is a Key-Value store collection type. It is similar to Foundation's `Dictionary` and shares the same call semantics. You use a `Map` when you are unsure of a schema and need to store data in a structureless fashion. NOTE: You should not use `Map` over an `Object` where a schema is known.\n\n### Practical example\n\n```swift\n@Persisted phoneNumbers: Map\n\nphoneNumbers\"Charlie\"] = \"+353 86 123456789\"\nlet charliesNumber = phoneNumbers[\"Charlie\"] // \"+353 86 123456789\"\n```\n\n`Map` also supports aggregate functions so you can easily calculate data:\n\n```swift\n@Persisted testScores: Map\n\ntestScores[\"Julio\"] = 95\ntestScores[\"Maria\"] = 95\ntestScores[\"John\"] = 70\n\nlet averageScore = testScores.avg()\n```\n\nAs well as filtering with NSPredicate:\n\n```swift\n@Persisted dogMap: Map\n\nlet spaniels = dogMap.filter(NSPredicate(\"breed = 'Spaniel'\")) // Returns `Results`\n\n```\n\nYou can observe a `Map` just like the other collection types:\n\n```swift\nlet token = map.observe(on: queue) { change in\n switch change {\n case .initial(let map):\n ...\n case let .update(map, deletions: deletions, insertions: insertions, modifications: modifications):\n // `deletions`, `insertions` and `modifications` contain the modified keys in the Map\n ...\n case .error(let error):\n...\n }\n}\n```\n\nCombine is also supported for observation:\n\n```swift\ncancellable = map.changesetPublisher\n .sink { change in\n ...\n }\n```\n\n### Under the hood\n\n`Map` is based on the `NSDictionary` type found in Foundation. From the highest level, we mirror the `NSMutableDictionary / Dictionary API` on `RLMDictionary / Map`. \n\nWhen a property is unmanaged the underlying storage type is deferred to `NSMutableDictionary`.\n\n## AnyRealmValue\n\nLast but not least, a datatype we are very excited about, `AnyRealmValue`. No this is not another collection type but one that allows you to store various different types of data under one property. 
Think of it like `Any` or `AnyObject` in Swift or a union in C.\n\nTo better understand how to use `AnyRealmValue`, let's see some practical examples.\n\nLet's say we have a Settings class which uses a `Map` for storing the user preferences, because the types of references we want to store are changing all the time, we are certain that this is schemaless for now:\n\n```swift\nclass Settings: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var _partitionKey: String?\n @Persisted var misc: Map\n}\n```\n\nUsage:\n\n```swift\nmisc[\"lastScreen\"] = .string(\"home\")\nmisc[\"lastOpened\"] = .date(.now)\n\n// To unwrap the values\n\nif case let .string(lastScreen) = misc[\"lastScreen\"] {\n print(lastScreen) // \"home\"\n}\n```\n\nHere we can store different variants of the value, so depending on the need of your application, you may find it useful to be able to switch between different types.\n\n### Under the hood\n\nWe don't use any Foundation types for storing `AnyRealmValue`. Instead the `AnyRealmValue` enum is converted to the ObjectiveC representation of the stored type. This is any type that conforms to `RLMValue`. You can see how that works [here.\n\n## Conclusion\n\nI hope you found this insightful and have some great ideas with what to do with these data types! All of these new data types are fully compatible with MongoDB Realm Sync too, and are available in Objective-C as well as Swift. We will follow up with another post and presentation on data modelling with Realm soon.\n\nLinks to documentation:\n\n- MutableSet\n- Map\n- AnyRealmValue", "format": "md", "metadata": {"tags": ["Realm", "Mobile"], "pageDescription": "In this blog post we will discover the new data types that Realm Cocoa has to offer.", "contentType": "Article"}, "title": "New Realm Cocoa Data Types", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/java-setup-crud-operations", "action": "created", "body": "# Getting Started with MongoDB and Java - CRUD Operations Tutorial\n\n## Updates\n\nThe MongoDB Java quickstart repository is available on GitHub.\n\n### February 28th, 2024\n\n- Update to Java 21\n- Update Java Driver to 5.0.0\n- Update `logback-classic` to 1.2.13\n- Update the `preFlightChecks` method to support both MongoDB Atlas shared and dedicated clusters.\n\n### November 14th, 2023\n\n- Update to Java 17\n- Update Java Driver to 4.11.1\n- Update mongodb-crypt to 1.8.0\n\n### March 25th, 2021\n\n- Update Java Driver to 4.2.2.\n- Added Client Side Field Level Encryption example.\n\n### October 21st, 2020\n\n- Update Java Driver to 4.1.1.\n- The MongoDB Java Driver logging is now enabled via the popular SLF4J API, so I added logback\n in the `pom.xml` and a configuration file `logback.xml`.\n\n## Introduction\n\n \n\nIn this very first blog post of the Java Quick Start series, I will show you how to set up your Java project with Maven\nand execute a MongoDB command in Java. Then, we will explore the most common operations \u2014 such as create, read, update,\nand delete \u2014 using the MongoDB Java driver. 
I will also show you\nsome of the more powerful options and features available as part of the\nMongoDB Java driver for each of these\noperations, giving you a really great foundation of knowledge to build upon as we go through the series.\n\nIn future blog posts, we will move on and work through:\n\n- Mapping MongoDB BSON documents directly to Plain Old Java Object (POJO)\n- The MongoDB Aggregation Framework\n- Change Streams\n- Multi-document ACID transactions\n- The MongoDB Java reactive streams driver\n\n### Why MongoDB and Java?\n\nJava is the most popular language in the IT industry at the\ndate of this blog post,\nand developers voted MongoDB as their most wanted database four years in a row.\nIn this series of blog posts, I will be demonstrating how powerful these two great pieces of technology are when\ncombined and how you can access that power.\n\n### Prerequisites\n\nTo follow along, you can use any environment you like and the integrated development environment of your choice. I'll\nuse Maven 3.8.7 and the Java OpenJDK 21, but it's fairly easy to update the code\nto support older versions of Java, so feel free to use the JDK of your choice and update the Java version accordingly in\nthe pom.xml file we are about to set up.\n\nFor the MongoDB cluster, we will be using a M0 Free Tier MongoDB Cluster\nfrom MongoDB Atlas. If you don't have one already, check out\nmy Get Started with an M0 Cluster blog post.\n\n> Get your free M0 cluster on MongoDB Atlas today. It's free forever, and you'll\n> be able to use it to work with the examples in this blog series.\n\nLet's jump in and take a look at how well Java and MongoDB work together.\n\n## Getting set up\n\nTo begin with, we will need to set up a new Maven project. You have two options at this point. You can either clone this\nseries' git repository or you can create and set up the Maven project.\n\n### Using the git repository\n\nIf you choose to use git, you will get all the code immediately. 
I still recommend you read through the manual set-up.\n\nYou can clone the repository if you like with the following command.\n\n``` bash\ngit clone git@github.com:mongodb-developer/java-quick-start.git\n```\n\nOr you\ncan download the repository as a zip file.\n\n### Setting up manually\n\nYou can either use your favorite IDE to create a new Maven project for you or you can create the Maven project manually.\nEither way, you should get the following folder architecture:\n\n``` none\njava-quick-start/\n\u251c\u2500\u2500 pom.xml\n\u2514\u2500\u2500 src\n \u2514\u2500\u2500 main\n \u2514\u2500\u2500 java\n \u2514\u2500\u2500 com\n \u2514\u2500\u2500 mongodb\n \u2514\u2500\u2500 quickstart\n```\n\nThe pom.xml file should contain the following code:\n\n``` xml\n\n 4.0.0\n\n com.mongodb\n java-quick-start\n 1.0-SNAPSHOT\n\n \n UTF-8\n 21\n 21\n 3.12.1\n 5.0.0\n 1.8.0\n \n \n 1.2.13\n 3.1.1\n \n\n \n \n org.mongodb\n mongodb-driver-sync\n ${mongodb-driver-sync.version}\n \n \n org.mongodb\n mongodb-crypt\n ${mongodb-crypt.version}\n \n \n ch.qos.logback\n logback-classic\n ${logback-classic.version}\n \n \n\n \n \n \n org.apache.maven.plugins\n maven-compiler-plugin\n ${maven-compiler-plugin.version}\n \n ${maven-compiler-plugin.source}\n ${maven-compiler-plugin.target}\n \n \n \n \n \n org.codehaus.mojo\n exec-maven-plugin\n ${exec-maven-plugin.version}\n \n false\n \n \n \n \n\n```\n\nTo verify that everything works correctly, you should be able to create and run a simple \"Hello MongoDB!\" program.\nIn `src/main/java/com/mongodb/quickstart`, create the `HelloMongoDB.java` file:\n\n``` java\npackage com.mongodb.quickstart;\n\npublic class HelloMongoDB {\n\n public static void main(String] args) {\n System.out.println(\"Hello MongoDB!\");\n }\n}\n```\n\nThen compile and execute it with your IDE or use the command line in the root directory (where the `src` folder is):\n\n``` bash\nmvn compile exec:java -Dexec.mainClass=\"com.mongodb.quickstart.HelloMongoDB\"\n```\n\nThe result should look like this:\n\n``` none\n[INFO] Scanning for projects...\n[INFO] \n[INFO] --------------------< com.mongodb:java-quick-start >--------------------\n[INFO] Building java-quick-start 1.0-SNAPSHOT\n[INFO] --------------------------------[ jar ]---------------------------------\n[INFO] \n[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ java-quick-start ---\n[INFO] Using 'UTF-8' encoding to copy filtered resources.\n[INFO] Copying 1 resource\n[INFO] \n[INFO] --- maven-compiler-plugin:3.12.1:compile (default-compile) @ java-quick-start ---\n[INFO] Nothing to compile - all classes are up to date.\n[INFO] \n[INFO] --- exec-maven-plugin:3.1.1:java (default-cli) @ java-quick-start ---\nHello MongoDB!\n[INFO] ------------------------------------------------------------------------\n[INFO] BUILD SUCCESS\n[INFO] ------------------------------------------------------------------------\n[INFO] Total time: 0.634 s\n[INFO] Finished at: 2024-02-19T18:12:22+01:00\n[INFO] ------------------------------------------------------------------------\n```\n\n## Connecting with Java\n\nNow that our Maven project works, we have resolved our dependencies, we can start using MongoDB Atlas with Java.\n\nIf you have imported the [sample dataset as suggested in\nthe Quick Start Atlas blog post, then with the Java code we are about\nto create, you will be able to see a list of the databases in the sample dataset.\n\nThe first step is to instantiate a `MongoClient` by passing a MongoDB Atlas connection string into\nthe 
`MongoClients.create()` static method. This will establish a connection\nto MongoDB Atlas using the connection string. Then we can retrieve the list of\ndatabases on this cluster and print them out to test the connection with MongoDB.\n\nAs per the recommended best practices, I'm also doing a \"pre-flight check\" using the `{ping: 1}` admin command.\n\nIn `src/main/java/com/mongodb`, create the `Connection.java` file:\n\n``` java\npackage com.mongodb.quickstart;\n\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport org.bson.Document;\nimport org.bson.json.JsonWriterSettings;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\npublic class Connection {\n\n public static void main(String] args) {\n String connectionString = System.getProperty(\"mongodb.uri\");\n try (MongoClient mongoClient = MongoClients.create(connectionString)) {\n System.out.println(\"=> Connection successful: \" + preFlightChecks(mongoClient));\n System.out.println(\"=> Print list of databases:\");\n List databases = mongoClient.listDatabases().into(new ArrayList<>());\n databases.forEach(db -> System.out.println(db.toJson()));\n }\n }\n\n static boolean preFlightChecks(MongoClient mongoClient) {\n Document pingCommand = new Document(\"ping\", 1);\n Document response = mongoClient.getDatabase(\"admin\").runCommand(pingCommand);\n System.out.println(\"=> Print result of the '{ping: 1}' command.\");\n System.out.println(response.toJson(JsonWriterSettings.builder().indent(true).build()));\n return response.get(\"ok\", Number.class).intValue() == 1;\n }\n}\n```\n\nAs you can see, the MongoDB connection string is retrieved from the *System Properties*, so we need to set this up. Once\nyou have retrieved your [MongoDB Atlas connection string, you\ncan add the `mongodb.uri` system property into your IDE. Here is my configuration with IntelliJ for example.\n\nOr if you prefer to use Maven in command line, here is the equivalent command line you can run in the root directory:\n\n``` bash\nmvn compile exec:java -Dexec.mainClass=\"com.mongodb.quickstart.Connection\" -Dmongodb.uri=\"mongodb+srv://username:password@cluster0-abcde.mongodb.net/test?w=majority\"\n```\n\n> Note: Don't forget the double quotes around the MongoDB URI to avoid surprises from your shell.\n\nThe standard output should look like this:\n\n``` none\n{\"name\": \"admin\", \"sizeOnDisk\": 303104.0, \"empty\": false}\n{\"name\": \"config\", \"sizeOnDisk\": 147456.0, \"empty\": false}\n{\"name\": \"local\", \"sizeOnDisk\": 5.44731136E8, \"empty\": false}\n{\"name\": \"sample_airbnb\", \"sizeOnDisk\": 5.761024E7, \"empty\": false}\n{\"name\": \"sample_geospatial\", \"sizeOnDisk\": 1384448.0, \"empty\": false}\n{\"name\": \"sample_mflix\", \"sizeOnDisk\": 4.583424E7, \"empty\": false}\n{\"name\": \"sample_supplies\", \"sizeOnDisk\": 1339392.0, \"empty\": false}\n{\"name\": \"sample_training\", \"sizeOnDisk\": 7.4801152E7, \"empty\": false}\n{\"name\": \"sample_weatherdata\", \"sizeOnDisk\": 5103616.0, \"empty\": false}\n```\n\n## Insert operations\n\n### Getting set up\n\nIn the Connecting with Java section, we created the classes `HelloMongoDB` and `Connection`. Now we will work on\nthe `Create` class.\n\nIf you didn't set up your free cluster on MongoDB Atlas, now is great time to do so. Get the directions\nfor creating your cluster.\n\n### Checking the collection and data model\n\nIn the sample dataset, you can find the database `sample_training`, which contains a collection `grades`. 
Each document\nin this collection represents a student's grades for a particular class.\n\nHere is the JSON representation of a document in the MongoDB shell.\n\n``` bash\nMongoDB Enterprise Cluster0-shard-0:PRIMARY> db.grades.findOne({student_id: 0, class_id: 339})\n{\n \"_id\" : ObjectId(\"56d5f7eb604eb380b0d8d8ce\"),\n \"student_id\" : 0,\n \"scores\" : \n {\n \"type\" : \"exam\",\n \"score\" : 78.40446309504266\n },\n {\n \"type\" : \"quiz\",\n \"score\" : 73.36224783231339\n },\n {\n \"type\" : \"homework\",\n \"score\" : 46.980982486720535\n },\n {\n \"type\" : \"homework\",\n \"score\" : 76.67556138656222\n }\n ],\n \"class_id\" : 339\n}\n```\n\nAnd here is the [extended JSON representation of the\nsame student. You can retrieve it in MongoDB Compass, our free GUI tool, if\nyou want.\n\nExtended JSON is the human-readable version of a BSON document without loss of type information. You can read more about\nthe Java driver and\nBSON in the MongoDB Java driver documentation.\n\n``` json\n{\n \"_id\": {\n \"$oid\": \"56d5f7eb604eb380b0d8d8ce\"\n },\n \"student_id\": {\n \"$numberDouble\": \"0\"\n },\n \"scores\": {\n \"type\": \"exam\",\n \"score\": {\n \"$numberDouble\": \"78.40446309504266\"\n }\n }, {\n \"type\": \"quiz\",\n \"score\": {\n \"$numberDouble\": \"73.36224783231339\"\n }\n }, {\n \"type\": \"homework\",\n \"score\": {\n \"$numberDouble\": \"46.980982486720535\"\n }\n }, {\n \"type\": \"homework\",\n \"score\": {\n \"$numberDouble\": \"76.67556138656222\"\n }\n }],\n \"class_id\": {\n \"$numberDouble\": \"339\"\n }\n}\n```\n\nAs you can see, MongoDB stores BSON documents and for each key-value pair, the BSON contains the key and the value along\nwith its type. This is how MongoDB knows that `class_id` is actually a double and not an integer, which is not explicit\nin the mongo shell representation of this document.\n\nWe have 10,000 students (`student_id` from 0 to 9999) already in this collection and each of them took 10 different\nclasses, which adds up to 100,000 documents in this collection. Let's say a new student (`student_id` 10,000) just\narrived in this university and received a bunch of (random) grades in his first class. 
Let's insert this new student\ndocument using Java and the MongoDB Java driver.\n\nIn this university, the `class_id` varies from 0 to 500, so I can use any random value between 0 and 500.\n\n### Selecting databases and collections\n\nFirstly, we need to set up our `Create` class and access this `sample_training.grades` collection.\n\n``` java\npackage com.mongodb.quickstart;\n\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport org.bson.Document;\n\npublic class Create {\n\n public static void main(String[] args) {\n try (MongoClient mongoClient = MongoClients.create(System.getProperty(\"mongodb.uri\"))) {\n\n MongoDatabase sampleTrainingDB = mongoClient.getDatabase(\"sample_training\");\n MongoCollection gradesCollection = sampleTrainingDB.getCollection(\"grades\");\n\n }\n }\n}\n```\n\n### Create a BSON document\n\nSecondly, we need to represent this new student in Java using the `Document` class.\n\n``` java\nRandom rand = new Random();\nDocument student = new Document(\"_id\", new ObjectId());\nstudent.append(\"student_id\", 10000d)\n .append(\"class_id\", 1d)\n .append(\"scores\", List.of(new Document(\"type\", \"exam\").append(\"score\", rand.nextDouble() * 100),\n new Document(\"type\", \"quiz\").append(\"score\", rand.nextDouble() * 100),\n new Document(\"type\", \"homework\").append(\"score\", rand.nextDouble() * 100),\n new Document(\"type\", \"homework\").append(\"score\", rand.nextDouble() * 100)));\n```\n\nAs you can see, we reproduced the same data model from the existing documents in this collection as we made sure\nthat `student_id`, `class_id`, and `score` are all doubles.\n\nAlso, the Java driver would have generated the `_id` field with an ObjectId for us if we didn't explicitly create one\nhere, but it's good practice to set the `_id` ourselves. This won't change our life right now, but it makes more sense\nwhen we directly manipulate POJOs, and we want to create a clean REST API. 
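If you do let the driver generate the `_id`, you can still read the generated value back from the acknowledged result. Here is a minimal sketch, reusing the `gradesCollection` from above; the student values are only illustrative, and it assumes a driver version where `insertOne` returns an `InsertOneResult` (4.2 or newer).\n\n``` java\nimport com.mongodb.client.result.InsertOneResult;\n\n// No _id set here, so the driver generates an ObjectId for us.\nDocument anotherStudent = new Document(\"student_id\", 10002d).append(\"class_id\", 2d);\nInsertOneResult result = gradesCollection.insertOne(anotherStudent);\n// The generated _id is reported back in the acknowledged result.\nSystem.out.println(\"Generated _id: \" + result.getInsertedId());\n```\n\nThat said, working with raw `Document` objects gets verbose, which is where POJO mapping shines.\n\n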
I'm doing this in\nmy [mapping POJOs post.\n\nNote as well that we are inserting a document into an existing collection and database, but if these didn't already\nexist, MongoDB would automatically create them the first time you to go insert a document into the collection.\n\n### Insert document\n\nFinally, we can insert this document.\n\n``` java\ngradesCollection.insertOne(student);\n```\n\n### Final code to insert one document\n\nHere is the final `Create` class to insert one document in MongoDB with all the details I mentioned above.\n\n``` java\npackage com.mongodb.quickstart;\n\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport org.bson.Document;\nimport org.bson.types.ObjectId;\n\nimport java.util.List;\nimport java.util.Random;\n\npublic class Create {\n\n public static void main(String] args) {\n try (MongoClient mongoClient = MongoClients.create(System.getProperty(\"mongodb.uri\"))) {\n\n MongoDatabase sampleTrainingDB = mongoClient.getDatabase(\"sample_training\");\n MongoCollection gradesCollection = sampleTrainingDB.getCollection(\"grades\");\n\n Random rand = new Random();\n Document student = new Document(\"_id\", new ObjectId());\n student.append(\"student_id\", 10000d)\n .append(\"class_id\", 1d)\n .append(\"scores\", List.of(new Document(\"type\", \"exam\").append(\"score\", rand.nextDouble() * 100),\n new Document(\"type\", \"quiz\").append(\"score\", rand.nextDouble() * 100),\n new Document(\"type\", \"homework\").append(\"score\", rand.nextDouble() * 100),\n new Document(\"type\", \"homework\").append(\"score\", rand.nextDouble() * 100)));\n\n gradesCollection.insertOne(student);\n }\n }\n}\n```\n\nYou can execute this class with the following Maven command line in the root directory or using your IDE (see above for\nmore details). 
Don't forget the double quotes around the MongoDB URI to avoid surprises.\n\n``` bash\nmvn compile exec:java -Dexec.mainClass=\"com.mongodb.quickstart.Create\" -Dmongodb.uri=\"mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority\"\n```\n\nAnd here is the document I extracted from MongoDB\nCompass.\n\n``` json\n{\n    \"_id\": {\n        \"$oid\": \"5d97c375ded5651ea3462d0f\"\n    },\n    \"student_id\": {\n        \"$numberDouble\": \"10000\"\n    },\n    \"class_id\": {\n        \"$numberDouble\": \"1\"\n    },\n    \"scores\": [{\n        \"type\": \"exam\",\n        \"score\": {\n            \"$numberDouble\": \"4.615256396625178\"\n        }\n    }, {\n        \"type\": \"quiz\",\n        \"score\": {\n            \"$numberDouble\": \"73.06173415145801\"\n        }\n    }, {\n        \"type\": \"homework\",\n        \"score\": {\n            \"$numberDouble\": \"19.378205578990727\"\n        }\n    }, {\n        \"type\": \"homework\",\n        \"score\": {\n            \"$numberDouble\": \"82.3089189278531\"\n        }\n    }]\n}\n```\n\nNote that the order of the fields is different from the initial document with `\"student_id\": 0`.\n\nWe could get exactly the same order if we wanted to by creating the document like this.\n\n``` java\nRandom rand = new Random();\nDocument student = new Document(\"_id\", new ObjectId());\nstudent.append(\"student_id\", 10000d)\n       .append(\"scores\", List.of(new Document(\"type\", \"exam\").append(\"score\", rand.nextDouble() * 100),\n                                 new Document(\"type\", \"quiz\").append(\"score\", rand.nextDouble() * 100),\n                                 new Document(\"type\", \"homework\").append(\"score\", rand.nextDouble() * 100),\n                                 new Document(\"type\", \"homework\").append(\"score\", rand.nextDouble() * 100)))\n       .append(\"class_id\", 1d);\n```\n\nBut if you do things correctly, this should not have any impact on your code and logic, as fields in JSON documents are\nnot ordered.\n\nI'm quoting json.org for this:\n\n> An object is an unordered set of name/value pairs.\n\n### Insert multiple documents\n\nNow that we know how to create one document, let's learn how to insert many documents.\n\nOf course, we could just wrap the previous `insert` operation into a `for` loop. Indeed, if we loop 10 times on this\nmethod, we would send 10 insert commands to the cluster and expect 10 insert acknowledgments. As you can imagine, this\nwould not be very efficient as it would generate a lot more TCP communications than necessary.\n\nInstead, we want to wrap our 10 documents and send them in one call to the cluster, and we want to receive only one\ninsert acknowledgment for the entire list.\n\nLet's refactor the code.
First, let's make the random generator a `private static final` field.\n\n``` java\nprivate static final Random rand = new Random();\n```\n\nLet's make a grade factory method.\n\n``` java\nprivate static Document generateNewGrade(double studentId, double classId) {\n    List<Document> scores = List.of(new Document(\"type\", \"exam\").append(\"score\", rand.nextDouble() * 100),\n                                    new Document(\"type\", \"quiz\").append(\"score\", rand.nextDouble() * 100),\n                                    new Document(\"type\", \"homework\").append(\"score\", rand.nextDouble() * 100),\n                                    new Document(\"type\", \"homework\").append(\"score\", rand.nextDouble() * 100));\n    return new Document(\"_id\", new ObjectId()).append(\"student_id\", studentId)\n                                               .append(\"class_id\", classId)\n                                               .append(\"scores\", scores);\n}\n```\n\nAnd now we can use this to insert 10 documents all at once.\n\n``` java\nList<Document> grades = new ArrayList<>();\nfor (double classId = 1d; classId <= 10d; classId++) {\n    grades.add(generateNewGrade(10001d, classId));\n}\n\ngradesCollection.insertMany(grades, new InsertManyOptions().ordered(false));\n```\n\nAs you can see, we are now wrapping our grade documents into a list and we are sending this list in a single call with\nthe `insertMany` method.\n\nBy default, the `insertMany` method will insert the documents in order and stop if an error occurs during the process.\nFor example, if you try to insert a new document with the same `_id` as an existing document, you would get\na `DuplicateKeyException`.\n\nTherefore, with an ordered `insertMany`, the last documents of the list would not be inserted and the insertion process\nwould stop and return the appropriate exception as soon as the error occurs.\n\nAs you can see here, this is not the behaviour we want because all the grades are completely independent from one\nanother. So, if one of them fails, we still want to process all the other grades and only then deal with an exception for\nthe ones that failed.\n\nThis is why we pass the second parameter, `new InsertManyOptions().ordered(false)`: the `ordered` option is `true` by default.
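\n\nIf some inserts do fail (a duplicate `_id`, for example), the driver reports the failures through a `MongoBulkWriteException` once the whole batch has been processed. Here is a minimal, hedged sketch of what catching it could look like, reusing the `grades` list from above; it assumes imports of `com.mongodb.MongoBulkWriteException` and `com.mongodb.bulk.BulkWriteError`, and the log message is just an example.\n\n``` java\ntry {\n    gradesCollection.insertMany(grades, new InsertManyOptions().ordered(false));\n} catch (MongoBulkWriteException e) {\n    // With ordered(false), the documents that did not trigger an error were still inserted.\n    for (BulkWriteError error : e.getWriteErrors()) {\n        System.out.println(\"Insert failed at index \" + error.getIndex() + \": \" + error.getMessage());\n    }\n}\n```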
\n\n### The final code to insert multiple documents\n\nLet's refactor the code a bit and here is the final `Create` class.\n\n``` java\npackage com.mongodb.quickstart;\n\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport com.mongodb.client.model.InsertManyOptions;\nimport org.bson.Document;\nimport org.bson.types.ObjectId;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.Random;\n\npublic class Create {\n\n    private static final Random rand = new Random();\n\n    public static void main(String[] args) {\n        try (MongoClient mongoClient = MongoClients.create(System.getProperty(\"mongodb.uri\"))) {\n\n            MongoDatabase sampleTrainingDB = mongoClient.getDatabase(\"sample_training\");\n            MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection(\"grades\");\n\n            insertOneDocument(gradesCollection);\n            insertManyDocuments(gradesCollection);\n        }\n    }\n\n    private static void insertOneDocument(MongoCollection<Document> gradesCollection) {\n        gradesCollection.insertOne(generateNewGrade(10000d, 1d));\n        System.out.println(\"One grade inserted for studentId 10000.\");\n    }\n\n    private static void insertManyDocuments(MongoCollection<Document> gradesCollection) {\n        List<Document> grades = new ArrayList<>();\n        for (double classId = 1d; classId <= 10d; classId++) {\n            grades.add(generateNewGrade(10001d, classId));\n        }\n\n        gradesCollection.insertMany(grades, new InsertManyOptions().ordered(false));\n        System.out.println(\"Ten grades inserted for studentId 10001.\");\n    }\n\n    private static Document generateNewGrade(double studentId, double classId) {\n        List<Document> scores = List.of(new Document(\"type\", \"exam\").append(\"score\", rand.nextDouble() * 100),\n                                        new Document(\"type\", \"quiz\").append(\"score\", rand.nextDouble() * 100),\n                                        new Document(\"type\", \"homework\").append(\"score\", rand.nextDouble() * 100),\n                                        new Document(\"type\", \"homework\").append(\"score\", rand.nextDouble() * 100));\n        return new Document(\"_id\", new ObjectId()).append(\"student_id\", studentId)\n                                                   .append(\"class_id\", classId)\n                                                   .append(\"scores\", scores);\n    }\n}\n```\n\nAs a reminder, every write operation (create, replace, update, delete) performed on a **single** document\nis ACID in MongoDB, which means `insertMany` is not ACID by default. But, good news: since MongoDB 4.0, we can wrap this call in a multi-document ACID transaction to make it fully ACID. I explain\nthis in more detail in my blog\nabout multi-document ACID transactions.
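\n\nIf you do need all-or-nothing semantics for the whole batch, here is a minimal, hedged sketch of what wrapping the `insertMany` call in a multi-document transaction could look like. It assumes MongoDB 4.0+ running as a replica set (an Atlas cluster qualifies) and reuses the `mongoClient`, `gradesCollection`, and `grades` variables from the examples above.\n\n``` java\ntry (ClientSession session = mongoClient.startSession()) {\n    session.withTransaction(() -> {\n        // Inside the transaction, either all the grades are inserted or none of them are.\n        gradesCollection.insertMany(session, grades);\n        return null; // the transaction body must return a value\n    });\n}\n```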
\n\n## Read documents\n\n### Create data\n\nWe created the class `Create`. Now we will work in the `Read` class.\n\nWe wrote 11 new grades, one for the student with `{\"student_id\": 10000}` and 10 for the student\nwith `{\"student_id\": 10001}` in the `sample_training.grades` collection.\n\nAs a reminder, here are the grades of the student with `{\"student_id\": 10000}`.\n\n``` javascript\nMongoDB Enterprise Cluster0-shard-0:PRIMARY> db.grades.findOne({\"student_id\":10000})\n{\n    \"_id\" : ObjectId(\"5daa0e274f52b44cfea94652\"),\n    \"student_id\" : 10000,\n    \"class_id\" : 1,\n    \"scores\" : [\n        {\n            \"type\" : \"exam\",\n            \"score\" : 39.25175977753478\n        },\n        {\n            \"type\" : \"quiz\",\n            \"score\" : 80.2908713167313\n        },\n        {\n            \"type\" : \"homework\",\n            \"score\" : 63.5444978481843\n        },\n        {\n            \"type\" : \"homework\",\n            \"score\" : 82.35202261582563\n        }\n    ]\n}\n```\n\nWe also discussed BSON types, and we noted that `student_id` and `class_id` are doubles.\n\nMongoDB treats some types as equivalent for comparison purposes. For instance, numeric types undergo conversion before\ncomparison.\n\nSo, don't be surprised if I filter with an integer number and match a document that contains a double number, for\nexample. If you want to filter documents by value types, you can use\nthe `$type` operator.\n\nYou can read more\nabout type bracketing\nand comparison and sort order in our\ndocumentation.\n\n### Read a specific document\n\nLet's read the document above. To achieve this, we will use the method `find`, passing it a filter to help identify the\ndocument we want to find.\n\nPlease create a class `Read` in the `com.mongodb.quickstart` package with this code:\n\n``` java\npackage com.mongodb.quickstart;\n\nimport com.mongodb.client.*;\nimport org.bson.Document;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static com.mongodb.client.model.Filters.*;\nimport static com.mongodb.client.model.Projections.*;\nimport static com.mongodb.client.model.Sorts.descending;\n\npublic class Read {\n\n    public static void main(String[] args) {\n        try (MongoClient mongoClient = MongoClients.create(System.getProperty(\"mongodb.uri\"))) {\n            MongoDatabase sampleTrainingDB = mongoClient.getDatabase(\"sample_training\");\n            MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection(\"grades\");\n\n            // find one document with new Document\n            Document student1 = gradesCollection.find(new Document(\"student_id\", 10000)).first();\n            System.out.println(\"Student 1: \" + student1.toJson());\n        }\n    }\n}\n```\n\nAlso, make sure you set up your `mongodb.uri` in your system properties using your IDE if you want to run this code in\nyour favorite IDE.\n\nAlternatively, you can use this Maven command line in your root project (where the `src` folder is):\n\n``` bash\nmvn compile exec:java -Dexec.mainClass=\"com.mongodb.quickstart.Read\" -Dmongodb.uri=\"mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority\"\n```\n\nThe standard output should be:\n\n``` javascript\nStudent 1: {\"_id\": {\"$oid\": \"5daa0e274f52b44cfea94652\"},\n    \"student_id\": 10000.0,\n    \"class_id\": 1.0,\n    \"scores\": [\n        {\"type\": \"exam\", \"score\": 39.25175977753478},\n        {\"type\": \"quiz\", \"score\": 80.2908713167313},\n        {\"type\": \"homework\", \"score\": 63.5444978481843},\n        {\"type\": \"homework\", \"score\": 82.35202261582563}\n    ]\n}\n```\n\nThe MongoDB driver comes with a few helpers to ease the writing of these queries.
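\n\nFor instance, the `$type` operator mentioned earlier is exposed through the `Filters.type()` helper. Here is a minimal, hedged sketch that only matches the grades whose `student_id` is stored as a BSON double; it assumes an extra import of `org.bson.BsonType` and relies on the static `Filters.*` import already present in the `Read` class.\n\n``` java\n// Matches only the documents where student_id is stored as a double.\nList<Document> doubleIds = gradesCollection.find(type(\"student_id\", BsonType.DOUBLE)).into(new ArrayList<>());\nSystem.out.println(doubleIds.size() + \" grades have a double student_id.\");\n```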
\n\nGoing back to our first query, here's an equivalent version using the `Filters.eq()` method.\n\n``` java\ngradesCollection.find(eq(\"student_id\", 10000)).first();\n```\n\nOf course, I used a static import to make the code as compact and easy to read as possible.\n\n``` java\nimport static com.mongodb.client.model.Filters.eq;\n```\n\n### Read a range of documents\n\nIn the previous example, the benefit of these helpers is not obvious, but let me show you another example where I'm\nsearching all the grades with a *student_id* greater than or equal to 10,000.\n\n``` java\n// without helpers\ngradesCollection.find(new Document(\"student_id\", new Document(\"$gte\", 10000)));\n// with the Filters.gte() helper\ngradesCollection.find(gte(\"student_id\", 10000));\n```\n\nAs you can see, I'm using the `$gte` operator to write this query. You can learn about all the\ndifferent query operators in the MongoDB documentation.\n\n### Iterators\n\nThe `find` method returns an object that implements the interface `FindIterable`, which ultimately extends\nthe `Iterable` interface, so we can use an iterator to go through the list of documents we are receiving from MongoDB:\n\n``` java\nFindIterable<Document> iterable = gradesCollection.find(gte(\"student_id\", 10000));\nMongoCursor<Document> cursor = iterable.iterator();\nSystem.out.println(\"Student list with cursor: \");\nwhile (cursor.hasNext()) {\n    System.out.println(cursor.next().toJson());\n}\n```\n\n### Lists\n\nLists are usually easier to manipulate than iterators, so we can also do this to directly retrieve\nan `ArrayList`:\n\n``` java\nList<Document> studentList = gradesCollection.find(gte(\"student_id\", 10000)).into(new ArrayList<>());\nSystem.out.println(\"Student list with an ArrayList:\");\nfor (Document student : studentList) {\n    System.out.println(student.toJson());\n}\n```\n\n### Consumers\n\nWe could also use a `Consumer`, which is a functional interface:\n\n``` java\nConsumer<Document> printConsumer = document -> System.out.println(document.toJson());\ngradesCollection.find(gte(\"student_id\", 10000)).forEach(printConsumer);\n```\n\n### Cursors, sort, skip, limit, and projections\n\nAs we saw above with the `Iterator` example, MongoDB\nleverages cursors to iterate through your result set.\n\nIf you are already familiar with cursors in the mongo shell, you know\nthat transformations can be applied to them. A cursor can\nbe sorted and the documents it contains can be\ntransformed using a projection. Also,\nonce the cursor is sorted, we can choose to skip a few documents and limit the number of documents in the output. This\nis very useful to implement pagination in your frontend, for example.
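\n\nAs an illustration, here is a small, hedged sketch of what a pagination helper could look like; the method name and the `page`/`pageSize` parameters are made up for this example. Keep in mind that large `skip` values get slower as the collection grows, so range-based pagination is often preferred for very large collections.\n\n``` java\n// Hypothetical helper: returns one \"page\" of grades, sorted by class_id.\nprivate static List<Document> getPage(MongoCollection<Document> gradesCollection, int page, int pageSize) {\n    return gradesCollection.find(gte(\"student_id\", 10000))\n                           .sort(descending(\"class_id\"))\n                           .skip(page * pageSize)\n                           .limit(pageSize)\n                           .into(new ArrayList<>());\n}\n```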
\n\nLet's combine everything we have learnt in one query:\n\n``` java\nList<Document> docs = gradesCollection.find(and(eq(\"student_id\", 10001), lte(\"class_id\", 5)))\n                                      .projection(fields(excludeId(),\n                                                         include(\"class_id\",\n                                                                 \"student_id\")))\n                                      .sort(descending(\"class_id\"))\n                                      .skip(2)\n                                      .limit(2)\n                                      .into(new ArrayList<>());\n\nSystem.out.println(\"Student sorted, skipped, limited and projected: \");\nfor (Document student : docs) {\n    System.out.println(student.toJson());\n}\n```\n\nHere is the output we get:\n\n``` javascript\n{\"student_id\": 10001.0, \"class_id\": 3.0}\n{\"student_id\": 10001.0, \"class_id\": 2.0}\n```\n\nRemember that documents are returned in\nthe natural order, so if you want your output\nordered, you need to sort your cursors to make sure there is no randomness in your algorithm.\n\n### Indexes\n\nIf you want to make these queries (with or without sort) efficient,\n**you need** indexes!\n\nTo make my last query efficient, I should create this index:\n\n``` javascript\ndb.grades.createIndex({\"student_id\": 1, \"class_id\": -1})\n```\n\nWhen I run an explain on this query, this is the\nwinning plan I get:\n\n``` javascript\n\"winningPlan\" : {\n    \"stage\" : \"LIMIT\",\n    \"limitAmount\" : 2,\n    \"inputStage\" : {\n        \"stage\" : \"PROJECTION_COVERED\",\n        \"transformBy\" : {\n            \"_id\" : 0,\n            \"class_id\" : 1,\n            \"student_id\" : 1\n        },\n        \"inputStage\" : {\n            \"stage\" : \"SKIP\",\n            \"skipAmount\" : 2,\n            \"inputStage\" : {\n                \"stage\" : \"IXSCAN\",\n                \"keyPattern\" : {\n                    \"student_id\" : 1,\n                    \"class_id\" : -1\n                },\n                \"indexName\" : \"student_id_1_class_id_-1\",\n                \"isMultiKey\" : false,\n                \"multiKeyPaths\" : {\n                    \"student_id\" : [ ],\n                    \"class_id\" : [ ]\n                },\n                \"isUnique\" : false,\n                \"isSparse\" : false,\n                \"isPartial\" : false,\n                \"indexVersion\" : 2,\n                \"direction\" : \"forward\",\n                \"indexBounds\" : {\n                    \"student_id\" : [\n                        \"[10001.0, 10001.0]\"\n                    ],\n                    \"class_id\" : [\n                        \"[5.0, -inf.0]\"\n                    ]\n                }\n            }\n        }\n    }\n}\n```\n\nWith this index, we can see that we have no *SORT* stage, so we are not doing a sort in memory as the documents are\nalready sorted \"for free\" and returned in the order of the index.\n\nAlso, we can see that we don't have any *FETCH* stage, so this is\na covered query, the most efficient type of\nquery you can run in MongoDB. Indeed, all the information we are returning at the end is already in the index, so the\nindex itself contains everything we need to answer this query.
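\n\nThe same index can also be created directly from Java with the driver's `Indexes` helpers. Here is a minimal sketch of what that could look like (it assumes an import of `com.mongodb.client.model.Indexes`); creating an index that already exists is a no-op, so it is safe to run at application startup.\n\n``` java\n// Equivalent to db.grades.createIndex({\"student_id\": 1, \"class_id\": -1})\nString indexName = gradesCollection.createIndex(\n        Indexes.compoundIndex(Indexes.ascending(\"student_id\"), Indexes.descending(\"class_id\")));\nSystem.out.println(\"Index created: \" + indexName); // student_id_1_class_id_-1\n```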
\n\n### The final code to read documents\n\n``` java\npackage com.mongodb.quickstart;\n\nimport com.mongodb.client.*;\nimport org.bson.Document;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.function.Consumer;\n\nimport static com.mongodb.client.model.Filters.*;\nimport static com.mongodb.client.model.Projections.*;\nimport static com.mongodb.client.model.Sorts.descending;\n\npublic class Read {\n\n    public static void main(String[] args) {\n        try (MongoClient mongoClient = MongoClients.create(System.getProperty(\"mongodb.uri\"))) {\n            MongoDatabase sampleTrainingDB = mongoClient.getDatabase(\"sample_training\");\n            MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection(\"grades\");\n\n            // find one document with new Document\n            Document student1 = gradesCollection.find(new Document(\"student_id\", 10000)).first();\n            System.out.println(\"Student 1: \" + student1.toJson());\n\n            // find one document with Filters.eq()\n            Document student2 = gradesCollection.find(eq(\"student_id\", 10000)).first();\n            System.out.println(\"Student 2: \" + student2.toJson());\n\n            // find a list of documents and iterate through it using an iterator.\n            FindIterable<Document> iterable = gradesCollection.find(gte(\"student_id\", 10000));\n            MongoCursor<Document> cursor = iterable.iterator();\n            System.out.println(\"Student list with a cursor: \");\n            while (cursor.hasNext()) {\n                System.out.println(cursor.next().toJson());\n            }\n\n            // find a list of documents and use a List object instead of an iterator\n            List<Document> studentList = gradesCollection.find(gte(\"student_id\", 10000)).into(new ArrayList<>());\n            System.out.println(\"Student list with an ArrayList:\");\n            for (Document student : studentList) {\n                System.out.println(student.toJson());\n            }\n\n            // find a list of documents and print using a consumer\n            System.out.println(\"Student list using a Consumer:\");\n            Consumer<Document> printConsumer = document -> System.out.println(document.toJson());\n            gradesCollection.find(gte(\"student_id\", 10000)).forEach(printConsumer);\n\n            // find a list of documents with sort, skip, limit and projection\n            List<Document> docs = gradesCollection.find(and(eq(\"student_id\", 10001), lte(\"class_id\", 5)))\n                                                  .projection(fields(excludeId(), include(\"class_id\", \"student_id\")))\n                                                  .sort(descending(\"class_id\"))\n                                                  .skip(2)\n                                                  .limit(2)\n                                                  .into(new ArrayList<>());\n\n            System.out.println(\"Student sorted, skipped, limited and projected:\");\n            for (Document student : docs) {\n                System.out.println(student.toJson());\n            }\n        }\n    }\n}\n```\n\n## Update documents\n\n### Update one document\n\nLet's edit the document with `{student_id: 10000}`.
To achieve this, we will use the method `updateOne`.\n\nPlease create a class `Update` in the `com.mongodb.quickstart` package with this code:\n\n``` java\npackage com.mongodb.quickstart;\n\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport com.mongodb.client.model.FindOneAndUpdateOptions;\nimport com.mongodb.client.model.ReturnDocument;\nimport com.mongodb.client.model.UpdateOptions;\nimport com.mongodb.client.result.UpdateResult;\nimport org.bson.Document;\nimport org.bson.conversions.Bson;\nimport org.bson.json.JsonWriterSettings;\n\nimport static com.mongodb.client.model.Filters.and;\nimport static com.mongodb.client.model.Filters.eq;\nimport static com.mongodb.client.model.Updates.*;\n\npublic class Update {\n\n public static void main(String[] args) {\n JsonWriterSettings prettyPrint = JsonWriterSettings.builder().indent(true).build();\n\n try (MongoClient mongoClient = MongoClients.create(System.getProperty(\"mongodb.uri\"))) {\n MongoDatabase sampleTrainingDB = mongoClient.getDatabase(\"sample_training\");\n MongoCollection gradesCollection = sampleTrainingDB.getCollection(\"grades\");\n\n // update one document\n Bson filter = eq(\"student_id\", 10000);\n Bson updateOperation = set(\"comment\", \"You should learn MongoDB!\");\n UpdateResult updateResult = gradesCollection.updateOne(filter, updateOperation);\n System.out.println(\"=> Updating the doc with {\\\"student_id\\\":10000}. Adding comment.\");\n System.out.println(gradesCollection.find(filter).first().toJson(prettyPrint));\n System.out.println(updateResult);\n }\n }\n}\n```\n\nAs you can see in this example, the method `updateOne` takes two parameters:\n\n- The first one is the filter that identifies the document we want to update.\n- The second one is the update operation. Here, we are setting a new field `comment` with the\n value `\"You should learn MongoDB!\"`.\n\nIn order to run this program, make sure you set up your `mongodb.uri` in your system properties using your IDE if you\nwant to run this code in your favorite IDE (see above for more details).\n\nAlternatively, you can use this Maven command line in your root project (where the `src` folder is):\n\n``` bash\nmvn compile exec:java -Dexec.mainClass=\"com.mongodb.quickstart.Update\" -Dmongodb.uri=\"mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority\"\n```\n\nThe standard output should look like this:\n\n``` javascript\n=> Updating the doc with {\"student_id\":10000}. Adding comment.\n{\n \"_id\": {\n \"$oid\": \"5dd5c1f351f97d4a034109ed\"\n },\n \"student_id\": 10000.0,\n \"class_id\": 1.0,\n \"scores\": [\n {\n \"type\": \"exam\",\n \"score\": 21.580800815091415\n },\n {\n \"type\": \"quiz\",\n \"score\": 87.66967927111044\n },\n {\n \"type\": \"homework\",\n \"score\": 96.4060480668003\n },\n {\n \"type\": \"homework\",\n \"score\": 75.44966835508427\n }\n ],\n \"comment\": \"You should learn MongoDB!\"\n}\nAcknowledgedUpdateResult{matchedCount=1, modifiedCount=1, upsertedId=null}\n```\n\n### Upsert a document\n\nAn upsert is a mix between an insert operation and an update one. It happens when you want to update a document,\nassuming it exists, but it actually doesn't exist yet in your database.\n\nIn MongoDB, you can set an option to create this document on the fly and carry on with your update operation. 
This is an\nupsert operation.\n\nIn this example, I want to add a comment to the grades of my student 10002 for the class 10 but this document doesn't\nexist yet.\n\n``` java\nfilter = and(eq(\"student_id\", 10002d), eq(\"class_id\", 10d));\nupdateOperation = push(\"comments\", \"You will learn a lot if you read the MongoDB blog!\");\nUpdateOptions options = new UpdateOptions().upsert(true);\nupdateResult = gradesCollection.updateOne(filter, updateOperation, options);\nSystem.out.println(\"\\n=> Upsert document with {\\\"student_id\\\":10002.0, \\\"class_id\\\": 10.0} because it doesn't exist yet.\");\nSystem.out.println(updateResult);\nSystem.out.println(gradesCollection.find(filter).first().toJson(prettyPrint));\n```\n\nAs you can see, I'm using the third parameter of the update operation to set the option upsert to true.\n\nI'm also using the static method `Updates.push()` to push a new value in my array `comments` which does not exist yet,\nso I'm creating an array of one element in this case.\n\nThis is the output we get:\n\n``` javascript\n=> Upsert document with {\"student_id\":10002.0, \"class_id\": 10.0} because it doesn't exist yet.\nAcknowledgedUpdateResult{matchedCount=0, modifiedCount=0, upsertedId=BsonObjectId{value=5ddeb7b7224ad1d5cfab3733}}\n{\n \"_id\": {\n \"$oid\": \"5ddeb7b7224ad1d5cfab3733\"\n },\n \"class_id\": 10.0,\n \"student_id\": 10002.0,\n \"comments\": [\n \"You will learn a lot if you read the MongoDB blog!\"\n ]\n}\n```\n\n### Update many documents\n\nThe same way I was able to update one document with `updateOne()`, I can update multiple documents with `updateMany()`.\n\n``` java\nfilter = eq(\"student_id\", 10001);\nupdateResult = gradesCollection.updateMany(filter, updateOperation);\nSystem.out.println(\"\\n=> Updating all the documents with {\\\"student_id\\\":10001}.\");\nSystem.out.println(updateResult);\n```\n\nIn this example, I'm using the same `updateOperation` as earlier, so I'm creating a new one element array `comments` in\nthese 10 documents.\n\nHere is the output:\n\n``` javascript\n=> Updating all the documents with {\"student_id\":10001}.\nAcknowledgedUpdateResult{matchedCount=10, modifiedCount=10, upsertedId=null}\n```\n\n### The findOneAndUpdate method\n\nFinally, we have one last very useful method available in the MongoDB Java Driver: `findOneAndUpdate()`.\n\nIn most web applications, when a user updates something, they want to see this update reflected on their web page.\nWithout the `findOneAndUpdate()` method, you would have to run an update operation and then fetch the document with a\nfind operation to make sure you are printing the latest version of this object in the web page.\n\nThe `findOneAndUpdate()` method allows you to combine these two operations in one.\n\n``` java\n// findOneAndUpdate\nfilter = eq(\"student_id\", 10000);\nBson update1 = inc(\"x\", 10); // increment x by 10. 
As x doesn't exist yet, x=10.\nBson update2 = rename(\"class_id\", \"new_class_id\"); // rename variable \"class_id\" in \"new_class_id\".\nBson update3 = mul(\"scores.0.score\", 2); // multiply the first score in the array by 2.\nBson update4 = addToSet(\"comments\", \"This comment is uniq\"); // creating an array with a comment.\nBson update5 = addToSet(\"comments\", \"This comment is uniq\"); // using addToSet so no effect.\nBson updates = combine(update1, update2, update3, update4, update5);\n// returns the old version of the document before the update.\nDocument oldVersion = gradesCollection.findOneAndUpdate(filter, updates);\nSystem.out.println(\"\\n=> FindOneAndUpdate operation. Printing the old version by default:\");\nSystem.out.println(oldVersion.toJson(prettyPrint));\n\n// but I can also request the new version\nfilter = eq(\"student_id\", 10001);\nFindOneAndUpdateOptions optionAfter = new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER);\nDocument newVersion = gradesCollection.findOneAndUpdate(filter, updates, optionAfter);\nSystem.out.println(\"\\n=> FindOneAndUpdate operation. But we can also ask for the new version of the doc:\");\nSystem.out.println(newVersion.toJson(prettyPrint));\n```\n\nHere is the output:\n\n``` javascript\n=> FindOneAndUpdate operation. Printing the old version by default:\n{\n \"_id\": {\n \"$oid\": \"5dd5d46544fdc35505a8271b\"\n },\n \"student_id\": 10000.0,\n \"class_id\": 1.0,\n \"scores\": [\n {\n \"type\": \"exam\",\n \"score\": 69.52994626959251\n },\n {\n \"type\": \"quiz\",\n \"score\": 87.27457417188077\n },\n {\n \"type\": \"homework\",\n \"score\": 83.40970667948744\n },\n {\n \"type\": \"homework\",\n \"score\": 40.43663797673247\n }\n ],\n \"comment\": \"You should learn MongoDB!\"\n}\n\n=> FindOneAndUpdate operation. But we can also ask for the new version of the doc:\n{\n \"_id\": {\n \"$oid\": \"5dd5d46544fdc35505a82725\"\n },\n \"student_id\": 10001.0,\n \"scores\": [\n {\n \"type\": \"exam\",\n \"score\": 138.42535412437857\n },\n {\n \"type\": \"quiz\",\n \"score\": 84.66740178906916\n },\n {\n \"type\": \"homework\",\n \"score\": 36.773091359279675\n },\n {\n \"type\": \"homework\",\n \"score\": 14.90842128691825\n }\n ],\n \"comments\": [\n \"You will learn a lot if you read the MongoDB blog!\",\n \"This comment is uniq\"\n ],\n \"new_class_id\": 10.0,\n \"x\": 10\n}\n```\n\nAs you can see in this example, you can choose which version of the document you want to return using the appropriate\noption.\n\nI also used this example to show you a bunch of update operators:\n\n- `set` will set a value.\n- `inc` will increment a value.\n- `rename` will rename a field.\n- `mul` will multiply the value by the given number.\n- `addToSet` is similar to push but will only push the value in the array if the value doesn't exist already.\n\nThere are a few other update operators. 
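\n\nFor example, here is a small, hedged sketch combining a few of them; the `last_modified` field is made up for this illustration, and the statements reuse the `gradesCollection` variable and the static imports from the `Update` class above.\n\n``` java\nBson moreUpdates = combine(unset(\"x\"), // remove the \"x\" field entirely\n                           min(\"scores.0.score\", 50d), // lower the first score to 50 if it is currently higher\n                           currentDate(\"last_modified\")); // set \"last_modified\" to the current date\nSystem.out.println(gradesCollection.updateOne(eq(\"student_id\", 10001), moreUpdates));\n```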
\n\nYou can consult the entire list in\nour documentation.\n\n### The final code for updates\n\n``` java\npackage com.mongodb.quickstart;\n\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport com.mongodb.client.model.FindOneAndUpdateOptions;\nimport com.mongodb.client.model.ReturnDocument;\nimport com.mongodb.client.model.UpdateOptions;\nimport com.mongodb.client.result.UpdateResult;\nimport org.bson.Document;\nimport org.bson.conversions.Bson;\nimport org.bson.json.JsonWriterSettings;\n\nimport static com.mongodb.client.model.Filters.and;\nimport static com.mongodb.client.model.Filters.eq;\nimport static com.mongodb.client.model.Updates.*;\n\npublic class Update {\n\n    public static void main(String[] args) {\n        JsonWriterSettings prettyPrint = JsonWriterSettings.builder().indent(true).build();\n\n        try (MongoClient mongoClient = MongoClients.create(System.getProperty(\"mongodb.uri\"))) {\n            MongoDatabase sampleTrainingDB = mongoClient.getDatabase(\"sample_training\");\n            MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection(\"grades\");\n\n            // update one document\n            Bson filter = eq(\"student_id\", 10000);\n            Bson updateOperation = set(\"comment\", \"You should learn MongoDB!\");\n            UpdateResult updateResult = gradesCollection.updateOne(filter, updateOperation);\n            System.out.println(\"=> Updating the doc with {\\\"student_id\\\":10000}. Adding comment.\");\n            System.out.println(gradesCollection.find(filter).first().toJson(prettyPrint));\n            System.out.println(updateResult);\n\n            // upsert\n            filter = and(eq(\"student_id\", 10002d), eq(\"class_id\", 10d));\n            updateOperation = push(\"comments\", \"You will learn a lot if you read the MongoDB blog!\");\n            UpdateOptions options = new UpdateOptions().upsert(true);\n            updateResult = gradesCollection.updateOne(filter, updateOperation, options);\n            System.out.println(\"\\n=> Upsert document with {\\\"student_id\\\":10002.0, \\\"class_id\\\": 10.0} because it doesn't exist yet.\");\n            System.out.println(updateResult);\n            System.out.println(gradesCollection.find(filter).first().toJson(prettyPrint));\n\n            // update many documents\n            filter = eq(\"student_id\", 10001);\n            updateResult = gradesCollection.updateMany(filter, updateOperation);\n            System.out.println(\"\\n=> Updating all the documents with {\\\"student_id\\\":10001}.\");\n            System.out.println(updateResult);\n\n            // findOneAndUpdate\n            filter = eq(\"student_id\", 10000);\n            Bson update1 = inc(\"x\", 10); // increment x by 10. As x doesn't exist yet, x=10.\n            Bson update2 = rename(\"class_id\", \"new_class_id\"); // rename variable \"class_id\" in \"new_class_id\".\n            Bson update3 = mul(\"scores.0.score\", 2); // multiply the first score in the array by 2.\n            Bson update4 = addToSet(\"comments\", \"This comment is uniq\"); // creating an array with a comment.\n            Bson update5 = addToSet(\"comments\", \"This comment is uniq\"); // using addToSet so no effect.\n            Bson updates = combine(update1, update2, update3, update4, update5);\n            // returns the old version of the document before the update.\n            Document oldVersion = gradesCollection.findOneAndUpdate(filter, updates);\n            System.out.println(\"\\n=> FindOneAndUpdate operation.
Printing the old version by default:\");\n System.out.println(oldVersion.toJson(prettyPrint));\n\n // but I can also request the new version\n filter = eq(\"student_id\", 10001);\n FindOneAndUpdateOptions optionAfter = new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER);\n Document newVersion = gradesCollection.findOneAndUpdate(filter, updates, optionAfter);\n System.out.println(\"\\n=> FindOneAndUpdate operation. But we can also ask for the new version of the doc:\");\n System.out.println(newVersion.toJson(prettyPrint));\n }\n }\n}\n```\n\n## Delete documents\n\n### Delete one document\n\nLet's delete the document above. To achieve this, we will use the method `deleteOne`.\n\nPlease create a class `Delete` in the `com.mongodb.quickstart` package with this code:\n\n``` java\npackage com.mongodb.quickstart;\n\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport com.mongodb.client.result.DeleteResult;\nimport org.bson.Document;\nimport org.bson.conversions.Bson;\n\nimport static com.mongodb.client.model.Filters.eq;\nimport static com.mongodb.client.model.Filters.gte;\n\npublic class Delete {\n\n public static void main(String[] args) {\n\n try (MongoClient mongoClient = MongoClients.create(System.getProperty(\"mongodb.uri\"))) {\n MongoDatabase sampleTrainingDB = mongoClient.getDatabase(\"sample_training\");\n MongoCollection gradesCollection = sampleTrainingDB.getCollection(\"grades\");\n\n // delete one document\n Bson filter = eq(\"student_id\", 10000);\n DeleteResult result = gradesCollection.deleteOne(filter);\n System.out.println(result);\n }\n }\n}\n```\n\nAs you can see in this example, the method `deleteOne` only takes one parameter: a filter, just like the `find()`\noperation.\n\nIn order to run this program, make sure you set up your `mongodb.uri` in your system properties using your IDE if you\nwant to run this code in your favorite IDE (see above for more details).\n\nAlternatively, you can use this Maven command line in your root project (where the `src` folder is):\n\n``` bash\nmvn compile exec:java -Dexec.mainClass=\"com.mongodb.quickstart.Delete\" -Dmongodb.uri=\"mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority\"\n```\n\nThe standard output should look like this:\n\n``` javascript\nAcknowledgedDeleteResult{deletedCount=1}\n```\n\n### FindOneAndDelete()\n\nAre you emotionally attached to your document and want a chance to see it one last time before it's too late? 
We have\nwhat you need.\n\nThe method `findOneAndDelete()` allows you to retrieve a document and delete it in a single atomic operation.\n\nHere is how it works:\n\n``` java\nBson filter = eq(\"student_id\", 10002);\nDocument doc = gradesCollection.findOneAndDelete(filter);\nSystem.out.println(doc.toJson(JsonWriterSettings.builder().indent(true).build()));\n```\n\nHere is the output we get:\n\n``` javascript\n{\n    \"_id\": {\n        \"$oid\": \"5ddec378224ad1d5cfac02b8\"\n    },\n    \"class_id\": 10.0,\n    \"student_id\": 10002.0,\n    \"comments\": [\n        \"You will learn a lot if you read the MongoDB blog!\"\n    ]\n}\n```\n\n### Delete many documents\n\nThis time we will use `deleteMany()` instead of `deleteOne()` and we will use a different filter to match more\ndocuments.\n\n``` java\nBson filter = gte(\"student_id\", 10000);\nDeleteResult result = gradesCollection.deleteMany(filter);\nSystem.out.println(result);\n```\n\nAs a reminder, you can learn more about all the query selectors in our\ndocumentation.\n\nThis is the output we get:\n\n``` javascript\nAcknowledgedDeleteResult{deletedCount=10}\n```\n\n### Delete a collection\n\nDeleting all the documents from a collection will not delete the collection itself because a collection also contains\nmetadata like the index definitions or the chunk distribution if your collection is sharded, for example.\n\nIf you want to remove the entire collection **and** all the metadata associated with it, then you need to use\nthe `drop()` method.\n\n``` java\ngradesCollection.drop();\n```\n\n### The final code for delete operations\n\n``` java\npackage com.mongodb.quickstart;\n\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport com.mongodb.client.result.DeleteResult;\nimport org.bson.Document;\nimport org.bson.conversions.Bson;\nimport org.bson.json.JsonWriterSettings;\n\nimport static com.mongodb.client.model.Filters.eq;\nimport static com.mongodb.client.model.Filters.gte;\n\npublic class Delete {\n\n    public static void main(String[] args) {\n        try (MongoClient mongoClient = MongoClients.create(System.getProperty(\"mongodb.uri\"))) {\n            MongoDatabase sampleTrainingDB = mongoClient.getDatabase(\"sample_training\");\n            MongoCollection<Document> gradesCollection = sampleTrainingDB.getCollection(\"grades\");\n\n            // delete one document\n            Bson filter = eq(\"student_id\", 10000);\n            DeleteResult result = gradesCollection.deleteOne(filter);\n            System.out.println(result);\n\n            // findOneAndDelete operation\n            filter = eq(\"student_id\", 10002);\n            Document doc = gradesCollection.findOneAndDelete(filter);\n            System.out.println(doc.toJson(JsonWriterSettings.builder().indent(true).build()));\n\n            // delete many documents\n            filter = gte(\"student_id\", 10000);\n            result = gradesCollection.deleteMany(filter);\n            System.out.println(result);\n\n            // delete the entire collection and its metadata (indexes, chunk metadata, etc).\n            gradesCollection.drop();\n        }\n    }\n}\n```\n\n## Wrapping up\n\nWith this blog post, we have covered all the basic CRUD operations (create, read, update, and delete) and have also seen how we can\neasily use the powerful helpers available in the Java driver for MongoDB.
You can find the links to the other blog posts\nof this series just below.\n\n> If you want to learn more and deepen your knowledge faster, I recommend you check out the \"MongoDB Java\n> Developer Path\" available for free on [MongoDB University.\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB"], "pageDescription": "Learn how to use MongoDB with Java in this tutorial on CRUD operations with example code and walkthrough!", "contentType": "Quickstart"}, "title": "Getting Started with MongoDB and Java - CRUD Operations Tutorial", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-meetup-javascript-react-native", "action": "created", "body": "# Realm Meetup - Realm JavaScript for React Native Applications\n\nDidn't get a chance to attend the Realm JavaScript for React Native applications Meetup? Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up.\n\n:youtube]{vid=6nqMCAR_v7U}\n\nIn this event, recorded on June 10th, Andrew Meyer, Software Engineer, on the Realm JavaScript team, walks us through the React Native ecosystem as it relates to persisting data with Realm. We discuss things to consider when using React Native, best practices to implement and gotcha's to avoid, as well as what's next for the JavaScript team at Realm.\n\nIn this 55-minute recording, Andrew spends about 45 minutes presenting \n\n- React Native Overview & Benefits\n\n- React Native Key Concepts and Architecture\n\n- Realm Integration with React Native\n\n- Realm Best Practices / Tips&Tricks with React Native\n\nAfter this, we have about 10 minutes of live Q&A with Ian & Andrew and our community . For those of you who prefer to read, below we have a full transcript of the meetup too. \n\nThroughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our [Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.\n\nTo learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.\n\n### Transcript\n(*As this is verbatim, please excuse any typos or punctuation errors!*)\n\n**Ian:**\nI'm Ian Ward. I'm a product manager that focuses on the Realm SDKs. And with me today, I'm joined by Andrew Meyer, who is an engineer on our React Native development team, and who is focusing on a lot of the improvements we're looking to make for the React Native SDK in the future. And so I just went through a few slides here to just kick it off. So we've been running these user group sessions for a while now, we have some upcoming meetups next week we are going to be joined by a AWS engineer to talk about how to integrate MongoDB Realm, our serverless platform with AWS EventBridge. A couple of weeks after that, we will also be joined by the Swift team to talk about some of the new improvements they've made to the SDK and developer experience. So that's key path filtering as well as automatic open for Realms.\n\nWe also have MongoDB.live, which is happening on July 13th and 14th. This is a free virtual event and we will have a whole track set up for Realm and mobile development. 
So if you are interested in mobile development, which I presume you are, if you're here, you can sign up for that. No knowledge or experience with MongoDB is necessary to learn something from some of these sessions that we're going to have.\n\nA little bit of housekeeping here. So we're using this Bevy platform. You'll see, in the web view here that there's a chat. If you have questions during the program, feel free to type in the question right there, and we'll look to answer it if we can in the chat, as well as we're going to have after Andrew goes through his presentation, we're going to have a Q&A session. So we'll go through some of those questions that have been accumulating across the presentation. And then at the end you can also ask other questions as well. We'll go through each one of those as well. If you'd like to get more connected we have our developer hub. This is our developer blog, We post a bunch of developer focused articles there. Please check that out at developer.mongodb.com, many of them are mobile focus.\n\nSo if you have questions on Swift UI, if you have questions on Kotlin multi-platform we have articles for you. If you have a question yourself come to forums.realm.io and ask a question, we patrol that regularly and answer a lot of those questions. And of course our Twitter @realm please follow us. And if you're interested in getting Swag please tweet about us. Let us know your comments, thoughts, especially about this program that you're watching right now. We would love to give away Swag and we'd love to see the community talk about us in the Twitter sphere. And without further ado, I'll stop sharing my screen here and pass it over to Andrew. Andrew, floor's yours.\n\n**Andrew:**\nHello. Just one second, I'll go find my slides. Okay. I think that looks good. So hello. My name is Andrew Meyer, I'm a software engineer at MongoDB on the Realm-JS team, just joined in February. I have been working with React Native for the past four years. In my last job I worked for HORNBACH, which is one of the largest hardware stores in Germany making the HORNBACH shopping app. It also allowed you to scan barcodes in the store and you could fill up a cart and checkout in the store and everything. So I like React Native, I've been using it as I said for four years and I'm excited to talk to you all about it. So my presentation is called Realm JavaScript for React Native applications. I'm hoping that it inspires you if you haven't used React Native to give it a shot and hopefully use realm for your data and persistence.\n\nLet's get started. So my agenda today, I'm going to go over React Native. I'm also going to go over some key concepts in React. We're going to go over how to integrate Realm with React Native, some best practices and tips when using Realm with React Native. And I'm also going to go over some upcoming changes to our API. So what is React Native? I think we've got a pretty mixed group. I'm not sure how many of you are actually React Native developers now or not. But I'm going to just assume that you don't know what React Native is and I'm going to give you a quick overview. So React Native is a framework made from Facebook. 
It's a cross platform app development library, you can basically use it for developing both Android and iOS applications, but it doesn't end there; there is also the ability to make desktop applications with React Native windows and React Native Mac OS.\n\nIt's pretty nice because with one team you can basically get your entire application development done. As it is written in JavaScript, if your backend is written in Node.JS, then you don't have a big context switch from jumping from front end development to back end development. So at my last job I think a lot of us started as front end developers, but by the end of a couple of years, we basically were full stack developers. So we were constantly going back and forth from front end to backend. And it was pretty easy, it's really a huge context switch when you have to jump into something like Ruby or Java, and then go back to JavaScript and yeah, it takes more time. So basically you just stay in one spot, but when you're using a JavaScript for the full stack you can hop back and forth really fast.\n\nAnother cool feature about React Native is fast refresh. This was implemented a few years ago, basically, as you develop your code, you can see the changes real time in your simulator, actually on your hardware as well. It can actually handle multiple simulators and hardware at the same time. I've tested Android, iOS phones in multiple languages and sizes and was able to see my front end changes happen in real time. So that's super useful if you've ever done a native development in iOS or an Android, you have to compile your changes and that takes quite a bit of time.\n\nSo this is an example of what a component looks like in React Native. If you're familiar with HTML and CSS it's really not a big jump to use React Native, basically you have a view and you apply styles to it, which is a JavaScript object that looks eerily similar to CSS except that it has camelCase instead of dash-case. One thing is if you do use React Native, you are going to want to become a very big friend of Flexbox. They use Flex quite adamantly, there's no CSS grid or anything like that, but I've been able to get pretty much anything I need to get done using Flexbox. So this is just a basic example of how that looks.\n\nSo, we're going to move on to React. So the React portion of React Native; it is using the React framework under the hood which is a front end web development framework. Key stomped concepts about React are JSX, that's what we just saw over here in the last example, this is JSX basically it's HTML and JavaScript. It resolves to basically a function call that will manipulate the DOM. If you're doing front end development and React Native world, it's actually going to bridge over into objective C for iOS and Java for Android. So that's one concept of it. The next is properties, pretty much every component you write is going to have properties and they're going to be managed by state. State is very important too. You can make basically say a to-do list and you need to have a state that's saving all its items and you need to be able to manipulate that. And if you manipulate that state, then it will re-render any changes through the properties that you pass down to sub components. And I'll show you an example of that now.\n\nSo this is an example of some React code. This is basically just a small piece of text to the button that allows you to change it from lowercase to uppercase. This is an example of a class component. 
There's actually two ways that you can make components in React, class components and functional components. So this is an example of how you do it with a class component, basically you make an instructor where you set your initial state then you have a rendering function that returns JSX. So this JSX reacts on that state. So in this case, we have a toUpper state with just a Boolean. If I change this toUpper Boolean to true, then that's going to change the text property that was passed in to uppercase or lowercase. And that'll be displayed here in the text. To set that state, I call this dot set state and basically just toggle that Boolean from true to false or false to true, depending on what state it's in.\n\nSo, as I said, this is class components. There's a lot more to this. Basically there's some of these life cycle methods that you had to override. You could basically before your component is mounted, make a network request and maybe initiate your state with some of that data. Or if you need to talk to a database that's where you would handle that. There's also a lot of... Yeah, the lifecycle methods get pretty confusing and that's why I'm going to move on to functional components, which are quite simpler. Before we could use this, but I think three years ago React introduced hooks, which is a way that we can do state management with functional programming. This gets rid of all those life cycle methods that are a bit confusing to know what's happening when.\n\nSo this is an example of what a functional component looks like. It's a lot less code, your state is actually being handled by a function and this function is called useState. Basically, you initialize it with some state and you get back that state and a function to set that state with. So in this case, I can look at that toUpper Boolean here and call this function to change that state. I want to go back real quick, that's how it was looking before, and that's how it is now. So I'm just going to go quickly through some of the hooks that are available to you because these are pretty much the basics of what you need to work with React and React Native. So as I talked about useState before this is just an example of showing a modal, but it's not too different than changing the case of a text.\n\nSo basically you'd be able to press a button and say, show this modal, you pass that in as a property to your modal. And you could actually pass that set modal, visible function to your modal components so that something inside of that modal can close that. And if you don't know what the modal is, it's basically an overlay that shows up on top of your app.\n\nSo then the next one is called useEffect. This is basically going to replace all your life cycle methods that I talked about before. And what you can do with useEffect is basically subscribe to changes that are happening. So that could be either in the state or some properties that are being passed down. There's an array at the end that you provide with the dependencies and every time something changes this function will be called. In this case, it's just an empty array, which basically means call this once and never call it again. This would be if you need to initialize your state with some data that's stored in this case in persistent storage then you'd be able to get that data out and store it to your state. We're going to see a lot more of this in the next slides.\n\nUseContext is super useful. 
It's a bit confusing, but this is showing how to use basically a provider pattern to apply a darker light mode to your application. So basically you would define the styles that you want to apply for your component. You create your context with the default state and that create gives you a context that you can call the provider on, and then you can set that value. So this one's basically overriding that light with dark, but maybe you have some sort of functionality or a switch that would change this value of state and change it on the fly. And then if you wrap your component or your entire application with this provider, then you can use the useContext hook to basically get that value out.\n\nSo this could be a very complex app tree and some button way deep down in that whole tree structure that can just easily get this theme value out and say, \"Okay, what am I, dark or light?\" Also, you can define your own hooks. So if you notice that one of your components is getting super complex or that you created a use effect that you're just using all over the place, then that's probably a good chance for you to do a little bit of dry coding and create your own hooks. So this one is basically one that will check a friend status. If you have some sort of chat API, so you'd be able to subscribe to any changes to that. And for trends it's Boolean from that to let you know that friends online or not. There's also a cool thing about useEffect. It has a tear down function. So if that component that's using this hook is removed from the tree, this function will be called so that those subscription handlers are not going to be called later on.\n\nA couple of other hooks useCallback and useMemo. These are a bit nice, this is basically the concept of memorization. So if you have a component that's doing some sort of calculation like averaging an array of items and maybe they raise 5,000 items long. If you just call the function to do that in your component, then every time your component got re-rendered from a state change, then it would do that computation again, and every single time it got re-rendered. You actually only want to do that if something in that array changes. So basically if you use useMemo you can provide dependencies and then compute that expensive value. Basically it helps you not have an on performance app.\n\nUseCallback is similar, but this is on return to function, this function will basically... Well, this is important because if you were to give a function to a component as a property, and you didn't call useCallback to do that, then every time that function re-rendered any component that was using that function as a property would also be re-rendered. So this basically keeps that function reference static, basically makes sure it doesn't change all the time. We're going to go into that a little bit more on the next slides. UseRef is also quite useful in React Native. React Native has to sometimes have some components that take advantage of data features on your device, for instance, the camera. So you have a camera component that you're using typically there might be some functions you want to call on that component. 
And maybe you're not actually using properties to define that, but you actually have functions that you can call in this case, maybe something that turns the flashlight on.\n\nIn that case, you would basically define your reference using useRef and you would be able to basically get a reference from useRef and you can the useRef property on this component to get a reference of that. Sorry, it's a bit confusing. But if you click this button, then you'd be able to basically call function on that reference. Cool. And these are the rest of the hooks. I didn't want to go into detail on them, but there are other ones out there. I encourage you to take a look at them yourselves and see what's useful but the ones that I went through are probably the most used, you get actually really far with useState, useEffect, useContext.\n\nSo one thing I want to go over that's super important is if you're going to use a JavaScript object for state manipulation. So, basically objects in JavaScript are a bit strange. I'll go up here. So if I make an object, like say, we have this message here and I changed something on that object and set that state with that changed object, the reference doesn't change. So basically react doesn't detect that anything changed, it's not looking at did the values inside this object change it's actually looking at is the address value of this object different. So going back to that, let me go back to the previous slide. So basically that's what immutability is. Sorry, let me get back here. And the way we fix that is if you set that state with an object, you want to make a copy of it, and if you make a copy, then the address will change and then the state will be detected as new, and then you'll get your message where you rendered.\n\nAnd you can either use object data sign, but basically the best method right now is to use the spread operator and what this does is basically take all the properties of that object and makes a copy of them here and then overrides that message with the texts that you entered into this text input. Cool. And back to that concept of memorization. There's actually a pretty cool function from React to that, it's very useful. When we had class components, there used to be a function you could override that compared the properties of what was changing inside of your component. And then you would be able to basically compare your previous properties with your next properties and decide, should I re-render this or not, should I return false, it's going to re-render. If you return true, then it's just going to stay the same.\n\nTo do that with functional components, we get a function called the React.memo. Basically, if you wrap your component React.memo it's going to automatically look at those base level properties and just check if they're equal to each other. With objects that becomes a little bit problematic, if it's just strings and Booleans, then it's going to successfully pull that off and only re-render that component if that string changes or that Boolean changes. So if you do wrap this and you're using objects, then you can actually make a function called... Well, in this case, it's equal or are equal, which is the second argument of this memo function. And that will give you the access to previous prompts and next prompts. So if you're coming into the hooks world and you already have a class component, this is a way to basically get that functionality back. 
Otherwise hooks is just going to re-render all the time.\n\nSo I have an example of this right here. So basically if I have in like my example before that text input if I wrap that in memo and I have this message and my setMessage, you state functions past in here, then this will only re-render if those things change. So back to this, the setMessage, if this was defined by me, not from useState, this is something that you definitely want to wrap with useCallback by the way making sure that this doesn't potentially always re-render, just like objects, functions are also... Actually functions in JavaScript are objects. So if you change a function or re-render the definition of a function, then its address is going to change and thus your component is going to be unnecessarily re-rendering.\n\nSo let's see if there's any questions at the moment, nothing. Okay, cool. So that brings us to Realm. Basically, how do you persist state? We have our hooks, like useState that's all great, but you need a way to be able to save that state and persist it when you close your app and open it again. And that's where Realm comes in. Realm has been around for 10 years, basically started as an iOS library and has moved on to Native Android and .NET and React Native finally. It's a very fast database it's actually written in C++, so that's why it's easily cross-platform. Its offline first, so most data that you usually have in an application is probably going to be talking to a server and getting that data off that, you can actually just store the data right on the phone.\n\nWe actually do offer a synchronization feature, it's not free, but if you do want to have cloud support, we do offer that as well. I'm not going to go over that in this presentation, but if that's something that you're really interested, in I encourage you to take a look at that and see if that's right for your application. And most databases, you have to know some sort of SQL language or some sort of query language to do anything. I don't know if anybody has MySQL or a PostgreSQL. Yeah, that's how I started learning what the DOM is. And you don't need to know a query language to basically use Realm. It's just object oriented data model. So you'll be using dots and texts and calling functions and using JavaScript objects to basically create and manipulate your data. It's pretty easy to integrate, if you want to add Realm to your React Native application either use npm or Yarn, whatever your flavor is to install Realm, and then just update your pods.\n\nThis is a shortcut, if anybody wanted to know how to install your pods without jumping into the iOS directory, if you're just getting to React Native, you'll know what I'm talking about later. So there's a little bit of an introduction around, so basically if you want to get started using Realm, you need to start modeling your data. Realm has schemas to do that, basically any model you have needs to have a schema. You provide a name for that schema, you define properties for this. Properties are typically defined with just a string to picking their type. This also accepts an object with a few other properties. This would be, if we were using the objects INTAX, then this would be type colon object ID, and then you could also make this the primary key if you wanted to, or provide some sort of default value.\n\nWe also have a bit of TypeScript support. 
So here's an example of how you would define a class using this syntax to basically make sure that you have that TypeScript support and whatever you get back from your Realm queries is going to be properly typed. Basically, so this is an example of a journal from my previous schema definition here. And what's important to notice is that you have to add this exclamation point, basically, this is just telling TypeScript that something else is going to be defining how these properties are being set, which Realm is going to be doing for you. It's important to know that Realm objects, their properties are actually pointers to a memory address. So those will be automatically propagated as soon as you connect this to Realm.\n\nIn this example, I created a generate function. This is basically just a nice syntax, where if you wanted to basically define an object that you can use to create Realm objects you can do that here and basically provide some values, you'll see what I mean in a second how that works. So once you have your schema defined then you can put that into a configuration and open the Realm, and then you get this Realm object. When you create a Realm, then it's going to actually create that database on your phone. If you close it, then it'll just make sure that that's saved and everything's good to go. So I'm going to show you some tips on how to keep that open and closed using hooks here in a second.\n\nAnother thing that's pretty useful though, is when you're getting started with defining your schema definitions and getting started with your app, it's pretty useful to set deleteRealmIfMigrationNeeded to true. Basically that's if you're adding new properties to your Realm in development, and it's going to yell at you because it needs to have a migration path. If you put this to true, then it's just going to ignore that, it's going to delete all that data and start from scratch. So this is pretty useful to have in development when you're constantly tweaking changes and all that to your data models.\n\nHere are some examples about how you can basically create, edit and delete anything in Realm. So with that Realm object that you get, basically any sort of manipulation you have to put inside of a write transaction that basically ensures that you're not going to have any sort of problems with concurrency. So if I do realm.write that takes a callback and within that callback, you can start manipulating data. So this is an example of how I would create something using that journal class. So if I give this thing that journal class, it's going to actually already be typed for me, and I'm going to call that generate function. I could actually just give this a plain JavaScript object as well. And if I provide this journal in the front then it'll start type checking whatever I'm providing as a second argument.\n\nIf you want to change anything, say that display journal in this case, it's just the journal that I'm working on in some component, then if I wrap this in a write transaction, I can immediately manipulate that property and it'll automatically be written to the database. I'll show you how to manage state with that in a second because it's a bit tricky. And then if you want to delete something, then basically you just provide what's coming back from Realm object creation, or from a Realm query, into this delete function and then it'll remove that from the database.
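A rough sketch of those create, update, and delete patterns, again using the assumed Journal model rather than the exact slide code:

```javascript
import Realm from 'realm';

// Assumes `realm` is an already-open Realm that includes the hypothetical Journal model,
// and that `displayJournal` / `someId` come from the surrounding component.

// Create: everything goes inside a write transaction.
realm.write(() => {
  realm.create('Journal', { _id: new Realm.BSON.ObjectId(), title: 'First entry' });
});

// Update: mutate the managed object's properties inside a write transaction.
realm.write(() => {
  displayJournal.title = 'Updated title';
});

// Delete: look the object up (here by primary key) and pass it to realm.delete.
realm.write(() => {
  realm.delete(realm.objectForPrimaryKey('Journal', someId));
});
```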
In this example, I'm just grabbing a journal by the ID primary key.\n\nAnd last but not least how to read data. There's two main functions I basically use to get data out of Realm, one is using realm.objects. Oops, I have a little bit of code there. If you call realm.objects and journal and forget about this filtered part basically it'll just get everything in the database for that model that you defined. If you want to filter it by something, say if you have an author field and it's got a name, then you could say, I just want everything that was authored by Andrew then this filter would basically return a model that's filtered and then you can also sort it. But you can chain these as well as you see, you can just be filtered or realm.object.filtered.sorted, that'd be the better syntax, but for readability sake, I kept it on one line. And if you want to get a single object, you can use object for primary and provide that ID.\n\nSo I'm going to go through a few best practices and tips to basically combine this knowledge of Realm and hooks, it's a lot, so bear with me. So if you have an app and you need to access Realm you could either use a singleton or something to provide that, but I prefer to make sure to just provide it once and I found that using useContext is the best way to do that. So if you wanted to do that, you could write your own Realm provider, basically this is a component, it's going to be wrapping. So if you make any sort of component, that's wrapping other components, you have to give children and you have to access the children property and make sure that what you're returning is implementing those children otherwise you won't have an app it'll just stop here. So this Realm provider is going to have children and it's going to have a configuration just like where you defined in the previous slide.\n\nAnd basically I have a useEffect that basically detects changes on the configuration and opens the Realm and then it sets that Realm to state and adds that to that provider value. And then if you do that, you'll be able to use that useContext to retrieve that realm at any point in your app or any component. So if you wrap that component with Realm provider, then you'll be able to get that Realm. I would recommend making a hook for this called useRealm or something similar where you can have error checking and any sort of extra logic that you need when you're accessing that Realm here and have that return that context for you to use.\n\nSo another thing, initializing data. So if you have a component and it's the very first time your app is opened you might want to initialize it with some data. The way I recommend doing that is making an effect for it, basically calling realm.objects and setting that to your state, having this useEffect listen for that state and just check, do we have any entries? If we don't have any entries then I would initialize some data and then set that journal up. So going on the next slide. And another very important thing is subscribing to changes. Yeah, basically if you are making changes to your collection in Realm, it's not going to automatically re-render. So I recommend using useState to do that and keeping a copy of that realm.object in state and updating with set state. And basically all you need to do is create an effect with a handle change function. 
This handle change function can be given to these listeners and basically it will be called any time any change happens to that Realm collection.\n\nYou want to make sure though that you do check if there are any modifications before you start setting state, especially if you're subscribing to changes to that collection, because you could find yourself in an infinite loop. Because as soon as you call addListener, there will be an initial event that fires and the length of all the changes is zero. So this is pretty important, make sure you check that there actually are changes before you set that state. So here's an example of basically using a FlatList to display Realm data. FlatList is one of the main components from React Native that I've used to basically display any list of data. FlatList basically takes an array of data, in our case, it'll also take a Realm collection, which is almost an array. It works like an array. So it works in this case. So you can provide that collection.\n\nI recommend sorting it because one thing about Realm collections is the order is not guaranteed. So you should sort it by some sort of timestamp or something to make sure that when you add new entries, it's not just showing up in some random spot in the list. It's just showing up in this case at the creation date. And then it's also recommended to use a key extractor and do not set it to the index of the array. That's a bad idea, set it to something that's unique. In this case, the ID that we were using for our Realm is an object ID, in the future we'll have a UUID property coming out, but in the meantime, object ID is our best option for providing that for you to have basically a unique ID that you can define your data with. And if you use that, I recommend calling the toHexString function on it because the key extractor wants a string. It's not going to work with an object. And then basically this will make sure that your items are properly rendered and not re-rendering the whole list all the time.\n\nAlso, using React.memo is going to help with that as well, which I'm going to show you how to do. This item in this case is actually a React.memo. I recommend instead of just passing that whole item as a property to maybe just get what you need out of it and pass that down, and that way you'll avoid any unnecessary re-renders. I did intentionally put a mistake in here. ID is an object, so you will have to be careful if you do it like this and I'll show you how that's done. You could just set it to a string and then you wouldn't have to provide this extra function, but on purpose I decided to pass the object to basically show you how you can check the properties and update this. So this is using React.memo and basically it will only render once. It will only re-render if that title changes or if that ID changes, which it shouldn't change.\n\nBasically, this guy will look at: is the title different? Are the IDs different? If they're not, return true; if any of them changed, return false. And that'll basically cause a re-render. So I wrote quite a bit of sample code to basically make these slides, if you want to check that out, my GitHub is Takameyer, T-A-K-A-M-E-Y-E-R. And I have a Realm and React Native example there. You can take a look there and I'll try to keep that updated with some best practices and things, but a lot of the sample code came from there. So I recommend checking that out. So that's basically my overview on React and Realm.
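Pulling the listener and FlatList ideas together, a condensed sketch of that pattern might look like the following; the Journal model, field names, and component structure are assumptions rather than the actual sample code from that repository:

```javascript
import React, { useEffect, useState, memo } from 'react';
import { FlatList, Text } from 'react-native';

// Memoized row: only primitive props are passed down, so comparison stays cheap.
const JournalItem = memo(
  ({ id, title }) => <Text>{title}</Text>,
  (prev, next) => prev.title === next.title && prev.id === next.id
);

const JournalList = ({ realm }) => {
  const [journals, setJournals] = useState([]);

  useEffect(() => {
    const collection = realm.objects('Journal').sorted('createdAt');
    // Seed the initial state once.
    setJournals([...collection]);

    const handleChange = (newCollection, changes) => {
      // Guard: the first notification after addListener reports zero changes.
      const changed =
        changes.insertions.length +
        changes.newModifications.length +
        changes.deletions.length;
      if (changed > 0) {
        setJournals([...newCollection]);
      }
    };

    collection.addListener(handleChange);
    return () => collection.removeListener(handleChange);
  }, [realm]);

  return (
    <FlatList
      data={journals}
      // Use the unique object ID (as a string), never the array index.
      keyExtractor={item => item._id.toHexString()}
      renderItem={({ item }) => (
        <JournalItem id={item._id.toHexString()} title={item.title} />
      )}
    />
  );
};
```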
I'll just want to take an opportunity to show up what's coming up for these upcoming features. Yeah, you just saw there was quite a lot of boiler plate in setting up those providers and schemas and things.\n\nAnd yeah, if you're setting up TypeScript types, you got to set up your schemers, you got to set up your types and you're doing that in multiple places. So I'm going to show you some things that I'm cooking up in the near future. Well, I'm not sure when they're coming out, but things that are going to make our lives a little bit easier. One goal I had is I wanted to have a single source of truth for your types. So we are working on some decorators. This is basically a feature of a JavaScript. It's basically a Boolean that you have to hit in TypeScript or in Babel to get working. And basically what that does is allow you to add some more context to classes and properties on that class. So in this case this one is going to allow you to define your models without a schema. And to do that, you provide a property, a decorator to your attributes. And that property is basically as an argument taking those configuration values I talked about before.\n\nSo basically saying, \"Hey, this description is a type string, or this ID is primary key and type object ID.\" My goal eventually when TypeScript supports it, I would like to infer the types from the TypeScript types that you're defining here. So at the moment we're probably going to have to live with just defining it twice, but at least they're not too far from each other and you can immediately see if they're not lining up. I didn't go over relations, but you can set up relations between Realms models. And that's what I'm going to revive with this link from property, this is bit easier, send texts, get that done. You can take a look at our documentation to see how you do that with normal schemas. But basically this is saying I'm linking lists from todoLists because a TodoItem on the items property from todoList link from todoList items, reads kind of nice.\n\nYeah, so those are basically how we're going to define schemas in the future. And we're also going to provide some mutator functions for your business logic in your classes. So basically if you define the mutator, it'll basically wrap this in a right transaction for you. So I'm running out of time, so I'm just going to go for the next things quick. We have Realm context generator. This is basically going to do that whole provider pattern for you. You call createRealmContext, give it your schemas, he's going to give you a context object back, you can call provider on that, but you can also use that thing to get hooks. I'm going to provide some hooks, so you don't have to do any sort of notification handling or anything like that. You basically call the hook on that context. You give it that Realm class.\n\nAnd in this case use object, he's just going to be looking at the primary key. You'll get that object back and you'll be able to render that and display it and subscribe to updates. UseQuery is also similar. That'll provide a sorting and filter function for you as well. And that's how you'd be able to get lists of items and display that. And then obviously you can just call, useRealm to get your Realm and then you can do all your right transactions. So that's coming up and that's it for me. Any questions?\n\n**Ian:**\nYeah. Great. Well, thank you, Andrew. We don't have too many questions, but we'll go through the ones we have. 
So there's one question around the deleteRealmIfMigrationNeeded and the user said this should only be used in dev. And I think, yes, we would agree with that, that this is for iterating your schema while you're developing your application. Is that correct Andrew?\n\n**Andrew:**\nYeah, definitely. You don't want to be using that in production at all. That's just for development. So Yeah.\n\n**Ian:**\nDefinitely. Next question here is how has Realm integrated with static code analyzers in order to give better dev experience and show suggestions like if a filtered field doesn't exist? I presume this is for maybe you're using Realm objects or maybe using regular JavaScript objects and filtered wouldn't exist on those, right? It's the regular filter expression.\n\n**Andrew:**\nYeah. If you're using basically that syntax I showed to you, you should still see the filtered function on all your collections. If you are looking at using filtered in that string, we don't have any sort of static analysis for those query strings yet, but definitely for the future, we could look at that in the future.\n\n**Ian:**\nYeah, I think the Vs code is definitely plugin heavy. And as we start to replatform the JavaScript SDK, especially for React Native, some of these new features that Andrew showed we definitely want to get into creating a plugin that is Realm specific. That'll help you create some of your queries and give you suggestions. So that's definitely something to look forward to in the future. Please give us feedback on some of these new features and APIs that you're looking to come out with, especially as around hooks? Because we interviewed quite a few users of Realm and React Native, and we settled on this, but if you have some extra feedback, we are a community driven product, so please we're looking for the feedback and if it could work for you or if it needed an extra parameter to take your use case into account, we're in the stage right now where we're designing it and we can add more functionality as we come.\n\nSome of the other things we're looking to develop for the React Native SDK is we're replatforming it to take advantage of the new Hermes JavaScripts VM, right interpreter, so that not just using JavaScript core, but also using Hermes with that. Once we do that, we'll also be able to get a new debugger right now, the debugging experience with Realm and React Native is a little bit... It's not great because of the way that we are a C++ database that runs on the device itself. And so with the Chrome debugger, right, it wants to run all your JavaScript code in Chrome. And so there's no Native context for it to attach to, and had to write this RPC layer to work around that. But with our new Hermes integration, we'll be able to get in much better debugging experience. And we think we'll actually look to Flipper as the debugger in the future, once we do that.\n\nOkay, great. What is your opinion benefit, would you say makes a difference better to use than the likes of PouchDB? So certainly noted here that this is a SDK for React Native. We're depending on the hard drive to be available in a mobile application. So for PouchDB, it's more used in web browsers. This SDK you can't use it in a web browser. We do have a Realm web SDK that is specific for querying data from Atlas, and it gives some convenience methods for logging into our serverless platform. But I will say that we are doing a spike right now to allow for compilation of our Realm core database into. 
And if we do that, we'll be able to then integrate into browsers and have the ability to store and persist data into IndexedDB, which is the database available in browsers. Right. And so you can look forward to that because then we could be integrated into PWAs for instance on the web.\n\nAnother question here: is there any integration or any suggestions to talk about around Realm Sync? Are there any other, I guess, tips and tricks that we can suggest, or things that may be coming in the future API, regarding a React Native application for Realm Sync? I know one of the things that came out in our user interviews was partitions. And being able to open up multiple Realms in a React Native SDK, I believe we were looking to potentially add this to our provider pattern, to put in multiple partition key values. Maybe you can talk a little bit to that.\n\n**Andrew:**\nYeah. Basically with that provider you'd be able to actually provide that configuration as properties as well to the provider. So if you initiate your context with the configuration and something needs to change along the line based on some sort of state, or maybe you open a new screen and it's like a detailed view. And that parameter, that new screen, is taking an ID, then you'd be able to basically set the partition to that ID and base the data off that partition ID.\n\n**Ian:**\nYeah, mostly it's our recommendation here to follow a singleton pattern where you put everything in the provider and then when you call that in a new view, it basically gives you an already open Realm reference. So you can boot up the app, you open up all the Realms that you need to, and then depending on the view you're on, you can call to that provider to get the Realm reference that you'd need.\n\n**Andrew:**\nRight. Yeah. That's another way to do it as well. So you can do it as granular as you want. And so you can use your provider on a small component on your header of your app, or you could wrap the whole app with it. So many use cases. So I would like to go a little bit more into detail someday about how to use Realm with React Navigation and multiple partitions and Realms and stuff like that. So maybe that's something we could look at in the future.\n\n**Ian:**\nYeah. Absolutely. Great. Are there any other questions from anyone here? Just to let everyone know this will be recorded, so we've recorded this and then we'll post this on YouTube later, so you can watch it from there, but if there's any other questions, please ask them now, otherwise we'll close early. Are there any issues with multiple independently installed applications accessing the same database? So I think it's important to note here that with Realm, we do allow multi-process access. We do have a way, we have a lock file, and so there is the ability to have a Realm database be used and accessed by multiple applications if you are using a non-sync Realm. With sync Realms, we don't have multi-process support, it is something we'll look to add in the future, but for right now we don't have it. And that's just from the fact that our synchronization runs in a background thread. And it's hard for us to tell when that thread is done with its work or not.\n\nAnother question is the concept behind partitions. We didn't cover this. I'd certainly encourage you to go to docs.mongodb.com/realm; we have a bunch of documentation around our sync, but a partition corresponds to the Realm file on the client side.
So what you can do with the Realm SDK is if you enable sync, you're now syncing into a MongoDB Atlas server or cluster. This is the managed database-as-a-service offering for the cloud version of MongoDB. And you can have multiple collections within this MongoDB instance. And you could have, let's say, a hundred thousand documents. Those hundred thousand documents are the amalgamation of all of your Realm clients. And so a partition allows you to specify which documents are for which clients. So you can boot up and say, \"Okay, I logged in. My user ID is Ian Ward. Therefore give me all documents that are for Ian Ward.\" And that's where you can segment your data that's all stored together in MongoDB. Interesting question.\n\n**Andrew:**\nYeah. I feel like a simple application is probably just going to be partitioned by a user ID, but if you're making an app for a logistics company that has multiple warehouses and you have an app that has the inventory for all those warehouses, then you might want to partition on those warehouses, the warehouse that you're in. So that'd be a good example of where you could use that partition in a more complex environment.\n\n**Ian:**\nYeah, definitely. Yeah. It doesn't need to be user ID. It could also be store ID. We have a lot of logistics customers, so it could be driver ID, and whatever packages that driver is supposed to deliver on that day will be part of their partition. Great. Well if there's no other... Oops, got another one in: can we set up our own sync on an existing Realm database and have it sync existing data, i.e. the user used the app without syncing, but later decides to sync the data after signing up? So right now the file format for a Realm database using non-sync and a syncing database is different, basically because with sync, we need to keep track of the operations that are happening while you're offline. So it keeps a queue of those operations.\n\n**Ian:**\nAnd then once you connect back online, it automatically sends those operations to the server side to apply the state. Right now, if you wanted to move to a synchronized Realm, you would need to copy that data from the non-sync Realm to the sync Realm. We do have a project that I hope to get to in the next quarter for automatically doing that conversion for you. So you'll basically be able to write all the data and copy it over to the sync Realm and make it a lot easier for developers to do that if they wish to. But it is something that we get some requests for. So we would like to make it easier. Okay. Well, thank you very much, everyone. Thank you, Andrew. I really appreciate it. And thank you everyone for coming. I hope you found this valuable and please reach out to us if you have any further questions. Okay. Thanks everyone. Bye.
We discuss things to consider when using React Native, best practices to implement and gotcha's to avoid, as well as what's next for the JavaScript team at Realm.\n\n", "contentType": "Article"}, "title": "Realm Meetup - Realm JavaScript for React Native Applications", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/designing-developing-analyzing-new-mongodb-shell", "action": "created", "body": "# Designing, Developing, and Analyzing with the New MongoDB Shell\n\nThere are many methods available for interacting with MongoDB and depending on what you're trying to accomplish, one way to work with MongoDB might be better than another. For example, if you're a power user of Visual Studio Code, then the MongoDB Extension for Visual Studio Code might make sense. If you're constantly working with infrastructure and deployments, maybe the MongoDB CLI makes the most sense. If you're working with data but prefer a command line experience, the MongoDB Shell is something you'll be interested in.\n\nThe MongoDB Shell gives you a rich experience to work with your data through syntax highlighting, intelligent autocomplete, clear error messages, and the ability to extend and customize it however you'd like.\n\nIn this article, we're going to look a little deeper at the things we can do with the MongoDB Shell.\n\n## Syntax Highlighting and Intelligent Autocomplete\n\nIf you're like me, looking at a wall of code or text that is a single color can be mind-numbing to you. It makes it difficult to spot things and creates overall strain, which could damage productivity. Most development IDEs don't have this problem because they have proper syntax highlighting, but it's common for command line tools to not have this luxury. However, this is no longer true when it comes to the MongoDB Shell because it is a command line tool that has syntax highlighting.\n\nWhen you write commands and view results, you'll see colors that match your command line setup as well as pretty-print formatting that is readable and easy to process.\n\nFormatting and colors are only part of the battle that typical command line advocates encounter. The other common pain-point, that the MongoDB Shell fixes, is around autocomplete. Most IDEs have autocomplete functionality to save you from having to memorize every little thing, but it is less common in command line tools.\n\nAs you're using the MongoDB Shell, simply pressing the \"Tab\" key on your keyboard will bring up valuable suggestions related to what you're trying to accomplish.\n\nSyntax highlighting, formatting, and autocomplete are just a few small things that can go a long way towards making the developer experience significantly more pleasant.\n\n## Error Messages that Actually Make Sense\n\nHow many times have you used a CLI, gotten some errors you didn't understand, and then either wasted half your day finding a missing comma or rage quit? It's happened to me too many times because of poor error reporting in whatever tool I was using.\n\nWith the MongoDB Shell, you'll get significantly better error reporting than a typical command line tool.\n\nIn the above example, I've forgotten a comma, something I do regularly along with colons and semi-colons, and it told me, along with providing a general area on where the comma should go. 
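For instance, a statement along these lines, with the comma missing after the first field (an illustrative document, not the exact one from the screenshot), is enough to trigger that kind of helpful pointer:

```javascript
// Missing comma between "name" and "ingredients"; mongosh indicates roughly where it belongs.
db.recipes.insertOne({ name: "Pizza" ingredients: ["dough", "sauce", "cheese"] })
```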
That's a lot better than something like \"Generic Runtime Error 0x234223.\"\n\n## Extending the MongoDB Shell with Plugins Known as Snippets\n\nIf you use the MongoDB Shell enough, you'll probably reach a point in time where you wish it did something specific to your needs on a repetitive basis. Should this happen, you can always extend the tool with snippets, which are similar to plugins.\n\nTo get an idea of some of the official MongoDB Shell snippets, execute the following from the MongoDB Shell:\n\n```bash\nsnippet search\n```\n\nThe above command searches the snippets found in a repository on GitHub.\n\nYou can always define your own repository of snippets, but if you wanted to use one of the available but optional snippets, you could run something like this:\n\n```bash\nsnippet install analyze-schema\n```\n\nThe above snippet allows you to analyze any collection that you specify. So in the example of my \"recipes\" collection, I could do the following:\n\n```bash\nuse food;\nschema(db.recipes);\n```\n\nThe results of the schema analysis, at least for my collection, is the following:\n\n```\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 (index) \u2502 0 \u2502 1 \u2502 2 \u2502\n\u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n\u2502 0 \u2502 '_id ' \u2502 '100.0 %' \u2502 'ObjectID' \u2502\n\u2502 1 \u2502 'ingredients' \u2502 '100.0 %' \u2502 'Array' \u2502\n\u2502 2 \u2502 'name ' \u2502 '100.0 %' \u2502 'String' \u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n```\n\nSnippets aren't the only way to extend functionality within the MongoDB Shell. You can also use Node.js in all its glory directly within the MongoDB Shell using custom scripts.\n\n## Using Node.js Scripts within the MongoDB Shell\n\nSo let's say you've got a data need that you can't easily accomplish with the MongoDB Query API or an aggregation pipeline. If you can accomplish what you need using Node.js, you can accomplish what you need in the MongoDB Shell.\n\nLet's take this example.\n\nSay you need to consume some data from a remote service and store it in MongoDB. Typically, you'd probably write an application, download the data, maybe store it in a file, and load it into MongoDB or load it with one of the programming drivers. You can skip a few steps and make your life a little easier.\n\nTry this.\n\nWhen you are connected to the MongoDB Shell, execute the following commands:\n\n```bash\nuse pokemon\n.editor\n```\n\nThe first will switch to a database\u2014in this case, \"pokemon\"\u2014and the second will open the editor. 
From the editor, paste in the following code:\n\n```javascript\nasync function getData(url) {\n const fetch = require(\"node-fetch\");\n const results = await fetch(url)\n .then(response => response.json());\n db.creatures.insertOne(results);\n}\n```\n\nThe above function will make use of the node-fetch package from NPM. Then, using the package, we can make a request to a provided URL and store the results in a \"creatures\" collection.\n\nYou can execute this function simply by doing something like the following:\n\n```bash\ngetData(\"https://pokeapi.co/api/v2/pokemon/pikachu\");\n```\n\nIf it ran successfully, your collection should have new data in it.\n\nIn regards to the NPM packages, you can either install them globally or to your current working directory. The MongoDB Shell will pick them up when you need them.\n\nIf you'd like to use your own preferred editor rather than the one that the MongoDB Shell provides you, execute the following command prior to attempting to open an editor:\n\n```bash\nconfig.set(\"editor\", \"vi\");\n```\n\nThe above command will make VI the default editor to use from within the MongoDB Shell. More information on using an external editor can be found in the documentation.\n\n## Conclusion\n\nYou can do some neat things with the MongoDB Shell, and while it isn't for everyone, if you're a power user of the command line, it will certainly improve your productivity with MongoDB.\n\nIf you have questions, stop by the MongoDB Community Forums!", "format": "md", "metadata": {"tags": ["MongoDB", "Bash", "JavaScript"], "pageDescription": "Learn about the benefits of using the new MongoDB Shell for interacting with databases, collections, and the data inside.", "contentType": "Article"}, "title": "Designing, Developing, and Analyzing with the New MongoDB Shell", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/getting-started-unity-creating-2d-game", "action": "created", "body": "# Getting Started with Unity for Creating a 2D Game\n\nIf you've been keeping up with the content on the MongoDB Developer Portal, you'll know that a few of us at MongoDB (Nic Raboy, Adrienne Tacke, Karen Huaulme) have been working on a game titled Plummeting People, a Fall Guys: Ultimate Knockout tribute game. Up until now we've focused on game planning and part of our backend infrastructure with a user profile store.\n\nAs part of the natural progression in our development of the game and part of this tutorial series, it makes sense to get started with the actual gaming aspect, and that means diving into Unity, our game development framework.\n\nIn this tutorial, we're going to get familiar with some of the basics behind Unity and get a sprite moving on the screen as well as handing collision. If you're looking for how we plan to integrate the game into MongoDB, that's going to be saved for another tutorial.\n\nAn example of what we want to accomplish can be seen in the following animated image:\n\nThe framerate in the image is a little stuttery, but the actual result is quite smooth.\n\n## The Requirements\n\nBefore we get started, it's important to understand the requirements for creating the game.\n\n- Unity 2020+\n- Image to be used for player\n- Image to be used for the background\n\nI'm using Unity 2020.1.6f1, but any version around this particular version should be fine. 
You can download Unity at no cost for macOS and Windows, but make sure you understand the licensing model if you plan to sell your game.\n\nSince the goal of this tutorial is around moving a game object and handling collisions with another game object, we're going to need images. I'm using a 1x1 pixel image for my player, obstacle, and background, all scaled differently within Unity, but you can use whatever images you want.\n\n## Creating a New Unity Project with Texture and Script Assets\n\nTo keep things easy to understand, we're going to start with a fresh project. Within the **Unity Hub** application that becomes available after installing Unity, choose to create a new project.\n\nYou'll want to choose **2D** from the available templates, but the name and project location doesn't matter as long as you're comfortable with it.\n\nThe project might take a while to generate, but when it's done, you should be presented with something that looks like the following:\n\nAs part of the first steps, we need to make the project a little more development ready. Within the **Project** tree, right click on **Assets** and choose to create a new folder for **Textures** as well as **Scripts**.\n\nAny images that we plan to use in our game will end up in the **Textures** folder and any game logic will end up as a script within the **Scripts** folder. If you have your player, background, and obstacle images, place them within the **Textures** directory now.\n\nAs of right now there is a single scene for the game titled **SampleScene**. The name for this scene doesn't properly represent what the scene will be responsible for. Instead, let's rename it to **GameScene** as it will be used for the main gaming component for our project. A scene for a game is similar to a scene in a television show or movie. You'll likely have more than one scene, but each scene is responsible for something distinct. For example, in a game you might have a scene for the menu that appears when the user starts the game, a scene for game-play, and a scene for what happens when they've gotten game over. The use cases are limitless.\n\nWith the scene named appropriately, it's time to add game objects for the player, background, and obstacle. Within the project hierarchy panel, right click underneath the **Main Camera** item (if your hierarchy is expanded) or just under **GameScene** (if not expanded) and choose **Create Empty** from the list.\n\nWe'll want to create a game object for each of the following: the player, background, and obstacle. The name isn't too important, but it's probably a good idea to give them names based around their purpose.\n\nTo summarize what we've done, double-check the following:\n\n- Created a **Textures** and **Scripts** directory within the **Assets** directory.\n- Added an image that represents a player, an obstacle, and a background to the **Textures** directory.\n- Renamed **SampleScene** to **GameScene**.\n- Created a **Player** game object within the scene.\n- Created an **Obstacle** game object within the scene.\n- Created a **Background** game object within the scene.\n\nAt this point in time we have the project properly laid out.\n\n## Adding Sprite Renders, Physics, Collision Boxes, and Scripts to a Game Object\n\nWe have our game objects and assets ready to go and are now ready to configure them. 
This means adding images to the game object, physics properties, and any collision related data.\n\nWith the player game object selected from the project hierarchy, choose **Add Component** and search for **Sprite Renderer**.\n\nThe **Sprite Renderer** allows us to associate an image to our game object. Click the circle icon next to the **Sprite** property's input box. A panel will pop up that allows you to select the image you want to associate to the selected game object. You're going to want to use the image that you've added to the **Textures** directory. Follow the same steps for the obstacle and the background.\n\nYou may or may not notice that the layering of your sprites with images are not correct in the sense that some images are in the background and some are in the foreground. To fix the layering, we need to add a **Sorting Layer** to the game objects.\n\nRather than using the default sorting layer, choose to **Add Sorting Layer...** so we can use our own strategy. Create two new layers titled **Background** and **GameObject** and make sure that **Background** sits above **GameObject** in the list. The list represents the rendering order so higher in the list gets rendered first and lower in the list gets rendered last. This means that the items rendering last appear at the highest level of the foreground. Think about it as layers in Adobe Photoshop, only reversed in terms of which layers are most visible.\n\nWith the sorting layers defined, set the correct **Sorting Layer** for each of the game objects in the scene.\n\nFor clarity, the background game object should have the **Background** sorting layer applied and the obstacle as well as the player game object should have the **GameObject** sorting layer applied. We are doing it this way because based on the order of our layers, we want the background game object to truly sit behind the other game objects.\n\nThe next step is to add physics and collision box data to the game objects that should have such data. Select the player game object and search for a **Rigidbody 2D** component.\n\nSince this is a 2D game that has no sense of flooring, the **Gravity Scale** for the player should be zero. This will prevent the player from falling off the screen as soon as the game starts. The player is the only game object that will need a rigid body because it is the only game object where physics might be important.\n\nIn addition to a rigid body, the player will also need a collision box. Add a new **Box Collider 2D** component to the player game object.\n\nThe **Box Collider 2D** component should be added to the obstacle as well. The background, since it has no interaction with the player or obstacle does not need any additional component added to it.\n\nThe final configuration for the game objects is the adding of the scripts for game logic.\n\nRight click on the **Scripts** directory and choose to create a new **C# Script**. You'll want to rename the script to something that represents the game object that it will be a part of. For this particular script, it will be associated to the player game object.\n\nAfter selecting the game object for the player, drag the script file to the **Add Component** area of the inspector to add it to the game object.\n\nAt this point in time everything for this particular game is configured. 
However, before we move onto the next step, let's confirm the components added to each of the game objects in the scene.\n\n- Background has one sprite renderer with a **Background** sorting layer.\n- Player has one sprite renderer, one rigid body, and one box collider with the **GameObject** sorting layer.\n- Obstacle has one sprite renderer, and one box collider with the **GameObject** sorting layer.\n\nThe next step is to apply some game logic.\n\n## Controlling a Game Object with a Unity C# Script\n\nIn Unity, everything in a scene is controlled by a script. These scripts exist on game objects which make it easy to separate the bits and pieces that make up a game. For example the player might have a script with logic. The obstacles might have a different script with logic. Heck, even the grass within your scene might have a script. It's totally up to you how you want to script every part of your scene.\n\nIn this particular game example, we're only going to add logic to the player object script.\n\nThe script should already be associated to a player object, so open the script file and you should see the following code:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class Player : MonoBehaviour\n{\n\n void Start()\n {\n // ...\n }\n\n void Update()\n {\n // ...\n }\n\n}\n```\n\nTo move the player we have a few options. We could transform the position of the game object, we can transform the position of the rigid body, or we can apply physics force to the rigid body. Each will give us different results, with the force option being the most unique.\n\nBecause we do have physics, let's look at the latter two options, starting with the movement through force.\n\nWithin your C# script, change your code to the following:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class Player : MonoBehaviour\n{\n\n public float speed = 1.5f;\n\n private Rigidbody2D rigidBody2D;\n\n void Start()\n {\n rigidBody2D = GetComponent();\n }\n\n void Update()\n {\n\n }\n\n void FixedUpdate() {\n float h = 0.0f;\n float v = 0.0f;\n if (Input.GetKey(\"w\")) { v = 1.0f; }\n if (Input.GetKey(\"s\")) { v = -1.0f; }\n if (Input.GetKey(\"a\")) { h = -1.0f; }\n if (Input.GetKey(\"d\")) { h = 1.0f; }\n\n rigidBody2D.AddForce(new Vector2(h, v) * speed);\n }\n\n}\n```\n\nWe're using a `FixedUpdate` because we're using physics on our game object. Had we not been using physics, the `Update` function would have been fine.\n\nWhen any of the directional keys are pressed (not arrow keys), force is applied to the rigid body in a certain direction at a certain speed. If you ran the game and tried to move the player, you'd notice that it moves with a kind of sliding on ice effect. Rather than moving the player at a constant speed, the player increases speed as it builds up momentum and then when you release the movement keys it gradually slows down. This is because of the physics and the applying of force.\n\nMoving the player into the obstacle will result in the player stopping. We didn't even need to add any code to make this possible.\n\nSo let's look at moving the player without applying force. 
Change the `FixedUpdate` function to the following:\n\n``` csharp\nvoid FixedUpdate() {\n float h = 0.0f;\n float v = 0.0f;\n if (Input.GetKey(\"w\")) { v = 1.0f; }\n if (Input.GetKey(\"s\")) { v = -1.0f; }\n if (Input.GetKey(\"a\")) { h = -1.0f; }\n if (Input.GetKey(\"d\")) { h = 1.0f; }\n\n rigidBody2D.MovePosition(rigidBody2D.position + (new Vector2(h, v) * speed * Time.fixedDeltaTime));\n}\n```\n\nInstead of using the `AddForce` method we are using the `MovePosition` method. We are now translating our rigid body which will also translate our game object position. We have to use the `fixedDeltaTime`, otherwise we risk our translations happening too quickly if the `FixedUpdate` is executed too quickly.\n\nIf you run the game, you shouldn't get the moving on ice effect, but instead nice smooth movement that stops as soon as you let go of the keys.\n\nIn both examples, the movement was limited to the letter keys on the keyboard.\n\nIf you want to move based on the typical WASD letter keys and the arrow keys, you could do something like this instead:\n\n``` csharp\nvoid FixedUpdate() {\n float h = Input.GetAxis(\"Horizontal\");\n float v = Input.GetAxis(\"Vertical\");\n\n rigidBody2D.MovePosition(rigidBody2D.position + (new Vector2(h, v) * speed * Time.fixedDeltaTime));\n}\n```\n\nThe above code will generate a value of -1.0, 0.0, or 1.0 depending on if the corresponding letter key or arrow key was pressed.\n\nJust like with the `AddForce` method, when using the `MovePosition` method, the collisions between the player and the obstacle still happen.\n\n## Conclusion\n\nYou just saw how to get started with Unity and building a simple 2D game. Of course what we saw in this tutorial wasn't an actual game, but it has all of the components that can be applied towards a real game. This was discussed by Karen Huaulme and myself (Nic Raboy) in the fourth part of our game development Twitch stream.\n\nThe player movement and collisions will be useful in the Plummeting People game as players will not only need to dodge other players, but obstacles as well as they race to the finish line.", "format": "md", "metadata": {"tags": ["C#", "Unity"], "pageDescription": "Learn how to get started with Unity for moving an object on the screen with physics and collisions.", "contentType": "Tutorial"}, "title": "Getting Started with Unity for Creating a 2D Game", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/adding-realm-as-dependency-ios-framework", "action": "created", "body": "# Adding Realm as a dependency to an iOS Framework\n\n# Adding Realm as a Dependency to an iOS Framework\n\n## Introduction\n\nIn this post we\u2019ll review how we can add RealmSwift as a dependency to our libraries, using two different methods: Xcode assistants and the Swift Package Manager.\n\n## The Problem \n\nI have a little, nice Binary Tree library. I know that I will use it for a later project, and that it'll be used at least in a macOS and iOS app. Maybe also a Vapor web app. So I decided to create a Framework to hold this code. But some of my model classes there need to be persisted in some way locally in the phone later. 
The Realm library is perfect for this, as I can start working with regular objects and store them locally and, later, if I need a ~~no code~~ really simple & quick to implement backend solution I can use Atlas Device Sync.\n\nBut the problem is, how do we add Realm as a dependency in our Frameworks?\n\n## Solution 1: Use Xcode to Create the Framework and Add Realm with SPM\n\nThe first way to create the Framework is just to create a new Xcode Project. Start Xcode and select `File > New > Project`. In this case I\u2019ll change to the iOS tab, scroll down to the Framework & Library section, then select Framework. This way I can share this Framework between my iOS app and its extensions, for instance.\n\nNow we have a new project that holds our code. This project has two targets, one to build the Framework itself and a second one to run our Unit Tests. Every time we write code we should test it, but this is especially important for reusable code, as one bug can propagate to multiple places.\n\nTo add Realm/Swift as a dependency, open your project file in the File Navigator. Then click on the Project Name and change to the Swift Packages tab. Finally click on the + button to add a new package. \n\nIn this case, we\u2019ll add Realm Cocoa, a package that contains two libraries. We\u2019re interested in Realm Swift: https://github.com/realm/realm-cocoa. We want one of the latest versions, so we\u2019ll choose \u201cUp to major version\u201d 10.0.0. Once the resolution process is done, we can select RealmSwift.\n\nNice! Now that the package is added to our Framework we can compile our code containing Realm Objects without any problems!\n\n## Solution 2: create the Framework using SPM and add the dependency directly in Package.swift\n\nThe other way to author a framework is to create it using the Swift Package Manager. We need to add a Package Manifest (the Package.swift file), and follow a certain folder structure. We have two options here to create the package:\n\n* Use the Terminal\n* Use Xcode\n\n### Creating the Package from Terminal\n\n* Open Terminal / CLI\n* Create a folder with `mkdir yourframeworkname`\n* Enter that folder with `cd yourframeworkname`\n* Run `swift package init`\n* Once created, you can open the package with `open Package.swift`\n\n \n\n### Creating the Package using Xcode\n\nYou can also use Xcode to do all this for you. Just go to `File > New > Swift Package`, give it a name and you\u2019ll get your package with the same structure.\n\n### Adding Realm as a dependency\n\nSo we have our Framework, with our library code and we can distribute it easily using Swift Package Manager. Now, we need to add Realm Swift. We don\u2019t have the nice assistant that Xcode shows when you create the Framework using Xcode, so we need to add it manually to `Package.swift`\n\nThe complete `Package.swift` file\n\n```swift\nlet package = Package(\n name: \"BinaryTree\",\n platforms: \n .iOS(.v14)\n ],\n products: [\n // Products define the executables and libraries a package produces, and make them visible to other packages.\n .library(\n name: \"BinaryTree\",\n targets: [\"BinaryTree\"]),\n ],\n dependencies: [\n // Dependencies declare other packages that this package depends on.\n .package(name: \"Realm\", url: \"https://github.com/realm/realm-cocoa\", from: \"10.7.0\")\n ],\n targets: [\n // Targets are the basic building blocks of a package. 
A target can define a module or a test suite.\n // Targets can depend on other targets in this package, and on products in packages this package depends on.\n .target(\n name: \"BinaryTree\",\n dependencies: [.product(name: \"RealmSwift\", package: \"Realm\")]),\n .testTarget(\n name: \"BinaryTreeTests\",\n dependencies: [\"BinaryTree\"]),\n ]\n)\n```\n\nHere, we declare a package named \u201cBinaryTree\u201d, supporting iOS 14\n\n```swift\nlet package = Package(\n name: \"BinaryTree\",\n platforms: [\n .iOS(.v14)\n ],\n```\n\nAs this is a library, we declare the products we\u2019re going to build, in this case it\u2019s just one target called `BinaryTree`.\n\n```swift\nproducts: [\n // Products define the executables and libraries a package produces, and make them visible to other packages.\n .library(\n name: \"BinaryTree\",\n targets: [\"BinaryTree\"]),\n ],\n```\n\nNow, the important part: we declare Realm as a dependency in our library. We\u2019re giving this dependency the short name \u201cRealm\u201d so we can refer to it in the next step.\n\n```swift\ndependencies: [\n // Dependencies declare other packages that this package depends on.\n .package(name: \"Realm\", url: \"https://github.com/realm/realm-cocoa\", from: \"10.7.0\")\n ],\n```\n\nIn our target, we use the previously defined `Realm` dependency.\n\n```swift\n.target(\n name: \"BinaryTree\",\n dependencies: [.product(name: \"RealmSwift\", package: \"Realm\")]),\n```\n\nAnd that\u2019s all! Now our library can be used as a Swift Package normally, and it will include automatically Realm.\n\n## Recap\n\nIn this post we\u2019ve seen different ways to create a Framework, directly from Xcode or as a Swift Package, and how to add `Realm` as a dependency to that Framework. This way, we can write code that uses Realm and distribute it quickly using SPM. \n\nIn our next post in this series we\u2019ll document this library using the new Documentation Compiler (DocC) from Apple. Stay tuned and thanks for reading!\n\nIf you have questions, please head to our [developer community website where the Realm engineers and the Realm/MongoDB community will help you build your next big idea with Realm and MongoDB.\n", "format": "md", "metadata": {"tags": ["Realm", "Swift", "iOS"], "pageDescription": "Adding Realm to a Project is how we usually work. But sometimes we want to create a Framework (could be the data layer of a bigger project) that uses Realm. So... how do we add Realm as a dependency to said Framework? ", "contentType": "Tutorial"}, "title": "Adding Realm as a dependency to an iOS Framework", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/nodejs-python-ruby-atlas-api", "action": "created", "body": "# Calling the MongoDB Atlas Administration API: How to Do it from Node, Python, and Ruby\n\nThe real power of a cloud-hosted, fully managed service like MongoDB Atlas is that you can create whole new database deployment architectures automatically, using the services API. Getting to the MongoDB Atlas Administration API is relatively simple and, once unlocked, it opens up a massive opportunity to integrate and automate the management of database deployments from creation to deletion. The API itself is an extensive REST API. There's role-based access control and you can have user or app-specific credentials to access it.\n\nThere is one tiny thing that can trip people up though. 
The credentials have to be passed over using the digest authentication mechanism, not the more common basic authentication or using an issued token. Digest authentication, at its simplest, waits to get an HTTP `401 Unauthorized` response from the web endpoint. That response comes with data and the client then sends an encrypted form of the username and password as a digest and the server works with that.\n\nAnd that's why we\u2019re here today: to show you how to do that with the least fuss in Python, Node, and Ruby. In each example, we'll try and access the base URL of the Atlas Administration API which returns a JSON document about the underlying applications name, build and other facts.\nYou can find all code samples in the dedicated Github repository.\n\n## Setup\n\nTo use the Atlas Administration API, you need\u2026 a MongoDB Atlas cluster! If you don\u2019t have one already, follow the Get Started with Atlas guide to create your first cluster.\n\nThe next requirement is the organization API key. You can set it up in two steps:\n\nCreate an API key in your Atlas organization. Make sure the key has the Organization Owner permission.\nAdd your IP address to the API Access List for the API key.\n\nThen, open a new terminal and export the following environment variables, where `ATLAS_USER` is your public key and `ATLAS_USER_KEY` is your private key.\n\n```\nexport ATLAS_USER=\nexport ATLAS_USER_KEY=\n```\n\nYou\u2019re all set up! Let\u2019s see how we can use the Admin API with Python, Node, and Ruby.\n\n## Python\n\nWe start with the simplest and most self-contained example: Python.\n\nIn the Python version, we lean on the `requests` library for most of the heavy lifting. We can install it with `pip`:\n\n``` bash\npython -m pip install requests\n```\n\nThe implementation of the digest authentication itself is the following:\n\n``` python\nimport os\nimport requests\nfrom requests.auth import HTTPDigestAuth\nimport pprint\n\nbase_url = \"https://cloud.mongodb.com/api/atlas/v1.0/\"\nauth = HTTPDigestAuth(\n os.environ\"ATLAS_USER\"],\n os.environ[\"ATLAS_USER_KEY\"]\n)\n\nresponse = requests.get(base_url, auth = auth)\npprint.pprint(response.json())\n```\n\nAs well as importing `requests`, we also bring in `HTTPDigestAuth` from requests' `auth` module to handle digest authentication. The `os` import is just there so we can get the environment variables `ATLAS_USER` and `ATLAS_USER_KEY` as credentials, and the `pprint` import is just to format our results.\n\nThe critical part is the addition of `auth = HTTPDigestAuth(...)` to the `requests.get()` call. This installs the code needed to respond to the server when it asks for the digest.\n\nIf we now run this program...\n\n![Screenshot of the terminal emulator after the execution of the request script for Python. 
The printed message shows that the request was successful.\n\n\u2026we have our API response.\n\n## Node.js\n\nFor Node.js, we\u2019ll take advantage of the `urllib` package which supports digest authentication.\n\n``` bash\nnpm install urllib\n```\n\nThe code for the Node.js HTTP request is the following:\n\n``` javascript\nconst urllib = require('urllib');\n\nconst baseUrl = 'https://cloud.mongodb.com/api/atlas/v1.0/';\nconst { ATLAS_USER, ATLAS_USER_KEY } = process.env;\nconst options = {\n digestAuth: `${ATLAS_USER}:${ATLAS_USER_KEY}`,\n};\n\nurllib.request(baseUrl, options, (error, data, response) => {\n if (error || response.statusCode !== 200) {\n console.error(`Error: ${error}`);\n console.error(`Status code: ${response.statusCode}`);\n } else {\n console.log(JSON.parse(data));\n }\n});\n```\n\nTaking it from the top\u2026 we first require and import the `urllib` package. Then, we extract the `ATLAS_USER` and `ATLAS_USER_KEY` variables from the process environment and use them to construct the authentication key. Finally, we send the request and handle the response in the passed callback. \n\nAnd we\u2019re ready to run:\n\nOn to our final language...\n\n## Ruby\n\nHTTParty is a widely used Gem which is used by the Ruby and Rails community to perform HTTP operations. It also, luckily, supports digest authentication. So, to get the party started:\n\n``` bash\ngem install httparty\n```\n\nThere are two ways to use HTTParty. One is creating an object which abstracts the calls away while the other is just directly calling methods on HTTParty itself. For brevity, we'll do the latter. Here's the code:\n\n``` ruby\nrequire 'httparty'\nrequire 'json'\n\nbase_url = 'https://cloud.mongodb.com/api/atlas/v1.0/'\noptions = {\n :digest_auth => {\n :username=>ENV'ATLAS_USER'],\n :password=>ENV['ATLAS_USER_KEY']\n }\n}\n\nresult = HTTParty.get(base_url, options)\n\npp JSON.parse(result.body())\n```\n\nWe require the HTTParty and JSON gems first. We then create a dictionary with our username and key, mapped for HTTParty's authentication, and set a variable to hold the base URL. We're ready to do our GET request now, and in the `options` (the second parameter of the GET request), we pass `:digest_auth=>auth` to switch on the digest support. We wrap up by JSON parsing the resulting body and pretty printing that. Put it all together and run it and we get:\n\n![Screenshot of the terminal emulator after the execution of the request script for Ruby. The printed message shows that the request was successful.\n\n## Next Stop - The API\n\nIn this article, we learned how to call the MongoDB Atlas Administration API using digest authentication. We took advantage of the vast library ecosystems of Python, Node.js, and Ruby, and used the following open-source community libraries:\n\nRequests for Python\nurllib for JavaScript\nhttparty for Ruby\n\nIf your project requires it, you can implement digest authentication yourself by following the official specification. You can draw inspiration from the implementations in the aforementioned libraries.\n\nAdditionally, you can find all code samples from the article in Github.\n\nWith the authentication taken care of, just remember to be fastidious with your API key security and make sure you revoke unused keys. You can now move on to explore the API itself. 
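The same digest-auth setup works for any other endpoint in the Administration API. As a quick, hedged sketch (reusing the Node.js `urllib` options from above and assuming the v1.0 `groups` endpoint, which lists the projects your key can see), you could point the request at a more specific resource:

``` javascript
const urllib = require('urllib');

const { ATLAS_USER, ATLAS_USER_KEY } = process.env;
const options = {
  digestAuth: `${ATLAS_USER}:${ATLAS_USER_KEY}`,
};

// List the projects (groups) visible to this API key.
urllib.request('https://cloud.mongodb.com/api/atlas/v1.0/groups', options, (error, data, response) => {
  if (error || response.statusCode !== 200) {
    console.error(`Error: ${error}`);
  } else {
    // List endpoints return their items in a 'results' array.
    console.log(JSON.parse(data).results.map((project) => project.name));
  }
});
```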
Start in the documentation and see what you can automate today.\n\n", "format": "md", "metadata": {"tags": ["Atlas", "Ruby", "Python", "Node.js"], "pageDescription": "Learn how to use digest authentication for the MongoDB Atlas Administration API from Python, Node.js, and Ruby.", "contentType": "Tutorial"}, "title": "Calling the MongoDB Atlas Administration API: How to Do it from Node, Python, and Ruby", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/node-transactions-3-3-2", "action": "created", "body": "# How to Use MongoDB Transactions in Node.js\n\n \n\nDevelopers who move from relational databases to MongoDB commonly ask, \"Does MongoDB support ACID transactions? If so, how do you create a transaction?\" The answer to the first question is, \"Yes!\"\n\nBeginning in 4.0, MongoDB added support for multi-document ACID transactions, and, beginning in 4.2, MongoDB added support for distributed ACID transactions. If you're not familiar with what ACID transactions are or if you should be using them in MongoDB, check out my earlier post on the subject.\n\n>\n>\n>This post uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.\n>\n\nWe're over halfway through the Quick Start with MongoDB and Node.js series. We began by walking through how to connect to MongoDB and perform each of the CRUD (Create, Read, Update, and Delete) operations. Then we jumped into more advanced topics like the aggregation framework.\n\nThe code we write today will use the same structure as the code we built in the first post in the series; so, if you have any questions about how to get started or how the code is structured, head back to that first post.\n\nNow let's dive into that second question developers ask\u2014let's discover how to create a transaction!\n\n>\n>\n>Want to see transactions in action? Check out the video below! It covers the same topics you'll read about in this article.\n>\n>:youtube]{vid=bdS03tgD2QQ}\n>\n>\n\n>\n>\n>Get started with an M0 cluster on [Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.\n>\n>\n\n## Creating an Airbnb Reservation\n\nAs you may have experienced while working with MongoDB, most use cases do not require you to use multi-document transactions. When you model your data using our rule of thumb **Data that is accessed together should be stored together**, you'll find that you rarely need to use a multi-document transaction. In fact, I struggled a bit to think of a use case for the Airbnb dataset that would require a multi-document transaction.\n\nAfter a bit of brainstorming, I came up with a somewhat plausible example. Let's say we want to allow users to create reservations in the `sample_airbnb database`.\n\nWe could begin by creating a collection named `users`. We want users to be able to easily view their reservations when they are looking at their profiles, so we will store the reservations as embedded documents in the `users` collection. For example, let's say a user named Leslie creates two reservations. 
Her document in the `users` collection would look like the following:\n\n``` json\n{\n \"_id\": {\"$oid\":\"5dd589544f549efc1b0320a5\"},\n \"email\": \"leslie@example.com\",\n \"name\": \"Leslie Yepp\",\n \"reservations\": \n {\n \"name\": \"Infinite Views\",\n \"dates\": [\n {\"$date\": {\"$numberLong\":\"1577750400000\"}},\n {\"$date\": {\"$numberLong\":\"1577836800000\"}}\n ],\n \"pricePerNight\": {\"$numberInt\":\"180\"},\n \"specialRequests\": \"Late checkout\",\n \"breakfastIncluded\": true\n },\n {\n \"name\": \"Lovely Loft\",\n \"dates\": [\n {\"$date\": {\"$numberLong\": \"1585958400000\"}}\n ],\n \"pricePerNight\": {\"$numberInt\":\"210\"},\n \"breakfastIncluded\": false\n }\n ]\n}\n```\n\nWhen browsing Airbnb listings, users need to know if the listing is already booked for their travel dates. As a result, we want to store the dates the listing is reserved in the `listingsAndReviews` collection. For example, the \"Infinite Views\" listing that Leslie reserved should be updated to list her reservation dates.\n\n``` json\n{\n \"_id\": {\"$oid\":\"5dbc20f942073d6d4dabd730\"},\n \"name\": \"Infinite Views\",\n \"summary\": \"Modern home with infinite views from the infinity pool\",\n \"property_type\": \"House\",\n \"bedrooms\": {\"$numberInt\": \"6\"},\n \"bathrooms\": {\"$numberDouble\":\"4.5\"},\n \"beds\": {\"$numberInt\":\"8\"},\n \"datesReserved\": [\n {\"$date\": {\"$numberLong\": \"1577750400000\"}},\n {\"$date\": {\"$numberLong\": \"1577836800000\"}}\n ]\n}\n```\n\nKeeping these two records in sync is imperative. If we were to create a reservation in a document in the `users` collection without updating the associated document in the `listingsAndReviews` collection, our data would be inconsistent. We can use a multi-document transaction to ensure both updates succeed or fail together.\n\n## Setup\n\nAs with all posts in this MongoDB and Node.js Quick Start series, you'll need to ensure you've completed the prerequisite steps outlined in the **Set up** section of the [first post in this series.\n\n**Note**: To utilize transactions, MongoDB must be configured as a replica set or a sharded cluster. Transactions are not supported on standalone deployments. If you are using a database hosted on Atlas, you do not need to worry about this as every Atlas cluster is either a replica set or a sharded cluster. If you are hosting your own standalone deployment, follow these instructions to convert your instance to a replica set.\n\nWe'll be using the \"Infinite Views\" Airbnb listing we created in a previous post in this series. Hop back to the post on Creating Documents if your database doesn't currently have the \"Infinite Views\" listing.\n\nThe Airbnb sample dataset only has the `listingsAndReviews` collection by default. To help you quickly create the necessary collection and data, I wrote usersCollection.js. Download a copy of the file, update the `uri` constant to reflect your Atlas connection info, and run the script by executing `node usersCollection.js`. The script will create three new users in the `users` collection: Leslie Yepp, April Ludfence, and Tom Haverdodge. If the `users` collection does not already exist, MongoDB will automatically create it for you when you insert the new users. The script also creates an index on the `email` field in the `users` collection. 
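If you ever need to recreate that index by hand, a minimal sketch with the Node.js driver (assuming the same `sample_airbnb` database and your own connection string) looks like this:

``` js
const { MongoClient } = require('mongodb');

async function createEmailIndex(uri) {
    const client = new MongoClient(uri);
    try {
        await client.connect();
        // A unique index on email prevents two user documents from sharing an address.
        await client.db('sample_airbnb').collection('users')
            .createIndex({ email: 1 }, { unique: true });
    } finally {
        await client.close();
    }
}
```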
The index requires that every document in the `users` collection has a unique `email`.\n\n## Create a Transaction in Node.js\n\nNow that we are set up, let's implement the functionality to store Airbnb reservations.\n\n### Get a Copy of the Node.js Template\n\nTo make following along with this blog post easier, I've created a starter template for a Node.js script that accesses an Atlas cluster.\n\n1. Download a copy of template.js.\n2. Open `template.js` in your favorite code editor.\n3. Update the Connection URI to point to your Atlas cluster. If you're not sure how to do that, refer back to the first post in this series.\n4. Save the file as `transaction.js`.\n\nYou can run this file by executing `node transaction.js` in your shell. At this point, the file simply opens and closes a connection to your Atlas cluster, so no output is expected. If you see DeprecationWarnings, you can ignore them for the purposes of this post.\n\n### Create a Helper Function\n\nLet's create a helper function. This function will generate a reservation document that we will use later.\n\n1. Paste the following function in `transaction.js`:\n\n``` js\nfunction createReservationDocument(nameOfListing, reservationDates, reservationDetails) {\n // Create the reservation\n let reservation = {\n name: nameOfListing,\n dates: reservationDates,\n }\n\n // Add additional properties from reservationDetails to the reservation\n for (let detail in reservationDetails) {\n reservationdetail] = reservationDetails[detail];\n }\n\n return reservation;\n }\n```\n\nTo give you an idea of what this function is doing, let me show you an example. We could call this function from inside of `main()`:\n\n``` js\ncreateReservationDocument(\"Infinite Views\",\n [new Date(\"2019-12-31\"), new Date(\"2020-01-01\")],\n { pricePerNight: 180, specialRequests: \"Late checkout\", breakfastIncluded: true });\n```\n\nThe function would return the following:\n\n``` json\n{ \n name: 'Infinite Views',\n dates: [ 2019-12-31T00:00:00.000Z, 2020-01-01T00:00:00.000Z ],\n pricePerNight: 180,\n specialRequests: 'Late checkout',\n breakfastIncluded: true \n}\n```\n\n### Create a Function for the Transaction\n\nLet's create a function whose job is to create the reservation in the database.\n\n1. Continuing to work in `transaction.js`, create an asynchronous function named `createReservation`. The function should accept a `MongoClient`, the user's email address, the name of the Airbnb listing, the reservation dates, and any other reservation details as parameters.\n\n ``` js\n async function createReservation(client, userEmail, nameOfListing, reservationDates, reservationDetails) {\n }\n ```\n\n2. Now we need to access the collections we will update in this function. Add the following code to `createReservation()`.\n\n ``` js\n const usersCollection = client.db(\"sample_airbnb\").collection(\"users\");\n const listingsAndReviewsCollection = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\");\n ```\n\n3. Let's create our reservation document by calling the helper function we created in the previous section. Paste the following code in `createReservation()`.\n\n ``` js\n const reservation = createReservationDocument(nameOfListing, reservationDates, reservationDetails);\n ```\n\n4. Every transaction and its operations must be associated with a session. Beneath the existing code in `createReservation()`, start a session.\n\n ``` js\n const session = client.startSession();\n ```\n\n5. We can choose to define options for the transaction. 
We won't get into the details of those here. You can learn more about these options in the [driver documentation. Paste the following beneath the existing code in `createReservation()`.\n\n ``` js\n const transactionOptions = {\n readPreference: 'primary',\n readConcern: { level: 'local' },\n writeConcern: { w: 'majority' }\n };\n ```\n\n6. Now we're ready to start working with our transaction. Beneath the existing code in `createReservation()`, open a `try { }` block, follow it with a `catch { }` block, and finish it with a `finally { }` block.\n\n ``` js\n try {\n\n } catch(e){\n\n } finally {\n\n }\n ```\n\n7. We can use ClientSession's withTransaction() to start a transaction, execute a callback function, and commit (or abort on error) the transaction. `withTransaction()` requires us to pass a function that will be run inside the transaction. Add a call to `withTransaction()` inside of `try { }` . Let's begin by passing an anonymous asynchronous function to `withTransaction()`.\n\n ``` js\n const transactionResults = await session.withTransaction(async () => {}, transactionOptions);\n ```\n\n8. The anonymous callback function we are passing to `withTransaction()` doesn't currently do anything. Let's start to incrementally build the database operations we want to call from inside of that function. We can begin by adding a reservation to the `reservations` array inside of the appropriate `user` document. Paste the following inside of the anonymous function that is being passed to `withTransaction()`.\n\n ``` js\n const usersUpdateResults = await usersCollection.updateOne(\n { email: userEmail },\n { $addToSet: { reservations: reservation } },\n { session });\n console.log(`${usersUpdateResults.matchedCount} document(s) found in the users collection with the email address ${userEmail}.`);\n console.log(`${usersUpdateResults.modifiedCount} document(s) was/were updated to include the reservation.`);\n ```\n\n9. Since we want to make sure that an Airbnb listing is not double-booked for any given date, we should check if the reservation date is already listed in the listing's `datesReserved` array. If so, we should abort the transaction. Aborting the transaction will rollback the update to the user document we made in the previous step. Paste the following beneath the existing code in the anonymous function.\n\n ``` js\n const isListingReservedResults = await listingsAndReviewsCollection.findOne(\n { name: nameOfListing, datesReserved: { $in: reservationDates } },\n { session });\n if (isListingReservedResults) {\n await session.abortTransaction();\n console.error(\"This listing is already reserved for at least one of the given dates. The reservation could not be created.\");\n console.error(\"Any operations that already occurred as part of this transaction will be rolled back.\");\n return;\n }\n ```\n\n10. The final thing we want to do inside of our transaction is add the reservation dates to the `datesReserved` array in the `listingsAndReviews` collection. 
Paste the following beneath the existing code in the anonymous function.\n\n ``` js\n const listingsAndReviewsUpdateResults = await listingsAndReviewsCollection.updateOne(\n { name: nameOfListing },\n { $addToSet: { datesReserved: { $each: reservationDates } } },\n { session });\n console.log(`${listingsAndReviewsUpdateResults.matchedCount} document(s) found in the listingsAndReviews collection with the name ${nameOfListing}.`);\n console.log(`${listingsAndReviewsUpdateResults.modifiedCount} document(s) was/were updated to include the reservation dates.`);\n ```\n\n11. We'll want to know if the transaction succeeds. If `transactionResults` is defined, we know the transaction succeeded. If `transactionResults` is undefined, we know that we aborted it intentionally in our code. Beneath the definition of the `transactionResults` constant, paste the following code.\n\n ``` js\n if (transactionResults) {\n console.log(\"The reservation was successfully created.\");\n } else {\n console.log(\"The transaction was intentionally aborted.\");\n }\n ```\n\n12. Let's log any errors that are thrown. Paste the following inside of `catch(e){ }`:\n\n ``` js\n console.log(\"The transaction was aborted due to an unexpected error: \" + e);\n ```\n\n13. Regardless of what happens, we need to end our session. Paste the following inside of `finally { }`:\n\n ``` js\n await session.endSession();\n ```\n\n At this point, your function should look like the following:\n\n ``` js\n async function createReservation(client, userEmail, nameOfListing, reservationDates, reservationDetails) {\n\n const usersCollection = client.db(\"sample_airbnb\").collection(\"users\");\n const listingsAndReviewsCollection = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\");\n\n const reservation = createReservationDocument(nameOfListing, reservationDates, reservationDetails);\n\n const session = client.startSession();\n\n const transactionOptions = {\n readPreference: 'primary',\n readConcern: { level: 'local' },\n writeConcern: { w: 'majority' }\n };\n\n try {\n const transactionResults = await session.withTransaction(async () => {\n\n const usersUpdateResults = await usersCollection.updateOne(\n { email: userEmail },\n { $addToSet: { reservations: reservation } },\n { session });\n console.log(`${usersUpdateResults.matchedCount} document(s) found in the users collection with the email address ${userEmail}.`);\n console.log(`${usersUpdateResults.modifiedCount} document(s) was/were updated to include the reservation.`);\n\n const isListingReservedResults = await listingsAndReviewsCollection.findOne(\n { name: nameOfListing, datesReserved: { $in: reservationDates } },\n { session });\n if (isListingReservedResults) {\n await session.abortTransaction();\n console.error(\"This listing is already reserved for at least one of the given dates. 
The reservation could not be created.\");\n console.error(\"Any operations that already occurred as part of this transaction will be rolled back.\");\n return;\n }\n\n const listingsAndReviewsUpdateResults = await listingsAndReviewsCollection.updateOne(\n { name: nameOfListing },\n { $addToSet: { datesReserved: { $each: reservationDates } } },\n { session });\n console.log(`${listingsAndReviewsUpdateResults.matchedCount} document(s) found in the listingsAndReviews collection with the name ${nameOfListing}.`);\n console.log(`${listingsAndReviewsUpdateResults.modifiedCount} document(s) was/were updated to include the reservation dates.`);\n\n }, transactionOptions);\n\n if (transactionResults) {\n console.log(\"The reservation was successfully created.\");\n } else {\n console.log(\"The transaction was intentionally aborted.\");\n }\n } catch(e){\n console.log(\"The transaction was aborted due to an unexpected error: \" + e);\n } finally {\n await session.endSession();\n }\n\n }\n ```\n\n## Call the Function\n\nNow that we've written a function that creates a reservation using a transaction, let's try it out! Let's create a reservation for Leslie at the \"Infinite Views\" listing for the nights of December 31, 2019 and January 1, 2020.\n\n1. Inside of `main()` beneath the comment that says\n `Make the appropriate DB calls`, call your `createReservation()`\n function:\n\n ``` js\n await createReservation(client,\n \"leslie@example.com\",\n \"Infinite Views\",\n new Date(\"2019-12-31\"), new Date(\"2020-01-01\")],\n { pricePerNight: 180, specialRequests: \"Late checkout\", breakfastIncluded: true });\n ```\n\n2. Save your file.\n3. Run your script by executing `node transaction.js` in your shell.\n4. The following output will be displayed in your shell.\n\n ``` none\n 1 document(s) found in the users collection with the email address leslie@example.com.\n 1 document(s) was/were updated to include the reservation.\n 1 document(s) found in the listingsAndReviews collection with the name Infinite Views.\n 1 document(s) was/were updated to include the reservation dates.\n The reservation was successfully created.\n ```\n\n Leslie's document in the `users` collection now contains the\n reservation.\n\n ``` js\n {\n \"_id\": {\"$oid\":\"5dd68bd03712fe11bebfab0c\"},\n \"email\": \"leslie@example.com\",\n \"name\": \"Leslie Yepp\",\n \"reservations\": [\n {\n \"name\": \"Infinite Views\", \n \"dates\": [\n {\"$date\": {\"$numberLong\":\"1577750400000\"}},\n {\"$date\": {\"$numberLong\":\"1577836800000\"}}\n ],\n \"pricePerNight\": {\"$numberInt\":\"180\"},\n \"specialRequests\": \"Late checkout\",\n \"breakfastIncluded\": true\n }\n ]\n }\n ```\n\n The \"Infinite Views\" listing in the `listingsAndReviews` collection now\n contains the reservation dates.\n\n ``` js\n {\n \"_id\": {\"$oid\": \"5dbc20f942073d6d4dabd730\"},\n \"name\": \"Infinite Views\",\n \"summary\": \"Modern home with infinite views from the infinity pool\",\n \"property_type\": \"House\",\n \"bedrooms\": {\"$numberInt\":\"6\"},\n \"bathrooms\": {\"$numberDouble\":\"4.5\"},\n \"beds\": {\"$numberInt\":\"8\"},\n \"datesReserved\": [\n {\"$date\": {\"$numberLong\": \"1577750400000\"}},\n {\"$date\": {\"$numberLong\": \"1577836800000\"}}\n ]\n }\n ```\n\n## Wrapping Up\n\nToday, we implemented a multi-document transaction. Transactions are really handy when you need to make changes to more than one document as an all-or-nothing operation.\n\nBe sure you are using the correct read and write concerns when creating a transaction. 
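The options used in this post are a sensible default. If your use case needs every read inside the transaction to see a single point-in-time view of the data, one hedged variation (verify it against your own workload and MongoDB version) is to raise the read concern:

``` js
const transactionOptions = {
    readPreference: 'primary',
    readConcern: { level: 'snapshot' }, // reads see one point-in-time snapshot
    writeConcern: { w: 'majority' }
};
```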
See the [MongoDB documentation for more information.\n\nWhen you use relational databases, related data is commonly split between different tables in an effort to normalize the data. As a result, transaction usage is fairly common.\n\nWhen you use MongoDB, data that is accessed together should be stored together. When you model your data this way, you will likely find that you rarely need to use transactions.\n\nThis post included many code snippets that built on code written in the first post of this MongoDB and Node.js Quick Start series. To get a full copy of the code used in today's post, visit the Node.js Quick Start GitHub Repo.\n\nNow you're ready to try change streams and triggers. Check out the next post in this series to learn more!\n\nQuestions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.\n\n## Additional Resources\n\n- MongoDB official documentation: Transactions\n- MongoDB documentation: Read Concern/Write Concern/Read Preference\n- Blog post: What's the deal with data integrity in relational databases vs MongoDB?\n- Informational page with videos and links to additional resources: ACID Transactions in MongoDB\n- Whitepaper: MongoDB Multi-Document ACID Transactions\n", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB", "Node.js"], "pageDescription": "Discover how to implement multi-document transactions in MongoDB using Node.js.", "contentType": "Quickstart"}, "title": "How to Use MongoDB Transactions in Node.js", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/mongodb-automation-index-autopilot", "action": "created", "body": "# Database Automation Series - Automated Indexes\n\nManaging databases can be difficult, but it doesn't have to be. Most\naspects of database management can be automated, and with a platform\nsuch as MongoDB Atlas, the tools are not only available, but they're\neasy to use. In this series, we'll chat with Rez\nKahn, Lead Product Manager at\nMongoDB, to learn about some of the ways Atlas automates the various\ntasks associated with deploying, scaling, and ensuring efficient\nperformance of your databases. In this first part of the series, we'll\nfocus on a feature built into Atlas, called Index Autopilot.\n\n:youtube]{vid=8feWYX0KQ9M}\n\n*Nic Raboy (00:56):* Rez, thanks for taking the time to be on this\npodcast episode. Before we get into the core material of actually\ntalking about database management and the things that you can automate,\nlet's take a step back, and why don't you tell us a little bit about\nyourself?\n\n*Rez Kahn (01:10):* Cool. Happy to be here, Nick. My name's Rez. I am a\nlead product manager at MongoDB, I work out of the New York office. My\nteam is roughly responsible for making sure that the experience of our\ncustomers are as amazing as possible after they deploy their first\napplication in MongoDB, which means we work on problems such as how we\nmonitor MongoDB. How we make sure our customers can diagnose issues and\nfix issues that may come up with MongoDB, and a whole host of other\ninteresting areas, which we're going to dive into throughout the\npodcast.\n\n*Nic Raboy (01:55):* So, when you're talking about the customer success,\nafter they've gotten started on MongoDB, are you referencing just Atlas?\nAre you referencing, say, Realm or some of the other tooling that\nMongoDB offers as well? You want to shed some light?\n\n*Rez Kahn (02:10):* Yeah, that's a really good question. 
Obviously, the\naspiration of the team is to help with all the products which MongoDB\nsupports today, and will eventually support in the future. But for the\ntime being, our focus has been on how do we crush the Atlas experience\nand make it as magical of an experience as possible after a user\n\\[inaudible 00:02:29\\] the first application.\n\n*Michael Lynn (02:30):* How long have you been with MongoDB and what\nwere you doing prior to coming on board?\n\n*Rez Kahn (02:35):* Well, I've been with MongoDB for a couple of years\nnow. Before joining MongoDB, I used to work in a completely different\nindustry advertising technology. I spent five years at a New York\nstartup called AppNexus, which eventually got sold to AT&T, and at\nAppNexus, I was a product manager as well. But instead of building\ndatabases, or helping manage databases better, I built products to help\nour customers buy ads on the internet more effectively. A lot of it was\nmachine learning-based products. So, this would be systems to help\noptimize how you spend your advertising dollars.\n\n*Rez Kahn (03:18):* The root of the problem we're trying to solve is\nfiguring out which ads customers would click on and eventually purchase\na product based out of. How do we do that as effectively and as\nefficiently as possible? Prior to AppNexus, I actually spent a number of\nyears in the research field trying to invent new types of materials to\nbuild microchips. So, it was even more off-base compared to what I'm\ndoing today, but it's always very interesting to see the relationship\nbetween research and product management. Eventually, I found it was\nactually a very good background to have to be a good product manager.\n\n*Michael Lynn (03:59):* Yeah, I would imagine you've got to be pretty\ncurious to work in that space, looking for new materials for chips.\nThat's pretty impressive. So, you've been on board with MongoDB for five\nyears. Did you go right into the Atlas space as a product manager?\n\n*Rez Kahn (04:20):* So, I've been with MongoDB for two years, and yes-\n\n*Michael Lynn (04:22):* Oh, two years, sorry.\n\n*Rez Kahn (04:23):* No worries. I got hired as a, I think, as the second\nproduct or third product manager for Atlas, and have been very lucky to\nwork with Atlas when it was fairly small to what is arguably a very\nlarge part of MongoDB today.\n\n*Michael Lynn (04:42):* Yeah. It's huge. I remember when I started,\nAtlas didn't exist, and I remember when they made the first initial\nannouncements, internal only, about this product that was going to be in\nthe cloud. I just couldn't picture it. It's so funny to now see it, it's\nbecome the biggest, I think arguably, the biggest part of our business.\nI remember watching the chart as it took more and more of a percentage\nof our gross revenue. Today, it's a phenomenal story and it's helping so\nmany people. One of the biggest challenges I had even before coming on\nboard at MongoDB was, how do you deploy this?\n\n*Michael Lynn (05:22):* How do you configure it? If you want high\navailability, it's got it. MongoDB has it built in, it's built right in,\nbut you've got to configure it and you've got to maintain it and you've\ngot to scale it, and all of those things can take hours, and obviously,\neffort associated with that. So, to see something that's hit the stage\nand people have just loved it, it's just been such a success. So,\nthat's, I guess, a bit of congratulations on Atlas and the success that\nyou've experienced. 
But I wonder if you might want to talk a little bit\nabout the problem space that Atlas lives in. Maybe touch a little bit\nmore on the elements of frustration that DBAs and developers face that\nAtlas can have an impact on.\n\n*Rez Kahn (06:07):* Yeah, totally. So, my experience with MongoDB is\nactually very similar to yours, Mike. I think I first started, I first\nused it at a hackathon back in 2012. I remember, while getting started\nwith it was very easy, it took us 10 minutes, I think, to run the first\nquery and get data from MongoDB. But once we had to deploy that app into\nproduction and manage MongoDB, things became a little bit more tricky.\nIt takes us a number of hours to actually set things up, which is a lot\nat the hackathon because you got only two hours to build the whole\nthing. So, I did not think too much about the problem that I experienced\nin my hackathon day, when I was doing the hackathon till I came to\nMongoDB.\n\n*Rez Kahn (06:58):* Then, I learned about Atlas and I saw my manager\ndeploy an Atlas cluster and show me an app, and the whole experience of\nhaving an app running on a production version of MongoDB within 20\nminutes was absolutely magical. Digging deeper into it, the problem we\nwere trying to solve is this, we know that the experience of using\nMongoDB is great as a developer. It's very easy and fast to build\napplications, but once you want to deploy an application, there is a\nwhole host of things you need to think about. You need to think about\nhow do I configure the MongoDB instance to have multiple nodes so that\nif one of those nodes go down, you'll have a database available.\n\n*Rez Kahn (07:50):* How do I configure a backup of my data so that\nthere's a copy of my data always available in case there's a\ncatastrophic data loss? How do I do things like monitor the performance\nof MongoDB, and if there's a performance degradation, get alerted that\nthere's a performance degradation? Once I get alerted, what do I need to\ndo to make sure that I can fix the problem? If the way to fix the\nproblem is I need to have a bigger machine running MongoDB, how do I\nupgrade a machine while making sure my database doesn't lose\nconnectivity or go down? So, those are all like not easy problems to\nsolve.\n\n*Rez Kahn (08:35):* In large corporations, you have teams of DBS who do\nthat, in smaller startups, you don't have DBS. You have software\nengineers who have to spend valuable time from their day to handle all\nof these operational issues. If you really think about it, these\noperational issues are not exactly value added things to be working on,\nbecause you'd rather be spending the time building, differentiating\nfeatures in your application. So, the value which Atlas provided is we\nhandle all these operational issues for you. It's literally a couple of\nclicks before you have a production into the MongoDB running, with\nbackup, monitoring, alerting, all those magically set up for you.\n\n*Rez Kahn (09:20):* If you need to upgrade MongoDB instances, or need to\ngo to a higher, more powerful instance, all those things are just one\nclick as well, and it takes only a few minutes for it to be configured\nfor you. So, in other words, we're really putting a lot of time back\ninto the hands of our customers so that they can focus on building,\nwriting code, which differentiates their business as opposed to spending\ntime doing ops work.\n\n*Michael Lynn (09:45):* Amazing. I mean, truly magical. So, you talked\nquite a bit about the space there. 
You mentioned high availability, you\nmentioned monitoring, you mentioned initial deployment, you mentioned\nscalability. I know we talked before we kicked the podcast off, I'd love\nfor this to be the introduction to database management automation.\nBecause there's just so much, we could probably make four or five\nepisodes alone, but, Nick, did you have a question?\n\n*Nic Raboy (10:20):* Yeah, I was just going to ask, so of all the things\nthat Atlas does for us, I was just going to ask, is there anything that\nstill does require user intervention after they've deployed an Atlas\ncluster? Or, is it all automated? This is on the topic of automation,\nright?\n\n*Rez Kahn (10:37):* Yeah. I wish everything was automated, but if it\nwere, I would not have a job. So, there's obviously a lot of work to do.\nThe particular area, which is of great interest to me and the company\nis, once you deploy an application and the application is scaling, or is\nbeing used by lots of users, and you're making changes to the\napplication, how do we make sure that the performance of MongoDB itself\nis as awesome as possible? Now, that's a very difficult problem to\nsolve, because you could talk about performance in a lot of different\nways. One of the more obvious proxies of performance is how much time it\ntakes to get responses to a query back.\n\n*Rez Kahn (11:30):* You obviously want it to be as low as possible. Now,\nthe way to get a very low latency on your queries is you can have a very\npowerful machine backing that MongoDB instance, but the consequence of\nhaving a very powerful machine backing that MongoDB instance is it can\nbe very costly. So, how do you manage, how do you make sure costs are\nmanageable while getting as great of a performance as possible is a very\ndifficult problem to solve. A lot of people get paid a lot of money to\nsolve that particular problem. So, we have to characterize that problem\nfor us, sort of like track the necessary metrics to measure costs,\nmeasure performance.\n\n*Rez Kahn (12:13):* Then, we need to think about how do I detect when\nthings are going bad. If things are going bad, what are the strategies I\ncan put in place to solve those problems? Luckily, with MongoDB, there's\na lot of strategies they can put in place. For example, one of the\nattributes of MongoDB is you could have multiple, secondary indexes, and\nthose indexes can be as complex or as simple as you want, but when do\nyou put indexes? What indexes do you put? When do you keep indexes\naround versus when do you get rid of it? Those are all decisions you\nneed to make because making indexes is something which is neither cheap,\nand keeping them is also not cheap.\n\n*Rez Kahn (12:57):* So, you have to do an optimization in your head on\nwhen you make indexes, when you get rid of them. Those are the kind of\nproblems that we believe our expertise and how MongoDB works. The\nperformance data we are capturing from you using Mongo DB can help us\nprovide you in a more data-driven recommendations. So, you don't have to\nworry about making these calculations in your head yourself.\n\n*Michael Lynn (13:22):* The costs that you mentioned, there are costs\nassociated with implementing and maintaining indexes, but there are also\ncosts if you don't, right? If you're afraid to implement indexes,\nbecause you feel like you may impact your performance negatively by\nhaving too many indexes. So, just having the tool give you visibility\ninto the effectiveness of your indexes and your index strategy. 
That's\npowerful as well.\n\n*Nic Raboy (13:51):* So, what kind of visibility would you exactly get?\nI want to dig a little deeper into this. So, say I've got my cluster and\nI've got tons and tons of data, and quite a few indexes created. Will it\ntell me about indexes that maybe I haven't used in a month, for example,\nso that way I could remove them? How does it relay that information to\nyou effectively?\n\n*Rez Kahn (14:15):* Yeah. What the system would do is it keeps a record\nof all indexes that you had made. It will track how much you're using\ncertain indexes. It will also track whether there are overlaps between\nthose indexes, which might make one index redundant compared to the\nother. Then, we do some heuristics in the background to look at each\nindex and make an evaluation, like whether it's possible, or whether\nit's a good idea to remove that particular index based on how often it\nhas been used over the past X number of weeks. Whether they are overlaps\nwith other indexes, and all those things you can do by yourself.\n\n*Rez Kahn (14:58):* But these are things you need to learn about MongoDB\nbehavior, which you can, but why do it if it can just tell you that this\nis something which is inefficient, and these are the things you need to\nmake it more efficient.\n\n*Michael Lynn (15:13):* So, I want to be cognizant that not all of the\nlisteners of our podcast are going to be super familiar with even\nindexes, the concept of indexes. Can you maybe give us a brief\nintroduction to what indexes are and why they're so important?\n\n*Rez Kahn (15:26):* Yeah, yeah. That's a really good question. So, when\nyou're working with a ... So, indexes are not something which is unique\nto MongoDB, all other databases also have indexes. The way to look at\nindexes, it's a special data structure which stores the data you need in\na manner that makes it very fast to get that data back. So, one good\nexample is, let's say you have a database with names and phone numbers.\nYou want to query the database with a name and get that person's phone\nnumber.\n\n*Rez Kahn (16:03):* Now, if you don't have an index, what the database\nsoftware would do is it would go through every record of name and phone\nnumber so that it finds the name you're looking for, and then, it will\ngive you back the phone number for that particular name. Now, that's a\nvery expensive process because if you have a database with 50 million\nnames and phone numbers, that would take a long time. But one of the\nthings you can do with index is you can create an index of names, which\nwould organize the data in a manner where it wouldn't have to go through\nall the names to find the relevant name that you care about.\n\n*Rez Kahn (16:38):* It can quickly go to that record and return back the\nphone number that you care about. So, instead of going through 50\nmillion rows, you might have to go through a few hundred rows of data in\norder to get the information that you want. Suddenly, your queries are\nsignificantly faster than what it would have been if you had not created\nan index. Now, the challenge for our users is, like you said, Mike, a\nlot of people might not know what an index is, but people generally know\nwhat an index is. The problem is, what is the best possible thing you\ncould do for MongoDB?\n\n*Rez Kahn (17:18):* There's some stuff you need to learn. There's some\nanalysis you need to do such as you need to look at the queries you're\nrunning to figure out like which queries are the most important. 
Then\nfor those queries, you need figure out what the best index is. Again,\nyou can think about those things by yourself if you want to, but there\nis some analytical, logical way of giving you, of crunching these\nnumbers, and just telling you that this is the index which is the best\nindex for you at this given point in time. These are the reasons why,\nand these are the benefit it would give you.\n\n*Michael Lynn (17:51):* So, okay. Well, indexes sounded like I need\nthem, because I've got an application and I'm looking up phone numbers\nand I do have a lot of phone numbers. So, I'm just going to index\neverything in my database. How does that sound?\n\n*Rez Kahn (18:05):* It might be fine, actually. It depends on how many\nindexes you create. The thing which is tricky is because indexes are a\nspecial data structure, it does take up storage space in the database\nbecause you're storing, in my example from before, names in a particular\nway. So, you're essentially copying the data that you already have, but\nstoring it in a particular way. Now, that might be fine, but if you have\na lot of these indexes, you have to create lots of copies of your data.\nSo, it does use up space, which could actually go to storing new data.\n\n*Rez Kahn (18:43):* It also has another effect where if you're writing a\nlot of data into a database, every time you write a new record, you need\nto make sure all those indexes are updated. So, writes can take longer\nbecause you have indexes now. So, you need to strike a happy balance\nbetween how many indexes do I need to get great read performance, but\nnot have too many indexes so my write performance is hard? That's a\nbalancing act that you need to do as a user, or you can use our tools\nand we can do it for.\n\n*Michael Lynn (19:11):* There you go, yeah. Obviously, playing a little\ndevil's advocate there, but it is important to have the right balance-\n\n*Rez Kahn (19:17):* Absolutely.\n\n*Michael Lynn (19:17):* ... and base the use of your index on the\nread-write profile of your application. So, knowing as much about the\nread-write profile, how many reads versus how many writes, how big are\neach is incredibly important. So, that's the space that this is in. Is\nthere a tagline or a product within Atlas that you refer to when you're\ntalking about this capability?\n\n*Rez Kahn (19:41):* Yeah. So, there's a product called Performance\nAdvisor, which you can use via the API, or you can use it with the UI.\nWhen you use Performance Advisor, it will scan the queries that ran on\nyour database and give you a ranked list of indexes that you should be\nbuilding based on importance. It will tell you why a particular index is\nimportant. So, we have this very silly name called the impact score. It\nwould just tell you that this is the percentage impact of having this\nindex built, and it would rank index recommendations based on that.\n\n*Rez Kahn (20:21):* One of the really cool things we are building is, so\nwe've had Performance Advisor for a few years, and it's a fairly popular\nproduct amongst our customers. Our customers who are building an\napplication on MongoDB Atlas, or if they're changing an application, the\nfirst thing that they do after deploying is they would go to Performance\nAdvisor and check to see if there are index recommendations. 
If there\nare, then, they would go and build it, and magically the performance of\ntheir queries become better.\n\n>\n>\n>So, when you deploy an Atlas cluster, you can say, \"I want indexes to be\n>built automatically.\" ... as we detect queries, which doesn't have an\n>index and is important and causing performance degradation, we can\n>automatically figure out like what the index ought to be.\n>\n>\n\n*Rez Kahn (20:51):* Because we have had good success with the product,\nwhat we have decided next is, why do even make people go into Atlas and\nlook at the recommendations, decide which they want to keep, and create\nthe index manually? Why not just automate that for them? So, when you\ndeploy an Atlas cluster, you can say, \"I want indexes to be built\nautomatically.\" If you have it turned on, then we will be proactively\nanalyzing your queries behind the scenes for you, and as soon as we\ndetect queries, which doesn't have an index and is important and causing\nperformance degradation, we can automatically figure out like what the\nindex ought to be.\n\n*Rez Kahn (21:36):* Then, build that index for you behind the scenes in\na manner that it's performed. That's a product which we're calling\nautopilot mode for indexing, which is coming in the next couple of\nmonths.\n\n*Nic Raboy (21:46):* So, I have a question around autopilot indexing.\nSo, you're saying that it's a feature you can enable to allow it to do\nit for you on a needed basis. So, around that, will it also remove\nindexes for you that are below the percent threshold, or can you\nactually even set the threshold on when an index would be created?\n\n*Rez Kahn (22:08):* So, I'll answer the first question, which is can it\ndelete indexes for you. Today, it can't. So, we're actually releasing\nanother product within Performance Advisor called Index Removal\nRecommendations, where you can see recommendations of which indexes you\nneed to remove. The general product philosophy that we have in the\ncompany is, we build recommendations first. If the recommendations are\ngood, then we can use those recommendations to automate things for our\ncustomers. So, the plan is, over the next six months to provide\nrecommendations on when indexes ought to be removed.\n\n*Rez Kahn (22:43):* If we get good user feedback, and if it's actually\nuseful, then we will incorporate that in autopilot mode for indexing and\nhave that system also do the indexes for you. Regarding your second\nquestion of, are the thresholds of when indexes are built configurable?\nThat's a good question, because we did spend a lot of time thinking\nabout whether we want to give users those thresholds. It's a difficult\nquestion to answer because on one hand, having knobs, and dials, and\nbuttons is attractive because you can, as a user, can control how the\nsystem behaves.\n\n*Rez Kahn (23:20):* On the other hand, if you don't know what you're\ndoing, you could create a lot of problems for yourself, and we want to\nbe cognizant of that. So, what we have decided to do instead is we're\nnot providing a lot of knobs and dials in the beginning for our users.\nWe have selected some defaults on how the system behaves based on\nanalysis that we have done on thousands of customers, and hoping that\nwould be enough. 
But we have a window to add those knobs and dials back\nif there are special use cases for our users, but we will do it if it\nmakes sense, obviously.\n\n*Nic Raboy (23:58):* The reason why I asked is because you've got the\ncategory of developers who probably are under index, right? Then, to\nflip that switch, and now, is there a risk of over-indexing now, in that\nsense?\n\n*Rez Kahn (24:12):* That's a great question. The way we built the\nsystem, we built some fail-safes into it, where the risk of\nover-indexing is very limited. So, we do a couple of really cool things.\nOne of the things we do is, when we detect that there's an index that we\ncan build, we try to predict things such as how long an index would take\nto be built. Then, based on that we can make a decision, whether we'll\nautomatically build it, or we'll give user the power to say, yay or nay\non building that index. Because we're cognizant of how much time and\nresources that index build might take. We also have fail-safes in the\nbackground to prevent runaway index build.\n\n*Rez Kahn (24:59):* I think we have this configurable threshold of, I\nforget the exact number, like 10 or 20 indexes for collections that can\nbe auto build. After that, it's up to the users to decide to build more\nthings. The really cool part is, once we have the removal\nrecommendations out and assuming it works really, if it works well and\nusers like it, we could use that as a signal to automatically remove\nindexes, if you're building too many indexes. Like a very neat, closed\nloop system, where we build indexes and observe how it works. If it does\nwork well, we'll keep it. If it doesn't work well, we'll remove it. You\ncan be as hands off as you want.\n\n*Michael Lynn (25:40):* That sounds incredibly exciting. I think I have\na lot of fear around that though, especially because of the speed at\nwhich a system like Atlas, with an application running against it, the\nspeed to make those types of changes can be onerous, right. To\ncontinually get feedback and then act on that feedback. I'm just\ncurious, is that one of the challenges that you faced in implementing a\nsystem like this?\n\n*Rez Kahn (26:12):* Yeah. One of the big challenges is, we talked about\nthis a lot during the R&D phase is, we think there are two strategies\nfor index creation. There is what we call reactive, and then there is\nproactive. Reactive generally is you make a change in your application\nand you add a query which has no index, and it's a very expensive query.\nYou want to make the index as soon as possible in order to protect the\nMongoDB instance from a performance problem. The question is, what is\nsoon? How do you know that this particular query would be used for a\nlong time versus just used once?\n\n*Rez Kahn (26:55):* It could be a query made by an analyst and it's\nexpensive, but it's only going to be used once. So, it doesn't make\nsense to build an index for it. That's a very difficult problem to\nsolve. So, in the beginning, our approach has been, let's be\nconservative. Let's wait six hours and observe like what a query does\nfor six hours. That gives us an adequate amount of confidence that this\nis a query which is actually going to be there for a while and hence an\nindex makes sense. Does that make sense, Mike?\n\n*Michael Lynn (27:28):* Yeah, it does. Yeah. Perfect sense. I'm thinking\nabout the increased flexibility associated with leveraging MongoDB in\nAtlas. Now, obviously, MongoDB is an open database. 
You can download it,\ninstall it on your laptop and you can use it on servers in your data\ncenter. Will any of these automation features appear in the non-Atlas\nproduct set?\n\n*Rez Kahn (27:58):* That's a really good question. We obviously want to\nmake it available to as many of our customers as possible, because it is\nvery valuable to have systems like this. There are some practical\nrealities that make it difficult. One of the reality is, when you're\nusing Atlas, the underlying machines, which is backing your database, is\nsomething that we can quickly configure and add very easily because\nMongoDB is the one which is managing those machines for you, because\nit's a service that we provide. The infrastructure is hidden from you,\nwhich means that automation features, where we need to change the\nunderlying machines very quickly, is only possible in Atlas because we\ncontrol those machines.\n\n*Rez Kahn (28:49):* So, a good example of that is, and we should talk\nabout this at some point, we have auto scaling, where we can\nautomatically scale a machine up or down in order to manage your load.\nEven if you want to, we can actually give that feature to our customers\nusing MongoDB on premise because we don't have access to the machine,\nbut in Atlas we do. For automatic indexing, it's a little bit easier\nbecause it's more of a software configuration. So, it's easier for us to\ngive it to other places where MongoDB is used.\n\n*Rez Kahn (29:21):* We definitely want to do that. We're just starting\nwith Atlas because it's faster and easier to do, and we have a lot of\ncustomers there. So, it's a lot of customers to test and give us\nfeedback about the product.\n\n*Michael Lynn (29:31):* That's a great answer. It helps me to draw it\nout in my head in terms of an architecture. So, it sounds like there's a\nlayer above ... MongoDB is a server process. You connect to it to\ninterface with and to manage your data. But in Atlas, there's an\nadditional layer that is on top of the database, and through that layer,\nwe have access to all of the statistics associated with how you're\naccessing your database. So, that layer does not exist in the\ndownloadable MongoDB today, anyway.\n\n*Rez Kahn (30:06):* It doesn't. Yeah, it doesn't.\n\n*Michael Lynn (30:08):* Yeah.\n\n*Rez Kahn (30:09):* Exactly.\n\n*Michael Lynn (30:09):* Wow, so that's quite a bit in the indexing\nspace, but that's just one piece of the puzzle, right? Folks that are\nleveraging the database are struggling across a whole bunch of areas.\nSo, what else can we talk about in this space where you're solving these\nproblems?\n\n*Rez Kahn (30:26):* Yeah. There is so much, like you mentioned, indexing\nis just one strategy for performance optimization, but there's so many\nothers, one of the very common or uncommon, whoever you might ask this,\nis what should the schema of your data be and how do you optimize the\nschema for optimal performance? That's a very interesting problem space.\nWe have done a lot of ticking on that and we have a couple of products\nto help you do that as well.\n\n*Rez Kahn (30:54):* Another problem is, how do we project out, how do we\nforecast what your future workload would be in order to make sure that\nwe are provisioning the right amount of machine power behind your\ndatabase, so that you get the best performance, but don't pay extra\nbecause you're over over-provisioned? 
When is the best time to have a\nshard versus scale up vertically, and what is the best shard key to use?\nThat is also another interesting problem space for us to tackle. So,\nthere's a lot to talk about \\[crosstalk 00:31:33\\] we should, at some\npoint.\n\n*Michael Lynn (31:36):* These are all facets of the product that you\nmanage?\n\n*Rez Kahn (31:39):* These are all facets of the product that I manage,\nyeah. One thing which I would love to invite our users listen to the\npodcast, like I mentioned before, we're building this tool called\nAutopilot Mode for Indexing to automatically create indexes for you.\nIt's in heavy engineering development right now, and we're hoping to\nrelease it in the next couple of months. We're going to be doing a private preview program for that particular product, trying to get around\nhundred users to use that product and get early access to it. I would\nencourage you guys to think about that and give that a shot.\n\n*Michael Lynn (32:21):* Who can participate, who are you looking to get\ntheir hands on this?\n\n*Rez Kahn (32:26):* In theory, it should be anyone, anyone who is\nspending a lot of time building indexes would be perfect candidates for\nit. All of our MongoDB users spend a lot of time building indexes. So,\nwe are open to any type of companies, or use cases, and we're very\nexcited to work with you to see how we can make the product successful\nfor it, and use your feedback to build the next version of the product.\n\n*Michael Lynn (32:51):* Great. Well, this has been a phenomenal\nintroduction to database automation, Rez. I want to thank you for taking\nthe time to talk with us. Nick, before we close out, any other questions\nor things you think we should cover?\n\n*Nic Raboy (33:02):* No, not for this episode. If anyone has any\nquestions after listening to this episode, please jump into our\ncommunity. So, this is a community forum board, Rez, Mike, myself, we're\nall active in it. It's community.mongodb.com. It's a great way to get\nyour questions answered about automation.\n\n*Michael Lynn (33:21):* Absolutely. Rez, you're in there as well. You've\ntaken a look at some of the questions that come across from users.\n\n*Rez Kahn (33:27):* Yeah, I do that occasionally. Not as much as I\nshould, but I do that.\n\n*Michael Lynn (33:32):* Awesome.\n\n*Nic Raboy (33:32):* Well, there's a question that pops up, we'll pull\nyou.\n\n*Michael Lynn (33:34):* Yeah, if we get some more questions in there,\nwe'll get you in there.\n\n*Rez Kahn (33:37):* Sounds good.\n\n*Michael Lynn (33:38):* Awesome. Well, terrific, Rez. Thanks once again\nfor taking the time to talk with us. I'm going to hold you to that.\nWe're going to take this in a series approach. We're going to break all\nof these facets of database automation down, and we're going to cover\nthem one by one. Today's been an introduction and a little bit about\nautopilot mode for indexing. Next one, what do you think? What do you\nthink you want to cover next?\n\n*Rez Kahn (34:01):* Oh, let's do scaling.\n\n*Nic Raboy (34:02):* I love it.\n\n*Michael Lynn (34:03):* Scaling and auto scalability. I love it.\nAwesome. 
All right, folks, thanks.\n\n*Rez Kahn (34:08):* Thank you.\n\n## Summary\n\nAn important part of ensuring efficient application performance is\nmodeling the data in your documents, but once you've designed the\nstructure of your documents, it's absolutely critical that you continue\nto review the read/write profile of your application to ensure that\nyou've properly indexed the data elements most frequently read. MongoDB\nAtlas' automated index management can help as the profile of your\napplication changes over time.\n\nBe sure you check out the links below for suggested reading around\nperformance considerations. If you have questions, visit us in the\n[Community Forums.\n\nIn our next episodes in this series, we'll build on the concept of\nautomating database management to discuss automating the scaling of your\ndatabase to ensure that your application has the right mix of resources\nbased on its requirements.\n\nStay tuned for Part 2. Remember to subscribe to the\nPodcast to make sure that you don't miss\na single episode.\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- MongoDB Docs: Remove Unnecessary\n Indexes\n- MongoDB Docs: Indexes\n- MongoDB Docs: Compound Indexes \u2014\n Prefixes\n- MongoDB Docs: Indexing\n Strategies\n- MongoDB Docs: Data Modeling\n Introduction\n- MongoDB University M320: Data\n Modeling\n- MongoDB University M201: MongoDB\n Performance\n- MongoDB Docs: Performance\n Advisor\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn about database automation with Rez Kahn - Part 1 - Index Autopilot", "contentType": "Podcast"}, "title": "Database Automation Series - Automated Indexes", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/typescript/type-safety-with-prisma-and-mongodb", "action": "created", "body": "# Type Safety with Prisma & MongoDB\n\nDid you know that Prisma now supports MongoDB? In this article, we'll take a look at how to use Prisma to connect to MongoDB.\n\n## What is Prisma?\n\nPrisma is an open source ORM (Object Relational Mapper) for Node.js. It supports both JavaScript and TypeScript. It really shines when using TypeScript, helping you to write code that is both readable and type-safe.\n\n> If you want to hear from Nikolas Burk and Matthew Meuller of Prisma, check out this episode of the MongoDB Podcast.\n> \n> \n\n## Why Prisma?\n\nSchemas help developers avoid data inconsistency issues over time. While you can define a schema at the database level within MongoDB, Prisma lets you define a schema at the application level. When using the Prisma Client, a developer gets the aid of auto-completing queries, since the Prisma Client is aware of the schema.\n\n## Data modeling\n\nGenerally, data that is accessed together should be stored together in a MongoDB database. Prisma supports using embedded documents to keep data together.\n\nHowever, there may be use cases where you'll need to store related data in separate collections. To do that in MongoDB, you can include one document\u2019s `_id` field in another document. In this instance, Prisma can assist you in organizing this related data and maintaining referential integrity of the data.\n\n## Prisma & MongoDB in action\n\nWe are going to take an existing example project from Prisma\u2019s `prisma-examples` repository.\n\nOne of the examples is a blog content management platform. This example uses a SQLite database. 
We'll convert it to use MongoDB and then seed some dummy data.\n\nIf you want to see the final code, you can find it in the dedicated Github repository.\n\n### MongoDB Atlas configuration\n\nIn this article, we\u2019ll use a MongoDB Atlas cluster. To create a free account and your first forever-free cluster, follow the Get Started with Atlas guide.\n\n### Prisma configuration\n\nWe'll first need to set up our environment variable to connect to our MongoDB Atlas database. I'll add my MongoDB Atlas connection string to a `.env` file.\n\nExample:\n\n```js\nDATABASE_URL=\"mongodb+srv://:@.mongodb.net/prisma?retryWrites=true&w=majority\"\n```\n\n> You can get your connection string from the Atlas dashboard.\n\nNow, let's edit the `schema.prisma` file.\n\n> If you are using Visual Studio Code, be sure to install the official Prisma VS Code extension to help with formatting and auto-completion.\n> \n> While you\u2019re in VS Code, also install the official MongoDB VS Code extension to monitor your database right inside VS Code!\n\nIn the `datasource db` object, we'll set the provider to \"mongodb\" and the url to our environment variable `DATABASE_URL`.\n\nFor the `User` model, we'll need to update the `id`. Instead of an `Int`, we'll use `String`. We'll set the default to `auto()`. Since MongoDB names the `id` field `_id`, we'll map the `id` field to `_id`. Lastly, we'll tell Prisma to use the data type of `ObjectId` for the `id` field.\n\nWe'll do the same for the `Post` model `id` field. We'll also change the `authorId` field to `String` and set the data type to `ObjectId`.\n\n```js\ngenerator client {\n provider = \"prisma-client-js\"\n}\n\ndatasource db {\n provider = \"mongodb\"\n url = env(\"DATABASE_URL\")\n}\n\nmodel User {\n id String @id @default(auto()) @map(\"_id\") @db.ObjectId\n email String @unique\n name String?\n posts Post]\n}\n\nmodel Post {\n id String @id @default(auto()) @map(\"_id\") @db.ObjectId\n createdAt DateTime @default(now())\n updatedAt DateTime @updatedAt\n title String\n content String?\n published Boolean @default(false)\n viewCount Int @default(0)\n author User @relation(fields: [authorId], references: [id])\n authorId String @db.ObjectId\n}\n```\n\nThis schema will result in a separate `User` and `Post` collection in MongoDB. Each post will have a reference to a user.\n\nNow that we have our schema set up, let's install our dependencies and generate our schema.\n\n```bash\nnpm install\nnpx prisma generate\n```\n\n> If you make any changes later to the schema, you'll need to run `npx prisma generate` again.\n\n### Create and seed the MongoDB database\n\nNext, we need to seed our database. The repo comes with a `prisma/seed.ts` file with some dummy data.\n\nSo, let's run the following command to seed our database:\n\n```bash\nnpx prisma db seed\n```\n\nThis also creates the `User` and `Post` collections that are defined in `prisma/schema.prisma`.\n\n### Other updates to the example code\n\nBecause we made some changes to the `id` data type, we'll need to update some of the example code to reflect these changes.\n\nThe updates are in the [`pages/api/post/[id].ts` and `pages/api/publish/[id].ts` files.\n\nHere's one example. 
We need to remove the `Number()` call from the reference to the `id` field since it is now a `String`.\n\n```js\n// BEFORE\nasync function handleGET(postId, res) {\n const post = await prisma.post.findUnique({\n where: { id: Number(postId) },\n include: { author: true },\n })\n res.json(post)\n}\n\n// AFTER\nasync function handleGET(postId, res) {\n const post = await prisma.post.findUnique({\n where: { id: postId },\n include: { author: true },\n })\n res.json(post)\n}\n```\n\n### Awesome auto complete & IntelliSense\n\nNotice in this file, when hovering over the `post` variable, VS Code knows that it is of type `Post`. If we just wanted a specific field from this, VS Code automatically knows which fields are included. No guessing!\n\n### Run the app\n\nThose are all of the updates needed. We can now run the app and we should see the seed data show up.\n\n```bash\nnpm run dev\n```\n\nWe can open the app in the browser at `http://localhost:3000/`.\n\nFrom the main page, you can click on a post to see it. From there, you can delete the post.\n\nWe can go to the Drafts page to see any unpublished posts. When we click on any unpublished post, we can publish it or delete it.\n\nThe \"Signup\" button will allow us to add a new user to the database.\n\nAnd, lastly, we can create a new post by clicking the \"Create draft\" button.\n\nAll of these actions are performed by the Prisma client using the API routes defined in our application.\n\nCheck out the `pages/api` folder to dive deeper into the API routes.\n\n## Conclusion\n\nPrisma makes dealing with schemas in MongoDB a breeze. It especially shines when using TypeScript by making your code readable and type-safe. It also helps to manage multiple collection relationships by aiding with referential integrity.\n\nI can see the benefit of defining your schema at the application level and will be using Prisma to connect to MongoDB in the future.\n\nLet me know what you think in the MongoDB community.", "format": "md", "metadata": {"tags": ["TypeScript", "MongoDB", "JavaScript"], "pageDescription": "In this article, we\u2019ll explore Prisma, an Object Relational Mapper (ODM) for MongoDB. Prisma helps developers to write code that is both readable and type-safe.", "contentType": "Tutorial"}, "title": "Type Safety with Prisma & MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/flask-python-mongodb", "action": "created", "body": "# Build a RESTful API with Flask, MongoDB, and Python\n\n>This is the first part of a short series of blog posts called \"Rewrite it in Rust (RiiR).\" It's a tongue-in-cheek title for some posts that will investigate the similarities and differences between the same service written in Python with Flask, and Rust with Actix-Web.\n\nThis post will show how I built a RESTful API for a collection of cocktail recipes I just happen to have lying around. The aim is to show an API server with some complexity, so although it's a small example, it will cover important factors such as:\n\n- Data transformation between the database and a JSON representation.\n- Data validation.\n- Pagination.\n- Error-handling.\n\n## Prerequisites\n\n- Python 3.8 or above\n- A MongoDB Atlas cluster. Follow the \"Get Started with Atlas\" guide to create your account and MongoDB cluster. 
Keep a note of your database username, password, and connection string as you will need those later.\n\nThis is an *advanced* guide, so it'll cover a whole bunch of different libraries which can be brought together to build a declarative Restful API server on top of MongoDB. I won't cover repeating patterns in the codebase, so if you want to build the whole thing, I recommend checking out the source code, which is all on GitHub.\n\nIt won't cover the basics of Python, Flask, or MongoDB, so if that's what you're looking for, I recommend checking out the following resources before tackling this post:\n\n- Think Python\n- The Python & MongoDB Quickstart Series\n- Flask Tutorial\n- Pydantic Documentation\n\n## Getting Started\n\nBegin by cloning the sample code source from GitHub. There are four top-level directories:\n\n- actix-cocktail-api: You can ignore this for now.\n- data: This contains an export of my cocktail data. You'll import this into your cluster in a moment.\n- flask-cocktail-api: The code for this blog post.\n- test_scripts: A few shell scripts that use curl to test the HTTP interface of the API server.\n\nThere are more details in the GitHub repo, but the basics are: Install the project with your virtualenv active:\n\n``` shell\npip install -e .\n```\n\nNext, you should import the data into your cluster. Set the environment variable `$MONGO_URI` to your cluster URI. This environment variable will be used in a moment to import your data, and also by the Flask app. I use `direnv` to configure this, and put the following line in my `.envrc` file in my project's directory:\n\n``` shell\nexport MONGO_URI=\"mongodb+srv://USERNAME:PASSW0RD@cluster0-abcde.azure.mongodb.net/cocktails?retryWrites=true&w=majority\"\n```\n\nNote that your database must be called \"cocktails,\" and the import will create a collection called \"recipes.\" After checking that `$MONGO_URI` is set correctly, run the following command:\n\n``` shell\nmongoimport --uri \"$MONGO_URI\" --file ./recipes.json\n```\n\nNow you should be able to run the Flask app from the\n`flask-cocktail-api` directory:\n\n``` shell\nFLASK_DEBUG=true FLASK_APP=cocktailapi flask run\n```\n\n(You can run `make run` if you prefer.)\n\nCheck the output to ensure it is happy with the configuration, and then in a different terminal window, run the `list_cocktails.sh` script in the `test_scripts` directory. 
It should print something like this:\n\n``` json\n{\n \"_links\": {\n \"last\": {\n \"href\": \"http://localhost:5000/cocktails/?page=5\"\n }, \n \"next\": {\n \"href\": \"http://localhost:5000/cocktails/?page=5\"\n }, \n \"prev\": {\n \"href\": \"http://localhost:5000/cocktails/?page=3\"\n }, \n \"self\": {\n \"href\": \"http://localhost:5000/cocktails/?page=4\"\n }\n }, \n \"recipes\": \n {\n \"_id\": \"5f7daa198ec9dfb536781b0d\", \n \"date_added\": null, \n \"date_updated\": null, \n \"ingredients\": [\n {\n \"name\": \"Light rum\", \n \"quantity\": {\n \"unit\": \"oz\", \n }\n }, \n {\n \"name\": \"Grapefruit juice\", \n \"quantity\": {\n \"unit\": \"oz\", \n }\n }, \n {\n \"name\": \"Bitters\", \n \"quantity\": {\n \"unit\": \"dash\", \n }\n }\n ], \n \"instructions\": [\n \"Pour all of the ingredients into an old-fashioned glass almost filled with ice cubes\", \n \"Stir well.\"\n ], \n \"name\": \"Monkey Wrench\", \n \"slug\": \"monkey-wrench\"\n },\n ]\n ...\n```\n\n## Breaking it All Down\n\nThe code is divided into three submodules.\n\n- `__init__.py` contains all the Flask setup code, and defines all the HTTP routes.\n- `model.py` contains all the Pydantic model definitions.\n- `objectid.py` contains a Pydantic field definition that I stole from the [Beanie object-data mapper for MongoDB.\n\nI mentioned earlier that this code makes use of several libraries:\n\n- PyMongo and Flask-PyMongo handle the connection to the database. Flask-PyMongo specifically wraps the database collection object to provide a convenient`find_one_or_404` method.\n- Pydantic manages data validation, and some aspects of data transformation between the database and a JSON representations.\n- along with a single function from FastAPI.\n\n## Data Validation and Transformation\n\nWhen building a robust API, it's important to validate all the data passing into the system. It would be possible to do this using a stack of `if/else` statements, but it's much more effective to define a schema declaratively, and to allow that to programmatically validate the data being input.\n\nI used a technique that I learned from Beanie, a new and neat ODM that I unfortunately couldn't practically use on this project, because Beanie is async, and Flask is a blocking framework.\n\nBeanie uses Pydantic to define a schema, and adds a custom Field type for ObjectId.\n\n``` python\n# model.py\n\nclass Cocktail(BaseModel):\n id: OptionalPydanticObjectId] = Field(None, alias=\"_id\")\n slug: str\n name: str\n ingredients: List[Ingredient]\n instructions: List[str]\n date_added: Optional[datetime]\n date_updated: Optional[datetime]\n\n def to_json(self):\n return jsonable_encoder(self, exclude_none=True)\n\n def to_bson(self):\n data = self.dict(by_alias=True, exclude_none=True)\n if data[\"_id\"] is None:\n data.pop(\"_id\")\n return data\n```\n\nThis `Cocktail` schema defines the structure of a `Cocktail` instance, which will be validated by Pydantic when instances are created. It includes another embedded schema for `Ingredient`, which is defined in a similar way.\n\nI added convenience functions to export the data in the `Cocktail` instance to either a JSON-compatible `dict` or a BSON-compatible `dict`. 
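To make the distinction concrete, here is a rough usage sketch. It's not from the original codebase: it assumes the `Cocktail` model above can be imported from the project's `model.py`, and it skips the ingredient details.

``` python
from datetime import datetime
from bson import ObjectId

from cocktailapi.model import Cocktail  # adjust the import to match your project layout

cocktail = Cocktail(
    _id=ObjectId(),               # populated via the "_id" alias
    slug="monkey-wrench",
    name="Monkey Wrench",
    ingredients=[],               # ingredient details omitted for brevity
    instructions=["Stir well."],
    date_added=datetime.utcnow(),
)

# BSON-compatible dict: the ObjectId and datetime stay as native Python/BSON
# types, ready to pass straight to PyMongo.
print(cocktail.to_bson())

# JSON-compatible dict: the id is rendered as a hex string and the datetime
# as an ISO8601 string, ready to return from a Flask view.
print(cocktail.to_json())
```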
The differences are subtle, but BSON supports native `ObjectId` and `datetime` types, for example, whereas when encoding as JSON, it's necessary to encode ObjectId instances in some other way (I prefer a string containing the hex value of the id), and datetime objects are encoded as ISO8601 strings.\n\nThe `to_json` method makes use of a function imported from FastAPI, which recurses through the instance data, encoding all values in a JSON-compatible form. It already handles `datetime` instances correctly, but to get it to handle ObjectId values, I extracted some [custom field code from Beanie, which can be found in `objectid.py`.\n\nThe `to_bson` method doesn't need to pass the `dict` data through `jsonable_encoder`. All the types used in the schema can be directly saved with PyMongo. It's important to set `by_alias` to `True`, so that the key for `_id` is just that, `_id`, and not the schema's `id` without an underscore.\n\n``` python\n# objectid.py\n\nclass PydanticObjectId(ObjectId):\n \"\"\"\n ObjectId field. Compatible with Pydantic.\n \"\"\"\n\n @classmethod\n def __get_validators__(cls):\n yield cls.validate\n\n @classmethod\n def validate(cls, v):\n return PydanticObjectId(v)\n\n @classmethod\n def __modify_schema__(cls, field_schema: dict):\n field_schema.update(\n type=\"string\",\n examples=\"5eb7cf5a86d9755df3a6c593\", \"5eb7cfb05e32e07750a1756a\"],\n )\n\nENCODERS_BY_TYPE[PydanticObjectId] = str\n```\n\nThis approach is neat for this particular use-case, but I can't help feeling that it would be limiting in a more complex system. There are many [patterns for storing data in MongoDB. These often result in storing data in a form that is optimal for writes or reads, but not necessarily the representation you would wish to export in an API.\n\n>**What is a Slug?**\n>\n>Looking at the schema above, you may have wondered what a \"slug\" is ... well, apart from a slimy garden pest.\n>\n>A slug is a unique, URL-safe, mnemonic used for identifying a document. I picked up the terminology as a Django developer, where this term is part of the framework. A slug is usually derived from another field. In this case, the slug is derived from the name of the cocktail, so if a cocktail was called \"Rye Whiskey Old-Fashioned,\" the slug would be \"rye-whiskey-old-fashioned.\"\n>\n>In this API, that cocktail could be accessed by sending a `GET` request to the `/cocktails/rye-whiskey-old-fashioned` endpoint.\n>\n>I've kept the unique `slug` field separate from the auto-assigned `_id` field, but I've provided both because the slug could change if the name of the cocktail was tweaked, in which case the `_id` value would provide a constant identifier to look up an exact document.\n\nIn the Rust version of this code, I was nudged to use a different approach. It's a bit more verbose, but in the end I was convinced that it would be more powerful and flexible as the system grew.\n\n## Creating a New Document\n\nNow I'll show you what a single endpoint looks like, first focusing on the \"Create\" endpoint, that handles a POST request to `/cocktails` and creates a new document in the \"recipes\" collection. 
It then returns the document that was stored, including the newly unique ID that MongoDB assigned as `_id`, because this is a RESTful API, and that's what RESTful APIs do.\n\n``` python\n@app.route(\"/cocktails/\", methods=\"POST\"])\ndef new_cocktail():\n raw_cocktail = request.get_json()\n raw_cocktail[\"date_added\"] = datetime.utcnow()\n\n cocktail = Cocktail(**raw_cocktail)\n insert_result = recipes.insert_one(cocktail.to_bson())\n cocktail.id = PydanticObjectId(str(insert_result.inserted_id))\n print(cocktail)\n\n return cocktail.to_json()\n```\n\nThis endpoint modifies the incoming JSON directly, to add a `date_added` item with the current time. It then passes it to the constructor for our Pydantic schema. At this point, if the schema failed to validate the data, an exception would be raised and displayed to the user.\n\nAfter validating the data, `to_bson()` is called on the `Cocktail` to convert it to a BSON-compatible dict, and this is directly passed to PyMongo's `insert_one` method. There's no way to get PyMongo to return the document that was just inserted in a single operation (although an upsert using `find_one_and_update` is similar to just that).\n\nAfter inserting the data, the code then updates the local object with the newly-assigned `id` and returns it to the client.\n\n## Reading a Single Cocktail\n\nThanks to `Flask-PyMongo`, the endpoint for looking up a single cocktail is even more straightforward:\n\n``` python\n@app.route(\"/cocktails/\", methods=[\"GET\"])\ndef get_cocktail(slug):\n recipe = recipes.find_one_or_404({\"slug\": slug})\n return Cocktail(**recipe).to_json()\n```\n\nThis endpoint will abort with a 404 if the slug can't be found in the collection. Otherwise, it simply instantiates a Cocktail with the document from the database, and calls `to_json` to convert it to a dict that Flask will automatically encode correctly as JSON.\n\n## Listing All the Cocktails\n\nThis endpoint is a monster, and it's because of pagination, and the links for pagination. In the sample data above, you probably noticed the `_links` section:\n\n``` json\n\"_links\": {\n \"last\": {\n \"href\": \"http://localhost:5000/cocktails/?page=5\"\n }, \n \"next\": {\n \"href\": \"http://localhost:5000/cocktails/?page=5\"\n }, \n \"prev\": {\n \"href\": \"http://localhost:5000/cocktails/?page=3\"\n }, \n \"self\": {\n \"href\": \"http://localhost:5000/cocktails/?page=4\"\n }\n}, \n```\n\nThis `_links` section is specified as part of the [HAL (Hypertext Application\nLanguage) specification. It's a good idea to follow a standard for pagination data, and I didn't feel like inventing something myself!\n\nAnd here's the code to generate all this. 
Don't freak out.\n\n``` python\n@app.route(\"/cocktails/\")\ndef list_cocktails():\n \"\"\"\n GET a list of cocktail recipes.\n\n The results are paginated using the `page` parameter.\n \"\"\"\n\n page = int(request.args.get(\"page\", 1))\n per_page = 10 # A const value.\n\n # For pagination, it's necessary to sort by name,\n # then skip the number of docs that earlier pages would have displayed,\n # and then to limit to the fixed page size, ``per_page``.\n cursor = recipes.find().sort(\"name\").skip(per_page * (page - 1)).limit(per_page)\n\n cocktail_count = recipes.count_documents({})\n\n links = {\n \"self\": {\"href\": url_for(\".list_cocktails\", page=page, _external=True)},\n \"last\": {\n \"href\": url_for(\n \".list_cocktails\", page=(cocktail_count // per_page) + 1, _external=True\n )\n },\n }\n # Add a 'prev' link if it's not on the first page:\n if page > 1:\n links\"prev\"] = {\n \"href\": url_for(\".list_cocktails\", page=page - 1, _external=True)\n }\n # Add a 'next' link if it's not on the last page:\n if page - 1 < cocktail_count // per_page:\n links[\"next\"] = {\n \"href\": url_for(\".list_cocktails\", page=page + 1, _external=True)\n }\n\n return {\n \"recipes\": [Cocktail(**doc).to_json() for doc in cursor],\n \"_links\": links,\n }\n```\n\nAlthough there's a lot of code there, it's not as complex as it may first appear. Two requests are made to MongoDB: one for a page-worth of cocktail recipes, and the other for the total number of cocktails in the collection. Various calculations are done to work out how many documents to skip, and how many pages of cocktails there are. Finally, some links are added for \"prev\" and \"next\" pages, if appropriate (i.e.: the current page isn't the first or last.) Serialization of the cocktail documents is done in the same way as the previous endpoint, but in a loop this time.\n\nThe update and delete endpoints are mainly repetitions of the code I've already included, so I'm not going to include them here. Check them out in the [GitHub repo if you want to see how they work.\n\n## Error Handling\n\nNothing irritates me more than using a JSON API which returns HTML when an error occurs, so I was keen to put in some reasonable error handling to avoid this happening.\n\nAfter Flask set-up code, and before the endpoint definitions, the code registers two error-handlers:\n\n``` python\n@app.errorhandler(404)\ndef resource_not_found(e):\n \"\"\"\n An error-handler to ensure that 404 errors are returned as JSON.\n \"\"\"\n return jsonify(error=str(e)), 404\n\n@app.errorhandler(DuplicateKeyError)\ndef resource_not_found(e):\n \"\"\"\n An error-handler to ensure that MongoDB duplicate key errors are returned as JSON.\n \"\"\"\n return jsonify(error=f\"Duplicate key error.\"), 400\n```\n\nThe first error-handler intercepts any endpoint that fails with a 404 status code and ensures that the error is returned as a JSON dict.\n\nThe second error-handler intercepts a `DuplicateKeyError` raised by any endpoint, and does the same thing as the first error-handler, but sets the HTTP status code to \"400 Bad Request.\"\n\nAs I was writing this post, I realised that I've missed an error-handler to deal with invalid Cocktail data. I'll leave implementing that as an exercise for the reader! 
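If you'd like a head start on that exercise, a minimal sketch might look like the following. It assumes Pydantic's `ValidationError` details are acceptable to surface to clients as-is; in a real API, you may want to sanitize them first.

``` python
from flask import jsonify
from pydantic import ValidationError

@app.errorhandler(ValidationError)
def invalid_cocktail_data(e):
    """
    An error-handler to ensure that Pydantic validation failures are
    returned as JSON with a 400 status code, alongside the handlers above.
    """
    return jsonify(error="Invalid cocktail data.", details=e.errors()), 400
```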
Indeed, this is one of the difficulties with writing robust Python applications: Because exceptions can be raised from deep in your stack of dependencies, it's very difficult to comprehensively predict what exceptions your application may raise in different circumstances.\n\nThis is something that's very different in Rust, and even though, as you'll see, error-handling in Rust can be verbose and tricky, I've started to love the language for its insistence on correctness.\n\n## Wrapping Up\n\nWhen I started writing this post, I though it would end up being relatively straightforward. As I added the requirement that the code should not just be a toy example, some of the inherent difficulties with building a robust API on top of any database became apparent.\n\nIn this case, Flask may not have been the right tool for the job. I recently wrote a blog post about building an API with Beanie. Beanie and FastAPI are a match made in heaven for this kind of application and will handle validation, transformation, and pagination with much less code. On top of that, they're self-documenting and can provide the data's schema in open formats, including OpenAPI Spec and JSON Schema!\n\nIf you're about to build an API from scratch, I strongly recommend you check them out, and you may enjoy reading Aaron Bassett's posts on the FARM (FastAPI, React, MongoDB) Stack.\n\nI will shortly publish the second post in this series, *Build a Cocktail API with Actix-Web, MongoDB, and Rust*, and then I'll conclude with a third post, *I Rewrote it in Rust\u2014How Did it Go?*, where I'll evaluate the strengths and weaknesses of the two experiments.\n\nThank you for reading. Keep a look out for the upcoming posts!\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Python", "Flask"], "pageDescription": "Build a RESTful API with Flask, MongoDB, and Python", "contentType": "Tutorial"}, "title": "Build a RESTful API with Flask, MongoDB, and Python", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/real-time-card-fraud-solution-accelerator-databricks", "action": "created", "body": "# Real-Time Card Fraud Solution Accelerator with MongoDB and Databricks\n\nCard fraud is a significant problem and fear for both consumers and businesses. However, despite the seriousness of it, there are solutions that can be implemented for card fraud prevention. Financial institutions have various processes and technical solutions in place to detect and prevent card fraud, such as monitoring transactions for suspicious activity, implementing know-your-customer (KYC) procedures, and a combination of controls based on static rules or machine learning models. These can all help, but they are not without their own challenges. \n\nFinancial institutions with legacy fraud prevention systems can find themselves fighting against their own data infrastructure. These challenges can include:\n\n* **Incomplete data**: Legacy systems may not have access to all relevant data sources, leading to a lack of visibility into fraud patterns and behaviors.\n* **Latency**: Fraud prevention systems need to execute fast enough to be able to be deployed as part of a real-time payment approval process. 
Legacy systems often lack this capability.\n* **Difficulty to change**: Legacy systems have been designed to work within specific parameters, and changing them to meet new requirements is often difficult and time-consuming.\n* **Weak security**: Legacy systems may have outdated security protocols that leave organizations vulnerable to cyber attacks.\n* **Operational overheads due to technical sprawl**: Existing architectures often pose operational challenges due to diverse technologies that have been deployed to support the different access patterns required by fraud models and ML training. This technical sprawl in the environment requires significant resources to maintain and update.\n* **High operation costs**: Legacy systems can be costly to operate, requiring significant resources to maintain and update.\n* **No collaboration between application and data science teams**: Technical boundaries between the operational platform and the data science platform are stopping application developers and data science teams from working collaboratively, leading to longer time to market and higher overheads.\n\nThese data issues can be detrimental to a financial institution trying desperately to keep up with the demands of customer expectations, user experience, and fraud. As technology is advancing rapidly, surely so is card fraud, becoming increasingly sophisticated. This has naturally led to an absolute need for real-time solutions to detect and prevent card fraud effectively. Anything less than that is unacceptable. So, how can financial institutions today meet these demands? The answer is simple. Fraud detection big data analytics should shift-left to the application itself. \n\nWhat does this look like in practice? Application-driven analytics for fraud detection is the solution for the very real challenges financial institutions face today, as mentioned above.\n\n## Solution overview\nTo break down what this looks like, we will demonstrate how easy it is to build an ML-based fraud solution using MongoDB and Databricks. The functional and nonfunctional features of this proposed solution include: \n\n* **Data completeness**: To address the challenge of incomplete data, the system will be integrated with external data sources to ensure complete and accurate data is available for analysis.\n* **Real-time processing**: The system will be designed to process data in real time, enabling the timely detection of fraudulent activities.\n* **AI/ML modeling and model use**: Organizations can leverage AI/ML to enhance their fraud prevention capabilities. AI/ML algorithms can quickly identify and flag potential fraud patterns and behaviors.\n* **Real-time monitoring**: Organizations should aim to enable real-time monitoring of the application, allowing for real-time processing and analysis of data. 
\n* **Model observability**: Organizations should aim to improve observability in their systems to ensure that they have full visibility into fraud patterns and behaviors.\n* **Flexibility and scalability**: The system will be designed with flexibility and scalability in mind, allowing for easy changes to be made to accommodate changing business needs and regulatory requirements.\n* **Security**: The system will be designed with robust security measures to protect against potential security breaches, including encryption, access control, and audit trails.\n* **Ease of operation**: The system will be designed with ease of operation in mind, reducing operational headaches and enabling the fraud prevention team to focus on their core responsibilities..\n* **Application development and data science team collaboration**: Organizations should aim to enable collaboration between application development and data science teams to ensure that the goals and objectives are aligned, and cooperation is optimized.\n* **End-to-end CI/CD pipeline support**: Organizations should aim to have end-to-end CI/CD pipeline support to ensure that their systems are up-to-date and secure.\n\n## Solution components\nThe functional features listed above can be implemented by a few architectural components. These include:\n\n1. **Data sourcing** \n 1. **Producer apps**: The producer mobile app simulates the generation of live transactions. \n 2. **Legacy data source**: The SQL external data source is used for customer demographics.\n 3. **Training data**: Historical transaction data needed for model training data is sourced from cloud object storage - Amazon S3 or Microsoft Azure Blob Storage. \n2. **MongoDB Atlas**: Serves as the Operational Data Store (ODS) for card transactions and processes transactions in real time. The solution leverages MongoDB Atlas aggregation framework to perform in-app analytics to process transactions based on pre-configured rules and communicates with Databricks for advanced AI/ML-based fraud detection via a native Spark connector. \n3. **Databricks**: Hosts the AI/ML platform to complement MongoDB Atlas in-app analytics. A fraud detection algorithm used in this example is a notebook inspired by Databrick's fraud framework. MLFlow has been used to manage the MLOps for managing this model. The trained model is exposed as a REST endpoint. \n\nNow, let\u2019s break down these architectural components in greater detail below, one by one.\n\n***Figure 1**: MongoDB for event-driven and shift-left analytics architecture*\n\n### 1. Data sourcing\nThe first step in implementing a comprehensive fraud detection solution is aggregating data from all relevant data sources. As shown in **Figure 1** above, an event-driven federated architecture is used to collect and process data from real-time sources such as producer apps, batch legacy systems data sources such as SQL databases, and historical training data sets from offline storage. This approach enables data sourcing from various facets such as transaction summary, customer demography, merchant information, and other relevant sources, ensuring data completeness. 
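As a rough illustration of the real-time side of this data sourcing, the sketch below shows what a producer could look like: a script that writes transaction events into a MongoDB collection at a configurable rate. The database, collection, and field names here are hypothetical and are not taken from the demo repository.

```python
import os
import random
import time
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient(os.environ["MONGODB_URI"])       # hypothetical env var
transactions = client["fraud_demo"]["transactions"]   # hypothetical namespace

TX_PER_SEC = 5  # configurable generation rate (transactions/sec)

while True:
    # Insert a synthetic card transaction event in real time.
    transactions.insert_one(
        {
            "account_id": f"ACC-{random.randint(1, 500):04d}",
            "amount": round(random.uniform(1.0, 2000.0), 2),
            "merchant_category": random.choice(["grocery", "travel", "electronics"]),
            "location": random.choice(["NYC", "SFO", "LON", "RIO"]),
            "timestamp": datetime.now(timezone.utc),
        }
    )
    time.sleep(1 / TX_PER_SEC)
```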
\n\nAdditionally, the proposed event-driven architecture provides the following benefits: \n* Real-time transaction data unification, which allows for the collection of card transaction event data such as transaction amount, location, time of the transaction, payment gateway information, payment device information, etc., in **real-time**.\n* Helps re-train monitoring models based on live event activity to combat fraud as it happens. \n\nThe producer application for the demonstration purpose is a Python script that generates live transaction information at a predefined rate (transactions/sec, which is configurable).\n\n***Figure 2**: Transaction collection sample document*\n\n### 2. MongoDB for event-driven, shift-left analytics architecture \nMongoDB Atlas is a managed data platform that offers several features that make it the perfect choice as the datastore for card fraud transaction classification. It supports flexible data models and can handle various types of data, high scalability to meet demand, advanced security features to ensure compliance with regulatory requirements, real-time data processing for fast and accurate fraud detection, and cloud-based deployment to store data closer to customers and comply with local data privacy regulations.\n\nThe MongoDB Spark Streaming Connector integrates Apache Spark and MongoDB. Apache Spark, hosted by Databricks, allows the processing and analysis of large amounts of data in real-time. The Spark Connector translates MongoDB data into Spark data frames and supports real time Spark streaming.\n\n***Figure 3**: MongoDB for event-driven and shift-left analytics architecture*\n\nThe App Services features offered by MongoDB allow for real-time processing of data through change streams and triggers. Because MongoDB Atlas is capable of storing and processing various types of data as well as streaming capabilities and trigger functionality, it is well suited for use in an event-driven architecture. \n\nIn the demo, we used both the rich connector ecosystem of MongoDB and App Services to process transactions in real time. The App Service Trigger function is used by invoking a REST service call to an AI/ML model hosted through the Databricks MLflow framework.\n\n***Figure 4**: The processed and \u201cfeatures of transaction\u201d MongoDB sample document*\n\n***Figure 5**: Processed transaction sample document*\n\n**Note**: *A combined view of the collections, as mentioned earlier, can be visually represented using **MongoDB Charts** to help better understand and observe the changing trends of fraudulent transactions. For advanced reporting purposes, materialized views can help.*\n\nThe example solution manages rules-based fraud prevention by storing user-defined payment limits and information in a user settings collection, as shown below. This includes maximum dollar limits per transaction, the number of transactions allowed per day, and other user-related details. By filtering transactions based on these rules before invoking expensive AI/ML models, the overall cost of fraud prevention is reduced.\n\n### 3. Databricks as an AI/ML ops platform\nDatabricks is a powerful AI/ML platform to develop models for identifying fraudulent transactions. One of the key features of Databricks is the support of real-time analytics. As discussed above, real-time analytics is a key feature of modern fraud detection systems. \n\nDatabricks includes MLFlow, a powerful tool for managing the end-to-end machine learning lifecycle. 
MLFlow allows users to track experiments, reproduce results, and deploy models at scale, making it easier to manage complex machine learning workflows. MLFlow offers model observability, which allows for easy tracking of model performance and debugging. This includes access to model metrics, logs, and other relevant data, which can be used to identify issues and improve the accuracy of the model over time. Additionally, these features can help in the design of modern fraud detection systems using AI/ML.\n\n## Demo artifacts and high-level description\n\nWorkflows needed for processing and building models for validating the authenticity of transactions are done through the Databricks AI/ML platform. There are mainly two workflow sets to achieve this:\n\n1: The **Streaming workflow**, which runs in the background continuously to consume incoming transactions in real-time using the MongoDB Spark streaming connector. Every transaction first undergoes data preparation and a feature extraction process; the transformed features are then streamed back to the MongoDB collection with the help of a Spark streaming connector. \n\n***Figure 7**: Streaming workflow*\n\n2: The **Training workflow** is a scheduled process that performs three main tasks/notebooks, as mentioned below. This workflow can be either manually triggered or through the Git CI/CD (webhooks).\n\n***Figure 8**: Training workflow stages*\n\n>A step-by-step breakdown of how the example solution works can be accessed at this GitHub repository, and an end-to-end solution demo is available. \n\n## Conclusion\nModernizing legacy fraud prevention systems using MongoDB and Databricks can provide many benefits, such as improved detection accuracy, increased flexibility and scalability, enhanced security, reduced operational headaches, reduced cost of operation, early pilots and quick iteration, and enhanced customer experience.\n\nModernizing legacy fraud prevention systems is essential to handling the challenges posed by modern fraud schemes. By incorporating advanced technologies such as MongoDB and Databricks, organizations can improve their fraud prevention capabilities, protect sensitive data, and reduce operational headaches. With the solution proposed, organizations can take a step forward in their fraud prevention journey to achieve their goals. \n\nLearn more about how MongoDB can modernize your fraud prevention system, and contact the MongoDB team.", "format": "md", "metadata": {"tags": ["MongoDB", "AI"], "pageDescription": "In this article, we'll demonstrate how easy it is to build an ML-based fraud solution using MongoDB and Databricks.", "contentType": "Article"}, "title": "Real-Time Card Fraud Solution Accelerator with MongoDB and Databricks", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/serverless-development-kotlin-aws-lambda-mongodb-atlas", "action": "created", "body": "# Serverless Development with Kotlin, AWS Lambda, and MongoDB Atlas\n\nAs seen in a previous tutorial, creating a serverless function for AWS Lambda with Java and MongoDB isn't too complicated of a task. In fact, you can get it done with around 35 lines of code!\n\nHowever, maybe your stack doesn't consist of Java, but instead Kotlin. What needs to be done to use Kotlin for AWS Lambda and MongoDB development? The good news is not much will be different!\n\nIn this tutorial, we'll see how to create a simple AWS Lambda function. 
It will use Kotlin as the programming language and it will use the MongoDB Kotlin driver for interacting with MongoDB.\n\n## The requirements\n\nThere are a few prerequisites that must be met in order to be successful with this particular tutorial:\n\n- Must have a Kotlin development environment installed and configured on your local computer.\n- Must have a MongoDB Atlas instance deployed and configured.\n- Must have an Amazon Web Services (AWS) account.\n\nThe easiest way to develop with Kotlin is through IntelliJ, but it is a matter of preference. The requirement is that you can build Kotlin applications with Gradle.\n\nFor the purpose of this tutorial, any MongoDB Atlas instance will be sufficient whether it be the M0 free tier, the serverless pay-per-use tier, or something else. However, you will need to have the instance properly configured with user rules and network access rules. If you need help, use our MongoDB Atlas tutorial as a starting point.\n\n## Defining the project dependencies with the Gradle Kotlin DSL\n\nAssuming you have a project created using your tooling of choice, we need to properly configure the **build.gradle.kts** file with the correct dependencies for AWS Lambda with MongoDB.\n\nIn the **build.gradle.kts** file, include the following:\n\n```kotlin\nimport org.jetbrains.kotlin.gradle.tasks.KotlinCompile\n\nplugins {\n kotlin(\"jvm\") version \"1.9.0\"\n application\n id(\"com.github.johnrengelman.shadow\") version \"7.1.2\"\n}\n\napplication {\n mainClass.set(\"example.Handler\")\n}\n\ngroup = \"org.example\"\nversion = \"1.0-SNAPSHOT\"\n\nrepositories {\n mavenCentral()\n}\n\ndependencies {\n testImplementation(kotlin(\"test\"))\n implementation(\"com.amazonaws:aws-lambda-java-core:1.2.2\")\n implementation(\"com.amazonaws:aws-lambda-java-events:3.11.1\")\n implementation(\"org.jetbrains.kotlinx:kotlinx-serialization-json:1.3.1\")\n implementation(\"org.mongodb:bson:4.10.2\")\n implementation(\"org.mongodb:mongodb-driver-kotlin-sync:4.10.2\")\n}\n\ntasks.test {\n useJUnitPlatform()\n}\n\ntasks.withType {\n kotlinOptions.jvmTarget = \"1.8\"\n}\n```\n\nThere are a few noteworthy items in the above configuration.\n\nLooking at the `plugins` first, you'll notice the use of Shadow:\n\n```kotlin\nplugins {\n kotlin(\"jvm\") version \"1.9.0\"\n application\n id(\"com.github.johnrengelman.shadow\") version \"7.1.2\"\n}\n```\n\nAWS Lambda expects a ZIP or a JAR. By using the Shadow plugin, we can use Gradle to build a \"fat\" JAR, which includes both the application and all required dependencies. When using Shadow, the main class must be defined.\n\nTo define the main class, we have the following:\n\n```kotlin\napplication {\n mainClass.set(\"example.Handler\")\n}\n```\n\nThe above assumes that all our code will exist in a `Handler` class in an `example` package. Yours does not need to match, but note that this particular class and package will be referenced throughout the tutorial. 
You should swap names wherever necessary.\n\nThe next item to note is the `dependencies` block:\n\n```kotlin\ndependencies {\n testImplementation(kotlin(\"test\"))\n implementation(\"com.amazonaws:aws-lambda-java-core:1.2.2\")\n implementation(\"com.amazonaws:aws-lambda-java-events:3.11.1\")\n implementation(\"org.jetbrains.kotlinx:kotlinx-serialization-json:1.3.1\")\n implementation(\"org.mongodb:bson:4.10.2\")\n implementation(\"org.mongodb:mongodb-driver-kotlin-sync:4.10.2\")\n}\n```\n\nIn the above block, we are including the various AWS Lambda SDK packages as well as the MongoDB Kotlin driver. These dependencies will allow us to use MongoDB with Kotlin and AWS Lambda.\n\nIf you wanted to, you could run the following command:\n\n```bash\n./gradlew shadowJar\n```\n\nAs long as the main class exists, it should build a JAR file for you.\n\n## Developing a serverless function with Kotlin and MongoDB\n\nWith the configuration items out of the way, we can focus on the development of our serverless function. Open the project's **src/main/kotlin/example/Handler.kt** file and include the following boilerplate code:\n\n```kotlin\npackage example\n\nimport com.amazonaws.services.lambda.runtime.Context\nimport com.amazonaws.services.lambda.runtime.RequestHandler\nimport com.mongodb.client.model.Filters\nimport com.mongodb.kotlin.client.MongoClient\nimport com.mongodb.kotlin.client.MongoCollection\nimport com.mongodb.kotlin.client.MongoDatabase\nimport org.bson.Document\nimport org.bson.conversions.Bson\nimport org.bson.BsonDocument\n\nclass Handler : RequestHandler, Void> {\n\n override fun handleRequest(input: Map, context: Context): void {\n\n return null;\n\n }\n}\n```\n\nThe above code won't do much of anything if you tried to execute it on AWS Lambda, but it is a starting point. Let's start by establishing a connection to MongoDB.\n\nWithin the `Handler` class, add the following:\n\n```kotlin\nclass Handler : RequestHandler, Void> {\n\n private val mongoClient: MongoClient = MongoClient.create(System.getenv(\"MONGODB_ATLAS_URI\"))\n\n override fun handleRequest(input: Map, context: Context): void {\n\n val database: MongoDatabase = mongoClient.getDatabase(\"sample_mflix\")\n val collection: MongoCollection = database.getCollection(\"movies\")\n\n return null;\n\n }\n}\n```\n\nFirst, you'll notice that we are creating a `mongoClient` variable to hold the information about our connection. This client will be created using a MongoDB Atlas URI that we plan to store as an environment variable. It is strongly recommended that you use environment variables to store this information so your credentials don't get added to your version control.\n\nIn case you're unsure what the MongoDB Atlas URI looks like, it looks like the following:\n\n```\nmongodb+srv://:@.dmhrr.mongodb.net/?retryWrites=true&w=majority\n```\n\nYou can find your exact connection string using the MongoDB Atlas CLI or through the MongoDB Atlas dashboard.\n\nWithin the `handleRequest` function, we get a reference to the database and collection that we want to use:\n\n```kotlin\nval database: MongoDatabase = mongoClient.getDatabase(\"sample_mflix\")\nval collection: MongoCollection = database.getCollection(\"movies\")\n```\n\nFor this particular example, we are using the `sample_mflix` database and the `movies` collection, both of which are part of the optional MongoDB Atlas sample dataset. Feel free to use a database and collection that you already have.\n\nNow we can focus on interactions with MongoDB. 
Make a few changes to the `Handler` class so it looks like this:\n\n```kotlin\npackage example\n\nimport com.amazonaws.services.lambda.runtime.Context\nimport com.amazonaws.services.lambda.runtime.RequestHandler\nimport com.mongodb.client.model.Filters\nimport com.mongodb.kotlin.client.MongoClient\nimport com.mongodb.kotlin.client.MongoCollection\nimport com.mongodb.kotlin.client.MongoDatabase\nimport org.bson.Document\nimport org.bson.conversions.Bson\nimport org.bson.BsonDocument\n\nclass Handler : RequestHandler, List> {\n\n private val mongoClient: MongoClient = MongoClient.create(System.getenv(\"MONGODB_ATLAS_URI\"))\n\n override fun handleRequest(input: Map, context: Context): List {\n\n val database: MongoDatabase = mongoClient.getDatabase(\"sample_mflix\")\n val collection: MongoCollection = database.getCollection(\"movies\")\n\n var filter: Bson = BsonDocument()\n\n if(input.containsKey(\"title\") && !input.get(\"title\").isNullOrEmpty()) {\n filter = Filters.eq(\"title\", input.get(\"title\"))\n }\n\n val results: List = collection.find(filter).limit(5).toList()\n\n return results;\n\n }\n}\n```\n\nInstead of using `Void` in the `RequestHandler` and `void` as the return type for the `handleRequest` function, we are now using `List` because we plan to return an array of documents to the requesting client.\n\nThis brings us to the following:\n\n```kotlin\nvar filter: Bson = BsonDocument()\n\nif(input.containsKey(\"title\") && !input.get(\"title\").isNullOrEmpty()) {\n filter = Filters.eq(\"title\", input.get(\"title\"))\n}\n\nval results: List = collection.find(filter).limit(5).toList()\n\nreturn results;\n```\n\nInstead of executing a fixed query when the function is invoked, we are accepting input from the user. If the user provides a `title` field with the invocation, we construct a filter for it. In other words, we will be looking for movies with a title that matches the user input. If no `title` is provided, we just query for all documents in the collection.\n\nFor the actual `find` operation, rather than risking the return of more than a thousand documents, we are limiting the result set to five and are converting the response from a cursor to a list.\n\nAt this point in time, our simple AWS Lambda function is complete. We can focus on the building and deployment of the function now.\n\n## Building and deploying a Kotlin function to AWS Lambda\n\nBefore we worry about AWS Lambda, let's build the project using Shadow. From the command line, IntelliJ, or with whatever tool you're using, execute the following:\n\n```bash\n./gradlew shadowJar\n```\n\nFind the JAR file, which is probably in the **build/libs** directory unless you specified otherwise.\n\nEverything we do next will be done in the AWS portal. There are three main items that we want to take care of during this process:\n\n1. Add the environment variable with the MongoDB Atlas URI to the Lambda function.\n2. Rename the \"Handler\" information in Lambda to reflect the actual project.\n3. Upload the JAR file to AWS Lambda.\n\nWithin the AWS Lambda dashboard for your function, click the \"Configuration\" tab followed by the \"Environment Variables\" navigation item. Add `MONGODB_ATLAS_URI` along with the appropriate connection string when prompted. Make sure the connection string reflects your instance with the proper username and password.\n\nYou can now upload the JAR file from the \"Code\" tab of the AWS Lambda dashboard. 
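If you'd rather script the upload than click through the console, the AWS CLI can push the same artifact. This assumes the AWS CLI is configured, the Lambda function already exists, and that Shadow produced a JAR named after your project and version, so adjust the names to match your setup.

```bash
aws lambda update-function-code \
  --function-name my-kotlin-function \
  --zip-file fileb://build/libs/my-project-1.0-SNAPSHOT-all.jar
```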
When this is done, we need to tell AWS Lambda what the main class is and the function that should be executed.\n\nIn the \"Code\" tab, look for \"Runtime Settings\" and choose to edit it. In our example, we had **example** as the package and **Handler** as the class. We also had our function logic in the **handleRequest** function.\n\nWith all this in mind, change the \"Handler\" within AWS Lambda to **example.Handler::handleRequest** or whatever makes sense for your project.\n\nAt this point, you should be able to test your function.\n\nOn the \"Test\" tab of the AWS Lambda dashboard, choose to run a test as is. You should get a maximum of five results back. Next, try using the following input criteria:\n\n```json\n{\n\"title\": \"The Terminator\"\n}\n```\n\nYour response will now look different because of the filter.\n\n## Conclusion\n\nCongratulations! You created your first AWS Lambda function in Kotlin and that function supports communication with MongoDB!\n\nWhile this example was intended to be short and simple, you could add significantly more logic to your functions that engage with other functionality of MongoDB, such as aggregations and more.\n\nIf you'd like to see how to use Java to accomplish the same thing, check out my previous tutorial on the subject titled Serverless Development with AWS Lambda and MongoDB Atlas Using Java.", "format": "md", "metadata": {"tags": ["Atlas", "Kotlin", "Serverless"], "pageDescription": "Learn how to use Kotlin and MongoDB to create performant and scalable serverless functions on AWS Lambda.", "contentType": "Tutorial"}, "title": "Serverless Development with Kotlin, AWS Lambda, and MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/securing-mongodb-with-tls", "action": "created", "body": "# Securing MongoDB with TLS\n\nHi! I'm Carl from Smallstep. We make it easier to use TLS everywhere. In this post, I\u2019m going to make a case for using TLS/SSL certificates to secure your self-managed MongoDB deployment, and I\u2019ll take you through the steps to enable various TLS features in MongoDB.\n\nMongoDB has very strong support for TLS that can be granularly controlled. At minimum, TLS will let you validate and encrypt connections into your database or between your cluster member nodes. But MongoDB can also be configured to authenticate users using TLS client certificates instead of a password. This opens up the possibility for more client security using short-lived (16-hour) certificates. The addition of Smallstep step-ca, an open source certificate authority, makes it easy to create and manage MongoDB TLS certificates.\n\n## The Case for Certificates\n\nTLS certificates come with a lot of benefits:\n\n* Most importantly, TLS makes it possible to require *authenticated encryption* for every database connection\u2014just like SSH connections.\n* Unlike SSH keys, certificates expire. You can issue ephemeral (e.g., five-minute) certificates to people whenever they need to access your database, and avoid having long-lived key material (like SSH keys) sitting around on people's laptops.\n* Certificates allow you to create a trust domain around your database. MongoDB can be configured to refuse connections from clients who don\u2019t have a certificate issued by your trusted Certificate Authority (CA).\n* Certificates can act as user login credentials in MongoDB, replacing passwords. This lets you delegate MongoDB authentication to a CA. 
This opens the door to further delegation via OpenID Connect, so you can have Single Sign-On MongoDB access.\n\nWhen applied together, these benefits offer a level of security comparable to an SSH tunnel\u2014without the need for SSH.\n\n## MongoDB TLS \n\nHere\u2019s an overview of TLS features that can be enabled in MongoDB:\n\n* **Channel encryption**: The traffic between clients and MongoDB is encrypted. You can enable channel encryption using self-signed TLS certificates. Self-signed certificates are easy to create, but they will not offer any client or server identity validation, so you will be vulnerable to man-in-the-middle attacks. This option only makes sense within a trusted network.\n* **Identity validation**: To enable identity validation on MongoDB, you\u2019ll need to run an X.509 CA that can issue certificates for your MongoDB hosts and clients. Identity validation happens on both sides of a MongoDB connection:\n * **Client identity validation**: Client identity validation means that the database can ensure all client connections are coming from *your* authorized clients. In this scenario, the client has a certificate and uses it to authenticate itself to the database when connecting.\n * **Server identity validation**: Server identity validation means that MongoDB clients can ensure that they are talking to your MongoDB database. The server has an identity certificate that all clients can validate when connecting to the database.\n* **Cluster member validation**: MongoDB can require all members of a cluster to present valid certificates when they join the cluster. This encrypts the traffic between cluster members.\n* **X.509 User Authentication**: Instead of passwords, you can use X.509 certificates as login credentials for MongoDB users.\n* **Online certificate rotation**: Use short-lived certificates and MongoDB online certificate rotation to automate operations.\n\nTo get the most value from TLS with your self-managed MongoDB deployment, you need to run a CA (the fully-managed MongoDB Atlas comes with TLS features enabled by default).\n\nSetting up a CA used to be a difficult, time-consuming hassle requiring deep domain knowledge. Thanks to emerging protocols and tools, it has become a lot easier for any developer to create and manage a simple private CA in 2021. At Smallstep, we\u2019ve created an open source online CA called step-ca that\u2019s secure and easy to use, either online or offline.\n\n## TLS Deployment with MongoDB and Smallstep step-ca\n\nHere are the main steps required to secure MongoDB with TLS. If you\u2019d like to try it yourself, you can find a series of blog posts on the Smallstep website detailing the steps:\n\n* Set up a CA. A single step-ca instance is sufficient. When you run your own CA and use short-lived certificates, you can avoid the complexity of managing CRL and OCSP endpoints by using passive revocation. With passive revocation, if a key is compromised, you simply block the renewal of its certificate in the CA.\n* For server validation, issue a certificate and private key to your MongoDB server and configure server TLS.\n* For client validation, issue certificates and private keys to your clients and configure client-side TLS.\n* For cluster member validation, issue certificates and keys to your MongoDB cluster members and configure cluster TLS.\n* Deploy renewal mechanisms for your certificates. For example, certificates used by humans could be renewed manually when a database connection is needed. 
Certificates used by client programs or service accounts can be renewed with a scheduled job.\n* To enable X.509 user authentication, you\u2019ll need to add X.509-authenticated users to your database, and configure your clients to attempt X.509 user authentication when connecting to MongoDB.\n* Here\u2019s the icing on the cake: Once you\u2019ve set all of this up, you can configure step-ca to allow users to get MongoDB certificates via an identity provider, using OpenID Connect. This is a straightforward way to enable Single Sign-on for MongoDB.\n\nFinally, it\u2019s important to note that it\u2019s possible to stage the migration of an existing MongoDB cluster to TLS: You can make TLS connections to MongoDB optional at first, and only require client validation once you\u2019ve migrated all of your clients.\n\nReady to get started? In this Smallstep series of tutorials, we\u2019ll take you through this process step-by-step.", "format": "md", "metadata": {"tags": ["MongoDB", "TLS"], "pageDescription": "Learn how to secure your self-managed MongoDB TLS deployment with certificates using the Smallstep open source online certificate authority.", "contentType": "Article"}, "title": "Securing MongoDB with TLS", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-data-federation-control-access-analytics-node", "action": "created", "body": "# Using Atlas Data Federation to Control Access to Your Analytics Node\n\nMongoDB replica sets, analytics nodes, and read preferences are powerful tools that can help you ensure high availability, optimize performance, and control how your applications access and query data in a MongoDB database. This blog will cover how to use Atlas Data Federation to control access to your analytics node, customize read preferences, and set tag sets for a more seamless and secure data experience.\n\n## How do MongoDB replica sets work?\nMongoDB deployed in replica sets is a strategy to achieve high availability. This strategy provides automatic failover and data redundancy for your applications. A replica set is a group of MongoDB servers that contain the same information, with one server designated as the primary node and the others as secondary nodes. \n\nThe primary node is the leader of a replica set and is the only node that can receive write operations, while the secondary nodes, on the other hand, continuously replicate the data from the primary node and can be used to serve read operations. If the primary node goes down, one of the secondaries can then be promoted to be the primary, allowing the replica set to continue to operate without downtime. This is often referred to as automatic failover. In this case, the new primary is chosen through an \"election\" process, which involves the nodes in the replica set voting for the new primary.\n\nHowever, in some cases, you may not want your secondary node to become the primary node in your replica set. For example, imagine that you have a primary node and two secondary nodes in your database cluster. The primary node is responsible for handling all of the write operations and the secondary nodes are responsible for handling read operations. Now, suppose you have a heavy query that scans a large amount of data over a long period of time on one of the secondary nodes. This query will require the secondary node to do a lot of work because it needs to scan through a large amount of data to find the relevant results. 
If the primary were to fail while this query is running, the secondary node that is running the query could be promoted to primary. However, since the node is busy running the heavy query, it may struggle to handle the additional load of write operations now that it is the primary. As a result, the performance of the database may suffer, or the newly promoted node might fail entirely.\n\nThis is where Analytics nodes come in\u2026\n\n## Using MongoDB\u2019s analytics nodes to isolate workloads\nIf a database performs complex or long-running operations, such as ETL or reporting, you may want to isolate these queries from the rest of your operational workload by running them on analytics nodes which are completely dedicated to this kind of operation.\n\nAnalytics nodes are a type of secondary node in a MongoDB replica set that can be designated to handle special read-only workloads, and importantly, they cannot be promoted to primary. They can be scaled independently to handle complex analytical queries that involve large amounts of data. When you offload read-intensive workloads from the primary node in a MongoDB replica set, you are directing read operations to other nodes in the replica set, rather than to the primary node. This can help to reduce the load on the primary node and ensure it does not get overwhelmed.\n\nTo use analytics nodes, you must configure your cluster with additional nodes that are designated as \u201cAnalytic Nodes.\u201d This is done in the cluster configuration setup flow. Then, in order to have your client application utilize the Analytic Nodes, you must utilize tag sets when connecting. Utilizing these tag sets enables you to direct all read operations to the analytics nodes.\n\n## What are read preferences and tag sets in MongoDB?\nRead preferences in MongoDB allow you to control what node, within a standard cluster, you are connecting to and want to read from.\n\nMongoDB supports several read preference types that you can use to specify which member of a replica set you want to read from. Here are the most commonly used read preference types:\n1. **Primary**: Read operations are sent to the primary node. This is the default read preference.\n2. **PrimaryPreferred**: Read operations are sent to the primary node if it is available. Otherwise, they are sent to a secondary node.\n3. **Secondary**: Read operations are sent to a secondary node.\n4. **SecondaryPreferred**: Read operations are sent to a secondary node, if one is available. Otherwise, they are sent to the primary node.\n5. **Nearest**: Read operations are sent to the member of the replica set with the lowest network latency, regardless of whether it is the primary or a secondary node.\n\n*It's important to note that read preferences are only used for read operations, not write operations.\n\nTag sets allow you to control even more details about which node you read. MongoDB tag sets are a way to identify specific nodes in a replica set. You can think of them as labels. This allows the calling client application to specify which nodes in a replica set you want to use for read operations, based on the tags that have been applied to them.\n\nMongoDB Atlas clusters are automatically configured with predefined tag sets for different member types depending on how you\u2019ve configured your cluster. You can utilize these predefined replica set tags to direct queries from specific applications to your desired node types and regions. Here are some examples:\n\n1. 
**Provider**: Cloud provider on which the node is provisioned \n 1. {\"provider\" : \"AWS\"}\n 2. {\"provider\" : \"GCP\"}\n 3. {\"provider\" : \"AZURE\"}\n2. **Region**: Cloud region in which the node resides \n 1. {\"region\" : \"US_EAST_2\"}\n3. **Node**: Node type \n 1. {\"nodeType\" : \"ANALYTICS\"}\n 2. {\"nodeType\" : \"READ_ONLY\"}\n 3. {\"nodeType\" : \"ELECTABLE\"}\n4. **Workload Type**: Tag to distribute your workload evenly among your non-analytics (electable or read-only) nodes.\n 1. {\"workloadType\" : \"OPERATIONAL\"}\n\n## Customer challenge \nRead preferences and tag sets can be helpful in controlling which node gets utilized for a specific query. However, they may not be sufficient on their own to protect against certain types of risks or mistakes. For example, if you are concerned about other users or developers accidentally accessing the primary node of the cluster, read preferences and tag sets may not provide enough protection, as someone with a database user can forget to set the read preference or choose not to use a tag set. In this case, you might want to use additional measures to ensure that certain users or applications only have access to specific nodes of your cluster.\n\nMongoDB Atlas Data Federation can be used as a view on top of your data that is tailored to the specific needs of the user or application. You can create database users in Atlas that are only provisioned to connect to specific clusters or federated database instances. Then, when you provide the endpoints for the federated database instances and the necessary database users, you can be sure that the end user is only able to connect to the nodes you want them to have access to. This can help to \"lock down\" a user or application to a specific node, allowing them to better control which data is accessible to them and ensuring that your data is being accessed in a safe and secure way. \n\n## How does Atlas Data Federation fit in?\nAtlas Data Federation is an on-demand query engine that allows you to query, transform, and move data across multiple data sources, as if it were all in the same place and format. With Atlas Data Federation, you can create virtual collections that refer to your underlying Atlas cluster collections and lock them to a specific read preference or tag set. You can then restrict database users to only be able to connect to the federated database instance, thereby giving partners within your business live access to your cluster data, while not having any risk that they connect to the primary. This allows you to isolate different workloads and reduce the risk of contention between them. \n\nFor example, you could create a separate endpoint for analytics queries that is locked down to read-only access and restrict queries to only run on analytics nodes, while continuing to use the cluster connection for your operational application queries. This would allow you to run analytics queries with access to real-time data without affecting the performance of the cluster.\n\nTo do this, you would create a virtual collection, choose the source of a specific cluster collection, and specify a tag set for the analytics node. Then, a user can query their federated database instance, knowing it will always query the analytics node and that their primary cluster won\u2019t be impacted. 
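For comparison, here is roughly what a client connecting straight to the cluster has to remember on every connection in order to stay on the analytics nodes. This is only a sketch: the host, user, database, and collection names below are placeholders, not values from this article.

```bash
# Hedged example: a direct cluster connection must set the read preference and
# tag set itself; forgetting either can send the query to an electable node.
mongosh "mongodb+srv://analyticsUser:<password>@cluster0.example.mongodb.net/sampleDB?readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS" \
  --eval "db.sales.countDocuments({})"
```

Connecting through the federated database instance's endpoint instead bakes that read preference into the endpoint itself, so the client cannot get it wrong.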
The only way to make a change would be in the storage configuration of the federated database instance, which you can prevent, ensuring that no mistakes happen.\n\nIn addition to restricting the federated database instance to only read from the analytics node, the database manager can also place restrictions on the user to only read from that specific federated database instance. Now, not only do you have a connection string for your federated database instance that will never query your primary node, but you can also ensure that your users are assigned the correct roles, and they can\u2019t accidentally connect to your cluster connection string. \n\nBy locking down an analytics node to read-only access, you can protect your most sensitive workloads and improve security while still sharing access to your most valuable data.\n\n## How to lock down a user to access the analytics node\nThe following steps will allow you to set your read-preferences in Atlas Data Federation to use the analytics node:\n\nStep 1: Log into MongoDB Atlas.\n\nStep 2: Select the Data Federation option on the left-hand navigation. \n\nStep 3: Click \u201cset up manually\u201d in the \"create new federated database\" dropdown in the top right corner of the UI.\n\nStep 4: *Repeat this step for each of your data sources.* Select the dataset for your federated database instance from the Data Sources section.\n\n4a. Select your cluster and collection.\n\n* Select your \u201cRead Preference Mode.\u201d Data Federation enables \u2018nearest\u2019 as its default. \n\n4b. Click \u201cCluster Read Preference.\u201d\n\n* Select your \u201cRead Preference Mode.\u201d Data Federation enables \u2018nearest\u2019 as its default. \n* Type in your TagSets. For example:\n * [ { \"name\": \"nodeType\", \"value\": \"ANALYTICS\" } ] ]\n\n![Data Federation UI showing the selection of a data source and an example of setting your read preferences and TagSets\n\n4c. Select \u201cNext.\u201d\n\nStep 5: Map your datasets from the Data Sources pane on the left to the Federated Database Instance pane on the right. \n\nStep 6: Click \u201cSave\u201d to create the federated database instance. \n\n*To connect to your federated database instance, continue to follow the instructions outlined in our documentation.\n\n**Note: If you have many databases and collections in your underlying cluster, you can use our \u201cwildcard syntax\u201d along with read preference to easily expose all your databases and collections from your cluster without enumerating each one. This can be set after you\u2019ve configured read preference by going to the JSON editor view.**\n\n```\n\"databases\" : \n {\n \"name\" : \"*\",\n \"collections\" : [\n {\n \"name\" : \"*\",\n \"dataSources\" : [\n {\n \"storeName\" : \"\"\n }\n ]\n }\n ]\n }\n]\n```\n\n## How to manage database access in Atlas and assign roles to users\nYou must create a database user to access your deployment. For security purposes, Atlas requires clients to authenticate as MongoDB database users to access federated database instances. To add a database user to your cluster, perform the following steps:\n\nStep 1: In the Security section of the left navigation, click \u201cDatabase Access.\u201d \n\n1a. Make sure it shows the \u201cDatabase Users\u201d tab display.\n\n1b. Click \u201c+ Add New Database User.\u201d\n\n![add a new database user to assign roles\n\nStep 2: Select \u201cPassword\u201d and enter user information. \n\nStep 3: Assign user privileges, such as read/write access.\n\n3a. 
Select a built-in role from the \u201cBuilt-in Role\u201d dropdown menu. You can select one built-in role per database user within the Atlas UI. If you delete the default option, you can click \u201cAdd Built-in Role\u201d to select a new built-in role.\n\n3b. If you have any custom roles defined, you can expand the \u201cCustom Roles\u201d section and select one or more roles from the \u201cCustom Roles\u201d dropdown menu. Click \u201cAdd Custom Role\u201d to add more custom roles. You can also click the \u201cCustom Roles\u201d link to see the custom roles for your project.\n\n3c. Expand the \u201cSpecific Privileges\u201d section and select one or more privileges from the \u201cSpecific Privileges\u201d dropdown menu. Click \u201cAdd Specific Privilege\u201d to add more privileges. This assigns the user specific privileges on individual databases and collections.\n\nStep 4: *Optional*: Specify the resources in the project that the user can access.\n\n*By default, database users can access all the clusters and federated database instances in the project. You can restrict database users to have access to specific clusters and federated database instances by doing the following:\n\n* Toggle \u201cRestrict Access to Specific Clusters/Federated Database Instances\u201d to \u201cON.\u201d\n* Select the clusters and federated database instances to grant the user access to from the \u201cGrant Access To\u201d list.\n\nStep 5: Optional: Save as a temporary user.\n\nStep 6: Click \u201cAdd User.\u201d \n\nBy following these steps, you can control access management using the analytics node with Atlas Data Federation. This can be a useful way to ensure that only authorized users have access to the analytics node, and that the data on the node is protected.\n\nOverall, setting read preferences and using analytics nodes can help you to better manage access to your data and improve the performance and scalability of your application.\n\nTo learn more about Atlas Data Federation and whether it would be the right solution for you, check out our documentation and tutorials.\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to use Atlas Data Federation to control access to your analytics node and customize read preferences and tag sets for a more seamless and secure data experience. ", "contentType": "Article"}, "title": "Using Atlas Data Federation to Control Access to Your Analytics Node", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-swift-query-api", "action": "created", "body": "# Goodbye NSPredicate, hello Realm Swift Query API\n\n## Introduction\n\nI'm not a fan of writing code using pseudo-English text strings. It's a major context switch when you've been writing \"native\" code. Compilers don't detect errors in the strings, whether syntax errors or mismatched types, leaving you to learn of your mistakes when your app crashes.\n\nI spent more than seven years working at MySQL and Oracle, and still wasn't comfortable writing anything but the simplest of SQL queries. I left to join MongoDB because I knew that the object/document model was the way that developers should work with their data. I also knew that idiomatic queries for each programming language were the way to go.\n\nThat's why I was really excited when MongoDB acquired Realm\u2014a leading mobile **object** database. 
You work with Realm objects in your native language (in this case, Swift) to manipulate your data.\n\nHowever, there was one area that felt odd in Realm's Swift SDK. You had to use `NSPredicate` when searching for Realm objects that match your criteria. `NSPredicate`s are strings with variable substitution. \ud83e\udd26\u200d\u2642\ufe0f\n\n`NSPredicate`s are used when searching for data in Apple's Core Data database, and so it was a reasonable design decision. It meant that iOS developers could reuse the skills they'd developed while working with Core Data.\n\nBut, I hate writing code as strings.\n\nThe good news is that the Realm SDK for Swift has added the option to use type-safe queries through the Realm Swift Query API. \ud83e\udd73.\n\nYou now have the option whether to filter using `NSPredicate`s:\n\n```swift\nlet predicate = NSPredicate(format: \"isSoft == %@\", NSNumber(value: wantSoft)\nlet decisions = unfilteredDecisions.filter(predicate)\n```\n\nor with the new Realm Swift Query API:\n\n```swift\nlet decisions = unfilteredDecisions.where { $0.isSoft == wantSoft }\n```\n\nIn this article, I'm going to show you some examples of how to use the Realm Swift Query API. I'll also show you an example where wrangling with `NSPredicate` strings has frustrated me.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n\n## Prerequisites\n\n- Realm-Cocoa 10.19.0+\n\n## Using The Realm Swift Query API\n\nI have a number of existing Realm iOS apps using `NSPredicate`s. When I learnt of the new query API, the first thing I wanted to do was try to replace some of \"legacy\" queries. I'll start by describing that experience, and then show what other type-safe queries are possible.\n\n### Replacing an NSPredicate\n\nI'll start with the example I gave in the introduction (and how the `NSPredicate` version had previously frustrated me).\n\nI have an app to train you on what decisions to make in Black Jack (based on the cards you've been dealt and the card that the dealer is showing). There are three different decision matrices based on the cards you've been dealt:\n\n- Whether you have the option to split your hand (you've been dealt two cards with the same value)\n- Your hand is \"soft\" (you've been dealt an ace, which can take the value of either one or eleven)\n- Any other hand\n\nAll of the decision-data for the app is held in `Decisions` objects:\n\n```swift\nclass Decisions: Object, ObjectKeyIdentifiable {\n @Persisted var decisions = List()\n @Persisted var isSoft = false\n @Persisted var isSplit = false\n ...\n}\n```\n\n`SoftDecisionView` needs to find the `Decisions` object where `isSoft` is set to `true`. That requires a simple `NSPredicate`:\n\n```swift\nstruct SoftDecisionView: View {\n @ObservedResults(Decisions.self, filter: NSPredicate(format: \"isSoft == YES\")) var decisions\n ...\n}\n```\n\nBut, what if I'd mistyped the attribute name? There's no Xcode auto-complete to help when writing code within a string, and this code builds with no errors or warnings:\n\n```swift\nstruct SoftDecisionView: View {\n @ObservedResults(Decisions.self, filter: NSPredicate(format: \"issoft == YES\")) var decisions\n ...\n}\n```\n\nWhen I run the code, it works initially. 
But, when I'm dealt a soft hand, I get this runtime crash:\n\n```\nTerminating app due to uncaught exception 'Invalid property name', reason: 'Property 'issoft' not found in object of type 'Decisions''\n```\n\nRather than having a dedicated view for each of the three types of hand, I want to experiment with having a single view to handle all three.\n\nSwiftUI doesn't allow me to use variables (or even named constants) as part of the filter criteria for `@ObservedResults`. This is because the `struct` hasn't been initialized until after the `@ObservedResults` is defined. To live within SwitfUIs constraints, the filtering is moved into the view's body:\n\n```swift\nstruct SoftDecisionView: View {\n @ObservedResults(Decisions.self) var unfilteredDecisions\n let isSoft = true\n\n var body: some View {\n let predicate = NSPredicate(format: \"isSoft == %@\", isSoft)\n let decisions = unfilteredDecisions.filter(predicate)\n ...\n}\n```\n\nAgain, this builds, but the app crashes as soon as I'm dealt a soft hand. This time, the error is much more cryptic:\n\n```\nThread 1: EXC_BAD_ACCESS (code=1, address=0x1)\n```\n\nIt turns out that, you need to convert the boolean value to an `NSNumber` before substituting it into the `NSPredicate` string:\n\n```swift\nstruct SoftDecisionView: View {\n @ObservedResults(Decisions.self) var unfilteredDecisions\n\n let isSoft = true\n\n var body: some View {\n let predicate = NSPredicate(format: \"isSoft == %@\", NSNumber(value: isSoft))\n let decisions = unfilteredDecisions.filter(predicate)\n ...\n}\n```\n\nWho knew? OK, StackOverflow did, but it took me quite a while to find the solution.\n\nHopefully, this gives you a feeling for why I don't like writing strings in place of code.\n\nThis is the same code using the new (type-safe) Realm Swift Query API:\n\n```swift\nstruct SoftDecisionView: View {\n @ObservedResults(Decisions.self) var unfilteredDecisions\n let isSoft = true\n\n var body: some View {\n let decisions = unfilteredDecisions.where { $0.isSoft == isSoft }\n ...\n}\n```\n\nThe code's simpler, and (even better) Xcode won't let me use the wrong field name or type\u2014giving me this error before I even try running the code:\n\n### Experimenting With Other Sample Queries\n\nIn my RCurrency app, I was able to replace this `NSPredicate`-based code:\n\n```swift\nstruct CurrencyRowContainerView: View {\n @ObservedResults(Rate.self) var rates\n let baseSymbol: String\n let symbol: String\n\n var rate: Rate? {\n NSPredicate(format: \"query.from = %@ AND query.to = %@\", baseSymbol, symbol)).first\n }\n ...\n}\n```\n\nWith this:\n\n```swift\nstruct CurrencyRowContainerView: View {\n @ObservedResults(Rate.self) var rates\n let baseSymbol: String\n let symbol: String\n\n var rate: Rate? 
{\n rates.where { $0.query.from == baseSymbol && $0.query.to == symbol }.first\n }\n ...\n}\n```\n\nAgain, I find this more Swift-like, and bugs will get caught as I type/build rather than when the app crashes.\n\nI'll use this simple `Task` `Object` to show a few more example queries:\n\n```swift\nclass Task: Object, ObjectKeyIdentifiable {\n @Persisted var name = \"\"\n @Persisted var isComplete = false\n @Persisted var assignee: String?\n @Persisted var priority = 0\n @Persisted var progressMinutes = 0\n}\n```\n\nAll in-progress tasks assigned to name:\n\n```swift\nlet myStartedTasks = realm.objects(Task.self).where {\n ($0.progressMinutes > 0) && ($0.assignee == name)\n}\n```\n\nAll tasks where the `priority` is higher than `minPriority`:\n\n```swift\nlet highPriorityTasks = realm.objects(Task.self).where {\n $0.priority >= minPriority\n}\n```\n\nAll tasks that have a `priority` that's an integer between `-1` and `minPriority`:\n\n```swift\nlet lowPriorityTasks = realm.objects(Task.self).where {\n $0.priority.contains(-1...minPriority)\n}\n```\n\nAll tasks where the `assignee` name string includes `namePart`:\n\n```swift\nlet tasksForName = realm.objects(Task.self).where {\n $0.assignee.contains(namePart)\n}\n```\n\n### Filtering on Sub-Objects\n\nYou may need to filter your Realm objects on values within their sub-objects. Those sub-object may be `EmbeddedObject`s or part of a `List`.\n\nI'll use the `Project` class to illustrate filtering on the attributes of sub-documents:\n\n```swift\nclass Project: Object, ObjectKeyIdentifiable {\n @Persisted var name = \"\"\n @Persisted var tasks: List\n}\n```\n\nAll projects that include a task that's in-progress, and is assigned to a given user:\n\n```swift\nlet myActiveProjects = realm.objects(Project.self).where {\n ($0.tasks.progressMinutes >= 1) && ($0.tasks.assignee == name)\n}\n```\n\n### Including the Query When Creating the Original Results (SwiftUI)\n\nAt the time of writing, this feature wasn't released, but it can be tested using this PR.\n\nYou can include the where modifier directly in your `@ObservedResults` call. That avoids the need to refine your results inside your view's body:\n\n```swift\n@ObservedResults(Decisions.self, where: { $0.isSoft == true }) var decisions\n```\n\nUnfortunately, SwiftUI rules still mean that you can't use variables or named constants in your `where` block for `@ObservedResults`.\n\n## Conclusion\n\nRealm type-safe queries provide a simple, idiomatic way to filter results in Swift. If you have a bug in your query, it should be caught by Xcode rather than at run-time.\n\nYou can find more information in the docs. If you want to see hundreds of examples, and how they map to equivalent `NSPredicate` queries, then take a look at the test cases.\n\nFor those that prefer working with `NSPredicate`s, you can continue to do so. 
In fact the Realm Swift Query API runs on top of the `NSPredicate` functionality, so they're not going anywhere soon.\n\nPlease provide feedback and ask any questions in the Realm Community Forum.", "format": "md", "metadata": {"tags": ["Realm", "Swift"], "pageDescription": "New type-safe queries in Realm's Swift SDK", "contentType": "News & Announcements"}, "title": "Goodbye NSPredicate, hello Realm Swift Query API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/connectors/deploying-across-multiple-kubernetes-clusters", "action": "created", "body": "# Deploying MongoDB Across Multiple Kubernetes Clusters With MongoDBMulti\n\nThis article is part of a three-parts series on deploying MongoDB across multiple Kubernetes clusters using the operators.\n\n- Deploying the MongoDB Enterprise Kubernetes Operator on Google Cloud\n\n- Mastering MongoDB Ops Manager\n\n- Deploying MongoDB Across Multiple Kubernetes Clusters With MongoDBMulti\n\nWith the latest version of the MongoDB Enterprise Kubernetes Operator, you can deploy MongoDB resources across multiple Kubernetes clusters! By running your MongoDB replica set across different clusters, you can ensure that your deployment remains available even in the event of a failure or outage in one of them. The MongoDB Enterprise Kubernetes Operator's Custom Resource Definition (CRD), MongoDBMulti, makes it easy to run MongoDB replica sets across different Kubernetes environments and provides a declarative approach to deploying MongoDB, allowing you to specify the desired state of your deployment and letting the operator handle the details of achieving that state.\n\n> \u26a0\ufe0f Support for multi-Kubernetes-cluster deployments of MongoDB is a preview feature and not yet ready for Production use. The content of this article is meant to provide you with a way to experiment with this upcoming feature, but should not be used in production as breaking changes may still occur. Support for this feature during preview is direct with the engineering team and on a best-efforts basis, so please let us know if trying this out at kubernetes-product@mongodb.com. Also feel free to get in touch with any questions, or if this is something that may be of interest once fully released.\n\n## Overview of MongoDBMulti CRD\n\nDeveloped by MongoDB, MongoDBMulti Custom Resource allows for the customization of resilience levels based on the needs of the enterprise application.\n\n- Single region (Multi A-Z) consists of one or more Kubernetes clusters where each cluster has nodes deployed in different availability zones in the same region. This type of deployment protects MongoDB instances backing your enterprise applications against zone and Kubernetes cluster failures.\n\n- Multi Region consists of one or more Kubernetes clusters where you deploy each cluster in a different region, and within each region, deploy cluster nodes in different availability zones. This gives your database resilience against the loss of a Kubernetes cluster, a zone, or an entire cloud region.\n\nBy leveraging the native capabilities of Kubernetes, the MongoDB Enterprise Kubernetes Operator performs the following tasks to deploy and operate a multi-cluster MongoDB replica set:\n\n- Creates the necessary resources, such as Configmaps, secrets, service objects, and StatefulSet objects, in each member cluster. 
These resources are in line with the number of replica set members in the MongoDB cluster, ensuring that the cluster is properly configured and able to function.\n\n- Identifies the clusters where the MongoDB replica set should be deployed using the corresponding MongoDBMulti Custom Resource spec. It then deploys the replica set on the identified clusters.\n\n- Watches for the creation of the MongoDBMulti Custom Resource spec in the central cluster.\n\n- Uses a mounted kubeconfig file to communicate with member clusters. This allows the operator to access the necessary information and resources on the member clusters in order to properly manage and configure the MongoDB cluster.\n\n- Watches for events related to the CentralCluster and MemberCluster in order to confirm that the multi-Kubernetes-cluster deployment is in the desired state.\n\nYou should start by constructing a central cluster. This central cluster will host the Kubernetes Operator, MongoDBMulti Custom Resource spec, and act as the control plane for the multi-cluster deployment. If you deploy Ops Manager with the Kubernetes Operator, the central cluster may also host Ops Manager.\n\nYou will also need a service mesh. I will be using Istio, but any service mesh that provides a fully qualified domain name resolution between pods across clusters should work.\n\nCommunication between replica set members happens via the service mesh, which means that your MongoDB replica set doesn't need the central cluster to function. Keep in mind that if the central cluster goes down, you won't be able to use the Kubernetes Operator to modify your deployment until you regain access to this cluster.\u00a0\u00a0\n\n## Using the MongoDBMulti CRD\n\nAlright, let's get started using the operator and build something! For this tutorial, we will need the following tools:\u00a0\n\n- gcloud\u00a0\n\n- gke-cloud-auth-plugin\n\n- Go v1.17 or later\n\n- Helm\n\n- kubectl\n\n- kubectx\n\n- Git.\n\nWe need to set up a master Kubernetes cluster to host the MongoDB Enterprise Multi-Cluster Kubernetes Operator and the Ops Manager. You will need to create a GKE Kubernetes cluster by following the instructions in Part 1 of this series. Then, we should install the MongoDB Multi-Cluster Kubernetes Operator\u00a0 in the `mongodb` namespace, along with the necessary CRDs. This will allow us to utilize the operator to effectively manage and operate our MongoDB multi cluster replica set. For instructions on how to do this, please refer to the relevant section of Part 1. Additionally, we will need to install the Ops Manager, as outlined in Part 2 of this series.\n\n### Creating the clusters\n\nAfter master cluster creation and configuration, we need three additional GKE clusters, distributed across three different regions: `us-west2`, `us-central1`, and `us-east1`. Those clusters will host MongoDB replica set members. 
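A small note before running the snippets that follow: they reference a few shell variables (`K8S_VERSION` here, plus `HELM_CHART_VERSION` and `NAMESPACE` later) that are assumed to have been set while following the earlier parts of this series. If you are starting from a fresh shell, export something along these lines first; the exact values are only examples and should match your own setup:

```bash
# Assumed environment for the commands below (values are examples only).
export K8S_VERSION="1.24"           # GKE version passed to --cluster-version
export HELM_CHART_VERSION="1.16.3"  # Enterprise Operator Helm chart version used in this walkthrough
export NAMESPACE="mongodb"          # namespace hosting the operator and Ops Manager
```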
\n\n```bash\nCLUSTER_NAMES=(mdb-cluster-1 mdb-cluster-2 mdb-cluster-3)\nZONES=(us-west2-a us-central1-a us-east1-b)\n\nfor ((i=0; i<${#CLUSTER_NAMES@]:0:1}; i++)); do\n gcloud container clusters create \"${CLUSTER_NAMES[$i]}\" \\\n --zone \"${ZONES[$i]}\" \\\n --machine-type n2-standard-2 --cluster-version=\"${K8S_VERSION}\" \\\n --disk-type=pd-standard --num-nodes 1\ndone\n```\n\nThe clusters have been created, and we need to obtain the credentials for them.\n\n```bash\nfor ((i=0; i<${#CLUSTER_NAMES[@]:0:1}; i++)); do\n gcloud container clusters get-credentials \"${CLUSTER_NAMES[$i]}\" \\\n --zone \"${ZONES[$i]}\"\ndone\n```\n\nAfter successfully creating the Kubernetes master and MongoDB replica set clusters, installing the Ops Manager and all required software on it, we can check them using `[kubectx`.\n\n```bash\nkubectx\n```\n\nYou should see all your Kubernetes clusters listed here. Make sure that you only have the clusters you just created and remove any other unnecessary clusters using `kubectx -d ` for the next script to work.\n\n```bash\ngke_lustrous-spirit-371620_us-central1-a_mdb-cluster-2\ngke_lustrous-spirit-371620_us-east1-b_mdb-cluster-3\ngke_lustrous-spirit-371620_us-south1-a_master-operator\ngke_lustrous-spirit-371620_us-west2-a_mdb-cluster-1\n```\n\nWe need to create the required variables: `MASTER` for a master Kubernetes cluster, and `MDB_1`, `MDB_2`, and `MDB_3` for clusters which will host MongoDB replica set members. Important note: These variables should contain the full Kubernetes cluster names.\n\n```bash\nKUBECTX_OUTPUT=($(kubectx))\nCLUSTER_NUMBER=0\nfor context in \"${KUBECTX_OUTPUT@]}\"; do\n if [[ $context == *\"master\"* ]]; then\n MASTER=\"$context\"\n else\n CLUSTER_NUMBER=$((CLUSTER_NUMBER+1))\n eval \"MDB_$CLUSTER_NUMBER=$context\"\n fi\ndone\n```\n\nYour clusters are now configured and ready to host the MongoDB Kubernetes Operator.\n\n### Installing Istio\n\nInstall [Istio (I'm using v 1.16.1) in a multi-primary mode on different networks, using the install_istio_separate_network script. To learn more about it, see the Multicluster Istio documentation. I have prepared a code that downloads and updates `install_istio_separate_network.sh` script variables to currently required ones, such as full K8s cluster names and the version of Istio.\n\n```bash\nREPO_URL=\"https://github.com/mongodb/mongodb-enterprise-kubernetes.git\"\nSUBDIR_PATH=\"mongodb-enterprise-kubernetes/tools/multicluster\"\nSCRIPT_NAME=\"install_istio_separate_network.sh\"\nISTIO_VERSION=\"1.16.1\"\ngit clone \"$REPO_URL\"\nfor ((i = 1; i <= ${#CLUSTER_NAMES@]}; i++)); do\n eval mdb=\"\\$MDB_${i}\"\n eval k8s=\"CTX_CLUSTER${i}\"\n sed -i'' -e \"s/export ${k8s}=.*/export CTX_CLUSTER${i}=${mdb}/\" \"$SUBDIR_PATH/$SCRIPT_NAME\"\ndone\nsed -i'' -e \"s/export VERSION=.*/export VERSION=${ISTIO_VERSION}/\" \"$SUBDIR_PATH/$SCRIPT_NAME\"\n```\n\nInstall Istio in a multi-primary mode on different Kubernetes clusters via the following command.\n\n```bash\nyes | \"$SUBDIR_PATH/$SCRIPT_NAME\"\n```\n\nExecute the[ multi-cluster kubeconfig creator tool. By default, the Kubernetes Operator is scoped to the `mongodb` namespace, although it can be installed in a different namespace as well. Navigate to the directory where you cloned the Kubernetes Operator repository in an earlier step, and run the tool. 
Got to Multi-Cluster CLI documentation to lean more about `multi cluster cli`.\n\n```bash\nCLUSTERS=$MDB_1,$MDB_2,$MDB_3\ncd \"$SUBDIR_PATH\"\ngo run main.go setup \\\n -central-cluster=\"${MASTER}\" \\\n -member-clusters=\"${CLUSTERS}\" \\\n -member-cluster-namespace=\"mongodb\" \\\n -central-cluster-namespace=\"mongodb\"\n```\n### Verifying cluster configurations\n\nLet's check the configurations we have made so far. I will switch the context to cluster #2.\n\n```bash\nkubectx $MDB_2\n```\n\nYou should see something like this in your terminal.\n\n```bash\nSwitched to context \"gke_lustrous-spirit-371620_us-central1-a_mdb-cluster-2\"\n```\n\nWe can see `istio-system` and `mongodb` namespaces created by the scripts\n\n```bash\nkubectl get ns\n\nNAME\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 STATUS \u00a0 AGE\ndefault \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Active \u00a0 62m\nistio-system\u00a0 \u00a0 \u00a0 Active \u00a0 7m45s\nkube-node-lease \u00a0 Active \u00a0 62m\nkube-public \u00a0 \u00a0 \u00a0 Active \u00a0 62m\nkube-system \u00a0 \u00a0 \u00a0 Active \u00a0 62m\nmongodb \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Active \u00a0 41s\n```\n\nand the MongoDB Kubernetes operator service account is ready.\n\n```bash\nkubectl -n mongodb get sa\n\ndefault \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 1 \u00a0 \u00a0 \u00a0 \u00a0 55s\nmongodb-enterprise-operator-multi-cluster \u00a0 1 \u00a0 \u00a0 \u00a0 \u00a0 52s\n```\n\nNext, execute the following command on the clusters, specifying the context for each of the member clusters in the deployment. The command adds the label `istio-injection=enabled`' to the`'mongodb` namespace on each member cluster. This label activates Istio's injection webhook, which allows a sidecar to be added to any pods created in this namespace.\n\n```bash\nCLUSTER_ARRAY=($MDB_1 $MDB_2 $MDB_3)\nfor CLUSTER in \"${CLUSTER_ARRAY@]}\"; do \n kubectl label --context=$CLUSTER namespace mongodb istio-injection=enabled\ndone\n```\n\n### Installing the MongoDB multi cluster Kubernetes operator\n\nNow the MongoDB Multi Cluster Kubernetes operator must be installed on the master-operator cluster and be aware of the all Kubernetes clusters which are part of the Multi Cluster. This step will add the multi cluster Kubernetes operator to each of our clusters. \n\nFirst, switch context to the master cluster.\n\n```bash\nkubectx $MASTER\n```\n\nThe `mongodb-operator-multi-cluster` operator needs to be made aware of the newly created Kubernetes clusters by updating the operator config through Helm. 
This procedure was tested with `mongodb-operator-multi-cluster` version `1.16.3`.\n\n```bash\nhelm upgrade --install mongodb-enterprise-operator-multi-cluster mongodb/enterprise-operator \\\n --namespace mongodb \\\n --set namespace=mongodb \\\n --version=\"${HELM_CHART_VERSION}\" \\\n --set operator.name=mongodb-enterprise-operator-multi-cluster \\\n --set \"multiCluster.clusters={${CLUSTERS}}\" \\\n --set operator.createOperatorServiceAccount=false \\\n --set multiCluster.performFailover=false\n```\n\nCheck if the MongoDB Enterprise Operator multi cluster pod on the master cluster is running.\n\n```bash\nkubectl -n mongodb get pods\n```\n\n```bash\nNAME \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 READY STATUS\u00a0 \u00a0 RESTARTS \u00a0 AGE\nmongodb-enterprise-operator-multi-cluster-688d48dfc6\u00a0 \u00a0 1/1\u00a0 Running 0\u00a0 8s\n```\n\nIt's now time to link all those clusters together using the MongoDB Multi CRD. The Kubernetes API has already been extended with a MongoDB-specific object - `mongodbmulti`.\n\n```bash\nkubectl -n mongodb get crd | grep multi\n```\n\n```bash\nmongodbmulti.mongodb.com\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\n```\n\nYou should also review after the installation logs and ensure that there are no issues or errors.\n\n```bash\nPOD=$(kubectl -n mongodb get po|grep operator|awk '{ print $1 }')\nkubectl -n mongodb logs -f po/$POD\n```\n\nWe are almost ready to create a multi cluster MongoDB Kubernetes replica set! We need to configure the required service accounts for each member cluster.\n\n```bash\nfor CLUSTER in \"${CLUSTER_ARRAY[@]}\"; do\n helm template --show-only templates/database-roles.yaml mongodb/enterprise-operator --namespace \"mongodb\" | kubectl apply -f - --context=${CLUSTER} --namespace mongodb; \ndone\n```\n\nAlso, let's generate Ops Manager API keys and add our IP addresses to the Ops Manager access list. Get the Ops Manager (created as described in [Part 2) URL. Make sure you switch the context to master.\u00a0\n\n```bash\nkubectx $MASTER\nURL=http://$(kubectl -n \"${NAMESPACE}\" get svc ops-manager-svc-ext -o jsonpath='{.status.loadBalancer.ingress0].ip}:{.spec.ports[0].port}')\necho $URL\n```\nLog in to Ops Manager, and generate public and private API keys. When you create API keys, don't forget to add your current IP address to API Access List.\n\nTo do so, log in to the Ops Manager and go to `ops-manager-db` organization.\n\n![Ops Manager provides a organizations and projects hierarchy to help you manage your Ops Manager deployments. In the organizations and projects hierarchy, an organization can contain many projects\n\nClick `Access Manager` on the left-hand side, and choose Organization Access then choose `Create API KEY`\u00a0 in the top right corner.\n\nThe key must have a name (I use `mongodb-blog`) and permissions must be set to `Organization Owner` .\n\nWhen you click Next, you will see your `Public Key`and `Private Key`. Copy those values and save them --- you will not be able to see the private key again. 
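The key pair ends up in a Kubernetes secret (the `kubectl apply` shown a little further down). As a hedged reference only, an equivalent way to create that secret from the command line is sketched here; the secret name and namespace are assumptions, and the `publicKey`/`privateKey` field names follow the snippet below:

```bash
# Hedged sketch: store the Ops Manager API key pair in a Kubernetes secret.
# Replace the placeholders with the public/private keys generated above, and
# adjust the secret name/namespace to match your own deployment.
kubectl create secret generic organization-secret \
  --namespace mongodb \
  --from-literal="publicKey=<your-public-key>" \
  --from-literal="privateKey=<your-private-key>"
```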
Also, make sure you added your current IP address to the API access list.\n\nGet the public and private keys generated by the API key creator and paste them into the Kubernetes secret.\n\n```bash\nkubectl apply -f - <\n privateKey: \nEOF\n```\n\nYou also need an \u00a0`Organization ID`. You can see the organization ID by clicking on the gear icon in the top left corner.\n\nCopy the `Organization ID` and paste to the Kubernetes config map below.\n\n```bash\nkubectl apply -f - <\nEOF\n```\n\nThe Ops Manager instance has been configured, and you have everything needed to add the MongoDBMultiCRD to your cluster.\n\n### Using the MongoDBMultiCRD\n\nFinally, we can create a MongoDB replica set that is distributed across three Kubernetes clusters in different regions. I have updated the Kubernetes manifest with the full names of the Kubernetes clusters. Let's apply it now!\n\n```bash\nMDB_VERSION=6.0.2-ent\nkubectl apply -f - <", "format": "md", "metadata": {"tags": ["Connectors", "Kubernetes"], "pageDescription": "Learn how to deploy MongoDB across multiple Kubernetes clusters using the operator and the MongoDBMulti CRD.", "contentType": "Tutorial"}, "title": "Deploying MongoDB Across Multiple Kubernetes Clusters With MongoDBMulti", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-meetup-jwt-authentication", "action": "created", "body": "# Easy Realm JWT Authentication with CosyncJWT\n\nDidn't get a chance to attend the Easy Realm JWT Authentication with CosyncJWT Meetup? Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up.\n\n:youtube]{vid=k5ZcrOW-leY}\n\nIn this meetup, Richard Krueger, CEO Cosync, will focus on the benefits of JWT authentication and how to easily implement CosyncJWT within a Realm application. CosyncJWT is a JWT Authentication service specifically designed for MongoDB Realm application. It supports RSA public/private key third party email authentication and a number of features for onboard users to a Realm application. These features include signup and invite email confirmation, two-factor verification through the Google authenticator and SMS through Twilio, and configurable meta-data through the JWT standard. CosyncJWT offers both a cloud implementation where Cosync hosts the application/user authentication data, and will soon be releasing a self-hosted version of the service, where developers can save their user data to their own MongoDB Atlas cluster. \n\nIn this 60-minute recording, Richard spends about 40 minutes presenting an overview of Cosync, and then dives straight into a live coding demo. After this, we have about 20 minutes of live Q&A with our Community. For those of you who prefer to read, below we have a full transcript of the meetup too. As this is verbatim, please excuse any typos or punctuation errors!\n\nThroughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.\n\nTo learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our [community forums. Come to learn. Stay to connect.\n\n### Transcript \n\nShane:\nSo, you're very, very welcome. 
We have a great guest here speaker today, Richard Krueger's joined us, which is brilliant to have. But just before Richard get started into the main event, I just wanted to do introductions and a bit of housekeeping and a bit of information about our upcoming events too. My name is Shane McAllister. I look after developer advocacy for Realm, for MongoDB. And we have been doing these meetups, I suppose, steadily since the beginning of this year, this is our fifth meetup and we're delighted that you can all attend. We're delighted to get an audience on board our platform. And as we know in COVID, our events and conferences are few and far between and everything has moved online. And while that is still the case, this is going to be a main channel for our developer community that we're trying to build up here in Realm at MongoDB.\n\nWe are going to do these regularly. We are featuring talkers and speakers from both the Realm team, our SDK leads, our advocacy team, number of them who are joining us here today as well too, our users and also our partners. And that's where Richard comes in as well too. So I do want to share with you a couple of future meetups that we have coming as well to show you what we have in store. We have a lot coming on the horizon very, very soon. So just next week we have Klaus talking about Realm Kotlin Multiplatform, followed a week or so later by Jason who's done these meetups before. Jason is our lead for our Coco team, our Swift team, and he's on June 2nd. He's talking about SwiftUI testing and Realm with projections. And then June 10th, a week later again, we have Kr\u00e6n, who's talking about Realm JS for react native applications.\n\nBut that's not the end. June 17th, we have Igor from Amazon Web Services talking about building a serverless event driven application with MongoDB in Realm. And that will also be done with Andrew Morgan who's one of our developer advocates. We've built, and you can see that on our developer hub, we've built a very, very neat application integrating with Slack. And then Jason, a glutton for punishment is back at the end of June and joining us again for a key path filtering and auto open. We really are pushing forward with Swift and SwiftUI with Realm. And we see great uptake within our community. On top of all of that in July is mongodb.live. This is our key MongoDB event. It's on July 13th and 14th, fully online. And we do hope that if you're not registered already, you will sign up, just search for mongodb.live, sign up and register. It's free. And over the two days, we will have a number of talks, a number of sessions, a number of live coding sessions, a number tutorials and an interactive elements as well too. So, it's where we're announcing our new products, our roadmap for the year, and engage in across everything MongoDB, including Realm. We have a number of Realm's specific sessions there as well too. So, just a little bit of housekeeping. We're using this bevy platform, for those of you familiar with Zoom, and who've been here before to meet ups, you're very familiar. We have the chat. Thank you so much on the right-hand side, we have the chats. Thank you for joining there, letting us know where you're all from. We've got people tuning in from India, Sweden, Spain, Germany. So that's brilliant. 
It's great to see a global audience and I hope this time zone suits all of you.\n\nWe're going to take probably about, I think roughly, maybe 40 minutes for both the presentation and Richard's brave enough to do some live coding as well too. So we very much look forward to that. We will be having a Q&A at the end. So, by all means, please ask any questions in the chat during Richard's presentation. We have some people, Kurt and others here, and who'll be able to answer some questions on Cosync. We also have some of our advocates, Diego and Mohit who joined in and answer any questions that you have on Realm as well too. So, we can have the chat in the sidebar. But what happens in this, what happened before at other meetups is that if you have some questions at the end and you're very comfortable, we can open up your mic and your video and allow you to join in in this meetup.\n\nIt is a meetup after all, and the more the merrier. So, if you're comfortable, let me know, make a note or a DM in the chats, and you can ask your question directly to Richard or myself at the end as well too. The other thing then really with regard to the housekeeping is, do get connected. This is our meetup, this is our forums. This is our channels. And that we're on as well too. So, developer.mongodb.com is our forums and our developer hub. We're creating articles there weekly and very in-depth tutorials, demos, links to repos, et cetera. That's where our advocates hang out and create content there. And around global community, you're obviously familiar with that because you've ended up here, right? But do spread the word. We're trying to get more and more people joining that community.\n\nThe reason being is that you will be first to know about the future events that we're hosting in our Realm global community if you're signed up and a member there. As soon as we add them, you'll automatically get an email, simple button inside the email to RSVP and to join future events as well too. And as always, we're really active on Twitter. We really like to engage with our mobile community there on Twitter. So, please follow us, DM us and get in touch there as well too. And if you do, and especially for this event now, I'm hoping that you will ... We have some prizes, you can win some swag.\n\nIt's not for everybody, but please post comments and your thoughts during the presentation or later on today, and we'll pick somebody at random and we send them a bunch of nice swag, as you can see, happily models there by our Realm SDK engineers, and indeed by Richard and myself as well too. So, I won't keep you much longer, essentially, we should get started now. So I would like to introduce Richard Krueger who's the CEO of Cosync. I'm going to stop sharing my screen. Richard, you can swap over to your screen. I'll still be here. I'll be moderating the chat. I'm going to jump back in at the end as well too. So, Richard, really looking forward to today. Thank you so much. We're really happy to have you here.\n\nRichard:\nSounds good. Okay. I'm Richard Krueger, I'm the CEO of Cosync, and I'm going to be presenting a JWT authentication system, which we've built and as we're adding more features to it as we speak. So let me go ahead and share my screen here. And I'm going to share the screen right. Okay. Do you guys see my screen?\nShane:\nWe see double of your screen at the moment there.\n\nRichard:\nOh, okay. Let me take this away. 
Okay, there you go.\n\nShane:\nWe can see that, if you make that full screen, we should be good and happier. I'd say, are you going to move between windows because you're doing-\n\nRichard:\nYeah, I will. There we go. Let me just ... I could make this full screen right now. I might toggle between full screen and non-full screen. So, what is a little bit about myself, I've been a Realm programmer for now almost six years. I was a very early adopter of the very first object database which I used for ... I've been doing kind of cloud synchronization programs. So my previous employer, Needley we used that extensively, that was before there was even a cloud version of Realm. So, in order to build collaborative apps, one, back in the day would have to use something like Parse and Realm or Firebase and Realm. And it was kind of hybrid systems. And then about 2017, Realm came out with its own cloud version, the Realm Cloud and I was a very early adopter and enthusiast for that system.\n\nI was so enthusiastic. I started a company that would build some add on tools for it. The way I see Realm is as kind of a seminole technology for doing full collaborative computing, I don't think there's any technology out there. The closest would be Firebase but that is still very server centric. What I love about Realm is that it kind of grew out of the client first and then kind of synchronizes client-side database with a mirrored copy on a server automatically. So, what Realm gives you is kind of an offline first capability and that's just absolutely huge. So you could be using your local app and you could be in a non-synced environment or non-connected environment. Then later when you connect everything automatically synchronizes to a server, copy all the updates.\n\nAnd I think it scales well. And I think this is really seminal to develop collaborative computing apps. So one of the things we decided to do was, and this was about a year ago was build an authentication system. We first did it on the old Realm cloud system. And then in June of last year, Mongo, actually two years ago, Mongo acquired Realm and then merged the Atlas infrastructure with the Realm front end. And that new product was released last June and called MongoDB Realm. And which I actually think is a major improvement even on Realm sync, which I was very happy with, but I think the Apple infrastructures is significantly more featured than the Realm cloud infrastructure was. And they did a number of additional support capabilities on the authentication side.\n\nSo, what we did is we retargeted, co-synced JWT as an authentication system for the new MongoDB Realm. So, what is JWT? That stands for Java Script Web Tokens. So it's essentially a mechanism by which a third party can authenticate users for an app and verify their identity. And it's secure because the technology that's used, that underlies JWT's public private key encryption, it's the same technology that's behind Bitcoin. So you have a private key that encrypts the token or signs it, and then a public key that can verify the signature that can verify that a trusted party actually authenticated the user. And so why would you want to separate these two? Well, because very often you may want to do additional processing on your users. And a lot of the authentication systems that are right now with MongoDB Realm, you have anonymous authentication, or you have email password, but you may want to get more sophisticated than that.\n\nYou may want to attach metadata. 
You may want to have a single user that authenticates the same way across multiple apps. And so it was to kind of deal with these more complex issues in a MongoDB Realm environment that we developed this product. Currently, this product is a SaaS system. So, we actually host the authentication server, but the summer we're going to release a self hosted version. So you, the developer can host your own users on your own MongoDB Atlas cluster, and you run a NodeJS module called CosyncJWT server, and you will basically provide your own rest API to your own application. The only thing Cosync portal will do will be to manage that for you to administrate it.\n\nSo let me move on to the next slide here. Realm allows you to build better apps faster. So the big thing about Realm is that it works in an offline mode first. And that to me is absolutely huge because if anybody has ever developed synchronized software, often you require people to be connected or just doesn't work at all. Systems like Slack come to mind or most chat programs. But with Realm you can work completely offline. And then when you come back online, your local Realm automatically syncs up to your background Realm. So what we're going to do here is kind of show you how easy it is to implement a JWT server for a MongoDB Realm app. And so what I'm going to go ahead and do is we're going to kind of create an app from scratch and we're going to first create the MongoDB Realm app.\n\nAnd so what I'm going to go here, I've already created this Atlas cluster. I'm going to go ahead and create an app called, let's call it CosyncJWT test. And this is I'm inside the MongoDB Realm portal right now. And I'm just going to go ahead and create this app. And then I'm going to set up its sync parameters, all of the MongoDB Realm developers are familiar with this. And so we're going to go to is we'll give it a partition key called partition, and we will go ahead and give it a database called CosyncJWT TestDB. And then we will turn our development mode on. Wait, what happened here?\n\nWhat is the problem there? Okay. Review and deploy. Okay. Let me go ahead and deploy this. Okay. So, now this is a complete Realm app. It's got nothing on it whatsoever. And if I look at its authentication providers, all I have is anonymous login. I don't have JWT set at all. And so what we're going to do is show you how easy it is to configure a JWT token. But the very first thing we need to do is create what I call an API key, and an API key enables a third party program to manipulate programmatically your MongoDB Realm app. And so for that, what we'll do is go into the access manager and for this project, we'll go ahead and create an API key. So let me go ahead and create an API key. And I'm going to call this CosyncJWT test API key, and let's give it some permissions.\n\nI'll be the project owner and let's go ahead and create it. Okay. So that will create both a public key and a private cake. So the very first thing you need to do when you do this is you need to save all of your keys to a file, which your private key, you have to be very careful because the minute somebody has this, go in and programmatically monkey with your stuff. So, save this away securely, not the way I'm doing it now, but write it down or save it to a zip drive. So let me copy the private key here. For the purpose of this demo and let me copy the public key.\n\nOkay. Let me turn that. Not bold. Okay. 
Now the other thing we need is the project ID, and that's very easy to get, you just hit this little menu here and you go to project settings and you'll have your project ID here. So I'm going to, also, I'll need that as well. And lastly, what we need is the Realm app ID. So, let's go back to Realm here and go into the Realm tab there, and you can always get your app ID here. That's so unique, that uniquely identifies your app to Realm and you'll need that both the cursing portal level and at your app level. Okay, so now we've retrieved all of our data there. So what we're going to go ahead and do now is we're going to go into our Cosync portal and we're going to go ahead and create a Cosync app that mirrors this.\n\nSo I'm going to say create new app and I'll say Cosync. And by the way, to get to the Cosync portal, just quick note, to get to the Cosync portal, all you have to do is go to our Cosync website, which is here and then click on sign in, and then you're in your Cosync. I've already signed in. So, you can register yourself with Cosync. So we're going to go ahead and create a new app called Cosync JWT test and I'm going to go ahead and create it here. And close this. And it's initializing there, just takes a minute to create it on our server. Okay. Right. Something's just going wrong here. You go back in here.\n\nShane:\nSuch is the world of live demos!\n\nRichard:\nThat's just the world of live demos. It always goes wrong the very second. Okay, here we go. It's created.\n\nShane:\nThere you go.\n\nRichard:\nYeah. Okay. So, now let me explain here. We have a bunch of tabs and this is basically a development app. We either provide free development apps up to 50 users. And after that they become commercial apps and we charge a dollar for 1,000 users per month. So, if you have an app with 10,000 users, that would cost you $10 per month. And let me go, and then there's Realm tab to initialize your Realm. And we'll go into that in a minute. And then there's a JWT tab that kind of has all of the parameters that regulate JWT. So, one of the things I want to do is talk about metadata and for this demo, we can attach some metadata to the JWT token.\n\nSo the metadata we're going to attach as a first name and a last name, just to show you how that works. So, I'm going to make this a required field. And I'll say we're going to have a first name, this actually gets attached to the user object. So this will be its path, user data dot name dot first. And then this is the field name that gets attached to the user object. And there'll be first name and let's set another field, which is user data dot name dot last. And that will be last name. Okay. And so we have our metadata defined, let's go ahead and save it. There's also some invite metadata. So, if you want to do an invitation, you could attach a coupon to an invitation. So these are various onboarding techniques.\n\nWe support two types of onboarding, which is either invitation or sign up. You could have a system of the invitation only where a user would ... the free masons or something where somebody would have to know you, and then you could only get in if you were invited. Okay. So, now what we're going to go ahead and do is initialize our instance. So that's pretty easy. Let's go take our Realm app ID here, and we paste that in and let's go ahead and initialize our Kosik JWT, our token expiration will be 24 hours. So let's go ahead and initialize this. I'll put in my project ID.\n\nAll right. 
My project ID here, and then I will put in my public key, and I will put in my private key here. Okay. Let's go ahead and do this. Okay. And it's successfully initialized it, and we can kind of see that it did. If we go back over here to authentication, we're going to actually see that now we have CosyncJWT authentication. If we go in, it'll actually have set the signing algorithm to RS256, and it'll actually have set the public key. So the Cosync, I mean, the MongoDB Realm app will hold onto the public key so that it knows that only this provider, which holds onto the private key, has the ability to sign. And it also has defined metadata fields, which are first name, last name, and email. Okay. So, anytime you sign up, those metadata fields will be kind of cemented into your user object.\n\nAnd we also provide APIs to be able to change the metadata at runtime. So if you need to change it, you can. But it's important to realize that this metadata doesn't reside in Realm, it resides with the provider itself. And that's kind of the big difference there. So you could have another database that only had your user data, that was not part of your MongoDB Realm database, and you could mine that database for just your user stuff. So, that's the idea there. So the next step, what we're going to do is go ahead and run this kind of sample app. So, we provide a number of sample apps. If you go to our docs here and you go down to sample application, we provide a GitHub project called Cosync samples, which has samples for both our Cosync storage product, which we're not talking about here today, and our CosyncJWT project.\n\nCosync storage basically maps Amazon S3 assets onto a MongoDB Realm app. So CosyncJWT has different directories. We have a Swift directory, a Kotlin directory, and a ReactNative directory. Today I'm primarily just showing the Swift, but we also have a ReactNative binding as well that works fine with this example. Okay. So what happens is you go ahead and clone this GitHub project here and install it. And then once you've installed it, let me bring it up here, here we go, this is what you would get. We have a sample app called CosyncJWT iOS. Now, that has three packages that it depends on. One is a package called CosyncJWT Swift, which is a wrapper around our REST API that uses NSURL.\n\nAnd then we depend on the Realm packages. And so this little sample app will do nothing but allow you to sign up a user to CosyncJWT and log in. And it'll also do things like two-factor verification. We support both phone two-factor verification, if you have a Twilio account, and the Google two-factor authentication, which is free, and even more secure than a phone. So, that gives you an added level of security, and I'll just show you how easy it is too. So, in order to customize this, you need to set two constants. You need to set your Realm app ID and your app token. So, that's very easy to do. I can go ahead, and let me just copy this Realm app ID, which I copied from the Realm portal.\n\nAnd I'll stick that here. Let me go ahead and get the app token, which itself is a JWT token, because this token enables your client-side app to use the CosyncJWT REST API and identifies your client as belonging to this account. And so if we actually looked at that token, we could go to a utility to view JWTs. I always use jwt.io, and you can paste any JWT token in the world into this little thing.
And you'll see that this app token is in fact itself a JWT token, and it's signed by CosyncJWT, and that will enable your client side to use the REST API.\n\nSo, let's go ahead and paste that in here, and now we're ready to go. So, at this point, if I just run this app, it should connect to the MongoDB Realm instance that we just previously created, and it should be able to connect to the CosyncJWT service for authentication. There are no users, by the way, in the system yet. So, let me go ahead and build and run this app here, and it comes up, [inaudible 00:29:18] an iPhone 8+ simulator. And what we'll do is we'll sign up a user. So if we actually go to the JWT users, you'll see we have no users in our system at all. So, what we're going to go ahead and do is sign up a user. It'll just come up in a second.\n\nShane:\nSimulators are always slow, Richard, especially-\n\nRichard:\nI know.\nShane:\n... when you try to enable them. There you go.\n\nRichard:\nRight. There we go. Okay. So I would log in here. This is just simple SwiftUI. The design is generic Apple stuff. So, this was our signup. Now, if I actually look at the code here, I have a logged-out view, and these are the actual calls here. I would have a sign up where I would scrape the email, the password, and then some metadata. So what I'm going to go ahead and do is put a breakpoint right there, and let's go ahead and sign myself up as richard@cosync.io, give it a password, and let's say Richard Krueger. So, at this point, we're right here. So, if we look at ... Let me just make this a little bit bigger.\n\nShane:\nYeah. If you could a little bit, because some of this obviously adjusts itself by your connection and sometimes-\n\nRichard:\nRight away.\n\nShane:\n... especially in code. Thank you.\n\nRichard:\nYeah. Okay. So if we look at the ... We have an email here, which is, I think we might be able to see it. I'm not sure. Okay, wait. Self.email. So, for some reason it's coming out empty there, but I'm pretty sure it's not empty. It's just the debugger is not showing the right stuff, but that's the call. I would just make a call to CosyncJWT sign up. I pass in an email, I pass in a password, pass in the metadata, and it'll basically come back with it signed in. So, if I just run it here, it came back and there should not be ... there's no error. And it's now going to ask me to verify my code. So, the next step after that will be ... So, at this point I should get an email here. Let's run. So, it's now going to be prompting me for a code. So I just got this email with the code, so let me give it the code. And I'll make another call, a REST call, to verify the code. And this should let me in.\n\nYeah. Which it did, it logged me in. So, that was the call to verify the code. We also have things where you can just click on a link. So, by the way, let me close this. For your signup flow, you can either have code, link, or none. So, you might have an app that doesn't need verification. So then you would just set it to none. If you don't want users to enter a code, you would have them click on a link, and all of these things can be configured. So, the emails that go out, like this particular email, look very generic. But I can customize the HTML of that email with these email templates. So, the email verification, the password reset email, all of these emails can be customized to fit the branding of the client itself.\n\nSo, you wouldn't have the word Cosync in there. Anyways, so that kind of shows you.
So now let me go ahead and log out and I can go ahead and log back in if I wanted to. Let me go ahead and the show you where the log in is. So, this is going to call user manager, which will have a log in here. And that we'll call Realm manage ... Wait a minute, log out, log in this right here. So, let's go put a break point on log in and I'm going to go ahead and say Richard@krueger@cosync.io. I'm going to go ahead and log in here. And I just make a call to CosyncJWT rest. And again, I should be able to just come right back.\n\nAnd there I am. Often, by the way, you'll see this dispatch main async a lot of times when you make Rest calls, you come back on a different thread. The thing to remember, I wrote an article on Medium about this, but the thing to remember about Realm and threads is this, what happens on a thread? It's the Vegas rule. What happens on a thread must stay on a thread. So with Realm does support multithreading very, very well except for the one rule. If you open a Realm on a thread, you have to write it on the same thread and read it from the same thread. If you try and open a Realm on one thread and then try and read it from another thread, you'll cause an exception. So, often what I do a lot is force it back on the main thread.\n\nAnd that's what this dispatch queue main async is. So, this went ahead and there's no error and it should just go ahead and log me in. So, what this is doing here, by the way, let me step into this. You'll see that that's going to go ahead and now issue a Realm log in. So that's an actual Realm call app.login.credentials, and then I pass it the JWT token that was returned to me by CosyncJWT. So by the way, if you don't want to force your user to go through the whole authentication procedure, every time he takes this app out of process, you can go ahead and save that JWT token to your key chain, and then just redo this this way.\n\nSo you could bypass that whole step, but this is a demo app, so I'd put it in there. So this will go ahead and log me in and it should transition, let me see. Yeah, and it did. Okay. So, that kind of shows you that. We also have capabilities for example, if you wanted to change your password, I could. So, I could change my password. Let me give my existing password and then I'll change it to a new password and let me change my password. And it did that. So, that itself is a function called change password.\n\nIt's right here, Cosync change password, is passing your new password, your old password, and that's another Rest call. We also have forgotten password, the same kind of thing. And we have two factor phone verification, which I'm not going to go into just because of time right now, or on two factor Google authentication. So, this was kind of what we're working on. It's a system that you can use today as a SaaS system. I think it's going to get very interesting this summer, once we release the self hosted version, because then, we're very big believers in open source, all of the code that you have here result released under the Apache open source license. And so anything that you guys get as developers you can modify and it's the same way that Realm has recently developed, Andrew Morgan recently developed a great chat app for Realm, and it's all equally under the Apache license.\n\nSo, if you need to implement chat functionality, I highly recommend to go download that app. 
And they show you very easily how to build a chat app using the new Swift Combine nomenclature, which is absolutely phenomenal in terms of ... I mean, in terms of terseness. I actually wrote a chat program recently called Tinychat, a MongoDB Realm app, and it's a cloud-hosted chat app that is no more than 70 lines of code. Just to give you an idea how powerful the MongoDB Realm stuff is, and I'm going to try and get a JWT version of that posted in the next few days. And with that, yes, we probably should take some questions because we're coming up at quarter to the hour here. Shane.\n\nShane:\nExcellent. No, thank you, Richard. Definitely, there's been some questions in the sidebar. Kurt has been answering some of them there, probably no harm to revisit a couple of them. So, Gigan, I hope I'm pronouncing that correctly as well too, was asking about changing the metadata at the beginning, when you were showing first name, last name: can you change that in the future? Can you modify it?\n\nRichard:\nYeah. So, if I want to add to the metadata, what I could do is go ahead and add another field, so let's go ahead and add another field here called user data coupon, and I'll just call this guy coupon. I can go ahead and add that. Now if I add something that's required, that could be a problem if I already have users without a required piece of metadata. So, we may actually have to come up with some migration techniques there. You don't want to delete metadata, but yeah, you could go ahead and add things.\n\nShane:\nAnd are there any limits to how much metadata? I mean, obviously you don't want-\n\nRichard:\nNot really.\n\nShane:\n... fields for users to fill in, but is there any strict limit at all?\n\nRichard:\nI mean, I don't think you want to store image data even if it's base64 encoded. If you were to store an avatar as metadata, I'd store the link to the image somewhere; you might store that avatar on Amazon, that's free, and then you would store the link to it in the metadata. Normally, JWT tokens are pretty sparse; they're supposed to be pretty tiny objects. But the metadata I find is one of the powers of this thing because ... all of this metadata gets rolled into the user objects. So, if you get the Realm user object, you can get access to all the metadata once you log in.\n\nShane:\nI mean, the metadata can reside with the provider. That's obviously really important because, look, we see data breaches, so you can essentially have that metadata elsewhere as well too.\n\nRichard:\nRight.\n\nShane:\nIt's very important for the likes of, say, publications and things like that.\n\nRichard:\nRight. Yeah, exactly. And by the way, this was a big feature MongoDB Realm added, because metadata was not part of the JWT support in the old Realm Cloud. It was actually a woman on the forum, a MongoDB employee, that tuned me into this about a year ago. I think Shakuri is her name. And that was after some discussion on the forums. By the way, these forums are fantastic. You meet people there, you have great discussions. If you have a problem, you can just post it. If I know an issue, I try to answer it. I would say it's the best place to get Realm questions answered, much better than Stack Overflow. So, [inaudible 00:44:20]. Right?\n\nShane:\nI know our community, especially for Realm, is slightly scattered across sites as well too.
Our advocates look at questions on Stack Overflow, on GitHub comments, and in our forums as well too. And I know you're an active member there, which is great. Just another question that came up was on CosyncJWT. You mentioned it was with Swift and ReactNative by way of examples. Have you plans for other languages?\n\nRichard:\nWe have, I don't think we've published it yet, but we have a Kotlin example. I've just got to dig that up. I mean, we'd like to hear more, but I think Swift and Kotlin and React Native are the big ones. And I've noticed what's going on is it seems that people feel compelled to have a native iOS version, just because that's the operating system with the cachet. And then what they do is they'll do an iOS version and then they'll do a ReactNative version to cover desktop and Android. And I haven't bumped into that many people that are pure Android purists; the iOS people tend to be more purist than the Android people. I know...\n\nShane:\n... partly down to Apple's review process with apps as well too, which can be incredibly stringent. And so you want to go by the letter of the law, essentially try and do things as natively as possible. Or as we know, obviously with Google, it's much more open, it's much freer to use whatever frameworks you want. Right?\n\nRichard:\nRight. I would recommend though, if you're an iOS developer, definitely go with SwiftUI for a number ... Apple is putting a huge amount of effort into that. And I have the impression that if you don't go there, you'll be locked out of a lot of features. And then more importantly, Jason Flax, who's a MongoDB employee, has done a phenomenal job on getting these MongoDB Realm Combine primitives working that make it just super easy to develop a SwiftUI app. I mean, it's gotten to the point where one of our developer advocates, Kurt Libby, is telling me that his 12-year-old could use Jason Flax's stuff. Two years ago, to use something like Realm required a master's degree, but it's gone from a master's degree to a twelve-year-old. That's the simplification right now.\n\nShane:\nYeah. We're really impressed with what we've seen in SwiftUI. It's one of the areas we see a lot of innovation, a huge amount of traction, I suppose. Realm, historically, was seen as a leader in the Swift space as well too. Not only did we have Realm compatible with Swift, but we talked about Swift a lot outside of it; we led one of the largest Swift meetup groups in San Francisco at the time. And we see the same happening again with SwiftUI. Some people, look, dyed-in-the-wool developers, are saying, \"Oh, it's not ready for real-time commercial apps,\" but it's 95% there. I think you can build an app wholly with SwiftUI. There are a couple of things that you might want to do using UIKit and other things as well too, and that's all right, but that's going to change quickly. Let's see what's in store at WWDC as well for us coming up.\n\nRichard:\nYeah, exactly.\n\nShane:\nRight. Excellent. As I said at the beginning, we can open up the mic and the cameras to anybody who'd like to come on and ask a question directly of Richard or myself. If you want to do that, please make a comment in the chat. And I can certainly do that; if not, just ask the questions in the chat there as well too. While we're waiting for that, you spoke about Google two factor and also Twilio.
Your example there was with the code with the Google email. How much more work is involved in the two-factor side of things there?\n\nRichard:\nSo, for the two-factor stuff, what you have to do, when you go here, is you can turn on two-factor verification. So, if you select Google you would have to put in your ... Let me just see what my ... You would have to put in the name of your Google app. And then if you did phone ... yes, change it, you'd have to put in your Twilio account SID, your auth token from Twilio, and your Twilio phone number. Now, Twilio, it looks cheap. It's just like a penny a message. It adds up pretty fast.\n\nRichard:\nAt my previous company, Needley, we had a crypto wallet for EOS, and we released it and we had 15,000 users within two weeks. And then our Twilio bill was $4,000 within the week. It just added up very quickly. So it's the kind of thing that ... it doesn't cost much, but if you start machine-gunning out these SMS messages, it can start adding up. But if you're a banking app, you don't really care. You're more interested in providing the security for your ... Anyways, I guess that would answer that question. Are there any other questions here?\n\nShane:\nThere's been a bit of a comment, I think it was a funny one, while you were doing the demo there, Richard, with regards to working on the main thread. And you were saying that there were issues. Now, look, in Realm, we have frozen objects as well too, if you need to pass objects across threads, but they are frozen. So maybe you might want to just clarify your thoughts on that a little bit there. There were one or two comments in the sidebar.\n\nRichard:\nWell, with threading in Realm, this is what I tend to do. One of the problems you bump into is the way threading in SwiftUI works: you have your main thread, which is a little bit like your sergeant major. And then you have all your secondary threads that are more like your privates. And the sergeant major says, \"Go do this, go clean the latrine, or go peel some potatoes.\" And he doesn't really care which private goes off and does it; the system in the background will just assign some private to go clean the latrine. But with Realm, you have to be careful, because if you do an async open on a particular thread, a particular worker thread, then all the subsequent things, all the writes and the reads, should be done on that same thread.\n\nRichard:\nSo, what I found is I go ahead and create a worker thread at the beginning that will kind of handle requests. And then I make sure I can get back to that particular thread. There was an article I wrote on Medium about how to do this, because obviously you don't want to burden your main thread with all your Realm writes. You don't want to do that because it will start eating ... I mean, your main thread should be for SwiftUI and nothing more. And you want to then have a secondary thread that can process that, and having just one secondary thread that's working in the background is sufficient. And then that guy handles the Realm requests, in a sense. That was the strategy that seemed to work best, I found.\n\nRichard:\nBut you could open a Realm on your primary thread. You can also open the same Realm on a background thread. You just have to be careful that when you're doing a read, it's on the Realm that was opened on the thread the read is taking place from. Otherwise, you'll just get an exception. That's what I've found.
But I can't say that I'm a complete expert at it, but in general, with most of my programming, I've always had to eventually revert to kind of multi-threading just to get the performance up because otherwise you'll just be sitting there just waiting and waiting and waiting sometimes.\n\nShane:\nYeah, no, that's good. And I think everybody has a certain few points on this. Sebastian asked the question originally, I know both Mohit and Andrew who are developer advocates here at Realm have chimed in on that as well too. And it is right by best practices and finding the effect on what might happen depending on where you are trying to read and write.\n\nRichard:\nRight. Well, this particular example, I was just forcing it back on the main thread, because I think that's where I had to do the Rest calls from. There was an article I wrote, I think it was about three months ago, Multithreading and MongoDB Realm, because I was messing around with it for some imaging out that there was writing and we needed to get the performance out of it. And so anyways, that was ... But yeah, I hope that answers that question.\n\nShane:\nYeah, yeah. Look, we could probably do a whole session on this as well. That's the reality of it. And maybe we might do that. I'm conscious of everybody's time. It'd be mindful of that. And didn't see anything else pop up in the questions. Andrew's linked your Medium articles there as well too. We've published them on Realm, also writes on Medium. We publish a lot of the content, we create on dev up to Medium, but we do and we are looking for others who are writing about Realm that who may be writing Medium to also contribute. So if you are, please reach out to us on Medium there to add to that or ping us on the forums or at Realm. I look after a lot of our Twitter content on that Realm as we [crosstalk 00:56:12] there. I've noticed during this, that nobody wants T-shirts and face masks, nobody's tweeted yet at Realm. Please do. We'll keep that open towards the end of the day as well. If there's no other questions, I first of all want to say thank you very much, Richard.\n\nRichard:\nWell, thank you for having me.\n\nShane:\nNo, we're delighted. I think this is a thing that we want to do ongoing. Yes, we are running our own meetups with our own advocates and engineers, but we also want, at least perhaps once a month, maybe more if we could fit it in to invite guests along to share their experience of using MongoDB Realm as well too. So, this is the first one of those. As we saw at the beginning, we do have Igor in AWS during the presentation in June as well too. But really appreciate the attendance here today. Do keep an eye. We are very busy. You saw it's pretty much once week for the next four or five weeks, these meetups. Please share amongst your team as well too.\n\nShane:\nAnd above all, join us. As you said, Richard, look, I know you're a contributor in our forums and we do appreciate that. We have a lot of active participants in our forums. We like to, I suppose, let the community answer some of those questions themselves before the engineers and the advocates dive in. It's a slow growth obviously, but we're seeing that happen as well too, so we do appreciate it. So communicate with us via forums, via @realm and go to our dev hub, consume those articles. The articles Richard mentioned about the chat app is on our dev hub by Andrew. If you go look there and select actually the product category, you can select just mobile and see all our mobile articles. 
Since certainly November of last year, I think there's 24, 25 articles there now. So, they are relatively recent and relatively current. So, I don't know, Richard, have you any parting words? I mean, where do people ... you said up to 50 users it's free, right? And all that.\n\nRichard:\nRight. So, up to 50 users it's free. And then after that you would be charged a dollar for 1,000 users per month.\n\nShane:\nThat's good.\n\nRichard:\nWell, what we're going to try and do is push once we get the self hosted version. We're actually going to try and push developers into that option, we don't know the price of it yet, but it will be equally as affordable. And then you basically host your own authentication server on your own servers and you'll save all your users to your own Atlas cluster. Because one of the things we have bumped into is people go, \"Well, I don't really know if I want to have all my user data hosted by you,\" and which is a valid point. It's very sensitive data.\n\nShane:\nSure.\n\nRichard:\nAnd so that was why we wanted to build an option so your government agency, you can't share your user data, then you would host, we would just provide the software for you to do that and nothing more. And so that's where the self hosted version of CosyncJWT would do them.\n\nShane:\nExcellent. It sounds great. And look, you mentioned then your storage framework that you're building at the moment as well too. So hopefully, Richard, we can have you back in a couple of months when that's ready.\n\nRichard:\nGreat. Okay. Sounds good.\n\nShane:\nExcellent.\n\nRichard:\nThanks, Shane.\n\nShane:\nNo problem at all. Well, look, thank you everybody for tuning in. This is recorded. So, it will end up on YouTube as well too and we'll send that link to the group once that's ready. We'll also end up on the developer hub where we've got a transcript of the content that Richard's presented here as well. That'd be perfect. Richard, you have some pieces in your presentation too that we can share in our community as well too later?\n\nRichard:\nYeah, yeah. That's fine. Go ahead and share.\n\nShane:\nExcellent. We'll certainly do that.\n\nRichard:\nYeah.\n\nShane:\nSo, thank you very much everybody for joining, and look forward to seeing you at the future meetups, as I said, five of them over the next six weeks or so. Very, very [inaudible 01:00:34] time for us. And thank you so much, Richard. Really entertaining, really informative and great to see the demo of the live coding.\n\nRichard:\nOkay. Thanks Shane. Excellent one guys.\n\nShane:\nTake care, everybody. Bye.\n\nRichard:\nBye.", "format": "md", "metadata": {"tags": ["Realm"], "pageDescription": "This meetup talk will focus on the benefits of JWT authentication and how to easily implement CosyncJWT within a Realm application.", "contentType": "Article"}, "title": "Easy Realm JWT Authentication with CosyncJWT", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/mongodb-data-parquet", "action": "created", "body": "# How to Get MongoDB Data into Parquet in 10 Seconds or Less\n\nFor those of you not familiar with Parquet, it\u2019s an amazing file format that does a lot of the heavy lifting to ensure blazing fast query performance on data stored in files. 
This is a popular file format in the Data Warehouse and Data Lake space as well as for a variety of machine learning tasks.\n\nOne thing we frequently see users struggle with is getting NoSQL data into Parquet as it is a columnar format. Historically, you would have to write some custom code to get the data out of the database, transform it into an appropriate structure, and then probably utilize a third-party library to write it to Parquet. Fortunately, with MongoDB Atlas Data Federation's $out to cloud object storage - Amazon S3 or Microsoft Azure Blob Storage, you can now convert MongoDB Data into Parquet with little effort.\n\nIn this blog post, I\u2019m going to walk you through the steps necessary to write data from your Atlas Cluster directly to cloud object storage in the Parquet format and then finish up by reviewing some things to keep in mind when using Parquet with NoSQL data. I\u2019m going to use a sample data set that contains taxi ride data from New York City.\n\n## Prerequisites\n\nIn order to follow along with this tutorial yourself, you will need the following:\nAn Atlas cluster with some data in it. (It can be the sample data.)\nAn AWS account with privileges to create IAM Roles and cloud object storage buckets (to give us access to write data to your cloud object storage bucket).\n\n## Create a Federated Database Instance and Connect to cloud object storage\n\nThe first thing you'll need to do is navigate to the \"Data Federation\" tab on the left hand side of your Atlas Dashboard and then click \u201cset up manually\u201d in the \"create new federated database\" dropdown in the top right corner of the UI.\n\nThen, you need to connect your cloud object storage bucket to your Federated Database Instance. This is where we will write the Parquet files. The setup wizard should guide you through this pretty quickly but you will need access to your credentials for AWS. (Be sure to give Atlas Data Federation \u201cRead and Write\u201d access to the bucket so it can write the Parquet files there.)\n\nOnce you\u2019ve connected your cloud object storage bucket, we\u2019re going to create a simple data source to query the data in cloud object storage so we can verify we\u2019ve written the data to cloud object storage at the end of this tutorial. Our new setup tool makes it easier than ever to configure your Federated Database Instance to take advantage of the partitioning of data in cloud object storage. Partitioning allows us to only select the relevant data to process in order to satisfy your query. (I\u2019ve put a sample file in there for this test that will fit how we\u2019re going to partition the data by \\_cab\\_type).\n\n``` bash\nmongoimport --uri mongodb+srv://:@/ --collection --type json --file \n```\n\n## Connect Your Federated Database Instance to an Atlas Cluster\n\nNow we\u2019re going to connect our Atlas cluster, so we can write data from it into the Parquet files. This involves picking the cluster from a list of clusters in your Atlas project and then selecting the databases and collections you\u2019d like to create Data Sources from and dragging them into your Federated Database Instance.\n\n## $out to cloud object storage in Parquet\n\nNow we\u2019re going to connect to our Federated Database Instance using the mongo shell and execute the following command. 
This is going to do quite a few things, so I\u2019m going to explain the important ones.\n- First, you can use the \u2018filename\u2019 field of the $out stage to have your Federated Database Instance partition files by \u201c_cab_type\u201d, so all the green cabs will go in one set of files and all the yellow cabs will go in another.\n- Then in the format, we\u2019re going to specify parquet and determine a maxFileSize and maxRowGroupSize.\n -- maxFileSize is going to determine the maximum size each partition will be.\n -- maxRowGroupSize is going to determine how records are grouped inside of the Parquet file in \u201crow groups\u201d which will impact performance querying your Parquet files, similarly to file size.\n- Lastly, we\u2019re using a special Atlas Data Federation aggregation option, \u201cbackground: true\u201d, which simply tells the Federated Database Instance to keep executing the query even if the client disconnects. (This is handy for long-running queries or environments where your network connection is not stable.)\n\n``` js\ndb.getSiblingDB(\"clusterData\").getCollection(\"trips\").aggregate([\n {\n \"$out\" : {\n \"s3\" : {\n \"bucket\" : \"ben.flast\",\n \"region\" : \"us-east-1\",\n \"filename\" : {\n \"$concat\" : [\n \"taxi-trips/\",\n \"$_cab_type\",\n \"/\"\n ]\n },\n \"format\" : {\n \"name\" : \"parquet\",\n \"maxFileSize\" : \"10GB\",\n \"maxRowGroupSize\" : \"100MB\"\n }\n }\n }\n }\n], {\n background: true\n})\n```\n\n## Blazing Fast Queries on Parquet Files\n\nNow, to give you some idea of the potential performance improvements you can see for object store data, I\u2019ve written three sets of data, each with 10 million documents: one in Parquet, one in uncompressed JSON, and another in compressed JSON. And I ran a count command on each of them with the following results.\n\n*db.trips.count()*\n10,000,000\n\n| Type | Data Size (GB) | Count Command Latency (Seconds) |\n| ---- | -------------- | ------------------------------- |\n| JSON (Uncompressed) | \\~16.1 | 297.182 |\n| JSON (Compressed) | \\~1.1 | 78.070 |\n| Parquet | \\~1.02 | 1.596 |\n\n## In Review\n\nSo, what have we done and what have we learned?\n- We saw how quickly and easily you can create a Federated Database Instance in MongoDB Atlas.\n- We connected an Atlas cluster to our Federated Database Instance.\n- We used our Federated Database Instance to write Atlas cluster data to cloud object storage in Parquet format.\n- We demonstrated how fast and space-efficient Parquet is when compared to JSON.\n\n## A Couple of Things to Remember About Atlas Data Federation\n\n- Parquet is a super fast columnar format that can be read and written with Atlas Data Federation.\n- Atlas Data Federation takes advantage of various pieces of metadata contained in Parquet files, not just the maxRowGroupSize. For instance, if your first stage in an aggregation pipeline was $project: {fieldA: 1, fieldB: 1}, we would only read the two columns from the Parquet file, which results in faster performance and lower costs as we are scanning less data.\n- Atlas Data Federation writes Parquet files flexibly, so if you have polymorphic data, we will create union columns so you can have \u2018Column A - String\u2019 and \u2018Column A - Int\u2019. Atlas Data Federation will read union columns back in as one field but other tools may not handle union types.
So if you\u2019re going to be using these Parquet files with other tools, you should transform your data before the $out stage to ensure no union columns.\n- Atlas Data Federation will also write files with different schemas if it encounters data with varying schemas throughout the aggregation. It can handle different schemas across files in one collection, but other tools may require a consistent schema across files. So if you\u2019re going to be using these Parquet files with other tools, you should do a $project with $convert\u2019s before the $out stage to ensure a consistent schema across generated files.\n- Parquet is a great format for your MongoDB data when you need to use columnar oriented tools like Tableau for visualizations or machine learning frameworks that use data frames. Parquet can be quickly and easily converted into Pandas data frames in Python.", "format": "md", "metadata": {"tags": ["Atlas", "Parquet"], "pageDescription": "Learn how to transform MongoDB data to Parquet with Atlas Data Federation.", "contentType": "Tutorial"}, "title": "How to Get MongoDB Data into Parquet in 10 Seconds or Less", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/semantic-search-mongodb-atlas-vector-search", "action": "created", "body": "# How to Do Semantic Search in MongoDB Using Atlas Vector Search\n\nHave you ever been looking for something but don\u2019t quite have the words? Do you remember some characteristics of a movie but can\u2019t remember the name? Have you ever been trying to get another sweatshirt just like the one you had back in the day but don\u2019t know how to search for it? Are you using large language models, but they only know information up until 2021? Do you want it to get with the times?! Well then, vector search may be just what you\u2019re looking for.\n\n## What is vector search?\n\nVector search is a capability that allows you to do semantic search where you are searching data based on meaning. This technique employs machine learning models, often called encoders, to transform text, audio, images, or other types of data into high-dimensional vectors. These vectors capture the semantic meaning of the data, which can then be searched through to find similar content based on vectors being \u201cnear\u201d one another in a high-dimensional space. This can be a great compliment to traditional keyword-based search techniques but is also seeing an explosion of excitement because of its relevance to augment the capabilities of large language models (LLMs) by providing ground truth outside of what the LLMs \u201cknow.\u201d In search use cases, this allows you to find relevant results even when the exact wording isn't known. This technique can be useful in a variety of contexts, such as natural language processing and recommendation systems.\n\nNote: As you probably already know, MongoDB Atlas has supported full-text search since 2020, allowing you to do rich text search on your MongoDB data. The core difference between vector search and text search is that vector search queries on meaning instead of explicit text and therefore can also search data beyond just text.\n\n## Benefits of vector search\n\n- Semantic understanding: Rather than searching for exact matches, vector search enables semantic searching. 
This means that even if the query words aren't present in the index, but the meanings of the phrases are similar, they will still be considered a match.\n- Scalable: Vector search can be done on large datasets, making it perfect for use cases where you have a lot of data.\n- Flexible: Different types of data, including text but also unstructured data like audio and images, can be semantically searched.\n\n## Benefits of vector search with MongoDB\n\n- Efficiency: By storing the vectors together with the original data, you avoid the need to sync data between your application database and your vector store at both query and write time.\n- Consistency: Storing the vectors with the data ensures that the vectors are always associated with the correct data. This can be important in situations where the vector generation process might change over time. By storing the vectors, you can be sure that you always have the correct vector for a given piece of data.\n- Simplicity: Storing vectors with the data simplifies the overall architecture of your application. You don't need to maintain a separate service or database for the vectors, reducing the complexity and potential points of failure in your system.\n- Scalability: With the power of MongoDB Atlas, vector search on MongoDB scales horizontally and vertically, allowing you to power the most demanding workloads.\n\n> Want to experience Vector Search with MongoDB quick and easy? Check out this automated demo on GitHub as you walk through the tutorial.\n\n## Set up a MongoDB Atlas cluster\n\nNow, let's get into setting up a MongoDB Atlas cluster, which we will use to store our embeddings.\n\n**Step 1: Create an account**\n\nTo create a MongoDB Atlas cluster, first, you need to create a MongoDB Atlas account if you don't already have one. Visit the MongoDB Atlas website and click on \u201cRegister.\u201d\n\n**Step 2: Build a new cluster**\n\nAfter creating an account, you'll be directed to the MongoDB Atlas dashboard. You can create a cluster in the dashboard, or using our public API, CLI, or Terraform provider. To do this in the dashboard, click on \u201cCreate Cluster,\u201d and then choose the shared clusters option. We suggest creating an M0 tier cluster.\n\nIf you need help, check out our tutorial demonstrating the deployment of Atlas using various strategies.\n\n**Step 3: Create your collections**\n\nNow, we\u2019re going to create your collections in the cluster so that we can insert our data. They need to be created now so that you can create an Atlas trigger that will target them.\n\nFor this tutorial, you can create your own collection if you have data to use. If you\u2019d like to use our sample data, you need to first create an empty collection in the cluster so that we can set up the trigger to embed them as they are inserted. 
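\n\nIf you prefer the shell to the UI, a quick way to create the empty database and collection is with mongosh; the names below match the sample data, so swap in your own if you are using different ones:\n\n```javascript\n// In mongosh, connected to your Atlas cluster:\nconst sampleDb = db.getSiblingDB('sample_mflix'); // the database is created lazily\nsampleDb.createCollection('movies');              // the empty collection the trigger will watch\n```\n\n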
Go ahead and create a \u201csample_mflix\u201d database and \u201cmovies\u201d collection now using the UI, if you\u2019d like to use our sample data.\n\n## Setting up an Atlas trigger\n\nWe will create an Atlas trigger to call the OpenAI API whenever a new document is inserted into the cluster.\n\nTo proceed to the next step using OpenAI, you need to have set up an account on OpenAI and created an API key.\n\nIf you don't want to embed all the data in the collection you can use the \"sample_mflix.embedded_movies\" collection for this which already has embeddings generated by Open AI, and just create an index and run Vector Search queries.\n\n**Step 1: Create a trigger**\n\nTo create a trigger, navigate to the \u201cTriggers\u201d section in the MongoDB Atlas dashboard, and click on \u201cAdd Trigger.\u201d\n\n**Step 2: Set up secrets and values for your OpenAI credentials**\n\nGo over to \u201cApp Services\u201d and select your \u201cTriggers\u201d application.\n\nClick \u201cValues.\u201d\n\nYou\u2019ll need your OpenAI API key, which you can create on their website:\n\nCreate a new Value\n\nSelect \u201cSecret\u201d and then paste in your OpenAI API key.\n\nThen, create another value \u2014 this time, a \u201cValue\u201d \u2014 and link it to your secret. This is how you will securely reference this API key in your trigger.\n\nNow, you can go back to the \u201cData Services\u201d tab and into the triggers menu. If the trigger you created earlier does not show up, just add a new trigger. It will be able to utilize the values you set up in App Services earlier.\n\n**Step 3: Configure the trigger**\n\nSelect the \u201cDatabase\u201d type for your trigger. Then, link the source cluster and set the \u201cTrigger Source Details\u201d to be the Database and Collection to watch for changes. For this tutorial, we are using the \u201csample_mflix\u201d database and the \u201cmovies\u201d collection. Set the Operation Type to 'Insert' \u2018Update\u2019 \u2018Replace\u2019 operation. Check the \u201cFull Document\u201d flag and in the Event Type, choose \u201cFunction.\u201d\n\nIn the Function Editor, use the code snippet below, replacing DB Name and Collection Name with the database and collection names you\u2019d like to use, respectively.\n\nThis trigger will see when a new document is created or updated in this collection. 
Once that happens, it will make a call to the OpenAI API to create an embedding of the desired field, and then it will insert that vector embedding into the document with a new field name.\n\n```javascript\nexports = async function(changeEvent) {\n // Get the full document from the change event.\n const doc = changeEvent.fullDocument;\n\n // Define the OpenAI API url and key.\n const url = 'https://api.openai.com/v1/embeddings';\n // Use the name you gave the value of your API key in the \"Values\" utility inside of App Services\n const openai_key = context.values.get(\"openAI_value\");\n try {\n console.log(`Processing document with id: ${doc._id}`);\n\n // Call OpenAI API to get the embeddings.\n let response = await context.http.post({\n url: url,\n headers: {\n 'Authorization': [`Bearer ${openai_key}`],\n 'Content-Type': ['application/json']\n },\n body: JSON.stringify({\n // The field inside your document that contains the data to embed, here it is the \"plot\" field from the sample movie data.\n input: doc.plot,\n model: \"text-embedding-ada-002\"\n })\n });\n\n // Parse the JSON response\n let responseData = EJSON.parse(response.body.text());\n\n // Check the response status.\n if(response.statusCode === 200) {\n console.log(\"Successfully received embedding.\");\n\n const embedding = responseData.data[0].embedding;\n\n // Use the name of your MongoDB Atlas Cluster\n const collection = context.services.get(\"\").db(\"sample_mflix\").collection(\"movies\");\n\n // Update the document in MongoDB.\n const result = await collection.updateOne(\n { _id: doc._id },\n // The name of the new field you'd like to contain your embeddings.\n { $set: { plot_embedding: embedding }}\n );\n\n if(result.modifiedCount === 1) {\n console.log(\"Successfully updated the document.\");\n } else {\n console.log(\"Failed to update the document.\");\n }\n } else {\n console.log(`Failed to receive embedding. Status code: ${response.statusCode}`);\n }\n\n } catch(err) {\n console.error(err);\n }\n};\n```\n\n## Configure index\n\nNow, head over to Atlas Search and create an index. Use the JSON index definition and insert the following, replacing the embedding field name with the field of your choice. If you are using the sample_mflix database, it should be \u201cplot_embedding\u201d, and give it a name. I\u2019ve used \u201cmoviesPlotIndex\u201d for my setup with the sample data.\n\nFirst, click the \u201cAtlas Search\u201d tab on your cluster.\n\n![Databases Page for a Cluster with an arrow pointing at the Search tab][1]\n\nThen, click \u201cCreate Search Index.\u201d\n\n![Search tab within the Cluster page with an arrow pointing at Create Search Index]\n\nSelect the \u201cJSON Editor.\u201d\n\nThen, select your database and collection on the left, and drop in the code snippet below for your index definition.\n\n```json\n{\n \"type\": \"vectorSearch\",\n \"fields\": [{\n \"path\": \"plot_embedding\",\n \"dimensions\": 1536,\n \"similarity\": \"cosine\",\n \"type\": \"vector\"\n }]\n}\n```\n\n## Insert your data\n\nNow, you need to insert your data. As your data is inserted, it will be embedded using the script and then indexed using the vector search index we just set up.\n\nIf you have your own data, you can insert it now using something like mongoimport.\n\nIf you\u2019re going to use the sample movie data, you can just go to the cluster, click the \u2026 menu, and load the sample data.
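\n\nIf you just want to smoke-test the trigger with a single document rather than the full sample data set, an insert like the following (run from mongosh, with an illustrative title and plot) is enough to make it fire:\n\n```javascript\n// Run in mongosh against your cluster; the trigger watches sample_mflix.movies.\ndb.getSiblingDB('sample_mflix').movies.insertOne({\n  title: 'Test Movie',\n  plot: 'A developer teaches a database to understand meaning, not just keywords.'\n});\n// A few seconds later, read the document back and it should have a plot_embedding array added by the trigger.\n```\n\n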
If everything has been set up correctly, the sample_mflix database and movies collections will have the plot embeddings created on the \u201cplot\u201d field and added to a new \u201cplot_embeddings\u201d field.\n\n## Now, to query your data with JavaScript\n\nOnce the documents in your collection have their embeddings generated, you can perform a query. But because this is using vector search, your query needs to be transformed into an embedding. This is an example script of how you could add a function to get both an embedding of the query and a function to use that embedding inside of your application. \n\n```javascript\nconst axios = require('axios');\nconst MongoClient = require('mongodb').MongoClient;\n\nasync function getEmbedding(query) {\n // Define the OpenAI API url and key.\n const url = 'https://api.openai.com/v1/embeddings';\n const openai_key = 'your_openai_key'; // Replace with your OpenAI key.\n \n // Call OpenAI API to get the embeddings.\n let response = await axios.post(url, {\n input: query,\n model: \"text-embedding-ada-002\"\n }, {\n headers: {\n 'Authorization': `Bearer ${openai_key}`,\n 'Content-Type': 'application/json'\n }\n });\n \n if(response.status === 200) {\n return response.data.data[0].embedding;\n } else {\n throw new Error(`Failed to get embedding. Status code: ${response.status}`);\n }\n}\n\nasync function findSimilarDocuments(embedding) {\n const url = 'your_mongodb_url'; // Replace with your MongoDB url.\n const client = new MongoClient(url);\n \n try {\n await client.connect();\n \n const db = client.db(''); // Replace with your database name.\n const collection = db.collection(''); // Replace with your collection name.\n \n // Query for similar documents.\n const documents = await collection.aggregate([\n {\"$vectorSearch\": {\n \"queryVector\": embedding,\n \"path\": \"plot_embedding\",\n \"numCandidates\": 100,\n \"limit\": 5,\n \"index\": \"moviesPlotIndex\",\n }}\n]).toArray();\n \n return documents;\n } finally {\n await client.close();\n }\n}\n\nasync function main() {\n const query = 'your_query'; // Replace with your query.\n \n try {\n const embedding = await getEmbedding(query);\n const documents = await findSimilarDocuments(embedding);\n \n console.log(documents);\n } catch(err) {\n console.error(err);\n }\n}\n\nmain();\n```\n\nThis script first transforms your query into an embedding using the OpenAI API, and then queries your MongoDB cluster for documents with similar embeddings.\n\n> Support for the '$vectorSearch' aggregation pipeline stage is available with MongoDB Atlas 6.0.11 and 7.0.2.\n\nRemember to replace 'your_openai_key', 'your_mongodb_url', 'your_query', \u2018\u2019, and \u2018\u2019 with your actual OpenAI key, MongoDB URL, query, database name, and collection name, respectively.\n\nAnd that's it! You've successfully set up a MongoDB Atlas cluster and Atlas trigger which calls the OpenAI API to embed documents when they get inserted into the cluster, and you\u2019ve performed a vector search query. 
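\n\nOne optional refinement: if you also want to see how close each match is, you can add a `$project` stage after `$vectorSearch` and surface the similarity score with `$meta`. This is a drop-in change to the aggregation in the script above:\n\n```javascript\n// Same pipeline as before, with an extra $project stage to expose the similarity score.\nconst documents = await collection.aggregate([\n  {\n    \"$vectorSearch\": {\n      \"queryVector\": embedding,\n      \"path\": \"plot_embedding\",\n      \"numCandidates\": 100,\n      \"limit\": 5,\n      \"index\": \"moviesPlotIndex\"\n    }\n  },\n  {\n    \"$project\": {\n      \"_id\": 0,\n      \"title\": 1,\n      \"plot\": 1,\n      \"score\": { \"$meta\": \"vectorSearchScore\" }\n    }\n  }\n]).toArray();\n```\n\n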
\n\n> If you prefer learning by watching, check out the video version of this article!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb8503d464e800c36/65a1bba2d6cafb29fbf758da/Screenshot_2024-01-12_at_4.45.14_PM.png", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Node.js", "Serverless"], "pageDescription": "Learn how to get started with Vector Search on MongoDB while leveraging the OpenAI.", "contentType": "Tutorial"}, "title": "How to Do Semantic Search in MongoDB Using Atlas Vector Search", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/atlas-flask-azure-container-apps", "action": "created", "body": "# Building a Flask and MongoDB App with Azure Container Apps\n\nFor those who want to focus on creating scalable containerized applications without having to worry about managing any environments, this is the tutorial for you! We are going to be hosting a dockerized version of our previously built Flask and MongoDB Atlas application on Azure Container Apps.\n\nAzure Container Apps truly simplifies not only the deployment but also the management of containerized applications and microservices on a serverless platform. This Microsoft service also offers a huge range of integrations with other Azure platforms, making it easy to scale or improve your application over time. The combination of Flask, Atlas, and Container Apps allows for developers to build applications that are capable of handling large amounts of data and traffic, while being extremely accessible from any machine or environment. \n\nThe specifics of this tutorial are as follows: We will be cloning our previously built Flask application that utilizes CRUD (create, read, update, and delete) functionality applying to a \u201cbookshelf\u201d created in MongoDB Atlas. When properly up and running and connected to Postman or using cURL, we can add in new books, read back all the books in our database, update (exchange) a book, and even delete books. From here, we will dockerize our application and then we will host our dockerized image on Azure Container Apps. Once this is done, anyone anywhere can access our application!\n\nThe success of following this tutorial requires a handful of prerequisites:\n\n* Access and clone our Flask and MongoDB application from the GitHub repository if you would like to follow along.\n* View the completed repository for this demo.\n* MongoDB Atlas\n* Docker Desktop\n* Microsoft Azure subscription.\n* Python 3.9+.\n* Postman Desktop (or another way to test our functions).\n\n### Before we dive in...\nBefore we continue on to containerizing our application, please ensure you have a proper understanding of our program through this article: Scaling for Demand: Deploying Python Applications Using MongoDB Atlas on Azure App Service. It goes into a lot of detail on how to properly build and connect a MongoDB Atlas database to our application along with the intricacies of the app itself. If you are a beginner, please ensure you completely understand the application prior to containerizing it through Docker. \n\n#### Insight into our database\nBefore moving on in our demo, if you\u2019ve followed our previous demo linked above, this is how your Atlas database can look. These books were added in at the end of the previous demo using our endpoint. If you\u2019re using your own application, an empty collection will be supported. 
But if you have existing documents, they need to support our schema or an error message will appear: \n\nWe are starting with four novels with various pages. Once properly connected and hosted in Azure Container Apps, when we connect to our `/books` endpoint, these novels will show up. \n\n### Creating a Dockerfile\n\nOnce you have a cloned version of the application, it\u2019s time to create our Dockerfile. A Dockerfile is important because it contains all of the information and commands to assemble an image. From the commands in a Dockerfile, Docker can actually build the image automatically with just one command from your CLI. \n\nIn your working directory, create a new file called `Dockerfile` and put in these commands:\n\n```\nFROM python:3.9-slim-buster\nWORKDIR /azurecontainerappsdemo\n\nCOPY ./config/requirements.txt /azurecontainerappsdemo/\nRUN pip install -r requirements.txt\n\nCOPY . /azurecontainerappsdemo/\n\nENV FLASK_APP=app.py\nEXPOSE 5000\nCMD \"flask\", \"run\", \"--host=0.0.0.0\"]\n```\n\nPlease ensure your `requirements.txt` file is placed under a new folder called `/config`. This is so we can be certain our `requirements.txt` file is located and properly copied in with our Dockerfile since it is crucial for our demo.\n\nOur base image `python:3.9-slim-buster` is essential because it provides a starting point for creating a new container image. In Docker, base images contain all the necessary components to successfully build and run an application. The rest of the commands copy over all the files in our working directory, expose Flask\u2019s default port 5000, and specify how to run our application while allowing network access from anywhere. It is crucial to include the `--host=0.0.0.0` because otherwise, when we attempt to host our app on Azure, it will not connect properly.\n\nIn our app.py file, please make sure to add the following two lines at the very bottom of the file:\n```\nif __name__ == '__main__':\n app.run(host='0.0.0.0', debug=True)\n```\n\nThis once again allows Flask to run the application, ensuring it is accessible from any network. \n\n###### Optional\n\nYou can test and make sure your app is properly dockerized with these commands:\n\nBuild: `docker build --tag azurecontainerappsdemo . `\n\nRun: `docker run -d -p 5000:5000 -e \"CONNECTION_STRING=\" azurecontainerappsdemo`\n\nYou should see your app up and running on your local host.\n\n![app hosted on local host\n\nNow, we can use Azure Container Apps to run our containerized application without worrying about infrastructure on a serverless platform. Let\u2019s go over how to do this. \n\n### Creating an Azure Container Registry\n\nWe are going to be building and pushing our image to our Azure Container Registry in order to successfully host our application. To do this, please make sure that you are logged into Azure. There are multiple ways to create an Azure Container Registry: through the user interface, the command line, or even through the VSCode extension. For simplicity, this tutorial will show how to do it through the user interface. \n\nOur first step is to log into Azure and access the Container Registry service. Click to create a new registry and you will be taken to this page:\n\nChoose which Resource Group you want to use, along with a Registry Name (this will serve as your login URL) and Location. Make a note of these because when we start our Container App, all these need to be the same. After these are in place, press the Review and Create button. 
Once configured, your registry will look like this:\n\nNow that you have your container registry in place, let\u2019s access it in VSCode. Make sure that you have the Docker extension installed. Go to registries, log into your Azure account, and connect your registry. Mine is called \u201canaiyaregistry\u201d and when set up, looks like this:\n\nNow, log into your ACR using this command: \n`docker login `\n\nAs an example, mine is: \n`docker login anaiyaregistry.azurecr.io`\n\nYou will have to go to Access Keys inside of your Container Registry and click on Admin Access. Then, use that username and password to log into your terminal when prompted. If you are on a Windows machine, please make sure to right-click to paste. Otherwise, an error will appear:\n\nWhen you\u2019ve successfully logged in to your Azure subscription and Azure Registry, we can move on to building and pushing our image. \n\n### Building and pushing our image to Azure Container Registry\n\nWe need to now build our image and push it to our Azure Container Registry. \n\nIf you are using an M1 Mac, we need to reconfigure our image so that it is using `amd64` instead of the configured `arm64`. This is because at the moment, Azure Container Apps only supports `linux/amd64` container images, and with an M1 machine, your image will automatically be built as `arm`. To get around this, we will be utilizing Buildx, a Docker plugin that allows you to build and push images for various platforms and architectures. \n\nIf you are not using an M1 Mac, please skip to our \u201cNon-M1 Machines\u201d section.\n\n#### Install Buildx\nTo install `buildx` on your machine, please put in the following commands:\n\n`docker buildx install`\n\nTo enable `buildx` to use the Docker CLI, please type in:\n\n`docker buildx create --use`\n\nOnce this runs and a randomized container name appears in your terminal, you\u2019ll know `buildx` has been properly installed.\n\n#### Building and pushing our image\n\nThe command to build our image is as follows:\n`docker buildx build --platform linux/amd64 --tag /: --output type=docker .`\n\nAs an example, my build command is: \n`docker buildx build --platform linux/amd64 --tag anaiyaregistry.azurecr.io/azurecontainerappsdemo:latest --output type=docker .`\n\nSpecifying the platform you want your image to run on is the most important part. Otherwise, when we attempt to host it on Azure, we are going to get an error. \n\nOnce this has succeeded, we need to push our image to our registry. We can do this with the command:\n\n`docker push /:`\n\nAs an example, my push command is:\n\n`docker push anaiyaregistry.azurecr.io/azurecontainerappsdemo:latest`\n\n#### Non-M1 Mac machines\nIf you have a non-M1 machine, please follow the above steps but feel free to ignore installing `buildx`. For example, your build command will be:\n\n`docker build --tag /: --output type=docker .`\n\nYour push command will be:\n\n`docker push /:`\n\n#### Windows machines\nFor Windows machines, please use the following build command:\n\n`docker build --tag /: .`\n\nAnd use the following push command:\n\n`docker push :`\n\nOnce your push has been successful, let\u2019s ensure we can properly see it in our Azure user interface. Access your Container Registries service and click on your registry. Then, click on Repositories. \n\nClick again on your repository and there will be an image named `latest` since that is what we tagged our image with when we pushed it. 
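\n\nIf you also want to confirm the push from the terminal, you can list the tags stored in the repository with the Azure CLI. A quick sketch, using the registry and repository names from this tutorial (substitute your own):\n\n```\n# registry and repository names from this tutorial -- substitute your own\naz acr repository show-tags --name anaiyaregistry --repository azurecontainerappsdemo --output table\n```\n\n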
This is the image we are going to host on our Container App service.\n\n### Creating our Azure container app\n\nWe are going to be creating our container app through the Azure user interface. \n\nAccess your Container Apps service and click Create. \n\nNow access your Container Apps in the UI. Click Create and fill in the \u201cBasics\u201d like this:\n\n**If this is the first time creating an Azure Container App, please ensure an environment is created when the App is created. A Container Apps environment is crucial as it creates a secure boundary around various container apps that exist on the same virtual network. To check on your Container Apps environments, they can be accessed under \u201cContainer Apps Environments\u201d in your Azure portal. \n\nAs we can see, the environment we chose from above is available under our Container Apps Environments tab. \n\nPlease ensure your Region and Resource Group are identical to the options picked while creating your Registry in a previous step. Once you\u2019re finished putting in the \u201cBasics,\u201d click App Settings and uncheck \u201cUse quickstart image.\u201d Under \u201cImage Source,\u201d click on Azure Container Registry and put in your image information.\n\nAt the bottom, enter your Environment Variable (your connection string. It\u2019s located in both Atlas and your .env file, if copying over from the previous demo):\n\nUnder that, hit \u201cEnabled\u201d for ingress and fill in the rest like this:\n\nWhen done, hit Review and Create at the very bottom of the screen.\n\nYou\u2019ll see this page when your deployment is successful. Hit Go to Resource.\n\nClick on the \u201cApplication URL\u201d on the right-hand side. \n\nYou\u2019ll be taken to your app.\n\nIf we change the URL to incorporate our \u2018/books\u2019 route, we will see all the books from our Atlas database! \n\n### Conclusion\nOur Flask and MongoDB Atlas application has been successfully containerized and hosted on Azure Container Apps! Throughout this article, we\u2019ve gone over how to create a Dockerfile and an Azure Container Registry, along with how to create and host our application on Azure Container Apps. 
\n\nGrab more details on MongoDB Atlas or Azure Container Apps.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8640927974b6af94/6491be06c32681403e55e181/containerapps1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta91a08c67a4a22c5/6491bea250d8ed2c592f2c2b/containerapps2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt04705caa092b7b97/6491da12359ef03a0860ef58/containerapps3.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc9e46e8cf38d937b/6491da9c83c7fb0f375f56e2/containerapps4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd959e0e4e09566fb/6491db1d2429af7455f493d4/containerapps5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfa57e9ac66da7827/6491db9e595392cc54a060bb/containerapps6.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf60949e01ec5cba1/6491dc67359ef00b4960ef66/containerapps7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blted59e4c31f184cc4/6491dcd40f2d9b48bbed67a6/containerapps8.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb71c98cdf43724e3/6491dd19ea50bc8939bea30e/containerapps9.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt38985f7d3ddeaa6a/6491dd718b23a55598054728/containerapps10.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7a9d4d12404bd90b/6491deb474d501e28e016236/containerapps11.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7b2b6dd5b0b8d228/6491def9ee654933dccceb84/containerapps12.png\n [13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6593fb8e678ddfcd/6491dfc0ea50bc4a92bea31f/containerapps13.png\n [14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5c03d1267f75bd58/6491e006f7411b5c2137dd1a/containerapps14.png\n [15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt471796e40d6385b5/6491e04b0f2d9b22fded67c1/containerapps15.png\n [16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1fb382a03652147d/6491e074b9a076289492814a/containerapps16.png", "format": "md", "metadata": {"tags": ["MongoDB", "Python"], "pageDescription": "This tutorial explains how to host your MongoDB Atlas application on Azure Container Apps for a scalable containerized solution.\n", "contentType": "Article"}, "title": "Building a Flask and MongoDB App with Azure Container Apps", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/building-multi-environment-continuous-delivery-pipeline-mongodb-atlas", "action": "created", "body": "# Building a Multi-Environment Continuous Delivery Pipeline for MongoDB Atlas\n\n## Why CI/CD?\n\nTo increase the speed and quality of development, you may use continuous delivery strategies to manage and deploy your application code changes. 
However, continuous delivery for databases is often a manual process.\n\nAdopting continuous integration and continuous delivery (CI/CD) for managing the lifecycle of a database has the following benefits:\n\n* An automated multi-environment setup enables you to move faster and focus on what really matters.\n* The confidence level of the changes applied increases.\n* The process is easier to reproduce.\n* All changes to database configuration will be traceable.\n\n### Why CI/CD for MongoDB Atlas?\n\nMongoDB Atlas is a multi-cloud developer data platform, providing an integrated suite of cloud database and data services to accelerate and simplify how you build with data. MongoDB Atlas also provides a comprehensive API, making CI/CD for the actual data platform itself possible.\n\nIn this blog, we\u2019ll demonstrate how to set up CI/CD for MongoDB Atlas, in a typical production setting. The intended audience is developers, solutions architects, and database administrators with knowledge of MongoDB Atlas, AWS, and Terraform.\n\n## Our CI/CD Solution Requirements\n\n* Ensure that each environment (dev, test, prod) is isolated to minimize blast radius in case of a human error or from a security perspective. MongoDB Atlas Projects and API Keys will be utilized to enable environment isolation.\n* All services used in this solution will use managed services. This to minimize the time needed to spend on managing infrastructure.\n* Minimize commercial agreements required. Use as much as possible from AWS and the Atlas ecosystem so that there is no need to purchase external tooling, such as HashiCorp Vault. \n* Minimize time spent on installing local dev tooling, such as git and Terraform. The solution will provide a docker image, with all tooling required to run provisioning of Terraform templates. The same image will be used to also run the pipeline in AWS CodeBuild. \n\n## Implementation\n\nEnough talk\u2014let\u2019s get to the action. As developers, we love working examples as a way to understand how things work. So, here\u2019s how we did it.\n\n### Prerequisites\n\nFirst off, we need to have at least an Atlas account to provision Atlas and then somewhere to run our automation. You can get an Atlas account for free at mongodb.com. If you want to take this demo for a spin, take the time and create your Atlas account now. Next, you\u2019ll need to create an organization-level API key. If you or your org already have an Atlas account you\u2019d like to use, you\u2019ll need the organization owner to create the organization-level API key.\n\nSecond, you\u2019ll need an AWS account. For more information on how to create an AWS account, see How do I create an AWS account? For this demo, we\u2019ll be using some for-pay services like S3, but you get 12 months free. \n\nYou will also need to have Docker installed as we are using a docker container to run all provisioning. For more information on how to install Docker, see Get Started with Docker. We are using Docker as it will make it easier for you to get started, as all the tooling is packaged in the container\u2014such as AWS cli, mongosh, and Terraform.\n\n### What You Will Build\n\n* MongoDB Atlas Projects for dev, test, prod environments, to minimize blast radius in case of a human error and from a security perspective.\n* MongoDB Atlas Cluster in each Atlas project (dev, test, prod). MongoDB Atlas is a fully managed data platform for modern applications. Storing data the way it is accessed as documents makes developers more productive. 
It provides a document-based database that is cost-efficient and resizable while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It allows you to focus on your applications by providing the foundation of high performance, high availability, security, and compatibility they need.\n\u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n* CodePipeline orchestrates the CI/CD database migration stages.\n* IAM roles and policies allow cross-account access to applicable AWS resources.\n* CodeCommit creates a repo to store the SQL statements used when running the database migration.\n* Amazon S3 creates a bucket to store pipeline artifacts.\n* CodeBuild creates a project in target accounts using Flyway to apply database changes.\n* VPC security groups ensure the secure flow of traffic between a CodeBuild project deployed within a VPC and MongoDB Atlas. AWS Private Link will also be provisioned.\n* AWS Parameter Store stores secrets securely and centrally, such as the Atlas API keys and database username and password.\n* Amazon SNS notifies you by email when a developer pushes changes to the CodeCommit repo.\n\n### Step 1: Bootstrap AWS Resources\n\nNext, we\u2019ll fire off the script to bootstrap our AWS environment and Atlas account as shown in Diagram 1 using Terraform.\n\nYou will need to use programmatic access keys for your AWS account and the Atlas organisation-level API key that you have created as described in the prerequisites.This is also the only time you\u2019ll need to handle the keys manually. \n\n```\n# Set your environment variables\n\n# You'll find this in your Atlas console as described in prerequisites\nexport ATLAS_ORG_ID=60388113131271beaed5\n\n# The public part of the Atlas Org key you created previously \nexport ATLAS_ORG_PUBLIC_KEY=l3drHtms\n\n# The private part of the Atlas Org key you created previously \nexport ATLAS_ORG_PRIVATE_KEY=ab02313b-e4f1-23ad-89c9-4b6cbfa1ed4d\n\n# Pick a username, the script will create this database user in Atlas\nexport DB_USER_NAME=demouser\n\n# Pick a project base name, the script will appended -dev, -test, -prod depending on environment\nexport ATLAS_PROJECT_NAME=blogcicd6\n\n# The AWS region you want to deploy into\nexport AWS_DEFAULT_REGION=eu-west-1\n\n# The AWS public programmatic access key\nexport AWS_ACCESS_KEY_ID=AKIAZDDBLALOZWA3WWQ\n\n# The AWS private programmatic access key\nexport AWS_SECRET_ACCESS_KEY=nmarrRZAIsAAsCwx5DtNrzIgThBA1t5fEfw4uJA\n\n```\n\nOnce all the parameters are defined, you are ready to run the script that will create your CI/CD pipeline.\n\n```\n# Clone solution code repository\n$ git clone https://github.com/mongodb-developer/atlas-cicd-aws\n$ cd atlas-cicd\n\n# Start docker container, which contains all the tooling e.g terraform, mongosh, and other, \n$ docker container run -it --rm -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION -e ATLAS_ORG_ID -e ATLAS_ORG_PUBLIC_KEY -e ATLAS_ORG_PRIVATE_KEY -e DB_USER_NAME -e ATLAS_PROJECT_NAME -v ${PWD}/terraform:/terraform piepet/cicd-mongodb:46\n\n$ cd terraform \n\n# Bootstrap AWS account and Atlas Account\n$ ./deploy_baseline.sh $AWS_DEFAULT_REGION $ATLAS_ORG_ID $ATLAS_ORG_PUBLIC_KEY $ATLAS_ORG_PRIVATE_KEY $DB_USER_NAME $ATLAS_PROJECT_NAME base apply\n\n```\n\nWhen deploy:baseline.sh is invoked, provisioning of AWS resources starts, using 
Terraform templates. The resources created are shown in Diagram 1.\n\nFrom here on, you'll be able to operate your Atlas infrastructure without using your local docker instance. If you want to blaze through this guide, including cleaning it all up, you might as well keep the container running, though. The final step of tearing down the AWS infrastructure requires an external point like your local docker instance.\n\nUntil you\u2019ve committed anything, the pipeline will have a failed Source stage. This is because it tries to check out a branch that does not exist in the code repository. After you\u2019ve committed the Terraform code you want to execute, you\u2019ll see that the Source stage will restart and proceed as expected. You can find the pipeline in the AWS console at this url: https://eu-west-1.console.aws.amazon.com/codesuite/codepipeline/pipelines?region=eu-west-1\n\n### Step 2: Deploy Atlas Cluster\n\nNext is to deploy the Atlas cluster (projects, users, API keys, etc). This is done by pushing a configuration into the new AWS CodeCommit repo. \n\nIf you\u2019re like me and want to see how provisioning of the Atlas cluster works before setting up IAM properly, you can push the original github repo to AWS CodeCommit directly inside the docker container (inside the Terraform folder) using a bit of a hack. By pushing to the CodeCommit repo, AWS CodePipeline will be triggered and provisioning of the Atlas cluster will start. \n\n```\ncd /terraform\n# Push default settings to AWS Codecommit\n./git_push_terraform.sh\n\n```\n\nTo set up access to the CodeCommit repo properly, for use that survives stopping the docker container, you\u2019ll need a proper git CodeCommit user. Follow the steps in the AWS documentation to create and configure your CodeCommit git user in AWS IAM. Then clone the AWS CodeCommit repository that was created in the bootstrapping, outside your docker container, perhaps in another tab in your shell, using your IAM credentials. If you did not use the \u201chack\u201d to initialize it, it\u2019ll be empty, so copy the Terraform folder that is provided in this solution, to the root of the cloned CodeCommit repository, then commit and push to kick off the pipeline. Now you can use this repo to control your setup! You should now see in the AWS CodePipeline console that the pipeline has been triggered. The pipeline will create Atlas clusters in each of the Atlas Projects and configure AWS PrivateLink. \n\nLet\u2019s dive into the stages defined in this Terraform pipeline file.\n\n**Deploy-Base**\nThis is basically re-applying what we did in the bootstrapping. This stage ensures we can improve on the AWS pipeline infrastructure itself over time.\n\nThis stage creates the projects in Atlas, including Atlas project API keys, Atlas project users, and database users. \n\n**Deploy-Dev**\n\nThis stage creates the corresponding Private Link and MongoDB cluster.\n\n**Deploy-Test**\n\nThis stage creates the corresponding Private Link and MongoDB cluster.\n\n**Deploy-Prod**\n\nThis stage creates the corresponding Private Link and MongoDB cluster.\n\n**Gate**\n\nApproving means we think it all looks good. Perhaps counter intuitively but great for demos, it proceeds to teardown. This might be one of the first behaviours you\u2019ll change. :)\n\n**Teardown**\n\nThis decommissions the dev, test, and prod resources we created above. 
To decommission the base resources, including the pipeline itself, we recommend you run that externally\u2014for example, from the Docker container on your laptop. We\u2019ll cover that later.\n\nAs you advance towards the Gate stage, you\u2019ll see the Atlas clusters build out. Below is an example where the Test stage is creating a cluster. Approving the Gate will undeploy the resources created in the dev, test, and prod stages, but keep projects and users.\n\n### Step 3: Make a Change!\n\nAssuming you took the time to set up IAM properly, you can now work with the infrastructure as code directly from your laptop outside the container. If you just deployed using the hack inside the container, you can continue interacting using the repo created inside the Docker container, but at some point, the container will stop and that repo will be gone. So, beware.\n\nNavigate to the root of the clone of the CodeCommit repo. For example, if you used the script in the container, you\u2019d run, also in the container:\n\n```\ncd /${ATLAS_PROJECT_NAME}-base-repo/\n\n```\n\nThen you can edit, for example, the MongoDB version by changing 4.4 to 5.0 in `terraform/environment/dev/variables.tf`.\n\n```\nvariable \"cluster_mongodbversion\" {\n description = \"The Major MongoDB Version\"\n default = \"5.0\"\n}\n```\n\nThen push (git add, commit, push) and you\u2019ll see a new run initiated in CodePipeline.\n\n### Step 4: Clean Up Base Infrastructure\n\nNow, that was interesting. Time for cleaning up! To decommission the full environment, you should first approve the Gate stage to execute the teardown job. When that\u2019s been done, only the base infrastructure remains. Start the container again as in Step 1 if it\u2019s not running, and then execute deploy_baseline.sh, replacing the word ***apply*** with ***destroy***: \n\n```\n# inside the /terraform folder of the container\n\n# Clean up AWS and Atlas Account\n./deploy_baseline.sh $AWS_DEFAULT_REGION $ATLAS_ORG_ID $ATLAS_ORG_PUBLIC_KEY $ATLAS_ORG_PRIVATE_KEY $DB_USER_NAME $ATLAS_PROJECT_NAME base destroy\n\n```\n\n## Lessons Learned\n\nIn this solution, we have separated the creation of AWS resources and the Atlas cluster, as the changes to the Atlas cluster will be more frequent than the changes to the AWS resources. \n\nWhen implementing infrastructure as code for a MongoDB Atlas Cluster, you have to consider not just the cluster creation but also a strategy for how to separate dev, qa, and prod environments and how to store secrets. This to minimize blast radius. \n\nWe also noticed how useful resource tagging is to make Terraform scripts portable. By setting tags on AWS resources, the script does not need to know the names of the resources but can look them up by tag instead.\n\n## Conclusion\n\nBy using CI/CD automation for Atlas clusters, you can speed up deployments and increase the agility of your software teams. \n\nMongoDB Atlas offers a powerful API that, in combination with AWS CI/CD services and Terraform, can support continuous delivery of MongoDB Atlas clusters, and version-control the database lifecycle. You can apply the same pattern with other CI/CD tools that aren\u2019t specific to AWS. \n\nIn this blog, we\u2019ve offered an exhaustive, reproducible, and reusable deployment process for MongoDB Atlas, including traceability. A devops team can use our demonstration as inspiration for how to quickly deploy MongoDB Atlas, automatically embedding organisation best practices. 
", "format": "md", "metadata": {"tags": ["Atlas", "AWS", "Docker"], "pageDescription": "In this blog, we\u2019ll demonstrate how to set up CI/CD for MongoDB Atlas, in a typical production setting.", "contentType": "Tutorial"}, "title": "Building a Multi-Environment Continuous Delivery Pipeline for MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/swiftui-previews", "action": "created", "body": "# Making SwiftUI Previews Work For You\n\n## Introduction\n\nCanvas previews are an in-your-face feature of SwiftUI. When you create a new view, half of the boilerplate code is for the preview. A third of your Xcode real estate is taken up by the preview.\n\nDespite the prominence of the feature, many developers simply delete the preview code from their views and rely on the simulator.\n\nIn past releases of Xcode (including the Xcode 13 betas), a reluctance to use previews was understandable. They'd fail for no apparent reason, and the error messages were beyond cryptic.\n\nI've stuck with previews from the start, but at times, they've felt like more effort than they're worth. But, with Xcode 13, I think we should all be using them for all views. In particular, I've noticed:\n\n- They're more reliable.\n- The error messages finally make sense.\n- Landscape mode is supported.\n\nI consider previews a little like UI unit tests for your views. Like with unit tests, there's some extra upfront effort required, but you get a big payback in terms of productivity and quality.\n\nIn this article, I'm going to cover:\n\n- What you can check in your previews (think light/dark mode, different devices, landscape mode, etc.) and how to do it.\n- Reducing the amount of boilerplate code you need in your previews.\n- Writing previews for stateful apps. (I'll be using Realm, but the same approach can be used with Core Data.)\n- Troubleshooting your previews.\n\nOne feature I won't cover is using previews as a graphical way to edit views. One of the big draws of SwiftUI is writing everything in code rather than needing storyboards and XML files. Using a drag-and-drop view builder for SwiftUI doesn't appeal to me.\n\n95% of the examples I use in this article are based on a BlackJack training app. You can find the final version in the repo.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n\n## Prerequisites\n\n- Xcode 13+\n- iOS 15+\n- Realm-Cocoa 10.17.0+\n\nNote: \n\n- I've used Xcode 13 and iOS 15, but most of the examples in this post will work with older versions.\n- Previewing in landscape mode is new in Xcode 13.\n- The `buttonStyle` modifier is only available in iOS 15.\n- I used Realm-Cocoa 10.17.0, but earlier 10.X versions are likely to work. \n\n## Working with previews\n\nPreviews let you see what your view looks like without running it in a simulator or physical device. When you edit the code for your view, its preview updates in real time.\n\nThis section shows what aspects you can preview, and how it's done.\n\n### A super-simple preview\n\nWhen you create a new Xcode project or SwiftUI view, Xcode adds the code for the preview automatically. 
All you need to do is press the \"Resume\" button (or CMD-Alt-P).\n\nThe preview code always has the same structure, with the `View` that needs previewing (in this case, `ContentView`) within the `previews` `View`:\n\n```swift\nstruct ContentView_Previews: PreviewProvider {\n static var previews: some View {\n ContentView()\n }\n}\n```\n\n### Views that require parameters\n\nMost of your views will require that the enclosing view pass in parameters. Your preview must do the same\u2014you'll get a build error if you forget.\n\nMy `ResetButton` view requires that the caller provides two values\u2014`label` and `resetType`:\n\n```swift\nstruct ResetButton: View {\n var label: String\n var resetType: ResetType\n ...\n}\n```\n\nThe preview code needs to pass in those values, just like any embedding view:\n\n```swift\nstruct ResetButton_Previews: PreviewProvider {\n static var previews: some View {\n ResetButton(label: \"Reset All Matrices\",\n resetType: .all)\n }\n}\n```\n\n### Views that require `Binding`s\n\nIn a chat app, I have a `LoginView` that updates the `username` binding that's past from the enclosing view:\n\n```swift\nstruct LoginView: View { \n @Binding var username: String\n ...\n}\n```\n\nThe simplest way to create a binding in your preview is to use the `constant` function:\n\n```swift\nstruct LoginView_Previews: PreviewProvider {\n static var previews: some View {\n LoginView(username: .constant(\"Billy\"))\n }\n}\n```\n\n### `NavigationView`s\n\nIn your view hierarchy, you only add a `NavigationView` at a single level. That `NavigationView` then wraps all subviews.\n\nWhen previewing those subviews, you may or may not care about the `NavigationView` functionality. For example, you'll only see titles and buttons in the top nav bar if your preview wraps the view in a `NavigationView`.\n\nIf I preview my `PracticeView` without adding a `NavigationView`, then I don't see the title:\n\nTo preview the title, my preview code needs to wrap `PracticeView` in a `NavigationView`:\n\n```swift\nstruct PracticeView_Previews: PreviewProvider {\n static var previews: some View {\n NavigationView {\n PracticeView()\n }\n }\n}\n```\n\n### Smaller views\n\nSometimes, you don't need to preview your view in the context of a full device screen. My `CardView` displays a single playing card. Previewing it in a full device screen just wastes desk space: \n\nWe can add the `previewLayout` modifier to indicate that we only want to preview an area large enough for the view. It often makes sense to add some `padding` as well:\n\n```swift\nstruct CardView_Previews: PreviewProvider {\n static var previews: some View {\n CardView(card: Card(suit: .heart))\n .previewLayout(.sizeThatFits)\n .padding()\n }\n}\n```\n\n### Light and dark modes\n\nIt can be quite a shock when you finally get around to testing your app in dark mode. 
If you've not thought about light/dark mode when implementing each of your views, then the result can be ugly, or even unusable.\n\nPreviews to the rescue!\n\nReturning to `CardView`, I can preview a card in dark mode using the `preferredColorScheme` view modifier:\n\n```swift\nstruct CardView_Previews: PreviewProvider {\n static var previews: some View {\n CardView(card: Card(suit: .heart))\n .preferredColorScheme(.dark)\n .previewLayout(.sizeThatFits)\n .padding()\n }\n}\n```\n\nThat seems fine, but what if I previewed a spade instead?\n\nThat could be a problem.\n\nAdding a white background to the view fixes it:\n\n### Preview multiple view instances\n\nSometimes, previewing a single instance of your view doesn't paint the full picture. Just look at the surprise I got when enabling dark mode for my card view. Wouldn't it be better to simultaneously preview both hearts and spades in both dark and light modes?\n\nYou can create multiple previews for the same view using the `Group` view:\n\n```swift\nstruct CardView_Previews: PreviewProvider {\n static var previews: some View {\n Group {\n CardView(card: Card(suit: .heart))\n CardView(card: Card(suit: .spade))\n CardView(card: Card(suit: .heart))\n .preferredColorScheme(.dark)\n CardView(card: Card(suit: .spade))\n .preferredColorScheme(.dark)\n }\n .previewLayout(.sizeThatFits)\n .padding()\n }\n}\n```\n\n### Composing views in a preview\n\nA preview of a single view in isolation might look fine, but what will they look like within a broader context?\n\nPreviewing a single `DecisionCell` view looks great:\n\n```swift\nstruct DecisionCell_Previews: PreviewProvider {\n static var previews: some View {\n DecisionCell(\n decision: Decision(handValue: 6, dealerCardValue: .nine, action: .hit), myHandValue: 8, dealerCardValue: .five)\n .previewLayout(.sizeThatFits)\n .padding()\n }\n}\n```\n\nBut, the app will never display a single `DecisionCell`. They'll always be in a grid. Also, the text, background color, and border vary according to state. 
To create a more realistic preview, I created some sample data within the view and then composed multiple `DecisionCell`s using vertical and horizontal stacks:\n\n```swift\nstruct DecisionCell_Previews: PreviewProvider {\n static var previews: some View {\n let decisions: Decision] = [\n Decision(handValue: 6, dealerCardValue: .nine, action: .split),\n Decision(handValue: 6, dealerCardValue: .nine, action: .stand),\n Decision(handValue: 6, dealerCardValue: .nine, action: .double),\n Decision(handValue: 6, dealerCardValue: .nine, action: .hit)\n ]\n return Group {\n VStack(spacing: 0) {\n ForEach(decisions) { decision in\n HStack (spacing: 0) {\n DecisionCell(decision: decision, myHandValue: 8, dealerCardValue: .three)\n DecisionCell(decision: decision, myHandValue: 6, dealerCardValue: .three)\n DecisionCell(decision: decision, myHandValue: 8, dealerCardValue: .nine)\n DecisionCell(decision: decision, myHandValue: 6, dealerCardValue: .nine)\n }\n }\n }\n VStack(spacing: 0) {\n ForEach(decisions) { decision in\n HStack (spacing: 0) {\n DecisionCell(decision: decision, myHandValue: 8, dealerCardValue: .three)\n DecisionCell(decision: decision, myHandValue: 6, dealerCardValue: .three)\n DecisionCell(decision: decision, myHandValue: 8, dealerCardValue: .nine)\n DecisionCell(decision: decision, myHandValue: 6, dealerCardValue: .nine)\n }\n }\n }\n .preferredColorScheme(.dark)\n }\n .previewLayout(.sizeThatFits)\n .padding()\n }\n```\n\nI could then see that the black border didn't work too well in dark mode:\n\n![Dark border around selected cells is lost in front of the dark background\n\nSwitching the border color from `black` to `primary` quickly fixed the issue:\n\n### Landscape mode\n\nPreviews default to portrait mode. Use the `previewInterfaceOrientation` modifier to preview in landscape mode instead:\n\n```swift\nstruct ContentView_Previews: PreviewProvider {\n static var previews: some View {\n ContentView()\n .previewInterfaceOrientation(.landscapeRight)\n }\n}\n```\n\n### Device type\n\nPreviews default to the simulator device that you've selected in Xcode. Chances are that you want your app to work well on multiple devices. Typically, I find that there's extra work needed to make an app I designed for the iPhone work well on an iPad.\n\nThe `previewDevice` modifier lets us specify the device type to use in the preview:\n\n```swift\nstruct ContentView_Previews: PreviewProvider {\n static var previews: some View {\n ContentView()\n .previewDevice(PreviewDevice(rawValue: \"iPad (9th generation)\"))\n }\n}\n```\n\nYou can find the names of the available devices from Xcode's simulator menu, or from the terminal using `xcrun simctl list devices`.\n\n### Pinning views\n\nIn the bottom-left corner of the preview area, there's a pin button. Pressing this \"pins\" the current preview so that it's still shown when you browse to the code for other views:\n\nThis is useful to observe how a parent view changes as you edit the code for the child view:\n\n### Live previews\n\nAt the start of this article, I made a comparison between previews and unit testing. Live previews mean that you really can test your views in isolation (to be accurate, the view you're testing plus all of the views it embeds or links to).\n\nPress the play button above the preview to enter live mode:\n\nYou can now interact with your view:\n\n## Getting rid of excess boilerplate preview code\n\nAs you may have noticed, some of my previews now have more code than the actual views. 
This isn't necessarily a problem, but there's a lot of repeated boilerplate code used by multiple views. Not only that, but you'll be embedding the same boilerplate code into previews in other projects.\n\nTo streamline my preview code, I've created several view builders. They all follow the same pattern\u2014receive a `View` and return a new `View` that's built from that `View`.\n\nI start the name of each view builder with `_Preview` to make it easy to take advantage of Xcode's code completion feature.\n\n### Light/dark mode\n\n`_PreviewColorScheme` returns a `Group` of copies of the view. One is in light mode, the other dark:\n\n```swift\nstruct _PreviewColorScheme: View {\n private let viewToPreview: Value\n\n init(_ viewToPreview: Value) {\n self.viewToPreview = viewToPreview\n }\n\n var body: some View {\n Group {\n viewToPreview\n viewToPreview.preferredColorScheme(.dark)\n }\n }\n}\n```\n\nTo use this view builder in a preview, simply pass in the `View` you're previewing:\n\n```swift\nstruct CardView_Previews: PreviewProvider {\n static var previews: some View {\n _PreviewColorScheme(\n VStack {\n ForEach(Suit.allCases, id: \\.rawValue) { suit in\n CardView(card: Card(suit: suit))\n }\n }\n .padding()\n .previewLayout(.sizeThatFits)\n )\n }\n}\n```\n\n### Orientation\n\n`_PreviewOrientation` returns a `Group` containing the original `View` in portrait and landscape modes:\n\n```swift\nstruct _PreviewOrientation: View {\n private let viewToPreview: Value\n\n init(_ viewToPreview: Value) {\n self.viewToPreview = viewToPreview\n }\n\n var body: some View {\n Group {\n viewToPreview\n viewToPreview.previewInterfaceOrientation(.landscapeRight)\n }\n }\n}\n```\n\nTo use this view builder in a preview, simply pass in the `View` you're previewing:\n\n```swift\nstruct ContentView_Previews: PreviewProvider {\n static var previews: some View {\n _PreviewOrientation(\n ContentView()\n )\n }\n}\n```\n\n### No device\n\n`_PreviewNoDevice` returns a view built from adding the `previewLayout` modifier and adding `padding to the input view:\n\n```swift\nstruct _PreviewNoDevice: View {\n private let viewToPreview: Value\n\n init(_ viewToPreview: Value) {\n self.viewToPreview = viewToPreview\n }\n\n var body: some View {\n Group {\n viewToPreview\n .previewLayout(.sizeThatFits)\n .padding()\n }\n }\n}\n```\n\nTo use this view builder in a preview, simply pass in the `View` you're previewing:\n\n```swift\nstruct CardView_Previews: PreviewProvider {\n static var previews: some View {\n _PreviewNoDevice(\n CardView(card: Card())\n )\n }\n}\n```\n\n### Multiple devices\n\n`_PreviewDevices` returns a `Group` containing a copy of the `View` for each device type. 
You can modify `devices` in the code to include the devices you want to see previews for:\n\n```swift\nstruct _PreviewDevices<Value: View>: View {\n let devices = [\n \"iPhone 13 Pro Max\",\n \"iPhone 13 mini\",\n \"iPad (9th generation)\"\n ]\n\n private let viewToPreview: Value\n\n init(_ viewToPreview: Value) {\n self.viewToPreview = viewToPreview\n }\n\n var body: some View {\n Group {\n ForEach(devices, id: \\.self) { device in\n viewToPreview\n .previewDevice(PreviewDevice(rawValue: device))\n .previewDisplayName(device)\n }\n }\n }\n}\n```\n\nI'd be cautious about adding too many devices as it will make any previews using this view builder slow down and consume resources.\n\nTo use this view builder in a preview, simply pass in the `View` you're previewing:\n\n```swift\nstruct ContentView_Previews: PreviewProvider {\n static var previews: some View {\n _PreviewDevices(\n ContentView()\n )\n }\n}\n```\n\n![The same view previewed on 3 different device types]\n\n### Combining multiple view builders\n\nEach view builder receives a view and returns a new view. That means that you can compose the functions by passing the results of one view builder to another. In the extreme case, you can use up to three on the same view preview:\n\n```swift\nstruct ContentView_Previews: PreviewProvider {\n static var previews: some View {\n _PreviewOrientation(\n _PreviewColorScheme(\n _PreviewDevices(ContentView())\n )\n )\n }\n}\n```\n\nThis produces 12 views to cover all permutations of orientation, appearance, and device.\n\nFor each view, you should consider which modifiers add value. For the `CardView`, it makes sense to use `_PreviewNoDevice` and `_PreviewColorScheme`, but previewing on different devices and orientations wouldn't add any value.\n\n## Previewing stateful views (Realm)\n\nOften, a SwiftUI view will fetch state from a database such as Realm or Core Data. For that to work, there needs to be data in that database.\n\nPreviews are effectively running on embedded iOS simulators. That helps explain how they are both slower and more powerful than you might expect from a \"preview\" feature. That also means that each preview also contains a Realm database (assuming that you're using the Realm-Cocoa SDK). The preview can store data in that database, and the view can access that data.\n\nIn the BlackJack training app, the action to take for each player/dealer hand combination is stored in Realm. For example, `DefaultDecisionView` uses `@ObservedResults` to access data from Realm:\n\n```swift\nstruct DefaultDecisionView: View {\n @ObservedResults(Decisions.self,\n filter: NSPredicate(format: \"isSoft == NO AND isSplit == NO\")) var decisions\n```\n\nTo ensure that there's data for the previewed view to find, the preview checks whether the Realm database already contains data (`Decisions.areDecisionsPopulated`). 
If not, then it adds the required data (`Decisions.bootstrapDecisions()`):\n\n```swift\nstruct DefaultDecisionView_Previews: PreviewProvider {\n static var previews: some View {\n if !Decisions.areDecisionsPopulated {\n Decisions.bootstrapDecisions()\n }\n return _PreviewOrientation(\n _PreviewColorScheme(\n Group {\n NavigationView {\n DefaultDecisionView(myHandValue: 6, dealerCardValue: .nine)\n }\n NavigationView {\n DefaultDecisionView(myHandValue: 6, dealerCardValue: .nine, editable: true)\n }\n }\n .navigationViewStyle(StackNavigationViewStyle())\n )\n )\n }\n}\n```\n\n`DefaultDecisionView` is embedded in `DecisionMatrixView` and so the preview for `DecisionMatrixView` must also conditionally populate the Realm data. In turn, `DecisionMatrixView` is embedded in `PracticeView`, and `PracticeView` in `ContentView`\u2014and so, they too need to bootstrap the Realm data so that it's available further down the view hierarchy.\n\nThis is the implementation of the bootstrap functions:\n\n```swift\nextension Decisions {\n static var areDecisionsPopulated: Bool {\n do {\n let realm = try Realm()\n let decisionObjects = realm.objects(Decisions.self)\n return decisionObjects.count >= 3\n } catch {\n print(\"Error, couldn't read decision objects from Realm: \\(error.localizedDescription)\")\n return false\n }\n }\n\n static func bootstrapDecisions() {\n do {\n let realm = try Realm()\n let defaultDecisions = Decisions()\n let softDecisions = Decisions()\n let splitDecisions = Decisions()\n\n defaultDecisions.bootstrap(defaults: defaultDefaultDecisions, handType: .normal)\n softDecisions.bootstrap(defaults: defaultSoftDecisions, handType: .soft)\n splitDecisions.bootstrap(defaults: defaultSplitDecisions, handType: .split)\n try realm.write {\n realm.delete(realm.objects(Decision.self))\n realm.delete(realm.objects(Decisions.self))\n realm.delete(realm.objects(Decision.self))\n realm.delete(realm.objects(Decisions.self))\n realm.add(defaultDecisions)\n realm.add(softDecisions)\n realm.add(splitDecisions)\n }\n } catch {\n print(\"Error, couldn't read decision objects from Realm: \\(error.localizedDescription)\")\n }\n }\n}\n```\n\n### Partitioned, synced realms\n\nThe BlackJack training app uses a standalone Realm database. But what happens if the app is using Realm Sync?\n\nOne option could be to have the SwiftUI preview sync data with your backend Realm service. I think that's a bit too complex, and it breaks my paradigm of treating previews like unit tests for views.\n\nI've found that the simplest solution is to make the view aware of whether it's been created by a preview or by a running app. 
I'll explain how that works.\n\n`AuthorView` from the RChat app fetches data from Realm:\n\n```swift\nstruct AuthorView: View {\n @ObservedResults(Chatster.self) var chatsters\n ...\n}\n```\n\nIts preview code bootstraps the embedded realm:\n\n```swift\nstruct AuthorView_Previews: PreviewProvider {\n static var previews: some View {\n Realm.bootstrap()\n\n return AppearancePreviews(AuthorView(userName: \"rod@contoso.com\"))\n .previewLayout(.sizeThatFits)\n .padding()\n }\n}\n```\n\nThe app adds bootstrap as an extension to Realm:\n\n```swift\nextension Realm: Samplable {\n static func bootstrap() {\n do {\n let realm = try Realm()\n try realm.write {\n realm.deleteAll()\n realm.add(Chatster.samples)\n realm.add(User(User.sample))\n realm.add(ChatMessage.samples)\n }\n } catch {\n print(\"Failed to bootstrap the default realm\")\n }\n }\n}\n```\n\nA complication is that `AuthorView` is embedded in `ChatBubbleView`. For the app to work, `ChatBubbleView` must pass the synced realm configuration to `AuthorView`:\n\n```swift\nAuthorView(userName: authorName)\n .environment(\\.realmConfiguration,\n app.currentUser!.configuration(\n partitionValue: \"all-users=all-the-users\"))\n```\n\n**But**, when previewing `ChatBubbleView`, we want `AuthorView` to use the preview's local, embedded realm (not to be dependent on a Realm back-end app). That means that `ChatBubbleView` must check whether or not it's running as part of a preview:\n\n```swift\nstruct ChatBubbleView: View {\n ...\n var isPreview = false\n ...\n var body: some View {\n ...\n if isPreview {\n AuthorView(userName: authorName)\n } else {\n AuthorView(userName: authorName)\n .environment(\\.realmConfiguration,\n app.currentUser!.configuration(\n partitionValue: \"all-users=all-the-users\"))\n }\n ...\n }\n}\n```\n\nThe preview is then responsible for bootstrapping the local realm and flagging to `ChatBubbleView` that it's a preview:\n\n```swift\nstruct ChatBubbleView_Previews: PreviewProvider {\n static var previews: some View {\n Realm.bootstrap()\n return ChatBubbleView(\n chatMessage: .sample,\n authorName: \"jane\",\n isPreview: true)\n }\n}\n```\n\n## Troubleshooting your previews\n\nAs mentioned at the beginning of this article, the error messages for failed previews are actually useful in Xcode 13.\n\nThat's the good news. \n\nThe bad news is that you still can't use breakpoints or print to the console.\n\nOne mitigation is that the `previews` static var in your preview is a `View`. That means that you can replace the `body` of your `ContentView` with your `previews` code. You can then run the app in a simulator and add breakpoints or print to the console. It feels odd to use this approach, but I haven't found a better option yet.\n\n## Conclusion\n\nI've had a mixed relationship with SwiftUI previews.\n\nWhen they work, they're a great tool, making it quicker to write your views. Previews allow you to unit test your views. Previews help you avoid issues when your app is running in dark or landscape mode or on different devices.\n\nBut, they require effort to build. Prior to Xcode 13, it would be tough to justify that effort because of reliability issues.\n\nI believe that Xcode 13 is the tipping point where the efficiency and quality gains far outweigh the effort of writing preview code. That's why I've written this article now.\n\nIn this article, you've seen a number of tips to make previews as useful as possible. 
I've provided four view builders that you can copy directly into your SwiftUI projects, letting you build the best previews with the minimum of code. Finally, you've seen how you can write previews for views that work with data held in a database such as Realm or Core Data.\n\nPlease provide feedback and ask any questions in the Realm Community Forum.", "format": "md", "metadata": {"tags": ["Realm", "Swift", "Mobile", "iOS"], "pageDescription": "Get the most out of iOS Canvas previews to improve your productivity and app quality", "contentType": "Article"}, "title": "Making SwiftUI Previews Work For You", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/real-time-data-architectures-with-mongodb-cloud-manager-and-verizon-5g-edge", "action": "created", "body": "# Real-time Data Architectures with MongoDB Cloud Manager and Verizon 5G Edge\n\nThe network edge has been one of the most explosive cloud computing opportunities in recent years. As mobile contactless experiences become the norm and as businesses move ever-faster to digital platforms and services, edge computing is positioned as a faster, cheaper, and more reliable alternative for data processing and compute at scale.\n\nWhile mobile devices continue to increase their hardware capabilities with built-in GPUs, custom chipsets, and more storage, even the most cutting-edge devices will suffer the same fundamental problem: each device serves as a single point of failure and, thus, cannot effectively serve as a persistent data storage layer. Said differently, wouldn\u2019t it be nice to have the high-availability of the cloud but with the topological distance to your end users of the smartphone?\n\nMobile edge computing promises to precisely address this problem\u2014bringing low latency compute to the edge of networks with the high-availability and scale of cloud computing. Through Verizon 5G Edge with AWS Wavelength, we saw the opportunity to explore how to take existing compute-intensive workflows and overlay a data persistence layer with MongoDB, utilizing the MongoDB Atlas management platform, to enable ultra-immersive experiences with personalized experience\u2014reliant on existing database structures in the parent region with the seamlessness to extend to the network edge. \n\nIn this article, learn how Verizon and MongoDB teamed up to deliver on this vision, a quick Getting Started guide to build your first MongoDB application at the edge, and advanced architectures for those proficient with MongoDB.\n\nLet\u2019s get started!\n\n## About Verizon 5G Edge and MongoDB\n\nThrough Verizon 5G Edge, AWS developers can now deploy parts of their application that require low latency at the edge of 4G and 5G networks using the same AWS APIs, tools, and functionality they use today, while seamlessly connecting back to the rest of their application and the full range of cloud services running in an AWS Region. By embedding AWS compute and storage services at the edge of the network, use cases such as ML inference, real-time video streaming, remote video production, and game streaming can be rapidly accelerated.\n\nHowever, for many of these use cases, a persistent storage layer is required that extends beyond the native storage capabilities of AWS Wavelength\u2014namely Elastic Block Storage (EBS) volumes. However, using MongoDB Enterprise, developers can leverage the underlying compute (i.e.,. 
EC2 instances) at the edge to deploy MongoDB clusters either a) as standalone clusters or b) highly available replica sets that can synchronize data seamlessly.\n\nMongoDB is a general purpose, document-based, distributed database built for modern application developers. With MongoDB Atlas, developers can get up and running even faster with fully managed MongoDB databases deployed across all major cloud providers.\n\nWhile MongoDB Atlas today does not support deployments within Wavelength Zones, MongoDB Cloud Manager can automate, monitor, and back up your MongoDB infrastructure. Cloud Manager Automation enables you to configure and maintain MongoDB nodes and clusters, whereby MongoDB Agents running on each MongoDB host can maintain your MongoDB deployments. In this example, we\u2019ll start with a fairly simple architecture highlighting the relationship between Wavelength Zones (the edge) and the Parent Region (core cloud):\n\nJust like any other architecture, we\u2019ll begin with a VPC consisting of two subnets. Instead of one public subnet and one private subnet, we\u2019ll have one public subnet and one carrier subnet \u2014a new way to describe subnets exposed within Wavelength Zones to the mobile network only.\n\n* **Public Subnet**: Within the us-west-2 Oregon region, we launched a subnet in us-west-2a availability zone consisting of a single EC2 instance with a public IP address. From a routing perspective, we attached an Internet Gateway to the VPC to provide outbound connectivity and attached the Internet Gateway as the default route (0.0.0.0/0) to the subnet\u2019s associated route table.\n* **Carrier Subnet**: Also within the us-west-2 Oregon region, our second subnet is in the San Francisco Wavelength Zone (us-west-2-wl1-sfo-wlz-1) \u2014an edge data center within the Verizon carrier network but part of the us-west-2 region. In this subnet, we also deploy a single EC2 instance, this time with a carrier IP address\u2014a carrier network-facing IP address exposed to Verizon mobile devices. From a routing perspective, we attached a Carrier Gateway to the VPC to provide outbound connectivity and attached the Carrier Gateway as the default route (0.0.0.0/0) to the subnet\u2019s associated route table.\n\nNext, let\u2019s configure the EC2 instance in the parent region. Once you get the IP address (54.68.26.68) of the launched EC2 instance, SSH into the instance itself and begin to download the MongoDB agent.\n\n```bash\nssh -i \"mongovz.pem\" ec2-user@ec2-54-68-26-68.us-west-2.compute.amazonaws.com\n```\n\nOnce you are in, download and install the packages required for the MongoDB MMS Automation Agent. Run the following command:\n\n```bash\nsudo yum install cyrus-sasl cyrus-sasl-gssapi \\\n cyrus-sasl-plain krb5-libs libcurl \\\n lm_sensors-libs net-snmp net-snmp-agent-libs \\\n openldap openssl tcp_wrappers-libs xz-libs\n```\n\nOnce within the instance, download the MongoDB MMS Automation Agent, and install the agent using the RPM package manager.\n\n```bash\ncurl -OL https://cloud.mongodb.com/download/agent/automation/mongodb-mms-automation-agent-manager-10.30.1.6889-1.x86_64.rhel7.rpm\n\nsudo rpm -U mongodb-mms-automation-agent-manager-10.30.1.6889-1.x86_64.rhel7.rpm\n```\n\nNext, navigate to the **/etc/mongodb-mms/** and edit the **automation-agent.config** file to include your MongoDB Cloud Manager API Key. 
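\n\nFor reference, the settings in that file are plain key=value properties. A minimal sketch follows, with placeholder values standing in for the key and project (group) ID you will generate in the next steps:\n\n```\n# placeholder values -- replace with the agent API key and project ID from Cloud Manager\nmmsGroupId=<your-project-id>\nmmsApiKey=<your-agent-api-key>\n```\n\n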
To create a key, head over to MongoDB Atlas at https://mongodb.com/atlas and either login to an existing account, or sign up for a new free account.\n\nOnce you are logged in, create a new organization, and for the cloud service, be sure to select Cloud Manager.\n\nWith your organization created, next we\u2019ll create a new Project. When creating a new project, you may be asked to select a cloud service, and you\u2019ll choose Cloud Manager again.\n\nNext, you\u2019ll name your project. You can select any name you like, we\u2019ll go with Verizon for our project name. After you give your project a name, you will be given a prompt to invite others to the project. You can skip this step for now as you can always add additional users in the future.\n\nFinally, you are ready to deploy MongoDB to your environment using Cloud Manager. With Cloud Manager, you can deploy both standalone instances as well as Replica Sets of MongoDB. Since we want high availability, we\u2019ll deploy a replica set.\n\nClicking on the **New Replica Set** button will bring us to the user interface to configure our replica set. At this point, we\u2019ll probably get a message saying that no servers were detected, and that\u2019s fine since we haven\u2019t started our MongoDB Agents yet. \n\nClick on the \u201csee instructions\u201d link to get more details on how to install the MongoDB Agent. On the modal that pops up, it will have familiar instructions that we\u2019re already following, but it will also have two pieces of information that we\u2019ll need. The **mmsApiKey** and **mmsGroupId** will be displayed here and you\u2019ll likely have to click the Generate Key button to generate a new mmsAPIKey which will be automatically populated. Make note of these **mmsGroupId** and **mmsApiKey** values as we\u2019ll need when configuring our MongoDB Agents next.\n\nHead back to your terminal for the EC2 instance and navigate to the **/etc/mongodb-mms/** and edit the **automation-agent.config** file to include your MongoDB Cloud Manager API Key. \n\nIn this example, we edited the **mmsApiKey** and **mmsGroupId** variables. From there, we\u2019ll create the data directory and start our MongoDB agent!\n\n```bash\nsudo mkdir -p /data\nsudo chown mongod:mongod /data\nsudo systemctl start mongodb-mms-automation-agent.service\n```\n\nOnce you\u2019ve completed these configuration steps, go ahead and do the same for your Wavelength Zone instance. Note that you will not be able to SSH directly to the instance\u2019s Carrier IP (155.146.16.178/). Instead, you must use the parent region instance as a bastion host to \u201cjump\u201d onto the edge instance itself. To do so, find the private IP address of the edge instance (10.0.0.54) and, from the parent region instance, SSH into the second instance using the same key pair you used.\n\n```bash\nssh -i \"mongovz.pem\" ec2-user@10.0.0.54\n```\n\nAfter completing configuration of the second instance, which follows the same instructions from above, it\u2019s time for the fun part \u2014launching the ReplicaSet on the Cloud Manager Console! The one thing to note for the replica set, since we\u2019ll have three nodes, on the edge instance we\u2019ll create a /data and /data2 directories to allow for two separate directories to host the individual nodes data. Head back over to https://mongodb.com/atlas and the Cloud Manager to complete setup.\n\nRefresh the Create New Replica Set page and now since the MongoDB Agents are running you should see a lot of information pre-populated for you. 
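\n\nIf the page stays empty, a quick sanity check is to confirm that the agent service is actually running on each instance, using the same unit we started earlier:\n\n```bash\n# check the agent service on both the parent region and Wavelength Zone instances\nsudo systemctl status mongodb-mms-automation-agent.service\n```\n\n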
Make sure that it matches what you\u2019d expect and when you\u2019re satisfied hit the Create Replica Set button.\n\nClick on the \u201cCreate Replica Set\u201d button to finalize the process.\n\nWithin a few minutes the replica set cluster will be deployed to the servers and your MongoDB cluster will be up and running. \n\nWith the replica set deployed, you should now be able to connect to your MongoDB cluster hosted on either the standard Us-West or Wavelength zone. To do this, you\u2019ll need the public address for the cluster and the port as well as Authentication enabled in Cloud Manager. To enable Authentication, simply click on the Enabled/Disabled button underneath the Auth section of your replica set and you\u2019ll be given a number of options to connect to the client. We\u2019ll select Username/password.\n\nClick Next, and the subsequent modal will have your username and password to connect to the cluster with.\n\nYou are all set. Next, let\u2019s see how the MongoDB performs at the edge. We\u2019ll test this by reading data from both our standard US-West node as well as the Wavelength zone and compare our results.\n\n## Racing MongoDB at the Edge\n\nAfter laying out the architecture, we wanted to see the power of 5G Edge in action. To that end, we designed a very simple \u201crace.\u201d Over 1,000 trials we would read data from our MongoDB database, and timestamp each operation both from the client to the edge and to the parent region. \n\n```python\nfrom pymongo import MongoClient\nimport time\nclient = MongoClient('155.146.144.134', 27017)\nmydb = client\"mydatabase\"]\nmycol = mydb[\"customers\"]\nmydict = { \"name\": \"John\", \"address\": \"Highway 37\" }\n\n# Load dataset\nfor i in range(1000):\n x = mycol.insert(mydict)\n\n# Measure reads from Parent Region\nedge_latency=[]\nfor i in range(1000):\n t1=time.time()\n y = mycol.find_one({\"name\":\"John\"})\n t2=time.time()\n edge_latency.append(t2-t1)\n\nprint(sum(edge_latency)/len(edge_latency))\n\nclient = MongoClient('52.42.129.138', 27017)\nmydb = client[\"mydatabase\"]\nmycol = mydb[\"customers\"]\nmydict = { \"name\": \"John\", \"address\": \"Highway 37\" }\n\n# Measure reads from Wavelength Region\nedge_latency=[]\nfor i in range(1000):\n t1=time.time()\n y = mycol.find_one({\"name\":\"John\"})\n t2=time.time()\n edge_latency.append(t2-t1)\n\nprint(sum(edge_latency)/len(edge_latency))\n```\n\nAfter running this experiment, we found that our MongoDB node at the edge performed **over 40% faster** than the parent region! But why was that the case? \n\nGiven that the Wavelength Zone nodes were deployed within the mobile network, packets never had to leave the Verizon network and incur the latency penalty of traversing through the public internet\u2014prone to incremental jitter, loss, and latency. In our example, our 5G Ultra Wideband connected device in San Francisco had two options: connect to a local endpoint within the San Francisco mobile network or travel 500+ miles to a data center in Oregon. Thus, we validated the significant performance savings of MongoDB on Verizon 5G Edge relative to the next best alternative: deploying the same architecture in the core cloud.\n\n## Getting started on 5G Edge with MongoDB\n\nWhile Verizon 5G Edge alone enables developers to build ultra-immersive applications, how can immersive applications become personalized and localized?\n\nEnter MongoDB. 
\n\nFrom real-time transaction processing, telemetry capture for your IoT application, or personalization using profile data for localized venue experiences, bringing MongoDB ReplicaSets to the edge allows you to maintain the low latency characteristics of your application without sacrificing access to user profile data, product catalogues, IoT telemetry, and more.\n\nThere\u2019s no better time to start your edge enablement journey with Verizon 5G Edge and MongoDB. To learn more about Verizon 5G Edge, you can visit our [developer resources page. If you have any questions about this blog post, find us in the MongoDB community.\n\nIn our next post, we will demonstrate how to build your first image classifier on 5G Edge using MongoDB to identify VIPs at your next sporting event, developer conference, or large-scale event.\n\n\u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.", "format": "md", "metadata": {"tags": ["MongoDB", "AWS"], "pageDescription": "From real-time transaction processing, telemetry capture for your IoT application, or personalization using profile data for localized venue experiences, bringing MongoDB to the edge allows you to maintain the low latency characteristics of your application without sacrificing access to data.", "contentType": "Tutorial"}, "title": "Real-time Data Architectures with MongoDB Cloud Manager and Verizon 5G Edge", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/adl-sql-integration-test", "action": "created", "body": "# Atlas Query Federation SQL to Form Powerful Data Interactions\n\nModern platforms have a wide variety of data sources. As businesses grow, they have to constantly evolve their data management and have sophisticated, scalable, and convenient tools to analyse data from all sources to produce business insights.\n\nMongoDB has developed a rich and powerful query language, including a very robust aggregation framework. \n\nThese were mainly done to optimize the way developers work with data and provide great tools to manipulate and query MongoDB documents.\n\nHaving said that, many developers, analysts, and tools still prefer the legacy SQL language to interact with the data sources. SQL has a strong foundation around joining data as this was a core concept of the legacy relational databases normalization model. \n\nThis makes SQL have a convenient syntax when it comes to describing joins. \n\nProviding MongoDB users the ability to leverage SQL to analyse multi-source documents while having a flexible schema and data store is a compelling solution for businesses.\n\n## Data Sources and the Challenge\n\nConsider a requirement to create a single view to analyze data from operative different systems. For example:\n\n- Customer data is managed in the user administration systems (REST API).\n- Financial data is managed in a financial cluster (Atlas cluster).\n- End-to-end transactions are stored in files on cold storage gathered from various external providers (cloud object storage - Amazon S3 or Microsoft Azure Blob Storage store).\n\nHow can we combine and best join this data? \n\nMongoDB Atlas Query Federation connects multiple data sources using the different data store types. Once the data sources are mapped, we can create collections consuming this data. 
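Under the hood, that mapping is just a configuration document: named stores (an Atlas cluster, an S3 bucket, an HTTP endpoint) feed one or more virtual collections. A minimal sketch, with hypothetical store and bucket names, looks like this:

```json
{
  "stores": [
    { "name": "financialCluster", "provider": "atlas", "clusterName": "financialCluster", "projectId": "<projectId>" },
    { "name": "coldStorage", "provider": "s3", "bucket": "my-archive-bucket", "prefix": "", "delimiter": "/" }
  ],
  "databases": [
    {
      "name": "FinTech",
      "collections": [
        {
          "name": "Transactions",
          "dataSources": [
            { "storeName": "financialCluster", "database": "FinTech", "collection": "transactions" },
            { "storeName": "coldStorage", "path": "/*" }
          ]
        }
      ],
      "views": []
    }
  ]
}
```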
Those collections can have SQL schema generated, allowing us to perform sophisticated joins and do JDBC queries from various BI tools.\n\nIn this article, we will showcase the extreme power hidden in Atlas SQL Query.\n\n## Setting Up My Federated Database Instance\nIn the following view, I have created three main data stores: \n- S3 Transaction Store (S3 sample data).\n- Accounts from my Atlas clusters (Sample data sample_analytics.accounts).\n- Customer data from a secure https source.\n\nI mapped the stores into three collections under `FinTech` database:\n\n- `Transactions`\n- `Accounts`\n- `CustomerDL`\n\nNow, I can see them through a Query Federation connection as MongoDB collections.\n\nLet's grab our Query Federation instance connection string from the Atlas UI.\n\nThis connection string can be used with our BI tools or client applications to run SQL queries.\n\n## Connecting and Using $sql and db.sql\n\nOnce we connect to the Query Federation instancee via a mongosh shell, we can generate a SQL schema for our collections. This is optional for the JDBC or $sql operators to recognise collections as SQL \u201ctables\u201d as this step is done automatically for newly created collections, however, its always good to be familiar with the available commands.\n\n#### Generate SQL schema for each collection:\n```js\nuse admin;\ndb.runCommand({sqlGenerateSchema: 1, sampleNamespaces: \"FinTech.customersDL\"], sampleSize: 1000, setSchemas: true})\n{\n ok: 1,\n schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]\n}\ndb.runCommand({sqlGenerateSchema: 1, sampleNamespaces: [\"FinTech.accounts\"], sampleSize: 1000, setSchemas: true})\n{\n ok: 1,\n schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]\n}\ndb.runCommand({sqlGenerateSchema: 1, sampleNamespaces: [\"FinTech.transactions\"], sampleSize: 1000, setSchemas: true})\n{\n ok: 1,\n schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]\n}\n```\n#### Running SQL queries and joins using $sql stage:\n```js\nuse FinTech;\ndb.aggregate([{\n $sql: {\n statement: \"SELECT a.* , t.transaction_count FROM accounts a, transactions t where a.account_id = t.account_id SORT BY t.transaction_count DESC limit 2\",\n format: \"jdbc\",\n formatVersion: 2,\n dialect: \"mysql\",\n }\n}])\n\n// Equivalent command\ndb.sql(\"SELECT a.* , t.transaction_count FROM accounts a, transactions t where a.account_id = t.account_id SORT BY t.transaction_count DESC limit 2\");\n```\n\nThe above query will prompt account information and the transaction counts of each account.\n\n## Connecting Via JDBC\n\nLet\u2019s connect a powerful BI tool like Tableau with the [JDBC driver.\n\nDownload JDBC Driver.\n\n#### Connect to Tableau\nYou have 2 main options to connect, via \"MongoDB Atlas\" connector or via a JDBC general connector. Please follow the relevant instructions and prerequisites on this documentation page.\n\n##### Connector \"MongoDB Atlas by MongoDB\"\nSearch and click the \u201cMongoDB Atlas by MongoDB\u201d connector and provide the information pointing to our Query Federation URI. 
See the following example:\n\n##### \"JDBC\" Connector\n\nSetting `connection.properties` file.\n```\nuser=root\npassword=*******\nauthSource=admin\ndatabase=FinTech\nssl=true\ncompressors=zlib\n```\n\nClick the \u201cOther Databases (JDBC)\u201d connector, copy JDBC connection format, and load the `connection.properties` file.\n\nOnce the data is read successfully, the collections will appear on the right side.\n\n#### Setting and Joining Data\n\nWe can drag and drop collections from different sources and link them together.\n\nIn my case, I connected `Transactions` => `Accounts` based on the `Account Id` field, and accounts and users based on the `Account Id` to `Accounts` field.\n\nIn this view, we will see a unified table for all accounts with usernames and their transactions start quarter. \n\n## Summary\n\nMongoDB has all the tools to read, transform, and analyse your documents for almost any use-case. \n\nWhether your data is in an Atlas operational cluster, in a service, or on cold storage like cloud object storage, Atlas Query Federation will provide you with the ability to join the data in real time. With the option to use powerful join SQL syntax and SQL-based BI tools like Tableau, you can get value out of the data in no time.\n\nTry Atlas Query Federation with your BI tools and SQL today.", "format": "md", "metadata": {"tags": ["MongoDB", "SQL"], "pageDescription": "Learn how new SQL-based queries can power your Query Federation insights in minutes. Integrate this capability with powerful BI tools like Tableau to get immediate value out of your data. ", "contentType": "Article"}, "title": "Atlas Query Federation SQL to Form Powerful Data Interactions", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-serverless-quick-start", "action": "created", "body": "# MongoDB Atlas Serverless Instances: Quick Start\n\nMongoDB Atlas serverless instances are now GA (generally available)!\n\nWhat is a serverless instance you might ask? In short, *it\u2019s an on-demand serverless database*. In this article, we'll deploy a MongoDB Atlas serverless instance and perform some basic CRUD operations. You\u2019ll need a MongoDB Atlas account. If you already have one sign-in, or register now.\n\n## Demand Planning\n\nWhen you deploy a MongoDB Atlas cluster, you need to understand what compute and storage resources your application will require so that you pick the correct tier to accommodate its needs.\n\nAs your storage needs grow, you will need to adjust your cluster\u2019s tier accordingly. You can also enable auto-scaling between a minimum and maximum tier.\n\n## Ongoing Management\n\nOnce you\u2019ve set your tiering scale, what happens when your app explodes and gets tons of traffic and exceeds your estimated maximum tier? It\u2019s going to be slow and unresponsive because there aren\u2019t enough resources.\n\nOr, maybe you\u2019ve over-anticipated how much traffic your application would get but you\u2019re not getting any traffic. You still have to pay for the resources even if they aren\u2019t being utilized.\n\nAs your application scales, you are limited to these tiered increments but nothing in between.\n\nThese tiers tightly couple compute and storage with each other. You may not need 3TB of storage but you do need a lot of compute. 
So you\u2019re forced into a tier that isn\u2019t balanced to the needs of your application.\n\n## The Solve\n\nMongoDB Atlas serverless instances solve all of these issues:\n\n- Deployment friction\n- Management overhead\n- Performance consequences\n- Paying for unused resources\n- Rigid data models\n\nWith MongoDB Atlas serverless instances, you will get seamless deployment and scaling, a reliable backend infrastructure, and an intuitive pricing model.\n\nIt\u2019s even easier to deploy a serverless instance than it is to deploy a free cluster on MongoDB Atlas. All you have to do is choose a cloud provider and region. Once created, your serverless instance will seamlessly scale up and down as your application demand fluctuates.\n\nThe best part is you only pay for the compute and storage resources you use, leaving the operations to MongoDB\u2019s best-in-class automation, including end-to-end security, continuous uptime, and regular backups.\n\n## Create Your First Serverless Instance\n\nLet\u2019s see how it works\u2026\n\nIf you haven\u2019t already signed up for a MongoDB Atlas account, go ahead and do that first, then select \"Build a Database\".\n\nNext, choose the Serverless deployment option.\n\nNow, select a cloud provider and region, and then optionally modify your instance name. Create your new deployment and you\u2019re ready to start using your serverless instance!\n\nYour serverless instance will be up and running in just a few minutes. Alternatively, you can also use the Atlas CLI to create and deploy a new serverless instance.\n\nWhile we wait for that, let\u2019s set up a quick Node.js application to test out the CRUD operations.\n\n## Node.js CRUD Example\n\nPrerequisite: You will need Node.js installed on your computer.\n\nConnecting to the serverless instance is just as easy as a tiered instance.\n\n1. Click \u201cConnect.\u201d\n\n \n\n3. Set your IP address and database user the same as you would a tiered instance.\n4. Choose a connection method.\n- You can choose between mongo shell, Compass, or \u201cConnect your application\u201d using MongoDB drivers.\n \n \n \nWe are going to \u201cConnect your application\u201d and choose Node.js as our driver. This will give us a connection string we can use in our Node.js application. Check the \u201cInclude full driver code example\u201d box and copy the example to your clipboard.\n\nTo set up our application, open VS Code (or your editor of choice) in a blank folder. 
From the terminal, let\u2019s initiate a project:\n\n`npm init -y`\n\nNow we\u2019ll install MongoDB in our project:\n\n`npm i mongodb`\n\n### Create\n\nWe\u2019ll create a `server.js` file in the root and paste the code example we just copied.\n\n```js\nconst MongoClient = require('mongodb').MongoClient;\nconst uri = \"mongodb+srv://mongo:@serverlessinstance0.xsel4.mongodb.net/myFirstDatabase?retryWrites=true&w=majority\";\nconst client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true });\n\nclient.connect(err => {\n const collection = client.db(\"test\").collection(\"devices\");\n \n // perform actions on the collection object\n \n client.close();\n});\n```\n\nWe\u2019ll need to replace `` with our actual user password and `myFirstDatabase` with the database name we\u2019ll be connecting to.\n\nLet\u2019s modify the `client.connect` method to create a database, collection, and insert a new document.\n\nNow we\u2019ll run this from our terminal using `node server`.\n\n```js\nclient.connect((err) => {\n const collection = client.db(\"store\").collection(\"products\");\n collection\n .insertOne(\n {\n name: \"JavaScript T-Shirt\",\n category: \"T-Shirts\",\n })\n .then(() => {\n client.close();\n });\n});\n```\n\nWhen we use the `.db` and `.collection` methods, if the database and/or collection does not exist, it will be created. We also have to move the `client.close` method into a `.then()` after the `.insertOne()` promise has been returned. Alternatively, we could wrap this in an async function.\n\nWe can also insert multiple documents at the same time using `.insertMany()`.\n\n```js\ncollection\n .insertMany(\n {\n name: \"React T-Shirt\",\n category: \"T-Shirts\",\n },\n {\n name: \"Vue T-Shirt\",\n category: \"T-Shirts\",\n }\n ])\n .then(() => {\n client.close();\n });\n```\n\nMake the changes and run `node server` again.\n\n### Read\n\nLet\u2019s see what\u2019s in the database now. There should be three documents. The `find()` method will return all documents in the collection.\n\n```js\nclient.connect((err) => {\n const collection = client.db(\"store\").collection(\"products\");\n collection.find().toArray((err, result) => console.log(result))\n .then(() => {\n client.close();\n });\n});\n```\n\nWhen you run `node server` now, you should see all of the documents created in the console.\n\nIf we wanted to find a specific document, we could pass an object to the `find()` method, giving it something to look for.\n\n```js\nclient.connect((err) => {\n const collection = client.db(\"store\").collection(\"products\");\n collection.find({name: \u201cReact T-Shirt\u201d}).toArray((err, result) => console.log(result))\n .then(() => {\n client.close();\n });\n});\n```\n\n### Update\n\nTo update a document, we can use the `updateOne()` method, passing it an object with the search parameters and information to update.\n\n```js\nclient.connect((err) => {\n const collection = client.db(\"store\").collection(\"products\");\n collection.updateOne(\n { name: \"Awesome React T-Shirt\" },\n { $set: { name: \"React T-Shirt\" } }\n )\n .then(() => {\n client.close();\n });\n});\n```\n\nTo see these changes, run a `find()` or `findOne()` again.\n\n### Delete\n\nTo delete something from the database, we can use the `deleteOne()` method. This is similar to `find()`. 
We just need to pass it an object for it to find and delete.\n\n```js\nclient.connect((err) => {\n const collection = client.db(\"store\").collection(\"products\");\n collection.deleteOne({ name: \"Vue T-Shirt\" }).then(() => client.close());\n});\n```\n\n## Conclusion\n\nIt\u2019s super easy to use MongoDB Atlas serverless instances! You will get seamless deployment and scaling, a reliable backend infrastructure, and an intuitive pricing model. We think that serverless instances are a great deployment option for new users on Atlas.\n\nI\u2019d love to hear your feedback or questions. Let\u2019s chat in the [MongoDB Community.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Serverless", "Node.js"], "pageDescription": "MongoDB Atlas serverless instances are now generally available! What is a serverless instance you might ask? In short, it\u2019s an on-demand serverless database. In this article, we'll deploy a MongoDB Atlas serverless instance and perform some basic CRUD operations.", "contentType": "Quickstart"}, "title": "MongoDB Atlas Serverless Instances: Quick Start", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-data-federation-out-aws-s3", "action": "created", "body": "# MongoDB Atlas Data Federation Tutorial: Federated Queries and $out to AWS S3\n\nData Federation is a MongoDB Atlas feature that allows you to query data from disparate sources such as:\n\n* Atlas databases.\n* Atlas Data Lake.\n* HTTP APIs.\n* AWS S3 buckets.\n\nIn this tutorial, I will show you how to access your archived documents in S3 **and** your documents in your MongoDB Atlas cluster with a **single** MQL query.\n\nThis feature is really amazing because it allows you to have easy access to your archived data in S3 along with your \"hot\" data in your Atlas cluster. This could help you prevent your Atlas clusters from growing in size indefinitely and reduce your costs drastically. It also makes it easier to gain new insights by easily querying data residing in S3 and exposing it to your real-time app.\n\nFinally, I will show you how to use the new version of the $out aggregation pipeline stage to write documents from a MongoDB Atlas cluster into an AWS S3 bucket.\n\n## Prerequisites\n\nIn order to follow along this tutorial, you need to:\n\n* Create a MongoDB Atlas cluster. \u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n* Create a user in the **Database Access** menu.\n* Add your IP address in the Network Access List in the **Network Access** menu.\n* Have Python 3 with `pymongo` and `dnspython` libs installed.\n\n### Configure your S3 bucket and AWS account\n\nLog into your AWS account and create an S3 bucket. Choose a region close to your Atlas deployment to minimize data latency. The scripts in this tutorial use a bucket called `cold-data-mongodb` in the region `eu-west-1`. If you use a different name or select another region, make sure to reflect that in the Python code you\u2019ll see in the tutorial. \n\nThen, install the AWS CLI and configure it to access your AWS account. If you need help setting it up, refer to the AWS documentation.\n\n### Prepare the dataset\n\nTo illustrate how `$out` and federated queries work, I will use an overly simple dataset to keep things as easy as possible to understand. 
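For concreteness, each document in that dataset looks roughly like this (a sketch; the exact values come from the data we'll insert in a moment):

```json
{
  "_id": 1,
  "created": { "$date": "2020-05-30T00:00:00Z" },
  "items": [1, 3],
  "price": 20
}
```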
Our database \u201ctest\u201d will have a single collection, \u201corders,\u201d representing orders placed in an online store. Each order document will have a \u201ccreated\u201d field of type \u201cDate.\u201d We\u2019ll use that field to archive older orders, moving them from the Atlas cluster to S3.\n\nI\u2019ve written a Python script that inserts the required data in the Atlas cluster. You can get the script, along with the rest of the code we\u2019ll use in the tutorial, from GitHub:\n\n```\ngit clone https://github.com/mongodb-developer/data-lake-tutorial.git\n```\n\nThen, go back to Atlas to locate the connection string for your cluster. Click on \u201cConnect\u201d and then \u201cConnect your application.\u201d Copy the connection string and paste it in the `insert_data.py` script you just downloaded from GitHub. Don\u2019t forget to replace the `` and `` placeholders with the credentials of your database user:\n\n**insert_data.py**\n```python\nfrom pymongo import MongoClient\nfrom datetime import datetime\n\nclient = MongoClient('mongodb+srv://:@m0.lbtrerw.mongodb.net/')\n\u2026\n```\n\nFinally, install the required libraries and run the script:\n\n```\npip3 install -r requirements.txt\npython3 insert_data.py\n```\n\nNow that we have a \u201cmassive\u201d collection of orders, we can consider archiving the oldest orders to an S3 bucket. Let's imagine that once a month is over, we can archive all the orders from the previous month. We\u2019ll create one JSON file in S3 for all the orders created during the previous month.\n\nWe\u2019ll transfer these orders to S3 using the aggregation pipeline stage $out.\n\nBut first, we need to configure Atlas Data Federation correctly.\n\n## Configure Data Federation\nNavigate to \u201cData Federation\u201d from the side menu in Atlas and then click \u201cset up manually\u201d in the \"create new federated database\" dropdown in the top right corner of the UI.\n\nOn the left, we see a panel with the data sources (we don\u2019t have any yet), and on the right are the \u201cvirtual\u201d databases and collections of the federated instance.\n\n### Configure the Atlas cluster as a data source\n\nLet\u2019s add the first data source \u2014 the orders from our Atlas cluster. Click \u201cAdd Data Sources,\u201d select \u201cAtlas Cluster,\u201d and then select your cluster and database.\n\nClick \u201cNext\u201d and you\u2019ll see the \u201ctest.orders\u201d collection as a data source. Click on the \u201ctest.orders\u201d row, drag it underneath the \u201cVirtualCollection0,\u201d and drop it there as a data source.\n\n### Configure the S3 bucket as a data source\n\nNext, we\u2019ll connect our S3 bucket. Click on \u201cAdd Data Sources\u201d again and this time, select Amazon S3. Click \u201cNext\u201d and follow the instructions to create and authorize a new AWS IAM role. We need to execute a couple of commands with the AWS CLI. Make sure you\u2019ve installed and linked the CLI to your AWS account before that. If you\u2019re facing any issues, check out the AWS CLI troubleshooting page.\n\nOnce you\u2019ve authorized the IAM role, you\u2019ll be prompted for the name of your S3 bucket and the access policy. Since we'll be writing files to our bucket, we need to choose \u201cRead and write.\u201d\n\nYou can also configure a prefix. If you do, Data Federation will only search for files in directories starting with the specified prefix. 
In this tutorial, we want to access files in the root directory of the bucket, so we\u2019ll leave this field empty.\n\nAfter that, we need to execute a couple more AWS CLI commands to make sure the IAM role has permissions for the S3 bucket. When you\u2019re finished, click \u201cNext.\u201d\n\nFinally, we\u2019ll be prompted to define a path to the data we want to access in the bucket. To keep things simple, we\u2019ll use a wildcard configuration allowing us to access all files. Set `s3://cold-data-mongodb/*` as the path and `any value (*)` as the data type of the file. \n\nData Federation also allows you to create partitions and parse fields from the filenames in your bucket. This can optimize the performance of your queries by traversing only relevant files and directories. To find out more, check out the Data Federation docs.\n\nOnce we\u2019ve added the S3 bucket data, we can drag it over to the virtual collection as a data source.\n\n### Rename the virtual database and collection\n\nThe names \u201cVirtualDatabase0\u201d and \u201cVirtualCollection0\u201d don\u2019t feel appropriate for our data. Let\u2019s rename them to \u201ctest\u201d and \u201corders\u201d respectively to match the data in the Atlas cluster.\n\n### Verify the JSON configuration\n\nFinally, to make sure that our setup is correct, we can switch to the JSON view in the top right corner, right next to the \u201cSave\u201d button. Your configuration, except for the project ID and the cluster name, should be identical to this:\n\n```json\n{\n \"databases\": \n {\n \"name\": \"test\",\n \"collections\": [\n {\n \"name\": \"orders\",\n \"dataSources\": [\n {\n \"storeName\": \"M0\",\n \"database\": \"test\",\n \"collection\": \"orders\"\n },\n {\n \"storeName\": \"cold-data-mongodb\",\n \"path\": \"/*\"\n }\n ]\n }\n ],\n \"views\": []\n }\n ],\n \"stores\": [\n {\n \"name\": \"M0\",\n \"provider\": \"atlas\",\n \"clusterName\": \"M0\",\n \"projectId\": \"\"\n },\n {\n \"name\": \"cold-data-mongodb\",\n \"provider\": \"s3\",\n \"bucket\": \"cold-data-mongodb\",\n \"prefix\": \"\",\n \"delimiter\": \"/\"\n }\n ]\n}\n```\n\nOnce you've verified everything looks good, click the \u201cSave\u201d button. If your AWS IAM role is configured correctly, you\u2019ll see your newly configured federated instance. 
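If saving fails instead, the culprit is usually the bucket or role wiring. One quick sanity check from the AWS CLI (this only verifies the bucket name and your local credentials, not the IAM role Atlas assumes):

```bash
# Confirm the bucket exists and is reachable with your CLI credentials
aws s3 ls s3://cold-data-mongodb/
```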
We\u2019re now ready to connect to it!\n\n## Archive cold data to S3 with $out\n\nLet's now collect the URI we are going to use to connect to Atlas Data Federation.\n\nClick on the \u201cConnect\u201d button, and then \u201cConnect your application.\u201d Copy the connection string as we\u2019ll need it in just a minute.\n\nNow let's use Python to execute our aggregation pipeline and archive the two orders from May 2020 in our S3 bucket.\n\n``` python\nfrom datetime import datetime\n\nfrom pymongo import MongoClient\n\nclient = MongoClient('')\ndb = client.get_database('test')\ncoll = db.get_collection('orders')\n\nstart_date = datetime(2020, 5, 1) # May 1st\nend_date = datetime(2020, 6, 1) # June 1st\n\npipeline = [\n {\n '$match': {\n 'created': {\n '$gte': start_date,\n '$lt': end_date\n }\n }\n },\n {\n '$out': {\n 's3': {\n 'bucket': 'cold-data-mongodb',\n 'region': 'eu-west-1',\n 'filename': start_date.isoformat('T', 'milliseconds') + 'Z-' + end_date.isoformat('T', 'milliseconds') + 'Z',\n 'format': {'name': 'json', 'maxFileSize': '200MiB'}\n }\n }\n }\n]\n\ncoll.aggregate(pipeline)\nprint('Archive created!')\n```\nOnce you replace the connection string with your own, execute the script:\n\n```\npython3 archive.py\n```\n\nAnd now we can confirm that our archive was created correctly in our S3 bucket:\n\n![\"file in the S3 bucket\"\n\n### Delete the \u201ccold\u201d data from Atlas\n\nNow that our orders are safe in S3, I can delete these two orders from my Atlas cluster. Let's use Python again. This time, we need to use the URI from our Atlas cluster because the Atlas Data Federation URI doesn't allow this kind of operation.\n\n``` python\nfrom datetime import datetime\n\nfrom pymongo import MongoClient\n\nclient = MongoClient('')\ndb = client.get_database('test')\ncoll = db.get_collection('orders')\n\nstart_date = datetime(2020, 5, 1) # May 1st\nend_date = datetime(2020, 6, 1) # June 1st\nquery = {\n 'created': {\n '$gte': start_date,\n '$lt': end_date\n }\n}\n\nresult = coll.delete_many(query)\nprint('Deleted', result.deleted_count, 'orders.')\n```\n\nLet's run this code:\n\n``` none\npython3 remove.py\n\n```\n\nNow let's double-check what we have in S3. Here is the content of the S3 file I downloaded:\n\n``` json\n{\"_id\":{\"$numberDouble\":\"1.0\"},\"created\":{\"$date\":{\"$numberLong\":\"1590796800000\"}},\"items\":{\"$numberDouble\":\"1.0\"},{\"$numberDouble\":\"3.0\"}],\"price\":{\"$numberDouble\":\"20.0\"}}\n{\"_id\":{\"$numberDouble\":\"2.0\"},\"created\":{\"$date\":{\"$numberLong\":\"1590883200000\"}},\"items\":[{\"$numberDouble\":\"2.0\"},{\"$numberDouble\":\"3.0\"}],\"price\":{\"$numberDouble\":\"25.0\"}}\n```\n\nAnd here is what's left in my MongoDB Atlas cluster.\n\n![Documents left in MongoDB Atlas cluster\n\n### Federated queries\n\nAs mentioned above already, with Data Federation, you can query data stored across Atlas and S3 simultaneously. This allows you to retain easy access to 100% of your data. 
We actually already did that when we ran the aggregation pipeline with the `$out` stage.\n\nLet's verify this one last time with Python:\n\n``` python\nfrom pymongo import MongoClient\n\nclient = MongoClient('')\ndb = client.get_database('test')\ncoll = db.get_collection('orders')\n\nprint('All the docs from S3 + Atlas:')\ndocs = coll.find()\nfor d in docs:\n print(d)\n\npipeline = \n {\n '$group': {\n '_id': None,\n 'total_price': {\n '$sum': '$price'\n }\n }\n }, {\n '$project': {\n '_id': 0\n }\n }\n]\n\nprint('\\nI can also run an aggregation.')\nprint(coll.aggregate(pipeline).next())\n```\n\nExecute the script with:\n\n```bash\npython3 federated_queries.py\n```\n\nHere is the output:\n\n``` none\nAll the docs from S3 + Atlas:\n{'_id': 1.0, 'created': datetime.datetime(2020, 5, 30, 0, 0), 'items': [1.0, 3.0], 'price': 20.0}\n{'_id': 2.0, 'created': datetime.datetime(2020, 5, 31, 0, 0), 'items': [2.0, 3.0], 'price': 25.0}\n{'_id': 3.0, 'created': datetime.datetime(2020, 6, 1, 0, 0), 'items': [1.0, 3.0], 'price': 20.0}\n{'_id': 4.0, 'created': datetime.datetime(2020, 6, 2, 0, 0), 'items': [1.0, 2.0], 'price': 15.0}\n\nI can also run an aggregation:\n{'total_price': 80.0}\n```\n\n## Wrap up\n\nIf you have a lot of infrequently accessed data in your Atlas cluster but you still need to be able to query it and access it easily once you've archived it to S3, creating a federated instance will help you save tons of money. If you're looking for an automated way to archive your data from Atlas clusters to fully-managed S3 storage, then check out our new [Atlas Online Archive feature!\n\nStorage on S3 is a lot cheaper than scaling up your MongoDB Atlas cluster because your cluster is full of cold data and needs more RAM and storage size to operate correctly.\n\nAll the Python code is available in this Github repository.\n\nPlease let us know on Twitter if you liked this blog post: @MBeugnet and @StanimiraVlaeva.\n\nIf you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will give you a hand.", "format": "md", "metadata": {"tags": ["Atlas", "AWS"], "pageDescription": "Learn how to use MongoDB Atlas Data Federation to query data from Atlas databases and AWS S3 and archive cold data to S3 with $out.", "contentType": "Tutorial"}, "title": "MongoDB Atlas Data Federation Tutorial: Federated Queries and $out to AWS S3", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/php/php-crud", "action": "created", "body": "# Creating, Reading, Updating, and Deleting MongoDB Documents with PHP\n\n \n\nWelcome to Part 2 of this quick start guide for MongoDB and PHP. In the previous article, I walked through the process of installing, configuring, and setting up PHP, Apache, and the MongoDB Driver and Extension so that you can effectively begin building an application leveraging the PHP, MongoDB stack.\n\nI highly recommend visiting the first article in this series to get set up properly if you have not previously installed PHP and Apache.\n\nI've created each section with code samples. And I'm sure I'm much like you in that I love it when a tutorial includes examples that are standalone... They can be copy/pasted and tested out quickly. Therefore, I tried to make sure that each example is created in a ready-to-run fashion.\n\nThese samples are available in this repository, and each code sample is a standalone program that you can run by itself. 
In order to run the samples, you will need to have installed PHP, version 8, and you will need to be able to install additional PHP libraries using `Compose`. These steps are all covered in the first article in this series.\n\nAdditionally, while I cover it in this article, it bears mentioning upfront that you will need to create and use a `.env` file with your credentials and the server name from your MongoDB Atlas cluster.\n\nThis guide is organized into a few sections over a few articles. This first article addresses the installation and configuration of your development environment. PHP is an integrated web development language. There are several components you typically use in conjunction with the PHP programming language.\n\n>Video Introduction and Overview\n>\n>:youtube]{vid=tW87xDCPspk}\n\nLet's start with an overview of what we'll cover in this article.\n\n1. [Connecting to a MongoDB Database Instance\n1. Creating or Inserting a Single MongoDB Document with PHP\n1. Creating or Inserting Multiple MongoDB Documents with PHP\n1. Reading Documents with PHP\n1. Updating Documents with PHP\n1. Deleting Documents with PHP\n\n## Connecting to a MongoDB Database Instance\n\nTo connect to a MongoDB Atlas cluster, use the Atlas connection string for your cluster:\n\n``` php\n:@/test?w=majority'\n);\n$db = $client->test;\n```\n\n>Just a note about language. Throughout this article, we use the term `create` and `insert` interchangeably. These two terms are synonymous. Historically, the act of adding data to a database was referred to as `CREATING`. Hence, the acronym `CRUD` stands for Create, Read, Update, and Delete. Just know that when we use create or insert, we mean the same thing.\n\n## Protecting Sensitive Authentication Information with DotEnv (.env)\n\nWhen we connect to MongoDB, we need to specify our credentials as part of the connection string. You can hard-code these values into your programs, but when you commit your code to a source code repository, you're exposing your credentials to whomever you give access to that repository. If you're working on open source, that means the world has access to your credentials. This is not a good idea. Therefore, in order to protect your credentials, we store them in a file that **does** **not** get checked into your source code repository. Common practice dictates that we store this information only in the environment. A common method of providing these values to your program's running environment is to put credentials and other sensitive data into a `.env` file.\n\nThe following is an example environment file that I use for the examples in this tutorial.\n\n``` bash\nMDB_USER=\"yourusername\"\nMDB_PASS=\"yourpassword\"\nATLAS_CLUSTER_SRV=\"mycluster.zbcul.mongodb.net\"\n```\n\nTo create your own environment file, create a file called `.env` in the root of your program directory. You can simply copy the example environment file I've provided and rename it to `.env`. Be sure to replace the values in the file `yourusername`, `yourpassword`, and `mycluster.zbcul.mongodb.net` with your own.\n\nOnce the environment file is in place, you can use `Composer` to install the DotEnv library, which will enable us to read these variables into our program's environment. 
See the first article in this series for additional setup instructions.\n\n``` bash\n$ composer require vlucas/phpdotenv\n```\n\nOnce installed, you can incorporate this library into your code to pull in the values from your `.env` file.\n\n``` php\n$dotenv = Dotenv\\Dotenv::createImmutable(__DIR__);\n$dotenv->load();\n```\n\nNext, you will be able to reference the values from the `.env` file using the `$_ENV]` array like this:\n\n``` php\necho $_ENV['MDB_USER'];\n```\n\nSee the code examples below to see this in action.\n\n## Creating or Inserting a Single MongoDB Document with PHP\n\nThe [MongoDBCollection::insertOne() method inserts a single document into MongoDB and returns an instance of MongoDBInsertOneResult, which you can use to access the ID of the inserted document.\n\nThe following code sample inserts a document into the users collection in the test database:\n\n``` php\nload();\n\n$client = new MongoDB\\Client(\n 'mongodb+srv://'.$_ENV'MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/test'\n);\n\n$collection = $client->test->users;\n\n$insertOneResult = $collection->insertOne([\n 'username' => 'admin',\n 'email' => 'admin@example.com',\n 'name' => 'Admin User',\n]);\n\nprintf(\"Inserted %d document(s)\\n\", $insertOneResult->getInsertedCount());\n\nvar_dump($insertOneResult->getInsertedId());\n```\n\nYou should see something similar to:\n\n``` bash\nInserted 1 document(s)\nobject(MongoDB\\BSON\\ObjectId)#11 (1) {\n [\"oid\"]=>\n string(24) \"579a25921f417dd1e5518141\"\n}\n```\n\nThe output includes the ID of the inserted document.\n\n## Creating or Inserting Multiple MongoDB Documents with PHP\n\nThe [MongoDBCollection::insertMany() method allows you to insert multiple documents in one write operation and returns an instance of MongoDBInsertManyResult, which you can use to access the IDs of the inserted documents.\n\nThe following sample code inserts two documents into the users collection in the test database:\n\n``` php\nload();\n\n$client = new MongoDB\\Client(\n 'mongodb+srv://'.$_ENV'MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/test'\n);\n\n$collection = $client->test->users;\n\n$insertManyResult = $collection->insertMany([\n [\n 'username' => 'admin',\n 'email' => 'admin@example.com',\n 'name' => 'Admin User',\n ],\n [\n 'username' => 'test',\n 'email' => 'test@example.com',\n 'name' => 'Test User',\n ],\n]);\n\nprintf(\"Inserted %d document(s)\\n\", $insertManyResult->getInsertedCount());\n\nvar_dump($insertManyResult->getInsertedIds());\n```\n\nYou should see something similar to the following:\n\n``` bash\nInserted 2 document(s)\narray(2) {\n[0]=>\n object(MongoDB\\BSON\\ObjectId)#18 (1) {\n [\"oid\"]=>\n string(24) \"6037b861301e1d502750e712\"\n }\n [1]=>\n object(MongoDB\\BSON\\ObjectId)#21 (1) {\n [\"oid\"]=>\n string(24) \"6037b861301e1d502750e713\"\n }\n}\n```\n\n## Reading Documents with PHP\n\nReading documents from a MongoDB database can be accomplished in several ways, but the most simple way is to use the `$collection->find()` command.\n\n``` php\nfunction find($filter = [], array $options = []): MongoDB\\Driver\\Cursor\n```\n\nRead more about the find command in PHP [here:.\n\nThe following sample code specifies search criteria for the documents we'd like to find in the `restaurants` collection of the `sample_restaurants` database. 
To use this example, please see the Available Sample Datasets for Atlas Clusters.\n\n``` php\nload();\n\n$client = new MongoDB\\Client(\n 'mongodb+srv://'.$_ENV'MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/sample_restaurants'\n);\n\n$collection = $client->sample_restaurants->restaurants;\n\n$cursor = $collection->find(\n [\n 'cuisine' => 'Italian',\n 'borough' => 'Manhattan',\n ],\n [\n 'limit' => 5,\n 'projection' => [\n 'name' => 1,\n 'borough' => 1,\n 'cuisine' => 1,\n ],\n ]\n);\n\nforeach ($cursor as $restaurant) {\n var_dump($restaurant);\n};\n```\n\nYou should see something similar to the following output:\n\n``` bash\nobject(MongoDB\\Model\\BSONDocument)#20 (1) {\n[\"storage\":\"ArrayObject\":private]=>\n array(4) {\n [\"_id\"]=>\n object(MongoDB\\BSON\\ObjectId)#26 (1) {\n [\"oid\"]=>\n string(24) \"5eb3d668b31de5d588f42965\"\n }\n [\"borough\"]=>\n string(9) \"Manhattan\"\n [\"cuisine\"]=>\n string(7) \"Italian\"\n [\"name\"]=>\n string(23) \"Isle Of Capri Resturant\"\n }\n}\nobject(MongoDB\\Model\\BSONDocument)#19 (1) {\n[\"storage\":\"ArrayObject\":private]=>\n array(4) {\n [\"_id\"]=>\n object(MongoDB\\BSON\\ObjectId)#24 (1) {\n [\"oid\"]=>\n string(24) \"5eb3d668b31de5d588f42974\"\n }\n [\"borough\"]=>\n string(9) \"Manhattan\"\n [\"cuisine\"]=>\n string(7) \"Italian\"\n [\"name\"]=>\n string(18) \"Marchis Restaurant\"\n }\n}\nobject(MongoDB\\Model\\BSONDocument)#26 (1) {\n[\"storage\":\"ArrayObject\":private]=>\n array(4) {\n [\"_id\"]=>\n object(MongoDB\\BSON\\ObjectId)#20 (1) {\n [\"oid\"]=>\n string(24) \"5eb3d668b31de5d588f42988\"\n }\n [\"borough\"]=>\n string(9) \"Manhattan\"\n [\"cuisine\"]=>\n string(7) \"Italian\"\n [\"name\"]=>\n string(19) \"Forlinis Restaurant\"\n }\n}\nobject(MongoDB\\Model\\BSONDocument)#24 (1) {\n[\"storage\":\"ArrayObject\":private]=>\n array(4) {\n [\"_id\"]=>\n object(MongoDB\\BSON\\ObjectId)#19 (1) {\n [\"oid\"]=>\n string(24) \"5eb3d668b31de5d588f4298c\"\n }\n [\"borough\"]=>\n string(9) \"Manhattan\"\n [\"cuisine\"]=>\n string(7) \"Italian\"\n [\"name\"]=>\n string(22) \"Angelo Of Mulberry St.\"\n }\n}\nobject(MongoDB\\Model\\BSONDocument)#20 (1) {\n[\"storage\":\"ArrayObject\":private]=>\n array(4) {\n [\"_id\"]=>\n object(MongoDB\\BSON\\ObjectId)#26 (1) {\n [\"oid\"]=>\n string(24) \"5eb3d668b31de5d588f42995\"\n }\n [\"borough\"]=>\n string(9) \"Manhattan\"\n [\"cuisine\"]=>\n string(7) \"Italian\"\n [\"name\"]=>\n string(8) \"Arturo'S\"\n }\n}\n```\n\n## Updating Documents with PHP\n\nUpdating documents involves using what we learned in the previous section for finding and passing the parameters needed to specify the changes we'd like to be reflected in the documents that match the specific criterion.\n\nThere are two specific commands in the PHP Driver vocabulary that will enable us to `update` documents.\n\n- `MongoDB\\Collection::updateOne` - Update, at most, one document that matches the filter criteria. If multiple documents match the filter criteria, only the first matching document will be updated.\n- `MongoDB\\Collection::updateMany` - Update all documents that match the filter criteria.\n\nThese two work very similarly, with the obvious exception around the number of documents impacted.\n\nLet's start with `MongoDB\\Collection::updateOne`. 
The following [code sample finds a single document based on a set of criteria we pass in a document and `$set`'s values in that single document.\n\n``` php\nload();\n\n$client = new MongoDB\\Client(\n 'mongodb+srv://'.$_ENV'MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/sample_restaurants'\n);\n\n$collection = $client->sample_restaurants->restaurants;\n\n$updateResult = $collection->updateOne(\n [ 'restaurant_id' => '40356151' ],\n [ '$set' => [ 'name' => 'Brunos on Astoria' ]]\n);\n\nprintf(\"Matched %d document(s)\\n\", $updateResult->getMatchedCount());\nprintf(\"Modified %d document(s)\\n\", $updateResult->getModifiedCount());\n```\n\nYou should see something similar to the following output:\n\n``` bash\nMatched 1 document(s)\nModified 1 document(s) \n```\n\nNow, let's explore updating multiple documents in a single command execution.\n\nThe following [code sample updates all of the documents with the borough of \"Queens\" by setting the active field to true:\n\n``` php\nload();\n\n$client = new MongoDB\\Client(\n 'mongodb+srv://'.$_ENV'MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/sample_restaurants'\n);\n\n$collection = $client->sample_restaurants->restaurants;\n\n$updateResult = $collection->updateMany(\n [ 'borough' => 'Queens' ],\n [ '$set' => [ 'active' => 'True' ]]\n);\n\nprintf(\"Matched %d document(s)\\n\", $updateResult->getMatchedCount());\nprintf(\"Modified %d document(s)\\n\", $updateResult->getModifiedCount());\n```\n\nYou should see something similar to the following:\n\n``` bash\nMatched 5656 document(s)\nModified 5656 document(s)\n```\n\n>When updating data in your MongoDB database, it's important to consider `write concern`. Write concern describes the level of acknowledgment requested from MongoDB for write operations to a standalone `mongod`, replica sets, or sharded clusters.\n\nTo understand the current value of write concern, try the following example code:\n\n``` php\n$collection = (new MongoDB\\Client)->selectCollection('test', 'users', [\n 'writeConcern' => new MongoDB\\Driver\\WriteConcern(1, 0, true),\n]);\n\nvar_dump($collection->getWriteConcern());\n```\n\nSee for more information on write concern.\n\n## Deleting Documents with PHP\n\nJust as with updating and finding documents, you have the ability to delete a single document or multiple documents from your database.\n\n- `MongoDB\\Collection::deleteOne` - Deletes, at most, one document that matches the filter criteria. If multiple documents match the filter criteria, only the first matching document will be deleted.\n- `MongoDB\\Collection::deleteMany` - Deletes all documents that match the filter criteria.\n\nLet's start with deleting a single document.\n\nThe following [code sample deletes one document in the users collection that has \"ny\" as the value for the state field:\n\n``` php\nload();\n\n$client = new MongoDB\\Client(\n 'mongodb+srv://'.$_ENV'MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/sample_restaurants'\n);\n\n$collection = $client->sample_restaurants->restaurants;\n\n$deleteResult = $collection->deleteOne(['cuisine' => 'Hamburgers']); \n\nprintf(\"Deleted %d document(s)\\n\", $deleteResult->getDeletedCount());\n```\n\nYou should see something similar to the following output:\n\n``` bash\nDeleted 1 document(s)\n```\n\nYou will notice, if you examine the `sample_restaurants` database, that there are many documents matching the criteria `{ \"cuisine\": \"Hamburgers\" }`. 
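You can confirm that count from PHP as well. Here's a minimal sketch that reuses the same `.env`-based connection as the other samples and simply counts the matching documents:

```php
<?php
require_once __DIR__ . '/vendor/autoload.php';

$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();

$client = new MongoDB\Client(
    'mongodb+srv://'.$_ENV['MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/sample_restaurants'
);

$collection = $client->sample_restaurants->restaurants;

// Count how many documents still match the filter used by deleteOne() above
$count = $collection->countDocuments(['cuisine' => 'Hamburgers']);

printf("Documents matching cuisine=Hamburgers: %d\n", $count);
```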
However, only one document was deleted.\n\nDeleting multiple documents is possible using `MongoDB\\Collection::deleteMany`. The following code sample shows how to use `deleteMany`.\n\n``` php\nload();\n\n $client = new MongoDB\\Client(\n 'mongodb+srv://'.$_ENV['MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$_ENV['ATLAS_CLUSTER_SRV'].'/sample_restaurants'\n );\n\n $collection = $client->sample_restaurants->restaurants;\n $deleteResult = $collection->deleteMany(['cuisine' => 'Hamburgers']);\n\n printf(\"Deleted %d document(s)\\n\", $deleteResult->getDeletedCount()); \n```\n\nYou should see something similar to the following output:\n\n> Deleted 432 document(s)\n\n>If you run this multiple times, your output will obviously differ. This is because you may have removed or deleted documents from prior executions. If, for some reason, you want to restore your sample data, visit: for instructions on how to do this.\n\n## Summary\n\nThe basics of any language are typically illuminated through the process of creating, reading, updating, and deleting data. In this article, we walked through the basics of CRUD with PHP and MongoDB. In the next article in the series, will put these principles into practice with a real-world application.\n\nCreating or inserting documents is accomplished through the use of:\n\n- [MongoDBCollection::insertOne\n- MongoDBCollection::insertMany\n\nReading or finding documents is accomplished using:\n\n- MongoDBCollection::find\n\nUpdating documents is accomplished through the use of:\n\n- MongoDBCollection::updateOne\n- MongoDBCollection::updateMany\n\nDeleting or removing documents is accomplished using:\n\n- MongoDBCollection::deleteOne\n- MongoDBCollection::deleteMany\n\nPlease be sure to visit, star, fork, and clone the companion repository for this article.\n\nQuestions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.\n\n## References\n\n- MongoDB PHP Quickstart Source Code Repository\n- MongoDB PHP Driver CRUD Documentation\n- MongoDB PHP Driver Documentation provides thorough documentation describing how to use PHP with our MongoDB cluster.\n- MongoDB Query Document documentation details the full power available for querying MongoDB collections.", "format": "md", "metadata": {"tags": ["PHP", "MongoDB"], "pageDescription": "Getting Started with MongoDB and PHP - Part 2 - CRUD", "contentType": "Quickstart"}, "title": "Creating, Reading, Updating, and Deleting MongoDB Documents with PHP", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/schema-suggestions-julia-oppenheim", "action": "created", "body": "# Schema Suggestions with Julia Oppenheim - Podcast Episode 59\n\nToday, we are joined by Julia Oppenheim, Associate Product Manager at MongoDB. Julia chats with us and shares details of a set of features within MongoDB Atlas designed to help developers improve the design of their schemas to avoid common anti-patterns. \n\nThe notion that MongoDB is schema-less is a bit of a misnomer. Traditional relational databases use a separate entity in the database that defines the schema - the structure of the tables/rows/columns and acceptable values that get stored in the database. MongoDB takes a slightly different approach. The schema does exist in MongoDB, but to see what that schema is - you typically look at the documents previously written to the database. 
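A quick way to do that is from the mongosh shell: pull back a document and see what fields and types earlier writes actually produced. The collection name below is just an example:

```js
// Peek at one document to see the fields previous writes produced
db.users.findOne()

// Check which BSON types a given field holds across a random sample
db.users.aggregate([
  { $sample: { size: 100 } },
  { $group: { _id: { $type: "$email" }, count: { $sum: 1 } } }
])
```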
With this in mind, you, as a developer have the power to make decisions about the structure of the documents you store in your database... and as they say with great power, comes great responsibility. \n\nMongoDB has created a set of features built into Atlas that enable you to see when your assumptions about the structure of your documents turn out to be less than optimal. These features come under the umbrella of Schema Suggestions and on today's podcast episode, Julia Oppenheim joins Nic Raboy and I to talk about how Schema Suggestions can help you maintain and improve the performance of your applications by exposing anti-patterns in your schema.\n\n**Julia: [00:00:00]** My name is Julia Oppenheim and welcome to the Mongo DB podcast. Stay tuned to learn more about how to improve your schema and alleviate schema anti-patterns with schema suggestions and Mongo DB Atlas.\n\n**Michael: [00:00:12]** And today we're talking with Julia Oppenheim. Welcome to the show, Julia, it's great to have you on the podcast. Thanks. It's great to be here. So why don't you introduce yourself to the audience? Let folks know who you are and what you do at Mongo DB. \n\n**Julia: [00:00:26]** Yeah. Sure. So hi, I'm Julia. I actually joined Mongo DB about nine months ago as a product manager on Rez's team.\nSo yeah, I actually did know that you had spoken to him before. And if you listened to those episodes Rez probably touched on what our team does, which is. Ensure that the customer's journey or the user's journey with Mongo DB runs smoothly and that their deployments are performance. Making sure that, you know, developers can focus on what's truly exciting and interesting to them like pushing out new features and they don't have the stress of is my deployment is my database.\n You know, going to have any problems. We try to make that process as smooth as possible. \n \n**Michael: [00:01:10]** Fantastic. And today we're going to be focusing on schemas, right. Schema suggestions, and eliminating schema. Anti-patterns so hold the phone, Mike. Yeah, yeah, go ahead, Nick. \n\n**Nic: [00:01:22]** I thought I thought I'm going to be people call this the schema-less database.\n\n**Michael: [00:01:28]** Yeah, I guess that is, that is true. With the document database, it's not necessary to plan your schema ahead of time. So maybe Julia, do you want to shed some light on why we need schema suggestions in the Mongo DB \n\n**Julia: [00:01:41]** Yeah, no, I think that's a really good point and definitely a common misconception.\nSo I think one of the draws of Mongo DB is that schema can be pretty flexible. And it's not rigid in the sense that other more relational databases you know, they have a strict set of rules and how you can access the data. I'm going to be as definitely more lenient in that regard, but at the end of the day, you still.\n\nNeed certain fields value types and things like that dependent on the needs of your application. So one of the first things that any developer will do is kind of map out what their use cases for their applications are and figure out how they should store the data to make sure that those use cases can be carried out.\n\n I think that you can kind of get a little stuck with schema in MongoDB, is that. The needs of your application changed throughout the development cycle. So a schema that may work on day one when you're you know, user base is relatively small, your feature set is pretty limited. May not work. 
As your app, you get Cisco, you may need to refactor a little bit, and it may not always be immediately obvious how to do that.\n \nAnd, you know, we don't expect users to be experts in MongoDB and schema design with Mongo DB which is why I think. Highlighting schema anti-patterns is very useful. \n\n**Michael: [00:03:03]** Fantastic. So do you want to talk a little bit about how the product works? How schema suggestions work in Mongo DB. Atlas? \n\n**Julia: [00:03:12]** Yeah. So there are two places where you as a user can see schema anti-patterns they're in.\nThe performance advisor tab a, which Rez definitely touched on if he talked about autopilot and index suggestions, and you can also see schema anti-patterns in the in our data Explorer. So the collections tab, and we can talk about you know, in a little bit why we have them in two separate places, but in general what you, as the user will see is the same.\n\nSo we. Flag schema anti-patterns we give kind of like a brief explanation as to why we flagged them. We'll show, which collections are impacted by you know, this anti-pattern that we've identified and we'll also kind of give a call to action on how to address them. So we actually have custom docs on the six schema anti-patterns that we.\n\nLook for at this stage of the products, you know, life cycle, and we give kind of steps on how to solve it, what our recommendation would be, and also kind of explain, you know, why it's a problem and how it can really you know, come back to hurt you later on. \n\n**Nic: [00:04:29]** So you've thrown out the keyword schema.\n\nAnti-patterns a few times now, do you want to go over what you said? There are six of them, right? We want to go what each of those six are. \n\n**Julia: [00:04:39]** Yeah, sure. So there are, like you said, there are six. So I think that we look for use of Our dollar lookup operations. So this means that where it's very, very similar to joining in the relational world where you would be accessing data across different collections.\nAnd this is not always ideal because you're reading and performing, you know, different logic on more than one collection. So in general, it just takes a lot of time a little more resource intensive and. You know, when we see this, we're kind of thinking, oh, this person might come from a more relational background.\n\n That's not to say that this is always a problem. It could make sense to do this in certain cases. Which is where things get a little dicier, but that's the first one that we look for. The, another one is looking for unbounded arrays. So if you just keep. Embedding information and have no limit on that.\n \nThe size of your documents can get really, really big. This, we actually have a limit in place and this is one of our third anti-patterns where if you keep you'll hit our 16 megabyte per document limit which kind of means that. Your hottest documents are the working set, takes up too much space on RAM.\n\nSo now we're going to disk to fulfill your request, which is, you know, generally again, we'll take some time it's more resource you know, consumptive, things like that. \n\n**Nic: [00:06:15]** This might be out of scope, but how do you prevent an unbounded array in Mongo DB? Like. I get the concept, but I've never, I've never heard of it done in a database before, so this would be new to me.\n\n**Julia: [00:06:27]** So this is going to be a little contradictory to the lookup anti-pattern that I just mentioned, and I think that we can talk about this more. 
Cause I know that when I was first learning about anti-patterns and they did seem very contradictory to me and I got of stressed. So we'll talk about that in a little bit, but the way you would avoid.\n\nThe unbounded array would probably be to reference other documents. So that's essentially doing the look of that. I just said was an anti-pattern, but one way to think of it is say you have, okay, so you have a developer collection and you have different information about the developer, like their team at Mongo DB.\n\nYou know how long they've been here and maybe you have all of their get commits and like they get commit. It could be an embedded document. It could have like the date of the commit and what project it was on and things like that. A developer can have, you know, infinitely many commits, like maybe they just commit a lot and there was no bound on that.\n\nSo you know, it's a one to many relationship and. If that were in an array, I think we all see that that would grow probably would hit that 16 megabyte limit. What we would instead maybe want to consider doing is creating like a commit collection where we would then tie it back to the developer who made the commit and reference it from the original developer document.\n\n I don't know if that analogy was helpful, but that's, that's kind of how you would handle that. \n \n**Michael: [00:08:04]** And I think the the key thing here is, you know, you get to make these decisions about how you design your schema. You're not forced to normalize data in one way across the entire database, as you are in the relational world.\n\n And so you're going to make a decision about the number of elements in a potential array versus the cost of storing that data in separate collections and doing a lookup. And. Obviously, you know, you may start, you may embark on your journey to develop an application, thinking that your arrays are going to be within scope within a relative, relatively low number.\n \nAnd maybe the use pattern changes or the number of users changes the number of developers using your application changes. And at some point you may need to change that. So let me ask the question about the. The user case when I'm interacting with Mongo DB Atlas, and my use case does change. My user pattern does change.\n\nHow will that appear? How will it surface in the product that now I've breached the limits of what is an acceptable pattern. And now it's, I'm in the scope of an anti-pattern. \n\n**Julia: [00:09:16]** Right. So when that happens, the best place for it to be flagged is our performance advisor tab. So we'll have, we have a little card that says improve your schema.\nAnd if we have anti-patterns that we flagged we'll show the number of suggestions there. You can click it to learn more about them. And what we do there is it's based on. A sample of your data. So we kind of try to catch these in a reactive sense. We'll see that something is going on and we'll give you a suggestion to improve it.\n\nSo to do that, we like analyze your data. We try to determine which collections matter, which collections you're really using. So based on the number of reads and writes to the collections, we'll kind of identify your top 20 collections and then. We'll see what's going on. 
We'll look for, you know, the edgy pattern, some of which I've mentioned and kind of just collect, this is all going on behind the scenes, by the way, we'll kind of collect you know, distributions of, you know, average data size, our look ups happening you know, just looking for some of those anti-patterns that I've mentioned, and then we'll determine which ones.\nYou can actually fix and which ones are most impactful, which ones are actually a problem. And then we surface that to the user. \n\n**Nic: [00:10:35]** So is it monitoring what type of queries you're doing or is it just looking at, based on how your documents are structured when it's suggesting a schema? \n\n**Julia: [00:10:46]** Yeah. It's mainly looking for how your documents are structured.\n The dollar lookup is a little tricky because it is, you know, an operation that's kind of happening under the hood, but it's based on the fact that you're referencing things within the document.\n \n**Michael: [00:11:00]** Okay. So we talked about the unbounded arrays. We talked about three anti-patterns so far. Do you want to continue on the journey of anti-patterns? \n\n**Julia: [00:11:10]** Okay. Yeah. Yeah, no, definitely. So one that we also flag is at the index level, and this is something that is also available in porphyry performance advisor in general.\n\nSo if you have unnecessary indexes on the collection, that's something that is problematic because an index just existing is you know, it consumes resources, it takes up space and. It can slow down, writes, even though it does slow down speed up reads. So that's like for indexes in general, but then there's the case where the index isn't actually doing anything and it may be kind of stale.\n\nMaybe your query patterns have changed and things like that. So if you have excessive indexes on your collection, we'll flag that, but I will say in performance advisor we do now have index removal recommendations that. We'll say this is the actual index that you should remove. So a little more granular which is nice.\n\nThen another one we have is reducing the number of collections you have in general. So at a certain point, collections again, consume a lot of resources. You have indexes on the collections. You have a lot of documents. Maybe you're referencing things that could be embedded. So that's just kind of another sign that you might want to refactor your data landscape within Mongo DB.\n\n**Michael: [00:12:36]** Okay. So we've talked about a number of, into patterns so far, we've talked about a use of dollar lookup, storing unbounded arrays in your documents. We've talked about having too many indexes. We've talked about having a large document sizes in your collections. We've talked about too many collections.\n\nAnd then I guess the last one we need to cover off is around case insensitive rejects squares. You want to talk a little bit about that? \n\n**Julia: [00:13:03]** Yeah. So. Like with the other anti-patterns we'll kind of look to see when you have queries that are using case insensitive red jacks and recommend that you have the appropriate index.\n\nSo it could be case insensitive. Index, it could be a search index, things like that. That is, you know, the last anti-pattern we flag. \n\n**Michael: [00:13:25]** Okay. Okay, great. And obviously, you know, any kind of operation against the database is going to require resource. 
And the whole idea here is there's a balancing act between leveraging the resource and and operating efficiently.\n\n So, so these are, this is a product feature that's available in Mongo, DB, Atlas. All of these things are available today. Correct? Yeah. And you would get to, to see these suggestions in the performance advisor tab, right? \n \n**Julia: [00:13:55]** Yes. Performance advisor. And also as I mentioned, our data Explorer, which is our collections.\nYeah. Right. \n\n**Michael: [00:14:02]** Yeah. Fantastic. The whole entire goal of. Automating database management is to make it easier for the developer to interact with the database. What else do we want to tell the audience about a schema suggestions or anything in this product space? So \nJulia: [00:14:19] I think definitely want to highlight what you just mentioned, that, you know, your schema changes the anti-patterns that could be, you know, more damaging to your performance.\n\nChange over time and it really does depend on your workload and how you're accessing the data. I know that, you know, some of this FEMA anti-patterns do conflict with each other. We do say that some cases you S you should reduce references and some cases you shouldn't, it really depends on, you know, is the data that you want to access together, actually being stored together.\n\nAnd does that. You know, it makes sense. So they won't all always apply. It will be kind of situational and that's, you know why we're here to help. \n\n**Nic: [00:15:01]** So when people are using Mongo DB to create documents in their collections, I imagine that they have some pretty intense looking document schemas, like I'm talking objects that are nested eight levels deep.\nWill the schema suggestions help in those scenarios to try to improve how people have created their data? \n\n**Julia: [00:15:23]** Schema suggestions are still definitely in their early days. I think we released this product almost a year ago. We'll definitely capture any of the six anti-patterns that we just mentioned if they're happening on a high level.\n\nSo if you're nesting a lot of stuff within the document, that would probably increase. You know, document size and we would flag it. We might not be able to get that targeted to say, this is why your document sizes this large. But I think that that's a really good call-out and it's safe to say, we know that we are not capturing every scenario that a user could encounter with their schema.\n\n You can truly do whatever you want you know, designing your Mongo DB documents. Were actively researching, which schema suggestions it makes sense to look for in our next iteration of this product. So if you have feedback, you know, always don't hesitate to reach out. We'd love to hear your thoughts.\n \n So yeah, there are definitely some limitations we're working on it. We're looking into it. \n\n**Michael: [00:16:27]** Okay. Let's say I'm a developer and I have a number of collections that maybe they're not accessed as frequently, but I am concerned about the patterns in them. How can I force the performance advisor to look at a specific collection?\n\n**Julia: [00:16:43]** Yeah, that's a really good question. So as I mentioned before, we do surface the anti-patterns in two places. One is performance advisor and that's for the more reactive use case where doing a sweep, seeing what's going on and those 20 most active collections and kind of. 
Doing some logic to determine where the most impactful changes could be made.\n\nAnd then there's also the collections tab in Atlas. And this is where you can go say you're actively developing or adding documents to collection. They aren't heavily used yet, but you want to make sure you're on the right track. If you view the schema, anti-patterns there, it basically runs our algorithm for you.\n\nAnd we'll. Search a sample of collections for that, or sorry, a sample of documents for that collection and surface the suggestions there. So it's a little more targeted. And I would say very useful for when you're actively developing something or have a small workload. \n\n**Michael: [00:17:39]** We've got a huge conference coming up in July.\nIt's Mongo, db.live. My first question is, are you going to be there? Are you perhaps presenting a talk on on this subject at.live? \n\n**Julia: [00:17:50]** I am not presenting a talk on this subject at.live, but I will be there. I'm very, very excited for it. \n\n**Michael: [00:17:56]** Fantastic. Well, maybe we can get you to come to community day, which is the week after where we've got talks and sessions and games and all sorts of fun stuff for the community.\nMaybe we can get you to to talk a little bit about this at the at the event that would be. That would be fantastic. I'm going to be.live is our biggest user conference of the year. Joined us July 13th and 14th. It's free. It's all online. There's a huge lineup of cutting edge keynotes and breakout sessions.\n\nAll sorts of ask me anything, panels and brain breaking activities so much more. You can get more information@mongodb.com slash live. All right, Nick, anything else to add before we begin to wrap? \n \n**Nic: [00:18:36]** Nothing for me. I mean, Julia, is there any other last minute words of wisdom or anything that you want to tell the audience about schemas suggestions with the Mongo DB or anything that'll help them?\nYeah, \n\n**Julia: [00:18:47]** I don't think so. I think we covered a lot again. I would just emphasize you know, don't be overwhelmed. Scheme is very important for Mongo DB. And it is meant to be flexible. We're just here to help you. \n\n**Nic: [00:19:00]** I think that's the key word there. It's not a, it's not schema less. It's just flexible schema, right?\n\n**Julia: [00:19:05]** Yes, yes, yes. \n\n**Michael: [00:19:05]** Yes. Well, Julia, thank you so much. This has been a great conversation. \n\n**Julia: [00:19:09]** Awesome. Thanks for having me.\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Today, we are joined by Julia Oppenheim, Associate Product Manager at MongoDB. Julia chats with us and shares details of a set of features within MongoDB Atlas designed to help developers improve the design of their schemas to avoid common anti-patterns. ", "contentType": "Podcast"}, "title": "Schema Suggestions with Julia Oppenheim - Podcast Episode 59", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-using-realm-sync-in-unity", "action": "created", "body": "# Turning Your Local Game into an Online Experience with MongoDB Realm Sync\n\nPlaying a game locally can be fun at times. But there is nothing more exciting than playing with or against the whole world. 
Using Realm Sync you can easily synchronize data between multiple instances and turn your local game into an online experience.\n\nIn a previous tutorial we showed how to use Realm locally to persist your game's data. We will build on the local Realm to show how to easily transition to Realm Sync.\n\nIf you have not used local Realms before we recommend working through the previous tutorial first so you can easily follow along here when we build on them.\n\nYou can find the local Realm example that this tutorial is based on in our example repository at Github and use it to follow along.\n\nThe final of result of this tutorial can also be found in the examples reposity.\n\n## MongoDB Realm Sync and MongoDB Atlas\n\nThe local Realm database we have seen in the previous tutorial is one of three components we need to synchronize data between multiple instances of our game. The other two are MongoDB Atlas and MongoDB Realm Sync.\n\nWe will use Atlas as our backend and cloud-based database. Realm Sync on the other side enables sync between your local Realm database and Atlas, seamlessly stitching together the two components into an application layer for your game. To support these services, MongoDB Realm also provides components to fulfill several common application requirements from which we will be using the Realm Users and Authentication feature to register and login the user.\n\nThere are a couple of things we need to prepare in order to enable synchronisation in our app. You can find an overview on how to get started with MongoDB Realm Sync in the documentation. Here are the steps we need to take:\n\n- Create an Atlas account\n- Create a Realm App\n- Enable Sync\n- Enable Developer Mode\n- Enable email registration and choose `Automatically confirm users` under `User Confirmation Method`\n\n## Example\n\nWe will build on the local Realm example we created in the previous tutorial using the 3D chess game. To get you started easily you can find the final result in our examples reposity (branch: `local-realm`).\n\nThe local Realm is based on four building blocks:\n\n- `PieceEntity`\n- `Vector3Entity`\n- `PieceSpawner`\n- `GameState`\n\nThe `PieceEntity` along with the `Vector3Entity` represents our model which include the two properties that make up a chess piece: type and position.\n\n```cs\n...\n\npublic class PieceEntity : RealmObject\n{\n public PieceType PieceType\n {\n ...\n }\n\n public Vector3 Position\n {\n ...\n }\n ...\n}\n```\n\nIn the previous tutorial we have also added functionality to persist changes in position to the Realm and react to changes in the database that have to be reflected in the model. This was done by implementing `OnPropertyChanged` in the `Piece` and `PieceEntity` respectively.\n\nThe `PieceSpawner` is responsible for spawning new `Piece` objects when the game starts via `public void CreateNewBoard(Realm realm)`. Here we can see some of the important functions that we need when working with Realm:\n\n- `Write`: Starts a new write transaction which is necessary to change the state of the database.\n- `Add`: Adds a new `RealmObject` to the database that has not been there before.\n- `RemoveAll`: Removes all objects of a specified type from the database.\n\nAll of this comes together in the central part of the game that manages the flow of it: `GameState`. 
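Before moving on to `GameState`, here is a minimal sketch of how those three calls typically combine inside `CreateNewBoard`. This is not the repository's exact code; it assumes the `PieceEntity(PieceType, Vector3)` constructor from the previous tutorial, and the piece values are placeholders:

```cs
public void CreateNewBoard(Realm realm)
{
    realm.Write(() =>
    {
        // Remove any pieces left over from a previous game.
        realm.RemoveAll<PieceEntity>();

        // Add the new pieces; the real spawner creates the full chess set.
        realm.Add(new PieceEntity(PieceType.WhiteKing, new Vector3(4, 0, 0)));
        realm.Add(new PieceEntity(PieceType.BlackKing, new Vector3(4, 0, 7)));
    });
}
```

Because all of the calls happen inside `Write`, clearing the old board and spawning the new pieces are committed as a single transaction.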
The `GameState` opens the Realm using `Realm.GetInstance()` in `Awake` and offers an option to move pieces via `public void MovePiece(Vector3 oldPosition, Vector3 newPosition)`, which also checks if a `Piece` already exists at the target location. Furthermore, we subscribe to notifications to set up the initial board. One of the things we will be doing in this tutorial is to expand on this subscription mechanic to also react to changes that come in through Realm Sync.

## Extending the model

The first thing we need to change to get the local Realm example ready for Sync is to add a primary key to the `PieceEntity`. This is a mandatory requirement for Sync to make sure objects can be distinguished from each other. We will be using the field `Id` here. Note that you can add a `MapTo` attribute in case the name of the field in the `RealmObject` differs from the name set in Atlas. By default, the primary key is named `_id` in Atlas, which would conflict with the .NET coding guidelines. By adding `[MapTo("_id")]`, we can address this fact.

```cs
using MongoDB.Bson;
```

```cs
[PrimaryKey]
[MapTo("_id")]
public ObjectId Id { get; set; } = ObjectId.GenerateNewId();
```

## Who am I playing with?

The local Realm tutorial showed you how to create a persisted game locally. While you could play with someone else using the same game client, there was only ever one game running at a time, since every game is accessing the same table in the database and therefore the same objects.

This would still be the case when using Realm Sync if we did not separate those games. Everyone accessing the game, from wherever they are, would see the same state. We need a way to create multiple games and identify which one we are playing. Realm Sync offers a feature that lets us achieve exactly this: partitions.

> A partition represents a subset of the documents in a synced cluster that are related in some way and have the same read/write permissions for a given user. Realm directly maps partitions to individual synced .realm files so each object in a synced realm has a corresponding document in the partition.

What does this mean for our game? If we use one partition per match, we can make sure that only players using the same partition will actually play the same game. Furthermore, we can start as many games as we want. Using the same partition simply means using the same `partition key` when opening a synced Realm. Partition keys are restricted to the following types: `String`, `ObjectID`, `Guid`, `Long`.

For our game, we will use a string that we ask the user for when they start the game. We will do this by adding a new scene to the game which also acts as a welcome and loading scene.

Go to `Assets -> Create -> Scene` to create a new scene and name it `WelcomeScene`. Double-click it to activate it.

Using `GameObject -> UI`, we then add `Text`, `Input Field`, and `Button` to the new scene. The input will be our partition key. To make it easier to understand for the player, we will call its placeholder `game id`. The `Text` object can be set to `Your Game ID:` and the button's text to `Start Game`. Make sure to reposition them to your liking.

## Getting everything in Sync

Add a script to the button called `StartGameButton` by clicking `Add Component` in the Inspector with the start button selected.
Then select `script` and type in its name.\n\n```cs\nusing Realms;\nusing Realms.Sync;\nusing System;\nusing System.IO;\nusing System.Threading.Tasks;\nusing UnityEngine;\nusing UnityEngine.SceneManagement;\nusing UnityEngine.UI;\n\npublic class StartGameButton : MonoBehaviour\n{\n SerializeField] private GameObject loadingIndicator = default; // 1\n [SerializeField] private InputField gameIdInputField = default; // 2\n\n public async void OnStartButtonClicked() // 3\n {\n loadingIndicator.SetActive(true); // 4\n\n // 5\n var gameId = gameIdInputField.text;\n PlayerPrefs.SetString(Constants.PlayerPrefsKeys.GameId, gameId);\n\n await CreateRealmAsync(gameId); // 5\n\n SceneManager.LoadScene(Constants.SceneNames.Main); // 13\n }\n \n private async Task CreateRealmAsync(string gameId)\n {\n var app = App.Create(Constants.Realm.AppId); // 6\n var user = app.CurrentUser; // 7\n\n if (user == null) // 8\n {\n // This example focuses on an introduction to Sync.\n // We will keep the registration simple for now by just creating a random email and password.\n // We'll also not create a separate registration dialog here and instead just register a new user every time.\n // In a different example we will focus on authentication methods, login / registration dialogs, etc.\n var email = Guid.NewGuid().ToString();\n var password = Guid.NewGuid().ToString();\n await app.EmailPasswordAuth.RegisterUserAsync(email, password); // 9\n user = await app.LogInAsync(Credentials.EmailPassword(email, password)); // 10\n }\n\n RealmConfiguration.DefaultConfiguration = new SyncConfiguration(gameId, user);\n\n if (!File.Exists(RealmConfiguration.DefaultConfiguration.DatabasePath)) // 11\n {\n // If this is the first time we start the game, we need to create a new Realm and sync it.\n // This is done by `GetInstanceAsync`. There is nothing further we need to do here.\n // The Realm is then used by `GameState` in it's `Awake` method.\n using var realm = await Realm.GetInstanceAsync(); // 12\n }\n }\n}\n```\n\nThe `StartGameButton` knows two other game objects: the `gameIdInputField` (1) that we created above and a `loadingIndicator` (2) that we will be creating in a moment. If offers one action that will be executed when the button is clicked: `OnStartButtonClicked` (3).\n\nFirst, we want to show a loading indicator (4) in case loading the game takes a moment. Next we grab the `gameId` from the `InputField` and save it using the [`PlayerPrefs`. Saving data using the `PlayerPrefs` is acceptable if it is user input that does not need to be saved safely and only has a simple structure since `PlayerPrefs` can only take a limited set of data types: `string`, `float`, `int`.\n\nNext, we need to create a Realm (5). Note that this is done asynchrounously using `await`. There are a couple of components necessary for opening a synced Realm:\n\n- `app`: An instance of `App` (6) represents your Realm App that you created in Atlas. Therefore we need to pass the `app id` in here.\n- `user`: If a user has been logged in before, we can access them by using `app.CurrentUser` (7). In case there has not been a successful login before this variable will be null (8) and we need to register a new user.\n\nThe actual values for `email` and `password` are not really relevant for this example. In your game you would use more `Input Field` objects to ask the user for this data. Here we can just use `Guid` to generate random values. 
Using `EmailPasswordAuth.RegisterUserAsync` offered by the `App` class we can then register the user (9) and finally log them in (10) using these credentials. Note that we need to await this asynchrounous call again.\n\nWhen we are done with the login, all we need to do is to create a new `SyncConfiguration` with the `gameId` (which acts as our partition key) and the `user` and save it as the `RealmConfiguration.DefaultConfiguration`. This will make sure whenever we open a new Realm, we will be using this `user` and `partitionKey`.\n\nFinally we want to open the Realm and synchronize it to get it ready for the game. We can detect if this is the first start of the game simply by checking if a Realm file for the given coonfiguration already exists or not (11). If there is no such file we open a Realm using `Realm.GetInstanceAsync()` (12) which automatically uses the `DefaultConfiguration` that we set before.\n\nWhen this is done, we can load the main scene (13) using the `SceneManager`. Note that the name of the main scene was extracted into a file called `Constants` in which we also added the app id and the key we use to save the `game id` in the `PlayerPrefs`. You can either add another class in your IDE or in Unity (using `Assets -> Create -> C# Script`).\n\n```cs\nsealed class Constants\n{\n public sealed class Realm\n {\n public const string AppId = \"insert your Realm App ID here\";\n }\n\n public sealed class PlayerPrefsKeys\n {\n public const string GameId = \"GAME_ID_KEY\";\n }\n\n public sealed class SceneNames\n {\n public const string Main = \"MainScene\";\n }\n}\n```\n\nOne more thing we need to do is adding the main scene in the build settings, otherwise the `SceneManager` will not be able to find it. Go to `File -> Build Settings ...` and click `Add Open Scenes` while the `MainScene` is open.\n\nWith these adjustments we are ready to synchronize data. Let's add the loading indicator to improve the user experience before we start and test our game.\n\n## Loading Indicator\n\nAs mentioned before we want to add a loading indicator while the game is starting up. Don't worry, we will keep it simple since it is not the focus of this tutorial. 
We will just be using a simple `Text` and an `Image` which can both be found in the same `UI` sub menu we used above.\n\nThe make sure things are a bit more organised, embed both of them into another `GameObject` using `GameObject -> Create Empty`.\n\nYou can arrange and style the UI elements to your liking and when you're done just add a script to the `LoadingIndicatorImage`:\n\nThe script itself should look like this:\n\n```cs\nusing UnityEngine;\n\npublic class LoadingIndicator : MonoBehaviour\n{\n // 1\n SerializeField] private float maxLeft = -150;\n [SerializeField] private float maxRight = 150;\n [SerializeField] private float speed = 100;\n\n // 2\n private enum MovementDirection { None, Left, Right }\n private MovementDirection movementDirection = MovementDirection.Left;\n\n private void Update()\n {\n switch (movementDirection) // 3\n {\n case MovementDirection.None:\n break;\n case MovementDirection.Left:\n transform.Translate(speed * Time.deltaTime * Vector3.left);\n if (transform.localPosition.x <= maxLeft) // 4\n {\n transform.localPosition = new Vector3(maxLeft, transform.localPosition.y, transform.localPosition.z); // 5\n movementDirection = MovementDirection.Right; // 6\n }\n break;\n case MovementDirection.Right:\n transform.Translate(speed * Time.deltaTime * Vector3.right);\n if (transform.localPosition.x >= maxRight) // 4\n {\n transform.localPosition = new Vector3(maxRight, transform.localPosition.y, transform.localPosition.z); // 5\n movementDirection = MovementDirection.Left; // 6\n }\n break;\n }\n }\n}\n```\n\nThe loading indicator that we will be using for this example is just a simple square moving sideways to indicate progress. There are two fields (1) we are going to expose to the Unity Editor by using `SerializeField` so that you can adjust these values while seing the indicator move. `maxMovement` will tell the indicator how far to move to the left and right from the original position. `speed` - as the name indicates - will determine how fast the indicator moves. The initial movement direction (2) is set to left, with `Vector3.Left` and `Vector3.Right` being the options given here.\n\nThe movement itself will be calculated in `Update()` which is run every frame. We basically just want to do one of two things:\n\n- Move the loading indicator to the left until it reaches the left boundary, then swap the movement direction.\n- Move the loading indicator to the right until it reaches the right boundary, then swap the movement direction.\n\nUsing the [`transform` component of the `GameObject` we can move it by calling `Translate`. The movement consists of the direction (`Vector3.left` or `Vector3.right`), the speed (set via the Unity Editor) and `Time.deltaTime` which represents the time since the last frame. The latter makes sure we see a smooth movement no matter what the frame time is. After moving the square we check (3) if we have reached the boundary and if so, set the position to this boundary (4). This is just to make sure the indicator does not visibly slip out of bounds in case we see a low frame rate. Finally the position is swapped (5).\n\nThe loading indicator will only be shown when the start button is clicked. The script above takes care of showing it. We need to disable it so that it does not show up before. 
This can be done by clicking the checkbox next to the name of the `LoadingIndicator` parent object in the Inspector.\n\n## Connecting UI and code\n\nThe scripts we have written above are finished but still need to be connected to the UI so that it can act on it.\n\nFirst, let's assign the action to the button. With the `StartGameButton` selected in the `Hierarchy` open the `Inspector` and scroll down to the `On Click ()` area. Click the plus icon in the lower right to add a new on click action.\n\nNext, drag and drop the `StartGameButton` from the `Hierarchy` onto the new action. This tells Unity which `GameObject` to use to look for actions that can be executed (which are functions that we implement like `OnStartButtonClicked()`).\n\nFinally, we can choose the action that should be assigned to the `On Click ()` event by opening the drop down. Choose the `StartGameButton` and then `OnStartButtonClicked ()`.\n\nWe also need to connect the input field and the loading indicator to the `StartGameButton` script so that it can access those. This is done via drag&drop again as before.\n\n## Let's play!\n\nNow that the loading indicator is added the game is finished and we can start and run it. Go ahead and try it!\n\nYou will notice the experience when using one local Unity instance with Sync is the same as it was in the local Realm version. To actually test multiple game instances you can open the project on another computer. An easier way to test multiple Unity instances is ParallelSync. After following the installation instruction you will find a new menu item `ParallelSync` which offers a `Clones Manager`.\n\nWithin the `Clones Manager` you add and open a new clone by clicking `Add new clone` and `Open in New Editor`.\n\nUsing both instances you can then test the game and Realm Sync.\n\nRemember that you need to use the same `game id` / `partition key` to join the same game with both instances.\n\nHave fun!\n\n## Recap and Conclusion\n\nIn this tutorial we have learned how to turn a game with a local Realm into a multiplayer experience using MongoDB Realm Sync. Let's summarise what needed to be done:\n\n- Create an Atlas account and a Realm App therein\n- Enable Sync, an authentication method and development mode\n- Make sure every `RealmObject` has an `_id` field\n- Choose a partition strategy (in this case: use the `partition key` to identify the match)\n- Open a Realm using the `SyncConfiguration` (which incorporates the `App` and `User`)\n\nThe code for all of this can be found in our example repository.\n\nIf you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB and Realm.", "format": "md", "metadata": {"tags": ["Realm", "Mobile"], "pageDescription": "This article shows how to migrate from using a local Realm to MongoDB Realm Sync. We will cover everything you need to know to transform your game into a multiplayer experience.", "contentType": "Tutorial"}, "title": "Turning Your Local Game into an Online Experience with MongoDB Realm Sync", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/kotlin/realm-startactivityforresult-registerforactivityresult-deprecated-android-kotlin", "action": "created", "body": "# StartActivityForResult is Deprecated!\n\n## Introduction\n\nAndroid has been on the edge of evolution for a while recently, with updates to `androidx.activity:activity-ktx` to `1.2.0`. 
It has deprecated `startActivityForResult` in favour of `registerForActivityResult`.

`startActivityForResult` was one of the first fundamentals any Android developer learned, and the backbone of Android's way of communicating between two components. Its API design was simple enough to get started with quickly, but it had its cons: it's hard to find the caller in real-world applications (except with cmd+F in the project 😂), getting results in a fragment is awkward, results are missed if the component is recreated, the same request code can cause conflicts, and so on.

Let's try to understand how to use the new API with a few examples.

## Example 1: Activity A calls Activity B for the result

Old School:

```kotlin
// Caller
val intent = Intent(context, Activity1::class.java)
startActivityForResult(intent, REQUEST_CODE)

// Receiver
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (resultCode == Activity.RESULT_OK && requestCode == REQUEST_CODE) {
        val value = data?.getStringExtra("input")
    }
}
```

New Way:

```kotlin
// Caller
val intent = Intent(context, Activity1::class.java)
getResult.launch(intent)

// Receiver
private val getResult =
    registerForActivityResult(
        ActivityResultContracts.StartActivityForResult()
    ) {
        if (it.resultCode == Activity.RESULT_OK) {
            val value = it.data?.getStringExtra("input")
        }
    }
```

As you will have noticed, `registerForActivityResult` takes two parameters. The first defines the type of action/interaction needed (`ActivityResultContracts`), and the second is a callback function where we receive the result.

Nothing much has changed, right? Let's check another example.

## Example 2: Start an external component, like the camera, to get an image

```kotlin
// Caller
getPreviewImage.launch(null)

// Receiver
private val getPreviewImage =
    registerForActivityResult(ActivityResultContracts.TakePicturePreview()) { bitmap ->
        // we get the bitmap as the result directly
    }
```

The above snippet is the complete code for getting a preview image from the camera. No need for permission request code, as this is taken care of automatically for us!

Another benefit of using the new API is that it forces developers to use the right contract. For example, with `ActivityResultContracts.TakePicture()`, which returns the full image, you need to pass a `URI` as a parameter to `launch`, which reduces the development time and chance of errors.

Other default contracts available can be found here.

---

## Example 3: Fragment A calls Activity B for the result

This has been another issue with the old system, with no clean implementation available, but the new API works consistently across activities and fragments, as the sketch below shows.
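As a minimal sketch (reusing the hypothetical `Activity1` class and `"input"` extra from example 1), the fragment version might look like this:

```kotlin
import android.app.Activity
import android.content.Intent
import androidx.activity.result.contract.ActivityResultContracts
import androidx.fragment.app.Fragment

class MyFragment : Fragment() {

    // Register the launcher as a property so it exists before the fragment reaches STARTED.
    private val getResult =
        registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->
            if (result.resultCode == Activity.RESULT_OK) {
                val value = result.data?.getStringExtra("input")
            }
        }

    private fun openActivityForResult() {
        // Launch exactly as in the activity example; only the Context lookup differs.
        getResult.launch(Intent(requireContext(), Activity1::class.java))
    }
}
```

The registration and the callback are identical to the activity version; the only difference is that the fragment obtains its `Context` via `requireContext()`.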
Therefore, we refer and add the snippet from example 1 to our fragments.\n\n---\n\n## Example 4: Receive the result in a non-Android class\n\nOld Way: \ud83d\ude04\n\nWith the new API, this is possible using `ActivityResultRegistry` directly.\n\n```kotlin\nclass MyLifecycleObserver(private val registry: ActivityResultRegistry) : DefaultLifecycleObserver {\n\n lateinit var getContent: ActivityResultLauncher\n\n override fun onCreate(owner: LifecycleOwner) {\n getContent = registry.register(\"key\", owner, GetContent()) { uri ->\n // Handle the returned Uri\n }\n }\n\n fun selectImage() {\n getContent.launch(\"image/*\")\n }\n}\n\nclass MyFragment : Fragment() {\n lateinit var observer: MyLifecycleObserver\n\n override fun onCreate(savedInstanceState: Bundle?) {\n // ...\n\n observer = MyLifecycleObserver(requireActivity().activityResultRegistry)\n lifecycle.addObserver(observer)\n }\n\n override fun onViewCreated(view: View, savedInstanceState: Bundle?) {\n val selectButton = view.findViewById(R.id.select_button)\n\n selectButton.setOnClickListener {\n // Open the activity to select an image\n observer.selectImage()\n }\n }\n}\n```\n\n## Summary\n\nI have found the registerForActivityResult useful and clean. Some of the pros, in my opinion, are:\n\n1. Improve the code readability, no need to remember to jump to `onActivityResult()` after `startActivityForResult`.\n\n2. `ActivityResultLauncher` returned from `registerForActivityResult` used to launch components, clearly defining the input parameter for desired results.\n\n3. Removed the boilerplate code for requesting permission from the user. \n\nHope this was informative and enjoyed reading it.\n", "format": "md", "metadata": {"tags": ["Kotlin"], "pageDescription": "Learn the benefits and usage of registerForActivityResult for Android in Kotlin.", "contentType": "Article"}, "title": "StartActivityForResult is Deprecated!", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/enhancing-diabetes-data-visibility-with-tidepool-and-mongodb", "action": "created", "body": "# Making Diabetes Data More Accessible and Meaningful with Tidepool and MongoDB\n\nThe data behind diabetes management can be overwhelming \u2014 understanding it all is empowering. Tidepool turns diabetes data points into accessible, actionable, and meaningful insights using an open source tech stack that incorporates MongoDB. Tidepool is a nonprofit organization founded by people with diabetes, caregivers, and leading healthcare providers committed to helping all people with dependent diabetes safely achieve great outcomes through more accessible, actionable, and meaningful diabetes data.\n\nThey are committed to empowering the next generation of innovations in diabetes management. We harness the power of technology to provide intuitive software products that help people with diabetes.\n\nIn this episode of the MongoDB Podcast, Michael and Nic sit down with Tapani Otala, V.P. of Engineering at Tidepool, to talk about their platform, how it was built, and how it uses MongoDB to provide unparalleled flexibility and visibility into the critical data that patients use to manage their condition. \n\n:youtube]{vid=Ocf6ZJiq7ys}\n\n### MongoDB Podcast - Tidepool with Tapani Otala and Christopher Snyder\n### \n\n**Tapani: [00:00:00]** Hi, my name is Tapani Otala. I'm the VP of engineering at [Tidepool. 
We are a nonprofit organization whose mission is to make diabetes data more accessible, meaningful, and actionable. The software we develop is designed to integrate [00:01:00] data from various diabetes devices like insulin pumps, continuous glucose monitors, and blood glucose meters into a single intuitive interface that allows people with diabetes and their care team to make sense of that data.\nAnd we're using Mongo DB to power all this. Stay tuned for more.\n\n**Chris: [00:00:47]**\nMy name is Christopher Snyder. I've been living with type one diabetes since 2002. I'm also Tidepool's community and clinic success manager. Having this data available to me just gives me the opportunity to make sense of everything that's happening. Prior to using Tidepool, if I wanted to look at my data, I either had to write everything down and keep track of all those notes.\nOr I do use proprietary software for each of my devices and then potentially print things out and hold them up to the light to align events and data points and things like that. Because Tidepool brings everything together in one place, I am biased. I think it looks real pretty. It makes it a lot easier for me to identify trends, make meaningful changes in my diabetes management habits, and hopefully lead a healthier life.\n\n**Mike: [00:01:28]** So we're talking today about Tidepool and maybe you could give us a quick description of what Tidepool is and who it may appeal to \n**Tapani: [00:01:38] **We're a nonprofit organization. And we're developing software that helps people with diabetes manage that condition. We enable people to upload data from their devices, different types of devices, like glucose monitors, meters, insulin pumps, and so on into a single place where you can view that data in one place.\nAnd you can share it with your care team members like doctors, clinicians, or [00:02:00] your family members. They can view that data in real time as well.\n \n**Mike: [00:02:03]** Are there many companies that are doing this type of thing today?\n\n**Tapani: [00:02:06]** There are a \nfew companies, as far as I'm aware, the only non-profit in this space though. Everything else is for profit.\nAnd there are a lot of companies that look at it from diabetes, from different perspective. They might work with type two diabetes or type one. We work with any kind. There's no difference. \n\n**Nic: [00:02:24]** In regards to Tidepool, are you building hardware as well as software? Or are you just looking at data? Can you shed some more light into that?\n\n**Tapani: [00:02:33]** Sure. We're a hundred percent software company. We don't make any other hardware. We do work with lots of great manufacturers of those devices in the space and medical space in general, but in particular diabetes that make those devices. And so we collaborate with them.\n\n**Mike: [00:02:48]** So what stage is Tidepool in today? Are you live? \n\n**Tapani: [00:02:50]** Yeah, we've been live since 2013 and we we've grown since a fair bit. And we're now at 33 or so people, but still, I guess you could consider as a [00:03:00] startup, substance. So \n\n**Nic: [00:03:01]** I'd actually like to dig deeper into the software that Tidepool produces.\nSo you said that there are many great hardware manufacturers working in this space. How are you obtaining that data? Are you like a mobile application connecting to the hardware? 
Are you some kind of IoT or are they sending you that information and you're working with it at that point?\n\n**Tapani: [00:03:22]** So it really depends on the device and the integration that we have. For most devices, we talk directly to the device. So these are devices that you would use at your home and you connect them to a PC over Bluetooth or USB or your phone for that matter. And we have software that can read the data directly from the device and upload it to our backend service that's using Mongo DB to store that data. \n\n**Mike: [00:03:43]** Is there a common format that is required in order to send data to Tidepool? \n\n**Tapani: [00:03:49]** We wish. That would make our life a whole lot simpler. No, actually a good chunk of the work that's involved in here is writing software that knows how to talk to each individual device.\nAnd there's some [00:04:00] families of devices that, that use similar protocols and so on, but no, there's no really universal protocol that talk to the devices or for the format of the data that comes from the devices for that matter. So a lot of the work goes into normalizing that data so that when it is stored in in our backend, it's then visible and viewable by people.\n\n**Nic: [00:04:21]** So we'll get to this in a second. It does sound like a perfect case for a kind of a document database, but in regards to supporting all of these other devices, so I imagine that any single device over its lifetime might experience different kind of data output through the versions.\nWhat kind of compatibility is Tidepool having on these devices? Do you use, do say support like the latest version or? Maybe you can shed some light on that, how many devices in general you're supporting. \nTapani: [00:04:50] Right now, we support over 50 different devices. And then by extension anything that Apple Health supports.\nSo if you have a device that stores data in apple [00:05:00] health kit, we can read that as well. But 50 devices directly. You can actually go to type bullet org slash devices, and you can see the list full list there. You can filter it by different types of devices and manufacturers and so on. And that those devices are some of them are actually obsolete at this point.\nThey're end of life. You can't buy them anymore. So we support devices even long past the point when there've been sold. We try to keep up with the latest devices, but that's not always feasible.\n\n**Mike: [00:05:26]** This is it's like a health oriented IOT application right? \n\n**Tapani: [00:05:30]** Yeah. In a way that that's certainly true.\nThe only difference here maybe is that those devices don't directly usually connect to the net. So they need an intermediary. Like in our case, we have a mobile application. We have a desktop application that talks to the device that's in your possession, but you can't reach the device directly over internet.\n\n**Mike:** And just so we can understand the scale, how many devices are reporting into Tidepool today?\n\n**Tapani:** I don't actually know exactly how many devices there are. Those are discreet different types of devices. [00:06:00] What I can say is our main database production database, we're storing something it's approaching to 6 billion documents at this point\nin terms of the amount of data across across and hundreds of thousands of users. \n\n**Nic: [00:06:11]** Just for clarity, because I want to get to, because the diabetes space is not something I'm personally too familiar in. And the different hardware that exists. 
So say I'm a user of the hardware and it's reporting to Tidepool.\nIs Tidepool gonna alert you if there's some kind of low blood sugar level or does it serve a different purpose? \n\n**Tapani: [00:06:32]** Both. And this is actually a picture that's changing. So right now what we have out there in terms of the products, they're backward looking. So what happened in the past, but you might might be using these devices and you might upload data, a few times a day.\nBut if you're using some of the more, more newer devices like continuous glucose monitors, those record data every five minutes. So the opposite frequency, it could be much higher, but that's going to change going [00:07:00] forward as more and more people start using this continuous glucose monitors that are actually doing that. For the older devices might be, this is classic fingerprint what glucose meter or you poke your finger, or you draw some little bit of blood and you measure it and you might do that five to 10 times a day.\nVersus 288 times, if you have a glucose monitor, continuous glucose monitor that sends data every five minutes. So it varies from device to device. \n\n**Mike: [00:07:24]** This is a fascinating space. I test myself on a regular basis as part of my diet not necessarily for diabetes, but for for ketosis and that's an interesting concept to me. The continuous monitoring devices, though,\nthat's something that you attach to your body, right? \n\n**Tapani: [00:07:39]** Yeah. These are little devices about the size of a stack of quarters that sits somewhere on your skin, on an arm or leg or somewhere on your body. There's a little filament that goes onto your skin, that does the actual measurements, but it's basically a little full. \n\n**Mike: [00:07:54]** So thinking about the application itself and how you're leveraging MongoDB, do you want to talk a little bit about how the [00:08:00] application comes together and what the stack looks like?\n\n**Tapani: [00:08:01]** Sure. So we're hosted in AWS, first of all. We have about 20 or so microservices in there. And as part of those microservices, they all communicate to all MongoDB Atlas.\nThat's implemented with the sort of best practices of suppose security in mind because security and privacy are critically important for us. So we're using the busy gearing from our microservices to MongoDB Atlas. And we're using a three node replica set in MongoDB Atlas, so that there's no chance of losing any of that data.\n\n**Mike: [00:08:32]** And in terms of the application itself, is it largely an API? I'm sure that there's a user interface or your application set, but what does the backend or the API look like in terms of the technology? \n\n**Tapani: [00:08:43]** So, what people see in front of them as a, either a desktop application or mobile application, that's the visible manifestation of it.\nBoth of those communicate to our backend through a set of rest APIs for authentication authorization, data upload, data retrieval, and so on. Those APIs then take that data and they store it in our MongoDB production cluster. So the API is very from give me our user profile to upload this pile of continuous glucose monitor samples.\n\n**Mike: [00:09:13]** What is the API written in? What technologies are you using?\n\n**Tapani: [00:09:16]** It's a mix of Node JS and Golang. I would say 80% Golang and 20% Node JS. \n\n**Nic: [00:09:23]** I'm interested in why Golang for this type of application. I wouldn't have thought it as a typical use case. 
So are you able to shed any light on that? \n\n**Tapani: [00:09:32]** The decision to switch to Golang? And so this actually the growing set of services. That happened before my time. I would say it's pretty well suited for this particular application. This, the backend service is fundamentally, it's a set of APIs that have no real user visible manifestation themselves.\nWe do have a web service, a web front end to all this as well, and that's written in React and so on, but the Golang is proven to be a very good language for developing this, services specifically that respond to API requests because really all they do is they're taking a bunch of inputs from the, on the caller and translating, applying business policy and so on, and then storing the data in Mongo.\nSo it's a good way to do it. \n\n**Nic: [00:10:16]** Awesome. So we know that you're using Go and Node for your APIs, and we know that you're using a MongaDB as your data layer. What features in particular using with MongoDB specifically? \n\n**Tapani: [00:10:26]** So right now, and I mentioned we were running a three node replica set.\nWe don't yet use sharding, but that's actually the next big thing that we'll be tackling in the near future because that set of data that we have is growing fairly fast and it will be growing very fast, even faster in the future with a new product coming out. But sharding will be next one.\nWe do a lot of aggregate queries across several different collections. So some fairly complicated queries. And as I mentioned, that largest collection is fairly large. So performance, that becomes critical. Having the right indices in place and being able to look for all the right data is critical.\n\n**Nic: [00:11:07]** You mentioned aggregations across numerous collections at a high level. Are you able to talk us through what exactly you're aggregating to give us an idea of a use case. \n\n**Tapani: [00:11:16]** Yeah. Sure. In fact, the one thing I should've mentioned earlier perhaps is besides being non-profit, we're also open source.\nSo everything we do is actually visible on GitHub in our open-source repo. So if anybody's interested in the details, they're welcome to take a look in there. But in the sort of broader sense, we have a user collection where all the user accounts profiles are stored. We have a data collection or device data collection, rather.\nThat's where all the data from diabetes devices goes. There's other collections for things like messages that we sent to the users, emails, basically invitations to join this account or so on and confirmations of those and so different collections for different use cases. Broadly speaking is it's, there's one collection for each use case like user profiles or messages, notifications, device data.\n\n**Mike: [00:12:03]** And I'm thinking about the schema and the aggregations across multiple collections. Can you share what that schema looks like? And maybe even just the number of collections that you're storing. \n\n**Tapani: [00:12:12]** Sure. Number of collections is actually relatively small. It's only a half a dozen or so, but the schema is pretty straightforward for most of them.\nThey like the user profiles. There's only so many things you store in a user profile, but that device data collection is perhaps the most complex because it stores data from all the devices, regardless of type. So the data that comes out of a continuous glucose monitor is different than the data that comes from an insulin pump.\nFor instance, for example. 
So there's different fields. There are different units that we're dealing with and so on. \n\n**Mike: [00:12:44]** Okay, so Tapani, what other features within the Atlas platform are you leveraging today? And have you possibly look at automated scalability as a solution moving forward?\n\n**Tapani: [00:12:55]** So our use of MongoDB Atlas right now is pretty straightforward and intensive. So a lot of data in the different collections, indices and aggregate queries that are used to manage that data and so on. The things that we're looking forward in the future are things like sharding because of the scale of data that's growing.\nOther things are a data lake, for instance, archiving some of the data. Currently our production database stores all the data from 2013 onwards. And really the value of that data beyond the past few months to a few years is not that important. So we'd want to archive it. We can't lose it because it's important data, but we don't want to archive it and move it someplace else.\nSo that, and bucketizing the data in the more effective ways. And so it's faster to access by different stakeholders in the company.\n\n**Mike: [00:13:43]** So some really compelling features that are available today around online archiving. I think we can definitely help out there. And coming down the pike, we've got some really exciting stuff happening in the time series space.\nSo stay tuned for that. We'll be talking more about that at our .live conference in July. So stay tuned for that. \n\n**Nic: [00:14:04]** Hey Mike, how about you to give a plug about that conference right now?\n\n**Mike: [00:14:06]** Yeah, sure. It's our biggest user conference of the year. And we get together, thousands of developers join us and we present all of the feature updates.\nWe're going to be talking about MongoDB 5.0, which is the latest upcoming release and some really super exciting announcements there. There's a lot of breaks and brain breaking activities and just a great way to get plugged into the MongoDB community. You can get more information at mongodb.com/live.\nSo Tapani, thanks so much for sharing the details of how you're leveraging Mongo DB. As we touched on earlier, this is an application that users are going to be sharing very sensitive details about their health. Do you want to talk a little bit about the security?\n\n**Tapani: [00:14:49]** Sure. Yeah, it's actually, it's a critically important piece for us. So first of all of those APS that we talked about earlier, those are all the traffic is encrypted in transit. There's no unauthorized or unauthenticated access to any other data or API. In MongoDB Atlas, what we're obviously leveraging is we use the encryption at rest.\nSo all the data that's stored by MongoDB is encrypted. We're using VPC peering between our services and MongoDB Atlas, to make sure that traffic is even more secure. And yeah, privacy and security of the data is key thing for us, because this is all what what the health and human services calls, protected health information or PHI. That's the sort of highest level of private information you could possibly have.\n\n**Nic: [00:15:30]** So in regards to the information being sent, we know that the information is being encrypted at rest. Are you collecting data that could be sensitive, like social security numbers and things like that that might need to be encrypted at a field level to prevent prying eyes of DBAs and similar?\n\n**Tapani: [00:15:45]** We do not collect any social security information or anything like that. 
That's purely healthcare data. Um, diabetes device data, and so on. No credit cards. No SSNs.\n\n**Nic: [00:15:56]** Got it. So nothing that could technically tie the information back to an individual or be used in a malicious way?\n\n**Tapani: [00:16:02]** Not in that way now. I mean, I think it's fair to say that this is obviously people's healthcare information, so that is sensitive regardless of whether it could be used maliciously or not. \n\n**Mike: [00:16:13]** Makes sense. Okay. So I'm wondering if you want to talk a little bit about what's next for Tidepool. You did make a brief mention of another application that you'll be launching.\nMaybe talk a little bit about the roadmap. \n\n**Tapani: [00:16:25]** Sure. We're working on, besides the existing products we're working on a new product that's called Tidepool Loop and that's an effort to build an automatic insulin dosing system. This takes a more proactive role in the treatment of diabetes.\nExisting products show data that you already have. This is actually helping you administer insulin. And so it's a smartphone application that's currently under FDA review. We are working with a couple of great partners and the medical device space to launch that with them, with their products. \n\n**Mike: [00:16:55]** Well, I love the open nature of Tidepool.\nIt seems like everything you're doing is kind of out in the open. From open source to full disclosure on the architecture stack. That's something that that I can really appreciate as a developer. I love the ability to kind of dig a little deeper and see how things work.\nIs there anything else that you'd like to cover from an organizational perspective? Any other details you wanna share? \n\n**Tapani: [00:17:16]** Sure. I mean, you mentioned the transparency and openness. We practice what some people might call radical transparency. Not only is our software open source. It's in GitHub.\nAnybody can take a look at it. Our JIRA boards for bugs and so on. They're also open, visible to anybody. Our interactions with the FDA, our meeting minutes, filings, and so on. We also make those available. Our employee handbook is open. We actually forked another company's employee handbook, committed ours opened as well.\nAnd in the hopes that people can benefit from that. Ultimately, why we do this is we hope that we can help improve public health by making everything as, as much as possible we can do make it publicly. And as far as the open source projects go, we have a, several people out there who are making open source contributions or pull requests and so on. Now, because we do operate in the healthcare space,\nwe have to review those submissions pretty carefully before we integrate them into the product. But yeah, we do take to take full requests from people we've gotten community submissions, for instance, translations to Spanish and German and French products. But we'd have to verify those before we can roll them up.\n\n**Mike: [00:18:25]** Well, this has been a great discussion. Is there anything else that you'd like to share with the audience before we begin to wrap up? \n\n**Tapani: [00:18:29]** Oh, a couple of things it's closing. So I was, I guess it would be one is first of all we're a hundred percent remote first and globally distributed organization.\nWe have people in five countries in 14 states within the US right now. We're always hiring in some form or another. So if anybody's interested in, they're welcome to take a look at our job postings tidepool.org/jobs. 
The other thing is as a nonprofit, we tend suddenly gracefully accept donations as well.\nSo there's another link there that will donate. And if anybody's interested in the technical details of how we actually built this all, there's a couple of links that I can throw out there. One is tidepool.org/pubsecc, that'll be secc, that's a R a security white paper, basically whole lot of information about the architecture and infrastructure and security and so on.\nWe also publish a series of blood postings, at tidepool.org/blog, where the engineering team has put out a couple of things in there about our infrastructure. We went through some pretty significant upgrades over the past couple of years, and then finally github.com/tidepool is where are all our sources.\n\n**Nic: [00:19:30]** Awesome. And you mentioned that you're a remote company and that you were looking for candidates. Were these candidates global, strictly to the US, does it matter?\n\n**Tapani: [00:19:39]** So we hire anywhere people are, and they work from wherever they are. We don't require relocation. We don't require a visa in that sense that you'd have to come to the US, for instance, to work. We have people in five countries, us, Canada, UK, Bulgaria, and Croatia right now.\n\n**Mike: [00:19:55]** Well, Tapani I want to thank you so much for joining us today. I really enjoyed the conversation. \n\n**Tapani: [00:19:58]** Thanks as well. Really enjoyed it.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Tapani Otala is the VP of Engineering at Tidepool, an open source, not-for-profit company focused on liberating data from diabetes devices, supporting researchers, and providing great, free software to people with diabetes and their care teams. He joins us today to share details of the Tidepool solution, how it enables enhanced visibility into Diabetes data and enables people living with this disease to better manage their condition. Visit https://tidepool.org for more information.", "contentType": "Podcast"}, "title": "Making Diabetes Data More Accessible and Meaningful with Tidepool and MongoDB", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/build-movie-search-application", "action": "created", "body": "# Tutorial: Build a Movie Search Application Using Atlas Search\n\nLet me guess. You want to give your application users the ability to find *EXACTLY* what they are looking for FAST! Who doesn't? Search is a requirement for most applications today. With MongoDB Atlas Search, we have made it easier than ever to integrate simple, fine-grained, and lightning-fast search capabilities into all of your MongoDB applications. To demonstrate just how easy it is, let's build a web application to find our favorite movies.\n\nThis tutorial is the first in a four-part series where we will learn over the next few months to build out the application featured in our Atlas Search Product Demo.\n\n:youtube]{vid=kZ77X67GUfk}\n\nArmed with only a basic knowledge of HTML and Javascript, we will build out our application in the following four parts.\n\n##### The Path to Our Movie Search Application\n\n| | |\n|---|---|\n| **Part 1** | Get up and running with a basic search movie engine allowing us to look for movies based on a topic in our MongoDB Atlas movie data. |\n| **Part 2** | Make it even easier for our users by building more advanced search queries with fuzzy matching and wildcard paths to forgive them for fat fingers and misspellings. 
We'll introduce custom score modifiers to allow us to influence our movie results. |\n| **Part 3** | Add autocomplete capabilities to our movie application. We'll also discuss index mappings and analyzers and how to use them to optimize the performance of our application. |\n| **Part 4** | Wrap up our application by creating filters to query across dates and numbers to even further fine-tune our movie search results. We'll even host the application on Realm, our serverless backend platform, so you can deliver your movie search website anywhere in the world. |\n\nNow, without any further adieu, let's get this show on the road!\n\n \n\nThis tutorial will guide you through building a very basic movie search engine on a free tier Atlas cluster. We will set it up in a way that will allow us to scale our search in a highly performant manner as we continue building out new features in our application over the coming weeks. By the end of Part 1, you will have something that looks like this:\n\n \n\nTo accomplish this, here are our tasks for today:\n\n \n\n## STEP 1. SPIN UP ATLAS CLUSTER AND LOAD MOVIE DATA\n\nTo **Get Started**, we will need only an Atlas cluster, which you can get for free, loaded with the Atlas sample dataset. If you do not already have one, sign up to [create an Atlas cluster on your preferred cloud provider and region.\n\nOnce you have your cluster, you can load the sample dataset by clicking the ellipse button and **Load Sample Dataset**.\n\n \n\n>For more detailed information on how to spin up a cluster, configure your IP address, create a user, and load sample data, check out Getting Started with MongoDB Atlas from our documentation.\n\nNow, let's have a closer look at our sample data within the Atlas Data Explorer. In your Atlas UI, click on **Collections** to examine the **movies** collection in the new **sample_mflix** database. This collection has over 23k movie documents with information such as title, plot, and cast. The **sample_mflix.movies** collection provides the dataset for our application.\n\n \n\n \n\n## STEP 2. CREATE A SEARCH INDEX\n\nSince our movie search engine is going to look for movies based on a topic, we will use Atlas Search to query for specific words and phrases in the `fullplot` field of the documents.\n\nThe first thing we need is an Atlas Search index. Click on the tab titled **Search Indexes** under **Collections**. Click on the green **Create Search Index** button. Let's accept the default settings and click **Create Index**. That's all you need to do to start taking advantage of Search in your MongoDB Atlas data!\n\n \n\nBy accepting the default settings when we created the Search index, we dynamically mapped all the fields in the collection as indicated in the default index configuration:\n\n``` javascript\n{\n mappings: {\n \"dynamic\":true \n }\n}\n```\n\nMapping is simply how we define how the fields on our documents are indexed and stored. If a field's value looks like a string, we'll treat it as a full-text field, similarly for numbers and dates. This suits MongoDB's flexible data model perfectly. As you add new data to your collection and your schema evolves, dynamic mapping accommodates those changes in your schema and adds that new data to the Atlas Search index automatically.\n\nWe'll talk more about mapping and indexes in Part 3 of our series. For right now, we can check off another item from our task list.\n\n \n\n## STEP 3. 
WRITE A BASIC AGGREGATION WITH $SEARCH OPERATORS\n\nSearch queries take the form of an aggregation pipeline stage. The `$search` stage performs a search query on the specified field(s) covered by the Search index and must be used as the first stage in the aggregation pipeline.\n\nLet's use the aggregation pipeline builder inside of the Atlas UI to make an aggregation pipeline that makes use of our Atlas Search index. Our basic aggregation will consist of only three stages: $search, $project, and $limit.\n\n>You do not have to use the pipeline builder tool for this stage, but I really love the easy-to-use user interface. Plus, the ability to preview the results by stage makes troubleshooting a snap!\n\n \n\nNavigate to the **Aggregation** tab in the **sample_mflix.movies** collection:\n\n \n\n### Stage 1. $search\n\nFor the first stage, select the `$search` aggregation operator to search for the *text* \"werewolves and vampires\" in the `fullplot` field *path.*\n\n \n\nYou can also add the **highlight** option, which will return the highlights by adding fields to the result payload that display search terms in their original context, along with the adjacent text content. (More on this later.)\n\n \n\nYour final `$search` aggregation stage should be:\n\n``` javascript\n{\n text: {\n query: \"werewolves and vampires\",\n path: \"fullplot\", \n },\n highlight: { \n path: \"fullplot\" \n }\n}\n```\n\n>Note the returned movie documents in the preview panel on the right. If no documents are in the panel, double-check the formatting in your aggregation code.\n\n### Stage 2: $project\n\n \n\nAdd stage `$project` to your pipeline to get back only the fields we will use in our movie search application. We also use the `$meta` operator to surface each document's **searchScore** and **searchHighlights** in the result set.\n\n``` javascript\n{\n title: 1,\n year:1,\n fullplot:1,\n _id:0,\n score: {\n $meta:'searchScore'\n },\n highlight:{\n $meta: 'searchHighlights'\n }\n}\n```\n\nLet's break down the individual pieces in this stage further:\n\n**SCORE:** The `\"$meta\": \"searchScore\"` contains the assigned score for the document based on relevance. This signifies how well this movie's `fullplot` field matches the query terms \"werewolves and vampires\" above.\n\nNote that by scrolling in the right preview panel, the movie documents are returned with the score in *descending* order. 
This means we get the best matched movies first.\n\n**HIGHLIGHT:** The `\"$meta\": \"searchHighlights\"` contains the highlighted results.\n\n*Because* **searchHighlights** *and* **searchScore** *are not part of the original document, it is necessary to use a $project pipeline stage to add them to the query output.*\n\nNow, open a document's **highlight** array to show the data objects with text **values** and **types**.\n\n``` bash\ntitle:\"The Mortal Instruments: City of Bones\"\nfullplot:\"Set in contemporary New York City, a seemingly ordinary teenager, Clar...\"\nyear:2013\nscore:6.849891185760498\nhighlight:Array\n 0:Object\n path:\"fullplot\"\n texts:Array\n 0:Object\n value:\"After the disappearance of her mother, Clary must join forces with a g...\"\n type:\"text\"\n 1:Object\n value:\"vampires\"\n type:\"hit\"\n 2:Object\n 3:Object\n 4:Object\n 5:Object\n 6:Object\n score:3.556248188018799\n```\n\n**highlight.texts.value** - text from the `fullplot` field returning a match\n\n**highlight.texts.type** - either a hit or a text \n- **hit** is a match for the query\n- **text** is the surrounding text context adjacent to the matching\n string\n\nWe will use these later in our application code.\n\n### Stage 3: $limit\n\n \n\nRemember that the results are returned with the scores in descending order. `$limit: 10` will therefore bring the 10 most relevant movie documents to your search query. $limit is very important in Search because speed is very important. Without `$limit:10`, we would get the scores for all 23k movies. We don't need that.\n\nFinally, if you see results in the right preview panel, your aggregation pipeline is working properly! Let's grab that aggregation code with the Export Pipeline to Language feature by clicking the button in the top toolbar.\n\n \n\n \n\nYour final aggregation code will be this:\n\n``` bash\n\n { \n $search {\n text: {\n query: \"werewolves and vampires\",\n path: \"fullplot\" \n },\n highlight: { \n path: \"fullplot\" \n }\n }},\n { \n $project: {\n title: 1,\n _id: 0,\n year: 1,\n fullplot: 1,\n score: { $meta: 'searchScore' },\n highlight: { $meta: 'searchHighlights' }\n }},\n { \n $limit: 10 \n }\n]\n```\n\nThis small snippet of code powers our movie search engine!\n\n \n\n## STEP 4. CREATE A REST API\n\nNow that we have the heart of our movie search engine in the form of an aggregation pipeline, how will we use it in an application? There are lots of ways to do this, but I found the easiest was to simply create a RESTful API to expose this data - and for that, I leveraged [MongoDB Realm's HTTP Service from right inside of Atlas.\n\nRealm is MongoDB's serverless platform where functions written in Javascript automatically scale to meet current demand. To create a Realm application, return to your Atlas UI and click **Realm.** Then click the green **Start a New Realm App** button.\n\nName your Realm application **MovieSearchApp** and make sure to link to your cluster. All other default settings are fine.\n\nNow click the **3rd Party Services** menu on the left and then **Add a Service**. Select the HTTP service and name it **movies**:\n\n \n\nClick the green **Add a Service** button, and you'll be directed to **Add Incoming Webhook**.\n\nOnce in the **Settings** tab, name your webhook **getMoviesBasic**. Enable **Respond with Result**, and set the HTTP Method to **GET**. 
To make things simple, let's just run the webhook as the System and skip validation with **No Additional Authorization.** Make sure to click the **Review and Deploy** button at the top along the way.\n\n \n\nIn this service function editor, replace the example code with the following:\n\n``` javascript\nexports = function(payload) {\n const movies = context.services.get(\"mongodb-atlas\").db(\"sample_mflix\").collection(\"movies\");\n let arg = payload.query.arg;\n\n return movies.aggregate(<>).toArray();\n};\n```\n\nLet's break down some of these components. MongoDB Realm interacts with your Atlas movies collection through the global **context** variable. In the service function, we use that context variable to access the **sample_mflix.movies** collection in your Atlas cluster. We'll reference this collection through the const variable **movies**:\n\n``` javascript\nconst movies =\ncontext.services.get(\"mongodb-atlas\").db(\"sample_mflix\").collection(\"movies\");\n```\n\nWe capture the query argument from the payload:\n\n``` javascript\nlet arg = payload.query.arg;\n```\n\nReturn the aggregation code executed on the collection by pasting your aggregation copied from the aggregation pipeline builder into the code below:\n\n``` javascript\nreturn movies.aggregate(<>).toArray();\n```\n\nFinally, after pasting the aggregation code, change the terms \"werewolves and vampires\" to the generic `arg` to match the function's payload query argument - otherwise our movie search engine capabilities will be *extremely* limited.\n\n \n\nYour final code in the function editor will be:\n\n``` javascript\nexports = function(payload) {\n const movies = context.services.get(\"mongodb-atlas\").db(\"sample_mflix\").collection(\"movies\");\n let arg = payload.query.arg;\n return movies.aggregate(\n { \n $search: {\n text: {\n query: arg,\n path:'fullplot' \n },\n highlight: { \n path: 'fullplot' \n }\n }},\n { \n $project: {\n title: 1,\n _id: 0,\n year: 1, \n fullplot: 1,\n score: { $meta: 'searchScore'},\n highlight: {$meta: 'searchHighlights'}\n }\n },\n { \n $limit: 10\n }\n ]).toArray();\n};\n```\n\nNow you can test in the Console below the editor by changing the argument from **arg1: \"hello\"** to **arg: \"werewolves and vampires\"**.\n\n>Please make sure to change BOTH the field name **arg1** to **arg**, as well as the string value **\"hello\"** to **\"werewolves and vampires\"** - or it won't work.\n\n \n\n \n\nClick **Run** to verify the result:\n\n \n\nIf this is working, congrats! We are almost done! Make sure to **SAVE** and deploy the service by clicking **REVIEW & DEPLOY CHANGES** at the top of the screen.\n\n### Use the API\n\nThe beauty of a REST API is that it can be called from just about anywhere. Let's execute it in our browser. However, if you have tools like Postman installed, feel free to try that as well.\n\nSwitch back to the **Settings** of your **getMoviesBasic** function, and you'll notice a Webhook URL has been generated.\n\n \n\nClick the **COPY** button and paste the URL into your browser. Then append the following to the end of your URL: **?arg=\"werewolves and vampires\"**\n\n \n\nIf you receive an output like what we have above, congratulations! You\nhave successfully created a movie search API! \ud83d\ude4c \ud83d\udcaa\n\n \n\n \n\n## STEP 5. FINALLY! THE FRONT-END\n\nNow that we have this endpoint, it takes a single call from the front-end application using the Fetch API to retrieve this data. Download the following [index.html file and open it in your browser. 
You will see a simple search bar:\n\n \n\nEntering data in the search bar will bring you movie search results because the application is currently pointing to an existing API.\n\nNow open the HTML file with your favorite text editor and familiarize yourself with the contents. You'll note this contains a very simple container and two javascript functions:\n\n- Line 81 - **userAction()** will execute when the user enters a\n search. If there is valid input in the search box and no errors, we\n will call the **buildMovieList()** function.\n- Line 125 - **buildMovieList()** is a helper function for\n **userAction()**.\n\nThe **buildMovieList()** function will build out the list of movies along with their scores and highlights from the `fullplot` field. Notice in line 146 that if the **highlight.texts.type === \"hit\"** we highlight the **highlight.texts.value** with a style attribute tag.\\*\n\n``` javascript\nif (moviesi].highlight[j].texts[k].type === \"hit\") {\n txt += ` ${movies[i].highlight[j].texts[k].value} `;\n} else {\n txt += movies[i].highlight[j].texts[k].value;\n}\n```\n\n### Modify the Front-End Code to Use Your API\n\nIn the **userAction()** function, notice on line 88 that the **webhook_url** is already set to a RESTful API I created in my own Movie Search application.\n\n``` javascript\nlet webhook_url = \"https://webhooks.mongodb-realm.com/api/client/v2.0/app/ftsdemo-zcyez/service/movies-basic-FTS/incoming_webhook/movies-basic-FTS\";\n```\n\nWe capture the input from the search form field in line 82 and set it equal to **searchString**. In this application, we append that **searchString** input to the **webhook_url**\n\n``` javascript\nlet url = webhook_url + \"?arg=\" + searchString;\n```\n\nbefore calling it in the fetch API in line 92.\n\nTo make this application fully your own, simply replace the existing **webhook_url** value on line 88 with your own API from the **getMoviesBasic** Realm HTTP Service webhook you just created. \ud83e\udd1e Now save these changes, and open the **index.html** file once more in your browser, et voil\u00e0! You have just built your movie search engine using Atlas Search. \ud83d\ude0e\n\nPass the popcorn! \ud83c\udf7f What kind of movie do you want to watch?!\n\n \n\n## That's a Wrap!\n\nYou have just seen how easy it is to build a simple, powerful search into an application with [MongoDB Atlas Search. In our next tutorial, we continue by building more advanced search queries into our movie application with fuzzy matching and wildcard to forgive fat fingers and typos. We'll even introduce custom score modifiers to allow us to shape our search results. Check out our $search documentation for other possibilities.\n\n \n\nHarnessing the power of Apache Lucene for efficient search algorithms, static and dynamic field mapping for flexible, scalable indexing, all while using the same MongoDB Query Language (MQL) you already know and love, spoken in our very best Liam Neeson impression - MongoDB now has a very particular set of skills. Skills we have acquired over a very long career. Skills that make MongoDB a DREAM for developers like you.\n\nLooking forward to seeing you in Part 2. Until then, if you have any questions or want to connect with other MongoDB developers, check out our community forums. Come to learn. 
Stay to connect.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "Check out this blog tutorial to learn how to build a movie search application using MongoDB Atlas Search.", "contentType": "Tutorial"}, "title": "Tutorial: Build a Movie Search Application Using Atlas Search", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-keypath-filtering", "action": "created", "body": "# Filter Realm Notifications in Your iOS App with KeyPaths\n\n## Introduction\n\nRealm Swift v10.12.0 introduced the ability to filter change notifications for desired key paths. This level of granularity has been something we've had our eye on, so it\u2019s really satisfying to release this kind of control and performance benefit. Here\u2019s a quick rundown on what\u2019s changed and why it matters.\n\n## Notifications Before\n\nBy default, notifications return changes for all insertions, modifications, and deletions. Suppose that I have a schema that looks like the one below.\n\nIf I observe a `Results` object and the name of one of the companies in the results changes, the notification block would fire and my UI would update: \n\n```swift\nlet results = realm.objects(Company.self)\nlet notificationToken = results.observe() { changes in\n // update UI\n}\n```\n\nThat\u2019s quite straightforward for non-collection properties. But what about other types, like lists?\n\nNaturally, the block I passed into .`observe` will execute each time an `Order` is added or removed. But the block also executes each time a property on the `Order` list is edited. The same goes for _those_ properties\u2019 collections too (and so on!). Even though I\u2019m observing \u201cjust\u201d a collection of `Company` objects, I\u2019ll receive change notifications for properties on a half-dozen other collections.\n\nThis isn\u2019t necessarily an issue for most cases. Small object graphs, or \u201csiloed\u201d objects, that don\u2019t feature many relationships might not experience unneeded notifications at all. But for complex webs of objects, where several layers of children objects exist, an app developer may benefit from a **major performance enhancement and added control from KeyPath filtering**.\n\n## KeyPath Filtering\n\nNow `.observe` comes with an optional `keyPaths` parameter:\n\n```swift\npublic func observe(keyPaths: String]? = nil,\n on queue: DispatchQueue? = nil,\n _ block: @escaping (ObjectChange) -> Void) -> NotificationToken\n```\n\nThe `.observe `function will only notify on the field or fields specified in the `keyPaths` parameter. Other fields are ignored unless explicitly passed into the parameter.\n\nThis allows the app developer to tailor which relationship paths are observed. 
This reduces computing cost and grants finer control over when the notification fires.\n\nOur modified code might look like this:\n\n```swift\nlet results = realm.objects(Company.self)\nlet notificationToken = results.observe(keyPaths: [\"orders.status\"]) { changes in\n// update UI\n}\n```\n\n`.observe `can alternatively take a `PartialKeyPath`:\n\n```swift\nlet results = realm.objects(Company.self)\nlet notificationToken = results.observe(keyPaths: [\\Company.orders.status]) { changes in\n// update UI\n}\n```\n\nIf we applied the above snippets to our previous example, we\u2019d only receive notifications for this portion of the schema:\n\n![Graph showing that just a single path through the Objects components is selected\n\nThe notification process is no longer traversing an entire tree of relationships each time a modification is made. Within a complex tree of related objects, the change-notification checker will now traverse only the relevant paths. This saves huge amounts of work. \n\nIn a large database, this can be a serious performance boost! The end-user can spend less time with a spinner and more time using your application.\n\n## Conclusion\n\n- `.observe` has a new optional `keyPaths` parameter. \n- The app developer has more granular control over when notifications are fired.\n- This can greatly improve notification performance for large databases and complex object graphs.\n\nPlease provide feedback and ask any questions in the Realm Community Forum.\n", "format": "md", "metadata": {"tags": ["Realm", "iOS"], "pageDescription": "How to customize your notifications when your iOS app is observing Realm", "contentType": "Tutorial"}, "title": "Filter Realm Notifications in Your iOS App with KeyPaths", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/pause-resume-atlas-clusters", "action": "created", "body": "# How to Easily Pause and Resume MongoDB Atlas Clusters\n\nOne of the most important things to think about in the cloud is what is burning dollars while you sleep. In the case of MongoDB Atlas, that is your live clusters. The minute you start a cluster (with the exception of our free tier), we start accumulating cost.\n\nIf you're using a dedicated cluster\u2014not one of the cheaper, shared cluster types, such as M0, M2 or M5\u2014then it's easy enough to pause a cluster using the Atlas UI, but logging in over 2FA can be a drag. Wouldn't it be great if we could just jump on a local command line to look at our live clusters?\n\nThis you can do with a command line tool like `curl`, some programming savvy, and knowledge of the MongoDB Atlas Admin API. But who has time for that? Not me, for sure.\n\nThat is why I wrote a simple script to automate those steps. It's now a Python package up on PyPi called mongodbatlas.\n\nYou will need Python 3.6 or better installed to run the script. (This is your chance to escape the clutches of 2.x.)\n\nJust run:\n\n``` bash\n$ pip install mongodbatlas\n Collecting mongodbatlas\n Using cached mongodbatlas-0.2.6.tar.gz (17 kB)\n ...\n ...\n Building wheels for collected packages: mongodbatlas\n Building wheel for mongodbatlas (setup.py) ... 
done\n Created wheel for mongodbatlas: filename=mongodbatlas-0.2.6-py3-none-any.whl size=23583 sha256=d178ab386a8104f4f5100a6ccbe61670f9a1dd3501edb5dcfb585fb759cb749c\n Stored in directory: /Users/jdrumgoole/Library/Caches/pip/wheels/d1/84/74/3da8d3462b713bfa67edd02234c968cb4b1367d8bc0af16325\n Successfully built mongodbatlas\n Installing collected packages: certifi, chardet, idna, urllib3, requests, six, python-dateutil, mongodbatlas\n Successfully installed certifi-2020.11.8 chardet-3.0.4 idna-2.10 mongodbatlas-0.2.6 python-dateutil-2.8.1 requests-2.25.0 six-1.15.0 urllib3-1.26.1\n```\n\nNow you will have a script installed called `atlascli`. To test the install worked, run `atlascli -h`.\n\n``` bash\n$ atlascli -h\n usage: atlascli -h] [--publickey PUBLICKEY] [--privatekey PRIVATEKEY]\n [-p PAUSE_CLUSTER] [-r RESUME_CLUSTER] [-l] [-lp] [-lc]\n [-pid PROJECT_ID_LIST] [-d]\n\n A command line program to list organizations,projects and clusters on a\n MongoDB Atlas organization.You need to enable programmatic keys for this\n program to work. See https://docs.atlas.mongodb.com/reference/api/apiKeys/\n\n optional arguments:\n -h, --help show this help message and exit\n --publickey PUBLICKEY\n MongoDB Atlas public API key.Can be read from the\n environment variable ATLAS_PUBLIC_KEY\n --privatekey PRIVATEKEY\n MongoDB Atlas private API key.Can be read from the\n environment variable ATLAS_PRIVATE_KEY\n -p PAUSE_CLUSTER, --pause PAUSE_CLUSTER\n pause named cluster in project specified by project_id\n Note that clusters that have been resumed cannot be\n paused for the next 60 minutes\n -r RESUME_CLUSTER, --resume RESUME_CLUSTER\n resume named cluster in project specified by\n project_id\n -l, --list List everything in the organization\n -lp, --listproj List all projects\n -lc, --listcluster List all clusters\n -pid PROJECT_ID_LIST, --project_id PROJECT_ID_LIST\n specify the project ID for cluster that is to be\n paused\n -d, --debug Turn on logging at debug level\n\n Version: 0.2.6\n```\n\nTo make this script work, you will need to do a little one-time setup on your cluster. You will need a [programmatic key for your cluster. You will also need to enable the IP address that the client is making requests from.\n\nThere are two ways to create an API key:\n\n- If you have a single project, it's probably easiest to create a single project API key\n- If you have multiple projects, you should probably create an organization API key and add it to each of your projects.\n\n## Single Project API Key\n\nGoing to your \"Project Settings\" page by clicking on the \"three dot\" button next your project name at the top-left of the screen and selecting \"Project Settings\". Then click on \"Access Manager\" on the left side of the screen and click on \"Create API Key\". Take a note of the public *and* private parts of the key, and ensure that the key has the \"Project Cluster Manager\" permission. More detailed steps can be found in the documentation.\n\n## Organization API Key\n\nClick on the cog icon next to your organization name at the top-left of the screen. Click on \"Access Manager\" on the left-side of the screen and click on \"Create API Key\". Take a note of the public *and* private parts of the key. Don't worry about selecting any specific organization permissions.\n\nNow you'll need to invite the API key to each of the projects containing clusters you wish to control. Click on \"Projects' on the left-side of the screen. 
For each of the projects, click on the \"three dots\" icon on the same row in the project table and select \"Visit Project Settings\" Click on \"Access Manager\", and click on \"Invite to Project\" on the top-right. Paste your public key into the search box and select it in the menu that appears. Ensure that the key has the \"Project Cluster Manager\" permission that it will need to pause and resume clusters in that project.\n\nMore detailed steps can be found in the documentation.\n\n## Configuring `atlascli`\n\nThe programmatic key has two parts: a public key and a private key. Both of these are used by the `atlascli` program to query the projects and clusters associated with the organization.\n\nYou can pass the keys in on the command line, but this is not recommended because they will be stored in the command line history. It's better to store them in environment variables, and the `atlascli` program will look for these two:\n\n- `ATLAS_PUBLIC_KEY`: stores the public key part of the programmatic key\n- `ATLAS_PRIVATE_KEY`: stores the private part of the programmatic key\n\nOnce you have created these environment variables, you can run `atlascli -l` to list the organization and its associated projects and clusters. I've blocked out part of the actual IDs with `xxxx` characters for security purposes:\n\n``` bash\n$ atlascli -l\n {'id': 'xxxxxxxxxxxxxxxx464d175c',\n 'isDeleted': False,\n 'links': {'href': 'https://cloud.mongodb.com/api/atlas/v1.0/orgs/599eeced9f78f769464d175c',\n 'rel': 'self'}],\n 'name': 'Open Data at MongoDB'}\n Organization ID:xxxxxxxxxxxxf769464d175c Name:'Open Data at MongoDB'\n project ID:xxxxxxxxxxxxd6522bc457f1 Name:'DevHub'\n Cluster ID:'xxxxxxxxxxxx769c2577a54' name:'DRA-Data' state=running\n project ID:xxxxxxxxx2a0421d9bab Name:'MUGAlyser Project'\n Cluster ID:'xxxxxxxxxxxb21250823bfba' name:'MUGAlyser' state=paused\n project ID:xxxxxxxxxxxxxxxx736dfdcddf Name:'MongoDBLive'\n project ID:xxxxxxxxxxxxxxxa9a5a04e7 Name:'Open Data Covid-19'\n Cluster ID:'xxxxxxxxxxxxxx17cec56acf' name:'pre-prod' state=running\n Cluster ID:'xxxxxxxxxxxxxx5fbfe04313' name:'dev' state=running\n Cluster ID:'xxxxxxxxxxxxxx779f979879' name:'covid-19' state=running\n project ID xxxxxxxxxxxxxxxxa132a8010 Name:'Open Data Project'\n Cluster ID:'xxxxxxxxxxxxxx5ce1ef94dd' name:'MOT' state=paused\n Cluster ID:'xxxxxxxxxxxxxx22bf6c226f' name:'GDELT' state=paused\n Cluster ID:'xxxxxxxxxxxxxx5647797ac5' name:'UKPropertyPrices' state=paused\n Cluster ID:'xxxxxxxxxxxxxx0f270da18a' name:'New-York-Taxi' state=paused\n Cluster ID:'xxxxxxxxxxxxxx11eab32cf8' name:'demodata' state=running\n Cluster ID:'xxxxxxxxxxxxxxxdcaef39c8' name:'stackoverflow' state=paused\n project ID:xxxxxxxxxxc9503a77fcce0c Name:'Realm'\n```\n\nTo pause a cluster, you will need to specify the `project ID` and the `cluster name`. Here is an example:\n\n``` bash\n$ atlascli --project_id xxxxxxxxxxxxxxxxa132a8010 --pause demodata\n Pausing 'demodata'\n Paused cluster 'demodata'\n```\n\nTo resume the same cluster, do the converse:\n\n``` bash\n$ atlascli --project_id xxxxxxxxxxxxxxxxa132a8010 --resume demodata\n Resuming cluster 'demodata'\n Resumed cluster 'demodata'\n```\n\nNote that once a cluster has been resumed, it cannot be paused again for a while.\n\nThis delay allows the Atlas service to apply any pending changes or patches to the cluster that may have accumulated while it was paused.\n\nNow go save yourself some money. 
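For example, a pair of `crontab` entries could pause a development cluster every evening and resume it each weekday morning. This is just a sketch: it reuses the project ID and cluster name from the examples above and assumes that `atlascli` is on the scheduler's `PATH` and that the `ATLAS_PUBLIC_KEY` and `ATLAS_PRIVATE_KEY` environment variables are visible to cron.

``` bash
# Pause the 'demodata' cluster at 7 pm on weekdays
0 19 * * 1-5 atlascli --project_id xxxxxxxxxxxxxxxxa132a8010 --pause demodata

# Resume it again at 7 am on weekdays
0 7 * * 1-5 atlascli --project_id xxxxxxxxxxxxxxxxa132a8010 --resume demodata
```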
This script can easily be run from a `crontab` entry or the Windows Task Scheduler.\n\nWant to see the code? It's in this [repo on GitHub.\n\nFor a much more full-featured Atlas Admin API in Python, please check out my colleague Matthew Monteleone's PyPI package AtlasAPI.\n\n> If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to easily pause and resume MongoDB Atlas clusters.", "contentType": "Article"}, "title": "How to Easily Pause and Resume MongoDB Atlas Clusters", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/time-series-candlestick", "action": "created", "body": "# Currency Analysis with Time Series Collections #1 \u2014 Generating Candlestick Charts Data\n\n## Introduction\n\nTechnical analysis is a methodology used in finance to provide price forecasts for financial assets based on historical market data. \n\nWhen it comes to analyzing market data, you need a better toolset. You will have a good amount of data, hence storing, accessing, and fast processing of this data becomes harder.\n\nThe financial assets price data is an example of time-series data. MongoDB 5.0 comes with a few important features to facilitate time-series data processing:\n\n- Time Series Collections: This specialized MongoDB collection makes it incredibly simple to store and process time-series data with automatic bucketing capabilities.\n- New Aggregation Framework Date Operators: `$dateTrunc`, `$dateAdd`, `$dateTrunc`, and `$dateDiff`.\n- Window Functions: Performs operations on a specified span of documents in a collection, known as a window, and returns the results based on the chosen window operator.\n\nThis three-part series will explain how you can build a currency analysis platform where you can apply well-known financial analysis techniques such as SMA, EMA, MACD, and RSI. While you can read through this article series and grasp the main concepts, you can also get your hands dirty and run the entire demo-toolkit by yourself. All the code is available in the Github repository.\n\n## Data Model\n\nWe want to save the last price of every currency in MongoDB, in close to real time. Depending on the currency data provider, it can be millisecond level to minute level. We insert the data as we get it from the provider with the following simple data model:\n\n```json\n{\n \"time\": ISODate(\"20210701T13:00:01.343\"),\n \"symbol\": \"BTC-USD\",\n \"price\": 33451.33\n}\n```\n\nWe only have three fields in MongoDB:\n\n- `time` is the time information when the symbol information is received.\n- `symbol` is the currency symbol such as \"BTC-USD.\" There can be hundreds of different symbols. \n- `price` field is the numeric value which indicates the value of currency at the time.\n\n## Data Source\n\nCoinbase, one of the biggest cryptocurrency exchange platforms, provides a WebSocket API to consume real-time cryptocurrency price updates. We will connect to Coinbase through a WebSocket, retrieve the data in real-time, and insert it into MongoDB. 
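As a rough illustration of that ingestion step, here is a minimal Node.js sketch. It is not the demo toolkit itself; it assumes the `ws` and `mongodb` npm packages, an `ATLAS_URI` environment variable, a placeholder database name of `market`, and the Coinbase Exchange WebSocket feed endpoint and ticker message shape (check Coinbase's documentation for the current details).

```javascript
const WebSocket = require("ws");
const { MongoClient } = require("mongodb");

async function run() {
  // Connect to MongoDB Atlas; "market" is a placeholder database name
  const client = await MongoClient.connect(process.env.ATLAS_URI);
  const ticker = client.db("market").collection("ticker");

  // Subscribe to real-time BTC-USD ticker updates
  const ws = new WebSocket("wss://ws-feed.exchange.coinbase.com");
  ws.on("open", () => {
    ws.send(JSON.stringify({
      type: "subscribe",
      product_ids: ["BTC-USD"],
      channels: ["ticker"],
    }));
  });

  // Insert one document per price update, matching the data model above
  ws.on("message", async (raw) => {
    const msg = JSON.parse(raw);
    if (msg.type !== "ticker") return; // skip subscription acks, heartbeats, etc.
    await ticker.insertOne({
      time: new Date(msg.time),
      symbol: msg.product_id, // e.g., "BTC-USD"
      price: parseFloat(msg.price),
    });
  });
}

run().catch(console.error);
```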
In order to increase the efficiency of insert operations, we can apply bulk insert.\n\nEven though our data source in this post is a cryptocurrency exchange, this article and the demo toolkit are applicable to any exchange platform that has time, symbol, and price information.\n\n## Bucketing Design Pattern \n\nThe MongoDB document model provides a lot of flexibility in how you model data. That flexibility is incredibly powerful, but that power needs to be harnessed in terms of your application\u2019s data access patterns; schema design in MongoDB has a tremendous impact on the performance of your application.\n\nThe bucketing design pattern is one MongoDB design pattern that groups raw data from multiple documents into one document rather than keeping separate documents for each and every raw piece of data. Therefore, we see performance benefits in terms of index size savings and read/write speed. Additionally, by grouping the data together with bucketing, we make it easier to organize specific groups of data, thus increasing the ability to discover historical trends or provide future forecasting. \n\nHowever, prior to MongoDB 5.0, in order to take advantage of bucketing, it required application code to be aware of bucketing and engineers to make conscious upfront schema decisions, which added overhead to developing efficient time series solutions within MongoDB. \n\n## Time Series Collections for Currency Analysis\n\nTime Series collections are a new collection type introduced in MongoDB 5.0. It automatically optimizes for the storage of time series data and makes it easier, faster, and less expensive to work with time series data in MongoDB. There is a great blog post that covers MongoDB\u2019s newly introduced Time Series collections in more detail that you may want to read first or for additional information.\n\nFor our use case, we will create a Time Series collection as follows:\n\n```javascript\ndb.createCollection(\"ticker\", {\n timeseries: {\n timeField: \"time\",\n metaField: \"symbol\",\n },\n});\n\n```\n\nWhile defining the time series collection, we set the `timeField` of the time series collection as `time`, and the `metaField` of the time series collection as `symbol`. Therefore, a particular symbol\u2019s data for a period will be stored together in the time series collection. \n\n### How the Currency Data is Stored in the Time Series Collection\n\nThe application code will make a simple insert operation as it does in a regular collection:\n\n```javascript\ndb.ticker.insertOne({\n time: ISODate(\"20210101T01:00:00\"),\n symbol: \"BTC-USD\",\n price: 34114.1145,\n});\n```\n\nWe read the data in the same way we would from any other MongoDB collection: \n\n```javascript\ndb.ticker.findOne({\"symbol\" : \"BTC-USD\"})\n\n{\n \"time\": ISODate(\"20210101T01:00:00\"),\n \"symbol\": \"BTC-USD\",\n \"price\": 34114.1145,\n \"_id\": ObjectId(\"611ea97417712c55f8d31651\")\n}\n```\n\nHowever, the underlying storage optimization specific to time series data will be done by MongoDB. For example, \"BTC-USD\" is a digital currency and every second you make an insert operation, it looks and feels like it\u2019s stored as a separate document when you query it. However, the underlying optimization mechanism keeps the same symbols\u2019 data together for faster and efficient processing. 
This allows us to automatically provide the advantages of the bucket pattern in terms of index size savings and read/write performance without sacrificing the way you work with your data.\n\n## Candlestick Charts\n\nWe have already inserted hours of data for different currencies. A particular currency\u2019s data is stored together, thanks to the Time Series collection. Now it\u2019s time to start analyzing the currency data.\n\nNow, instead of individually analyzing second level data, we will group the data by five-minute intervals, and then display the data on candlestick charts. Candlestick charts in technical analysis represent the movement in prices over a period of time. \n\nAs an example, consider the following candlestick. It represents one time interval, e.g. five minutes between `20210101-17:30:00` and `20210101-17:35:00`, and it\u2019s labeled with the start date, `20210101-17:30:00.` It has four metrics: high, low, open, and close. High is the highest price, low is the lowest price, open is the first price, and close is the last price of the currency in this duration. \n\nIn our currency dataset, we have to reach a stage where we need to have grouped the data by five-minute intervals like: `2021-01-01T01:00:00`, `2021-01-01T01:05:00`, etc. And every interval group needs to have four metrics: high, low, open, and close price. Examples of interval data are as follows:\n\n```json\n{\n \"time\": ISODate(\"20210101T01:00:00\"),\n \"symbol\": \"BTC-USD\",\n \"open\": 34111.12,\n \"close\": 34192.23,\n \"high\": 34513.28,\n \"low\": 33981.17\n},\n{\n \"time\": ISODate(\"20210101T01:05:00\"),\n \"symbol\": \"BTC-USD\",\n \"open\": 34192.23,\n \"close\": 34244.16,\n \"high\": 34717.90,\n \"low\": 34001.13\n}]\n```\n\nHowever, we only currently have second-level data for each ticker stored in our Time Series collection as we push the data for every second. We need to group the data, but how can we do this?\n\nIn addition to Time Series collections, MongoDB 5.0 has introduced a new aggregation operator, [`$dateTrunc`. This powerful new aggregation operator can do many things, but essentially, its core functionality is to truncate the date information to the closest time or a specific datepart, by considering the given parameters. In our scenario, we want to group currency data for five-minute intervals. Therefore, we can set the `$dateTrunc` operator parameters accordingly:\n\n```json\n{\n $dateTrunc: {\n date: \"$time\",\n unit: \"minute\",\n binSize: 5\n }\n}\n```\n\nIn order to set the high, low, open, and close prices for each group (each candlestick), we can use other MongoDB operators, which were already available before MongoDB 5.0:\n\n- high: `$max`\n- low: `$min`\n- open: `$first`\n- close: `$last`\n\nAfter grouping the data, we need to sort the data by time to analyze it properly. 
Therefore, recent data (represented by a candlestick) will be at the right-most of the chart.\n\nPutting this together, our entire aggregation query will look like this:\n\n```js\ndb.ticker.aggregate(\n {\n $match: {\n symbol: \"BTC-USD\",\n },\n },\n {\n $group: {\n _id: {\n symbol: \"$symbol\",\n time: {\n $dateTrunc: {\n date: \"$time\",\n unit: \"minute\",\n binSize: 5\n },\n },\n },\n high: { $max: \"$price\" },\n low: { $min: \"$price\" },\n open: { $first: \"$price\" },\n close: { $last: \"$price\" },\n },\n },\n {\n $sort: {\n \"_id.time\": 1,\n },\n },\n]);\n```\n\nAfter we grouped the data based on five-minute intervals, we can visualize it in a candlestick chart as follows:\n\n![Candlestick chart\n\nWe are currently using an open source visualization tool to display five-minute grouped data of BTC-USD currency. Every stick in the chart represents a five-minute interval and has four metrics: high, low, open, and close price. \n\n## Conclusion\n\nWith the introduction of Time Series collections and advanced aggregation operators for date calculations, MongoDB 5.0 makes currency analysing much easier. \n\nAfter you\u2019ve grouped the data for the selected intervals, you can allow MongoDB to remove old data by setting the `expireAfterSeconds` parameter in the collection options. It will automatically remove the older data than the specified time in seconds.\n\nAnother option is to archive raw data to cold storage for further analysis. Fortunately, MongoDB Atlas has automatic archiving capability to offload the old data in a MongoDB Atlas cluster to cold object storage, such as cloud object storage - Amazon S3 or Microsoft Azure Blob Storage. To do that, you can set your archiving rules on the time series collection and it will automatically offload the old data to the cold storage. Online Archive will be available for time-series collections very soon.\n\nIs the currency data already placed in Kafka topics? That\u2019s perfectly fine. You can easily transfer the data in Kafka topics to MongoDB through MongoDB Sink Connector for Kafka. Please check out this article for further details on the integration of Kafka topics and the MongoDB Time Series collection.\n\nIn the following posts, we\u2019ll discuss how well-known financial technical indicators can be calculated via windowing functions on time series collections.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Time series collections part 1: generating data for a candlestick chart from time-series data", "contentType": "Tutorial"}, "title": "Currency Analysis with Time Series Collections #1 \u2014 Generating Candlestick Charts Data", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/go/golang-change-streams", "action": "created", "body": "# Reacting to Database Changes with MongoDB Change Streams and Go\n\n \n\nIf you've been keeping up with my getting started with Go and MongoDB tutorial series, you'll remember that we've accomplished quite a bit so far. We've had a look at everything from CRUD interaction with the database to data modeling, and more. 
To play catch up with everything we've done, you can have a look at the following tutorials in the series:\n\n- How to Get Connected to Your MongoDB Cluster with Go\n- Creating MongoDB Documents with Go\n- Retrieving and Querying MongoDB Documents with Go\n- Updating MongoDB Documents with Go\n- Deleting MongoDB Documents with Go\n- Modeling MongoDB Documents with Native Go Data Structures\n- Performing Complex MongoDB Data Aggregation Queries with Go\n\nIn this tutorial we're going to explore change streams in MongoDB and how they might be useful, all with the Go programming language (Golang).\n\nBefore we take a look at the code, let's take a step back and understand what change streams are and why there's often a need for them.\n\nImagine this scenario, one of many possible:\n\nYou have an application that engages with internet of things (IoT) clients. Let's say that this is a geofencing application and the IoT clients are something that can trigger the geofence as they come in and out of range. Rather than having your application constantly run queries to see if the clients are in range, wouldn't it make more sense to watch in real-time and react when it happens?\n\nWith MongoDB change streams, you can create a pipeline to watch for changes on a collection level, database level, or deployment level, and write logic within your application to do something as data comes in based on your pipeline.\n\n## Creating a Real-Time MongoDB Change Stream with Golang\n\nWhile there are many possible use-cases for change streams, we're going to continue with the example that we've been using throughout the scope of this getting started series. We're going to continue working with podcast show and podcast episode data.\n\nLet's assume we have the following code to start:\n\n``` go\npackage main\n\nimport (\n \"context\"\n \"fmt\"\n \"os\"\n \"sync\"\n\n \"go.mongodb.org/mongo-driver/bson\"\n \"go.mongodb.org/mongo-driver/mongo\"\n \"go.mongodb.org/mongo-driver/mongo/options\"\n)\n\nfunc main() {\n client, err := mongo.Connect(context.TODO(), options.Client().ApplyURI(os.Getenv(\"ATLAS_URI\")))\n if err != nil {\n panic(err)\n }\n defer client.Disconnect(context.TODO())\n\n database := client.Database(\"quickstart\")\n episodesCollection := database.Collection(\"episodes\")\n}\n```\n\nThe above code is a very basic connection to a MongoDB cluster, something that we explored in the How to Get Connected to Your MongoDB Cluster with Go, tutorial.\n\nTo watch for changes, we can do something like the following:\n\n``` go\nepisodesStream, err := episodesCollection.Watch(context.TODO(), mongo.Pipeline{})\nif err != nil {\n panic(err)\n}\n```\n\nThe above code will watch for any and all changes to documents within the `episodes` collection. 
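One option worth knowing about at this point: for update operations, change events only carry the changed fields by default. If you also want the complete post-update document delivered with each event, the driver lets you pass change stream options to `Watch`. A small variation on the snippet above:

``` go
streamOptions := options.ChangeStream().SetFullDocument(options.UpdateLookup)

episodesStream, err := episodesCollection.Watch(context.TODO(), mongo.Pipeline{}, streamOptions)
if err != nil {
    panic(err)
}
```

Either way, `Watch` returns the same change stream type, so the rest of the code is unchanged.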
The result is a cursor that we can iterate over indefinitely for data as it comes in.\n\nWe can iterate over the curser and make sense of our data using the following code:\n\n``` go\nepisodesStream, err := episodesCollection.Watch(context.TODO(), mongo.Pipeline{})\nif err != nil {\n panic(err)\n}\n\ndefer episodesStream.Close(context.TODO())\n\nfor episodesStream.Next(context.TODO()) {\n var data bson.M\n if err := episodesStream.Decode(&data); err != nil {\n panic(err)\n }\n fmt.Printf(\"%v\\n\", data)\n}\n```\n\nIf data were to come in, it might look something like the following:\n\n``` none\nmap_id:map[_data:825E4EFCB9000000012B022C0100296E5A1004D960EAE47DBE4DC8AC61034AE145240146645F696400645E3B38511C9D4400004117E80004] clusterTime:{1582234809 1} documentKey:map[_id:ObjectID(\"5e3b38511c9d\n4400004117e8\")] fullDocument:map[_id:ObjectID(\"5e3b38511c9d4400004117e8\") description:The second episode duration:30 podcast:ObjectID(\"5e3b37e51c9d4400004117e6\") title:Episode #3] ns:map[coll:episodes \ndb:quickstart] operationType:replace]\n```\n\nIn the above example, I've done a `Replace` on a particular document in the collection. In addition to information about the data, I also receive the full document that includes the change. The results will vary depending on the `operationType` that takes place.\n\nWhile the code that we used would work fine, it is currently a blocking operation. If we wanted to watch for changes and continue to do other things, we'd want to use a [goroutine for iterating over our change stream cursor.\n\nWe could make some changes like this:\n\n``` go\npackage main\n\nimport (\n \"context\"\n \"fmt\"\n \"os\"\n \"sync\"\n\n \"go.mongodb.org/mongo-driver/bson\"\n \"go.mongodb.org/mongo-driver/mongo\"\n \"go.mongodb.org/mongo-driver/mongo/options\"\n)\n\nfunc iterateChangeStream(routineCtx context.Context, waitGroup sync.WaitGroup, stream *mongo.ChangeStream) {\n defer stream.Close(routineCtx)\n defer waitGroup.Done()\n for stream.Next(routineCtx) {\n var data bson.M\n if err := stream.Decode(&data); err != nil {\n panic(err)\n }\n fmt.Printf(\"%v\\n\", data)\n }\n}\n\nfunc main() {\n client, err := mongo.Connect(context.TODO(), options.Client().ApplyURI(os.Getenv(\"ATLAS_URI\")))\n if err != nil {\n panic(err)\n }\n defer client.Disconnect(context.TODO())\n\n database := client.Database(\"quickstart\")\n episodesCollection := database.Collection(\"episodes\")\n\n var waitGroup sync.WaitGroup\n\n episodesStream, err := episodesCollection.Watch(context.TODO(), mongo.Pipeline{})\n if err != nil {\n panic(err)\n }\n waitGroup.Add(1)\n routineCtx, cancelFn := context.WithCancel(context.Background())\n go iterateChangeStream(routineCtx, waitGroup, episodesStream)\n\n waitGroup.Wait()\n}\n```\n\nA few things are happening in the above code. We've moved the stream iteration into a separate function to be used in a goroutine. However, running the application would result in it terminating quite quickly because the `main` function will terminate not too longer after creating the goroutine. To resolve this, we are making use of a `WaitGroup`. In our example, the `main` function will wait until the `WaitGroup` is empty and the `WaitGroup` only becomes empty when the goroutine terminates.\n\nMaking use of the `WaitGroup` isn't an absolute requirement as there are other ways to keep the application running while watching for changes. 
However, given the simplicity of this example, it made sense in order to see any changes in the stream.\n\nTo keep the `iterateChangeStream` function from running indefinitely, we are creating and passing a context that can be canceled. While we don't demonstrate canceling the function, at least we know it can be done.\n\n## Complicating the Change Stream with the Aggregation Pipeline\n\nIn the previous example, the aggregation pipeline that we used was as basic as you can get. In other words, we were looking for any and all changes that were happening to our particular collection. While this might be good in a lot of scenarios, you'll probably get more out of using a better defined aggregation pipeline.\n\nTake the following for example:\n\n``` go\nmatchPipeline := bson.D{\n {\n \"$match\", bson.D{\n {\"operationType\", \"insert\"},\n {\"fullDocument.duration\", bson.D{\n {\"$gt\", 30},\n }},\n },\n },\n}\n\nepisodesStream, err := episodesCollection.Watch(context.TODO(), mongo.Pipeline{matchPipeline})\n```\n\nIn the above example, we're still watching for changes to the `episodes` collection. However, this time we're only watching for new documents that have a `duration` field greater than 30. Any other insert or other change stream operation won't be detected.\n\nThe results of the above code, when a match is found, might look like the following:\n\n``` none\nmap_id:map[_data:825E4F03CF000000012B022C0100296E5A1004D960EAE47DBE4DC8AC61034AE145240146645F696400645E4F03A01C9D44000063CCBD0004] clusterTime:{1582236623 1} documentKey:map[_id:ObjectID(\"5e4f03a01c9d\n44000063ccbd\")] fullDocument:map[_id:ObjectID(\"5e4f03a01c9d44000063ccbd\") description:a quick start into mongodb duration:35 podcast:1234 title:getting started with mongodb] ns:map[coll:episodes db:qui\nckstart] operationType:insert]\n```\n\nWith change streams, you'll have access to a subset of the MongoDB aggregation pipeline and its operators. You can learn more about what's available in the [official documentation.\n\n## Conclusion\n\nYou just saw how to use MongoDB change streams in a Golang application using the MongoDB Go driver. As previously pointed out, change streams make it very easy to react to database, collection, and deployment changes without having to constantly query the cluster. 
This allows you to efficiently plan out aggregation pipelines to respond to as they happen in real-time.\n\nIf you're looking to catch up on the other tutorials in the MongoDB with Go quick start series, you can find them below:\n\n- How to Get Connected to Your MongoDB Cluster with Go\n- Creating MongoDB Documents with Go\n- Retrieving and Querying MongoDB Documents with Go\n- Updating MongoDB Documents with Go\n- Deleting MongoDB Documents with Go\n- Modeling MongoDB Documents with Native Go Data Structures\n- Performing Complex MongoDB Data Aggregation Queries with Go\n\nTo bring the series to a close, the next tutorial will focus on transactions with the MongoDB Go driver.", "format": "md", "metadata": {"tags": ["Go", "MongoDB"], "pageDescription": "Learn how to use change streams to react to changes to MongoDB documents, databases, and clusters in real-time using the Go programming language.", "contentType": "Quickstart"}, "title": "Reacting to Database Changes with MongoDB Change Streams and Go", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/building-service-based-atlas-management", "action": "created", "body": "# Building Service-Based Atlas Cluster Management\n\n## Developer Productivity\n\nMongoDB Atlas is changing the database industry standards when it comes to database provisioning, maintenance, and scaling, as it just works. However, even superheroes like Atlas know that with Great Power Comes Great Responsibility.\n\nFor this reason, Atlas provides Enterprise-grade security features for your clusters and a set of user management roles that can be assigned to log in users or programmatic API keys.\n\nHowever, since the management roles were built for a wide use case of\nour customers there are some customers who need more fine-grained\npermissions for specific teams or user types. Although, at the moment\nthe management roles are predefined, with the help of a simple Realm\nservice and the programmatic API we can allow user access for very\nspecific management/provisioning features without exposing them to a\nwider sudo all ability.\n\nTo better understand this scenario I want to focus on the specific use\ncase of database user creation for the application teams. In this\nscenario perhaps each developer per team may need its own user and\nspecific database permissions. With the current Atlas user roles you\nwill need to grant the team a `Cluster Manager Role`, which allows them\nto change cluster properties as well as pause and resume a cluster. In\nsome cases this power is unnecessary for your users.\n\n> If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.\n\n## Proposed Solution\n\nYour developers will submit their requests to a pre-built service which\nwill authenticate them and request an input for the user description.\nFurthermore, the service will validate the input and post it to the\nAtlas Admin API without exposing any additional information or API keys.\n\nThe user will receive a confirmation that the user was created and ready\nto use.\n\n## Work Flow\n\nTo make the service more accessible for users I am using a form-based\nservice called Typeform, you can choose many other available form builders (e.g Google Forms). 
This form will gather the information and password/secret for the service authentication from the user and pass it to the Realm webhook which will perform the action.\n\n \n\nThe input is an Atlas Admin API user object that we want to create, looking something like the following object:\n\n``` javascript\n{\n \"databaseName\": ,\n \"password\": ,\n \"roles\": ...],\n \"username\": \n}\n```\n\nFor more information please refer to our Atlas Role Based Authentication\n[documentation.\n\n## Webhook Back End\n\nThis section will require you to use an existing Realm Application or\nbuild a new one.\n\nMongoDB Realm is a serverless platform and mobile database. In our case\nwe will use the following features:\n\n- Realm webhooks\n- Realm context HTTP Module\n- Realm Values/Secrets\n\nYou will also need to configure an Atlas Admin API key for the relevant Project and obtain it's Project Id. This can be done from your Atlas project url (e.g., `https://cloud.mongodb.com/v2/#clusters`).\n\nThe main part of the Realm application is to hold the Atlas Admin API keys and information as private secure secrets.\n\nThis is the webhook configuration that will call our Realm Function each\ntime the form is sent:\n\nThe function below receives the request. Fetch the needed API\ninformation and sends the Atlas Admin API command. The result of which is\nreturned to the Form.\n\n``` javascript\n// This function is the webhook's request handler.\nexports = async function(payload, response) {\n // Get payload\n const body = JSON.parse(payload.body.text());\n\n // Get secrets for the Atlas Admin API\n const username = context.values.get(\"AtlasPublicKey\");\n const password = context.values.get(\"AtlasPrivateKey\");\n const projectID = context.values.get(\"AtlasGroupId\");\n\n //Extract the Atlas user object description\n const userObject = JSON.parse(body.form_response.answers0].text);\n\n // Database users post command\n const postargs = {\n scheme: 'https',\n host: 'cloud.mongodb.com',\n path: 'api/atlas/v1.0/groups/' + projectID + '/databaseUsers',\n username: username,\n password: password,\n headers: {'Content-Type': ['application/json'], 'Accept': ['application/json']},\n digestAuth:true,\n body: JSON.stringify(userObject)};\n\n var res = await context.http.post(postargs);\n console.log(JSON.stringify(res));\n\n // Check status of the user creation and report back to the user.\n if (res.statusCode == 201)\n {\n response.setStatusCode(200)\n response.setBody(`Successfully created ${userObject.username}.`);\n } else {\n // Respond with a malformed request error\n response.setStatusCode(400)\n response.setBody(`Could not create user ${userObject.username}.`);\n }\n};\n```\n\nOnce the webhook is set and ready we can use it as a webhook url input\nin the Typeform configuration.\n\nThe Realm webhook url can now be placed in the Typform webhook section.\nNow the submitted data on the form will be forwarded via Webhook\nintegration to our webhook:\n\n![\n\nTo strengthen the security around our Realm app we can strict the\nallowed domain for the webhook request origin. 
Go to Realm application\n\"Manage\" - \"Settings\" \\> \"Allowed Request Origins\":\n\nWe can test the form now by providing an Atlas Admin API user\nobject.\n\nIf you go to the Atlas UI under the Database Access tab you will see the\ncreated user.\n\n## Summary\n\nNow our developers will be able to create users quickly without being\nexposed to any unnecessary privileges or human errors.\n\nThe webhook code can be converted to a function that can be called from\nother webhooks or triggers allowing us to build sophisticated controlled\nand secure provisioning methods. For example, we can configure a\nscheduled trigger that pulls any newly created clusters and continuously\nprovision any new required users for our applications or edit any\nexisting users to add the needed new set of permissions.\n\nMongoDB Atlas and Realm platforms can work in great synergy allowing us to bring our devops and development cycles to the\nnext level.\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to build Service-Based Atlas Cluster Management webhooks/functionality with Atlas Admin API and MongoDB Realm.", "contentType": "Article"}, "title": "Building Service-Based Atlas Cluster Management", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/aggregation-expression-builders", "action": "created", "body": "# Java Aggregation Expression Builders in MongoDB\n\nMongoDB aggregation pipelines allow developers to create rich document retrieval, manipulation, and update processes expressed as a sequence \u2014 or pipeline \u2014 of composable stages, where the output of one stage becomes the input to the next stage in the pipeline.\n\nWith aggregation operations, it is possible to:\n\n* Group values from multiple documents together.\n* Reshape documents.\n* Perform aggregation operations on the grouped data to return a single result. \n* Apply specialized operations to documents such as geographical functions, full text search, and time-window functions.\n* Analyze data changes over time.\n\nThe aggregation framework has grown since its introduction in MongoDB version 2.2 to \u2014 as of version 6.1 \u2014 cover over 35 different stages and over 130 different operators.\n\nWorking with the MongoDB shell or in tools such as MongoDB Compass, aggregation pipelines are defined as an array of BSON1] objects, with each object defining a stage in the pipeline. In an online-store system, a simple pipeline to find all orders placed between January 1st 2023 and March 31st 2023, and then provide a count of those orders grouped by product type, might look like:\n\n```JSON\ndb.orders.aggregate(\n[\n {\n $match:\n {\n orderDate: {\n $gte: ISODate(\"2023-01-01\"),\n },\n orderDate: {\n $lte: ISODate(\"2023-03-31\"),\n },\n },\n },\n {\n $group:\n {\n _id: \"$productType\",\n count: {\n $sum: 1\n },\n },\n },\n])\n```\n\n_Expressions_ give aggregation pipeline stages their ability to manipulate data. They come in four forms:\n\n**Operators**: expressed as objects with a dollar-sign prefix followed by the name of the operator. In the example above, **{$sum : 1}** is an example of an operator incrementing the count of orders for each product type by 1 each time a new order for a product type is found.\n\n**Field Paths**: expressed as strings with a dollar-sign prefix, followed by the field\u2019s path. In the case of embedded objects or arrays, dot-notation can be used to provide the path to the embedded item. 
In the example above, \"**$productType**\" is a field path. \n\n**Variables**: expressed with a double dollar-sign prefix, variables can be system or user defined. For example, \"**$$NOW**\" returns the current datetime value.\n\n**Literal Values**: In the example above, the literal value \u20181\u2019 in **{$sum : 1}** can be considered an expression and could be replaced with \u2014 for example \u2014 a field path expression. \n \nIn Java applications using the MongoDB native drivers, aggregation pipelines can be defined and executed by directly building equivalent BSON document objects. Our example pipeline above might look like the following when being built in Java using this approach:\n\n```Java\n\u2026\nMongoDatabase database = mongoClient.getDatabase(\"Mighty_Products\");\nMongoCollection collection = database.getCollection(\"orders\");\n\nSimpleDateFormat formatter = new SimpleDateFormat(\"yyyy-MM-dd\");\n\nBson matchStage = new Document(\"$match\",\n new Document(\"orderDate\",\n new Document(\"$gte\",\n formatter.parse(\"2023-01-01\")))\n .append(\"orderDate\",\n new Document(\"$lte\",\n formatter.parse(\"2023-03-31\"))));\n\nBson groupStage = new Document(\"$group\",\n new Document(\"_id\", \"$productType\")\n .append(\"count\",\n new Document(\"$sum\", 1L)));\n\ncollection.aggregate(\n Arrays.asList(\n matchStage,\n groupStage\n )\n).forEach(doc -> System.out.println(doc.toJson()));\n```\n\nThe Java code above is perfectly functional and will execute as intended, but it does highlight a couple of issues:\n\n* When creating the code, we had to understand the format of the corresponding BSON documents. We were not able to utilize IDE features such as code completion and discovery.\n* Any mistakes in the formatting of the documents being created, or the parameters and data types being passed to its various operators, would not be identified until we actually try to run the code.\n* Although our example above is relatively simple, in more complex pipelines, the level of indentation and nesting required in the corresponding document building code can lead to readability issues.\n\nAs an alternative to building BSON document objects, the MongoDB Java driver also defines a set of \u201cbuilder\u201d classes with static utility methods to simplify the execution of many operations in MongoDB, including the creation and execution of aggregation pipeline stages. Using the builder classes allows developers to discover more errors at compile rather than run time and to use code discovery and completion features in IDEs. 
Recent versions of the Java driver have additionally added extended support for [expression operators when using the aggregation builder classes, allowing pipelines to be written with typesafe methods and using fluent coding patterns.\n\nUsing this approach, the above code could be written as:\n\n```Java\nMongoDatabase database = mongoClient.getDatabase(\"Mighty_Products\");\nMongoCollection collection = database.getCollection(\"orders\");\n\nvar orderDate = current().getDate(\"orderDate\");\nBson matchStage = match(expr(orderDate.gte(of(Instant.parse(\"2023-01-01\")))\n .and(orderDate.lte(of(Instant.parse(\"2023-03-31\"))))));\n\nBson groupStage = group(current().getString(\"productType\"), sum(\"count\", 1L));\n\ncollection.aggregate(\n Arrays.asList(\n matchStage,\n groupStage\n )\n).forEach(doc -> System.out.println(doc.toJson()));\n```\nIn the rest of this article, we\u2019ll walk through an example aggregation pipeline using the aggregation builder classes and methods and highlight some of the new aggregation expression operator support.\n\n## The ADSB air-traffic control application\nOur aggregation pipeline example is based on a database collecting and analyzing Air Traffic Control data transmitted by aircraft flying in and out of Denver International Airport. The data is collected using a receiver built using a Raspberry Pi and USB Software Defined Radios (SDRs) using software from the rather excellent Stratux open-source project.\n\nThese cheap-to-build receivers have become popular with pilots of light aircraft in recent years as it allows them to project the location of nearby aircraft within the map display of tablet and smartphone-based navigation applications such as Foreflight, helping to avoid mid-air collisions.\n\nIn our application, the data received from the Stratux receiver is combined with aircraft reference data from the Opensky Network to give us documents that look like this:\n\n```JSON\n{\n \"_id\": {\n \"$numberLong\": \"11262117\"\n },\n \"model\": \"B737\",\n \"tailNum\": \"N8620H\",\n \"positionReports\": \n {\n \"callsign\": \"SWA962\",\n \"alt\": {\n \"$numberLong\": \"12625\"\n },\n \"lat\": {\n \"$numberDecimal\": \"39.782833\"\n },\n \"lng\": {\n \"$numberDecimal\": \"-104.49988\"\n },\n \"speed\": {\n \"$numberLong\": \"283\"\n },\n \"track\": {\n \"$numberLong\": \"345\"\n },\n \"vvel\": {\n \"$numberLong\": \"-1344\"\n },\n \"timestamp\": {\n \"$date\": \"2023-01-31T23:28:26.294Z\"\n }\n },\n {\n \"callsign\": \"SWA962\",\n \"alt\": {\n \"$numberLong\": \"12600\"\n },\n \"lat\": {\n \"$numberDecimal\": \"39.784744\"\n },\n \"lng\": {\n \"$numberDecimal\": \"-104.50058\"\n },\n \"speed\": {\n \"$numberLong\": \"283\"\n },\n \"track\": {\n \"$numberLong\": \"345\"\n },\n \"vvel\": {\n \"$numberLong\": \"-1344\"\n },\n \"timestamp\": {\n \"$date\": \"2023-01-31T23:28:26.419Z\"\n }\n },\n {\n \"callsign\": \"SWA962\",\n \"alt\": {\n \"$numberLong\": \"12600\"\n },\n \"lat\": {\n \"$numberDecimal\": \"39.78511\"\n },\n \"lng\": {\n \"$numberDecimal\": \"-104.50071\"\n },\n \"speed\": {\n \"$numberLong\": \"283\"\n },\n \"track\": {\n \"$numberLong\": \"345\"\n },\n \"vvel\": {\n \"$numberLong\": \"-1344\"\n },\n \"timestamp\": {\n \"$date\": \"2023-01-31T23:28:26.955Z\"\n }\n }\n ]\n}\n```\nThe \u201ctailNum\u201d field provides the unique registration number of the aircraft and doesn\u2019t change between position reports. 
The position reports are in an array[2], with each entry giving the geographical coordinates of the aircraft, its altitude, speed (horizontal and vertical), heading, and a timestamp. The position reports also give the callsign of the flight the aircraft was operating at the time it broadcast the position report. This can vary if the aircraft\u2019s position reports were picked up as it flew into Denver, and then again later as it flew out of Denver operating a different flight. In the sample above, aircraft N8620H, a Boeing 737, was operating flight SWA962 \u2014 a Southwest Airlines flight. It was flying at a speed of 283 knots, on a heading of 345 degrees, descending through 12,600 feet at 1344 ft/minute.\n\nUsing data collected over a 36-hour period, our collection contains information on over 500 different aircraft and over half a million position reports. We want to build an aggregation pipeline that will show the number of different aircraft operated by United Airlines grouped by aircraft type. \n\n## Defining the aggregation pipeline\nThe aggregation pipeline that we will run on our data will consist of three stages:\n\nThe first \u2014 a **match** stage \u2014 will find all aircraft that transmitted a United Airlines callsign between two dates.\n\nNext, we will carry out a **group** stage that takes the aircraft documents found by the match stage and creates a new set of documents \u2014 one for each model of aircraft found during the match stage, with each document containing a list of all the tail numbers of aircraft of that type found during the match stage.\n\nFinally, we carry out a **project** stage which is used to reshape the data in each document into our final desired format. \n\n### Stage 1: $match\nA [match stage carries out a query to filter the documents being passed to the next stage in the pipeline. A match stage is typically used as one of the first stages in the pipeline in order to keep the number of documents the pipeline has to work with \u2014 and therefore its memory footprint \u2014 to a reasonable size.\n\nIn our pipeline, the match stage will select all aircraft documents containing at least one position report with a United Airlines callsign (United callsigns all start with the three-letter prefix \u201cUAL\u201d), and with a timestamp between falling within a selected date range. The BSON representation of the resulting pipeline stage looks like:\n\n```JSON\n{\n $match: {\n positionReports: {\n $elemMatch: {\n callsign: /^UAL/,\n $and: \n {\n timestamp: {\n $gte: ISODate(\n \"2023-01-31T12:00:00.000-07:00\"\n )\n }\n },\n {\n timestamp: {\n $lt: ISODate(\n \"2023-02-01T00:00:00.000-07:00\"\n )\n }\n }\n ]\n }\n }\n }\n }\n\n```\nThe **$elemMatch** operator specifies that the query criteria we provide must all occur within a single entry in an array to generate a match, so an aircraft document will only match if it contains at least one position report where the callsign starts with \u201cUAL\u201d and the timestamp is between 12:00 on January 31st and 00:00 on February 1st in the Mountain time zone. 
\n\nIn Java, after using either Maven or Gradle to [add the MongoDB Java drivers as a dependency within our project, we could define this stage by building an equivalent BSON document object:\n\n```Java\n//Create the from and to dates for the match stage\nString sFromDate = \"2023-01-31T12:00:00.000-07:00\";\nTemporalAccessor ta = DateTimeFormatter.ISO_INSTANT.parse(sFromDate);\nInstant fromInstant = Instant.from(ta);\nDate fromDate = Date.from(fromInstant);\n\nString sToDate = \"2023-02-01T00:00:00.000-07:00\";\nta = DateTimeFormatter.ISO_INSTANT.parse(sToDate);\nInstant toInstant = Instant.from(ta);\nDate toDate = Date.from(toInstant);\n\nDocument matchStage = new Document(\"$match\",\n new Document(\"positionReports\",\n new Document(\"$elemMatch\",\n new Document(\"callsign\", Pattern.compile(\"^UAL\"))\n .append(\"$and\", Arrays.asList(\n new Document(\"timestamp\", new Document(\"$gte\", fromDate)),\n new Document(\"timestamp\", new Document(\"$lt\", toDate))\n ))\n )\n )\n);\n```\n\nAs we saw with the earlier online store example, whilst this code is perfectly functional, we did need to understand the structure of the corresponding BSON document, and any mistakes we made in constructing it would only be discovered at run-time.\n\nAs an alternative, after adding the necessary import statements to give our code access to the aggregation builder and expression operator static methods, we can build an equivalent pipeline stage with the following code:\n\n```Java\nimport static com.mongodb.client.model.Aggregates.*;\nimport static com.mongodb.client.model.Filters.*;\nimport static com.mongodb.client.model.Projections.*;\nimport static com.mongodb.client.model.Accumulators.*;\nimport static com.mongodb.client.model.mql.MqlValues.*;\n//...\n\n//Create the from and to dates for the match stage\nString sFromDate = \"2023-01-31T12:00:00.000-07:00\";\nTemporalAccessor ta = DateTimeFormatter.ISO_INSTANT.parse(sFromDate);\nInstant fromInstant = Instant.from(ta);\n\nString sToDate = \"2023-02-01T00:00:00.000-07:00\";\nta = DateTimeFormatter.ISO_INSTANT.parse(sToDate);\nInstant toInstant = Instant.from(ta);\n\nvar positionReports = current().getArray(\"positionReports\");\nBson matchStage = match(expr(\n positionReports.any(positionReport -> {\n var callsign = positionReport.getString(\"callsign\");\n var ts = positionReport.getDate(\"timestamp\");\n return callsign\n .substr(0,3)\n .eq(of(\"UAL\"))\n .and(ts.gte(of(fromInstant)))\n .and(ts.lt(of(toInstant)));\n })\n));\n```\nThere\u2019s a couple of things worth noting in this code:\n\nFirstly, the expressions operators framework gives us access to a method **current()** which returns the document currently being processed by the aggregation pipeline. We use it initially to get the array of position reports from the current document. \n\nNext, although we\u2019re using the **match()** aggregation builder method to create our match stage, to better demonstrate the use of the expression operators framework and its associated coding style, we\u2019ve used the **expr()**3] filter builder method to build an expression that uses the **any()** array expression operator to iterate through each entry in the positionReports array, looking for any that matches our predicate \u2014 i.e., that has a callsign field starting with the letters \u201cUAL\u201d and a timestamp falling within our specified date/time range. 
This is equivalent to what the **$elemMatch** operator in our original BSON document-based pipeline stage was doing.\n\nAlso, when using the expression operators to retrieve fields, we\u2019ve used type-specific methods to indicate the type of the expected return value. **callsign** was retrieved using **getString()**, while the timestamp variable **ts** was retrieved using **getDate()**. This allows IDEs such as IntelliJ and Visual Studio Code to perform type checking, and for subsequent code completion to be tailored to only show methods and documentation relevant to the returned type. This can lead to faster and less error-prone coding.\n\n![faster and less error-prone coding\n\nFinally, note that in building the predicate for the **any()** expression operator, we\u2019ve used a fluent coding style and idiosyncratic coding elements, such as lambdas, that many Java developers will be familiar with and more comfortable using rather than the MongoDB-specific approach needed to directly build BSON documents.\n\n### Stage 2: $group\nHaving filtered our document list to only include aircraft operated by United Airlines in our match stage, in the second stage of the pipeline, we carry out a group operation to begin the task of counting the number of aircraft of each model. The BSON document for this stage looks like:\n\n```JSON\n{\n $group:\n {\n _id: \"$model\",\n aircraftSet: {\n $addToSet: \"$tailNum\",\n },\n },\n}\n```\n\nIn this stage, we are specifying that we want to group the document data by the \u201cmodel\u201d field and that in each resulting document, we want an array called \u201caircraftSet\u201d containing each unique tail number of observed aircraft of that model type. The documents output from this stage look like:\n\n```JSON\n{\n \"_id\": \"B757\",\n \"aircraftSet\": \n \"N74856\",\n \"N77865\",\n \"N17104\",\n \"N19117\",\n \"N14120\",\n \"N57855\",\n \"N77871\"\n ]\n}\n```\nThe corresponding Java code for the stage looks like:\n\n```java\nBson bGroupStage = group(current().getString(\"model\"),\n addToSet(\"aircraftSet\", current().getString(\"tailNum\")));\n```\n\nAs before, we\u2019ve used the expressions framework **current()** method to access the document currently being processed by the pipeline. The aggregation builders **addToSet()** accumulator method is used to ensure only unique tail numbers are added to the \u201caircraftSet\u201d array.\n\n### Stage 3: $project\nIn the third and final stage of our pipeline, we use a [project stage to:\n\n* Rename the \u201c_id\u201d field introduced by the group stage back to \u201cmodel.\u201d\n* Swap the array of tail numbers for the number of entries in the array. 
\n* Add a new field, \u201cairline,\u201d populating it with the literal value \u201cUnited.\u201d \n* Add a field named \u201cmanufacturer\u201d and use a $cond conditional operator to populate it with:\n * \u201cAIRBUS\u201d if the aircraft model starts with \u201cA.\u201d\n * \u201cBOEING\u201d if it starts with a \u201cB.\u201d\n * \u201cCANADAIR\u201d if it starts with a \u201cC.\u201d\n * \u201cEMBRAER\u201d if it starts with an \u201cE.\u201d\n * \u201cMCDONNELL DOUGLAS\u201d if it starts with an \u201cM.\u201d\n * \u201cUNKNOWN\u201d in all other cases.\n\nThe BSON document for this stage looks like:\n\n```java\n{\n $project: {\n airline: \"United\",\n model: \"$_id\",\n count: {\n $size: \"$aircraftSet\",\n },\n manufacturer: {\n $let: {\n vars: {\n manufacturerPrefix: {\n $substrBytes: \"$_id\", 0, 1],\n },\n },\n in: {\n $switch: {\n branches: [\n {\n case: {\n $eq: [\n \"$$manufacturerPrefix\",\n \"A\",\n ],\n },\n then: \"AIRBUS\",\n },\n {\n case: {\n $eq: [\n \"$$manufacturerPrefix\",\n \"B\",\n ],\n },\n then: \"BOEING\",\n },\n {\n case: {\n $eq: [\n \"$$manufacturerPrefix\",\n \"C\",\n ],\n },\n then: \"CANADAIR\",\n },\n {\n case: {\n $eq: [\n \"$$manufacturerPrefix\",\n \"E\",\n ],\n },\n then: \"EMBRAER\",\n },\n {\n case: {\n $eq: [\n \"$$manufacturerPrefix\",\n \"M\",\n ],\n },\n then: \"MCDONNELL DOUGLAS\",\n },\n ],\n default: \"UNKNOWN\",\n },\n },\n },\n },\n _id: \"$$REMOVE\",\n },\n }\n\n```\n\nThe resulting output documents look like:\n\n```JSON\n{\n \"airline\": \"United\",\n \"model\": \"B777\",\n \"count\": 5,\n \"Manufacturer\": \"BOEING\"\n}\n```\n\nThe Java code for this stage looks like:\n\n```java\nBson bProjectStage = project(fields(\n computed(\"airline\", \"United\"),\n computed(\"model\", current().getString(\"_id\")),\n computed(\"count\", current().getArray(\"aircraftSet\").size()),\n computed(\"manufacturer\", current()\n .getString(\"_id\")\n .substr(0, 1)\n .switchStringOn(s -> s\n .eq(of(\"A\"), (m -> of(\"AIRBUS\")))\n .eq(of(\"B\"), (m -> of(\"BOEING\")))\n .eq(of(\"C\"), (m -> of(\"CANADAIR\")))\n .eq(of(\"E\"), (m -> of(\"EMBRAER\")))\n .eq(of(\"M\"), (m -> of(\"MCDONNELL DOUGLAS\")))\n .defaults(m -> of(\"UNKNOWN\"))\n )),\n excludeId()\n));\n```\nNote again the use of type-specific field accessor methods to get the aircraft model type (string) and aircraftSet (array of type MqlDocument). In determining the aircraft manufacturer, we\u2019ve again used a fluent coding style to conditionally set the value to Boeing or Airbus. 
\n\nWith our three pipeline stages now defined, we can now run the pipeline against our collection:\n\n```java\naircraftCollection.aggregate(\n Arrays.asList(\n matchStage,\n groupStage,\n projectStage\n )\n).forEach(doc -> System.out.println(doc.toJson()));\n```\n\nIf all goes to plan, this should produce output to the console that look like:\n\n```JSON\n{\"airline\": \"United\", \"model\": \"B757\", \"count\": 7, \"manufacturer\": \"BOEING\"}\n{\"airline\": \"United\", \"model\": \"B777\", \"count\": 5, \"manufacturer\": \"BOEING\"}\n{\"airline\": \"United\", \"model\": \"A320\", \"count\": 21, \"manufacturer\": \"AIRBUS\"}\n{\"airline\": \"United\", \"model\": \"B737\", \"count\": 45, \"manufacturer\": \"BOEING\"}\n```\n\nIn this article, we shown examples of how expression operators and aggregation builder methods in the latest versions of the MongoDB Java drivers can be used to construct aggregation pipelines using a fluent, idiosyncratic style of Java programming that can utilize autocomplete functionality in IDEs and type-safety compiler features. This can result in code that is more robust and more familiar in style to many Java developers. The use of the builder classes also places less dependence on developers having an extensive understanding of the BSON document format for aggregation pipeline stages. \n\nMore information on the use of aggregation builder and expression operator classes can be found in the official MongoDB Java Driver [documentation.\n\nThe example Java code, aggregation pipeline BSON, and a JSON export of the data used in this article can be found in Github.\n\n*More information*\n\n1] MongoDB uses Binary JSON (BSON) to store data and define operations. BSON is a superset of JSON, stored in binary format and allowing data types over and above those defined in the JSON standard. [Get more information on BSON. \n\n2] It should be noted that storing the position reports in an array for each aircraft like this works well for purposes of our example, but it\u2019s probably not the best design for a production grade system as \u2014 over time \u2014 the arrays for some aircraft could become excessively large. A really good discussion of massive arrays and other anti patterns, and how to handle them, is available [over at Developer Center.\n\n3] The use of expressions in Aggregation Pipeline Match stages can sometimes cause some confusion. For a discussion of this, and aggregations in general, Paul Done\u2019s excellent eBook, \u201c[Practical MongoDB Aggregations,\u201d is highly recommended.", "format": "md", "metadata": {"tags": ["Java"], "pageDescription": "Learn how expression builders can make coding aggregation pipelines in Java applications faster and more reliable.", "contentType": "Tutorial"}, "title": "Java Aggregation Expression Builders in MongoDB", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/free-atlas-cluster", "action": "created", "body": "# Getting Your Free MongoDB Atlas Cluster\n\n**You probably already know that MongoDB Atlas is MongoDB as a service in the public cloud of your choice but did you know we also offer a free forever cluster? 
In this Quick Start, we'll show you why you should get one and how to create one.**\n\nMongoDB Atlas's Free Tier clusters - which are also known as M0 Sandboxes - are limited to only 512MB of storage but it's more than enough for a pet project or to learn about MongoDB with our free MongoDB University courses.\n\nThe only restriction on them is that they are available in a few regions for each of our three cloud providers: currently there are six on AWS, five on Azure, and four on Google Cloud Platform.\n\nIn this tutorial video, I will show you how to create an account. Then I'll show you how to create your first 3 node cluster and populate it with sample data.\n\n:youtube]{vid=rPqRyYJmx2g}\n\nNow that you understand the basics of [MongoDB Atlas, you may want to explore some of our advanced features that are not available in the Free Tier clusters:\n\n- Peering your MongoDB Clusters with your AWS, GCP or Azure machines is only available for dedicated instances (M10 at least),\n- LDAP Authentication and Authorization,\n- AWS PrivateLink.\n\nOur new Lucene-based Full-Text Search engine is now available for free tier clusters directly.\n", "format": "md", "metadata": {"tags": ["Atlas", "Azure", "Google Cloud"], "pageDescription": "Want to know the quickest way to start with MongoDB? It begins with getting yourself a free MongoDB Atlas Cluster so you can leverage your learning", "contentType": "Quickstart"}, "title": "Getting Your Free MongoDB Atlas Cluster", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/building-autocomplete-form-element-atlas-search-javascript", "action": "created", "body": "\n \n Recipe:\n \n \n \n \n ", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "Learn how to create an autocomplete form element that leverages the natural language processing of MongoDB Atlas Search.", "contentType": "Tutorial"}, "title": "Building an Autocomplete Form Element with Atlas Search and JavaScript", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/java-aggregation-pipeline", "action": "created", "body": "# Java - Aggregation Pipeline\n\n## Updates\n\nThe MongoDB Java quickstart repository is available on GitHub.\n\n### February 28th, 2024\n\n- Update to Java 21\n- Update Java Driver to 5.0.0\n- Update `logback-classic` to 1.2.13\n\n### November 14th, 2023\n\n- Update to Java 17\n- Update Java Driver to 4.11.1\n- Update mongodb-crypt to 1.8.0\n\n### March 25th, 2021\n\n- Update Java Driver to 4.2.2.\n- Added Client Side Field Level Encryption example.\n\n### October 21st, 2020\n\n- Update Java Driver to 4.1.1.\n- The Java Driver logging is now enabled via the popular SLF4J API so, I added logback in the `pom.xml` and a configuration file `logback.xml`.\n\n## What's the Aggregation Pipeline?\n\n \n\nThe aggregation pipeline is a framework for data aggregation modeled on the concept of data processing pipelines, just like the \"pipe\" in the Linux Shell. Documents enter a multi-stage pipeline that transforms the documents into aggregated results.\n\nIt's the most powerful way to work with your data in MongoDB. It will allow us to make advanced queries like grouping documents, manipulate arrays, reshape document models, etc.\n\nLet's see how we can harvest this power using Java.\n\n## Getting Set Up\n\nI will use the same repository as usual in this series. 
If you don't have a copy of it yet, you can clone it or just update it if you already have it:\n\n``` sh\ngit clone https://github.com/mongodb-developer/java-quick-start\n```\n\n>If you didn't set up your free cluster on MongoDB Atlas, now is great time to do so. You have all the instructions in this blog post.\n\n## First Example with Zips\n\nIn the MongoDB Sample Dataset in MongoDB Atlas, let's explore a bit the `zips` collection in the `sample_training` database.\n\n``` javascript\nMongoDB Enterprise Cluster0-shard-0:PRIMARY> db.zips.find({city:\"NEW YORK\"}).limit(2).pretty()\n{\n \"_id\" : ObjectId(\"5c8eccc1caa187d17ca72f8a\"),\n \"city\" : \"NEW YORK\",\n \"zip\" : \"10001\",\n \"loc\" : {\n \"y\" : 40.74838,\n \"x\" : 73.996705\n },\n \"pop\" : 18913,\n \"state\" : \"NY\"\n}\n{\n \"_id\" : ObjectId(\"5c8eccc1caa187d17ca72f8b\"),\n \"city\" : \"NEW YORK\",\n \"zip\" : \"10003\",\n \"loc\" : {\n \"y\" : 40.731253,\n \"x\" : 73.989223\n },\n \"pop\" : 51224,\n \"state\" : \"NY\"\n}\n```\n\nAs you can see, we have one document for each zip code in the USA and for each, we have the associated population.\n\nTo calculate the population of New York, I would have to sum the population of each zip code to get the population of the entire city.\n\nLet's try to find the 3 biggest cities in the state of Texas. Let's design this on paper first.\n\n- I don't need to work with the entire collection. I need to filter only the cities in Texas.\n- Once this is done, I can regroup all the zip code from a same city together to get the total population.\n- Then I can order my cities by descending order or population.\n- Finally, I can keep the first 3 cities of my list.\n\nThe easiest way to build this pipeline in MongoDB is to use the aggregation pipeline builder that is available in MongoDB Compass or in MongoDB Atlas in the `Collections` tab.\n\nOnce this is done, you can export your pipeline to Java using the export button.\n\nAfter a little code refactoring, here is what I have:\n\n``` java\n/**\n * find the 3 most densely populated cities in Texas.\n * @param zips sample_training.zips collection from the MongoDB Sample Dataset in MongoDB Atlas.\n */\nprivate static void threeMostPopulatedCitiesInTexas(MongoCollection zips) {\n Bson match = match(eq(\"state\", \"TX\"));\n Bson group = group(\"$city\", sum(\"totalPop\", \"$pop\"));\n Bson project = project(fields(excludeId(), include(\"totalPop\"), computed(\"city\", \"$_id\")));\n Bson sort = sort(descending(\"totalPop\"));\n Bson limit = limit(3);\n\n List results = zips.aggregate(List.of(match, group, project, sort, limit)).into(new ArrayList<>());\n System.out.println(\"==> 3 most densely populated cities in Texas\");\n results.forEach(printDocuments());\n}\n```\n\nThe MongoDB driver provides a lot of helpers to make the code easy to write and to read.\n\nAs you can see, I solved this problem with:\n\n- A $match stage to filter my documents and keep only the zip code in Texas,\n- A $group stage to regroup my zip codes in cities,\n- A $project stage to rename the field `_id` in `city` for a clean output (not mandatory but I'm classy),\n- A $sort stage to sort by population descending,\n- A $limit stage to keep only the 3 most populated cities.\n\nHere is the output we get:\n\n``` json\n==> 3 most densely populated cities in Texas\n{\n \"totalPop\": 2095918,\n \"city\": \"HOUSTON\"\n}\n{\n \"totalPop\": 940191,\n \"city\": \"DALLAS\"\n}\n{\n \"totalPop\": 811792,\n \"city\": \"SAN ANTONIO\"\n}\n```\n\nIn MongoDB 4.2, there are 30 different 
aggregation pipeline stages that you can use to manipulate your documents. If you want to know more, I encourage you to follow this course on MongoDB University: M121: The MongoDB Aggregation Framework.\n\n## Second Example with Posts\n\nThis time, I'm using the collection `posts` in the same database.\n\n``` json\nMongoDB Enterprise Cluster0-shard-0:PRIMARY> db.posts.findOne()\n{\n \"_id\" : ObjectId(\"50ab0f8bbcf1bfe2536dc3f9\"),\n \"body\" : \"Amendment I\\n\n\nCongress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.\\n\n\n\\nAmendment II\\n\n\n\\nA well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.\\n\n\n\\nAmendment III\\n\n\n\\nNo Soldier shall, in time of peace be quartered in any house, without the consent of the Owner, nor in time of war, but in a manner to be prescribed by law.\\n\n\n\\nAmendment IV\\n\n\n\\nThe right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.\\n\n\n\\nAmendment V\\n\n\n\\nNo person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a Grand Jury, except in cases arising in the land or naval forces, or in the Militia, when in actual service in time of War or public danger; nor shall any person be subject for the same offence to be twice put in jeopardy of life or limb; nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.\\n\n\n\\n\\nAmendment VI\\n\n\n\\nIn all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial, by an impartial jury of the State and district wherein the crime shall have been committed, which district shall have been previously ascertained by law, and to be informed of the nature and cause of the accusation; to be confronted with the witnesses against him; to have compulsory process for obtaining witnesses in his favor, and to have the Assistance of Counsel for his defence.\\n\n\n\\nAmendment VII\\n\n\n\\nIn Suits at common law, where the value in controversy shall exceed twenty dollars, the right of trial by jury shall be preserved, and no fact tried by a jury, shall be otherwise re-examined in any Court of the United States, than according to the rules of the common law.\\n\n\n\\nAmendment VIII\\n\n\n\\nExcessive bail shall not be required, nor excessive fines imposed, nor cruel and unusual punishments inflicted.\\n\n\n\\nAmendment IX\\n\n\n\\nThe enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.\\n\n\n\\nAmendment X\\n\n\n\\nThe powers not delegated to the United States by the Constitution, nor prohibited by it to the States, are reserved to the States respectively, or to the people.\\\"\\n\n\n\\n\",\n \"permalink\" : \"aRjNnLZkJkTyspAIoRGe\",\n \"author\" : \"machine\",\n \"title\" : \"Bill of Rights\",\n \"tags\" : \n 
\"watchmaker\",\n \"santa\",\n \"xylophone\",\n \"math\",\n \"handsaw\",\n \"dream\",\n \"undershirt\",\n \"dolphin\",\n \"tanker\",\n \"action\"\n ],\n \"comments\" : [\n {\n \"body\" : \"Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum\",\n \"email\" : \"HvizfYVx@pKvLaagH.com\",\n \"author\" : \"Santiago Dollins\"\n },\n {\n \"body\" : \"Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum\",\n \"email\" : \"glbeRCMi@KwnNwhzl.com\",\n \"author\" : \"Omar Bowdoin\"\n }\n ],\n \"date\" : ISODate(\"2012-11-20T05:05:15.231Z\")\n}\n```\n\nThis collection of 500 posts has been generated artificially, but it contains arrays and I want to show you how we can manipulate arrays in a pipeline.\n\nLet's try to find the three most popular tags and for each tag, I also want the list of post titles they are tagging.\n\nHere is my solution in Java.\n\n``` java\n/**\n * find the 3 most popular tags and their post titles\n * @param posts sample_training.posts collection from the MongoDB Sample Dataset in MongoDB Atlas.\n */\nprivate static void threeMostPopularTags(MongoCollection posts) {\n Bson unwind = unwind(\"$tags\");\n Bson group = group(\"$tags\", sum(\"count\", 1L), push(\"titles\", \"$title\"));\n Bson sort = sort(descending(\"count\"));\n Bson limit = limit(3);\n Bson project = project(fields(excludeId(), computed(\"tag\", \"$_id\"), include(\"count\", \"titles\")));\n\n List results = posts.aggregate(List.of(unwind, group, sort, limit, project)).into(new ArrayList<>());\n System.out.println(\"==> 3 most popular tags and their posts titles\");\n results.forEach(printDocuments());\n}\n```\n\nHere I'm using the very useful [$unwind stage to break down my array of tags.\n\nIt allows me in the following $group stage to group my tags, count the posts and collect the titles in a new array `titles`.\n\nHere is the final output I get.\n\n``` json\n==> 3 most popular tags and their posts titles\n{\n \"count\": 8,\n \"titles\": \n \"Gettysburg Address\",\n \"US Constitution\",\n \"Bill of Rights\",\n \"Gettysburg Address\",\n \"Gettysburg Address\",\n \"Declaration of Independence\",\n \"Bill of Rights\",\n \"Declaration of Independence\"\n ],\n \"tag\": \"toad\"\n}\n{\n \"count\": 8,\n \"titles\": [\n \"Bill of Rights\",\n \"Gettysburg Address\",\n \"Bill of Rights\",\n \"Bill of Rights\",\n \"Declaration of Independence\",\n \"Declaration of Independence\",\n \"Bill of Rights\",\n \"US Constitution\"\n ],\n \"tag\": \"forest\"\n}\n{\n \"count\": 8,\n \"titles\": [\n \"Bill of Rights\",\n \"Declaration of Independence\",\n \"Declaration of Independence\",\n \"Gettysburg Address\",\n \"US Constitution\",\n \"Bill of Rights\",\n \"US Constitution\",\n \"US Constitution\"\n ],\n \"tag\": \"hair\"\n}\n```\n\nAs you can see, some 
titles are repeated. As I said earlier, the collection was generated so the post titles are not uniq. I could solve this \"problem\" by using the [$addToSet operator instead of the $push one if this was really an issue.\n\n## Final Code\n\n``` java\npackage com.mongodb.quickstart;\n\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport org.bson.Document;\nimport org.bson.conversions.Bson;\nimport org.bson.json.JsonWriterSettings;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport java.util.function.Consumer;\n\nimport static com.mongodb.client.model.Accumulators.push;\nimport static com.mongodb.client.model.Accumulators.sum;\nimport static com.mongodb.client.model.Aggregates.*;\nimport static com.mongodb.client.model.Filters.eq;\nimport static com.mongodb.client.model.Projections.*;\nimport static com.mongodb.client.model.Sorts.descending;\n\npublic class AggregationFramework {\n\n public static void main(String] args) {\n String connectionString = System.getProperty(\"mongodb.uri\");\n try (MongoClient mongoClient = MongoClients.create(connectionString)) {\n MongoDatabase db = mongoClient.getDatabase(\"sample_training\");\n MongoCollection zips = db.getCollection(\"zips\");\n MongoCollection posts = db.getCollection(\"posts\");\n threeMostPopulatedCitiesInTexas(zips);\n threeMostPopularTags(posts);\n }\n }\n\n /**\n * find the 3 most densely populated cities in Texas.\n *\n * @param zips sample_training.zips collection from the MongoDB Sample Dataset in MongoDB Atlas.\n */\n private static void threeMostPopulatedCitiesInTexas(MongoCollection zips) {\n Bson match = match(eq(\"state\", \"TX\"));\n Bson group = group(\"$city\", sum(\"totalPop\", \"$pop\"));\n Bson project = project(fields(excludeId(), include(\"totalPop\"), computed(\"city\", \"$_id\")));\n Bson sort = sort(descending(\"totalPop\"));\n Bson limit = limit(3);\n\n List results = zips.aggregate(List.of(match, group, project, sort, limit)).into(new ArrayList<>());\n System.out.println(\"==> 3 most densely populated cities in Texas\");\n results.forEach(printDocuments());\n }\n\n /**\n * find the 3 most popular tags and their post titles\n *\n * @param posts sample_training.posts collection from the MongoDB Sample Dataset in MongoDB Atlas.\n */\n private static void threeMostPopularTags(MongoCollection posts) {\n Bson unwind = unwind(\"$tags\");\n Bson group = group(\"$tags\", sum(\"count\", 1L), push(\"titles\", \"$title\"));\n Bson sort = sort(descending(\"count\"));\n Bson limit = limit(3);\n Bson project = project(fields(excludeId(), computed(\"tag\", \"$_id\"), include(\"count\", \"titles\")));\n\n List results = posts.aggregate(List.of(unwind, group, sort, limit, project)).into(new ArrayList<>());\n System.out.println(\"==> 3 most popular tags and their posts titles\");\n results.forEach(printDocuments());\n }\n\n private static Consumer printDocuments() {\n return doc -> System.out.println(doc.toJson(JsonWriterSettings.builder().indent(true).build()));\n }\n}\n```\n\n## Wrapping Up\n\nThe aggregation pipeline is very powerful. 
We have just scratched the surface with these two examples but trust me if I tell you that it's your best ally if you can master it.\n\n>I encourage you to follow the [M121 course on MongoDB University to become an aggregation pipeline jedi.\n>\n>If you want to learn more and deepen your knowledge faster, I recommend you check out the M220J: MongoDB for Java Developers training available for free on MongoDB University.\n\nIn the next blog post, I will explain to you the Change Streams in Java.\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB"], "pageDescription": "Learn how to use the Aggregation Pipeline using the MongoDB Java Driver.", "contentType": "Quickstart"}, "title": "Java - Aggregation Pipeline", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/serverless-with-cloud-run-mongodb-atlas", "action": "created", "body": "# Serverless MEAN Stack Applications with Cloud Run and MongoDB Atlas\n\n## Plea and the Pledge: Truly Serverless\n\nAs modern application developers, we\u2019re juggling many priorities: performance, flexibility, usability, security, reliability, and maintainability. On top of that, we\u2019re handling dependencies, configuration, and deployment of multiple components in multiple environments and sometimes multiple repositories as well. And then we have to keep things secure and simple. Ah, the nightmare!\n\nThis is the reason we love serverless computing. Serverless allows developers to focus on the thing they like to do the most\u2014development\u2014and leave the rest of the attributes, including infrastructure and maintenance, to the platform offerings. \n\nIn this read, we\u2019re going to see how Cloud Run and MongoDB come together to enable a completely serverless MEAN stack application development experience. We'll learn how to build a serverless MEAN application with Cloud Run and MongoDB Atlas, the multi-cloud developer data platform by MongoDB.\n\n### Containerized deployments with Cloud Run\n\nAll serverless platform offer exciting capabilities: \n\n* Event-driven function (not a hard requirement though)\n* No-infrastructure maintenance\n* Usage-based pricing\n* Auto-scaling capabilities\n\nCloud Run stands out of the league by enabling us to: \n\n* Package code in multiple stateless containers that are request-aware and invoke it via HTTP requests\n* Only be charged for the exact resources you use\n* Support any programming language or any operating system library of your choice, or any binary\n\nCheck this link for more features in full context.\n\nHowever, many serverless models overlook the fact that traditional databases are not managed. You need to manually provision infrastructure (vertical scaling) or add more servers (horizontal scaling) to scale the database. This introduces a bottleneck in your serverless architecture and can lead to performance issues.\n\n### Deploy a serverless database with MongoDB Atlas\n\nMongoDB launched serverless instances, a new fully managed, serverless database deployment in Atlas to solve this problem. With serverless instances you never have to think about infrastructure \u2014 simply deploy your database and it will scale up and down seamlessly based on demand \u2014 requiring no hands-on management. And the best part, you will only be charged for the operations you run. 
To make our architecture truly serverless, we'll combine Cloud Run and MongoDB Atlas capabilities.\n\n## What's the MEAN stack?\n\nThe MEAN stack is a technology stack for building full-stack web applications entirely with JavaScript and JSON. The MEAN stack is composed of four main components\u2014MongoDB, Express, Angular, and Node.js.\n\n* **MongoDB** is responsible for data storage. \n* **Express.js** is a Node.js web application framework for building APIs.\n* **Angular** is a client-side JavaScript platform.\n* **Node.js** is a server-side JavaScript runtime environment. The server uses the MongoDB Node.js driver to connect to the database and retrieve and store data. \n\n## Steps for deploying truly serverless MEAN stack apps with Cloud Run and MongoDB\n\nIn the following sections, we\u2019ll provision a new MongoDB serverless instance, connect a MEAN stack web application to it, and finally, deploy the application to Cloud Run.\n\n### 1. Create the database\n\nBefore you begin, get started with MongoDB Atlas on Google Cloud.\n\nOnce you sign up, click the \u201cBuild a Database\u201d button to create a new serverless instance. Select the following configuration:\n\nOnce your serverless instance is provisioned, you should see it up and running.\n\nClick on the \u201cConnect\u201d button to add a connection IP address and a database user.\n\nFor this blog post, we\u2019ll use the \u201cAllow Access from Anywhere\u201d setting. MongoDB Atlas comes with a set of security and access features. You can learn more about them in the security features documentation article.\n\nUse credentials of your choice for the database username and password. Once these steps are complete, you should see the following:\n\nProceed by clicking on the \u201cChoose a connection method\u201d button and then selecting \u201cConnect your application\u201d.\n\nCopy the connection string you see and replace the password with your own. We\u2019ll use that string to connect to our database in the following sections.\n\n### 2. Set up a Cloud Run project\n\nFirst, sign in to Cloud Console, create a new project, or reuse an existing one.\n\nRemember the Project Id for the project you created. Below is an image from https://codelabs.developers.google.com/codelabs/cloud-run-hello#1 that shows how to create a new project in Google Cloud.\n\nThen, enable Cloud Run API from Cloud Shell:\n\n* Activate Cloud Shell from the Cloud Console. Simply click Activate Cloud Shell.\n\n* Use the below command:\n\n*gcloud services enable run.googleapis.com*\n\nWe will be using Cloud Shell and Cloud Shell Editor for code references. To access Cloud Shell Editor, click Open Editor from the Cloud Shell Terminal:\n\nFinally, we need to clone the MEAN stack project we\u2019ll be deploying. \n\nWe\u2019ll deploy an employee management web application. The REST API is built with Express and Node.js; the web interface, with Angular; and the data will be stored in the MongoDB Atlas instance we created earlier.\n\nClone the project repository by executing the following command in the Cloud Shell Terminal:\n\n`git clone` https://github.com/mongodb-developer/mean-stack-example.git\n\nIn the following sections, we will deploy a couple of services\u2014one for the Express REST API and one for the Angular web application. \n\n### 3. Deploy the Express and Node.js REST API\n\nFirst, we\u2019ll deploy a Cloud Run service for the Express REST API. \n\nThe most important file for our deployment is the Docker configuration file. 
Let\u2019s take a look at it:\n\n**mean-stack-example/server/Dockerfile**\n\n```\nFROM node:17-slim\n \nWORKDIR /usr/app\nCOPY ./ /usr/app\n \n# Install dependencies and build the project.\nRUN npm install\nRUN npm run build\n \n# Run the web service on container startup.\nCMD \"node\", \"dist/server.js\"]\n```\n\nThe configuration sets up Node.js, and copies and builds the project. When the container starts, the command \u201cnode dist/server.js\u201d starts the service.\n\nTo start a new Cloud Run deployment, click on the Cloud Run icon on the left sidebar:\n\n![Select the 'Cloud Run' icon from the left sidebar\n\nThen, click on the Deploy to Cloud Run icon:\n\nFill in the service configuration as follows:\n\n* Service name: node-express-api\n* Deployment platform: Cloud Run (fully managed)\n* Region: Select a region close to your database region to reduce latency\n* Authentication: Allow unauthenticated invocations\n\nUnder Revision Settings, click on Show Advanced Settings to expand them:\n\n* Container port: 5200\n* Environment variables. Add the following key-value pair and make sure you add the connection string for your own MongoDB Atlas deployment:\n\n`ATLAS_URI:mongodb+srv:/:@sandbox.pv0l7.mongodb.net/meanStackExample?retryWrites=true&w=majority`\n\nFor the Build environment, select Cloud Build.\n\nFinally, in the Build Settings section, select:\n\n* Builder: Docker\n* Docker: mean-stack-example/server/Dockerfile\n\nClick the Deploy button and then Show Detailed Logs to follow the deployment of your first Cloud Run service!\n\nAfter the build has completed, you should see the URL of the deployed service:\n\nOpen the URL and append \u2018/employees\u2019 to the end. You should see an empty array because currently, there are no documents in the database. Let\u2019s deploy the user interface so we can add some!\n\n### 4. Deploy the Angular web application\n\nOur Angular application is in the client directory. To deploy it, we\u2019ll use the Nginx server and Docker.\n\n> Just a thought, there is also an option to use Firebase Hosting for your Angular application deployment as you can serve your content to a CDN (content delivery network) directly.\n\nLet\u2019s take a look at the configuration files:\n\n**mean-stack-example/client/nginx.conf**\n\n```\nevents{}\n \nhttp {\n \n include /etc/nginx/mime.types;\n \n server {\n listen 8080;\n server_name 0.0.0.0;\n root /usr/share/nginx/html;\n index index.html;\n \n location / {\n try_files $uri $uri/ /index.html;\n }\n }\n}\n```\n\nIn the Nginx configuration, we specify the default port\u20148080, and the starting file\u2014`index.html`.\n\n**mean-stack-example/client/Dockerfile**\n\n```\nFROM node:17-slim AS build\n \nWORKDIR /usr/src/app\nCOPY package.json package-lock.json ./\n \n# Install dependencies and copy them to the container\nRUN npm install\nCOPY . .\n \n# Build the Angular application for production\nRUN npm run build --prod\n \n# Configure the nginx web server\nFROM nginx:1.17.1-alpine\nCOPY nginx.conf /etc/nginx/nginx.conf\nCOPY --from=build /usr/src/app/dist/client /usr/share/nginx/html\n \n# Run the web service on container startup.\nCMD \"nginx\", \"-g\", \"daemon off;\"]\n```\n\nIn the Docker configuration, we install Node.js dependencies and build the project. Then, we copy the built files to the container, configure, and start the Nginx service.\n\nFinally, we need to configure the URL to the REST API so that our client application can send requests to it. 
Since we\u2019re only using the URL in a single file in the project, we\u2019ll hardcode the URL. Alternatively, you can attach the environment variable to the window object and access it from there.\n\n**mean-stack-example/client/src/app/employee.service.ts**\n\n```\n@Injectable({\n providedIn: 'root'\n})\nexport class EmployeeService {\n // Replace with the URL of your REST API\n private url = 'https://node-express-api-vsktparjta-uc.a.run.app'; \n\u2026\n```\n\nWe\u2019re ready to deploy to Cloud Run! Start a new deployment with the following configuration settings:\n\n* Service Settings: Create a service\n* Service name: angular-web-app\n* Deployment platform: Cloud Run (fully managed)\n* Authentication: Allow unauthenticated invocations\n\nFor the Build environment, select Cloud Build.\n\nFinally, in the Build Settings section, select:\n\n* Builder: Docker\n* Docker: mean-stack-example/client/Dockerfile\n\nClick that Deploy button again and watch the logs as your app is shipped to the cloud! When the deployment is complete, you should see the URL for the client app:\n\n![Screenshot displaying the message 'Deployment completed successfully!' and the deployment URL for the Angular service.\n\nOpen the URL, and play with your application!\n\n### Command shell alternative for build and deploy\n\nThe steps covered above can alternatively be implemented from Command Shell as below:\n\nStep 1: Create the new project directory named \u201cmean-stack-example\u201d either from the Code Editor or Cloud Shell Command (Terminal):\n\n*mkdir mean-stack-demo\ncd mean-stack-demo*\n\nStep 2: Clone project repo and make necessary changes in the configuration and variables, same as mentioned in the previous section.\n\nStep 3: Build your container image using Cloud build by running the command in Cloud Shell:\n\n*gcloud builds submit --tag gcr.io/$GOOGLECLOUDPROJECT/mean-stack-demo*\n\n$GOOGLE_CLOUD_PROJECT is an environment variable containing your Google Cloud project ID when running in Cloud Shell.\n\nStep 4: Test it locally by running: \ndocker run -d -p 8080:8080 gcr.io/$GOOGLE_CLOUD_PROJECT/mean-stack-demo \nand by clicking Web Preview, Preview on port 8080.\n\nStep 5: Run the following command to deploy your containerized app to Cloud Run:\n\n*gcloud run deploy mean-stack-demo --image \ngcr.io/$GOOGLECLOUDPROJECT/mean-stack-demo --platform managed --region us-central1 --allow-unauthenticated --update-env-vars DBHOST=$DB_HOST*\n\na. \u2013allow-unauthenticated will let the service be reached without authentication.\n\nb. \u2013platform-managed means you are requesting the fully managed environment and not the Kubernetes one via Anthos.\n \nc. \u2013update-env-vars expects the MongoDB Connection String to be passed on to the environment variable DBHOST.\nHang on until the section on Env variable and Docker for Continuous Deployment for Secrets and Connection URI management.\n\nd. When the deployment is done, you should see the deployed service URL in the command line.\n\ne. When you hit the service URL, you should see your web page on the browser and the logs in the Cloud Logging Logs Explorer page. \n\n### 5. Environment variables and Docker for continuous deployment\n\nIf you\u2019re looking to automate the process of building and deploying across multiple containers, services, or components, storing these configurations in the repo is not only cumbersome but also a security threat. \n\n1. 
For ease of cross-environment continuous deployment and to avoid security vulnerabilities caused by leaking credential information, we can choose to pass variables at build/deploy/up time.\n \n *--update-env-vars* allows you to set the environment variable to a value that is passed only at run time. In our example, the variable DBHOST is assigned the value of $DB_HOST. which is set as *DB_HOST = \u2018<>\u2019*.\n\n Please note that unencoded symbols in Connection URI (username, password) will result in connection issues with MongoDB. For example, if you have a $ in the password or username, replace it with %24 in the encoded Connection URI.\n\n2. Alternatively, you can also pass configuration variables as env variables at build time into docker-compose (*docker-compose.yml*). By passing configuration variables and credentials, we avoid credential leakage and automate deployment securely and continuously across multiple environments, users, and applications.\n\n## Conclusion\n\nMongoDB Atlas with Cloud Run makes for a truly serverless MEAN stack solution, and for those looking to build an application with a serverless option to run in a stateless container, Cloud Run is your best bet. \n\n## Before you go\u2026\n\nNow that you have learnt how to deploy a simple MEAN stack application on Cloud Run and MongoDB Atlas, why don\u2019t you take it one step further with your favorite client-server use case? Reference the below resources for more inspiration:\n\n* Cloud Run HelloWorld: https://codelabs.developers.google.com/codelabs/cloud-run-hello#4\n* MongoDB - MEAN Stack: https://www.mongodb.com/languages/mean-stack-tutorial\n\nIf you have any comments or questions, feel free to reach out to us online: Abirami Sukumaran and Stanimira Vlaeva.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Docker", "Google Cloud"], "pageDescription": "In this blog, we'll see how Cloud Run and MongoDB come together to enable a completely serverless MEAN stack application development experience.", "contentType": "Tutorial"}, "title": "Serverless MEAN Stack Applications with Cloud Run and MongoDB Atlas", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-data-api-google-apps-script", "action": "created", "body": "# Using the Atlas Data API with Google Apps Script\n\n> This tutorial discusses the preview version of the Atlas Data API which is now generally available with more features and functionality. Learn more about the GA version here.\n\nThe MongoDB Atlas Data API is an HTTPS-based API which allows us to read and write data in Atlas where a MongoDB driver library is either not available or not desirable. In this article, we will see how a business analyst or other back office user, who often may not be a professional developer, can access data from and record data in Atlas. The Atlas Data API can easily be used by users unable to create or configure back-end services, who simply want to work with data in tools they know, like Google Sheets or Excel.\n\nLearn about enabling the Atlas Data API and obtaining API keys.\n\nGoogle Office accesses external data using Google Apps Script, a cloud-based JavaScript platform that lets us integrate with and automate tasks across Google products. We will use Google Apps Script to call the Data API.\n\n## Prerequisites\n\nBefore we begin, we will need a Google account and the ability to create Google Sheets. 
We will also need an Atlas cluster for which we have enabled the Data API, and our **endpoint URL** and **API Key**. You can learn how to get these in this article or this video, if you do not have them already.\n\nA common use of Atlas with Google Sheets might be to look up some business data manually, or produce an export for a third party. To test this, we first need to have some business data in MongoDB Atlas. This can be added by selecting the three dots next to our cluster name and choosing \"Load Sample Dataset\", or following the instructions here.\n\n## Creating a Google Apps Script from a Google Sheet\n\nOur next step is to create a new Google sheet. We can do this by going to https://docs.google.com/spreadsheets/ and selecting a new blank sheet, or, if using Chrome, by going to the URL https://sheets.new . We end up viewing a sheet like this. Replace the name \"Untitled spreadsheet\" with \"Atlas Data API Demo\".\n\nWe are going to create a simple front end to allow us to verify the business inspection certificate and history for a property. We will get this from the collection **inspections** in the **sample\\_training** database. The first step is to add some labels in our sheet as shown below. Don't worry if your formatting isn't.exactly the same. Cell B1 is where we will enter the name we are searching for. For now, enter \"American\".\n\nNow we need to add code that queries Atlas and retrieves the data. To do this, select **Extensions -> Apps Script** from the menu bar. (If you are using Google for Business, it might be under **Tools->Script Editor** instead.)\n\nA new tab will open with the Apps Script Development environment, and an empty function named myFunction(). In this tab, we can write JavaScript code to interact with our sheet, MongoDB Atlas, and most other parts of the Google infrastructure.\n\nClick on the name 'Untitled project\", Type in \"My Data API Script\" in the popup and click Rename.\n\nBefore we connect to Atlas, we will first write and test some very basic code that gets a handle to our open spreadsheet and retrieves the contents of cell B1 where we enter what we want to search for. Replace all the code with the code below.\n\n```\nfunction lookupInspection() {\n const activeSheetsApp = SpreadsheetApp.getActiveSpreadsheet();\n const sheet = activeSheetsApp.getSheets()0];\n const partialName = sheet.getRange(\"B1\").getValue();\n SpreadsheetApp.getUi().alert(partialName)\n}\n```\n\n## Granting Permissions to Google Apps Scripts\n\nWe need now to grant permission to the script to access our spreadsheet. Although we just created this script, Google requires explicit permission to trust scripts accessing documents or services.\n\nMake sure the script is saved by typing Control/Command + S, then click \"Run\" on the toolbar, and then \"Review Permissions\" on the \"Authorization required\" popup. Select the name of the Google account you intend to run this as. You will then get a warning that \"Google hasn't verified this app\".\n\n![\n\nThis warning is intended for someone who runs a sheet they got from someone else, rather than us as the author. To continue, click on Advanced, then \"Go to My Data API Script (unsafe)\". 
*This is not unsafe for you as the author, but anyone else accessing this sheet should be aware it can access any of their Google sheets.*\n\nFinally, click \"Allow\" when asked if the app can \"See, edit, create, and delete all your Google Sheets spreadsheets.\"\n\nAs we change our script and require additional permissions, we will need to go through this approval process again.\n\n## Adding a Launch Button for a Google Apps Script in a Spreadsheet\n\nWe now need to add a button on the sheet to call this function when we want to use it. Google Sheets does not have native buttons to launch scripts, but there is a trick to emulate one.\n\nReturn to the tab that shows the sheet, dismiss the popup if there is one, and use **Insert->Drawing**. Add a textbox by clicking the square with the letter T in the middle and dragging to make a small box. Double click it to set the text to \"Search\" and change the background colour to a nice MongoDB green. Then click \"Save and Close.\"\n\nOnce back in the spreadsheet, drag this underneath the name \"Search For:\" at the top left. You can move and resize it to fit nicely.\n\nFinally, click on the green button, then the three dots in the top right corner. Choose \"Assign a Script\" in the popup type **lookupInspection**. Whilst this feels quite a clumsy way to bind a script to a button, it's the only thing Google Sheets gives us.\n\nNow click the green button you created, it should pop up a dialog that says 'American'. We have now bound our script to the button successfully. You change the value in cell B1 to \"Pizza\" and run the script again checking it says \"Pizza\" this time. *Note the value of B1 does not change until you then click in another cell.*\n\nIf, after you have bound a button to a script you need to select the button for moving, sizing or formatting you can do so with Command/Control + Click.\n\n## Retrieving data from MongoDB Atlas using Google Apps Scripts\n\nNow we have a button to launch our script, we can fill in the rest of the code to call the Data API and find any matching results.\n\nFrom the menu bar on the sheet, once again select **Extensions->Apps Script** (or **Tools->Script Editor**). Now change the code to match the code shown below. Make sure you set the endpoint in the first line to your URL endpoint from the Atlas GUI. 
The part that says \"**amzuu**\" will be different for you.\n\n```\nconst findEndpoint = 'https://data.mongodb-api.com/app/data-amzuu/endpoint/data/beta/action/find';\nconst clusterName = \"Cluster0\"\n \nfunction getAPIKey() {\n const userProperties = PropertiesService.getUserProperties();\n let apikey = userProperties.getProperty('APIKEY');\n let resetKey = false; //Make true if you have to change key\n if (apikey == null || resetKey ) {\n var result = SpreadsheetApp.getUi().prompt(\n 'Enter API Key',\n 'Key:', SpreadsheetApp.getUi().ButtonSet);\n apikey = result.getResponseText()\n userProperties.setProperty('APIKEY', apikey);\n }\n return apikey;\n} \n \nfunction lookupInspection() {\n const activeSheetsApp = SpreadsheetApp.getActiveSpreadsheet();\n const sheet = activeSheetsApp.getSheets()0];\n const partname = sheet.getRange(\"B1\").getValue();\n \n \n sheet.getRange(`C3:K103`).clear()\n \n const apikey = getAPIKey()\n \n //We can do operators like regular expression with the Data API\n const query = { business_name: { $regex: `${partname}`, $options: 'i' } }\n const order = { business_name: 1, date: -1 }\n const limit = 100\n //We can Specify sort, limit and a projection here if we want\n const payload = {\n filter: query, sort: order, limit: limit,\n collection: \"inspections\", database: \"sample_training\", dataSource: clusterName\n }\n \n const options = {\n method: 'post',\n contentType: 'application/json',\n payload: JSON.stringify(payload),\n headers: { \"api-key\": apikey }\n };\n \n const response = UrlFetchApp.fetch(findEndpoint, options);\n const documents = JSON.parse(response.getContentText()).documents\n \n for (d = 1; d <= documents.length; d++) {\n let doc = documents[d - 1]\n fields = [[doc.business_name, doc.date, doc.result, doc.sector, \n doc.certificate_number, doc.address.number,\n doc.address.street, doc.address.city, doc.address.zip]]\n let row = d + 2\n sheet.getRange(`C${row}:K${row}`).setValues(fields)\n }\n}\n```\n\nWe can now test this by clicking \u201cRun\u201d on the toolbar. As we have now requested an additional permission (the ability to connect to an external web service), we will once again have to approve permissions for our account by following the process above.\n\nOnce we have granted permission, the script will runLog a successful start but not appear to be continuing. This is because it is waiting for input. Returning to the tab with the sheet, we can see it is now requesting we enter our Atlas Data API key. If we paste our Atlas Data API key into the box, we will see it complete the search.\n\n![\n\nWe can now search the company names by typing part of the name in B1 and clicking the Search button. This search uses an unindexed regular expression. For production use, you should use either indexed MongoDB searches or, for free text searching, Atlas Search, but that is outside the scope of this article.\n\n## Securing Secret API Keys in Google Apps Scripts\n\nAtlas API keys give the holder read and write access to all databases in the cluster, so it's important to manage the API key with care.\n\nRather than simply hard coding the API key in the script, where it might be seen by someone else with access to the spreadsheet, we check if it is in the user's personal property store (a server-side key-value only accessible by that Google user). If not, we prompt for it and store it. 
This is all encapsulated in the getAPIKey() function.\n\n```\nfunction getAPIKey() {\n const userProperties = PropertiesService.getUserProperties();\n let apikey = userProperties.getProperty('APIKEY');\n let resetKey = false; //Make true if you have to change key\n if (apikey == null || resetKey ) {\n var result = SpreadsheetApp.getUi().prompt(\n 'Enter API Key',\n 'Key:', SpreadsheetApp.getUi().ButtonSet);\n apikey = result.getResponseText()\n userProperties.setProperty('APIKEY', apikey);\n }\n return apikey;\n}\n```\n\n*Should you enter the key incorrectly - or need to change the stored one. Change resetKey to true, run the script and enter the new key then change it back to false.*\n\n## Writing to MongoDB Atlas from Google Apps Scripts\n\nWe have created this simple, sheets-based user interface and we could adapt it to perform any queries or aggregations when reading by changing the payload. We can also write to the database using the Data API. To keep the spreadsheet simple, we will add a usage log for our new search interface showing what was queried for, and when. Remember to change \"**amzuu**\" in the endpoint value at the top to the endpoint for your own project. Add this to the end of the code, keeping the existing functions.\n\n```\nconst insertOneEndpoint = 'https://data.mongodb-api.com/app/data-amzuu/endpoint/data/beta/action/insertOne'\n\nfunction logUsage(query, nresults, apikey) {\nconst document = { date: { $date: { $numberLong: ${(new Date()).getTime()} } }, query, nresults, by: Session.getActiveUser().getEmail() }\nconsole.log(document)\nconst payload = {\ndocument: document, collection: \"log\",\ndatabase: \"sheets_usage\", dataSource: \"Cluster0\"\n}\n\nconst options = {\nmethod: 'post',\ncontentType: 'application/json',\npayload: JSON.stringify(payload),\nheaders: { \"api-key\": apikey }\n};\n\nconst response = UrlFetchApp.fetch(insertOneEndpoint, options);\n}\n```\n\n## Using Explicit Data Types in JSON with MongoDB EJSON\n\nWhen we add the data with this, we set the date field to be a date type in Atlas rather than a string type with an ISO string of the date. We do this using EJSON syntax.\n\nEJSON, or Extended JSON, is used to get around the limitation of plain JSON not being able to differentiate data types. JSON is unable to differentiate a date from a string, or specify if a number is a Double, 64 Bit Integer, or 128 Bit BigDecimal value. MongoDB data is data typed and when working with other languages and code, in addition to the Data API, it is important to be aware of this, especially if adding or updating data.\n\nIn this example, rather than using `{ date : (new Date()).toISOString() }`, which would store the date as a string value, we use the much more efficient and flexible native date type in the database by specifying the value using EJSON. 
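\n\nTo make the difference concrete, here is a small illustrative sketch (the field names and values are placeholders rather than data from this tutorial) comparing a plain JSON document with its EJSON-typed equivalent as you might build it in Apps Script:\n\n```\n// Plain JSON: the date becomes a string and the numbers lose their specific BSON type\nconst plainDoc = { date: (new Date()).toISOString(), count: 42 };\n\n// EJSON: explicit type wrappers survive the HTTPS round trip to the Data API\nconst ejsonDoc = {\n  date: { \"$date\": { \"$numberLong\": `${(new Date()).getTime()}` } }, // BSON date\n  count: { \"$numberLong\": \"42\" }, // 64-bit integer\n  price: { \"$numberDecimal\": \"19.99\" } // 128-bit decimal\n};\n```\n\nSent through the insertOne endpoint, the EJSON version is stored with real date and numeric types rather than strings.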
The EJSON form is ` { date : { $date : { $numberLong: }}}`.\n\n## Connecting up our Query Logging Function\n\nWe must now modify our code to log each query that is performed by adding the following line in the correct place inside the `lookupInspection` function.\n\n```\n const response = UrlFetchApp.fetch(findendpoint, options);\n const documents = JSON.parse(response.getContentText()).documents\n \n logUsage(partname, documents.length, apikey); // <---- Add This line\n \n for (d = 1; d <= documents.length; d++) {\n...\n```\n\nIf we click the Search button now, not only do we get our search results but checking Atlas data explorer shows us a log of what we searched for, at what time, and what user performed it.\n\n## Conclusion\n\nYou can access the completed sheet here. This is read-only, so you will need to create a copy using the file menu to run the script.\n\nCalling the Data API from Google Apps Script is simple. The HTTPS call is just a few lines of code. Securing the API key and specifying the correct data type when inserting or updating data are just a little more complex, but hopefully, this post will give you a good indication of how to go about it.\n\nIf you have questions, please head to ourdeveloper community websitewhere the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "This article teaches you how to call the Atlas Data API from a Google Sheets spreadsheet using Google Apps Script.", "contentType": "Quickstart"}, "title": "Using the Atlas Data API with Google Apps Script", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-kotlin-041-announcement", "action": "created", "body": "# Realm Kotlin 0.4.1 Announcement\n\nIn this blogpost we are announcing v0.4.1 of the Realm Kotlin Multiplatform SDK. This release contains a significant architectural departure from previous releases of Realm Kotlin as well as other Realm SDK\u2019s, making it much more compatible with modern reactive frameworks like Kotlin Flows. We believe this change will hugely benefit users in the Kotlin ecosystem.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n\n## **Some background**\nThe Realm Java and Kotlin SDK\u2019s have historically exposed a model of interacting with data we call Live Objects. Its primary design revolves around database objects acting as Live Views into the underlying database. \n\nThis was a pretty novel approach when Realm Java was first released 7 years ago. It had excellent performance characteristics and made it possible to avoid a wide range of nasty bugs normally found in concurrent systems.\n\nHowever, it came with one noticeable drawback: Thread Confinement.\n\nThread-confinement was not just an annoying restriction. This was what guaranteed that users of the API would always see a consistent view of the data, even across decoupled queries. 
Which was also the reason that Kotlin Native adopted a similar memory model\n\nBut it also meant that you manually had to open and close realms on each thread where you needed data, and it was impossible to pass objects between threads without additional boilerplate. \n\nBoth of which put a huge burden on developers.\n\nMore importantly, this approach conflicts with another model for working with concurrent systems, namely Functional Reactive Programming (FRP). In the Android ecosystem this was popularized by the RxJava framework and also underpins Kotlin Flows.\n\nIn this mode, you see changes to data as immutable events in a stream, allowing complex mapping and transformations. Consistency is then guaranteed by the semantics of the stream; each operation is carried out in sequence so no two threads operate on the same object at the same time. \n\nIn this model, however, it isn\u2019t uncommon for different operations to happen on different threads, breaking the thread-confinement restrictions of Realm. \n\nLooking at the plethora of frameworks that support this model (React JS, RxJava, Java Streams, Apple Combine Framework and Kotlin Flows) It becomes clear that this way of reasoning about concurrency is here to stay.\n\nFor that reason we decided to change our API to work much better in this context.\n\n## The new API\n\nSo today we are introducing a new architecture, which we internally have called the Frozen Architecture. It looks similar to the old API, but works in a fundamentally different way.\n\nRealm instances are now thread-safe, meaning that you can use the same instance across the entire application, making it easier to pass around with e.g. dependency injection.\n\nAll query results and objects from the database are frozen or immutable by default. They can now be passed freely between threads. This also means that they no longer automatically are kept up to date. Instead you must register change listeners in order to be notified about any change.\n\nAll modifications to data must happen by using a special instance of a `MutableRealm`, which is only available inside write transactions. Objects inside a write transaction are still live.\n\n## Opening a Realm\n\nOpening a realm now only needs to happen once. It can either be stored in a global variable or made available via dependency injection.\n\n```\n// Global App variable\nclass MyApp: Application() {\n companion object {\n private val config = RealmConfiguration(schema = setOf(Person::class))\n public val REALM = Realm(config)\n }\n}\n\n// Using dependency injection\nval koinModule = module {\n single { RealmConfiguration(schema = setOf(Person::class)) }\n single { Realm(get()) }\n}\n\n// Realms are now thread safe\nval realm = Realm(config)\nval t1 = Thread {\n realm.writeBlocking { /* ... */ }\n}\nval t2 = Thread {\n val queryResult = realm.objects(Person::class)\n}\n\n```\n\nYou can now safely keep your realm instance open for the lifetime of the application. You only need to close your realm when interacting with the realm file itself, such as when deleting the file or compacting it.\n\n```\n// Close Realm to free native resources\nrealm.close()\n```\n\n## Creating Data\nYou can only write within write closures, called `write` and `writeBlocking`. Writes happen through a MutableRealm which is a receiver of the `writeBlocking` and `write` lambdas. \n\nBlocking:\n\n```\nval jane = realm.writeBlocking { \n val unmanaged = Person(\"Jane\")\n copyToRealm(unmanaged)\n}\n```\n\nOr run as a suspend function. 
Realm automatically dispatch writes to a write dispatcher backed by a background thread, so launching this from a scope on the UI thread like `viewModelScope` is safe:\n\n```\nCoroutineScope(Dispatchers.Main).launch {\n\n // Write automatically happens on a background dispatcher\n val jane = realm.write {\n val unmanaged = Person(\"Jane\")\n // Add unmanaged objects\n copyToRealm(unmanaged)\n }\n\n // Objects returned from writes are automatically frozen\n jane.isFrozen() // == true\n\n // Access any property.\n // All properties are still lazy-loaded.\n jane.name // == \"Jane\"\n}\n```\n\n## **Updating data**\n\nSince everything is frozen by default, you need to retrieve a live version of the object that you want to update, then write to that live object to update the underlying data in the realm.\n\n```\nCoroutineScope(Dispatchers.Main).launch {\n // Create initial object \n val jane = realm.write {\n copyToRealm(Person(\"Jane\"))\n }\n \n realm.write {\n // Find latest version and update it\n // Note, this always involves a null-check\n // as another thread might have deleted the\n // object.\n // This also works on objects without\n // primary keys.\n findLatest(jane)?.apply {\n name = \"Jane Doe\"\n }\n }\n}\n```\n\n## Observing Changes\n\nChanges to all Realm classes are supported through Flows. Standard change listener API support is coming in a future release. \n\n```\nval jane = getJane()\nCoroutineScope(Dispatchers.Main).launch {\n // Updates are observed using Kotlin Flow\n val flow: Flow = jane.observe()\n flow.collect {\n // Listen to changes to the object\n println(it.name)\n }\n}\n```\n\nAs all Realm objects are now frozen by default, it is now possible to pass objects between different dispatcher threads without any additional boilerplate:\n\n```\nval jane = getJane()\nCoroutineScope(Dispatchers.Main).launch {\n\n // Run mapping/transform logic in the background\n val flow: Flow = jane.observe()\n .filter { it.name.startsWith(\"Jane\") }\n .flowOn(Dispatchers.Unconfined)\n\n // Before collecting on the UI thread\n flow.collect {\n println(it.name)\n }\n}\n```\n\n## Pitfalls\n\nWith the change to frozen architecture, there are some new pitfalls to be aware of:\n\nUnrelated queries are no longer guaranteed to run on the same version.\n\n```\n// A write can now happen between two queries\nval results1: RealmResults = realm.objects(Person::class)\nval results2: RealmResults = realm.objects(Person::class)\n\n// Resulting in subsequent queries not returning the same result\nresults1.version() != results2.version()\nresults1.size != results2.size\n\n```\nWe will introduce API\u2019s in the future that can guarantee that all operations within a certain scope are guaranteed to run on the same version. Making it easier to combine the results of multiple queries.\n\nDepending on the schema, it is also possible to navigate the entire object graph for a single object. It is only unrelated queries that risk this behaviour. \n\nStoring objects for extended periods of time can lead to Version Pinning. This results in an increased realm file size. It is thus not advisable to store Realm Objects in global variables unless they are unmanaged. 
\n\n```\n// BAD: Store a global managed object\nMyApp.GLOBAL_OBJECT = realm.objects(Person::class).first()\n\n// BETTER: Copy data out into an unmanaged object\nval person = realm.objects(Person::class).first()\nMyApp.GLOBAL_OBJECT = Person(person.name)\n\n```\n\nWe will monitor how big an issue this is in practise and will introduce future API\u2019s that can work around this if needed. It is currently possible to detect this happening by setting `RealmConfiguration.Builder.maxNumberOfActiveVersions()`\n \nUltimately we believe that these drawbacks are acceptable given the advantages we otherwise get from this architecture, but we\u2019ll keep a close eye on these as the API develops further.\n\n## Conclusion \nWe are really excited about this change as we believe it will fundamentally make it a lot easier to use Realm Kotlin in Android and will also enable you to use Realm in Kotlin Multilplatform projects.\n\nYou can read more about how to get started at https://docs.mongodb.com/realm/sdk/kotlin-multiplatform/. We encourage you to try out this new version and leave any feedback at https://github.com/realm/realm-kotlin/issues/new. Sample projects can be found here.\n\nThe SDK is still in alpha and as such none of the API\u2019s are considered stable, but it is possible to follow our progress at https://github.com/realm/realm-kotlin. \n\nIf you are interested about learning more about how this works under the hood, you can also read more here\n\nHappy hacking!", "format": "md", "metadata": {"tags": ["Realm", "Kotlin", "Mobile"], "pageDescription": "In this blogpost we are announcing v0.4.1 of the Realm Kotlin Multiplatform SDK. This release contains a significant architectural departure from previous releases of Realm Kotlin as well as other Realm SDK\u2019s, making it much more compatible with modern reactive frameworks like Kotlin Flows. We believe this change will hugely benefit users in the Kotlin ecosystem.\n", "contentType": "News & Announcements"}, "title": "Realm Kotlin 0.4.1 Announcement", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/node-crud-tutorial-3-3-2", "action": "created", "body": "# MongoDB and Node.js 3.3.2 Tutorial - CRUD Operations\n\n \n\nIn the first post in this series, I walked you through how to connect to a MongoDB database from a Node.js script, retrieve a list of databases, and print the results to your console. If you haven't read that post yet, I recommend you do so and then return here.\n\n>This post uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.\n>\n>Click here to see a newer version of this post that uses MongoDB 4.4, MongoDB Node.js Driver 3.6.4, and Node.js 14.15.4.\n\nNow that we have connected to a database, let's kick things off with the CRUD (create, read, update, and delete) operations.\n\nIf you prefer video over text, I've got you covered. Check out the video in the section below. :-)\n\n>Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.\n\nHere is a summary of what we'll cover in this post:\n\n- Learn by Video\n- How MongoDB Stores Data\n- Create\n- Read\n- Update\n- Delete\n- Wrapping Up\n\n## Learn by Video\n\nI created the video below for those who prefer to learn by video instead\nof text. 
You might also find this video helpful if you get stuck while\ntrying the steps in the text-based instructions below.\n\nHere is a summary of what the video covers:\n\n- How to connect to a MongoDB database hosted on MongoDB Atlas from inside of a Node.js script (00:40)\n- How MongoDB stores data in documents and collections (instead of rows and tables) (08:51)\n- How to create documents using `insertOne()` and `insertMany()` (11:01)\n- How to read documents using `findOne()` and `find()` (20:04)\n- How to update documents using `updateOne()` with and without `upsert` as well as `updateMany()` (31:13)\n- How to delete documents using `deleteOne()` and `deleteMany()` (46:07)\n\n:youtube]{vid=ayNI9Q84v8g}\n\nNote: In the video, I type `main().catch(console.err);`, which is incorrect. Instead, I should have typed `main().catch(console.error);`.\n\nBelow are the links I mentioned in the video.\n\n- [MongoDB Atlas\n- How to create a free cluster on Atlas\n- MongoDB University's Data Modeling Course\n- MongoDB University's JavaScript Course\n\n## How MongoDB Stores Data\n\nBefore we go any further, let's take a moment to understand how data is stored in MongoDB.\n\nMongoDB stores data in BSON documents. BSON is a binary representation of JSON (JavaScript Object Notation) documents. When you read MongoDB documentation, you'll frequently see the term \"document,\" but you can think of a document as simply a JavaScript object. For those coming from the SQL world, you can think of a document as being roughly equivalent to a row.\n\nMongoDB stores groups of documents in collections. For those with a SQL background, you can think of a collection as being roughly equivalent to a table.\n\nEvery document is required to have a field named `_id`. The value of `_id` must be unique for each document in a collection, is immutable, and can be of any type other than an array. MongoDB will automatically create an index on `_id`. You can choose to make the value of `_id` meaningful (rather than a somewhat random ObjectId) if you have a unique value for each document that you'd like to be able to quickly search.\n\nIn this blog series, we'll use the sample Airbnb listings dataset. The `sample_airbnb` database contains one collection: `listingsAndReviews`. This collection contains documents about Airbnb listings and their reviews.\n\nLet's take a look at a document in the `listingsAndReviews` collection. Below is part of an Extended JSON representation of a BSON document:\n\n``` json\n{\n \"_id\":\"10057447\",\n \"listing_url\":\"https://www.airbnb.com/rooms/10057447\",\n \"name\":\"Modern Spacious 1 Bedroom Loft\",\n \"summary\":\"Prime location, amazing lighting and no annoying neighbours. Good place to rent if you want a relaxing time in Montreal.\",\n \"property_type\":\"Apartment\",\n \"bedrooms\":{\"$numberInt\":\"1\"},\n \"bathrooms\":{\"$numberDecimal\":\"1.0\"},\n \"amenities\":\"Internet\",\"Wifi\",\"Kitchen\",\"Heating\",\"Family/kid friendly\",\"Washer\",\"Dryer\",\"Smoke detector\",\"First aid kit\",\"Safety card\",\"Fire extinguisher\",\"Essentials\",\"Shampoo\",\"24-hour check-in\",\"Hangers\",\"Iron\",\"Laptop friendly workspace\"],\n}\n```\n\nFor more information on how MongoDB stores data, see the [MongoDB Back to Basics Webinar that I co-hosted with Ken Alger.\n\n## Create\n\nNow that we know how to connect to a MongoDB database and we understand how data is stored in a MongoDB database, let's create some data!\n\n### Create One Document\n\nLet's begin by creating a new Airbnb listing. 
We can do so by calling Collection's insertOne(). `insertOne()` will insert a single document into the collection. The only required parameter is the new document (of type object) that will be inserted. If our new document does not contain the `_id` field, the MongoDB driver will automatically create an id for the document.\n\nOur function to create a new listing will look something like the following:\n\n``` javascript\nasync function createListing(client, newListing){\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").insertOne(newListing);\n console.log(`New listing created with the following id: ${result.insertedId}`);\n}\n```\n\nWe can call this function by passing a connected MongoClient as well as an object that contains information about a listing.\n\n``` javascript\nawait createListing(client,\n {\n name: \"Lovely Loft\",\n summary: \"A charming loft in Paris\",\n bedrooms: 1,\n bathrooms: 1\n }\n );\n```\n\nThe output would be something like the following:\n\n``` none\nNew listing created with the following id: 5d9ddadee415264e135ccec8\n```\n\nNote that since we did not include a field named `_id` in the document, the MongoDB driver automatically created an `_id` for us. The `_id` of the document you create will be different from the one shown above. For more information on how MongoDB generates `_id`, see Quick Start: BSON Data Types - ObjectId.\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n### Create Multiple Documents\n\nSometimes, you will want to insert more than one document at a time. You could choose to repeatedly call `insertOne()`. The problem is that, depending on how you've structured your code, you may end up waiting for each insert operation to return before beginning the next, resulting in slow code.\n\nInstead, you can choose to call Collection's insertMany(). `insertMany()` will insert an array of documents into your collection.\n\nOne important option to note for `insertMany()` is `ordered`. If `ordered` is set to `true`, the documents will be inserted in the order given in the array. If any of the inserts fail (for example, if you attempt to insert a document with an `_id` that is already being used by another document in the collection), the remaining documents will not be inserted. If ordered is set to `false`, the documents may not be inserted in the order given in the array. MongoDB will attempt to insert all of the documents in the given array\u2014regardless of whether any of the other inserts fail. 
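\n\nIf you would rather have the driver keep attempting the remaining documents after a failure, you can pass the option as the second parameter to `insertMany()`. Here is a quick sketch (the helper name is just for illustration, and `client` is a connected MongoClient as in the other examples):\n\n``` javascript\nasync function createMultipleListingsUnordered(client, newListings){\n    // ordered: false means a failed insert (for example, a duplicate _id)\n    // does not stop the driver from attempting the rest of the array\n    const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\")\n        .insertMany(newListings, { ordered: false });\n\n    console.log(`${result.insertedCount} new listing(s) created.`);\n}\n```\n\n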
By default, `ordered` is set to `true`.\n\nLet's write a function to create multiple Airbnb listings.\n\n``` javascript\nasync function createMultipleListings(client, newListings){\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").insertMany(newListings);\n\n console.log(`${result.insertedCount} new listing(s) created with the following id(s):`);\n console.log(result.insertedIds); \n}\n```\n\nWe can call this function by passing a connected MongoClient and an array of objects that contain information about listings.\n\n``` javascript\nawait createMultipleListings(client, \n {\n name: \"Infinite Views\",\n summary: \"Modern home with infinite views from the infinity pool\",\n property_type: \"House\",\n bedrooms: 5,\n bathrooms: 4.5,\n beds: 5\n },\n {\n name: \"Private room in London\",\n property_type: \"Apartment\",\n bedrooms: 1,\n bathroom: 1\n },\n {\n name: \"Beautiful Beach House\",\n summary: \"Enjoy relaxed beach living in this house with a private beach\",\n bedrooms: 4,\n bathrooms: 2.5,\n beds: 7,\n last_review: new Date()\n }\n]);\n```\n\nNote that every document does not have the same fields, which is perfectly OK. (I'm guessing that those who come from the SQL world will find this incredibly uncomfortable, but it really will be OK \ud83d\ude0a.) When you use MongoDB, you get a lot of flexibility in how to structure your documents. If you later decide you want to add [schema validation rules so you can guarantee your documents have a particular structure, you can.\n\nThe output of calling `createMultipleListings()` would be something like the following:\n\n``` none\n3 new listing(s) created with the following id(s):\n{ \n '0': 5d9ddadee415264e135ccec9,\n '1': 5d9ddadee415264e135cceca,\n '2': 5d9ddadee415264e135ccecb \n}\n```\n\nJust like the MongoDB Driver automatically created the `_id` field for us when we called `insertOne()`, the Driver has once again created the `_id` field for us when we called `insertMany()`.\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n## Read\n\nNow that we know how to **create** documents, let's **read** one!\n\n### Read One Document\n\nLet's begin by querying for an Airbnb listing in the listingsAndReviews collection by name.\n\nWe can query for a document by calling Collection's findOne(). `findOne()` will return the first document that matches the given query. Even if more than one document matches the query, only one document will be returned.\n\n`findOne()` has only one required parameter: a query of type object. The query object can contain zero or more properties that MongoDB will use to find a document in the collection. 
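\n\nFor example, a query object can combine several properties, in which case a document must match all of them. Here is a quick sketch (the values are arbitrary, and the call is assumed to run inside an async function with a connected client):\n\n``` javascript\n// Returns the first listing named \"Infinite Views\" that also has exactly 5 bedrooms,\n// or null if no document matches both properties\nconst listing = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\")\n    .findOne({ name: \"Infinite Views\", bedrooms: 5 });\n```\n\n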
If you want to query all documents in a collection without narrowing your results in any way, you can simply send an empty object.\n\nSince we want to search for an Airbnb listing with a particular name, we will include the name field in the query object we pass to `findOne()`:\n\n``` javascript\nfindOne({ name: nameOfListing })\n```\n\nOur function to find a listing by querying the name field could look something like the following:\n\n``` javascript\nasync function findOneListingByName(client, nameOfListing) {\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").findOne({ name: nameOfListing });\n\n if (result) {\n console.log(`Found a listing in the collection with the name '${nameOfListing}':`);\n console.log(result);\n } else {\n console.log(`No listings found with the name '${nameOfListing}'`);\n }\n}\n```\n\nWe can call this function by passing a connected MongoClient as well as the name of a listing we want to find. Let's search for a listing named \"Infinite Views\" that we created in an earlier section.\n\n``` javascript\nawait findOneListingByName(client, \"Infinite Views\");\n```\n\nThe output should be something like the following.\n\n``` none\nFound a listing in the collection with the name 'Infinite Views':\n{ \n _id: 5da9b5983e104518671ae128,\n name: 'Infinite Views',\n summary: 'Modern home with infinite views from the infinity pool',\n property_type: 'House',\n bedrooms: 5,\n bathrooms: 4.5,\n beds: 5 \n}\n```\n\nNote that the `_id` of the document in your database will not match the `_id` in the sample output above.\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n### Read Multiple Documents\n\nNow that you know how to query for one document, let's discuss how to query for multiple documents at a time. We can do so by calling Collection's find().\n\nSimilar to `findOne()`, the first parameter for `find()` is the query object. You can include zero to many properties in the query object.\n\nLet's say we want to search for all Airbnb listings that have minimum numbers of bedrooms and bathrooms. We could do so by making a call like the following:\n\n``` javascript\nconst cursor = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").find(\n {\n bedrooms: { $gte: minimumNumberOfBedrooms },\n bathrooms: { $gte: minimumNumberOfBathrooms }\n }\n );\n```\n\nAs you can see above, we have two properties in our query object: one for bedrooms and one for bathrooms. We can leverage the $gte comparison query operator to search for documents that have bedrooms greater than or equal to a given number. We can do the same to satisfy our minimum number of bathrooms requirement. MongoDB provides a variety of other comparison query operators that you can utilize in your queries. See the official documentation for more details.\n\nThe query above will return a Cursor. A Cursor allows traversal over the result set of a query.\n\nYou can also use Cursor's functions to modify what documents are included in the results. For example, let's say we want to sort our results so that those with the most recent reviews are returned first. We could use Cursor's sort() function to sort the results using the `last_review` field. We could sort the results in descending order (indicated by passing -1 to `sort()`) so that listings with the most recent reviews will be returned first. 
We can now update our existing query to look like the following.\n\n``` javascript\nconst cursor = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").find(\n {\n bedrooms: { $gte: minimumNumberOfBedrooms },\n bathrooms: { $gte: minimumNumberOfBathrooms }\n }\n ).sort({ last_review: -1 });\n```\n\nThe above query matches 192 documents in our collection. Let's say we don't want to process that many results inside of our script. Instead, we want to limit our results to a smaller number of documents. We can chain another of `sort()`'s functions to our existing query: limit(). As the name implies, `limit()` will set the limit for the cursor. We can now update our query to only return a certain number of results.\n\n``` javascript\nconst cursor = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").find(\n {\n bedrooms: { $gte: minimumNumberOfBedrooms },\n bathrooms: { $gte: minimumNumberOfBathrooms }\n }\n ).sort({ last_review: -1 })\n .limit(maximumNumberOfResults);\n```\n\nWe could choose to iterate over the cursor to get the results one by one. Instead, if we want to retrieve all of our results in an array, we can call Cursor's toArray() function. Now our code looks like the following:\n\n``` javascript\nconst cursor = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\").find(\n {\n bedrooms: { $gte: minimumNumberOfBedrooms },\n bathrooms: { $gte: minimumNumberOfBathrooms }\n }\n ).sort({ last_review: -1 })\n .limit(maximumNumberOfResults);\nconst results = await cursor.toArray();\n```\n\nNow that we have our query ready to go, let's put it inside an asynchronous function and add functionality to print the results.\n\n``` javascript\nasync function findListingsWithMinimumBedroomsBathroomsAndMostRecentReviews(client, {\n minimumNumberOfBedrooms = 0,\n minimumNumberOfBathrooms = 0,\n maximumNumberOfResults = Number.MAX_SAFE_INTEGER\n} = {}) {\n const cursor = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\")\n .find({\n bedrooms: { $gte: minimumNumberOfBedrooms },\n bathrooms: { $gte: minimumNumberOfBathrooms }\n }\n )\n .sort({ last_review: -1 })\n .limit(maximumNumberOfResults);\n\n const results = await cursor.toArray();\n\n if (results.length > 0) {\n console.log(`Found listing(s) with at least ${minimumNumberOfBedrooms} bedrooms and ${minimumNumberOfBathrooms} bathrooms:`);\n results.forEach((result, i) => {\n date = new Date(result.last_review).toDateString();\n\n console.log();\n console.log(`${i + 1}. name: ${result.name}`);\n console.log(` _id: ${result._id}`);\n console.log(` bedrooms: ${result.bedrooms}`);\n console.log(` bathrooms: ${result.bathrooms}`);\n console.log(` most recent review date: ${new Date(result.last_review).toDateString()}`);\n });\n } else {\n console.log(`No listings found with at least ${minimumNumberOfBedrooms} bedrooms and ${minimumNumberOfBathrooms} bathrooms`);\n }\n}\n```\n\nWe can call this function by passing a connected MongoClient as well as an object with properties indicating the minimum number of bedrooms, the minimum number of bathrooms, and the maximum number of results.\n\n``` javascript\nawait findListingsWithMinimumBedroomsBathroomsAndMostRecentReviews(client, {\n minimumNumberOfBedrooms: 4,\n minimumNumberOfBathrooms: 2,\n maximumNumberOfResults: 5\n});\n```\n\nIf you've created the documents as described in the earlier section, the output would be something like the following:\n\n``` none\nFound listing(s) with at least 4 bedrooms and 2 bathrooms:\n\n1. 
name: Beautiful Beach House\n _id: 5db6ed14f2e0a60683d8fe44\n bedrooms: 4\n bathrooms: 2.5\n most recent review date: Mon Oct 28 2019\n\n2. name: Spectacular Modern Uptown Duplex\n _id: 582364\n bedrooms: 4\n bathrooms: 2.5\n most recent review date: Wed Mar 06 2019\n\n3. name: Grace 1 - Habitat Apartments\n _id: 29407312\n bedrooms: 4\n bathrooms: 2.0\n most recent review date: Tue Mar 05 2019\n\n4. name: 6 bd country living near beach\n _id: 2741869\n bedrooms: 6\n bathrooms: 3.0\n most recent review date: Mon Mar 04 2019\n\n5. name: Awesome 2-storey home Bronte Beach next to Bondi!\n _id: 20206764\n bedrooms: 4\n bathrooms: 2.0\n most recent review date: Sun Mar 03 2019\n```\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n## Update\n\nWe're halfway through the CRUD operations. Now that we know how to **create** and **read** documents, let's discover how to **update** them.\n\n### Update One Document\n\nLet's begin by updating a single Airbnb listing in the listingsAndReviews collection.\n\nWe can update a single document by calling Collection's updateOne(). `updateOne()` has two required parameters:\n\n1. `filter` (object): the Filter used to select the document to update. You can think of the filter as essentially the same as the query param we used in findOne() to search for a particular document. You can include zero properties in the filter to search for all documents in the collection, or you can include one or more properties to narrow your search.\n2. `update` (object): the update operations to be applied to the document. MongoDB has a variety of update operators you can use such as `$inc`, `$currentDate`, `$set`, and `$unset`, among others. See the official documentation for a complete list of update operators and their descriptions.\n\n`updateOne()` also has an optional `options` param. See the updateOne() docs for more information on these options.\n\n`updateOne()` will update the first document that matches the given query. Even if more than one document matches the query, only one document will be updated.\n\nLet's say we want to update an Airbnb listing with a particular name. We can use `updateOne()` to achieve this. We'll include the name of the listing in the filter param. We'll use the $set update operator to set new values for new or existing fields in the document we are updating. When we use `$set`, we pass a document that contains fields and values that should be updated or created. 
The document that we pass to `$set` will not replace the existing document; any fields that are part of the original document but not part of the document we pass to `$set` will remain as they are.\n\nOur function to update a listing with a particular name would look like the following:\n\n``` javascript\nasync function updateListingByName(client, nameOfListing, updatedListing) {\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\")\n .updateOne({ name: nameOfListing }, { $set: updatedListing });\n\n console.log(`${result.matchedCount} document(s) matched the query criteria.`);\n console.log(`${result.modifiedCount} document(s) was/were updated.`);\n}\n```\n\nLet's say we want to update our Airbnb listing that has the name \"Infinite Views.\" We created this listing in an earlier section.\n\n``` javascript\n{ \n _id: 5db6ed14f2e0a60683d8fe42,\n name: 'Infinite Views',\n summary: 'Modern home with infinite views from the infinity pool',\n property_type: 'House',\n bedrooms: 5,\n bathrooms: 4.5,\n beds: 5 \n}\n```\n\nWe can call `updateListingByName()` by passing a connected MongoClient, the name of the listing, and an object containing the fields we want to update and/or create.\n\n``` javascript\nawait updateListingByName(client, \"Infinite Views\", { bedrooms: 6, beds: 8 });\n```\n\nExecuting this command results in the following output.\n\n``` none\n1 document(s) matched the query criteria.\n1 document(s) was/were updated.\n```\n\nNow our listing has an updated number of bedrooms and beds.\n\n``` json\n{ \n _id: 5db6ed14f2e0a60683d8fe42,\n name: 'Infinite Views',\n summary: 'Modern home with infinite views from the infinity pool',\n property_type: 'House',\n bedrooms: 6,\n bathrooms: 4.5,\n beds: 8 \n}\n```\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n### Upsert One Document\n\nOne of the options you can choose to pass to `updateOne()` is upsert. Upsert is a handy feature that allows you to update a document if it exists or insert a document if it does not.\n\nFor example, let's say you wanted to ensure that an Airbnb listing with a particular name had a certain number of bedrooms and bathrooms. Without upsert, you'd first use `findOne()` to check if the document existed. If the document existed, you'd use `updateOne()` to update the document. If the document did not exist, you'd use `insertOne()` to create the document. When you use upsert, you can combine all of that functionality into a single command.\n\nOur function to upsert a listing with a particular name can be basically identical to the function we wrote above with one key difference: We'll pass `{upsert: true}` in the `options` param for `updateOne()`.\n\n``` javascript\nasync function upsertListingByName(client, nameOfListing, updatedListing) {\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\")\n .updateOne({ name: nameOfListing }, \n { $set: updatedListing }, \n { upsert: true });\n console.log(`${result.matchedCount} document(s) matched the query criteria.`);\n\n if (result.upsertedCount > 0) {\n console.log(`One document was inserted with the id ${result.upsertedId._id}`);\n } else {\n console.log(`${result.modifiedCount} document(s) was/were updated.`);\n }\n}\n```\n\nLet's say we aren't sure if a listing named \"Cozy Cottage\" is in our collection or, if it does exist, if it holds old data. 
Either way, we want to ensure the listing that exists in our collection has the most up-to-date data. We can call `upsertListingByName()` with a connected MongoClient, the name of the listing, and an object containing the up-to-date data that should be in the listing.\n\n``` javascript\nawait upsertListingByName(client, \"Cozy Cottage\", { name: \"Cozy Cottage\", bedrooms: 2, bathrooms: 1 });\n```\n\nIf the document did not previously exist, the output of the function would be something like the following:\n\n``` none\n0 document(s) matched the query criteria.\nOne document was inserted with the id 5db9d9286c503eb624d036a1\n```\n\nWe have a new document in the listingsAndReviews collection:\n\n``` json\n{ \n _id: 5db9d9286c503eb624d036a1,\n name: 'Cozy Cottage',\n bathrooms: 1,\n bedrooms: 2 \n}\n```\n\nIf we discover more information about the \"Cozy Cottage\" listing, we can use `upsertListingByName()` again.\n\n``` javascript\nawait upsertListingByName(client, \"Cozy Cottage\", { beds: 2 });\n```\n\nAnd we would see the following output.\n\n``` none\n1 document(s) matched the query criteria.\n1 document(s) was/were updated.\n```\n\nNow our document has a new field named \"beds.\"\n\n``` json\n{ \n _id: 5db9d9286c503eb624d036a1,\n name: 'Cozy Cottage',\n bathrooms: 1,\n bedrooms: 2,\n beds: 2 \n}\n```\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n### Update Multiple Documents\n\nSometimes, you'll want to update more than one document at a time. In this case, you can use Collection's updateMany(). Like `updateOne()`, `updateMany()` requires that you pass a filter of type object and an update of type object. You can choose to include options of type object as well.\n\nLet's say we want to ensure that every document has a field named `property_type`. We can use the $exists query operator to search for documents where the `property_type` field does not exist. Then we can use the $set update operator to set the `property_type` to \"Unknown\" for those documents. Our function will look like the following.\n\n``` javascript\nasync function updateAllListingsToHavePropertyType(client) {\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\")\n .updateMany({ property_type: { $exists: false } }, \n { $set: { property_type: \"Unknown\" } });\n console.log(`${result.matchedCount} document(s) matched the query criteria.`);\n console.log(`${result.modifiedCount} document(s) was/were updated.`);\n}\n```\n\nWe can call this function with a connected MongoClient.\n\n``` javascript\nawait updateAllListingsToHavePropertyType(client);\n```\n\nBelow is the output from executing the previous command.\n\n``` none\n3 document(s) matched the query criteria.\n3 document(s) was/were updated.\n```\n\nNow our \"Cozy Cottage\" document and all of the other documents in the Airbnb collection have the `property_type` field.\n\n``` json\n{ \n _id: 5db9d9286c503eb624d036a1,\n name: 'Cozy Cottage',\n bathrooms: 1,\n bedrooms: 2,\n beds: 2,\n property_type: 'Unknown' \n}\n```\n\nListings that contained a `property_type` before we called `updateMany()` remain as they were. 
For example, the \"Spectacular Modern Uptown Duplex\" listing still has `property_type` set to `Apartment`.\n\n``` json\n{ \n _id: '582364',\n listing_url: 'https://www.airbnb.com/rooms/582364',\n name: 'Spectacular Modern Uptown Duplex',\n property_type: 'Apartment',\n room_type: 'Entire home/apt',\n bedrooms: 4,\n beds: 7\n ...\n}\n```\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n## Delete\n\nNow that we know how to **create**, **read**, and **update** documents, let's tackle the final CRUD operation: **delete**.\n\n### Delete One Document\n\nLet's begin by deleting a single Airbnb listing in the listingsAndReviews collection.\n\nWe can delete a single document by calling Collection's deleteOne(). `deleteOne()` has one required parameter: a filter of type object. The filter is used to select the document to delete. You can think of the filter as essentially the same as the query param we used in findOne() and the filter param we used in updateOne(). You can include zero properties in the filter to search for all documents in the collection, or you can include one or more properties to narrow your search.\n\n`deleteOne()` also has an optional `options` param. See the deleteOne() docs for more information on these options.\n\n`deleteOne()` will delete the first document that matches the given query. Even if more than one document matches the query, only one document will be deleted. If you do not specify a filter, the first document found in natural order will be deleted.\n\nLet's say we want to delete an Airbnb listing with a particular name. We can use `deleteOne()` to achieve this. We'll include the name of the listing in the filter param. We can create a function to delete a listing with a particular name.\n\n``` javascript\nasync function deleteListingByName(client, nameOfListing) {\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\")\n .deleteOne({ name: nameOfListing });\n console.log(`${result.deletedCount} document(s) was/were deleted.`);\n}\n```\n\nLet's say we want to delete the Airbnb listing we created in an earlier section that has the name \"Cozy Cottage.\" We can call `deleteListingsByName()` by passing a connected MongoClient and the name \"Cozy Cottage.\"\n\n``` javascript\nawait deleteListingByName(client, \"Cozy Cottage\");\n```\n\nExecuting the command above results in the following output.\n\n``` none\n1 document(s) was/were deleted.\n```\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n### Deleting Multiple Documents\n\nSometimes, you'll want to delete more than one document at a time. In this case, you can use Collection's deleteMany(). Like `deleteOne()`, `deleteMany()` requires that you pass a filter of type object. You can choose to include options of type object as well.\n\nLet's say we want to remove documents that have not been updated recently. We can call `deleteMany()` with a filter that searches for documents that were scraped prior to a particular date. 
Our function will look like the following.\n\n``` javascript\nasync function deleteListingsScrapedBeforeDate(client, date) {\n const result = await client.db(\"sample_airbnb\").collection(\"listingsAndReviews\")\n .deleteMany({ \"last_scraped\": { $lt: date } });\n console.log(`${result.deletedCount} document(s) was/were deleted.`);\n}\n```\n\nTo delete listings that were scraped prior to February 15, 2019, we can call `deleteListingsScrapedBeforeDate()` with a connected MongoClient and a Date instance that represents February 15.\n\n``` javascript\nawait deleteListingsScrapedBeforeDate(client, new Date(\"2019-02-15\"));\n```\n\nExecuting the command above will result in the following output.\n\n``` none\n606 document(s) was/were deleted.\n```\n\nNow, only recently scraped documents are in our collection.\n\nIf you're not a fan of copying and pasting, you can get a full copy of the code above in the Node.js Quick Start GitHub Repo.\n\n## Wrapping Up\n\nWe covered a lot today! Let's recap.\n\nWe began by exploring how MongoDB stores data in documents and collections. Then we learned the basics of creating, reading, updating, and deleting data.\n\nContinue on to the next post in this series, where we'll discuss how you can analyze and manipulate data using the aggregation pipeline.\n\nComments? Questions? We'd love to chat with you in the MongoDB Community.\n", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB", "Node.js"], "pageDescription": "Learn how to execute the CRUD (create, read, update, and delete) operations in MongoDB using Node.js in this step-by-step tutorial.", "contentType": "Quickstart"}, "title": "MongoDB and Node.js 3.3.2 Tutorial - CRUD Operations", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/swift/build-command-line-swift-mongodb", "action": "created", "body": "# Build a Command Line Tool with Swift and MongoDBBuild a Command Line Tool with Swift and MongoDB\n\n## Table of Contents\n\n- Introduction\n- TL;DR:\n- Goals\n- Prerequisites\n- Overview of Steps\n- Requirements for Solution\n- Launching Your Database Cluster in Atlas\n- Setting Up The Project\n- Looking at our Data\n- Integrating the MongoDB Swift Driver\n- Conclusion\n- Resources\n- Troubleshooting\n\n## Introduction\n\nBuilding something with your bare hands gives a sense of satisfaction like few other tasks. But there's really no comparison to the feeling you get when you create something that not only accomplishes the immediate task at hand but also enables you to more efficiently accomplish that same task in the future. Or, even better, when someone else can use what you have built to more easily accomplish their tasks. That is what we are going to do today. We are going to build something that will automate the process of importing data into MongoDB.\n\nAn executable program is powerful because it's self contained and transportable. There's no requirement to compile it or ensure that other elements are present in the environment. It just runs. You can share it with others and assuming they have a relatively similar system, it'll just run for them too. We're going to focus on accomplishing our goal using Swift, Apple's easy-to-learn programming language. We'll also feature use of our brand new MongoDB Swift Driver that enables you to create, read, update and delete data in a MongoDB database.\n\n## TL;DR:\n\nRather have a video run-through of this content? 
Check out the Youtube Video where my colleague Nic Raboy, and I talk through this very same content.\n\n:youtube]{vid=cHB8hzUSCpE}\n\n## Goals\n\nHere are the goals for this article.\n\n1. Increase your familiarity with MongoDB Atlas\n2. Introduce you to the [Swift Language, and the Xcode Development Environment\n3. Introduce you to the MongoDB Swift Driver\n4. Introduce you to the Swift Package Manager\n\nBy the end of this article, if we've met our goals, you will be able to do the following:\n\n1. Use Xcode to begin experimenting with Swift\n2. Use Swift Package Manager to:\n - Create a basic project.\n - Integrate the MongoDB Swift Driver into your project\n - Create an exectuable on your Mac.\n\n## Prerequisites\n\nBefore we begin, let's clarify some of the things you'll have to have in place to get started.\n\n- A Mac & MacOS (not an iOS device). You may be reading this on your Windows PC or an iPad. Sorry folks this tutorial was written for you to follow along on your Mac machine: MacBook, MacBook Pro, iMac, etc. You may want to check out macincloud if you're interested in a virtual Mac experience.\n- Xcode. You should have Xcode Installed - Visit Apple's App Store to install on your Mac.\n- Swift Installed - Visit Apple's Developer Site to learn more.\n- Access to a MongoDB Database - Visit MongoDB Atlas to start for free. Read more about MongoDB Atlas.\n\n \n\n>If you haven't had much experience with Xcode or MacOS Application Development, check out the guides on Apple's Developer Hub. Getting started is very easy and it's free!\n\n## What will we build?\n\nThe task I'm trying to automate involves importing data into a MongoDB database. Before we get too far down the path of creating a solution, let's document our set of requirements for what we'll create.\n\n \n\n## Overview of Steps\n\nHere's a quick run-down of the steps we'll work on to complete our task.\n\n1. Launch an Atlas Cluster.\n2. Add a Database user/password, and a network exception entry so you can access your database from your IP Address.\n3. Create a Swift project using Swift Package Manager (`swift package init --type=executable`)\n4. Generate an Xcode project using Swift Package Manager (`swift package generate-xcodeproj`)\n5. Create a (`for loop`) using (String) to access, and print out the data in your `example.csv` file. (See csvread.swift)\n6. Modify your package to pull in the MongoDB Swift Driver. (See Package.swift)\n7. Test. (`swift build; swift run`) Errors? See FAQ section below.\n8. Modify your code to incorporate the MongoDB Swift Driver, and write documents. (See Sources/command-line-swift-mongodb/main.swift)\n9. Test. (`swift build; swift run`) Errors? See FAQ section below.\n10. Create executable and release. (`swift package release`)\n\n## Requirements for Solution\n\n1. The solution must **import a set of data** that starts in CSV (or tabular/excel) format into an existing MongoDB database.\n2. Each row of the data in the CSV file **should become a separate document in the MongoDB Database**. Further, each new document should include a new field with the import date/time.\n3. It **must be done with minimal knowledge of MongoDB** - i.e. 
Someone with relatively little experience and knowledge of MongoDB should be able to perform the task within several minutes.\n\nWe could simply use mongoimport with the following command line:\n\n``` bash\nmongoimport --host localhost:27017 --type csv --db school --collection students --file example.csv --headerline\n```\n\nIf you're familiar with MongoDB, the above command line won't seem tricky at all. However, this will not satisfy our requirements for the following reasons:\n\n- **Requirement 1**: Pass - It will result in data being imported into MongoDB.\n- **Requirement 2**: Fail - While each row WILL become a separate document, we'll not get our additional date field in those documents.\n- **Requirement 3**: Fail - While the syntax here may seem rather straight-forward if you've used MongoDB before, to a newcomer, it can be a bit confusing. For example, I'm using localhost here... when we run this executable on another host, we'll need to replace that with the actual hostname for our MongoDB Database. The command syntax will get quite a bit more complex once this happens.\n\nSo then, how will we build something that meets all of our requirements?\n\nWe can build a command-line executable that uses the MongoDB Swift Driver to accomplish the task. Building a program to accomplish our task enables us to abstract much of the complexity associated with our task. Fortunately, there's a driver for Swift and using it to read CSV data, manipulate it and write it to a MongoDB database is really straight forward.\n\n \n\n## Launching Your Database Cluster in Atlas\n\nYou'll need to create a new cluster and load it with sample data. My colleague Maxime Beugnet has created a video tutorial to help you out, but I also explain the steps below:\n\n- Click \"Start free\" on the MongoDB homepage.\n- Enter your details, or just sign up with your Google account, if you have one.\n- Accept the Terms of Service\n- Create a *Starter* cluster.\n - Select the cloud provider where you'd like to store your MongoDB Database\n - Pick a region that makes sense for you.\n - You can change the name of the cluster if you like. I've called mine \"MyFirstCluster\".\n\nOnce your cluster launches, be sure that you add a Network Exception entry for your current IP and then add a database username and password. Take note of the username and password - you'll need these shortly.\n\n## Setting Up The Project\n\nWe'll start on our journey by creating a Swift Package using Swift Package Manager. This tool will give us a template project and establish the directory structure and some scaffolding we'll need to get started. We're going to use the swift command line tool with the `package` subcommand.\n\nThere are several variations that we can use. Before jumping in, let's example the difference in some of the flags.\n\n``` bash\nswift package init\n```\n\nThis most basic variation will give us a general purpose project. But, since we're building a MacOS, executable, let's add the `--type` flag to indicate the type of project we're working on.\n\n``` bash\nswift package init --type=executable\n```\n\nThis will create a project that defines the \"product\" of a build -- which is in essense our executable. Just remember that if you're creating an executable, typically for server-side Swift, you'll want to incorporate the `--type=executable` flag.\n\nXcode is where most iOS, and Apple developers in general, write and maintain code so let's prepare a project so we can use Xcode too. 
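\n\nIf you'd like to sanity-check the scaffolding before moving on, here is roughly the layout that `swift package init --type=executable` produces (the folder name becomes the package name, so your names will differ):\n\n``` none\n.\n├── Package.swift\n├── README.md\n├── Sources\n│   └── <package-name>\n│       └── main.swift\n└── Tests\n    └── <package-name>Tests\n```\n\nThe `main.swift` file under `Sources` is the one we'll be editing for the rest of this tutorial.\n\n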
Now that we've got our basic project scaffolding in place, let's create an Xcode project where we can modify our code.\n\nTo create an Xcode project simply execute the following command:\n\n``` bash\nswift package generate-xcodeproj\n```\n\nThen, we can open the `.xcproject` file. Your mac should automatically open Xcode as a result of trying to open an Xcode Project file.\n\n``` bash\nopen .xcodeproj/ # change this to the name that was created by the previous command.\n```\n\n## Looking at our Data\n\nWith our project scaffolding in place, let's turn our focus to the data we'll be manipulating with our executable. Let's look at the raw data first. Let's say there's a list of students that come out every month that I need to get into my database. It might look something like this:\n\n``` bash\nfirstname,lastname,assigned\nMichael,Basic,FALSE\nDan,Acquilone,FALSE\nEli,Zimmerman,FALSE\nLiam,Tyler,FALSE\nJane,Alberts,FALSE\nTed,Williams,FALSE\nSuzy,Langford,FALSE\nPaulina,Stern,FALSE\nJared,Lentz,FALSE\nJune,Gifford,FALSE\nWilma,Atkinson,FALSE\n```\n\nIn this example data, we have 3 basic fields of information: First Name, Last Name, and a Boolean value indicating whether or not the student has been assigned to a specific class.\n\nWe want to get this data from it's current form (CSV) into documents inside the database and along the way, add a field to record the date that the document was imported. This is going to require us to read the CSV file inside our Swift application. Before proceeding, make sure you either have similar data in a file to which you know the path. We'll be creating some code next to access that file with Swift.\n\nOnce we're finished, the data will look like the following, represented in a JSON document:\n\n``` json\n{\n\"_id\": {\n \"$oid\": \"5f491a3bf983e96173253352\" // this will come from our driver.\n},\n\"firstname\": \"Michael\",\n\"lastname\": \"Basic\",\n\"date\": {\n \"$date\": \"2020-08-28T14:52:43.398Z\" // this will be set by our Struct default value\n},\n\"assigned\": false\n}\n```\n\nIn order to get the rows and fields of names into MongoDB, we'll use Swift's built-in String class. This is a powerhouse utility that can do everything from read the contents of a file to interpolate embedded variables and do comparisons between two or more sets of strings. The class method contentsOfFile of the String class will access the file based on a filepath we provide, open the file and enable us to access its contents. Here's what our code might look like if we were just going to loop through the CSV file and print out the rows it contains.\n\n>You may be tempted to just copy/paste the code below. I would suggest that you type it in by hand... reading it from the screen. This will enable you to experience the power of auto-correct, and code-suggest inside Xcode. Also, be sure to modify the value of the `path` variable to point to the location where you put your `example.csv` file.\n\n``` swift\nimport Foundation\n\nlet path = \"/Users/mlynn/Desktop/example.csv\" // change this to the path of your csv file\ndo {\n let contents = try String(contentsOfFile: path, encoding: .utf8)\n let rows = contents.components(separatedBy: NSCharacterSet.newlines)\n for row in rows {\n if row != \"\" {\n print(\"Got Row: \\(row)\")\n }\n }\n}\n```\n\nLet's take a look at what's happening here.\n\n- Line 1: We'll use the Foundation core library. This gives us access to some basic string, character and comparison methods. 
The import declaration gives us access to native, as well as third-party libraries and modules.\n- Line 3: Hard code a path variable to the CSV file.\n- Lines 6-7: Use the String method to access the contents of the CSV file.\n- Line 8: Loop through each row in our file and display the contents.\n\nTo run this simple example, let's open the `main.swift` file that the command `swift package init` created for us. To edit this file in Xcode, traverse the folder tree under Project->Sources->Project name... and open `main.swift`. Replace the simple `hello world` with the code above.\n\nRunning this against our `example.csv` file, you should see something like the following output. We'll use the commands `swift build` and `swift run`.\n\n \n\n## Integrating the MongoDB Swift Driver\n\nWith this basic construct in place, we can now begin to incorporate the code necessary to insert a document into our database for each row of data in the CSV file. Let's start by configuring Swift Package Manager to integrate the MongoDB Swift Driver.\n\n \n\nNavigate in the project explorer to find the Package.swift file. Replace the contents with the Package.swift file from the repo:\n\n``` swift\n// swift-tools-version:5.2\n// The swift-tools-version declares the minimum version of Swift required to build this package.\nimport PackageDescription\n\nlet package = Package(\n    name: \"csvimport-swift\",\n    platforms: [\n        .macOS(.v10_15),\n    ],\n    dependencies: [\n        .package(url: \"https://github.com/mongodb/mongo-swift-driver.git\", from: \"1.0.1\"),\n    ],\n    targets: [\n        .target(\n            name: \"csvimport-swift\",\n            dependencies: [.product(name: \"MongoSwiftSync\", package: \"mongo-swift-driver\")]),\n        .testTarget(\n            name: \"csvimport-swiftTests\",\n            dependencies: [\"csvimport-swift\"]),\n    ]\n)\n```\n\n>If you're unfamiliar with Swift Package Manager, take a detour and read up on it before continuing.\n\nWe're including a statement that tells Swift Package Manager that we're building this executable for a specific set of macOS versions.\n\n``` swift\nplatforms: [\n    .macOS(.v10_15)\n],\n```\n\n>Tip: If you leave this statement out, you'll get a message stating that the package was designed to be built for macOS 10.10 or similar.\n\nNext, we've included references to the packages we'll need in our software to insert and manipulate MongoDB data. In this example, we'll concentrate on a synchronous implementation, using the `MongoSwiftSync` module from the mongo-swift-driver package.\n\nNow that we've included our dependencies, let's build the project. 
Build the project often so you catch any errors you may have inadvertently introduced early on.\n\n``` none\nswift package build\n```\n\nYou should get a response similar to the following:\n\n``` none\n3/3] Linking cmd\n```\n\nNow let's modify our basic program project to make use of our MongoDB driver.\n\n``` swift\nimport Foundation\nimport MongoSwiftSync\n\nvar murl: String = \"mongodb+srv://:\\(ProcessInfo.processInfo.environment[\"PASS\"]!)@myfirstcluster.zbcul.mongodb.net/?retryWrites=true&w=majority\"\nlet client = try MongoClient(murl)\n\nlet db = client.db(\"students\")\nlet session = client.startSession(options: ClientSessionOptions(causalConsistency: true))\n\nstruct Person: Codable {\n let firstname: String\n let lastname: String\n let date: Date = Date()\n let assigned: Bool\n let _id: BSONObjectID\n}\n\nlet path = \"/Users/mlynn/Desktop/example.csv\"\nvar tempAssigned: Bool\nvar count: Int = 0\nvar header: Bool = true\n\nlet personCollection = db.collection(\"people\", withType: Person.self)\n\ndo {\n let contents = try String(contentsOfFile: path, encoding: .utf8)\n let rows = contents.components(separatedBy: NSCharacterSet.newlines)\n for row in rows {\n if row != \"\" {\n var values: [String] = []\n values = row.components(separatedBy: \",\")\n if header == true {\n header = false\n } else {\n if String(values[2]).lowercased() == \"false\" || Bool(values[2]) == false {\n tempAssigned = false\n } else {\n tempAssigned = true\n }\n try personCollection.insertOne(Person(firstname: values[0], lastname: values[1], assigned: tempAssigned, _id: BSONObjectID()), session: session)\n count.self += 1\n print(\"Inserted: \\(count) \\(row)\")\n\n }\n }\n }\n}\n```\n\nLine 2 imports the driver we'll need (mongo-swift).\n\nNext, we configure the driver.\n\n``` swift\nvar murl: String = \"mongodb+srv://:\\(ProcessInfo.processInfo.environment[\"PASS\"]!)@myfirstcluster.zbcul.mongodb.net/?retryWrites=true&w=majority\"\nlet client = try MongoClient(murl)\n\nlet db = client.db(\"students\")\nlet session = client.startSession(options: ClientSessionOptions(causalConsistency: true))\n```\n\nRemember to replace `` with the user you created in Atlas.\n\nTo read and write data from and to MongoDB in Swift, we'll need to leverage a Codable structure. [Codeables are an amazing feature of Swift and definitely helpful for writing code that will write data to MongoDB. Codables is actually an alias for two protocols: Encodable, and Decodable. When we make our `Struct` conform to the Codable protocol, we're able to encode our string data into JSON and then decode it back into a simple `Struct` using JSONEncoder and JSONDecoder respectively. We'll need this structure because the format used to store data in MongoDB is slightly different that the representation you see of that data structure in Swift. We'll create a structure to describe what our document schema should look like inside MongoDB. Here's what our schema `Struct` should look like:\n\n``` swift\nstruct Code: Codable {\n let code: String\n let assigned: Bool\n let date: Date = Date()\n let _id: BSONObjectID\n}\n```\n\nNotice we've got all the elements from our CSV file plus a date field.\n\nWe'll also need a few temporary variables that we will use as we process the data. `count` and a special temporary variable I'll use when I determine whether or not a student is assigned to a class or not... `tempAssigned`. Lastly, in this code block, I'll create a variable to store the state of our position in the file. 
**header** will be set to true initially because we'll want to skip the first row of data. That's where the column headers live.\n\n``` swift\nlet path = \"/Users/mlynn/Desktop/example.csv\"\nvar tempAssigned: Bool\nvar count: Int = 0\nvar header: Bool = true\n```\n\nNow we can create a reference to the collection in our MongoDB Database that we'll use to store our student data. For lack of a better name, I'm calling mine `personCollection`. Also, notice that we're providing a link back to our `Struct` using the `withType` argument to the collection method. This ensures that the driver knows what type of data we're dealing with.\n\n``` swift\nlet personCollection = db.collection(\"people\", withType: Person.self)\n```\n\nThe next bit of code is at the heart of our task. We're going to loop through each row and create a document. I've commented and explained each row inline.\n\n``` swift\nlet contents = try String(contentsOfFile: path, encoding: .utf8) // get the contents of our csv file with the String built-in\nlet rows = contents.components(separatedBy: NSCharacterSet.newlines) // get the individual rows separated by newline characters\nfor row in rows { // Loop through all rows in the file.\n if row != \"\" { // in case we have an empty row... skip it.\n var values: String] = [] // create / reset the values array of type string - to null.\n values = row.components(separatedBy: \",\") // assign the values array to the fields in the row of data\n if header == true { // if it's the first row... skip it and.\n header = false // Set the header to false so we do this only once.\n } else {\n if String(values[2]).lowercased() == \"false\" || Bool(values[2]) == false {\n tempAssigned = false // Above: if its the string or boolean value false, so be it\n } else {\n tempAssigned = true // otherwise, explicitly set it to true\n }\n try personCollection.insertOne(Person(firstname: values[0], lastname: values[1], assigned: tempAssigned, _id: BSONObjectID()), session: session)\n count.self += 1 // Above: use the insertOne method of the collection class form\n print(\"Inserted: \\(count) \\(row)\") // the mongo-swift-driver and create a document with the Person ``Struct``.\n }\n }\n }\n```\n\n## Conclusion\n\nImporting data is a common challenge. Even more common is when we want to automate the task of inserting, or manipulating data with MongoDB. In this **how-to**, I've explained how you can get started with Swift and accomplish the task of simplifying data import by creating an executable, command-line tool that you can share with a colleague to enable them to import data for you. While this example is quite simple in terms of how it solves the problem at hand, you can certainly take the next step and begin to build on this to support command-line arguments and even use it to not only insert data but also to remove, and merge or update data.\n\nI've prepared a section below titled **Troubleshooting** in case you come across some common errors. I've tried my best to think of all of the usual issues you may find. However, if you do find another, issue, please let me know. The best way to do this is to [Sign Up for the MongoDB Community and be sure to visit the section for Drivers and ODMs.\n\n## Resources\n\n- GitHub\n- MongoDB Swift Driver Repository\n- Announcing the MongoDB Swift Driver\n- MongoDB Swift Driver Examples\n- Mike's Twitter\n\n## Troubleshooting\n\nUse this section to help solve some common problems. 
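\n\nOne frequent stumbling block isn't about the driver at all: the sample code above force-unwraps the `PASS` environment variable (`ProcessInfo.processInfo.environment[\"PASS\"]!`), so if that variable isn't set, the program will crash at launch with an \"Unexpectedly found nil\" error. Assuming you kept that pattern, export the variable in the same shell session before running:\n\n``` bash\n# make the Atlas password visible to the program, then build and run\nexport PASS='your-atlas-password'   # placeholder - use your own database user password\nswift build\nswift run\n```\n\n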
If you still have issues after reading these common solutions, please visit me in the MongoDB Community.\n\n### No Such Module\n\nThis occurs when Swift was unable to build the `mongo-swift-driver` module. This most typically occurs when a developer is attempting to use Xcode and has not specified a minimum target OS version. Review the attached image and note the sequence of clicks to get to the appropriate setting. Change that setting to 10.15 or greater.\n\n", "format": "md", "metadata": {"tags": ["Swift", "MongoDB"], "pageDescription": "Build a Command Line Tool with Swift and MongoDB", "contentType": "Code Example"}, "title": "Build a Command Line Tool with Swift and MongoDBBuild a Command Line Tool with Swift and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/capture-iot-data-stitch", "action": "created", "body": "# Capture IoT Data With MongoDB in 5 Minutes\n\n> Please note: This article discusses Stitch. Stitch is now MongoDB Realm. All the same features and functionality, now with a new name. Learn more here. We will be updating this article in due course.\n\nCapturing IoT (Internet of Things) data is a complex task for 2 main reasons:\n\n- We have to deal with a huge amount of data so we need a rock solid\n architecture.\n- While keeping a bulletproof security level.\n\nFirst, let's have a look at a standard IoT capture architecture:\n\nOn the left, we have our sensors. Let's assume they can push data every\nsecond over TCP using a\nPOST) and let's suppose we\nhave a million of them. We need an architecture capable to handle a\nmillion queries per seconds and able to resist any kind of network or\nhardware failure. TCP queries need to be distributed evenly to the\napplication servers using load\nbalancers) and\nfinally, the application servers are able to push the data to our\nmultiple\nMongos\nrouters from our MongoDB Sharded\nCluster.\n\nAs you can see, this architecture is relatively complex to install. We\nneed to:\n\n- buy and maintain a lot of servers,\n- make security updates on a regular basis of the Operating Systems\n and applications,\n- have an auto-scaling capability (reduce maintenance cost & enable\n automatic failover).\n\nThis kind of architecture is expensive and maintenance cost can be quite\nhigh as well.\n\nNow let's solve this same problem with MongoDB Stitch!\n\nOnce you have created a MongoDB Atlas\ncluster, you can attach a\nMongoDB Stitch application to it\nand then create an HTTP\nService\ncontaining the following code:\n\n``` javascript\nexports = function(payload, response) {\n const mongodb = context.services.get(\"mongodb-atlas\");\n const sensors = mongodb.db(\"stitch\").collection(\"sensors\");\n var body = EJSON.parse(payload.body.text());\n body.createdAt = new Date();\n sensors.insertOne(body)\n .then(result => {\n response.setStatusCode(201);\n });\n};\n```\n\nAnd that's it! That's all we need! Our HTTP POST service can be reached\ndirectly by the sensors from the webhook provided by MongoDB Stitch like\nso:\n\n``` bash\ncurl -H \"Content-Type: application/json\" -d '{\"temp\":22.4}' https://webhooks.mongodb-stitch.com/api/client/v2.0/app/stitchtapp-abcde/service/sensors/incoming_webhook/post_sensor?secret=test\n```\n\nBecause MongoDB Stitch is capable of scaling automatically according to\ndemand, you no longer have to take care of infrastructure or handling\nfailovers.\n\n## Next Step\n\nThanks for taking the time to read my post. 
I hope you found it useful\nand interesting.\n\nIf you are looking for a very simple way to get started with MongoDB,\nyou can do that in just 5 clicks on our MongoDB\nAtlas database service in the\ncloud.\n\nYou can also try MongoDB Stitch for\nfree and discover how the\nbilling works.\n\nIf you want to query your data sitting in MongoDB Atlas using MongoDB\nStitch, I recommend this article from Michael\nLynn.\n\n", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript"], "pageDescription": "Learn how to use MongoDB for Internet of Things data in as little as 5 minutes.", "contentType": "Article"}, "title": "Capture IoT Data With MongoDB in 5 Minutes", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/llm-accuracy-vector-search-unstructured-metadata", "action": "created", "body": "# Enhancing LLM Accuracy Using MongoDB Vector Search and Unstructured.io Metadata\n\nDespite the remarkable strides in artificial intelligence, particularly in generative AI (GenAI), precision remains an elusive goal for large language model (LLM) outputs. According to the latest annual McKinsey Global Survey, \u201cThe state of AI in 2023,\u201d GenAI has had a breakout year. Nearly one-quarter of C-suite executives personally use general AI tools for work, and over 25% of companies with AI implementations have general AI on their boards' agendas. Additionally, 40% of respondents plan to increase their organization's investment in AI due to advances in general AI. The survey reflects the immense potential and rapid adoption of AI technologies. However, the survey also points to a significant concern: **inaccuracy**.\n\nInaccuracy in LLMs often results in \"hallucinations\" or incorrect information due to limitations like shallow semantic understanding and varying data quality. Incorporating semantic vector search using MongoDB can help by enabling real-time querying of training data, ensuring that generated responses align closely with what the model has learned. Furthermore, adding metadata filtering extracted by Unstructured tools can refine accuracy by allowing the model to weigh the reliability of its data sources. Together, these methods can significantly minimize the risk of hallucinations and make LLMs more reliable.\n\nThis article addresses this challenge by providing a comprehensive guide on enhancing the precision of your LLM outputs using MongoDB's Vector Search and Unstructured Metadata extraction techniques. The main purpose of this tutorial is to equip you with the knowledge and tools needed to incorporate external source documents in your LLM, thereby enriching the model's responses with well-sourced and contextually accurate information. At the end of this tutorial, you can generate precise output from the OpenAI GPT-4 model to cite the source document, including the filename and page number. The entire notebook for this tutorial is available on Google Colab, but we will be going over sections of the tutorial together.\n\n## Why use MongoDB Vector Search?\nMongoDB is a NoSQL database, which stands for \"Not Only SQL,\" highlighting its flexibility in handling data that doesn't fit well in tabular structures like those in SQL databases. NoSQL databases are particularly well-suited for storing unstructured and semi-structured data, offering a more flexible schema, easier horizontal scaling, and the ability to handle large volumes of data. 
This makes them ideal for applications requiring quick development and the capacity to manage vast metadata arrays.\n\nMongoDB's robust vector search capabilities and ability to seamlessly handle vector data and metadata make it an ideal platform for improving the precision of LLM outputs. It allows for multifaceted searches based on semantic similarity and various metadata attributes. This unique feature set distinguishes MongoDB from traditional developer data platforms and significantly enhances the accuracy and reliability of the results in language modeling tasks.\n\n## Why use Unstructured metadata?\nThe Unstructured open-source library provides components for ingesting and preprocessing images and text documents, such as PDFs, HTML, Word docs, and many more. The use cases of unstructured revolve around streamlining and optimizing the data processing workflow for LLMs. The Unstructured modular bricks and connectors form a cohesive system that simplifies data ingestion and pre-processing, making it adaptable to different platforms and efficiently transforming unstructured data into structured outputs.\n\nMetadata is often referred to as \"data about data.\" It provides contextual or descriptive information about the primary data, such as its source, format, and relevant characteristics. The metadata from the Unstructured tools tracks various details about elements extracted from documents, enabling users to filter and analyze these elements based on particular metadata of interest. The metadata fields include information about the source document and data connectors. \n\nThe concept of metadata is familiar, but its application in the context of unstructured data brings many opportunities. The Unstructured package tracks a variety of metadata at the element level. This metadata can be accessed with `element.metadata` and converted to a Python dictionary representation using `element.metadata.to_dict()`.\n\nIn this article, we particularly focus on `filename` and `page_number` metadata to enhance the traceability and reliability of the LLM outputs. By doing so, we can cite the exact location of the PDF file that provides the answer to a user query. This becomes especially crucial when the LLM answers queries related to sensitive topics such as financial, legal, or medical questions.\n\n## Code walkthrough\n\n### Requirements\n\n 1. Sign up for a MongoDB Atlas account and install the PyMongo library in the IDE of your choice or Colab.\n 2. Install the Unstructured library in the IDE of your choice or Colab.\n 3. Install the Sentence Transformer library for embedding in the IDE of your choice or Colab.\n 4. Get the OpenAI API key. To do this, please ensure you have an OpenAI account.\n\n### Step-by-step process\n\n 1. Extract the texts and metadata from source documents using Unstructured's partition_pdf.\n 2. Prepare the data for storage and retrieval in MongoDB.\n - Vectorize the texts using the SentenceTransformer library.\n - Connect and upload records into MongoDB Atlas.\n - Query the index based on embedding similarity.\n 3. Generate the LLM output using the OpenAI Model.\n\n#### **Step 1: Text and metadata extraction**\nPlease make sure you have installed the required libraries to run the necessary code. 
\n\n```\n# Install Unstructured partition for PDF and dependencies\npip install unstructured\u201cpdf\u201d]\n!apt-get -qq install poppler-utils tesseract-ocr\n!pip install -q --user --upgrade pillow\n\npip install pymongo\npip install sentence-transformers\n```\nWe'll delve into extracting data from a PDF document, specifically the seminal \"Attention is All You Need\" paper, using the `partition_pdf` function from the `Unstructured` library in Python. First, you'll need to import the function with `from unstructured.partition.pdf import partition_pdf`. Then, you can call `partition_pdf` and pass in the necessary parameters: \n\n - `filename` specifies the PDF file to process, which is \"example-docs/Attention is All You Need.pdf.\" \n - `strategy` sets the extraction type, and for a more comprehensive scan, we use \"hi_res.\" \n - Finally, `infer_table_structured=True` tells the function to also extract table metadata.\n\nProperly set up, as you can see in our Colab file, the code looks like this:\n```\nfrom unstructured.partition.pdf import partition_pdf\n\nelements = partition_pdf(\"example-docs/Attention is All You Need.pdf\",\n strategy=\"hi_res\",\n infer_table_structured=True)\n```\nBy running this code, you'll populate the `elements` variable with all the extracted information from the PDF, ready for further analysis or manipulation. In the Colab\u2019s code snippets, you can inspect the extracted texts and element metadata. To observe the sample outputs \u2014 i.e., the element type and text \u2014 please run the line below. Use a print statement, and please make sure the output you receive matches the one below.\n```\ndisplay(*[(type(element), element.text) for element in elements[14:18]]) \n```\nOutput:\n\n```\n(unstructured.documents.elements.NarrativeText,\n 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English- to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.')\n(unstructured.documents.elements.NarrativeText,\n '\u2217Equal contribution. Listing order is random....\n```\nYou can also use Counter from Python Collection to count the number of element types identified in the document. 
\n\n```\nfrom collections import Counter\ndisplay(Counter(type(element) for element in elements))\n\n# outputs\nCounter({unstructured.documents.elements.NarrativeText: 86,\n unstructured.documents.elements.Title: 56,\n unstructured.documents.elements.Text: 45,\n unstructured.documents.elements.Header: 3,\n unstructured.documents.elements.Footer: 9,\n unstructured.documents.elements.Image: 5,\n unstructured.documents.elements.FigureCaption: 5,\n unstructured.documents.elements.Formula: 5,\n unstructured.documents.elements.ListItem: 43,\n unstructured.documents.elements.Table: 4})\n```\nFinally, you can convert the element objects into Python dictionaries using `convert_to_dict` built-in function to selectively extract and modify the element metadata.\n\n```\nfrom unstructured.staging.base import convert_to_dict\n\n# built-in function to convert elements into Python dictionary\nrecords = convert_to_dict(elements)\n\n# display the first record\nrecords[0]\n\n# output\n{'type': 'NarrativeText',\n 'element_id': '6b82d499d67190c0ceffe3a99958e296',\n 'metadata': {'coordinates': {'points': ((327.6542053222656,\n 199.8135528564453),\n (327.6542053222656, 315.7165832519531),\n (1376.0062255859375, 315.7165832519531),\n (1376.0062255859375, 199.8135528564453)),\n 'system': 'PixelSpace',\n 'layout_width': 1700,\n 'layout_height': 2200},\n 'filename': 'Attention is All You Need.pdf',\n 'last_modified': '2023-10-09T20:15:36',\n 'filetype': 'application/pdf',\n 'page_number': 1,\n 'detection_class_prob': 0.5751863718032837},\n 'text': 'Provided proper attribution is provided, Google hereby grants permission to reproduce the tables and figures in this paper solely for use in journalistic or scholarly works.'}\n```\n#### **Step 2: Data preparation, storage, and retrieval**\n\n**Step 2a:** Vectorize the texts using the SentenceTransformer library.\n\nWe must include the extracted element metadata when storing and retrieving the texts from MongoDB Atlas to enable data retrieval with metadata and vector search.\n\nFirst, we vectorize the texts to perform a similarity-based vector search. In this example, we use `microsoft/mpnet-base` from the Sentence Transformer library. This model has a 768 embedding size.\n\n```\nfrom sentence_transformers import SentenceTransformer\nfrom pprint import pprint\n\nmodel = SentenceTransformer('microsoft/mpnet-base')\n\n# Let's test and check the number of embedding size using this model\nemb = model.encode(\"this is a test\").tolist()\nprint(len(emb))\nprint(emb[:10])\nprint(\"\\n\")\n\n# output\n768\n[-0.15820945799350739, 0.008249259553849697, -0.033347081393003464, \u2026]\n```\n\nIt is important to use a model with the same embedding size defined in MongoDB Atlas Index. Be sure to use the embedding size compatible with MongoDB Atlas indexes. You can define the index using the JSON syntax below: \n\n```json\n{\n \"type\": \"vectorSearch,\n \"fields\": [{\n \"path\": \"embedding\",\n \"dimensions\": 768, # the dimension of `mpnet-base` model \n \"similarity\": \"euclidean\",\n \"type\": \"vector\"\n }]\n}\n```\n\nCopy and paste the JSON index into your MongoDB collection so it can index the `embedding` field in the records. Please view this documentation on [how to index vector embeddings for Vector Search. 
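\n\nIf you prefer to create the index from code rather than pasting JSON into the Atlas UI, recent versions of PyMongo (4.7 or newer) ship search-index helpers. The sketch below is an optional alternative under that assumption; it targets the same `embedding` field, 768 dimensions, and euclidean similarity, and names the index `default` to match the aggregation pipeline used later. Field names in the definition can vary slightly between Atlas versions, so double-check against the Atlas documentation referenced above.\n\n```\nfrom pymongo import MongoClient\nfrom pymongo.operations import SearchIndexModel\n\nuri = \"<>\"  # same connection string used in Step 2b\nclient = MongoClient(uri)\ncollection = client[\"unstructured_db\"][\"unstructured_col\"]\n\n# vector index over the \"embedding\" field produced by the mpnet-base model\nindex_model = SearchIndexModel(\n    definition={\n        \"fields\": [\n            {\n                \"type\": \"vector\",\n                \"path\": \"embedding\",\n                \"numDimensions\": 768,\n                \"similarity\": \"euclidean\",\n            }\n        ]\n    },\n    name=\"default\",\n    type=\"vectorSearch\",\n)\ncollection.create_search_index(model=index_model)\n```\n\n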
\n\nNext, create the text embedding for each record before uploading them to MongoDB Atlas:\n\n```\nfor record in records:\n txt = record'text']\n \n # use the embedding model to vectorize the text into the record\n record['embedding'] = model.encode(txt).tolist() \n\n# print the first record with embedding\nrecords[0]\n\n# output\n{'type': 'NarrativeText',\n 'element_id': '6b82d499d67190c0ceffe3a99958e296',\n 'metadata': {'coordinates': {'points': ((327.6542053222656,\n 199.8135528564453),\n (327.6542053222656, 315.7165832519531),\n (1376.0062255859375, 315.7165832519531),\n (1376.0062255859375, 199.8135528564453)),\n 'system': 'PixelSpace',\n 'layout_width': 1700,\n 'layout_height': 2200},\n 'filename': 'Attention is All You Need.pdf',\n 'last_modified': '2023-10-09T20:15:36',\n 'filetype': 'application/pdf',\n 'page_number': 1,\n 'detection_class_prob': 0.5751863718032837},\n 'text': 'Provided proper attribution is provided, Google hereby grants permission to reproduce the tables and figures in this paper solely for use in journalistic or scholarly works.',\n 'embedding': [-0.018366225063800812,\n -0.10861606895923615,\n 0.00344603369012475,\n 0.04939081519842148,\n -0.012352174147963524,\n -0.04383034259080887,...],\n'_id': ObjectId('6524626a6d1d8783bb807943')}\n}\n```\n\n**Step 2b**: Connect and upload records into MongoDB Atlas\n\nBefore we can store our records on MongoDB, we will use the PyMongo library to establish a connection to the target MongoDB database and collection. Use this code snippet to connect and test the connection (see the MongoDB documentation on [connecting to your cluster).\n\n```\nfrom pymongo.mongo_client import MongoClient\nfrom pymongo.server_api import ServerApi\n\nuri = \"<>\"\n\n# Create a new client and connect to the server\nclient = MongoClient(uri, server_api=ServerApi('1'))\n\n# Send a ping to confirm a successful connection\ntry:\n client.admin.command('ping')\n print(\"Pinged your deployment. You successfully connected to MongoDB!\")\nexcept Exception as e:\n print(e)\n```\n\nOnce run, the output: \u201cPinged your deployment. You successfully connected to MongoDB!\u201d will appear. \n\nNext, we can upload the records using PyMongo's `insert_many` function.\n\nTo do this, we must first grab our MongoDB database connection string. Please make sure the database and collection names match with the ones in MongoDB Atlas.\n\n```\ndb_name = \"unstructured_db\"\ncollection_name = \"unstructured_col\"\n\n# delete all first\nclientdb_name][collection_name].delete_many({})\n\n# insert\nclient[db_name][collection_name].insert_many(records)\n```\n\nLet\u2019s preview the records in MongoDB Atlas:\n\n![Fig 2. preview the records in the MongoDB Atlas collection\n\n**Step 2c**: Query the index based on embedding similarity\n\nNow, we can retrieve the relevant records by computing the similarity score defined in the index vector search. When a user sends a query, we need to vectorize it using the same embedding model we used to store the data. Using the `aggregate` function, we can pass a `pipeline` that contains the information to perform a vector search.\n\nNow that we have the records stored in MongoDB Atlas, we can search the relevant texts using the vector search. 
To do so, we need to vectorize the query using the same embedding model and use the aggregate function to retrieve the records from the index.\n\nIn the pipeline, we will specify the following:\n\n - **index**: The name of the vector search index in the collection\n - **vector**: The vectorized query from the user\n - **k**: Number of the most similar records we want to extract from the collection\n - **score**: The similarity score generated by MongoDB Atlas\n\n```\nquery = \"Does the encoder contain self-attention layers?\"\nvector_query = model.encode(query).tolist()\n\npipeline = \n{\n\"$vectorSearch\": {\n \"index\":\"default\",\n \"queryVector\": vector_query,\n \"path\": \"embedding\",\n \"limit\": 5,\n \"numCandidates\": 50\n }\n },\n {\n \"$project\": {\n \"embedding\": 0,\n \"_id\": 0,\n \"score\": {\n \"$meta\": \"searchScore\"\n },\n }\n }\n]\n\nresults = list(client[db_name][collection_name].aggregate(pipeline))\n```\n\nThe above pipeline will return the top five records closest to the user\u2019s query embedding. We can define `k` to retrieve the [top-k records in MongoDB Atlas. Please note that the results contain the `metadata`, `text`, and `score`. We can use this information to generate the LLM output in the following step. \n\nHere\u2019s one example of the top five nearest neighbors from the query above:\n\n```\n{'element_id': '7128012294b85295c89efee3bc5e72d2',\n 'metadata': {'coordinates': {'layout_height': 2200,\n 'layout_width': 1700,\n 'points': [290.50477600097656,\n 1642.1170677777777],\n [290.50477600097656,\n 1854.9523748867755],\n [1403.820083618164,\n 1854.9523748867755],\n [1403.820083618164,\n 1642.1170677777777]],\n 'system': 'PixelSpace'},\n 'detection_class_prob': 0.9979791045188904,\n 'file_directory': 'example-docs',\n 'filename': 'Attention is All You Need.pdf',\n 'filetype': 'application/pdf',\n 'last_modified': '2023-09-20T17:08:35',\n 'page_number': 3,\n 'parent_id': 'd1375b5e585821dff2d1907168985bfe'},\n 'score': 0.2526094913482666,\n 'text': 'Decoder: The decoder is also composed of a stack of N = 6 identical '\n 'layers. In addition to the two sub-layers in each encoder layer, '\n 'the decoder inserts a third sub-layer, which performs multi-head '\n 'attention over the output of the encoder stack. Similar to the '\n 'encoder, we employ residual connections around each of the '\n 'sub-layers, followed by layer normalization. We also modify the '\n 'self-attention sub-layer in the decoder stack to prevent positions '\n 'from attending to subsequent positions. This masking, combined with '\n 'fact that the output embeddings are offset by one position, ensures '\n 'that the predictions for position i can depend only on the known '\n 'outputs at positions less than i.',\n 'type': 'NarrativeText'}\n```\n\n**Step 3: Generate the LLM output with source document citation**\n\nWe can generate the output using the OpenAI GPT-4 model. We will use the `ChatCompletion` function from OpenAI API for this final step. [ChatCompletion API processes a list of messages to generate a model-driven response. Designed for multi-turn conversations, they're equally adept at single-turn tasks. The primary input is the 'messages' parameter, comprising an array of message objects with designated roles (\"system\", \"user\", or \"assistant\") and content. Usually initiated with a system message to guide the assistant's behavior, conversations can vary in length with alternating user and assistant messages. 
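\n\nOne practical note before we make that call: the snippets below pass a `context` variable as the assistant message, and its assembly isn't shown. A minimal way to build it from the `results` list returned by the vector search above (an assumption about variable naming, not the only approach) is to join each hit's text with its `filename` and `page_number` metadata so the model has something concrete to cite:\n\n```\n# combine retrieved texts with their source metadata into one context string\ncontext = '\\n\\n'.join(\n    f\"{r['text']} (filename: {r['metadata']['filename']}, page_number: {r['metadata']['page_number']})\"\n    for r in results\n)\n```\n\n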
While the system message is optional, its absence may default the model to a generic helpful assistant behavior.\n\nYou\u2019ll need an OpenAI API key to run the inferences. Before attempting this step, please ensure you have an OpenAI account. Assuming you store your OpenAI API key in your environment variable, you can import it using the `os.getenv` function:\n\n```\nimport os\nimport openai\n\n# Get the API key from the env\nopenai.api_key = os.getenv(\"OPENAI_API_KEY\")\n```\n\nNext, having a compelling prompt is crucial for generating a satisfactory result. Here\u2019s the prompt to generate the output with specific reference where the information comes from \u2014 i.e., filename and page number.\n\n```\nresponse = openai.ChatCompletion.create(\n model=\"gpt-4\",\n messages=\n {\"role\": \"system\", \"content\": \"You are a useful assistant. Use the assistant's content to answer the user's query \\\n Summarize your answer using the 'texts' and cite the 'page_number' and 'filename' metadata in your reply.\"},\n {\"role\": \"assistant\", \"content\": context},\n {\"role\": \"user\", \"content\": query},\n ],\n temperature = 0.2\n)\n```\n\nIn this Python script, a request is made to the OpenAI GPT-4 model through the `ChatCompletion.create` method to process a conversation. The conversation is structured with predefined roles and messages. It is instructed to generate a response based on the provided context and user query, summarizing the answer while citing the page number and file name. The `temperature` parameter set to 0.2 influences the randomness of the output, favoring more deterministic responses.\n\n## Evaluating the LLM output quality with source document\n\nOne of the key features of leveraging unstructured metadata in conjunction with MongoDB's Vector Search is the ability to provide highly accurate and traceable outputs.\n\n```\nUser query: \"Does the encoder contain self-attention layers?\"\n```\n\nYou can insert this query into the ChatCompletion API as the \u201cuser\u201d role and the context from MongoDB retrieval results as the \u201cassistant\u201d role. To enforce the model responds with the filename and page number, you can provide the instruction in the \u201csystem\u201d role.\n\n```\nresponse = openai.ChatCompletion.create(\n model=\"gpt-4\",\n messages=[\n {\"role\": \"system\", \"content\": \"You are a useful assistant. Use the assistant's content to answer the user's query \\\n Summarize your answer using the 'texts' and cite the 'page_number' and 'filename' metadata in your reply.\"},\n {\"role\": \"assistant\", \"content\": context},\n {\"role\": \"user\", \"content\": query},\n ],\n temperature = 0.2\n)\n\nprint(response)\n\n# output\n{\n \"id\": \"chatcmpl-87rNcLaEYREimtuWa0bpymWiQbZze\",\n \"object\": \"chat.completion\",\n \"created\": 1696884180,\n \"model\": \"gpt-4-0613\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"Yes, the encoder does contain self-attention layers. This is evident from the text on page 5 of the document \\\"Attention is All You Need.pdf\\\".\"\n },\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 1628,\n \"completion_tokens\": 32,\n \"total_tokens\": 1660\n }\n}\n```\n\nSource document:\n![Fig 3. The relevant texts in the source document to answer user query\n\nLLM Output:\n\nThe highly specific output cites information from the source document, \"Attention is All You Need.pdf,\" stored in the 'example-docs' directory. 
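\n\nIf you only want the assistant's reply rather than the whole response object shown above, you can index into the same structure:\n\n```\n# the reply text lives under choices[0].message.content\nprint(response['choices'][0]['message']['content'])\n```\n\n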
The answers are referenced with exact page numbers, making it easy for anyone to verify the information. This level of detail is crucial when answering queries related to research, legal, or medical questions, and it significantly enhances the trustworthiness and reliability of the LLM outputs.\n\n## Conclusion\nThis article presents a method to enhance LLM precision using MongoDB's Vector Search and Unstructured Metadata extraction techniques. These approaches, facilitating real-time querying and metadata filtering, substantially mitigate the risk of incorrect information generation. MongoDB's capabilities, especially in handling vector data and facilitating multifaceted searches, alongside the Unstructured library's data processing efficiency, emerge as robust solutions. These techniques not only improve accuracy but also enhance the traceability and reliability of LLM outputs, especially when dealing with sensitive topics, equipping users with the necessary tools to generate more precise and contextually accurate outputs from LLMs.\n\nReady to get started? Request your Unstructured API key today and unlock the power of Unstructured API and Connectors. Join the Unstructured community group to connect with other users, ask questions, share your experiences, and get the latest updates. We can\u2019t wait to see what you\u2019ll build.\n\n \n\n \n\n ", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI"], "pageDescription": "This article provides a comprehensive guide on improving the precision of large language models using MongoDB's Vector Search and Unstructured.io's metadata extraction techniques, aiming to equip readers with the tools to produce well-sourced and contextually accurate AI outputs.", "contentType": "Tutorial"}, "title": "Enhancing LLM Accuracy Using MongoDB Vector Search and Unstructured.io Metadata", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/how-to-use-custom-archival-rules-and-partitioning-on-mongodb-atlas-online-archive", "action": "created", "body": "# How to Use Custom Archival Rules and Partitioning on MongoDB Atlas Online Archive\n\n>As of June 2022, the functionality previously known as Atlas Data Lake is now named Atlas Data Federation. Atlas Data Federation\u2019s functionality is unchanged and you can learn more about it here. Atlas Data Lake will remain in the Atlas Platform, with newly introduced functionality that you can learn about here.\n\nOkay, so you've set up a simple MongoDB Atlas Online Archive, and now you might be wondering, \"What's next?\" In this post, we will cover some more advanced Online Archive use cases, including setting up custom archival rules and how to improve query performance through partitioning.\n\n## Prerequisites\n\n- The Online Archive feature is available on M10 and greater Atlas clusters that run MongoDB 3.6 or later. So for this demo, you will need to create a M10 cluster in MongoDB Atlas. Click here for information on setting up a new MongoDB Atlas cluster or check out How to Manage Data at Scale With MongoDB Atlas Online Archive.\n\n- Ensure that each database has been seeded by loading sample data into our Atlas cluster. 
I will be using the `sample_analytics.customers` dataset for this demo.\n\n## Creating a Custom Archival Rule\n\nCreating an Online Archive rule based on the date makes sense for a lot of archiving situations, such as automatically archiving documents that are over X years old, or that were last updated Y months ago. But what if you want to have more control over what gets archived? Some examples of data that might be eligible to be archived are:\n\n- Data that has been flagged for archival by an administrator.\n- Discontinued products on your eCommerce site.\n- User data from users that have closed their accounts on your platform (unless they are European citizens).\n- Employee data from employees that no longer work at your company.\n\nThere are lots of reasons why you might want to set up custom rules for archiving your cold data. Let's dig into how you can achieve this using custom archive rules with MongoDB Atlas Online Archive. For this demo, we will be setting up an automatic archive of all users in the `sample_analytics.customers` collection that have the 'active' field set to `false`.\n\nIn order to configure our Online Archive, first navigate to the Cluster page for your project, click on the name of the cluster you want to configure Online Archive for, and click on the **Online Archive** tab.\n\nNext, click the Configure Online Archive button the first time and the **Add Archive** button subsequently to start configuring Online Archive for your collection. Then, you will need to create an Archiving Rule by specifying the collection namespace, which will be `sample_analytics.customers`.\n\nYou will also need to specify your custom criteria for archiving documents. You can specify the documents you would like to filter for archival with a MongoDB query, in JSON, the same way as you would write filters in MongoDB Atlas.\n\n> Note: You can use any valid MongoDB Query Language (MQL) query, however, you cannot use the empty document argument ({}) to return all documents.\n\nTo retrieve the documents staged for archival, we will use the following find command. This will retrieve all documents that have the \\`active\\` field set to \\`false\\` or do not have an \\`active\\` key at all.\n\n```\n{ $or: \n { active: false }, \n { active: null }\n] }\n```\nContinue setting up your archive, and then you should be done!\n\n> Note: It's always a good idea to run your custom queries in the [mongo shell first to ensure that you are archiving the correct documents.\n\n> Note: Once you initiate an archive and a MongoDB document is queued for archiving, you can no longer edit the document.\n\n## Improving Query Performance Through Partitioning\n\nOne of the reasons we archive data is to access and query it in the future, if for some reason we still need to use it. In fact, you might be accessing this data quite frequently! That's why it's useful to be able to partition your archived data and speed up query times. With Atlas Online Archive, you can specify the two most frequently queried fields in your collection to create partitions in your online archive.\n\nFields with a moderate to high cardinality (or the number of elements in a set or grouping) are good choices to be used as a partition. Queries that don't contain these fields will require a full collection scan of all archived documents, which will take longer and increase your costs. However, it's a bit of a bit of a balancing act. 
\n\nFor example, fields with low cardinality won't partition the data well and therefore won't improve query performance greatly. However, this may be OK for range queries or collection scans, and it will result in fast archival performance.\n\nFields with mid to high cardinality will partition the data better, leading to better general query performance, at the cost of slightly slower archival performance.\n\nFields with extremely high cardinality like `_id` will lead to poor query performance for everything but \"point queries\" that query on _id, and will lead to terrible archival performance due to writing many partitions.\n\n> Note: Online Archive is powered by MongoDB Atlas Data Lake. To learn more about how partitions improve your query performance in Data Lake, see Data Structure in cloud object storage - Amazon S3 or Microsoft Azure Blob Storage.\n\nThe specified fields are used to partition your archived data for optimal query performance. Partitions are similar to folders. You can move whichever field to the first position of the partition if you frequently query by that field.\n\nThe order of fields listed in the path is important in the same way as it is in Compound Indexes. Data in the specified path is partitioned first by the value of the first field, and then by the value of the next field, and so on. Atlas supports queries on the specified fields using the partitions.\n\nYou can specify the two most frequently queried fields in your collection and order them from the most frequently queried in the first position to the least queried field in the second position. For example, suppose you are configuring the online archive for your `customers` collection in the `sample_analytics` database. If your archiving rule is the custom rule from our example above, your most frequently queried field is `username`, and your second most frequently queried field is `email`, then your partition will look like the following:\n\n```\n/username/email\n```\n\nAtlas creates partitions first for the `username` field, followed by the `email` field. Atlas uses the partitions for queries on the following fields:\n\n- the `username` field\n- the `username` field and the `email` field\n\n> Note: The value of a partition field can be up to a maximum of 700 characters. Documents with values exceeding 700 characters are not archived.\n\nFor more information on how to partition data in your Online Archive, please refer to the documentation.\n\n## Summary\n\nIn this post, we covered some advanced use cases for Online Archive to help you take advantage of this MongoDB Atlas feature. 
We initialized a demo project to show you how to set up custom archival rules with Atlas Online Archive, as well as improve query performance through partitioning your archived data.\n\nIf you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "So you've set up a simple MongoDB Atlas Online Archive, and now you might be wondering, \"What's next?\" In this post, we will cover some more advanced Online Archive use cases, including setting up custom archival rules and how to improve query performance through partitioning.", "contentType": "Tutorial"}, "title": "How to Use Custom Archival Rules and Partitioning on MongoDB Atlas Online Archive", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/create-data-api-10-min-realm", "action": "created", "body": "# Create a Custom Data Enabled API in MongoDB Atlas in 10 Minutes or Less\n\n## Objectives\n\n- Deploy a Free Tier Cluster\n- Load Sample Data into your MongoDB Atlas Cluster\n- Create a MongoDB Realm application\n- Create a 3rd Party Service, an API with an HTTP service listener\n- Test the API using Postman\n\n## Prerequisites\n\n- MongoDB Atlas Account with a Cluster Running\n- Postman Installed - See \n\n## Getting Started\n\nCreating an Application Programming Interface (API) that exposes data and responds to HTTP requests is very straightforward. With MongoDB Realm, you can create a data enabled endpoint in about 10 minutes or less. In this article, I'll explain the steps to follow to quickly create an API that exposes data from a sample database in MongoDB Atlas. We'll deploy the sample dataset, create a Realm App with an HTTP listener, and then we'll test it using Postman.\n\n> I know that some folks prefer to watch and learn, so I've created this video overview. Be sure to pause the video at the various points where you need to install the required components and complete some of the required steps.\n>\n> :youtube]{vid=bM3fcw4M-yk}\n\n## Step 1: Deploy a Free Tier Cluster\n\nIf you haven't done so already, visit [this link and follow along to deploy a free tier cluster. This cluster will be where we store and manage the data associated with our data API.\n\n## Step 2: Load Sample Datasets into Your Atlas Cluster\n\nMongoDB Atlas offers several sample datasets that you can easily deploy once you launch a cluster. Load the sample datasets by clicking on the three dots button to see additional options, and then select \"Load Sample Dataset.\" This process will take approximately five minutes and will add a number of really helpful databases and collections to your cluster. Be aware that these will consume approximately 350mb of storage. If you intend to use your free tier cluster for an application, you may want to remove some of the datasets that you no longer need. You can always re-deploy these should you need them.\n\nNavigate to the **Collections** tab to see them all. All of the datasets will be created as separate databases prefixed with `sample_` and then the name of the dataset. The one we care about for our API is called `sample_analytics`. Open this database up and you'll see one collection called `customers`. Click on it to see the data we will be working with.\n\nThis collection will have 500 documents, with each containing sample Analytics Customer documents. 
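\n\nIf you'd like to confirm the load before moving on, and you happen to have `mongosh` connected to your cluster, a quick count will do it (this step is optional and assumes a shell connection the tutorial itself doesn't require):\n\n``` javascript\n// run in mongosh while connected to your Atlas cluster\ndb.getSiblingDB(\"sample_analytics\").customers.countDocuments()\n// expected result: 500\n```\n\n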
Don't worry about all the fields or the structure of these documents just now\u2014we'll just be using this as a simple data source.\n\n## Step 3: Create a New App\n\nTo begin creating a new Application Service, navigation from Atlas to App Services.\n\nAt the heart of the entire process are Application Services. There are several from which to choose and to create a data enabled endpoint, you'll choose the HTTP Service with HTTPS Endpoints. HTTPS Endpoints, like they sound, are simply hooks into the web interface of the back end. Coming up, I'll show you the code (a function) that gets executed when the hook receives data from your web client.\n\nTo access and create 3rd Party Services, click the link in the left-hand navigation labeled \"3rd Party Services.\"\n\nNext, let's add a service. Find, and click the button labeled \"Add a Service.\"\n\nNext, we'll specify that we're creating an HTTP service and we'll provide a name for the service. The name is not incredibly significant. I'm using `api` in this example.\n\nWhen you create an HTTP Service, you're enabling access to this service from Realm's serverless functions in the form of an object called `context.services`. More on that later when we create a serverless function attached to this service. Name and add the service and you'll then get to create an Incoming HTTPS Endpoint. This is the process that will be contacted when your clients request data of your API.\n\nCall the HTTPS Endpoint whatever you like, and set the parameters as you see below:\n\n \n\n ##### HTTPS Endpoint Properties \n| Property | Description |\n|------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| Name | Choose a name for your HTTPS Endpoint... any value will do. |\n| Authentication | This is how your HTTPS Endpoint will authenticate users of your API. For this simple exercise, let's choose `System`. |\n| Log Function Arguments | Enabling this allows you to get additional log content with the arguments sent from your web clients. Turn this on. |\n| HTTPS Endpoint URL | This is the URL created by Realm. Take note of this - we'll be using this URL to test our API. |\n| HTTP Method | Our API can listen for the various HTTP methods (GET, POST, PATCH, etc.). Set this to POST for our example. |\n| Respond with Result | Our API can respond to web client requests with a dataset result. You'll want this on for our example. |\n| AUTHORIZATION - Can evaluate | This is a JSON expression that must evaluate to TRUE before the function may run. If this field is blank, it will evaluate to TRUE. This expression is evaluated before service-specific rules. |\n| Request Validation | Realm can validate incoming requests to protect against DDOS attacks and users that you don't want accessing your API. Set this to `Require Secret` for our example. |\n| Secret | This is the secret passphrase we'll create and use from our web client. We'll send this using a `PARAM` in the POST request. More on this below. |\n\nAs mentioned above, our example API will respond to `POST` requests. 
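\n\nTo make that concrete, here's roughly what a client request will look like once the endpoint is deployed. We'll walk through the same test with Postman shortly; the URL below is a placeholder for the HTTPS Endpoint URL you noted above, and `YOURSECRET` stands in for the secret you configured:\n\n``` bash\ncurl -X POST \"<your HTTPS Endpoint URL>?secret=YOURSECRET\"\n```\n\n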
Next up, you'll get to create the logic in a function that will be executed whenever your API is contacted with a POST request.\n\n### Defining the Function\n\nLet's define the function that will be executed when the HTTPS Endpoint receives a POST request.\n\n> As you modify the function, and save settings, you will notice a blue bar appear at the top of the console.\n>\n> \n>\n> This appears to let you know you have modified your Realm Application but have not yet deployed those changes. It's good practice to batch your changes. However, make sure you remember to review and deploy prior to testing.\n\nRealm gives you the ability to specify what logic gets executed as a result of receiving a request on the HTTPS Endpoint URL. What you see above is the default function that's created for you when you create the service. It's meant to be an example and show you some of the things you can do in a Realm Backend function. Pay close attention to the `payload` variable. This is what's sent to you by the calling process. In our case, that's going to be from a form, or from an external JavaScript script. We'll come back to this function shortly and modify it accordingly.\n\nUsing our sample database `sample_analytics` and our `customers`, let's write a basic function to return 10 customer documents.\n\nAnd here's the source:\n\n``` JavaScript\nexports = function(payload) {\n const mongodb = context.services.get(\"mongodb-atlas\");\n const mycollection = mongodb.db(\"sample_analytics\").collection(\"customers\");\n return mycollection.find({}).limit(10).toArray();\n};\n```\n\nThis is JavaScript - ECMAScript 6, to be specific, also known as ES6 and ECMAScript 2015, was the second major revision to JavaScript.\n\nLet's call out an important element of this script: `context`.\n\nRealm functions can interact with connected services, user information, predefined values, and other functions through modules attached to the global `context` variable.\n\nThe `context` variable contains the following modules:\n\n| Property | Description |\n|---------------------|------------------------------------------------------------------------------|\n| `context.services` | Access service clients for the services you've configured. |\n| `context.values` | Access values that you've defined. |\n| `context.user` | Access information about the user that initiated the request. |\n| `context.request` | Access information about the HTTP request that triggered this function call. |\n| `context.functions` | Execute other functions in your Realm app. |\n| `context.http` | Access the HTTP service for get, post, put, patch, delete, and head actions. |\nOnce you've set your configuration for the Realm HTTPS Endpoint, copy the HTTPS Endpoint URL, and take note of the Secret you created. You'll need these to begin sending data and testing.\n\nSpeaking of testing... Postman is a great tool that enables you to test an API like the one we've just created. Postman acts like a web client - either a web application or a browser.\n\n> If you don't have Postman installed, visit this link (it's free!): \n\nLet's test our API with Postman:\n\n1. Launch Postman and click the plus (+ New) to add a new request. You may also use the Launch screen - whichever you're more comfortable with.\n2. Give your request a name and description, and choose/create a collection to save it in.\n3. Paste the HTTPS Endpoint URL you created above into the URL bar in Postman labeled `Enter request URL`.\n4. 
Change the `METHOD` from `GET` to `POST` - this will match the `HTTP Method` we configured in our HTTPS Endpoint above.\n5. We need to append our `secret` parameter to our request so that our HTTPS Endpoint validates and authorizes the request. Remember, we set the secret parameter above. There are two ways you can send the secret parameter. The first is by appending it to the HTTPS Endpoint URL by adding `?secret=YOURSECRET`. The other is by creating a `Parameter` in Postman. Either way will work.\n\nOnce you've added the secret, you can click `SEND` to send the request to your newly created HTTPS Endpoint.\n\nIf all goes well, Postman will send a POST request to your API and Realm will execute the Function you created, returning 10 records from the `Sample_Analytics` database, and the `Customers` collection...\n\n``` javascript\n\n{\n \"_id\": {\n \"$oid\": \"5ca4bbcea2dd94ee58162a68\"\n },\n \"username\": \"fmiller\",\n \"name\": \"Elizabeth Ray\",\n \"address\": \"9286 Bethany Glens\\nVasqueztown, CO 22939\",\n \"birthdate\": {\n \"$date\": {\n \"$numberLong\": \"226117231000\"\n }\n },\n \"email\": \"arroyocolton@gmail.com\",\n \"active\": true,\n \"accounts\": [\n {\n \"$numberInt\": \"371138\"\n },\n ...\n ],\n \"tier_and_details\": {\n \"0df078f33aa74a2e9696e0520c1a828a\": {\n \"tier\": \"Bronze\",\n \"id\": \"0df078f33aa74a2e9696e0520c1a828a\",\n \"active\": true,\n \"benefits\": [\n \"sports tickets\"\n ]\n },\n \"699456451cc24f028d2aa99d7534c219\": {\n \"tier\": \"Bronze\",\n \"benefits\": [\n \"24 hour dedicated line\",\n \"concierge services\"\n ],\n \"active\": true,\n \"id\": \"699456451cc24f028d2aa99d7534c219\"\n }\n }\n},\n// remaining documents clipped for brevity\n...\n]\n```\n\n## Taking This Further\n\nIn just a few minutes, we've managed to create an API that exposes (READs) data stored in a MongoDB Database. This is just the beginning, however. From here, you can now expand on the API and create additional methods that handle all aspects of data management, including inserts, updates, and deletes.\n\nTo do this, you'll create additional HTTPS Endpoints, or modify this HTTPS Endpoint to take arguments that will control the flow and behavior of your API.\n\nConsider the following example, showing how you might evaluate parameters sent by the client to manage data.\n\n``` JavaScript\nexports = async function(payload) {\n\n const mongodb = context.services.get(\"mongodb-atlas\");\n const db = mongodb.db(\"sample_analytics\");\n const customers = db.collection(\"customers\");\n\n const cmd=payload.query.command;\n const doc=payload.query.doc;\n\n switch(cmd) {\n case \"create\":\n const result= await customers.insertOne(doc);\n if(result) {\n return { text: `Created customer` }; \n }\n return { text: `Error stashing` };\n case \"read\":\n const findresult = await customers.find({'username': doc.username}).toArray();\n return { findresult };\n case \"delete\":\n const delresult = await customers.deleteOne( { username: { $eq: payload.query.username }});\n return { text: `Deleted ${delresult.deletedCount} stashed items` };\n default:\n return { text: \"Unrecognized command.\" };\n }\n}\n```\n\n## Conclusion\n\nMongoDB Realm enables developers to quickly create fully functional application components without having to implement a lot of boilerplate code typically required for APIs. Note that the above example, while basic, should provide you with a good starting point. for you. 
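As an illustration, once a function like the one above is deployed, a client could exercise the `delete` branch with a single request. The endpoint URL and secret below are placeholders, and `fmiller` is one of the usernames from the sample dataset.

``` javascript
// Hypothetical client call for the "delete" branch of the function above.
const url =
  "<YOUR HTTPS ENDPOINT URL>?secret=<YOUR SECRET>&command=delete&username=fmiller";

fetch(url, { method: "POST" })
  .then((response) => response.json())
  .then((result) => console.log(result.text)); // e.g., "Deleted 1 stashed items"
```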
Please join me in the [Community Forums if you have questions.\n\nYou may also be interested in learning more from an episode of the MongoDB Podcast where we covered Mobile Application Development with Realm.\n\n#### Other Resources\nData API Documentation - docs:https://docs.atlas.mongodb.com/api/data-api/\n\n", "format": "md", "metadata": {"tags": ["Realm"], "pageDescription": "Learn how to create a data API with Atlas Data API in 10 minutes or less", "contentType": "Tutorial"}, "title": "Create a Custom Data Enabled API in MongoDB Atlas in 10 Minutes or Less", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/how-build-healthcare-interoperability-microservice-using-fhir-mongodb", "action": "created", "body": "# How to Build a Healthcare Interoperability Microservice Using FHIR and MongoDB\n\n# How to Build a Healthcare Interoperability Microservice Using FHIR and MongoDB\n\nInteroperability refers to a system\u2019s or software's capability to exchange and utilize information. Modern interoperability standards, like Fast Healthcare interoperability Resources (or FHIR), precisely define how data should be communicated. Like most current standards, FHIR uses REST APIs which are set in JSON format. However, these standards do not set how data should be stored, providing software vendors with flexibility in managing information according to their preferences.\n\nThis is where MongoDB's approach comes into play \u2014 data that is accessed together should be stored together. The compatibility between the FHIR\u2019s resource format and MongoDB's document model allows the data to be stored exactly as it should be communicated. This brings several benefits, such as removing the need for any middleware/data processing tool which decreases development complexity and accelerates read/write operations. \n\nAdditionally, MongoDB can also allow you to create a FHIR-compliant Atlas data API. This benefits healthcare providers using software vendors by giving them control over their data without complex integrations. It reduces integration complexity by handling data processing at a platform level. MongoDB's app services also offer security features like authentication. This, however, is not a full clinical data repository nor is it meant to replace one. Rather, this is yet another integration capability that MongoDB has.\n\nIn this article, we will walk you through how you can expose the data of FHIR resources through Atlas Data API to two different users with different permissions.\n\n## Scenario\n\n- Dataset: We have a simple dataset where we have modeled the data using FHIR-compliant schemas. These resources are varied: patients, locations, practitioners, and appointments.\n- We have two users groups that have different responsibilities:\n - The first is a group of healthcare providers. These individuals work in a specific location and should only have access to the appointments in said location.\n - The second is a group that works at a healthcare agency. These individuals analyze the appointments from several centers. They should not be able to look at personal identifiable information (or PII).\n\n## Prerequisites\n\n- Deploy an M0+ Atlas cluster. \n- Install Python3 along with PyMongo and Mimesis modules to generate and insert documents.\n\n## Step 1: Insert FHIR documents into the database\n\nClone this GitHub repository on your computer. \n\n- Add your connection string on the config.py file. 
You can find it by following the instructions in our docs.\n- Execute the files: locGen.py,pracGen.py, patientGen.py, and ProposedAppointmentGeneration.py in that order.\n\n> Note: The last script will take a couple of minutes as it creates the appointments with the relevant information from the other collections.\n\nBefore continuing, you should check that you have a new \u201cFHIR\u201d database along with four collections inside it:\n\n- Locations with 22 locations\n- Practitioners with 70 documents\n- Patients with 20,000 documents\n- Appointments with close to 7,000 documents\n\n## Step 2: Create an App Services application\n\nAfter you\u2019ve created a cluster and loaded the sample dataset, you can create an application in Atlas App Services. \n\nFollow the steps to create a new App Services application if you haven\u2019t done so already.\n\nI used the name \u201cFHIR-search\u201d and chose the cluster \u201cFHIR\u201d that I\u2019ve already loaded the sample dataset into.\n\n or from below.\n\n```javascript\nexports = async function(request, response) {\n const queryParams = request.query;\n const collection = context.services.get(\"mongodb-atlas\").db(\"FHIR\").collection(\"appointments\");\n\n const query = {};\n const sort = {};\n const project = {};\n const codeParams = {};\n const aggreg = ];\n const pageSize = 20;\n const limit={};\n let tot = true;\n let dynamicPageSize = null;\n const URL = 'https://fakeurl.com/endpoint/appointment'//put your http endpoint URL here\n\n const FieldMap = {\n 'actor': 'participant.actor.reference',\n 'date': 'start', \n 'identifier':'_id',\n 'location': 'location.reference', \n 'part-status': 'participant.0.actor.status',\n 'patient':'participant.0.actor.reference',\n 'practitioner': 'participant.1.actor.reference', \n 'status': 'status', \n };\n\n for (const key in queryParams) {\n switch (key) {\n case \"actor\":\n query[FieldMap[key]] = new BSON.ObjectId(queryParams[key]);\n break;\n case \"date\":\n const dateParams = queryParams[key].split(\",\");\n const dateFilters = dateParams.map((dateParam) => {\n const firstTwoChars = dateParam.substr(0, 2);\n const dateValue = dateParam.slice(2);\n if (firstTwoChars === \"ge\" || firstTwoChars === \"le\") {\n const operator = firstTwoChars === \"ge\" ? 
\"$gte\" : \"$lte\";\n return { [\"start\"]: { [operator] : new Date(dateValue) } };\n }\n return null;\n });\n query[\"$and\"] = dateFilters.filter((filter) => filter !== null);\n break;\n case \"identifier\":\n query[FieldMap[key]] = new BSON.ObjectId(queryParams[key]);\n break;\n case \"location\":\n try {\n query[FieldMap[key]] = new BSON.ObjectId(queryParams[key]);\n } catch (error) {\n const locValues = queryParams[key].split(\",\"); \n query[FieldMap[key]] = { $in: locValues }; \n }\n break;\n case \"location:contains\" :\n try {\n query[FieldMap[key]] = {\"$regex\": new BSON.ObjectId(queryParams[key]), \"$options\": \"i\"};\n } catch (error) {\n query[FieldMap[key]] = {\"$regex\": queryParams[key], \"$options\": \"i\"};\n }\n break;\n case \"part-status\":\n query[FieldMap[key]] = new BSON.ObjectId(queryParams[key]);\n break;\n case \"patient\":\n query[FieldMap[key]] = new BSON.ObjectId(queryParams[key]);\n break;\n case \"practitioner\":\n query[FieldMap[key]] = new BSON.ObjectId(queryParams[key]);\n break;\n case \"status\":\n const statusValues = queryParams[key].split(\",\"); \n query[FieldMap[key]] = { $in: statusValues }; \n break;\n case \"_count\":\n dynamicPageSize = parseInt(queryParams[key]);\n break;\n case \"_elements\":\n const Params = queryParams[key].split(\",\");\n for (const param of Params) {\n if (FieldMap[param]) {\n project[FieldMap[param]] = 1;\n }\n }\n break;\n case \"_sort\":\n // sort logic\n const sortDirection = queryParams[key].startsWith(\"-\") ? -1 : 1;\n const sortField = queryParams[key].replace(/^-/, ''); \n sort[FieldMap[sortField]] = sortDirection;\n break;\n case \"_maxresults\":\n // sort logic\n limit[\"_maxresults\"]=parseInt(queryParams[key])\n break;\n case \"_total\":\n tot = false;\n break;\n default:\n // Default case for other keys\n codeParams[key] = queryParams[key];\n break;\n }\n }\n\n let findResult;\n const page = parseInt(codeParams.page) || 1;\n if (tot) {\n aggreg.push({'$match':query});\n if(Object.keys(sort).length > 0){\n aggreg.push({'$sort':sort});\n } else {\n aggreg.push({'$sort':{\"start\":1}});\n }\n if(Object.keys(project).length > 0){\n aggreg.push({'$project':project});\n }\n if(Object.keys(limit).length > 0){\n aggreg.push({'$limit':limit[\"_maxresults\"]});\n }else{\n aggreg.push({'$limit':(dynamicPageSize||pageSize)*page});\n }\n try {\n //findResult = await collection.find(query).sort(sort).limit((dynamicPageSize||pageSize)*pageSize).toArray();\n findResult = await collection.aggregate(aggreg).toArray();\n } catch (err) {\n console.log(\"Error occurred while executing find:\", err.message);\n response.setStatusCode(500);\n response.setHeader(\"Content-Type\", \"application/json\");\n return { error: err.message };\n }\n } else {\n findResult = [];\n }\n let total\n if(Object.keys(limit).length > 0){\n total=limit[\"_maxresults\"];\n }else{\n total = await collection.count(query);\n }\n const totalPages = Math.ceil(total / (dynamicPageSize || pageSize));\n const startIdx = (page - 1) * (dynamicPageSize || pageSize);\n const endIdx = startIdx + (dynamicPageSize || pageSize);\n const resultsInBundle = findResult.slice(startIdx, endIdx);\n\n const bundle = {\n resourceType: \"Bundle\",\n type: \"searchset\",\n total:total,\n link:[],\n entry: resultsInBundle.map((resource) => ({\n fullUrl: `${URL}?id=${resource._id}`, \n resource,\n search: {\n mode: 'match'\n },\n })),\n };\n\n if (page <= totalPages) {\n if (page > 1 && page!==totalPages) {\n bundle.link = [\n { relation: \"previous\", url: 
`${URL}${getQueryString(queryParams,sort,page-1,dynamicPageSize || pageSize)}` },\n { relation: \"self\", url: `${URL}${getQueryString(queryParams,sort,page,dynamicPageSize || pageSize)}` },\n { relation: \"next\", url: `${URL}${getQueryString(queryParams,sort,page+1,dynamicPageSize || pageSize)}` },\n ];\n } else if(page==totalPages && totalPages!==1) {\n bundle.link = [\n { relation: \"previous\", url: `${URL}${getQueryString(queryParams,sort,page-1,dynamicPageSize || pageSize)}` },\n { relation: \"self\", url: `${URL}${getQueryString(queryParams,sort,page,dynamicPageSize || pageSize)}` }\n ];\n } else if(totalPages==1 || dynamicPageSize==0) {\n bundle.link = [\n { relation: \"self\", url: `${URL}${getQueryString(queryParams,null,0,0)}` },\n ];\n } else {\n bundle.link = [\n { relation: \"self\", url: `${URL}${getQueryString(queryParams,sort,page,dynamicPageSize || pageSize)}` },\n { relation: \"next\", url: `${URL}${getQueryString(queryParams,sort,page+1,dynamicPageSize || pageSize)}` },\n ];\n }\n }\n\n response.setStatusCode(200);\n response.setHeader(\"Content-Type\", \"application/json\");\n response.setBody(JSON.stringify(bundle, null, 2));\n};\n\n// Helper function to generate query string from query parameters\nfunction getQueryString(params,sort, p, pageSize) {\n\n let paramString = \"\";\n let queryString = \"\";\n\n if (params && Object.keys(params).length > 0) {\n paramString = Object.keys(params)\n .filter((key) => key !== \"page\" && key !== \"_count\")\n .map((key) => `${(key)}=${params[key]}`)\n .join(\"&\");\n }\n\n if (paramString!==\"\"){\n if (p > 1) {\n queryString = `?`+ paramString.replace(/ /g, \"%20\") + `&page=${(p)}&_count=${pageSize}`;\n } else {\n queryString += `?`+ paramString.replace(/ /g, \"%20\") +`&_count=${pageSize}`\n }\n } else if (p > 1) {\n queryString = `?page=${(p)}&_count=${pageSize}`;\n }\n\n return queryString;\n}\n```\n\n- Make sure to change the fake URL in said function with the one that was just created from your HTTPS endpoint.\n\n![Add Authentication to the Endpoint Function][3]\n\n- Enable both \u201cFetch Custom User Data\u201d and \u201cCreate User Upon Authentication.\u201d\n- Lastly, save the draft and deploy it.\n\n![Publish the Endpoint Function][4]\n\nNow, your API endpoint is ready and accessible! But if you test it, you will get the following authentication error since no authentication provider has been enabled.\n\n```bash\ncurl --location --request GET https://.com/app//endpoint/appointment' \\\n --header 'Content-Type: application/json' \\\n\n{\"error\":\"no authentication methods were specified\",\"error_code\":\"InvalidParameter\",\"link\":\"https://realm.mongodb.com/groups/64e34f487860ee7a5c8fc990/apps/64e35fe30e434ffceaca4c89/logs?co_id=64e369ca7b46f09497deb46d\"}\n```\n\n> Side note: To view the result without any security, you can go into your function, then go to the settings tab and set the authentication to system. However, this will treat any request as if it came from the system, so proceed with caution.\n\n## Step 3.1: Enable JWT-based authentication\n\nFHIR emphasizes the importance of secure data exchange in healthcare. While FHIR itself doesn't define a specific authentication protocol, it recommends using OAuth for web-centric applications and highlights the HL7 SMART App Launch guide for added context. 
This focus on secure authentication aligns with MongoDB Atlas's provision for JWT (JSON Web Tokens) as an authentication method, making it an advantageous choice when building FHIR-based microservices.\n\nThen, to add authentication, navigate to the homepage of the App Services application. Click \u201cAuthentication\u201d on the left-hand side menu and click the EDIT button of the row where the provider is Custom JWT Authentication.\n\n![Enable JWT Authentication for the Endpoint][5]\n\nJWT (JSON Web Token) provides a token-based authentication where a token is generated by the client based on an agreed secret and cryptography algorithm. After the client transmits the token, the server validates the token with the agreed secret and cryptography algorithm and then processes client requests if the token is valid.\n\nIn the configuration options of the Custom JWT Authentication, fill out the options with the following:\n\n- Enable the Authentication Provider (Provider Enabled must be turned on).\n- Keep the verification method as is (manually specify signing keys).\n- Keep the signing algorithm as is (HS256).\n- Add a new signing key.\n - Provide the signing key name.\n - For example, APITestJWTSigningKEY\n - Provide the secure key content (between 32 and 512 characters) and note it somewhere secure.\n - For example, FipTEgYJ6WfUEhCJq3e@pm8-TkE9*UZN\n- Add two fields in the metadata fields.\n - The path should be metadata.group and the corresponding field should be group.\n - The path should be metadata.name and the corresponding field should be name.\n- Keep the audience field as is (empty).\n\nBelow, you can find how the JWT Authentication Provider form has been filled accordingly.\n\n![JWT Authentication Provider Example][6]\n\nSave it and then deploy it.\n\nAfter it\u2019s deployed, you can see the secret that has been created in the [App Services Values. It\u2019s accessible on the left side menu by clicking \u201cValues.\u201d\n\n.\n\nThese are the steps to generate an encoded JWT:\n\n- Visit jwt.io.\n- On the right-hand side in the section Decoded, we can fill out the values. 
On the left-hand side, the corresponding Encoded JWT will be generated.\n- In the Decoded section:\n - Keep the header section the same.\n - In the Payload section, set the following fields:\n - Sub\n - Represents the owner of the token\n - Provide value unique to the user\n - Metadata\n - Represents metadata information regarding this token and can be used for further processing in App Services\n - We have two sub fields here\n - Name\n - Represents the username of the client that will initiate the API request\n - Will be used as the username in App Services\n - Group\n - Represents the group information of the client that we\u2019ll use later for rule-based access\n - Exp\n - Represents when the token is going to expire\n - Provides a future time to keep expiration impossible during our tests\n - Aud\n - Represents the name of the App Services application that you can get from the homepage of your application in App Services\n - In the Verify Signature section:\n - Provide the same secret that you\u2019ve already provided while enabling Custom JWT Authentication in Step 3.1.\n\nBelow, you can find how the values have been filled out in the Decoded section and the corresponding Encoded JWT that has been generated.\n\n defined, we were not able to access any data.\n\nEven though the request is not successful due to the no rule definition, you can check out the App Users page to list authenticated users, as shown below. user01 was the name of the user that was provided in the metadata.name field of the JWT.\n\n.\n\nOtherwise, let\u2019s create a role that will have access to all of the fields. \n\n- Navigate to the Rules section on the left-hand side of the menu in App Services.\n- Choose the collection appointments on the left side of the menu.\n- Click **readAll** on the right side of the menu, as shown below.\n\n. As a demo, this won\u2019t be presenting all of FHIR search capabilities. Instead, we will focus on the basic ones.\n\nIn our server, we will be able to respond to two types of inputs. First, there are the regular search parameters that we can see at the bottom of the resources\u2019 page. And second, we will implement the Search Result Parameters that can modify the results of a performed search. Because of our data schema, not all will apply. Hence, not all were coded into the function.\n\nMore precisely, we will be able to call the search parameters: actor, date, identifier, location, part-status, patient, practitioner, and status. We can also call the search result parameters: _count, _elements, _sort, _maxresults, and _total, along with the page parameter. Please refer to the FHIR documentation to see how they work. \n\nMake sure to test both users as the response for each of them will be different. Here, you have a couple of examples. 
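(If you would rather script the test tokens than build them by hand on jwt.io, a small Node.js helper using the widely used `jsonwebtoken` package can produce an equivalent JWT. The payload mirrors the fields described in the authentication step above; every value below is a placeholder to replace with your own.)

``` javascript
// Sketch only: produces a test token equivalent to the one built on jwt.io.
const jwt = require("jsonwebtoken"); // npm install jsonwebtoken

const token = jwt.sign(
  {
    sub: "user01",
    metadata: { name: "user01", group: "<group value your rules expect>" },
    exp: 4102444800, // far-future expiry so the token cannot lapse mid-test
    aud: "<your App Services application name>",
  },
  "<the signing key created in Step 3.1>",
  { algorithm: "HS256" }
);

console.log(token); // pass this value in the jwtTokenString header below
```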
To keep it short, I\u2019ll set the page to a single appointment by adding ?_count=1 to the URL.\n\nHealthcare provider:\n\n```\ncurl --request GET '{{URL}}?_count=1' \\ --header 'jwtTokenString: {{hcproviderJWT}}' \\ --header 'Content-Type: application/json'\n\nHTTP/1.1 200 OK\ncontent-encoding: gzip\ncontent-type: application/json\nstrict-transport-security: max-age=31536000; includeSubdomains;\nvary: Origin\nx-appservices-request-id: 64e5e47e6dbb75dc6700e42c\nx-frame-options: DENY\ndate: Wed, 23 Aug 2023 10:50:38 GMT\ncontent-length: 671\nx-envoy-upstream-service-time: 104\nserver: mdbws\nx-envoy-decorator-operation: baas-main.baas-prod.svc.cluster.local:8086/*\nconnection: close\n\n{\n \"resourceType\": \"Bundle\",\n \"type\": \"searchset\",\n \"total\": 384,\n \"link\": \n {\n \"relation\": \"self\",\n \"url\": \"https://fakeurl.com/endpoint/appointment\"\n },\n {\n \"relation\": \"next\",\n \"url\": \"https://fakeurl.com/endpoint/appointment?page=2\\u0026_count=1\"\n }\n ],\n \"entry\": [\n {\n \"fullUrl\": \"https://fakeurl.com/endpoint/appointment?id=64e35896eaf6edfdbe5f22be\",\n \"resource\": {\n \"_id\": \"64e35896eaf6edfdbe5f22be\",\n \"resourceType\": \"Appointment\",\n \"status\": \"proposed\",\n \"created\": \"2023-08-21T14:29:10.312Z\",\n \"start\": \"2023-08-21T14:29:09.535Z\",\n \"description\": \"Breast Mammography Screening\",\n \"serviceType\": [\n {\n \"coding\": [\n {\n \"system\": \"http://snomed.info/sct\",\n \"code\": \"278110001\",\n \"display\": \"radiographic imaging\"\n }\n ],\n \"text\": \"Mammography\"\n }\n ],\n \"participant\": [\n {\n \"actor\": {\n \"reference\": \"64e354874f5c09af1a8fc2b6\",\n \"display\": [\n {\n \"given\": [\n \"Marta\"\n ],\n \"family\": \"Donovan\"\n }\n ]\n },\n \"required\": true,\n \"status\": \"needs-action\"\n },\n {\n \"actor\": {\n \"reference\": \"64e353d80727df4ed8d00839\",\n \"display\": [\n {\n \"use\": \"official\",\n \"family\": \"Harrell\",\n \"given\": [\n \"Juan Carlos\"\n ]\n }\n ]\n },\n \"required\": true,\n \"status\": \"accepted\"\n }\n ],\n \"location\": {\n \"reference\": \"64e35380f2f2059b24dafa60\",\n \"display\": \"St. 
Barney clinic\"\n }\n },\n \"search\": {\n \"mode\": \"match\"\n }\n }\n ]\n}\n```\n\nHealthcare agency:\n\n```\ncurl --request GET '{{URL}}?_count=1' \\ --header 'jwtTokenString: {{hcagencyJWT}}' \\ --header 'Content-Type: application/json'\\\n\nHTTP/1.1 200 OK\ncontent-encoding: gzip\ncontent-type: application/json\nstrict-transport-security: max-age=31536000; includeSubdomains;\nvary: Origin\nx-appservices-request-id: 64e5e4eee069ab6f307d792e\nx-frame-options: DENY\ndate: Wed, 23 Aug 2023 10:52:30 GMT\ncontent-length: 671\nx-envoy-upstream-service-time: 162\nserver: mdbws\nx-envoy-decorator-operation: baas-main.baas-prod.svc.cluster.local:8086/*\nconnection: close\n\n{\n \"resourceType\": \"Bundle\",\n \"type\": \"searchset\",\n \"total\": 6720,\n \"link\": [\n {\n \"relation\": \"self\",\n \"url\": \"https://fakeurl.com/endpoint/appointment\"\n },\n {\n \"relation\": \"next\",\n \"url\": \"https://fakeurl.com/endpoint/appointment?page=2\\u0026_count=1\"\n }\n ],\n \"entry\": [\n {\n \"fullUrl\": \"https://fakeurl.com/endpoint/appointment?id=64e35896eaf6edfdbe5f22be\",\n \"resource\": {\n\n \"_id\": \"64e35896eaf6edfdbe5f22be\",\n \"resourceType\": \"Appointment\",\n \"status\": \"proposed\",\n \"created\": \"2023-08-21T14:29:10.312Z\",\n \"start\": \"2023-08-21T14:29:09.535Z\",\n \"description\": \"Breast Mammography Screening\",\n \"serviceType\": [\n\n {\n\n \"coding\": [\n\n {\n\n \"system\": \"http://snomed.info/sct\",\n \"code\": \"278110001\",\n \"display\": \"radiographic imaging\"\n }\n ],\n \"text\": \"Mammography\"\n }\n ],\n \"participant\": [\n\n {\n\n \"actor\": {\n\n \"reference\": \"64e354874f5c09af1a8fc2b6\",\n \"display\": [\n\n {\n\n \"given\": [\n\n \"Marta\"\n ],\n \"family\": \"Donovan\"\n }\n ]\n },\n \"required\": true,\n \"status\": \"needs-action\"\n },\n {\n\n \"actor\": {\n\n \"reference\": \"64e353d80727df4ed8d00839\",\n \"display\": [\n\n {\n\n \"use\": \"official\",\n \"family\": \"Harrell\",\n \"given\": [\n\n \"Juan Carlos\"\n ]\n }\n ]\n },\n \"required\": true,\n \"status\": \"accepted\"\n }\n ],\n \"location\": {\n\n \"reference\": \"64e35380f2f2059b24dafa60\",\n \"display\": \"St. Barney clinic\"\n }\n },\n \"search\": {\n\n \"mode\": \"match\"\n }\n }\n ]\n}\n```\n\nPlease note the difference on the total number of documents fetched as well as the participant.actor.display fields missing for the agency user.\n\n## Step 6: How to call the microservice from an application\n\nThe calls that were shown up to this point were from API platforms such as Postman or Visual Studio\u2019s REST client. However, for security reasons, when putting this into an application such as a React.js application, then the calls might be blocked by the CORS policy. To avoid this, we need to authenticate our data API request. You can read more on how to manage your user sessions [in our docs. 
But for us, it should be as simple as sending the following request:\n\n```bash\ncurl -X POST 'https://..realm.mongodb.com/api/client/v2.0/app//auth/providers/custom-token/login' \\\n --header 'Content-Type: application/json' \\\n --data-raw '{\n \"token\": \"\"\n }'\n```\n\nThis will return something like:\n\n```json\n{\n \"access_token\": \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJiYWFzX2RldmljZV9pZCI6IjAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMCIsImJhYXNfZG9tYWluX2lkIjoiNWVlYTg2NjdiY2I0YzgxMGI2NTFmYjU5IiwiZXhwIjoxNjY3OTQwNjE4LCJpYXQiOjE2Njc5Mzg4MTgsImlzcyI6IjYzNmFiYTAyMTcyOGI2YzFjMDNkYjgzZSIsInN0aXRjaF9kZXZJZCI6IjAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMCIsInN0aXRjaF9kb21haW5JZCI6IjVlZWE4NjY3YmNiNGM4MTBiNjUxZmI1OSIsInN1YiI6IjYzNmFiYTAyMTcyOGI2YzFjMDNkYjdmOSIsInR5cCI6ImFjY2VzcyJ9.pyq3nfzFUT-6r-umqGrEVIP8XHOw0WGnTZ3-EbvgbF0\",\n \"refresh_token\": \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJiYWFzX2RhdGEiOm51bGwsImJhYXNfZGV2aWNlX2lkIjoiMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAwIiwiYmFhc19kb21haW5faWQiOiI1ZWVhODY2N2JjYjRjODEwYjY1MWZiNTkiLCJiYWFzX2lkIjoiNjM2YWJhMDIxNzI4YjZjMWMwM2RiODNlIiwiYmFhc19pZGVudGl0eSI6eyJpZCI6IjYzNmFiYTAyMTcyOGI2YzFjMDNkYjdmOC1ud2hzd2F6ZHljbXZycGVuZHdkZHRjZHQiLCJwcm92aWRlcl90eXBlIjoiYW5vbi11c2VyIiwicHJvdmlkZXJfaWQiOiI2MjRkZTdiYjhlYzZjOTM5NjI2ZjU0MjUifSwiZXhwIjozMjQ0NzM4ODE4LCJpYXQiOjE2Njc5Mzg4MTgsInN0aXRjaF9kYXRhIjpudWxsLCJzdGl0Y2hfZGV2SWQiOiIwMDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAiLCJzdGl0Y2hfZG9tYWluSWQiOiI1ZWVhODY2N2JjYjRjODEwYjY1MWZiNTkiLCJzdGl0Y2hfaWQiOiI2MzZhYmEwMjE3MjhiNmMxYzAzZGI4M2UiLCJzdGl0Y2hfaWRlbnQiOnsiaWQiOiI2MzZhYmEwMjE3MjhiNmMxYzAzZGI3Zjgtbndoc3dhemR5Y212cnBlbmR3ZGR0Y2R0IiwicHJvdmlkZXJfdHlwZSI6ImFub24tdXNlciIsInByb3ZpZGVyX2lkIjoiNjI0ZGU3YmI4ZWM2YzkzOTYyNmY1NDI1In0sInN1YiI6IjYzNmFiYTAyMTcyOGI2YzFjMDNkYjdmOSIsInR5cCI6InJlZnJlc2gifQ.h9YskmSpSLK8DMwBpPGuk7g1s4OWZDifZ1fmOJgSygw\",\n \"user_id\": \"636aba021728b6c1c03db7f9\"\n}\n```\n\nThese tokens will allow your application to request data from your FHIR microservice. You will just need to replace the header 'jwtTokenString: {{JWT}}' with 'Authorization: Bearer {{token above}}', like so:\n\n```\ncurl --request GET {{URL}} \\ --header 'Authorization: Bearer {{token above}}' \\ \n--header 'Content-Type: application/json'\n\n{\"error\": \"no matching rule found\" }\n```\n\nYou can find additional information in our docs for authenticating Data API requests.\n\n## Summary\n\nIn conclusion, interoperability plays a crucial role in enabling the exchange and utilization of information within systems and software. Modern standards like Fast Healthcare Interoperability Resources (FHIR) define data communication methods, while MongoDB's approach aligns data storage with FHIR's resource format, simplifying integration and improving performance. \n\nMongoDB's capabilities, including Atlas Data API, offer healthcare providers and software vendors greater control over their data, reducing complexity and enhancing security. However, it's important to note that this integration capability complements rather than replaces clinical data repositories. In the previous sections, we explored how to: \n\n- Generate your own FHIR data.\n- Configure serverless functions along with Custom JWT Authentication to seamlessly integrate user-specific information. \n- Implement precise data access control through roles and filters. \n- Call the configured APIs directly from the code.\n\nAre you ready to dive in and leverage these capabilities for your projects? Don't miss out on the chance to explore the full potential of MongoDB Atlas App Services. 
Get started for free by provisioning an M0 Atlas instance and creating your own App Services application. \n\nShould you encounter any roadblocks or have questions, our vibrant developer forums are here to support you every step of the way. \n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1f8674230e64660c/652eb2153b618bf623f212fa/image12.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte32e786ba5092f3c/652eb2573fc0c855d1c9446c/image13.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdbf6696269e8c0fa/652eb2a18fc81358f36c2dd2/image6.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb9458dc116962c7b/652eb2fe74aa53528e325ffc/image4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt22ed4f126322aaf9/652eb3460418d27708f75d8b/image5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt27df3f81feb75a2a/652eb36e8fc81306dc6c2dda/image10.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt46ecccd80736dca1/652eb39a701ffe37d839cfd2/image2.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1b44f38890b2ea44/652eb3d88dd295fac0efc510/image7.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt01966229cb4a8966/652eb40148aba383898b1f9a/image11.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf52ce0e04106ac69/652eb46b8d3ed4341e55286b/image3.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2b60e8592927d6fa/652eb48e3feebb0b40291c9a/image9.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0e97db616428a8ad/652eb4ce8d3ed41c7e55286f/image8.png\n [13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc35e52e0efa26c03/652eb4fff92b9e5644aa21a4/image1.png", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "Learn how to build a healthcare interoperability microservice, secured with JWT using FHIR and MongoDB.", "contentType": "Tutorial"}, "title": "How to Build a Healthcare Interoperability Microservice Using FHIR and MongoDB", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-sdk-schema-migration-android", "action": "created", "body": "# How to Update Realm SDK Database Schema for Android\n\n> This is a follow-up article in the **Getting Started Series**.\n> In this article, we learn how to modify/migrate Realm **local** database schema.\n\n## Introduction\n\nAs you add and change application features, you need to modify database schema, and the need for migrations arises, which is very important for a seamless user experience.\n\nBy the end of this article, you will learn:\n\n1. How to update database schema post-production release on play store.\n2. How to migrate user data from one schema to another.\n\nBefore we get down to business, let's quickly recap how we set `Realm` in our application.\n\n```kotlin\nconst val REALM_SCHEMA_VERSION: Long = 1\nconst val REALM_DB_NAME = \"rMigrationSample.db\"\n\nfun setupRealm(context: Context) {\n Realm.init(context)\n\n val config = RealmConfiguration.Builder()\n .name(REALM_DB_NAME)\n .schemaVersion(REALM_SCHEMA_VERSION)\n .build()\n\n Realm.setDefaultConfiguration(config)\n}\n```\n\nDoing migration in Realm is very straightforward and simple. The high-level steps for the successful migration of any database are:\n\n1. Update the database version.\n2. 
Make changes to the database schema.\n3. Migrate user data from old schema to new.\n\n## Update the Database Version\n\nThis is the simplest step, which can be done by incrementing the version of\n`REALM_SCHEMA_VERSION`, which notifies `Relam` about database changes. This, in turn, runs triggers migration, if provided.\n\nTo add migration, we use the `migration` function available in `RealmConfiguration.Builder`, which takes an argument of `RealmMigration`, which we will review in the next step.\n\n```kotlin\nval config = RealmConfiguration.Builder()\n .name(REALM_DB_NAME)\n .schemaVersion(REALM_SCHEMA_VERSION)\n .migration(DBMigrationHelper())\n .build()\n```\n\n## Make Changes to the Database Schema\n\nIn `Realm`, all the migration-related operation has to be performed within the scope\nof `RealmMigration`.\n\n```kotlin\nclass DBMigrationHelper : RealmMigration {\n\n override fun migrate(realm: DynamicRealm, oldVersion: Long, newVersion: Long) {\n migration1to2(realm.schema)\n migration2to3(realm.schema)\n migration3to4(realm.schema)\n }\n\n private fun migration3to4(schema: RealmSchema?) {\n TODO(\"Not yet implemented\")\n }\n\n private fun migration2to3(schema: RealmSchema?) {\n TODO(\"Not yet implemented\")\n }\n\n private fun migration1to2(schema: RealmSchema) {\n TODO(\"Not yet implemented\")\n }\n}\n```\n\nTo add/update/rename any field:\n\n```kotlin\n\nprivate fun migration1to2(schema: RealmSchema) {\n val userSchema = schema.get(UserInfo::class.java.simpleName)\n userSchema?.run {\n addField(\"phoneNumber\", String::class.java, FieldAttribute.REQUIRED)\n renameField(\"phoneNumber\", \"phoneNo\")\n removeField(\"phoneNo\")\n }\n}\n```\n\n## Migrate User Data from Old Schema to New\n\nAll the data transformation during migration can be done with `transform` function with the help of `set` and `get` methods.\n\n```kotlin\n\nprivate fun migration2to3(schema: RealmSchema) {\n val userSchema = schema.get(UserInfo::class.java.simpleName)\n userSchema?.run {\n addField(\"fullName\", String::class.java, FieldAttribute.REQUIRED)\n transform {\n it.set(\"fullName\", it.get(\"firstName\") + it.get(\"lastName\"))\n }\n }\n}\n```\n\nIn the above snippet, we are setting the default value of **fullName** by extracting the value from old data, like **firstName** and **lastName**.\n\nWe can also use `transform` to update the data type.\n\n```kotlin\n\nval personSchema = schema! or tweet\nme @codeWithMohit.\n\nIn the next article, we will discuss how to migrate the Realm database with Atlas Device Sync.\n\nIf you have an iOS app, do check out the iOS tutorial\non Realm iOS Migration. ", "format": "md", "metadata": {"tags": ["Realm", "Kotlin", "Java", "Android"], "pageDescription": "In this article, we explore and learn how to make Realm SDK database schema changes. ", "contentType": "Tutorial"}, "title": "How to Update Realm SDK Database Schema for Android", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/javascript/node-connect-mongodb-3-3-2", "action": "created", "body": "# Connect to a MongoDB Database Using Node.js 3.3.2\n\n \n\nUse Node.js? Want to learn MongoDB? This is the blog series for you!\n\nIn this Quick Start series, I'll walk you through the basics of how to get started using MongoDB with Node.js. 
In today's post, we'll work through connecting to a MongoDB database from a Node.js script, retrieving a list of databases, and printing the results to your console.\n\n>\n>\n>This post uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.\n>\n>Click here to see a newer version of this post that uses MongoDB 4.4, MongoDB Node.js Driver 3.6.4, and Node.js 14.15.4.\n>\n>\n\n>\n>\n>Prefer to learn by video? I've got ya covered. Check out the video below that covers how to get connected as well as how to perform the CRUD operations.\n>\n>:youtube]{vid=fbYExfeFsI0}\n>\n>\n\n## Set Up\n\nBefore we begin, we need to ensure you've completed a few prerequisite steps.\n\n### Install Node.js\n\nFirst, make sure you have a supported version of Node.js installed (the MongoDB Node.js Driver requires Node 4.x or greater, and, for these examples, I've used Node.js 10.16.3).\n\n### Install the MongoDB Node.js Driver\n\nThe MongoDB Node.js Driver allows you to easily interact with MongoDB databases from within Node.js applications. You'll need the driver in order to connect to your database and execute the queries described in this Quick Start series.\n\nIf you don't have the MongoDB Node.js Driver installed, you can install it with the following command.\n\n``` bash\nnpm install mongodb\n```\n\nAt the time of writing, this installed version 3.3.2 of the driver. Running `npm list mongodb` will display the currently installed driver version number. For more details on the driver and installation, see the [official documentation.\n\n### Create a Free MongoDB Atlas Cluster and Load the Sample Data\n\nNext, you'll need a MongoDB database. The easiest way to get started with MongoDB is to use Atlas, MongoDB's fully-managed database-as-a-service.\n\nHead over to Atlas and create a new cluster in the free tier. At a high level, a cluster is a set of nodes where copies of your database will be stored. Once your tier is created, load the sample data. If you're not familiar with how to create a new cluster and load the sample data, check out this video tutorial from MongoDB Developer Advocate Maxime Beugnet.\n\n>\n>\n>Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.\n>\n>\n\n### Get Your Cluster's Connection Info\n\nThe final step is to prep your cluster for connection.\n\nIn Atlas, navigate to your cluster and click **CONNECT**. The Cluster Connection Wizard will appear.\n\nThe Wizard will prompt you to add your current IP address to the IP Access List and create a MongoDB user if you haven't already done so. Be sure to note the username and password you use for the new MongoDB user as you'll need them in a later step.\n\nNext, the Wizard will prompt you to choose a connection method. Select **Connect Your Application**. When the Wizard prompts you to select your driver version, select **Node.js** and **3.0 or later**. Copy the provided connection string.\n\nFor more details on how to access the Connection Wizard and complete the steps described above, see the official documentation.\n\n## Connect to Your Database From a Node.js Application\n\nNow that everything is set up, it's time to code! Let's write a Node.js script that connects to your database and lists the databases in your cluster.\n\n### Import MongoClient\n\nThe MongoDB module exports `MongoClient`, and that's what we'll use to connect to a MongoDB database. 
We can use an instance of MongoClient to connect to a cluster, access the database in that cluster, and close the connection to that cluster.\n\n``` js\nconst { MongoClient } = require('mongodb');\n```\n\n### Create Our Main Function\n\nLet's create an asynchronous function named `main()` where we will connect to our MongoDB cluster, call functions that query our database, and disconnect from our cluster.\n\nThe first thing we need to do inside of `main()` is create a constant for our connection URI. The connection URI is the connection string you copied in Atlas in the previous section. When you paste the connection string, don't forget to update `` and `` to be the credentials for the user you created in the previous section. The connection string includes a `` placeholder. For these examples, we'll be using the `sample_airbnb` database, so replace `` with `sample_airbnb`.\n\n**Note**: The username and password you provide in the connection string are NOT the same as your Atlas credentials.\n\n``` js\n/**\n* Connection URI. Update , , and to reflect your cluster.\n* See https://docs.mongodb.com/ecosystem/drivers/node/ for more details\n*/\nconst uri = \"mongodb+srv://:@/sample_airbnb?retryWrites=true&w=majority\"; \n```\n\nNow that we have our URI, we can create an instance of MongoClient.\n\n``` js\nconst client = new MongoClient(uri);\n```\n\n**Note**: When you run this code, you may see DeprecationWarnings around the URL string `parser` and the Server Discover and Monitoring engine. If you see these warnings, you can remove them by passing options to the MongoClient. For example, you could instantiate MongoClient by calling `new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true })`. See the Node.js MongoDB Driver API documentation for more information on these options.\n\nNow we're ready to use MongoClient to connect to our cluster. `client.connect()` will return a promise. We will use the await keyword when we call `client.connect()` to indicate that we should block further execution until that operation has completed.\n\n``` js\nawait client.connect();\n```\n\nWe can now interact with our database. Let's build a function that prints the names of the databases in this cluster. It's often useful to contain this logic in well-named functions in order to improve the readability of your codebase. Throughout this series, we'll create new functions similar to the function we're creating here as we learn how to write different types of queries. For now, let's call a function named `listDatabases()`.\n\n``` js\nawait listDatabases(client);\n```\n\nLet's wrap our calls to functions that interact with the database in a `try/catch` statement so that we handle any unexpected errors.\n\n``` js\ntry {\n await client.connect();\n\n await listDatabases(client);\n\n} catch (e) {\n console.error(e);\n}\n```\n\nWe want to be sure we close the connection to our cluster, so we'll end our `try/catch` with a finally statement.\n\n``` js\nfinally {\n await client.close();\n}\n```\n\nOnce we have our `main()` function written, we need to call it. Let's send the errors to the console.\n\n``` js\nmain().catch(console.error);\n```\n\nPutting it all together, our `main()` function and our call to it will look something like the following.\n\n``` js\nasync function main(){\n /**\n * Connection URI. 
Update , , and to reflect your cluster.\n * See https://docs.mongodb.com/ecosystem/drivers/node/ for more details\n */\n const uri = \"mongodb+srv://:@/test?retryWrites=true&w=majority\";\n\n const client = new MongoClient(uri);\n\n try {\n // Connect to the MongoDB cluster\n await client.connect();\n\n // Make the appropriate DB calls\n await listDatabases(client);\n\n } catch (e) {\n console.error(e);\n } finally {\n await client.close();\n }\n}\n\nmain().catch(console.error);\n```\n\n### List the Databases in Our Cluster\n\nIn the previous section, we referenced the `listDatabases()` function. Let's implement it!\n\nThis function will retrieve a list of databases in our cluster and print the results in the console.\n\n``` js\nasync function listDatabases(client){\n databasesList = await client.db().admin().listDatabases();\n\n console.log(\"Databases:\");\n databasesList.databases.forEach(db => console.log(` - ${db.name}`));\n};\n```\n\n### Save Your File\n\nYou've been implementing a lot of code. Save your changes, and name your file something like `connection.js`. To see a copy of the complete file, visit the nodejs-quickstart GitHub repo.\n\n### Execute Your Node.js Script\n\nNow you're ready to test your code! Execute your script by running a command like the following in your terminal: `node connection.js`.\n\nYou will see output like the following:\n\n``` js\nDatabases:\n - sample_airbnb\n - sample_geospatial\n - sample_mflix\n - sample_supplies\n - sample_training\n - sample_weatherdata\n - admin\n - local\n```\n\n## What's Next?\n\nToday, you were able to connect to a MongoDB database from a Node.js script, retrieve a list of databases in your cluster, and view the results in your console. Nice!\n\nNow that you're connected to your database, continue on to the next post in this series, where you'll learn to execute each of the CRUD (create, read, update, and delete) operations.\n\nIn the meantime, check out the following resources:\n\n- MongoDB Node.js Driver\n- Official MongoDB Documentation on the MongoDB Node.js Driver\n- MongoDB University Free Course: M220JS: MongoDB for Javascript Developers\n\nQuestions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.\n", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB", "Node.js"], "pageDescription": "Node.js and MongoDB is a powerful pairing and in this code example project we show you how.", "contentType": "Code Example"}, "title": "Connect to a MongoDB Database Using Node.js 3.3.2", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/nodejs-change-streams-triggers", "action": "created", "body": "# Change Streams & Triggers with Node.js Tutorial\n\n \n\nSometimes you need to react immediately to changes in your database. Perhaps you want to place an order with a distributor whenever an item's inventory drops below a given threshold. Or perhaps you want to send an email notification whenever the status of an order changes. Regardless of your particular use case, whenever you want to react immediately to changes in your MongoDB database, change streams and triggers are fantastic options.\n\nIf you're just joining us in this Quick Start with MongoDB and Node.js series, welcome! We began by walking through how to connect to MongoDB and perform each of the CRUD (Create, Read, Update, and Delete) operations. Then we jumped into more advanced topics like the aggregation framework and transactions. 
The code we write today will use the same structure as the code we built in the first post in the series, so, if you have any questions about how to get started or how the code is structured, head back to that post.\n\nAnd, with that, let's dive into change streams and triggers! Here is a summary of what we'll cover today:\n\n- What are Change Streams?\n- Setup\n- Create a Change Stream\n- Resume a Change Stream\n- What are MongoDB Atlas Triggers?\n- Create a MongoDB Atlas Trigger\n- Wrapping Up\n- Additional Resources\n\n>\n>\n>Prefer a video over an article? Check out the video below that covers the exact same topics that I discuss in this article.\n>\n>:youtube]{vid=9LA7_CSyZb8}\n>\n>\n\n>\n>\n>Get started with an M0 cluster on [Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.\n>\n>\n\n## What are Change Streams?\n\nChange streams allow you to receive notifications about changes made to your MongoDB databases and collections. When you use change streams, you can choose to program actions that will be automatically taken whenever a change event occurs.\n\nChange streams utilize the aggregation framework, so you can choose to filter for specific change events or transform the change event documents.\n\nFor example, let's say I want to be notified whenever a new listing in the Sydney, Australia market is added to the **listingsAndReviews** collection. I could create a change stream that monitors the **listingsAndReviews** collection and use an aggregation pipeline to match on the listings I'm interested in.\n\nLet's take a look at three different ways to implement this change stream.\n\n## Set Up\n\nAs with all posts in this MongoDB and Node.js Quick Start series, you'll need to ensure you've completed the prerequisite steps outlined in the **Set up** section of the first post in this series.\n\nI find it helpful to have a script that will generate sample data when I'm testing change streams. To help you quickly generate sample data, I wrote changeStreamsTestData.js. Download a copy of the file, update the `uri` constant to reflect your Atlas connection info, and run it by executing `node changeStreamsTestData.js`. The script will do the following:\n\n1. Create 3 new listings (Opera House Views, Private room in London, and Beautiful Beach House)\n2. Update 2 of those listings (Opera House Views and Beautiful Beach House)\n3. Create 2 more listings (Italian Villa and Sydney Harbour Home)\n4. Delete a listing (Sydney Harbour Home).\n\n## Create a Change Stream\n\nNow that we're set up, let's explore three different ways to work with a change stream in Node.js.\n\n### Get a Copy of the Node.js Template\n\nTo make following along with this blog post easier, I've created a starter template for a Node.js script that accesses an Atlas cluster.\n\n1. Download a copy of template.js.\n2. Open `template.js` in your favorite code editor.\n3. Update the Connection URI to point to your Atlas cluster. If you're not sure how to do that, refer back to the first post in this series.\n4. Save the file as `changeStreams.js`.\n\nYou can run this file by executing `node changeStreams.js` in your shell. At this point, the file simply opens and closes a connection to your Atlas cluster, so no output is expected. 
If you see DeprecationWarnings, you can ignore them for the purposes of this post.\n\n### Create a Helper Function to Close the Change Stream\n\nRegardless of how we monitor changes in our change stream, we will want to close the change stream after a certain amount of time. Let's create a helper function to do just that.\n\n1. Paste the following function in `changeStreams.js`.\n\n ``` javascript\n function closeChangeStream(timeInMs = 60000, changeStream) {\n return new Promise((resolve) => {\n setTimeout(() => {\n console.log(\"Closing the change stream\");\n resolve(changeStream.close());\n }, timeInMs)\n })\n };\n ```\n\n### Monitor Change Stream using EventEmitter's on()\n\nThe MongoDB Node.js Driver's ChangeStream class inherits from the Node Built-in class EventEmitter. As a result, we can use EventEmitter's on() function to add a listener function that will be called whenever a change occurs in the change stream.\n\n#### Create the Function\n\nLet's create a function that will monitor changes in the change stream using EventEmitter's `on()`.\n\n1. Continuing to work in `changeStreams.js`, create an asynchronous function named `monitorListingsUsingEventEmitter`. The function should have the following parameters: a connected MongoClient, a time in ms that indicates how long the change stream should be monitored, and an aggregation pipeline that the change stream will use.\n\n ``` javascript\n async function monitorListingsUsingEventEmitter(client, timeInMs = 60000, pipeline = ]){ \n\n }\n ```\n\n2. Now we need to access the collection we will monitor for changes. Add the following code to `monitorListingsUsingEventEmitter()`.\n\n ``` javascript\n const collection = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\");\n ```\n\n3. Now we are ready to create our change stream. We can do so by using [Collection's watch(). Add the following line beneath the existing code in `monitorListingsUsingEventEmitter()`.\n\n ``` javascript\n const changeStream = collection.watch(pipeline);\n ```\n\n4. Once we have our change stream, we can add a listener to it. Let's log each change event in the console. Add the following line beneath the existing code in `monitorListingsUsingEventEmitter()`.\n\n ``` javascript\n changeStream.on('change', (next) => {\n console.log(next); \n });\n ```\n\n5. We could choose to leave the change stream open indefinitely. Instead, let's call our helper function to set a timer and close the change stream. Add the following line beneath the existing code in `monitorListingsUsingEventEmitter()`.\n\n ``` javascript\n await closeChangeStream(timeInMs, changeStream);\n ```\n\n#### Call the Function\n\nNow that we've implemented our function, let's call it!\n\n1. Inside of `main()` beneath the comment that says\n `Make the appropriate DB calls`, call your\n `monitorListingsUsingEventEmitter()` function:\n\n ``` javascript\n await monitorListingsUsingEventEmitter(client);\n ```\n\n2. Save your file.\n\n3. Run your script by executing `node changeStreams.js` in your shell. The change stream will open for 60 seconds.\n\n4. Create and update sample data by executing node changeStreamsTestData.js in a new shell. 
Output similar to the following will be displayed in your first shell where you are running `changeStreams.js`.\n\n ``` javascript\n { \n _id: { _data: '825DE67A42000000012B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7640004' },\n operationType: 'insert',\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1575385666 },\n fullDocument: { \n _id: 5de67a42113ea7de6472e764,\n name: 'Opera House Views',\n summary: 'Beautiful apartment with views of the iconic Sydney Opera House',\n property_type: 'Apartment',\n bedrooms: 1,\n bathrooms: 1,\n beds: 1,\n address: { market: 'Sydney', country: 'Australia' } \n },\n ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },\n documentKey: { _id: 5de67a42113ea7de6472e764 } \n }\n { \n _id: { _data: '825DE67A42000000022B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7650004' },\n operationType: 'insert',\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 2, high_: 1575385666 },\n fullDocument: { \n _id: 5de67a42113ea7de6472e765,\n name: 'Private room in London',\n property_type: 'Apartment',\n bedrooms: 1,\n bathroom: 1 \n },\n ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },\n documentKey: { _id: 5de67a42113ea7de6472e765 } \n }\n { \n _id: { _data: '825DE67A42000000032B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7660004' },\n operationType: 'insert',\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 3, high_: 1575385666 },\n fullDocument: { \n _id: 5de67a42113ea7de6472e766,\n name: 'Beautiful Beach House',\n summary: 'Enjoy relaxed beach living in this house with a private beach',\n bedrooms: 4,\n bathrooms: 2.5,\n beds: 7,\n last_review: 2019-12-03T15:07:46.730Z \n },\n ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },\n documentKey: { _id: 5de67a42113ea7de6472e766 } \n }\n { \n _id: { _data: '825DE67A42000000042B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7640004' },\n operationType: 'update',\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 4, high_: 1575385666 },\n ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },\n documentKey: { _id: 5de67a42113ea7de6472e764 },\n updateDescription: { \n updatedFields: { beds: 2 }, \n removedFields: ] \n } \n }\n { \n _id: { _data: '825DE67A42000000052B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7660004' },\n operationType: 'update',\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 5, high_: 1575385666 },\n ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },\n documentKey: { _id: 5de67a42113ea7de6472e766 },\n updateDescription: { \n updatedFields: { address: [Object] }, \n removedFields: [] \n } \n }\n { \n _id: { _data: '825DE67A42000000062B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7670004' },\n operationType: 'insert',\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 6, high_: 1575385666 },\n fullDocument: { \n _id: 5de67a42113ea7de6472e767,\n name: 'Italian Villa',\n property_type: 'Entire home/apt',\n bedrooms: 6,\n bathrooms: 4,\n address: { market: 'Cinque Terre', country: 'Italy' } \n },\n ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },\n documentKey: { _id: 5de67a42113ea7de6472e767 } \n }\n { \n _id: { _data: '825DE67A42000000072B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7680004' },\n operationType: 'insert',\n clusterTime: Timestamp { 
_bsontype: 'Timestamp', low_: 7, high_: 1575385666 },\n fullDocument: { \n _id: 5de67a42113ea7de6472e768,\n name: 'Sydney Harbour Home',\n bedrooms: 4,\n bathrooms: 2.5,\n address: { market: 'Sydney', country: 'Australia' } },\n ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },\n documentKey: { _id: 5de67a42113ea7de6472e768 } \n }\n { \n _id: { _data: '825DE67A42000000082B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67A42113EA7DE6472E7680004' },\n operationType: 'delete',\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 8, high_: 1575385666 },\n ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },\n documentKey: { _id: 5de67a42113ea7de6472e768 } \n }\n ```\n\n If you run `node changeStreamsTestData.js` again before the 60\n second timer has completed, you will see similar output.\n\n After 60 seconds, the following will be displayed:\n\n ``` sh\n Closing the change stream\n ```\n\n#### Call the Function with an Aggregation Pipeline\n\nIn some cases, you will not care about all change events that occur in a collection. Instead, you will want to limit what changes you are monitoring. You can use an aggregation pipeline to filter the changes or transform the change stream event documents.\n\nIn our case, we only care about new listings in the Sydney, Australia market. Let's create an aggregation pipeline to filter for only those changes in the `listingsAndReviews` collection.\n\nTo learn more about what aggregation pipeline stages can be used with change streams, see the official change streams documentation.\n\n1. Inside of `main()` and above your existing call to `monitorListingsUsingEventEmitter()`, create an aggregation pipeline:\n\n ``` javascript\n const pipeline = [\n {\n '$match': {\n 'operationType': 'insert',\n 'fullDocument.address.country': 'Australia',\n 'fullDocument.address.market': 'Sydney'\n },\n }\n ];\n ```\n\n2. Let's use this pipeline to filter the changes in our change stream. Update your existing call to `monitorListingsUsingEventEmitter()` to only leave the change stream open for 30 seconds and use the pipeline.\n\n ``` javascript\n await monitorListingsUsingEventEmitter(client, 30000, pipeline);\n ```\n\n3. Save your file.\n\n4. Run your script by executing `node changeStreams.js` in your shell. The change stream will open for 30 seconds.\n\n5. Create and update sample data by executing `node changeStreamsTestData.js` in a new shell. Because the change stream is using the pipeline you just created, only documents inserted into the `listingsAndReviews` collection that are in the Sydney, Australia market will be in the change stream. 
Output similar to the following will be displayed in your first shell where you are running `changeStreams.js`.\n\n ``` javascript\n { \n _id: { _data: '825DE67CED000000012B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67CED150EA2DF172344370004' },\n operationType: 'insert',\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1575386349 },\n fullDocument: { \n _id: 5de67ced150ea2df17234437,\n name: 'Opera House Views',\n summary: 'Beautiful apartment with views of the iconic Sydney Opera House',\n property_type: 'Apartment',\n bedrooms: 1,\n bathrooms: 1,\n beds: 1,\n address: { market: 'Sydney', country: 'Australia' } \n },\n ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },\n documentKey: { _id: 5de67ced150ea2df17234437 } \n }\n { \n _id: { _data: '825DE67CEE000000032B022C0100296E5A10046BBC1C6A9CBB4B6E9CA9447925E693EF46645F696400645DE67CEE150EA2DF1723443B0004' },\n operationType: 'insert',\n clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 3, high_: 1575386350 },\n fullDocument: { \n _id: 5de67cee150ea2df1723443b,\n name: 'Sydney Harbour Home',\n bedrooms: 4,\n bathrooms: 2.5,\n address: { market: 'Sydney', country: 'Australia' } \n },\n ns: { db: 'sample_airbnb', coll: 'listingsAndReviews' },\n documentKey: { _id: 5de67cee150ea2df1723443b } \n }\n ```\n\n After 30 seconds, the following will be displayed:\n\n ``` sh\n Closing the change stream\n ```\n\n### Monitor Change Stream using ChangeStream's hasNext()\n\nIn the section above, we used EventEmitter's `on()` to monitor the change stream. Alternatively, we can create a `while` loop that waits for the next element in the change stream by using hasNext() from the MongoDB Node.js Driver's ChangeStream class.\n\n#### Create the Function\n\nLet's create a function that will monitor changes in the change stream using ChangeStream's `hasNext()`.\n\n1. Continuing to work in `changeStreams.js`, create an asynchronous function named `monitorListingsUsingHasNext`. The function should have the following parameters: a connected MongoClient, a time in ms that indicates how long the change stream should be monitored, and an aggregation pipeline that the change stream will use.\n\n ``` javascript\n async function monitorListingsUsingHasNext(client, timeInMs = 60000, pipeline = []) { \n\n }\n ```\n\n2. Now we need to access the collection we will monitor for changes. Add the following code to `monitorListingsUsingHasNext()`.\n\n ``` javascript\n const collection = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\");\n ```\n\n3. Now we are ready to create our change stream. We can do so by using Collection's watch(). Add the following line beneath the existing code in `monitorListingsUsingHasNext()`.\n\n ``` javascript\n const changeStream = collection.watch(pipeline);\n ```\n\n4. We could choose to leave the change stream open indefinitely. Instead, let's call our helper function that will set a timer and close the change stream. Add the following line beneath the existing code in `monitorListingsUsingHasNext()`.\n\n ``` javascript\n closeChangeStream(timeInMs, changeStream);\n ```\n\n5. Now let's create a `while` loop that will wait for new changes in the change stream. We can use ChangeStream's hasNext() inside of the `while` loop. `hasNext()` will wait to return true until a new change arrives in the change stream. `hasNext()` will throw an error as soon as the change stream is closed, so we will surround our `while` loop with a `try { }` block. 
If an error is thrown, we'll check to see if the change stream is closed. If the change stream is closed, we'll log that information. Otherwise, something unexpected happened, so we'll throw the error. Add the following code beneath the existing code in `monitorListingsUsingHasNext()`.\n\n ``` javascript\n try {\n while (await changeStream.hasNext()) {\n console.log(await changeStream.next());\n }\n } catch (error) {\n if (changeStream.isClosed()) {\n console.log(\"The change stream is closed. Will not wait on any more changes.\")\n } else {\n throw error;\n }\n }\n ```\n\n#### Call the Function\n\nNow that we've implemented our function, let's call it!\n\n1. Inside of `main()`, replace your existing call to `monitorListingsUsingEventEmitter()` with a call to your new `monitorListingsUsingHasNext()`:\n\n ``` javascript\n await monitorListingsUsingHasNext(client);\n ```\n\n2. Save your file.\n\n3. Run your script by executing `node changeStreams.js` in your shell. The change stream will open for 60 seconds.\n\n4. Create and update sample data by executing node changeStreamsTestData.js in a new shell. Output similar to what we saw earlier will be displayed in your first shell where you are running `changeStreams.js`. If you run `node changeStreamsTestData.js` again before the 60 second timer has completed, you will see similar output again. After 60 seconds, the following will be displayed:\n\n ``` sh\n Closing the change stream\n ```\n\n#### Call the Function with an Aggregation Pipeline\n\nAs we discussed earlier, sometimes you will want to use an aggregation pipeline to filter the changes in your change stream or transform the change stream event documents. Let's pass the aggregation pipeline we created in an earlier section to our new function.\n\n1. Update your existing call to `monitorListingsUsingHasNext()` to only leave the change stream open for 30 seconds and use the aggregation pipeline.\n\n ``` javascript\n await monitorListingsUsingHasNext(client, 30000, pipeline);\n ```\n\n2. Save your file.\n\n3. Run your script by executing `node changeStreams.js` in your shell. The change stream will open for 30 seconds.\n\n4. Create and update sample data by executing node changeStreamsTestData.js in a new shell. Because the change stream is using the pipeline you just created, only documents inserted into the `listingsAndReviews` collection that are in the Sydney, Australia market will be in the change stream. Output similar to what we saw earlier while using a change stream with an aggregation pipeline will be displayed in your first shell where you are running `changeStreams.js`. After 30 seconds, the following will be displayed:\n\n ``` sh\n Closing the change stream\n ```\n\n### Monitor Changes Stream using the Stream API\n\nIn the previous two sections, we used EventEmitter's `on()` and ChangeStreams's `hasNext()` to monitor changes. Let's examine a third way to monitor a change stream: using Node's Stream API.\n\n#### Load the Stream Module\n\nIn order to use the Stream module, we will need to load it.\n\n1. Continuing to work in `changeStreams.js`, load the Stream module at the top of the file.\n\n ``` javascript\n const stream = require('stream');\n ```\n\n#### Create the Function\n\nLet's create a function that will monitor changes in the change stream using the Stream API.\n\n1. Continuing to work in `changeStreams.js`, create an asynchronous function named `monitorListingsUsingStreamAPI`. 
The function should have the following parameters: a connected MongoClient, a time in ms that indicates how long the change stream should be monitored, and an aggregation pipeline that the change stream will use.\n\n ``` javascript\n async function monitorListingsUsingStreamAPI(client, timeInMs = 60000, pipeline = []) { \n\n }\n ```\n\n2. Now we need to access the collection we will monitor for changes. Add the following code to `monitorListingsUsingStreamAPI()`.\n\n ``` javascript\n const collection = client.db(\"sample_airbnb\").collection(\"listingsAndReviews\");\n ```\n\n3. Now we are ready to create our change stream. We can do so by using Collection's watch(). Add the following line beneath the existing code in `monitorListingsUsingStreamAPI()`.\n\n ``` javascript\n const changeStream = collection.watch(pipeline);\n ```\n\n4. Now we're ready to monitor our change stream. ChangeStream's stream() will return a Node Readable stream. We will call Readable's pipe() to pull the data out of the stream and write it to the console.\n\n ``` javascript\n changeStream.stream().pipe(\n new stream.Writable({\n objectMode: true,\n write: function (doc, _, cb) {\n console.log(doc);\n cb();\n }\n })\n );\n ```\n\n5. We could choose to leave the change stream open indefinitely. Instead, let's call our helper function that will set a timer and close the change stream. Add the following line beneath the existing code in `monitorListingsUsingStreamAPI()`.\n\n ``` javascript\n await closeChangeStream(timeInMs, changeStream);\n ```\n\n#### Call the Function\n\nNow that we've implemented our function, let's call it!\n\n1. Inside of `main()`, replace your existing call to `monitorListingsUsingHasNext()` with a call to your new `monitorListingsUsingStreamAPI()`:\n\n ``` javascript\n await monitorListingsUsingStreamAPI(client);\n ```\n\n2. Save your file.\n\n3. Run your script by executing `node changeStreams.js` in your shell. The change stream will open for 60 seconds.\n\n4. Output similar to what we saw earlier will be displayed in your first shell where you are running `changeStreams.js`. If you run `node changeStreamsTestData.js` again before the 60 second timer has completed, you will see similar output again. After 60 seconds, the following will be displayed:\n\n ``` sh\n Closing the change stream\n ```\n\n#### Call the Function with an Aggregation Pipeline\n\nAs we discussed earlier, sometimes you will want to use an aggregation pipeline to filter the changes in your change stream or transform the change stream event documents. Let's pass the aggregation pipeline we created in an earlier section to our new function.\n\n1. Update your existing call to `monitorListingsUsingStreamAPI()` to only leave the change stream open for 30 seconds and use the aggregation pipeline.\n\n ``` javascript\n await monitorListingsUsingStreamAPI(client, 30000, pipeline);\n ```\n\n2. Save your file.\n\n3. Run your script by executing `node changeStreams.js` in your shell. The change stream will open for 30 seconds.\n\n4. Create and update sample data by executing node changeStreamsTestData.js in a new shell. Because the change stream is using the pipeline you just created, only documents inserted into the `listingsAndReviews` collection that are in the Sydney, Australia market will be in the change stream. Output similar to what we saw earlier while using a change stream with an aggregation pipeline will be displayed in your first shell where you are running `changeStreams.js`. 
After 30 seconds, the following will be displayed:\n\n ``` sh\n Closing the change stream\n ```\n\n## Resume a Change Stream\n\nAt some point, your application will likely lose the connection to the change stream. Perhaps a network error will occur and a connection between the application and the database will be dropped. Or perhaps your application will crash and need to be restarted (but you're a 10x developer and that would never happen to you, right?).\n\nIn those cases, you may want to resume the change stream where you previously left off so you don't lose any of the change events.\n\nEach change stream event document contains a resume token. The Node.js driver automatically stores the resume token in the `_id` of the change event document.\n\nThe application can pass the resume token when creating a new change stream. The change stream will include all events that happened after the event associated with the given resume token.\n\nThe MongoDB Node.js driver will automatically attempt to reestablish connections in the event of transient network errors or elections. In those cases, the driver will use its cached copy of the most recent resume token so that no change stream events are lost.\n\nIn the event of an application failure or restart, the application will need to pass the resume token when creating the change stream in order to ensure no change stream events are lost. Keep in mind that the driver will lose its cached copy of the most recent resume token when the application restarts, so your application should store the resume token.\n\nFor more information and sample code for resuming change streams, see the official documentation.\n\n## What are MongoDB Atlas Triggers?\n\nChange streams allow you to react immediately to changes in your database. If you want to constantly be monitoring changes to your database, ensuring that your application that is monitoring the change stream is always up and not missing any events is possible... but can be challenging. This is where MongoDB Atlas triggers come in.\n\nMongoDB supports triggers in Atlas. Atlas triggers allow you to execute functions in real time based on database events (just like change streams) or on scheduled intervals (like a cron job). Atlas triggers have a few big advantages:\n\n- You don't have to worry about programming the change stream. You simply program the function that will be executed when the database event is fired.\n- You don't have to worry about managing the server where your change stream code is running. Atlas takes care of the server management for you.\n- You get a handy UI to configure your trigger, which means you have less code to write.\n\nAtlas triggers do have a few constraints. The biggest constraint I hit in the past was that functions did not support module imports (i.e. **import** and **require**). That has changed, and you can now upload external dependencies that you can use in your functions. See Upload External Dependencies for more information. To learn more about functions and their constraints, see the official Realm Functions documentation.\n\n## Create a MongoDB Atlas Trigger\n\nJust as we did in earlier sections, let's look for new listings in the Sydney, Australia market. 
Instead of working locally in a code editor to create and monitor a change stream, we'll create a trigger in the Atlas web UI.\n\n### Create a Trigger\n\nLet's create an Atlas trigger that will monitor the `listingsAndReviews` collection and call a function whenever a new listing is added in the Sydney, Australia market.\n\n1. Navigate to your project in Atlas.\n\n2. In the Data Storage section of the left navigation pane, click **Triggers**.\n\n3. Click **Add Trigger**. The **Add Trigger** wizard will appear.\n\n4. In the **Link Data Source(s)** selection box, select your cluster that contains the `sample_airbnb` database and click **Link**. The changes will be deployed. The deployment may take a minute or two. Scroll to the top of the page to see the status.\n\n5. In the **Select a cluster...** selection box, select your cluster that contains the `sample_airbnb` database.\n\n6. In the **Select a database name...** selection box, select **sample_airbnb**.\n\n7. In the **Select a collection name...** selection box, select **listingsAndReviews**.\n\n8. In the Operation Type section, check the box beside **Insert**.\n\n9. In the Function code box, replace the commented code with a call to log the change event. The code should now look like the following:\n\n ``` javascript\n exports = function(changeEvent) {\n console.log(JSON.stringify(changeEvent.fullDocument)); \n };\n ```\n\n10. We can create a $match statement to filter our change events just as we did earlier with the aggregation pipeline we passed to the change stream in our Node.js script. Expand the **ADVANCED (OPTIONAL)** section at the bottom of the page and paste the following in the **Match Expression** code box.\n\n ``` javascript\n { \n \"fullDocument.address.country\": \"Australia\", \n \"fullDocument.address.market\": \"Sydney\" \n }\n ```\n\n11. Click **Save**. The trigger will be enabled. From that point on, the function to log the change event will be called whenever a new document in the Sydney, Australia market is inserted in the `listingsAndReviews` collection.\n\n### Fire the Trigger\n\nNow that we have the trigger configured, let's create sample data that will fire the trigger.\n\n1. Return to the shell on your local machine.\n2. Create and update sample data by executing node changeStreamsTestData.js in a new shell.\n\n### View the Trigger Results\n\nWhen you created the trigger, MongoDB Atlas automatically created a Realm application for you named **Triggers_RealmApp**.\n\nThe function associated with your trigger doesn't currently do much. It simply prints the change event document. Let's view the results in the logs of the Realm app associated with your trigger.\n\n1. Return to your browser where you are viewing your trigger in Atlas.\n2. In the navigation bar toward the top of the page, click **Realm**.\n3. In the Applications pane, click **Triggers_RealmApp**. The **Triggers_RealmApp** Realm application will open.\n4. In the MANAGE section of the left navigation pane, click **Logs**. Two entries will be displayed in the Logs pane\u2014one for each of the listings in the Sydney, Australia market that was inserted into the collection.\n5. Click the arrow at the beginning of each row in the Logs pane to expand the log entry. 
Here you can see the full document that was inserted.\n\nIf you insert more listings in the Sydney, Australia market, you can refresh the Logs page to see the change events.\n\n## Wrapping Up\n\nToday we explored four different ways to accomplish the same task of reacting immediately to changes in the database. We began by writing a Node.js script that monitored a change stream using Node.js's Built-in EventEmitter class. Next we updated the Node.js script to monitor a change stream using the MongoDB Node.js Driver's ChangeStream class. Then we updated the Node.js script to monitor a change stream using the Stream API. Finally, we created an Atlas trigger to monitor changes. In all four cases, we were able to use $match to filter the change stream events.\n\nThis post included many code snippets that built on code written in the first post of this MongoDB and Node.js Quick Start series. To get a full copy of the code used in today's post, visit the Node.js Quick Start GitHub Repo.\n\nThe examples we explored today all did relatively simple things whenever an event was fired: they logged the change events. Change streams and triggers become really powerful when you start doing more in response to change events. For example, you might want to fire alarms, send emails, place orders, update other systems, or do other amazing things.\n\nThis is the final post in the Node.js and MongoDB Quick Start Series (at least for now!). I hope you've enjoyed it! If you have ideas for other topics you'd like to see covered, let me know in the MongoDB Community.\n\n## Additional Resources\n\n- MongoDB Official Documentation: Change Streams\n- MongoDB Official Documentation: Triggers\n- Blog Post: An Introduction to Change Streams\n- Video: Using Change Streams to Keep Up with Your Data\n", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB", "Node.js"], "pageDescription": "Discover how to react to changes in your MongoDB database using change streams implemented in Node.js and Atlas triggers.", "contentType": "Quickstart"}, "title": "Change Streams & Triggers with Node.js Tutorial", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/visually-showing-atlas-search-highlights-javascript-html", "action": "created", "body": "\n \n\n \n Search\n \n\n \n \n \n ", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Node.js"], "pageDescription": "Learn how to use JavaScript and HTML to show MongoDB Atlas Search highlights on the screen.", "contentType": "Tutorial"}, "title": "Visually Showing Atlas Search Highlights with JavaScript and HTML", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/mongodb-5-0-schema-validation", "action": "created", "body": "# Improved Error Messages for Schema Validation in MongoDB 5.0\n\n## Intro\n\nMany MongoDB users rely on schema\nvalidation to\nenforce rules governing the structure and integrity of documents in\ntheir collections. But one of the challenges they faced was quickly\nunderstanding why a document that did not match the schema couldn't be\ninserted or updated. This is changing in the upcoming MongoDB 5.0\nrelease.\n\nSchema validation ease-of-use will be significantly improved by\ngenerating descriptive error messages whenever an operation fails\nvalidation. 
This additional information provides valuable insight into\nwhich parts of a document in an insert/update operation failed to\nvalidate against which parts of a collection's validator, and how. From\nthis information, you can quickly identify and remediate code errors\nthat are causing documents to not comply with your validation rules. No\nmore tedious debugging by slicing your document into pieces to isolate\nthe problem!\n\n>\n>\n>If you would like to evaluate this feature and provide us early\n>feedback, fill in this\n>form to\n>participate in the preview program.\n>\n>\n\nThe most popular way to express the validation rules is JSON\nSchema.\nIt is a widely adopted standard that is also used within the REST API\nspecification and validation. And in MongoDB, you can combine JSON\nSchema with the MongoDB Query Language (MQL) to do even more.\n\nIn this post, I would like to go over a few examples to reiterate the\ncapabilities of schema validation and showcase the addition of new\ndetailed error messages.\n\n## What Do the New Error Messages Look Like?\n\nFirst, let's look at the new error message. It is a structured message\nin the BSON format, explaining which part of the document didn't match\nthe rules and which validation rule caused this.\n\nConsider this basic validator that ensures that the price field does not\naccept negative values. In JSON Schema, the property is the equivalent\nof what we call \"field\" in MongoDB.\n\n``` json\n{\n \"$jsonSchema\": {\n \"properties\": {\n \"price\": {\n \"minimum\": 0\n }\n }\n }\n}\n```\n\nWhen trying to insert a document with `{price: -2}`, the following error\nmessage will be returned.\n\n``` json\n{\n \"code\": 121,\n \"errmsg\": \"Document failed validation\",\n \"errInfo\": {\n \"failingDocumentId\": ObjectId(\"5fe0eb9642c10f01eeca66a9\"),\n \"details\": {\n \"operatorName\": \"$jsonSchema\",\n \"schemaRulesNotSatisfied\": \n {\n \"operatorName\": \"properties\",\n \"propertiesNotSatisfied\": [\n {\n \"propertyName\": \"price\",\n \"details\": [\n {\n \"operatorName\": \"minimum\",\n \"specifiedAs\": {\n \"minimum\": 0\n },\n \"reason\": \"comparison failed\",\n \"consideredValue\": -2\n }\n ]\n }\n ]\n }\n ]\n }\n }\n}\n```\n\nSome of the key fields in the response are:\n\n- `failingDocumentId` - the \\_id of the document that was evaluated\n- `operatorName` - the operator used in the validation rule\n- `propertiesNotSatisfied` - the list of fields (properties) that\n failed validation checks\n- `propertyName` - the field of the document that was evaluated\n- `specifiedAs` - the rule as it was expressed in the validator\n- `reason - explanation` of how the rule was not satisfied\n- `consideredValue` - value of the field in the document that was\n evaluated\n\nThe error may include more fields depending on the specific validation\nrule, but these are the most common. 
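\n\nIf you want to reproduce an error like this yourself, one approach (a minimal mongosh sketch; the collection name `products` is illustrative, and the exact shape of the thrown error can vary by shell and driver version) is to attach the validator above to a collection and attempt an invalid insert:\n\n``` javascript\n// Create a collection that uses the price >= 0 validator shown above.\ndb.createCollection(\"products\", {\n validator: {\n $jsonSchema: {\n properties: {\n price: { minimum: 0 }\n }\n }\n }\n});\n\n// This insert fails validation, so the server responds with the detailed\n// errInfo document described in this post.\ntry {\n db.products.insertOne({ price: -2 });\n} catch (error) {\n // The structured explanation lives in the error's errInfo field\n // (how you access it may differ slightly across shell/driver versions).\n printjson(error.errInfo);\n}\n```\n\n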
You will likely find the\n`propertyName` and `reason` to be the most useful fields in the\nresponse.\n\nNow we can look at the examples of the different validation rules and\nsee how the new detailed message helps us identify the reason for the\nvalidation failure.\n\n## Exploring a Sample Collection\n\nAs an example, we'll use a collection of real estate properties in NYC\nmanaged by a team of real estate agents.\n\nHere is a sample document:\n\n``` json\n{\n \"PID\": \"EV10010A1\",\n \"agents\": [ { \"name\": \"Ana Blake\", \"email\": \"anab@rcgk.com\" } ],\n \"description\": \"Spacious 2BR apartment\",\n \"localization\": { \"description_es\": \"Espacioso apartamento de 2 dormitorios\" },\n \"type\": \"Residential\",\n \"address\": {\n \"street1\": \"235 E 22nd St\",\n \"street2\": \"Apt 42\",\n \"city\": \"New York\",\n \"state\": \"NY\",\n \"zip\": \"10010\"\n },\n \"originalPrice\": 990000,\n \"discountedPrice\": 980000,\n \"geoLocation\": [ -73.9826509, 40.737499 ],\n \"listedDate\": \"Wed Dec 11 2020 10:05:10 GMT-0500 (EST)\",\n \"saleDate\": \"Wed Dec 21 2020 12:00:04 GMT-0500 (EST)\",\n \"saleDetails\": {\n \"price\": 970000,\n \"buyer\": { \"id\": \"24434\" },\n \"bids\": [\n {\n \"price\": 950000,\n \"winner\": false,\n \"bidder\": {\n \"id\": \"24432\",\n \"name\": \"Sam James\",\n \"contact\": { \"email\": \"sjames@gmail.com\" }\n }\n },\n {\n \"price\": 970000,\n \"winner\": true,\n \"bidder\": {\n \"id\": \"24434\",\n \"name\": \"Joana Miles\",\n \"contact\": { \"email\": \"jm@gmail.com\" }\n }\n }\n ]\n }\n}\n```\n\n## Using the Value Pattern\n\nOur real estate properties are identified with property id (PID) that\nhas to follow a specific naming format: It should start with two letters\nfollowed by five digits, and some letters and digits after, like this:\nWS10011FG4 or EV10010A1.\n\nWe can use JSON Schema `pattern` operator to create a rule for this as a\nregular expression.\n\nValidator:\n\n``` json\n{\n \"$jsonSchema\": {\n \"properties\": {\n \"PID\": {\n \"bsonType\": \"string\",\n \"pattern\": \"^[A-Z]{2}[0-9]{5}[A-Z]+[0-9]+$\"\n }\n }\n }\n}\n```\n\nIf we try to insert a document with a PID field that doesn't match the\npattern, for example `{ PID: \"apt1\" }`, we will receive an error.\n\nThe error states that the field `PID` had the value of `\"apt1\"` and it\ndid not match the regular expression, which was specified as\n`\"^[A-Z]{2}[0-9]{5}[A-Z]+[0-9]+$\"`.\n\n``` json\n{ ...\n \"schemaRulesNotSatisfied\": [\n {\n \"operatorName\": \"properties\",\n \"propertiesNotSatisfied\": [\n {\n \"propertyName\": \"PID\",\n \"details\": [\n {\n \"operatorName\": \"pattern\",\n \"specifiedAs\": {\n \"pattern\": \"^[A-Z]{2}[0-9]{5}[A-Z]+[0-9]+$\"\n },\n \"reason\": \"regular expression did not match\",\n \"consideredValue\": \"apt1\"\n }\n ]\n }\n ]\n ...\n}\n```\n\n## Additional Properties and Property Pattern\n\nThe description may be localized into several languages. Currently, our\napplication only supports Spanish, German, and French, so the\nlocalization object can only contain fields `description_es`,\n`description_de`, or `description_fr`. 
Other fields will not be allowed.\n\nWe can use operator `patternProperties` to describe this requirement as\nregular expression and indicate that no other fields are expected here\nwith `\"additionalProperties\": false`.\n\nValidator:\n\n``` json\n{\n \"$jsonSchema\": {\n \"properties\": {\n \"PID\": {...},\n \"localization\": {\n \"additionalProperties\": false,\n \"patternProperties\": {\n \"^description_(es|de|fr)+$\": {\n \"bsonType\": \"string\"\n }\n }\n }\n }\n }\n} \n```\n\nDocument like this can be inserted successfully:\n\n``` json\n{\n \"PID\": \"TS10018A1\",\n \"type\": \"Residential\",\n \"localization\": {\n \"description_es\": \"Amplio apartamento de 2 dormitorios\",\n \"description_de\": \"Ger\u00e4umige 2-Zimmer-Wohnung\",\n }\n}\n```\n\nDocument like this will fail the validation check:\n\n``` json\n{\n \"PID\": \"TS10018A1\",\n \"type\": \"Residential\",\n \"localization\": {\n \"description_cz\": \"Prostorn\u00fd byt 2 + kk\"\n }\n}\n```\n\nThe error below indicates that field `localization` contains additional\nproperty `description_cz`. `description_cz` does not match the expected\npattern, so it is considered an additional property.\n\n``` json\n{ ...\n \"propertiesNotSatisfied\": [\n {\n \"propertyName\": \"localization\",\n \"details\": [\n {\n \"operatorName\": \"additionalProperties\",\n \"specifiedAs\": {\n \"additionalProperties\": false\n },\n \"additionalProperties\": [\n \"description_cz\"\n ]\n }\n ]\n }\n ]\n...\n}\n```\n\n## Enumeration of Allowed Options\n\nEach real estate property in our collection has a type, and we want to\nuse one of the four types: \"Residential,\" \"Commercial,\" \"Industrial,\" or\n\"Land.\" This can be achieved with the operator `enum`.\n\nValidator:\n\n``` json\n{\n \"$jsonSchema\": {\n \"properties\": {\n \"type\": {\n \"enum\": [ \"Residential\", \"Commercial\", \"Industrial\", \"Land\" ]\n }\n }\n }\n}\n```\n\nThe following document will be considered invalid:\n\n``` json\n{\n \"PID\": \"TS10018A1\", \"type\": \"House\"\n}\n```\n\nThe error states that field `type` failed validation because \"value was\nnot found in enum.\"\n\n``` json\n{...\n \"propertiesNotSatisfied\": [\n {\n \"propertyName\": \"type\",\n \"details\": [\n {\n \"operatorName\": \"enum\",\n \"specifiedAs\": {\n \"enum\": [\n \"Residential\",\n \"Commercial\",\n \"Industrial\",\n \"Land\"\n ]\n },\n \"reason\": \"value was not found in enum\",\n \"consideredValue\": \"House\"\n }\n ]\n }\n ]\n...\n}\n```\n\n## Arrays: Enforcing Number of Elements and Uniqueness\n\nAgents who manage each real estate property are stored in the `agents`\narray. Let's make sure there are no duplicate elements in the array, and\nno more than three agents are working with the same property. 
We can use\n`uniqueItems` and `maxItems` for this.\n\n``` json\n{\n \"$jsonSchema\": {\n \"properties\": {\n \"agents\": {\n \"bsonType\": \"array\",\n \"uniqueItems\": true,\n \"maxItems\": 3\n }\n }\n }\n}\n```\n\nThe following document violates both if the validation rules.\n\n``` json\n{\n \"PID\": \"TS10018A1\",\n \"agents\": [\n { \"name\": \"Ana Blake\" },\n { \"name\": \"Felix Morin\" },\n { \"name\": \"Dilan Adams\" },\n { \"name\": \"Ana Blake\" }\n ]\n}\n```\n\nThe error returns information about failure for two rules: \"array did\nnot match specified length\" and \"found a duplicate item,\" and it also\npoints to what value was a duplicate.\n\n``` json\n{\n ...\n \"propertiesNotSatisfied\": [\n {\n \"propertyName\": \"agents\",\n \"details\": [\n {\n \"operatorName\": \"maxItems\",\n \"specifiedAs\": { \"maxItems\": 3 },\n \"reason\": \"array did not match specified length\",\n \"consideredValue\": [\n { \"name\": \"Ana Blake\" },\n { \"name\": \"Felix Morin\" },\n { \"name\": \"Dilan Adams\" },\n { \"name\": \"Ana Blake\" }\n ]\n },\n {\n \"operatorName\": \"uniqueItems\",\n \"specifiedAs\": { \"uniqueItems\": true },\n \"reason\": \"found a duplicate item\",\n \"consideredValue\": [\n { \"name\": \"Ana Blake\" },\n { \"name\": \"Felix Morin\" },\n { \"name\": \"Dilan Adams\" },\n { \"name\": \"Ana Blake\" }\n ],\n \"duplicatedValue\": { \"name\": \"Ana Blake\" }\n }\n ]\n ...\n }\n```\n\n## Enforcing Required Fields\n\nNow, we want to make sure that there's contact information available for\nthe agents. We need each agent's name and at least one way to contact\nthem: phone or email. We will use `required`and `anyOf` to create this\nrule.\n\nValidator:\n\n``` json\n{\n \"$jsonSchema\": {\n \"properties\": {\n \"agents\": {\n \"bsonType\": \"array\",\n \"uniqueItems\": true,\n \"maxItems\": 3,\n \"items\": {\n \"bsonType\": \"object\",\n \"required\": [ \"name\" ],\n \"anyOf\": [ { \"required\": [ \"phone\" ] }, { \"required\": [ \"email\" ] } ]\n }\n }\n }\n }\n}\n```\n\nThe following document will fail validation:\n\n``` json\n{\n \"PID\": \"TS10018A1\",\n \"agents\": [\n { \"name\": \"Ana Blake\", \"email\": \"anab@rcgk.com\" },\n { \"name\": \"Felix Morin\", \"phone\": \"+12019878749\" },\n { \"name\": \"Dilan Adams\" }\n ]\n}\n```\n\nHere the error indicates that the third element of the array\n(`\"itemIndex\": 2`) did not match the rule.\n\n``` json\n{\n ...\n \"propertiesNotSatisfied\": [\n {\n \"propertyName\": \"agents\",\n \"details\": [\n {\n \"operatorName\": \"items\",\n \"reason\": \"At least one item did not match the sub-schema\",\n \"itemIndex\": 2,\n \"details\": [\n {\n \"operatorName\": \"anyOf\",\n \"schemasNotSatisfied\": [\n {\n \"index\": 0,\n \"details\": [\n {\n \"operatorName\": \"required\",\n \"specifiedAs\": { \"required\": [ \"phone\" ] },\n \"missingProperties\": [ \"phone\" ]\n }\n ]\n },\n {\n \"index\": 1,\n \"details\": [\n {\n \"operatorName\": \"required\",\n \"specifiedAs\": { \"required\": [ \"email\" ] },\n \"missingProperties\": [ \"email\" ]\n }\n ]\n }\n ]\n }\n ]\n }\n ]\n }\n ]\n...\n}\n```\n\n## Creating Dependencies\n\nLet's create another rule to ensure that if the document contains the\n`saleDate` field, `saleDetails` is also present, and vice versa: If\nthere is `saleDetails`, then `saleDate` also has to exist.\n\n``` json\n{\n \"$jsonSchema\": {\n \"dependencies\": {\n \"saleDate\": [ \"saleDetails\"],\n \"saleDetails\": [ \"saleDate\"]\n }\n }\n}\n```\n\nNow, let's try to insert the document with `saleDate` but with 
no\n`saleDetails`:\n\n``` json\n{\n \"PID\": \"TS10018A1\",\n \"saleDate\": Date(\"2020-05-01T04:00:00.000Z\")\n}\n```\n\nThe error now includes the property with dependency `saleDate` and a\nproperty missing from the dependencies: `saleDetails`.\n\n``` json\n{ \n ...\n \"details\": {\n \"operatorName\": \"$jsonSchema\",\n \"schemaRulesNotSatisfied\": [\n {\n \"operatorName\": \"dependencies\",\n \"failingDependencies\": [\n {\n \"conditionalProperty\": \"saleDate\",\n \"missingProperties\": [ \"saleDetails\" ]\n }\n ]\n }\n ]\n }\n...\n}\n```\n\nNotice that in JSON Schema, the field `dependencies` is in the root\nobject, and not inside of the specific property. Therefore in the error\nmessage, the `details` object will have a different structure:\n\n``` json\n{ \"operatorName\": \"dependencies\", \"failingDependencies\": [...]}\n```\n\nIn the previous examples, when the JSON Schema rule was inside of the\n\"properties\" object, like this:\n\n``` json\n\"$jsonSchema\": { \"properties\": { \"price\": { \"minimum\": 0 } } }\n```\n\nthe details of the error message contained\n`\"operatorName\": \"properties\"` and a `\"propertyName\"`:\n\n``` json\n{ \"operatorName\": \"properties\",\n \"propertiesNotSatisfied\": [ { \"propertyName\": \"...\", \"details\": [] } ]\n}\n```\n\n## Adding Business Logic to Your Validation Rules\n\nYou can use MongoDB Query Language (MQL) in your validator right next to\nJSON Schema to add richer business logic to your rules.\n\nAs one example, you can use\n`$expr`\nto add a check for a `discountedPrice` to be less than `originalPrice`\njust like this:\n\n``` json\n{\n \"$expr\": {\n \"$lt\": [ \"$discountedPrice\", \"$originalPrice\" ]\n },\n \"$jsonSchema\": {...}\n}\n```\n\n`$expr`\nresolves to `true` or `false`, and allows you to use aggregation\nexpressions to create sophisticated business rules.\n\nFor a little more complex example, let's say we keep an array of bids in\nthe document of each real estate property, and the boolean field\n`isWinner` indicates if a particular bid is a winning one.\n\nSample document:\n\n``` json\n{\n \"PID\": \"TS10018A1\",\n \"type\": \"Residential\",\n \"saleDetails\": {\n \"bids\": [\n {\n \"price\": 500000,\n \"isWinner\": false,\n \"bidder\": {...}\n },\n {\n \"price\": 530000,\n \"isWinner\": true,\n \"bidder\": {...}\n }\n ]\n }\n}\n```\n\nLet's make sure that only one of the `bids` array elements can be marked\nas the winner. 
The validator will have an expression where we apply a\nfilter to the array of bids to only keep the elements with `\"isWinner\":`\ntrue, and check the size of the resulting array to be less or equal to\n1.\n\nValidator:\n\n``` json\n{\n \"$and\": [\n {\n \"$expr\": {\n \"$lte\": [\n {\n \"$size\": {\n \"$filter\": {\n \"input\": \"$saleDetails.bids.isWinner\",\n \"cond\": \"$$this\"\n }\n }\n },\n 1\n ]\n }\n },\n {\n \"$expr\": {...}\n },\n {\n \"$jsonSchema\": {...}\n }\n ]\n}\n```\n\nLet's try to insert the document with few bids having\n`\"isWinner\": true`.\n\n``` json\n{\n \"PID\": \"TS10018A1\",\n \"type\": \"Residential\",\n \"originalPrice\": 600000,\n \"discountedPrice\": 550000,\n \"saleDetails\": {\n \"bids\": [\n { \"price\": 500000, \"isWinner\": true },\n { \"price\": 530000, \"isWinner\": true }\n ]\n }\n}\n```\n\nThe produced error message will indicate which expression evaluated to\nfalse.\n\n``` json\n{\n...\n \"details\": {\n \"operatorName\": \"$expr\",\n \"specifiedAs\": {\n \"$expr\": {\n \"$lte\": [\n {\n \"$size\": {\n \"$filter\": {\n \"input\": \"$saleDetails.bids.isWinner\",\n \"cond\": \"$$this\"\n }\n }\n },\n 1\n ]\n }\n },\n \"reason\": \"expression did not match\",\n \"expressionResult\": false\n }\n...\n}\n```\n\n## Geospatial Validation\n\nAs the last example, let's see how we can use the geospatial features of\nMQL to ensure that all the real estate properties in the collection are\nlocated within the New York City boundaries. Our documents include a\n`geoLocation` field with coordinates. We can use `$geoWithin` to check\nthat these coordinates are inside the geoJSON polygon (the polygon for\nNew York City in this example is approximate).\n\nValidator:\n\n``` json\n{\n \"geoLocation\": {\n \"$geoWithin\": {\n \"$geometry\": {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [ [ -73.91326904296874, 40.91091803848203 ],\n [ -74.01626586914062, 40.75297891717686 ],\n [ -74.05677795410156, 40.65563874006115 ],\n [ -74.08561706542969, 40.65199222800328 ],\n [ -74.14329528808594, 40.64417760251725 ],\n [ -74.18724060058594, 40.643656594948524 ],\n [ -74.234619140625, 40.556591288249905 ],\n [ -74.26345825195312, 40.513277131087484 ],\n [ -74.2510986328125, 40.49500373230525 ],\n [ -73.94691467285156, 40.543026009954986 ],\n [ -73.740234375, 40.589449604232975 ],\n [ -73.71826171874999, 40.820045086716505 ],\n [ -73.78829956054686, 40.8870435151357 ],\n [ -73.91326904296874, 40.91091803848203 ] ]\n ]\n }\n }\n },\n \"$jsonSchema\": {...}\n}\n```\n\nA document like this will be inserted successfully.\n\n``` json\n{\n \"PID\": \"TS10018A1\",\n \"type\": \"Residential\",\n \"geoLocation\": [ -73.9826509, 40.737499 ],\n \"originalPrice\": 600000,\n \"discountedPrice\": 550000,\n \"saleDetails\": {...}\n}\n```\n\nThe following document will fail.\n\n``` json\n{\n \"PID\": \"TS10018A1\",\n \"type\": \"Residential\",\n \"geoLocation\": [ -73.9826509, 80.737499 ],\n \"originalPrice\": 600000,\n \"discountedPrice\": 550000,\n \"saleDetails\": {...}\n}\n```\n\nThe error will indicate that validation failed the `$geoWithin`\noperator, and the reason is \"none of the considered geometries were\ncontained within the expression's geometry.\"\n\n``` json\n{\n...\n \"details\": {\n \"operatorName\": \"$geoWithin\",\n \"specifiedAs\": {\n \"geoLocation\": {\n \"$geoWithin\": {...}\n }\n },\n \"reason\": \"none of the considered geometries were contained within the \n expression's geometry\",\n \"consideredValues\": [ -73.9826509, 80.737499 ]\n }\n...\n}\n```\n\n## Conclusion 
and Next Steps\n\nSchema validation is a great tool to enforce governance over your data\nsets. You have the choice to express the validation rules using JSON\nSchema, MongoDB Query Language, or both. And now, with the detailed\nerror messages, it gets even easier to use, and you can have the rules\nbe as sophisticated as you need, without the risk of costly maintenance.\n\nYou can find the full validator code and sample documents from this post\n[here.\n\n>\n>\n>If you would like to evaluate this feature and provide us early\n>feedback, fill in this\n>form to\n>participate in the preview program.\n>\n>\n\nMore posts on schema validation:\n\n- JSON Schema Validation - Locking down your model the smart\n way\n- JSON Schema Validation - Dependencies you can depend\n on\n- JSON Schema Validation - Checking Your\n Arrays\n\nQuestions? Comments? We'd love to connect with you. Join the\nconversation on the MongoDB Community\nForums.\n\n**Safe Harbor**\n\nThe development, release, and timing of any features or functionality\ndescribed for our products remains at our sole discretion. This\ninformation is merely intended to outline our general product direction\nand it should not be relied on in making a purchasing decision nor is\nthis a commitment, promise or legal obligation to deliver any material,\ncode, or functionality.\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn about improved error messages for schema validation in MongoDB 5.0.", "contentType": "News & Announcements"}, "title": "Improved Error Messages for Schema Validation in MongoDB 5.0", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/java-mapping-pojos", "action": "created", "body": "# Java - Mapping POJOs\n\n## Updates\n\nThe MongoDB Java quickstart repository is available on GitHub.\n\n### February 28th, 2024\n\n- Update to Java 21\n- Update Java Driver to 5.0.0\n- Update `logback-classic` to 1.2.13\n\n### November 14th, 2023\n\n- Update to Java 17\n- Update Java Driver to 4.11.1\n- Update mongodb-crypt to 1.8.0\n\n### March 25th, 2021\n\n- Update Java Driver to 4.2.2.\n- Added Client Side Field Level Encryption example.\n\n### October 21st, 2020\n\n- Update Java Driver to 4.1.1.\n- The Java Driver logging is now enabled via the popular SLF4J API, so I added logback in the `pom.xml` and a configuration file `logback.xml`.\n\n## Introduction\n\nJava is an object-oriented programming language and MongoDB stores documents, which look a lot like objects. Indeed, this is not a coincidence because that's the core idea behind the MongoDB database.\n\nIn this blog post, as promised in the first blog post of this series, I will show you how to automatically map MongoDB documents to Plain Old Java Objects (POJOs) using only the MongoDB driver.\n\n## Getting Set Up\n\nI will use the same repository as usual in this series. If you don't have a copy of it yet, you can clone it or just update it if you already have it:\n\n``` sh\ngit clone https://github.com/mongodb-developer/java-quick-start\n```\n\nIf you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.\n\n## The Grades Collection\n\nIf you followed this series, you know that we have been working with the `grades` collection in the `sample_training` database. You can import it easily by loading the sample dataset in MongoDB Atlas.\n\nHere is what a MongoDB document looks like in extended JSON format. 
I'm using the extended JSON because it's easier to identify the field types and we will need them to build the POJOs.\n\n``` json\n{\n \"_id\": {\n \"$oid\": \"56d5f7eb604eb380b0d8d8ce\"\n },\n \"student_id\": {\n \"$numberDouble\": \"0\"\n },\n \"scores\": {\n \"type\": \"exam\",\n \"score\": {\n \"$numberDouble\": \"78.40446309504266\"\n }\n }, {\n \"type\": \"quiz\",\n \"score\": {\n \"$numberDouble\": \"73.36224783231339\"\n }\n }, {\n \"type\": \"homework\",\n \"score\": {\n \"$numberDouble\": \"46.980982486720535\"\n }\n }, {\n \"type\": \"homework\",\n \"score\": {\n \"$numberDouble\": \"76.67556138656222\"\n }\n }],\n \"class_id\": {\n \"$numberDouble\": \"339\"\n }\n}\n```\n\n## POJOs\n\nThe first thing we need is a representation of this document in Java. For each document or subdocument, I need a corresponding POJO class.\n\nAs you can see in the document above, I have the main document itself and I have an array of subdocuments in the `scores` field. Thus, we will need 2 POJOs to represent this document in Java:\n\n- One for the grade,\n- One for the scores.\n\nIn the package `com.mongodb.quickstart.models`, I created two new POJOs: `Grade.java` and `Score.java`.\n\n[Grade.java:\n\n``` java\npackage com.mongodb.quickstart.models;\n\n// imports\n\npublic class Grade {\n\n private ObjectId id;\n @BsonProperty(value = \"student_id\")\n private Double studentId;\n @BsonProperty(value = \"class_id\")\n private Double classId;\n private List scores;\n\n // getters and setters with builder pattern\n // toString()\n // equals()\n // hashCode()\n}\n```\n\n>In the Grade class above, I'm using `@BsonProperty` to avoid violating Java naming conventions for variables, getters, and setters. This allows me to indicate to the mapper that I want the `\"student_id\"` field in JSON to be mapped to the `\"studentId\"` field in Java.\n\nScore.java:\n\n``` java\npackage com.mongodb.quickstart.models;\n\nimport java.util.Objects;\n\npublic class Score {\n\n private String type;\n private Double score;\n\n // getters and setters with builder pattern\n // toString()\n // equals()\n // hashCode()\n}\n```\n\nAs you can see, we took care of matching the Java types with the JSON value types to follow the same data model. You can read more about types and documents in the documentation.\n\n## Mapping POJOs\n\nNow that we have everything we need, we can start the MongoDB driver code.\n\nI created a new class `MappingPOJO` in the `com.mongodb.quickstart` package and here are the key lines of code:\n\n- I need a `ConnectionString` instance instead of the usual `String` I have used so far in this series. I'm still retrieving my MongoDB Atlas URI from the system properties. See my starting and setup blog post if you need a reminder.\n\n``` java\nConnectionString connectionString = new ConnectionString(System.getProperty(\"mongodb.uri\"));\n```\n\n- I need to configure the CodecRegistry to include a codec to handle the translation to and from BSON for our POJOs.\n\n``` java\nCodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());\n```\n\n- And I need to add the default codec registry, which contains all the default codecs. 
They can handle all the major types in Java-like `Boolean`, `Double`, `String`, `BigDecimal`, etc.\n\n``` java\nCodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(),\n pojoCodecRegistry);\n```\n\n- I can now wrap all my settings together using `MongoClientSettings`.\n\n``` java\nMongoClientSettings clientSettings = MongoClientSettings.builder()\n .applyConnectionString(connectionString)\n .codecRegistry(codecRegistry)\n .build();\n```\n\n- I can finally initialise my connection with MongoDB.\n\n``` java\ntry (MongoClient mongoClient = MongoClients.create(clientSettings)) {\n MongoDatabase db = mongoClient.getDatabase(\"sample_training\");\n MongoCollection grades = db.getCollection(\"grades\", Grade.class);\n ...]\n}\n```\n\nAs you can see in this last line of Java, all the magic is happening here. The `MongoCollection` I'm retrieving is typed by `Grade` and not by `Document` as usual.\n\nIn the previous blog posts in this series, I showed you how to use CRUD operations by manipulating `MongoCollection`. Let's review all the CRUD operations using POJOs now.\n\n- Here is an insert (create).\n\n``` java\nGrade newGrade = new Grade().setStudent_id(10003d)\n .setClass_id(10d)\n .setScores(List.of(new Score().setType(\"homework\").setScore(50d)));\ngrades.insertOne(newGrade);\n```\n\n- Here is a find (read).\n\n``` java\nGrade grade = grades.find(eq(\"student_id\", 10003d)).first();\nSystem.out.println(\"Grade found:\\t\" + grade);\n```\n\n- Here is an update with a `findOneAndReplace` returning the newest version of the document.\n\n``` java\nList newScores = new ArrayList<>(grade.getScores());\nnewScores.add(new Score().setType(\"exam\").setScore(42d));\ngrade.setScores(newScores);\nDocument filterByGradeId = new Document(\"_id\", grade.getId());\nFindOneAndReplaceOptions returnDocAfterReplace = new FindOneAndReplaceOptions()\n .returnDocument(ReturnDocument.AFTER);\nGrade updatedGrade = grades.findOneAndReplace(filterByGradeId, grade, returnDocAfterReplace);\nSystem.out.println(\"Grade replaced:\\t\" + updatedGrade);\n```\n\n- And finally here is a `deleteOne`.\n\n``` java\nSystem.out.println(grades.deleteOne(filterByGradeId));\n```\n\n## Final Code\n\n`MappingPojo.java` ([code):\n\n``` java\npackage com.mongodb.quickstart;\n\nimport com.mongodb.ConnectionString;\nimport com.mongodb.MongoClientSettings;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport com.mongodb.client.model.FindOneAndReplaceOptions;\nimport com.mongodb.client.model.ReturnDocument;\nimport com.mongodb.quickstart.models.Grade;\nimport com.mongodb.quickstart.models.Score;\nimport org.bson.codecs.configuration.CodecRegistry;\nimport org.bson.codecs.pojo.PojoCodecProvider;\nimport org.bson.conversions.Bson;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static com.mongodb.client.model.Filters.eq;\nimport static org.bson.codecs.configuration.CodecRegistries.fromProviders;\nimport static org.bson.codecs.configuration.CodecRegistries.fromRegistries;\n\npublic class MappingPOJO {\n\n public static void main(String] args) {\n ConnectionString connectionString = new ConnectionString(System.getProperty(\"mongodb.uri\"));\n CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());\n CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);\n MongoClientSettings 
clientSettings = MongoClientSettings.builder()\n .applyConnectionString(connectionString)\n .codecRegistry(codecRegistry)\n .build();\n try (MongoClient mongoClient = MongoClients.create(clientSettings)) {\n MongoDatabase db = mongoClient.getDatabase(\"sample_training\");\n MongoCollection grades = db.getCollection(\"grades\", Grade.class);\n\n // create a new grade.\n Grade newGrade = new Grade().setStudentId(10003d)\n .setClassId(10d)\n .setScores(List.of(new Score().setType(\"homework\").setScore(50d)));\n grades.insertOne(newGrade);\n System.out.println(\"Grade inserted.\");\n\n // find this grade.\n Grade grade = grades.find(eq(\"student_id\", 10003d)).first();\n System.out.println(\"Grade found:\\t\" + grade);\n\n // update this grade: adding an exam grade\n List newScores = new ArrayList<>(grade.getScores());\n newScores.add(new Score().setType(\"exam\").setScore(42d));\n grade.setScores(newScores);\n Bson filterByGradeId = eq(\"_id\", grade.getId());\n FindOneAndReplaceOptions returnDocAfterReplace = new FindOneAndReplaceOptions().returnDocument(ReturnDocument.AFTER);\n Grade updatedGrade = grades.findOneAndReplace(filterByGradeId, grade, returnDocAfterReplace);\n System.out.println(\"Grade replaced:\\t\" + updatedGrade);\n\n // delete this grade\n System.out.println(\"Grade deleted:\\t\" + grades.deleteOne(filterByGradeId));\n }\n }\n}\n```\n\nTo start this program, you can use this maven command line in your root project (where the `src` folder is) or your favorite IDE.\n\n``` bash\nmvn compile exec:java -Dexec.mainClass=\"com.mongodb.quickstart.MappingPOJO\" -Dmongodb.uri=\"mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority\"\n```\n\n## Wrapping Up\n\nMapping POJOs and your MongoDB documents simplifies your life a lot when you are solving real-world problems with Java, but you can certainly be successful without using POJOs.\n\nMongoDB is a dynamic schema database which means your documents can have different schemas within a single collection. Mapping all the documents from such a collection can be a challenge. So, sometimes, using the \"old school\" method and the `Document` class will be easier.\n\n>If you want to learn more and deepen your knowledge faster, I recommend you check out the [MongoDB Java Developer Path training available for free on MongoDB University.\n\nIn the next blog post, I will show you the aggregation framework in Java.\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB"], "pageDescription": "Learn how to use the native mapping of POJOs using the MongoDB Java Driver.", "contentType": "Quickstart"}, "title": "Java - Mapping POJOs", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/time-series-macd-rsi", "action": "created", "body": "# Currency Analysis with Time Series Collections #3 \u2014 MACD and RSI Calculation\n\nIn the first post of this series, we learned how to group currency data based on given time intervals to generate candlestick charts. In the second article, we learned how to calculate simple moving average and exponential moving average on the currencies based on a given time window. Now, in this post we\u2019ll learn how to calculate more complex technical indicators.\n\n## MACD Indicator\n\nMACD (Moving Average Convergence Divergence) is another trading indicator and provides visibility of the trend and momentum of the currency/stock. 
MACD calculation fundamentally leverages multiple EMA calculations with different parameters.\n\nAs shown in the below diagram, MACD indicator has three main components: MACD Line, MACD Signal, and Histogram. (The blue line represents MACD Line, the red line represents MACD Signal, and green and red bars represent histogram):\n\n- MACD Line is calculated by subtracting the 26-period (mostly, days are used for the period) exponential moving average from the 12-period exponential moving average. \n- After we get the MACD Line, we can calculate the MACD Signal. MACD Signal is calculated by getting the nine-period exponential moving average of MACD Line.\n- MACD Histogram is calculated by subtracting the MACD Signal from the MACD Line. \n\nWe can use the MongoDB Aggregation Framework to calculate this complex indicator. \n\nIn the previous blog posts, we learned how we can group the second-level raw data into five-minutes intervals through the `$group` stage and `$dateTrunc` operator:\n\n```js\ndb.ticker.aggregate(\n {\n $match: {\n symbol: \"BTC-USD\",\n },\n },\n {\n $group: {\n _id: {\n symbol: \"$symbol\",\n time: {\n $dateTrunc: {\n date: \"$time\",\n unit: \"minute\",\n binSize: 5,\n },\n },\n },\n high: { $max: \"$price\" },\n low: { $min: \"$price\" },\n open: { $first: \"$price\" },\n close: { $last: \"$price\" },\n },\n },\n {\n $sort: {\n \"_id.time\": 1,\n },\n },\n {\n $project: {\n _id: 1,\n price: \"$close\",\n },\n }\n]);\n```\n\nAfter that, we need to calculate two exponential moving averages with different parameters:\n\n```js\n{\n $setWindowFields: {\n partitionBy: \"_id.symbol\",\n sortBy: { \"_id.time\": 1 },\n output: {\n ema_12: {\n $expMovingAvg: { input: \"$price\", N: 12 },\n },\n ema_26: {\n $expMovingAvg: { input: \"$price\", N: 26 },\n },\n },\n },\n}\n```\n\nAfter we calculate two separate exponential moving averages, we need to apply the `$subtract` operation in the next stage of the aggregation pipeline:\n\n```js\n{ $addFields : {\"macdLine\" : {\"$subtract\" : [\"$ema_12\", \"$ema_26\"]}}}\n```\n\nAfter we\u2019ve obtained the `macdLine` field, then we can apply another exponential moving average to this newly generated field (`macdLine`) to obtain MACD signal value:\n\n```js\n{\n $setWindowFields: {\n partitionBy: \"_id.symbol\",\n sortBy: { \"_id.time\": 1 },\n output: {\n macdSignal: {\n $expMovingAvg: { input: \"$macdLine\", N: 9 },\n },\n },\n },\n}\n```\n\nTherefore, we will have two more fields: `macdLine` and `macdSignal`. We can generate another field as `macdHistogram` that is calculated by subtracting the `macdSignal` from `macdLine` value:\n\n```js\n{ $addFields : {\"macdHistogram\" : {\"$subtract\" : [\"$macdLine\", \"$macdSignal\"]}}}\n```\n\nNow we have three derived fields: `macdLine`, `macdSignal`, and `macdHistogram`. 
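\n\nIf you only need the indicator values downstream (for charting, for example), one optional final stage you could add is a `$project` that keeps just the fields derived above; this is only a sketch, not part of the original pipeline:\n\n```js\n{\n $project: {\n _id: 1,\n price: 1,\n macdLine: 1,\n macdSignal: 1,\n macdHistogram: 1,\n },\n}\n```\n\n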
Below, you can see how MACD is visualized together with Candlesticks:\n\n![Candlestick charts\n\nThis is the complete aggregation pipeline:\n\n```js\ndb.ticker.aggregate(\n {\n $match: {\n symbol: \"BTC-USD\",\n },\n },\n {\n $group: {\n _id: {\n symbol: \"$symbol\",\n time: {\n $dateTrunc: {\n date: \"$time\",\n unit: \"minute\",\n binSize: 5,\n },\n },\n },\n high: { $max: \"$price\" },\n low: { $min: \"$price\" },\n open: { $first: \"$price\" },\n close: { $last: \"$price\" },\n },\n },\n {\n $sort: {\n \"_id.time\": 1,\n },\n },\n {\n $project: {\n _id: 1,\n price: \"$close\",\n },\n },\n {\n $setWindowFields: {\n partitionBy: \"_id.symbol\",\n sortBy: { \"_id.time\": 1 },\n output: {\n ema_12: {\n $expMovingAvg: { input: \"$price\", N: 12 },\n },\n ema_26: {\n $expMovingAvg: { input: \"$price\", N: 26 },\n },\n },\n },\n },\n { $addFields: { macdLine: { $subtract: [\"$ema_12\", \"$ema_26\"] } } },\n {\n $setWindowFields: {\n partitionBy: \"_id.symbol\",\n sortBy: { \"_id.time\": 1 },\n output: {\n macdSignal: {\n $expMovingAvg: { input: \"$macdLine\", N: 9 },\n },\n },\n },\n },\n {\n $addFields: { macdHistogram: { $subtract: [\"$macdLine\", \"$macdSignal\"] } },\n },\n]);\n```\n\n## RSI Indicator\n\n[RSI (Relativity Strength Index) is another financial technical indicator that reveals whether the asset has been overbought or oversold. It usually uses a 14-period time frame window, and the value of RSI is measured on a scale of 0 to 100. If the value is closer to 100, then it indicates that the asset has been overbought within this time period. And if the value is closer to 0, then it indicates that the asset has been oversold within this time period. Mostly, 70 and 30 are used for upper and lower thresholds.\n\nCalculation of RSI is a bit more complicated than MACD:\n\n- For every data point, the gain and the loss values are set by comparing one previous data point.\n- After we set gain and loss values for every data point, then we can get a moving average of both gain and loss for a 14-period. (You don\u2019t have to apply a 14-period. Whatever works for you, you can set accordingly.)\n- After we get the average gain and the average loss value, we can divide average gain by average loss.\n- After that, we can smooth the value to normalize it between 0 and 100.\n\n### Calculating Gain and Loss\n\nFirstly, we need to define the gain and the loss value for each interval. \n\nThe gain and loss value are calculated by subtracting one previous price information from the current price information:\n\n- If the difference is positive, it means there is a price increase and the value of the gain will be the difference between current price and previous price. The value of the loss will be 0.\n- If the difference is negative, it means there is a price decline and the value of the loss will be the difference between previous price and current price. 
The value of the gain will be 0.\n\nConsider the following input data set:\n\n```js\n{\"_id\": {\"time\": ISODate(\"20210101T17:00:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35050}\n{\"_id\": {\"time\": ISODate(\"20210101T17:05:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35150}\n{\"_id\": {\"time\": ISODate(\"20210101T17:10:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35280}\n{\"_id\": {\"time\": ISODate(\"20210101T17:15:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 34910}\n```\n\nOnce we calculate the Gain and Loss, we will have the following data:\n\n```js\n{\"_id\": {\"time\": ISODate(\"20210101T17:00:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35050, \"previousPrice\": null, \"gain\":0, \"loss\":0}\n{\"_id\": {\"time\": ISODate(\"20210101T17:05:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35150, \"previousPrice\": 35050, \"gain\":100, \"loss\":0}\n{\"_id\": {\"time\": ISODate(\"20210101T17:10:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 35280, \"previousPrice\": 35150, \"gain\":130, \"loss\":0}\n{\"_id\": {\"time\": ISODate(\"20210101T17:15:00\"), \"symbol\" : \"BTC-USD\"}, \"price\": 34910, \"previousPrice\": 35280, \"gain\":0, \"loss\":370}\n```\n\nBut in the MongoDB Aggregation Pipeline, how can we refer to the previous document from the current document? How can we derive the new field (`$previousPrice`) from the previous document in the sorted window? \n\nMongoDB 5.0 introduced the `$shift` operator that includes data from another document in the same partition at the given location, e.g., you can refer to the document that is three documents before the current document or two documents after the current document in the sorted window.\n\nWe set our window with partitioning and introduce new field as previousPrice:\n\n```js\n{\n $setWindowFields: {\n partitionBy: \"$_id.symbol\",\n sortBy: { \"_id.time\": 1 },\n output: {\n previousPrice: { $shift: { by: -1, output: \"$price\" } },\n },\n },\n}\n```\n\n`$shift` takes two parameters:\n\n- `by` specifies the location of the document which we\u2019ll include. Since we want to include the previous document, then we set it to `-1`. If we wanted to include one next document, then we would set it to `1`.\n- `output` specifies the field of the document that we want to include in the current document.\n\nAfter we set the `$previousPrice` information for the current document, then we need to subtract the previous value from current value. We will have another derived field \u201c`diff`\u201d that represents the difference value between current value and previous value:\n\n```js\n{\n $addFields: {\n diff: {\n $subtract: \"$price\", { $ifNull: [\"$previousPrice\", \"$price\"] }],\n },\n },\n}\n```\n\nWe\u2019ve set the `diff` value and now we will set two more fields, `gain` and `loss,` to use in the further stages. 
We just apply the gain/loss logic here:\n\n```js\n{\n $addFields: {\n gain: { $cond: { if: { $gte: [\"$diff\", 0] }, then: \"$diff\", else: 0 } },\n loss: {\n $cond: { if: { $lte: [\"$diff\", 0] }, then: { $abs: \"$diff\" }, else: 0 },\n },\n },\n}\n```\n\nAfter we have enriched the symbol data with gain and loss information for every document, then we can apply further partitioning to get the moving average of gain and loss fields by considering the previous 14 data points:\n\n```js\n{\n $setWindowFields: {\n partitionBy: \"$_id.symbol\",\n sortBy: { \"_id.time\": 1 },\n output: {\n avgGain: {\n $avg: \"$gain\",\n window: { documents: [-14, 0] },\n },\n avgLoss: {\n $avg: \"$loss\",\n window: { documents: [-14, 0] },\n },\n documentNumber: { $documentNumber: {} },\n },\n },\n}\n```\n\nHere we also used another newly introduced operator, [`$documentNumber`. While we do calculations over the window, we give a sequential number for each document, because we will filter out the documents that have the document number less than or equal to 14. (RSI is calculated after at least 14 data points have been arrived.) We will do filtering out in the later stages. Here, we only set the number of the document.\n\nAfter we calculate the average gain and average loss for every symbol, then we will find the relative strength value. That is calculated by dividing average gain value by average loss value. Since we apply the divide operation, then we need to anticipate the \u201cdivide by 0\u201d problem as well:\n\n```js\n{\n $addFields: {\n relativeStrength: {\n $cond: {\n if: {\n $gt: \"$avgLoss\", 0],\n },\n then: {\n $divide: [\"$avgGain\", \"$avgLoss\"],\n },\n else: \"$avgGain\",\n },\n },\n },\n}\n```\n\nRelative strength value has been calculated and now it\u2019s time to smooth the Relative Strength value to normalize the data between 0 and 100:\n\n```js\n{\n $addFields: {\n rsi: {\n $cond: {\n if: { $gt: [\"$documentNumber\", 14] },\n then: {\n $subtract: [\n 100,\n { $divide: [100, { $add: [1, \"$relativeStrength\"] }] },\n ],\n },\n else: null,\n },\n },\n },\n}\n```\n\nWe basically set `null` to the first 14 documents. And for the others, RSI value has been set.\n\nBelow, you can see a one-minute interval candlestick chart and RSI chart. After 14 data points, RSI starts to be calculated. 
For every interval, we calculated the RSI through aggregation queries by processing the previous data of that symbol:\n\n![Candlestick charts\n\nThis is the complete aggregation pipeline:\n\n```js\ndb.ticker.aggregate(\n {\n $match: {\n symbol: \"BTC-USD\",\n },\n },\n {\n $group: {\n _id: {\n symbol: \"$symbol\",\n time: {\n $dateTrunc: {\n date: \"$time\",\n unit: \"minute\",\n binSize: 5,\n },\n },\n },\n high: { $max: \"$price\" },\n low: { $min: \"$price\" },\n open: { $first: \"$price\" },\n close: { $last: \"$price\" },\n },\n },\n {\n $sort: {\n \"_id.time\": 1,\n },\n },\n {\n $project: {\n _id: 1,\n price: \"$close\",\n },\n },\n {\n $setWindowFields: {\n partitionBy: \"$_id.symbol\",\n sortBy: { \"_id.time\": 1 },\n output: {\n previousPrice: { $shift: { by: -1, output: \"$price\" } },\n },\n },\n },\n {\n $addFields: {\n diff: {\n $subtract: [\"$price\", { $ifNull: [\"$previousPrice\", \"$price\"] }],\n },\n },\n },\n {\n $addFields: {\n gain: { $cond: { if: { $gte: [\"$diff\", 0] }, then: \"$diff\", else: 0 } },\n loss: {\n $cond: { if: { $lte: [\"$diff\", 0] }, then: { $abs: \"$diff\" }, else: 0 },\n },\n },\n },\n {\n $setWindowFields: {\n partitionBy: \"$_id.symbol\",\n sortBy: { \"_id.time\": 1 },\n output: {\n avgGain: {\n $avg: \"$gain\",\n window: { documents: [-14, 0] },\n },\n avgLoss: {\n $avg: \"$loss\",\n window: { documents: [-14, 0] },\n },\n documentNumber: { $documentNumber: {} },\n },\n },\n },\n {\n $addFields: {\n relativeStrength: {\n $cond: {\n if: {\n $gt: [\"$avgLoss\", 0],\n },\n then: {\n $divide: [\"$avgGain\", \"$avgLoss\"],\n },\n else: \"$avgGain\",\n },\n },\n },\n },\n {\n $addFields: {\n rsi: {\n $cond: {\n if: { $gt: [\"$documentNumber\", 14] },\n then: {\n $subtract: [\n 100,\n { $divide: [100, { $add: [1, \"$relativeStrength\"] }] },\n ],\n },\n else: null,\n },\n },\n },\n },\n]);\n```\n\n## Conclusion\n\nMongoDB Aggregation Framework provides a great toolset to transform any shape of data into a desired format. As you see in the examples, we use a wide variety of aggregation pipeline [stages and operators. As we discussed in the previous blog posts, time-series collections and window functions are great tools to process time-based data over a window.\n\nIn this post we've looked at the $shift and $documentNumber operators that have been introduced with MongoDB 5.0. The `$shift` operator includes another document in the same window into the current document to process positional data together with current data. In an RSI technical indicator calculation, it is commonly used to compare the current data point with the previous data points, and `$shift` makes it easier to refer to positional documents in a window. For example, price difference between current data point and previous data point.\n\nAnother newly introduced operator is `$documentNumber`. `$documentNumber` gives a sequential number for the sorted documents to be processed later in subsequent aggregation stages. In an RSI calculation, we need to skip calculating RSI value for the first 14 periods of data and $documentNumber helps us to identify and filter out these documents at later stages in the aggregation pipeline. 
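For completeness, that filtering step is not shown in the pipeline above, but it could be appended as one final `$match` stage. Here is a minimal sketch, assuming we simply want to drop the warm-up documents whose `rsi` was left as `null`:\n\n```js\n{\n  $match: {\n    // discard the first 14 documents of each symbol, for which no RSI value was produced\n    rsi: { $ne: null },\n  },\n}\n```\n\n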
", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript"], "pageDescription": "Time series collections part 3: calculating MACD & RSI values", "contentType": "Article"}, "title": "Currency Analysis with Time Series Collections #3 \u2014 MACD and RSI Calculation", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-kotlin-0-6-0", "action": "created", "body": "# Realm Kotlin 0.6.0.\n\n \n \nRealm Kotlin 0.6.0 \n================== \n \nWe just released v0.6.0 of Realm Kotlin. It contains support for Kotlin/JVM, indexed fields as well as a number of bug fixes. \n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n \nKotlin/JVM support \n================== \n \nThe new Realm Kotlin SDK was designed from its inception to support Multiplatform. So far, we\u2019ve been focusing on KMM targets i.e Android and iOS but there was a push from the community to add JVM support, this is now possible using 0.6.0 by enabling the following DSL into your project: \n \n``` \nkotlin { jvm() // other targets \u2026} \n\n``` \n \nNow your app can target: \n \nAndroid, iOS, macOS and JVM (Linux _since Centos 7_, macOS _x86\\_64_ and Windows _8.1 64_). \n \nWhat to build with Kotlin/JVM? \n============================== \n \n* You can build desktop applications using Compose Desktop (see examples: MultiplatformDemo and FantasyPremierLeague). \n* You can build a classic Java console application (see JVMConsole). \n* You can run your Android tests on JVM (note there\u2019s a current issue on IntelliJ where the execution of Android tests from the common source-set is not possible, see/upvote :) https://youtrack.jetbrains.com/issue/KTIJ-15152, alternatively you can still run them as a Gradle task). \n \nWhere is it installed? \n====================== \n \nThe native library dependency is extracted from the cinterop-jar and installed into a default location on your machine: \n \n* _Linux_: \n \n``` \n$HOME/.cache/io.realm.kotlin/ \n\n``` \n \n* _macOS:_ \n \n``` \n$HOME/Library/Caches/io.realm.kotlin/ \n\n``` \n \n* _Windows:_ \n \n``` \n%localappdata%\\io-realm-kotlin\\ \n\n``` \n \nSupport Indexed fields \n====================== \n \nTo index a field, use the _@Index_ annotation. Like primary keys, this makes writes slightly slower, but makes reads faster. It\u2019s best to only add indexes when you\u2019re optimizing the read performance for specific situations. \n \nAbstracted public API into interfaces \n===================================== \n \nIf you tried out the previous version, you will notice that we did an internal refactoring of the project in order to make public APIs consumable via interfaces instead of classes (ex: Realm and RealmConfiguration), this should increase decoupling and make mocking and testability easier for developers. \n \n\ud83c\udf89 Thanks for reading. Now go forth and build amazing apps with Realm! As always, we\u2019re around on GitHub, Twitter and #realm channel on the official Kotlin Slack. \n \nSee the full changelog for all the details.", "format": "md", "metadata": {"tags": ["Realm", "Kotlin"], "pageDescription": "We just released v0.6.0 of Realm Kotlin. 
It contains support for Kotlin/JVM, indexed fields as well as a number of bug fixes.", "contentType": "News & Announcements"}, "title": "Realm Kotlin 0.6.0.", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/python-quickstart-sanic", "action": "created", "body": "# Getting Started with MongoDB and Sanic\n\n \n\nSanic is a Python 3.6+ async web server and web framework that's written to go fast. The project's goal is to provide a simple way to get up and running a highly performant HTTP server that is easy to build, to expand, and ultimately to scale.\n\nUnfortunately, because of its name and dubious choices in ASCII art, Sanic wasn't seen by some as a serious framework, but it has matured. It is worth considering if you need a fast, async, Python framework.\n\nIn this quick start, we will create a CRUD (Create, Read, Update, Delete) app showing how you can integrate MongoDB with your Sanic projects.\n\n## Prerequisites\n\n- Python 3.9.0\n- A MongoDB Atlas cluster. Follow the \"Get Started with Atlas\" guide to create your account and MongoDB cluster. Keep a note of your username, password, and connection string as you will need those later.\n\n## Running the Example\n\nTo begin, you should clone the example code from GitHub.\n\n``` shell\ngit clone git@github.com:mongodb-developer/mongodb-with-sanic.git\n```\n\nYou will need to install a few dependencies: Sanic, Motor, etc. I always recommend that you install all Python dependencies in a virtualenv for the project. Before running pip, ensure your virtualenv is active.\n\n``` shell\ncd mongodb-with-sanic\npip install -r requirements.txt\n```\n\nIt may take a few moments to download and install your dependencies. This is normal, especially if you have not installed a particular package before.\n\nOnce you have installed the dependencies, you need to create an environment variable for your MongoDB connection string.\n\n``` shell\nexport MONGODB_URL=\"mongodb+srv://:@/?retryWrites=true&w=majority\"\n```\n\nRemember, anytime you start a new terminal session, you will need to set this environment variable again. I use direnv to make this process easier.\n\nThe final step is to start your Sanic server.\n\n``` shell\npython app.py\n```\n\nOnce the application has started, you can view it in your browser at . There won't be much to see at the moment as you do not have any data! We'll look at each of the end-points a little later in the tutorial, but if you would like to create some data now to test, you need to send a `POST` request with a JSON body to the local URL.\n\n``` shell\ncurl -X \"POST\" \"http://localhost:8000/\" \\\n -H 'Accept: application/json' \\\n -H 'Content-Type: application/json; charset=utf-8' \\\n -d '{\n \"name\": \"Jane Doe\",\n \"email\": \"jdoe@example.com\",\n \"gpa\": \"3.9\"\n }'\n```\n\nTry creating a few students via these `POST` requests, and then refresh your browser.\n\n## Creating the Application\n\nAll the code for the example application is within `app.py`. I'll break it down into sections and walk through what each is doing.\n\n### Setting Up Our App and MongoDB Connection\n\nWe're going to use the sanic-motor package to wrap our motor client for ease of use. 
So, we need to provide a couple of settings when creating our Sanic app.\n\n``` python\napp = Sanic(__name__)\n\nsettings = dict(\n MOTOR_URI=os.environ\"MONGODB_URL\"],\n LOGO=None,\n)\napp.config.update(settings)\n\nBaseModel.init_app(app)\n\nclass Student(BaseModel):\n __coll__ = \"students\"\n```\n\nSanic-motor's models are unlikely to be very similar to any other database models you have used before. They do not describe the schema, for example. Instead, we only specify the collection name.\n\n### Application Routes\n\nOur application has five routes:\n\n- POST / - creates a new student.\n- GET / - view a list of all students.\n- GET /{id} - view a single student.\n- PUT /{id} - update a student.\n- DELETE /{id} - delete a student.\n\n#### Create Student Route\n\n``` python\n@app.route(\"/\", methods=[\"POST\"])\nasync def create_student(request):\n student = request.json\n student[\"_id\"] = str(ObjectId())\n\n new_student = await Student.insert_one(student)\n created_student = await Student.find_one(\n {\"_id\": new_student.inserted_id}, as_raw=True\n )\n\n return json_response(created_student)\n```\n\nNote how I am converting the `ObjectId` to a string before assigning it as the `_id`. MongoDB stores data as [BSON. However, we are encoding and decoding our data as JSON strings. BSON has support for additional non-JSON-native data types, including `ObjectId`. JSON does not. Because of this, for simplicity, we convert ObjectIds to strings before storing them.\n\nThe `create_student` route receives the new student data as a JSON string in a `POST` request. Sanic will automatically convert this JSON string back into a Python dictionary which we can then pass to the sanic-motor wrapper.\n\nThe `insert_one` method response includes the `_id` of the newly created student. After we insert the student into our collection, we use the `inserted_id` to find the correct document and return it in the `json_response`.\n\nsanic-motor returns the relevant model objects from any `find` method, including `find_one`. To override this behaviour, we specify `as_raw=True`.\n\n##### Read Routes\n\nThe application has two read routes: one for viewing all students, and the other for viewing an individual student.\n\n``` python\n@app.route(\"/\", methods=\"GET\"])\nasync def list_students(request):\n students = await Student.find(as_raw=True)\n return json_response(students.objects)\n```\n\nIn our example code, we are not placing any limits on the number of students returned. In a real application, you should use sanic-motor's `page` and `per_page` arguments to paginate the number of students returned.\n\n``` python\n@app.route(\"/\", methods=[\"GET\"])\nasync def show_student(request, id):\n if (student := await Student.find_one({\"_id\": id}, as_raw=True)) is not None:\n return json_response(student)\n\n raise NotFound(f\"Student {id} not found\")\n```\n\nThe student detail route has a path parameter of `id`, which Sanic passes as an argument to the `show_student` function. We use the `id` to attempt to find the corresponding student in the database. 
The conditional in this section is using an [assignment expression, a recent addition to Python (introduced in version 3.8) and often referred to by the incredibly cute sobriquet \"walrus operator.\"\n\nIf a document with the specified `id` does not exist, we raise a `NotFound` exception which will respond to the request with a `404` response.\n\n##### Update Route\n\n``` python\n@app.route(\"/\", methods=\"PUT\"])\nasync def update_student(request, id):\n student = request.json\n update_result = await Student.update_one({\"_id\": id}, {\"$set\": student})\n\n if update_result.modified_count == 1:\n if (\n updated_student := await Student.find_one({\"_id\": id}, as_raw=True)\n ) is not None:\n return json_response(updated_student)\n\n if (\n existing_student := await Student.find_one({\"_id\": id}, as_raw=True)\n ) is not None:\n return json_response(existing_student)\n\n raise NotFound(f\"Student {id} not found\")\n```\n\nThe `update_student` route is like a combination of the `create_student` and the `show_student` routes. It receives the `id` of the document to update as well as the new data in the JSON body.\n\nWe attempt to `$set` the new values in the correct document with `update_one`, and then check to see if it correctly modified a single document. If it did, then we find that document that was just updated and return it.\n\nIf the `modified_count` is not equal to one, we still check to see if there is a document matching the `id`. A `modified_count` of zero could mean that there is no document with that `id`. It could also mean that the document does exist but it did not require updating as the current values are the same as those supplied in the `PUT` request.\n\nIt is only after that final `find` fail when we raise a `404` Not Found exception.\n\n##### Delete Route\n\n``` python\n@app.route(\"/\", methods=[\"DELETE\"])\nasync def delete_student(request, id):\n delete_result = await Student.delete_one({\"_id\": id})\n\n if delete_result.deleted_count == 1:\n return json_response({}, status=204)\n\n raise NotFound(f\"Student {id} not found\")\n```\n\nOur final route is `delete_student`. Again, because this is acting upon a single document, we have to supply an `id` in the URL. If we find a matching document and successfully delete it, then we return an HTTP status of `204` or \"No Content.\" In this case, we do not return a document as we've already deleted it! However, if we cannot find a student with the specified id, then instead, we return a `404`.\n\n## Wrapping Up\n\nI hope you have found this introduction to Sanic with MongoDB useful. If you would like to find out [more about Sanic, please see their documentation. Unfortunately, documentation for sanic-motor is entirely lacking at this time. 
But, it is a relatively thin wrapper around the MongoDB Motor driver\u2014which is well documented\u2014so do not let that discourage you.\n\nTo see how you can integrate MongoDB with other async frameworks, check out some of the other Python posts on the MongoDB developer portal.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Python", "MongoDB"], "pageDescription": "Getting started with MongoDB and Sanic", "contentType": "Quickstart"}, "title": "Getting Started with MongoDB and Sanic", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/connect-atlas-cloud-kubernetes-peering", "action": "created", "body": "# Securely Connect MongoDB to Cloud-Offered Kubernetes Clusters\n\n## Introduction\n\nContainerized applications are becoming an industry standard for virtualization. When we talk about managing those containers, Kubernetes will probably be brought up extremely quickly.\n\nKubernetes is a known open-source system for automating the deployment, scaling, and management of containerized applications. Nowadays, all of the major cloud providers (AWS, Google Cloud, and Azure) have a managed Kubernetes offering to easily allow organizations to get started and scale their Kubernetes environments.\n\nNot surprisingly, MongoDB Atlas also runs on all of those offerings to give your modern containerized applications the best database offering. However, ease of development might yield in missing some critical aspects, such as security and connectivity control to our cloud services.\n\nIn this article, I will guide you on how to properly secure your Kubernetes cloud services when connecting to MongoDB Atlas using the recommended and robust solutions we have.\n\n## Prerequisites\n\nYou will need to have a cloud provider account and the ability to deploy one of the Kubernetes offerings:\n\n* Amazon EKS\n* Google Cloud GKE\n* Azure AKS\n\nAnd of course, you'll need a MongoDB Atlas project where you are a project owner.\n\n> If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post. Please note that for this tutorial you are required to have a M10+ cluster.\n\n## Step 1: Set Up Networks\n\nAtlas connections, by default, use credentials and end-to-end encryption to secure the connection. However, building a trusted network is a must for closing the security cycle between your application and the database.\n\nNo matter what cloud of choice you decide to build your Kubernetes cluster in, the basic foundation of securing that deployment is creating its own network. You can look into the following guides to create your own network and gather the main information (Names, Ids, and subnet Classless Inter-Domain Routing \\- CIDR\\)\\.\n\n##### Private Network Creation\n\n| AWS | GCP | Azure |\n| --- | --- | ----- |\n| Create an AWS VPC | Create a GCP VPC | Create a VNET |\n\n## Step 2: Create Network Peerings\n\nNow, we'll configure connectivity of the virtual network that the Atlas region resides in to the virtual network we've created in Step 1. This connectivity is required to make sure the communication between networks is possible. We'll configure Atlas to allow connections from the virtual network from Step 1.\n\nThis process is called setting a Network Peering Connection. 
It's significant as it allows internal communication between networks of two different accounts (the Atlas cloud account and your cloud account).\nThe network peerings are established under our Projects > Network Access > Peering > \"ADD PEERING CONNECTION.\" For more information, please read our documentation.\n\nHowever, I will highlight the main points in each cloud for a successful peering setup:\n\n##### Private Network Creation\n\nAWSGCPAzure\n 1. Allow outbound traffic to Atlas CIDR on 2015-27017.\n 2. Obtain VPC information (Account ID, VPC Name, VPC Region, VPC CIDR). Enable DNS and Hostname resolution on that VPC.\n 3. Using this information, initiate the VPC Peering.\n 4. Approve the peering on AWS side.\n 5. Add peering route in the relevant subnet/s targeting Atlas CIDR and add those subnets/security groups in the Atlas access list page.\n\n 1. Obtain GCP VPC information (Project ID, VPC Name, VPC Region, and CIDR).\n 2. When you initiate a VPC peering on Atlas side, it will generate information you need to input on GCP VPC network peering page (Atlas Project ID and Atlas VPC Name).\n 3. Submit the peering request approval on GCP and add the GCP CIDR in Atlas access lists.\n\n 1. Obtain the following azure details from your subscription (Subscription ID, Azure Active Directory Directory ID, VNET Resource Group Name, VNet Name, VNet Region).\n 2. Input the gathered information and get a list of commands to perform on Azure console.\n 3. Open Azure console and run the commands, which will create a custom role and permissions for peering.\n 4. Validate and initiate peering.\n\n## Step 3: Deploy the Kubernetes Cluster in Our Networks\n\nThe Kubernetes clusters that we launch must be associated with the\npeered network. I will highlight each cloud provider's specifics.\n\n## AWS EKS\n\nWhen we launch our EKS via the AWS console service, we need to configure\nthe peered VPC under the \"Networking\" tab.\n\nPlace the correct settings:\n\n* VPC Name\n* Relevant Subnets (Recommended to pick at least three availability\nzones)\n* Choose a security group with open 27015-27017 ports to the Atlas\nCIDR.\n* Optionally, you can add an IP range for your pods.\n\n## GCP GKE\n\nWhen we launch our GKE service, we need to configure the peered VPC under the \"Networking\" section.\n\nPlace the correct settings:\n\n* VPC Name\n* Subnet Name\n* Optionally, you can add an IP range for your pod's internal network that cannot overlap with the peered CIDR.\n\n## Azure AKS\n\nWhen we lunch our AKS service, we need to use the same resource group as the peered VNET and configure the peered VNET as the CNI network in the advanced Networking tab.\n\nPlace the correct settings:\n\n* Resource Group\n* VNET Name under \"Virtual Network\"\n* Cluster Subnet should be the peered subnet range.\n* The other CIDR should be a non-overlapping CIDR from the peered network.\n\n## Step 4: Deploy Containers and Test Connectivity\n\nOnce the cluster is up and running in your cloud provider, you can test the connectivity to our peered cluster.\n\nFirst, we will need to get our connection string and method from the Atlas cluster UI. Please note that GCP and Azure have private connection strings for peering, and those must be used for peered networks.\n\nNow, let's test our connection from one of the Kubernetes pods:\nThat's it. We are securely connected!\n\n## Wrap-Up\n\nKubernetes-managed clusters offer a simple and modern way to deploy containerized applications to the vendor of your choice. 
It's great that we can easily secure their connections to work with the best cloud database offering there is, MongoDB Atlas, unlocking other possibilities such as building cross-platform application with MongoDB Realm and Realm Sync or using MongoDB Data Lake and Atlas Search to build incredible applications.\n\n> If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Atlas", "Kubernetes", "Google Cloud"], "pageDescription": "A high-level guide on how to securely connect MongoDB Atlas with the Kubernetes offerings from Amazon AWS, Google Cloud (GCP), and Microsoft Azure.", "contentType": "Tutorial"}, "title": "Securely Connect MongoDB to Cloud-Offered Kubernetes Clusters", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/introduction-realm-sync-android", "action": "created", "body": "# Introduction to Atlas Device Sync for Android\n\n* * *\n> Atlas App Services (Formerly MongoDB Realm )\n> \n> Atlas Device Sync (Formerly Realm Sync)\n> \n* * *\nWelcome back! We really appreciate you coming back and showing your interest in Atlas App Services. This is a follow-up article to Introduction to Realm Java SDK for Android. If you haven't read that yet, we recommend you go through it first.\n\nThis is a beginner-level article, where we introduce you to Atlas Device Sync. As always, we demonstrate its usage by building an Android app using the MVVM architecture.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n\n## Prerequisites\n\n>\n>\n>You have created at least one app using Android Studio.\n>\n>\n\n## What Are We Trying to Solve?\n\nIn the previous article, we learned that the Realm Java SDK is easy to use when working with a local database. But in the world of the internet we want to share our data, so how do we do that with Realm?\n\n>\n>\n>**MongoDB Atlas Device Sync**\n>\n>Atlas Device Sync is the solution to our problem. It's one of the many features provided by MongoDB Atlas App Services. 
It synchronizes the data between client-side Realms and the server-side cloud, MongoDB Atlas, without worrying about conflict resolution and error handling.\n>\n>\n\nThe illustration below demonstrates how MongoDB Atlas Device Sync has simplified the complex architecture:\n\nTo demonstrate how to use Atlas Device Sync, we will extend our previous application, which tracks app views, to use Atlas Device Sync.\n\n## Step 1: Get the Base Code\n\nClone the original repo and rename it \"HelloDeviceSync.\"\n\n## Step 2: Enable Atlas Device Sync\n\nUpdate the `syncEnabled` state as shown below in the Gradle file (at the module level):\n\n``` kotlin\nandroid {\n// few other things\n\n realm {\n syncEnabled = true\n }\n}\n```\n\nAlso, add the `buildConfigField` to `buildTypes` in the same file:\n\n``` kotlin\nbuildTypes {\n\n debug {\n buildConfigField \"String\", \"RealmAppId\", \"\\\"App Key\\\"\"\n }\n\n release {\n buildConfigField \"String\", \"RealmAppId\", \"\\\"App Key\\\"\"\n }\n}\n```\n\nYou can ignore the value of `App Key` for now, as it will be covered in a later step.\n\n## Step 3: Set Up Your Free MongoDB Atlas Cloud Database\n\nOnce this is done, we have a cloud database where all our mobile app data can be saved, i.e., MongoDB Atlas. Now we are left with linking our cloud database (in Atlas) with the mobile app.\n\n## Step 4: Create a App Services App\n\nIn layman's terms, App Services apps on MongoDB Atlas are just links between the data flowing between the mobile apps (Realm Java SDK) and Atlas.\n\n## Step 5: Add the App Services App ID to the Android Project\n\nCopy the App ID and use it to replace `App Key` in the `build.gradle` file, which we added in **Step 2**.\n\nWith this done, MongoDB Atlas and your Android App are connected.\n\n## Step 6: Enable Atlas Device Sync and Authentication\n\nMongoDB Atlas App Services is a very powerful tool and has a bunch of cool features from data security to its manipulation. This is more than sufficient for one application. Let's enable authentication and sync.\n\n### But Why Authentication?\n\nDevice Sync is designed to make apps secure by default, by not allowing an unknown user to access data.\n\nWe don't have to force a user to sign up for them to become a known user. We can enable anonymous authentication, which is a win-win for everyone.\n\nSo let's enable both of them:\n\nLet's quickly recap what we have done so far.\n\nIn the Android app:\n- Added App Services App ID to the Gradle file.\n- Enabled Atlas Device Sync.\n\nIn MongoDB Atlas:\n- Set up account.\n- Created a free cluster for MongoDB Atlas.\n- Created a App Services app.\n- Enabled anonymous authentication.\n- Enabled sync.\n\nNow, the final piece is to make the necessary modifications to our Android app.\n\n## Step 7: Update the Android App Code\n\nThe only code change is to get an instance of the Realm mobile database from the App Services app instance.\n\n1. Get a App Services app instance from which the Realm instance can be derived:\n\n ``` kotlin\n val realmSync by lazy {\n App(AppConfiguration.Builder(BuildConfig.RealmAppId).build())\n }\n ```\n\n2. 
Update the creation of the View Model:\n\n ``` kotlin\n private val homeViewModel: HomeViewModel by navGraphViewModels(\n R.id.mobile_navigation,\n factoryProducer = {\n object : ViewModelProvider.Factory {\n @Suppress(\"UNCHECKED_CAST\")\n override fun create(modelClass: Class): T {\n val realmApp = (requireActivity().application as HelloRealmSyncApp).realmSync\n return HomeViewModel(realmApp) as T\n }\n }\n })\n ```\n\n3. Update the View Model constructor to accept the App Services app instance:\n\n ``` kotlin\n class HomeViewModel(private val realmApp: App) : ViewModel() {\n\n }\n ```\n\n4. Update the `updateData` method in `HomeViewModel`:\n\n ``` kotlin\n private fun updateData() {\n _isLoading.postValue(true)\n\n fun onUserSuccess(user: User) {\n val config = SyncConfiguration.Builder(user, user.id).build()\n\n Realm.getInstanceAsync(config, object : Realm.Callback() {\n override fun onSuccess(realm: Realm) {\n realm.executeTransactionAsync {\n var visitInfo = it.where(VisitInfo::class.java).findFirst()\n visitInfo = visitInfo?.updateCount() ?: VisitInfo().apply {\n partition = user.id\n visitCount++\n }\n _visitInfo.postValue(it.copyFromRealm(visitInfo))\n it.copyToRealmOrUpdate(visitInfo)\n _isLoading.postValue(false)\n }\n }\n\n override fun onError(exception: Throwable) {\n super.onError(exception)\n //TODO: Implementation pending\n _isLoading.postValue(false)\n }\n })\n }\n\n realmApp.loginAsync(Credentials.anonymous()) {\n if (it.isSuccess) {\n onUserSuccess(it.get())\n } else {\n _isLoading.postValue(false)\n }\n }\n }\n ```\n\nIn the above snippet, we are doing two primary things:\n\n1. Getting a user instance by signing in anonymously.\n2. Getting a Realm instance using `SyncConfiguration.Builder`.\n\n``` kotlin\nSyncConfiguration.Builder(user, user.id).build()\n```\n\nWhere `user.id` is the partition key we defined in our Atlas Device Sync configuration (Step 6). In simple terms, partition key is an identifier that helps you to get the exact data as per client needs. For more details, please refer to the article on Atlas Device Sync Partitioning Strategies.\n\n## Step 8: View Your Results in MongoDB Atlas\n\nThank you for reading. You can find the complete working code in our GitHub repo.\n\n>\n>\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n>\n>\n", "format": "md", "metadata": {"tags": ["Realm", "Kotlin", "Android"], "pageDescription": "Learn how to use Atlas Device Sync with Android.", "contentType": "News & Announcements"}, "title": "Introduction to Atlas Device Sync for Android", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/mongodb-network-compression", "action": "created", "body": "# MongoDB Network Compression: A Win-Win\n\n# MongoDB Network Compression: A Win-Win\n\nAn under-advertised feature of MongoDB is its ability to compress data between the client and the server. The CRM company Close has a really nice article on how compression reduced their network traffic from about 140 Mbps to 65 Mpbs. As Close notes, with cloud data transfer costs ranging from $0.01 per GB and up, you can get a nice little savings with a simple configuration change. 
\n\nMongoDB supports the following compressors:\n\n* snappy\n* zlib (Available starting in MongoDB 3.6)\n* zstd (Available starting in MongoDB 4.2)\n\nEnabling compression from the client simply involves installing the desired compression library and then passing the compressor as an argument when you connect to MongoDB. For example:\n\n```PYTHON\nclient = MongoClient('mongodb://localhost', compressors='zstd')\n```\n\nThis article provides two tuneable Python scripts, read-from-mongo.py and write-to-mongo.py, that you can use to see the impact of network compression yourself. \n\n## Setup\n\n### Client Configuration\n\nEdit params.py and at a minimum, set your connection string. Other tunables include the amount of bytes to read and insert (default 10 MB) and the batch size to read (100 records) and insert (1 MB):\n\n``` PYTHON\n# Read to Mongo\ntarget_read_database = 'sample_airbnb'\ntarget_read_collection = 'listingsAndReviews'\nmegabytes_to_read = 10\nbatch_size = 100 # Batch size in records (for reads)\n\n# Write to Mongo\ndrop_collection = True # Drop collection on run\ntarget_write_database = 'test'\ntarget_write_collection = 'network-compression-test'\nmegabytes_to_insert = 10\nbatch_size_mb = 1 # Batch size of bulk insert in megabytes\n```\n### Compression Library\nThe snappy compression in Python requires the `python-snappy` package.\n\n```pip3 install python-snappy```\n\nThe zstd compression requires the zstandard package\n\n```pip3 install zstandard```\n\nThe zlib compression is native to Python.\n\n### Sample Data\nMy read-from-mongo.py script uses the Sample AirBnB Listings Dataset but ANY dataset will suffice for this test. \n\nThe write-to-mongo.py script generates sample data using the Python package \nFaker.\n\n```pip3 install faker ```\n\n## Execution\n### Read from Mongo\nThe cloud providers notably charge for data egress, so anything that reduces network traffic out is a win. \n\nLet's first run the script without network compression (the default):\n\n```ZSH\n\u2717 python3 read-from-mongo.py\n\nMongoDB Network Compression Test\nNetwork Compression: Off\nNow: 2021-11-03 12:24:00.904843\n\nCollection to read from: sample_airbnb.listingsAndReviews\nBytes to read: 10 MB\nBulk read size: 100 records\n\n1 megabytes read at 307.7 kilobytes/second\n2 megabytes read at 317.6 kilobytes/second\n3 megabytes read at 323.5 kilobytes/second\n4 megabytes read at 318.0 kilobytes/second\n5 megabytes read at 327.1 kilobytes/second\n6 megabytes read at 325.3 kilobytes/second\n7 megabytes read at 326.0 kilobytes/second\n8 megabytes read at 324.0 kilobytes/second\n9 megabytes read at 322.7 kilobytes/second\n10 megabytes read at 321.0 kilobytes/second\n\n 8600 records read in 31 seconds (276.0 records/second)\n\n MongoDB Server Reported Megabytes Out: 188.278 MB\n ```\n\n_You've obviously noticed the reported Megabytes out (188 MB) are more than 18 times our test size of 10 MBs. There are several reasons for this, including other workloads running on the server, data replication to secondary nodes, and the TCP packet being larger than just the data. Focus on the delta between the other tests runs._\n\nThe script accepts an optional compression argument, that must be either `snappy`, `zlib` or `zstd`. 
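Under the hood, all that option needs to do is flow through to the `compressors` parameter of `MongoClient`. The actual script lives in the linked repo; the snippet below is only a minimal, hypothetical sketch of that wiring (the argument parsing is illustrative):\n\n```PYTHON\nimport argparse\n\nfrom pymongo import MongoClient\n\nparser = argparse.ArgumentParser()\nparser.add_argument('-c', '--compression', choices=['snappy', 'zlib', 'zstd'],\n                    help='optional network compression to use')\nargs = parser.parse_args()\n\n# Only pass compressors when one was requested; otherwise the connection is uncompressed.\nif args.compression:\n    client = MongoClient('mongodb://localhost', compressors=args.compression)\nelse:\n    client = MongoClient('mongodb://localhost')\n```\n\n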
Let's run the test again using `snappy`, which is known to be fast, while sacrificing some compression:\n\n```ZSH\n\u2717 python3 read-from-mongo.py -c \"snappy\"\n\nMongoDB Network Compression Test\nNetwork Compression: snappy\nNow: 2021-11-03 12:24:41.602969\n\nCollection to read from: sample_airbnb.listingsAndReviews\nBytes to read: 10 MB\nBulk read size: 100 records\n\n1 megabytes read at 500.8 kilobytes/second\n2 megabytes read at 493.8 kilobytes/second\n3 megabytes read at 486.7 kilobytes/second\n4 megabytes read at 480.7 kilobytes/second\n5 megabytes read at 480.1 kilobytes/second\n6 megabytes read at 477.6 kilobytes/second\n7 megabytes read at 488.4 kilobytes/second\n8 megabytes read at 482.3 kilobytes/second\n9 megabytes read at 482.4 kilobytes/second\n10 megabytes read at 477.6 kilobytes/second\n\n 8600 records read in 21 seconds (410.7 records/second)\n\n MongoDB Server Reported Megabytes Out: 126.55 MB\n```\nWith `snappy` compression, our reported bytes out were about `62 MBs` fewer. That's a `33%` savings. But wait, the `10 MBs` of data was read in `10` fewer seconds. That's also a `33%` performance boost!\n\nLet's try this again using `zlib`, which can achieve better compression, but at the expense of performance. \n\n_zlib compression supports an optional compression level. For this test I've set it to `9` (max compression)._\n\n```ZSH\n\u2717 python3 read-from-mongo.py -c \"zlib\"\n\nMongoDB Network Compression Test\nNetwork Compression: zlib\nNow: 2021-11-03 12:25:07.493369\n\nCollection to read from: sample_airbnb.listingsAndReviews\nBytes to read: 10 MB\nBulk read size: 100 records\n\n1 megabytes read at 362.0 kilobytes/second\n2 megabytes read at 373.4 kilobytes/second\n3 megabytes read at 394.8 kilobytes/second\n4 megabytes read at 393.3 kilobytes/second\n5 megabytes read at 398.1 kilobytes/second\n6 megabytes read at 397.4 kilobytes/second\n7 megabytes read at 402.9 kilobytes/second\n8 megabytes read at 397.7 kilobytes/second\n9 megabytes read at 402.7 kilobytes/second\n10 megabytes read at 401.6 kilobytes/second\n\n 8600 records read in 25 seconds (345.4 records/second)\n\n MongoDB Server Reported Megabytes Out: 67.705 MB\n ```\n With `zlib` compression configured at its maximum compression level, we were able to achieve a `64%` reduction in network egress, although it took 4 seconds longer. 
However, that's still a `19%` performance improvement over using no compression at all.\n\n Let's run a final test using `zstd`, which is advertised to bring together the speed of `snappy` with the compression efficiency of `zlib`:\n\n ```ZSH\n \u2717 python3 read-from-mongo.py -c \"zstd\"\n\nMongoDB Network Compression Test\nNetwork Compression: zstd\nNow: 2021-11-03 12:25:40.075553\n\nCollection to read from: sample_airbnb.listingsAndReviews\nBytes to read: 10 MB\nBulk read size: 100 records\n\n1 megabytes read at 886.1 kilobytes/second\n2 megabytes read at 798.1 kilobytes/second\n3 megabytes read at 772.2 kilobytes/second\n4 megabytes read at 735.7 kilobytes/second\n5 megabytes read at 734.4 kilobytes/second\n6 megabytes read at 714.8 kilobytes/second\n7 megabytes read at 709.4 kilobytes/second\n8 megabytes read at 698.5 kilobytes/second\n9 megabytes read at 701.9 kilobytes/second\n10 megabytes read at 693.9 kilobytes/second\n\n 8600 records read in 14 seconds (596.6 records/second)\n\n MongoDB Server Reported Megabytes Out: 61.254 MB\n ```\nAnd sure enough, `zstd` lives up to its reputation, achieving `68%` percent improvement in compression along with a `55%` improvement in performance!\n\n### Write to Mongo\n\nThe cloud providers often don't charge us for data ingress. However, given the substantial performance improvements with read workloads, what can be expected from write workloads?\n\nThe write-to-mongo.py script writes a randomly generated document to the database and collection configured in params.py, the default being `test.network_compression_test`.\n\nAs before, let's run the test without compression:\n\n```ZSH\npython3 write-to-mongo.py\n\nMongoDB Network Compression Test\nNetwork Compression: Off\nNow: 2021-11-03 12:47:03.658036\n\nBytes to insert: 10 MB\nBulk insert batch size: 1 MB\n\n1 megabytes inserted at 614.3 kilobytes/second\n2 megabytes inserted at 639.3 kilobytes/second\n3 megabytes inserted at 652.0 kilobytes/second\n4 megabytes inserted at 631.0 kilobytes/second\n5 megabytes inserted at 640.4 kilobytes/second\n6 megabytes inserted at 645.3 kilobytes/second\n7 megabytes inserted at 649.9 kilobytes/second\n8 megabytes inserted at 652.7 kilobytes/second\n9 megabytes inserted at 654.9 kilobytes/second\n10 megabytes inserted at 657.2 kilobytes/second\n\n 27778 records inserted in 15.0 seconds\n\n MongoDB Server Reported Megabytes In: 21.647 MB\n```\n\nSo it took `15` seconds to write `27,778` records. Let's run the same test with `zstd` compression:\n\n```ZSH\n\u2717 python3 write-to-mongo.py -c 'zstd'\n\nMongoDB Network Compression Test\nNetwork Compression: zstd\nNow: 2021-11-03 12:48:16.485174\n\nBytes to insert: 10 MB\nBulk insert batch size: 1 MB\n\n1 megabytes inserted at 599.4 kilobytes/second\n2 megabytes inserted at 645.4 kilobytes/second\n3 megabytes inserted at 645.8 kilobytes/second\n4 megabytes inserted at 660.1 kilobytes/second\n5 megabytes inserted at 669.5 kilobytes/second\n6 megabytes inserted at 665.3 kilobytes/second\n7 megabytes inserted at 671.0 kilobytes/second\n8 megabytes inserted at 675.2 kilobytes/second\n9 megabytes inserted at 675.8 kilobytes/second\n10 megabytes inserted at 676.7 kilobytes/second\n\n 27778 records inserted in 15.0 seconds\n\n MongoDB Server Reported Megabytes In: 8.179 MB\n ```\nOur reported megabytes in are reduced by `62%`. However, our write performance remained identical. Personally, I think most of this is due to the time it takes the Faker library to generate the sample data. 
But having gained compression without a performance impact it is still a win.\n## Measurement\n\nThere are a couple of options for measuring network traffic. This script is using the db.serverStatus() `physicalBytesOut` and `physicalBytesIn`, reporting on the delta between the reading at the start and end of the test run. As mentioned previously, our measurements are corrupted by other network traffic occuring on the server, but my tests have shown a consistent improvement when run. Visually, my results achieved appear as follows:\n\nAnother option would be using a network analysis tool like Wireshark. But that's beyond the scope of this article for now.\n\nBottom line, compression reduces network traffic by more than 60%, which is in line with the improvement seen by Close. More importantly, compression also had a dramatic improvement on read performance. That's a Win-Win.\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "An under advertised feature of MongoDB is its ability to compress data between the client and the server. This blog will show you exactly how to enable network compression along with a script you can run to see concrete results. Not only will you save some $, but your performance will also likely improve - a true win-win.\n", "contentType": "Tutorial"}, "title": "MongoDB Network Compression: A Win-Win", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-swiftui-maps-location", "action": "created", "body": "# Using Maps and Location Data in Your SwiftUI (+Realm) App\n\n## Introduction\nEmbedding Apple Maps and location functionality in SwiftUI apps used to be a bit of a pain. It required writing your own SwiftUI wrapper around UIKit code\u2014see these examples from the O-FISH app:\n\n* Location helper\n* Map views\n\nIf you only need to support iOS14 and later, then you can **forget most of that messy code \ud83d\ude0a**. If you need to support iOS13\u2014sorry, you need to go the O-FISH route!\n\niOS14 introduced the Map SwiftUI view (part of Mapkit) allowing you to embed maps directly into your SwiftUI apps without messy wrapper code.\n\nThis article shows you how to embed Apple Maps into your app views using Mapkit's Map view. We'll then look at how you can fetch the user's current location\u2014with their permission, of course!\n\nFinally, we'll see how to store the location data in Realm in a format that lets MongoDB Atlas Device Sync it to MongoDB Atlas. Once in Atlas, you can add a geospatial index and use MongoDB Charts to plot the data on a map\u2014we'll look at that too.\n\nMost of the code snippets have been extracted from the RChat app. That app is a good place to see maps and location data in action. 
Building a Mobile Chat App Using Realm \u2013 The New and Easier Way is a good place to learn more about the RChat app\u2014including how to enable MongoDB Atlas Device Sync.\n\n## Prerequisites\n\n* Realm-Cocoa 10.8.0+ (may work with some 10.7.X versions)\n* iOS 14.5+ (Mapkit was introduced in iOS 14.0 and so most features should work with earlier iOS 14.X versions)\n* XCode12+\n\n## How to Add an Apple Map to Your SwiftUI App\n\nTo begin, let's create a simple view that displays a map, the coordinates of the center of that map, and the zoom level:\n\nWith Mapkit and SwiftUI, this only takes a few lines of code:\n\n``` swift\nimport MapKit\nimport SwiftUI\n\nstruct MyMapView: View {\n @State private var region: MKCoordinateRegion = MKCoordinateRegion(\n center: CLLocationCoordinate2D(latitude: MapDefaults.latitude, longitude: MapDefaults.longitude),\n span: MKCoordinateSpan(latitudeDelta: MapDefaults.zoom, longitudeDelta: MapDefaults.zoom))\n \n private enum MapDefaults {\n static let latitude = 45.872\n static let longitude = -1.248\n static let zoom = 0.5\n }\n\n var body: some View {\n VStack {\n Text(\"lat: \\(region.center.latitude), long: \\(region.center.longitude). Zoom: \\(region.span.latitudeDelta)\")\n .font(.caption)\n .padding()\n Map(coordinateRegion: $region,\n interactionModes: .all,\n showsUserLocation: true)\n }\n }\n}\n```\n\nNote that `showsUserLocation` won't work unless the user has already given the app permission to use their location\u2014we'll get to that.\n\n`region` is initialized to a starting location, but it's updated by the `Map` view as the user scrolls and zooms in and out.\n\n### Adding Bells and Whistles to Your Maps (Pins at Least)\n\nPins can be added to a map in the form of \"annotations.\" Let's start with a single pin:\n\nAnnotations are provided as an array of structs where each instance must contain the coordinates of the pin. The struct must also conform to the Identifiable protocol:\n\n``` swift\nstruct MyAnnotationItem: Identifiable {\n var coordinate: CLLocationCoordinate2D\n let id = UUID()\n}\n```\n\nWe can now create an array of `MyAnnotationItem` structs:\n\n``` swift\nlet annotationItems = \n MyAnnotationItem(coordinate: CLLocationCoordinate2D(\n latitude: MapDefaults.latitude,\n longitude: MapDefaults.longitude))]\n```\n\nWe then pass `annotationItems` to the `MapView` and indicate that we want a `MapMarker` at the contained coordinates:\n\n``` swift\nMap(coordinateRegion: $region,\n interactionModes: .all,\n showsUserLocation: true,\n annotationItems: annotationItems) { item in\n MapMarker(coordinate: item.coordinate)\n }\n```\n\nThat gives us the result we wanted.\n\nWhat if we want multiple pins? Not a problem. Just add more `MyAnnotationItem` instances to the array.\n\nAll of the pins will be the same default color. But, what if we want different colored pins? It's simple to extend our code to produce this:\n\n![Embedded Apple Map showing red, yellow, and plue pins at different locations\n\nFirstly, we need to extend `MyAnnotationItem` to include an optional `color` and a `tint` that returns `color` if it's been defined and \"red\" if not:\n\n``` swift\nstruct MyAnnotationItem: Identifiable {\n var coordinate: CLLocationCoordinate2D\n var color: Color?\n var tint: Color { color ?? 
.red }\n let id = UUID()\n}\n```\n\nIn our sample data, we can now choose to provide a color for each annotation:\n\n``` swift\nlet annotationItems = \n MyAnnotationItem(\n coordinate: CLLocationCoordinate2D(\n latitude: MapDefaults.latitude,\n longitude: MapDefaults.longitude)),\n MyAnnotationItem(\n coordinate: CLLocationCoordinate2D(\n latitude: 45.8827419,\n longitude: -1.1932383),\n color: .yellow),\n MyAnnotationItem(\n coordinate: CLLocationCoordinate2D(\n latitude: 45.915737,\n longitude: -1.3300991),\n color: .blue)\n]\n```\n\nThe `MapView` can then use the `tint`:\n\n``` swift\nMap(coordinateRegion: $region,\n interactionModes: .all,\n showsUserLocation: true,\n annotationItems: annotationItems) { item in\n MapMarker(\n coordinate: item.coordinate,\n tint: item.tint)\n}\n```\n\nIf you get bored of pins, you can use `MapAnnotation` to use any view you like for your annotations:\n\n``` swift\nMap(coordinateRegion: $region,\n interactionModes: .all,\n showsUserLocation: true,\n annotationItems: annotationItems) { item in\n MapAnnotation(coordinate: item.coordinate) {\n Image(systemName: \"gamecontroller.fill\")\n .foregroundColor(item.tint)\n }\n}\n```\n\nThis is the result:\n\n![Apple Map showing red, yellow and blue game controller icons at different locations on the map\n\nYou could also include the name of the system image to use with each annotation.\n\nThis gist contains the final code for the view.\n\n## Finding Your User's Location\n\n### Asking for Permission\n\nApple is pretty vocal about respecting the privacy of their users, and so it shouldn't be a shock that your app will have to request permission before being able to access a user's location.\n\nThe first step is to add a key-value pair to your Xcode project to indicate that the app may request permission to access the user's location, and what text should be displayed in the alert. You can add the pair to the \"Info.plist\" file:\n\n```\nPrivacy - Location When In Use Usage Description : We'll only use your location when you ask to include it in a message\n```\n\nOnce that setting has been added, the user should see an alert the first time that the app attempts to access their current location:\n\n### Accessing Current Location\n\nWhile Mapkit has made maps simple and native in SwiftUI, the same can't be said for location data.\n\nYou need to create a SwiftUI wrapper for Apple's Core Location functionality. 
There's not a lot of value in explaining this boilerplate code\u2014just copy this code from RChat's LocationHelper.swift file, and paste it into your app:\n\n``` swift\nimport CoreLocation\n\nclass LocationHelper: NSObject, ObservableObject {\n\n    static let shared = LocationHelper()\n    static let DefaultLocation = CLLocationCoordinate2D(latitude: 45.8827419, longitude: -1.1932383)\n\n    static var currentLocation: CLLocationCoordinate2D {\n        guard let location = shared.locationManager.location else {\n            return DefaultLocation\n        }\n        return location.coordinate\n    }\n\n    private let locationManager = CLLocationManager()\n\n    private override init() {\n        super.init()\n        locationManager.delegate = self\n        locationManager.desiredAccuracy = kCLLocationAccuracyBest\n        locationManager.requestWhenInUseAuthorization()\n        locationManager.startUpdatingLocation()\n    }\n}\n\nextension LocationHelper: CLLocationManagerDelegate {\n    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) { }\n\n    public func locationManager(_ manager: CLLocationManager, didFailWithError error: Error) {\n        print(\"Location manager failed with error: \\(error.localizedDescription)\")\n    }\n\n    public func locationManager(_ manager: CLLocationManager, didChangeAuthorization status: CLAuthorizationStatus) {\n        print(\"Location manager changed the status: \\(status)\")\n    }\n}\n```\n\nOnce added, you can access the user's location with this simple call:\n\n``` swift\nlet location = LocationHelper.currentLocation\n```\n\n### Store Location Data in Your Realm Database\n\n#### The Location Format Expected by MongoDB\n\nRealm doesn't have a native type for a geographic location, and so it's up to us how we choose to store it in a Realm Object. That is, unless we want to synchronize the data to MongoDB Atlas using Device Sync, and go on to use MongoDB's geospatial functionality.\n\nTo make the best use of the location data in Atlas, we need to add a geospatial index to the field (which we\u2019ll see how to do soon). That means storing the location in a supported format. Not all options will work with Atlas Device Sync (e.g., it's not guaranteed that attributes will appear in the same order in your Realm Object and the synced Atlas document). 
The most robust approach is to use an array where the first element is longitude and the second is latitude:\n\n``` json\nlocation: [<longitude>, <latitude>]\n```\n\n#### Your Realm Object\n\nThe RChat app gives users the option to include their location in a chat message\u2014this means that we need to include the location in the ChatMessage Object:\n\n``` swift\nclass ChatMessage: Object, ObjectKeyIdentifiable {\n    \u2026\n    @Persisted let location = List<Double>()\n    \u2026\n    convenience init(author: String, text: String, image: Photo?, location: [Double] = []) {\n        ...\n        location.forEach { coord in\n            self.location.append(coord)\n        }\n        ...\n    }\n    \u2026\n}\n```\n\nThe `location` array that's passed to that initializer is formed like this:\n\n``` swift\nlet location = LocationHelper.currentLocation\nself.location = [location.longitude, location.latitude]\n```\n\n## Location Data in Your Backend MongoDB Atlas Application Services App\n\nThe easiest way to create your backend MongoDB Atlas Application Services schema is to enable Development Mode\u2014that way, the schema is automatically generated from your Swift Realm Objects.\n\nThis is the generated schema for our \"ChatMessage\" collection:\n\n``` json\n{\n    \"bsonType\": \"object\",\n    \"properties\": {\n        \"_id\": {\n            \"bsonType\": \"string\"\n        },\n        ...\n        \"location\": {\n            \"bsonType\": \"array\",\n            \"items\": {\n                \"bsonType\": \"double\"\n            }\n        }\n    },\n    \"required\": [\n        \"_id\",\n        ...\n    ],\n    \"title\": \"ChatMessage\"\n}\n```\n\nThis is a document that's been created from a synced Realm `ChatMessage` object:\n\n![Screen capture of an Atlas document, which includes an array named location\n\n### Adding a Geospatial Index in Atlas\n\nNow that you have location data stored in Atlas, it would be nice to be able to work with it\u2014e.g., running geospatial queries. To enable this, you need to add a geospatial index to the `location` field.\n\nFrom the Atlas UI, select the \"Indexes\" tab for your collection and click \"CREATE INDEX\":\n\nYou should then configure a `2dsphere` index:\n\nMost chat messages won't include the user's location, and so I set the `sparse` option for efficiency.\n\nNote that you'll get an error message if your ChatMessage collection contains any documents where the value in the location attribute isn't in a valid geospatial format.\n\nAtlas will then build the index. This will be very quick, unless you already have a huge number of documents containing the location field. Once complete, you can move onto the next section.\n\n### Plotting Your Location Data in MongoDB Charts\n\nMongoDB Charts is a simple way to visualize MongoDB data. You can access it through the same UI as Application Services and Atlas. Just click on the \"Charts\" button:\n\nThe first step is to click the \"Add Data Source\" button:\n\nSelect your Atlas cluster:\n\nSelect the `RChat.ChatMessage` collection:\n\nClick \u201cFinish.\u201d You\u2019ll be taken to the default Dashboards view, which is empty for now. Click \"Add Dashboard\":\n\nIn your new dashboard, click \"ADD CHART\":\n\nConfigure your chart as shown here by:\n- Setting the chart type to \"Geospatial\" and the sub-type to \"Scatter.\"\n- Dragging the \"location\" attribute to the coordinates box.\n- Dragging the \"author\" field to the \"Color\" box.\n\nOnce you've created your chart, you can embed it in web apps, etc. That's beyond the scope of this article, but check out the MongoDB Charts docs if you're interested.\n\n## Conclusion\n\nSwiftUI makes it easy to embed Apple Maps in your SwiftUI apps. 
As with most Apple frameworks, there are extra maps features available if you break out from SwiftUI, but I'd suggest that the simplicity of working with SwiftUI is enough incentive for you to avoid that unless you have a compelling reason.\n\nAccessing location information from within SwiftUI still feels a bit of a hack, but in reality, you cut and paste the helper code once, and then you're good to go.\n\nBy storing the location as a `longitude, latitude]` array (`List`) in your Realm database, it's simple to sync it with MongoDB Atlas. Once in Atlas, you have the full power of MongoDB's geospatial functionality to work your location data.\n\nIf you have questions, please head to our [developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Realm", "Swift", "iOS"], "pageDescription": "Learn how to use the new Map view from iOS Map Kit in your SwiftUI/Realm apps. Also see how to use iOS location in Realm, Atlas, and Charts.", "contentType": "Tutorial"}, "title": "Using Maps and Location Data in Your SwiftUI (+Realm) App", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/cidr-subnet-selection-atlas", "action": "created", "body": "# CIDR Subnet Selection for MongoDB Atlas\n\n## Introduction\n\nOne of the best features of MongoDB\nAtlas is the ability to peer your\nhost\nVPC\non your own Amazon Web Services (AWS) account to your Atlas VPC. VPC\npeering provides you with the ability to use the private IP range of\nyour hosts and MongoDB Atlas cluster. This allows you to reduce your\nnetwork exposure and improve security of your data. If you chose to use\npeering there are some considerations you should think about first in\nselecting the right IP block for your private traffic.\n\n## Host VPC\n\nThe host VPC is where you configure the systems that your application\nwill use to connect to your MongoDB Atlas cluster. AWS provides your\naccount with a default VPC for your hosts You may need to modify the\ndefault VPC or create a new one to work alongside MongoDB Atlas.\n\nMongoDB Atlas requires your host VPC to follow the\nRFC-1918 standard for creating\nprivate ranges. The Internet Assigned Numbers Authority (IANA) has\nreserved the following three blocks of the IP address space for private\ninternets:\n\n- 10.0.0.0 - 10.255.255.255 (10/8 prefix)\n- 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)\n- 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)\n\n>\n>\n>Don't overlap your ranges!\n>\n>\n\nThe point of peering is to permit two private IP ranges to work in\nconjunction to keep your network traffic off the public internet. This\nwill require you to use separate private IP ranges that do not conflict.\n\nAWS standard states the following in their \"Invalid VPC\nPeering\"\ndocument:\n\n>\n>\n>You cannot create a VPC peering connection between VPCs with matching or\n>overlapping IPv4 CIDR blocks.\n>\n>\n\n## MongoDB Atlas VPC\n\nWhen you create a group in MongoDB Atlas, by default we provide you with\nan AWS VPC which you can only modify before launching your first\ncluster. Groups with an existing cluster CANNOT MODIFY their VPC CIDR\nblock - this is to comply with the AWS requirement for\npeering.\nBy default we create a VPC with IP range 192.168.248.0/21. To specify\nyour IP block prior to configuring peering and launching your cluster,\nfollow these steps:\n\n1. 
Sign up for MongoDB Atlas and\n ensure your payment method is completed.\n\n2. Click on the **Network Access** tab, then select **Peering**. You\n should see a page such as this which shows you that you have not\n launched a cluster yet:\n\n \n\n3. Click on the **New Peering Connection** button. You will be given a\n new \"Peering Connection\" window to add your peering details. At the\n bottom of this page you'll see a section to modify \"Your Atlas VPC\"\n\n \n\n4. If you would like to specify a different IP range, you may use one\n of the RFC-1918 ranges with the appropriate subnet and enter it\n here. It's extremely important to ensure that you choose two\n distinct RFC-1918 ranges. These two cannot overlap their subnets:\n\n \n\n5. Click on the **Initiate Peering** button and follow the directions\n to add the appropriate subnet ranges.\n\n## Conclusion\n\nUsing peering ensures that your database traffic remains off the public\nnetwork. This provides you with a much more secure solution allowing you\nto easily scale up and down without specifying IP addresses each time,\nand reduces costs on transporting your data from server to server. At\nany time if you run into problems with this, our support team is always\navailable by clicking the SUPPORT link in the lower left of your window.\nOur support team is happy to assist in ensuring your peering connection\nis properly configured.\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "VPC peering provides you with the ability to use the private IP range of your hosts and MongoDB Atlas cluster.", "contentType": "Tutorial"}, "title": "CIDR Subnet Selection for MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/sql-to-aggregation-pipeline", "action": "created", "body": "# MongoDB Aggregation Pipeline Queries vs SQL Queries\n\nLet's be honest: Many devs coming to MongoDB are joining the community\nwith a strong background in SQL. I would personally include myself in\nthis subset of MongoDB devs. I think it's useful to map terms and\nconcepts you might be familiar with in SQL to help\n\"translate\"\nyour work into MongoDB Query Language (MQL). More specifically, in this\npost, I will be walking through translating the MongoDB Aggregation\nPipeline from SQL.\n\n## What is the Aggregation Framework?\n\nThe aggregation framework allows you to analyze your data in real time.\nUsing the framework, you can create an aggregation pipeline that\nconsists of one or more\nstages.\nEach stage transforms the documents and passes the output to the next\nstage.\n\nIf you're familiar with the Unix pipe \\|, you can think of the\naggregation pipeline as a very similar concept. Just as output from one\ncommand is passed as input to the next command when you use piping,\noutput from one stage is passed as input to the next stage when you use\nthe aggregation pipeline.\n\nSQL is a declarative language. You have to declare what you want to\nsee\u2014that's why SELECT comes first. You have to think in sets, which can\nbe difficult, especially for functional programmers. With MongoDB's\naggregation pipeline, you can have stages that reflect how you think\u2014for\nexample, \"First, let's group by X. Then, we'll get the top 5 from every\ngroup. Then, we'll arrange by price.\" This is a difficult query to do in\nSQL, but much easier using the aggregation pipeline framework.\n\nThe aggregation framework has a variety of\nstages\navailable for you to use. 
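Before digging into individual stages, it can help to see the overall shape of an aggregation call: the stages go in an array, and documents flow through them in order. The following is only an illustrative sketch\u2014the collection and field names here are placeholders rather than part of the sample data used later in this post:\n\n``` javascript\n// Each stage transforms the documents and passes its output to the next stage.\ndb.collection.aggregate( [\n    { $match: { status: \"A\" } },                            // filter documents (like WHERE)\n    { $group: { _id: \"$category\", total: { $sum: 1 } } },   // group and accumulate (like GROUP BY)\n    { $sort: { total: -1 } },                               // order the results (like ORDER BY)\n    { $limit: 5 }                                           // cap the number of results (like LIMIT)\n] );\n```\n\n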
Today, we'll discuss the basics of how to use\n$match,\n$group,\n$sort,\nand\n$limit.\nNote that the aggregation framework has many other powerful stages,\nincluding\n$count,\n$geoNear,\n$graphLookup,\n$project,\n$unwind,\nand others.\n\n>\n>\n>If you want to check out another great introduction to the MongoDB\n>Aggregation Pipeline, be sure to check out Introduction to the MongoDB\n>Aggregation\n>Framework.\n>\n>\n\n## Terminology and Concepts\n\nThe following table provides an overview of common SQL aggregation\nterms, functions, and concepts and the corresponding MongoDB\naggregation\noperators:\n\n| **SQL Terms, Functions, and Concepts** | **MongoDB Aggregation Operators** |\n|----------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| WHERE | $match |\n| GROUP BY | $group |\n| HAVING | $match |\n| SELECT | $project |\n| LIMIT | $limit |\n| OFFSET | $skip |\n| ORDER BY | $sort |\n| SUM() | $sum |\n| COUNT() | $sum and $sortByCount |\n| JOIN | $lookup |\n| SELECT INTO NEW_TABLE | $out |\n| MERGE INTO TABLE | $merge (Available starting in MongoDB 4.2) |\n| UNION ALL | $unionWith (Available starting in MongoDB 4.4) |\n\nAlright, now that we've covered the basics of MongoDB Aggregations,\nlet's jump into some examples.\n\n## SQL Setup\n\nThe SQL examples assume *two* tables, *album* and *songs*, that join by\nthe *song.album_id* and the *songs.id* columns. Here's what the tables\nlook like:\n\n##### Albums\n\n| **id** | **name** | **band_name** | **price** | **status** |\n|--------|-----------------------------------|------------------|-----------|------------|\n| 1 | lo-fi chill hop songs to study to | Silicon Infinite | 2.99 | A |\n| 2 | Moon Rocks | Silicon Infinite | 1.99 | B |\n| 3 | Flavour | Organical | 4.99 | A |\n\n##### Songs\n\n| **id** | **title** | **plays** | **album_id** |\n|--------|-----------------------|-----------|--------------|\n| 1 | Snow Beats | 133 | 1 |\n| 2 | Rolling By | 242 | 1 |\n| 3 | Clouds | 3191 | 1 |\n| 4 | But First Coffee | 562 | 3 |\n| 5 | Autumn | 901 | 3 |\n| 6 | Milk Toast | 118 | 2 |\n| 7 | Purple Mic | 719 | 2 |\n| 8 | One Note Dinner Party | 1242 | 2 |\n\nI used a site called SQL Fiddle,\nand used PostgreSQL 9.6 for all of my examples. However, feel free to\nrun these sample SQL snippets wherever you feel most comfortable. 
In\nfact, this is the code I used to set up and seed my tables with our\nsample data:\n\n``` SQL\n-- Creating the main albums table\nCREATE TABLE IF NOT EXISTS albums (\n id BIGSERIAL NOT NULL UNIQUE PRIMARY KEY,\n name VARCHAR(40) NOT NULL UNIQUE,\n band_name VARCHAR(40) NOT NULL,\n price float8 NOT NULL,\n status VARCHAR(10) NOT NULL\n);\n\n-- Creating the songs table\nCREATE TABLE IF NOT EXISTS songs (\n id SERIAL PRIMARY KEY NOT NULL,\n title VARCHAR(40) NOT NULL,\n plays integer NOT NULL,\n album_id BIGINT NOT NULL REFERENCES albums ON DELETE RESTRICT\n);\n\nINSERT INTO albums (name, band_name, price, status)\nVALUES\n ('lo-fi chill hop songs to study to', 'Silicon Infinite', 7.99, 'A'),\n ('Moon Rocks', 'Silicon Infinite', 1.99, 'B'),\n ('Flavour', 'Organical', 4.99, 'A');\n\nINSERT INTO songs (title, plays, album_id)\nVALUES\n ('Snow Beats', 133, (SELECT id from albums WHERE name='lo-fi chill hop songs to study to')),\n ('Rolling By', 242, (SELECT id from albums WHERE name='lo-fi chill hop songs to study to')),\n ('Clouds', 3191, (SELECT id from albums WHERE name='lo-fi chill hop songs to study to')),\n ('But First Coffee', 562, (SELECT id from albums WHERE name='Flavour')),\n ('Autumn', 901, (SELECT id from albums WHERE name='Flavour')),\n ('Milk Toast', 118, (SELECT id from albums WHERE name='Moon Rocks')),\n ('Purple Mic', 719, (SELECT id from albums WHERE name='Moon Rocks')),\n ('One Note Dinner Party', 1242, (SELECT id from albums WHERE name='Moon Rocks'));\n```\n\n## MongoDB Setup\n\nThe MongoDB examples assume *one* collection `albums` that contains\ndocuments with the following schema:\n\n``` json\n{\n name : 'lo-fi chill hop songs to study to',\n band_name: 'Silicon Infinite',\n price: 7.99,\n status: 'A',\n songs: \n { title: 'Snow beats', 'plays': 133 },\n { title: 'Rolling By', 'plays': 242 },\n { title: 'Sway', 'plays': 3191 }\n ]\n}\n```\n\nFor this post, I did all of my prototyping in a MongoDB Visual Studio\nCode plugin playground. For more information on how to use a MongoDB\nPlayground in Visual Studio Code, be sure to check out this post: [How\nTo Use The MongoDB Visual Studio Code\nPlugin.\nOnce you have your playground all set up, you can use this snippet to\nset up and seed your collection. 
You can also follow along with this\ndemo by using the MongoDB Web\nShell.\n\n``` javascript\n// Select the database to use.\nuse('mongodbVSCodePlaygroundDB');\n\n// The drop() command destroys all data from a collection.\n// Make sure you run it against the correct database and collection.\ndb.albums.drop();\n\n// Insert a few documents into the albums collection.\ndb.albums.insertMany(\n {\n 'name' : 'lo-fi chill hop songs to study to', band_name: 'Silicon Infinite', price: 7.99, status: 'A',\n songs: [\n { title: 'Snow beats', 'plays': 133 },\n { title: 'Rolling By', 'plays': 242 },\n { title: 'Clouds', 'plays': 3191 }\n ]\n },\n {\n 'name' : 'Moon Rocks', band_name: 'Silicon Infinite', price: 1.99, status: 'B',\n songs: [\n { title: 'Milk Toast', 'plays': 118 },\n { title: 'Purple Mic', 'plays': 719 },\n { title: 'One Note Dinner Party', 'plays': 1242 }\n ]\n },\n {\n 'name' : 'Flavour', band_name: 'Organical', price: 4.99, status: 'A',\n songs: [\n { title: 'But First Coffee', 'plays': 562 },\n { title: 'Autumn', 'plays': 901 }\n ]\n },\n]);\n```\n\n## Quick Reference\n\n### Count all records from albums\n\n#### SQL\n\n``` SQL\nSELECT COUNT(*) AS count\nFROM albums\n```\n\n#### MongoDB\n\n``` javascript\ndb.albums.aggregate( [\n {\n $group: {\n _id: null, // An _id value of null on the $group operator accumulates values for all the input documents as a whole.\n count: { $sum: 1 }\n }\n }\n] );\n```\n\n### Sum the price field from albums\n\n#### SQL\n\n``` SQL\nSELECT SUM(price) AS total\nFROM albums\n```\n\n#### MongoDB\n\n``` javascript\ndb.albums.aggregate( [\n {\n $group: {\n _id: null,\n total: { $sum: \"$price\" }\n }\n }\n] );\n```\n\n### For each unique band_name, sum the price field\n\n#### SQL\n\n``` SQL\nSELECT band_name,\nSUM(price) AS total\nFROM albums\nGROUP BY band_name\n```\n\n#### MongoDB\n\n``` javascript\ndb.albums.aggregate( [\n {\n $group: {\n _id: \"$band_name\",\n total: { $sum: \"$price\" }\n }\n }\n] );\n```\n\n### For each unique band_name, sum the price field, results sorted by sum\n\n#### SQL\n\n``` SQL\nSELECT band_name,\n SUM(price) AS total\nFROM albums\nGROUP BY band_name\nORDER BY total\n```\n\n#### MongoDB\n\n``` javascript\ndb.albums.aggregate( [\n {\n $group: {\n _id: \"$band_name\",\n total: { $sum: \"$price\" }\n }\n },\n { $sort: { total: 1 } }\n] );\n```\n\n### For band_name with multiple albums, return the band_name and the corresponding album count\n\n#### SQL\n\n``` SQL\nSELECT band_name,\n count(*)\nFROM albums\nGROUP BY band_name\nHAVING count(*) > 1;\n```\n\n#### MongoDB\n\n``` javascript\ndb.albums.aggregate( [\n {\n $group: {\n _id: \"$band_name\",\n count: { $sum: 1 }\n }\n },\n { $match: { count: { $gt: 1 } } }\n ] );\n```\n\n### Sum the price of all albums with status A and group by unique band_name\n\n#### SQL\n\n``` SQL\nSELECT band_name,\n SUM(price) as total\nFROM albums\nWHERE status = 'A'\nGROUP BY band_name\n```\n\n#### MongoDB\n\n``` javascript\ndb.albums.aggregate( [\n { $match: { status: 'A' } },\n {\n $group: {\n _id: \"$band_name\",\n total: { $sum: \"$price\" }\n }\n }\n] );\n```\n\n### For each unique band_name with status A, sum the price field and return only where the sum is greater than $5.00\n\n#### SQL\n\n``` SQL\nSELECT band_name,\n SUM(price) as total\nFROM albums\nWHERE status = 'A'\nGROUP BY band_name\nHAVING SUM(price) > 5.00;\n```\n\n#### MongoDB\n\n``` javascript\ndb.albums.aggregate( [\n { $match: { status: 'A' } },\n {\n $group: {\n _id: \"$band_name\",\n total: { $sum: \"$price\" }\n }\n },\n { $match: { 
total: { $gt: 5.00 } } }\n] );\n```\n\n### For each unique band_name, sum the corresponding song plays field associated with the albums\n\n#### SQL\n\n``` SQL\nSELECT band_name,\n SUM(songs.plays) as total_plays\nFROM albums,\n songs\nWHERE songs.album_id = albums.id\nGROUP BY band_name;\n```\n\n#### MongoDB\n\n``` javascript\ndb.albums.aggregate( [\n { $unwind: \"$songs\" },\n {\n $group: {\n _id: \"$band_name\",\n qty: { $sum: \"$songs.plays\" }\n }\n }\n] );\n```\n\n### For each unique album, get the song from album with the most plays\n\n#### SQL\n\n``` SQL\nSELECT name, title, plays\n FROM songs s1 INNER JOIN albums ON (album_id = albums.id)\nWHERE plays=(SELECT MAX(s2.plays)\n FROM songs s2\nWHERE s1.album_id = s2.album_id)\nORDER BY name;\n```\n\n#### MongoDB\n\n``` javascript\ndb.albums.aggregate( [\n { $project:\n {\n name: 1,\n plays: {\n $filter: {\n input: \"$songs\",\n as: \"item\",\n cond: { $eq: [\"$item.plays\", { $max: \"$songs.plays\" }] }\n }\n }\n }\n }\n] );\n```\n\n## Wrapping Up\n\nThis post is in no way a complete overview of all the ways that MongoDB\ncan be used like a SQL-based database. This was only meant to help devs\nin SQL land start to make the transition over to MongoDB with some basic\nqueries using the aggregation pipeline. The aggregation framework has\nmany other powerful stages, including\n[$count,\n$geoNear,\n$graphLookup,\n$project,\n$unwind,\nand others.\n\nIf you want to get better at using the MongoDB Aggregation Framework, be\nsure to check out MongoDB University: M121 - The MongoDB Aggregation\nFramework. Or,\nbetter yet, try to use some advanced MongoDB aggregation pipeline\nqueries in your next project! If you have any questions, be sure to head\nover to the MongoDB Community\nForums. It's the\nbest place to get your MongoDB questions answered.\n\n## Resources:\n\n- MongoDB University: M121 - The MongoDB Aggregation Framework:\n \n- How to Use Custom Aggregation Expressions in MongoDB 4.4:\n \n- Introduction to the MongoDB Aggregation Framework:\n \n- How to Use the Union All Aggregation Pipeline Stage in MongoDB 4.4:\n \n- Aggregation Framework with Node.js Tutorial:\n \n- Aggregation Pipeline Quick Reference:\n https://docs.mongodb.com/manual/meta/aggregation-quick-reference\n- SQL to Aggregation Mapping Chart:\n https://docs.mongodb.com/manual/reference/sql-aggregation-comparison\n- SQL to MongoDB Mapping Chart:\n https://docs.mongodb.com/manual/reference/sql-comparison\n- Questions? Comments? We'd love to connect with you. Join the\n conversation on the MongoDB Community Forums:\n https://developer.mongodb.com/community/forums\n\n", "format": "md", "metadata": {"tags": ["MongoDB", "SQL"], "pageDescription": "This is an overview of common SQL aggregation terms, functions, and concepts and the corresponding MongoDB aggregation operators.", "contentType": "Tutorial"}, "title": "MongoDB Aggregation Pipeline Queries vs SQL Queries", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/capturing-hacker-news-mentions-nodejs-mongodb", "action": "created", "body": "# Capturing Hacker News Mentions with Node.js and MongoDB\n\nIf you're in the technology space, you've probably stumbled upon Hacker News at some point or another. Maybe you're interested in knowing what's popular this week for technology or maybe you have something to share. 
It's a platform for information.\n\nThe problem is that you're going to find too much information on Hacker News without a particularly easy way to filter through it to find the topics that you're interested in. Let's say, for example, you want to know information about Bitcoin as soon as it is shared. How would you do that on the Hacker News website?\n\nIn this tutorial, we're going to learn how to parse through Hacker News data as it is created, filtering for only the topics that we're interested in. We're going to do a sentiment analysis on the potential matches to rank them, and then we're going to store this information in MongoDB so we can run reports from it. We're going to do it all with Node.js and some simple pipelines.\n\n## The Requirements\n\nYou won't need a Hacker News account for this tutorial, but you will need a few things to be successful:\n\n- Node.js 12.10 or more recent\n- A properly configured MongoDB Atlas cluster\n\nWe'll be storing all of our matches in MongoDB Atlas. This will make it easier for us to run reports and not depend on looking at logs or similarly structured data.\n\n>You can deploy and use a MongoDB Atlas M0 cluster for FREE. Learn more by clicking here.\n\nHacker News doesn't have an API that will allow us to stream data in real-time. Instead, we'll be using the Unofficial Hacker News Streaming API. For this particular example, we'll be looking at the comments stream, but your needs may vary.\n\n## Installing the Project Dependencies in a New Node.js Application\n\nBefore we get into the interesting code and our overall journey toward understanding and storing the Hacker News data as it comes in, we need to bootstrap our project.\n\nOn your computer, create a new project directory and execute the following commands:\n\n``` bash\nnpm init -y\nnpm install mongodb ndjson request sentiment through2 through2-filter --save\n```\n\nWith the above commands, we are creating a **package.json** file and installing a few packages. We know mongodb will be used for storing our Hacker News Data, but the rest of the list is probably unfamiliar to you.\n\nWe'll be using the request package to consume raw data from the API. As we progress, you'll notice that we're working with streams of data rather than one-off requests to the API. This means that the data that we receive might not always be complete. To make sense of this, we use the ndjson package to get useable JSON from the stream. Since we're working with streams, we need to be able to use pipelines, so we can't just pass our JSON data through the pipeline as is. Instead, we need to use through2 and through2-filter to filter and manipulate our JSON data before passing it to another stage in the pipeline. Finally, we have sentiment for doing a sentiment analysis on our data.\n\nWe'll reiterate on a lot of these packages as we progress.\n\nBefore moving to the next step, make sure you create a **main.js** file in your project. 
This is where we'll add our code, which you'll see isn't too many lines.\n\n## Connecting to a MongoDB Cluster to Store Hacker News Mentions\n\nWe're going to start by adding our downloaded dependencies to our code file and connecting to a MongoDB cluster or instance.\n\nOpen the project's **main.js** file and add the following code:\n\n``` javascript\nconst stream = require(\"stream\");\nconst ndjson = require(\"ndjson\");\nconst through2 = require(\"through2\");\nconst request = require(\"request\");\nconst filter = require(\"through2-filter\");\nconst sentiment = require(\"sentiment\");\nconst util = require(\"util\");\nconst pipeline = util.promisify(stream.pipeline);\nconst { MongoClient } = require(\"mongodb\");\n\n(async () => {\n const client = new MongoClient(process.env\"ATLAS_URI\"], { useUnifiedTopology: true });\n try {\n await client.connect();\n const collection = client.db(\"hacker-news\").collection(\"mentions\");\n console.log(\"FINISHED\");\n } catch(error) {\n console.log(error);\n }\n})();\n```\n\nIn the above code, we've added all of our downloaded dependencies, plus some. Remember we're working with a stream of data, so we need to use pipelines in Node.js if we want to work with that data in stages.\n\nWhen we run the application, we are connecting to a MongoDB instance or cluster as defined in our environment variables. The `ATLAS_URI` variable would look something like this:\n\n``` none\nmongodb+srv://:@plummeting-us-east-1.hrrxc.mongodb.net/\n```\n\nYou can find the connection string in your MongoDB Atlas dashboard.\n\nTest that the application can connect to the database by executing the following command:\n\n``` bash\nnode main.js\n```\n\nIf you don't want to use environment variables, you can hard-code the value in your project or use a configuration file. I personally prefer environment variables because we can set them externally on most cloud deployments for security (and there's no risk that we accidentally commit them to GitHub).\n\n## Parsing and Filtering Hacker News Data in Real Time\n\nAt this point, the code we have will connect us to MongoDB. Now we need to focus on streaming the Hacker News data into our application and filtering it for the data that we actually care about.\n\nLet's make the following changes to our **main.js** file:\n\n``` javascript\n(async () => {\n const client = new MongoClient(process.env[\"ATLAS_URI\"], { useUnifiedTopology: true });\n try {\n await client.connect();\n const collection = client.db(\"hacker-news\").collection(\"mentions\");\n await pipeline(\n request(\"http://api.hnstream.com/comments/stream/\"),\n ndjson.parse({ strict: false }),\n filter({ objectMode: true }, chunk => {\n return chunk[\"body\"].toLowerCase().includes(\"bitcoin\") || chunk[\"article-title\"].toLowerCase().includes(\"bitcoin\");\n })\n );\n console.log(\"FINISHED\");\n } catch(error) {\n console.log(error);\n }\n})();\n```\n\nIn the above code, after we connect, we create a pipeline of stages to complete. The first stage is a simple GET request to the streaming API endpoint. The results from our request should be JSON, but since we're working with a stream of data rather than expecting a single response, our result may be malformed depending on where we are in the stream. This is normal.\n\nTo get beyond, this we can either put the pieces of the JSON puzzle together on our own as they come in from the stream, or we can use the [ndjson package. 
This package acts as the second stage and parses the data coming in from the previous stage, being our streaming request.\n\nBy the time the `ndjson.parse` stage completes, we should have properly formed JSON to work with. This means we need to analyze it to see if it is JSON data we want to keep or toss. Remember, the streaming API gives us all data coming from Hacker News, not just what we're looking for. To filter, we can use the through2-filter package which allows us to filter on a stream like we would on an array in javaScript.\n\nIn our `filter` stage, we are returning true if the body of the Hacker News mention includes \"bitcoin\" or the title of the thread includes the \"bitcoin\" term. This means that this particular entry is what we're looking for and it will be passed to the next stage in the pipeline. Anything that doesn't match will be ignored for future stages.\n\n## Performing a Sentiment Analysis on Matched Data\n\nAt this point, we should have matches on Hacker News data that we're interested in. However, Hacker News has a ton of bots and users posting potentially irrelevant data just to rank in people's searches. It's a good idea to analyze our match and score it to know the quality. Then later, we can choose to ignore matches with a low score as they will probably be a waste of time.\n\nSo let's adjust our pipeline a bit in the **main.js** file:\n\n``` javascript\n(async () => {\n const client = new MongoClient(process.env\"ATLAS_URI\"], { useUnifiedTopology: true });\n const textRank = new sentiment();\n try {\n await client.connect();\n const collection = client.db(\"hacker-news\").collection(\"mentions\");\n await pipeline(\n request(\"http://api.hnstream.com/comments/stream/\"),\n ndjson.parse({ strict: false }),\n filter({ objectMode: true }, chunk => {\n return chunk[\"body\"].toLowerCase().includes(\"bitcoin\") || chunk[\"article-title\"].toLowerCase().includes(\"bitcoin\");\n }),\n through2.obj((row, enc, next) => {\n let result = textRank.analyze(row.body);\n row.score = result.score;\n next(null, row);\n })\n );\n console.log(\"FINISHED\");\n } catch(error) {\n console.log(error);\n }\n})();\n```\n\nIn the above code, we've added two parts related to the [sentiment package that we had previously installed.\n\nWe first initialize the package through the following line:\n\n``` javascript\nconst textRank = new sentiment();\n```\n\nWhen looking at our pipeline stages, we make use of the through2 package for streaming object manipulation. Since this is a stream, we can't just take our JSON from the `ndjson.parse` stage and expect to be able to manipulate it like any other object in JavaScript.\n\nWhen we manipulate the matched object, we are performing a sentiment analysis on the body of the mention. At this point, we don't care what the score is, but we plan to add it to the data which we'll eventually store in MongoDB.\n\nThe object as of now might look something like this:\n\n``` json\n{\n \"_id\": \"5ffcc041b3ffc428f702d483\",\n \"body\": \"\n\nthis is the body from the streaming API\n\n\",\n \"author\": \"nraboy\",\n \"article-id\": 43543234,\n \"parent-id\": 3485345,\n \"article-title\": \"Bitcoin: Is it worth it?\",\n \"type\": \"comment\",\n \"id\": 24985379,\n \"score\": 3\n}\n```\n\nThe only modification we've made to the data as of right now is the addition of a score from our sentiment analysis.\n\nIt's important to note that our data is not yet inside of MongoDB. 
We're just at the stage where we've made modifications to the stream of data that could be a match to our interests.\n\n## Creating Documents and Performing Queries in MongoDB\n\nWith the data formatted how we want it, we can focus on storing it within MongoDB and querying it whenever we want.\n\nLet's make a modification to our pipeline:\n\n``` javascript\n(async () => {\n const client = new MongoClient(process.env\"ATLAS_URI\"], { useUnifiedTopology: true });\n const textRank = new sentiment();\n try {\n await client.connect();\n const collection = client.db(\"hacker-news\").collection(\"mentions\");\n await pipeline(\n request(\"http://api.hnstream.com/comments/stream/\"),\n ndjson.parse({ strict: false }),\n filter({ objectMode: true }, chunk => {\n return chunk[\"body\"].toLowerCase().includes(\"bitcoin\") || chunk[\"article-title\"].toLowerCase().includes(\"bitcoin\");\n }),\n through2.obj((row, enc, next) => {\n let result = textRank.analyze(row.body);\n row.score = result.score;\n next(null, row);\n }),\n through2.obj((row, enc, next) => {\n collection.insertOne({\n ...row,\n \"user-url\": `https://news.ycombinator.com/user?id=${row[\"author\"]}`,\n \"item-url\": `https://news.ycombinator.com/item?id=${row[\"article-id\"]}`\n });\n next();\n })\n );\n console.log(\"FINISHED\");\n } catch(error) {\n console.log(error);\n }\n})();\n```\n\nWe're doing another transformation on our object. This could have been merged with the earlier transformation stage, but for code cleanliness, we are breaking them into two stages.\n\nIn this final stage, we are doing an `insertOne` operation with the MongoDB Node.js driver. We're taking the `row` of data from the previous stage and we're adding two new fields to the object before it is inserted. We're doing this so we have quick access to the URL and don't have to rebuild it later.\n\nIf we ran the application, it would run forever, collecting any data posted to Hacker News that matched our filter.\n\nIf we wanted to query our data within MongoDB, we could use an MQL query like the following:\n\n``` javascript\nuse(\"hacker-news\");\n\ndb.mentions.find({ \"score\": { \"$gt\": 3 } });\n```\n\nThe above MQL query would find all documents that have a score greater than 3. With the sentiment analysis, you're not looking at a score of 0 to 10. It is best you read through the [documentation to see how things are scored.\n\n## Conclusion\n\nYou just saw an example of using MongoDB and Node.js for capturing relevant data from Hacker News as it happens live. This could be useful for keeping your own feed of particular topics or it can be extended for other use-cases such as monitoring what people are saying about your brand and using the code as a feedback reporting tool.\n\nThis tutorial could be expanded beyond what we explored for this example. 
For example, we could add MongoDB Realm Triggers to look for certain scores and send a message on Twilio or Slack if a match on our criteria was found.\n\nIf you've got any questions or comments regarding this tutorial, take a moment to drop them in the MongoDB Community Forums.", "format": "md", "metadata": {"tags": ["JavaScript", "Node.js"], "pageDescription": "Learn how to stream data from Hacker News into MongoDB for analyzing with Node.js.", "contentType": "Tutorial"}, "title": "Capturing Hacker News Mentions with Node.js and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/swift/realm-api-cache", "action": "created", "body": "# Build Offline-First Mobile Apps by Caching API Results in Realm\n\n## Introduction\n\nWhen building a mobile app, there's a good chance that you want it to pull in data from a cloud service\u2014whether from your own or from a third party. While other technologies are growing (e.g., GraphQL and MongoDB Realm Sync), REST APIs are still prevalent.\n\nIt's easy to make a call to a REST API endpoint from your mobile app, but what happens when you lose network connectivity? What if you want to slice and dice that data after you've received it? How many times will your app have to fetch the same data (consuming data bandwidth and battery capacity each time)? How will your users react to a sluggish app that's forever fetching data over the internet?\n\nBy caching the data from API calls in Realm, the data is always available to your app. This leads to higher availability, faster response times, and reduced network and battery consumption.\n\nThis article shows how the RCurrency mobile app fetches exchange rate data from a public API, and then caches it in Realm for always-on, local access.\n\n### Is Using the API from Your Mobile App the Best Approach?\n\nThis app only reads data through the API. Writing an offline-first app that needs to reliably update cloud data via an API is a **far** more complex affair. If you need to update cloud data when offline, then I'd strongly recommend you consider MongoDB Realm Sync.\n\nMany APIs throttle your request rate or charge per request. That can lead to issues as your user base grows. A more scalable approach is to have your backend Realm app fetch the data from the API and store it in Atlas. Realm Sync then makes that data available locally on every user's mobile device\u2014without the need for any additional API calls.\n\n## Prerequisites\n\n- Realm-Cocoa 10.13.0+\n- Xcode 13\n- iOS 15\n\n## The RCurrency Mobile App\n\nThe RCurrency app is a simple exchange rate app. It's intended for uses such as converting currencies when traveling.\n\nYou choose a base currency and a list of other currencies you want to convert between. \n\nWhen opened for the first time, RCurrency uses a REST API to retrieve exchange rates, and stores the data in Realm. From that point on, the app uses the data that's stored in Realm. Even if you force-close the app and reopen it, it uses the local data.\n\nIf the stored rates are older than today, the app will fetch the latest rates from the API and replace the Realm data.\n\nThe app supports pull-to-refresh to fetch and store the latest exchange rates from the API.\n\nYou can alter the amount of any currency, and the amounts for all other currencies are instantly recalculated.\n\n## The REST API\n\nI'm using the API provided by exchangerate.host. 
The API is a free service that provides a simple API to fetch currency exchange rates. \n\nOne of the reasons I picked this API is that it doesn't require you to register and then manage access keys/tokens. It's not rocket science to handle that complexity, but I wanted this app to focus on when to fetch data, and what to do once you receive it.\n\nThe app uses a single endpoint (where you can replace `USD` and `EUR` with the currencies you want to convert between):\n\n```js\nhttps://api.exchangerate.host/convert?from=USD&to=EUR\n```\n\nYou can try calling that endpoint directly from your browser.\n\nThe endpoint responds with a JSON document:\n\n```js\n{\n \"motd\": {\n \"msg\": \"If you or your company use this project or like what we doing, please consider backing us so we can continue maintaining and evolving this project.\",\n \"url\": \"https://exchangerate.host/#/donate\"\n },\n \"success\": true,\n \"query\": {\n \"from\": \"USD\",\n \"to\": \"EUR\",\n \"amount\": 1\n },\n \"info\": {\n \"rate\": 0.844542\n },\n \"historical\": false,\n \"date\": \"2021-09-02\",\n \"result\": 0.844542\n}\n```\n\nNote that the exchange rate for each currency is only updated once every 24 hours. That's fine for our app that's helping you decide whether you can afford that baseball cap when you're on vacation. If you're a currency day-trader, then you should look elsewhere.\n\n## The RCurrency App Implementation\n### Data Model\n\nJSON is the language of APIs. That's great news as most modern programming languages (including Swift) make it super easy to convert between JSON strings and native objects.\n\nThe app stores the results from the API query in objects of type `Rate`. To make it as simple as possible to receive and store the results, I made the `Rate` class match the JSON format of the API results:\n\n```swift\nclass Rate: Object, ObjectKeyIdentifiable, Codable {\n var motd = Motd()\n var success = false\n @Persisted var query: Query?\n var info = Info()\n @Persisted var date: String\n @Persisted var result: Double\n}\n\nclass Motd: Codable {\n var msg = \"\"\n var url = \"\"\n}\n\nclass Query: EmbeddedObject, ObjectKeyIdentifiable, Codable {\n @Persisted var from: String\n @Persisted var to: String\n var amount = 0\n}\n\nclass Info: Codable {\n var rate = 0.0\n}\n```\n\nNote that only the fields annotated with `@Persisted` will be stored in Realm.\n\nSwift can automatically convert between `Rate` objects and the JSON strings returned by the API because we make the class comply with the `Codable` protocol.\n\nThere are two other top-level classes used by the app. \n\n`Symbols` stores all of the supported currency symbols. In the app, the list is bootstrapped from a fixed list. 
For future-proofing, it would be better to fetch them from an API:\n\n```swift\nclass Symbols {\n var symbols = Dictionary()\n}\n\nextension Symbols {\n static var data = Symbols()\n\n static func loadData() {\n data.symbols\"AED\"] = \"United Arab Emirates Dirham\"\n data.symbols[\"AFN\"] = \"Afghan Afghani\"\n data.symbols[\"ALL\"] = \"Albanian Lek\"\n ...\n }\n}\n```\n\n`UserSymbols` is used to store the user's chosen base currency and the list of currencies they'd like to see exchange rates for:\n\n```swift\nclass UserSymbols: Object, ObjectKeyIdentifiable {\n @Persisted var baseSymbol: String\n @Persisted var symbols: List\n}\n```\n\nAn instance of `UserSymbols` is stored in Realm so that the user gets the same list whenever they open the app.\n\n### `Rate` Data Lifecycle\n\nThis flowchart shows how the exchange rate for a single currency (represented by the `symbol` string) is managed when the `CurrencyRowContainerView` is used to render data for that currency:\n\n![Flowchart showing how the app fetches data from the API and stored in in Realm. The mobile app's UI always renders what's stored in MongoDB. The following sections will describe each block in the flow diagram.\n\nNote that the actual behavior is a little more subtle than the diagram suggests. SwiftUI ties the Realm data to the UI. If stage #2 finds the data in Realm, then it will immediately get displayed in the view (stage #8). The code will then make the extra checks and refresh the Realm data if needed. If and when the Realm data is updated, SwiftUI will automatically refresh the UI to render it.\n\nLet's look at each of those steps in turn.\n\n#### #1 `CurrencyContainerView` loaded for currency represented by `symbol`\n\n`CurrencyListContainerView` iterates over each of the currencies that the user has selected. For each currency, it creates a `CurrencyRowContainerView` and passes in strings representing the base currency (`baseSymbol`) and the currency we want an exchange rate for (`symbol`):\n\n```swift\nList {\n ForEach(userSymbols.symbols, id: \\.self) { symbol in\n CurrencyRowContainerView(baseSymbol: userSymbols.baseSymbol,\n baseAmount: $baseAmount,\n symbol: symbol,\n refreshNeeded: refreshNeeded)\n }\n .onDelete(perform: deleteSymbol)\n}\n```\n#### #2 `rate` = FetchFromRealm(`symbol`)\n\n`CurrencyRowContainerView` then uses the `@ObservedResults` property wrapper to query all `Rate` objects that are already stored in Realm:\n\n```swift\nstruct CurrencyRowContainerView: View {\n @ObservedResults(Rate.self) var rates\n ...\n}\n```\n\nThe view then filters those results to find one for the requested `baseSymbol`/`symbol` pair:\n\n```swift\nvar rate: Rate? {\n rates.filter(\n NSPredicate(format: \"query.from = %@ AND query.to = %@\",\n baseSymbol, symbol)).first\n}\n```\n\n#### #3 `rate` found?\n\nThe view checks whether `rate` is set or not (i.e., whether a matching object was found in Realm). If `rate` is set, then it's passed to `CurrencyRowDataView` to render the details (step #8). 
If `rate` is `nil`, then a placeholder \"Loading Data...\" `TextView` is rendered, and `loadData` is called to fetch the data using the API (step #4-3):\n\n```swift\nvar body: some View {\n if let rate = rate {\n HStack {\n CurrencyRowDataView(rate: rate, baseAmount: $baseAmount, action: action)\n ...\n }\n } else {\n Text(\"Loading Data...\")\n .onAppear(perform: loadData)\n }\n}\n```\n\n#### #4-3 Fetch `rate` from API\u00a0\u2014 No matching object found in Realm\n\nThe API URL is formed by inserting the base currency (`baseSymbol`) and the target currency (`symbol`) into a template string. `loadData` then sends the request to the API endpoint and handles the response:\n\n```swift\nprivate func loadData() {\n guard let url = URL(string: \"https://api.exchangerate.host/convert?from=\\(baseSymbol)&to=\\(symbol)\") else {\n print(\"Invalid URL\")\n return\n }\n let request = URLRequest(url: url)\n print(\"Network request: \\(url.description)\")\n URLSession.shared.dataTask(with: request) { data, response, error in\n guard let data = data else {\n print(\"Error fetching data: \\(error?.localizedDescription ?? \"Unknown error\")\")\n return\n }\n if let decodedResponse = try? JSONDecoder().decode(Rate.self, from: data) {\n // TODO: Step #5-3\n } else {\n print(\"No data received\")\n }\n }\n .resume()\n}\n```\n\n#### #5-3 StoreInRealm(`rate`) \u2014 No matching object found in Realm\n\n`Rate` objects stored in Realm are displayed in our SwiftUI views. Any data changes that impact the UI must be done on the main thread. When the API endpoint sends back results, our code receives them in a callback thread, and so we must use `DispatchQueue` to run our closure in the main thread so that we can add the resulting `Rate` object to Realm:\n\n```swift\nif let decodedResponse = try? JSONDecoder().decode(Rate.self, from: data) {\n DispatchQueue.main.async {\n $rates.append(decodedResponse)\n }\n} else {\n print(\"No data received\")\n}\n```\n\nNotice how simple it is to convert the JSON response into a Realm `Rate` object and store it in our local realm!\n\n#### #6 Refresh Requested?\n\nRCurrency includes a pull-to-refresh feature which will fetch fresh exchange rate data for each of the user's currency symbols. We add the refresh functionality by appending the `.refreshable` modifier to the `List` of rates in `CurrencyListContainerView`:\n\n```swift\nList {\n ...\n}\n.refreshable(action: refreshAll)\n```\n\n`refreshAll` sets the `refreshNeeded` variable to `true`, waits a second to allow SwiftUI to react to the change, and then sets it back to `false`: \n\n```swift\nprivate func refreshAll() {\n refreshNeeded = true\n DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {\n refreshNeeded = false\n }\n}\n```\n\n`refreshNeeded` is passed to each instance of `CurrencyRowContainerView`:\n\n```swift\nCurrencyRowContainerView(baseSymbol: userSymbols.baseSymbol,\n baseAmount: $baseAmount,\n symbol: symbol,\n refreshNeeded: refreshNeeded)\n```\n`CurrencyRowContainerView` checks `refreshNeeded`. 
If `true`, it displays a temporary refresh image and invokes `refreshData` (step #4-6):\n\n```swift\nif refreshNeeded {\n Image(systemName: \"arrow.clockwise.icloud\")\n .onAppear(perform: refreshData)\n}\n```\n\n#### #4-6 Fetch `rate` from API\u00a0\u2014 Refresh requested\n\n`refreshData` fetches the data in exactly the same way as `loadData` in step #4-3:\n\n```swift\nprivate func refreshData() {\n guard let url = URL(string: \"https://api.exchangerate.host/convert?from=\\(baseSymbol)&to=\\(symbol)\") else {\n print(\"Invalid URL\")\n return\n }\n let request = URLRequest(url: url)\n print(\"Network request: \\(url.description)\")\n URLSession.shared.dataTask(with: request) { data, response, error in\n guard let data = data else {\n print(\"Error fetching data: \\(error?.localizedDescription ?? \"Unknown error\")\")\n return\n }\n if let decodedResponse = try? JSONDecoder().decode(Rate.self, from: data) {\n DispatchQueue.main.async {\n // TODO: #5-5\n }\n } else {\n print(\"No data received\")\n }\n }\n .resume()\n}\n```\n\nThe difference is that in this case, there may already be a `Rate` object in Realm for this currency pair, and so the results are handled differently...\n\n#### #5-6 StoreInRealm(`rate`) \u2014 Refresh requested\n\nIf the `Rate` object for this currency pair had been found in Realm, then we reference it with `existingRate`. `existingRate` is then updated with the API results:\n\n```swift\nif let decodedResponse = try? JSONDecoder().decode(Rate.self, from: data) {\n DispatchQueue.main.async {\n if let existingRate = rate {\n do {\n let realm = try Realm()\n try realm.write() {\n guard let thawedrate = existingRate.thaw() else {\n print(\"Couldn't thaw existingRate\")\n return\n }\n thawedrate.date = decodedResponse.date\n thawedrate.result = decodedResponse.result\n }\n } catch {\n print(\"Unable to update existing rate in Realm\")\n }\n }\n }\n}\n```\n\n#### #7 `rate` stale?\n\nThe exchange rates available through the API are updated daily. The date that the rate applies to is included in the API response, and it\u2019s stored in the Realm `Rate` object. When displaying the exchange rate data, `CurrencyRowDataView` invokes `loadData`:\n\n```swift\nvar body: some View {\n CurrencyRowView(value: (rate.result) * baseAmount,\n symbol: rate.query?.to ?? \"\",\n baseValue: $baseAmount,\n action: action)\n .onAppear(perform: loadData)\n}\n```\n\n`loadData` checks that the existing Realm `Rate` object applies to today. If not, then it will refresh the data (stage 4-7):\n\n```swift\nprivate func loadData() {\n if !rate.isToday {\n // TODO: 4-7\n }\n}\n```\n\n`isToday` is a `Rate` method to check whether the stored data matches the current date:\n\n```swift\nextension Rate {\n var isToday: Bool {\n let today = Date().description.prefix(10)\n return date == today\n }\n}\n```\n\n#### #4-7 Fetch `rate` from API\u00a0\u2014 `rate` stale\n\nBy now, the code to fetch the data from the API should be familiar:\n\n```swift\nprivate func loadData() {\n if !rate.isToday {\n guard let query = rate.query else {\n print(\"Query data is missing\")\n return\n }\n guard let url = URL(string: \"https://api.exchangerate.host/convert?from=\\(query.from)&to=\\(query.to)\") else {\n print(\"Invalid URL\")\n return\n }\n let request = URLRequest(url: url)\n URLSession.shared.dataTask(with: request) { data, response, error in\n guard let data = data else {\n print(\"Error fetching data: \\(error?.localizedDescription ?? \"Unknown error\")\")\n return\n }\n if let decodedResponse = try? 
JSONDecoder().decode(Rate.self, from: data) {\n DispatchQueue.main.async {\n // TODO: #5.7\n }\n } else {\n print(\"No data received\")\n }\n }\n .resume()\n }\n}\n```\n\n#### #5-7 StoreInRealm(`rate`) \u2014 `rate` stale\n\n`loadData` copies the new `date` and exchange rate (`result`) to the stored Realm `Rate` object:\n\n```swift\nif let decodedResponse = try? JSONDecoder().decode(Rate.self, from: data) {\n DispatchQueue.main.async {\n $rate.date.wrappedValue = decodedResponse.date\n $rate.result.wrappedValue = decodedResponse.result\n }\n}\n```\n\n#### #8 View rendered with `rate`\n\n`CurrencyRowView` receives the raw exchange rate data, and the amount to convert. It\u2019s responsible for calculating and rendering the results:\n\nThe number shown in this view is part of a `TextField`, which the user can overwrite:\n\n```swift\n@Binding var baseValue: Double\n...\nTextField(\"Amount\", text: $amount)\n .keyboardType(.decimalPad)\n .onChange(of: amount, perform: updateValue)\n .font(.largeTitle)\n```\n\nWhen the user overwrites the number, the `onChange` function is called which recalculates `baseValue` (the value of the base currency that the user wants to convert):\n\n```swift\nprivate func updateValue(newAmount: String) {\n guard let newValue = Double(newAmount) else {\n print(\"\\(newAmount) cannot be converted to a Double\")\n return\n }\n baseValue = newValue / rate\n}\n```\n\nAs `baseValue` was passed in as a binding, the new value percolates up the view hierarchy, and all of the currency values are updated. As the exchange rates are held in Realm, all of the currency values are recalculated without needing to use the API:\n\n## Conclusion\n\nREST APIs let your mobile apps act on a vast variety of cloud data. The downside is that APIs can't help you when you don't have access to the internet. They can also make your app seem sluggish, and your users may get frustrated when they have to wait for data to be downloaded.\n\nA common solution is to use Realm to cache data from the API so that it's always available and can be accessed locally in an instant.\n\nThis article has shown you a typical data lifecycle that you can reuse in your own apps. You've also seen how easy it is to store the JSON results from an API call in your Realm database:\n\n```swift\nif let decodedResponse = try? JSONDecoder().decode(Rate.self, from: data) {\n DispatchQueue.main.async {\n $rates.append(decodedResponse)\n }\n}\n```\n\nWe've focussed on using a read-only API. Things get complicated very quickly when your app starts modifying data through the API. What should your app do when your device is offline?\n\n- Don't allow users to do anything that requires an update?\n- Allow local updates and maintain a list of changes that you iterate through when back online?\n - Will some changes you accept from the user have to be backed out once back online and you discover conflicting changes from other users?\n\nIf you need to modify data that's accessed by other users or devices, consider MongoDB Realm Sync as an alternative to accessing APIs directly from your app. It will save you thousands of lines of tricky code!\n\nThe API you're using may throttle access or charge per request. You can create a backend MongoDB Realm app to fetch the data from the API just once, and then use Realm Sync to handle the fan-out to all instances of your mobile app.\n\nIf you have any questions or comments on this post (or anything else Realm-related), then please raise them on our community forum. 
To keep up with the latest Realm news, follow @realm on Twitter and join the Realm global community.\n", "format": "md", "metadata": {"tags": ["Swift", "Realm", "iOS", "Mobile"], "pageDescription": "Learn how to make your mobile app always-on, even when you can't connect to your API.", "contentType": "Code Example"}, "title": "Build Offline-First Mobile Apps by Caching API Results in Realm", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/getting-started-atlas-mongodb-query-language-mql", "action": "created", "body": "# Getting Started with Atlas and the MongoDB Query API\n\n> MQL is now MongoDB Query API! Learn more about this flexible, intuitive way to work with your data.\n\nDepending where you are in your development career or the technologies\nyou've already become familiar with, MongoDB can seem quite\nintimidating. Maybe you're coming from years of experience with\nrelational database management systems (RDBMS), or maybe you're new to\nthe topic of data persistance in general.\n\nThe good news is that MongoDB isn't as scary as you might think, and it\nis definitely a lot easier when paired with the correct tooling.\n\nIn this tutorial, we're going to see how to get started with MongoDB\nAtlas for hosting our database\ncluster and the MongoDB Query Language (MQL) for interacting with our\ndata. We won't be exploring any particular programming technology, but\neverything we see can be easily translated over.\n\n## Hosting MongoDB Clusters in the Cloud with MongoDB Atlas\n\nThere are a few ways to get started with MongoDB. You could install a\nsingle instance or a cluster of instances on your own hardware which you\nmanage yourself in terms of updates, scaling, and security, or you can\nmake use of MongoDB Atlas which is a database as a service (DBaaS) that\nmakes life quite a bit easier, and in many cases cheaper, or even free.\n\nWe're going to be working with an M0 sized Atlas cluster, which is part\nof the free tier that MongoDB offers. There's no expiration to this\ncluster and there's no credit card required in order to deploy it.\n\n### Deploying a Cluster of MongoDB Instances\n\nBefore we can use MongoDB in our applications, we need to deploy a\ncluster. Create a MongoDB Cloud account and\ninto it.\n\nChoose to **Create a New Cluster** if not immediately presented with the\noption, and start selecting the features of your cluster.\n\nYou'll be able to choose between AWS, Google Cloud, and Azure for\nhosting your cluster. It's important to note that these cloud providers\nare for location only. You won't ever have to sign into the cloud\nprovider or manage MongoDB through them. The location is important for\nlatency reasons in case you have your applications hosted on a\nparticular cloud provider.\n\nIf you want to take advantage of a free cluster, make sure to choose M0\nfor the cluster size.\n\nIt may take a few minutes to finish creating your cluster.\n\n### Defining Network Access Rules for the NoSQL Database Cluster\n\nWith the cluster created, you won't be able to access it from outside of\nthe web dashboard by default. 
This is a good thing because you don't\nwant random people on the internet attempting to gain unauthorized\naccess to your cluster.\n\nTo be able to access your cluster from the CLI, a web application, or\nVisual Studio Code, which we'll be using later, you'll need to setup a\nnetwork rule that allows access from a particular IP address.\n\nYou have a few options when it comes to adding an IP address to the\nallow list. You could add your current IP address which would be useful\nfor accessing from your local network. You could provide a specific IP\naddress which is useful for applications you host in the cloud\nsomewhere. You can also supply **0.0.0.0/0** which would allow full\nnetwork access to anyone, anywhere.\n\nI'd strongly recommend not adding **0.0.0.0/0** as a network rule to\nkeep your cluster safe.\n\nWith IP addresses on the allow list, the final step is to create an\napplication user.\n\n### Creating Role-Based Access Accounts to Interact with Databases in the Cluster\n\nIt is a good idea to create role-based access accounts to your MongoDB\nAtlas cluster. This means instead of creating one super user like the\nadministrator account, you're creating a user account based on what the\nuser should be doing.\n\nFor example, maybe we create a user that has access to your accounting\ndatabases and another user that has access to your employee database.\n\nWithin Atlas, choose the **Database Access** tab and click **Add New\nDatabase User** to add a new user.\n\nWhile you can give a user access to every database, current and future,\nit is best if you create users that have more refined permissions.\n\nIt's up to you how you want to create your users, but the more specific\nthe permissions, the less likely your cluster will become compromised by\nmalicious activity.\n\nNeed some more guidance around creating an Atlas cluster? Check out\nthis\ntutorial\nby Maxime Beugnet on the subject.\n\nWith the cluster deployed, the network rules in place for your IP\naddress, and a user created, we can focus on some of the basics behind\nthe MongoDB Query Language (MQL).\n\n## Querying Database Collections with the MongoDB Query Language (MQL)\n\nTo get the most out of MongoDB, you're going to need to become familiar\nwith the MongoDB Query Language (MQL). No, it is not like SQL if you're\nfamiliar with relational database management systems (RDBMS), but it\nisn't any more difficult. MQL can be used from the CLI, Visual Studio\nCode, the development drivers, and more. You'll get the same experience\nno matter where you're trying to write your queries.\n\nIn this section, we're going to focus on Visual Studio Code and the\nMongoDB\nPlayground\nextension for managing our data. We're doing this because Visual Studio\nCode is common developer tooling and it makes for an easy to use\nexperience.\n\n### Configuring Visual Studio Code for the MongoDB Playground\n\nWhile we could write our queries out of the box with Visual Studio Code,\nwe won't be able to interact with MongoDB in a meaningful way until we\ninstall the MongoDB\nPlayground\nextension.\n\nWithin Visual Studio Code, bring up the extensions explorer and search\nfor **MongoDB**.\n\nInstall the official extension with MongoDB as the publisher.\n\nWith the extension installed, we'll need to interact with it from within\nVisual Studio Code. 
There are a few ways to do this, but we're going to\nuse the command palette.\n\nOpen the command pallette (cmd + shift + p, if you're on macOS), and\nenter **MongoDB: Connect** into the input box.\n\nYou'll be able to enter the information for your particular MongoDB\ncluster. Once connected, we can proceed to creating a new Playground. If\nyou've already saved your information into the Visual Studio Code\nextension and need to connect later, you can always enter **Show\nMongoDB** in the command pallette and connect.\n\nAssuming we're connected, enter **Create MongoDB Playground** in the\ncommand pallette to create a new file with boilerplate MQL.\n\n### Defining a Data Model and a Use Case for MongoDB\n\nRather than just creating random queries that may or may not be helpful\nor any different from what you'd find the documentation, we're going to\ncome up with a data model to work with and then interact with that data\nmodel.\n\nI'm passionate about gaming, so our example will be centered around some\ngame data that might look like this:\n\n``` json\n{\n \"_id\": \"nraboy\",\n \"name\": \"Nic Raboy\",\n \"stats\": {\n \"wins\": 5,\n \"losses\": 10,\n \"xp\": 300\n },\n \"achievements\": \n { \"name\": \"Massive XP\", \"timestamp\": 1598961600000 },\n { \"name\": \"Instant Loss\", \"timestamp\": 1598896800000 }\n ]\n}\n```\n\nThe above document is just one of an endless possibility of data models\nfor a document in any given collection. To make the example more\nexciting, the above document has a nested object and a nested array of\nobjects, something that demonstrates the power of JSON, but without\nsacrificing how easy it is to work with in MongoDB.\n\nThe document above is often referred to as a user profile document in\ngame development. You can learn more about user profile stores in game\ndevelopment through a [previous Twitch\nstream on the subject.\n\nAs of right now, it's alright if your cluster has no databases,\ncollections, or even documents that look like the above document. We're\ngoing to get to that next.\n\n### Create, Read, Update, and Delete (CRUD) Documents in a Collections\n\nWhen working with MongoDB, you're going to get quite familiar with the\ncreate, read, update, and delete (CRUD) operations necessary when\nworking with data. To reiterate, we'll be using Visual Studio Code to do\nall this, but any CRUD operation you do in Visual Studio Code, can be\ntaken into your application code, scripts, and similar.\n\nEarlier you were supposed to create a new MongoDB Playground in Visual\nStudio Code. Open it, remove all the boilerplate MQL, and add the\nfollowing:\n\n``` javascript\nuse(\"gamedev\");\n\ndb.profiles.insertOne({\n \"_id\": \"nraboy\",\n \"name\": \"Nic Raboy\",\n \"stats\": {\n \"wins\": 5,\n \"losses\": 10,\n \"xp\": 300\n },\n \"achievements\": \n { \"name\": \"Massive XP\", \"timestamp\": 1598961600000 },\n { \"name\": \"Instant Loss\", \"timestamp\": 1598896800000 }\n ]\n});\n```\n\nIn the above code we are declaring that we want to use a **gamedev**\ndatabase in our queries that follow. It's alright if such a database\ndoesn't already exist because it will be created at runtime.\n\nNext we're using the `insertOne` operation in MongoDB to create a single\ndocument. The `db` object references the **gamedev** database that we've\nchosen to use. 
The **profiles** object references a collection that we\nwant to insert our document into.\n\nThe **profiles** collection does not need to exist prior to inserting\nour first document.\n\nIt does not matter what we choose to call our database as well as our\ncollection. As long as the name makes sense to you and the use-case that\nyou're trying to fulfill.\n\nWithin Visual Studio Code, you can highlight the above MQL and choose\n**Run Selected Lines From Playground** or use the command pallette to\nrun the entire playground. After running the MQL, check out your MongoDB\nAtlas cluster and you should see the database, collection, and document\ncreated.\n\nMore information on the `insert` function can be found in the [official\ndocumentation.\n\nIf you'd rather verify the document was created without actually\nnavigating through MongoDB Atlas, we can move onto the next stage of the\nCRUD operation journey.\n\nWithin the playground, add the following:\n\n``` javascript\nuse(\"gamedev\");\n\ndb.profiles.find({});\n```\n\nThe above `find` operation will return all documents in the **profiles**\ncollection. If you wanted to narrow the result-set, you could provide\nfilter criteria instead of providing an empty object. For example, try\nexecuting the following instead:\n\n``` javascript\nuse(\"gamedev\");\n\ndb.profiles.find({ \"name\": \"Nic Raboy\" });\n```\n\nThe above `find` operation will only return documents where the `name`\nfield matches exactly `Nic Raboy`. We can do better though. What about\nfinding documents that sit within a certain range for certain fields.\n\nTake the following for example:\n\n``` javascript\nuse(\"gamedev\");\n\ndb.profiles.find(\n { \n \"stats.wins\": { \n \"$gt\": 6 \n }, \n \"stats.losses\": { \n \"$lt\": 11 \n }\n }\n);\n```\n\nThe above `find` operation says that we only want documents that have\nmore than six wins and less than eleven losses. If we were running the\nabove query with the current dataset shown earlier, no results would be\nreturned because nothing satisfies the conditions.\n\nYou can learn more about the filter operators that can be used in the\nofficial\ndocumentation.\n\nSo we've got at least one document in our collection and have seen the\n`insertOne` and `find` operators. Now we need to take a look at the\nupdate and delete parts of CRUD.\n\nLet's say that we finished a game and the `stats.wins` field needs to be\nupdated. We could do something like this:\n\n``` javascript\nuse(\"gamedev\")\n\ndb.profiles.update(\n { \"_id\": \"nraboy\" },\n { \"$inc\": { \"stats.wins\": 1 } }\n);\n```\n\nThe first object in the above `update` operation is the filter. This is\nthe same filter that can be used in a `find` operation. Once we've\nfiltered for documents to update, the second object is the mutation. In\nthe above example, we're using the `$inc` operator to increase the\n`stats.wins` field by a value of one.\n\nThere are quite a few operators that can be used when updating\ndocuments. You can find more information in the official\ndocumentation.\n\nMaybe we don't want to use an operator when updating the document. Maybe\nwe want to change a field or add a field that might not exist. We can do\nsomething like the following:\n\n``` javascript\nuse(\"gamedev\")\n\ndb.profiles.update(\n { \"_id\": \"nraboy\" },\n { \"name\": \"Nicolas Raboy\" }\n);\n```\n\nThe above query will filter for documents with an `_id` of `nraboy`, and\nthen update the `name` field on those documents to be a particular\nstring, in this case \"Nicolas Raboy\". 
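One caveat worth calling out: when the second argument is a plain document like this, `update` replaces the whole matched document (apart from `_id`) rather than merging the new field in. If the intent is to change only `name` and leave `stats` and `achievements` untouched, the `$set` operator is the usual tool. A quick sketch using the same filter:

``` javascript
use("gamedev")

db.profiles.update(
    { "_id": "nraboy" },
    { "$set": { "name": "Nicolas Raboy" } }
);
```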
If the `name` field doesn't exist,\nit will be created and set.\n\nGot a document you want to remove? Let's look at the final part of the\nCRUD operators.\n\nAdd the following to your playground:\n\n``` javascript\nuse(\"gamedev\")\n\ndb.profiles.remove({ \"_id\": \"nraboy\" })\n```\n\nThe above `remove` operation uses a filter, just like what we saw with\nthe `find` and `update` operations. We provide it a filter of documents\nto find and in this circumstance, any matches will be removed from the\n**profiles** collection.\n\nTo learn more about the `remove` function, check out the official\ndocumentation.\n\n### Complex Queries with the MongoDB Data Aggregation Pipeline\n\nFor a lot of applications, you might only need to ever use basic CRUD\noperations when working with MongoDB. However, when you need to start\nanalyzing your data or manipulating your data for the sake of reporting,\nrunning a bunch of CRUD operations might not be your best bet.\n\nThis is where a MongoDB data aggregation pipeline might come into use.\n\nTo get an idea of what a data aggregation pipeline is, think of it as a\nseries of data stages that must complete before you have your data.\n\nLet's use a better example. Let's say that you want to look at your\n**profiles** collection and determine all the players who received a\ncertain achievement after a certain date. However, you only want to know\nthe specific achievement and basic information about the player. You\ndon't want to know generic information that matched your query.\n\nTake a look at the following:\n\n``` javascript\nuse(\"gamedev\")\n\ndb.profiles.aggregate(\n { \"$match\": { \"_id\": \"nraboy\" } },\n { \"$unwind\": \"$achievements\" },\n { \n \"$match\": { \n \"achievements.timestamp\": {\n \"$gt\": new Date().getTime() - (1000 * 60 * 60 * 24 * 1)\n }\n }\n },\n { \"$project\": { \"_id\": 1, \"achievements\": 1 }}\n]);\n```\n\nThere are four stages in the above pipeline. First we're doing a\n`$match` to find all documents that match our filter. Those documents\nare pushed to the next stage of the pipeline. Rather than looking at and\ntrying to work with the `achievements` field which is an array, we are\nchoosing to `$unwind` it.\n\nTo get a better idea of what this looks like, at the end of the second\nstage, any data that was found would look like this:\n\n``` json\n[\n {\n \"_id\": \"nraboy\",\n \"name\": \"Nic Raboy\",\n \"stats\": {\n \"wins\": 5,\n \"losses\": 10,\n \"xp\": 300\n },\n \"achievements\": {\n \"name\": \"Massive XP\",\n \"timestamp\": 1598961600000\n }\n },\n {\n \"_id\": \"nraboy\",\n \"name\": \"Nic Raboy\",\n \"stats\": {\n \"wins\": 5,\n \"losses\": 10,\n \"xp\": 300\n },\n \"achievements\": {\n \"name\": \"Instant Loss\",\n \"timestamp\": 1598896800000\n }\n }\n]\n```\n\nNotice in the above JSON response that we are no longer working with an\narray. We should have only matched on a single document, but the results\nare actually two instead of one. That is because the `$unwind` split the\narray into numerous objects.\n\nSo we've flattened the array, now we're onto the third stage of the\npipeline. We want to match any object in the result that has an\nachievement timestamp greater than a specific time. The plan here is to\nreduce the result-set of our flattened documents.\n\nThe final stage of our pipeline is to output only the fields that we're\ninterested in. 
With the `$project` we are saying we only want the `_id`\nfield and the `achievements` field.\n\nOur final output for this aggregation might look like this:\n\n``` json\n[\n {\n \"_id\": \"nraboy\",\n \"achievements\": {\n \"name\": \"Instant Loss\",\n \"timestamp\": 1598896800000\n }\n }\n]\n```\n\nThere are quite a few operators when it comes to the data aggregation\npipeline, many of which can do far more extravagant things than the four\npipeline stages that were used for this example. You can learn about the\nother operators in the [official\ndocumentation.\n\n## Conclusion\n\nYou just got a taste of what you can do with MongoDB Atlas and the\nMongoDB Query Language (MQL). While the point of this tutorial was to\nget you comfortable with deploying a cluster and interacting with your\ndata, you can extend your knowledge and this example by exploring the\nprogramming drivers.\n\nTake the following quick starts for example:\n\n- Quick Start:\n Golang\n- Quick Start:\n Node.js\n- Quick Start:\n Java\n- Quick Start:\n C#\n\nIn addition to the quick starts, you can also check out the MongoDB\nUniversity course,\nM121, which focuses\non data aggregation.\n\nAs previously mentioned, you can take the same queries between languages\nwith minimal to no changes between them.\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to get started with MongoDB Atlas and the MongoDB Query API.", "contentType": "Quickstart"}, "title": "Getting Started with Atlas and the MongoDB Query API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/polymorphic-document-validation", "action": "created", "body": "# Document Validation for Polymorphic Collections\n\nIn data modeling design reviews with customers, I often propose a schema where different documents in the same collection contain different types of data. This makes it efficient to fetch related documents in a single, indexed query. MongoDB's flexible schema is great for optimizing workloads in this way, but people can be concerned about losing control of what applications write to these collections.\n\nCustomers are often concerned about ensuring that only correctly formatted documents make it into a collection, and so I explain MongoDB's schema validation feature. The question then comes: \"How does that work with a polymorphic/single-collection schema?\" This post is intended to answer that question \u2014 and it's simpler than you might think.\n\n## The banking application and its data\n\nThe application I'm working on manages customer and account details. There's a many-to-many relationship between customers and accounts. 
The app needs to be able to efficiently query customer data based on the customer id, and account data based on either the id of its customer or the account id.\n\nHere's an example of customer and account documents where my wife and I share a checking account but each have our own savings account:\n\n```json\n{\n \"_id\": \"kjfgjebgjfbkjb\",\n \"customerId\": \"CUST-123456789\",\n \"docType\": \"customer\",\n \"name\": {\n \"title\": \"Mr\",\n \"first\": \"Andrew\",\n \"middle\": \"James\",\n \"last\": \"Morgan\"\n },\n \"address\": {\n \"street1\": \"240 Blackfriars Rd\",\n \"city\": \"London\",\n \"postCode\": \"SE1 8NW\",\n \"country\": \"UK\"\n },\n \"customerSince\": ISODate(\"2005-05-20\")\n}\n\n{\n \"_id\": \"jnafjkkbEFejfleLJ\",\n \"customerId\": \"CUST-987654321\",\n \"docType\": \"customer\",\n \"name\": {\n \"title\": \"Mrs\",\n \"first\": \"Anne\",\n \"last\": \"Morgan\"\n },\n \"address\": {\n \"street1\": \"240 Blackfriars Rd\",\n \"city\": \"London\",\n \"postCode\": \"SE1 8NW\",\n \"country\": \"UK\"\n },\n \"customerSince\": ISODate(\"2003-12-01\")\n}\n\n{\n \"_id\": \"dksfmkpGJPowefjdfhs\",\n \"accountNumber\": \"ACC1000000654\",\n \"docType\": \"account\",\n \"accountType\": \"checking\",\n \"customerId\": \n \"CUST-123456789\",\n \"CUST-987654321\"\n ],\n \"dateOpened\": ISODate(\"2003-12-01\"),\n \"balance\": NumberDecimal(\"5067.65\")\n}\n\n{\n \"_id\": \"kliwiiejeqydioepwj\",\n \"accountNumber\": \"ACC1000000432\",\n \"docType\": \"account\",\n \"accountType\": \"savings\",\n \"customerId\": [\n \"CUST-123456789\"\n ],\n \"dateOpened\": ISODate(\"2005-10-28\"),\n \"balance\": NumberDecimal(\"10341.21\")\n}\n\n{\n \"_id\": \"djahspihhfheiphfipewe\",\n \"accountNumber\": \"ACC1000000890\",\n \"docType\": \"account\",\n \"accountType\": \"savings\",\n \"customerId\": [\n \"CUST-987654321\"\n ],\n \"dateOpened\": ISODate(\"2003-12-15\"),\n \"balance\": NumberDecimal(\"10341.89\")\n}\n```\n\nAs an aside, these are the indexes I added to make those frequent queries I referred to more efficient:\n\n```javascript\nconst indexKeys1 = { accountNumber: 1 };\nconst indexKeys2 = { customerId: 1, accountType: 1 };\nconst indexOptions1 = { partialFilterExpression: { docType: 'account' }};\nconst indexOptions2 = { partialFilterExpression: { docType: 'customer' }};\n\ndb.getCollection(collection).createIndex(indexKeys1, indexOptions1);\ndb.getCollection(collection).createIndex(indexKeys2, indexOptions2);\n```\n\n## Adding schema validation\n\nTo quote [the docs\u2026\n\n> Schema validation lets you create validation rules for your fields, such as allowed data types and value ranges.\n>\n> MongoDB uses a flexible schema model, which means that documents in a collection do not need to have the same fields or data types by default. Once you've established an application schema, you can use schema validation to ensure there are no unintended schema changes or improper data types.\n\nThe validation rules are pretty simple to set up, and tools like Hackolade can make it simpler still \u2014 even reverse-engineering your existing documents.\n\nIt's simple to imagine setting up a JSON schema validation rule for a collection where all documents share the same attributes and types. But what about polymorphic collections? Even in polymorphic collections, there is structure to the documents. 
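For comparison, the single-shape case is just one `$jsonSchema` attached to the collection. A minimal sketch, where the collection and field names are purely illustrative and not part of the banking example:

```javascript
db.createCollection("Customers", {
  validator: {
    $jsonSchema: {
      required: ["customerId", "name"],
      properties: {
        customerId: { bsonType: "string" },
        name: { bsonType: "string" }
      }
    }
  }
});
```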
Fortunately, the syntax for setting up the validation rules allows for the required optionality.\n\nI have two different types of documents that I want to store in my `Accounts` collection \u2014 `customer` and `account`. I included a `docType` attribute in each document to identify which type of entity it represents.\n\nI start by creating a JSON schema definition for each type of document:\n\n```javascript\nconst customerSchema = {\n required: \"docType\", \"customerId\", \"name\", \"customerSince\"],\n properties: {\n docType: { enum: [\"customer\"] },\n customerId: { bsonType: \"string\"},\n name: {\n bsonType: \"object\",\n required: [\"first\", \"last\"],\n properties: {\n title: { enum: [\"Mr\", \"Mrs\", \"Ms\", \"Dr\"]},\n first: { bsonType: \"string\" },\n middle: { bsonType: \"string\" },\n last: { bsonType: \"string\" }\n }\n },\n address: {\n bsonType: \"object\",\n required: [\"street1\", \"city\", \"postCode\", \"country\"],\n properties: {\n street1: { bsonType: \"string\" },\n street2: { bsonType: \"string\" },\n postCode: { bsonType: \"string\" },\n country: { bsonType: \"string\" } \n }\n },\n customerSince: {\n bsonType: \"date\"\n }\n }\n};\n\nconst accountSchema = {\n required: [\"docType\", \"accountNumber\", \"accountType\", \"customerId\", \"dateOpened\", \"balance\"],\n properties: {\n docType: { enum: [\"account\"] },\n accountNumber: { bsonType: \"string\" },\n accountType: { enum: [\"checking\", \"savings\", \"mortgage\", \"loan\"] },\n customerId: { bsonType: \"array\" },\n dateOpened: { bsonType: \"date\" },\n balance: { bsonType: \"decimal\" }\n }\n};\n```\n\nThose definitions define what attributes should be in the document and what types they should take. Note that fields can be optional \u2014 such as `name.middle` in the `customer` schema.\n\nIt's then a simple matter of using the `oneOf` JSON schema operator to allow documents that match either of the two schema:\n\n```javascript\nconst schemaValidation = {\n $jsonSchema: { oneOf: [ customerSchema, accountSchema ] }\n};\n\ndb.createCollection(collection, {validator: schemaValidation});\n```\n\nI wanted to go a stage further and add some extra, semantic validations:\n\n* For `customer` documents, the `customerSince` value can't be any earlier than the current time.\n* For `account` documents, the `dateOpened` value can't be any earlier than the current time.\n* For savings accounts, the `balance` can't fall below zero.\n\nThese documents represents these checks:\n\n```javascript\nconst badCustomer = {\n \"$expr\": { \"$gt\": [\"$customerSince\", \"$$NOW\"] }\n};\n\nconst badAccount = {\n $or: [ \n {\n accountType: \"savings\",\n balance: { $lt: 0}\n },\n {\n \"$expr\": { \"$gt\": [\"$dateOpened\", \"$$NOW\"]}\n }\n ]\n};\n\nconst schemaValidation = {\n \"$and\": [\n { $jsonSchema: { oneOf: [ customerSchema, accountSchema ] }},\n { $nor: [\n badCustomer,\n badAccount\n ]\n }\n ]\n};\n```\n\nI updated the collection validation rules to include these new checks:\n\n```javascript\nconst schemaValidation = {\n \"$and\": [\n { $jsonSchema: { oneOf: [ customerSchema, accountSchema ] }},\n { $nor: [\n badCustomer,\n badAccount\n ]\n }\n ]\n};\n\ndb.createCollection(collection, {validator: schemaValidation} );\n```\n\nIf you want to recreate this in your own MongoDB database, then just paste this into your [MongoDB playground in VS Code:\n\n```javascript\nconst cust1 = {\n \"_id\": \"kjfgjebgjfbkjb\",\n \"customerId\": \"CUST-123456789\",\n \"docType\": \"customer\",\n \"name\": {\n \"title\": \"Mr\",\n \"first\": 
\"Andrew\",\n \"middle\": \"James\",\n \"last\": \"Morgan\"\n },\n \"address\": {\n \"street1\": \"240 Blackfriars Rd\",\n \"city\": \"London\",\n \"postCode\": \"SE1 8NW\",\n \"country\": \"UK\"\n },\n \"customerSince\": ISODate(\"2005-05-20\")\n}\n\nconst cust2 = {\n \"_id\": \"jnafjkkbEFejfleLJ\",\n \"customerId\": \"CUST-987654321\",\n \"docType\": \"customer\",\n \"name\": {\n \"title\": \"Mrs\",\n \"first\": \"Anne\",\n \"last\": \"Morgan\"\n },\n \"address\": {\n \"street1\": \"240 Blackfriars Rd\",\n \"city\": \"London\",\n \"postCode\": \"SE1 8NW\",\n \"country\": \"UK\"\n },\n \"customerSince\": ISODate(\"2003-12-01\")\n}\n\nconst futureCustomer = {\n \"_id\": \"nansfanjnDjknje\",\n \"customerId\": \"CUST-666666666\",\n \"docType\": \"customer\",\n \"name\": {\n \"title\": \"Mr\",\n \"first\": \"Wrong\",\n \"last\": \"Un\"\n },\n \"address\": {\n \"street1\": \"240 Blackfriars Rd\",\n \"city\": \"London\",\n \"postCode\": \"SE1 8NW\",\n \"country\": \"UK\"\n },\n \"customerSince\": ISODate(\"2025-05-20\")\n}\n\nconst acc1 = {\n \"_id\": \"dksfmkpGJPowefjdfhs\",\n \"accountNumber\": \"ACC1000000654\",\n \"docType\": \"account\",\n \"accountType\": \"checking\",\n \"customerId\": \n \"CUST-123456789\",\n \"CUST-987654321\"\n ],\n \"dateOpened\": ISODate(\"2003-12-01\"),\n \"balance\": NumberDecimal(\"5067.65\")\n}\n\nconst acc2 = {\n \"_id\": \"kliwiiejeqydioepwj\",\n \"accountNumber\": \"ACC1000000432\",\n \"docType\": \"account\",\n \"accountType\": \"savings\",\n \"customerId\": [\n \"CUST-123456789\"\n ],\n \"dateOpened\": ISODate(\"2005-10-28\"),\n \"balance\": NumberDecimal(\"10341.21\")\n}\n\nconst acc3 = {\n \"_id\": \"djahspihhfheiphfipewe\",\n \"accountNumber\": \"ACC1000000890\",\n \"docType\": \"account\",\n \"accountType\": \"savings\",\n \"customerId\": [\n \"CUST-987654321\"\n ],\n \"dateOpened\": ISODate(\"2003-12-15\"),\n \"balance\": NumberDecimal(\"10341.89\")\n}\n\nconst futureAccount = {\n \"_id\": \"kljkdfgjkdsgjklgjdfgkl\",\n \"accountNumber\": \"ACC1000000999\",\n \"docType\": \"account\",\n \"accountType\": \"savings\",\n \"customerId\": [\n \"CUST-987654333\"\n ],\n \"dateOpened\": ISODate(\"2030-12-15\"),\n \"balance\": NumberDecimal(\"10341.89\")\n}\n\nconst negativeSavings = {\n \"_id\": \"shkjahsjdkhHK\",\n \"accountNumber\": \"ACC1000000666\",\n \"docType\": \"account\",\n \"accountType\": \"savings\",\n \"customerId\": [\n \"CUST-9837462376\"\n ],\n \"dateOpened\": ISODate(\"2005-10-28\"),\n \"balance\": NumberDecimal(\"-10341.21\")\n}\n\nconst indexKeys1 = { accountNumber: 1 }\nconst indexKeys2 = { customerId: 1, accountType: 1 } \nconst indexOptions1 = { partialFilterExpression: { docType: 'account' }}\nconst indexOptions2 = { partialFilterExpression: { docType: 'customer' }}\n\nconst customerSchema = {\n required: [\"docType\", \"customerId\", \"name\", \"customerSince\"],\n properties: {\n docType: { enum: [\"customer\"] },\n customerId: { bsonType: \"string\"},\n name: {\n bsonType: \"object\",\n required: [\"first\", \"last\"],\n properties: {\n title: { enum: [\"Mr\", \"Mrs\", \"Ms\", \"Dr\"]},\n first: { bsonType: \"string\" },\n middle: { bsonType: \"string\" },\n last: { bsonType: \"string\" }\n }\n },\n address: {\n bsonType: \"object\",\n required: [\"street1\", \"city\", \"postCode\", \"country\"],\n properties: {\n street1: { bsonType: \"string\" },\n street2: { bsonType: \"string\" },\n postCode: { bsonType: \"string\" },\n country: { bsonType: \"string\" } \n }\n },\n customerSince: {\n bsonType: \"date\"\n }\n }\n}\n\nconst 
accountSchema = {\n required: [\"docType\", \"accountNumber\", \"accountType\", \"customerId\", \"dateOpened\", \"balance\"],\n properties: {\n docType: { enum: [\"account\"] },\n accountNumber: { bsonType: \"string\" },\n accountType: { enum: [\"checking\", \"savings\", \"mortgage\", \"loan\"] },\n customerId: { bsonType: \"array\" },\n dateOpened: { bsonType: \"date\" },\n balance: { bsonType: \"decimal\" }\n }\n}\n\nconst badCustomer = {\n \"$expr\": { \"$gt\": [\"$customerSince\", \"$$NOW\"] }\n}\n\nconst badAccount = {\n $or: [ \n {\n accountType: \"savings\",\n balance: { $lt: 0}\n },\n {\n \"$expr\": { \"$gt\": [\"$dateOpened\", \"$$NOW\"]}\n }\n ]\n}\n\nconst schemaValidation = {\n \"$and\": [\n { $jsonSchema: { oneOf: [ customerSchema, accountSchema ] }},\n { $nor: [\n badCustomer,\n badAccount\n ]\n }\n ]\n}\n\nconst database = 'MongoBank';\nconst collection = 'Accounts';\n\nuse(database);\ndb.getCollection(collection).drop();\ndb.createCollection(collection, {validator: schemaValidation} )\ndb.getCollection(collection).replaceOne({\"_id\": cust1._id}, cust1, {upsert: true});\ndb.getCollection(collection).replaceOne({\"_id\": cust2._id}, cust2, {upsert: true});\ndb.getCollection(collection).replaceOne({\"_id\": acc1._id}, acc1, {upsert: true});\ndb.getCollection(collection).replaceOne({\"_id\": acc2._id}, acc2, {upsert: true});\ndb.getCollection(collection).replaceOne({\"_id\": acc3._id}, acc3, {upsert: true});\n\n// The following 3 operations should fail\n\ndb.getCollection(collection).replaceOne({\"_id\": negativeSavings._id}, negativeSavings, {upsert: true});\ndb.getCollection(collection).replaceOne({\"_id\": futureCustomer._id}, futureCustomer, {upsert: true});\ndb.getCollection(collection).replaceOne({\"_id\": futureAccount._id}, futureAccount, {upsert: true});\n\ndb.getCollection(collection).dropIndexes();\ndb.getCollection(collection).createIndex(indexKeys1, indexOptions1);\ndb.getCollection(collection).createIndex(indexKeys2, indexOptions2);\n```\n\n## Conclusion\n\nI hope that this short article has shown how easy it is to use schema validations with MongoDB's polymorphic collections and single-collection design pattern.\n\nI didn't go into much detail about why I chose the data model used in this example. If you want to know more (and you should!), then here are some great resources on data modeling with MongoDB:\n\n* Daniel Coupal and Ken Alger\u2019s excellent series of blog posts on [MongoDB schema patterns\n* Daniel Coupal and Lauren Schaefer\u2019s equally excellent series of blog posts on MongoDB anti-patterns\n* MongoDB University Course, M320 - MongoDB Data Modeling", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "A great feature of MongoDB is its flexible document model. But what happens when you want to combine that with controls on the content of the documents in a collection? This post shows how to use document validation on polymorphic collections.", "contentType": "Article"}, "title": "Document Validation for Polymorphic Collections", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-meetup-swiftui-testing-and-realm-with-projections", "action": "created", "body": "# Realm Meetup - SwiftUI Testing and Realm With Projections\n\nDidn't get a chance to attend the SwiftUI Testing and Realm with Projections Meetup? 
Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up.\n\n:youtube]{vid=fxar75-7ZbQ}\n\nIn this meetup, Jason Flax, Lead iOS Engineer, makes a return to explain how the testing landscape has changed for iOS apps using the new SwiftUI framework. Learn how to write unit tests with SwiftUI apps powered by Realm, where to put your business logic with either ViewModels or in an app following powered by Model-View-Intent, and witness the power of Realm's new Projection feature. \n\nIn this 50-minute recording, Jason spends about 40 minutes presenting \n\n- Testing Overview for iOS Apps\n\n- What's Changed in Testing from UIKit to SwiftUI\n\n- Unit Tests for Business Logic - ViewModels or MVI?\n\n- Realm Projections - Live Realm Objects that Power your View\n\nAfter this, we have about 10 minutes of live Q&A with Ian & Jason and our community . For those of you who prefer to read, below we have a full transcript of the meetup too. \n\nThroughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our [Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.\n\nTo learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.\n\n### Transcript\n(*As this is verbatim, please excuse any typos or punctuation errors!*)\n\n**Ian:**\nWe\u2019re going to talk about our integration into SwiftUI, around the Swift integration into SwiftUI and how we're making that really tight and eliminating boilerplate for developers. We're also going to show off a little bit of a feature that we're thinking about called Realm projections. And so we would love your feedback on this new functionality. We have other user group conference meetings coming up in the next few weeks. So, we're going to be talking about Realm JavaScript for React Native applications next week, following that we're talking about how to integrate with the Realm cloud and AWS EventBridge and then later on at the end of this month, we will have two other engineers from the iOS team talk about key path filtering and auto open, which will be new functionality that we deliver as part of the Realm Swift SDK.\n\nWe also have MongoDB.live, this is taking place on July 13th and 14th. This is a free event and we have a whole track of talks that are dedicated to mobile and mobile development. So, you don't need to know anything about MongoDB as a server or anything like that. These talks will be purely focused on mobile development. So you can definitely join that and get some benefit if you're just a mobile developer. A little bit about housekeeping here. This is the Bevy platform. In a few slides, I'm going to turn it back over to Jason. Jason's going to run through a presentation. If you have any questions during the presentation, there's a little chat box in the window, so just put them in there. We have other team members that are part of the Realm team that can answer them for you. 
And then at the end, I'll run through them as well as part of a Q&A session that you can ask any questions.\n\nAlso, if you want to make this more interactive, we're happy to have you come on to the mic and do your camera and you can ask a question live as well. So, please get connected with us. You can join our forums.realm.io. Ask any questions that you might have. We also have our community get hubs where you can file an issue. Also, if you want to win some free swag, you can go on our Twitter and tweet about this event or upcoming events. We will be sending swag for users that tweet about us. And without further ado, I will stop sharing my screen and turn it over to Jason.\n\n**Jason:**\nHello, everyone. Hope everyone's doing well. I'm going to figure out how to share my screen with this contraption. Can people see my screen?\n\n**Ian:**\nI can see it.\n\n**Jason:**\nCool stuff. All right. Is the thing full screen? Ian?\n\n**Ian:**\nSorry. I was muted, but I was raising my finger.\n\n**Jason:**\nYeah, I seem to always do these presentations when I'm visiting the States. I normally live in Dublin and so I'm out of my childhood bedroom right now. So, I don't have all of the tools I would normally have. For those of you that do not know me, my name is Jason Flax. I'm the lead engineer on the Realm Cocoa team. I've been at MongoDB for about five years, five years actually in three days, which is crazy. And we've been working with Realm for about two years now since the acquisition. And we've been trying to figure out how to better integrate Realm into SwiftUI, into all the new stuff coming out so that it's easier for people to use. We came up with a feature not too long ago to better integrate with the actual life cycle of SwiftUI ideas.\n\nThat's the ObservedRealmObject out observed results at state Realm object, the property rappers that hook into the view make things easy. We gave a presentation on the architectures that we want to see people using what SwiftUI, ruffled some feathers by saying that everybody should rewrite 50,000 lines of code and change the architecture that we see fit with SwiftUI. But a lot of people are mainly asking about testing. How do you test with SwiftUI? There aren't really good standards and practices out there yet. It's two years old. And to be honest, parts of it still feel a bit preview ish. So, what we want to do today is basically to go over, why do we test in the first place? How should you be testing with Realm?\n\nHow should you be testing the SwiftUI and Realm? What does that look like in a real world scenario? And what's coming next for Realm to better help you out in the future? Today's agenda. Why bother testing? We've all worked in places where testing doesn't happen. I encourage everybody to test their code. How to test a UI application? We're talking about iOS, macOS, TBOS, watchOS, they would be our primary users here. So we are testing a UI application. Unit and integration testing, what those are, how they differ, how Realm fits in? Testing your business logic with Realm. We at least internally have a pretty good idea of where we see business logic existing relative to the database where that sits between classes and whatnot. And then finally a sneak peek for Projections and a Q&A.\n\nSo, Projections is a standalone feature. I'll talk more about it later, but it should greatly assist in this case based on how we've seen people using Realm in SwiftUI. And for the Q&A part, we really want to hear from everybody in the audience. 
Testing, I wouldn't call it a hotly contested subject but it's something that we sometimes sit a bit too far removed from not building applications every day. So, it's really important that we get your feedback so that we can build better features or provide better guidance on how to better integrate Realm into your entire application life cycle. So why bother testing? Structural integrity, minimize bugs, prevent regressions, improve code quality and creating self-documenting code, though that turned to be dangerous as I have seen it used before, do not write documentation at all. I won't spend too long on this slide.\n\nI can only assume that if you're here at the SwiftUI testing talk that you do enjoy the art of testing your code. But in general, it's going to create better code. I, myself and personal projects test my code. But I certainly didn't when I first started out as a software engineer, I was like, \"Ah, sure. That's a simple function there. It'll never break.\" Lo and behold, three months later, I have no idea what that function is. I have no idea what it did, I was supposed to do and now I have a broken thing that I have to figure out how to fix and I'll spend an extra week on it. It's not fun for anybody involved. So, gesture code. How to test a UI application. That is a UI application. So, unit tests, unit tests are going to be your most basic test.\n\nIt tests functions that test simple bodies of work. They're going to be your smallest test, but you're going to a lot of them. And I promise that I'll try to go quickly through the basics testing for those that are more seasoned here. But unit tests are basically input and output. If I give you this, I expect this back in return. I expect the state of this to look like this based on these parameters. Probably, some of our most important tests, they keep the general structure of the code sound. Integration tests, these are going to, depending on the context of how you're talking about it could be integrating with the backend, it could be integrating with the database. Today, we're going to focus on the latter. But these are the tests that actually makes sure that some of the like external moving parts are working as you'd expect them to work.\n\nAcceptance tests are kind of a looser version of integration tests. I won't really be going over them today. End-to-end tests, which can be considered what I was talking about earlier, hitting an actual backend. UI testing can be considered an end to end test if you want to have sort of a loose reasoning about it or UI testing that actually tests the back. And it basically does the whole system work? I have a method that sends this to the server and I get this back, does the thing I get back look correct and is the server state sound? And then smoke tests, these are your final tests where your manager comes to you at 8:00 PM on the day that you were supposed to ship the thing and he's like, \"Got to get it out there.\n\nDid you test it?\" And you're like, \"Oh, we smoke tested.\" It's the last few checks I suppose. And then performance testing, which is important in most applications, making sure that everything is running as it should, everything is running as quickly as it should. Nothing is slowing it down where it shouldn't. This can catch a lot of bugs in code. XC test provides some really simple mechanisms for performance testing that we use as well. 
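The slides themselves aren't reproduced in this transcript; the XCTest mechanism being referred to is the `measure` block, which runs a closure several times so Xcode can compare the timings against a stored baseline. A minimal sketch with an arbitrary workload:

```swift
import XCTest

class SortPerformanceTests: XCTestCase {
    func testSortPerformance() {
        let input = (0..<100_000).map { _ in Int.random(in: 0...1_000_000) }
        // measure runs this block multiple times and reports the average,
        // which can be compared against a baseline to catch regressions.
        measure {
            _ = input.sorted()
        }
    }
}
```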
It'd be fairly common as well, at least for libraries to have regression testing with performance testing, to make sure that code you introduced didn't slow things down 100X because that wouldn't be fun for anyone involved. So, let's start with unit tests. Again, unit tests focus on the smallest possible unit of code, typically a function or class, they're fast to run and you'll usually have a lot of them.\n\nSo, the example we're going to be going over today is a really simple application, a library application. There's going to be a library. There's going to be users. Users can borrow books and use just can return book. I don't know why I pick this, seemed like a nice thing that wasn't a to do app. Let's start going down and just explaining what's happening here. You have the library, you have an error enum, which is a fairly common thing to do in Swift. Sorry. You have an array of books. You have an array of member IDs. These are people with assumably library cards. I don't know, that's the thing in every country. You're going to have initialize it, that takes in an array of books and an array of member IDs that initialize the library to be in the correct state.\n\nYou're going to have a borrow method that is going to take an ISBAN, which is ... I don't exactly remember what it stands for. International something book number, it's the internationally recognized idea, the book and then the library memberUid, it is going to be a throwing method that returns a book. And just to go over to the book for a second, a book contains an ISBAN, ID, a title and an author. The borrow method is going to be the main thing that we look at here. This is an individual body of work, there's clear input and output. It takes an ISBAN and the library memberUid, and it gives you back a book if all was successful. Let's walk down what this method does and how we want to test it.\n\nAgain, receive an ISBAN, received a library memberUid. We're going to check if that book actually exists in the available books. If it doesn't, we throw an error, we're going to check if a member actually exists in our library memberUid, it doesn't, throw an error. If we've gotten to this point, our state is correct. We remove the book from the books array, and we return it back to the color. So, it can be a common mistake to only test the happy path there, I give you the right ISBAN, I give you the right Uid, I get the right book back. We also want to test the two cases where you don't have the correct book, you don't have the correct member. And that the correct error is thrown. So, go to import XC test and write our first unit test.\n\nThrowing method, it is going to ... I'll go line by line. I'm not going to do this for every single slide. But because we're just kind of getting warmed up here, it'll make it clear what I'm talking about as we progress with the example because we're going to build on the example as the presentation goes on. So, we're going to create a new library. It's going to have an empty array of books and empty memberUids. We're going to try to borrow a book with an ISBAN that doesn't exist in the array and a random Uid which naturally does not exist in the empty number ID. That's going to throw an error. We're asserting that it throws an error. This is bad path, but it's good that we're testing it. We should also be checking that it's the correct error.\n\nI did not do that to save space on the slide. The wonders of presenting. 
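The slide code isn't captured in the transcript, so here is a rough sketch of the plain-Swift model and the first assertion being described; the type names, error cases, and ISBN value are assumptions:

```swift
import XCTest

struct Book {
    let isbn: String
    let title: String
    let author: String
}

enum LibraryError: Error {
    case bookNotAvailable
    case memberNotFound
}

class Library {
    private var books: [Book]
    private var memberIds: [UUID]

    init(books: [Book], memberIds: [UUID]) {
        self.books = books
        self.memberIds = memberIds
    }

    func borrow(_ isbn: String, memberId: UUID) throws -> Book {
        guard let index = books.firstIndex(where: { $0.isbn == isbn }) else {
            throw LibraryError.bookNotAvailable
        }
        guard memberIds.contains(memberId) else {
            throw LibraryError.memberNotFound
        }
        // State is valid: hand the book over and remove it from the shelf.
        return books.remove(at: index)
    }
}

class LibraryTests: XCTestCase {
    func testBorrowFromEmptyLibraryThrows() {
        let library = Library(books: [], memberIds: [])
        // Neither the book nor the member exists, so borrow must throw.
        XCTAssertThrowsError(try library.borrow("9780441569595", memberId: UUID()))
    }
}
```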
After that, we're going to create a library now with a book, but not a Uid, that book is going to make sure that the first check passes, but the lack of memberUids is going to make sure that the second check fails. So we're going to try to borrow that book again. That book is Neuromancer, which is great book. Everybody should read it. Add it to your summer reading lists, got plenty of time on our hands. We're going to assert that, that throws an error. After that we're going to actually create the array of memberUids finally, we're going to create another library with the Neuromancer book and the memberUids properly initialized this time. And we're going to finally successfully borrow the book using the first member of that members array of IDs.\n\nThat book, we're going to assert that has been the correct title and the correct author. We tested both bad paths in the happy path. There's probably more we could have tested here. We could have tested the library, initialized it to make sure that the state was set up soundly. That gets a bit murky though, when you have private fields, generally a big no, no in testing is to avoid unprivate things that should be private. That means that you're probably testing wrong or something was structured wrong. So for the most part, this test is sound, this is the basic unit test. Integration tests, integration tests ensure that the interlocking pieces of your application work together as designed. Sometimes this means testing layers between classes, and sometimes this means testing layer between your database and application. So considering that this is the Realm user group, let's consider Realm as your object model and the database that we will be using and testing against.\n\nSo, we're going to switch some things around to work with Realm. It's not going to be radically different than what we had before, but it's going to be different enough that it's worth going over. So our book and library classes are going to inherit an object now, which is a Realm type that you inherit from so that you can store that type in the database. Everything is going to have our wonderful Abruzzi Syntex attached to it, which is going away soon, by the way, everyone, which is great. The library class has changed slightly and so far is that has a library ID now, which is a Uid generated initialization. It has a Realm list of available books and a Realm list of library members. Library member is another Realm object that has a member ID, which is a Uid generated on initialization.\n\nA list of borrowed books, as you can borrow books from the library and the member ID is the primary key there. We are going to change our borrow method on the library to work with Realm now. So it's still going to stick and it has been in a memberUid. This is mainly because we're slowly migrating to the world where the borrow function is going to get more complex. We're going to have a check here to make sure that the Realm is not invalidated. So every Realm object has an exposed Realm property on it that you can use. That is a Realm that is associated with that object. We're going to make sure that that's valid. We're going to check if the ISBAN exists within our available books list. If that passes, we're going to check that the member ID exists within our members list of library members. We're going to grab the book from the available books list. We're going to remove it from the available books list and we're going to return it to the color. 
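Again, the slide isn't part of the transcript; here is a rough sketch of the Realm-backed version being described, using the older `@objc dynamic` syntax mentioned earlier. The property names and the extra `invalidState` error case are assumptions:

```swift
import RealmSwift

enum LibraryError: Error {
    // invalidState covers the case where the object's Realm has gone away.
    case invalidState, bookNotAvailable, memberNotFound
}

class Book: Object {
    @objc dynamic var isbn = ""
    @objc dynamic var title = ""
    @objc dynamic var author = ""
}

class LibraryMember: Object {
    @objc dynamic var memberId = UUID().uuidString
    let borrowedBooks = List<Book>()
    override class func primaryKey() -> String? { "memberId" }
}

class Library: Object {
    @objc dynamic var libraryId = UUID().uuidString
    let availableBooks = List<Book>()
    let members = List<LibraryMember>()

    func borrow(_ isbn: String, memberId: String) throws -> Book {
        // Every managed object exposes the Realm it belongs to.
        guard let realm = realm, !isInvalidated else {
            throw LibraryError.invalidState
        }
        guard let index = availableBooks.index(matching: "isbn == %@", isbn) else {
            throw LibraryError.bookNotAvailable
        }
        guard members.contains(where: { $0.memberId == memberId }) else {
            throw LibraryError.memberNotFound
        }
        let book = availableBooks[index]
        try realm.write {
            availableBooks.remove(at: index)
        }
        return book
    }
}
```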
As you can see, this actually isn't much different than the previous bit of code.\n\nThe main difference here is that we're writing to a Realm. Everything else is nearly the same, minor API differences. We're also going to add a return method to the library member class that is new. You should always return your library books. There's fines if you don't. So it's going to take a book and the library, we're going to, again, make sure that the Realm is not validated. We're going to make sure that our list of borrowed books because we're borrowing books from a library contains the correct book. If it does, we're going to remove it from our borrowed books list and we're going to append it back to the list of bell books in the library. So, what we're already doing here in these two methods is containing business logic. We're containing these things that actually change our data and in effect we'll eventually when we actually get to that part change the view.\n\nSo, let's test the borrow function now with Realm. Again, stepping through line by line, we're going to create an in-memory Realm because we don't actually want to store this stuff, we don't want state to linger between tests. We're going to open the Realm. We're going to create that Neuromancer book again. We're going to create a library member this time. We're going to create a library. We don't need to pass anything in this time as the state is going to be stored by the Realm and should be messed with from the appropriate locations, not necessarily on initialization, this is a choice.\n\nThis is not a mandate simplicity sake or a presentation. We're going to add that library to the Realm and we're going to, because there's no books in the library or members in the library assert that it's still froze that error. We don't have that book. Now, we're going to populate the library with the books in a right transaction. So, this is where Rome comes into play. We're going to try to borrow again, but because it doesn't have any members we're going to throw the air. Let's add members. Now we can successfully borrow the book with the given member and the given book, we're going to make sure that the ISBAN and title and author are sound, and that's it. It's nearly the same as the previous test.\n\nBut this is a super simple example and let's start including a view and figuring out how that plays in with your business logic and how Realm fits in all that. Testing business logic with Realm. Here's a really simple library view. There's two observed objects on it, a library and a library member. They should actually be observed Realm objects but it's not a perfect presentation. And so for each available book in the library, display a text for the title, a text for the author and a button to borrow the book. We're going to try to borrow, and do catch. If it succeeds, great. If it doesn't, we should actually show an error. I'm not going to put that in presentation code and we're going to tag the button with an identifier to be able to test against it later.\n\nThe main thing that we want to test in this view is the borrow button. It's the only thing that actually isn't read only. We should also test the read only things to make sure that the text user sound, but for again, second presentation, make sure that borrowing this book removes the book from the library and gives it to the member. 
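A rough sketch of the kind of view being described follows. The transcript doesn't show exactly where the borrowed book is handed to the member or what the accessibility identifier is called, so those details are assumptions:

```swift
import SwiftUI
import RealmSwift

struct LibraryView: View {
    // The talk notes these "should actually be observed Realm objects";
    // plain @ObservedObject keeps the managed objects live here, so the
    // write inside borrow(_:memberId:) can run without thawing.
    @ObservedObject var library: Library
    @ObservedObject var member: LibraryMember

    var body: some View {
        VStack {
            ForEach(library.availableBooks, id: \.isbn) { book in
                VStack(alignment: .leading) {
                    Text(book.title)
                    Text(book.author).font(.caption)
                    Button("Borrow") {
                        do {
                            let borrowed = try library.borrow(book.isbn, memberId: member.memberId)
                            if let realm = member.realm {
                                try realm.write { member.borrowedBooks.append(borrowed) }
                            }
                        } catch {
                            print(error) // a real app would surface this to the user
                        }
                    }
                    .accessibilityIdentifier("borrow_button") // placeholder name for the UI test
                }
            }
        }
    }
}
```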
So the thing that we at Realm have been talking about a lot recently is this MBI pattern, it meshes nicely with SwiftUI because of two-way data binding because of the simplicity of SwiftUI and the fact that we've been given all of the scaffolding to make things simpler, where we don't necessarily need few models, we don't necessarily need routers. And again, you might, I'm not mandating anything here, but this is the simplest way. And you can create a lot of small components and a lot of very clear methods on extensions on your model that make sure that this is fairly sound.\n\nYou have a user, the user has intent. They tap a button. That button changes something in the model. The model changes something in the view, the user sees the view fairly straightforward. It's a circular pattern, it's super useful in simpler circumstances. And as I found through my own dog fooding, in a new application, I can't speak to applications that have to migrate to SwiftUI, but in a new application, you can intentionally keep things simple regardless of the size of your code base, keep things small, keep your components small, create objects as you see fit, have loads of small functions that do exactly what they're supposed to do relative to that view, still a way to keep things simple. And in the case of our application, the user hits the borrow button. It's the tech button that we have.\n\nIt's going to borrow from the library from that function, that function is going to change our data. That data is going to be then reflected in the view via the Realm. The Realm is going to automatically update the view and the user's going to see that view. Fairly straightforward, fairly simple, again, works for many simple use cases. And yeah, so we're also going to add here a method for returning books. So it's the same exact thing. It's just for the member. I could have extracted this out, but wanted to show everybody it's the same thing. Member.borrowed books, texts for the title, text for the author, a return button with an accessibility identifier called return button that actually should have been used in the previous slide instead of tag. And that member is going to return that book to the library.\n\nWe also want to test that and for us in the case of the test that I'm about to show, it's kind of the final stage in the test where not only are we testing that we can borrow the book properly, but testing that we can back properly by pressing the borrow and return. So we're going to create a simple UI test here. The unit tests here that should be done are for the borrow and return methods. So, the borrow tests, we've already done. The return test, I'm going to avoid showing because it's the exact same as the borrow test, just in the case of the user. But having UI test is also really nice here because the UI in the case of MDI is the one that actually triggers the intent, they trigger what happens to the view model ... the view. Sorry, the model.\n\nIn the case of UI tests, it's actually kind of funky how you have to use it with Realm, you can't use your classes from the executable, your application. So, in the case of Realm, you'll actually have to not necessarily copy and paste, but you'll have to share a source file with your models. Realm is going to read those models and this is a totally different process. You have to think of it as the way that we're going to have to use Realm here is going to be a bit funky. 
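The pattern being described looks roughly like the sketch below; the `realm_path` key and file name are placeholders rather than the names used on the slide:

```swift
import XCTest
import RealmSwift

// UI test side: put a Realm in a temporary directory and hand its path
// to the app process through the launch environment.
let realmPath = FileManager.default.temporaryDirectory
    .appendingPathComponent("uitest.realm").path
let app = XCUIApplication()
app.launchEnvironment["realm_path"] = realmPath
app.launch()

// App side: if the environment variable is present, open the Realm at
// that path instead of the default file.
var configuration = Realm.Configuration.defaultConfiguration
if let path = ProcessInfo.processInfo.environment["realm_path"] {
    configuration.fileURL = URL(fileURLWithPath: path)
}
let realm = try! Realm(configuration: configuration)
```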
That said, it's covered by about five lines of code.\n\nWe're going to use a Realm and the temporary directory, we're going to store that Realm path in the launch environment. That's going to be an environment variable that you can read from your application. I wouldn't consider that test code in your app. I would just consider it an injection required for a better structured application. The actual last line there is M stakes, everyone. But we're going to read that Realm from the application and then use it as we normally would. We're going to then write to that Realm from the rest of this test.\n\nAnd on the right there is a little gift of the test running. It clicks the borrow button, it then clicks the return button and moves very quickly and they don't move as slow as they used to move. But let's go over the test. So, we create a new library. We create a new library member. We create a new book. At the library, we add the member and we append the book to the library. We then launch the application. Now we're going to launch this application with all of that state already stored. So we know exactly what that should look like. We know that the library has a book, but the user doesn't have a book. So, UI testing with SwiftUI is pretty much the same as UI kit. The downside is that it doesn't always do what you expect it to do.\n\nIf you have a heavily nested view, sometimes the identifier isn't properly exposed and you end up having to do some weird things just for the sake of UI testing your application. I think those are actually bugs though. I don't think that that's how it's supposed to work, I guess keep your eyes peeled after WWDC. But yeah, so we're going to tap the borrow.button. That's the tag that you saw before? That's going to trigger the fact that that available book is going to move to the member, so that list is going to be empty. We're going to assert then that the library.members.first.borrowbooks.firststudy is the same as the book that has been.\n\nSo, the first and only member of the library's first and only book is the same as the book that we've injected into this application. We're then going to hit the return button, that's going to return the book to the library and run through that return function that you saw as an extension on the library member class. We're going to check that the library.members.borrowbooks is empty. So, the first and only member of the library no longer has a borrowed book and that the library.borrowbook at first, it has been the only available book in the library is the same as the book that we inject into the application state. Right. So, we did it, we tested everything, the application's great. We're invincible. We beat the game, we got the high score and that's it.\n\nBut what about more complex acts, you say? You can't convert your 50,000 line app that is under concentrator to these simple MVI design pattern now? It's really easy to present information in this really sterile, simple environment. It's kind of the nature of the beast when it comes to giving a presentation in the first place. And unfortunately, sometimes it can also infect the mind when coming up with features and coming up with ways to use Realm. We don't get to work with these crazy complex applications every day, especially ones that are 10 years old.\n\nOccasionally, we actually do get sent people's apps and it's super interesting for us. 
And we've got enough feedback at this point that we are trying to work towards having Realm be more integrated with more complex architectures. We don't want people to have to work around Realm, which is something we've seen; there are people that completely detach their app from Realm and use Realm as this dummy data store. That's totally fine, but there's often not much point these days in doing something like that. There are so many better ways to use Realm that we want to introduce features that make it really obvious that you don't have to do some of these crazy things that people do. And yes, we have not completely lost our minds. We know there are more complex apps out there. So let's talk about MVVM.\n\nThis is just totally off the top of my head, not based on any factual truth and only anecdotal evidence, but it seems to be the most popular architecture these days. It is model-view-view model. So, the view gives commands to the view model, the view model updates the model, the view model reads from the model and binds it to the view. I have contested in the past that it doesn't make as much sense with SwiftUI because of two-way data binding, because what ends up happening with the models in SwiftUI is that you write from the view to the view model and then the view model just passes that information off to the model without doing anything to it. There's not really a transformation that generally happens between the view model and the model anymore. And then you have to manually update the view, and especially with Realm, where we're trying to do all that stuff for you, where you update the Realm and that updates the view without you having to do anything outside of placing a property wrapper on your view, it kind of breaks what we're trying to do.\n\nBut that said, we do understand that there is a nice separation here. And not only that, sometimes what is in your model isn't necessarily what you want to display on the view. Probably, more times than not, your model is not perfectly aligned with your view. And what happens if you have multiple models? Realm doesn't support joins. More often than not, your view works with a bunch of different pieces. Even in the example I showed, you have a library and you have a library member. Somebody doing MVVM would want only a view model property and any, like, super simple state variables on that view. They wouldn't want to have their objects directly supplanted onto the view like that. They'd have a library view view model with a library member and a library. Or even simpler than that. They can take it beyond that and do just the available books and the borrowed books, since those are actually the only things that we're working with in that view.\n\nSo this is one thing that we've seen people do, and this is probably the simplest way to do view models with Realm. In this case, because this view specifically only handles available books and borrowed books, those are the things that we're going to read from the library and the library member. We're going to initialize the library view view model with those two things. So you'd probably do that in the view before, and then pass that into the next view. You're going to assign the properties of that from the library's available books and the member's borrowed books. You're then going to observe the available books and observe the borrowed books because, 
now that you're abstracting out some of the functionality that we added in as far as observation, you're going to have to manually update the view from the view model.\n\nSo in that case, you're going to observe, and you don't care what's changing. You just care that there's change. You're going to send that to objectWillChange, which is a synthesized property on an ObservableObject. That's going to tell the view, please update. Your borrow function is going to look slightly different now. In this case, you're going to check for any available books; if the ISBN exists, you can still have the same errors. You're going to get the Realm off of the available books, and again, if the Realm has been invalidated or something happened, you are going to have to throw an error. You're going to grab the book out of the available books, remove it from the available books and then append it to the borrowed books in a write transaction on the Realm, and then return the book.\n\nSo, it's really not that different in this case. The return function, similarly, does the opposite, but with the same checks, and now it even has the advantage that both of these are on the singular model associated with a view. And assuming that this is the only view that does this thing, that's actually not a bad setup. I would totally understand this as a design pattern for simplifying your view and not separating things too much and keeping like concepts together. But then we've seen users do some pretty crazy things, like totally map everything out of Realm and just make their view model totally Realm agnostic. I get why in certain circumstances this happens. I couldn't name a good reason to do this outside of the fact that there are people that totally abstract out the database layer because they don't want to be tied to Realm.\n\nThat's understandable. We don't want people to be handcuffed to us. We want people to want to use us and assume that we will be around to continue to deliver great features and work with everyone to make building apps with Realm great. But we have seen this where ... Sure, you have some of the similar setup here where you're going to have a library and a library member, but you're going to save out the library ID and the member ID for lookup later. You're going to observe the Realm object still, but you're going to map out the books from the lists and put them into plain old Swift arrays.\n\nAnd then basically what you're going to end up doing is it's going to get a lot more complex: you're going to have to look up the primary keys in the Realm, you're going to have to make sure that those objects are still sound, you're then going to have to modify the Realm objects anyway in a write transaction, and then you're going to have to re-map the Realm lists back into their arrays. It gets really messy and it ends up becoming quintessential spaghetti code, and also hard to test, which is the point of this presentation. So, this is not something we'd recommend unless there's good reason for it. So there's a big cancel sign for you.\n\nWe understand that there are infinite use cases and 1,000 design patterns and so many different ways that you can write code; these design patterns are social constructs, man. There's no quick and easy way to do this stuff. So we're trying to come up with ways to better fit in. And for us, that's projections. This is a pre-alpha feature. It's only just been scoped out. We still have to design it fully. 
But this is from the prototype that we have now. So what is a projection? In database land, a projection is when you grab a bunch of data from different sources and put it into a single structure, but it's not actually stored in the database. So, if I have a person and that person has a name, and I have a dog and that dog has a name, and I want to project those two names into a single structure, I would have, like, a structure called person-and-dog-name.\n\nI would do queries on the database to grab those two things. And in MongoDB, there's a $project operator that you can use to make sure that that object comes out with the appropriate fields and values. For us, it's going to look slightly different. At some point in the future, we would like a similar, super loose projection syntax, where you can join across multiple objects and get whatever object you want back. That's kind of far future for us. So in the near future, we want to come up with something a bit simpler where you're essentially reattaching existing properties onto this new arbitrary structure. And arbitrary is kind of the key word here. It's not going to be directly associated with a single Realm object. It's going to be this thing that you associate with whatever you want to associate it with.\n\nSo if you want to associate it with the view, we've incidentally been working on sort of a view model for people; then it becomes your view model. If the models are one-to-one with the view, you grab the data from the sources that you want to grab it from, and you stick it on that projection and associate that with the view. Suddenly, you have a view model. In this case, we have our library view view model. It inherits from the projection class or protocol, we're not sure yet. It's going to have two projected properties. It's going to have available books and borrowed books. These are going to be read directly from the library and member classes. These properties are going to be live. Think of this as a Realm object. This is effectively a Realm object with reattached accessors.\n\nIt should be treated no differently, but it's much more flexible and lightweight. You can attach anything on here, and you could attach the member IDs on here; if you had overdue fees and that was supposed to go on this view, you could attach overdue fees to it. These are things you can query. Right now we're trying to stick mainly to things that we can access with key paths. So, for those familiar with key paths, which I think was Swift 5.2.\n\nI can't remember which version of Swift it was, but it was a really neat feature where you can basically access a chain of key paths on an object and then read those properties out. The initial version of projections will be able to do that, where that available books is going to be read from that library, and any updates to the library will update available books; same thing with borrowed books and library member. And it's going to have a similar borrow function to the one the other view model had; in this case it's just slightly more integrated with Realm, but I think the code is nearly identical. Same thing with return: the code is nearly identical, slightly more integrated with Realm.\n\nAnd the view is nearly the same, except now you have the view model, sorry for some of the formatting there. In this case, you call borrow on the view model and you call return on the view model. It is very close to what we had. 
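To make that shape concrete, here is a rough sketch of the kind of projection-backed view model being described. The feature was pre-alpha at the time of the talk, so the `Projection`/`@Projected` syntax, the model names, and the `isbn` property below are hypothetical and illustrative only, not a shipped API:

```swift
import RealmSwift

// Hypothetical sketch only: "projections" were pre-alpha when this talk was given,
// so this syntax and these type names are illustrative rather than a real API.
final class LibraryViewViewModel: Projection {
    // Live properties re-attached from existing Realm objects via key paths.
    @Projected(\Library.availableBooks) var availableBooks
    @Projected(\LibraryMember.borrowedBooks) var borrowedBooks

    enum LibraryError: Error {
        case bookNotAvailable
        case realmInvalidated
    }

    // Nearly identical to the plain view model's borrow function, just Realm-aware.
    func borrow(_ isbn: String) throws -> Book {
        guard let book = availableBooks.first(where: { $0.isbn == isbn }) else {
            throw LibraryError.bookNotAvailable
        }
        guard let realm = availableBooks.realm else {
            throw LibraryError.realmInvalidated
        }
        try realm.write {
            if let index = availableBooks.index(of: book) {
                availableBooks.remove(at: index)
            }
            borrowedBooks.append(book)
        }
        return book
    }
}
```

The view then calls `borrow` and `return` on this one structure, which is what makes it straightforward to test in isolation.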
It's still a Realmy thing that's going to automatically update your view when any of the things that you have attached update, so that if the library updates, if the user ... Oops, sorry, not user. If the member updates, if the books update, if the borrowed books update, that view is again going to automatically update. And now we've also created a single structure, which is easier to test, or for you to test. Integration testing is going to be, again, very, very similar. The difference is that instead of just creating a library and a member, we're also creating a library view view model.\n\nWe're going to borrow from that, and we're going to make sure that it throws the appropriate error. We're going to refill the state, mess with the state, do all the same stuff, except this time on the view model. And now what we've done here is that if this is the only place where you need to return and borrow, we've created this nice standalone structure that does that for you. And it's associated with Realm, which means that it's closer to your model, since we are encouraging people to have Realm be the model as a concept. Your testing is the exact same; because this is a view-associated thing and not actually a Realm object, you don't need to change these tests at all. They're the same. That's pretty much it. I hope I've left enough time for questions. I have not had a chance to look at the chat yet. \n\n**Ian:**\nI'm going to see to that, Jason, but thank you so much. I guess one of the comments here, Sebastian has never seen the Objective-C declarations that we have in our Realm models. Maybe tell them a little bit about the history there and then tell him what's planned for the future.\n\n**Jason:**\nSure. So, just looking at the question now: "I've never used @objcMembers." Obviously, @objcMembers prevents you from having to put @objc on all of your properties that need to use Objective-C reflection. The reason that you have to do that with Realm and Swift is because we need to take advantage of Objective-C reflection. It's the only way that we're able to do that. When you put that tag there, sorry, annotation. When you put that there, it gives Objective-C, the Objective-C runtime, access to that property. And we still need that. However, in the future, we are going to be taking advantage of property wrappers to make it much nicer, cleaner, and more obvious with syntax. Also, it's going to have compile-time checks. That's going to look like Swift instead of an Objective-C whatever. That is actually coming sooner rather than later. I hesitate to ever promise a date, but that one should be pretty, pretty soon.\n\n**Ian:**\nExcellent. Yeah, we're looking forward to being able to clean up those Realm model definitions to make them more swifty. Richard had a question here regarding whether there's a recommendation or proper way to automate the user-side input for some of the UI testing.\n\n**Jason:**\nUI testing, for UI tests proper, is there a way to automate the user input side of the equation since you weren't going to? I'm not entirely sure what you mean, Richard. If you could explain a bit more.\n\n**Ian:**\nI mean, I think maybe this is about having variable input into what the user puts into the field. Could this also be maybe something around a fuzzer, having different inputs and testing different fields and how they accept certain inputs and how it goes through different tests?\n\n**Jason:**\nYeah. I mean, I didn't go over fuzz testing, but that's absolutely something that you should do. 
There's no automated mouse input on text input. You can automate some of that stuff. There's no mouse touch yet. You can touch any location on the screen, you can make it so that if you want to really, not load test is the wrong word, but batch up your application, just have it touch everywhere and see what happens and make sure nothing crashes, you could do that. It's actually really interesting if you have these UI tests. So yes, you can do that, Richard. I don't know if there's a set of best standards and practices, but at least with macOS, for instance, it was bizarre the first time I ran it. When you a UI test on macOS it actually completely takes control from your mouse, and it will go all over the screen wherever you tell it to and click anywhere. Obviously, on the iPhone simulator, it has a limited space of where it can touch, but yes, that can be automated. But I guess it depends on what you're trying to test.\n\n**Ian:**\nI guess, another question for me is what's your opinion on test coverage? I think a lot of people would look to have everything be unit tested, but then there's also integration tests. Should everything be having integration tests? And then end to end tests, there's kind of a big, a wide berth of different things you can test there. So, what's your opinion on how much coverage you should have for each type of test?\n\n**Jason:**\nThat's a tough question, because at Realm, I suppose we tell ourselves that there can never be too many tests, so we're up to us, every single thing would be tested within reason. You can't really go overkill unless you start doing weird things to your code to accommodate weird testing patterns. I couldn't give you a number as to what appropriate test coverage is. Most things I know for us at Realm, we don't make it so that every single method needs to be tested. So, if you have a bunch of private methods, those don't need to be tested, but for us, anything by the public API needs to be heavily tested, every single method and that's not an exaggeration. We're also a library. So in a UI application, you have to look at it a bit differently and consider what your public API is, which were UI applications, really the entry points to the model, any entry point that transforms data. And in my opinion, all of those should be tested. So, I don't know if that properly answers the question, for integration tests and end to end tests, same thing. What?\n\n**Ian:**\nYeah, I think so. I mean, I think it says where's your public API and then a mobile application, your public API is a lot of the UI interfaces that they can interact with and that's how they get into your code paths. Right?\n\n**Jason:**\nYeah.\n\n**Ian:**\nI guess another question from me, and this is another opinion question is what's your opinion on flaky tests? And so these are tests that sometimes pass sometimes fail and is it okay? A lot of times relate to, should we release, should we not release? Maybe you could give us a little bit of your thoughts on that.\n\n**Jason:**\nYeah. That's a tricky one because even on the Realm Cocoa, if you follow the pull requests, we still have a couple of flaky tests. To be honest, those tests are probably revealing some race condition. They could be in the test themselves though, which I think in the case of some of the recent ones, that was the case. More often flaky tests are revealing something wrong. I don't want to be on a recording thing that that's okay. 
But very occasionally, yes, you do have to look at a test and be like, "Maybe this is specific to the testing circumstance," but if you're trying to come out with, like, the most high-quality product, you should have all your tests passing, you should make sure that there are no race conditions, you should make sure that everything is clean-cut and sound, all that kind of thing.\n\n**Ian:**\nYeah. Okay, perfect. There's a final question here: will there be docs on best practices for testing? I think this presentation is a little bit of our, I wouldn't say docs, but a presentation on best practices for testing. It is something potentially in the future we can look to add to our docs. So yeah, I think once we have other things covered, we can look to add testing best practices to our docs as well. And then last question here from Shane: what are we hoping for from WWDC next week?\n\n**Jason:**\nSure. Well, just to add one thing to Ian's question, if there's ever a question that you or anybody else here has, feel free to ask on GitHub or the forums or something like that. For things that we can't offer through API or features, or things that might take a long time to work on, we're happy to offer guidance. We do have an idea of what those best practices are and are happy to share them. As far as WWDC, what we're hoping for is ..., yes, we should add more docs, Richard, sorry. There are definitely some things there that are gotchas. But with WWDC next week, and this ties to best practices on multi-threading using Realm, we're hoping for async/await, which we've been playing with for a few weeks now. We're hoping for actors, and we're hoping for a few minor features as well, like property wrappers in function parameters and property wrappers in pretty much lambdas and everywhere. We're hoping for Sendable as well; Sendable will prevent you from passing unsafe things into thread-safe areas, basically. But yeah, that's probably our main wishlist right now.\n\n**Ian:**\nWow. Okay. That's a substantial wishlist. Well, I hope you get everything you wish for. Perfect. Well, if there are no other questions, thank you so much, everyone, and thank you so much, Jason. This has been very informative yet again.\n\n**Jason:**\nThanks everyone for coming. I always have-\n\n**Ian:**\nThank you.\n\n**Jason:**\n... 
to thank everyone.\n\n**Ian:**\nBye.\n", "format": "md", "metadata": {"tags": ["Realm", "Swift", "iOS"], "pageDescription": "Learn how the testing landscape has changed for iOS apps using the new SwiftUI framework.", "contentType": "Article"}, "title": "Realm Meetup - SwiftUI Testing and Realm With Projections", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/java/java-spring-data-client-side-field-level-encryption", "action": "created", "body": "# How to Implement Client-Side Field Level Encryption (CSFLE) in Java with Spring Data MongoDB\n\n## GitHub Repository\n\nThe source code of this template is available on GitHub:\n\n```bash\ngit clone git@github.com:mongodb-developer/mongodb-java-spring-boot-csfle.git\n```\n\nTo get started, you'll need:\n\n- Java 17.\n- A MongoDB cluster v7.0.2 or higher.\n- MongoDB Automatic Encryption Shared Library\n v7.0.2 or higher.\n\nSee the README.md file for more\ninformation.\n\n## Video\n\nThis content is also available in video format.\n\n:youtube]{vid=YePIQimYnxI}\n\n## Introduction\n\nThis post will explain the key details of the integration of\nMongoDB [Client-Side Field Level Encryption (CSFLE)\nwith Spring Data MongoDB.\n\nHowever, this post will *not* explain the basic mechanics of CSFLE\nor Spring Data MongoDB.\n\nIf you feel like you need a refresher on CSFLE before working on this more complicated piece, I can recommend a few\nresources for CSFLE:\n\n- My tutorial: CSFLE with the Java Driver (\n without Spring Data)\n- CSFLE MongoDB documentation\n- CSFLE encryption schemas\n- CSFLE quick start\n\nAnd for Spring Data MongoDB:\n\n- Spring Data MongoDB - Project\n- Spring Data MongoDB - Documentation\n- Baeldung Spring Data MongoDB Tutorial\n- Spring Initializr\n\nThis template is *significantly* larger than other online CSFLE templates you can find online. 
It tries to provide\nreusable code for a real production environment using:\n\n- Multiple encrypted collections.\n- Automated JSON Schema generation.\n- Server-side JSON Schema.\n- Separated clusters for DEKs and encrypted collections.\n- Automated data encryption keys generation or retrieval.\n- SpEL Evaluation Extension.\n- Auto-implemented repositories.\n- Open API documentation 3.0.1.\n\nWhile I was coding, I also tried to respect the SOLID Principles as much\nas possible to increase the code readability, usability, and reutilization.\n\n## High-Level Diagrams\n\nNow that we are all on board, here is a high-level diagram of the different moving parts required to create a correctly-configured CSFLE-enabled MongoClient which can encrypt and decrypt fields automatically.\n\n```java\n/**\n * This class initialize the Key Vault (collection + keyAltNames unique index) using a dedicated standard connection\n * to MongoDB.\n * Then it creates the Data Encryption Keys (DEKs) required to encrypt the documents in each of the\n * encrypted collections.\n */\n@Component\npublic class KeyVaultAndDekSetup {\n\n private static final Logger LOGGER = LoggerFactory.getLogger(KeyVaultAndDekSetup.class);\n private final KeyVaultService keyVaultService;\n private final DataEncryptionKeyService dataEncryptionKeyService;\n @Value(\"${spring.data.mongodb.vault.uri}\")\n private String CONNECTION_STR;\n\n public KeyVaultAndDekSetup(KeyVaultService keyVaultService, DataEncryptionKeyService dataEncryptionKeyService) {\n this.keyVaultService = keyVaultService;\n this.dataEncryptionKeyService = dataEncryptionKeyService;\n }\n\n @PostConstruct\n public void postConstruct() {\n LOGGER.info(\"=> Start Encryption Setup.\");\n LOGGER.debug(\"=> MongoDB Connection String: {}\", CONNECTION_STR);\n MongoClientSettings mcs = MongoClientSettings.builder()\n .applyConnectionString(new ConnectionString(CONNECTION_STR))\n .build();\n try (MongoClient client = MongoClients.create(mcs)) {\n LOGGER.info(\"=> Created the MongoClient instance for the encryption setup.\");\n LOGGER.info(\"=> Creating the encryption key vault collection.\");\n keyVaultService.setupKeyVaultCollection(client);\n LOGGER.info(\"=> Creating the Data Encryption Keys.\");\n EncryptedCollectionsConfiguration.encryptedEntities.forEach(dataEncryptionKeyService::createOrRetrieveDEK);\n LOGGER.info(\"=> Encryption Setup completed.\");\n } catch (Exception e) {\n LOGGER.error(\"=> Encryption Setup failed: {}\", e.getMessage(), e);\n }\n\n }\n\n}\n```\n\nIn production, you could choose to create the key vault collection and its unique index on the `keyAltNames` field\nmanually once and remove the code as it's never going to be executed again. 
I guess it only makes sense to keep it if\nyou are running this code in a CI/CD pipeline.\n\nOne important thing to note here is the dependency to a completely standard (i.e., not CSFLE-enabled) and ephemeral `MongoClient` (use of a\ntry-with-resources block) as we are already creating a collection and an index in our MongoDB cluster.\n\nKeyVaultServiceImpl.java\n\n```java\n/**\n * Initialization of the Key Vault collection and keyAltNames unique index.\n */\n@Service\npublic class KeyVaultServiceImpl implements KeyVaultService {\n\n private static final Logger LOGGER = LoggerFactory.getLogger(KeyVaultServiceImpl.class);\n private static final String INDEX_NAME = \"uniqueKeyAltNames\";\n @Value(\"${mongodb.key.vault.db}\")\n private String KEY_VAULT_DB;\n @Value(\"${mongodb.key.vault.coll}\")\n private String KEY_VAULT_COLL;\n\n public void setupKeyVaultCollection(MongoClient mongoClient) {\n LOGGER.info(\"=> Setup the key vault collection {}.{}\", KEY_VAULT_DB, KEY_VAULT_COLL);\n MongoDatabase db = mongoClient.getDatabase(KEY_VAULT_DB);\n MongoCollection vault = db.getCollection(KEY_VAULT_COLL);\n boolean vaultExists = doesCollectionExist(db, KEY_VAULT_COLL);\n if (vaultExists) {\n LOGGER.info(\"=> Vault collection already exists.\");\n if (!doesIndexExist(vault)) {\n LOGGER.info(\"=> Unique index created on the keyAltNames\");\n createKeyVaultIndex(vault);\n }\n } else {\n LOGGER.info(\"=> Creating a new vault collection & index on keyAltNames.\");\n createKeyVaultIndex(vault);\n }\n }\n\n private void createKeyVaultIndex(MongoCollection vault) {\n Bson keyAltNamesExists = exists(\"keyAltNames\");\n IndexOptions indexOpts = new IndexOptions().name(INDEX_NAME)\n .partialFilterExpression(keyAltNamesExists)\n .unique(true);\n vault.createIndex(new BsonDocument(\"keyAltNames\", new BsonInt32(1)), indexOpts);\n }\n\n private boolean doesCollectionExist(MongoDatabase db, String coll) {\n return db.listCollectionNames().into(new ArrayList<>()).stream().anyMatch(c -> c.equals(coll));\n }\n\n private boolean doesIndexExist(MongoCollection coll) {\n return coll.listIndexes()\n .into(new ArrayList<>())\n .stream()\n .map(i -> i.get(\"name\"))\n .anyMatch(n -> n.equals(INDEX_NAME));\n }\n}\n```\n\nWhen it's done, we can close the standard MongoDB connection.\n\n## Creation of the Data Encryption Keys\n\nWe can now create the Data Encryption Keys (DEKs) using the `ClientEncryption` connection.\n\nMongoDBKeyVaultClientConfiguration.java\n\n```java\n/**\n * ClientEncryption used by the DataEncryptionKeyService to create the DEKs.\n */\n@Configuration\npublic class MongoDBKeyVaultClientConfiguration {\n\n private static final Logger LOGGER = LoggerFactory.getLogger(MongoDBKeyVaultClientConfiguration.class);\n private final KmsService kmsService;\n @Value(\"${spring.data.mongodb.vault.uri}\")\n private String CONNECTION_STR;\n @Value(\"${mongodb.key.vault.db}\")\n private String KEY_VAULT_DB;\n @Value(\"${mongodb.key.vault.coll}\")\n private String KEY_VAULT_COLL;\n private MongoNamespace KEY_VAULT_NS;\n\n public MongoDBKeyVaultClientConfiguration(KmsService kmsService) {\n this.kmsService = kmsService;\n }\n\n @PostConstruct\n public void postConstructor() {\n this.KEY_VAULT_NS = new MongoNamespace(KEY_VAULT_DB, KEY_VAULT_COLL);\n }\n\n /**\n * MongoDB Encryption Client that can manage Data Encryption Keys (DEKs).\n *\n * @return ClientEncryption MongoDB connection that can create or delete DEKs.\n */\n @Bean\n public ClientEncryption clientEncryption() {\n LOGGER.info(\"=> Creating the MongoDB Key 
Vault Client.\");\n MongoClientSettings mcs = MongoClientSettings.builder()\n .applyConnectionString(new ConnectionString(CONNECTION_STR))\n .build();\n ClientEncryptionSettings ces = ClientEncryptionSettings.builder()\n .keyVaultMongoClientSettings(mcs)\n .keyVaultNamespace(KEY_VAULT_NS.getFullName())\n .kmsProviders(kmsService.getKmsProviders())\n .build();\n return ClientEncryptions.create(ces);\n }\n}\n```\n\nWe can instantiate directly a `ClientEncryption` bean using\nthe KMS and use it to\ngenerate our DEKs (one for each encrypted collection).\n\nDataEncryptionKeyServiceImpl.java\n\n```java\n/**\n * Service responsible for creating and remembering the Data Encryption Keys (DEKs).\n * We need to retrieve the DEKs when we evaluate the SpEL expressions in the Entities to create the JSON Schemas.\n */\n@Service\npublic class DataEncryptionKeyServiceImpl implements DataEncryptionKeyService {\n\n private static final Logger LOGGER = LoggerFactory.getLogger(DataEncryptionKeyServiceImpl.class);\n private final ClientEncryption clientEncryption;\n private final Map dataEncryptionKeysB64 = new HashMap<>();\n @Value(\"${mongodb.kms.provider}\")\n private String KMS_PROVIDER;\n\n public DataEncryptionKeyServiceImpl(ClientEncryption clientEncryption) {\n this.clientEncryption = clientEncryption;\n }\n\n public Map getDataEncryptionKeysB64() {\n LOGGER.info(\"=> Getting Data Encryption Keys Base64 Map.\");\n LOGGER.info(\"=> Keys in DEK Map: {}\", dataEncryptionKeysB64.entrySet());\n return dataEncryptionKeysB64;\n }\n\n public String createOrRetrieveDEK(EncryptedEntity encryptedEntity) {\n Base64.Encoder b64Encoder = Base64.getEncoder();\n String dekName = encryptedEntity.getDekName();\n BsonDocument dek = clientEncryption.getKeyByAltName(dekName);\n BsonBinary dataKeyId;\n if (dek == null) {\n LOGGER.info(\"=> Creating Data Encryption Key: {}\", dekName);\n DataKeyOptions dko = new DataKeyOptions().keyAltNames(of(dekName));\n dataKeyId = clientEncryption.createDataKey(KMS_PROVIDER, dko);\n LOGGER.debug(\"=> DEK ID: {}\", dataKeyId);\n } else {\n LOGGER.info(\"=> Existing Data Encryption Key: {}\", dekName);\n dataKeyId = dek.get(\"_id\").asBinary();\n LOGGER.debug(\"=> DEK ID: {}\", dataKeyId);\n }\n String dek64 = b64Encoder.encodeToString(dataKeyId.getData());\n LOGGER.debug(\"=> Base64 DEK ID: {}\", dek64);\n LOGGER.info(\"=> Adding Data Encryption Key to the Map with key: {}\",\n encryptedEntity.getEntityClass().getSimpleName());\n dataEncryptionKeysB64.put(encryptedEntity.getEntityClass().getSimpleName(), dek64);\n return dek64;\n }\n\n}\n```\n\nOne thing to note here is that we are storing the DEKs in a map, so we don't have to retrieve them again later when we\nneed them for the JSON Schemas.\n\n## Entities\n\nOne of the key functional areas of Spring Data MongoDB is the POJO-centric model it relies on to implement the\nrepositories and map the documents to the MongoDB collections.\n\nPersonEntity.java\n\n```java\n/**\n * This is the entity class for the \"persons\" collection.\n * The SpEL expression of the @Encrypted annotation is used to determine the DEK's keyId to use for the encryption.\n *\n * @see com.mongodb.quickstart.javaspringbootcsfle.components.EntitySpelEvaluationExtension\n */\n@Document(\"persons\")\n@Encrypted(keyId = \"#{mongocrypt.keyId(#target)}\")\npublic class PersonEntity {\n @Id\n private ObjectId id;\n private String firstName;\n private String lastName;\n @Encrypted(algorithm = \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\")\n private String ssn;\n 
@Encrypted(algorithm = \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\")\n private String bloodType;\n\n // Constructors\n\n @Override\n // toString()\n\n // Getters & Setters\n}\n```\n\nAs you can see above, this entity contains all the information we need to fully automate CSFLE. We have the information\nwe need to generate the JSON Schema:\n\n- Using the SpEL expression `#{mongocrypt.keyId(#target)}`, we can populate dynamically the DEK that was generated or\n retrieved earlier.\n- `ssn` is a `String` that requires a deterministic algorithm.\n- `bloodType` is a `String` that requires a random algorithm.\n\nThe generated JSON Schema looks like this:\n\n```json\n{\n \"encryptMetadata\": {\n \"keyId\": \n {\n \"$binary\": {\n \"base64\": \"WyHXZ+53SSqCC/6WdCvp0w==\",\n \"subType\": \"04\"\n }\n }\n ]\n },\n \"type\": \"object\",\n \"properties\": {\n \"ssn\": {\n \"encrypt\": {\n \"bsonType\": \"string\",\n \"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\"\n }\n },\n \"bloodType\": {\n \"encrypt\": {\n \"bsonType\": \"string\",\n \"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\"\n }\n }\n }\n}\n```\n\n## SpEL Evaluation Extension\n\nThe evaluation of the SpEL expression is only possible because of this class we added in the configuration:\n\n```java\n/**\n * Will evaluate the SePL expressions in the Entity classes like this: #{mongocrypt.keyId(#target)} and insert\n * the right encryption key for the right collection.\n */\n@Component\npublic class EntitySpelEvaluationExtension implements EvaluationContextExtension {\n\n private static final Logger LOGGER = LoggerFactory.getLogger(EntitySpelEvaluationExtension.class);\n private final DataEncryptionKeyService dataEncryptionKeyService;\n\n public EntitySpelEvaluationExtension(DataEncryptionKeyService dataEncryptionKeyService) {\n this.dataEncryptionKeyService = dataEncryptionKeyService;\n }\n\n @Override\n @NonNull\n public String getExtensionId() {\n return \"mongocrypt\";\n }\n\n @Override\n @NonNull\n public Map getFunctions() {\n try {\n return Collections.singletonMap(\"keyId\", new Function(\n EntitySpelEvaluationExtension.class.getMethod(\"computeKeyId\", String.class), this));\n } catch (NoSuchMethodException e) {\n throw new RuntimeException(e);\n }\n }\n\n public String computeKeyId(String target) {\n String dek = dataEncryptionKeyService.getDataEncryptionKeysB64().get(target);\n LOGGER.info(\"=> Computing dek for target {} => {}\", target, dek);\n return dek;\n }\n}\n```\n\nNote that it's the place where we are retrieving the DEKs and matching them with the `target`: \"PersonEntity\", in this case.\n\n## JSON Schemas and the MongoClient Connection\n\nJSON Schemas are actually not trivial to generate in a Spring Data MongoDB project.\n\nAs a matter of fact, to generate the JSON Schemas, we need the MappingContext (the entities, etc.) 
which is created by\nthe automatic configuration of Spring Data which creates the `MongoClient` connection and the `MongoTemplate`...\n\nBut to create the MongoClient \u2014 with the automatic encryption enabled \u2014 you need JSON Schemas!\n\nIt took me a significant amount of time to find a solution to this deadlock, and you can just enjoy the solution now!\n\nThe solution is to inject the JSON Schema creation in the autoconfiguration process by instantiating\nthe `MongoClientSettingsBuilderCustomizer` bean.\n\n[MongoDBSecureClientConfiguration.java\n\n```java\n/**\n * Spring Data MongoDB Configuration for the encrypted MongoClient with all the required configuration (jsonSchemas).\n * The big trick in this file is the creation of the JSON Schemas before the creation of the entire configuration as\n * we need the MappingContext to resolve the SpEL expressions in the entities.\n *\n * @see com.mongodb.quickstart.javaspringbootcsfle.components.EntitySpelEvaluationExtension\n */\n@Configuration\n@DependsOn(\"keyVaultAndDekSetup\")\npublic class MongoDBSecureClientConfiguration {\n\n private static final Logger LOGGER = LoggerFactory.getLogger(MongoDBSecureClientConfiguration.class);\n private final KmsService kmsService;\n private final SchemaService schemaService;\n @Value(\"${crypt.shared.lib.path}\")\n private String CRYPT_SHARED_LIB_PATH;\n @Value(\"${spring.data.mongodb.storage.uri}\")\n private String CONNECTION_STR_DATA;\n @Value(\"${spring.data.mongodb.vault.uri}\")\n private String CONNECTION_STR_VAULT;\n @Value(\"${mongodb.key.vault.db}\")\n private String KEY_VAULT_DB;\n @Value(\"${mongodb.key.vault.coll}\")\n private String KEY_VAULT_COLL;\n private MongoNamespace KEY_VAULT_NS;\n\n public MongoDBSecureClientConfiguration(KmsService kmsService, SchemaService schemaService) {\n this.kmsService = kmsService;\n this.schemaService = schemaService;\n }\n\n @PostConstruct\n public void postConstruct() {\n this.KEY_VAULT_NS = new MongoNamespace(KEY_VAULT_DB, KEY_VAULT_COLL);\n }\n\n @Bean\n public MongoClientSettings mongoClientSettings() {\n LOGGER.info(\"=> Creating the MongoClientSettings for the encrypted collections.\");\n return MongoClientSettings.builder().applyConnectionString(new ConnectionString(CONNECTION_STR_DATA)).build();\n }\n\n @Bean\n public MongoClientSettingsBuilderCustomizer customizer(MappingContext mappingContext) {\n LOGGER.info(\"=> Creating the MongoClientSettingsBuilderCustomizer.\");\n return builder -> {\n MongoJsonSchemaCreator schemaCreator = MongoJsonSchemaCreator.create(mappingContext);\n Map schemaMap = schemaService.generateSchemasMap(schemaCreator)\n .entrySet()\n .stream()\n .collect(toMap(e -> e.getKey().getFullName(),\n Map.Entry::getValue));\n Map extraOptions = Map.of(\"cryptSharedLibPath\", CRYPT_SHARED_LIB_PATH,\n \"cryptSharedLibRequired\", true);\n MongoClientSettings mcs = MongoClientSettings.builder()\n .applyConnectionString(\n new ConnectionString(CONNECTION_STR_VAULT))\n .build();\n AutoEncryptionSettings oes = AutoEncryptionSettings.builder()\n .keyVaultMongoClientSettings(mcs)\n .keyVaultNamespace(KEY_VAULT_NS.getFullName())\n .kmsProviders(kmsService.getKmsProviders())\n .schemaMap(schemaMap)\n .extraOptions(extraOptions)\n .build();\n builder.autoEncryptionSettings(oes);\n };\n }\n}\n```\n\n> One thing to note here is the option to separate the DEKs from the encrypted collections in two completely separated\n> MongoDB clusters. 
This isn't mandatory, but it can be a handy trick if you choose to have a different backup retention\n> policy for your two clusters. This can be interesting for the GDPR Article 17 \"Right to erasure,\" for instance, as you\n> can then guarantee that a DEK can completely disappear from your systems (backup included). I talk more about this\n> approach in\n> my Java CSFLE post.\n\nHere is the JSON Schema service which stores the generated JSON Schemas in a map:\n\nSchemaServiceImpl.java\n\n```java\n\n@Service\npublic class SchemaServiceImpl implements SchemaService {\n\n private static final Logger LOGGER = LoggerFactory.getLogger(SchemaServiceImpl.class);\n private Map schemasMap;\n\n @Override\n public Map generateSchemasMap(MongoJsonSchemaCreator schemaCreator) {\n LOGGER.info(\"=> Generating schema map.\");\n List encryptedEntities = EncryptedCollectionsConfiguration.encryptedEntities;\n return schemasMap = encryptedEntities.stream()\n .collect(toMap(EncryptedEntity::getNamespace,\n e -> generateSchema(schemaCreator, e.getEntityClass())));\n }\n\n @Override\n public Map getSchemasMap() {\n return schemasMap;\n }\n\n private BsonDocument generateSchema(MongoJsonSchemaCreator schemaCreator, Class entityClass) {\n BsonDocument schema = schemaCreator.filter(MongoJsonSchemaCreator.encryptedOnly())\n .createSchemaFor(entityClass)\n .schemaDocument()\n .toBsonDocument();\n LOGGER.info(\"=> JSON Schema for {}:\\n{}\", entityClass.getSimpleName(),\n schema.toJson(JsonWriterSettings.builder().indent(true).build()));\n return schema;\n }\n\n}\n```\n\nWe are storing the JSON Schemas because this template also implements one of the good practices of CSFLE: server-side\nJSON Schemas.\n\n## Create or Update the Encrypted Collections\n\nIndeed, to make the automatic encryption and decryption of CSFLE work, you do not require the server-side JSON Schemas.\n\nOnly the client-side ones are required for the Automatic Encryption Shared Library. 
But then nothing would prevent\nanother misconfigured client or an admin connected directly to the cluster to insert or update some documents without\nencrypting the fields.\n\nTo enforce this you can use the server-side JSON Schema as you would to enforce a field type in a document, for instance.\n\nBut given that the JSON Schema will evolve with the different versions of your application, the JSON Schemas need to be\nupdated accordingly each time you restart your application.\n\n```java\n/**\n * Create or update the encrypted collections with a server side JSON Schema to secure the encrypted field in the MongoDB database.\n * This prevents any other client from inserting or editing the fields without encrypting the fields correctly.\n */\n@Component\npublic class EncryptedCollectionsSetup {\n\n private static final Logger LOGGER = LoggerFactory.getLogger(EncryptedCollectionsSetup.class);\n private final MongoClient mongoClient;\n private final SchemaService schemaService;\n\n public EncryptedCollectionsSetup(MongoClient mongoClient, SchemaService schemaService) {\n this.mongoClient = mongoClient;\n this.schemaService = schemaService;\n }\n\n @PostConstruct\n public void postConstruct() {\n LOGGER.info(\"=> Setup the encrypted collections.\");\n schemaService.getSchemasMap()\n .forEach((namespace, schema) -> createOrUpdateCollection(mongoClient, namespace, schema));\n }\n\n private void createOrUpdateCollection(MongoClient mongoClient, MongoNamespace ns, BsonDocument schema) {\n MongoDatabase db = mongoClient.getDatabase(ns.getDatabaseName());\n String collStr = ns.getCollectionName();\n if (doesCollectionExist(db, ns)) {\n LOGGER.info(\"=> Updating {} collection's server side JSON Schema.\", ns.getFullName());\n db.runCommand(new Document(\"collMod\", collStr).append(\"validator\", jsonSchemaWrapper(schema)));\n } else {\n LOGGER.info(\"=> Creating encrypted collection {} with server side JSON Schema.\", ns.getFullName());\n db.createCollection(collStr, new CreateCollectionOptions().validationOptions(\n new ValidationOptions().validator(jsonSchemaWrapper(schema))));\n }\n }\n\n public BsonDocument jsonSchemaWrapper(BsonDocument schema) {\n return new BsonDocument(\"$jsonSchema\", schema);\n }\n\n private boolean doesCollectionExist(MongoDatabase db, MongoNamespace ns) {\n return db.listCollectionNames()\n .into(new ArrayList<>())\n .stream()\n .anyMatch(c -> c.equals(ns.getCollectionName()));\n }\n\n}\n```\n\n## Multi-Entities Support\n\nOne big feature of this template as well is the support of multiple entities. 
As you probably noticed already, there is\na `CompanyEntity` and all its related components but the code is generic enough to handle any amount of entities which\nisn't usually the case in all the other online tutorials.\n\nIn this template, if you want to support a third type of entity, you just have to create the components of the\nthree-tier architecture as usual and add your entry in the `EncryptedCollectionsConfiguration` class.\n\nEncryptedCollectionsConfiguration.java\n\n```java\n/**\n * Information about the encrypted collections in the application.\n * As I need the information in multiple places, I decided to create a configuration class with a static list of\n * the encrypted collections and their information.\n */\npublic class EncryptedCollectionsConfiguration {\n public static final List encryptedEntities = List.of(\n new EncryptedEntity(\"mydb\", \"persons\", PersonEntity.class, \"personDEK\"),\n new EncryptedEntity(\"mydb\", \"companies\", CompanyEntity.class, \"companyDEK\"));\n}\n```\n\nEverything else from the DEK generation to the encrypted collection creation with the server-side JSON Schema is fully\nautomated and taken care of transparently. All you have to do is specify\nthe `@Encrypted(algorithm = \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\")` annotation in the entity class and the field\nwill be encrypted and decrypted automatically for you when you are using the auto-implemented repositories (courtesy of\nSpring Data MongoDB, of course!).\n\n## Query by an Encrypted Field\n\nMaybe you noticed but this template implements the `findFirstBySsn(ssn)` method which means that it's possible to\nretrieve a person document by its SSN number, even if this field is encrypted.\n\n> Note that it only works because we are using a deterministic encryption algorithm.\n\nPersonRepository.java\n\n```java\n/**\n * Spring Data MongoDB repository for the PersonEntity\n */\n@Repository\npublic interface PersonRepository extends MongoRepository {\n\n PersonEntity findFirstBySsn(String ssn);\n}\n```\n\n## Wrapping Up\n\nThanks for reading my post!\n\nIf you have any questions about it, please feel free to open a question in the GitHub repository or ask a question in\nthe MongoDB Community Forum.\n\nFeel free to ping me directly in your post: @MaBeuLux88.\n \nPull requests and improvement ideas are very welcome!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt871453c21d6d0fd6/65415752d8b7e20407a86241/Spring-Data-MongoDB-CSFLE.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3a98733accb502eb/654157524ed3b2001a90c1fb/Controller-Service-Repos.png", "format": "md", "metadata": {"tags": ["Java", "Spring"], "pageDescription": "In this advanced MongoDB CSFLE Java template, you'll learn all the tips and tricks for a successful deployment of CSFLE with Spring Data MongoDB.", "contentType": "Code Example"}, "title": "How to Implement Client-Side Field Level Encryption (CSFLE) in Java with Spring Data MongoDB", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/new-time-series-collections", "action": "created", "body": "# MongoDB's New Time Series Collections\n\n## What is Time Series Data?\n\nTime-series data are measurements taken at time intervals. 
Sometimes time-series data will come into your database at high frequency - use-cases like financial transactions, stock market data, readings from smart meters, or metrics from services you're hosting over hundreds or even thousands of servers. In other cases, each measurement may only come in every few minutes. Maybe you're tracking the number of servers that you're running every few minutes to estimate your server costs for the month. Perhaps you're measuring the soil moisture of your favourite plant once a day.\n\n| | Frequent | Infrequent |\n| --------- | ------------------------------------- | ------------------------------------------- |\n| Regular | Service metrics | Number of sensors providing weather metrics |\n| Irregular | Financial transactions, Stock prices? | LPWAN data |\n\nHowever, when it comes to time-series data, it isn\u2019t all about frequency, the only thing that truly matters is the presence of time so whether your data comes every second, every 5 minutes, or every hour isn\u2019t important for using MongoDB for storing and working with time-series data.\n\n### Examples of Time-Series Data\n\nFrom the very beginning, developers have been using MongoDB to store time-series data. MongoDB can be an extremely efficient engine for storing and processing time-series data, but you'd have to know how to correctly model it to have a performant solution, but that wasn't as straightforward as it could have been.\n\nStarting in MongoDB 5.0 there is a new collection type, time-series collections, which are specifically designed for storing and working with time-series data without the hassle or need to worry about low-level model optimization.\n\n## What are Time series Collections?\n\nTime series collections are a new collection type introduced in MongoDB 5.0. On the surface, these collections look and feel like every other collection in MongoDB. You can read and write to them just like you do regular collections and even create secondary indexes with the createIndex command. However, internally, they are natively supported and optimized for storing and working with time-series data.\n\nUnder the hood, the creation of a time series collection results in a collection and an automatically created writable non-materialized view which serves as an abstraction layer. This abstraction layer allows you to always work with their data as single documents in their raw form without worry of performance implications as the actual time series collection implements a form of the bucket pattern you may already know when persisting data to disk, but these details are something you no longer need to care about when designing your schema or reading and writing your data. Users will always be able to work with the abstraction layer and not with a complicated compressed bucketed document.\n\n## Why Use MongoDB's Time Series Collections?\n\nWell because you have time-series data, right?\n\nOf course that may be true, but there are so many more reasons to use the new time series collections over regular collections for time-series data.\n\nEase of use, performance, and storage efficiency were paramount goals when creating time series collections. Time series collections allow you to work with your data model like any other collection as single documents with rich data types and structures. 
They eliminate the need to model your time-series data in a way that it can be performant ahead of time - they take care of all this for you!\n\nYou can design your document models more intuitively, the way you would with other types of MongoDB collections. The database then optimizes the storage schema\u00a0 for ingestion, retrieval, and storage by providing native compression to allow you to efficiently store your time-series data without worry about duplicated fields alongside your measurements.\n\nDespite being implemented in a different way from the collections you've used before, to optimize for time-stamped documents, it's important to remember that you can still use the MongoDB features you know and love, including things like nesting data within documents, secondary indexes, and the full breadth of analytics and data transformation functions within the aggregation framework, including joining data from other collections, using the `$lookup` operator, and creating materialized views using `$merge`.\n\n## How to Create a Time-Series Collection\n\n### All It Takes is Time\u00a0\n\nCreating a time series collection is straightforward, all it takes is a field in your data that corresponds to time, just pass the new \"timeseries'' field to the createCollection command and you\u2019re off and running. However, before we get too far ahead,\u00a0 let\u2019s walk through just how to do this and all of the options that allow you to optimize time series collections.\n\nThroughout this post, we'll show you how to create a time series collection to store documents that look like the following:\n\n```js\n{\n \"_id\" : ObjectId(\"60c0d44894c10494260da31e\"),\n \"source\" : {sensorId: 123, region: \"americas\"},\n \"airPressure\" : 99 ,\n \"windSpeed\" : 22,\n \"temp\" : { \"degreesF\": 39,\n \"degreesC\": 3.8\n },\n \"ts\" : ISODate(\"2021-05-20T10:24:51.303Z\")\n}\n\n```\n\nAs mentioned before, a time series collection can be created with just a simple time field. In order to store documents like this in a time series collection, we can pass the following to the\u00a0*createCollection*\u00a0command:\n\n```js\ndb.createCollection(\"weather\", {\n timeseries: {\n timeField: \"ts\",\n },\n});\n```\n\nYou probably won't be surprised to learn that the timeField option declares the name of the field in your documents that stores the time, in the example above, \"ts\" is the name of the timeField. The value of the field specified by timeField must be a\u00a0date type.\n\nPretty fast right? While timeseries collections only require a timeField, there are other optional parameters that can be specified at creation or in some cases at modification time which will allow you to get the most from your data and time series collections. Those optional parameters are metaField, granularity, and expireAfterSeconds.\n\n### metaField\nWhile not a required parameter, metaField allows for better optimization when specified, including the ability to create secondary indexes.\n\n```js\ndb.createCollection(\"weather\", {\n timeseries: {\n timeField: \"ts\",\n metaField: \"source\",\n }});\n```\n\nIn the example above, the metaField would be the \"source\" field: \n\n```js\n\"source\" : {sensorId: 123, region: \"americas\"}\n```\n\nThis is an object consisting of key-value pairs which describe our time-series data. 
In this example, an identifying ID and location for a sensor collecting weather data.\n\nThe metaField field can be a complicated document with nested fields, an object, or even simply a single GUID or string. The important point here is that the metaField is really just metadata which serves as a label or tag which allows you to uniquely identify the source of a time-series, and this field should never or rarely change over time.\u00a0\n\nIt is recommended to always specify a metaField, but you would especially want to use this when you have\u00a0multiple sources of data such as sensors or devices that share common measurements.\n\nThe metaField, if present, should partition the time-series data, so that measurements with the same metadata relate over time. Measurements with a common metaField for periods of time will be grouped together internally to eliminate the duplication of this field at the storage layer. The order of metadata fields is ignored in order to accommodate drivers and applications representing objects as unordered maps. Two metadata fields with the same contents but different order are considered to be identical.\u00a0\n\nAs with the timeField, the metaField is specified as the top-level field name when creating a collection. However, the metaField can be of any BSON data type except\u00a0*array*\u00a0and cannot match the timeField required by timeseries collections. When specifying the metaField, specify the top level field name as a string no matter its underlying structure or data type.\n\nData in the same time period and with the same metaField will be colocated on disk/SSD, so choice of metaField field can affect query performance.\n\n### Granularity\n\nThe granularity parameter represents a string with the following options:\n\n- \"seconds\"\n- \"minutes\"\n- \"hours\"\n\n```js\ndb.createCollection(\"weather\", {\n timeseries: {\n timeField: \"ts\",\n metaField: \"source\",\n granularity: \"minutes\",\n },\n});\n```\n\nGranularity should be set to the unit that is closest to rate of ingestion for a unique metaField value. So, for example, if the collection described above is expected to receive a measurement every 5 minutes from a single source, you should use the \"minutes\" granularity, because source has been specified as the metaField.\n\nIn the first example, where only the timeField was specified and no metaField was identified (try to avoid this!), the granularity would need to be set relative to the\u00a0*total*\u00a0rate of ingestion, across all sources.\n\nThe granularity should be thought about in relation to your metadata ingestion rate, not just your overall ingestion rate. Specifying an appropriate value allows the time series collection to be optimized for your usage.\n\nBy default, MongoDB defines the granularity to be \"seconds\", indicative of a high-frequency ingestion rate or where no metaField is specified.\n\n### expireAfterSeconds\n\nTime series data often grows at very high rates and becomes less useful as it ages. Much like last week leftovers or milk you will want to manage your data lifecycle and often that takes the form of expiring old data.\n\nJust like TTL indexes, time series collections allow you to manage your data lifecycle with the ability to automatically delete old data at a specified interval in the background. 
However, unlike TTL indexes on regular collections, time series collections do not require you to create an index to do this.\u00a0\n\nSimply specify your retention rate in seconds during creation time, as seen below, or modify it at any point in time after creation with collMod.\u00a0\n\n```js\ndb.createCollection(\"weather\", {\n timeseries: {\n timeField: \"ts\",\n metaField: \"source\",\n granularity: \"minutes\"\n },\n expireAfterSeconds: 9000 \n}); \n```\n\nThe expiry of data is only one way MongoDB natively offers you to manage your data lifecycle. In a future post we will discuss ways to automatically archive your data and efficiently read data stored in multiple locations for long periods of time using MongoDB Online Archive.\n\n### Putting it all Together\u00a0\n\nPutting it all together, we\u2019ve walked you through how to create a timeseries collection and the different options you can and should specify to get the most out of your data.\n\n```js\n{\n \"_id\" : ObjectId(\"60c0d44894c10494260da31e\"),\n \"source\" : {sensorId: 123, region: \"americas\"},\n \"airPressure\" : 99 ,\n \"windSpeed\" : 22,\n \"temp\" : { \"degreesF\": 39,\n \"degreesC\": 3.8\n },\n \"ts\" : ISODate(\"2021-05-20T10:24:51.303Z\")\n}\n```\n\nThe above document can now be efficiently stored and accessed from a time series collection using the below createCollection command.\n\n```js\ndb.createCollection(\"weather\", {\n timeseries: {\n timeField: \"ts\",\n metaField: \"source\",\n granularity: \"minutes\"\n },\n expireAfterSeconds: 9000 \n}); \n```\n\nWhile this is just an example, your document can look like nearly anything. Your schema is your choice to make with the freedom that you need not worry about how that data is compressed and persisted to disk. Optimizations will be made automatically and natively for you.\n\n## Limitations of Time Series Collections in MongoDB 5.0\n\nIn the initial MongoDB 5.0 release of time series collection there are some limitations that exist. The most notable of these limitations is that the timeseries collections are considered append only, so we do not have support on the abstraction level for update and/or delete operations. Update and/delete operations can still be performed on time series collections, but they must go directly to the collection stored on disk using the optimized storage format and a user must have the proper permissions to perform these operations.\n\nIn addition to the append only nature, in the initial release, time series collections will not work with Change Streams, Realm Sync, or Atlas Search. Lastly, time series collections allow for the creation of secondary indexes as discussed above. However, these secondary indexes can only be defined on the metaField and/or timeField.\n\nFor a full list of limitations, please consult the official MongoDB documentation page.\n\nWhile we know some of these limitations may be impactful to your current use case, we promise we're working on this right now and would love for you to provide your feedback!\n\n## Next Steps\n\nNow that you know what time series data is, when and how you should create a timeseries collection and some details of how to set parameters when creating a collection. Why don't you go create a timeseries collection now? Our next blog post will go into more detail on how to optimize your time series collection for specific use-cases.\n\nYou may be interested in migrating to a time series collection from an existing collection! 
We'll be covering this in a later post, but in the meantime, you should check out the official documentation for a list of migration tools and examples.\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn all about MongoDB's new time series collection type! This post will teach you what time series data looks like, and how to best configure time series collections to store your time series data.", "contentType": "News & Announcements"}, "title": "MongoDB's New Time Series Collections", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/swift/building-a-mobile-chat-app-using-realm-new-way", "action": "created", "body": "# Building a Mobile Chat App Using Realm \u2013 The New and Easier Way\n\nIn my last post, I walked through how to integrate Realm into a mobile chat app in Building a Mobile Chat App Using Realm \u2013 Integrating Realm into Your App. Since then, the Realm engineering team has been busy, and Realm-Swift 10.6 introduced new features that make the SDK way more \"SwiftUI-native.\" For developers, that makes integrating Realm into SwiftUI views much simpler and more robust. This article steps through building the same chat app using these new features. Everything in Building a Mobile Chat App Using Realm \u2013 Integrating Realm into Your App still works, and it's the best starting point if you're building an app with UIKit rather than SwiftUI.\n\nBoth of these articles follow-up on Building a Mobile Chat App Using Realm \u2013 Data Architecture. Read that post first if you want to understand the Realm data/partitioning architecture and the decisions behind it.\n\nThis article targets developers looking to build the Realm mobile database into their SwiftUI mobile apps and use MongoDB Atlas Device Sync.\n\nIf you've already read Building a Mobile Chat App Using Realm \u2013 Integrating Realm into Your App, then you'll find some parts unchanged here. As an example, there are no changes to the backend Realm application. I'll label those sections with \"Unchanged\" so that you know it's safe to skip over them.\n\nRChat is a chat application. Members of a chat room share messages, photos, location, and presence information with each other. This version is an iOS (Swift and SwiftUI) app, but we will use the same data model and backend Realm application to build an Android version in the future.\n\nIf you're looking to add a chat feature to your mobile app, you can repurpose the article's code and the associated repo. If not, treat it as a case study that explains the reasoning behind the data model and partitioning/syncing decisions taken. You'll likely need to make similar design choices in your apps.\n\n>\n>\n>Watch this demo of the app in action.\n>\n>:youtube]{vid=BlV9El_MJqk}\n>\n>\n\n>\n>\n>This article was updated in July 2021 to replace `objc` and `dynamic` with the `@Persisted` annotation that was introduced in Realm-Cocoa 10.10.0.\n>\n>\n\n## Prerequisites\n\nIf you want to build and run the app for yourself, this is what you'll need:\n\n- iOS14.2+\n- XCode 12.3+\n- Realm-Swift 10.6+ (recommended to use the Swift Package Manager (SPM) rather than Cocoa Pods)\n- [MongoDB Atlas account and a (free) Atlas cluster\n\n## Walkthrough\n\nThe iOS app uses MongoDB Atlas Device Sync to share data between instances of the app (e.g., the messages sent between users). This walkthrough covers both the iOS code and the backend Realm app needed to make it work. 
Remember that all of the code for the final app is available in the GitHub repo.\n\n### Create a Backend Atlas App (Unchanged)\n\nFrom the Atlas UI, select the \"App Services\" tab (formerly \"Realm\"). Select the options to indicate that you're creating a new iOS mobile app and then click \"Start a New App\".\n\nName the app \"RChat\" and click \"Create Application\".\n\nCopy the \"App ID.\" You'll need to use this in your iOS app code:\n\n### Connect iOS App to Your App (Unchanged)\n\nThe SwiftUI entry point for the app is RChatApp.swift. This is where you define your link to your Realm application (named `app`) using the App ID from your new backend Atlas App Services app:\n\n``` swift\nimport SwiftUI\nimport RealmSwift\nlet app = RealmSwift.App(id: \"rchat-xxxxx\") // TODO: Set the Realm application ID\n@main\nstruct RChatApp: SwiftUI.App {\n @StateObject var state = AppState()\n\n var body: some Scene {\n WindowGroup {\n ContentView()\n .environmentObject(state)\n }\n }\n}\n```\n\nNote that we created an instance of AppState and pass it into our top-level view (ContentView) as an `environmentObject`. This is a common SwiftUI pattern for making state information available to every view without the need to explicitly pass it down every level of the view hierarchy:\n\n``` swift\nimport SwiftUI\nimport RealmSwift\nlet app = RealmSwift.App(id: \"rchat-xxxxx\") // TODO: Set the Realm application ID\n@main\nstruct RChatApp: SwiftUI.App {\n @StateObject var state = AppState()\n var body: some Scene {\n WindowGroup {\n ContentView()\n .environmentObject(state)\n }\n }\n}\n```\n\n### Realm Model Objects\n\nThese are largely as described in Building a Mobile Chat App Using Realm \u2013 Data Architecture. I'll highlight some of the key changes using the User Object class as an example:\n\n``` swift\nimport Foundation\nimport RealmSwift\n\nclass User: Object, ObjectKeyIdentifiable {\n @Persisted var _id = UUID().uuidString\n @Persisted var partition = \"\" // \"user=_id\"\n @Persisted var userName = \"\"\n @Persisted var userPreferences: UserPreferences?\n @Persisted var lastSeenAt: Date?\n @Persisted var conversations = List()\n @Persisted var presence = \"Off-Line\"\n\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n}\n```\n\n`User` now conforms to Realm-Cocoa's `ObjectKeyIdentifiable` protocol, automatically adding identifiers to each instance that are used by SwiftUI (e.g., when iterating over results in a `ForEach` loop). It's like `Identifiable` but integrated into Realm to handle events such as Atlas Device Sync adding a new object to a result set or list.\n\n`conversations` is now a `var` rather than a `let`, allowing us to append new items to the list.\n\n### Application-Wide State: AppState\n\nThe `AppState` class is so much simpler now. Wherever possible, the opening of a Realm is now handled when opening the view that needs it.\n\nViews can pass state up and down the hierarchy. However, it can simplify state management by making some state available application-wide. In this app, we centralize this app-wide state data storage and control in an instance of the AppState class.\n\nA lot is going on in `AppState.swift`, and you can view the full file in the repo.\n\nAs part of adopting the latest Realm-Cocoa SDK feature, I no longer need to store open Realms in `AppState` (as Realms are now opened as part of loading the view that needs them). `AppState` contains the `user` attribute to represent the user currently logged into the app (and Realm). 
If `user` is set to `nil`, then no user is logged in:\n\n``` swift\nclass AppState: ObservableObject {\n ...\n var user: User?\n ...\n}\n```\n\nThe app uses the Realm SDK to interact with the back end Atlas App Services application to perform actions such as logging into Realm. Those operations can take some time as they involve accessing resources over the internet, and so we don't want the app to sit busy-waiting for a response. Instead, we use Combine publishers and subscribers to handle these events. `loginPublisher`, `logoutPublisher`, and `userRealmPublisher` are publishers to handle logging in, logging out, and opening Realms for a user:\n\n``` swift\nclass AppState: ObservableObject {\n ...\n let loginPublisher = PassthroughSubject()\n let logoutPublisher = PassthroughSubject()\n let userRealmPublisher = PassthroughSubject()\n ...\n}\n```\n\nWhen an `AppState` class is instantiated, the actions are assigned to each of the Combine publishers:\n\n``` swift\ninit() {\n _ = app.currentUser?.logOut()\n initLoginPublisher()\n initUserRealmPublisher()\n initLogoutPublisher()\n}\n```\n\nWe'll later see that an event is sent to `loginPublisher` when a user has successfully logged in. In `AppState`, we define what should be done when those events are received. Events received on `loginPublisher` trigger the opening of a realm with the partition set to `user=`, which in turn sends an event to `userRealmPublisher`:\n\n``` swift\nfunc initLoginPublisher() {\nloginPublisher\n .receive(on: DispatchQueue.main)\n .flatMap { user -> RealmPublishers.AsyncOpenPublisher in\n self.shouldIndicateActivity = true\n let realmConfig = user.configuration(partitionValue: \"user=\\(user.id)\")\n return Realm.asyncOpen(configuration: realmConfig)\n }\n .receive(on: DispatchQueue.main)\n .map {\n return $0\n }\n .subscribe(userRealmPublisher)\n .store(in: &self.cancellables)\n}\n```\n\nWhen the Realm has been opened and the Realm sent to `userRealmPublisher`, `user` is initialized with the `User` object retrieved from the Realm. The user's presence is set to `onLine`:\n\n``` swift\nfunc initUserRealmPublisher() {\n userRealmPublisher\n .sink(receiveCompletion: { result in\n if case let .failure(error) = result {\n self.error = \"Failed to log in and open user realm: \\(error.localizedDescription)\"\n }\n }, receiveValue: { realm in\n print(\"User Realm User file location: \\(realm.configuration.fileURL!.path)\")\n self.userRealm = realm\n self.user = realm.objects(User.self).first\n do {\n try realm.write {\n self.user?.presenceState = .onLine\n }\n } catch {\n self.error = \"Unable to open Realm write transaction\"\n }\n self.shouldIndicateActivity = false\n })\n .store(in: &cancellables)\n}\n```\n\nAfter logging out of Realm, we simply set `user` to nil:\n\n``` swift\nfunc initLogoutPublisher() {\n logoutPublisher\n .receive(on: DispatchQueue.main)\n .sink(receiveCompletion: { _ in\n }, receiveValue: { _ in\n self.user = nil\n })\n .store(in: &cancellables)\n}\n```\n\n### Enabling Email/Password Authentication in the Atlas App Services App (Unchanged)\n\nAfter seeing what happens **after** a user has logged into Realm, we need to circle back and enable email/password authentication in the backend Atlas App Services app. 
Fortunately, it's straightforward to do.\n\nFrom the Atlas UI, select \"Authentication\" from the lefthand menu, followed by \"Authentication Providers.\" Click the \"Edit\" button for \"Email/Password\":\n\n \n\nEnable the provider and select \"Automatically confirm users\" and \"Run a password reset function.\" Select \"New function\" and save without making any edits:\n\n \n\nDon't forget to click on \"REVIEW & DEPLOY\" whenever you've made a change to the backend Realm app.\n\n### Create `User` Document on User Registration (Unchanged)\n\nWhen a new user registers, we need to create a `User` document in Atlas that will eventually synchronize with a `User` object in the iOS app. Atlas provides authentication triggers that can automate this.\n\nSelect \"Triggers\" and then click on \"Add a Trigger\":\n\n \n\nSet the \"Trigger Type\" to \"Authentication,\" provide a name, set the \"Action Type\" to \"Create\" (user registration), set the \"Event Type\" to \"Function,\" and then select \"New Function\":\n\n \n\nName the function `createNewUserDocument` and add the code for the function:\n\n``` javascript\nexports = function({user}) {\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n const userCollection = db.collection(\"User\");\n const partition = `user=${user.id}`;\n const defaultLocation = context.values.get(\"defaultLocation\");\n const userPreferences = {\n displayName: \"\"\n };\n const userDoc = {\n _id: user.id,\n partition: partition,\n userName: user.data.email,\n userPreferences: userPreferences,\n location: context.values.get(\"defaultLocation\"),\n lastSeenAt: null,\n presence:\"Off-Line\",\n conversations: ]\n };\n return userCollection.insertOne(userDoc)\n .then(result => {\n console.log(`Added User document with _id: ${result.insertedId}`);\n }, error => {\n console.log(`Failed to insert User document: ${error}`);\n });\n};\n```\n\nNote that we set the `partition` to `user=`, which matches the partition used when the iOS app opens the User Realm.\n\n\"Save\" then \"REVIEW & DEPLOY.\"\n\n### Define Schema (Unchanged)\n\nRefer to [Building a Mobile Chat App Using Realm \u2013 Data Architecture to better understand the app's schema and partitioning rules. This article skips the analysis phase and just configures the schema.\n\nBrowse to the \"Rules\" section in the App Services UI and click on \"Add Collection.\" Set \"Database Name\" to `RChat` and \"Collection Name\" to `User`. 
We won't be accessing the `User` collection directly through App Services, so don't select a \"Permissions Template.\" Click \"Add Collection\":\n\n \n\nAt this point, I'll stop reminding you to click \"REVIEW & DEPLOY!\"\n\nSelect \"Schema,\" paste in this schema, and then click \"SAVE\":\n\n``` javascript\n{\n\"bsonType\": \"object\",\n\"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"conversations\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"displayName\": {\n \"bsonType\": \"string\"\n },\n \"id\": {\n \"bsonType\": \"string\"\n },\n \"members\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"membershipStatus\": {\n \"bsonType\": \"string\"\n },\n \"userName\": {\n \"bsonType\": \"string\"\n }\n },\n \"required\": \n \"membershipStatus\",\n \"userName\"\n ],\n \"title\": \"Member\"\n }\n },\n \"unreadCount\": {\n \"bsonType\": \"long\"\n }\n },\n \"required\": [\n \"unreadCount\",\n \"id\",\n \"displayName\"\n ],\n \"title\": \"Conversation\"\n }\n },\n \"lastSeenAt\": {\n \"bsonType\": \"date\"\n },\n \"partition\": {\n \"bsonType\": \"string\"\n },\n \"presence\": {\n \"bsonType\": \"string\"\n },\n \"userName\": {\n \"bsonType\": \"string\"\n },\n \"userPreferences\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"avatarImage\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"date\": {\n \"bsonType\": \"date\"\n },\n \"picture\": {\n \"bsonType\": \"binData\"\n },\n \"thumbNail\": {\n \"bsonType\": \"binData\"\n }\n },\n \"required\": [\n \"_id\",\n \"date\"\n ],\n \"title\": \"Photo\"\n },\n \"displayName\": {\n \"bsonType\": \"string\"\n }\n },\n \"required\": [],\n \"title\": \"UserPreferences\"\n }\n},\n\"required\": [\n \"_id\",\n \"partition\",\n \"userName\",\n \"presence\"\n],\n\"title\": \"User\"\n}\n```\n\n \n\nRepeat for the `Chatster` schema:\n\n``` javascript\n{\n\"bsonType\": \"object\",\n\"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"avatarImage\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"date\": {\n \"bsonType\": \"date\"\n },\n \"picture\": {\n \"bsonType\": \"binData\"\n },\n \"thumbNail\": {\n \"bsonType\": \"binData\"\n }\n },\n \"required\": [\n \"_id\",\n \"date\"\n ],\n \"title\": \"Photo\"\n },\n \"displayName\": {\n \"bsonType\": \"string\"\n },\n \"lastSeenAt\": {\n \"bsonType\": \"date\"\n },\n \"partition\": {\n \"bsonType\": \"string\"\n },\n \"presence\": {\n \"bsonType\": \"string\"\n },\n \"userName\": {\n \"bsonType\": \"string\"\n }\n},\n\"required\": [\n \"_id\",\n \"partition\",\n \"presence\",\n \"userName\"\n],\n\"title\": \"Chatster\"\n}\n```\n\nAnd for the `ChatMessage` collection:\n\n``` javascript\n{\n\"bsonType\": \"object\",\n\"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"author\": {\n \"bsonType\": \"string\"\n },\n \"image\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"date\": {\n \"bsonType\": \"date\"\n },\n \"picture\": {\n \"bsonType\": \"binData\"\n },\n \"thumbNail\": {\n \"bsonType\": \"binData\"\n }\n },\n \"required\": [\n \"_id\",\n \"date\"\n ],\n \"title\": \"Photo\"\n },\n \"location\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"double\"\n }\n },\n \"partition\": {\n \"bsonType\": \"string\"\n },\n \"text\": {\n \"bsonType\": \"string\"\n },\n \"timestamp\": {\n \"bsonType\": 
\"date\"\n }\n},\n\"required\": [\n \"_id\",\n \"partition\",\n \"text\",\n \"timestamp\"\n],\n\"title\": \"ChatMessage\"\n}\n```\n\n### Enable Atlas Device Sync (Unchanged)\n\nWe use Atlas Device Sync to synchronize objects between instances of the iOS app (and we'll extend this app also to include Android). It also syncs those objects with Atlas collections. Note that there are three options to create a schema:\n\n1. Manually code the schema as a JSON schema document.\n2. Derive the schema from existing data stored in Atlas. (We don't yet have any data and so this isn't an option here.)\n3. Derive the schema from the Realm objects used in the mobile app.\n\nWe've already specified the schema and so will stick to the first option.\n\nSelect \"Sync\" and then select your Atlas cluster. Set the \"Partition Key\" to the `partition` attribute (it appears in the list as it's already in the schema for all three collections), and the rules for whether a user can sync with a given partition:\n\n \n\nThe \"Read\" rule controls whether a user can establish a one-way read-only sync relationship to the mobile app for a given user and partition. In this case, the rule delegates this to an Atlas Function named `canReadPartition`:\n\n``` json\n{\n \"%%true\": {\n \"%function\": {\n \"arguments\": [\n \"%%partition\"\n ],\n \"name\": \"canReadPartition\"\n }\n }\n}\n```\n\nThe \"Write\" rule delegates to the `canWritePartition`:\n\n``` json\n{\n \"%%true\": {\n \"%function\": {\n \"arguments\": [\n \"%%partition\"\n ],\n \"name\": \"canWritePartition\"\n }\n }\n}\n```\n\nOnce more, we've already seen those functions in [Building a Mobile Chat App Using Realm \u2013 Data Architecture but I'll include the code here for completeness.\n\ncanReadPartition:\n\n``` javascript\nexports = function(partition) {\n console.log(`Checking if can sync a read for partition = ${partition}`);\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n const chatsterCollection = db.collection(\"Chatster\");\n const userCollection = db.collection(\"User\");\n const chatCollection = db.collection(\"ChatMessage\");\n const user = context.user;\n let partitionKey = \"\";\n let partitionVale = \"\";\n const splitPartition = partition.split(\"=\");\n if (splitPartition.length == 2) {\n partitionKey = splitPartition0];\n partitionValue = splitPartition[1];\n console.log(`Partition key = ${partitionKey}; partition value = ${partitionValue}`);\n } else {\n console.log(`Couldn't extract the partition key/value from ${partition}`);\n return false;\n }\n switch (partitionKey) {\n case \"user\":\n console.log(`Checking if partitionValue(${partitionValue}) matches user.id(${user.id}) \u2013 ${partitionKey === user.id}`);\n return partitionValue === user.id;\n case \"conversation\":\n console.log(`Looking up User document for _id = ${user.id}`);\n return userCollection.findOne({ _id: user.id })\n .then (userDoc => {\n if (userDoc.conversations) {\n let foundMatch = false;\n userDoc.conversations.forEach( conversation => {\n console.log(`Checking if conversaion.id (${conversation.id}) === ${partitionValue}`)\n if (conversation.id === partitionValue) {\n console.log(`Found matching conversation element for id = ${partitionValue}`);\n foundMatch = true;\n }\n });\n if (foundMatch) {\n console.log(`Found Match`);\n return true;\n } else {\n console.log(`Checked all of the user's conversations but found none with id == ${partitionValue}`);\n return false;\n }\n } else {\n console.log(`No conversations attribute in User doc`);\n 
return false;\n }\n }, error => {\n console.log(`Unable to read User document: ${error}`);\n return false;\n });\n case \"all-users\":\n console.log(`Any user can read all-users partitions`);\n return true;\n default:\n console.log(`Unexpected partition key: ${partitionKey}`);\n return false;\n }\n};\n```\n\n[canWritePartition:\n\n``` javascript\nexports = function(partition) {\nconsole.log(`Checking if can sync a write for partition = ${partition}`);\nconst db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\nconst chatsterCollection = db.collection(\"Chatster\");\nconst userCollection = db.collection(\"User\");\nconst chatCollection = db.collection(\"ChatMessage\");\nconst user = context.user;\nlet partitionKey = \"\";\nlet partitionVale = \"\";\nconst splitPartition = partition.split(\"=\");\nif (splitPartition.length == 2) {\n partitionKey = splitPartition0];\n partitionValue = splitPartition[1];\n console.log(`Partition key = ${partitionKey}; partition value = ${partitionValue}`);\n} else {\n console.log(`Couldn't extract the partition key/value from ${partition}`);\n return false;\n}\n switch (partitionKey) {\n case \"user\":\n console.log(`Checking if partitionKey(${partitionValue}) matches user.id(${user.id}) \u2013 ${partitionKey === user.id}`);\n return partitionValue === user.id;\n case \"conversation\":\n console.log(`Looking up User document for _id = ${user.id}`);\n return userCollection.findOne({ _id: user.id })\n .then (userDoc => {\n if (userDoc.conversations) {\n let foundMatch = false;\n userDoc.conversations.forEach( conversation => {\n console.log(`Checking if conversaion.id (${conversation.id}) === ${partitionValue}`)\n if (conversation.id === partitionValue) {\n console.log(`Found matching conversation element for id = ${partitionValue}`);\n foundMatch = true;\n }\n });\n if (foundMatch) {\n console.log(`Found Match`);\n return true;\n } else {\n console.log(`Checked all of the user's conversations but found none with id == ${partitionValue}`);\n return false;\n }\n } else {\n console.log(`No conversations attribute in User doc`);\n return false;\n }\n }, error => {\n console.log(`Unable to read User document: ${error}`);\n return false;\n });\n case \"all-users\":\n console.log(`No user can write to an all-users partitions`);\n return false;\n default:\n console.log(`Unexpected partition key: ${partitionKey}`);\n return false;\n }\n};\n```\n\nTo create these functions, select \"Functions\" and click \"Create New Function.\" Make sure you type the function name precisely, set \"Authentication\" to \"System,\" and turn on the \"Private\" switch (which means it can't be called directly from external services such as our mobile app):\n\n \n\n### Linking User and Chatster Documents (Unchanged)\n\nAs described in [Building a Mobile Chat App Using Realm \u2013 Data Architecture, there are relationships between different `User` and `Chatster` documents. Now that we've defined the schemas and enabled Device Sync, it's convenient to add the Atlas Function and Trigger to maintain those relationships.\n\nCreate a Function named `userDocWrittenTo`, set \"Authentication\" to \"System,\" and make it private. 
This article is aiming to focus on the iOS app more than the backend app, and so we won't delve into this code:\n\n``` javascript\nexports = function(changeEvent) {\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n const chatster = db.collection(\"Chatster\");\n const userCollection = db.collection(\"User\");\n const docId = changeEvent.documentKey._id;\n const user = changeEvent.fullDocument;\n let conversationsChanged = false;\n console.log(`Mirroring user for docId=${docId}. operationType = ${changeEvent.operationType}`);\n switch (changeEvent.operationType) {\n case \"insert\":\n case \"replace\":\n case \"update\":\n console.log(`Writing data for ${user.userName}`);\n let chatsterDoc = {\n _id: user._id,\n partition: \"all-users=all-the-users\",\n userName: user.userName,\n lastSeenAt: user.lastSeenAt,\n presence: user.presence\n };\n if (user.userPreferences) {\n const prefs = user.userPreferences;\n chatsterDoc.displayName = prefs.displayName;\n if (prefs.avatarImage && prefs.avatarImage._id) {\n console.log(`Copying avatarImage`);\n chatsterDoc.avatarImage = prefs.avatarImage;\n console.log(`id of avatarImage = ${prefs.avatarImage._id}`);\n }\n }\n chatster.replaceOne({ _id: user._id }, chatsterDoc, { upsert: true })\n .then (() => {\n console.log(`Wrote Chatster document for _id: ${docId}`);\n }, error => {\n console.log(`Failed to write Chatster document for _id=${docId}: ${error}`);\n });\n\n if (user.conversations && user.conversations.length > 0) {\n for (i = 0; i < user.conversations.length; i++) {\n let membersToAdd = ];\n if (user.conversations[i].members.length > 0) {\n for (j = 0; j < user.conversations[i].members.length; j++) {\n if (user.conversations[i].members[j].membershipStatus == \"User added, but invite pending\") {\n membersToAdd.push(user.conversations[i].members[j].userName);\n user.conversations[i].members[j].membershipStatus = \"Membership active\";\n conversationsChanged = true;\n }\n }\n }\n if (membersToAdd.length > 0) {\n userCollection.updateMany({userName: {$in: membersToAdd}}, {$push: {conversations: user.conversations[i]}})\n .then (result => {\n console.log(`Updated ${result.modifiedCount} other User documents`);\n }, error => {\n console.log(`Failed to copy new conversation to other users: ${error}`);\n });\n }\n }\n }\n if (conversationsChanged) {\n userCollection.updateOne({_id: user._id}, {$set: {conversations: user.conversations}});\n }\n break;\n case \"delete\":\n chatster.deleteOne({_id: docId})\n .then (() => {\n console.log(`Deleted Chatster document for _id: ${docId}`);\n }, error => {\n console.log(`Failed to delete Chatster document for _id=${docId}: ${error}`);\n });\n break;\n }\n};\n```\n\nSet up a database trigger to execute the new function whenever anything in the `User` collection changes:\n\n \n\n### Registering and Logging in from the iOS App\n\nThis section is virtually unchanged. 
As part of using the new Realm SDK features, there is now less in `AppState` (including fewer publishers), and so less attributes need to be set up as part of the login process.\n\nWe've now created enough of the backend app that mobile apps can now register new Realm users and use them to log into the app.\n\nThe app's top-level SwiftUI view is [ContentView, which decides which sub-view to show based on whether our `AppState` environment object indicates that a user is logged in or not:\n\n``` swift\n@EnvironmentObject var state: AppState\n...\nif state.loggedIn {\n if (state.user != nil) && !state.user!.isProfileSet || showingProfileView {\n SetProfileView(isPresented: $showingProfileView)\n .environment(\\.realmConfiguration, app.currentUser!.configuration(partitionValue: \"user=\\(state.user?._id ?? \"\")\"))\n } else {\n ConversationListView()\n .environment(\\.realmConfiguration, app.currentUser!.configuration(partitionValue: \"user=\\(state.user?._id ?? \"\")\"))\n .navigationBarTitle(\"Chats\", displayMode: .inline)\n .navigationBarItems(\n trailing: state.loggedIn && !state.shouldIndicateActivity ? UserAvatarView(\n photo: state.user?.userPreferences?.avatarImage,\n online: true) { showingProfileView.toggle() } : nil\n )\n }\n} else {\n LoginView()\n}\n...\n```\n\nWhen first run, no user is logged in, and so `LoginView` is displayed.\n\nNote that `AppState.loggedIn` checks whether a user is currently logged into the Realm `app`:\n\n``` swift\nvar loggedIn: Bool {\n app.currentUser != nil && user != nil && app.currentUser?.state == .loggedIn\n}\n```\n\nThe UI for LoginView contains cells to provide the user's email address and password, a radio button to indicate whether this is a new user, and a button to register or log in a user:\n\n \n\nClicking the button executes one of two functions:\n\n``` swift\n...\nCallToActionButton(\n title: newUser ? \"Register User\" : \"Log In\",\n action: { self.userAction(username: self.username, password: self.password) })\n...\nprivate func userAction(username: String, password: String) {\n state.shouldIndicateActivity = true\n if newUser {\n signup(username: username, password: password)\n } else {\n login(username: username, password: password)\n }\n}\n```\n\n`signup` makes an asynchronous call to the Realm SDK to register the new user. Through a Combine pipeline, `signup` receives an event when the registration completes, which triggers it to invoke the `login` function:\n\n``` swift\nprivate func signup(username: String, password: String) {\n if username.isEmpty || password.isEmpty {\n state.shouldIndicateActivity = false\n return\n }\n self.state.error = nil\n app.emailPasswordAuth.registerUser(email: username, password: password)\n .receive(on: DispatchQueue.main)\n .sink(receiveCompletion: {\n state.shouldIndicateActivity = false\n switch $0 {\n case .finished:\n break\n case .failure(let error):\n self.state.error = error.localizedDescription\n }\n }, receiveValue: {\n self.state.error = nil\n login(username: username, password: password)\n })\n .store(in: &state.cancellables)\n}\n```\n\nThe `login` function uses the Realm SDK to log in the user asynchronously. 
If/when the Realm login succeeds, the Combine pipeline sends the Realm user to the `chatsterLoginPublisher` and `loginPublisher` publishers (recall that we've seen how those are handled within the `AppState` class):\n\n``` swift\nprivate func login(username: String, password: String) {\n if username.isEmpty || password.isEmpty {\n state.shouldIndicateActivity = false\n return\n }\n self.state.error = nil\n app.login(credentials: .emailPassword(email: username, password: password))\n .receive(on: DispatchQueue.main)\n .sink(receiveCompletion: {\n state.shouldIndicateActivity = false\n switch $0 {\n case .finished:\n break\n case .failure(let error):\n self.state.error = error.localizedDescription\n }\n }, receiveValue: {\n self.state.error = nil\n state.loginPublisher.send($0)\n })\n .store(in: &state.cancellables)\n}\n```\n\n### Saving the User Profile\n\nOn being logged in for the first time, the user is presented with SetProfileView. (They can also return here later by clicking on their avatar.) This is a SwiftUI sheet where the user can set their profile and preferences by interacting with the UI and then clicking \"Save User Profile\":\n\n \n\nWhen the view loads, the UI is populated with any existing profile information found in the `User` object in the `AppState` environment object:\n\n``` swift\n...\n@EnvironmentObject var state: AppState\n...\n.onAppear { initData() }\n...\nprivate func initData() {\n displayName = state.user?.userPreferences?.displayName ?? \"\"\n photo = state.user?.userPreferences?.avatarImage\n}\n```\n\nAs the user updates the UI elements, the Realm `User` object isn't changed. It's not until they click \"Save User Profile\" that we update the `User` object. `state.user` is an object that's being managed by Realm, and so it must be updated within a Realm transaction. Using one of the new Realm SDK features, the Realm for this user's partition is made available in `SetProfileView` by injecting it into the environment from `ContentView`:\n\n``` swift\nSetProfileView(isPresented: $showingProfileView)\n .environment(\\.realmConfiguration,\n app.currentUser!.configuration(partitionValue: \"user=\\(state.user?._id ?? \"\")\"))\n```\n\n`SetProfileView` receives `userRealm` through the environment and uses it to create a transaction (line 10):\n\n``` swift\n...\n@EnvironmentObject var state: AppState\n@Environment(\\.realm) var userRealm\n...\nCallToActionButton(title: \"Save User Profile\", action: saveProfile)\n...\nprivate func saveProfile() {\n state.shouldIndicateActivity = true\n do {\n try userRealm.write {\n state.user?.userPreferences?.displayName = displayName\n if photoAdded {\n guard let newPhoto = photo else {\n print(\"Missing photo\")\n state.shouldIndicateActivity = false\n return\n }\n state.user?.userPreferences?.avatarImage = newPhoto\n }\n state.user?.presenceState = .onLine\n }\n } catch {\n state.error = \"Unable to open Realm write transaction\"\n }\n}\n```\n\nOnce saved to the local Realm, Device Sync copies changes made to the `User` object to the associated `User` document in Atlas.\n\n### List of Conversations\n\nOnce the user has logged in and set up their profile information, they're presented with the `ConversationListView`. 
Again, we use the new SDK feature to implicitly open the Realm for this user partition and pass it through the environment from `ContentView`:\n\n``` swift\nif state.loggedIn {\n if (state.user != nil) && !state.user!.isProfileSet || showingProfileView {\n SetProfileView(isPresented: $showingProfileView)\n .environment(\\.realmConfiguration,\n app.currentUser!.configuration(partitionValue: \"user=\\(state.user?._id ?? \"\")\"))\n } else {\n ConversationListView()\n .environment(\\.realmConfiguration, \n app.currentUser!.configuration(partitionValue: \"user=\\(state.user?._id ?? \"\")\"))\n .navigationBarTitle(\"Chats\", displayMode: .inline)\n .navigationBarItems(\n trailing: state.loggedIn && !state.shouldIndicateActivity ? UserAvatarView(\n photo: state.user?.userPreferences?.avatarImage,\n online: true) { showingProfileView.toggle() } : nil\n )\n }\n} else {\n LoginView()\n}\n```\n\nConversationListView receives the Realm through the environment and then uses another new Realm SDK feature (`@ObservedResults`) to set `users` to be a live result set of all `User` objects in the partition (as each user has their own partition, there will be exactly one `User` document in `users`):\n\n``` swift\n@ObservedResults(User.self) var users\n```\n\nConversationListView displays a list of all the conversations that the user is currently a member of (initially none) by looping over `conversations` within their `User` Realm object:\n\n``` swift\n@ObservedResults(User.self) var users\n...\nprivate let sortDescriptors = \n SortDescriptor(keyPath: \"unreadCount\", ascending: false),\n SortDescriptor(keyPath: \"displayName\", ascending: true)\n]\n...\nif let conversations = users[0].conversations.sorted(by: sortDescriptors) {\n List {\n ForEach(conversations) { conversation in\n Button(action: {\n self.conversation = conversation\n showConversation.toggle()\n }) { ConversationCardView(conversation: conversation, isPreview: isPreview) }\n }\n }\n ...\n}\n```\n\nAt any time, another user can include you in a new group conversation. This view needs to reflect those changes as they happen:\n\n \n\nWhen the other user adds us to a conversation, our `User` document is updated automatically through the magic of Atlas Device Sync and our Atlas Trigger. Prior to Realm-Cocoa 10.6, we needed to observe the Realm and trick SwiftUI into refreshing the view when changes were received. The Realm/SwiftUI integration now refreshes the view automatically.\n\n### Creating New Conversations\n\nWhen you click in the new conversation button in `ConversationListView`, a SwiftUI sheet is activated to host `NewConversationView`. 
This time, we implicitly open and pass in the `Chatster` Realm (for the universal partition `all-users=all-the-users`:\n\n``` swift\n.sheet(isPresented: $showingAddChat) {\n NewConversationView()\n .environmentObject(state)\n .environment(\\.realmConfiguration, app.currentUser!.configuration(partitionValue: \"all-users=all-the-users\"))\n```\n\n[NewConversationView creates a live Realm result set (`chatsters`) from the Realm passed through the environment:\n\n``` swift\n@ObservedResults(Chatster.self) var chatsters\n```\n\n`NewConversationView` is similar to `SetProfileView.` in that it lets the user provide a number of details which are then saved to Realm when the \"Save\" button is tapped.\n\nIn order to use the \"Realm injection\" approach, we now need to delegate the saving of the `User` object to another view (`NewConversationView` received the `Chatster` Realm but the updated `User` object needs be saved in a transaction for the `User` Realm):\n\n``` swift\ncode content\nSaveConversationButton(name: name, members: members, done: { presentationMode.wrappedValue.dismiss() })\n .environment(\\.realmConfiguration,\n app.currentUser!.configuration(partitionValue: \"user=\\(state.user?._id ?? \"\")\"))\n```\n\nSomething that we haven't covered yet is applying a filter to the live Realm search results. Here we filter on the `userName` within the Chatster objects:\n\n``` swift\n@ObservedResults(Chatster.self) var chatsters\n...\nprivate func searchUsers() {\n var candidateChatsters: Results\n if candidateMember == \"\" {\n candidateChatsters = chatsters\n } else {\n let predicate = NSPredicate(format: \"userName CONTAINScd] %@\", candidateMember)\n candidateChatsters = chatsters.filter(predicate)\n }\n candidateMembers = []\n candidateChatsters.forEach { chatster in\n if !members.contains(chatster.userName) && chatster.userName != state.user?.userName {\n candidateMembers.append(chatster.userName)\n }\n }\n}\n```\n\n### Conversation Status (Unchanged)\n\n \n\nWhen the status of a conversation changes (users go online/offline or new messages are received), the card displaying the conversation details should update.\n\nWe already have a Function to set the `presence` status in `Chatster` documents/objects when users log on or off. All `Chatster` objects are readable by all users, and so [ConversationCardContentsView can already take advantage of that information.\n\nThe `conversation.unreadCount` is part of the `User` object, and so we need another Atlas Trigger to update that whenever a new chat message is posted to a conversation.\n\nWe add a new Atlas Function `chatMessageChange` that's configured as private and with \"System\" authentication (just like our other functions). 
This is the function code that will increment the `unreadCount` for all `User` documents for members of the conversation:\n\n``` javascript\nexports = function(changeEvent) {\n if (changeEvent.operationType != \"insert\") {\n console.log(`ChatMessage ${changeEvent.operationType} event \u2013 currently ignored.`);\n return;\n }\n\n console.log(`ChatMessage Insert event being processed`);\n let userCollection = context.services.get(\"mongodb-atlas\").db(\"RChat\").collection(\"User\");\n let chatMessage = changeEvent.fullDocument;\n let conversation = \"\";\n\n if (chatMessage.partition) {\n const splitPartition = chatMessage.partition.split(\"=\");\n if (splitPartition.length == 2) {\n conversation = splitPartition1];\n console.log(`Partition/conversation = ${conversation}`);\n } else {\n console.log(\"Couldn't extract the conversation from partition ${chatMessage.partition}\");\n return;\n }\n } else {\n console.log(\"partition not set\");\n return;\n }\n\n const matchingUserQuery = {\n conversations: {\n $elemMatch: {\n id: conversation\n }\n }\n };\n\n const updateOperator = {\n $inc: {\n \"conversations.$[element].unreadCount\": 1\n }\n };\n\n const arrayFilter = {\n arrayFilters:[\n {\n \"element.id\": conversation\n }\n ]\n };\n\n userCollection.updateMany(matchingUserQuery, updateOperator, arrayFilter)\n .then ( result => {\n console.log(`Matched ${result.matchedCount} User docs; updated ${result.modifiedCount}`);\n }, error => {\n console.log(`Failed to match and update User docs: ${error}`);\n });\n};\n```\n\nThat function should be invoked by a new database trigger (`ChatMessageChange`) to fire whenever a document is inserted into the `RChat.ChatMessage` collection.\n\n### Within the Chat Room\n\n \n\n[ChatRoomView has a lot of similarities with `ConversationListView`, but with one fundamental difference. Each conversation/chat room has its own partition, and so when opening a conversation, you need to open a new Realm. Again, we use the new SDK feature to open and pass in the Realm for the appropriate conversation partition:\n\n``` swift\nChatRoomBubblesView(conversation: conversation)\n .environment(\\.realmConfiguration, app.currentUser!.configuration(partitionValue: \"conversation=\\(conversation.id)\"))\n```\n\nIf you worked through Building a Mobile Chat App Using Realm \u2013 Integrating Realm into Your App, then you may have noticed that I had to introduce an extra view layer\u2014`ChatRoomBubblesView`\u2014in order to open the Conversation Realm. This is because you can only pass in a single Realm through the environment, and `ChatRoomView` needed the User Realm. On the plus side, we no longer need all of the boilerplate code to open the Realm from the view's `onApppear` method explicitly.\n\nChatRoomBubblesView sorts the Realm result set by timestamp (we want the most recent chat message to appear at the bottom of the List):\n\n``` swift\n@ObservedResults(ChatMessage.self, \n sortDescriptor: SortDescriptor(keyPath: \"timestamp\", ascending: true)) var chats.\n```\n\nThe Realm/SwiftUI integration means that the UI will automatically refresh whenever a new chat message is added to the Realm, but I also want to scroll to the bottom of the list so that the latest message is visible. We can achieve this by monitoring the Realm. Note that we only open a `Conversation` Realm when the user opens the associated view because having too many realms open concurrently can exhaust resources. 
It's also important that we stop observing the Realm by setting it to `nil` when leaving the view:\n\n``` swift\n@State private var realmChatsNotificationToken: NotificationToken?\n@State private var latestChatId = \"\"\n...\nScrollView(.vertical) {\n ScrollViewReader { (proxy: ScrollViewProxy) in\n VStack {\n ForEach(chats) { chatMessage in\n ChatBubbleView(chatMessage: chatMessage,\n authorName: chatMessage.author != state.user?.userName ? chatMessage.author : nil,\n isPreview: isPreview)\n }\n }\n .onAppear {\n scrollToBottom()\n withAnimation(.linear(duration: 0.2)) {\n proxy.scrollTo(latestChatId, anchor: .bottom)\n }\n }\n .onChange(of: latestChatId) { target in\n withAnimation {\n proxy.scrollTo(target, anchor: .bottom)\n }\n }\n }\n}\n...\n.onAppear { loadChatRoom() }\n.onDisappear { closeChatRoom() }\n...\nprivate func loadChatRoom() {\n scrollToBottom()\n realmChatsNotificationToken = chats.thaw()?.observe { _ in\n scrollToBottom()\n }\n}\n\nprivate func closeChatRoom() {\n if let token = realmChatsNotificationToken {\n token.invalidate()\n }\n}\n\nprivate func scrollToBottom() {\n latestChatId = chats.last?._id ?? \"\"\n}\n```\n\nNote that we clear the notification token when leaving the view, ensuring that resources aren't wasted.\n\nTo send a message, all the app needs to do is to add the new chat message to Realm. Atlas Device Sync will then copy it to Atlas, where it is then synced to the other users. Note that we no longer need to explicitly open a Realm transaction to append the new chat message to the Realm that was received through the environment:\n\n``` swift\n@ObservedResults(ChatMessage.self, sortDescriptor: SortDescriptor(keyPath: \"timestamp\", ascending: true)) var chats\n...\nprivate func sendMessage(chatMessage: ChatMessage) {\n guard let conversataionString = conversation else {\n print(\"comversation not set\")\n return\n }\n chatMessage.conversationId = conversataionString.id\n $chats.append(chatMessage)\n}\n```\n\n## Summary\n\nSince the release of Building a Mobile Chat App Using Realm \u2013 Integrating Realm into Your App, Realm-Swift 10.6 added new features that make working with Realm and SwiftUI simpler. Simply by passing the Realm configuration through the environment, the Realm is opened and made available to the view, and that view can go on to make updates without explicitly starting a transaction. This article has shown how those new features can be used to simplify your code. 
It has gone through the key steps you need to take when building a mobile app using Realm, including:\n\n- Managing the user lifecycle: registering, authenticating, logging in, and logging out.\n- Managing and storing user profile information.\n- Adding objects to Realm.\n- Performing searches on Realm data.\n- Syncing data between your mobile apps and with MongoDB Atlas.\n- Reacting to data changes synced from other devices.\n- Adding some backend magic using Atlas Triggers and Functions.\n\nWe've skipped a lot of code and functionality in this article, and it's worth looking through the rest of the app to see how to use features such as these from a SwiftUI iOS app:\n\n- Location data\n- Maps\n- Camera and photo library\n- Actions when minimizing your app\n- Notifications\n\nWe wrote the iOS version of the app first, but we plan on adding an Android (Kotlin) version soon\u2014keep checking the developer hub and the repo for updates.\n\n## References\n\n- GitHub Repo for this app\n- Read Building a Mobile Chat App Using Realm \u2013 Data Architecture to understand the data model and partitioning strategy behind the RChat app\n- Read Building a Mobile Chat App Using Realm \u2013 Integrating Realm into Your App if you want to know how to build Realm into your app without using the new SwiftUI featured in Realm-Cocoa 10.6 (for example, if you need to use UIKit)\n- If you're building your first SwiftUI/Realm app, then check out Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine\n- GitHub Repo for Realm-Cocoa SDK\n- Realm Swift SDK documentation\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Swift", "Realm", "iOS", "Mobile"], "pageDescription": "How to incorporate Realm into your iOS App. 
Building a chat app with SwiftUI and Realm Swift \u2013 the new and easier way to work with Realm and SwiftUI", "contentType": "Code Example"}, "title": "Building a Mobile Chat App Using Realm \u2013 The New and Easier Way", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/python-starlette-stitch", "action": "created", "body": "\n \n\nHOME!\n\n \n \n \n MongoBnB\n \n \n \n {% for property in response %}\n \n \n \n {{ property.name }} (Up to {{ property.guests }} guests)\n \n\n{{ property.address }}\n\n \n {{ property.summary }}\n \n\n \n\n \n ${{ property.price }}/night (+${{ property.cleaning_fee }} Cleaning Fee)\n \n\n \n Details\n Book\n \n \n {% endfor %}\n \n \n\n \n \n \n MongoBnB\n Back\n \n \n \n\n \n \n \n {{ property.name }} (Up to {{ property.guests }} guests)\n \n\n{{ property.address }}\n\n \n {{ property.summary }}\n \n\n \n \n {% for amenity in property.amenities %}\n {{ amenity }}\n {% endfor %}\n \n\n \n ${{ property.price }}/night (+${{ property.cleaning_fee }} Cleaning Fee)\n \n\n \n Book\n \n \n \n \n\n \n \n \n MongoBnB\n Back\n \n \n \n\n \n \n Confirmed!\n \n\nYour booking confirmation for {{request.path_params['id']}} is {{confirmation}}\n\n \n \n \n \n", "format": "md", "metadata": {"tags": ["Python"], "pageDescription": "Learn how to build a property booking website in Python with Starlette, MongoDB, and Twilio.", "contentType": "Tutorial"}, "title": "Build a Property Booking Website with Starlette, MongoDB, and Twilio", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-data-api-excel-power-query", "action": "created", "body": "# Using the Atlas Data API from Excel with Power Query\n\n## Data Science and the Ubiquity of Excel\n\n> This tutorial discusses the preview version of the Atlas Data API which is now generally available with more features and functionality. Learn more about the GA version here.\n\nWhen you ask what tools you should learn to be a data scientist, you will hear names like *Spark, Jupyter notebooks, R, Pandas*, and *Numpy* mentioned. Many enterprise data wranglers, on the other hand, have been using, and continue to use, industry heavyweights like SAS, SPSS, and Matlab as they have for the last few decades.\n\nThe truth is, though, that the majority of back-office data science is still performed using the ubiquitous *Microsoft Excel*.\n\nExcel has been the go-to choice for importing, manipulating, analysing, and visualising data for 34 years and has more capabilities and features than most of us would ever believe. It would therefore be wrong to have a series on accessing data in MongoDB with the data API without including how to get data into Excel.\n\nThis is also unique in this series or articles in not requiring any imperative coding at all. We will use the Power Query functionality in Excel to both fetch raw data, and to push summarization tasks down to MongoDB and retrieve the results.\n\nThe MongoDB Atlas Data API is an HTTPS-based API that allows us to read and write data in Atlas, where a MongoDB driver library is either not available or not desirable. In this article, we will see how a business analyst or other back-office user, who often may not be a professional Developer, can access data from, and record data, in Atlas. 
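To make that concrete, the sketch below shows roughly what such a Data API request looks like on the wire, expressed as a small JavaScript `fetch` call. You will not need to write any of this yourself — Power Query will build and send the equivalent request for us later in this article — and the app ID, API key, and cluster name shown are placeholders for your own values.

```javascript
// Illustration only: the same "find" call we will later build with Power Query.
// Replace YOUR-APP-ID, YOUR-API-KEY, and Cluster0 with your own values.
async function findHouses() {
  const response = await fetch(
    "https://data.mongodb-api.com/app/YOUR-APP-ID/endpoint/data/beta/action/find",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "api-key": "YOUR-API-KEY",
      },
      body: JSON.stringify({
        dataSource: "Cluster0",
        database: "sample_airbnb",
        collection: "listingsAndReviews",
        filter: { property_type: "House" },
      }),
    }
  );
  const { documents } = await response.json();
  return documents;
}

findHouses().then((docs) => console.log(`Found ${docs.length} houses`));
```

Everything that follows does the same thing, but entirely from inside Excel, with no code beyond the Power Query definitions.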
The Atlas Data API can easily be used by users, unable to create or configure back-end services, who simply want to work with data in tools they know like Google Sheets or Excel.\n\n## Prerequisites\n\nTo access the data API using Power Query in Excel, we will need a version of Excel that supports it. Power Query is only available on the Windows desktop version, not on a Mac or via the browser-based Office 365 version of Excel.\n\nWe will also need an Atlas cluster for which we have enabled the data API, and our **endpoint URL** and **API key**. You can learn how to get these in this article or this video if you do not have them already.\n\nA common use-case of Atlas with Microsoft Excel sheets might be to retrieve some subset of business data to analyse or to produce an export for a third party. To demonstrate this, we first need to have some business data available in MongoDB Atlas, this can be added by selecting the three dots next to our cluster name and choosing \"Load Sample Dataset\" or following instructions here.\n\n## Using Excel Power Query with HTTPS POST Requests\n\nIf we open up a new blank Excel workbook and then go to the **Data** ribbon, we can see on the left-hand side an option to get data **From Web**. Unfortunately, Microsoft has chosen in the wizard that this launches, to restrict data retrieval to API's that use *GET* rather than *POST* as the HTTP verb to request data.\n\n> An HTTP GET request is passed all of its data as part of the URL, the values after the website and path encodes additional parts to the request, normally in a simple key-value format. A POST request sends the data as a second part of the request and is not subject to the same length and security limitations a GET has.\n\nHTTP *GET* is used for many simple read-only APIs, but the richness and complexity of queries and aggregations possible using the Atlas Data API. do not lend themselves to passing data in a GET rather than the body of a *POST*, so we are required to use a *POST* request instead.\n\nFortunately, Excel and Power Query do support *POST* requests when creating a query from scratch using what Microsoft calls a **Blank Query**.\n\nTo call a web service with a *POST* from Excel, start with a new **Blank Workbook**.\n\nClick on **Data** on the menu bar to show the Data Ribbon. Then click **Get Data** on the far left and choose **From Other Sources->Blank Query**. It's right at the bottom of the ribbon bar dropdown.\n\nWe are then presented with the *Query Editor*.\n\nWe now need to use the *Advanced Editor* to define our 'JSON' payload, and send it via an HTTP *POST* request. Click **Advanced Editor** on the left to show the existing *Blank* Query.\n\nThis has two blocks. The *let* part is a set of transformations to fetch and manipulate data and the *in* part defines what the final data set should be called.\n\nThis is using *Power Query M* syntax. To help understand the next steps, let's summarise the syntax for that.\n## Power Query M syntax in a nutshell\nPower Query M can have constant strings and numbers. Constant strings are denoted by double quotes like \"MongoDB.\" Numbers are just the unquoted number alone, i.e., 5.23. Constants cannot be on the left side of an assignment.\n\nSomething not surrounded by quotes is a variable\u2014e.g., *People* or *Source* and can be used either side of an assignment. 
To allow variable names to contain any character, including spaces, without ambiguity variables can also be declared as a hash symbol followed by double quotes so ` #\"Number of runs\"` is a variable name, not a constant.\n\n*Power Query M* defines arrays/lists of values as a comma separated list enclosed in braces (a.k.a. curly brackets) so `#\"State Names\" = { \"on\", \"off\", \"broken\" }` defines a variable called *State Names* as a list of three string values.\n\n*Power Query M* defines *Records* (Dynamic Key->Value mappings) using a comma separated set of `variable=value` statements inside square brackets, for example `Person = Name=\"John\",Dogs=3]`. These data types can be nested\u2014for example, P`erson = [Name=\"John\",Dogs={ [name=\"Brea\",age=10],[name=\"Harvest\",age=5],[name=\"Bramble\",age=1] }]`.\n\nIf you are used to pretty much any other programming language, you may find the contrarian syntax of *Power Query M* either amusing or difficult.\n\n## Defining a JSON Object to POST to the Atlas Data API with Power Query M\n\nWe can set the value of the variable Source to an explicit JSON object by passing a Power Query M Record to the function Json.FromValue like this.\n\n```\nlet\npostData = Json.FromValue([filter=[property_type=\"House\"],dataSource=\"Cluster0\", database=\"sample_airbnb\",collection=\"listingsAndReviews\"]),\nSource = postData\nin\nSource\n```\n\nThis is the request we are going to send to the Data API. This request will search the collection *listingsAndReviews* in a Cluster called *Cluster0* for documents where the field *property\\_type* equals \"*House*\".\n\nWe paste the code above into the advanced Editor, and verify that there is a green checkmark at the bottom with the words \"No syntax errors have been detected,\" and then we can click **Done**. We see a screen like this.\n![\n\nThe small CSV icon in the grey area represents our single JSON Document. Double click it and Power Query will apply a basic transformation to a table with JSON fields as values as shown below.\n\n## Posting payload JSON to the Find Endpoint in Atlas from Excel\n\nTo get our results from Atlas, we need to post this payload to our Atlas *API find endpoint* and parse the response. Click **Advanced Editor** again and change the contents to those in the box below changing the value \"**data-amzuu**\" in the endpoint to match your endpoint and the value of **YOUR-API-KEY** to match your personal API key. You will also need to change **Cluster0** if your database cluster has a different name.\n\nYou will notice that two additional steps were added to the Query to convert it to the CSV we saw above. 
Overwrite these so the box just contains the lines below and click Done.\n\n```\nlet\npostData = Json.FromValue(filter=[property_type=\"House\"],dataSource=\"Cluster0\", database=\"sample_airbnb\",collection=\"listingsAndReviews\"]),\nresponse = Web.Contents( \"https://data.mongodb-api.com/app/data-amzuu/endpoint/data/beta/action/find\",\n[ Headers = [#\"Content-Type\" = \"application/json\",\n#\"api-key\"=\"YOUR-API-KEY\"] ,\nContent=postData]),\nSource = Json.Document(response)\nin\nSource\n```\n\nYou will now see this screen, which is telling us it has retrieved a list of JSON documents.\n![\n\nBefore we go further and look at how to parse this result into our worksheet, let us first review the connection we have just set up.\n\nThe first line, as before, is defining *postData* as a JSON string containing the payload for the Atlas API.\n\nThe next line, seen below, makes an HTTPS call to Atlas by calling the Web.Contents function and puts the return value in the variable *response*.\n\n```\nresponse = Web.Contents(\n\"https://data.mongodb-api.com/app/data-amzuu/endpoint/data/beta/action/find\",\n Headers = [#\"Content-Type\" = \"application/json\",\n#\"api-key\"=\"YOUR-API-KEY\"] ,\nContent=postData]),\n```\n\nThe first parameter to *Web.Contents* is our endpoint URL as a string.\n\nThe second parameter is a *record* specifying options for the request. We are specifying two options: *Headers* and *Content*.\n\n*Headers* is a *record* used to specify the HTTP Headers to the request. In our case, we specify *Content-Type* and also explicitly include our credentials using a header named *api-key.*\n\n> Ideally, we would use the functionality built into Excel to handle web authentication and not need to include the API key in the query, but Microsoft has disabled this for POST requests out of security concerns with Windows federated authentication ([DataSource.Error: Web.Contents with the Content option is only supported when connecting anonymously). We unfortunately need to, therefore, supply it explicitly as a header.\n\nWe also specify `Content=postData` , this is what makes this become a POST request rather than a GET request and pass our JSON payload to the HTTP API.\n\nThe next line `Source = Json.Document(response)` parses the JSON that gets sent back in the response, creating a Power Query *record* from the JSON data and assigning it to a variable named *Source.*\n\n## Converting documents from MongoDB Atlas into Excel Rows\n\nSo, getting back to parsing our returned data, we are now looking at something like this.\n\nThe parsed JSON has returned a single record with one value, documents, which is a list.In JSON it would look like this `{documents : { \u2026 }, { \u2026 } , { \u2026 } ] }`\n\nHow do we parse it? The first step is to press the **Into Table** button in the Ribbon bar which converts the record into a *table*.\n![\nNow we have a table with one value 'Documents' of type list. We need to break that down.\n\nRight click the second column (**value**) and select **Drill Down** from the menu. As we do each of these stages, we see it being added to the list of transformations in the *Applied Steps* list on the right-hand side.\n\nWe now have a list of JSON documents but we want to convert that into rows.\n\nFirst, we want to right-click on the word **list** in row 1 and select **Drill Down** from the menu again.\n\nNow we have a set of records, convert them to a table by clicking the **To Table** button and setting the delimiter to **None** in the dialog that appears. 
We now see a table but with a single column called *Column1*.\n\nFinally, If you select the Small icon at the right-hand end of the column header you can choose which columns you want. Select all the columns then click **OK**.\n\nFinally, click **Close and Load** from the ribbon bar to write the results back to the sheet and save the Query.\n\n## Parameterising Power Queries using JSON Parameters\n\nWe hardcoded this to fetch us properties of type \"House\"' but what if we want to perform different queries? We can use the Excel Power Query Parameters to do this.\n\nSelect the **Data** Tab on the worksheet. Then, on the left, **Get Data->Launch Power Query Editor**.\n\nFrom the ribbon of the editor, click **Manage Parameters** to open the parameter editor. Parameters are variables you can edit via the GUI or populate from functions. Click **New** (it's not clear that it is clickable) and rename the new parameter to **Mongo Query**. Wet the *type* to **Text** and the *current value* to **{ beds: 2 }**, then click **OK**.\n\nNow select **Query1** again on the left side of the window and click **Advanced Editor** in the ribbon bar. Change the source to match the code below. *Note that we are only changing the postData line.*\n\n```\nlet\npostData = Json.FromValue(filter=Json.Document(#\"Mongo Query\"),dataSource=\"Cluster0\", database=\"sample_airbnb\",collection=\"listingsAndReviews\"]),\nresponse = Web.Contents(\"https://data.mongodb-api.com/app/data-amzuu/endpoint/data/beta/action/find\",\n[ Headers = [#\"Content-Type\" = \"application/json\",\n#\"api-key\"= \"YOUR-API-KEY\"] , Content=postData]),\nSource = Json.Document(response),\ndocuments = Source[documents],\n#\"Converted to Table\" = Table.FromList(documents, Splitter.SplitByNothing(), null, null, ExtraValues.Error),\n#\"Expanded Column1\" = Table.ExpandRecordColumn(#\"Converted to Table\", \"Column1\", {\"_id\", \"listing_url\", \"name\", \"summary\", \"space\", \"description\", \"neighborhood_overview\", \"notes\", \"transit\", \"access\", \"interaction\", \"house_rules\", \"property_type\", \"room_type\", \"bed_type\", \"minimum_nights\", \"maximum_nights\", \"cancellation_policy\", \"last_scraped\", \"calendar_last_scraped\", \"first_review\", \"last_review\", \"accommodates\", \"bedrooms\", \"beds\", \"number_of_reviews\", \"bathrooms\", \"amenities\", \"price\", \"security_deposit\", \"cleaning_fee\", \"extra_people\", \"guests_included\", \"images\", \"host\", \"address\", \"availability\", \"review_scores\", \"reviews\"}, {\"Column1._id\", \"Column1.listing_url\", \"Column1.name\", \"Column1.summary\", \"Column1.space\", \"Column1.description\", \"Column1.neighborhood_overview\", \"Column1.notes\", \"Column1.transit\", \"Column1.access\", \"Column1.interaction\", \"Column1.house_rules\", \"Column1.property_type\", \"Column1.room_type\", \"Column1.bed_type\", \"Column1.minimum_nights\", \"Column1.maximum_nights\", \"Column1.cancellation_policy\", \"Column1.last_scraped\", \"Column1.calendar_last_scraped\", \"Column1.first_review\", \"Column1.last_review\", \"Column1.accommodates\", \"Column1.bedrooms\", \"Column1.beds\", \"Column1.number_of_reviews\", \"Column1.bathrooms\", \"Column1.amenities\", \"Column1.price\", \"Column1.security_deposit\", \"Column1.cleaning_fee\", \"Column1.extra_people\", \"Column1.guests_included\", \"Column1.images\", \"Column1.host\", \"Column1.address\", \"Column1.availability\", \"Column1.review_scores\", \"Column1.reviews\"})\nin\n#\"Expanded Column1\"\n```\n\nWhat we have done is make *postData* 
take the value in the *Mongo Query* parameter, and parse it as JSON. This lets us create arbitrary filters by specifying MongoDB queries in the Mongo Query Parameter. The changed line is shown below.\n\n```\npostData = Json.FromValue([filter=Json.Document(#\"Mongo Query\"), dataSource=\"Cluster0\",database=\"sample_airbnb\",collection=\"listingsAndReviews\"]),\n```\n\n## Running MongoDB Aggregation Pipelines from Excel\n\nWe can apply this same technique to run arbitrary MongoDB Aggregation Pipelines. Right click on Query1 in the list on the left and select Duplicate. Then right-click on Query1(2) and rename it to Aggregate. Select it and then click Advanced Editor on the ribbon. Change the word find in the URL to aggregate and the word filter in the payload to pipeline.\n\n![\n\nYou will get an error at first like this.\n\nThis is because the parameter Mongo Query is not a valid Aggregation Pipeline. Click **Manage Parameters** on the ribbon and change the value to **{$sortByCount : \"$beds\" }**]. Then Click the X next to *Expanded Column 1* on the right of the screen\u00a0 as the expansion is now incorrect.\n![\n\nAgain, click on the icon next to **Column1** and Select **All Columns** to see how many properties there are for a given number of beds - processing the query with an aggregation pipeline on the server.\n\n## Putting it all together\n\nUsing Power Query with parameters, we can specify the cluster, collection, database, and parameters such as the query, fields returned, sort order ,and limit. We can also choose, by changing the endpoint, to perform a simple query or run an aggregation pipeline.\n\nTo simplify this, there is an Excel workbook available here which has all of these things parameterised so you can simply set the parameters required and run the Power Query to query your Atlas cluster. You can use this as a starting point in exploring how to further use the Excel and Power Query to access data in MongoDB Atlas.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Excel"], "pageDescription": "This Article shows you how to run Queries and Aggregations again MongoDB Atlas using the Power Query function in Microsoft Excel.", "contentType": "Quickstart"}, "title": "Using the Atlas Data API from Excel with Power Query", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/multiple-mongodb-connections-in-a-single-application", "action": "created", "body": "# Multiple MongoDB Connections in a Single Application\n\nMongoDB, a popular NoSQL database, is widely used in various applications and scenarios. While a single database connection can adequately serve the needs of numerous projects, there are specific scenarios and various real-world use cases that highlight the advantages of employing multiple connections.\n\nIn this article, we will explore the concept of establishing multiple MongoDB connections within a single Node.js application.\n\n## Exploring the need for multiple MongoDB connections: Use cases & examples ##\n\nIn the world of MongoDB and data-driven applications, the demand for multiple MongoDB connections is on the rise. 
Let's explore why this need arises and discover real-world use cases and examples where multiple connections provide a vital solution.\n\nSectors such as e-commerce, gaming, financial services, media, entertainment, and the Internet of Things (IoT) frequently contend with substantial data volumes or data from diverse sources.\n\nFor instance, imagine a web application that distributes traffic evenly across several MongoDB servers using multiple connections or a microservices architecture where each microservice accesses the database through its dedicated connection. Perhaps in a data processing application, multiple connections allow data retrieval from several MongoDB servers simultaneously. Even a backup application can employ multiple connections to efficiently back up data from multiple MongoDB servers to a single backup server. \n\nMoreover, consider a multi-tenant application where different tenants or customers share the same web application but require separate, isolated databases. In this scenario, each tenant can have their own dedicated MongoDB connection. This ensures data separation, security, and customization for each tenant while all operating within the same application. This approach simplifies management and provides an efficient way to scale as new tenants join the platform without affecting existing ones.\n\nBefore we delve into practical implementation, let's introduce some key concepts that will be relevant in the upcoming use case examples. Consider various use cases such as load balancing, sharding, read replicas, isolation, and fault tolerance. These concepts play a crucial role in scenarios where multiple MongoDB connections are required for efficient data management and performance optimization.\n\n## Prerequisites ##\n\nThroughout this guide, we'll be using Node.js, Express.js, and the Mongoose NPM package for managing MongoDB interactions. Before proceeding, ensure that your development environment is ready and that you have these dependencies installed.\n\nIf you are new to MongoDB or haven't set up MongoDB before, the first step is to set up a MongoDB Atlas account. You can find step-by-step instructions on how to do this in the MongoDB Getting Started with Atlas article.\n\n> This post uses MongoDB 6.3.2 and Node.js 18.17.1\n\nIf you're planning to create a new project, start by creating a fresh directory for your project. Then, initiate a new project using the `npm init` command. \n\nIf you already have an existing project and want to integrate these dependencies, ensure you have the project's directory open. In this case, you only need to install the dependencies Express and Mongoose if you haven\u2019t already, making sure to specify the version numbers to prevent any potential conflicts.\n\n npm i express@4.18.2 mongoose@7.5.3\n\n> Please be aware that Mongoose is not the official MongoDB driver but a\n> popular Object Data Modelling (ODM) library for MongoDB. If you prefer\n> to use the official MongoDB driver, you can find relevant\n> documentation on the MongoDB official\n> website.\n\nThe next step is to set up the environment `.env` file if you haven't already. We will define variables for the MongoDB connection strings that we will use throughout this article. 
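Before defining them, one practical note that the later snippets assume but don't show explicitly: the connection code reads these values through `process.env`, and Node.js 18 does not load a `.env` file on its own, so you will likely want a small loader such as the widely used `dotenv` package (`npm i dotenv`) at the very top of your entry file. A minimal sketch, assuming `dotenv`:\n\n```javascript\n// index.js - load the .env file before anything reads process.env.\n// dotenv is an assumption here; any equivalent loader works.\nrequire('dotenv').config();\n\n// After this call, the connection strings defined in .env are available:\nconsole.log(Boolean(process.env.PRIMARY_CONN_STR));   // true once .env is populated\nconsole.log(Boolean(process.env.SECONDARY_CONN_STR)); // true once .env is populated\n```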
The `PRIMARY_CONN_STR` variable is for the primary MongoDB connection string, and the `SECONDARY_CONN_STR` variable is for the secondary MongoDB connection string.\n\n```javascript\nPRIMARY_CONN_STR=mongodb+srv://\u2026\nSECONDARY_CONN_STR=mongodb+srv://\u2026\n```\nIf you are new to MongoDB and need guidance on obtaining a MongoDB connection string from Atlas, please refer to the Get Connection String article.\n\nNow, we'll break down the connection process into two parts: one for the primary connection and the other for the secondary connection.\n\nNow, let's begin by configuring the primary connection.\n\n## Setting up the primary MongoDB connection ##\n\nThe primary connection process might be familiar to you if you've already implemented it in your application. However, I'll provide a detailed explanation for clarity. Readers who are already familiar with this process can skip this section.\n\nWe commonly utilize the mongoose.connect() method to establish the primary MongoDB database connection for our application, as it efficiently manages a single connection pool for the entire application.\n\nIn a separate file named `db.primary.js`, we define a connection method that we'll use in our main application file (for example, `index.js`). This method, shown below, configures the MongoDB connection and handles events:\n\n```javascript\nconst mongoose = require(\"mongoose\");\n\nmodule.exports = (uri, options = {}) => {\n // By default, Mongoose skips properties not defined in the schema (strictQuery). Adjust it based on your configuration.\n mongoose.set('strictQuery', true);\n\n // Connect to MongoDB\n mongoose.connect(uri, options)\n .then()\n .catch(err => console.error(\"MongoDB primary connection failed, \" + err));\n\n // Event handling\n mongoose.connection.once('open', () => console.info(\"MongoDB primary connection opened!\"));\n mongoose.connection.on('connected', () => console.info(\"MongoDB primary connection succeeded!\"));\n mongoose.connection.on('error', (err) => {\n console.error(\"MongoDB primary connection failed, \" + err);\n mongoose.disconnect();\n });\n mongoose.connection.on('disconnected', () => console.info(\"MongoDB primary connection disconnected!\"));\n\n // Graceful exit\n process.on('SIGINT', () => {\n mongoose.connection.close().then(() => {\n console.info(\"Mongoose primary connection disconnected through app termination!\");\n process.exit(0);\n });\n });\n}\n```\nThe next step is to create schemas for performing operations in your application. We will write the schema in a separate file named `product.schema.js` and export it. Let's take an example schema for products in a stores application:\n\n```javascript\nconst mongoose = require(\"mongoose\");\n\nmodule.exports = (options = {}) => {\n // Schema for Product\n return new mongoose.Schema(\n {\n store: {\n _id: mongoose.Types.ObjectId, // Reference-id to the store collection\n name: String\n },\n name: String\n // add required properties\n }, \n options\n );\n}\n```\nNow, let\u2019s import the `db.primary.js` file in our main file (for example, `index.js`) and use the method defined there to establish the primary MongoDB connection. You can also pass an optional connection options object if needed.\n\nAfter setting up the primary MongoDB connection, you import the `product.schema.js` file to access the Product Schema. 
This enables you to create a model and perform operations related to products in your application:\n\n```javascript\n// Primary Connection (Change the variable name as per your .env configuration!)\n// Establish the primary MongoDB connection using the connection string variable declared in the Prerequisites section.\nrequire(\"./db.primary.js\")(process.env.PRIMARY_CONN_STR, {\n // (optional) connection options\n});\n\n// Import Product Schema\nconst productSchema = require(\"./product.schema.js\")({\n collection: \"products\",\n // Pass configuration options if needed\n});\n\n// Create Model\nconst ProductModel = mongoose.model(\"Product\", productSchema);\n\n// Execute Your Operations Using ProductModel Object\n(async function () {\n let product = await ProductModel.findOne();\n console.log(product);\n})();\n```\nNow, let's move on to setting up a secondary or second MongoDB connection for scenarios where your application requires multiple MongoDB connections.\n\n## Setting up secondary MongoDB connections ##\nDepending on your application's requirements, you can configure secondary MongoDB connections for various use cases. But before that, we'll create a connection code in a `db.secondary.js` file, specifically utilizing the mongoose.createConnection() method. This method allows us to establish separate connection pools each tailored to a specific use case or data access pattern, unlike the `mongoose.connect()` method that we used previously for the primary MongoDB connection:\n\n```javascript\nconst mongoose = require(\"mongoose\");\n\nmodule.exports = (uri, options = {}) => {\n // Connect to MongoDB\n const db = mongoose.createConnection(uri, options);\n \n // By default, Mongoose skips properties not defined in the schema (strictQuery). Adjust it based on your configuration.\n db.set('strictQuery', true);\n \n // Event handling\n db.once('open', () => console.info(\"MongoDB secondary connection opened!\"));\n db.on('connected', () => console.info(`MongoDB secondary connection succeeded!`));\n db.on('error', (err) => {\n console.error(`MongoDB secondary connection failed, ` + err);\n db.close();\n });\n db.on('disconnected', () => console.info(`MongoDB secondary connection disconnected!`));\n\n // Graceful exit\n process.on('SIGINT', () => {\n db.close().then(() => {\n console.info(`Mongoose secondary connection disconnected through app termination!`);\n process.exit(0);\n });\n });\n\n // Export db object\n return db;\n}\n```\nNow, let\u2019s import the `db.secondary.js` file in our main file (for example, `index.js`), create the connection object with a variable named `db`, and use the method defined there to establish the secondary MongoDB connection. You can also pass an optional connection options object if needed:\n\n```javascript\n// Secondary Connection (Change the variable name as per your .env configuration!)\n// Establish the secondary MongoDB connection using the connection string variable declared in the Prerequisites section.\nconst db = require(\"./db.secondary.js\")(process.env.SECONDARY_CONN_STR, {\n // (optional) connection options\n});\n```\n\nNow that we are all ready with the connection, you can use that `db` object to create a model. We explore different scenarios and examples to help you choose the setup that best aligns with your specific data access and management needs:\n\n### 1. Using the existing schema ###\nYou can choose to use the same schema `product.schema.js` file that was employed in the primary connection. 
This is suitable for scenarios where both connections will operate on the same data model. \n\nImport the `product.schema.js` file to access the Product Schema. This enables you to create a model using `db` object and perform operations related to products in your application:\n\n```javascript\n// Import Product Schema\nconst secondaryProductSchema = require(\"./product.schema.js\")({\n collection: \"products\",\n // Pass configuration options if needed\n});\n\n// Create Model\nconst SecondaryProductModel = db.model(\"Product\", secondaryProductSchema);\n\n// Execute Your Operations Using SecondaryProductModel Object\n(async function () {\n let product = await SecondaryProductModel.findOne();\n console.log(product);\n})();\n```\nTo see a practical code example and available resources for using the existing schema of a primary database connection into a secondary MongoDB connection in your project, visit the GitHub repository.\n\n### 2. Setting schema flexibility ###\nWhen working with multiple MongoDB connections, it's essential to have the flexibility to adapt your schema based on specific use cases. While the primary connection may demand a strict schema with validation to ensure data integrity, there are scenarios where a secondary connection serves a different purpose. For instance, a secondary connection might store data for analytics on an archive server, with varying schema requirements driven by past use cases. In this section, we'll explore how to configure schema flexibility for your secondary connection, allowing you to meet the distinct needs of your application.\n\nIf you prefer to have schema flexibility in mongoose, you can pass the `strict: false` property in the options when configuring your schema for the secondary connection. This allows you to work with data that doesn't adhere strictly to the schema. \n\nImport the `product.schema.js` file to access the Product Schema. This enables you to create a model using `db` object and perform operations related to products in your application:\n\n```javascript\n// Import Product Schema\nconst secondaryProductSchema = require(\"./product.schema.js\")({\n collection: \"products\",\n strict: false\n // Pass configuration options if needed\n});\n\n// Create Model\nconst SecondaryProductModel = db.model(\"Product\", secondaryProductSchema);\n\n// Execute Your Operations Using SecondaryProductModel Object\n(async function () {\n let product = await SecondaryProductModel.findOne();\n console.log(product);\n})();\n```\nTo see a practical code example and available resources for setting schema flexibility in a secondary MongoDB connection in your project, visit the GitHub repository.\n\n### 3. Switching databases within the same connection ###\nWithin your application's database setup, you can seamlessly switch between different databases using the db.useDb()) method. This method enables you to create a new connection object associated with a specific database while sharing the same connection pool.\n\nThis approach allows you to efficiently manage multiple databases within your application, using a single connection while maintaining distinct data contexts for each database.\n\nImport the `product.schema.js` file to access the Product Schema. 
This enables you to create a model using `db` object and perform operations related to products in your application.\n\nNow, to provide an example where a store can have its own database containing users and products, you can include the following scenario.\n\n**Example use case: Store with separate database**\n\nImagine you're developing an e-commerce platform where multiple stores operate independently. Each store has its database to manage its products. In this scenario, you can use the `db.useDb()` method to switch between different store databases while maintaining a shared connection pool:\n```javascript\n// Import Product Schema\nconst secondaryProductSchema = require(\"./product.schema.js\")({\n collection: \"products\",\n // strict: false // that doesn't adhere strictly to the schema!\n // Pass configuration options if needed\n});\n\n// Create a connection for 'Store A'\nconst storeA = db.useDb('StoreA');\n\n// Create Model\nconst SecondaryStoreAProductModel = storeA.model(\"Product\", secondaryProductSchema);\n\n// Execute Your Operations Using SecondaryStoreAProductModel Object\n(async function () {\n let product = await SecondaryStoreAProductModel.findOne();\n console.log(product);\n})();\n\n// Create a connection for 'Store B'\nconst storeB = db.useDb('StoreB');\n\n// Create Model\nconst SecondaryStoreBProductModel = storeB.model(\"Product\", secondaryProductSchema);\n\n// Execute Your Operations Using SecondaryStoreBProductModel Object\n(async function () {\n let product = await SecondaryStoreBProductModel.findOne();\n console.log(product);\n})();\n```\n\nIn this example, separate database connections have been established for `Store A` and `Store B`, each containing its product data. This approach provides a clear separation of data while efficiently utilizing a single shared connection pool for all stores, enhancing data management in a multi-store e-commerce platform.\n\nIn the previous section, we demonstrated a static approach where connections were explicitly created for each store, and each connection was named accordingly (e.g., `StoreA`, `StoreB`).\n\nTo introduce a dynamic approach, you can create a function that accepts a store's ID or name as a parameter and returns a connection object. This dynamic function allows you to switch between different stores by providing their identifiers, and it efficiently reuses existing connections when possible.\n\n```javascript\n// Function to get connection object for particular store's database\nfunction getStoreConnection(storeId) {\n return db.useDb(\"Store\"+storeId, { useCache: true });\n}\n\n// Create a connection for 'Store A'\nconst store = getStoreConnection(\"A\");\n\n// Create Model\nconst SecondaryStoreProductModel = store.model(\"Product\", secondaryProductSchema);\n\n// Execute Your Operations Using SecondaryStoreProductModel Object\n(async function () {\n let product = await SecondaryStoreProductModel.findOne();\n console.log(product);\n})();\n```\n\nIn the dynamic approach, connection instances are created and cached as needed, eliminating the need for manually managing separate connections for each store. This approach enhances flexibility and resource efficiency in scenarios where you need to work with multiple stores in your application.\n\nBy exploring these examples, we've covered a range of scenarios for managing multiple databases within the same connection, providing you with the flexibility to tailor your database setup to your specific application needs. 
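To make the dynamic approach a little more concrete, here is a short sketch of how `getStoreConnection` could sit behind an Express route so that each request resolves its own tenant database. This is my own illustration rather than code from the article's repository; the `:storeId` route parameter and the port number are assumptions, and the guard around `model()` simply avoids re-registering the model on a cached connection.\n\n```javascript\nconst express = require('express');\nconst app = express();\n\n// Reuses getStoreConnection and secondaryProductSchema from the examples above.\napp.get('/stores/:storeId/products', async (req, res) => {\n  try {\n    // Cached via { useCache: true }, so repeated requests reuse the connection.\n    const storeDb = getStoreConnection(req.params.storeId);\n    // Register the model once per connection; reuse it afterwards.\n    const ProductModel = storeDb.models.Product || storeDb.model('Product', secondaryProductSchema);\n    const products = await ProductModel.find().limit(20);\n    res.json(products);\n  } catch (err) {\n    res.status(500).json({ error: err.message });\n  }\n});\n\napp.listen(3000, () => console.info('Store API listening on port 3000'));\n```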
You're now equipped to efficiently manage distinct data contexts for various use cases within your application.\n\nTo see a practical code example and available resources for switching databases within the same connection into a secondary MongoDB connection in your project, visit the GitHub repository.\n\n## Best practices ##\nIn the pursuit of a robust and efficient MongoDB setup within your Node.js application, I recommend the following best practices. These guidelines serve as a foundation for a reliable implementation, and I encourage you to consider and implement them:\n\n - **Connection pooling**: Make the most of connection pooling to efficiently manage MongoDB connections, enabling connection reuse and reducing overhead. Read more about connection pooling.\n- **Error handling**: Robust error-handling mechanisms, comprehensive logging, and contingency plans ensure the reliability of your MongoDB setup in the face of unexpected issues.\n- **Security**: Prioritize data security with authentication, authorization, and secure communication practices, especially when dealing with sensitive information. Read more about MongoDB Security.\n- **Scalability**: Plan for scalability from the outset, considering both horizontal and vertical scaling strategies to accommodate your application's growth.\n- **Testing**: Comprehensive testing in various scenarios, such as failover, high load, and resource constraints, validates the resilience and performance of your multiple MongoDB connection setup.\n\n## Conclusion ##\nLeveraging multiple MongoDB connections in a Node.js application opens up a world of possibilities for diverse use cases, from e-commerce to multi-tenant systems. Whether you need to enhance data separation, scale your application efficiently, or accommodate different data access patterns, these techniques empower you to tailor your database setup to the unique needs of your project. With the knowledge gained in this guide, you're well-prepared to manage multiple data contexts within a single application, ensuring robust, flexible, and efficient MongoDB interactions.\n\n## Additional resources ##\n- **Mongoose documentation**: For an in-depth understanding of Mongoose connections, explore the official Mongoose documentation.\n- **GitHub repository**: To dive into the complete implementation of multiple MongoDB connections in a Node.js application that we have performed above, visit the GitHub repository. 
Feel free to clone the repository and experiment with different use cases in your projects.\n\nIf you have any questions or feedback, check out the MongoDB Community Forums and let us know what you think.\n\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Node.js"], "pageDescription": "", "contentType": "Tutorial"}, "title": "Multiple MongoDB Connections in a Single Application", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/cloudflare-worker-rest-api", "action": "created", "body": "# Create a REST API with Cloudflare Workers and MongoDB Atlas\n\n## Introduction\n\nCloudflare Workers provides a serverless execution environment that allows you to create entirely new applications or augment existing ones without configuring or maintaining infrastructure.\n\nMongoDB Atlas allows you to create, manage, and monitor MongoDB clusters in the cloud provider of your choice (AWS, GCP, or Azure) while the Web SDK can provide a layer of authentication and define access rules to the collections.\n\nIn this blog post, we will combine all these technologies together and create a REST API with a Cloudflare worker using a MongoDB Atlas cluster to store the data.\n\n> Note: In this tutorial, the worker isn't using any form of caching. While the connection between MongoDB and the Atlas serverless application is established and handled automatically in the Atlas App Services back end, each new query sent to the worker will require the user to go through the authentication and authorization process before executing any query. In this tutorial, we are using API keys to handle this process but Atlas App Services offers many different authentication providers.\n\n## TL;DR!\n\nThe worker is in this GitHub repository. The README will get you up and running in no time, if you know what you are doing. Otherwise, I suggest you follow this step-by-step blog post. ;-)\n\n```shell\n$ git clone git@github.com:mongodb-developer/cloudflare-worker-rest-api-atlas.git\n```\n\n## Prerequisites\n\n- NO credit card! You can run this entire tutorial for free!\n- Git and cURL.\n- MongoDB Atlas account.\n- MongoDB Atlas Cluster (a free M0 cluster is fine).\n- Cloudflare account (free plan is fine) with a `*.workers.dev` subdomain for the workers. Follow steps 1 to 3 from this documentation to get everything you need.\n\nWe will create the Atlas App Services application (formerly known as a MongoDB Realm application) together in the next section. This will provide you the AppID and API key that we need.\n\nTo deploy our Cloudflare worker, we will need:\n- The application ID (top left corner in your app\u2014see next section).\n- The Cloudflare account login/password.\n- The Cloudflare account ID (in Workers tab > Overview).\n\nTo test (or interact with) the REST API, we need:\n- The authentication API key (more about that below, but it's in Authentication tab > API Keys).\n- The Cloudflare `*.workers.dev` subdomain (in Workers tab > Overview).\n\nIt was created during this step of your set-up:\n\n## Create and Configure the Atlas Application\n\nTo begin with, head to your MongoDB Atlas main page where you can see your cluster and access the 'App Services' tab at the top.\n\nCreate an empty application (no template) as close as possible to your MongoDB Atlas cluster to avoid latency between your cluster and app. 
My app is \"local\" in Ireland (eu-west-1) in my case.\n\nNow that our app is created, we need to set up two things: authentication via API keys and collection rules. Before that, note that you can retrieve your app ID in the top left corner of your new application.\n\n### Authentication Via API Keys\n\nHead to Authentication > API Keys.\n\nActivate the provider and save the draft.\n\nWe need to create an API key, but we can only do so if the provider is already deployed. Click on review and deploy.\n\nNow you can create an API key and **save it somewhere**! It will only be displayed **once**. If you lose it, discard this one and create a new one.\n\nWe only have a single user in our application as we only created a single API key. Note that this tutorial would work with any other authentication method if you update the authentication code accordingly in the worker.\n\n### Collection Rules\n\nBy default, your application cannot access any collection from your MongoDB Atlas cluster. To define how users can interact with the data, you must define roles and permissions.\n\nIn our case, we want to create a basic REST API where each user can read and write their own data in a single collection `todos` in the `cloudflare` database.\n\nHead to the Rules tab and let's create this new `cloudflare.todos` collection.\n\nFirst, click \"create a collection\".\n\nNext, name your database `cloudflare` and collection `todos`. Click create!\n\nEach document in this collection will belong to a unique user defined by the `owner_id` field. This field will contain the user ID that you can see in the `App Users` tab.\n\nTo limit users to only reading and writing their own data, click on your new `todos` collection in the Rules UI. Add the rule `readOwnWriteOwn` in the `Other presets`.\n\nAfter adding this preset role, you can double-check the rule by clicking on the `Advanced view`. It should contain the following:\n\n```json\n{\n \"roles\": \n {\n \"name\": \"readOwnWriteOwn\",\n \"apply_when\": {},\n \"document_filters\": {\n \"write\": {\n \"owner_id\": \"%%user.id\"\n },\n \"read\": {\n \"owner_id\": \"%%user.id\"\n }\n },\n \"read\": true,\n \"write\": true,\n \"insert\": true,\n \"delete\": true,\n \"search\": true\n }\n ]\n}\n```\n\nYou can now click one more time on `Review Draft and Deploy`. Our application is now ready to use.\n\n## Set Up and Deploy the Cloudflare Worker\n\nThe Cloudflare worker is available in [GitHub repository. Let's clone the repository.\n\n```shell\n$ git clone git@github.com:mongodb-developer/cloudflare-worker-rest-api-atlas.git\n$ cd cloudflare-worker-rest-api-realm-atlas\n$ npm install\n```\n\nNow that we have the worker template, we just need to change the configuration to deploy it on your Cloudflare account.\n\nEdit the file `wrangler.toml`:\n- Replace `CLOUDFLARE_ACCOUNT_ID` with your real Cloudflare account ID.\n- Replace `MONGODB_ATLAS_APPID` with your real MongoDB Atlas App Services app ID.\n\nYou can now deploy your worker to your Cloudflare account using Wrangler:\n\n```shell\n$ npm i wrangler -g\n$ wrangler login\n$ wrangler deploy\n```\n\nHead to your Cloudflare account. You should now see your new worker in the Workers tab > Overview.\n\n## Check Out the REST API Code\n\nBefore we test the API, please take a moment to read the code of the REST API we just deployed, which is in the `src/index.ts` file:\n\n```typescript\nimport * as Realm from 'realm-web';\nimport * as utils from './utils';\n\n// The Worker's environment bindings. 
See `wrangler.toml` file.\ninterface Bindings {\n // MongoDB Atlas Application ID\n ATLAS_APPID: string;\n}\n\n// Define type alias; available via `realm-web`\ntype Document = globalThis.Realm.Services.MongoDB.Document;\n\n// Declare the interface for a \"todos\" document\ninterface Todo extends Document {\n owner_id: string;\n done: boolean;\n todo: string;\n}\n\nlet App: Realm.App;\nconst ObjectId = Realm.BSON.ObjectID;\n\n// Define the Worker logic\nconst worker: ExportedHandler = {\n async fetch(req, env) {\n const url = new URL(req.url);\n App = App || new Realm.App(env.ATLAS_APPID);\n\n const method = req.method;\n const path = url.pathname.replace(//]$/, '');\n const todoID = url.searchParams.get('id') || '';\n\n if (path !== '/api/todos') {\n return utils.toError(`Unknown '${path}' URL; try '/api/todos' instead.`, 404);\n }\n\n const token = req.headers.get('authorization');\n if (!token) return utils.toError(`Missing 'authorization' header; try to add the header 'authorization: ATLAS_APP_API_KEY'.`, 401);\n\n try {\n const credentials = Realm.Credentials.apiKey(token);\n // Attempt to authenticate\n var user = await App.logIn(credentials);\n var client = user.mongoClient('mongodb-atlas');\n } catch (err) {\n return utils.toError('Error with authentication.', 500);\n }\n\n // Grab a reference to the \"cloudflare.todos\" collection\n const collection = client.db('cloudflare').collection('todos');\n\n try {\n if (method === 'GET') {\n if (todoID) {\n // GET /api/todos?id=XXX\n return utils.reply(\n await collection.findOne({\n _id: new ObjectId(todoID)\n })\n );\n }\n\n // GET /api/todos\n return utils.reply(\n await collection.find()\n );\n }\n\n // POST /api/todos\n if (method === 'POST') {\n const {todo} = await req.json();\n return utils.reply(\n await collection.insertOne({\n owner_id: user.id,\n done: false,\n todo: todo,\n })\n );\n }\n\n // PATCH /api/todos?id=XXX&done=true\n if (method === 'PATCH') {\n return utils.reply(\n await collection.updateOne({\n _id: new ObjectId(todoID)\n }, {\n $set: {\n done: url.searchParams.get('done') === 'true'\n }\n })\n );\n }\n\n // DELETE /api/todos?id=XXX\n if (method === 'DELETE') {\n return utils.reply(\n await collection.deleteOne({\n _id: new ObjectId(todoID)\n })\n );\n }\n\n // unknown method\n return utils.toError('Method not allowed.', 405);\n } catch (err) {\n const msg = (err as Error).message || 'Error with query.';\n return utils.toError(msg, 500);\n }\n }\n}\n\n// Export for discoverability\nexport default worker;\n```\n\n## Test the REST API\n\nNow that you are a bit more familiar with this REST API, let's test it!\n\nNote that we decided to pass the values as parameters and the authorization API key as a header like this:\n\n```\nauthorization: API_KEY_GOES_HERE\n```\n\nYou can use [Postman or anything you want to test your REST API, but to make it easy, I made some bash script in the `api_tests` folder.\n\nIn order to make them work, we need to edit the file `api_tests/variables.sh` and provide them with:\n\n- The Cloudflare worker URL: Replace `YOUR_SUBDOMAIN`, so the final worker URL matches yours.\n- The MongoDB Atlas App Service API key: Replace `YOUR_ATLAS_APP_AUTH_API_KEY` with your auth API key.\n\nFinally, we can execute all the scripts like this, for example:\n\n```shell\n$ cd api_tests\n\n$ ./post.sh \"Write a good README.md for Github\"\n{\n \"insertedId\": \"618615d879c8ad6d1129977d\"\n}\n\n$ ./post.sh \"Commit and push\"\n{\n \"insertedId\": \"618615e479c8ad6d11299e12\"\n}\n\n$ ./findAll.sh \n\n {\n \"_id\": 
\"618615d879c8ad6d1129977d\",\n \"owner_id\": \"6186154c79c8ad6d11294f60\",\n \"done\": false,\n \"todo\": \"Write a good README.md for Github\"\n },\n {\n \"_id\": \"618615e479c8ad6d11299e12\",\n \"owner_id\": \"6186154c79c8ad6d11294f60\",\n \"done\": false,\n \"todo\": \"Commit and push\"\n }\n]\n\n$ ./findOne.sh 618615d879c8ad6d1129977d\n{\n \"_id\": \"618615d879c8ad6d1129977d\",\n \"owner_id\": \"6186154c79c8ad6d11294f60\",\n \"done\": false,\n \"todo\": \"Write a good README.md for Github\"\n}\n\n$ ./patch.sh 618615d879c8ad6d1129977d true\n{\n \"matchedCount\": 1,\n \"modifiedCount\": 1\n}\n\n$ ./findAll.sh \n[\n {\n \"_id\": \"618615d879c8ad6d1129977d\",\n \"owner_id\": \"6186154c79c8ad6d11294f60\",\n \"done\": true,\n \"todo\": \"Write a good README.md for Github\"\n },\n {\n \"_id\": \"618615e479c8ad6d11299e12\",\n \"owner_id\": \"6186154c79c8ad6d11294f60\",\n \"done\": false,\n \"todo\": \"Commit and push\"\n }\n]\n\n$ ./deleteOne.sh 618615d879c8ad6d1129977d\n{\n \"deletedCount\": 1\n}\n\n$ ./findAll.sh \n[\n {\n \"_id\": \"618615e479c8ad6d11299e12\",\n \"owner_id\": \"6186154c79c8ad6d11294f60\",\n \"done\": false,\n \"todo\": \"Commit and push\"\n }\n]\n```\n\nAs you can see, the REST API works like a charm!\n\n## Wrap Up\n\nCloudflare offers a Workers [KV product that _can_ make for a quick combination with Workers, but it's still a simple key-value datastore and most applications will outgrow it. By contrast, MongoDB is a powerful, full-featured database that unlocks the ability to store, query, and index your data without compromising the security or scalability of your application.\n\nAs demonstrated in this blog post, it is possible to take full advantage of both technologies. As a result, we built a powerful and secure serverless REST API that will scale very well.\n\n> Another option for connecting to Cloudflare is the MongoDB Atlas Data API. The Atlas Data API provides a lightweight way to connect to MongoDB Atlas that can be thought of as similar to a REST API. To learn more, view this tutorial from my fellow developer advocate Mark Smith!\n\nIf you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB. If your question is related to Cloudflare, I encourage you to join their active Discord community.\n", "format": "md", "metadata": {"tags": ["Atlas", "TypeScript", "Serverless", "Cloudflare"], "pageDescription": "Learn how to create a serverless REST API using Cloudflare workers and MongoDB Atlas.", "contentType": "Tutorial"}, "title": "Create a REST API with Cloudflare Workers and MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/authentication-ios-apps-apple-sign-in-atlas-app-services", "action": "created", "body": "# Authentication for Your iOS Apps with Apple Sign-in and Atlas App Services\n\nMobile device authentication serves as the crucial first line of defense against potential intruders who aim to exploit personal information, financial data, or private details. As our mobile phones store a wealth of sensitive information, it is imperative to prioritize security while developing apps that ensure user safety. \n\nApple sign-in is a powerful solution that places user privacy at the forefront by implementing private email relay functionality. This enables users to shield their email addresses, granting them greater control over their data. 
Combining this with Atlas App Services provides developers with a streamlined and secure authentication experience. It also simplifies app development, service integration, and data connectivity, eliminating operational overhead. \n\nIn the following tutorial, I will show you how with only a few steps and a little code, you can bring this seamless implementation to your iOS apps! If you also want to follow along and check the code that I\u2019ll be explaining in this article, you can find it in the Github repository. \n\n## Context\n\nThis sample application consists of an iOS app with a \u201cSign in with Apple\u201d button, where when the user taps on it, it will prompt the authentication native sheet that will allow the user to choose to sign in to your app hiding or showing their email address. Once the sign-up process is completed, the sign-in process gets handled by the Authentication API by Apple. \n\n## Prerequisites\n\nSince this tutorial\u2019s main focus is on the code implementation with Apple sign-in, a few previous steps are required for it. \n\n- Have the latest stable version of Xcode installed on your macOS computer, and make sure that the OS is compatible with the version. \n- Have a setup of a valid Apple Developer Account and configure your App ID. You can follow the official Apple documentation.\n- Have the Apple Sign-In Capability added to your project. Check out the official Apple sign-in official resources.\n- Have the Realm Swift SDK installed on your project and an Atlas App Services app linked to your cluster. Please follow the steps in our Realm Swift SDK documentation on how to create an Alas App Services app.\n\n## Configuring Apple provider on Atlas App Services\n\nIn order to follow this tutorial, you will need to have an **Atlas App Services app** created. If not, please follow the steps in our MongoDB documentation. It\u2019s quite easy to set it up! \n\nFirst, in your Atlas App Services app, go to **Data Access** -> **Authentication** on the sidebar. \n\nIn the **Authentication Providers** section, enable the **Apple** provider when tapping on the **Edit** button. You\u2019ll see a screen like the one in the screenshot below: \n\nYou will have now to fill the corresponding fields with the following information: \n\n- **Client ID:** Enter your application\u2019s Bundle ID for the App Services Client ID.\n- **Client Secret:** Choose or create a new secret, which is stored in Atlas App Services' back end.\n- **Redirect URIs:** You will have to use a URI in order to redirect the authentication. You can use your own custom domain, but if you have a paid tier cluster in Atlas, you can benefit from our Hosting Service!\n\nClick on the \u201cSave Draft\u201d button and your changes will be deployed. \n\n### Implementing the Apple sign-in authentication functionality\n\nNow, before continuing with this section, please make sure that you have followed our quick start guide to make sure that you have our Realm Swift SDK installed. Moving on to the fun part, it\u2019s time to code!\n\nThis is a pretty simple UIKit project, where *LoginViewController.swift* will implement the authentication functionality of Apple sign-in, and if the authenticated user is valid, then a segue will transition to *WelcomeViewController.swift*.\n\nOn top of the view controller code, make sure that you import both the AuthenticationServices and RealmSwift frameworks so you have access to their methods. 
In your Storyboard, add a UIButton of type *ASAuthorizationAppleIDButton* to the *LoginViewController* and link it to its corresponding Swift file.\n\nIn the *viewDidLoad()* function of *LoginViewController*, we are going to call *setupAppleSignInButton()*, which is a private function that lays out the Apple sign-in button, provided by the AuthenticationServices API. Here is the code of the functionality.\n\n```swift\n// Mark: - IBOutlets\n@IBOutlet weak var appleSignInButton: ASAuthorizationAppleIDButton!\n\n// MARK: - View Lifecycle\noverride func viewDidLoad() {\n super.viewDidLoad()\n setupAppleSignInButton()\n}\n\n// MARK: - Private helper\nprivate func setupAppleSignInButton() {\n appleSignInButton.addTarget(self, action: #selector(handleAppleIdRequest), for: .touchUpInside)\n appleSignInButton.cornerRadius = 10\n}\n```\n\nThe private function adds a target to the *appleSignInButton* and gives it a radius of 10 to its corners. The screenshot below shows how the button is laid out in the testing device.\n\nNow, moving to *handleAppleIdRequest*, here is the implementation for it: \n\n```swift\n@objc func handleAppleIdRequest() {\n let appleIDProvider = ASAuthorizationAppleIDProvider()\n let request = appleIDProvider.createRequest()\n request.requestedScopes = .fullName, .email]\n let authorizationController = ASAuthorizationController(authorizationRequests: [request])\n authorizationController.delegate = self\n authorizationController.performRequests()\n}\n```\n\nThis function is a method that handles the initialization of Apple ID authorization using *ASAuthorizationAppleIDProvider* and *ASAuthorizationController* classes. Here is a breakdown of what the function does: \n\n1. It creates an instance of *ASAuthorizationAppleIDProvider*, which is responsible for generating requests to authenticate users based on their Apple ID.\n2. Using the *appleIDProvider* instance, it creates an authorization request when calling *createRequest()*.\n3. The request is used to configure the specific data that the app needs to access from the user\u2019s Apple ID. In this case, we are requesting fullName and email.\n4. We create an instance of *ASAuthorizationController* that will manage the authorization requests and will also handle any user interactions related to the Apple ID authentication.\n5. The *authorizationController* has to set its delegate to self, as the current object will have to conform to the *ASAuthorizationControllerDelegate* protocol.\n6. Finally, the specified authorization flows are performed by calling *performRequests()*. This method triggers the system to present the Apple ID login interface to the user. \n\nAs we just mentioned, the view controller has to conform to the *ASAuthorizationControllerDelegate*. To do that, I created an extension of *LoginViewController*, where the implementation of the *didCompleteWithAuthorization* delegate method is where we will handle the successful authentication with the Swift Realm SDK.\n\n``` swift\nfunc authorizationController(controller: ASAuthorizationController, didCompleteWithAuthorization authorization: ASAuthorization) {\n if let appleIDCredential = authorization.credential as? 
ASAuthorizationAppleIDCredential {\n let userIdentifier = appleIDCredential.user\n let fullName = appleIDCredential.fullName\n let email = appleIDCredential.email\n\n guard let identityToken = appleIDCredential.identityToken else {\n return\n }\n let decodedToken = String(decoding: identityToken, as: UTF8.self)\n print(decodedToken)\n\n realmSignIn(appleToken: decodedToken)\n }\n}\n```\n\nTo resume it in a few lines, this code retrieves the necessary user information from the Apple ID credential if the credentials of the user are successful. We also obtain the *identityToken*, which is the vital piece of information that is needed to use it on the Atlas App Services authentication. \n\nHowever, note that this token **has to be decoded** in order to be used on Atlas App Services, and for that, you can use the *String(decoding:, as:)* method. \n\nOnce the token is decoded, it is a JWT that contains claims about the user signed by Apple Authentication Service. Then the *realmSignIn()* private method is called and the decoded token is passed as a parameter so the authentication can be handled. \n\n```swift\nprivate func realmSignIn(appleToken: String) {\n let credentials = Credentials.apple(idToken: appleToken)\n app.login(credentials: credentials) { (result) in\n switch result {\n case .failure(let error):\n print(\"Realm Login failed: \\(error.localizedDescription)\")\n\n case .success(_):\n DispatchQueue.main.async {\n print(\"Successful Login\")\n self.performSegue(withIdentifier: \"goToWelcomeViewController\", sender: nil)\n }\n }\n }\n}\n```\n\nThe *realmSignIn()* private function handles the login into Atlas App Services. This function will allow you to authenticate your users that will be connected to your app without any additional hassle. First, the credentials are generated by *Credentials.apple(idToken:)*, where the decoded Apple token is passed as a parameter. \n\nIf the login is successful, then the code performs a segue and goes to the main screen of the project, *WelcomeViewController*. If it fails, then it will print an error message. Of course, feel free to adapt this error to whatever suits you better for your use case (i.e., an alert message). \n\nAnother interesting delegate method in terms of error handling is the *didCompleteWithError()* delegate function, which will get triggered if there is an error during the Apple ID authentication. You can use this one to provide some feedback to the user and improve the UX of your application.\n\n## Important note\n\nOne of the biggest perks of Apple sign-in authentication, as it was mentioned earlier, is the flexibility it gives to the user regarding what gets shared with your app. This means that if the user decides to hide their email address and not to share their full name as the code was requested earlier through the *requestedScopes* definition, you will receive **an empty string** in the response. In the case of the email address, it will be a *nil* value. \n\nIf your iOS application has a use case where you want to establish communication with your users, you will need to implement [communication using Apple's private email relay service. You should avoid asking the user for their email in other parts of the app too, as it could potentially create a rejection on the App Store review.\n\n## Repository\n\nThe code for this project can be found in the Github repository. \n\nI hope you found this tutorial helpful. 
I encourage you to explore our Realm Swift SDK documentation to discover all the benefits that it can offer to you when building iOS apps. We have plenty of resources available to help you learn and implement these features. So go ahead, dive in, and see what Atlas App Services has in store for your app development journey. \n\nIf you have any questions or comments don\u2019t hesitate to head over to our Community Forums to continue the conversation. Happy coding!", "format": "md", "metadata": {"tags": ["Realm", "Swift", "Mobile", "iOS"], "pageDescription": "Learn how to implement Apple sign-in within your own iOS mobile applications using Swift and MongoDB Atlas App Services.", "contentType": "Tutorial"}, "title": "Authentication for Your iOS Apps with Apple Sign-in and Atlas App Services", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-data-api-introduction", "action": "created", "body": "# An Introduction to the MongoDB Atlas Data API\n\n# Introduction to the MongoDB Atlas Data API\nThere are a lot of options for connecting to MongoDB Atlas as an application developer. One of the newest options is the MongoDB Atlas Data API. The Atlas Data API provides a lightweight way to connect to MongoDB Atlas that can be thought of as similar to a REST API. This tutorial will show you how to enable the Data API and perform basic CRUD operations using curl. It\u2019s the first in a series showing different uses for the Data API and how you can use it to build data-centric applications and services faster.\n\nAccess the full API reference.\n\nThis post assumes you already have an Atlas cluster. You can either use an existing one or you can sign up for a cloud account and create your first database cluster by following the instructions.\n\n## Enabling the Atlas Data API\n\nEnabling the Data API is very easy once you have a cluster in Atlas.\n\nFirst, Click \"Data API\" in the bar on the left of your Atlas deployment.\n\nThen select which data source or sources you want the Data API to have access to. For this example, I am selecting just the default Cluster0.\n\nThen, select the large \"Enable the Data API\" button.\n\nYou will then have a screen confirming what clusters you have enabled for the Data API.\n\nIn the \"Data API Access\" column, select \"Read and Write\" for now, and then click on the button at the top left that says \"Create API Key.\" Choose a name. It's not important what name you choose, as long as it's useful to you.\n\nFinally, click \"Generate API Key\" and take a note of the key displayed in a secure place as you will not be able to see it again in Atlas. You can click the \"Copy\" button to copy it to your clipboard. I pasted mine into a .envrc file in my project.\n\nIf you want to test out a simple command, you can select one of your database collections in the dropdowns and copy-paste some code into your terminal to see some results. While writing this post, I did it just to check that I got some results back. When you're done, click \"Close\" to go back to the Data API screen. If you need to manage the keys you've created, you can click the \"API Keys\" tab on this screen.\n\nYou are now ready to call the Data API!\n\n## Be careful with your API key!\n\nThe API key you've just created should never be shared with anyone, or sent to the browser. Anyone who gets hold of the key can use it to make changes to the data in your database! 
In fact, the Data API blocks browser access, because there's currently no secure way to make Data API requests securely without sharing an API key.\n\n## Calling the Data API\nAll the Data API endpoints use HTTPS POST. Though it might seem logical to use GET when reading data, GET requests are intended to be cached and many platforms will do so automatically. To ensure you never have stale query results, all of the API endpoints use POST. Time to get started!\n\n### Adding data to Atlas\n\nTo add documents to MongoDB, you will use the InsertOne or InsertMany action endpoints.\n\n### InsertOne\n\nWhen you insert a document with the API, you must provide the \"dataSource\" (which is your cluster name), \"database,\" \"collection,\" and \"document\" as part of a JSON payload document.\nFor authentication, you will need to pass the API key as a header. The API always uses HTTPS, so this is safe and secure from network snooping.\n\nTo call with curl, use the following command:\n\n```shell\ncurl --location --request POST 'https://data.mongodb-api.com/app/data-YOUR_ID/endpoint/data/v1/action/insertOne' \\\n --header 'Content-Type: application/json' \\\n --header 'Access-Control-Request-Headers: *' \\\n --header \"api-key: YOUR_API_KEY\" \\\n --data-raw '{\n \"dataSource\":\"Cluster0\",\n \"database\":\"household\",\n \"collection\":\"pets\",\n \"document\" : { \"name\": \"Harvest\",\n \"breed\": \"Labrador\",\n \"age\": 5 }\n }'\n```\n\nFor example, my call looks like this:\n\n```shell\ncurl --location --request POST 'https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/insertOne' \\\n --header 'Content-Type: application/json' \\\n --header 'Access-Control-Request-Headers: *' \\\n --header \"api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg\" \\\n --data-raw '{\n \"dataSource\":\"Cluster0\",\n \"database\":\"household\",\n \"collection\":\"pets\",\n \"document\" : { \"name\": \"Harvest\",\n \"breed\": \"Labrador\",\n \"age\": 5 }\n }'\n```\n\nNote that the URL I'm using is my Data API URL endpoint, with `/action/insertOne` appended. When I ran this command with my values for `YOUR_ID` and `YOUR_API_KEY`, curl printed the following:\n\n```json\n{\"insertedId\":\"62c6da4f0836cbd6ebf68589\"}\n```\n\nThis means you've added a new document to a collection called \u201cpets\u201d in a database called \u201chousehold.\u201d Due to MongoDB\u2019s flexible dynamic model, neither the database nor collection needed to be defined in advance.\n\nThis API call returned a JSON document with the _id of the new document. As I didn't explicitly supply any value for _id ( the primary key in MongoDB), one was created for me and it was of type ObjectId. The API returns standard JSON by default, so this is displayed as a string. \n\n### FindOne\n\nTo look up the document I just added by _id, I'll need to provide the _id that was just printed by curl. In the document that was printed, the value looks like a string, but it isn't. It's an ObjectId, which is the type of value that's created by MongoDB when no value is provided for the _id.\n\nWhen querying for the ObjectId value, you need to wrap this string as an EJSON ObjectId type, like this: `{ \"$oid\" : }`. 
If you don't provide this wrapper, MongoDB will mistakenly believe you are looking for a string value, not the ObjectId that's actually there.\n\nThe findOne query looks much like the insertOne query, except that the action name in the URL is now findOne, and this call takes a \"filter\" field instead of a \"document\" field.\n\n```shell\ncurl --location --request POST 'https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/findOne' \\\n --header 'Content-Type: application/json' \\\n --header 'Access-Control-Request-Headers: *' \\\n --header \"api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg\" \\\n --data-raw '{\n \"dataSource\":\"Cluster0\",\n \"database\":\"household\",\n \"collection\":\"pets\",\n \"filter\" : { \"_id\": { \"$oid\": \"62c6da4f0836cbd6ebf68589\" } }\n }'\n```\n\nThis printed out the following JSON for me:\n\n```json\n{\"document\":{\n \"_id\":\"62c6da4f0836cbd6ebf68589\",\n \"name\":\"Harvest\",\n \"breed\":\"Labrador\",\n \"age\":5}}\n```\n\n### Getting Extended JSON from the API\nNote that in the output above, the _id is again being converted to \"plain\" JSON, and so the \"_id\" value is being converted to a string. Sometimes, it's useful to keep the type information, so you can specify that you would like Extended JSON (EJSON) output, for any Data API call, by supplying an \"Accept\" header, with the value of \"application/ejson\":\n\n```shell\ncurl --location --request POST 'https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/findOne' \\\n --header 'Content-Type: application/json' \\\n --header 'Access-Control-Request-Headers: *' \\\n --header 'Accept: application/ejson' \\\n --header \"api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg\" \\\n --data-raw '{\n \"dataSource\":\"Cluster0\",\n \"database\":\"household\",\n \"collection\":\"pets\",\n \"filter\" : { \"_id\": { \"$oid\": \"62c6da4f0836cbd6ebf68589\" } }\n }'\n```\n\nWhen I ran this, the \"_id\" value was provided with the \"$oid\" wrapper, to declare that it's an ObjectId value:\n\n```json\n{\"document\":{\n \"_id\":{\"$oid\":\"62c6da4f0836cbd6ebf68589\"},\n \"name\":\"Harvest\",\n \"breed\":\"Labrador\",\n \"age\":{\"$numberInt\":\"5\"}}}\n```\n\n### InsertMany\nIf you're inserting several documents into a collection, it\u2019s much more efficient to make a single HTTPS call with the insertMany action. 
This endpoint works in a very similar way to the insertOne action, but it takes a \"documents\" field instead of a single \"document\" field, containing an array of documents:\n\n```shell\ncurl --location --request POST 'https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/insertMany' \\\n --header 'Content-Type: application/json' \\\n --header 'Access-Control-Request-Headers: *' \\\n --header \"api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg\" \\\n --data-raw '{\n \"dataSource\":\"Cluster0\",\n \"database\":\"household\",\n \"collection\":\"pets\",\n \"documents\" : {\n \"name\": \"Brea\",\n \"breed\": \"Labrador\",\n \"age\": 9,\n \"colour\": \"black\"\n },\n {\n \"name\": \"Bramble\",\n \"breed\": \"Labrador\",\n \"age\": 1,\n \"colour\": \"black\"\n }]\n }'\n```\n\nWhen I ran this, the output looked like this:\n\n```json\n{\"insertedIds\":[\"62c6e8a15a3411a70813c21e\",\"62c6e8a15a3411a70813c21f\"]}\n```\n\nThis endpoint returns JSON with an array of the values for _id for the documents that were added.\n\n### Querying data\nQuerying for more than one document is done with the find endpoint, which returns an array of results. The following query looks up all the labradors that are two years or older, sorted by age:\n\n```shell\ncurl --location --request POST 'https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/find' \\\n --header 'Content-Type: application/json' \\\n --header 'Access-Control-Request-Headers: *' \\\n --header \"api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg\" \\\n --data-raw '{\n \"dataSource\":\"Cluster0\",\n \"database\":\"household\",\n \"collection\":\"pets\",\n \"filter\": { \"breed\": \"Labrador\",\n \"age\": { \"$gt\" : 2} },\n \"sort\": { \"age\": 1 } }'\n```\n\nWhen I ran this, I received documents for the two oldest dogs, Harvest and Brea:\n\n```json\n{\"documents\":[\n {\"_id\":\"62c6da4f0836cbd6ebf68589\",\"name\":\"Harvest\",\"breed\":\"Labrador\",\"age\":5},\n {\"_id\":\"62c6e8a15a3411a70813c21e\",\"name\":\"Brea\",\"breed\":\"Labrador\",\"age\":9,\"colour\":\"black\"}]}\n```\n\nThis object contains a field \u201ddocuments,\u201d that is an array of everything that matched. If I wanted to fetch a subset of the results in pages, I could use the skip and limit parameter to set which result to start at and how many to return.\n\n```shell\ncurl --location --request POST https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/updateOne \\\n --header 'Content-Type: application/json' \\\n --header 'Access-Control-Request-Headers: *' \\\n --header \"api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg\" \\\n --data-raw '{\n \"dataSource\": \"Cluster0\",\n \"database\": \"household\", \n \"collection\": \"pets\",\n \"filter\" : { \"name\" : \"Harvest\"},\n \"update\" : { \"$set\" : { \"colour\": \"yellow\" }}\n }'\n```\n\nBecause this both matched one document and changed its content, my output looked like this:\n\n```json\n{\"matchedCount\":1,\"modifiedCount\":1}\n```\n\nI only wanted to update a single document (because I only expected to find one document for Harvest). To change all matching documents, I would call updateMany with the same parameters.\n\n### Run an aggregation pipeline to compute something\n\nYou can also run [aggregation pipelines. As a simple example of how to call the aggregate endpoint, let's determine the count and average age for each color of labrador.\n\nAggregation pipelines are the more powerful part of the MongoDB Query API. 
As well as looking up documents, a pipeline allows you to calculate aggregate values across multiple documents. The following example extracts all labrador documents from the \"pets\" collection, groups them by their \"colour\" field, and then calculates the number of dogs ($sum of 1 for each dog document) and the average age of dog (using $avg) for each colour.\n\n```shell\ncurl --location --request POST https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/aggregate \\\n --header 'Content-Type: application/json' \\\n --header 'Access-Control-Request-Headers: *' \\\n --header \"api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg\" \\\n --data-raw '{\n \"dataSource\": \"Cluster0\",\n \"database\": \"household\", \n \"collection\": \"pets\",\n \"pipeline\" : { \"$match\": {\"breed\": \"Labrador\"}}, \n { \"$group\": { \"_id\" : \"$colour\",\n \"count\" : { \"$sum\" : 1},\n \"average_age\": {\"$avg\": \"$age\" }}}]}'\n }'\n```\n\nWhen I ran the above query, the result looked like this:\n\n```json\n{\"documents\":[{\"_id\":\"yellow\",\"count\":1,\"average_age\":5},{\"_id\":\"black\",\"count\":2,\"average_age\":5}]}\n```\n\nIt's worth noting that there are [some limitations when running aggregation pipelines through the Data API.\n\n## Advanced features\n\nWhen it comes to authentication and authorization, or just securing access to the Data API in general, you have a few options. These features use a neat feature of the Data API, which is that your Data API is a MongoDB Atlas Application Services app behind the scenes!\n\nYou can access the application by clicking on \"Advanced Settings\" on your Data API console page:\n\nThe rest of this section will use the features of this Atlas Application Services app, rather than the high level Data API pages.\n\n### Restrict access by IP address\n\nRestricting access to your API endpoint from only the servers that should have access is a relatively straightforward but effective way of locking down your API. You can change the list of IP addresses by clicking on \"App Settings\" in the left-hand navigation bar, and then clicking on the \"IP Access List\" tab on the settings pane.\n\nBy default, all IP addresses are allowed to access your API endpoint (that's what 0.0.0.0 means). If you want to lock down access to your API, you should delete this entry and add entries for servers that should be able to access your data. There's a convenient button to add your current IP address for when you're writing code against your API endpoint.\n\n### Authentication using JWTs and JWK\n\nIn all the examples in this post, I've shown you how to use an API key to access your data. But by using the Atlas Application Services app, you can lock down access to your data using JSON Web Tokens (or JWTs) and email/password credentials. JWT has the benefit that you can use an external authentication service or identity providers, like Auth0 or Okta, to authenticate users of your application. The auth service can provide a JWT that your application can use to make authenticated queries using the Data API, and provides a JWK (JSON Web Keys) URL that can be used by the Data API to ensure any incoming requests have been authenticated by the authentication service.\n\nMy colleague Jesse (you may know him as codeSTACKr) has written a great tutorial for getting this up and running with the Data API and Auth0, and the same process applies for accepting JWTs with the Data API. 
By first clicking on \"Advanced Settings\" to access the configuration of the app that provides your Data API endpoints behind the scenes and going into \u201cAuthentication,\u201d you can enable the provider with the appropriate signing key and algorithm.\n\nInstead of setting up a trigger to create a new user document when a new JWT is encountered, however, set \"Create User Upon Authentication\" in the User Settings panel on the Data API configuration to \"on.\" \n\n### Giving role-based access to the Data API\n\nFor each cluster, you can set high-level access permissions like Read-Only Access, Read & Write Access, or No Access. However, you can also take this one step further by setting custom role-based access-control with the App Service Rules. \n\nSelecting Custom Access will allow you to set up additional roles on who can access what data, either at the cluster, collection, document, or field level. \n\nFor example, you can restrict certain API key holders to only be able to insert documents but not delete them. These user.id fields are associated with each API key created:\n\n### Add additional business logic with custom API endpoints\n\nThe Data API provides the basic CRUD and aggregation endpoints I've described above. For accessing and manipulating the data in your MongoDB database, because the Data API is provided by an Atlas App Services application, you get all the goodness that goes with that, including the ability to add more API endpoints yourself that can use all the power available to MongoDB Atlas Functions.\n\nFor example, I could write a serverless function that would look up a user's tweets using the Twitter API, combine those with a document looked up in MongoDB, and return the result:\n\n```javascript\nexports = function({ query, headers, body}, response) {\n const collection = context.services.get(\"mongodb-atlas\").db(\"user_database\").collection(\"twitter_users\");\n\n const username = query.user;\n\n const userDoc = collection.findOne({ \"username\": username });\n\n // This function is for illustration only!\n const tweets = twitter_api.get_tweets(userDoc.twitter_id);\n\n return {\n user: userDoc,\n tweets: tweets\n }\n};\n```\n\nBy configuring this as an HTTPS endpoint, I can set things like the \n\n1. API route.\n2. HTTPS method.\n3. Custom authentication or authorization logic.\n\nIn this example, I\u2019ve made this function available via a straightforward HTTPS GET request.\n\nIn this way, you can build an API to handle all of your application's data service requirements, all in one place. The endpoint above could be accessed with the following curl command:\n\n```shell\ncurl --location --request GET 'https://data.mongodb-api.com/app/data-abcde/endpoint/data/v1/action/aggregate?user=mongodb' \\\n --header 'Content-Type: application/json' \\\n --header 'Access-Control-Request-Headers: *' \\\n --header \"api-key: abcdMgLSoqpQdCfLO3QAiif61iI0v6JrvOYIBHeIBWS1zccqKLuDzyAAg\"\n```\n\nAnd the results would look something like this:\n\n```json\n{\"user\": { \"username\": \"mongodb\", \"twitter_id\": \"MongoDB\" },\n \"tweets\": { \"count\": 10, \"tweet_data\": [...]}}\n```\n\n## Conclusion\nThe Data API is a powerful new MongoDB Atlas feature, giving you the ability to query your database from any environment that supports HTTPS. It also supports powerful social authentication possibilities using the standard JWT and JWK technologies. 
And finally, you can extend your API using all the features like Rules, Authentication, and HTTPS Endpoints.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "This article introduces the Atlas Data API and describes how to enable it and then call it from cURL.", "contentType": "Article"}, "title": "An Introduction to the MongoDB Atlas Data API", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/python-quickstart-fle", "action": "created", "body": "# Store Sensitive Data With Python & MongoDB Client-Side Field Level Encryption\n\n \n\nWith a combination of legislation around customer data protection (such as GDPR), and increasing legislation around money laundering, it's increasingly necessary to be able to store sensitive customer data *securely*. While MongoDB's default security is based on modern industry standards, such as TLS for the transport-layer and SCRAM-SHA-2356 for password exchange, it's still possible for someone to get into your database, either by attacking your server through a different vector, or by somehow obtaining your security credentials.\n\nIn these situations, you can add an extra layer of security to the most sensitive fields in your database using client-side field level encryption (CSFLE). CSFLE encrypts certain fields that you specify, within the driver, on the client, so that it is never transmitted unencrypted, nor seen unencrypted by the MongoDB server. CSFLE makes it nearly impossible to obtain sensitive information from the database server either directly through intercepting data from the client, or from reading data directly from disk, even with DBA or root credentials.\n\nThere are two ways to use CSFLE in MongoDB: *Explicit*, where your code has to manually encrypt data before it is sent to the driver to be inserted or updated using helper methods; and *implicit*, where you declare in your collection which fields should be encrypted using an extended JSON Schema, and this is done by the Python driver without any code changes. This tutorial will cover *implicit* CSFLE, which is only available in MongoDB Enterprise and MongoDB Atlas. If you're running MongoDB Community Server, you'll need to use explicit CSFLE, which won't be covered here.\n\n## Prerequisites\n\n- A recent release of Python 3. The code in this post was written for 3.8, but any release of Python 3.6+ should be fine.\n- A MongoDB Atlas cluster running MongoDB 4.2 or later.\n\n## Getting Set Up\n\nThere are two things you need to have installed on your app server to enable CSFLE in the PyMongo driver. The first is a Python library called pymongocrypt, which you can install by running the following with your virtualenv enabled:\n\n``` bash\npython -m pip install \"pymongoencryption,srv]~=3.11\"\n```\n\nThe `[encryption]` in square braces tells pip to install the optional dependencies required to encrypt data within the PyMongo driver.\n\nThe second thing you'll need to have installed is mongocryptd, which is an application that is provided as part of [MongoDB Enterprise. Follow the instructions to install mongocryptd on to the machine you'll be using to run your Python code. In a production environment, it's recommended to run mongocryptd as a service at startup on your VM or container.\n\nTest that you have mongocryptd installed in your path by running `mongocryptd`, ensuring that it prints out some output. 
You can then shut it down again with `Ctrl-C`.\n\n## Creating a Key to Encrypt and Decrypt Your Data\n\nFirst, I'll show you how to write a script to generate a new secret master key which will be used to protect individual field keys. In this tutorial, we will be using a \"local\" master key which will be stored on the application side either in-line in code or in a local key file. Note that a local key file should only be used in development. For production, it's strongly recommended to either use one of the integrated native cloud key management services or retrieve the master key from a secrets manager such as Hashicorp Vault. This Python script will generate some random bytes to be used as a secret master key. It will then create a new field key in MongoDB, encrypted using the master key. The master key will be written out to a file so it can be loaded by other python scripts, along with a JSON schema document that will tell PyMongo which fields should be encrypted and how.\n\n>All of the code described in this post is on GitHub. I recommend you check it out if you get stuck, but otherwise, it's worth following the tutorial and writing the code yourself!\n\nFirst, here's a few imports you'll need. Paste these into a file called `create_key.py`.\n\n``` python\n# create_key.py\n\nimport os\nfrom pathlib import Path\nfrom secrets import token_bytes\n\nfrom bson import json_util\nfrom bson.binary import STANDARD\nfrom bson.codec_options import CodecOptions\nfrom pymongo import MongoClient\nfrom pymongo.encryption import ClientEncryption\nfrom pymongo.encryption_options import AutoEncryptionOpts\n```\n\nThe first thing you need to do is to generate 96 bytes of random data. Fortunately, Python ships with a module for exactly this purpose, called `secrets`. You can use the `token_bytes` method for this:\n\n``` python\n# create_key.py\n\n# Generate a secure 96-byte secret key:\nkey_bytes = token_bytes(96)\n```\n\nNext, here's some code that creates a MongoClient, configured with a local key management system (KMS).\n\n>**Note**: Storing the master key, unencrypted, on a local filesystem (which is what I do in this demo code) is insecure. 
In production you should use a secure KMS, such as AWS KMS, Azure Key Vault, or Google's Cloud KMS.\n>\n>I'll cover this in a later blog post, but if you want to get started now, you should read the documentation\n\nAdd this code to your `create_key.py` script:\n\n``` python\n# create_key.py\n\n# Configure a single, local KMS provider, with the saved key:\nkms_providers = {\"local\": {\"key\": key_bytes}}\ncsfle_opts = AutoEncryptionOpts(\n kms_providers=kms_providers, key_vault_namespace=\"csfle_demo.__keystore\"\n)\n\n# Connect to MongoDB with the key information generated above:\nwith MongoClient(os.environ\"MDB_URL\"], auto_encryption_opts=csfle_opts) as client:\n print(\"Resetting demo database & keystore ...\")\n client.drop_database(\"csfle_demo\")\n\n # Create a ClientEncryption object to create the data key below:\n client_encryption = ClientEncryption(\n kms_providers,\n \"csfle_demo.__keystore\",\n client,\n CodecOptions(uuid_representation=STANDARD),\n )\n\n print(\"Creating key in MongoDB ...\")\n key_id = client_encryption.create_data_key(\"local\", key_alt_names=[\"example\"])\n```\n\nOnce the client is configured in the code above, it's used to drop any existing \"csfle_demo\" database, just to ensure that running this or other scripts doesn't result in your database being left in a weird state.\n\nThe configuration and the client is then used to create a ClientEncryption object that you'll use once to create a data key in the `__keystore` collection in the `csfle_demo` database. `create_data_key` will create a document in the `__keystore` collection that will look a little like this:\n\n``` python\n{\n '_id': UUID('00c63aa2-059d-4548-9e18-54452195acd0'),\n 'creationDate': datetime.datetime(2020, 11, 24, 11, 25, 0, 974000),\n 'keyAltNames': ['example'],\n 'keyMaterial': b'W\\xd2\"\\xd7\\xd4d\\x02e/\\x8f|\\x8f\\xa2\\xb6\\xb1\\xc0Q\\xa0\\x1b\\xab ...'\n 'masterKey': {'provider': 'local'},\n 'status': 0,\n 'updateDate': datetime.datetime(2020, 11, 24, 11, 25, 0, 974000)\n}\n```\n\nNow you have two keys! One is the 96 random bytes you generated with `token_bytes` - that's the master key (which remains outside the database). And there's another key in the `__keystore` collection! This is because MongoDB CSFLE uses [envelope encryption. The key that is actually used to encrypt field values is stored in the database, but it is stored encrypted with the master key you generated.\n\nTo make sure you don't lose the master key, here's some code you should add to your script which will save it to a file called `key_bytes.bin`.\n\n``` python\n# create_key.py\n\nPath(\"key_bytes.bin\").write_bytes(key_bytes)\n```\n\nFinally, you need a JSON schema structure that will tell PyMongo which fields need to be encrypted, and how. The schema needs to reference the key you created in `__keystore`, and you have that in the `key_id` variable, so this script is a good place to generate the JSON file. Add the following to the end of your script:\n\n``` python\n# create_key.py\n\nschema = {\n \"bsonType\": \"object\",\n \"properties\": {\n \"ssn\": {\n \"encrypt\": {\n \"bsonType\": \"string\",\n # Change to \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\" in order to filter by ssn value:\n \"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\",\n \"keyId\": key_id], # Reference the key\n }\n },\n },\n}\n\njson_schema = json_util.dumps(\n schema, json_options=json_util.CANONICAL_JSON_OPTIONS, indent=2\n)\nPath(\"json_schema.json\").write_text(json_schema)\n```\n\nNow you can run this script. 
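In a shell with your virtualenv active, a typical run looks something like this (a sketch; the connection string is a placeholder for your own cluster's URL, described in the next paragraph):

``` bash
export MDB_URL="mongodb+srv://<username>:<password>@<your-cluster>.mongodb.net"
python create_key.py
```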
First, set the environment variable `MDB_URL` to the URL for your Atlas cluster. The script should create two files locally: `key_bytes.bin`, containing your master key; and `json_schema.json`, containing your JSON schema. In your database, there should be a `__keystore` collection containing your new (encrypted) field key! The easiest way to check this out is to go to [cloud.mongodb.com, find your cluster, and click on `Collections`.\n\n## Run Queries Using Your Key and Schema\n\nCreate a new file, called `csfle_main.py`. This script will connect to your MongoDB cluster using the key and schema created by running `create_key.py`. I'll then show you how to insert a document, and retrieve it both with and without CSFLE configuration, to show how it is stored encrypted and transparently decrypted by PyMongo when the correct configuration is provided.\n\nStart with some code to import the necessary modules and load the saved files:\n\n``` python\n# csfle_main.py\n\nimport os\nfrom pathlib import Path\n\nfrom pymongo import MongoClient\nfrom pymongo.encryption_options import AutoEncryptionOpts\nfrom pymongo.errors import EncryptionError\nfrom bson import json_util\n\n# Load the master key from 'key_bytes.bin':\nkey_bin = Path(\"key_bytes.bin\").read_bytes()\n\n# Load the 'person' schema from \"json_schema.json\":\ncollection_schema = json_util.loads(Path(\"json_schema.json\").read_text())\n```\n\nAdd the following configuration needed to connect to MongoDB:\n\n``` python\n# csfle_main.py\n\n# Configure a single, local KMS provider, with the saved key:\nkms_providers = {\"local\": {\"key\": key_bin}}\n\n# Create a configuration for PyMongo, specifying the local master key,\n# the collection used for storing key data, and the json schema specifying\n# field encryption:\ncsfle_opts = AutoEncryptionOpts(\n kms_providers,\n \"csfle_demo.__keystore\",\n schema_map={\"csfle_demo.people\": collection_schema},\n)\n```\n\nThe code above is very similar to the configuration created in `create_key.py`. Note that this time, `AutoEncryptionOpts` is passed a `schema_map`, mapping the loaded JSON schema against the `people` collection in the `csfle_demo` database. This will let PyMongo know which fields to encrypt and decrypt, and which algorithms and keys to use.\n\nAt this point, it's worth taking a look at the JSON schema that you're loading. It's stored in `json_schema.json`, and it should look a bit like this:\n\n``` json\n{\n\"bsonType\": \"object\",\n\"properties\": {\n \"ssn\": {\n \"encrypt\": {\n \"bsonType\": \"string\",\n \"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\",\n \"keyId\": \n {\n \"$binary\": {\n \"base64\": \"4/p3dLgeQPyuSaEf+NddHw==\",\n \"subType\": \"04\"}}]\n }}}}\n```\n\nThis schema specifies that the `ssn` field, used to store a social security number, is a string which should be stored encrypted using the [AEAD_AES_256_CBC_HMAC_SHA_512-Random algorithm.\n\nIf you don't want to store the schema in a file when you generate your field key in MongoDB, you can load the key ID at any time using the values you set for `keyAltNames` when you created the key. 
In my case, I set `keyAltNames` to `\"example\"]`, so I could look it up using the following line of code:\n\n``` python\nkey_id = db.__keystore.find_one({ \"keyAltNames\": \"example\" })[\"_id\"]\n```\n\nBecause my code in `create_key.py` writes out the schema at the same time as generating the key, it already has access to the key's ID so the code doesn't need to look it up.\n\nAdd the following code to connect to MongoDB using the configuration you added above:\n\n``` python\n# csfle_main.py\n\n# Add a new document to the \"people\" collection, and then read it back out\n# to demonstrate that the ssn field is automatically decrypted by PyMongo:\nwith MongoClient(os.environ[\"MDB_URL\"], auto_encryption_opts=csfle_opts) as client:\n client.csfle_demo.people.delete_many({})\n client.csfle_demo.people.insert_one({\n \"full_name\": \"Sophia Duleep Singh\",\n \"ssn\": \"123-12-1234\",\n })\n print(\"Decrypted find() results: \")\n print(client.csfle_demo.people.find_one())\n```\n\nThe code above connects to MongoDB and clears any existing documents from the `people` collection. It then adds a new person document, for Sophia Duleep Singh, with a fictional `ssn` value.\n\nJust to prove the data can be read back from MongoDB and decrypted by PyMongo, the last line of code queries back the record that was just added and prints it to the screen. When I ran this code, it printed:\n\n``` none\n{'_id': ObjectId('5fc12f13516b61fa7a99afba'), 'full_name': 'Sophia Duleep Singh', 'ssn': '123-12-1234'}\n```\n\nTo prove that the data is encrypted on the server, you can connect to your cluster using [Compass or at cloud.mongodb.com, but it's not a lot of code to connect again without encryption configuration, and query the document:\n\n``` python\n# csfle_main.py\n\n# Connect to MongoDB, but this time without CSFLE configuration.\n# This will print the document with ssn *still encrypted*:\nwith MongoClient(os.environ\"MDB_URL\"]) as client:\n print(\"Encrypted find() results: \")\n print(client.csfle_demo.people.find_one())\n```\n\nWhen I ran this, it printed out:\n\n``` none\n{\n '_id': ObjectId('5fc12f13516b61fa7a99afba'),\n 'full_name': 'Sophia Duleep Singh',\n 'ssn': Binary(b'\\x02\\xe3\\xfawt\\xb8\\x1e@\\xfc\\xaeI\\xa1\\x1f\\xf8\\xd7]\\x1f\\x02\\xd8+,\\x9el ...', 6)\n}\n```\n\nThat's a very different result from '123-12-1234'! Unfortunately, when you use the Random encryption algorithm, you lose the ability to filter on the field. You can see this if you add the following code to the end of your script and execute it:\n\n``` python\n# csfle_main.py\n\n# The following demonstrates that if the ssn field is encrypted as\n# \"Random\" it cannot be filtered:\ntry:\n with MongoClient(os.environ[\"MDB_URL\"], auto_encryption_opts=csfle_opts) as client:\n # This will fail if ssn is specified as \"Random\".\n # Change the algorithm to \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\"\n # in client_schema_create_key.py (and run it again) for this to succeed:\n print(\"Find by ssn: \")\n print(client.csfle_demo.people.find_one({\"ssn\": \"123-12-1234\"}))\nexcept EncryptionError as e:\n # This is expected if the field is \"Random\" but not if it's \"Deterministic\"\n print(e)\n```\n\nWhen you execute this block of code, it will print an exception saying, \"Cannot query on fields encrypted with the randomized encryption algorithm...\". `AEAD_AES_256_CBC_HMAC_SHA_512-Random` is the correct algorithm to use for sensitive data you won't have to filter on, such as medical conditions, security questions, etc. 
It also provides better protection against frequency analysis recovery, and so should probably be your default choice for encrypting sensitive data, especially data that is high-cardinality, such as a credit card number, phone number, or ... yes ... a social security number. But there's a distinct probability that you might want to search for someone by their Social Security number, given that it's a unique identifier for a person, and you can do this by encrypting it using the \"Deterministic\" algorithm.\n\nIn order to fix this, open up `create_key.py` again and change the algorithm in the schema definition from `Random` to `Deterministic`, so it looks like this:\n\n``` python\n# create_key.py\n\n\"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\",\n```\n\nRe-run `create_key.py` to generate a new master key, field key, and schema file. (This operation will also delete your `csfle_demo` database!) Run `csfle_main.py` again. This time, the block of code that failed before should instead print out the details of Sophia Duleep Singh.\n\nThe problem with this way of configuring your client is that if some other code is misconfigured, it can either save unencrypted values in the database or save them using the wrong key or algorithm. Here's an example of some code to add a second record, for Dora Thewlis. Unfortunately, this time, the configuration has not provided a `schema_map`! What this means is that the SSN for Dora Thewlis will be stored in plaintext.\n\n``` python\n# Configure encryption options with the same key, but *without* a schema:\ncsfle_opts_no_schema = AutoEncryptionOpts(\n kms_providers,\n \"csfle_demo.__keystore\",\n)\nwith MongoClient(\n os.environ[\"MDB_URL\"], auto_encryption_opts=csfle_opts_no_schema\n) as client:\n print(\"Inserting Dora Thewlis, without configured schema.\")\n # This will insert a document *without* encrypted ssn, because\n # no schema is specified in the client or server:\n client.csfle_demo.people.insert_one({\n \"full_name\": \"Dora Thewlis\",\n \"ssn\": \"234-23-2345\",\n })\n\n# Connect without CSFLE configuration to show that Sophia Duleep Singh is\n# encrypted, but Dora Thewlis has her ssn saved as plaintext.\nwith MongoClient(os.environ[\"MDB_URL\"]) as client:\n print(\"Encrypted find() results: \")\n for doc in client.csfle_demo.people.find():\n print(\" *\", doc)\n```\n\nIf you paste the above code into your script and run it, it should print out something like this, demonstrating that one of the documents has an encrypted SSN, and the other's is plaintext:\n\n``` none\n* {'_id': ObjectId('5fc12f13516b61fa7a99afba'), 'full_name': 'Sophia Duleep Singh', 'ssn': Binary(b'\\x02\\xe3\\xfawt\\xb8\\x1e@\\xfc\\xaeI\\xa1\\x1f\\xf8\\xd7]\\x1f\\x02\\xd8+,\\x9el\\xfe\\xee\\xa7\\xd9\\x87+\\xb9p\\x9a\\xe7\\xdcjY\\x98\\x82]7\\xf0\\xa4G[]\\xd2OE\\xbe+\\xa3\\x8b\\xf5\\x9f\\x90u6>\\xf3(6\\x9c\\x1f\\x8e\\xd8\\x02\\xe5\\xb5h\\xc64i>\\xbf\\x06\\xf6\\xbb\\xdb\\xad\\xf4\\xacp\\xf1\\x85\\xdbp\\xeau\\x05\\xe4Z\\xe9\\xe9\\xd0\\xe9\\xe1n<', 6)}\n* {'_id': ObjectId('5fc12f14516b61fa7a99afc0'), 'full_name': 'Dora Thewlis', 'ssn': '234-23-2345'}\n```\n\n*Fortunately*, MongoDB provides the ability to attach a [validator to a collection, to ensure that the data stored is encrypted according to the schema.\n\nIn order to have a schema defined on the server-side, return to your `create_key.py` script, and instead of writing out the schema to a JSON file, provide it to the `create_collection` method as a JSON Schema validator:\n\n``` python\n# 
create_key.py\n\nprint(\"Creating 'people' collection in 'csfle_demo' database (with schema) ...\")\nclient.csfle_demo.create_collection(\n \"people\",\n codec_options=CodecOptions(uuid_representation=STANDARD),\n validator={\"$jsonSchema\": schema},\n)\n```\n\nProviding a validator attaches the schema to the created collection, so there's no need to save the file locally, no need to read it into `csfle_main.py`, and no need to provide it to MongoClient anymore. It will be stored and enforced by the server. This simplifies both the key generation code and the code to query the database, *and* it ensures that the SSN field will always be encrypted correctly. Bonus!\n\nThe definition of `csfle_opts` becomes:\n\n``` python\n# csfle_main.py\n\ncsfle_opts = AutoEncryptionOpts(\n kms_providers,\n \"csfle_demo.__keystore\",\n)\n```\n\n## In Conclusion\n\nBy completing this quick start, you've learned how to:\n\n- Create a secure random key for encrypting data keys in MongoDB.\n- Use local key storage to store a key during development.\n- Create a Key in MongoDB (encrypted with your local key) to encrypt data in MongoDB.\n- Use a JSON Schema to define which fields should be encrypted.\n- Assign the JSON Schema to a collection to validate encrypted fields on the server.\n\nAs mentioned earlier, you should *not* use local key storage to manage your key - it's insecure. You can store the key manually in a KMS of your choice, such as Hashicorp Vault, or if you're using one of the three major cloud providers, their KMS services are already integrated into PyMongo. Read the documentation to find out more.\n\n>I hope you enjoyed this post! Let us know what you think on the MongoDB Community Forums.\n\nThere is a lot of documentation about Client-Side Field-Level Encryption, in different places. Here are the docs I found useful when writing this post:\n\n- PyMongo CSFLE Docs\n- Client-Side Field Level Encryption docs\n- Schema Validation\n- MongoDB University CSFLE Guides Repository\n\nIf CSFLE doesn't quite fit your security requirements, you should check out our other security docs, which cover encryption at rest and configuring transport encryption, among other things.\n\nAs always, if you have any questions, or if you've built something cool, let us know on the MongoDB Community Forums!", "format": "md", "metadata": {"tags": ["Python", "MongoDB"], "pageDescription": "Store data securely in MongoDB using Client-Side Field-Level Encryption", "contentType": "Quickstart"}, "title": "Store Sensitive Data With Python & MongoDB Client-Side Field Level Encryption", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/node-aggregation-framework", "action": "created", "body": "# Aggregation Framework with Node.js Tutorial\n\nWhen you want to analyze data stored in MongoDB, you can use MongoDB's powerful aggregation framework to do so. Today, I'll give you a high-level overview of the aggregation framework and show you how to use it.\n\n>This post uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.\n>\n>Click here to see a newer version of this post that uses MongoDB 4.4, MongoDB Node.js Driver 3.6.4, and Node.js 14.15.4.\n\nIf you're just joining us in this Quick Start with MongoDB and Node.js series, welcome! So far, we've covered how to connect to MongoDB and perform each of the CRUD (Create, Read, Update, and Delete) operations. 
The code we write today will use the same structure as the code we built in the first post in the series; so, if you have any questions about how to get started or how the code is structured, head back to that first post.\n\nAnd, with that, let's dive into the aggregation framework!\n\n>If you are more of a video person than an article person, fear not. I've made a video just for you! The video below covers the same content as this article.\n>\n>:youtube]{vid=iz37fDe1XoM}\n>\n>Get started with an M0 cluster on [Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.\n\n## What is the Aggregation Framework?\n\nThe aggregation framework allows you to analyze your data in real time. Using the framework, you can create an aggregation pipeline that consists of one or more stages. Each stage transforms the documents and passes the output to the next stage.\n\nIf you're familiar with the Linux pipe ( `|` ), you can think of the aggregation pipeline as a very similar concept. Just as output from one command is passed as input to the next command when you use piping, output from one stage is passed as input to the next stage when you use the aggregation pipeline.\n\nThe aggregation framework has a variety of stages available for you to use. Today, we'll discuss the basics of how to use $match, $group, $sort, and $limit. Note that the aggregation framework has many other powerful stages including $count, $geoNear, $graphLookup, $project, $unwind, and others.\n\n## How Do You Use the Aggregation Framework?\n\nI'm hoping to visit the beautiful city of Sydney, Australia soon. Sydney is a huge city with many suburbs, and I'm not sure where to start looking for a cheap rental. I want to know which Sydney suburbs have, on average, the cheapest one-bedroom Airbnb listings.\n\nI could write a query to pull all of the one-bedroom listings in the Sydney area and then write a script to group the listings by suburb and calculate the average price per suburb. Or, I could write a single command using the aggregation pipeline. Let's use the aggregation pipeline.\n\nThere is a variety of ways you can create aggregation pipelines. You can write them manually in a code editor or create them visually inside of MongoDB Atlas or MongoDB Compass. In general, I don't recommend writing pipelines manually as it's much easier to understand what your pipeline is doing and spot errors when you use a visual editor. Since you're already setup to use MongoDB Atlas for this blog series, we'll create our aggregation pipeline in Atlas.\n\n### Navigate to the Aggregation Pipeline Builder in Atlas\n\nThe first thing we need to do is navigate to the Aggregation Pipeline Builder in Atlas.\n\n1. Navigate to Atlas and authenticate if you're not already authenticated.\n2. In the **Organizations** menu in the upper-left corner, select the organization you are using for this Quick Start series.\n3. In the **Projects** menu (located beneath the Organizations menu), select the project you are using for this Quick Start series.\n4. In the right pane for your cluster, click **COLLECTIONS**.\n5. In the list of databases and collections that appears, select **listingsAndReviews**.\n6. In the right pane, select the **Aggregation** view to open the Aggregation Pipeline Builder.\n\nThe Aggregation Pipeline Builder provides you with a visual representation of your aggregation pipeline. Each stage is represented by a new row. 
You can put the code for each stage on the left side of a row, and the Aggregation Pipeline Builder will automatically provide a live sample of results for that stage on the right side of the row.\n\n## Build an Aggregation Pipeline\n\nNow we are ready to build an aggregation pipeline.\n\n### Add a $match Stage\n\nLet's begin by narrowing down the documents in our pipeline to one-bedroom listings in the Sydney, Australia market where the room type is \"Entire home/apt.\" We can do so by using the $match stage.\n\n1. On the row representing the first stage of the pipeline, choose **$match** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$match` operator in the code box for the stage.\n\n \n\n2. Now we can input a query in the code box. The query syntax for `$match` is the same as the `findOne()` syntax that we used in a previous post. Replace the code in the `$match` stage's code box with the following:\n\n``` json\n{\n bedrooms: 1,\n \"address.country\": \"Australia\",\n \"address.market\": \"Sydney\",\n \"address.suburb\": { $exists: 1, $ne: \"\" },\n room_type: \"Entire home/apt\"\n}\n```\n\nNote that we will be using the `address.suburb` field later in the pipeline, so we are filtering out documents where `address.suburb` does not exist or is represented by an empty string.\n\nThe Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 20 documents that will be included in the results after the `$match` stage is executed.\n\n### Add a $group Stage\n\nNow that we have narrowed our documents down to one-bedroom listings in the Sydney, Australia market, we are ready to group them by suburb. We can do so by using the $group stage.\n\n1. Click **ADD STAGE**. A new stage appears in the pipeline.\n2. On the row representing the new stage of the pipeline, choose **$group** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$group` operator in the code box for the stage.\n\n \n\n3. Now we can input code for the `$group` stage. We will provide an `_id`, which is the field that the Aggregation Framework will use to create our groups. In this case, we will use `$address.suburb` as our `_id`. Inside of the $group stage, we will also create a new field named `averagePrice`. We can use the $avg aggregation pipeline operator to calculate the average price for each suburb. Replace the code in the $group stage's code box with the following:\n\n``` json\n{\n _id: \"$address.suburb\",\n averagePrice: {\n \"$avg\": \"$price\"\n }\n}\n```\n\nThe Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 20 documents that will be included in the results after the `$group` stage is executed. Note that the documents have been transformed. Instead of having a document for each listing, we now have a document for each suburb. The suburb documents have only two fields: `_id` (the name of the suburb) and `averagePrice`.\n\n### Add a $sort Stage\n\nNow that we have the average prices for suburbs in the Sydney, Australia market, we are ready to sort them to discover which are the least expensive. We can do so by using the $sort stage.\n\n1. Click **ADD STAGE**. A new stage appears in the pipeline.\n2. On the row representing the new stage of the pipeline, choose **$sort** in the **Select**... box. 
The Aggregation Pipeline Builder automatically provides sample code for how to use the `$sort` operator in the code box for the stage.\n\n \n\n3. Now we are ready to input code for the `$sort` stage. We will sort on the `$averagePrice` field we created in the previous stage. We will indicate we want to sort in ascending order by passing `1`. Replace the code in the `$sort` stage's code box with the following:\n\n``` json\n{\n \"averagePrice\": 1\n}\n```\n\nThe Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 20 documents that will be included in the results after the `$sort` stage is executed. Note that the documents have the same shape as the documents in the previous stage; the documents are simply sorted from least to most expensive.\n\n### Add a $limit Stage\n\nNow we have the average prices for suburbs in the Sydney, Australia market sorted from least to most expensive. We may not want to work with all of the suburb documents in our application. Instead, we may want to limit our results to the 10 least expensive suburbs. We can do so by using the $limit stage.\n\n1. Click **ADD STAGE**. A new stage appears in the pipeline.\n2. On the row representing the new stage of the pipeline, choose **$limit** in the **Select**... box. The Aggregation Pipeline Builder automatically provides sample code for how to use the `$limit` operator in the code box for the stage.\n\n \n\n3. Now we are ready to input code for the `$limit` stage. Let's limit our results to 10 documents. Replace the code in the $limit stage's code box with the following:\n\n``` json\n10\n```\n\nThe Aggregation Pipeline Builder automatically updates the output on the right side of the row to show a sample of 10 documents that will be included in the results after the `$limit` stage is executed. Note that the documents have the same shape as the documents in the previous stage; we've simply limited the number of results to 10.\n\n## Execute an Aggregation Pipeline in Node.js\n\nNow that we have built an aggregation pipeline, let's execute it from inside of a Node.js script.\n\n### Get a Copy of the Node.js Template\n\nTo make following along with this blog post easier, I've created a starter template for a Node.js script that accesses an Atlas cluster.\n\n1. Download a copy of template.js.\n2. Open `template.js` in your favorite code editor.\n3. Update the Connection URI to point to your Atlas cluster. If you're not sure how to do that, refer back to the first post in this series.\n4. Save the file as `aggregation.js`.\n\nYou can run this file by executing `node aggregation.js` in your shell. At this point, the file simply opens and closes a connection to your Atlas cluster, so no output is expected. If you see DeprecationWarnings, you can ignore them for the purposes of this post.\n\n### Create a Function\n\nLet's create a function whose job it is to print the cheapest suburbs for a given market.\n\n1. Continuing to work in `aggregation.js`, create an asynchronous function named `printCheapestSuburbs` that accepts a connected MongoClient, a country, a market, and the maximum number of results to print as parameters.\n\n ``` js\n async function printCheapestSuburbs(client, country, market, maxNumberToPrint) {\n }\n ```\n\n2. 
We can execute a pipeline in Node.js by calling\n Collection's\n aggregate().\n Paste the following in your new function:\n\n ``` js\n const pipeline = ];\n\n const aggCursor = client.db(\"sample_airbnb\")\n .collection(\"listingsAndReviews\")\n .aggregate(pipeline);\n ```\n\n3. The first param for `aggregate()` is a pipeline of type object. We could manually create the pipeline here. Since we've already created a pipeline inside of Atlas, let's export the pipeline from there. Return to the Aggregation Pipeline Builder in Atlas. Click the **Export pipeline code to language** button.\n\n ![Export pipeline in Atlas\n\n4. The **Export Pipeline To Language** dialog appears. In the **Export Pipleine To** selection box, choose **NODE**.\n5. In the Node pane on the right side of the dialog, click the **copy** button.\n6. Return to your code editor and paste the `pipeline` in place of the empty object currently assigned to the pipeline constant.\n\n ``` js\n const pipeline = \n {\n '$match': {\n 'bedrooms': 1,\n 'address.country': 'Australia', \n 'address.market': 'Sydney', \n 'address.suburb': {\n '$exists': 1, \n '$ne': ''\n }, \n 'room_type': 'Entire home/apt'\n }\n }, {\n '$group': {\n '_id': '$address.suburb', \n 'averagePrice': {\n '$avg': '$price'\n }\n }\n }, {\n '$sort': {\n 'averagePrice': 1\n }\n }, {\n '$limit': 10\n }\n ];\n ```\n\n7. This pipeline would work fine as written. However, it is hardcoded to search for 10 results in the Sydney, Australia market. We should update this pipeline to be more generic. Make the following replacements in the pipeline definition:\n 1. Replace `'Australia'` with `country`\n 2. Replace `'Sydney'` with `market`\n 3. Replace `10` with `maxNumberToPrint`\n\n8. `aggregate()` will return an [AggregationCursor, which we are storing in the `aggCursor` constant. An AggregationCursor allows traversal over the aggregation pipeline results. We can use AggregationCursor's forEach() to iterate over the results. Paste the following inside `printCheapestSuburbs()` below the definition of `aggCursor`.\n\n``` js\nawait aggCursor.forEach(airbnbListing => {\n console.log(`${airbnbListing._id}: ${airbnbListing.averagePrice}`);\n});\n```\n\n### Call the Function\n\nNow we are ready to call our function to print the 10 cheapest suburbs in the Sydney, Australia market. Add the following call in the `main()` function beneath the comment that says `Make the appropriate DB calls`.\n\n``` js\nawait printCheapestSuburbs(client, \"Australia\", \"Sydney\", 10);\n```\n\nRunning aggregation.js results in the following output:\n\n``` json\nBalgowlah: 45.00\nWilloughby: 80.00\nMarrickville: 94.50\nSt Peters: 100.00\nRedfern: 101.00\nCronulla: 109.00\nBellevue Hill: 109.50\nKingsgrove: 112.00\nCoogee: 115.00\nNeutral Bay: 119.00\n```\n\nNow I know what suburbs to begin searching as I prepare for my trip to Sydney, Australia.\n\n## Wrapping Up\n\nThe aggregation framework is an incredibly powerful way to analyze your data. Learning to create pipelines may seem a little intimidating at first, but it's worth the investment. The aggregation framework can get results to your end-users faster and save you from a lot of scripting.\n\nToday, we only scratched the surface of the aggregation framework. I highly recommend MongoDB University's free course specifically on the aggregation framework: M121: The MongoDB Aggregation Framework. 
The course has a more thorough explanation of how the aggregation framework works and provides detail on how to use the various pipeline stages.\n\nThis post included many code snippets that built on code written in the first post of this MongoDB and Node.js Quick Start series. To get a full copy of the code used in today's post, visit the Node.js Quick Start GitHub Repo.\n\nNow you're ready to move on to the next post in this series all about change streams and triggers. In that post, you'll learn how to automatically react to changes in your database.\n\nQuestions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB", "Node.js"], "pageDescription": "Discover how to analyze your data using MongoDB's Aggregation Framework and Node.js.", "contentType": "Quickstart"}, "title": "Aggregation Framework with Node.js Tutorial", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/real-time-chat-phaser-game-mongodb-socketio", "action": "created", "body": "\n \n \n \n \n \n \n \n \n ", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript", "Node.js"], "pageDescription": "Learn how to add real-time chat to a Phaser game with Socket.io and MongoDB.", "contentType": "Tutorial"}, "title": "Real-Time Chat in a Phaser Game with MongoDB and Socket.io", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/jetpack-compose-experience-android", "action": "created", "body": "# Unboxing Jetpack Compose: My First Compose\u00a0App\n\n# Introduction \n\n### What is Jetpack Compose?\n\nAs per Google, *\u201cJetpack Compose is Android\u2019s modern toolkit for building native UI. It simplifies and accelerates UI development on Android. Quickly bring your app to life with less code, powerful tools, and intuitive Kotlin APIs\u201d.*\n\nIn my words, it\u2019s a revolutionary **declarative way** of creating (or should I say composing \ud83d\ude04) UI in Android using Kotlin. Until now, we created layouts using XML and never dared to create via code (except for custom views, no choice) due to its complexity, non-intuitiveness, and maintenance issues.\n\nBut now it\u2019s different!\n\n### What is Declarative UI?\n\n> You know, imperative is like **how** you do something, and declarative is more like **what** you do, or something.\n\nDoesn\u2019t that make sense? It didn\u2019t to me as well in the first go \ud83d\ude04. In my opinion, imperative is more like an algorithm to perform any operation step by step, and declarative is the code that is built using the algorithm, more to do ***what*** works.\n\nIn Android, we normally create an XML of a layout and then update (sync) each element every time based on the input received, business rules using findViewById/Kotlin Semantics/View Binding/ Data Binding \ud83d\ude05.\n\nBut with **Compose**, we simply write a function that has both elements and rules, which is called whenever information gets updated. In short, a part of the UI is recreated every time **without** **performance** **issues**.\n\nThis philosophy or mindset will in turn help you write smaller (Single Responsibility principle) and reusable functions.\n\n### Why is Compose Getting So Popular?\n\nI\u2019m not really sure, but out of the many awesome features, the ones I\u2019ve loved most are:\n\n1. 
**Faster release cycle**: Bi-weekly, so now there is a real chance that if you get any issue with **composing,** it can be fixed soon. Hopefully!\n\n2. **Interoperable**: Similar to Kotlin, Compose is also interoperable with earlier UI design frameworks.\n\n3. **Jetpack library and material component built-in support**: Reduce developer efforts and time in building beautiful UI with fewer lines of code \u2764\ufe0f.\n\n4. **Declarative UI**: With a new way of building UI, we are now in harmony with all other major frontend development frameworks like SwiftUI, Flutter, and React Native, making it easier for the developer to use concepts/paradigms from other platforms.\n\n### Current state\n\nAs of 29th July, the first stable version was released 1.0, meaning **Compose is production-ready**.\n\n# Get Started with Compose\n\n### For using Compose, we need to set up a few things:\n\n 1. Kotlin v*1.5.10* and above, so let\u2019s update our dependency in the project-level `build.gradle` file.\n\n ```kotlin\n plugins {\n id 'org.jetbrains.kotlin:android' version '1.5.10'\n } \n ```\n\n2. Minimum *API level 21*\n\n ```kotlin\n android {\n defaultConfig {\n ...\n minSdkVersion 21\n }\n } \n ```\n\n3. Enable Compose \n\n ```kotlin\n android { \n\n defaultConfig {\n ...\n minSdkVersion 21\n }\n\n buildFeatures {\n // Enables Jetpack Compose for this module\n compose true\n }\n }\n ```\n \n4. Others like min Java or Kotlin compiler and compose compiler\n\n ```kotlin\n android {\n defaultConfig {\n ...\n minSdkVersion 21\n }\n\n buildFeatures {\n // Enables Jetpack Compose for this module\n compose true\n }\n ...\n\n // Set both the Java and Kotlin compilers to target Java 8.\n compileOptions {\n sourceCompatibility JavaVersion.VERSION_1_8\n targetCompatibility JavaVersion.VERSION_1_8\n }\n kotlinOptions {\n jvmTarget = \"1.8\"\n }\n\n composeOptions {\n kotlinCompilerExtensionVersion '1.0.0'\n }\n }\n ```\n\n5. At last compose dependency for build UI\n\n ```kotlin\n dependencies {\n\n implementation 'androidx.compose.ui:ui:1.0.0'\n // Tooling support (Previews, etc.)\n implementation 'androidx.compose.ui:ui-tooling:1.0.0'\n // Foundation (Border, Background, Box, Image, Scroll, shapes, animations, etc.)\n implementation 'androidx.compose.foundation:foundation:1.0.0'\n // Material Design\n implementation 'androidx.compose.material:material:1.0.0'\n // Material design icons\n implementation 'androidx.compose.material:material-icons-core:1.0.0'\n implementation 'androidx.compose.material:material-icons-extended:1.0.0'\n // Integration with activities\n implementation 'androidx.activity:activity-compose:1.3.0'\n // Integration with ViewModels\n implementation 'androidx.lifecycle:lifecycle-viewmodel-compose:1.0.0-alpha07'\n // Integration with observables\n implementation 'androidx.compose.runtime:runtime-livedata:1.0.0'\n\n }\n ```\n\n### Mindset \n\nWhile composing UI, you need to unlearn various types of layouts and remember just one thing: Everything is a composition of *rows* and *columns*.\n\nBut what about ConstraintLayout, which makes life so easy and is very useful for building complex UI? We can still use it \u2764\ufe0f, but in a little different way.\n\n### First Compose Project \u2014 Tweet Details Screen\n\nFor our learning curve experience, I decided to re-create this screen in Compose.\n\nSo let\u2019s get started. 
\n\nCreate a new project with Compose project as a template and open MainActivity.\n\nIf you don\u2019t see the Compose project, then update Android Studio to the latest version.\n\n```kotlin\n class MainActivity : ComponentActivity() {\n \n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContent {\n ComposeTweetTheme {\n \n .... \n }\n }\n }\n }\n```\n\nNow to add a view to the UI, we need to create a function with `@Composable` annotation, which makes it a Compose function.\n\nCreating our first layout of the view, toolbar\n\n```kotlin\n @Composable\n fun getAppTopBar() {\n TopAppBar(\n title = {\n Text(text = stringResource(id = R.string.app_name))\n },\n elevation = 0.*dp\n )\n }\n```\n\nTo preview the UI rendered in Android Studio, we can use `@Preview` annotation.\n\nTopAppBar is an inbuilt material component for adding a topbar to our application.\n\nLet\u2019s create a little more complex view, user profile view\n\nAs discussed earlier, in Compose, we have only rows and columns, so let\u2019s break our UI \ud83d\udc47, where the red border represents columns and green is rows, and complete UI as a row in the screen.\n\nSo let\u2019s create our compose function for user profile view with our root row.\n\nYou will notice the modifier argument in the Row function. This is the Compose way of adding formatting to the elements, which is uniform across all the elements.\n\nCreating a round imageview is very simple now. No need for any library or XML drawable as an overlay.\n\n```kotlin\nImage(\n painter = painterResource(id = R.drawable.ic_profile),\n contentDescription = \"Profile Image\",\n modifier = Modifier\n .size(36.dp)\n .clip(CircleShape)\n .border(1.dp, Color.Transparent, CircleShape),\n contentScale = ContentScale.Crop\n )\n ```\n\nAgain we have a `modifier` for updating our Image (AKA ImageView) with `clip` to make it rounded and `contentScale` to scale the image. \n\nSimilarly, adding a label will be a piece of cake now.\n\n```kotlin\n Text (text = userName, fontSize = 20.sp)\n```\n\nNow let\u2019s put it all together in rows and columns to complete the view.\n\n```kotlin\n@Composable\nfun userProfileView(userName: String, userHandle: String) {\n Row(\n modifier = Modifier\n .fillMaxWidth()\n .wrapContentHeight()\n .padding(all = 12.dp),\n verticalAlignment = Alignment.CenterVertically\n ) {\n Image(\n painter = painterResource(id = R.drawable.ic_profile),\n contentDescription = \"Profile Image\",\n modifier = Modifier\n .size(36.dp)\n .clip(CircleShape)\n .border(1.dp, Color.Transparent, CircleShape),\n contentScale = ContentScale.Crop\n )\n Column(\n modifier = Modifier\n .padding(start = 12.dp)\n ) {\n Text(text = userName, fontSize = 20.sp, fontWeight = FontWeight.Bold)\n Text(text = userHandle, fontSize = 14.sp)\n }\n }\n}\n```\n\nAnother great example is to create a Text Label with two styles. We know that traditionally doing that is very painful.\n\nLet\u2019s see the Compose way of doing it.\n\n```kotlin\nText(\n text = buildAnnotatedString {\n withStyle(style = SpanStyle(fontWeight = FontWeight.ExtraBold)) {\n append(\"3\")\n }\n append(\" \")\n withStyle(style = SpanStyle(fontWeight = FontWeight.Normal)) {\n\n append(stringResource(id = R.string.retweets))\n }\n },\n modifier = Modifier.padding(end = 8.dp)\n )\n\n```\n\nThat\u2019s it!! I hope you\u2019ve seen the ease of use and benefit of using Compose for building UI.\n\nJust remember everything in Compose is rows and columns, and the order of attributes matters. 
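To see all of this rendered without deploying to a device, the `@Preview` annotation mentioned earlier can be combined with the profile view. A quick sketch, assuming the project's `ComposeTweetTheme` wrapper from `MainActivity` (the sample strings are placeholders):

```kotlin
@Preview(showBackground = true)
@Composable
fun PreviewUserProfileView() {
    ComposeTweetTheme {
        // Render the composable with placeholder data in the Android Studio preview pane
        userProfileView(userName = "Jane Doe", userHandle = "@janedoe")
    }
}
```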
You can check out my Github repo complete example which also demonstrates the rendering of data using `viewModel`.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Realm", "Kotlin", "Android"], "pageDescription": "Learn how to get started with Jetpack Compose on Android", "contentType": "Quickstart"}, "title": "Unboxing Jetpack Compose: My First Compose\u00a0App", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/searching-on-your-location-atlas-search-geospatial-operators", "action": "created", "body": "\n \n Bed and Breakfast [40.7128, -74.0060]:\n \n \n \n ", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "Learn how to compound Atlas Search operators and do autocomplete searches with geospatial criteria.", "contentType": "Tutorial"}, "title": "Searching on Your Location with Atlas Search and Geospatial Operators", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/adding-real-time-notifications-ghost-cms-using-mongodb-server-sent-events", "action": "created", "body": "# Adding Real-Time Notifications to Ghost CMS Using MongoDB and Server-Sent Events\n\n## About Ghost\n\nGhost is an open-source blogging platform. Unlike other content management systems like WordPress, its focus lies on professional publishing.\n\nThis ensures the core of the system remains lean. To integrate third-party applications, you don't even need to install plugins. Instead, Ghost offers a feature called Webhooks which runs while you work on your publication.\n\nThese webhooks send particular items, such as a post or a page, to an HTTP endpoint defined by you and thereby provide an excellent base for our real-time service.\n\n## Server-sent events\n\nYou are likely familiar with the concept of an HTTP session. A client sends a request, the server responds and then closes the connection. When using server-sent events (SSEs), said connection remains open. This allows the server to continue writing messages into the response.\n\nLike Websockets (WS), apps and websites use SSEs for real-time communication. Where WSs use a dedicated protocol and work in both directions, SSEs are unidirectional. They use plain HTTP endpoints to write a message whenever an event occurs on the server side.\n\nClients can subscribe to these endpoints using the EventSource browser API:\n\n```javascript\nconst subscription = new EventSource(\"https://example.io/subscribe\")\n```\n\n## MongoDB Change Streams\n\nNow that we\u2019ve looked at the periphery of our application, it's time to present its core. We'll use MongoDB to store a subset of the received Ghost Webhook data. On top of that, we'll use MongoDB Change Streams to watch our webhook collection.\n\nIn a nutshell, Change Streams register data flowing into our database. We can subscribe to this data stream and react to it. 
Reacting means sending out SSE messages to connected clients whenever a new webhook is received and stored.\n\nThe following Javascript code showcases a simple Change Stream subscription.\n\n```javascript\nimport {MongoClient} from 'mongodb';\n\nconst client = new MongoClient(\"\");\nconst ghostDb = client.db('ghost');\nconst ghostCollection = ghostDb.collection('webhooks');\nconst ghostChangeStrem = ghostCollection.watch();\n\nghostChangeStream.on('change', document => {\n /* document is the MongoDB collection entry, e.g. our webhook */\n});\n```\n\nIts event-based nature matches perfectly with webhooks and SSEs. We can react to newly received webhooks where the data is created, ensuring data integrity over our whole application.\n\n## Build a real-time endpoint\n\nWe need an extra application layer to propagate these changes to connected clients. I've decided to use Typescript and Express.js, but you can use any other server-side framework. You will also need a dedicated MongoDB instance*. For a quick start, you can sign up for MongoDB Atlas. Then, create a free cluster and connect to it.\n\nLet's get started by cloning the `1-get-started` branch from this Github repository:\n\n```bash\n# ssh\n$ git clone git@github.com:tq-bit/mongodb-article-mongo-changestreams.git\n\n# HTTP(s)\n$ git clone https://github.com/tq-bit/mongodb-article-mongo-changestreams.git\n\n# Change to the starting branch\n$ git checkout 1-get-started\n\n# Install NPM dependencies\n$ npm install\n\n# Make a copy of .env.example\n$ cp .env.example .env\n```\n\n> Make sure to fill out the MONGO_HOST environment variable with your connection string!\n\nExpress and the database client are already implemented. So in the following, we'll focus on adding MongoDB change streams and server-sent events.\n\nOnce everything is set up, you can start the server on `http://localhost:3000` by typing\n\n```bash\nnpm run dev\n```\n\nThe application uses two important endpoints which we will extend in the next sections: \n\n- `/api/notification/subscribe` <- Used by EventSource to receive event messages\n- `/api/notification/article/create` <- Used as a webhook target by Ghost\n\n\\* If you are not using MongoDB Atlas, make sure to have Replication Sets enabled.\n\n## Add server-sent events\n\nOpen the cloned project in your favorite code editor. We'll add our SSE logic under `src/components/notification/notification.listener.ts`.\n\nIn a nutshell, implementing SSE requires three steps:\n\n- Write out an HTTP status 200 header.\n- Write out an opening message.\n- Add event-based response message handlers.\n\nWe\u2019ll start sending a static message and revisit this module after adding ChangeStreams.\n\n> You can also `git checkout 2-add-sse` to see the final result.\n\n### Write the HTTP header\n\nWriting the HTTP header informs clients of a successful connection. It also propagates the response's content type and makes sure events are not cached.\n\nAdd the following code to the function `subscribeToArticleNotification` inside:\n\n```javascript\n// Replace\n// TODO: Add function to write the head\n// with\nconsole.log('Step 1: Write the response head and keep the connection open');\nres.writeHead(200, {\n 'Content-Type': 'text/event-stream',\n 'Cache-Control': 'no-cache',\n Connection: 'keep-alive'\n});\n```\n\n### Write an opening message\n\nThe first message sent should have an event type of 'open'. 
It is not mandatory but helps to determine whether the subscription was successful.\n\nAppend the following code to the function `subscribeToArticleNotification`:\n\n```javascript\n// Replace\n// TODO: Add functionality to write the opening message\n// with\nconsole.log('Step 2: Write the opening event message');\nres.write('event: open\\n');\nres.write('data: Connection opened!\\n'); // Data can be any string\nres.write(`id: ${crypto.randomUUID()}\\n\\n`);\n```\n\n### Add response message handlers\n\nWe can customize the content and timing of all further messages sent. Let's add a placeholder function that sends messages out every five seconds for now. And while we\u2019re at it, let\u2019s also add a handler to close the client connection:\n\nAppend the following code to the function `subscribeToArticleNotification`:\n\n```javascript\nsetInterval(() => {\n console.log('Step 3: Send a message every five seconds');\n res.write(`event: message\\n`);\n res.write(`data: ${JSON.stringify({ message: 'Five seconds have passed' })}\\n`);\n res.write(`id: ${crypto.randomUUID()}\\n\\n`);\n}, 5000);\n\n// Step 4: Handle request events such as client disconnect\n// Clean up the Change Stream connection and close the connection stream to the client\nreq.on('close', () => {\n console.log('Step 4: Handle request events such as client disconnect');\n res.end();\n});\n```\n\nTo check if everything works, visit `http://localhost:3000/api/notification/subscribe`.\n\n## Add a POST endpoint for Ghost\n\nLet's visit `src/components/notification/notification.model.ts` next. We'll add a simple `insert` command for our database into the function `createNotificiation`:\n\n> You can also `git checkout 3-webhook-handler` to see the final result.\n\n```javascript\n// Replace\n// TODO: Add insert one functionality for DB\n// with\nreturn notificationCollection.insertOne(notification);\n```\n\nAnd on to `src/components/notification/notification.controller.ts`. To process incoming webhooks, we'll add a handler function into `handleArticleCreationNotification`:\n\n```javascript\n// Replace\n// TODO: ADD handleArticleCreationNotification\n// with\nconst incomingWebhook: GhostWebhook = req.body;\nawait NotificationModel.createNotificiation({\n id: crypto.randomUUID(),\n ghostId: incomingWebhook.post?.current?.id,\n ghostOriginalUrl: incomingWebhook.post?.current?.url,\n ghostTitle: incomingWebhook.post?.current?.title,\n ghostVisibility: incomingWebhook.post?.current?.visibility,\n type: NotificationEventType.PostPublished,\n});\n\nres.status(200).send('OK');\n```\n\nThis handler will pick data from the incoming webhook and insert a new notification.\n\n```bash\ncurl -X POST -d '{\n\"post\": {\n\"current\": {\n \"id\": \"sj7dj-lnhd1-kabah9-107gh-6hypo\",\n \"url\": \"http://localhost:2368/how-to-create-realtime-notifications\",\n \"title\": \"How to create realtime notifications\",\n \"visibility\": \"public\"\n}\n}\n}' http://localhost:3000/api/notification/article/create\n```\n\nYou can also test the insert functionality by using Postman or VSCode REST client and then check your MongoDB collection. There is an example request under `/test/notification.rest` in the project's directory, for your convenience.\n\n## Trigger MongoDB Change Streams\n\nSo far, we can send SSEs and insert Ghost notifications. Let's put these two features together now.\n\nEarlier, we added a static server message sent every five seconds. 
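Before wiring in the change stream, you can already watch those interval messages arrive in a browser. The following is a minimal client-side sketch using the EventSource API against the subscribe endpoint from this tutorial; the console messages are only illustrative:

```javascript
// Subscribe to the SSE endpoint and log the placeholder messages
// the server currently writes every five seconds.
const source = new EventSource('http://localhost:3000/api/notification/subscribe');

// Fires once the HTTP connection to the subscribe endpoint is established.
source.onopen = () => console.log('Subscribed to the notification stream');

// Fires for every "event: message" block the server writes.
source.addEventListener('message', (event) => {
  console.log('Received:', JSON.parse(event.data));
});

// Fires if the connection drops; EventSource retries automatically.
source.onerror = () => console.warn('Connection lost, retrying...');
```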
Let's revisit `src/components/notification/notification.listener.ts` and make it more dynamic.\n\nFirst, let's get rid of the whole `setInterval` and its callback. Instead, we'll use our `notificationCollection` and its built-in method `watch`. This method returns a `ChangeStream`.\n\nYou can create a change stream by adding the following code above the `export default` code segment:\n\n```javascript\nconst notificationStream = notificationCollection.watch();\n```\n\nThe stream fires an event whenever its related collection changes. This includes the `insert` event from the previous section.\n\nWe can register callback functions for each of these. The event that fires when a document inside the collection changes is 'change':\n\n```javascript\nnotificationStream.on('change', (next) => {\n console.log('Step 3.1: Change in Database detected!');\n});\n```\n\nThe variable passed into the callback function is a change stream document. It includes two important information for us:\n\n- The document that's inserted, updated, or deleted.\n- The type of operation on the collection.\n\nLet's assign them to one variable each inside the callback:\n\n```javascript\nnotificationStream.on('change', (next) => {\n // ... previous code\n const {\n // @ts-ignore, fullDocument is not part of the next type (yet)\n fullDocument /* The newly inserted fullDocument */,\n operationType /* The MongoDB operation Type, e.g. insert */,\n } = next;\n});\n```\n\nLet's write the notification to the client. We can do this by repeating the method we used for the opening message.\n\n```javascript\nnotificationStream.on('change', (next) => {\n // ... previous code\n console.log('Step 3.2: Writing out response to connected clients');\n res.write(`event: ${operationType}\\n`);\n res.write(`data: ${JSON.stringify(fullDocument)}\\n`);\n res.write(`id: ${crypto.randomUUID()}\\n\\n`);\n});\n```\n\nAnd that's it! You can test if everything is functional by:\n\n1. Opening your browser under `http://localhost:3000/api/notification/subscribe`.\n2. Using the file under `test/notification.rest` with VSCode's HTTP client.\n3. Checking if your browser includes an opening and a Ghost Notification.\n\nFor an HTTP webhook implementation, you will need a running Ghost instance. I have added a dockerfile to this repo for your convenience. You could also install Ghost yourself locally.\n\nTo start Ghost with the dockerfile, make sure you have Docker Engine or Docker Desktop with support for `docker compose` installed.\n\nFor a local installation and the first-time setup, you should follow the official Ghost installation guide.\n\nAfter your Ghost instance is up and running, open your browser at `http://localhost:2368/ghost`. You can set up your site however you like, give it a name, enter details, and so on.\n\nIn order to create a webhook, you must first create a custom integration. To do so, navigate into your site\u2019s settings and click on the \u201cIntegrations\u201d menu point. Click on \u201cAdd Webhook,\u201d enter a name, and click on \u201cCreate.\u201d\n\nInside the newly created integration, you can now configure a webhook to point at your application under `http://:/api/notification/article/create`*.\n\n\\* This URL might vary based on your local Ghost setup. For example, if you run Ghost in a container, you can find your machine's local IP using the terminal and `ifconfig` on Linux or `ipconfig` on Windows.\n\nAnd that\u2019s it. Now, whenever a post is published, its contents will be sent to our real-time endpoint. 
After being inserted into MongoDB, an event message will be sent to all connected clients.\n\n## Subscribe to Change Streams from your Ghost theme\n\nThere are a few ways to add real-time notifications to your Ghost theme. Going into detail is beyond the scope of this article. I have prepared two files, a `plugin.js` and a `plugin.css` file you can inject into the default Casper theme.\n\nTry this out by starting a local Ghost instance using the provided dockerfile.\n\nYou must then instruct your application to serve the JS and CSS assets. Add the following to your `index.ts` file:\n\n```javascript\n// ... other app.use hooks\napp.use(express.static('public'));\n// ... app.listen()\n```\n\nFinally, navigate to Code Injection and add the following two entries in the 'Site Header':\n\n```html\n\n```\n\n> The core piece of the plugin is the EventSource browser API. You will want to use it when integrating this application with other themes.\n\nWhen going back into your Ghost publication, you should now see a small bell icon on the upper right side.\n\n## Moving ahead\n\nIf you\u2019ve followed along, congratulations! You now have a working real-time notification service for your Ghost blog. And if you haven\u2019t, what are you waiting for? Sign up for a free account on MongoDB Atlas and start building. You can use the final branch of this repository to get started and explore the full power of MongoDB\u2019s toolkit.", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript", "Docker"], "pageDescription": "Learn how to work with MongoDB change streams and develop a server-sent event application that integrates with Ghost CMS.", "contentType": "Tutorial"}, "title": "Adding Real-Time Notifications to Ghost CMS Using MongoDB and Server-Sent Events", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/php/php-setup", "action": "created", "body": "# Getting Set Up to Run PHP with MongoDB\n\n Welcome to this quickstart guide for MongoDB and PHP. I know you're probably excited to get started writing code and building applications with PHP and MongoDB. We'll get there, I promise. Let's go through some necessary set-up first, however.\n\nThis guide is organized into a few sections over a few articles. This first article addresses the installation and configuration of your development environment. PHP is an integrated web development language. There are several components you typically use in conjunction with the PHP programming language. If you already have PHP installed and you simply want to get started with PHP and MongoDB, feel free to skip to\u00a0the next article in this series.\n\nLet's start with an overview of what we'll cover in this series.\n\n1. Prerequisites\n2. Installation\n3. Installing Apache\n4. Installing PHP\n5. Installing the PHP Extension\n6. Installing the MongoDB PHP Library\n7. Start a MongoDB Cluster on Atlas\n8. Securing Usernames and Passwords\n\nA brief note on PHP and Apache: Because PHP is primarily a web language \u2014 that is to say that it's built to work with a web server \u2014 we will spend some time at the beginning of this article ensuring that you have PHP and the Apache web server installed and configured properly. There are alternatives, but we're going to focus on PHP and Apache.\n\nPHP was developed and first released in 1994 by\u00a0Rasmus Lerdorf. While it has roots in the C language, PHP syntax looked much like Perl early on. 
One of the major reasons for its massive popularity was its simplicity and the dynamic, interpreted nature of its implementation.\n\n# Prerequisites \n\nYou'll need the following installed on your computer to follow along with this tutorial:\n\n* MacOS Catalina or later: You can run PHP on earlier versions but I'll be keeping to MacOS for this tutorial.\n* Homebrew Package Manager: The missing package manager for MacOS.\n* PECL: The repository for PHP Extensions.\n* A code editor of your choice: I recommend\u00a0Visual Studio Code.\n\n# Installation\n\nFirst, let's install the command line tools as these will be used by Homebrew:\n\n``` bash\nxcode-select --install\n```\n\nNext, we're going to use a package manager to install things. This ensures that our dependencies will be met. I prefer `Homebrew`, or `brew` for short. To begin using `brew`, open your `terminal app` and type:\n\n``` bash\n/bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)\"\n```\n\nThis leverages `curl` to pull down the latest installation scripts and binaries for `brew`.\n\nThe installation prompts are fairly straightforward. Enter your password where required to assume root privileges for the install. When it's complete, simply type the following to verify that `brew` is installed correctly:\n\n``` bash\nbrew --version\n```\n\nIf you experience trouble at this point and are unable to get `brew` running, visit the Homebrew installation docs.\n\nYou can also verify your homebrew installation using `brew doctor`. Confirm that any issues or error messages are resolved prior to moving forward. You may find warnings, and those can usually be safely ignored.\n\n## Installing Apache\n\nThe latest macOS 11.0 Big Sur comes with Apache 2.4 pre-installed but Apple removed some critical scripts, which makes it difficult to use.\n\nSo, to be sure we're all on the same page, let's install Apache 2.4 via Homebrew and then have it to run on the standard ports (80/443).\n\nWhen I was writing this tutorial, I wasted a lot of time trying to figure out what was happening with the pre-installed version. So, I think it's best if we install from scratch using Homebrew.\n\n``` bash\nsudo apachectl stop # stop the existing apache just to be safe\nsudo launchtl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist # Remove the configuration to run httpd daemons\n```\n\nNow, let's install the latest version of Apache:\n\n``` bash\nbrew install httpd\n```\n\nOnce installed, let's start up the service.\n\n``` bash\nbrew services start httpd\n```\n\nYou should now be able to open a web browser and visit `http://localhost:8080` and see something similar to the following:\n\nThe standard Apache web server doesn't have support for PHP built in. Therefore, we need to install PHP and the PHP Extension to recognize and interpret PHP files.\n\n## Installing PHP\n\n> If you've installed previous versions of PHP, I highly recommend that you clean things up by removing older versions. 
If you have previous projects that depend on these versions, you'll need to be careful, and back up your configurations and project files.\n\nHomebrew is a good way for MacOS users to install PHP.\n\n``` bash\nbrew install php\n```\n\nOnce this completes, you can test whether it's been installed properly by issuing the following command from your command-line prompt in the terminal.\n\n``` bash\nphp --version\n```\n\nYou should see something similar to this:\n\n``` bash\n$ php --version\nPHP 8.0.0 (cli) (built: Nov 30 2020 13:47:29) ( NTS )\nCopyright (c) The PHP Group\nZend Engine v4.0.0-dev, Copyright (c) Zend Technologies\nwith Zend OPcache v8.0.0, Copyright (c), by Zend Technologies\n```\n\n## Installing the PHP extension\n\nNow that we have `php` installed, we can configure Apache to use `PHP` to interpret our web content, translating our `php` commands instead of displaying the source code.\n\n> PECL (PHP Extension Community Library) is a repository for PHP Extensions, providing a directory of all known extensions and hosting facilities or the downloading and development of PHP extensions. `pecl` is the binary or command-line tool (installed by default with PHP) you can use to install and manage PHP extensions. We'll do that in this next section.\n\nInstall the PHP MongoDB extension before installing the PHP Library for MongoDB. It's worth noting that full MongoDB driver experience is provided by installing both the low-level extension (which integrates with our C driver) and high-level library, which is written in PHP.\n\nYou can install the extension using PECL on the command line:\n\n``` bash\npecl install mongodb\n```\n\nNext, we need to modify the main `php.ini` file to include the MongoDB extension. To locate your `php.ini` file, use the following command:\n``` bash\n$ php --ini\nConfiguration File (php.ini) Path: /usr/local/etc/php/8.3\n```\n\nTo install the extension, copy the following line and place it at the end of your `php.ini` file.\n\n``` bash\nextension=mongodb.so\n```\n\nAfter saving php.ini, restart the Apache service and to verify installation, you can use the following command.\n\n``` bash\nbrew services restart httpd\n\nphp -i | grep mongo\n```\n\nYou should see output similar to the following:\n\n``` bash\n$ php -i | grep mongo\nmongodb\nlibmongoc bundled version => 1.25.2\nlibmongoc SSL => enabled\nlibmongoc SSL library => OpenSSL\nlibmongoc crypto => enabled\nlibmongoc crypto library => libcrypto\nlibmongoc crypto system profile => disabled\nlibmongoc SASL => enabled\nlibmongoc SRV => enabled\nlibmongoc compression => enabled\nlibmongoc compression snappy => enabled\nlibmongoc compression zlib => enabled\nlibmongoc compression zstd => enabled\nlibmongocrypt bundled version => 1.8.2\nlibmongocrypt crypto => enabled\nlibmongocrypt crypto library => libcrypto\nmongodb.debug => no value => no value\n```\nYou are now ready to begin using PHP to manipulate and manage data in your MongoDB databases. Next, we'll focus on getting your MongoDB cluster prepared.\n\n## Troubleshooting your PHP configuration\n\nIf you are experiencing issues with installing the MongoDB extension, there are some tips to help you verify that everything is properly installed.\n\nFirst, you can check that Apache and PHP have been successfully installed by creating an info.php file at the root of your web directory. 
To locate the root web directory, use the following command:\n\n```\n$ brew info httpd\n==> httpd: stable 2.4.58 (bottled)\nApache HTTP server\nhttps://httpd.apache.org/\n/usr/local/Cellar/httpd/2.4.58 (1,663 files, 31.8MB) *\n Poured from bottle using the formulae.brew.sh API on 2023-11-09 at 18:19:19\nFrom: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/h/httpd.rb\nLicense: Apache-2.0\n==> Dependencies\nRequired: apr \u2714, apr-util \u2714, brotli \u2714, libnghttp2 \u2714, openssl@3 \u2714, pcre2 \u2714\n==> Caveats\nDocumentRoot is /usr/local/var/www\n```\n\nIn the file, add the following content:\n\n```\n\n```\n\nThen navigate to http://localhost:8080/info.php and you should see a blank page with just the Hello World text.\n\nNext, edit the info.php file content to: \n\n```\n\n```\n\nSave, and then refresh the info.php page. You should see a page with a large table of PHP information like this:\n\nIMPORTANT: In production servers, it\u2019s **unsafe to expose information displayed by phpinfo()** on a publicly accessible page\n\nThe information that we\u2019re interested could be in these places:\n\n* **\u201cConfiguration File (php.ini) Path\u201d** property shows where your PHP runtime is getting its php.ini file from. It can happen that the mongodb.so extension was added in the wrong php.ini file as there may be more than one.\n* **\u201cAdditional .ini files parsed\u201d** shows potential extra PHP configuration files that may impact your specific configuration. These files are in the directory listed by the \u201cScan this dir for additional .ini files\u201d section in the table.\n\nThere\u2019s also a whole \u201cmongodb\u201d table that looks like this:\n\nIts presence indicates that the MongoDB extension has been properly loaded and is functioning. You can also see its version number to make sure that\u2019s the one you intended to use.\n\nIf you don\u2019t see this section, it\u2019s likely the MongoDB extension failed to load. If that\u2019s the case, look for the \u201cerror_log\u201d property in the table to see where the PHP error log file is, as it may contain crucial clues. Make sure that \u201clog_errors\u201d is set to ON. Both are located in the \u201cCore\u201d PHP section.\n\nIf you are upgrading to a newer version of PHP, or have multiple versions installed, keep in mind that each version needs to have its own MongoDB extension and php.ini files. \n\n### Start a MongoDB Cluster on Atlas\n\nNow that you've got your local environment set up, it's time to create a MongoDB database to work with, and to load in some sample data you can explore and modify.\n\n> Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.\n\nIt will take a couple of minutes for your cluster to be provisioned, so while you're waiting, you can move on to the next step.\n\n### Set Up Your MongoDB Instance\n\nHopefully, your MongoDB cluster should have finished starting up now and has probably been running for a few minutes.\n\nThe following instructions were correct at the time of writing but may change, as we're always improving the Atlas user interface:\n\nIn the Atlas web interface, you should see a green button at the bottom-left of the screen, saying \"Get Started.\" If you click on it, it'll bring up a checklist of steps for getting your database set up. 
Click on each of the items in the list (including the \"Load Sample Data\" item\u2014we'll use this later to test the PHP library), and it'll help you through the steps to get set up.\n\nThe fastest way to get access to data is to load the sample datasets into your cluster right in the Atlas console. If you're brand new, the new user wizard will actually walk you through the process and prompt you to load these.\n\nIf you already created your cluster and want to go back to load the sample datasets, click the ellipsis (three dots) next to your cluster connection buttons (see below image) and then select `Load Sample Dataset`.\n\nNow, let's move on to setting the configuration necessary to access your data in the MongoDB Cluster. You will need to create a database user and configure your IP Address Access List.\n\n## Create a User\n\nFollowing the \"Get Started\" steps, create a user with \"Read and write access to any database.\" You can give it the username and password of your choice. Make a copy of them, because you'll need them in a minute. Use the \"autogenerate secure password\" button to ensure you have a long, random password which is also safe to paste into your connection string later.\n\n## Add Your IP Address to the Access List\n\nWhen deploying an app with sensitive data, you should only whitelist the IP address of the servers which need to connect to your database. To whitelist the IP address of your development machine, select \"Network Access,\" click the \"Add IP Address\" button, and then click \"Add Current IP Address\" and hit \"Confirm.\"\n\n## Connect to Your Database\n\nThe last step of the \"Get Started\" checklist is \"Connect to your Cluster.\" Select \"Connect your application\" and select \"PHP\" with a version of \"PHPLIB 1.8.\"\n\nClick the \"Copy\" button to copy the URL to your paste buffer. Save it to the same place you stored your username and password. Note that the URL has `` as a placeholder for your password. You should paste your password in here, replacing the whole placeholder, including the `<` and `>` characters.\n\nNow it's time to actually write some PHP code to connect to your MongoDB database! Up until now, we've only installed the supporting system components. Before we begin to connect to our database and use PHP to manipulate data, we need to install the MongoDB PHP Library.\n\nComposer is the recommended installation tool for the MongoDB library. Composer is a tool for dependency management in PHP. It allows you to declare the libraries your project depends on and it will manage (install/update) them for you.\n\nTo install `composer`, we can use Homebrew.\n\n``` bash\nbrew install composer\n```\n\n## Installing the MongoDB PHP Library\n\nOnce you have `composer` installed, you can move forward to installing the MongoDB Library.\n\nInstallation of the library should take place in the root directory of your project. Composer is not a package manager in the same sense as Yum or Apt are. Composer installs packages in a directory inside your project. 
By default, it does not install anything globally.\n\n``` bash\n$ composer require mongodb/mongodb\nUsing version ^1.8 for mongodb/mongodb\n./composer.json has been created\nRunning composer update mongodb/mongodb\nLoading composer repositories with package information\nUpdating dependencies\nLock file operations: 4 installs, 0 updates, 0 removals\n- Locking composer/package-versions-deprecated (1.11.99.1)\n- Locking jean85/pretty-package-versions (1.6.0)\n- Locking mongodb/mongodb (1.8.0)\n- Locking symfony/polyfill-php80 (v1.22.0)\nWriting lock file\nInstalling dependencies from lock file (including require-dev)\nPackage operations: 4 installs, 0 updates, 0 removals\n- Installing composer/package-versions-deprecated (1.11.99.1): Extracting archive\n- Installing symfony/polyfill-php80 (v1.22.0): Extracting archive\n- Installing jean85/pretty-package-versions (1.6.0): Extracting archive\n- Installing mongodb/mongodb (1.8.0): Extracting archive\nGenerating autoload files\ncomposer/package-versions-deprecated: Generating version class...\ncomposer/package-versions-deprecated: ...done generating version class\n2 packages you are using are looking for funding.\n```\n\nMake sure you're in the same directory as you were when you used `composer` above to install the library.\n\nIn your code editor, create a PHP file in your project directory called quickstart.php. If you're referencing the example, enter in the following code:\n\n``` php\n@myfirstcluster.zbcul.mongodb.net/dbname?retryWrites=true&w=majority');\n\n $customers = $client->selectCollection('sample_analytics', 'customers');\n $document = $customers->findOne('username' => 'wesley20']);\n\n var_dump($document);\n\n?>\n```\n\n`` and `` are the username and password you created in Atlas, and the cluster address is specific to the cluster you launched in Atlas.\n\nSave and close your `quickstart.php` program and run it from the command line:\n\n``` bash\n$ php quickstart.php\n```\n\nIf all goes well, you should see something similar to the following:\n\n``` javascript\n$ php quickstart.php\nobject(MongoDB\\Model\\BSONDocument)#12 (1) {\n[\"storage\":\"ArrayObject\":private]=>\n array(8) {\n [\"_id\"]=>\n object(MongoDB\\BSON\\ObjectId)#16 (1) {\n [\"oid\"]=>\n string(24) \"5ca4bbcea2dd94ee58162a72\"\n }\n [\"username\"]=>\n string(8) \"wesley20\"\n [\"name\"]=>\n string(13) \"James Sanchez\"\n [\"address\"]=>\n string(45) \"8681 Karen Roads Apt. 096 Lowehaven, IA 19798\"\n [\"birthdate\"]=>\n object(MongoDB\\BSON\\UTCDateTime)#15 (1) {\n [\"milliseconds\"]=>\n string(11) \"95789846000\"\n }\n [\"email\"]=>\n string(24) \"josephmacias@hotmail.com\"\n [\"accounts\"]=>\n object(MongoDB\\Model\\BSONArray)#14 (1) {\n [\"storage\":\"ArrayObject\":private]=>\n array(1) {\n [0]=>\n int(987709)\n }\n }\n [\"tier_and_details\"]=>\n object(MongoDB\\Model\\BSONDocument)#13 (1) {\n [\"storage\":\"ArrayObject\":private]=>\n array(0) {\n }\n }\n }\n}\n```\n\nYou just connected your PHP program to MongoDB and queried a single document from the `sample_analytics` database in your cluster! If you don't see this data, then you may not have successfully loaded sample data into your cluster. You may want to go back a couple of steps until running this command shows the document above.\n\n## Securing Usernames and Passwords\n\nStoring usernames and passwords in your code is **never** a good idea. So, let's take one more step to secure those a bit better. It's general practice to put these types of sensitive values into an environment file such as `.env`. 
The trick, then, will be to get your PHP code to read those values in. Fortunately, [Vance Lucas came up with a great solution called `phpdotenv`. To begin using Vance's solution, let's leverage `composer`.\n\n``` bash\n$ composer require vlucas/phpdotenv\n```\n\nNow that we have the library installed, let's create our `.env` file which contains our sensitive values. Open your favorite editor and create a file called `.env`, placing the following values in it. Be sure to replace `your user name` and `your password` with the actual values you created when you added a database user in Atlas.\n\n``` bash\nMDB_USER=\"your user name\"\nMDB_PASS=\"your password\"\n```\n\nNext, we need to modify our quickstart.php program to pull in the values using `phpdotenv`. Let's add a call to the library and modify our quickstart program to look like the following. Notice the changes on lines 5, 6, and 9.\n\n``` php\nload();\n\n$client = new MongoDB\\Client(\n 'mongodb+srv://'.$_ENV'MDB_USER'].':'.$_ENV['MDB_PASS'].'@tasktracker.zbcul.mongodb.net/sample_analytics?retryWrites=true&w=majority'\n);\n\n$customers = $client->selectCollection('sample_analytics', 'customers');\n$document = $customers->findOne(['username' => 'wesley20']);\n\nvar_dump($document);\n```\n\nNext, to ensure that you're not publishing your credentials into `git` or whatever source code repository you're using, be certain to add a .gitignore (or equivalent) to prevent storing this file in your repo. Here's my `.gitignore` file:\n\n``` bash\ncomposer.phar\n/vendor/\n.env\n```\n\nMy `.gitignore` includes files that are leveraged as part of our libraries\u2014these should not be stored in our project.\n\nShould you want to leverage my project files, please feel free to visit my [github repository, clone, fork, and share your feedback in the Community.\n\nThis quick start was intended to get you set up to use PHP with MongoDB. You should now be ready to move onto the next article in this series. Please feel free to contact me in the Community should you have any questions about this article, or anything related to MongoDB.\n\nPlease be sure to visit, star, fork, and clone the companion repository for this article.\n\n## References\n\n* MongoDB PHP Quickstart Source Code Repository\n* MongoDB PHP Driver Documentation provides thorough documentation describing how to use PHP with your MongoDB cluster.\n* MongoDB Query Document documentation details the full power available for querying MongoDB collections.", "format": "md", "metadata": {"tags": ["PHP", "MongoDB"], "pageDescription": "Getting Started with MongoDB and PHP - Part 1 - Setup", "contentType": "Quickstart"}, "title": "Getting Set Up to Run PHP with MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/storing-large-objects-and-files", "action": "created", "body": "# Storing Large Objects and Files in MongoDB\n\nLarge objects, or \"files\", are easily stored in MongoDB. It is no problem to store 100MB videos in the database.\n\nThis has a number of advantages over files stored in a file system. Unlike a file system, the database will have no problem dealing with millions of objects. Additionally, we get the power of the database when dealing with this data: we can do advanced queries to find a file, using indexes; we can also do neat things like replication of the entire file set.\n\nMongoDB stores objects in a binary format called BSON. BinData is a BSON data type for a binary byte array. 
However, MongoDB objects are typically limited to 16MB in size. To deal with this, files are \"chunked\" into multiple objects that are less than 255 KiB each. This has the added advantage of letting us efficiently retrieve a specific range of the given file.\n\nWhile we could write our own chunking code, a standard format for this chunking is predefined, called GridFS. GridFS support is included in all official MongoDB drivers and also in the mongofiles command line utility.\n\nA good way to do a quick test of this facility is to try out the mongofiles utility. See the MongoDB documentation for more information on GridFS.\n\n## More Information\n\n- GridFS Docs\n- Building MongoDB Applications with Binary Files Using GridFS: Part 1\n- Building MongoDB Applications with Binary Files Using GridFS: Part 2\n- MongoDB Architecture Guide", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Discover how to store large objects and files in MongoDB.", "contentType": "Tutorial"}, "title": "Storing Large Objects and Files in MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-summary", "action": "created", "body": "# A Summary of Schema Design Anti-Patterns and How to Spot Them\n\nWe've reached the final post in this series on MongoDB schema design anti-patterns. You're an expert now, right? We hope so. But don't worry\u2014even if you fall into the trap of accidentally implementing an anti-pattern, MongoDB Atlas can help you identify it.\n\n \n\n## The Anti-Patterns\n\nBelow is a brief description of each of the schema design anti-patterns we've covered in this series.\n\n- Massive arrays: storing massive, unbounded arrays in your documents.\n- Massive number of collections: storing a massive number of collections (especially if they are unused or unnecessary) in your database.\n- Unnecessary indexes: storing an index that is unnecessary because it is (1) rarely used if at all or (2) redundant because another compound index covers it.\n- Bloated documents: storing large amounts of data together in a document when that data is not frequently accessed together.\n- Separating data that is accessed together: separating data between different documents and collections that is frequently accessed together.\n- Case-insensitive queries without case-insensitive indexes: frequently executing a case-insensitive query without having a case-insensitive index to cover it.\n\n>\n>\n>:youtube]{vid=8CZs-0it9r4 list=PL4RCxklHWZ9uluV0YBxeuwpEa0FWdmCRy}\n>\n>If you'd like to learn more about each of the anti-patterns, check out this YouTube playlist.\n>\n>\n\n## Building Your Data Modeling Foundation\n\nNow that you know what **not** to do, let's talk about what you **should** do instead. Begin by learning the MongoDB schema design patterns. [Ken Alger and Daniel Coupal wrote a fantastic blog series that details each of the 12 patterns. Daniel also co-created a free MongoDB University Course that walks you through how to model your data.\n\nOnce you have built your data modeling foundation on schema design patterns and anti-patterns, carefully consider your use case:\n\n- What data will you need to store?\n- What data is likely to be accessed together?\n- What queries will be run most frequently?\n- What data is likely to grow at a rapid, unbounded pace?\n\nThe great thing about MongoDB is that it has a flexible schema. 
You have the power to rapidly make changes to your data model when you use MongoDB. If your initial data model turns out to be not so great or your application's requirements change, you can easily update your data model. And you can make those updates without any downtime! Check out the Schema Versioning Pattern for more details.\n\nIf and when you're ready to lock down part or all of your schema, you can add schema validation. Don't worry\u2014the schema validation is flexible too. You can configure it to throw warnings or errors. You can also choose if the validation should apply to all documents or just documents that already pass the schema validation rules. All of this flexibility gives you the ability to validate documents with different shapes in the same collection, helping you migrate your schema from one version to the next.\n\n## Spotting Anti-Patterns in Your Database\n\nHopefully, you'll keep all of the schema design patterns and anti-patterns top-of-mind while you're planning and modifying your database schema. But maybe that's wishful thinking. We all make mistakes.\n\n \n\nIf your database is hosted on MongoDB Atlas, you can get some help spotting anti-patterns. Navigate to the Performance Advisor (available in M10 clusters and above) or the Data Explorer (available in all clusters) and look for the Schema Anti-Patterns panel. These Schema Anti-Patterns panels will display a list of anti-patterns in your collections and provide pointers on how to fix the issues.\n\nTo learn more, check out Marissa Jasso's blog post that details this handy schema suggestion feature or watch her demo below.\n\n:youtube]{vid=XFJcboyDSRA}\n\n## Summary\n\nEvery use case is unique, so every schema will be unique. No formula exists for determining the \"right\" model for your data in MongoDB.\n\nGive yourself a solid data modeling foundation by learning the MongoDB schema design patterns and anti-patterns. Then begin modeling your data, carefully considering the details of your particular use case and leveraging the principles of the patterns and anti-patterns.\n\nSo, get pumped, have fun, and model some data!\n\n \n\n>When you're ready to build a schema in MongoDB, check out [MongoDB Atlas, MongoDB's fully managed database-as-a-service. Atlas is the easiest way to get started with MongoDB and has a generous, forever-free tier.\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- Blog Series: Building with Patterns: A Summary\n- MongoDB University Course M320: Data Modeling\n- MongoDB Docs: Schema Validation\n- Blog Post: JSON Schema Validation - Locking Down Your Model the Smart Way\n- Blog Post: Schema Suggestions in MongoDB Atlas: Years of Best Practices, Instantly Available To You\n- MongoDB Docs: Improve Your Schema\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Get a summary of the six MongoDB Schema Design Anti-Patterns. Plus, learn how MongoDB Atlas can help you spot the anti-patterns in your databases.", "contentType": "Article"}, "title": "A Summary of Schema Design Anti-Patterns and How to Spot Them", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/visualize-mongodb-atlas-database-audit-logs", "action": "created", "body": "# Visualize MongoDB Atlas Database Audit Logs\n\nMongoDB Atlas has advanced security capabilities, and audit logs are one of them. 
Simply put, enabling audit logs in an Atlas cluster allows you to track what happened in the database by whom and when. \n\nIn this blog post, I\u2019ll walk you through how you can visualize MongoDB Atlas Database Audit Logs with MongoDB Atlas Charts.\n\n## High level architecture\n\n1. In Atlas App Services Values, Atlas Admin API public and private keys and AWS API access key id and secret access have been defined. \n2. aws-sdk node package has been added as a dependency to Atlas Functions.\n3. Atlas Data Federation has been configured to query the data in a cloud storage - Amazon S3 of Microsoft Blob Storage bucket. \n4. Atlas Function retrieves both Atlas Admin API and AWS API credentials.\n5. Atlas Function calls the Atlas Admin API with the credentials and other relevant parameters (time interval for the audit logs) and fetches the compressed audit logs. \n6. Atlas Function uploads the compressed audit logs as a zip file into a cloud object storage bucket where Atlas has read access. \n7. Atlas Charts visualize the data in S3 through Atlas Data Federation. \n## Prerequisites\n\nThe following items must be completed before working on the steps.\n\n* Provision an Atlas cluster where the tier is at least M10. The reason for this is auditing is not supported by free (M0) and shared tier (M2, M5) clusters.\n* You need to set up database auditing in the Atlas cluster where you want to track activities.\n * Under the Security section on the left hand menu on the main dashboard, select Advanced. Then toggle Database Auditing and click Audit Filter Settings.\n* For the sake of simplicity, check All actions to be tracked in Audit Configuration as shown in the below screenshot. \n\n* If you don\u2019t have your own load generator to generate load in the database in order to visualize through MongoDB Charts later, you can review this load generator in the Github repository of this blog post.\n* Create an app in Atlas App Services that will implement our functions inside it. If you haven\u2019t created an app in Atlas App Services before, please follow this tutorial.\n* Create an AWS account along with the following credentials \u2014 AWS Access Key and AWS Secret Access Secret.\n* Set an AWS IAM Role that has privilege to write into the cloud object storage bucket.\n * Later, Atlas will assume this role to make write operations inside S3 bucket.\n## Step 1: configuring credentials\nAtlas Admin API allows you to automate your Atlas environment. With a REST client, you can execute a wide variety of management activities such as retrieving audit logs. \n\nIn order to utilize Atlas Admin API, we need to create keys and use these keys later in Atlas Functions. Follow the instructions to create an API key for your project. \n\n### Creating app services values and app services secrets\nAfter you\u2019ve successfully created public and private keys for the Atlas project, we can store the Atlas Admin API keys and AWS credentials in App Services Values and Secrets.\n\nApp Services Values and App Services Secrets are static, server-side constants that you can access or link to from other components of your application. \n\nIn the following part, we\u2019ll create four App Services Values and two App Services Secrets to manage both MongoDB Atlas Admin API and AWS credentials. In order to create App Services Values and Secrets, navigate your App Services app, and on the left hand menu, select **Values**. 
This will bring you to a page showing the secrets and values available in your App Services app.\n#### Setting up Atlas Admin API credentials\n\nIn this section, we\u2019ll create two App Services Values and one App Services Secrets to store Atlas Admin API Credentials.\n\n**Value 1: AtlasAdminAPIPublicKey**\n\nThis Atlas App Services value keeps the value of the public key of Atlas Admin API. Values should be wrapped in double quotes as shown in the following example.\n\n**Secret 1: AtlasAdminAPIPrivateKey**\n\nThis Atlas App Services Secret keeps the value of the private key of Atlas Admin API. You should not wrap the secret in quotation marks. \n\n**Value 2: AtlasAdminAPIPrivateKeyLinkToSecret**\n\nWe can\u2019t directly access secrets in our Atlas Functions. That\u2019s why we have to create a new value and link it to the secret containing our private key. \n\nUntil now, we\u2019ve defined necessary App Services Values and Atlas App Services Secrets to access Atlas Admin API from an App Services App.\n\nIn order to access our S3 bucket, we need to utilize AWS SDK. Therefore, we need to do a similar configuration for AWS SDK keys.\n\n### Setting up AWS credentials\n\nIn this section, we\u2019ll create two App Services Values and one App Services Secret to store AWS Credentials. Learn how to get your AWS Credentials.\n\n**Value 3: AWSAccessKeyId**\n\nThis Atlas App Services Value keeps the value of the access key id of AWS SDK.\n\n**Secret 2: AWSSecretAccessKey**\n\nThis Atlas App Services Secret keeps the value of the secret access key of AWS SDK.\n\n**Value 4: AWSSecretAccessKeyLinkToSecret**\n\nThis Atlas App Services Value keeps the link of Atlas App Services Secret that keeps the secret key of AWS SDK.\n\nAnd after you have all these values and secrets as shown below, you can deploy the changes to make it permanent.\n\n## Step 2: adding an external dependency \n\nAn external dependency is an external library that includes logic you'd rather not implement yourself, such as string parsing, convenience functions for array manipulations, and data structure or algorithm implementations. You can upload external dependencies from the npm repository to App Services and then import those libraries into your functions with a `require('external-module')` statement.\n\nIn order to work with AWS S3, we will add the official aws-sdk npm package.\n\nIn your App Services app, on the left-side menu, navigate to Functions. And then, navigate to the Dependencies pane in this page. \n\nIn order to work with AWS S3, we will add the official **aws-sdk** npm package.\n\nIn your App Services app, on the left-side menu, navigate to **Functions**. And then, navigate to the **Dependencies** pane in this page. \n\nClick **Add Dependency**.\n\nProvide **aws-sdk** as the package name and keep the package version empty. That will install the latest version of aws-sdk node package.\n\nNow, the **aws-sdk** package is ready to be used in our Atlas App Services App.\n\n## Step 3: configuring Atlas Data Federation to consume cloud object storage data through MongoDB Query Language (MQL)\n\nIn this tutorial, we\u2019ll not go through all the steps to create a federated database instance in Atlas. 
Please check out our Atlas Data Federation resources to go through all steps to create Atlas Data Federated Instance.\n\nAs an output of this step, we\u2019d expect a ready Federated Database Instance as shown below.\n\nI have already added the S3 bucket (the name of the bucket is **fuat-sungur-bucket**) that I own into this Federated Database Instance as a data source and I created the collection **auditlogscollection** inside the database **auditlogs** in this Federated Database Instance. \n\nNow, if I have the files in this S3 bucket (fuat-sungur-bucket), I\u2019ll be able to query it using the MongoDB aggregation framework or Atlas SQL. \n\n## Step 4: creating an Atlas function to retrieve credentials from Atlas App Services Values and Secrets\n\nLet\u2019s create an Atlas function, give it the name **RetrieveAndUploadAuditLogs**, and choose **System** for authentication.\n\nWe also provide the following piece of code in the **Function Editor** and **Run** the function. We\u2019ll see the credentials have been printed out in the console.\n\n```\nexports = async function(){\n const atlasAdminAPIPublicKey = context.values.get(\"AtlasAdminAPIPublicKey\");\n const atlasAdminAPIPrivateKey = context.values.get(\"AtlasAdminAPIPrivateKeyLinkToSecret\");\n \n const awsAccessKeyID = context.values.get(\"AWSAccessKeyID\")\n const awsSecretAccessKey = context.values.get(\"AWSSecretAccessKeyLinkToSecret\")\n \n console.log(`Atlas Public + Private Keys: ${atlasAdminAPIPublicKey}, ${atlasAdminAPIPrivateKey}`)\n console.log(`AWS Access Key ID + Secret Access Key: ${awsAccessKeyID}, ${awsSecretAccessKey}`)}\n\n```\n\n## Step 5: retrieving audit logs in the Atlas function \n\nWe now continue to enhance our existing Atlas function, **RetrieveAndUploadAuditLogs**. 
Now, we\u2019ll execute the HTTP/S request to retrieve audit logs into the Atlas function.\n\nFollowing piece of code generates an HTTP GET request,calls the relevant Atlas Admin API resource to retrieve audit logs within 1440 minutes, and converts this compressed audit data to the Buffer class in JavaScript.\n\n```\nexports = async function(){\n const atlasAdminAPIPublicKey = context.values.get(\"AtlasAdminAPIPublicKey\");\n const atlasAdminAPIPrivateKey = context.values.get(\"AtlasAdminAPIPrivateKeyLinkToSecret\");\n \n const awsAccessKeyID = context.values.get(\"AWSAccessKeyID\")\n const awsSecretAccessKey = context.values.get(\"AWSSecretAccessKeyLinkToSecret\")\n \n console.log(`Atlas Public + Private Keys: ${atlasAdminAPIPublicKey}, ${atlasAdminAPIPrivateKey}`)\n console.log(`AWS Access Key ID + Secret Access Key: ${awsAccessKeyID}, ${awsSecretAccessKey}`)\n \n //////////////////////////////////////////////////////////////////////////////////////////////////\n \n // Atlas Cluster information\n const groupId = '5ca48430014b76f34448bbcf';\n const host = \"exchangedata-shard-00-01.5tka5.mongodb.net\";\n const logType = \"mongodb-audit-log\"; // the other option is \"mongodb\" -> that allows you to download database logs\n // defining startDate and endDate of Audit Logs\n const endDate = new Date();\n const durationInMinutes = 20;\n const durationInMilliSeconds = durationInMinutes * 60 * 1000\n const startDate = new Date(endDate.getTime()-durationInMilliSeconds)\n \n const auditLogsArguments = {\n scheme: 'https',\n host: 'cloud.mongodb.com',\n path: `api/atlas/v1.0/groups/${groupId}/clusters/${host}/logs/${logType}.gz`,\n username: atlasAdminAPIPublicKey,\n password: atlasAdminAPIPrivateKey,\n headers: {'Content-Type': 'application/json'], 'Accept-Encoding': ['application/gzip']},\n digestAuth:true,\n query: {\n \"startDate\": [Math.round(startDate / 1000).toString()],\n \"endDate\": [Math.round(endDate / 1000).toString()]\n }\n };\n \n console.log(`Arguments:${JSON.stringify(auditLogsArguments)}`)\n \n const response = await context.http.get(auditLogsArguments)\n auditData = response.body;\n console.log(\"AuditData:\"+(auditData))\n console.log(\"JS Type:\" + typeof auditData)\n // convert it to base64 and then Buffer\n var bufferAuditData = Buffer.from(auditData.toBase64(),'base64')\n console.log(\"Buffered Audit Data\" + bufferAuditData)\n }\n\n```\n\n## Step 6: uploading audit data into the S3 bucket \n\nUntil now, in our Atlas function, we retrieved the audit logs based on the given interval, and now we\u2019ll upload this data into the S3 bucket as a zip file. \n\nFirstly, we import **aws-sdk** NodeJS library and then configure the credentials for AWS S3. We have already retrieved the AWS credentials from App Services Values and App Services Secrets and assigned those into function variables. \n\nAfter that, we configure S3-related parameters, bucket name, key (folder and filename), and body (actual payload that is our audit zip file stored in a Buffer Javascript data type). 
And finally, we run our upload command [(S3.putObject()).\n\nHere you can find the entire function code:\n\n```\nexports = async function(){\n const atlasAdminAPIPublicKey = context.values.get(\"AtlasAdminAPIPublicKey\");\n const atlasAdminAPIPrivateKey = context.values.get(\"AtlasAdminAPIPrivateKeyLinkToSecret\");\n \n const awsAccessKeyID = context.values.get(\"AWSAccessKeyID\")\n const awsSecretAccessKey = context.values.get(\"AWSSecretAccessKeyLinkToSecret\")\n \n console.log(`Atlas Public + Private Keys: ${atlasAdminAPIPublicKey}, ${atlasAdminAPIPrivateKey}`)\n console.log(`AWS Access Key ID + Secret Access Key: ${awsAccessKeyID}, ${awsSecretAccessKey}`)\n \n //////////////////////////////////////////////////////////////////////////////////////////////////\n \n // Atlas Cluster information\n const groupId = '5ca48430014b76f34448bbcf';\n const host = \"exchangedata-shard-00-01.5tka5.mongodb.net\";\n const logType = \"mongodb-audit-log\"; // the other option is \"mongodb\" -> that allows you to download database logs\n // defining startDate and endDate of Audit Logs\n const endDate = new Date();\n const durationInMinutes = 20;\n const durationInMilliSeconds = durationInMinutes * 60 * 1000\n const startDate = new Date(endDate.getTime()-durationInMilliSeconds)\n \n const auditLogsArguments = {\n scheme: 'https',\n host: 'cloud.mongodb.com',\n path: `api/atlas/v1.0/groups/${groupId}/clusters/${host}/logs/${logType}.gz`,\n username: atlasAdminAPIPublicKey,\n password: atlasAdminAPIPrivateKey,\n headers: {'Content-Type': 'application/json'], 'Accept-Encoding': ['application/gzip']},\n digestAuth:true,\n query: {\n \"startDate\": [Math.round(startDate / 1000).toString()],\n \"endDate\": [Math.round(endDate / 1000).toString()]\n }\n };\n \n console.log(`Arguments:${JSON.stringify(auditLogsArguments)}`)\n \n const response = await context.http.get(auditLogsArguments)\n auditData = response.body;\n console.log(\"AuditData:\"+(auditData))\n console.log(\"JS Type:\" + typeof auditData)\n // convert it to base64 and then Buffer\n var bufferAuditData = Buffer.from(auditData.toBase64(),'base64')\n console.log(\"Buffered Audit Data\" + bufferAuditData)\n // uploading into S3 \n \n const AWS = require('aws-sdk');\n \n // configure AWS credentials\n const config = {\n accessKeyId: awsAccessKeyID,\n secretAccessKey: awsSecretAccessKey\n };\n \n // configure S3 parameters \n const fileName= `auditlogs/auditlog-${new Date().getTime()}.gz`\n const S3params = {\n Bucket: \"fuat-sungur-bucket\",\n Key: fileName,\n Body: bufferAuditData\n };\n const S3 = new AWS.S3(config);\n // create the promise object\n const s3Promise = S3.putObject(S3params).promise();\n \n s3Promise.then(function(data) {\n console.log('Put Object Success');\n return { success: true }\n }).catch(function(err) {\n console.log(err);\n return { success: false, failure: err }\n });\n };\n\n```\n\nAfter we run the Atlas function, we can check out the S3 bucket and verify that the compressed audit file has been uploaded.\n\n![A folder in an S3 bucket where we store the audit logs\n\nYou can find the entire code of the Atlas functions in the dedicated Github repository.\n\n## Step 7: visualizing audit data in MongoDB Charts\n\nFirst, we need to add our Federated Database Instance that we created in Step 4 into our Charts application that we created in the prerequisites section as a data source. 
This Federated Database Instance allows us to run queries with the MongoDB aggregation framework on the data that is in the cloud object storage (that is S3, in this case). \n\nBefore doing anything with Atlas Charts, let\u2019s connect to Federated Database Instance and query the audit logs to make sure we have already established the data pipeline correctly. \n\n```bash \n$ mongosh \"mongodb://federateddatabaseinstance0-5tka5.a.query.mongodb.net/myFirstDatabase\" --tls --authenticationDatabase admin --username main_user\nEnter password: *********\nCurrent Mongosh Log ID: 63066b8cef5a94f0eb34f561\nConnecting to:\nmongodb://@federateddatabaseinstance0-5tka5.a.query.mongodb.net/myFirstDatabase?directConnection=true&tls=true&authSource=admin&appName=mongosh+1.5.4\nUsing MongoDB: 5.2.0\nUsing Mongosh: 1.5.4\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\nAtlasDataFederation myFirstDatabase> show dbs\nauditlogs 0 B\nAtlasDataFederation myFirstDatabase> use auditlogs\nswitched to db auditlogs\nAtlasDataFederation auditlogs> show collections\nAuditlogscollection\n\n```\n\nNow, we can get a record from the **auditlogscollection**.\n\n```bash\nAtlasDataFederation auditlogs> db.auditlogscollection.findOne()\n{\n atype: 'authCheck',\n ts: ISODate(\"2022-08-24T17:42:44.435Z\"),\n uuid: UUID(\"f5fb1c0a-399b-4308-b67e-732254828d17\"),\n local: { ip: '192.168.248.180', port: 27017 },\n remote: { ip: '192.168.248.180', port: 44072 },\n users: { user: 'mms-automation', db: 'admin' } ],\n roles: [\n { role: 'backup', db: 'admin' },\n { role: 'clusterAdmin', db: 'admin' },\n { role: 'dbAdminAnyDatabase', db: 'admin' },\n { role: 'readWriteAnyDatabase', db: 'admin' },\n { role: 'restore', db: 'admin' },\n { role: 'userAdminAnyDatabase', db: 'admin' }\n ],\n param: {\n command: 'find',\n ns: 'local.clustermanager',\n args: {\n find: 'clustermanager',\n filter: {},\n limit: Long(\"1\"),\n singleBatch: true,\n sort: {},\n lsid: { id: UUID(\"f49d243f-9c09-4a37-bd81-9ff5a2994f05\") },\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1661362964, i: 1 }),\n signature: {\n hash: Binary(Buffer.from(\"1168fff7240bc852e17c04e9b10ceb78c63cd398\", \"hex\"), 0),\n keyId: Long(\"7083075402444308485\")\n }\n },\n '$db': 'local',\n '$readPreference': { mode: 'primaryPreferred' }\n }\n },\n result: 0\n}\nAtlasDataFederation auditlogs>\n\n```\n\nLet\u2019s check the audit log of an update operation.\n\n```bash\nAtlasDataFederation auditlogs> db.auditlogscollection.findOne({atype:\"authCheck\", \"param.command\":\"update\", \"param.ns\": \"audit_test.orders\"})\n{\n atype: 'authCheck',\n ts: ISODate(\"2022-08-24T17:42:44.757Z\"),\n uuid: UUID(\"b7115a0a-c44c-4d6d-b007-a67d887eaea6\"),\n local: { ip: '192.168.248.180', port: 27017 },\n remote: { ip: '91.75.0.56', port: 22787 },\n users: [ { user: 'main_user', db: 'admin' } ],\n roles: [\n { role: 'atlasAdmin', db: 'admin' },\n { role: 'backup', db: 'admin' },\n { role: 'clusterMonitor', db: 'admin' },\n { role: 'dbAdminAnyDatabase', db: 'admin' },\n { role: 'enableSharding', db: 'admin' },\n { role: 'readWriteAnyDatabase', db: 'admin' }\n ],\n param: {\n command: 'update',\n ns: 'audit_test.orders',\n args: {\n update: 'orders',\n updates: [\n { q: { _id: 3757 }, u: { '$set': { location: '7186a' } } }\n ],\n ordered: true,\n writeConcern: { w: 'majority' },\n lsid: { id: UUID(\"a3ace80b-5907-4bf4-a917-be5944ec5a83\") },\n txnNumber: Long(\"509\"),\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1661362964, i: 2 }),\n signature: {\n hash: 
Binary(Buffer.from(\"1168fff7240bc852e17c04e9b10ceb78c63cd398\", \"hex\"), 0),\n keyId: Long(\"7083075402444308485\")\n }\n },\n '$db': 'audit_test'\n }\n },\n result: 0\n}\n\n```\n\nIf we are able to see some records, that\u2019s great. Now we can build our [dashboard in Atlas Charts.\n\nYou can import this dashboard into your Charts application. You might need to configure the data source name for the Charts for your convenience. In the given dashboard, the datasource was a collection with the name **auditlogscollection** in the database **auditlogs** in the Atlas Federated Database Instance with the name **FederatedDatabaseInstance0**, as shown below.\n\n## Caveats\n\nThe following topics can be considered for more effective and efficient audit log analysis.\n\n* You could retrieve logs from all the hosts rather than one node.\n * Therefore you can track the data modifications even in the case of primary node failures.\n* You might consider tracking only the relevant activities rather than tracking all the activities in the database. Tracking all the activities in the database might impact the performance.\n* You can schedules your triggers.\n * The Atlas function in the example runs once manually, but it can be scheduled via Atlas scheduled triggers. \n * Then, date intervals (start and end time) for the audit logs for each execution need to be calculated properly. \n* You could improve read efficiency.\n * You might consider partitioning in the S3 bucket by most frequently filtered fields. For further information, please check the docs on optimizing query performance.\n\n## Summary\n\nMongoDB Atlas is not only a database as a service platform to run your MongoDB workloads. It also provides a wide variety of components to build end-to-end solutions. In this blog post, we explored some of the capabilities of Atlas such as App Services, Atlas Charts, and Atlas Data Federation, and observed how we utilized them to build a real-world scenario. \n\nQuestions on this tutorial? Thoughts, comments? Join the conversation over at the MongoDB Community Forums!", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "In this blog post, I\u2019ll walk you through how you can visualize MongoDB Atlas Database Audit Logs with MongoDB Atlas Charts.", "contentType": "Tutorial"}, "title": "Visualize MongoDB Atlas Database Audit Logs", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-bloated-documents", "action": "created", "body": "# Bloated Documents\n\nWelcome (or welcome back!) to the MongoDB Schema Anti-Patterns series! We're halfway through the series. So far, we've discussed three anti-patterns: massive arrays, massive number of collections, and unnecessary indexes.\n\nToday, let's discuss document size. MongoDB has a 16 MB document size limit. But should you use all 16 MBs? Probably not. Let's find out why.\n\n>\n>\n>:youtube]{vid=mHeP5IbozDU start=389}\n>If your brain feels bloated from too much reading, sit back, relax, and watch this video.\n>\n>\n\n## Bloated Documents\n\nChances are pretty good that you want your queries to be blazing fast. MongoDB wants your queries to be blazing fast too.\n\nTo keep your queries running as quickly as possible, [WiredTiger (the default storage engine for MongoDB) keeps all of the indexes plus the documents that are accessed the most frequently in memory. We refer to these frequently accessed documents and index pages as the working set. 
When the working set fits in the RAM allotment, MongoDB can query from memory instead of from disk. Queries from memory are faster, so the goal is to keep your most popular documents small enough to fit in the RAM allotment.\n\nThe working set's RAM allotment is the larger of:\n\n- 50% of (RAM - 1 GB)\n- 256 MB.\n\nFor more information on the storage specifics, see Memory Use. If you're using MongoDB Atlas to host your database, see Atlas Sizing and Tier Selection: Memory.\n\nOne of the rules of thumb you'll hear frequently when discussing MongoDB schema design is *data that is accessed together should be stored together*. Note that it doesn't say *data that is related to each other should be stored together*.\n\nSometimes data that is related to each other isn't actually accessed together. You might have large, bloated documents that contain information that is related but not actually accessed together frequently. In that case, separate the information into smaller documents in separate collections and use references to connect those documents together.\n\nThe opposite of the Bloated Documents Anti-Pattern is the Subset Pattern. The Subset Pattern encourages the use of smaller documents that contain the most frequently accessed data. Check out this post on the Subset Pattern to learn more about how to successfully leverage this pattern.\n\n## Example\n\nLet's revisit Leslie's website for inspirational women that we discussed in the previous post. Leslie updates the home page to display a list of the names of 100 randomly selected inspirational women. When a user clicks on the name of an inspirational woman, they will be taken to a new page with all of the detailed biographical information about the woman they selected. Leslie fills the website with 4,704 inspirational women\u2014including herself.\n\n \n\nInitially, Leslie decides to create one collection named InspirationalWomen, and creates a document for each inspirational woman. The document contains all of the information for that woman. Below is a document she creates for Sally Ride.\n\n``` none\n// InspirationalWomen collection\n\n{\n \"_id\": {\n \"$oid\": \"5ec81cc5b3443e0e72314946\"\n },\n \"first_name\": \"Sally\",\n \"last_name\": \"Ride\",\n \"birthday\": 1951-05-26T00:00:00.000Z,\n \"occupation\": \"Astronaut\",\n \"quote\": \"I would like to be remembered as someone who was not afraid to do\n what she wanted to do, and as someone who took risks along the \n way in order to achieve her goals.\",\n \"hobbies\": \n \"Tennis\",\n \"Writing children's books\"\n ],\n \"bio\": \"Sally Ride is an inspirational figure who... \", \n ...\n}\n```\n\nLeslie notices that her home page is lagging. The home page is the most visited page on her site, and, if the page doesn't load quickly enough, visitors will abandon her site completely.\n\nLeslie is hosting her database on [MongoDB Atlas and is using an M10 dedicated cluster. With an M10, she gets 2 GB of RAM. She does some quick calculations and discovers that her working set needs to fit in 0.5 GB. (Remember that her working set can be up to 50% of (2 GB RAM - 1 GB) = 0.5 GB or 256 MB, whichever is larger).\n\nLeslie isn't sure if her working set will currently fit in 0.5 GB of RAM, so she navigates to the Atlas Data Explorer. She can see that her InspirationalWomen collection is 580.29 MB and her index size is 196 KB. 
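If you prefer the shell to the Data Explorer, roughly the same numbers can be pulled with a couple of collection helpers (both report sizes in bytes; the collection name matches the example above):\n\n``` none\n// Uncompressed size of the documents and total size of all indexes, in bytes\ndb.InspirationalWomen.stats().size\ndb.InspirationalWomen.totalIndexSize()\n```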
When she adds those two together, she can see that she has exceeded her 0.5 GB allotment.\n\nLeslie has two choices: she can restructure her data according to the Subset Pattern to remove the bloated documents, or she can move up to a M20 dedicated cluster, which has 4 GB of RAM. Leslie considers her options and decides that having the home page and the most popular inspirational women's documents load quickly is most important. She decides that having the less frequently viewed women's pages take slightly longer to load is fine.\n\nShe begins determining how to restructure her data to optimize for performance. The query on Leslie's homepage only needs to retrieve each woman's first name and last name. Having this information in the working set is crucial. The other information about each woman (including a lengthy bio) doesn't necessarily need to be in the working set.\n\nTo ensure her home page loads at a blazing fast pace, she decides to break up the information in her `InspirationalWomen` collection into two collections: `InspirationalWomen_Summary` and `InspirationalWomen_Details`. She creates a manual reference between the matching documents in the collections. Below are her new documents for Sally Ride.\n\n``` none\n// InspirationalWomen_Summary collection\n\n{\n \"_id\": {\n \"$oid\": \"5ee3b2a779448b306938af0f\" \n },\n \"inspirationalwomen_id\": {\n \"$oid\": \"5ec81cc5b3443e0e72314946\"\n },\n \"first_name\": \"Sally\",\n \"last_name\": \"Ride\"\n}\n```\n\n``` none\n// InspirationalWomen_Details collection\n\n{\n \"_id\": {\n \"$oid\": \"5ec81cc5b3443e0e72314946\"\n },\n \"first_name\": \"Sally\",\n \"last_name\": \"Ride\",\n \"birthday\": 1951-05-26T00:00:00.000Z,\n \"occupation\": \"Astronaut\",\n \"quote\": \"I would like to be remembered as someone who was not afraid to do\n what she wanted to do, and as someone who took risks along the \n way in order to achieve her goals.\",\n \"hobbies\": \n \"Tennis\",\n \"Writing children's books\"\n ],\n \"bio\": \"Sally Ride is an inspirational figure who... \", \n ...\n}\n```\n\nLeslie updates her query on the home page that retrieves each woman's first name and last name to use the `InspirationalWomen_Summary` collection. When a user selects a woman to learn more about, Leslie's website code will query for a document in the `InspirationalWomen_Details` collection using the id stored in the `inspirationalwomen_id` field.\n\nLeslie returns to Atlas and inspects the size of her databases and collections. She can see that the total index size for both collections is 276 KB (180 KB + 96 KB). She can also see that the size of her `InspirationalWomen_Summary` collection is about 455 KB. The sum of the indexes and this collection is about 731 KB, which is significantly less than her working set's RAM allocation of 0.5 GB. Because of this, many of the most popular documents from the `InspirationalWomen_Details` collection will also fit in the working set.\n\n![The Atlas Data Explorer shows the total index size for the entire database is 276 KB and the size of the InspirationalWomen_Summary collection is 454.78 KB.\n\nIn the example above, Leslie is duplicating all of the data from the `InspirationalWomen_Summary` collection in the `InspirationalWomen_Details` collection. You might be cringing at the idea of data duplication. Historically, data duplication has been frowned upon due to space constraints as well as the challenges of keeping the data updated in both collections. 
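If one of the duplicated fields ever does change, the application needs to write to both collections. A minimal sketch of what that could look like, using a hypothetical `womanId` variable and the field names from the example above:\n\n``` none\n// Keep the duplicated name fields in sync across both collections\nconst newName = { first_name: \"Sally\", last_name: \"Ride\" };\n\ndb.InspirationalWomen_Details.updateOne({ _id: womanId }, { $set: newName });\ndb.InspirationalWomen_Summary.updateOne({ inspirationalwomen_id: womanId }, { $set: newName });\n```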
Storage is relatively cheap, so we don't necessarily need to worry about that here. Additionally, the data that is duplicated is unlikely to change very often.\n\nIn most cases, you won't need to duplicate all of the information in more than one collection; you'll be able to store some of the information in one collection and the rest of the information in the other. It all depends on your use case and how you are using the data.\n\n## Summary\n\nBe sure that the indexes and the most frequently used documents fit in the RAM allocation for your database in order to get blazing fast queries. If your working set is exceeding the RAM allocation, check if your documents are bloated with extra information that you don't actually need in the working set. Separate frequently used data from infrequently used data in different collections to optimize your performance.\n\nCheck back soon for the next post in this schema design anti-patterns series!\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- MongoDB Docs: Reduce the Size of Large Documents\n- MongoDB Docs: 16 MB Document Size Limit\n- MongoDB Docs: Atlas Sizing and Tier Selection\n- MongoDB Docs: Model One-to-Many Relationships with Document References\n- MongoDB University M320: Data Modeling\n- Blog Series: Building with Patterns\n- Blog: The Subset Pattern\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Don't fall into the trap of this MongoDB Schema Design Anti-Pattern: Bloated Documents", "contentType": "Article"}, "title": "Bloated Documents", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/swift/update-on-monogodb-and-swift", "action": "created", "body": "# An Update on MongoDB's Ongoing Commitment to Swift\n\nRecently, Rachelle Palmer, Senior Product Manager for MongoDB Drivers sat down with Kaitlin Mahar, Lead Engineer for the `Swift` and `Rust` drivers to discuss some of the exciting developments in the `Swift` space. This article details that conversation.\n\nSwift is a well documented, easy to use and convenient language focused on iOS app development. As one of the top ten languages, it's more popular than Ruby, Go, or Rust, but keeps a fairly low profile - it's the underestimated backbone of millions of applications, from Airbnb to LinkedIn. With its simple syntax and strong performance profile, Swift is versatile enough to be used for many use cases and applications, and we've watched with great interest as the number of customers using Swift with MongoDB has grown.\n\nSwift can also be used for more than mobile, and we've seen a growing number of developers worldwide use Swift for backend development - software engineers can easily extend their skills with this concise, open source language. Kaitlin Mahar, and I decided we'd like to share more about MongoDB's commitment and involvement with the Swift community and how that influences some of the initiatives on our Swift driver roadmap.\n\n**Rachelle (RP):** I want to get right to the big announcement! Congratulations on joining the Swift Server Working Group (SSWG). What is the SSWG and what are some of the things that the group is thinking about right now?\n\n**Kaitlin (KM):** The SSWG is a steering team focused on promoting the use of Swift on the server. 
Joining the SSWG is an honor and a privilege for me personally - through my work on the driver and attendance at conferences like Serverside.swift, I've become increasingly involved in the community over the last couple of years and excited about the huge potential I see for Swift on the server, and being a part of the group is a great opportunity to get more deeply involved in this area. There are representatives in the group from Apple, Vapor (a popular Swift web framework), and Amazon. The group right now is primarily focused on guiding the development of a robust ecosystem of libraries and tools for server-side Swift. We run an incubation process for such projects, focused on providing overall technical direction, ensuring compatibility between libraries, and promoting best practices.\n\nTo that end, one thing we're thinking about right now is connection pooling. The ability to pool connections is very important for a number of server-side use cases, and right now developers who need a pool have to implement one from scratch. A generalized library would make it far easier to, for example, write a new database driver in Swift. Many SSWG members as well as the community at large are interested in such a project and I'm very excited to see where it goes.\n\nA number of other foundational libraries and tools are being worked on by the community as well, and we've been spending a lot of time thinking about and discussing those: for example, standardized APIs to support tracing, and a new library called Swift Service Lifecycle which helps server applications manage their startup and shutdown sequences.\n\n**RP:** When we talk with customers about using Swift for backend development, asking how they made that choice, it seems like the answers are fairly straightforward: with limited time and limited resources, it was the fastest way to get a web app running with a team of iOS developers. Do you feel like Swift is compelling to learn if you aren't an iOS developer though? Like, as a first language instead of Python?\n\n**KM:** Absolutely! My first language was Python, and I see a lot of things I love about Python in Swift: it's succinct and expressive, and it's easy to quickly pick up on the basics. At the same time, Swift has a really powerful and strict type system similar to what you might have used in compiled languages like Java before, which makes it far harder to introduce bugs in your code, and forces you to address edge cases (for example, null values) up front. People often say that Swift borrows the best parts of a number of other languages, and I agree with that. I think it is a great choice whether it is your first language or fifth language, regardless of if you're interested in iOS development or not.\n\n**RP:** Unquestionably, I think there's a great match here - we have MongoDB which is really easy and quick to get started with, and you have Swift which is a major win for developer productivity.\n\n**RP:** What's one of your favorite Swift features?\n\n**KM:** Enums with associated values are definitely up there for me. We use these in the driver a lot. They provide a very succinct way to express that particular values are present under certain conditions. For example, MongoDB allows users to specify either a string or a document as a \"hint\" about what index to use when executing a query. 
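(For reference, in the MongoDB shell those two forms of a hint look roughly like this; the collection and index name here are purely illustrative.)\n\n``` javascript\n// Hint by index name (a string)\ndb.items.find({ qty: { $gt: 10 } }).hint(\"qty_1\")\n\n// Hint by index specification (a document)\ndb.items.find({ qty: { $gt: 10 } }).hint({ qty: 1 })\n```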
Our API clearly communicates these choices to users by defining our `IndexHint` type like this:\n\n``` swift\npublic enum IndexHint {\n /// Specifies an index to use by its name.\n case indexName(String)\n /// Specifies an index to use by a specification `BSONDocument` containing the index key(s).\n case indexSpec(BSONDocument)\n}\n```\n\nThis requires the user to explicitly specify which version of a hint they want to use, and requires that they provide a value of the correct corresponding type along with it.\n\n**RP:** I'd just like to say that mine is the `MemoryLayout` type. Being able to see the memory footprint of a class that you've defined is really neat. We're also excited to announce that our top priority for the next 6-9 months is rewriting our driver to be purely in Swift. For everyone who is wondering, why wasn't our official Swift driver \"all Swift\" initially? And why change now?\n\n**KM:** We initially chose to wrap libmongoc as it provided a solid, reliable core and allowed us to deliver a great experience at the API level to the community sooner. The downside of that was of course, for every feature we want to do, the C driver had to implement it first sometimes this slowed down our release cadence. We also feel that writing driver internals in pure Swift will enhance performance, and give better memory safety - for example, we won't have to spend as much time thinking about properly freeing memory when we're done using it.\n\nIf you're interested in learning more about Swift, and how to use Swift for your development projects with MongoDB, here are some resources to check out:\n\n- Introduction to Server-Side Swift and Building a Command Line Executable\n- The Swift driver GitHub\n\nKaitlin will also be on an upcoming MongoDB Podcast episode to talk more about working with Swift so make sure you subscribe and stay tuned!\n\nIf you have questions about the Swift Driver, or just want to interact with other developers using this and other drivers, visit us in the MongoDB Community and be sure to introduce yourself and say hello!", "format": "md", "metadata": {"tags": ["Swift", "MongoDB"], "pageDescription": "An update on MongoDB's ongoing commitment to Swift", "contentType": "Article"}, "title": "An Update on MongoDB's Ongoing Commitment to Swift", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-database-and-frozen-objects", "action": "created", "body": "# Realm Core Database 6.0: A New Architecture and Frozen Objects\n\n## TL;DR\n\nRealm is an easy-to-use, offline-first database that lets mobile developers build better apps faster.\n\nSince the acquisition by MongoDB of Realm in May 2019, MongoDB has continued investing in building an updated version of our mobile database; culimating in the Realm Core Database 6.0.\n\nWe're thrilled to announce that it's now out of beta and released; we look forward to seeing what apps you build with Realm in production. The Realm Core Database 6.0 is now included in the 10.0 versions of each SDK: Kotlin/Java, Swift/Obj-C, Javascript on the node.js & React Native, as well as .NET support for a variety of UWP platforms and Xamarin. Take a look at the docs here.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. 
Get started now by build: Deploy Sample for Free!\n\n## A New Architecture\n\nThis effort lays a new foundation that further increases the stability of the Realm Database and allows us to quickly release new features in the future.\n\nWe've also increased performance with further optimizations still to come. We're most excited that:\n\n- The new architecture makes it faster to look up objects based on a primary key\n- iOS benchmarks show faster insertions, twice as fast sorting, and ten times faster deletions\n- Code simplifications yielded a ten percent reduction in total lines of code and a smaller library\n- Realm files are now much smaller when storing big blobs or large transactions\n\n## Frozen Objects\n\nWith this release, we're also thrilled to announce that Realm now supports Frozen Objects, making it easier to use Realm with reactive frameworks.\n\nSince our initial release of the Realm database, our concept of live,thread-confined objects, has been key to reducing the code that mobile developers need to write. Objects are the data, so when the local database is updated for a particular thread, all objects are automatically updated too. This design ensures you have a consistent view of your data and makes it extremely easy to hook the local database up to the UI. But it historically came at a cost for developers using reactive frameworks.\n\nNow, Frozen Objects allows you to work with immutable data without needing to extract it from the database. Frozen Objects act like immutable objects, meaning they won't change. They allow you to freeze elements of your data and hand it over to other threads and operations without throwing an exception - so it's simple to use Realm when working with platforms like RxJava & LiveData, RxSwift & Combine, and React.\n\n### Using Frozen Objects\n\nFreeze any 'Realm', 'RealmList', or 'RealmObject' and it will not be possible to modify them in any way. These Frozen Objects have none of the threading restrictions that live objects have; meaning they can be read and queried across all threads.\n\nAs an example, consider what it would look like if you were listening to changes on a live Realm using Kotlin or .NET, and then wanted to freeze query results before sending them on for further processing. 
If you're an iOS developer please check out our blog post on RealmSwift integration with Combine.\n\nThe Realm team is proud to say that we've heard you, and we hope that you give this feature a try to simplify your code and improve your development experience.\n\n::::tabs\n:::tab]{tabid=\".NET\"}\n``` csharp\nvar realm = Realm.GetInstance();\nvar frozenResults = realm.All()\n .Where(p => p.Name.StartsWith(\"Jane\"))\n .Freeze();\n\nAssert.IsTrue(results.IsFrozen());\nTask.Run(() =>\n{\n // it is now possible to read objects on another thread\n var person = frozenResults.First();\n Console.WriteLine($\"Person from a background thread: {person.Name}\");\n});\n```\n:::\n:::tab[]{tabid=\"Kotlin\"}\n``` Kotlin\nval realm: Realm = Realm.getDefaultInstance();\nval results: RealmResults = realm.where().beginsWith(\"name\", \"Jane\").findAllAsync()\nresults.addChangeListener { liveResults ->\n val frozenResults: RealmResults = liveResults.freeze()\n val t = Thread(Runnable {\n assertTrue(frozenResults.isFrozen())\n\n // It is now possible to read objects on another thread\n val person: Person = frozenResults.first()\n person.name\n })\n t.start()\n t.join()\n}\n```\n:::\n::::\n\nSince Java needs immutable objects, we also updated our Java support so all Realm Observables and Flowables now emit frozen objects by default. This means that it should be possible to use all operators available in RxJava without either using `Realm.copyFromRealm()` or running into an `IllegalStateException:`\n\n``` Java\nval realm = Realm.getDefaultInstance()\nval stream: Disposable = realm.where().beginsWith(\"name\", \"Jane\").findAllAsync().asFlowable()\n .flatMap { frozenPersons ->\n Flowable.fromIterable(frozenPersons)\n .filter { person -> person.age > 18 }\n .map { person -> PersonViewModel(person.name, person.age) }\n .toList()\n .toFlowable()\n }\n .subscribeOn(Schedulers.computation())\n .observeOn(AndroidSchedulers.mainThread)\n .subscribe { updateUI(it) }\n }\n```\n\nIf you have feedback please post it in Github and the Realm team will check it out!\n\n- [RealmJS\n- RealmSwift\n- RealmJava\n- RealmDotNet\n\n## A Strong Foundation for the Future\n\nThe Realm Core Database 6.0 now released with Frozen Objects and we're\nnow focused on adding new features; such as new types, new SDKs,\nandunlocking new use cases for our developers.\n\nWant to Ask a Question? Visit our Forums.\n\nWant to make a feature request? Visit our Feedback Portal.\n\nWant to be notified of upcoming Realm events such as our iOS Hackathon in November2020? Visit our Global Community Page.\n\n>Safe Harbor\nThe development, release, and timing of any features or functionality described for our products remains at our sole discretion. 
This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision nor is this a commitment, promise or legal obligation to deliver any material, code, or functionality.", "format": "md", "metadata": {"tags": ["Realm"], "pageDescription": "Explaining Realm Core Database 6.0 and Frozen Objects", "contentType": "News & Announcements"}, "title": "Realm Core Database 6.0: A New Architecture and Frozen Objects", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/build-ci-cd-pipelines-realm-apps-github-actions", "action": "created", "body": "# How to Build CI/CD Pipelines for MongoDB Realm Apps Using GitHub Actions\n\n> As of June 2022, the functionality previously known as MongoDB Realm is now named Atlas App Services. Atlas App Services refers to the cloud services that simplify building applications with Atlas \u2013 Atlas Data API, Atlas GraphQL API, Atlas Triggers, and Atlas Device Sync. Realm will continue to be used to refer to the client-side database and SDKs.\n\nBuilding Continuous Integration/Continuous Deployment (CI/CD) pipelines can be challenging. You have to map your team's ideal pipeline, identify and fix any gaps in your team's test automation, and then actually build the pipeline. Once you put in the work to craft a pipeline, you'll reap a variety of benefits like...\n\n* Faster releases, which means you can get value to your end users quicker)\n* Smaller releases, which can you help you find bugs faster\n* Fewer manual tasks, which can reduce manual errors in things like testing and deployment.\n\nAs Tom Haverford from the incredible TV show Parks and Recreation wisely said, \"Sometimes you gotta **work a little**, so you can **ball a lot**.\" (View the entire scene here. But don't get too sucked into the silliness that you forget to return to this article \ud83d\ude09.)\n\nIn this article, I'll walk you through how I crafted a CI/CD pipeline for a mobile app built with MongoDB Realm. I'll provide strategies as well as code you can reuse and modify, so you can put in just **a little bit of work** to craft a pipeline for your app and **ball a lot**.\n\nThis article covers the following topics:\n\n- All About the Inventory App\n - What the App Does\n - The System Architecture\n- All About the Pipeline\n - Pipeline Implementation Using GitHub Actions\n - MongoDB Atlas Project Configuration\n - What Happens in Each Stage of the Pipeline\n- Building Your Pipeline\n - Map Your Pipeline\n - Implement Your Pipeline\n- Summary \n\n> More of a video person? No worries. Check out the recording below of a talk I gave at MongoDB.live 2021 that covers the exact same content this article does. :youtube]{vid=-JcEa1snwVQ}\n\n## All About the Inventory App\n\nI recently created a CI/CD pipeline for an iOS app that manages stores' inventories. In this section, I'll walk you through what the app does and how it was architected. This information will help you understand why I built my CI/CD pipeline the way that I did.\n\n### What the App Does\n\nThe Inventory App is a fairly simple iOS app that allows users to manage the online record of their physical stores' inventories. 
The app allows users to take the following actions:\n\n* Create an account\n* Login and logout\n* Add an item to the inventory\n* Adjust item quantities and prices\n\nIf you'd like to try the app for yourself, you can get a copy of the code in the GitHub repo: [mongodb-developer/realm-demos.\n\n### The System Architecture\n\nThe system has three major components:\n\n* **The Inventory App** is the iOS app that will be installed on the mobile device. The local Realm database is embedded in the Inventory App and stores a local copy of the inventory data.\n* **The Realm App** is the central MongoDB Realm backend instance of the mobile application. In this case, the Realm App utilizes Realm features like authentication, rules, schema, GraphQL API, and Sync. The Inventory App is connected to the Realm App. **Note**: The Inventory App and the Realm App are NOT the same thing; they have two different code bases.\n* **The Atlas Database** stores the inventory data. Atlas is MongoDB's fully managed Database-as-a-Service. Realm Sync handles keeping the data synced between Atlas and the mobile apps.\n\nAs you're building a CI/CD pipeline for a mobile app with an associated Realm App and Atlas database, you'll need to take into consideration how you're going to build and deploy both the mobile app and the Realm App. You'll also need to figure out how you're going to indicate which database the Realm App should be syncing to. Don't worry, I'll share strategies for how to do all of this in the sections below.\n\nOkay, that's enough boring stuff. Let's get to my favorite part: the CI/CD pipeline!\n\n## All About the Pipeline\n\nNow that you know what the Inventory App does and how it was architected, let's dive into the details of the CI/CD pipeline for this app. You can use this pipeline as a basis for your pipeline and tweak it to fit your team's process.\n\nMy pipeline has three main stages:\n\n* **Development**: In the Development Stage, developers do their development work like creating new features and fixing bugs.\n* **Staging**: In the Staging Stage, the team simulates the production environment to make sure everything works together as intended. The Staging Stage could also be known as QA (Quality Assurance), Testing, or Pre-Production.\n* **Production**: The Production Stage is the final stage where the end users have access to your apps.\n\n### Pipeline Implementation Using GitHub Actions\n\nA variety of tools exist to help teams implement CI/CD pipelines. I chose to use GitHub Actions, because it works well with GitHub (which is where my code is already) and it has a free plan for public repositories (and I like free things!). GitHub Actions allows you to automate workflows. As you'll see in later sections, I implemented my CI/CD pipeline using a workflow. Each workflow can contain one or more jobs, and each job contains one or more steps.\n\nThe complete workflow is available in build.yml in the Inventory App's GitHub repository.\n\n### MongoDB Atlas Project Configuration\n\nThroughout the pipeline, the workflow will deploy to new or existing Realm Apps that are associated with new or existing databases based on the pipeline stage. I decided to create four Atlas projects to support my pipeline:\n* **Inventory Demo - Feature Development.** This project contains the Realm Apps associated with every new feature. 
Each Realm App syncs with a database that has a custom name based on the feature (for example,\u00a0a feature branch named `beta6-improvements` would have a database named `InventoryDemo-beta6-improvements`). All of the databases for feature branches are stored in this project's Atlas cluster. The Realm Apps and databases for feature branches are deleted after the feature work is completed.\n* **Inventory Demo - Pull Requests.**\u00a0This project contains the Realm Apps that are created for every pull request. Each Realm App syncs with a database that has a custom name based on the time the workflow runs (for example,\u00a0`InventoryDemo-2021-06-07_1623089424`). All of the databases associated with pull requests are stored in this project's Atlas cluster.\u00a0 \n\n As part of my pipeline, I chose to delete the Realm App and associated database at the end of the workflow that was triggered by the pull request.\u00a0Another option would be to skip deleting the Realm App and associated database when the tests in the workflow fail, so that a developer could manually investigate the source of the failure.\n* **Inventory Demo - Staging.** This project contains the Realm App for Staging. The Realm App syncs with a database used only for Staging.\u00a0The Staging database is the only database in this project's cluster. The Realm App and database are never deleted, so the team can always look in the same consistent locations for the Staging app and its data.\n* **Inventory Demo - Production.**\u00a0This project contains the Realm App for Production.\u00a0The Realm App syncs with a database used only for Production.\u00a0The Production database is the only database in this project's cluster.\u00a0The Realm App and database are never deleted.\n\n> This app requires only a single database. If your app uses more than one database, the principles described above would still hold true.\n\n### What Happens in Each Stage of the Pipeline\n\nI've been assigned a ticket to change the color of the **Log In** button in the iOS app from blue to pink. In the following sections, I'll walk you through what happens in each stage of the pipeline and how my code change is moved from one stage to the next.\n\nAll of the stages and transitions below use the same GitHub Actions workflow. The workflow has conditions that modify which steps are taken. I'll walk you through what steps are run in each workflow execution in the sections below. The workflow uses environment variables and secrets to store values. Visit the realm-demos GitHub repo to see the complete workflow source code.\n\nDevelopment\n-----------\n\nThe Development stage is where I'll do my work to update the button color. In the subsections below, I'll walk you through how I do my work and trigger a workflow.\n\nUpdating the Inventory App\n--------------------------\n\nSince I want to update my iOS app code, I'll begin by opening a copy of my app's code in Xcode. I'll change the color of the **Log In** button there. I'm a good developer \ud83d\ude09, so I'll run the automated tests to make sure I didn't break anything. The Inventory App has automated unit and UI tests that were implemented using XCTest. 
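Running those tests locally uses the same project and scheme the CI workflow calls later; the exact invocation can vary, but it looks something like this (the simulator destination is just one example):\n\n```bash\nxcodebuild -project InventoryDemo.xcodeproj -scheme \"ci\" -sdk iphonesimulator -destination 'platform=iOS Simulator,name=iPhone 12,OS=14.4' test\n```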
I'll also kick off a simulator, so I can manually test that the new button color looks fabulous.\n\nUpdating the Realm App\n----------------------\n\nIf I wanted to make an update to the Realm App code, I could either:\n\n* work in the cloud in the Realm web interface or\n* work locally in a code editor like Visual Studio Code.\n\nIf I choose to work in the Realm web interface, I can make changes and deploy them. The Realm web interface was recently updated to allow developers to commit changes they make there to their GitHub repositories. This means changes made in the web interface won't get lost when changes are deployed through other methods (like through the Realm Command Line Interface or automated GitHub deployments).\n\nIf I choose to work with my Realm App code locally, I could make my code changes and then run unit tests. If I want to run integration tests or do some manual testing, I need to deploy the Realm App. One option is to use the App Services Command Line Interface (App Services CLI) to deploy with a command like `appservices \n push`. Another option is to automate the deployment using a GitHub Actions workflow.\n\nI've chosen to automate the deployment using a GitHub Actions workflow, which I'll describe in the following section.\n\nKicking Off the Workflow\n------------------------\n\nAs I am working locally to make changes to both the Inventory App and the Realm App, I can commit the changes to a new feature branch in my GitHub repository.\n\nWhen I am ready to deploy my Realm App and run all of my automated tests, I will push the commits to my repository. The push will trigger the workflow. \n\nThe workflow runs the `build` job, which runs the following steps:\n\n 1. **Set up job.** This step is created by GitHub Actions to prepare the workflow.\n 2. **Run actions/checkout@v2.** Uses the Checkout V2 Action to check out the repository so the workflow can access the code.\n 3. **Store current time in variable.** Stores the current time in an environment variable named `CURRENT_TIME`. This variable is used later in the workflow.\n\n ```\n echo \"CURRENT_TIME=$(date +'%Y-%m-%d_%s')\" >> $GITHUB_ENV\n ```\n\n 4. **Is this a push to a feature branch?** If this is a push to a feature branch (which it is), do the following:\n * Create a new environment variable to store the name of the feature branch.\n ```\n ref=$(echo ${{ github.ref }})\n branch=$(echo \"${ref##*/}\")\n echo \"FEATURE_BRANCH=$branch\" >> $GITHUB_ENV\n ```\n * Check the `GitHubActionsMetadata` Atlas database to see if a Realm App already exists for this feature branch. If a Realm App exists, store the Realm App ID in an environment variable. Note: Accessing the Atlas database requires the IP address of the GitHub Actions virtual machine to be in the Atlas IP Access List.\n ```\n output=$(mongo \"mongodb+srv://${{ secrets.ATLAS_URI_FEATURE_BRANCHES }}/GitHubActionsMetadata\" --username ${{ secrets.ATLAS_USERNAME_FEATURE_BRANCHES }} --password ${{ secrets.ATLAS_PASSWORD_FEATURE_BRANCHES }} --eval \"db.metadata.findOne({'branch': '$branch'})\")\n \n if [ $output == *null ]]; then\n echo \"No Realm App found for this branch. A new app will be pushed later in this workflow\"\n else\n echo \"A Realm App was found for this branch. Updates will be pushed to the existing app later in this workflow\"\n app_id=$(echo $output | sed 's/^.*realm_app_id\" : \"\\([^\"]*\\).*/\\1/')\n echo \"REALM_APP_ID=$app_id\" >> $GITHUB_ENV\n fi\n ```\n\n * Update the `databaseName` in the `development.json` [environment file. 
Set the database name to contain the branch name to ensure it's unique.\n\n ```\n cd inventory/export/sync/environments\n printf '{\\n \"values\": {\"databaseName\": \"InventoryDemo-%s\"}\\n}' \"$branch\" > development.json \n ```\n * Indicate that the Realm App should use the `development` environment by updating `realm_config.json`.\n```\n cd ..\nsed -i txt 's/{/{ \"environment\": \"development\",/' realm_config.json\n```\n 5. **Install the App Services CLI and authenticate.** This step installs the App Services CLI and authenticates using the API keys that are stored as GitHub secrets.\n```bash\nnpm install -g atlas-app-services-cli\nappservices login --api-key=\"${{ secrets.REALM_API_PUBLIC_KEY }}\" --private-api-key=\"${{ secrets.REALM_API_PRIVATE_KEY }}\" --realm-url https://realm.mongodb.com --atlas-url https://cloud.mongodb.com\n```\n 6. **Create a new Realm App for feature branches where the Realm App does not yet exist.** This step has three primary pieces:\n\n 7. Push the Realm App to the Atlas project specifically for feature\n branches.\n```\ncd inventory/export/sync\nappservices push -y --project 609ea554944fe545460529a1\n```\n 8. Retrieve and store the Realm App ID from the output of `appservices app describe`.\n```\noutput=$(appservices app describe)\napp_id=$(echo $output | sed 's/^.*client_app_id\": \"\\(^\"]*\\).*/\\1/')\necho \"REALM_APP_ID=$app_id\" >> $GITHUB_ENV\n```\n 9. Store the Realm App ID in the GitHubActionsMetadata database. Note: Accessing the Atlas database requires the IP address of the GitHub Actions virtual machine to be in the [Atlas IP Access List.\n ```\n mongo \"mongodb+srv://${{ secrets.ATLAS_URI_FEATURE_BRANCHES }}/GitHubActionsMetadata\" --username ${{ secrets.ATLAS_USERNAME_FEATURE_BRANCHES }} --password ${{ secrets.ATLAS_PASSWORD_FEATURE_BRANCHES }} --eval \"db.metadata.insertOne({'branch': '${{ env.FEATURE_BRANCH}}', 'realm_app_id': '$app_id'})\"\n ```\n 10. **Create `realm-app-id.txt` that stores the Realm App ID.** This file will be stored in the mobile app code. The sole purpose of this file is to tell the mobile app to which Realm App it should connect.\n```\necho \"${{ env.REALM_APP_ID }}\" > $PWD/inventory/clients/ios-swiftui/InventoryDemo/realm-app-id.txt\n```\n\n 11. **Build mobile app and run tests.** This step builds the mobile app for testing and then runs the tests using a variety of simulators. If you have integration tests, you could also choose to checkout previous releases of the mobile app and run the integration tests against the current version of the Realm App to ensure backwards compatibility.\n\n 12. Navigate to the mobile app's directory.\n```\ncd inventory/clients/ios-swiftui/InventoryDemo\n```\n 13. Build the mobile app for testing.\n```\nxcodebuild -project InventoryDemo.xcodeproj -scheme \"ci\" -sdk iphonesimulator -destination 'platform=iOS Simulator,name=iPhone 12 Pro Max,OS=14.4' -derivedDataPath './output' build-for-testing\n```\n 14. Define the simulators that will be used for testing.\n```\niPhone12Pro='platform=iOS Simulator,name=iPhone 12 Pro Max,OS=14.4'\niPhone12='platform=iOS Simulator,name=iPhone 12,OS=14.4'\niPadPro4='platform=iOS Simulator,name=iPad Pro (12.9-inch) (4th generation)'\n``` \n 15. Run the tests on a variety of simulators. 
Optionally, you could put these in separate jobs to run in parallel.\n```\nxcodebuild -project InventoryDemo.xcodeproj -scheme \"ci\" -sdk iphonesimulator -destination \"$iPhone12Pro\" -derivedDataPath './output' test-without-building\nxcodebuild -project InventoryDemo.xcodeproj -scheme \"ci\" -sdk iphonesimulator -destination \"$iPhone12\" -derivedDataPath './output' test-without-building \nxcodebuild -project InventoryDemo.xcodeproj -scheme \"ci\" -sdk iphonesimulator -destination \"$iPadPro4\" -derivedDataPath './output' test-without-building\n```\n\n 16. **Post Run actions/checkout@v2.** This cleanup step runs automatically when you use the Checkout V2\n Action.\n 17. **Complete job.** This step is created by GitHub Actions to complete the workflow.\n\nThe nice thing here is that simply by pushing my code changes to my feature branch, my Realm App is deployed and the tests are run. When I am finished making updates to the code, I can feel confident that a Staging build will be successful.\n\nMoving from Development to Staging\n----------------------------------\n\nNow that I'm done working on my code changes, I'm ready to move to Staging. I can kick off this process by creating a GitHub pull request. In the pull request, I'll request to merge my code from my feature branch to the `staging` branch. When I submit the pull request, GitHub will automatically kick off another workflow for me. \n\nThe workflow runs the following steps.\n\n1. **Set up job.** This step is created by GitHub Actions to prepare the workflow.\n2. **Run actions/checkout@v2.** Uses the Checkout V2 Action to check out the repository so the workflow can access the code.\n3. **Store current time in variable.** See the section above for more information on this step.\n4. **Set environment variables for all other runs.** This step sets the necessary environment variables for pull requests where a new Realm App and database will be created for *each* pull request. This step has three primary pieces.\n * Create a new environment variable named `IS_DYNAMICALLY_GENERATED_APP` to indicate this is a dynamically generated app that should be deleted later in this workflow.\n```\necho \"IS_DYNAMICALLY_GENERATED_APP=true\" >> $GITHUB_ENV\n```\n* Update the `databaseName` in the `testing.json` environment file. Set the database name to contain the current time to ensure it's unique.\n```\ncd inventory/export/sync/environments\nprintf '{\\n \"values\": {\"databaseName\": \"InventoryDemo-%s\"}\\n}' \"${{ env.CURRENT_TIME }}\" > testing.json \n```\n * Indicate that the Realm App should use the `testing` environment by updating `realm_config.json`.\n```\ncd ..\nsed -i txt 's/{/{ \"environment\": \"testing\",/' realm_config.json \n```\n5. **Install the App Services CLI and authenticate.** See the section above for more information on this step.\n6. **Create a new Realm App for pull requests.** Since this is a pull request, the workflow creates a new Realm App just for this workflow. 
The Realm App will be deleted at the end of the workflow.\n * Push to the Atlas project specifically for pull requests.\n```\n cd inventory/export/sync\nappservices push -y --project 609ea554944fe545460529a1\n```\n* Retrieve and store the Realm App ID from the output of `appservices app describe`.\n```\noutput=$(appservices app describe)\napp_id=$(echo $output | sed 's/^.*client_app_id\": \"\\(^\"]*\\).*/\\1/')\necho \"REALM_APP_ID=$app_id\" >> $GITHUB_ENV\n```\n* Store the Realm App ID in the `GitHubActionsMetadata` database.\n> Accessing the Atlas database requires the IP address of the GitHub Actions virtual machine to be in the [Atlas IP Access List.\n```\nmongo \"mongodb+srv://${{ secrets.ATLAS_URI_FEATURE_BRANCHES }}/GitHubActionsMetadata\" --username ${{ secrets.ATLAS_USERNAME_FEATURE_BRANCHES }} --password ${{ secrets.ATLAS_PASSWORD_FEATURE_BRANCHES }} --eval \"db.metadata.insertOne({'branch': '${{ env.FEATURE_BRANCH}}', 'realm_app_id': '$app_id'})\"\n```\n7. **Create `realm-app-id.txt` that stores the Realm App ID.** See the section above for more information on this step.\n8. **Build mobile app and run tests.** See the section above for more information on this step.\n9. **Delete dynamically generated Realm App.** The workflow created a Realm App just for this pull request in an earlier step. This step deletes that Realm App.\n```\nappservices app delete --app ${{ env.REALM_APP_ID }}\n```\n10. **Delete dynamically generated database.** The workflow also created a database just for this pull request in an earlier step. This step deletes that database.\n```\nmongo \"mongodb+srv://${{ secrets.ATLAS_URI_PULL_REQUESTS }}/InventoryDemo-${{ env.CURRENT_TIME }}\" --username ${{ secrets.ATLAS_USERNAME_PULL_REQUESTS }} --password ${{ secrets.ATLAS_PASSWORD_PULL_REQUESTS }} --eval \"db.dropDatabase()\"\n```\n11. **Post Run actions/checkout@v2.** This cleanup step runs automatically when you use the Checkout V2 Action.\n12. **Complete job.** This step is created by GitHub Actions to complete the workflow.\n\nThe results of the workflow are included in the pull request.\n\nMy teammate will review the pull request. They will likely review the code and double check that the workflow passed. We might go back and forth with suggestions and updates until we both agree the code is ready to be merged into the `staging` branch.\n\nWhen the code is ready, my teammate will approve the pull request and then click the button to squash and merge the commits. My teammate may also choose to delete the branch as it is no longer needed. \n\nDeleting the branch triggers the `delete-feature-branch-artifacts` workflow. This workflow is different from all of the workflows I will discuss in this article. This workflow's job is to delete the artifacts that were associated with the branch. \n\nThe `delete-feature-branch-artifacts` workflow runs the following steps.\n\n1. **Set up job.** This step is created by GitHub Actions to prepare the workflow.\n2. **Install the App Services CLI and authenticate.** See the section above for more information on this step.\n3. **Store the name of the branch.** This step retrieves the name of the branch that was just deleted and stores it in an environment variable named `FEATURE_BRANCH`.\n```\nref=$(echo ${{ github.event.ref }})\nbranch=$(echo \"${ref##*/}\")\necho \"FEATURE_BRANCH=$branch\" >> $GITHUB_ENV\n```\n\n4. **Delete the Realm App associated with the branch.** This step queries the `GitHubActionsMetadata` database for the ID of the Realm App associated with this branch. 
Then it deletes the Realm App, and deletes the information in the `GitHubActionsMetadata` database. Note: Accessing the Atlas database requires the IP address of the GitHub Actions virtual machine to be in the Atlas IP Access List.\n\n```\n# Get the Realm App associated with this branch\noutput=$(mongo \"mongodb+srv://${{ secrets.ATLAS_URI_FEATURE_BRANCHES }}/GitHubActionsMetadata\" --username ${{ secrets.ATLAS_USERNAME_FEATURE_BRANCHES }} --password ${{ secrets.ATLAS_PASSWORD_FEATURE_BRANCHES }} --eval \"db.metadata.findOne({'branch': '${{ env.FEATURE_BRANCH }}'})\")\n \n if [ $output == *null ]]; then\n echo \"No Realm App found for this branch\"\n else\n # Parse the output to retrieve the realm_app_id\n app_id=$(echo $output | sed 's/^.*realm_app_id\" : \"\\([^\"]*\\).*/\\1/')\n \n # Delete the Realm App\n echo \"A Realm App was found for this branch: $app_id. It will now be deleted\"\n appservices app delete --app $app_id\n \n # Delete the record in the GitHubActionsMetadata database\n output=$(mongo \"mongodb+srv://${{ secrets.ATLAS_URI_FEATURE_BRANCHES }}/GitHubActionsMetadata\" --username ${{ secrets.ATLAS_USERNAME_FEATURE_BRANCHES }} --password ${{ secrets.ATLAS_PASSWORD_FEATURE_BRANCHES }} --eval \"db.metadata.deleteOne({'branch': '${{ env.FEATURE_BRANCH }}'})\")\n fi\n```\n\n5. **Delete the database associated with the branch.** This step deletes the database associated with the branch that was just deleted.\n```\nmongo \"mongodb+srv://${{ secrets.ATLAS_URI_FEATURE_BRANCHES }}/InventoryDemo-${{ env.FEATURE_BRANCH }}\" --username ${{ secrets.ATLAS_USERNAME_FEATURE_BRANCHES }} --password ${{ secrets.ATLAS_PASSWORD_FEATURE_BRANCHES }} --eval \"db.dropDatabase()\"\n```\n6. **Complete job.** This step is created by GitHub Actions to complete the workflow.\n\nStaging\n-------\n\nAs part of the pull request process, my teammate merged my code change into the `staging` branch. I call this stage \"Staging,\" but teams have a variety of names for this stage. They might call it \"QA (Quality Assurance),\" \"Testing,\" \"Pre-Production,\" or something else entirely. This is the stage where teams simulate the production environment and make sure everything works together as intended.\n\nWhen my teammate merged my code change into the `staging` branch, GitHub kicked off another workflow. The purpose of this workflow is to deploy the code changes to the Staging environment and ensure everything continues to work as expected. \n\n![Screenshot of the GitHub Actions web interface after a push to the 'staging' branch triggers a workflow\n\nThe workflow runs the following steps.\n\n1. **Set up job.** This step is created by GitHub Actions to prepare the workflow.\n2. **Run actions/checkout@v2.** Uses the Checkout V2 Action to check out the repository so the workflow can access the code.\n3. **Store current time in variable.** See the section above for more information on this step.\n4. **Is this a push to the Staging branch?** This step checks if the workflow was triggered by a push to the `staging` branch. If so, it stores the ID of the Staging Realm App in the `REALM_APP_ID` environment variable.\n```\necho \"REALM_APP_ID=inventorydemo-staging-zahjj\" >> $GITHUB_ENV\n```\n\n5. **Install the App Services CLI and authenticate.** See the section above for more information on this step.\n6. 
**Push updated copy of the Realm App for existing apps (Main, Staging, or Feature branches).** This step pushes an updated copy of the Realm App (stored in `inventory/export/sync`) for cases when the Realm App already exists.\n```\ncd inventory/export/sync\nappservices push --remote=\"${{ env.REALM_APP_ID }}\" -y\n```\n7. **Create `realm-app-id.txt` that stores the Realm App ID.** See the section above for more information on this step.\n8. **Build mobile app and run tests.** See the section above for more information on this step.\n9. **Post Run actions/checkout@v2.** This cleanup step runs automatically when you use the Checkout V2 Action.\n10. **Complete job.** This step is created by GitHub Actions to complete the workflow.\n\nRealm has a new feature releasing soon that will allow you to roll back deployments. When this feature releases, I plan to add a step to the workflow above to automatically roll back the deployment to the previous one in the event of test failures.\n\nMoving from Staging to Production\n---------------------------------\n\nAt this point, some teams may choose to have their pipeline automation stop before automatically moving to production. They may want to run manual tests. Or they may want to intentionally limit their number of releases.\n\nI've chosen to move forward with continuous deployment in my pipeline. So, if the tests in Staging pass, the workflow above continues on to the `pushToMainBranch` job that automatically pushes the latest commits to the `main` branch. The job runs the following steps:\n\n1. **Set up job.** This step is created by GitHub Actions to prepare the workflow.\n2. **Run actions/checkout@v2.** Uses the Checkout V2 Action to check out all branches in the repository, so the workflow can access both the `main` and `staging` branches.\n3. **Push to the Main branch.** Merges the code from `staging` into `main`.\n```\ngit merge origin/staging\ngit push\n```\n4. **Post Run actions/checkout@v2.** This cleanup step runs automatically when you use the Checkout V2 Action.\n5. **Complete job.** This step is created by GitHub Actions to complete the workflow.\n\nProduction\n----------\n\nNow my code is in the final stage: production. Production is where the end users get access to the application.\n\nWhen the previous workflow merged the code changes from the `staging` branch into the `main` branch, another workflow began. \n\nThe workflow runs the following steps.\n\n1. **Set up job.** This step is created by GitHub Actions to prepare the workflow.\n2. **Run actions/checkout@v2.** Uses the Checkout V2 Action to check out the repository so the workflow can access the code.\n3. **Store current time in variable.** See the section above for more information on this step.\n4. **Is this a push to the Main branch?** This step checks if the workflow was triggered by a push to the `main` branch. If so, it stores the ID of the Production Realm App in the `REALM_APP_ID` environment variable.\n```\necho \"REALM_APP_ID=inventorysync-ctnnu\" >> $GITHUB_ENV\n```\n5. **Install the App Services CLI and authenticate.** See the section above for more information on this step.\n6. **Push updated copy of the Realm App for existing apps (Main, Staging, or Feature branches).** See the section above for more information on this step.\n7. **Create `realm-app-id.txt` that stores the Realm App ID.** See the section above for more information on this step.\n8. **Build mobile app and run tests.** See the section above for more information on this step.\n9. 
**Install the Apple certificate and provisioning profile (so we can create the archive).** When the workflow is in the production stage, it does something none of the other workflows do: This workflow creates the mobile app archive file (the `.ipa` file). In order to create the archive file, the Apple certificate and provisioning profile need to be installed. For more information on how the Apple certificate and provisioning profile are installed, see the GitHub documentation.\n10. **Archive the mobile app.** This step creates the mobile app archive file (the `.ipa` file).\n```\ncd inventory/clients/ios-swiftui/InventoryDemo\nxcodebuild -workspace InventoryDemo.xcodeproj/project.xcworkspace/ -scheme ci archive -archivePath $PWD/build/ci.xcarchive -allowProvisioningUpdates\n xcodebuild -exportArchive -archivePath $PWD/build/ci.xcarchive -exportPath $PWD/build -exportOptionsPlist $PWD/build/ci.xcarchive/Info.plist\n```\n11. **Store the Archive in a GitHub Release.** This step uses the gh-release action to store the mobile app archive in a GitHub Release as shown in the screenshot below. \n12. **Post Run actions/checkout@v2.** This cleanup step runs automatically when you use the Checkout V2 Action.\n13. **Complete job.** This step is created by GitHub Actions to complete the workflow.\n\nAs I described above, my pipeline creates a GitHub release and stores the `.ipa` file in the release. Another option would be to push the `.ipa` file to TestFlight so you could send it to your users for beta testing. Or you could automatically upload the `.ipa` to the App Store for Apple to review and approve for publication. You have the ability to customize your workflow based on your team's process.\n\nThe nice thing about automating the deployment to production is that no one has to build the mobile app archive locally. You don't have to worry about that one person who knows how to build the archive going on vacation or leaving the company\u2014everything is automated, so you can keep delivering new features to your users without the panic of what to do if a key person is out of the office.\n\n## Building Your Pipeline\n\nAs I wrap up this article, I want to help you get started building your pipeline.\n\n### Map Your Pipeline\n\nI encourage you to begin by working with key stakeholders to map your ideal pipeline. Ask questions like the following:\n\n* **What stages will be in the pipeline?** Do you have more stages than just Development, Staging, and Production?\n* **What automated tests should be run in the various stages of your pipeline?** Consider if you need to create more automated tests so that you feel confident in your releases.\n* **What should be the final output of your pipeline?** Is the result a fully automated pipeline that pushes changes automatically to the App Store? Or do you want to do some steps manually?\n\n### Implement Your Pipeline\n\nOnce you've mapped out your pipeline and figured out what your steps should be, it's time to start implementing your pipeline. Starting from scratch can be challenging... but you don't have to start from scratch. Here are some resources you can use:\n\n1. The **mongodb-developer/realm-demos GitHub repo** contains the code I discussed today.\n * The repo has example mobile app and sync code, so you can see how the app itself was implemented. Check out the ios-swiftui directory.\n * The repo also has automated tests in it, so you can take a peek at those and see how my team wrote those. 
Check out the InventoryDemoTests and the InventoryDemoUITests directories.\n * The part I'm most excited about is the GitHub Actions Workflow: build.yml. This is where you can find all of the code for my pipeline automation. Even if you're not going to use GitHub Actions to implement your pipeline, this file can be helpful in showing how to execute the various steps from the command line. You can take those commands and use them in other CI/CD tools.\n * The delete-feature-branch-artifacts.yml workflow shows how to clean up artifacts whenever a feature branch is deleted.\n2. The **MongoDB Realm documentation** has a ton of great information and is really helpful in figuring out what you can do with the App Services CLI.\n3. The **MongoDB Community** is the best place to ask questions as you are implementing your pipeline. If you want to show off your pipeline and share your knowledge, we'd love to hear that as well. I hope to see you there!\n\n## Summary\n\nYou've learned a lot about how to craft your own CI/CD pipeline in this article. Creating a CI/CD pipeline can seem like a daunting task. \n\nWith the resources I've given you in this article, you can create a CI/CD pipeline that is customized to your team's process. \n\nAs Tom Haverford wisely said, \"Sometimes you gotta work a little so you can ball a lot.\" Once you put in the work of building a pipeline that works for you and your team, your app development can really fly, and you can feel confident in your releases. And that's a really big deal.\n\n", "format": "md", "metadata": {"tags": ["Realm", "GitHub Actions"], "pageDescription": "Learn how to build CI/CD pipelines in GitHub Actions for apps built using MongoDB Realm.", "contentType": "Tutorial"}, "title": "How to Build CI/CD Pipelines for MongoDB Realm Apps Using GitHub Actions", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/scaling-gaming-mongodb-square-enix-gaspard-petit", "action": "created", "body": "# Scaling the Gaming Industry with Gaspard Petit of Square Enix\n\nSquare Enix is one of the most popular gaming brands in the world. They're known for such franchise games as Tomb Raider, Final Fantasy, Dragon Quest, and more. In this article, we provide a transcript of the MongoDB Podcast episode in which Michael and Nic sit down with Gaspard Petit, software architect at Square Enix, to talk about how they're leveraging MongoDB, and his own personal experience with MongoDB as a data platform.\n\nYou can learn more about Square Enix on their website. You can find Gaspard on LinkedIn.\n\nJoin us in the forums to chat about this episode, about gaming, or about anything related to MongoDB and Software Development. \n\nGaspard Petit (00:00):\nHi everybody, this is Gaspard Petit. I'm from Square Enix. Welcome to this MongoDB Podcast.\n\nGaspard Petit (00:09):\nMongoDB was perfect for processes, there wasn't any columns predefined, any schema, we could just add fields. And why this is important as designers is that we don't know ahead of time what the final game will look like. This is something that evolves, we do a prototype of it, you like it, you don't like it, you undo something, you redo something, you go back to something you did previously, and it keeps changing as the game evolves. It's very rare that I've seen a game production go straight from point A to Z without twirling a little bit and going back and forth. So that back and forth process is cumbersome. 
For the back-end, where the requirements are set in stone, you have to deliver it so the game team can experience it, and then they'll iterate on it. And if you're set in stone on your database, and each time you change something you have to migrate your data, you're wasting an awful lot of time.\n\nMichael Lynn (00:50):\nWelcome to the show. On today's episode, we're talking with Gaspard Petit of the Square Enix, maker of some of the best-known, best-loved games in the gaming industry. Today, we're talking about how they're leveraging MongoDB and a little bit about Gaspard's journey as a software architect. Hope you enjoy this episode.\n\nAutomated (01:07):\nYou're listening to the MongoDB podcast, exploring the world of software development, data, and all things MongoDB. And now your hosts, Michael Lynn and Nic Raboy.\n\nMichael Lynn (01:26):\nHey, Nic. How you doing today?\n\nNic Raboy (01:27):\nI'm doing great, Mike. I'm really looking forward to this episode. I've been looking forward to it for what is it? More than a month now because it's really one of the things that hits home to me, and that's gaming. It's one of the reasons why I got into software development. So this is going to be awesome stuff. What do you think, Mike?\n\nMichael Lynn (01:43):\nFantastic. I'm looking forward to it as well. And we have a special guest, Gaspard Petit, from Square Enix. Welcome to the podcast, it's great to have you on the show.\n\nGaspard Petit (01:51):\nHi, it's good to be here.\n\nMichael Lynn (01:52):\nFantastic. Maybe if you could introduce yourself to the folks and let folks know what you do at Square Enix.\n\nGaspard Petit (01:58):\nSure. So I'm software online architect at Square Enix. I've been into gaming pretty much my whole life. And when I was a kid that was drawing game levels on piece of papers with my friends, went to university as a software engineer, worked in a few companies, some were gaming, some were around gaming. For example, with Autodesk or Softimage. And then got into gaming, first game was a multiplayer game. And it led me slowly into multiplayer games. First company was at Behaviour and then to Eidos working on the reboot of Tomb Raider on the multiplayer side. Took a short break, went back into actually a company called Datamine, where I learned about the back-end how to work. It wasn't on the Azure Cloud at the time. And I learned a lot about how to do these processes on the cloud, which turned out to be fascinating how you can converge a lot of requests, a lot of users into a distributed environment, and process this data efficiently.\n\nGaspard Petit (03:03):\nAnd then came back to Square Enix as a lead at the time for the internally, we call it our team, the online suite, which is a team in charge of many of the Square Enix's game back-ends. And I've been there for a couple of years now. Six years, I think, and now became online architect. So my role is making sure we're developing in the right direction using the right services, that our solutions will scale, that they're appropriate for the needs of the game team. That we're giving them good online services basically, and that they're also reliable for the users.\n\nNic Raboy (03:44):\nSo the Tomb Raider reboot, was that your first big moment in the professional game industry, or did you have prior big moments before that?\n\nGaspard Petit (03:54):\nI have to say it was probably one of the ones I'm most proud of. To be honest, I worked on a previous game, it was called Naughty Bear. 
It wasn't a great success from the public's point of view, the meta critics weren't great. But the team I worked on was an amazing team, and everyone on that team was dedicated. It was a small team, the challenges were huge. So from my point of view, that game was a huge success. It didn't make it, the public didn't see it that way. But the challenges, it was a multiplayer game. We had the requirements fairly last-minute to make this a multiplayer game. So we had to turn in single player into multiplayer, do the replication. A lot of complicated things in a short amount of time. But with the right team, with the right people motivated. To me, that was my first gaming achievement.\n\nMichael Lynn (04:49):\nYou said the game is called Naughty Bear?\n\nGaspard Petit (04:51):\nNaughty Bear, yes.\n\nMichael Lynn (04:52):\nWhat type of game is that? Because I'm not familiar with that.\n\nGaspard Petit (04:55):\nNo, not many people are. It's a game where you play a teddy bear waking up on an island. And you realize that there's a party and you're not invited to that party. So you just go postal and kill all the bears on the island pretty much. But there's AI involved, there's different ways of killing, there's different ways of interacting with those teddy bears. And of course, there's no blood, right? So it's not violence. It's just plain fun, right? So it's playing a little bit on that side, on the-\n\nMichael Lynn (05:23):\nAbsolutely.\n\nGaspard Petit (05:26):\nBut it's on a small island, so it's very limited. But the fun is about the AI and playing with friends. So you can play as the bears that are trying to hide or as the bear that's trying to carnage the island.\n\nGaspard Petit (05:41):\nThis is pretty much what introduced me to leaderboards, multiplayer replication. We didn't have any saved game. It was over 10 years ago, so the cloud was just building up. But you'd still have add matchmaking features, these kind of features that brought me into the online environment.\n\nNic Raboy (05:59):\nAwesome. In regards to your Naughty Bear game, before we get into the scoring and stuff, what did you use to develop it?\n\nGaspard Petit (06:05):\nIt was all C++, a little bit of Lua back then. Like I said, on the back-end side, there wasn't much to do. We used the first party API's which were C++ connected to their server. The rest was a black box. To me at the time, I didn't know how matchmaking worked or how all these leaderboards worked, I just remember that it felt a bit frustrating that I remember posting scores, for example, to leaderboards. And sometimes it would take a couple of seconds for the rank to be updated. And I remember feeling frustration about that. Why isn't this updated right away? I've just posted my score and can take a minute or two before my rank is updated. And now that I'm working back-end, I totally get it. I understand the volume of scores getting posted, the ranking, the sorting out, all the challenges on the back-end. But to me back then it was still a black box.\n\nMichael Lynn (06:57):\nSo was that game leveraging MongoDB as part of the back-end?\n\nGaspard Petit (07:01):\nNo, no, no. Like I said, it wasn't really on the cloud. It was just first party API. I couldn't tell you what Microsoft, Sony is using. But from our point of view, we were not using any in-house database. 
So that was a different company, it was at Behaviour.\n\nMichael Lynn (07:19):\nAnd I'm curious as an early developer in your career, what things did you learn about game development that you still take with you today?\n\nGaspard Petit (07:28):\nI think a lot of people are interested in game development for the same reasons I am. It is very left and right brain, you have a lot of creativity, you have to find ways to make things work. Sometimes you're early on in a project and you get a chance to do things right. So you architect things, you do the proper design, you even sometimes draw UML and organize your objects so that it's all clean, and you feel like you're doing theoretical and academic almost work, and then the project evolves. And as you get closer to the release date, this is not something that will live forever, it's not a product that you will recycle, and needs to be maintained for the next 10 years. This is something you're going to ship and it has to work on ideally on the day you ship it.\n\nGaspard Petit (08:13):\nSo you start shifting your focus saying, \"This has to work no matter what. I have to find a solution. There's something here that doesn't work.\" And I don't have time to find a proper design to refactor this, I just have to make it work. And you shift your way of working completely into ship it, make it work, find a solution. And you get into a different kind of creativity as a programmer. Which I love, which is also scary some time because you put this duct tape in your code and it works. And you'rE wondering, \"Should I feel right about shipping this?\" And actually, nobody's going to notice and it's going to hold and the game will be fun. And it doesn't matter that you have this duct tape somewhere. I think this is part of the fun of shaping the game, making it work at the end no matter what. And it doesn't have to be perfectly clean, it has to be fun at the end.\n\nGaspard Petit (09:08):\nThis is definitely one aspect of it. The other aspect is the real-time, you want to hit 30fps or 60fps or more. I'm sure PC people are now demanding more. But you want this frame rate, and at the same time you want the AI, and you want the audio, and you want the physics and you want everything in that FPS. And you somehow have to make it all work. And you have to find whatever trick you can. If you can pre-process things on their hard drive assets, you do it. Whatever needs you can optimize, you get a chance to optimize it.\n\nGaspard Petit (09:37):\nAnd there's very few places in the industry where you still get that chance to optimize things and say, \"If I can remove this one millisecond somewhere, it will have actually an impact on something.\" Back-end has that in a way. MongoDB, I'm sure if you can remove one second in one place, you get that feeling of I can now perform this amount of more queries per second. But the game also has this aspect of, I'll be able to process a little bit more, I'll be able to load more assets, more triangles, render more things or hit more bounding boxes. So the performance is definitely an interesting aspect of the game.\n\nNic Raboy (10:12):\nYou spent a lot of time doing the actual game development being the creative side, being the performance engineer, things like that. How was the transition to becoming an online architect? I assume, at least you're no longer actually making what people see, but what people experience in the back-end, right? What's that like?\n\nGaspard Petit (10:34):\nThat's right. It wasn't an easy transition. 
And I was the lead on the team for a couple of years. So I got that from a few candidates joining the team, you could tell they wish they were doing gameplay or graphics, and they got into the back-end team. And it feels like you're, \"Okay, I'll do that for a couple of years and then I'll see.\" But it ended up that I really loved it. You get a global view of the players what they're doing, not just on a single console, you also get to experience the game as it is live, which I didn't get to experience when I was programming the game, you program the game, it goes to a disk or a digital format, it's shipped and this is where Julian, you take your vacation after when a game has shipped.\n\nGaspard Petit (11:20):\nThe exhilaration of living the moment where the game is out, monitoring it, seeing the player while something disconnect, or having some problems, monitoring the metrics, seeing that the game is performing as expected or not. And then you get into other interesting things you can do on the back-end, which I couldn't do on the game is fixing the game after it has shipped. So for example, you discovered that the balancing is off. Something on the game doesn't work as expected. But you have a way of somehow figuring out from the back-end how you can fix it.\n\nGaspard Petit (11:54):\nOf course, ideally, you would fix in the game. But nowadays, it's not always easy to repackage the game on each platform and deliver it on time. It can take a couple of weeks to fix it to fix the game from the code. So whatever we can fix from the back-end, we do. So we need to have the proper tools for monitoring this humongous amount of data coming our way. And then we have this creativity kicking in saying, \"Okay, I've got this data, how can I act on it to make the game better?\" So I still get those feelings from the back-end.\n\nMichael Lynn (12:25):\nAnd I feel like the line between back-end and front-end is really blurring lately. Anytime I get online to play a game, I'm forced to go through the update process for many of the games that I play. To what degree do you have flexibility? I'll ask the question this way. How frequently Are you making changes to games that have already shipped?\n\nGaspard Petit (12:46):\nIt's not that frequent. It's not rare, either. It's somewhere in between. Ideally, we would not have to make any changes after the game is out. But in practice, the games are becoming so complex, they no longer fit on a small 32 megabyte cartridge. So there's a lot of things going on in the game. They're they're huge. It's almost impossible to get them perfectly right, and deliver them within a couple of years.\n\nGaspard Petit (13:16):\nAnd there's also a limitation to what you can test internally. Even with a huge team of QA, you will discover things only when players are experiencing the game. Like I said the flow of fixing the game is long. You hear about the report on Reddit or on Twitter, and then you try to reproduce it internally right there. It might take a couple of days to get the same bug the player has reported. And then after that, you have to figure out in the code how you can fix it, make sure you don't break anything else. So it can take literally weeks before you fix something very trivial.\n\nGaspard Petit (13:55):\nOn the back-end, if we can try it out, we can segment a specific fix for a single player, make sure for that player it works. Do some blue-green introduction of that test or do it only on staging first, making sure it works, doing it on production. 
And within a couple of sometimes I would say, a fix has come out in a couple of hours in some case where we noticed it on production, went to staging and to production within the same day with something that would fix the game.\n\nGaspard Petit (14:25):\nSo ideally, you would put as much as you can on the back-end because you have so much agility from the back-end. I know players are something called about this idea of using back-ends for game because they see it as a threat. I don't think they realize how much they can benefit from fixes we do on the back-end.\n\nNic Raboy (14:45):\nSo in regards to the back-end that you're heavily a part of, what typically goes in to the back-end? I assume that you're using quite a few tools, frameworks, programming languages, maybe you could shed some light onto that.\n\nGaspard Petit (14:57):\nOh yes, sure. So typically, in almost every project, there is some telemetry that is useful for us to monitor that the game is working like I said, as expected. We want to know if the game is crashing, we want to know if players are stuck on the level and they can't go past through it. If there's an achievement that doesn't lock or something that shouldn't be happening and doesn't happen. So we want to make sure that we're monitoring these things.\n\nGaspard Petit (15:23):\nThere's, depending on the project, we have community features. For example, comparing what you did in the life experience series to what the community did, and sometime it will be engagements or creating challenges that will change on a weekly basis. In some cases recently for outriders for example, we have the whole save game saved online, which means two things, right? We can get an idea of the state of each player, but we can also fix things. So it really depends on the project. It goes from simple telemetry, just so we know that things are going okay, or we can act on it to adding some game logic on the back-end getting executed on the back-end.\n\nMichael Lynn (16:09):\nAnd what are the frameworks and development tools that you leverage?\n\nGaspard Petit (16:12):\nYes, sorry. So the back-ends, we write are written in Java. We have different tools, we use outside of the back-end. We deploy on Kubernetes. Almost everything is Docker images at this point. We use MongoDB as the main storage. Redis as ephemeral storage. We also use Kafka for the telemetry pipeline to make sure we don't lose them and can process them asynchronously. Jenkins for building. So this is pretty much our environment.\n\nGaspard Petit (16:45):\nWe also work on the game integration, this is in C++ and C#. So our team provides and actually does some C++ development where we try to make a HTTP client, C++ clients, that is cross platform and as efficient as possible. So at least impacting the frame rate. Even sometimes it means downloading things a little bit slower or are not ticking as many ticks. But we customize our HTTP client to make sure that the online impact is minimal on the gameplay. So our team is in charge of both this client integration into the game and the back-end development.\n\nMichael Lynn (17:24):\nSo those HTTP clients, are those custom SDKs that you're providing your own internal developers for using?\n\nGaspard Petit (17:31):\nExactly, so it's our own library that we maintain. It makes sure that what we provide can authenticate correctly with the back-end as a right way to communicate with it, the right retries, the right queuing. 
So we don't have to enforce through policies to each game themes, how to connect to the back-end. We can bundle these policies within the SDK that we provide to them.\n\nMichael Lynn (17:57):\nSo what advice would you have for someone that's just getting into developing games? Maybe some advice for where to focus on their journey as a game developer?\n\nGaspard Petit (18:08):\nThat's a great question. The advice I would give is, it starts of course, being passionate about it. You have to because there's a lot of work in the gaming, it's true that we do a lot of hours. If we did not enjoy the work that we did, we would probably go somewhere else. But it is fun. If you're passionate about it, you won't mind as much because the success and the feeling you get on each release compensates the effort that you put into those projects. So first, you need to be passionate about it, you need to be wanting to get those projects and be proud of them.\n\nGaspard Petit (18:46):\nAnd then I would say not to focus too much on one aspect of gaming because at first, I did several things, right? My studies were on the image processing, I wanted to do 3D rendering. At first, that was my initial goal as a teenager. And this is definitely not what I ended up doing. I did almost everything. I did a little bit of rendering, but almost none. I ended up in the back-end. And I learned that almost every aspect of the game development has something interesting and challenging.\n\nGaspard Petit (19:18):\nSo I would say not too much to focus on doing the physics or the rendering, sometime you might end up doing the audio and that is still something fascinating. How you can place your audio within the scene and make it sound like it comes from one place, and hit the walls. And then in each aspect, you can dig and do something interesting. And the games now at least within Square Enix they're too big for one person to do it all. So it's generally, you will be part of a team anyway. And within that team, there will be something challenging to do.\n\nGaspard Petit (19:49):\nAnd even the back-end, I know not so many people consider back-end as their first choice. But I think that's something that's actually a mistake. There is a lot of interesting things to do with the back-end, especially now that there is some gameplay happening on back-ends, and increasingly more logic happening on the back-end. I don't want to say that one is better than the other, of course, but I would personally not go back, and I never expected to love it so much. So be open-minded and be passionate. I think that's my general advice.\n\nMichael Lynn (20:26):\nSo speaking of back-end, can we talk a little bit about how Square Enix is leveraging MongoDB today?\n\nGaspard Petit (20:32):\nSo we've been using MongoDB for quite some time. When I joined the team, it was already been used. We were on, I think version 2.4. MongoDB had just implemented authentication on collections, I think. So quite a while ago, and I saw it evolve over time. If I can share this, I remember my first day on the team hitting MongoDB. And I was coming from a SQL-like world, and I was thinking, \"What is this? What is this query language and JSON?\" And of course, I couldn't query anything at first because it all seemed the syntax was completely strange to me. And I didn't understand anything about sharding, anything about chunking, anything about how the database works. 
So it actually took me a couple of months, I would say before I started appreciating what Mongo did, and why it had been picked.\n\nGaspard Petit (21:27):\nSo it has been recommended, if I remember, I don't want to say incorrect things. But I think it had been recommended before my time. It was a consulting team that had recommended MongoDB for the gaming. I wouldn't be able to tell you exactly why. So over time, what I realized is that MongoDB was perfect for our processes because there wasn't any columns predefine, any schema, we could just add fields. If the fields were missing, it wasn't a big deal, we could encode in the back-end, and we could just set them to default values.\n\nGaspard Petit (22:03):\nAnd why this is important is because the game team generally doesn't know. I don't want to say the game team actually, the designers or the producer, they don't know ahead of time, what the final game will look like, this is something that evolves. You play, you do a prototype of it, you like it, you don't like it, you undo something, you redo something, you go back to something you did previously, and it keeps changing as the game evolves. It's very rare that I've seen a game production go straight from point A to Z without twirling a little bit and going back and forth.\n\nGaspard Petit (22:30):\nSo that back and forth process is cumbersome for the back-end. You're asked to implement something before the requirements are set in stone, you have to deliver it so the game team can experience it and then we'll iterate on it. And if you're set in stone on your database, and each time that you change something, you have to migrate your data, you're wasting an awful lot of time. And after, like I said, after a couple of months that become obvious that MongoDB was a perfect fit for that because the game team would ask us, \"Hey, I need now to store this thing, or can you change this type for that type?\" And it was seamless, we would change a string for an integer or a string, we would add a field to a document and that was it. No migration. If we needed, the back-end would catch the cases where a default value was missing. But that was it.\n\nGaspard Petit (23:19):\nAnd we were able to progress with the game team as they evolved their design, we were able to follow them quite rapidly with our non-schema database. So now I wouldn't switch back. I've got used to the JSON query language, I think human being get used to anything. And once you're familiar with something, you don't want to learn something else. And I ended up learning the SQL Mongo syntax, and now I'm actually very comfortable with it. I do aggregation on the command line, these kinds of things. So it's just something you have to be patient off if you haven't used MongoDB before. At first, it looks a little bit weird, but it quickly becomes quite obvious why it is designed in a way. It's actually very intuitive to use.\n\nNic Raboy (24:07):\nIn regards to game development in general, who is determining what the data should look like? Is that the people actually creating the local installable copy of the game? Or is that the back-end team deciding what the model looks like in general?\n\nGaspard Petit (24:23):\nIt's a mix of both. Our team acts as an expert team, so we don't dictate where the back-end should be. But since we've been on multiple projects, we have some experience on the good and bad patterns. And in MongoDB it's not always easy, right? We've been hit pretty hard with anti-patterns in the past. 
So we would now jump right away if the game team asks us to store something in a way that we knew would not perform well when scaling up. So we're cautious about it, but it in general, the requirements come from the game team, and we translate that into a database schema, which says in a few cases, the game team knows exactly what they want. And in those cases, we generally just store their data as a raw string on MongoDB. And then we can process it back, whether it's JSON or whatever other format they want. We give them a field saying, \"This belongs to you, and use whatever schema you want inside of it.\"\n\nGaspard Petit (25:28):\nBut of course, then they won't be able to insert any query into that data. It's more of a storage than anything else. If they need to perform operations, and we're definitely involved because we want to make sure that they will be hitting the right indexes, that the sharding will be done properly. So it's a combination of both sides.\n\nMichael Lynn (25:47):\nOkay, so we've got MongoDB in the stack. And I'm imagining that as a developer, I'm going to get a development environment. And tell me about the way that as a developer, I'm interacting with MongoDB. And then how does that transition into the production environment?\n\nGaspard Petit (26:04):\nSure. So every developer has a local MongoDB, we use that for development. So we have our own. Right now is docker-compose image. And it has a full virtual environment. It has all the other components I mentioned earlier, it has Kafka, it even LDAP, it has a bunch of things running virtually including MongoDB. And it is even configured as a sharded cluster. So we have a local sharded cluster on each of our machine to make sure that our queries will work fine on the actual sharded cluster. So it's actually very close to production, even though it's on our local PC. And we start with that, we develop in Java and write our unit test to make sure we cover what we write and don't have regression. And those unit tests will run against a local MongoDB instance.\n\nGaspard Petit (26:54):\nAt some point, we are about to release something on production especially when there's a lot of changes, we want to make sure we do load testing. For our load testing, we have something else and I am not sure that that's a very well known feature from MongoDB, but it's extremely useful for us. It's the MongoDB Operator, which is an operator within Kubernetes. And it allows spinning up clusters based on the simple YAML. So you can say, \"I want a sharded cluster with three deep, five shards,\" and it will spin it up for you, it will take a couple of seconds a couple of minutes depending on what you have in your YAML. And then you have it. You have your cluster configured in your Kubernetes cluster. And then we run our tests on this. It's a new cluster, fresh. Run the full test, simulate millions of requests of users, destroy it. And then if we're wondering you know what? Does our back-end scale with the number of shards? And then we just spin up a new shard cluster with twice the number of shards, expect twice the performance, run the same test. Again, if we don't have one. Generally, we won't get that exactly twice the performance, right? But it will get an idea of, this operation would scale with the number of shards, and this one wouldn't.\n\nGaspard Petit (28:13):\nSo that Operator is very useful for us because it'll allow us to simulate these scenarios very easily. 
There's very little work involved in spinning up these Kubernetes cluster.\n\nGaspard Petit (28:23):\nAnd then when we're satisfied with that, we go to Atlas, which provides us the deployment of the CloudReady clusters. So this is not me personally who does it, we have an ops team who handle this, but they will prepare for us through Atlas, they will prepare the final database that we want to use. We work together to find the number of shards, the type of instance we want to deploy. And then Atlas takes care of it. We benefit from disk auto-scaling on Atlas. We generally start with lower instance, to set up the database when the big approaches for the game release, we scale up instance type again, through Atlas.\n\nGaspard Petit (29:10):\nIn some cases, we've realized that the number of shards was insufficient after testing, and Atlas allows us to make these changes quite close to the launch date. So what that means is that we can have a good estimate a couple of weeks before the launch of our requirements in terms of infrastructure, but if we're wrong, it doesn't take that long to adjust and say, \"Okay, you know what? We don't need five shards, we need 10 shards.\" And especially if you're before the launch, you don't have that much data. It just takes a couple of minutes, a couple of hours for Atlas to redeploy these things and get the database ready for us. So it goes in those three stages of going local for unit testing with our own image of Mongo. We have a Kubernetes cluster for load testing which use the Mongo Operator, and then we use Atlas in the end for the actual cloud deployment.\n\nGaspard Petit (30:08):\nWe actually go one step further when the game is getting old and load is predictable on it. And it's not as high as it used to be, we move this database in-house. So we have our own data centers. And we will actually share Mongo instances for multiple games. So we co-host multiple games on a single cluster, not single database, of course, but a single Mongo cluster. And that becomes very, very cost effective. We get to see, for example, if there's a sales on one game, while the other games are less active, it takes a bit more load. But next week, something else is on sales, and they kind of average out on that cluster. So older games, I'm talking like four or five years old games tend to be moved back to on-premises for cost effectiveness.\n\nNic Raboy (31:00):\nSo it's great to know that you can have that choice to bring games back in when they become old, and you need to scale them down. Maybe you can talk about some of the other benefits that come with that.\n\nGaspard Petit (31:12):\nYeas. And while it also ties in to the other aspects I mentioned of. We don't feel locked with MongoDB, we have options. So we have the Atlas option, which is extremely useful when we launch a game. And it's high risk, right? If an incident happened on the first week of a game launch, you want all hands on deck and as much support as you can. After a couple of years, we know the kind of errors we can get, we know what can go wrong with the back-end. And generally the volume is not as high, so we don't necessarily need that kind of support anymore. And there's also a lot of overhead on running things on the cloud, if you're on the small volume. There's not just the Mongo itself, there's the pods themselves that need to run on a compute environment, there's the traffic that is counting.\n\nGaspard Petit (32:05):\nSo we have that data center. 
We actually have multiple data centers, we're lucky to be big enough to have those. But it gives us this extra option of saying, \"We're not locked to the cloud, it's an option to be on the cloud with MongoDB.\" We can run it locally on a Docker, we can run it on the cloud, where we can control where we go. And this has been a key element in the architecture of our back-ends from the start actually, making sure that every component we use can be virtualized, brought back on-premises so that we can control locally. For example, we can run tests and have everything controlled, not depending on the cloud. But we also get the opportunity of getting an external team looking at the project with us on the critical moments. So I think we're quite happy to have those options of running it wherever we want.\n\nMichael Lynn (32:56):\nYeah, that's clearly a benefit. Talk to me a little bit about the scale. I know you probably can't mention numbers and transactions per second and things like that. But this is clearly one of the challenges in the gaming space, you're going to face massive scale. Do you want to talk a little bit about some of the challenges that you're facing, with the level of scale that you're achieving today?\n\nGaspard Petit (33:17):\nYes, sure. That's actually one of the challenging aspects of the back-end, making sure that you won't hit a ceiling at some point or an unexpected ceiling. And there's always one, you just don't always know which one it is. When we prepare for a game launch, regardless of its success, we have to prepare for the worst, the best success. I don't know how to phrase that. But the best success might be the worst case for us. But we want to make sure that we will support whatever number of players comes our way. And we have to be prepared for that.\n\nGaspard Petit (33:48):\nAnd depending on the scenarios, it can be extremely costly to be prepared for the worst/best. Because it might be that you have to over scale right away, and make sure that your ceiling is very high. Ideally, you want to hit something somewhere in the middle where you're comfortable that if you were to go beyond that, you would be able to adjust quickly. So you sort of compromise between the cost of your launch with the risk and getting to a point where you feel comfortable saying, \"If I were to hit that and it took 30 minutes to recover, that would be fine.\" Nobody would mind because it's such a success that everyone would understand at that point. That ceiling has to be pretty high in the gaming industry. We're talking millions of concurrent users that are connecting within the same minute, are making queries at the same time on their data. It's a huge number. It's difficult, I think, even for the human mind to comprehend these numbers when we're talking millions.\n\nGaspard Petit (34:50):\nIt is a lot of requests per second. So it has to be distributed in a way that will scale, and that was also one of the things that I realized Mongo did very well with the mongos and the mongod split to a sharded cluster, where you pretty much have as many databases you want, you can split the workload on as many database as you want with the mongos, routing it to the right place. So if you're hitting your ceiling with two shards, and you had two more shards, in theory, you can get twice the volume of queries. For that to work, you have to be careful, you have to shard appropriately. So this is where you want to have some experience and you want to make sure that your shard keys is well picked. 
This is something we've tuned over the years that we've had different experience with different shard keys.\n\nGaspard Petit (35:41):\nFor us, I don't know if everyone in the gaming is doing it this way, but what seems to be the most intuitive and most convenient shard key is the user ID, and we hash it. This way it goes to... Every user profile goes to a random shard, and we can scale Mongo within pretty much the number of users we have, which is generally what tends to go up and down in our case.\n\nGaspard Petit (36:05):\nSo we've had a couple of projects, we've had smaller clusters on one, two. We pretty much never have one shard, but two shards, three shards. And we've been up to 30 plus shards in some cases, and it's never really been an issue. The size, Mongo wise, I would say. There's been issues, but it wasn't really with the architecture itself, it was more of the query pattern, or in some cases, we would pull too much data in the cache. And the cache wasn't used efficiently. But there was always a workaround. And it was never really a limitation on the database. So the sharding model works very well for us.\n\nMichael Lynn (36:45):\nSo I'm curious how you test in that type of scale. I imagine you can duplicate the load patterns, but the number of transactions per second must be difficult to approximate in a development environment. Are you leveraging Atlas for your production load testing?\n\nGaspard Petit (37:04):\nNo. Well, yes and no. The initial tests are done on Kubernetes using the Mongo Operator. So this is where we will simulate. For one operation, we will test will it scale with instance type? So adding more CPU, more RAM, will it scale with number of shards? So we do this grid on each operation that the players might be using ahead of time. At some point, we're comfortable that everything looks right. But testing each operation individually doesn't mean that they will all work fine, they will all play fine when they're mixed together. So the final mix goes through either the production database, if it's not being used yet, or a copy is something that it would look like the production database in Atlas.\n\nGaspard Petit (37:52):\nSo we spin up a Atlas database, similar to the one we expect to use in production. And we run the final load test on that one, just to get clear number with their real components, what will it look like. So it's not necessarily the final cluster we will use, sometimes it's a copy of it. Depending if it's available, sometimes there's already certification ongoing, or QA is already testing on production. So we can't hit the production database for that, so we just spin a different instance of it.\n\nNic Raboy (38:22):\nSo this episode has been fantastic so far, I wanted to leave it open for you giving us or the listeners I should say, any kind of last minute words of wisdom or any anything that we might have missed that you think would be valuable for them to walk away with.\n\nGaspard Petit (38:38):\nSure. So maybe I can share something about why I think we're efficient at what we do and why we're still enjoying the work we're doing. And it has to do a little bit with how we're organized within Square Enix with the different teams. I mentioned earlier that with our interaction with the game team was not so much to dictate how the back-end should be for them, but rather to act as experts. 
And this is something I think we're lucky to have within Square Enix, where our operation team and our development team are not necessarily acting purely as service providers. And this touches Mongo as well, the way we integrate Mongo in our ecosystem is not so much at... It is in part, \"Please give us database, please make sure they're healthy and working and give us support when we need it.\" But it's also about tapping into different teams as experts.\n\nGaspard Petit (39:31):\nSo Mongo for us is a source of experts where if we need recommendations about shards, query patterns, even know how to use a Java driver. We get a chance to ask MongoDB experts and get accurate feedback on how we should be doing things. And this translate on every level of our processes. We have the ops team that will of course be monitoring and making sure things are healthy, but they're also acting as experts to tell us how the development should be ongoing or what are the best practices?\n\nGaspard Petit (40:03):\nThe back-end dev team does the same thing with the game dev team, where we will bring them our recommendations of how the game should use, consume the services of the back-end, even how they should design some features so that it will scale efficiently or tell them, \"This won't work because the back-end won't scale.\" But act as experts, and I think that's been key for our success is making sure that each team is not just a service provider, but is also bringing expertise on the table so that each other team can be guided in the right direction.\n\nGaspard Petit (40:37):\nSo that's definitely one of the thing that I appreciate over my years. And it's been pushed down from management down to every developers where we have this mentality of acting as experts to others. So we have that as embedded engineers model, where we have some of our folks within our team dedicated to the game teams. And same thing with the ops team, they have the dedicated embedded engineers from their team dedicated to our team, making sure that we're not in silos. So that's definitely a recommendation I would give to anyone in this industry, making sure that the silos are broken and that each team is teaching other teams about their best practices.\n\nMichael Lynn (41:21):\nFantastic. And we love that customers are willing to partner in that way and leverage the teams that have those best practices. So Gaspard, I want to thank you for spending so much time with us. It's been wonderful to chat with you and to learn more about how Square Enix is using MongoDB and everything in the game space.\n\nGaspard Petit (41:40):\nWell, thank you very much. It was a pleasure.\n\nAutomated (41:44):\nThanks for listening. If you enjoyed this episode, please like and subscribe. Have a question or a suggestion for the show? 
Visit us in the MongoDB community forums at community.mongodb.com.", "format": "md", "metadata": {"tags": ["MongoDB", "Java", "Kubernetes", "Docker"], "pageDescription": "Join Michael Lynn and Nic Raboy as they chat with Gaspard Petit of Square Enix to learn how one of the largest and best-loved gaming brands in the world is using MongoDB to scale and grow.", "contentType": "Podcast"}, "title": "Scaling the Gaming Industry with Gaspard Petit of Square Enix", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/manage-data-at-scale-with-online-archive", "action": "created", "body": "# How to Manage Data at Scale With MongoDB Atlas Online Archive\n\nLet's face it: Your data can get stale and old quickly. But just because\nthe data isn't being used as often as it once was doesn't mean that it's\nnot still valuable or that it won't be valuable again in the future. I\nthink this is especially true for data sets like internet of things\n(IoT) data or user-generated content like comments or posts. (When was\nthe last time you looked at your tweets from 10 years ago?) This is a\nreal-time view of my IoT time series data aging.\n\nWhen managing systems that have massive amounts of data, or systems that\nare growing, you may find that paying to save this data becomes\nincreasingly more costly every single day. Wouldn't it be nice if there\nwas a way to manage this data in a way that still allows it to be\nuseable by being easy to query, as well as saving you money and time?\nWell, today is your lucky day because with MongoDB Atlas Online\nArchive,\nyou can do all this and more!\n\nWith the Online Archive feature in MongoDB\nAtlas, you can create a rule to\nautomatically move infrequently accessed data from your live Atlas\ncluster to MongoDB-managed, read-only cloud object storage. Once your\ndata is archived, you will have a unified view of your Atlas cluster and\nyour Online Archive using a single endpoint..\n\n>\n>\n>Note: You can't write to the Online Archive as it is read-only.\n>\n>\n\nFor this demonstration, we will be setting up an Online Archive to\nautomatically archive comments from the `sample_mflix.comments` sample\ndataset that are older than 10 years. We will then connect to our\ndataset using a single endpoint and run a query to be sure that we can\nstill access all of our data, whether its archived or not.\n\n## Prerequisites\n\n- The Online Archive feature is available on\n M10 and greater\n clusters that run MongoDB 3.6 or later. So, for this demo, you will\n need to create a M10\n cluster in MongoDB\n Atlas. Click here for information on setting up a new MongoDB Atlas\n cluster.\n- Ensure that each database has been seeded by loading sample data\n into our Atlas\n cluster. I will be\n using the `sample_mflix.comments` dataset for this demo.\n\n>\n>\n>If you haven't yet set up your free cluster on MongoDB\n>Atlas, now is a great time to do so. You\n>have all the instructions in this blog post.\n>\n>\n\n## Configure Online Archive\n\nAtlas archives data based on the criteria you specify in an archiving\nrule. 
The criteria can be one of the following:\n\n- **A combination of a date and number of days.** Atlas archives data\n when the current date exceeds the date plus the number of days\n specified in the archiving rule.\n- **A custom query.** Atlas runs the query specified in the archiving\n rule to select the documents to archive.\n\nIn order to configure our Online Archive, first navigate to the Cluster\npage for your project, click on the name of the cluster you want to\nconfigure Online Archive for, and click on the **Online Archive** tab.\n\nNext, click the Configure Online Archive button the first time and the\nAdd Archive button subsequently to start configuring Online Archive for\nyour collection. Then, you will need to create an Archiving Rule by\nspecifying the collection namespace, which will be\n`sample_mflix.comments` for this demo. You will also need to specify the\ncriteria for archiving documents. You can either use a custom query or a\ndate match. For our demo, we will be using a date match and\nauto-archiving comments that are older than 10 years (365 days \\* 10\nyears = 3650 days) old. It should look like this when you are done.\n\nOptionally, you can enter up to two most commonly queried fields from\nthe collection in the Second most commonly queried field and Third most\ncommonly queried field respectively. These will create an index on your\narchived data so that the performance of your online archive queries is\nimproved. For this demo, we will leave this as is, but if you are using\nproduction data, be sure to analyze which queries you will be performing\nmost often on your Online Archive.\n\nBefore enabling the Online Archive, it's a good idea to run a test to\nensure that you are archiving the data that you intended to archive.\nAtlas provides a query for you to test on the confirmation screen. I am\ngoing to connect to my cluster using MongoDB\nCompass to test this\nquery out, but feel free to connect and run the query using any method\nyou are most comfortable with. The query we are testing here is this.\n\n``` javascript\ndb.comments.find({\n date: { $lte: new Date(ISODate().getTime() - 1000 \\* 3600 \\* 24 \\* 3650)}\n})\n.sort({ date: 1 })\n```\n\nWhen we run this query against the `sample_mflix.comments` collection,\nwe find that there is a total of 50.3k documents in this collection, and\nafter running our query to find all of the comments that are older than\n10 years old, we find that 43,451 documents would be archived using this\nrule. It's a good idea to scan through the documents to check that these\ncomments are in fact older than 10 years old.\n\nSo, now that we have confirmed that this is in fact correct and that we\ndo want to enable this Online Archive rule, head back to the *Configure\nan Online Archive* page and click **Begin Archiving**.\n\nLastly, verify and confirm your archiving rule, and then your collection\nshould begin archiving your data!\n\n>\n>\n>Note: Once your document is queued for archiving, you can no longer edit\n>the document.\n>\n>\n\n## How to Access Your Archived Data\n\nOkay, now that your data has been archived, we still want to be able to\nuse this data, right? So, let's connect to our Online Archive and test\nthat our data is still there and that we are still able to query our\narchived data, as well as our active data.\n\nFirst, navigate to the *Clusters* page for your project on Atlas, and\nclick the **Connect** button for the cluster you have Online Archive\nconfigured for. Choose your connection method. 
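Any connection method will work, because the cluster and the Online Archive sit behind a single federated endpoint. If you prefer a shell to a GUI, a quick sanity check might look like the sketch below; it assumes you have copied the federated connection string from the Connect dialog and opened a shell session against the `sample_mflix` database.\n\n``` javascript\n// Connect with the federated (cluster + Online Archive) connection string,\n// then query as usual - archived data needs no special syntax.\nuse sample_mflix\n\n// Returns the combined count of active and archived comments\ndb.comments.countDocuments({})\n\n// Comments older than 10 years are read from the Online Archive transparently\ndb.comments.find({\n date: { $lte: new Date(ISODate().getTime() - 1000 * 3600 * 24 * 3650) }\n}).sort({ date: 1 }).limit(5)\n```\n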
I will be using\nCompass for this\nexample. Select **Connect to Cluster and Online Archive** to get the\nconnection string that allows you to federate queries across your\ncluster and Online Archive.\n\nAfter navigating to the `sample_mflix.comments` collection, we can see\nthat we have access to all 50.3k documents in this collection, even\nafter archiving our old data! This means that from a development point\nof view, there are no changes to how we query our data, since we can\naccess archived data and active data all from one single endpoint! How\ncool is that?\n\n## Wrap-Up\n\nThere you have it! In this post, we explored how to manage your MongoDB\ndata at scale using MongoDB Atlas Online Archive. We set up an Online\nArchive so that Atlas automatically archived comments from the\n`sample_mflix.comments` dataset that were older than 10 years. We then\nconnected to our dataset and made a query in order to be sure that we\nwere still able to access and query all of our data from a unified\nendpoint, regardless of it being archived or not. This technique of\narchiving stale data can be a powerful feature for dealing with datasets\nthat are massive and/or growing quickly in order to save you time,\nmoney, and development costs as your data demands grow.\n\n>\n>\n>If you have questions, please head to our developer community\n>website where the MongoDB engineers and\n>the MongoDB community will help you build your next big idea with\n>MongoDB.\n>\n>\n\n## Additional resources:\n\n- Archive Cluster Data\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to efficiently manage your data at scale by leveraging MongoDB Atlas Online Archive.", "contentType": "Tutorial"}, "title": "How to Manage Data at Scale With MongoDB Atlas Online Archive", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/rust/serde-improvements", "action": "created", "body": "# Structuring Data With Serde in Rust\n\n## Introduction\n\nThis post details new upgrades in the Rust MongoDB Driver and BSON library to improve our integration with Serde. In the Rust Quick Start blog post, we discussed the trickiness of working with BSON, which has a dynamic schema, in Rust, which uses a static type system. The MongoDB Rust driver and BSON library use Serde to make the conversion between BSON and Rust structs and enums easier. In the 1.2.0 releases of these two libraries, we've included new Serde integration to make working directly with your own Rust data types more seamless and user-friendly.\n\n## Prerequisites\n\nThis post assumes that you have a recent version of the Rust toolchain installed (v1.44+), and that you're comfortable with Rust syntax. It also assumes you're familiar with the Rust Serde library.\n\n## Driver Changes\n\nThe 1.2.0 Rust driver release introduces a generic type parameter to the Collection type. The generic parameter represents the type of data you want to insert into and find from your MongoDB collection. 
Any Rust data type that derives/implements the Serde Serialize and Deserialize traits can be used as a type parameter for a Collection.\n\nFor example, I'm working with the following struct that defines the schema of the data in my `students` collection:\n\n``` rust\n#derive(Serialize, Deserialize)]\nstruct Student {\n name: String,\n grade: u32,\n test_scores: Vec,\n}\n```\n\nI can create a generic `Collection` by using the [Database::collection_with_type method and specifying `Student` as the data type I'm working with.\n\n``` rust\nlet students: Collection = db.collection_with_type(\"students\");\n```\n\nPrior to the introduction of the generic `Collection`, the various CRUD `Collection` methods accepted and returned the Document type. This meant I would need to serialize my `Student` structs to `Document`s before inserting them into the students collection. Now, I can insert a `Student` directly into my collection:\n\n``` rust\nlet student = Student {\n name: \"Emily\".to_string(),\n grade: 10,\n test_scores: vec and deserialize_with attributes that allow you to specify functions to use for serialization and deserialization on specific fields and variants.\n\nThe BSON library now includes a set of functions that implement common strategies for custom serialization and deserialization when working with BSON. You can use these functions by importing them from the `serde_helpers` module in the `bson-rust` crate and using the `serialize_with` and `deserialize_with` attributes. A few of these functions are detailed below.\n\nSome users prefer to represent the object ID field in their data with a hexidecimal string rather than the BSON library ObjectId type:\n\n``` rust\n#derive(Serialize, Deserialize)]\n struct Item {\n oid: String,\n // rest of fields\n}\n```\n\nWe've introduced a method for serializing a hex string into an `ObjectId` in the `serde_helpers` module called `serialize_hex_string_as_object_id`. I can annotate my `oid` field with this function using `serialize_with`:\n\n``` rust\n#[derive(Serialize, Deserialize)]\nstruct Item {\n #[serde(serialize_with = \"serialize_hex_string_as_object_id\")]\n oid: String,\n // rest of fields\n}\n```\n\nNow, if I serialize an instance of the `Item` struct into BSON, the `oid` field will be represented by an `ObjectId` rather than a `string`.\n\nWe've also introduced modules that take care of both serialization and deserialization. For instance, I might want to represent binary data using the [Uuid type in the Rust uuid crate:\n\n``` rust\n#derive(Serialize, Deserialize)]\nstruct Item {\n uuid: Uuid,\n // rest of fields\n}\n```\n\nSince BSON doesn't have a specific UUID type, I'll need to convert this data into binary if I want to serialize into BSON. I'll also want to convert back to Uuid when deserializing from BSON.\u00a0The `uuid_as_binary` module in the `serde_helpers` module can take care of both of these conversions. I'll add the following attribute to use this module:\n\n``` rust\n#[derive(Serialize, Deserialize)]\nstruct Item {\n #[serde(with = \"uuid_as_binary\")]\n uuid: Uuid,\n // rest of fields\n}\n```\n\nNow, I can work directly with the Uuid type without needing to worry about how to convert it to and from BSON!\n\nThe `serde_helpers` module introduces functions for several other common strategies; you can check out the documentation [here.\n\n### Unsigned Integers\n\nThe BSON specification defines two integer types: a signed 32 bit integer and a signed 64 bit integer. 
This can prevent challenges when you attempt to insert data with unsigned integers into your collections.\n\nMy `Student` struct from the previous example contains unsigned integers in the `grade `and `test_score` fields. Previous versions of the BSON library would return an error if I attempted to serialize an instance of this struct into `Document`, since there isn't always a clear mapping between unsigned and signed integer types. However, many unsigned integers can fit into signed types! For example, I might want to create the following student:\n\n``` rust\nlet student = Student {\n name: \"Alyson\".to_string(),\n grade: 11,\n test_scores: vec. For more details on working with MongoDB in Rust, you can check out the documentation for the Rust driver and BSON library. We also happily accept contributions in the form of Github pull requests - please see the section in our README for info on how to run our tests.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n", "format": "md", "metadata": {"tags": ["Rust", "MongoDB"], "pageDescription": "New upgrades in the Rust MongoDB driver and BSON library improve integration with Serde.", "contentType": "Article"}, "title": "Structuring Data With Serde in Rust", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/data-api-postman", "action": "created", "body": "# Accessing Atlas Data in Postman with the Data API\n\n> This tutorial discusses the preview version of the Atlas Data API which is now generally available with more features and functionality. Learn more about the GA version here.\n\nMongoDB's new Data API is a great way to access your MongoDB Atlas data using a REST-like interface. When enabled, the API creates a series of serverless endpoints that can be accessed without using native drivers. This API can be helpful when you need to access your data from an application that doesn't use those drivers, such as a bash script, a Google Sheet document, or even another database.\n\nTo explore the new MongoDB Data API, you can use the public Postman workspace provided by the MongoDB developer relations team. \n\nIn this article, we will show you how to use Postman to read and write to your MongoDB Atlas cluster.\n\n## Getting started\nYou will need to set up your Altas cluster and fork the Postman collection to start using it. Here are the detailed instructions if you need them.\n### Set up your MongoDB Atlas cluster\nThe first step to using the Data API is to create your own MongoDB Atlas cluster. If you don't have a cluster available already, you can get one for free. Follow the instructions from the documentation for the detailed directions on setting up your MongoDB Atlas instance.\n\n### Enable the Data API\nEnabling the Data API on your MongoDB Atlas data collections is done with a few clicks. 
Once you have a cluster up and running, you can enable the Data API by following these instructions.\n### Fork the Postman collection\nYou can use the button below to open the fork in your Postman workspace or follow the instructions provided in this section.\n\n![Run in Postman](https://god.gw.postman.com/run-collection/17898583-25682080-e247-4d25-8e5c-1798461c7db4?action=collection%2Ffork&collection-url=entityId%3D17898583-25682080-e247-4d25-8e5c-1798461c7db4%26entityType%3Dcollection%26workspaceId%3D8355a86e-dec2-425c-9db0-cb5e0c3cec02)\n\nFrom the public MongoDB workspace on Postman, you will find two collections. The second one from the list, the _MongoDB Data API_, is the one you are interested in. Click on the three dots next to the collection name and select _Create a fork_ from the popup menu.\n\nThen follow the instructions on the screen to add this collection to your workspace. By forking this collection, you will be able to pull the changes from the official collection as the API evolves.\n\n### Fill in the required variables\nYou will now need to configure your Postman collections to be ready to use your MongoDB collection. Start by opening the _Variables_ tab in the Postman collection.\n\nYou will need to fill in the values for each variable. If you don't want the variables to be saved in your collection, use the _Current value_ column. If you're going to reuse those same values next time you log in, use the _Initial value_ column.\n\nFor the `URL_ENDPOINT` variable, go to the Data API screen on your Atlas cluster. The URL endpoint should be right there at the top. Click on the _Copy_ button and paste the value in Postman.\n\nNext, for the `API_KEY`, click on _Create API Key_. This will open up a modal window. Give your key a unique name and click on _Generate Key_. Again, click on the _Copy_ button and paste it into Postman.\n\nNow fill in the `CLUSTER_NAME` with the name of your cluster. If you've used the default values when creating the cluster, it should be *Cluster0*. For `DATABASE` and `COLLECTION`, you can use an existing database if you have one ready. If the database and collection you specify do not exist, they will be created upon inserting the first document. \n\nOnce you've filled in those variables, click on _Save_ to persist your data in Postman.\n\n## Using the Data API\nYou are now ready to use the Data API from your Postman collection. \n\n### Insert a document\nStart with the first request in the collection, the one called \"Insert Document.\" \n\nStart by selecting the request from the left menu. If you click on the _Body_ tab, you will see what will be sent to the Data API.\n\n```json\n{\n \"dataSource\": \"{{CLUSTER_NAME}}\",\n \"database\": \"{{DATABASE}}\",\n \"collection\": \"{{COLLECTION}}\",\n \"document\": {\n \"name\": \"John Sample\",\n \"age\": 42\n }\n }\n```\n\nHere, you can see that we are using the workspace variables for the cluster, database, and collection names. The `document` property contains the document we want to insert into the collection.\n\nNow hit the blue _Send_ button to trigger the request. In the bottom part of the screen, you will see the response from the server. You should see something similar to:\n\n```json\n{\"insertedId\":\"61e07acf63093e54f3c6098c\"}\n```\n\nThis `insertedId` is the _id_ value of the newly created document. If you go to the Atlas UI, you will see the newly created document in the collection in the data explorer. 
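Postman is just one way to call these endpoints. Because the Data API is plain HTTPS, the same `insertOne` request can be issued from any environment that can make an HTTP call. Below is a minimal sketch using `fetch` (assuming a runtime with a global `fetch`, such as Node.js 18+ or Deno, run as an ES module); the `<APP_ID>` and `<API_KEY>` placeholders, along with the database and collection names, are illustrative and should be replaced with your own values.

```javascript
// Minimal sketch: the same insertOne call made without Postman.
// Replace <APP_ID> and <API_KEY>, plus the cluster, database, and
// collection names, with your own values.
const response = await fetch(
  "https://data.mongodb-api.com/app/<APP_ID>/endpoint/data/beta/action/insertOne",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": "<API_KEY>",
    },
    body: JSON.stringify({
      dataSource: "Cluster0",
      database: "my_database",
      collection: "my_collection",
      document: { name: "John Sample", age: 42 },
    }),
  }
);

console.log(await response.json()); // e.g. {"insertedId":"61e07acf63093e54f3c6098c"}
```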
Since you already have access to the Data API, why not use the API to see the inserted value?\n\n### Find a document\nSelect the following request in the list, the one called \"Find Document.\" Again, you can look at the body of the request by selecting the matching tab. In addition to the cluster, database, and collection names, you will see a `filter` property.\n\n```json\n{\n \"dataSource\": \"{{CLUSTER_NAME}}\",\n \"database\": \"{{DATABASE}}\",\n \"collection\": \"{{COLLECTION}}\",\n \"filter\": { \"name\": \"John Sample\" }\n }\n```\n\nThe filter is the criteria that will be used for the query. In this case, you are searching for a person named \"John Sample.\"\n\nClick the Send button again to trigger the request. This time, you should see the document itself.\n\n```json\n{\"document\":{\"_id\":\"61e07acf63093e54f3c6098c\",\"name\":\"John Sample\",\"age\":42}}\n```\n\nYou can use any MongoDB query operators to filter the records you want. For example, if you wanted the first document for a person older than 40, you could use the $gt operator.\n\n```json\n{\n \"dataSource\": \"{{CLUSTER_NAME}}\",\n \"database\": \"{{DATABASE}}\",\n \"collection\": \"{{COLLECTION}}\",\n \"filter\": { \"age\": {\"$gt\": 40} }\n}\n```\n \nThis last query should return you the same document again.\n### Update a document\nSay you made a typo when you entered John's information. He is not 42 years old, but rather 24. You can use the Data API to perform an update. Select the \"Update Document\" request from the list on the left, and click on the _Body_ tab. You will see the body for an update request. \n\n```json\n{\n \"dataSource\": \"{{CLUSTER_NAME}}\",\n \"database\": \"{{DATABASE}}\",\n \"collection\": \"{{COLLECTION}}\",\n \"filter\": { \"name\": \"John Sample\" },\n \"update\": { \"$set\": { \"age\": 24 } }\n }\n```\n\nIn this case, you can see a `filter` to find a document for a person with the name John Sample. The `update` field specifies what to update. You can use any update operator here. We've used `$set` for this specific example to change the value of the age field to `24`. Running this query should give you the following result.\n\n```json\n{\"matchedCount\":1,\"modifiedCount\":1}\n```\n\nThis response tells us that the operation succeeded and that one document has been modified. If you go back to the \"Find Document\" request and run it for a person older than 40 again, this time, you should get the following response.\n\n```json\n{\"document\":null}\n```\n\nThe `null` value is returned because no items match the criteria passed in the `filter` field.\n\n### Delete a document\nThe process to delete a document is very similar. Select the \"Delete Document\" request from the left navigation bar, and click on the _Body_ tab to see the request's body.\n\n```json\n{\n \"dataSource\": \"{{CLUSTER_NAME}}\",\n \"database\": \"{{DATABASE}}\",\n \"collection\": \"{{COLLECTION}}\",\n \"filter\": { \"name\": \"John Sample\" }\n }\n```\n\nJust as in the \"Find Document\" endpoint, there is a filter field to select the document to delete. If you click on Send, this request will delete the person with the name \"John Sample\" from the collection. The response from the server is:\n\n```json\n{\"deletedCount\":1}\n```\n\nSo you can see how many matching records were deleted from the database.\n\n### Operations on multiple documents\nSo far, we have done each operation on single documents. The endpoints `/insertOne`, `/findOne`, `/updateOne`, and `/deleteOne` were used for that purpose. 
Each endpoint has a matching endpoint to perform operations on multiple documents in your collection.\n\nYou can find examples, along with the usage instructions for each endpoint, in the Postman collection. \n\nSome of those endpoints can be very helpful. The `/find` endpoint can return all the documents in a collection, which can be helpful for importing data into another database. You can also use the `/insertMany` endpoint to import large chunks of data into your collections.\n\nHowever, use extreme care with `/updateMany` and `/deleteMany` since a small error could potentially destroy all the data in your collection.\n\n### Aggregation Pipelines\nOne of the most powerful features of MongoDB is the ability to create aggregation pipelines. These pipelines let you create complex queries using an array of JSON objects. You can also perform those queries on your collection with the Data API.\n\nIn the left menu, pick the \"Run Aggregation Pipeline\" item. You can use this request for running those pipelines. In the _Body_ tab, you should see the following JSON object.\n\n```json\n{\n \"dataSource\": \"{{CLUSTER_NAME}}\",\n \"database\": \"{{DATABASE}}\",\n \"collection\": \"{{COLLECTION}}\",\n \"pipeline\": \n {\n \"$sort\": { \"age\": 1 }\n },\n {\n \"$limit\": 1\n }\n ]\n }\n```\n\nHere, we have a pipeline that will take all of the objects in the collection, sort them by ascending age using the `$sort` stage, and only return the first return using the `$limit` stage. This pipeline will return the youngest person in the collection. \n\nIf you want to test it out, you can first run the \"Insert Multiple Documents\" request to populate the collection with multiple records.\n\n## Summary\nThere you have it! A fast and easy way to test out the Data API or explore your MongoDB Atlas data using Postman. If you want to learn more about the Data API, check out the [Atlas Data API Introduction blog post. If you are more interested in automating operations on your Atlas cluster, there is another API called the Management API. You can learn more about the latter on the Automate Automation on MongoDB Atlas blog post.\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Postman API"], "pageDescription": "MongoDB's new Data API is a great way to access your MongoDB Atlas data using a REST-like interface. In this article, we will show you how to use Postman to read and write to your MongoDB Atlas cluster.", "contentType": "Tutorial"}, "title": "Accessing Atlas Data in Postman with the Data API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/window-functions-and-time-series", "action": "created", "body": "# Window Functions & Time Series Collections\n\nWindow functions and time series collections are both features that were added to MongoDB 5.0. Window functions allow you to run a window across a sorted set of documents, producing calculations over each step of the window, like rolling average or correlation scores. Time-series collections\u00a0*dramatically* reduce the storage cost and increase the performance of MongoDB when working with time-series data. Window functions can be run on\u00a0*any* type of collection in MongoDB, not just time-series collections, but the two go together like ... two things that go together really well. 
I'm not a fan of peanut butter and jelly sandwiches, but you get the idea!\n\nIn this article, I'll help you get set up with a data project I've created, and then I'll show you how to run some window functions across the data. These kinds of operations were possible in earlier versions of MongoDB, but window functions, with the `$setWindowFields` stage, make these operations relatively straightforward.\n\n# Prerequisites\n\nThis post assumes you already know the fundamentals of time series collections, and it may also be helpful to understand how to optimize your time series collections.\n\nYou'll also need the following software installed on your development machine to follow along with the code in the sample project:\n\n* Just\n* Mongo Shell \\(mongosh\\)\n* Mongoimport\n\nOnce you have your time series collections correctly set up, and you're filling them with lots of time series data, you'll be ready to start analyzing the data you're collecting. Because Time Series collections are all about, well, time, you're probably going to run *temporal* operations on the collection to get the most recent or oldest data in the collection. You will also probably want to run calculations across measurements taken over time. That's where MongoDB's new window functions are especially useful.\n\nTemporal operators and window functions can be used with *any* type of collection, but they're especially useful with time series data, and time series collections will be increasingly optimized for use with these kinds of operations.\n\n# Getting The Sample Data\n\nI found some stock exchange data on Kaggle, and I thought it might be fun to analyse it. I used version 2 of the dataset.\n\nI've written some scripts to automate the process of creating a time series collection and importing the data into the collection. I've also automated running some of the operations described below on the data, so you can see the results. You can find the scripts on GitHub, along with information on how to run them if you want to do that while you're following along with this blog post.\n\n# Getting Set Up With The Sample Project\n\nAt the time of writing, time series collections have only just been released with the release of MongoDB 5.0. As such, integration with the Aggregation tab of the Atlas Data Explorer interface isn't complete, and neither is integration with MongoDB Charts.\n\nIn order to see the results of running window functions and temporal operations on a time series collection, I've created some sample JavaScript code for running aggregations on a collection, and exported them to a new collection using $merge. This is the technique for creating materialized views in MongoDB.\n\nI've glued all the scripts together using a task runner called Just. It's a bit like Make, if you've used that, but easier to install and use. You don't have to use it, but it has some neat features like reading config from a dotenv file automatically. I highly recommend you try it out!\n\nFirst create a file called \".env\", and add a configuration variable called `MDB_URI`, like this:\n\n```\nMDB_URI=\"mongodb+srv://USERNAME:PASSWORD@YOURCLUSTER.mongodb.net/DATABASE?retryWrites=true&w=majority\"\n```\n\nYour URI and the credentials in it will be different, and you can get it from the Atlas user interface, by logging in to Atlas and clicking on the \"Connect\" button next to your cluster details. 
Make sure you've spun up a MongoDB 5.0 cluster, or higher.\n\nOnce you've saved the .env file, open your command-line to the correct directory and run `just connect` to test the configuration - it'll instruct `mongosh`\u00a0to open up an interactive shell connected to your cluster.\n\nYou can run `db.ping()`\u00a0just to check that everything's okay, and then type exit followed by the \"Enter\" key to quit mongosh.\n\n# Create Your Time Series Collection\n\nYou can run `just init` to create the collection, but if you're not using Just, then the command to run inside mongosh to create your collection is:\n\n```\n// From init_database.js\ndb.createCollection(\"stock_exchange_data\", {\n\u00a0\u00a0\u00a0\u00a0timeseries: {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0timeField: \"ts\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0metaField: \"source\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0granularity: \"hours\"\n\u00a0\u00a0\u00a0\u00a0}\n});\n```\n\nThis will create a time-series collection called \"stock\\_exchange\\_data\", with a time field of \"ts\", a metaField of \"source\" (specifying the stock exchange each set of measurements is relevant to), and because there is one record *per source* per day, I've chosen the closest granularity, which is \"hours\".\n\n# Import The Sample Dataset\n\nIf you run `just import` it'll import the data into the collection you just created, via the following CLI command:\n\n```\nmongoimport --uri $MDB_URI indexProcessed.json --collection stock_exchange_data\n```\n\n> **Note:** When you're importing data into a time-series collection, it's very important that your data is in chronological order, otherwise the import will be very slow!\n\nA single sample document looks like this:\n\n```\n{\n \"source\": {\n \"Region\": \"Hong Kong\",\n \"Exchange\": \"Hong Kong Stock Exchange\",\n \"Index\": \"HSI\",\n \"Currency\": \"HKD\"\n },\n \"ts\": {\n \"$date\": \"1986-12-31T00:00:00+00:00\"\n },\n \"open\": {\n \"$numberDecimal\": \"2568.300049\"\n },\n \"high\": {\n \"$numberDecimal\": \"2568.300049\"\n },\n \"low\": {\n \"$numberDecimal\": \"2568.300049\"\n },\n \"close\": {\n \"$numberDecimal\": \"2568.300049\"\n },\n \"adjustedClose\": {\n \"$numberDecimal\": \"2568.300049\"\n },\n \"volume\": {\n \"$numberDecimal\": \"0.0\"\n },\n \"closeUSD\": {\n \"$numberDecimal\": \"333.87900637\"\n }\n}\n```\n\nIn a way that matches the collection's time-series parameters, \"ts\" contains the timestamp for the measurements in the document, and \"source\" contains metadata describing the source of the measurements - in this case, the Hong Kong Stock Exchange.\n\nYou can read about the meaning of each of the measurements in the documentation for the dataset. I'll mainly be working with \"closeUSD\", which is the closing value for the exchange, in dollars at the end of the specified day.\n\n# Window Functions\n\nWindow functions allow you to apply a calculation to values in a series of ordered documents, either over a specified window of time, or a specified number of documents.\n\nI want to visualise the results of these operations in Atlas Charts. You can attach an Aggregation Pipeline to a Charts data source, so you can use\u00a0`$setWindowFunction`\u00a0directly in data source aggregations. In this case, though, I'll show you how to run the window functions with a `$merge` stage, writing to a new collection, and then the new collection can be used as a Charts data source. 
This technique of writing pre-calculated results to a new collection is often referred to as a *materialized view*, or colloquially with time-series data, a *rollup*.\n\nFirst, I charted the \"stock\\_exchange\\_data\" in MongoDB Charts, with \"ts\" (the timestamp) on the x-axis, and \"closeUSD\" on the y axis, separated into series by \"source.exchange.\" I've specifically filtered the data to the year of 2008, so I could investigate the stock market values during the credit crunch at the end of the year.\n\nYou'll notice that the data above is quite spiky. A common way to smooth out spiky data is by running a rolling average on the data, where each day's data is averaged with the previous 5 days, for example.\n\nThe following aggregation pipeline will create a smoothed chart:\n\n```\n{\n $setWindowFields: {\n partitionBy: \"$source\",\n sortBy: { ts: 1 },\n output: {\n \"window.rollingCloseUSD\": {\n $avg: \"$closeUSD\",\n window: {\n documents: [-5, 0]\n }\n }\n }\n }\n},\n{\n $merge: {\n into: \"stock_exchange_data_processed\",\n whenMatched: \"replace\"\n }\n}]\n```\n\nThe first step applies the $avg window function to the closeUSD value. The data is partitioned by \"$source\" because the different stock exchanges are discrete series, and should be averaged separately. I've chosen to create a window over 6 documents at a time, rather than 6 days, because there are no values over the weekend, and this means each value will be created as an average of an equal number of documents, whereas otherwise the first day of each week would only include values from the last 3 days from the previous week.\n\nThe second $merge stage stores the result of the aggregation in the \"stock\\_exchange\\_data\\_processed\" collection. Each document will be identical to the equivalent document in the \"stock\\_exchange\\_data\" collection, but with an extra field, \"window.rollingCloseUSD\".\n\n![\n\nPlotting this data shows a much smoother chart, and the drop in various exchanges in September can more clearly be seen.\n\nIt's possible to run more than one window function over the same collection in a single $setWindowFields stage, providing they all operate on the same sequence of documents (although the window specification can be different).\n\nThe file window\\_functions.js contains the following stage, that executes two window functions on the collection:\n\n```\n{\n $setWindowFields: {\n partitionBy: \"$source\",\n sortBy: { ts: 1 },\n output: {\n \"window.rollingCloseUSD\": {\n $avg: \"$closeUSD\",\n window: {\n documents: -5, 0]\n }\n },\n \"window.dailyDifference\": {\n $derivative: {\n input: \"$closeUSD\",\n unit: \"day\"\n },\n window: {\n documents: [-1, 0]\n }\n },\n }\n }\n}\n```\n\nNotice that although the sort order of the collection must be shared across both window functions, they can specify the window individually - the $avg function operates on a window of 6 documents, whereas the $derivative executes over pairs of documents.\n\nThe derivative plot, filtered for just the New York Stock Exchange is below:\n\n![\n\nThis shows the daily difference in the market value at the end of each day. I'm going to admit that I've cheated slightly here, to demonstrate the `$derivative` window function here. It would probably have been more appropriate to just subtract `$first` from `$last`. 
But that's a blog post for a different day.\n\nThe chart above is quite spiky, so I added another window function in the next stage, to average out the values over 10 days:\n\nThose two big troughs at the end of the year really highlight when the credit crunch properly started to hit. Remember that just because you've calculated a value with a window function in one stage, there's nothing to stop you feeding that value into a later `$setWindowFields` stage, like I have here.\n\n# Conclusion\n\nWindow functions are a super-powerful new feature of MongoDB, and I know I'm going to be using them with lots of different types of data - but especially with time-series data. I hope you found this article useful!\n\nFor more on time-series, our official documentation is comprehensive and very readable. For more on window functions, there's a good post by Guy Harrison about analyzing covid data, and as always, Paul Done's book Practical MongoDB Aggregations has some great content on these topics.\n\nIf you're interested in learning more about how time-series data is stored under-the-hood, check out my colleague John's very accessible blog post.\n\nAnd if you have any questions, or you just want to show us a cool thing you built, definitely check out MongoDB Community!\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Let's load some data into a time series collection and then run some window functions over it, to calculate things like moving average, derivatives, and others.", "contentType": "Article"}, "title": "Window Functions & Time Series Collections", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/manage-game-user-profiles-mongodb-phaser-javascript", "action": "created", "body": "\n \n \n \n ", "format": "md", "metadata": {"tags": ["JavaScript", "Atlas"], "pageDescription": "Learn how to work with user profiles in a Phaser game with JavaScript and MongoDB.", "contentType": "Tutorial"}, "title": "Manage Game User Profiles with MongoDB, Phaser, and JavaScript", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-ios15-swiftui", "action": "created", "body": "# Most Useful iOS 15 SwiftUI Features\n\n## Introduction\n\nI'm all-in on using SwiftUI to build iOS apps. I find it so much simpler than wrangling with storyboards and UIKit. Unfortunately, there are still occasions when SwiftUI doesn't let you do what you need\u2014forcing you to break out into UIKit.\n\nThat's why I always focus on Apple's SwiftUI enhancements at each year's WWDC. And, each year I'm rewarded with a few more enhancements that make SwiftUI more powerful and easy to work with. For example, iOS14 made it much easier to work with Apple Maps.\n\nWWDC 2021 was no exception, introducing a raft of SwiftUI enhancements that were coming in iOS 15/ SwiftUI 3 / Xcode 13. As iOS 15 has now been released, it feels like a good time to cover the features that I've found the most useful.\n\nI've revisited some of my existing iOS apps to see how I could exploit the new iOS 15 SwiftUI features to improve the user experience and/or simplify my code base. This article steps through the features I found most interesting/useful, and how I tested them out on my apps. 
These are the apps/branches that I worked with:\n\n- RCurrency\n- RChat\n- LiveTutorial2021\n- task-tracker-swiftui\n\n## Prerequisites\n\n- Xcode 13\n- iOS 15\n- Realm-Cocoa (varies by app, but 10.13.0+ is safe for them all)\n\n## Lists\n\nSwiftUI `List`s are pretty critical to data-based apps. I use `List`s in almost every iOS app I build, typically to represent objects stored in Realm. That's why I always go there first when seeing what's new.\n\n### Custom Swipe Options\n\nWe've all used mobile apps where you swipe an item to the left for one action, and to the right for another. SwiftUI had a glaring omission\u2014the only supported action was to swipe left to delete an item.\n\nThis was a massive pain.\n\nThis limitation meant that my task-tracker-swiftui app had a cumbersome UI. You had to click on a task to expose a sheet that let you click on your preferred action.\n\nWith iOS 15, I can replace that popup sheet with swipe actions:\n\nThe swipe actions are implemented in `TasksView`:\n\n```swift\nList {\n ForEach(tasks) { task in\n TaskView(task: task)\n .swipeActions(edge: .leading) {\n if task.statusEnum == .Open || task.statusEnum == .InProgress {\n CompleteButton(task: task)\n }\n if task.statusEnum == .Open || task.statusEnum == .Complete {\n InProgressButton(task: task)\n }\n if task.statusEnum == .InProgress || task.statusEnum == .Complete {\n NotStartedButton(task: task)\n }\n }\n .swipeActions(edge: .trailing) {\n Button(role: .destructive, action: { $tasks.remove(task) }) {\n Label(\"Delete\", systemImage: \"trash\")\n }\n }\n }\n}\n```\n\nThe role of the delete button is set to `.destructive` which automatically sets the color to red.\n\nFor the other actions, I created custom buttons. For example, this is the code for `CompleteButton`:\n\n```swift\nstruct CompleteButton: View {\n @ObservedRealmObject var task: Task\n\n var body: some View {\n Button(action: { $task.statusEnum.wrappedValue = .Complete }) {\n Label(\"Complete\", systemImage: \"checkmark\")\n }\n .tint(.green)\n }\n}\n```\n\n### Searchable Lists\n\nWhen you're presented with a long list of options, it helps the user if you offer a way to filter the results.\n\nRCurrency lets the user choose between 150 different currencies. Forcing the user to scroll through the whole list wouldn't make for a good experience. A search bar lets them quickly jump to the items they care about:\n\nThe selection of the currency is implemented in the `SymbolPickerView` view.\n\nThe view includes a state variable to store the `searchText` (the characters that the user has typed) and a `searchResults` computed value that uses it to filter the full list of symbols:\n\n```swift\nstruct SymbolPickerView: View {\n ...\n @State private var searchText = \"\"\n ...\n var searchResults: Dictionary {\n if searchText.isEmpty {\n return Symbols.data.symbols\n } else {\n return Symbols.data.symbols.filter {\n $0.key.contains(searchText.uppercased()) || $0.value.contains(searchText)}\n }\n }\n}\n```\n\nThe `List` then loops over those `searchResults`. 
We add the `.searchable` modifier to add the search bar, and bind it to the `searchText` state variable:\n\n```swift\nList {\n ForEach(searchResults.sorted(by: <), id: \\.key) { symbol in\n ...\n }\n}\n.searchable(text: $searchText)\n```\n\nThis is the full view:\n\n```swift\nstruct SymbolPickerView: View {\n @Environment(\\.presentationMode) var presentationMode\n\n var action: (String) -> Void\n let existingSymbols: String]\n\n @State private var searchText = \"\"\n\n var body: some View {\n List {\n ForEach(searchResults.sorted(by: <), id: \\.key) { symbol in\n Button(action: {\n pickedSymbol(symbol.key)\n }) {\n HStack {\n Image(symbol.key.lowercased())\n Text(\"\\(symbol.key): \\(symbol.value)\")\n }\n .foregroundColor(existingSymbols.contains(symbol.key) ? .secondary : .primary)\n }\n .disabled(existingSymbols.contains(symbol.key))\n }\n }\n .searchable(text: $searchText)\n .navigationBarTitle(\"Pick Currency\", displayMode: .inline)\n }\n\n private func pickedSymbol(_ symbol: String) {\n action(symbol)\n presentationMode.wrappedValue.dismiss()\n }\n\n var searchResults: Dictionary {\n if searchText.isEmpty {\n return Symbols.data.symbols\n } else {\n return Symbols.data.symbols.filter {\n $0.key.contains(searchText.uppercased()) || $0.value.contains(searchText)}\n }\n }\n}\n```\n\n## Pull to Refresh\n\nWe've all used this feature in iOS apps. You're impatiently waiting on an important email, and so you drag your thumb down the page to get the app to check the server.\n\nThis feature isn't always helpful for apps that use Realm and Atlas Device Sync. When the Atlas cloud data changes, the local realm is updated, and your SwiftUI view automatically refreshes to show the new data.\n\nHowever, the feature **is** useful for the RCurrency app. I can use it to refresh all of the locally-stored exchange rates with fresh data from the API:\n\n![Animation showing currencies being refreshed when the screen is dragged dowm\n\nWe allow the user to trigger the refresh by adding a `.refreshable` modifier and action (`refreshAll`) to the list of currencies in `CurrencyListContainerView`:\n\n```swift\nList {\n ForEach(userSymbols.symbols, id: \\.self) { symbol in\n CurrencyRowContainerView(baseSymbol: userSymbols.baseSymbol,\n baseAmount: $baseAmount,\n symbol: symbol,\n refreshNeeded: refreshNeeded)\n .listRowSeparator(.hidden)\n }\n .onDelete(perform: deleteSymbol)\n}\n.refreshable{ refreshAll() }\n```\n\nIn that code snippet, you can see that I added the `.listRowSeparator(.hidden)` modifier to the `List`. This is another iOS 15 feature that hides the line that would otherwise be displayed between each `List` item. Not a big feature, but every little bit helps in letting us use native SwiftUI to get the exact design we want.\n\n## Text\n### Markdown\n\nI'm a big fan of Markdown. Markdown lets you write formatted text (including tables, links, and images) without taking your hands off the keyboard. I added this post to our CMS in markdown.\n\niOS 15 allows you to render markdown text within a `Text` view. If you pass a literal link to a `Text` view, then it's automatically rendered correctly:\n\n```swift\nstruct MarkDownTest: View {\n var body: some View {\n Text(\"Let's see some **bold**, *italics* and some ***bold italic text***. ~~Strike that~~. 
We can even include a link.\")\n }\n}\n```\n\nBut, it doesn't work out of the box for string constants or variables (e.g., data read from Realm):\n\n```swift\nstruct MarkDownTest: View {\n let myString = \"Let's see some **bold**, *italics* and some ***bold italic text***. ~~Strike that~~. We can even include a link.\"\n\n var body: some View {\n Text(myString)\n }\n}\n```\n\nThe issue is that the version of `Text` that renders markdown expects to be passed an `AttributedString`. I created this simple `Markdown` view to handle this for us:\n\n```swift\nstruct MarkDown: View {\n let text: String\n\n @State private var formattedText: AttributedString?\n\n var body: some View {\n Group {\n if let formattedText = formattedText {\n Text(formattedText)\n } else {\n Text(text)\n }\n }\n .onAppear(perform: formatText)\n }\n\n private func formatText() {\n do {\n try formattedText = AttributedString(markdown: text)\n } catch {\n print(\"Couldn't convert this from markdown: \\(text)\")\n }\n }\n}\n```\n\nI updated the `ChatBubbleView` in RChat to use the `Markdown` view:\n\n```swift\nif chatMessage.text != \"\" {\n MarkDown(text: chatMessage.text)\n .padding(Dimensions.padding)\n}\n```\n\nRChat now supports markdown in user messages:\n\n### Dates\n\nWe all know that working with dates can be a pain. At least in iOS 15 we get some nice new functionality to control how we display dates and times. We use the new `Date.formatted` syntax. \n\nIn RChat, I want the date/time information included in a chat bubble to depend on how recently the message was sent. If a message was sent less than a minute ago, then I care about the time to the nearest second. If it were sent a day ago, then I want to see the day of the week plus the hour and minutes. And so on.\n\nI created a `TextDate` view to perform this conditional formatting:\n\n```swift\nstruct TextDate: View {\n let date: Date\n\n private var isLessThanOneMinute: Bool { date.timeIntervalSinceNow > -60 }\n private var isLessThanOneDay: Bool { date.timeIntervalSinceNow > -60 * 60 * 24 }\n private var isLessThanOneWeek: Bool { date.timeIntervalSinceNow > -60 * 60 * 24 * 7}\n private var isLessThanOneYear: Bool { date.timeIntervalSinceNow > -60 * 60 * 24 * 365}\n\n var body: some View {\n if isLessThanOneMinute {\n Text(date.formatted(.dateTime.hour().minute().second()))\n } else {\n if isLessThanOneDay {\n Text(date.formatted(.dateTime.hour().minute()))\n } else {\n if isLessThanOneWeek {\n Text(date.formatted(.dateTime.weekday(.wide).hour().minute()))\n } else {\n if isLessThanOneYear {\n Text(date.formatted(.dateTime.month().day()))\n } else {\n Text(date.formatted(.dateTime.year().month().day()))\n }\n }\n }\n }\n }\n}\n```\n\nThis preview code lets me test it's working in the Xcode Canvas preview:\n\n```swift\nstruct TextDate_Previews: PreviewProvider {\n static var previews: some View {\n VStack {\n TextDate(date: Date(timeIntervalSinceNow: -60 * 60 * 24 * 365)) // 1 year ago\n TextDate(date: Date(timeIntervalSinceNow: -60 * 60 * 24 * 7)) // 1 week ago\n TextDate(date: Date(timeIntervalSinceNow: -60 * 60 * 24)) // 1 day ago\n TextDate(date: Date(timeIntervalSinceNow: -60 * 60)) // 1 hour ago\n TextDate(date: Date(timeIntervalSinceNow: -60)) // 1 minute ago\n TextDate(date: Date()) // Now\n }\n }\n}\n```\n\nWe can then use `TextDate` in RChat's `ChatBubbleView` to add context-sensitive date and time information:\n\n```swift\nTextDate(date: chatMessage.timestamp)\n .font(.caption)\n```\n\n## Keyboards\n\nCustomizing keyboards and form input was a real 
pain in the early days of SwiftUI\u2014take a look at the work we did for the WildAid O-FISH app if you don't believe me. Thankfully, iOS 15 has shown some love in this area. There are a couple of features that I could see an immediate use for...\n\n### Submit Labels\n\nIt's now trivial to rename the on-screen keyboard's \"return\" key. It sounds trivial, but it can give the user a big hint about what will happen if they press it.\n\nTo rename the return key, add a `.submitLabel`) modifier to the input field. You pass the modifier one of these values:\n\n- `done`\n- `go`\n- `send`\n- `join`\n- `route`\n- `search`\n- `return`\n- `next`\n- `continue`\n\nI decided to use these labels to improve the login flow for the LiveTutorial2021 app. In `LoginView`, I added a `submitLabel` to both the \"email address\" and \"password\" `TextFields`:\n\n```swift\nTextField(\"email address\", text: $email)\n .submitLabel(.next)\nSecureField(\"password\", text: $password)\n .onSubmit(userAction)\n .submitLabel(.go)\n```\n\nNote the `.onSubmit(userAction)` modifier on the password field. If the user taps \"go\" (or hits return on an external keyboard), then the `userAction` function is called. `userAction` either registers or logs in the user, depending on whether \"Register new user\u201d is checked.\n\n### Focus\n\nIt can be tedious to have to click between different fields on a form. iOS 15 makes it simple to automate that shifting focus.\n\nSticking with LiveTutorial2021, I want the \"email address\" field to be selected when the view opens. When the user types their address and hits ~~\"return\"~~ \"next\", focus should move to the \"password\" field. When the user taps \"go,\" the app logs them in.\n\nYou can use the new `FocusState` SwiftUI property wrapper to create variables to represent the placement of focus in the view. It can be a boolean to flag whether the associated field is in focus. In our login view, we have two fields that we need to switch focus between and so we use the `enum` option instead.\n\nIn `LoginView`, I define the `Field` enumeration type to represent whether the username (email address) or password is in focus. I then create the `focussedField` `@FocusState` variable to store the value using the `Field` type:\n\n```swift\nenum Field: Hashable {\n case username\n case password\n}\n\n@FocusState private var focussedField: Field?\n```\n\nI use the `.focussed` modifier to bind `focussedField` to the two fields:\n\n```swift\nTextField(\"email address\", text: $email)\n .focused($focussedField, equals: .username)\n ...\nSecureField(\"password\", text: $password)\n .focused($focussedField, equals: .password)\n ...\n```\n\nIt's a two-way binding. If the user selects the email field, then `focussedField` is set to `.username`. If the code sets `focussedField` to `.password`, then focus switches to the password field.\n\nThis next step feels like a hack, but I've not found a better solution yet. When the view is loaded, the code waits half a second before setting focus to the username field. 
Without the delay, the focus isn't set:\n\n```swift\nVStack(spacing: 16) {\n ...\n}\n.onAppear {\n DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {\n focussedField = .username\n...\n }\n}\n```\n\nThe final step is to shift focus to the password field when the user hits the \"next\" key in the username field:\n\n```swift\nTextField(\"email address\", text: $email)\n .onSubmit { focussedField = .password }\n ...\n```\n\nThis is the complete body from `LoginView`:\n\n```swift\nvar body: some View {\n VStack(spacing: 16) {\n Spacer()\n TextField(\"email address\", text: $email)\n .focused($focussedField, equals: .username)\n .submitLabel(.next)\n .onSubmit { focussedField = .password }\n SecureField(\"password\", text: $password)\n .focused($focussedField, equals: .password)\n .onSubmit(userAction)\n .submitLabel(.go)\n Button(action: { newUser.toggle() }) {\n HStack {\n Image(systemName: newUser ? \"checkmark.square\" : \"square\")\n Text(\"Register new user\")\n Spacer()\n }\n }\n Button(action: userAction) {\n Text(newUser ? \"Register new user\" : \"Log in\")\n }\n .buttonStyle(.borderedProminent)\n .controlSize(.large)\n Spacer()\n }\n .onAppear {\n DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {\n focussedField = .username\n }\n }\n .padding()\n}\n```\n\n## Buttons\n### Formatting\n\nPreviously, I've created custom SwiftUI views to make buttons look like\u2026. buttons.\n\nThings get simpler in iOS 15.\n\nIn `LoginView`, I added two new modifiers to my register/login button:\n\n```swift\nButton(action: userAction) {\n Text(newUser ? \"Register new user\" : \"Log in\")\n}\n.buttonStyle(.borderedProminent)\n.controlSize(.large)\n```\n\nBefore making this change, I experimented with other button styles:\n\n### Confirmation\n\nIt's very easy to accidentally tap the \"Logout\" button, and so I wanted to add this confirmation dialog:\n\nAgain, iOS 15 makes this simple.\n\nThis is the modified version of the `LogoutButton` view:\n\n```swift\nstruct LogoutButton: View {\n ...\n @State private var isConfirming = false\n\n var body: some View {\n Button(\"Logout\") { isConfirming = true }\n .confirmationDialog(\"Are you sure want to logout\",\n isPresented: $isConfirming) {\n Button(action: logout) {\n Text(\"Confirm Logout\")\n }\n Button(\"Cancel\", role: .cancel) {}\n }\n }\n ...\n}\n```\n\nThese are the changes I made:\n\n- Added a new state variable (`isConfirming`)\n- Changed the logout button's action from calling the `logout` function to setting `isConfirming` to `true`\n- Added the `confirmationDialog`-9ibgk) modifier to the button, providing three things:\n - The dialog title (I didn't override the `titleVisibility` option and so the system decides whether this should be shown)\n - A binding to `isConfirming` that controls whether the dialog is shown or not\n - A view containing the contents of the dialog:\n - A button to logout the user\n - A cancel button\n\n## Material\n\nI'm no designer, and this is _blurring_ the edges of what changes I consider worth adding. \n\nThe RChat app may have to wait a moment while the backend MongoDB Atlas App Services application confirms that the user has been authenticated and logged in. I superimpose a progress view while that's happening:\n\nTo make it look a bit more professional, I can update `OpaqueProgressView` to use Material to blur the content that's behind the overlay. 
To get this effect, I update the background modifier for the `VStack`:\n\n```swift\nvar body: some View {\n VStack {\n if let message = message {\n ProgressView(message)\n } else {\n ProgressView()\n }\n }\n .padding(Dimensions.padding)\n .background(.ultraThinMaterial,\n in: RoundedRectangle(cornerRadius: Dimensions.cornerRadius))\n}\n```\n\nThe result looks like this:\n\n## Developer Tools\n\nFinally, there are a couple of enhancements that are helpful during your development phase.\n\n### Landscape Previews\n\nI'm a big fan of Xcode's \"Canvas\" previews. Previews let you see what your view will look like. Previews update in more or less real time as you make code changes. You can even display multiple previews at once for example:\n\n- For different devices: `.previewDevice(PreviewDevice(rawValue: \"iPhone 12 Pro Max\"))`\n- For dark mode: `.preferredColorScheme(.dark)`\n\nA glaring omission was that there was no way to preview landscape mode. That's fixed in iOS 15 with the addition of the `.previewInterfaceOrientation`) modifier. \n\nFor example, this code will show two devices in the preview. The first will be in portrait mode. The second will be in landscape and dark mode:\n\n```swift\nstruct CurrencyRow_Previews: PreviewProvider {\n static var previews: some View {\n Group {\n List {\n CurrencyRowView(value: 3.23, symbol: \"USD\", baseValue: .constant(1.0))\n CurrencyRowView(value: 1.0, symbol: \"GBP\", baseValue: .constant(10.0))\n }\n List {\n CurrencyRowView(value: 3.23, symbol: \"USD\", baseValue: .constant(1.0))\n CurrencyRowView(value: 1.0, symbol: \"GBP\", baseValue: .constant(10.0))\n }\n .preferredColorScheme(.dark)\n .previewInterfaceOrientation(.landscapeLeft)\n }\n }\n}\n```\n\n### Self._printChanges\n\nSwiftUI is very smart at automatically refreshing views when associated state changes. But sometimes, it can be hard to figure out exactly why a view is or isn't being updated.\n\niOS 15 adds a way to print out what pieces of state data have triggered each refresh for a view. Simply call `Self._printChanges()` from the body of your view. For example, I updated `ContentView` for the LiveChat app:\n\n```swift\nstruct ContentView: View {\n @State private var username = \"\"\n\n var body: some View {\n print(Self._printChanges())\n return NavigationView {\n Group {\n if app.currentUser == nil {\n LoginView(username: $username)\n } else {\n ChatRoomsView(username: username)\n }\n }\n .navigationBarTitle(username, displayMode: .inline)\n .navigationBarItems(trailing: app.currentUser != nil ? LogoutButton(username: $username) : nil) }\n }\n}\n```\n\nIf I log in and check the Xcode console, I can see that it's the update to `username` that triggered the refresh (rather than `app.currentUser`):\n\n```swift\nContentView: _username changed.\n```\n\nThere can be a lot of these messages, and so remember to turn them off before going into production.\n\n## Conclusion\n\nSwiftUI is developing at pace. With each iOS release, there is less and less reason to not use it for all/some of your mobile app.\n\nThis post describes how to use some of the iOS 15 SwiftUI features that caught my attention. I focussed on the features that I could see would instantly benefit my most recent mobile apps. In this article, I've shown how those apps could be updated to use these features.\n\nThere are lots of features that I didn't include here. A couple of notable omissions are:\n\n- `AsyncImage` is going to make it far easier to work with images that are stored in the cloud. 
I didn't need it for any of my current apps, but I've no doubt that I'll be using it in a project soon.\n- The `task`/) view modifier is going to have a significant effect on how people run asynchronous code when a view is loaded. I plan to cover this in a future article that takes a more general look at how to handle concurrency with Realm.\n- Adding a toolbar to your keyboards (e.g., to let the user switch between input fields).\n\nIf you have any questions or comments on this post (or anything else Realm-related), then please raise them on our community forum. To keep up with the latest Realm news, follow @realm on Twitter.\n", "format": "md", "metadata": {"tags": ["Realm", "Swift", "iOS"], "pageDescription": "See how to use some of the most useful new iOS 15 SwiftUI features in your mobile apps", "contentType": "Tutorial"}, "title": "Most Useful iOS 15 SwiftUI Features", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/rust/getting-started-deno-mongodb", "action": "created", "body": "# Getting Started with Deno & MongoDB\n\nDeno is a \u201cmodern\u201d runtime for JavaScript and TypeScript that is built in Rust. This makes it very fast! \n\nIf you are familiar with Node.js then you will be right at home with Deno. It is very similar but has some improvements over Node.js. In fact, the creator of Deno, Ryan Dahl, also created Node and Deno is meant to be the successor to Node.js.\n\n> \ud83d\udca1 Fun Fact: Deno is an anagram. Rearrange the letters in Node to spell Deno.\n\nDeno has no package manager, uses ES modules, has first-class `await`, has built-in testing, and is somewhat browser-compatible by providing built-in `fetch` and the global `window` object. \n\nAside from that, it\u2019s also very secure. It\u2019s completely locked down by default and requires you to enable each access method specifically. \n\nThis makes Deno pair nicely with MongoDB since it is also super secure by default. \n\n### Video Version\n\nHere is a video version of this article if you prefer to watch.\n:youtube]{vid=xOgicDUXnrE}\n\n## Prerequisites\n\n- TypeScript \u2014 Deno uses TypeScript by default, so some TypeScript knowledge is needed.\n- Basic MongoDB knowledge\n- Understanding of RESTful APIs\n\n## Getting Deno set up\n\nYou\u2019ll need to install Deno to get started.\n\n- macOS and Linux Shell:\n \n `curl -fsSL https://deno.land/install.sh | sh`\n \n- Windows PowerShell:\n \n `iwr https://deno.land/install.ps1 -useb | iex`\n \n\nFor more options, here are the Deno [installation instructions.\n\n## Setting up middleware and routing\n\nFor this tutorial, we\u2019re going to use Oak, a middleware framework for Deno. This will provide routing for our various app endpoints to perform CRUD operations. \n\nWe\u2019ll start by creating a `server.ts` file and import the Oak `Application` method. \n\n```jsx\nimport { Application } from \"https://deno.land/x/oak/mod.ts\";\nconst app = new Application();\n```\n\n> \ud83d\udca1 If you are familiar with Node.js, you\u2019ll notice that Deno does things a bit differently. Instead of using a `package.json` file and downloading all of the packages into the project directory, Deno uses file paths or URLs to reference module imports. Modules do get downloaded and cached locally, but this is done globally and not per project. 
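One practical consequence of URL-based imports is that you can pin a module to a specific release directly in the import URL, which keeps builds repeatable across machines. This is optional, and the version number below is purely illustrative — check deno.land/x for the current oak release:

```jsx
// Optional: pin oak to a specific release (version shown is illustrative).
import { Application } from "https://deno.land/x/oak@v10.6.0/mod.ts";
```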
This eliminates a lot of the bloat that is inherent from Node.js and its `node_modules` folder.\n\nNext, let\u2019s start up the server.\n\n```jsx\nconst PORT = 3000;\napp.listen({ port: PORT });\nconsole.log(`Server listening on port ${PORT}`);\n```\n\nWe\u2019re going to create our routes in a new file named `routes.ts`. This time, we\u2019ll import the `Router` method from Oak. Then, create a new instance of `Router()` and export it. \n\n```jsx\nimport { Router } from \"https://deno.land/x/oak/mod.ts\";\nconst router = new Router(); // Create router\n\nexport default router;\n```\n\nNow, let\u2019s bring our `router` into our `server.ts` file.\n\n```jsx\nimport { Application } from \"https://deno.land/x/oak/mod.ts\";\nimport router from \"./routes.ts\"; // Import our router\n\nconst PORT = 3000;\nconst app = new Application();\n\napp.use(router.routes()); // Implement our router\napp.use(router.allowedMethods()); // Allow router HTTP methods\n\nconsole.log(`Server listening on port ${PORT}`);\nawait app.listen({ port: PORT });\n```\n\n## Setting up the MongoDB Data API\n\nIn most tutorials, you\u2019ll find that they use the mongo third-party Deno module. For this tutorial, we\u2019ll use the brand new MongoDB Atlas Data API to interact with our MongoDB Atlas database in Deno. The Data API doesn\u2019t require any drivers!\n\nLet\u2019s set up our MongoDB Atlas Data API. You\u2019ll need a MongoDB Atlas account. If you already have one, sign in, or register now.\n\nFrom the MongoDB Atlas Dashboard, click on the Data API option. By default, all MongoDB Atlas clusters have the Data API turned off. Let\u2019s enable it for our cluster. You can turn it back off at any time. \n\nAfter enabling the Data API, we\u2019ll need to create an API Key. You can name your key anything you want. I\u2019ll name mine `data-api-test`. Be sure to copy your API key secret at this point. You won\u2019t see it again!\n\nAlso, take note of your App ID. It can be found in your URL Endpoint for the Data API. \n\nExample: `https://data.mongodb-api.com/app/{APP_ID}/endpoint/data/beta`\n\n## Configuring each route\n\nAt this point, we need to set up each function for each route. These will be responsible for Creating, Reading, Updating, and Deleting (CRUD) documents in our MongoDB database.\n\nLet\u2019s create a new folder called `controllers` and a file within it called `todos.ts`.\n\nNext, we\u2019ll set up our environmental variables to keep our secrets safe. For this, we\u2019ll use a module called dotenv. \n\n```jsx\nimport { config } from \"https://deno.land/x/dotenv/mod.ts\";\nconst { DATA_API_KEY, APP_ID } = config();\n```\n\nHere, we are importing the `config` method from that module and then using it to get our `DATA_API_KEY` and `APP_ID` environmental variables. Those will be pulled from another file that we\u2019ll create in the root of our project called `.env`. Just the extension and no file name.\n\n```\nDATA_API_KEY=your_key_here\nAPP_ID=your_app_id_here\n```\n\nThis is a plain text file that allows you to store secrets that you don\u2019t want to be uploaded to GitHub or shown in your code. To ensure that these don\u2019t get uploaded to GitHub, we\u2019ll create another file in the root of our project called `.gitignore`. Again, just the extension with no name.\n\n```\n.env\n```\n\nIn this file, we\u2019ll simply enter `.env`. This lets git know to ignore this file so that it\u2019s not tracked.\n\nNow, back to the `todos.ts` file. 
We\u2019ll configure some variables that will be used throughout each function.\n\n```jsx\nconst BASE_URI = `https://data.mongodb-api.com/app/${APP_ID}/endpoint/data/beta/action`;\nconst DATA_SOURCE = \"Cluster0\";\nconst DATABASE = \"todo_db\";\nconst COLLECTION = \"todos\";\n\nconst options = {\n method: \"POST\",\n headers: {\n \"Content-Type\": \"application/json\",\n \"api-key\": DATA_API_KEY \n },\n body: \"\"\n};\n```\n\nWe\u2019ll set up our base URI to our MongoDB Data API endpoint. This will utilize our App ID. Then we need to define our data source, database, and collection. These would be specific to your use case. And lastly, we will define our fetch options, passing in our Data API key.\n\n## Create route\n\nNow we can finally start creating our first route function. We\u2019ll call this function `addTodo`. This function will add a new todo item to our database collection.\n\n```jsx\nconst addTodo = async ({\n request,\n response,\n}: {\n request: any;\n response: any;\n}) => {\n try {\n if (!request.hasBody) {\n response.status = 400;\n response.body = {\n success: false,\n msg: \"No Data\",\n };\n } else {\n const body = await request.body();\n const todo = await body.value;\n const URI = `${BASE_URI}/insertOne`;\n const query = {\n collection: COLLECTION,\n database: DATABASE,\n dataSource: DATA_SOURCE,\n document: todo\n };\n options.body = JSON.stringify(query);\n const dataResponse = await fetch(URI, options);\n const { insertedId } = await dataResponse.json();\n \n response.status = 201;\n response.body = {\n success: true,\n data: todo,\n insertedId\n };\n }\n } catch (err) {\n response.body = {\n success: false,\n msg: err.toString(),\n };\n }\n};\n\nexport { addTodo };\n```\n\nThis function will accept a `request` and `response`. If the `request` doesn\u2019t have a `body` it will return an error. Otherwise, we\u2019ll get the `todo` from the `body` of the `request` and use the `insertOne` Data API endpoint to insert a new document into our database.\n\nWe do this by creating a `query` that defines our `dataSource`, `database`, `collection`, and the `document` we are adding. This gets stringified and sent using `fetch`. `fetch` happens to be built into Deno as well; no need for another module like in Node.js.\n\nWe also wrap the entire function contents with a `try..catch` to let us know if there are any errors. \n\nAs long as everything goes smoothly, we\u2019ll return a status of `201` and a `response.body`.\n\nLastly, we\u2019ll export this function to be used in our `routes.ts` file. So, let\u2019s do that next.\n\n```jsx\nimport { Router } from \"https://deno.land/x/oak/mod.ts\";\nimport { addTodo } from \"./controllers/todos.ts\"; // Import controller methods\n\nconst router = new Router();\n\n// Implement routes\nrouter.post(\"/api/todos\", addTodo); // Add a todo\n\nexport default router;\n```\n\n### Testing the create route\n\nLet\u2019s test our create route. To start the server, we\u2019ll run the following command:\n\n`deno run --allow-net --allow-read server.ts`\n\n> \ud83d\udca1 Like mentioned before, Deno is locked down by default. We have to specify that network access is okay by using the `--allow-net` flag, and that read access to our project directory is okay to read our environmental variables using the `--allow-read` flag.\n\nNow our server should be listening on port 3000. 
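Before reaching for a GUI client, note that Deno's built-in `fetch` also makes it easy to smoke-test the new endpoint from a small throwaway script. This is just a sketch (the `test.ts` file name is hypothetical, and it assumes the server above is running locally on port 3000); run it in a second terminal with `deno run --allow-net test.ts`.

```jsx
// test.ts: a quick, disposable check of the create route
const res = await fetch("http://localhost:3000/api/todos", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ title: "Todo 1", complete: false, todoId: 1 }),
});

// Should log 201 and { success: true, data: {...}, insertedId: "..." }
console.log(res.status, await res.json());
```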
To test, we can use Postman, Insomnia, or my favorite, the Thunder Client extension in VS Code.\n\nWe\u2019ll make a `POST` request to `localhost:3000/api/todos` and include in the `body` of our request the json document that we want to add.\n\n```json\n{\n \"title\": \"Todo 1\",\n \"complete\": false,\n \"todoId\": 1\n}\n```\n\n> \ud83d\udca1 Normally, I would not create an ID manually. I would rely on the MongoDB generated ObjectID, `_id`. That would require adding another Deno module to this project to convert the BSON ObjectId. I wanted to keep this tutorial as simple as possible. \n\nIf all goes well, we should receive a successful response.\n\n## Read all documents route\n\nNow let\u2019s move on to the read routes. We\u2019ll start with a route that gets all of our todos called `getTodos`. \n\n```jsx\nconst getTodos = async ({ response }: { response: any }) => {\n try {\n const URI = `${BASE_URI}/find`;\n const query = {\n collection: COLLECTION,\n database: DATABASE,\n dataSource: DATA_SOURCE\n };\n options.body = JSON.stringify(query);\n const dataResponse = await fetch(URI, options);\n const allTodos = await dataResponse.json();\n\n if (allTodos) {\n response.status = 200;\n response.body = {\n success: true,\n data: allTodos,\n };\n } else {\n response.status = 500;\n response.body = {\n success: false,\n msg: \"Internal Server Error\",\n };\n }\n } catch (err) {\n response.body = {\n success: false,\n msg: err.toString(),\n };\n }\n};\n```\n\nThis one will be directed to the `find` Data API endpoint. We will not pass anything other than the `dataSource`, `database`, and `collection` into our `query`. This will return all documents from the specified collection.\n\nNext, we\u2019ll need to add this function into our exports at the bottom of the file.\n\n```jsx\nexport { addTodo, getTodos }\n```\n\nThen we\u2019ll add this function and route into our `routes.ts` file as well.\n\n```jsx\nimport { Router } from \"https://deno.land/x/oak/mod.ts\";\nimport { addTodo, getTodos } from \"./controllers/todos.ts\"; // Import controller methods\n\nconst router = new Router();\n\n// Implement routes\nrouter\n .post(\"/api/todos\", addTodo) // Add a todo\n .get(\"/api/todos\", getTodos); // Get all todos\n\nexport default router;\n```\n\n### Testing the read all documents route\n\nSince we\u2019ve made changes, we\u2019ll need to restart our server using the same command as before:\n\n`deno run --allow-net --allow-read server.ts`\n\nTo test this route, we\u2019ll send a `GET` request to `localhost:3000/api/todos` this time, with nothing in our request `body`.\n\nThis time, we should see the first document that we inserted in our response.\n\n## Read a single document route\n\nNext, we\u2019ll set up our function to read a single document. 
We\u2019ll call this one `getTodo`.\n\n```jsx\nconst getTodo = async ({\n params,\n response,\n}: {\n params: { id: string };\n response: any;\n}) => {\n const URI = `${BASE_URI}/findOne`;\n const query = {\n collection: COLLECTION,\n database: DATABASE,\n dataSource: DATA_SOURCE,\n filter: { todoId: parseInt(params.id) }\n };\n options.body = JSON.stringify(query);\n const dataResponse = await fetch(URI, options);\n const todo = await dataResponse.json();\n \n if (todo) {\n response.status = 200;\n response.body = {\n success: true,\n data: todo,\n };\n } else {\n response.status = 404;\n response.body = {\n success: false,\n msg: \"No todo found\",\n };\n }\n};\n```\n\nThis function will utilize the `findOne` Data API endpoint and we\u2019ll pass a `filter` this time into our `query`. \n\nWe\u2019re going to use query `params` from our URL to get the ID of the document we will filter for. \n\nNext, we need to export this function as well.\n\n```jsx\nexport { addTodo, getTodos, getTodo }\n```\n\nAnd, we\u2019ll import the function and set up our route in the `routes.ts` file.\n\n```jsx\nimport { Router } from \"https://deno.land/x/oak/mod.ts\";\nimport { \n addTodo, \n getTodos, \n getTodo \n} from \"./controllers/todos.ts\"; // Import controller methods\n\nconst router = new Router();\n\n// Implement routes\nrouter\n .post(\"/api/todos\", addTodo) // Add a todo\n .get(\"/api/todos\", getTodos) // Get all todos\n .get(\"/api/todos/:id\", getTodo); // Get one todo\n\nexport default router;\n```\n\n### Testing the read single document route\n\nRemember to restart the server. This route is very similar to the \u201cread all documents\u201d route. This time, we will need to add an ID to our URL. Let\u2019s use: `localhost:3000/api/todos/1`.\n\nWe should see the document with the `todoId` of 1. \n\n> \ud83d\udca1 To further test, try adding more test documents using the `POST` method and then run the two `GET` methods again to see the results.\n\n## Update route\n\nNow that we have documents, let\u2019s set up our update route to allow us to make changes to existing documents. We\u2019ll call this function `updateTodo`.\n\n```jsx\nconst updateTodo = async ({\n params,\n request,\n response,\n}: {\n params: { id: string };\n request: any;\n response: any;\n}) => {\n try {\n const body = await request.body();\n const { title, complete } = await body.value;\n const URI = `${BASE_URI}/updateOne`;\n const query = {\n collection: COLLECTION,\n database: DATABASE,\n dataSource: DATA_SOURCE,\n filter: { todoId: parseInt(params.id) },\n update: { $set: { title, complete } }\n };\n options.body = JSON.stringify(query);\n const dataResponse = await fetch(URI, options);\n const todoUpdated = await dataResponse.json();\n \n response.status = 200;\n response.body = { \n success: true,\n todoUpdated \n };\n \n } catch (err) {\n response.body = {\n success: false,\n msg: err.toString(),\n };\n }\n};\n```\n\nThis route will accept three arguments: `params`, `request`, and `response`. The `params` will tell us which document to update, and the `request` will tell us what to update.\n\nWe\u2019ll use the `updateOne` Data API endpoint and set a `filter` and `update` in our `query`.\n\nThe `filter` will indicate which document we are updating and the `update` will use the `$set` operator to update the document fields. 
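For reference, here is roughly what ends up in `options.body` after `JSON.stringify(query)` when updating todo `1`; the values are illustrative only and reuse the constants defined earlier.

```jsx
// Illustrative only: the query object sent to the updateOne endpoint for
// PUT /api/todos/1 with { "title": "Todo 1", "complete": true } as the request body.
const exampleQuery = {
  collection: "todos",
  database: "todo_db",
  dataSource: "Cluster0",
  filter: { todoId: 1 },
  update: { $set: { title: "Todo 1", complete: true } }
};
```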
\n\nThe updated data will come from our `request.body`.\n\nLet\u2019s export this function at the bottom of the file.\n\n```jsx\nexport { addTodo, getTodos, getTodo, updateTodo }\n```\n\nAnd, we\u2019ll import the function and set up our route in the `routes.ts` file.\n\n```jsx\nimport { Router } from \"https://deno.land/x/oak/mod.ts\";\nimport { \n addTodo, \n getTodos, \n getTodo,\n updateTodo\n} from \"./controllers/todos.ts\"; // Import controller methods\n\nconst router = new Router();\n\n// Implement routes\nrouter\n .post(\"/api/todos\", addTodo) // Add a todo\n .get(\"/api/todos\", getTodos) // Get all todos\n .get(\"/api/todos/:id\", getTodo) // Get one todo\n .put(\"/api/todos/:id\", updateTodo); // Update a todo\n\nexport default router;\n```\n\nThis route will use the `PUT` method.\n\n### Testing the update route\n\nRemember to restart the server. To test this route, we\u2019ll use a combination of the previous tests. \n\nOur method will be `PUT`. Our URL will be `localhost:3000/api/todos/1`. And we\u2019ll include a json document in our `body` with the updated fields. \n\n```json\n{\n \"title\": \"Todo 1\",\n \"complete\": true\n}\n```\n\nOur response this time will indicate if a document was found, or matched, and if a modification was made. Here we see that both are true. \n\nIf we run a `GET` request on that same URL we\u2019ll see that the document was updated!\n\n## Delete route\n\nNext, we'll set up our delete route. We\u2019ll call this one `deleteTodo`.\n\n```jsx\nconst deleteTodo = async ({\n params,\n response,\n}: {\n params: { id: string };\n response: any;\n}) => {\n try {\n const URI = `${BASE_URI}/deleteOne`;\n const query = {\n collection: COLLECTION,\n database: DATABASE,\n dataSource: DATA_SOURCE,\n filter: { todoId: parseInt(params.id) }\n };\n options.body = JSON.stringify(query);\n const dataResponse = await fetch(URI, options);\n const todoDeleted = await dataResponse.json();\n\n response.status = 201;\n response.body = {\n todoDeleted\n };\n } catch (err) {\n response.body = {\n success: false,\n msg: err.toString(),\n };\n }\n};\n```\n\nThis route will use the `deleteOne` Data API endpoint and will `filter` using the URL `params`.\n\nLet\u2019s export this function.\n\n```jsx\nexport { addTodo, getTodos, getTodo, updateTodo, deleteTodo };\n```\n\nAnd we\u2019ll import it and set up its route in the `routes.ts` file.\n\n```jsx\nimport { Router } from \"https://deno.land/x/oak/mod.ts\";\nimport { \n addTodo, \n getTodos, \n getTodo,\n updateTodo,\n deleteTodo\n} from \"./controllers/todos.ts\"; // Import controller methods\n\nconst router = new Router();\n\n// Implement routes\nrouter\n .post(\"/api/todos\", addTodo) // Add a todo\n .get(\"/api/todos\", getTodos) // Get all todos\n .get(\"/api/todos/:id\", getTodo) // Get one todo\n .put(\"/api/todos/:id\", updateTodo) // Update a todo\n .delete(\"/api/todos/:id\", deleteTodo); // Delete a todo\n\nexport default router;\n```\n\n### Testing the delete route\n\nRemember to restart the server. This test will use the `DELETE` method. We\u2019ll delete the first todo using this URL: `localhost:3000/api/todos/1`.\n\nOur response will indicate how many documents were deleted. In this case, we should see that one was deleted. \n\n## Bonus: Aggregation route\n\nWe're going to create one more bonus route. This one will demonstrate a basic aggregation pipeline using the MongoDB Atlas Data API. 
We'll call this one `getIncompleteTodos`.\n\n```jsx\nconst getIncompleteTodos = async ({ response }: { response: any }) => {\n const URI = `${BASE_URI}/aggregate`;\n const pipeline = [\n {\n $match: {\n complete: false\n }\n }, \n {\n $count: 'incomplete'\n }\n ];\n const query = {\n dataSource: DATA_SOURCE,\n database: DATABASE,\n collection: COLLECTION,\n pipeline\n };\n\n options.body = JSON.stringify(query);\n const dataResponse = await fetch(URI, options);\n const incompleteCount = await dataResponse.json();\n \n if (incompleteCount) {\n response.status = 200;\n response.body = {\n success: true,\n incompleteCount,\n };\n } else {\n response.status = 404;\n response.body = {\n success: false,\n msg: \"No incomplete todos found\",\n };\n }\n};\n```\n\nFor this route, we'll use the `aggregate` Data API endpoint. This endpoint will accept a `pipeline`.\n\nWe can pass any aggregation pipeline through this endpoint. Our example will be basic. The result will be a count of the incomplete todos. \n\nLet\u2019s export this final function.\n\n```jsx\nexport { addTodo, getTodos, getTodo, updateTodo, deleteTodo, getIncompleteTodos };\n```\n\nAnd we\u2019ll import it and set up our final route in the `routes.ts` file.\n\n```jsx\nimport { Router } from \"https://deno.land/x/oak/mod.ts\";\nimport { \n addTodo, \n getTodos, \n getTodo,\n updateTodo,\n deleteTodo,\n getIncompleteTodos\n} from \"./controllers/todos.ts\"; // Import controller methods\n\nconst router = new Router();\n\n// Implement routes\nrouter\n .post(\"/api/todos\", addTodo) // Add a todo\n .get(\"/api/todos\", getTodos) // Get all todos\n .get(\"/api/todos/:id\", getTodo) // Get one todo\n .get(\"/api/todos/incomplete/count\", getIncompleteTodos) // Get incomplete todo count\n .put(\"/api/todos/:id\", updateTodo) // Update a todo\n .delete(\"/api/todos/:id\", deleteTodo); // Delete a todo\n\nexport default router;\n```\n\n### Testing the aggregation route\n\nRemember to restart the server. This test will use the `GET` method and this URL: `localhost:3000/api/todos/incomplete/count`.\n\nAdd a few test todos to the database and mark some as complete and some as incomplete.\n\nOur response shows the count of incomplete todos. \n\n## Conclusion\n\nWe created a Deno server that uses the MongoDB Atlas Data API to Create, Read, Update, and Delete (CRUD) documents in our MongoDB database. We added a bonus route to demonstrate using an aggregation pipeline with the MongoDB Atlas Data API. What next?\n\nIf you would like to see the completed code, you can find it *here*. You should be able to use this as a starter for your next project and modify it to meet your needs.\n\nI\u2019d love to hear your feedback or questions. Let\u2019s chat in the MongoDB Community.", "format": "md", "metadata": {"tags": ["Rust", "Atlas", "TypeScript"], "pageDescription": "Deno is a \u201cmodern\u201d runtime for JavaScript and TypeScript that is built in Rust. This makes it very fast!\nIf you are familiar with Node.js, then you will be right at home with Deno. It is very similar but has some improvements over Node.js. 
In fact, the creator of Deno also created Node and Deno is meant to be the successor to Node.js.\nDeno pairs nicely with MongoDB.", "contentType": "Quickstart"}, "title": "Getting Started with Deno & MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/create-restful-api-dotnet-core-mongodb", "action": "created", "body": "# Create a RESTful API with .NET Core and MongoDB\n\nIf you've been keeping up with my development content, you'll remember that I recently wrote Build Your First .NET Core Application with MongoDB Atlas, which focused on building a console application that integrated with MongoDB. While there is a fit for MongoDB in console applications, many developers are going to find it more valuable in web applications.\n\nIn this tutorial, we're going to expand upon the previous and create a RESTful API with endpoints that perform basic create, read, update, and delete (CRUD) operations against MongoDB Atlas.\n\n## The Requirements\n\nTo be successful with this tutorial, you'll need to have a few things taken care of first:\n\n- A deployed and configured MongoDB Atlas cluster, M0 or higher\n- .NET Core 6+\n\nWe won't go through the steps of deploying a MongoDB Atlas cluster or configuring it with user and network rules. If this is something you need help with, check out a previous tutorial that was published on the topic.\n\nWe'll be using .NET Core 6.0 in this tutorial, but other versions may still work. Just take the version into consideration before continuing.\n\n## Create a Web API Project with the .NET Core CLI\n\nTo kick things off, we're going to create a fresh .NET Core project using the web application template that Microsoft offers. To do this, execute the following commands from the CLI:\n\n```bash\ndotnet new webapi -o MongoExample\ncd MongoExample\ndotnet add package MongoDB.Driver\n```\n\nThe above commands will create a new web application project for .NET Core and install the latest MongoDB driver. We'll be left with some boilerplate files as part of the template, but we can remove them.\n\nInside the project, delete any file related to `WeatherForecast` and similar.\n\n## Designing a Document Model and Database Service within .NET Core\n\nBefore we start designing each of the RESTful API endpoints with .NET Core, we need to create and configure our MongoDB service and define the data model for our API.\n\nWe'll start by working on our MongoDB service, which will be responsible for establishing our connection and directly working with documents within MongoDB. Within the project, create \"Models/MongoDBSettings.cs\" and add the following C# code:\n\n```csharp\nnamespace MongoExample.Models;\n\npublic class MongoDBSettings {\n\n public string ConnectionURI { get; set; } = null!;\n public string DatabaseName { get; set; } = null!;\n public string CollectionName { get; set; } = null!;\n\n}\n```\n\nThe above `MongoDBSettings` class will hold information about our connection, the database name, and the collection name. The data we plan to store in these class fields will be found in the project's \"appsettings.json\" file. 
Open it and add the following:\n\n```json\n{\n \"Logging\": {\n \"LogLevel\": {\n \"Default\": \"Information\",\n \"Microsoft.AspNetCore\": \"Warning\"\n }\n },\n \"AllowedHosts\": \"*\",\n \"MongoDB\": {\n \"ConnectionURI\": \"ATLAS_URI_HERE\",\n \"DatabaseName\": \"sample_mflix\",\n \"CollectionName\": \"playlist\"\n }\n}\n```\n\nSpecifically take note of the `MongoDB` field. Just like with the previous example project, we'll be using the \"sample_mflix\" database and the \"playlist\" collection. You'll need to grab the `ConnectionURI` string from your MongoDB Atlas Dashboard.\n\nWith the settings in place, we can move onto creating the service.\n\nCreate \"Services/MongoDBService.cs\" within your project and add the following:\n\n```csharp\nusing MongoExample.Models;\nusing Microsoft.Extensions.Options;\nusing MongoDB.Driver;\nusing MongoDB.Bson;\n\nnamespace MongoExample.Services;\n\npublic class MongoDBService {\n\n private readonly IMongoCollection _playlistCollection;\n\n public MongoDBService(IOptions mongoDBSettings) {\n MongoClient client = new MongoClient(mongoDBSettings.Value.ConnectionURI);\n IMongoDatabase database = client.GetDatabase(mongoDBSettings.Value.DatabaseName);\n _playlistCollection = database.GetCollection(mongoDBSettings.Value.CollectionName);\n }\n\n public async Task> GetAsync() { }\n public async Task CreateAsync(Playlist playlist) { }\n public async Task AddToPlaylistAsync(string id, string movieId) {}\n public async Task DeleteAsync(string id) { }\n\n}\n```\n\nIn the above code, each of the asynchronous functions were left blank on purpose. We'll be populating those functions as we create our endpoints. Instead, make note of the constructor method and how we're taking the passed settings that we saw in our \"appsettings.json\" file and setting them to variables. In the end, the only variable we'll ever interact with for this example is the `_playlistCollection` variable.\n\nWith the service available, we need to connect it to the application. Open the project's \"Program.cs\" file and add the following at the top:\n\n```csharp\nusing MongoExample.Models;\nusing MongoExample.Services;\n\nvar builder = WebApplication.CreateBuilder(args);\n\nbuilder.Services.Configure(builder.Configuration.GetSection(\"MongoDB\"));\nbuilder.Services.AddSingleton();\n```\n\nYou'll likely already have the `builder` variable in your code because it was part of the boilerplate project, so don't add it twice. What you'll need to add near the top is an import to your custom models and services as well as configuring the service.\n\nRemember the `MongoDB` field in the \"appsettings.json\" file? That is the section that the `GetSection` function is pulling from. That information is passed into the singleton service that we created.\n\nWith the service created and working, with the exception of the incomplete asynchronous functions, we can focus on creating a data model for our collection.\n\nCreate \"Models/Playlist.cs\" and add the following C# code:\n\n```csharp\nusing MongoDB.Bson;\nusing MongoDB.Bson.Serialization.Attributes;\nusing System.Text.Json.Serialization;\n\nnamespace MongoExample.Models;\n\npublic class Playlist {\n\n BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public string? 
Id { get; set; }\n\n public string username { get; set; } = null!;\n\n [BsonElement(\"items\")]\n [JsonPropertyName(\"items\")]\n public List movieIds { get; set; } = null!;\n\n}\n```\n\nThere are a few things happening in the above class that take it from a standard C# class to something that can integrate seamlessly into a MongoDB document.\n\nFirst, you might notice the following:\n\n```csharp\n[BsonId]\n[BsonRepresentation(BsonType.ObjectId)]\npublic string? Id { get; set; }\n```\n\nWe're saying that the `Id` field is to be represented as an ObjectId in BSON and the `_id` field within MongoDB. However, when we work with it locally in our application, it will be a string.\n\nThe next thing you'll notice is the following:\n\n```csharp\n[BsonElement(\"items\")]\n[JsonPropertyName(\"items\")]\npublic List movieIds { get; set; } = null!;\n```\n\nEven though we plan to work with `movieIds` within our C# application, in MongoDB, the field will be known as `items` and when sending or receiving JSON, the field will also be known as `items` instead of `movieIds`.\n\nYou don't need to define custom mappings if you plan to have your local class field match the document field directly. Take the `username` field in our example. It has no custom mappings, so it will be `username` in C#, `username` in JSON, and `username` in MongoDB.\n\nJust like that, we have a MongoDB service and document model for our collection to work with for .NET Core.\n\n## Building CRUD Endpoints that Interact with MongoDB Using .NET Core\n\nWhen building CRUD endpoints for this project, we'll need to bounce between two different locations within our project. We'll need to define the endpoint within a controller and do the work within our service.\n\nCreate \"Controllers/PlaylistController.cs\" and add the following code:\n\n```csharp\nusing System;\nusing Microsoft.AspNetCore.Mvc;\nusing MongoExample.Services;\nusing MongoExample.Models;\n\nnamespace MongoExample.Controllers; \n\n[Controller]\n[Route(\"api/[controller]\")]\npublic class PlaylistController: Controller {\n \n private readonly MongoDBService _mongoDBService;\n\n public PlaylistController(MongoDBService mongoDBService) {\n _mongoDBService = mongoDBService;\n }\n\n [HttpGet]\n public async Task> Get() {}\n\n [HttpPost]\n public async Task Post([FromBody] Playlist playlist) {}\n\n [HttpPut(\"{id}\")]\n public async Task AddToPlaylist(string id, [FromBody] string movieId) {}\n\n [HttpDelete(\"{id}\")]\n public async Task Delete(string id) {}\n\n}\n```\n\nIn the above `PlaylistController` class, we have a constructor method that gains access to our singleton service class. Then we have a series of endpoints for this particular controller. We could add far more endpoints than this to our controller, but it's not necessary for this example.\n\nLet's start with creating data through the POST endpoint. To do this, it's best to start in the \"Services/MongoDBService.cs\" file:\n\n```csharp\npublic async Task CreateAsync(Playlist playlist) {\n await _playlistCollection.InsertOneAsync(playlist);\n return;\n}\n```\n\nWe had set the `_playlistCollection` in the constructor method of the service, so we can now use the `InsertOneAsync` method, taking a passed `Playlist` variable and inserting it. 
Jumping back into the \"Controllers/PlaylistController.cs,\" we can add the following:\n\n```csharp\n[HttpPost]\npublic async Task Post([FromBody] Playlist playlist) {\n await _mongoDBService.CreateAsync(playlist);\n return CreatedAtAction(nameof(Get), new { id = playlist.Id }, playlist);\n}\n```\n\nWhat we're saying is that when the endpoint is executed, we take the `Playlist` object from the request, something that .NET Core parses for us, and pass it to the `CreateAsync` function that we saw in the service. After the insert, we return some information about the interaction.\n\nIt's important to note that in this example project, we won't be validating any data flowing from HTTP requests.\n\nLet's jump to the read operations.\n\nHead back into the \"Services/MongoDBService.cs\" file and add the following function:\n\n```csharp\npublic async Task> GetAsync() {\n return await _playlistCollection.Find(new BsonDocument()).ToListAsync();\n}\n```\n\nThe above `Find` operation will return all documents that exist in the collection. If you wanted to, you could make use of the `FindOne` or provide filter criteria to return only the data that you want. We'll explore filters shortly.\n\nWith the service function ready, add the following endpoint to the \"Controllers/PlaylistController.cs\" file:\n\n```csharp\n[HttpGet]\npublic async Task> Get() {\n return await _mongoDBService.GetAsync();\n}\n```\n\nNot so bad, right? We'll be doing the same thing for the other endpoints, more or less.\n\nThe next CRUD stage to take care of is the updating of data. Within the \"Services/MongoDBService.cs\" file, add the following function:\n\n```csharp\npublic async Task AddToPlaylistAsync(string id, string movieId) {\n FilterDefinition filter = Builders.Filter.Eq(\"Id\", id);\n UpdateDefinition update = Builders.Update.AddToSet(\"movieIds\", movieId);\n await _playlistCollection.UpdateOneAsync(filter, update);\n return;\n}\n```\n\nRather than making changes to the entire document, we're planning on adding an item to our playlist and nothing more. To do this, we set up a match filter to determine which document or documents should receive the update. In this case, we're matching on the id which is going to be unique. Next, we're defining the update criteria, which is an `AddToSet` operation that will only add an item to the array if it doesn't already exist in the array.\n\nThe `UpdateOneAsync` method will only update one document even if the match filter returned more than one match.\n\nIn the \"Controllers/PlaylistController.cs\" file, add the following endpoint to pair with the `AddToPlayListAsync` function:\n\n```csharp\n[HttpPut(\"{id}\")]\npublic async Task AddToPlaylist(string id, [FromBody] string movieId) {\n await _mongoDBService.AddToPlaylistAsync(id, movieId);\n return NoContent();\n}\n```\n\nIn the above PUT endpoint, we are taking the `id` from the route parameters and the `movieId` from the request body and using them with the `AddToPlaylistAsync` function.\n\nThis brings us to our final part of the CRUD spectrum. We're going to handle deleting of data.\n\nIn the \"Services/MongoDBService.cs\" file, add the following function:\n\n```csharp\npublic async Task DeleteAsync(string id) {\n FilterDefinition filter = Builders.Filter.Eq(\"Id\", id);\n await _playlistCollection.DeleteOneAsync(filter);\n return;\n}\n```\n\nThe above function will delete a single document based on the filter criteria. The filter criteria, in this circumstance, is a match on the id which is always going to be unique. 
Your filters could be more extravagant if you wanted.\n\nTo bring it to an end, the endpoint for this function would look like the following in the \"Controllers/PlaylistController.cs\" file:\n\n```csharp\n[HttpDelete(\"{id}\")]\npublic async Task Delete(string id) {\n await _mongoDBService.DeleteAsync(id);\n return NoContent();\n}\n```\n\nWe only created four endpoints, but you could take everything we did and create 100 more if you wanted to. They would all use a similar strategy and can leverage everything that MongoDB has to offer.\n\n## Conclusion\n\nYou just saw how to create a simple four endpoint RESTful API using .NET Core and MongoDB. This was an expansion to the [previous tutorial, which went over the same usage of MongoDB, but in a console application format rather than web application.\n\nLike I mentioned, you can take the same strategy used here and apply it towards more endpoints, each doing something critical for your web application.\n\nGot a question about the driver for .NET? Swing by the MongoDB Community Forums!", "format": "md", "metadata": {"tags": ["C#", "MongoDB", ".NET"], "pageDescription": "Learn how to create a RESTful web API with .NET Core that interacts with MongoDB through each of its endpoints.", "contentType": "Tutorial"}, "title": "Create a RESTful API with .NET Core and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/everything-you-know-is-wrong", "action": "created", "body": "# Everything You Know About MongoDB is Wrong!\n\nI joined MongoDB less than a year ago, and I've learned a lot in the time since. Until I started working towards my interviews at the company, I'd never actually *used* MongoDB, although I had seen some talks about it and been impressed by how simple it seemed to use.\n\nBut like many other people, I'd also heard the scary stories. \"*It doesn't do relationships!*\" people would say. \"*It's fine if you want to store documents, but what if you want to do aggregation later? You'll be trapped in the wrong database! And anyway! Transactions! It doesn't have transactions!*\"\n\nIt wasn't until I started to go looking for the sources of this information that I started to realise two things: First, most of those posts are from a decade ago, so they referred to a three-year-old product, rather than the mature, battle-tested version we have today. Second, almost everything they say is no longer true - and in some cases *has never been true*.\n\nSo I decided to give a talk (and now write this blog post) about the misinformation that's available online, and counter each myth, one by one.\n\n## Myth 0: MongoDB is Web Scale\n\nThere's a YouTube video with a couple of dogs in it (dogs? I think they're dogs). You've probably seen it - one of them is that kind of blind follower of new technology who's totally bought into MongoDB, without really understanding what they've bought into. The other dog is more rational and gets frustrated by the first dog's refusal to come down to Earth.\n\nI was sent a link to this video by a friend of mine on my first day at MongoDB, just in case I hadn't seen it. (I had seen it.) Check out the date at the bottom! This video's been circulating for over a decade. It was really funny at the time, but these days? Almost everything that's in there is outdated.\n\nWe're not upset. In fact, many people at MongoDB have the character on a T-shirt or a sticker on their laptop. He's kind of an unofficial mascot at MongoDB. 
Just don't watch the video looking for facts. And stop sending us links to the video - we've all seen it!\n\n## What Exactly *is* MongoDB?\n\nBefore launching into some things that MongoDB *isn't*, let's just summarize what MongoDB actually *is.*\n\nMongoDB is a distributed document database. Clusters (we call them replica sets) are mostly self-managing - once you've told each of the machines which other servers are in the cluster, then they'll handle it if one of the nodes goes down or there are problems with the network. If one of the machines gets shut off or crashes, the others will take over. You need a minimum of 3 nodes in a cluster, to achieve quorum. Each server in the cluster holds a complete copy of all of the data in the database.\n\nClusters are for redundancy, not scalability. All the clients are generally connected to only one server - the elected primary, which is responsible for executing queries and updates, and transmitting data changes to the secondary machines, which are there in case of server failure.\n\nThere *are* some interesting things you can do by connecting directly to the secondaries, like running analytics queries, because the machines are under less read load. But in general, forcing a connection to a secondary means you could be working with slightly stale data, so you shouldn't connect to a secondary node unless you're prepared to make some compromises.\n\nSo I've covered \"distributed.\" What do I mean by \"document database?\"\n\nThe thing that makes MongoDB different from traditional relational databases is that instead of being able to store atoms of data in flat rows, stored in tables in the database, MongoDB allows you to store hierarchical structured data in a *document* - which is (mostly) analogous to a JSON object. Documents are stored in a collection, which is really just a bucket of documents. Each document can have a different structure, or *schema*, from all the other documents in the collection. You can (and should!) also index documents in collections, based on the kind of queries you're going to be running and the data that you're storing. And if you want validation to ensure that all the documents in a collection *do* follow a set structure, you can apply a JSON\nschema to the collection as a validator.\n\n``` javascript\n{\n '_id': ObjectId('573a1390f29313caabcd4135'),\n 'title': 'Blacksmith Scene',\n 'fullplot': 'A stationary camera looks at a large anvil with a\n blacksmith behind it and one on either side.',\n 'cast': 'Charles Kayser', 'John Ott'],\n 'countries': ['USA'],\n 'directors': ['William K.L. Dickson'],\n 'genres': ['Short'],\n 'imdb': {'id': 5, 'rating': 6.2, 'votes': 1189},\n 'released': datetime.datetime(1893, 5, 9, 0, 0),\n 'runtime': 1,\n 'year': 1893\n}\n```\n\nThe above document is an example, showing a movie from 1893! This document was retrieved using the [PyMongo driver.\n\nNote that some of the values are arrays, like 'countries' and 'cast'. Some of the values are objects (we call them subdocuments). This demonstrates the hierarchical nature of MongoDB documents - they're not flat like a table row in a relational database.\n\nNote *also* that it contains a native Python datetime type for the 'released' value, and a special *ObjectId* type for the first value. Maybe these aren't actually JSON documents? I'll come back to that later...\n\n## Myth 1: MongoDB is on v3.2\n\nIf you install MongoDB on Debian Stretch, with `apt get mongodb`, it will install version 3.2. Unfortunately, this version is five years old! 
There have been five major annual releases since then, containing a whole host of new features, as well as security, performance, and scalability improvements.\n\nThe current version of MongoDB is v4.4 (as of late 2020). If you want to install it, you should install MongoDB Community Server, but first make sure you've read about MongoDB Atlas, our hosted database-as-a-service product!\n\n## Myth 2: MongoDB is a JSON Database\n\nYou'll almost certainly have heard that MongoDB is a JSON database, especially if you've read the MongoDB.com homepage recently!\n\nAs I implied before, though, MongoDB *isn't* a JSON database. It supports extra data types, such as ObjectIds, native date objects, more numeric types, geographic primitives, and an efficient binary type, among others!\n\nThis is because **MongoDB is a BSON database**.\n\nThis may seem like a trivial distinction, but it's important. As well as being more efficient to store, transfer, and traverse than using a text-based format for structured data, as well as supporting more data types than JSON, it's also *everywhere* in MongoDB.\n\n- MongoDB stores BSON documents.\n- Queries to look up documents are BSON documents.\n- Results are provided as BSON documents.\n- BSON is even used for the wire protocol used by MongoDB!\n\nIf you're used to working with JSON when doing web development, it's a useful shortcut to think of MongoDB as a JSON database. That's why we sometimes describe it that way! But once you've been working with MongoDB for a little while, you'll come to appreciate the advantages that BSON has to offer.\n\n## Myth 3: MongoDB Doesn't Support Transactions\n\nWhen reading third-party descriptions of MongoDB, you may come across blog posts describing it as a BASE database. BASE is an acronym for \"Basic Availability; Soft-state; Eventual consistency.\"\n\nBut this is not true, and never has been! MongoDB has never been \"eventually consistent.\" Reads and writes to the primary are guaranteed to be strongly consistent, and updates to a single document are always atomic. Soft-state apparently describes the need to continually update data or it will expire, which is also not the case.\n\nAnd finally, MongoDB *will* go into a read-only state (reducing availability) if so many nodes are unavailable that a quorum cannot be achieved. This is by design. It ensures that consistency is maintained when everything else goes wrong.\n\n**MongoDB is an ACID database**. It supports atomicity, consistency, isolation, and durability.\n\nUpdates to multiple parts of individual documents have always been atomic; but since v4.0, MongoDB has supported transactions across multiple documents and collections. Since v4.2, this is even supported across shards in a sharded cluster.\n\nDespite *supporting* transactions, they should be used with care. They have a performance cost, and because MongoDB supports rich, hierarchical documents, if your schema is designed correctly, you should not often have to update across multiple documents.\n\n## Myth 4: MongoDB Doesn't Support Relationships\n\nAnother out-of-date myth about MongoDB is that you can't have relationships between collections or documents. You *can* do joins with queries that we call aggregation pipelines. 
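As a minimal sketch of what such a join looks like in the shell (the collection and field names here are only illustrative), a `$lookup` stage pulls the matching documents from a second collection into an array on each result:

```javascript
db.orders.aggregate([
  {
    $lookup: {
      from: "inventory",      // the collection to join with
      localField: "item",     // field on the orders documents
      foreignField: "sku",    // field on the inventory documents
      as: "inventory_docs"    // matching inventory documents land here, as an array
    }
  }
]);
```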
They're super-powerful, allowing you to query and transform your data from multiple collections using an intuitive query model that consists of a series of pipeline stages applied to data moving through the pipeline.\n\n**MongoDB has supported lookups (joins) since v2.2.**\n\nThe example document below shows how, after a query joining an *orders* collection and an *inventory* collection, a returned order document contains the related inventory documents, embedded in an array.\n\nMy opinion is that being able to embed related documents within the primary documents being returned is more intuitive than duplicating rows for every relationship found in a relational join.\n\n## Myth 5: MongoDB is All About Sharding\n\nYou may hear people talk about sharding as a cool feature of MongoDB. And it is - it's definitely a cool, and core, feature of MongoDB.\n\nSharding is when you divide your data and put each piece in a different replica set or cluster. It's a technique for dealing with huge data sets. MongoDB supports automatically ensuring data and requests are sent to the correct replica sets, and merging results from multiple shards.\n\nBut there's a fundamental issue with sharding.\n\nI mentioned earlier in this post that the minimum number of nodes in a replica set is three, to allow quorum. As soon as you need sharding, you have at least two replica sets, so that's a minimum of six servers. On top of that, you need to run multiple instances of a server called *mongos*. Mongos is a proxy for the sharded cluster which handles the routing of requests and responses. For high availability, you need at least two instances of mongos.\n\nSo, this means a minimum sharded cluster is eight servers, and it goes up by at least three servers, with each shard added.\n\nSharded clusters also make your data harder to manage, and they add some limitations to the types of queries you can conduct. **Sharding is useful if you need it, but it's often cheaper and easier to simply upgrade your hardware!**\n\nScaling data is mostly about RAM, so if you can, buy more RAM. If CPU is your bottleneck, upgrade your CPU, or buy a bigger disk, if that's your issue.\n\nMongoDB's sharding features are still there for you once you scale beyond the amount of RAM that can be put into a single computer. You can also do some neat things with shards, like geo-pinning, where you can store user data geographically closer to the user's location, to reduce latency.\n\nIf you're attempting to scale by sharding, you should at least consider whether hardware upgrades would be a more efficient alternative, first.\n\nAnd before you consider *that*, you should look at MongoDB Atlas, MongoDB's hosted database-as-a-service product. (Yes, I know I've already mentioned it!) As well as hosting your database for you, on the cloud (or clouds) of your choice, MongoDB Atlas will also scale your database up and down as required, keeping you available, while keeping costs low. It'll handle backups and redundancy, and also includes extra features, such as charts, text search, serverless functions, and more.\n\n## Myth 6: MongoDB is Insecure\n\nA rather persistent myth about MongoDB is that it's fundamentally insecure. My personal feeling is that this is one of the more unfair myths about MongoDB, but it can't be denied that there are many insecure instances of MongoDB available on the Internet, and there have been several high-profile data breaches involving MongoDB.\n\nThis is historically due to the way MongoDB has been distributed. 
Some Linux distributions used to ship MongoDB with authentication disabled, and with networking enabled.\n\nSo, if you didn't have a firewall, or if you opened up the MongoDB port on your firewall so that it could be accessed by your web server... then your data would be stolen. Nowadays, it's just as likely that a bot will find your data, encrypt it within your database, and then add a document telling you where to send Bitcoin to get the key to decrypt it again.\n\n*I* would argue that if you put an unprotected database server on the internet, then that's *your* fault - but it's definitely the case that this has happened many times, and there were ways to make it more difficult to mess this up.\n\nWe fixed the defaults in MongoDB 3.6. **MongoDB will not connect to the network unless authentication is enabled** *or* you provide a specific flag to the server to override this behaviour. So, you can still *be* insecure, but now you have to at least read the manual first!\n\nOther than this, **MongoDB uses industry standards for security**, such as TLS to encrypt the data in-transit, and SCRAM-SHA-256 to authenticate users securely.\n\nMongoDB also features client-side field-level encryption (FLE), which allows you to store data in MongoDB so that it is encrypted both in-transit and at-rest. This means that if a third-party was to gain access to your database server, they would be unable to read the encrypted data without also gaining access to the client.\n\n## Myth 7: MongoDB Loses Data\n\nThis myth is a classic Hacker News trope. Someone posts an example of how they successfully built something with MongoDB, and there's an immediate comment saying, \"I know this guy who once lost all his data in MongoDB. It just threw it away. Avoid.\"\n\nIf you follow up asking these users to get in touch and file a ticket describing the incident, they never turn up!\n\nMongoDB is used in a range of industries who care deeply about keeping their data. These range from banks such as Morgan Stanley, Barclays, and HSBC to massive publishing brands, like Forbes. We've never had a report of large-scale data loss. If you *do* have a first-hand story to tell of data loss, please file a ticket. We'll take it seriously whether you're a paying enterprise customer or an open-source user.\n\n## Myth 8: MongoDB is Just a Toy\n\nIf you've read up until this point, you can already see that this one's a myth!\n\nMongoDB is a general purpose database for storing documents, that can be updated securely and atomically, with joins to other documents and a rich, powerful and intuitive query language for finding and aggregating those documents in the form that you need. When your data gets too big for a single machine, it supports sharding out of the box, and it supports advanced features such as client-side field level encryption for securing sensitive data, and change streams, to allow your applications to respond immediately to changes to your data, using whatever language, framework and set of libraries you prefer to develop with.\n\nIf you want to protect yourself from myths in the future, your best bet is to...\n\n## Become a MongoDB Expert\n\nMongoDB is a database that is easy to get started with, but to build production applications requires that you master the complexities of interacting with a distributed database. MongoDB Atlas simplifies many of those challenges, but you will get the most out of MongoDB if you invest time in learning things like the aggregation framework, read concerns, and write concerns. 
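To give just one small taste of that, here is a sketch of what tuning durability for a single write looks like in the shell (the collection and document are hypothetical):

```javascript
// Wait for acknowledgement from a majority of replica set members,
// and give up after five seconds if that can't happen.
db.orders.insertOne(
  { item: "notebook", qty: 1 },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
);
```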
Nothing hard is easy, but the hard stuff is easier with MongoDB. You're not going to become an expert overnight. The good news is that there are lots of resources for learning MongoDB, and it's fun!\n\nThe MongoDB documentation is thorough and readable. There are many free courses at MongoDB University\n\nOn the MongoDB Developer Blog, we have detailed some MongoDB Patterns for schema design and development, and my awesome colleague Lauren Schaefer has been producing a series of posts describing MongoDB Anti-Patterns to help you recognise when you may not be doing things optimally.\n\nMongoDB has an active Community Forum where you can ask questions or show off your projects.\n\nSo, **MongoDB is big and powerful, and there's a lot to learn**. I hope this article has gone some way to explaining what MongoDB is, what it isn't, and how you might go about learning to use it effectively.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "There are a bunch of myths floating around about MongoDB. Here's where I bust them.", "contentType": "Article"}, "title": "Everything You Know About MongoDB is Wrong!", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/get-hyped-synonyms-atlas-search", "action": "created", "body": "# Get Hyped: Synonyms in Atlas Search\n\nSometimes, the word you\u2019re looking for is on the tip of your tongue, but you can\u2019t quite grasp it. For example, when you\u2019re trying to find a really funny tweet you saw last night to show your friends. If you\u2019re sitting there reading this and thinking, \"Wow, Anaiya and Nic, you\u2019re so right. I wish there was a fix for this,\" strap on in! We have just the solution for those days when your precise linguistic abilities fail you, but you have an idea of what you\u2019re looking for: **Synonyms in Atlas Search**. \n\nIn this tutorial, we are going to be showing you how to index a MongoDB collection to capture searches for words that mean similar things. For the specifics, we\u2019re going to search through content written with Generation Z (Gen-Z) slang. The slang will be mapped to common words with synonyms and as a result, you\u2019ll get a quick Gen-Z lesson without having to ever open TikTok. \n\nIf you\u2019re in the mood to learn a few new words, alongside how effortlessly synonym mappings can be integrated into Atlas Search, this is the tutorial for you. \n\n## Requirements\n\nThere are a few requirements that must be met to be successful with this tutorial:\n\n- MongoDB Atlas M0 (or higher) cluster running MongoDB version 4.4 (or higher)\n- Node.js\n- A Twitter developer account\n\nWe\u2019ll be using Node.js to load our Twitter data, but a Twitter developer account is required for accessing the APIs that contain Tweets.\n\n## Load Twitter Data into a MongoDB Collection\n\nBefore starting this section of the tutorial, you\u2019re going to need to have your Twitter API Key and API Secret handy. These can both be generated from the Twitter Developer Portal.\n\nThe idea is that we want to store a bunch of tweets in MongoDB that contain Gen-Z slang that we can later make sense of using Atlas Search and properly defined synonyms. 
Each tweet will be stored as a single document within MongoDB and will look something like this:\n\n```json\n{\n \"_id\": 1420091624621629400,\n \"created_at\": \"Tue Jul 27 18:40:01 +0000 2021\",\n \"id\": 1420091624621629400,\n \"id_str\": \"1420091624621629443\",\n \"full_text\": \"Don't settle for a cheugy database, choose MongoDB instead \ud83d\udcaa\",\n \"truncated\": false,\n \"entities\": {\n \"hashtags\": ],\n \"symbols\": [],\n \"user_mentions\": [],\n \"urls\": []\n },\n \"metadata\": {\n \"iso_language_code\": \"en\",\n \"result_type\": \"recent\"\n },\n \"source\": \"Twitter Web App\",\n \"in_reply_to_status_id\": null,\n \"in_reply_to_status_id_str\": null,\n \"in_reply_to_user_id\": null,\n \"in_reply_to_user_id_str\": null,\n \"in_reply_to_screen_name\": null,\n \"user\": {\n \"id\": 1400935623238643700,\n \"id_str\": \"1400935623238643716\",\n \"name\": \"Anaiya Raisinghani\",\n \"screen_name\": \"anaiyaraisin\",\n \"location\": \"\",\n \"description\": \"Developer Advocacy Intern @MongoDB. Opinions are my own!\",\n \"url\": null,\n \"entities\": {\n \"description\": {\n \"urls\": []\n }\n },\n \"protected\": false,\n \"followers_count\": 11,\n \"friends_count\": 29,\n \"listed_count\": 1,\n \"created_at\": \"Fri Jun 04 22:01:07 +0000 2021\",\n \"favourites_count\": 8,\n \"utc_offset\": null,\n \"time_zone\": null,\n \"geo_enabled\": false,\n \"verified\": false,\n \"statuses_count\": 7,\n \"lang\": null,\n \"contributors_enabled\": false,\n \"is_translator\": false,\n \"is_translation_enabled\": false,\n \"profile_background_color\": \"F5F8FA\",\n \"profile_background_image_url\": null,\n \"profile_background_image_url_https\": null,\n \"profile_background_tile\": false,\n \"profile_image_url\": \"http://pbs.twimg.com/profile_images/1400935746593202176/-pgS_IUo_normal.jpg\",\n \"profile_image_url_https\": \"https://pbs.twimg.com/profile_images/1400935746593202176/-pgS_IUo_normal.jpg\",\n \"profile_banner_url\": \"https://pbs.twimg.com/profile_banners/1400935623238643716/1622845231\",\n \"profile_link_color\": \"1DA1F2\",\n \"profile_sidebar_border_color\": \"C0DEED\",\n \"profile_sidebar_fill_color\": \"DDEEF6\",\n \"profile_text_color\": \"333333\",\n \"profile_use_background_image\": true,\n \"has_extended_profile\": true,\n \"default_profile\": true,\n \"default_profile_image\": false,\n \"following\": null,\n \"follow_request_sent\": null,\n \"notifications\": null,\n \"translator_type\": \"none\",\n \"withheld_in_countries\": []\n },\n \"geo\": null,\n \"coordinates\": null,\n \"place\": null,\n \"contributors\": null,\n \"is_quote_status\": false,\n \"retweet_count\": 0,\n \"favorite_count\": 1,\n \"favorited\": false,\n \"retweeted\": false,\n \"lang\": \"en\"\n}\n```\n\nThe above document model is more extravagant than we need. In reality, we\u2019re only going to be paying attention to the `full_text` field, but it\u2019s still useful to know what exists for any given tweet.\n\nNow that we know what the document model is going to look like, we just need to consume it from Twitter.\n\nWe\u2019re going to use two different Twitter APIs with our API Key and API Secret. The first API is the authentication API and it will give us our access token. With the access token we can get tweet data based on a Twitter query.\n\nSince we\u2019re using Node.js, we need to install our dependencies. 
Within a new directory on your computer, execute the following commands from the command line:\n\n```bash\nnpm init -y\nnpm install mongodb axios --save\n```\n\nThe above commands will create a new **package.json** file and install the MongoDB Node.js driver as well as Axios for making HTTP requests.\n\nTake a look at the following Node.js code which can be added to a **main.js** file within your project:\n\n```javascript\nconst { MongoClient } = require(\"mongodb\");\nconst axios = require(\"axios\");\n\nrequire(\"dotenv\").config();\n\nconst mongoClient = new MongoClient(process.env.MONGODB_URI);\n\n(async () => {\n try {\n await mongoClient.connect();\n const tokenResponse = await axios({\n \"method\": \"POST\",\n \"url\": \"https://api.twitter.com/oauth2/token\",\n \"headers\": {\n \"Authorization\": \"Basic \" + Buffer.from(`${process.env.API_KEY}:${process.env.API_SECRET}`).toString(\"base64\"),\n \"Content-Type\": \"application/x-www-form-urlencoded\"\n },\n \"data\": \"grant_type=client_credentials\"\n });\n const tweetResponse = await axios({\n \"method\": \"GET\",\n \"url\": \"https://api.twitter.com/1.1/search/tweets.json\",\n \"headers\": {\n \"Authorization\": \"Bearer \" + tokenResponse.data.access_token\n },\n \"params\": {\n \"q\": \"mongodb -filter:retweets filter:safe (from:codeSTACKr OR from:nraboy OR from:kukicado OR from:judy2k OR from:adriennetacke OR from:anaiyaraisin OR from:lauren_schaefer)\",\n \"lang\": \"en\",\n \"count\": 100,\n \"tweet_mode\": \"extended\"\n }\n });\n console.log(`Next Results: ${tweetResponse.data.search_metadata.next_results}`)\n const collection = mongoClient.db(process.env.MONGODB_DATABASE).collection(process.env.MONGODB_COLLECTION);\n tweetResponse.data.statuses = tweetResponse.data.statuses.map(status => {\n status._id = status.id;\n return status;\n });\n const result = await collection.insertMany(tweetResponse.data.statuses);\n console.log(result);\n } finally {\n await mongoClient.close();\n }\n})();\n```\n\nThere\u2019s quite a bit happening in the above code so we\u2019re going to break it down. However, before we break it down, it's important to note that we\u2019re using environment variables for a lot of the sensitive information like tokens, usernames, and passwords. For security reasons, you really shouldn\u2019t hard-code these values.\n\nInside the asynchronous function, we attempt to establish a connection to MongoDB. 
If successful, no error is thrown, and we make our first HTTP request.\n\n```javascript\nconst tokenResponse = await axios({\n \"method\": \"POST\",\n \"url\": \"https://api.twitter.com/oauth2/token\",\n \"headers\": {\n \"Authorization\": \"Basic \" + Buffer.from(`${process.env.API_KEY}:${process.env.API_SECRET}`).toString(\"base64\"),\n \"Content-Type\": \"application/x-www-form-urlencoded\"\n },\n \"data\": \"grant_type=client_credentials\"\n});\n```\n\nOnce again, in this first HTTP request, we are exchanging our API Key and API Secret with an access token to be used in future requests.\n\nUsing the access token from the response, we can make our second request to the tweets API endpoint:\n\n```javascript\nconst tweetResponse = await axios({\n \"method\": \"GET\",\n \"url\": \"https://api.twitter.com/1.1/search/tweets.json\",\n \"headers\": {\n \"Authorization\": \"Bearer \" + tokenResponse.data.access_token\n },\n \"params\": {\n \"q\": \"mongodb -filter:retweets filter:safe\",\n \"lang\": \"en\",\n \"count\": 100,\n \"tweet_mode\": \"extended\"\n }\n});\n```\n\nThe tweets API endpoint expects a Twitter specific query and some other optional parameters like the language of the tweets or the expected result count. You can check the query language in the [Twitter documentation.\n\nAt this point, we have an array of tweets to work with.\n\nThe next step is to pick the database and collection we plan to use and insert the array of tweets as documents. We can use a simple `insertMany` operation like this:\n\n```javascript\nconst result = await collection.insertMany(tweetResponse.data.statuses);\n```\n\nThe `insertMany` takes an array of objects, which we already have. We have an array of tweets, so each tweet will be inserted as a new document within the database.\n\nIf you have the MongoDB shell handy, you can validate the data that was inserted by executing the following:\n\n```javascript\nuse(\"synonyms\");\ndb.tweets.find({ });\n```\n\nNow that there\u2019s data to work with, we can start to search it using slang synonyms.\n\n## Creating Synonym Mappings in MongoDB \n\nWhile we\u2019re using a `tweets` collection for our actual searchable data, the synonym information needs to exist in a separate source collection in the same database. \n\nYou have two options for how you want your synonyms to be mapped\u2013explicit or equivalent. You are not stuck with choosing just one type. You can have a combination of both explicit and equivalent as synonym documents in your collection. Choose the explicit format for when you need a set of terms to show up as a result of your inputted term, and choose equivalent if you want all terms to show up bidirectionally regardless of your queried term. \n\nFor example, the word \"basic\" means \"regular\" or \"boring.\" If we decide on an explicit (one-way) mapping for \"basic,\" we are telling Atlas Search that if someone searches for \"basic,\" we want to return all documents that include the words \"basic,\" \"regular,\" and \"boring.\" But! If we query the word \"regular,\" we would not get any documents that include \"basic\" because \"regular\" is not explicitly mapped to \"basic.\" \n\nIf we decide to map \"basic\" equivalently to \"regular\" and \"boring,\" whenever we query any of these words, all the documents containing \"basic,\" \"regular,\" **and** \"boring\" will show up regardless of the initial queried word. \n\nTo learn more about explicit vs. equivalent synonym mappings, check out the official documentation. 
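To make the one-way case concrete, here is a sketch of how that `basic` example could be stored as an explicit mapping document (illustration only; as noted next, our demo sticks with equivalent mappings):

```javascript
use("synonyms");

// Explicit (one-way) mapping: querying "basic" also matches "regular" and "boring",
// but querying "regular" or "boring" will not match "basic".
db.slang.insertOne({
  "mappingType": "explicit",
  "input": ["basic"],
  "synonyms": ["basic", "regular", "boring"]
});
```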
\n\nFor our demo, we decided to make all of our synonyms equivalent and formatted our synonym data like this: \n\n```json\n\n {\n \"mappingType\": \"equivalent\",\n \"synonyms\": [\"basic\", \"regular\", \"boring\"] \n },\n {\n \"mappingType\": \"equivalent\",\n \"synonyms\": [\"bet\", \"agree\", \"concur\"]\n },\n {\n \"mappingType\": \"equivalent\",\n \"synonyms\": [\"yikes\", \"embarrassing\", \"bad\", \"awkward\"]\n },\n {\n \"mappingType\": \"equivalent\",\n \"synonyms\": [\"fam\", \"family\", \"friends\"]\n }\n]\n```\n\nEach object in the above array will exist as a separate document within MongoDB. Each of these documents contains information for a particular set of synonyms.\n\nTo insert your synonym documents into your MongoDB collection, you can use the \u2018insertMany()\u2019 MongoDB raw function to put all your documents into the collection of your choice. \n\n```javascript\nuse(\"synonyms\");\n\ndb.slang.insertMany([\n {\n \"mappingType\": \"equivalent\",\n \"synonyms\": [\"basic\", \"regular\", \"boring\"]\n },\n {\n \"mappingType\": \"equivalent\",\n \"synonyms\": [\"bet\", \"agree\", \"concur\"]\n }\n]);\n```\n\nThe `use(\"synonyms\");` line is to ensure you\u2019re in the correct database before inserting your documents. We\u2019re using the `slang` collection to store our synonyms and it doesn\u2019t need to exist in our database prior to running our query.\n\n## Create an Atlas Search Index that Leverages Synonyms\n\nOnce you have your collection of synonyms handy and uploaded, it's time to create your search index! A search index is crucial because it allows you to use full-text search to find the inputted queries in that collection. \n\nWe have included screenshots below of what your MongoDB Atlas Search user interface will look like so you can follow along: \n\nThe first step is to click on the \"Search\" tab, located on your cluster page in between the \"Collections\" and \"Profiler\" tabs.\n\n![Find the Atlas Search Tab\n\nThe second step is to click on the \"Create Index\" button in the upper right hand corner, or if this is your first Index, it will be located in the middle of the page. \n\nOnce you reach this page, go ahead and click \"Next\" and continue on to the page where you will name your Index and set it all up! \n\nClick \"Next\" and you\u2019ll be able to create your very own search index! \n\nOnce you create your search index, you can go back into it and then edit your index definition using the JSON editor to include what you need. The index we wrote for this tutorial is below: \n\n```json\n{\n \"mappings\": {\n \"dynamic\": true\n },\n \"synonyms\": \n {\n \"analyzer\": \"lucene.standard\",\n \"name\": \"slang\",\n \"source\": {\n \"collection\": \"slang\"\n }\n }\n ]\n}\n```\n\nLet\u2019s run through this! \n\n```json\n{\n \"mappings\": {\n \"dynamic\": true\n},\n```\n\nYou have the option of choosing between dynamic and static for your search index, and this can be up to your discretion. To find more information on the difference between dynamic and static mappings, check out the [documentation.\n\n```json\n\"synonyms\": \n {\n \"analyzer\": \"lucene.standard\",\n \"name\": \"slang\",\n \"source\": {\n \"collection\": \"slang\"\n }\n }\n]\n```\n\nThis section refers to the synonyms associated with the search index. In this example, we\u2019re giving this synonym mapping a name of \"slang,\" and we\u2019re using the default index analyzer on the synonym data, which can be found in the slang collection. 
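\n\nIf you would rather manage the index from the shell than through the Atlas UI, recent versions of mongosh on Atlas expose a `createSearchIndex()` helper that accepts this same JSON definition. Treat the snippet below as a sketch and confirm that your cluster and shell versions support the helper; the index name `synsearch` matches the one used in the queries that follow:\n\n```javascript\nuse(\"synonyms\");\n\n// Create the Atlas Search index, including the synonym mapping, from the shell\ndb.tweets.createSearchIndex(\"synsearch\", {\n  \"mappings\": { \"dynamic\": true },\n  \"synonyms\": [\n    {\n      \"analyzer\": \"lucene.standard\",\n      \"name\": \"slang\",\n      \"source\": { \"collection\": \"slang\" }\n    }\n  ]\n});\n```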
\n\n## Searching with Synonyms with the MongoDB Aggregation Pipeline\n\nOur next step is to put together the search query that will actually filter through your tweet collection and find the tweets you want using synonyms! \n\nThe code we used for this part is below:\n\n```javascript\nuse(\"synonyms\");\n\ndb.tweets.aggregate([\n {\n \"$search\": {\n \"index\": \"synsearch\",\n \"text\": {\n \"query\": \"throw\",\n \"path\": \"full_text\",\n \"synonyms\": \"slang\"\n }\n }\n }\n]);\n```\n\nWe want to search through our tweets and find the documents containing synonyms for our query \"throw.\" This is the synonym document for \"throw\":\n\n```json\n{\n \"mappingType\": \"equivalent\",\n \"synonyms\": [\"yeet\", \"throw\", \"agree\"]\n},\n```\n\nRemember to include the name of your search index from earlier (synsearch). Then, the query we\u2019re specifying is \"throw.\" This means we want to see tweets that include \"yeet,\" \"throw,\" and \"agree\" once we run this script. \n\nThe \u2018path\u2019 represents the field we want to search within, and in this case, we are searching for \"throw\" only within the \u2018full_text\u2019 field of the documents and no other field. Last but not least, we want to use synonyms found in the collection we have named \"slang.\" \n\nBased on this query, any matches found will include the entire document in the result-set. To better streamline this, we can use a `$project` aggregation stage to specify the fields we\u2019re interested in. This transforms our query into the following aggregation pipeline:\n\n```javascript\ndb.tweets.aggregate([\n {\n \"$search\": {\n \"index\": \"synsearch\",\n \"text\": {\n \"query\": \"throw\",\n \"path\": \"full_text\",\n \"synonyms\": \"slang\"\n }\n }\n },\n {\n \"$project\": {\n \"_id\": 1,\n \"full_text\": 1,\n \"username\": \"$user.screen_name\"\n }\n }\n]);\n```\n\nAnd these are our results! \n\n```json\n[\n {\n \"_id\": 1420084484922347500,\n \"full_text\": \"not to throw shade on SQL databases, but MongoDB SLAPS\",\n \"username\": \"codeSTACKr\"\n },\n {\n \"_id\": 1420088203499884500,\n \"full_text\": \"Yeet all your data into a MongoDB collection and watch the magic happen! No cap, we are efficient \ud83d\udcaa\",\n \"username\": \"nraboy\"\n }\n]\n```\n\nJust as we wanted, we have tweets that include the word \"throw\" and the word \"yeet!\" \n\n## Conclusion\n\nWe\u2019ve accomplished a **ton** in this tutorial, and we hope you\u2019ve enjoyed following along. Now, you are set with the knowledge to load in data from external sources, create your list of explicit or equivalent synonyms and insert it into a collection, and write your own index search script. Synonyms can be useful in a multitude of ways, not just isolated to Gen-Z slang. From figuring out regional variations (e.g., soda = pop), to finding typos that cannot be easily caught with autocomplete, incorporating synonyms will help save you time and a thesaurus. \n\nUsing synonyms in Atlas Search will improve your app\u2019s search functionality and will allow you to find the data you\u2019re looking for, even when you can\u2019t quite put your finger on it. \n\nIf you want to take a look at the code, queries, and indexes used in this blog post, check out the project on [GitHub. 
If you want to learn more about synonyms in Atlas Search, check out the documentation.\n\nIf you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Node.js"], "pageDescription": "Learn how to define your own custom synonyms for use with MongoDB Atlas Search in this example with features searching within slang found in Twitter messages.", "contentType": "Tutorial"}, "title": "Get Hyped: Synonyms in Atlas Search", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/subset-pattern", "action": "created", "body": "# Building with Patterns: The Subset Pattern\n\nSome years ago, the first PCs had a whopping 256KB of RAM and dual 5.25\"\nfloppy drives. No hard drives as they were incredibly expensive at the\ntime. These limitations resulted in having to physically swap floppy\ndisks due to a lack of memory when working with large (for the time)\namounts of data. If only there was a way back then to only bring into\nmemory the data I frequently used, as in a subset of the overall data.\n\nModern applications aren't immune from exhausting resources. MongoDB\nkeeps frequently accessed data, referred to as the working set,\nin RAM. When the working set of data and indexes grows beyond the\nphysical RAM allotted, performance is reduced as disk accesses starts to\noccur and data rolls out of RAM.\n\nHow can we solve this? First, we could add more RAM to the server. That\nonly scales so much though. We can look at\nsharding\nour collection, but that comes with additional costs and complexities\nthat our application may not be ready for. Another option is to reduce\nthe size of our working set. This is where we can leverage the Subset\nPattern.\n\n## The Subset Pattern\n\nThis pattern addresses the issues associated with a working set that\nexceeds RAM, resulting in information being removed from memory. This is\nfrequently caused by large documents which have a lot of data that isn't\nactually used by the application. What do I mean by that exactly?\n\nImagine an e-commerce site that has a list of reviews for a product.\nWhen accessing that product's data it's quite possible that we'd only\nneed the most recent ten or so reviews. Pulling in the entirety of the\nproduct data with **all** of the reviews could easily cause the working\nset to expand.\n\nInstead of storing all the reviews with the product, we can split the\ncollection into two collections. One collection would have the most\nfrequently used data, e.g. current reviews and the other collection\nwould have less frequently used data, e.g. old reviews, product history,\netc. We can duplicate part of a 1-N or N-N relationship that is used by\nthe most used side of the relationship.\n\nIn the **Product** collection, we'll only keep the ten most recent\nreviews. This allows the working set to be reduced by only bringing in a\nportion, or subset, of the overall data. The additional information,\nreviews in this example, are stored in a separate **Reviews** collection\nthat can be accessed if the user wants to see additional reviews. When\nconsidering where to split your data, the most used part of the document\nshould go into the \"main\" collection and the less frequently used data\ninto another. 
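\n\nAs a rough sketch (collection and field names here are illustrative,\nnot prescriptive), the product document keeps only a small\n`recent_reviews` array, while the full review history lives in a\nseparate collection that is queried only on demand:\n\n```javascript\n// Main collection: the product carries just its ten most recent reviews\ndb.products.findOne(\n  { _id: 123 },\n  { name: 1, price: 1, recent_reviews: 1 }\n);\n\n// Secondary collection: fetch older reviews only when the user asks,\n// skipping the ten already embedded in the product document\ndb.reviews.find({ product_id: 123 })\n  .sort({ date: -1 })\n  .skip(10)\n  .limit(10);\n```\n\n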
For our reviews, that split might be the number of reviews\nvisible on the product page.\n\n## Sample Use Case\n\nThe Subset Pattern is very useful when we have a large portion of data\ninside a document that is rarely needed. Product reviews, article\ncomments, actors in a movie are all examples of use cases for this\npattern. Whenever the document size is putting pressure on the size of\nthe working set and causing the working set to exceed the computer's RAM\ncapacities, the Subset Pattern is an option to consider.\n\n## Conclusion\n\nBy using smaller documents with more frequently accessed data, we reduce\nthe overall size of the working set. This allows for shorter disk access\ntimes for the most frequently used information that an application\nneeds. One tradeoff that we must make when using the Subset Pattern is\nthat we must manage the subset and also if we need to pull in older\nreviews or all of the information, it will require additional trips to\nthe database to do so.\n\nThe next post in this series will look at the features and benefits of\nthe Extended Reference Pattern.\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Over the course of this blog post series, we'll take a look at twelve common Schema Design Patterns that work well in MongoDB.", "contentType": "Tutorial"}, "title": "Building with Patterns: The Subset Pattern", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/lessons-learned-building-game-mongodb-unity", "action": "created", "body": "# Lessons Learned from Building a Game with MongoDB and Unity\n\nBack in September 2020, my colleagues Nic\nRaboy, Karen\nHuaulme, and I decided to learn how to\nbuild a game. At the time, Fall Guys was what\nour team enjoyed playing together, so we set a goal for ourselves to\nbuild a similar game. We called it Plummeting People! Every week, we'd\nstream our process live on Twitch.\n\nAs you can imagine, building a game is not an easy task; add to that\nfact that we were all mostly new to game development and you have a\nwhirlwind of lessons learned while learning in public. After spending\nthe last four months of 2020 building and streaming, I've compiled the\nlessons learned while going through this process, starting from the\nfirst stream.\n\n>\n>\n>\ud83d\udcfa\ufe0f Watch the full series\n>here\n>(YouTube playlist)! And the\n>repo is\n>available too.\n>\n>\n\n## Stream 1: Designing a Strategy to Develop a Game with Unity and MongoDB\n\nAs with most things in life, ambitious endeavors always start out with\nsome excitement, energy, and overall enthusiasm for the new thing ahead.\nThat's exactly how our game started! Nic, Karen, and I had a wonderful\nstream setting the foundation for our game. What were we going to build?\nWhat tools would we use? What were our short-term and long-term goals?\nWe laid it all out on a nice Jamboard. 
We even incorporated our chat's\nideas and suggestions!\n\n>\n>\n>:youtube]{vid=XAvy2BouZ1Q}\n>\n>Watch us plan our strategy for Plummeting People here!\n>\n>\n\n### Lessons Learned\n\n- It's always good to have a plan; it's even better to have a flexible\n plan.\n- Though we separated our ideas into logical sections on our Jamboard,\n it would have been more helpful to have rough deadlines and a\n solidified understanding of what our minimum viable product (MVP)\n was going to be.\n\n## Stream 2: Create a User Profile Store for a Game with MongoDB, Part 1\n\n>\n>\n>\n>\n>\n\n## Stream 3: Create a User Profile Store for a Game with MongoDB, Part 2\n\n>\n>\n>\n>\n>\n\n### Lessons Learned\n\n- Sometimes, things will spill into an additional stream, as seen\n here. In order to fully show how a user profile store could work, we\n pushed the remaining portions into another stream, and that's OK!\n\n## Stream 4: 2D Objects and 2D Physics\n\n>\n>\n>\n>\n>\n\n## Stream 5: Using Unity's Tilemap Creator\n\n>\n>\n>\n>\n>\n\n### Lessons Learned\n\n- Teammates get sick every once in a while! As you saw in the last two\n streams, I was out, and then Karen was out. Having an awesome team\n to cover you makes it much easier to be consistent when it comes to\n streaming.\n- Tilemap editors are a pretty neat and non-intimidating way to begin\n creating custom levels in Unity!\n\n## Stream 6: Adding Obstacles and Other Physics to a 2D Game\n\n>\n>\n>\n>\n>\n\n### Lessons Learned\n\n- As you may have noticed, we changed our streaming schedule from\n weekly to every other week, and that helped immensely. With all of\n the work we do as Developer Advocates, setting the ambitious goal of\n streaming every week left us no time to focus and learn more about\n game development!\n- Sometimes, reworking the schedule to make sure there's **more**\n breathing room for you is what's needed.\n\n## Stream 7: Making Web Requests from Unity\n\n>\n>\n>\n>\n>\n\n## Stream 8: Programmatically Generating GameObjects in Unity\n\n>\n>\n>\n>\n>\n\n### Lessons Learned\n\n- As you become comfortable with a new technology, it's OK to go back\n and improve what you've built upon! In this case, we started out\n with a bunch of game objects that were copied and pasted, but found\n that the proper way was to 1) create prefabs and 2) programmatically\n generate those game objects.\n- Learning in public and showing these moments make for a more\n realistic display of how we learn!\n\n## Stream 9: Talking Some MongoDB, Playing Some Fall Guys\n\n>\n>\n>\n>\n>\n\n### Lessons Learned\n\n- Sometimes, you gotta play video games for research! That's exactly\n what we did while also taking a much needed break.\n- It's also always fun to see the human side of things, and that part\n of us plays a lot of video games!\n\n## Stream 10: A Recap on What We've Built\n\n>\n>\n>\n>\n>\n\n### Lessons Learned\n\n- After season one finished, it was rewarding to see what we had\n accomplished so far! It sometimes takes a reflective episode like\n this one to see that consistent habits do lead to something.\n- Though we didn't get to everything we wanted to in our Jamboard, we\n learned way more about game development than ever before.\n- We also learned how to improve for our next season of game\n development streams. One of those factors is focusing on one full\n game a month! 
You can [catch the first one\n here, where Nic and I build an\n infinite runner game in Unity.\n\n## Summary\n\nI hope this article has given you some insight into learning in public,\nwhat it takes to stream your learning process, and how to continue\nimproving!\n\n>\n>\n>\ud83d\udcfa\ufe0f Don't forget to watch the full season\n>here\n>(YouTube playlist)! And poke around in the code by cloning our\n>repo.\n>\n>\n\nIf you're interested in learning more about game development, check out\nthe following resources:\n\n- Creating a Multiplayer Drawing Game with Phaser and\n MongoDB\n- Build an Infinite Runner Game with Unity and the Realm Unity\n SDK\n- Developing a Side-Scrolling Platformer Game with Unity and MongoDB\n Realm\n\nIf you have any questions about any of our episodes from this season, I\nencourage you to join the MongoDB\nCommunity. It's a great place to ask\nquestions! And if you tag me `@adriennetacke`, I'll be able to see your\nquestions.\n\nLastly, be sure to follow us on Twitch\nso you never miss a stream! Nic and I will be doing our next game dev\nstream on March 26, so see you there!\n\n", "format": "md", "metadata": {"tags": ["Realm", "Unity"], "pageDescription": "After learning how to build a game in public, see what lessons Adrienne learned while building a game with MongoDB and Unity", "contentType": "Article"}, "title": "Lessons Learned from Building a Game with MongoDB and Unity", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/node-connect-mongodb", "action": "created", "body": "# Connect to a MongoDB Database Using Node.js\n\n \n\nUse Node.js? Want to learn MongoDB? This is the blog series for you!\n\nIn this Quick Start series, I'll walk you through the basics of how to get started using MongoDB with Node.js. In today's post, we'll work through connecting to a MongoDB database from a Node.js script, retrieving a list of databases, and printing the results to your console.\n\n>\n>\n>This post uses MongoDB 4.4, MongoDB Node.js Driver 3.6.4, and Node.js 14.15.4.\n>\n>Click here to see a previous version of this post that uses MongoDB 4.0, MongoDB Node.js Driver 3.3.2, and Node.js 10.16.3.\n>\n>\n\n>\n>\n>Prefer to learn by video? I've got ya covered. Check out the video below that covers how to get connected as well as how to perform the CRUD operations.\n>\n>:youtube]{vid=fbYExfeFsI0}\n>\n>\n\n## Set Up\n\nBefore we begin, we need to ensure you've completed a few prerequisite steps.\n\n### Install Node.js\n\nFirst, make sure you have a supported version of Node.js installed. The current version of MongoDB Node.js Driver requires Node 4.x or greater. For these examples, I've used Node.js 14.15.4. See the [MongoDB Compatability docs for more information on which version of Node.js is required for each version of the Node.js driver.\n\n### Install the MongoDB Node.js Driver\n\nThe MongoDB Node.js Driver allows you to easily interact with MongoDB databases from within Node.js applications. You'll need the driver in order to connect to your database and execute the queries described in this Quick Start series.\n\nIf you don't have the MongoDB Node.js Driver installed, you can install it with the following command.\n\n``` bash\nnpm install mongodb\n```\n\nAt the time of writing, this installed version 3.6.4 of the driver. Running `npm list mongodb` will display the currently installed driver version number. 
For more details on the driver and installation, see the official documentation.\n\n### Create a Free MongoDB Atlas Cluster and Load the Sample Data\n\nNext, you'll need a MongoDB database. The easiest way to get started with MongoDB is to use Atlas, MongoDB's fully-managed database-as-a-service.\n\nHead over to Atlas and create a new cluster in the free tier. At a high level, a cluster is a set of nodes where copies of your database will be stored. Once your tier is created, load the sample data. If you're not familiar with how to create a new cluster and load the sample data, check out this video tutorial from MongoDB Developer Advocate Maxime Beugnet.\n\n>\n>\n>Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.\n>\n>\n\n### Get Your Cluster's Connection Info\n\nThe final step is to prep your cluster for connection.\n\nIn Atlas, navigate to your cluster and click **CONNECT**. The Cluster Connection Wizard will appear.\n\nThe Wizard will prompt you to add your current IP address to the IP Access List and create a MongoDB user if you haven't already done so. Be sure to note the username and password you use for the new MongoDB user as you'll need them in a later step.\n\nNext, the Wizard will prompt you to choose a connection method. Select **Connect Your Application**. When the Wizard prompts you to select your driver version, select **Node.js** and **3.6 or later**. Copy the provided connection string.\n\nFor more details on how to access the Connection Wizard and complete the steps described above, see the official documentation.\n\n## Connect to Your Database from a Node.js Application\n\nNow that everything is set up, it's time to code! Let's write a Node.js script that connects to your database and lists the databases in your cluster.\n\n### Import MongoClient\n\nThe MongoDB module exports `MongoClient`, and that's what we'll use to connect to a MongoDB database. We can use an instance of MongoClient to connect to a cluster, access the database in that cluster, and close the connection to that cluster.\n\n``` js\nconst { MongoClient } = require('mongodb');\n```\n\n### Create our Main Function\n\nLet's create an asynchronous function named `main()` where we will connect to our MongoDB cluster, call functions that query our database, and disconnect from our cluster.\n\n``` js\nasync function main() {\n // we'll add code here soon\n}\n```\n\nThe first thing we need to do inside of `main()` is create a constant for our connection URI. The connection URI is the connection string you copied in Atlas in the previous section. When you paste the connection string, don't forget to update `` and `` to be the credentials for the user you created in the previous section. The connection string includes a `` placeholder. For these examples, we'll be using the `sample_airbnb` database, so replace `` with `sample_airbnb`.\n\n**Note**: The username and password you provide in the connection string are NOT the same as your Atlas credentials.\n\n``` js\n/**\n* Connection URI. 
Update , , and to reflect your cluster.\n* See https://docs.mongodb.com/ecosystem/drivers/node/ for more details\n*/\nconst uri = \"mongodb+srv://:@/sample_airbnb?retryWrites=true&w=majority\"; \n```\n\nNow that we have our URI, we can create an instance of MongoClient.\n\n``` js\nconst client = new MongoClient(uri);\n```\n\n**Note**: When you run this code, you may see DeprecationWarnings around the URL string `parser` and the Server Discover and Monitoring engine. If you see these warnings, you can remove them by passing options to the MongoClient. For example, you could instantiate MongoClient by calling `new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true })`. See the Node.js MongoDB Driver API documentation for more information on these options.\n\nNow we're ready to use MongoClient to connect to our cluster. `client.connect()` will return a promise. We will use the await keyword when we call `client.connect()` to indicate that we should block further execution until that operation has completed.\n\n``` js\nawait client.connect();\n```\n\nWe can now interact with our database. Let's build a function that prints the names of the databases in this cluster. It's often useful to contain this logic in well named functions in order to improve the readability of your codebase. Throughout this series, we'll create new functions similar to the function we're creating here as we learn how to write different types of queries. For now, let's call a function named `listDatabases()`.\n\n``` js\nawait listDatabases(client);\n```\n\nLet's wrap our calls to functions that interact with the database in a `try/catch` statement so that we handle any unexpected errors.\n\n``` js\ntry {\n await client.connect();\n\n await listDatabases(client);\n\n} catch (e) {\n console.error(e);\n}\n```\n\nWe want to be sure we close the connection to our cluster, so we'll end our `try/catch` with a finally statement.\n\n``` js\nfinally {\n await client.close();\n}\n```\n\nOnce we have our `main()` function written, we need to call it. Let's send the errors to the console.\n\n``` js\nmain().catch(console.error);\n```\n\nPutting it all together, our `main()` function and our call to it will look something like the following.\n\n``` js\nasync function main(){\n /**\n * Connection URI. Update , , and to reflect your cluster.\n * See https://docs.mongodb.com/ecosystem/drivers/node/ for more details\n */\n const uri = \"mongodb+srv://:@/sample_airbnb?retryWrites=true&w=majority\";\n\n const client = new MongoClient(uri);\n\n try {\n // Connect to the MongoDB cluster\n await client.connect();\n\n // Make the appropriate DB calls\n await listDatabases(client);\n\n } catch (e) {\n console.error(e);\n } finally {\n await client.close();\n }\n}\n\nmain().catch(console.error);\n```\n\n### List the Databases in Our Cluster\n\nIn the previous section, we referenced the `listDatabases()` function. Let's implement it!\n\nThis function will retrieve a list of databases in our cluster and print the results in the console.\n\n``` js\nasync function listDatabases(client){\n databasesList = await client.db().admin().listDatabases();\n\n console.log(\"Databases:\");\n databasesList.databases.forEach(db => console.log(` - ${db.name}`));\n};\n```\n\n### Save Your File\n\nYou've been implementing a lot of code. Save your changes, and name your file something like `connection.js`. 
To see a copy of the complete file, visit the nodejs-quickstart GitHub repo.\n\n### Execute Your Node.js Script\n\nNow you're ready to test your code! Execute your script by running a command like the following in your terminal: `node connection.js`\n\nYou will see output like the following:\n\n``` js\nDatabases:\n - sample_airbnb\n - sample_geospatial\n - sample_mflix\n - sample_supplies\n - sample_training\n - sample_weatherdata\n - admin\n - local\n```\n\n## What's Next?\n\nToday, you were able to connect to a MongoDB database from a Node.js script, retrieve a list of databases in your cluster, and view the results in your console. Nice!\n\nNow that you're connected to your database, continue on to the next post in this series where you'll learn to execute each of the CRUD (create, read, update, and delete) operations.\n\nIn the meantime, check out the following resources:\n\n- Official MongoDB Documentation on the MongoDB Node.js Driver\n- MongoDB University Free Courses: MongoDB for Javascript Developers\n\nQuestions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.\n", "format": "md", "metadata": {"tags": ["JavaScript", "Node.js"], "pageDescription": "Node.js and MongoDB is a powerful pairing and in this Quick Start series we show you how.", "contentType": "Quickstart"}, "title": "Connect to a MongoDB Database Using Node.js", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/preparing-tsdata-with-densify-and-fill", "action": "created", "body": "# Preparing Time Series data for Analysis Tools with $densify and $fill\n\n## Densification and Gap Filling of Time Series Data\n\nTime series data refers to recordings of continuous values at specific points in time. This data is then examined, not as individual data points, but as how a value either changes over time or correlates with other values in the same time period. \n\nNormally, data points would have a timestamp, one or more metadata values to identify what the source of the data is and one or more values known as measurements. For example, a stock ticker would have a time, stock symbol (metadata), and price (measurement), whereas aircraft tracking data might have time, tail number, and multiple measurement values such as speed, heading, altitude, and rate of climb. When we record this data in MongoDB, we may also include additional category metadata to assist in the analysis. For example, in the case of flight tracking, we may store the tail number as a unique identifier but also the aircraft type and owner as additional metadata, allowing us to analyze data based on broader categories.\n\nAnalysis of time series data is usually either to identify a previously unknown correlation between data points or to try and predict patterns and thus, future readings. There are many tools and techniques, including machine learning and Fourier analysis, applied to examine the changes in the stream of data and predict what future readings might be. In the world of high finance, entire industries and careers have been built around trying to say what a stock price will do next.\n\nSome of these analytic techniques require the data to be in a specific form, having no missing readings and regularly spaced time periods, or both, but data is not always available in that form.\n\nSome data is regularly spaced; an industrial process may record sensor readings at precise intervals. There are, however, reasons for data to be missing. 
This could be software failure or for external social reasons. Imagine we are examining the number of customers at stations on the London underground. We could have a daily count that we use to predict usage in the future, but a strike or other external event may cause that traffic to drop to zero on given days. From an analytical perspective, we want to replace that missing count or count of zero with a more typical value.\n\nSome data sets are inherently irregular. When measuring things in the real world, it may not be possible to take readings on a regular cadence because they are discrete events that happen irregularly, like recording tornados or detected neutrinos. Other real-world data may be a continuous event but we are only able to observe it at random times. Imagine we are tracking a pod of whales across the ocean. They are somewhere at all times but we can only record them when we see them, or when a tracking device is within range of a receiver.\n\nHaving given these examples, it\u2019s easier to explain the actual functionality available for densification and gap-filling using a generic concept of time and readings rather than specific examples, so we will do that below. These aggregation stages work on both time series and regular collections.\n\n## Add missing data points with $densify\n\nThe aggregation stage `$densify` added in MongoDB 5.2 allows you to create missing documents in the series either by filling in a document where one is not present in a regularly spaced set or by inserting documents at regularly spaced intervals between the existing data points in an irregularly spaced set.\n\nImagine we have a data set where we get a reading once a minute, but sometimes, we are missing readings. We can create data like this in the **mongosh** shell spanning the previous 20 minutes using the following script. This starts with creating a record with the current time and then subtracts 60000 milliseconds from it until we have 20 documents. It also fails to insert any document where the iterator divides evenly by 7 to create missing records.\n\n```\ndb=db.getSiblingDB('tsdemo')\ndb.data.drop()\n\nlet timestamp =new Date()\nfor(let reading=0;reading<20;reading++) {\n timestamp = new Date(timestamp.getTime() - 60000)\n if(reading%7) db.data.insertOne({reading,timestamp})\n}\n```\n\nWhilst we can view these as text using db.data.find() , it\u2019s better if we can visualize them. Ideally, we would use **MongoDB Charts** for this. However, these functions are not yet all available to us in Atlas and Charts with the free tier, so I\u2019m using a local, pre-release installation of **MongoDB 5.3** and the mongosh shell writing out a graph in HTML. We can define a graphing function by pasting the following code into mongosh or saving it in a file and loading it with the `load()` command in mongosh. 
*Note that you need to modify the word **open** in the script below as per the comments to match the command your OS uses to open an HTML file in a browser.*\n\n```\nfunction graphTime(data)\n{\n let fs=require(\"fs\")\n let exec = require('child_process').exec;\n let content = `\n \n \n \n `\n \n try {\n let rval = fs.writeFileSync('graph.html', content)\n //Linux use xdg-open not open\n //Windows use start not open\n //Mac uses open\n rval = exec('open graph.html',null); //\u2190---- ADJUST FOR OS\n } catch (err) {\n console.error(err)\n } \n}\n```\n\nNow we can view the sample data we added by running a query and passing it to the function.\n\n```\nlet tsdata = db.data.find({},{_id:0,y:\"$reading\",x:\"$timestamp\"}).toArray()\n\ngraphTime(tsdata)\n```\n\nAnd we can see our data points plotted like so\n\nIn this graph, the thin vertical grid lines show minutes and the blue dots are our data points. Note that the blue dots are evenly spaced horizontally although they do not align with exact minutes. A reading that is taken every minute doesn\u2019t require that it\u2019s taken exactly at 0 seconds of that minute. We can see we\u2019re missing a couple of points.\n\nWe can add these points when reading the data using `$densify`. Although we will not initially have a value for them, we can at least create placeholder documents with the correct timestamp.\n\nTo do this, we read the data using a two stage aggregation pipeline as below, specifying the field we need to add, the magnitude of the time between readings, and whether we wish to apply it to some or all of the data points. We can also have separate scales based on data categories adding missing data points for each distinct airplane or sensor, for example. In our case, we will apply to all the data as we are reading just one metric in our simple example.\n\n```\nlet densify = { $densify : { field: \"timestamp\", \n range: { step: 1, unit: \"minute\", bounds: \"full\" }}}\n\nlet projection = {$project: {_id:0, y: {$ifNull:\"$reading\",0]},x:\"$timestamp\"}}\n\nlet tsdata = db.data.aggregate([densify,projection]).toArray()\n\ngraphTime(tsdata)\n```\n\nThis pipeline adds new documents with the required value of timestamp wherever one is missing. It doesn\u2019t add any other fields to these documents, so they wouldn\u2019t appear on our graph. The created documents look like this, with no *reading* or *_id* field.\n\n```\n{\n timestamp : ISODate(\"2022-03-23T17:55:32.485Z\")\n}\n```\n\nTo fix this, I have followed that up with a projection that sets the reading to 0 if it does not exist using [`$ifNull`. This is called zero filling and gives output like so.\n\nTo be useful, we almost certainly need to get a better estimate than zero for these missing readings\u2014we can do this using `$fill`.\n\n## Using $fill to approximate missing readings\n\nThe aggregation stage `$fill` was added in MongoDB 5.3 and can replace null or missing readings in documents by estimating them based on the non null values either side (ignoring nulls allows it to account for multiple missing values in a row). We still need to use `$densify` to add the missing documents in the first place but once we have them, rather than add a zero reading using `$set` or `$project`, we can use `$fill` to calculate more meaningful values.\n\nTo use `$fill`, you need to be able to sort the data in a meaningful fashion, as missing readings will be derived from the readings that fall before and after them. 
In many cases, you will sort by time, although other interval data can be used.\n\nWe can compute missing values like so, specifying the field to order by, the field we want to add if it's missing, and the method\u2014in this case, `locf`, which repeats the same value as the previous data point.\n\n```\nlet densify = { $densify : { field: \"timestamp\", \n range: { step: 1, unit: \"minute\", bounds : \"full\" }}}\n\nlet fill = { $fill : { sortBy: { timestamp:1}, \n output: { reading : { method: \"locf\"}}}} \n\nlet projection = {$project: {_id:0,y:\"$reading\" ,x:\"$timestamp\"}}\n\nlet tsdata = db.data.aggregate(densify,fill,projection]).toArray()\n\ngraphTime(tsdata)\n```\n\nThis creates a set of values like this.\n\n![\n\nIn this case, though, those added points look wrong. Simply choosing to repeat the prior reading isn't ideal here. What we can do instead is apply a linear interpolation, drawing an imaginary line between the points before and after the gap and taking the point where our timestamp intersects that line. For this, we change `locf` to `linear` in our `$fill`.\n\n```\nlet densify = { $densify : { field: \"timestamp\", \n range : { step: 1, unit: \"minute\", bounds : \"full\" }}}\n\nlet fill = { $fill : { sortBy: { timestamp:1},\n output: { reading : { method: \"linear\"}}}} \n\nlet projection = {$project: {_id:0,y:\"$reading\" ,x:\"$timestamp\"}}\n\nlet tsdata = db.data.aggregate(densify,fill,projection]).toArray()\ngraphTime(tsdata)\n```\n\nNow we get the following graph, which, in this case, seems much more appropriate.\n\n![\n\nWe can see how to add missing values in regularly spaced data but how do we convert irregularly spaced data to regularly spaced, if that is what our analysis requires?\n\n## Converting uneven to evenly spaced data with $densify and $fill\n\nImagine we have a data set where we get a reading approximately once a minute, but unevenly spaced. Sometimes, the time between readings is 20 seconds, and sometimes it's 80 seconds. On average, it's once a minute, but the algorithm we want to apply to it needs evenly spaced data. This time, we will create aperiodic data like this in the **mongosh** shell spanning the previous 20 minutes, with some variation in the timing and a steadily decreasing reading.\n\n```\ndb.db.getSiblingDB('tsdemo')\n\ndb.data.drop()\n\nlet timestamp =new Date()\nlet start = timestamp;\nfor(let i=0;i<20;i++) {\n timestamp = new Date(timestamp.getTime() - Math.random()*60000 - 20000)\n let reading = (start-timestamp)/60000\n db.data.insertOne({reading,timestamp})\n}\n```\n\nWhen we plot this, we can see that the points are no longer evenly spaced. We require periodic data for our downstream analysis work, though, so how can we fix that? We cannot simply quantise the times in the existing readings. We may not even have one for each minute, and the values would be inaccurate for the time.\n\n```\nlet tsdata = db.data.find({},{_id:0,y:\"$reading\",x:\"$timestamp\"}).toArray()\n\ngraphTime(tsdata)\n```\n\nWe can solve this by using $densify to add the points we require, $fill to compute their values based on the nearest value from our original set, and then remove the original records from the set. We need to add an extra field to the originals before densification to identify them. We can do that with $set. Note that this is all inside the aggregation pipeline. 
We aren\u2019t editing records in the database, so there is no significant cost associated with this.\n\n```\nlet flagOriginal = {$set: {original:true}}\n\nlet densify = { $densify: { field: \"timestamp\",\n range: { step: 1, unit: \"minute\", bounds : \"full\" }}}\n\nlet fill = { $fill : { sortBy: { timestamp:1},\n output: { reading : { method: \"linear\"} }}} \n\nlet projection = {$project: {_id:0,y:\"$reading\" ,x:\"$timestamp\"}}\n\nlet tsdata = db.data.aggregate(flagOriginal, densify,fill,projection]).toArray()\ngraphTime(tsdata)\n```\n\n![\n\n \n \n\nWe now have approximately double the number of data points, original and generated\u2014but we can use $match to remove those we flagged as existing pre densification.\n\n```\nlet flagOriginal = {$set : {original:true}}\n\nlet densify = { $densify : { field: \"timestamp\",\n range: { step: 1, unit: \"minute\", bounds : \"full\" }}}\n\nlet fill = { $fill : { sortBy: { timestamp:1},\n output: { reading : { method: \"linear\"} }}} \n\nlet removeOriginal = { $match : { original : {$ne:true}}}\n\nlet projection = {$project: {_id:0,y:\"$reading\" ,x:\"$timestamp\"}}\n\nlet tsdata = db.data.aggregate(flagOriginal, densify,fill,\n removeOriginal, projection]).toArray()\n\ngraphTime(tsdata)\n```\n\n![\n\nFinally, we have evenly spaced data with values calculated based on the data points we did have. We would have filled in any missing values or large gaps in the process.\n\n## Conclusions\n\nThe new stages `$densify` and `$fill` may not initially seem very exciting but they are key tools in working with time series data. Without `$densify`, there is no way to meaningfully identify and add missing records in a time series. The $fill stage greatly simplifies the process of computing missing values compared to using `$setWindowFields` and writing an expression to determine the value using the $linear and $locf expressions or by computing a moving average.\n\nThis then opens up the possibility of using a wide range of time series analysis algorithms in Python, R, Spark, and other analytic environments.\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn through examples and graphs how the aggregation stages $densify and $fill allow you to fill gaps in time series data and convert irregular to regular time spacing. ", "contentType": "Tutorial"}, "title": "Preparing Time Series data for Analysis Tools with $densify and $fill", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/developing-side-scrolling-platformer-game-unity-mongodb-realm", "action": "created", "body": "# Developing a Side-Scrolling Platformer Game with Unity and MongoDB Realm\n\nI've been a gamer since the 1990s, so 2D side-scrolling platformer games like Super Mario Bros.\u00a0hold a certain place in my heart. Today, 2D games are still being created, but with the benefit of having connectivity to the internet, whether that be to store your player state information, to access new levels, or something else.\n\nEvery year, MongoDB holds an internal company-wide hackathon known as Skunkworks. During Skunkworks, teams are created and using our skills and imagination, we create something to make MongoDB better or something that uses MongoDB in a neat way. For Skunkworks 2020, I (Nic Raboy) teamed up with Barry O'Neill to create a side-scrolling platformer game with Unity that queries and sends data between MongoDB and the game. 
Internally, this project was known as The Untitled Leafy Game.\n\nIn this tutorial, we're going to see what went into creating a game like The Untitled Leafy Game using Unity as the game development framework and MongoDB Realm for data storage and back end.\n\nTo get a better idea of what we're going to accomplish, take a look at the following animated image:\n\nThe idea behind the game is that you are a MongoDB leaf character and you traverse through the worlds to obtain your trophy. As you traverse through the worlds, you can accumulate points by answering questions about MongoDB. These questions are obtained through a remote HTTP request and the answers are validated through another HTTP request.\n\n## The Requirements\n\nThere are a few requirements that must be met prior to starting this tutorial:\n\n- You must be using MongoDB Atlas and MongoDB Realm.\n- You must be using Unity 2020.1.8f1 or more recent.\n- At least some familiarity with Node.js (Realm) and C# (Unity).\n- Your own game graphic assets.\n\nFor this tutorial, MongoDB Atlas will be used to store our data and MongoDB Realm will act as our back end that the game communicates with, rather than trying to access the data directly from Atlas.\n\nMany of the assets in The Untitled Leafy Game were obtained through the Unity Asset Store. For this reason, I won't be able to share them raw in this tutorial. However, they are available for free with a Unity account.\n\nYou can follow along with this tutorial using the source material on GitHub. We won't be doing a step by step reproduction, but we'll be exploring important topics, all of which can be further seen in the project source on GitHub.\n\n## Creating the Game Back End with MongoDB Atlas and MongoDB Realm\n\nIt might seem that MongoDB plays a significant role in this game, but the amount of code to make everything work is actually quite small. This is great because as a game developer, the last thing you want is to worry about fiddling with your back end and database.\n\nIt's important to understand the data model that will represent questions in the game. For this game, we're going to use the following model:\n\n``` json\n{\n \"_id\": ObjectId(\"5f973c8c083f84fa6151ca54\"),\n \"question_text\": \"MongoDB is Awesome!\",\n \"problem_id\": \"abc123\",\n \"subject_area\": \"general\",\n \"answer\": true\n}\n```\n\nThe `question_text` field will be displayed within the game. We can specify which question should be placed where in the game through the `problem_id` field because it will allow us to filter for the document we want. When the player selects an answer, it will be sent back to MongoDB Realm and used as a filter for the `answer` field. The `subject_area` field might be valuable when creating reports at a later date.\n\nIn MongoDB Atlas, the configuration might look like the following:\n\nIn the above example, documents with the proposed data model are stored in the `questions` collection of the `game` database. How you choose to name your collections or even the fields of your documents is up to you.\n\nBecause we'll be using MongoDB Realm rather than a self-hosted application, we need to create webhook functions to act as our back end. Create a Realm application that uses the MongoDB Atlas cluster with our data. 
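\n\nIf your `game.questions` collection is still empty, you can seed it from the mongosh shell with a few documents that follow the model above. The question text and the second `problem_id` below are sample values only:\n\n```javascript\n// Insert a couple of true/false questions matching the data model shown earlier\ndb.getSiblingDB(\"game\").questions.insertMany([\n  {\n    \"question_text\": \"MongoDB stores data as BSON documents.\",\n    \"problem_id\": \"abc123\",\n    \"subject_area\": \"general\",\n    \"answer\": true\n  },\n  {\n    \"question_text\": \"MongoDB collections require a fixed schema.\",\n    \"problem_id\": \"def456\",\n    \"subject_area\": \"general\",\n    \"answer\": false\n  }\n]);\n```\n\nWith a few questions in place, the Realm application will have data to serve.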
The naming of the application does not really matter as long as it makes sense to you.\n\nWithin the MongoDB Realm dashboard, you're going to want to click on **3rd Party Services** to create new webhook functions.\n\nAdd a new **HTTP** service and give it a name of your choosing.\n\nWe'll have the option to create new webhooks and add associated function code to them. The idea is to create two webhooks, a `get_question` for retrieving question information based on an id value and a `checkanswer` for validating a sent answer with an id value.\n\nThe `get_question`, which should accept GET method requests, will have the following JavaScript code:\n\n``` javascript\nexports = async function (payload, response) {\n\n const { problem_id } = payload.query;\n\n const results = await await context.services\n .get(\"mongodb-atlas\")\n .db(\"game\")\n .collection(\"questions\")\n .findOne({ \"problem_id\": problem_id }, { problem_id : 1, question_text : 1 })\n\n response.setBody(JSON.stringify(results));\n\n}\n```\n\nIn the above code, if the function is executed, the query parameters are stored. We are expecting a `problem_id` as a query parameter in any given request. Using that information, we can do a `findOne` with the `problem_id` as the filter. Next, we can specify that we only want the `problem_id` and the `question_text` fields returned for any matched document.\n\nThe `checkanswer` should accept POST requests and will have the following JavaScript code:\n\n``` javascript\nexports = async function (payload, response) {\n\n const query = payload.body.text();\n const filter = EJSON.parse(query);\n\n const results = await context.services\n .get(\"mongodb-atlas\")\n .db(\"game\")\n .collection(\"questions\")\n .findOne({ problem_id: filter.problem_id, answer: filter.answer }, { problem_id : 1, answer: 1 });\n\n response.setBody(results ? JSON.stringify(results) : \"{}\");\n\n}\n```\n\nThe logic between the two functions is quite similar. The difference is that this time, we are expecting a payload to be used as the filter. We are also filtering on both the `problem_id` as well as the `answer` rather than just the `problem_id` field.\n\nAssuming you have questions in your database and you've deployed your webhook functions, you should be able to send HTTP requests to them for testing. As we progress through the tutorial, interacting with the questions will be done through the Unity produced game.\n\n## Designing a Game Scene with Game Objects, Tile Pallets, and Sprites\n\nWith the back end in place, we can start focusing on the game itself. To set expectations, we're going to be using graphic assets from the Unity Asset Store, as previously mentioned in the tutorial. In particular, we're going to be using the Pixel Adventure 1 asset pack which can be obtained for free. This is in combination with some MongoDB custom graphics.\n\nWe're going to assume this is not your first time dabbling with Unity. This means that some of the topics around creating a scene won't be explored from a beginner perspective. It will save us some time and energy and get to the point.\n\nAn example of things that won't be explored include:\n\n- Using the Palette Editor to create a world.\n- Importing media and animating sprites.\n\nIf you want to catch up on some beginner Unity with MongoDB content, check out the series that I did with Adrienne Tacke.\n\nThe game will be composed of worlds also referred to as levels. Each world will have a camera, a player, some question boxes, and a multi-layered tilemap. 
Take the following image for example:\n\nWithin any given world, we have a **GameController** game object. The role of this object is to orchestrate the changing of scenes, something we'll explore later in the tutorial. The **Camera** game object is responsible for following the player position to keep everything within view.\n\nThe **Grid** is the parent game object to each layer of the tilemap, where in our worlds will be composed of three layers. The **Ground** layer will have basic colliders to prevent the player from moving through them, likewise with the **Boundaries** layer. The **Traps** layer will allow for collision detection, but won't actually apply physics. We have separate layers because we want to know when the player interacts with any of them. These layers are composed of tiles from the **Pixel Adventure 1** set and they are the graphical component to our worlds.\n\nTo show text on the screen, we'll need to use a **Canvas** parent game object with a child game object with the **Text** component. This child game object is represented by the **Score** game object. The **Canvas** comes in combination with the **EventSystem** which we will never directly engage with.\n\nThe **Trophy** game object is nothing more than a sprite with an image of a trophy. We will have collision related components attached, but more on that in a moment.\n\nFinally, we have the **Questions** and **QuestionModal** game objects, both of which contain child game objects. The **Questions** group has any number of sprites to represent question boxes in the game. They have the appropriate collision components and when triggered, will interact with the game objects within the **QuestionModal** group. Think of it this way. The player interacts with the question box. A modal or popup displays with the text, possible answers, and a submit button. Each question box will have scripts where you can define which document in the database is associated with them.\n\nIn summary, any given world scene will look like this:\n\n- GameController\n - Camera\n - Grid\n - Ground\n - Boundaries\n - Traps\n - Player\n - QuestionModal\n - ModalBackground\n - QuestionText\n - Dropdown\n - SubmitButton\n - Questions\n - QuestionOne\n - QuestionTwo\n - Trophy\n - Canvas\n - Score\n - EventSystem\n\nThe way you design your game may differ from the above, but it worked for the example that Barry and I did for the MongoDB Skunkworks project.\n\nWe know that every item in the project hierarchy is a game object. The components we add to them define what the game object actually does. Let's figure out what we need to add to make this game work.\n\nThe **Player** game object should have the following components:\n\n- Sprite Renderer\n- Rigidbody 2D\n- Box Collider 2D\n- Animator\n- Script\n\nThe **Sprite Renderer** will show the graphic of our choosing for this particular game object. The **Rigidbody 2D** is the physics applied to the sprite, so how much gravity should be applied and similar. The **Box Collider 2D** represents the region around the image where collisions should be detected. The **Animator** represents the animations and flow that will be assigned to the game object. The **Script**, which in this example we'll call **Player**, will control how this sprite is interacted with. 
We'll get to the script later in the tutorial, but really what matters is the physics and colliders applied.\n\nThe **Trophy** game object and each of the question box game objects will have the same components, with the exception that the rigidbody will be static and not respond to gravity and similar physics events on the question boxes and the **Trophy** won't have any rigidbody. They will also not be animated.\n\n## Interacting with the Game Player and the Environment\n\nAt this point, you should have an understanding of the game objects and components that should be a part of your game world scenes. What we want to do is make the game interactive by adding to the script for the player.\n\nThe **Player** game object should have a script associated to it. Mine is **Player.cs**, but yours could be different. Within this script, add the following:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class Player : MonoBehaviour {\n\n private Rigidbody2D rb2d;\n private Animator animator;\n private bool isGrounded;\n\n Range(1, 10)]\n public float speed;\n\n [Range(1, 10)]\n public float jumpVelocity;\n\n [Range(1, 5)]\n public float fallingMultiplier;\n\n public Score score;\n\n void Start() {\n rb2d = GetComponent();\n animator = GetComponent();\n isGrounded = true;\n }\n\n void FixedUpdate() {\n float horizontalMovement = Input.GetAxis(\"Horizontal\");\n\n if(Input.GetKey(KeyCode.Space) && isGrounded == true) {\n rb2d.velocity += Vector2.up * jumpVelocity;\n isGrounded = false;\n }\n\n if (rb2d.velocity.y < 0) {\n rb2d.velocity += Vector2.up * Physics2D.gravity.y * (fallingMultiplier - 1) * Time.fixedDeltaTime;\n }\n else if (rb2d.velocity.y > 0 && !Input.GetKey(KeyCode.Space)) {\n rb2d.velocity += Vector2.up * Physics2D.gravity.y * (fallingMultiplier - 1) * Time.fixedDeltaTime;\n }\n\n rb2d.velocity = new Vector2(horizontalMovement * speed, rb2d.velocity.y);\n\n if(rb2d.position.y < -10.0f) {\n rb2d.position = new Vector2(0.0f, 1.0f);\n score.Reset();\n }\n }\n\n private void OnCollisionEnter2D(Collision2D collision) {\n if (collision.collider.name == \"Ground\" || collision.collider.name == \"Platforms\") {\n isGrounded = true;\n }\n if(collision.collider.name == \"Traps\") {\n rb2d.position = new Vector2(0.0f, 1.0f);\n score.Reset();\n }\n }\n\n void OnTriggerEnter2D(Collider2D collider) {\n if (collider.name == \"Trophy\") {\n Destroy(collider.gameObject);\n score.BankScore();\n GameController.NextLevel();\n }\n }\n\n}\n```\n\nThe above code could be a lot to take in, so we're going to break it down starting with the variables.\n\n``` csharp\nprivate Rigidbody2D rb2d;\nprivate Animator animator;\nprivate bool isGrounded;\n\n[Range(1, 10)]\npublic float speed;\n\n[Range(1, 10)]\npublic float jumpVelocity;\n\n[Range(1, 5)]\npublic float fallingMultiplier;\n\npublic Score score;\n```\n\nThe `rb2d` variable will be used to obtain the currently added **Rigidbody 2D** component. Likewise, the `animator` variable will obtain the **Animator** component. We'll use `isGrounded` to let us know if the player is currently jumping so that way, we can't jump infinitely.\n\nThe public variables such as `speed`, `jumpVelocity`, and `fallingMultiplier` have to do with our physics. We want to define the movement speed, how fast a jump should happen, and how fast the player should fall when finishing a jump. Finally, the `score` variable will be used to link the **Score** game object to our player script. 
Linking the **Score** game object will allow us to interact with the text in our script.\n\n``` csharp\nvoid Start() {\n    rb2d = GetComponent<Rigidbody2D>();\n    animator = GetComponent<Animator>();\n    isGrounded = true;\n}\n```\n\nOn the first rendered frame, we obtain each of the components and default our `isGrounded` variable.\n\nDuring the `FixedUpdate` method, which runs repeatedly on the physics timestep, we can check for keyboard interaction:\n\n``` csharp\nfloat horizontalMovement = Input.GetAxis(\"Horizontal\");\n\nif(Input.GetKey(KeyCode.Space) && isGrounded == true) {\n    rb2d.velocity += Vector2.up * jumpVelocity;\n    isGrounded = false;\n}\n```\n\nIn the above code, we are checking to see if the horizontal keys are pressed. These can be defined within Unity, but default to the **a** and **d** keys or the left and right arrow keys. If the space key is pressed and the player is currently on the ground, the `jumpVelocity` is applied to the rigidbody. This will cause the player to start moving up.\n\nTo remove the feeling of the player jumping on the moon, we can make use of the `fallingMultiplier` variable:\n\n``` csharp\nif (rb2d.velocity.y < 0) {\n    rb2d.velocity += Vector2.up * Physics2D.gravity.y * (fallingMultiplier - 1) * Time.fixedDeltaTime;\n}\nelse if (rb2d.velocity.y > 0 && !Input.GetKey(KeyCode.Space)) {\n    rb2d.velocity += Vector2.up * Physics2D.gravity.y * (fallingMultiplier - 1) * Time.fixedDeltaTime;\n}\n```\n\nWe have an if / else if so that both long jumps and short jumps feel right. If the velocity is less than zero, you are falling and the multiplier should be used. If you're currently mid-jump and still rising, but you let go of the space key, then the fall should start to happen rather than continuing upward until the velocity reverses.\n\nNow, if you happen to fall off the screen, we need a way to reset.\n\n``` csharp\nif(rb2d.position.y < -10.0f) {\n    rb2d.position = new Vector2(0.0f, 1.0f);\n    score.Reset();\n}\n```\n\nIf we fall off the screen, the `Reset` function on `score`, which we'll see shortly, will reset the score for the current world, and the position of the player will be reset to the beginning of the level.\n\nWe can finish the movement of our player in the `FixedUpdate` method with the following:\n\n``` csharp\nrb2d.velocity = new Vector2(horizontalMovement * speed, rb2d.velocity.y);\n```\n\nThe above line takes the movement direction based on the input key, multiplies it by our defined speed, and keeps the current velocity in the y-axis. We keep the current y velocity so we can move horizontally whether we are jumping or not.\n\nThis brings us to the `OnCollisionEnter2D` and `OnTriggerEnter2D` methods.\n\nWe need to know when we've ended a jump and when we've stumbled upon a trap. We can't just say a jump is over when the y-position falls below a certain value because the player may have fallen off a cliff.\n\nTake the `OnCollisionEnter2D` method:\n\n``` csharp\nprivate void OnCollisionEnter2D(Collision2D collision) {\n    if (collision.collider.name == \"Ground\" || collision.collider.name == \"Platforms\") {\n        isGrounded = true;\n    }\n    if(collision.collider.name == \"Traps\") {\n        rb2d.position = new Vector2(0.0f, 1.0f);\n        score.Reset();\n    }\n}\n```\n\nIf there was a collision, we can get the game object of what we collided with. The game object should be named, so we should know immediately if we collided with a floor or platform or something else. If we collided with a floor or platform, reset the jump. 
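\n\nThe check above compares collider names directly, which works as long as the game object names never change. A hedged alternative, not part of the original project, is to assign tags to the ground, platform, and trap objects in Unity's Tag Manager and compare those instead. The sketch below is a drop-in rewrite of the same method inside **Player.cs** and relies on the `rb2d`, `isGrounded`, and `score` fields shown earlier:\n\n``` csharp\n// Hedged sketch, not from the original project: this assumes \"Ground\", \"Platforms\",\n// and \"Traps\" tags have been created in the Tag Manager and assigned to the\n// relevant game objects.\nprivate void OnCollisionEnter2D(Collision2D collision) {\n    if (collision.collider.CompareTag(\"Ground\") || collision.collider.CompareTag(\"Platforms\")) {\n        isGrounded = true;\n    }\n    if (collision.collider.CompareTag(\"Traps\")) {\n        rb2d.position = new Vector2(0.0f, 1.0f);\n        score.Reset();\n    }\n}\n```\n\nTags survive renaming a game object, which is the main reason some projects prefer them over name comparisons.\n\n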
Back in the original method, if we collided with a trap, we reset the position and the score.\n\nThe `OnTriggerEnter2D` method is a little different.\n\n``` csharp\nvoid OnTriggerEnter2D(Collider2D collider) {\n    if (collider.name == \"Trophy\") {\n        Destroy(collider.gameObject);\n        score.BankScore();\n        GameController.NextLevel();\n    }\n}\n```\n\nRemember, the **Trophy** won't have a rigidbody, so there will be no physics. However, we want to know when our player has overlapped with the trophy. In the above function, if triggered, we will destroy the trophy, which will remove it from the screen. We will also make use of the `BankScore` function that we'll see soon, as well as the `NextLevel` function that will change our world.\n\nAs long as the tilemap layers have the correct collider components, your player should be able to move around whatever world you've decided to create. This brings us to some of the other scripts that the **Player.cs** script needs for interaction.\n\nWe used a few functions on the `score` variable within the **Player.cs** script. The `score` variable is a reference to our **Score** game object, which should have its own script. We'll call this the **Score.cs** script. However, before we get to the **Score.cs** script, we need to create a static class to hold our locally persistent data.\n\nCreate a **GameData.cs** file with the following:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic static class GameData\n{\n\n    public static int totalScore;\n\n}\n```\n\nUsing static classes and variables is the easiest way to pass data between scenes of a Unity game. We aren't assigning this script to any game object, but it will be accessible for as long as the game is open. The `totalScore` variable will represent our session score, and it will be manipulated through the **Score.cs** file.\n\nWithin the **Score.cs** file, add the following:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\nusing UnityEngine.UI;\n\npublic class Score : MonoBehaviour\n{\n\n    private Text scoreText;\n    private int score;\n\n    void Start()\n    {\n        scoreText = GetComponent<Text>();\n        this.Reset();\n    }\n\n    public void Reset() {\n        score = GameData.totalScore;\n        scoreText.text = \"SCORE: \" + GameData.totalScore;\n    }\n\n    public void AddPoints(int amount) {\n        score += amount;\n        scoreText.text = \"SCORE: \" + score;\n    }\n\n    public void BankScore() {\n        GameData.totalScore += score;\n    }\n\n}\n```\n\nIn the above script, we have two private variables. The `scoreText` will reference the **Text** component attached to our game object, and the `score` will be the running total for the particular world.\n\nThe `Reset` function, which we've seen already, will set the visible text on the screen to the value in our static class. We're doing this because we don't want to necessarily zero out the score on a reset. For this particular game, rather than resetting the entire score when we fail, we only reset the score for the current world, not all the worlds. This makes more sense in the `BankScore` method. We'd typically call `BankScore` when we progress from one world to the next. We take the current score for the world, add it to the persisted score, and then when we want to reset, our persisted score holds while the world score resets. You can design this functionality however you want.
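\n\nOne thing to keep in mind is that `GameData` only lives for as long as the game is running. If you wanted the banked score to survive closing and reopening the game, a hedged option, not part of the original project, is to mirror `totalScore` into Unity's `PlayerPrefs`:\n\n``` csharp\nusing UnityEngine;\n\n// Hedged sketch, not from the original project: persist the banked score with\n// PlayerPrefs so it survives app restarts. \"totalScore\" is an arbitrary key name.\npublic static class PersistentGameData\n{\n\n    private const string TotalScoreKey = \"totalScore\";\n\n    public static int LoadTotalScore() {\n        // Returns 0 the first time the game runs, before anything has been saved.\n        return PlayerPrefs.GetInt(TotalScoreKey, 0);\n    }\n\n    public static void SaveTotalScore(int totalScore) {\n        PlayerPrefs.SetInt(TotalScoreKey, totalScore);\n        PlayerPrefs.Save();\n    }\n\n}\n```\n\nYou could call `PersistentGameData.SaveTotalScore(GameData.totalScore)` whenever `BankScore` runs and load the saved value once at startup, but how far to take this is a design choice.\n\n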
In the **Player.cs** script, we've also made use of a **GameController.cs** script. We do this to manage switching between scenes in the game. This **GameController.cs** script should be attached to the **GameController** game object within the scene. The code behind the script should look like the following:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\nusing UnityEngine.SceneManagement;\nusing System;\n\npublic class GameController : MonoBehaviour {\n\n    private static int currentLevelIndex;\n    private static string[] levels;\n\n    void Start() {\n        levels = new string[] {\n            \"LevelOne\",\n            \"LevelTwo\"\n        };\n        currentLevelIndex = Array.IndexOf(levels, SceneManager.GetActiveScene().name);\n    }\n\n    public static void NextLevel() {\n        if(currentLevelIndex < levels.Length - 1) {\n            SceneManager.LoadScene(levels[currentLevelIndex + 1]);\n        }\n    }\n\n}\n```\n\nSo why even create a script for switching scenes when it isn't particularly difficult? There are a few reasons:\n\n1. We don't want to manage scene switching in the **Player.cs** script, to keep it free of unrelated code.\n2. We want to define world progression while being cautious that other scenes, such as menus, could exist.\n\nWith that said, when the first frame renders, we could define every scene that is a level or world. While we don't explore it here, we could also define every scene that is a menu or similar. When we want to progress to the next level, we can just iterate through the level array, all of which is managed by this scene manager.\n\nKnowing what we know now, if we had set everything up correctly and tried to move our player around, we'd likely move off the screen. We need the camera to follow the player, and this can be done in another script.\n\nThe **Camera.cs** script, which should be attached to the **Camera** game object, should have the following C# code:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class Camera : MonoBehaviour\n{\n\n    public Transform player;\n\n    void Update() {\n        transform.position = new Vector3(player.position.x + 4, transform.position.y, transform.position.z);\n    }\n\n}\n```\n\nThe `player` variable should represent the **Player** game object and is assigned through the Unity inspector. It can really be any game object, but because we want to have the camera follow the player, it should probably be the **Player** game object that has the movement scripts. On every frame, the camera position is set to the player position with a small offset.\n\nEverything we've seen up until now is responsible for player interaction. We can traverse a world, collide with the environment, and keep score.\n\n## Making HTTP Requests from the Unity Game to MongoDB Realm\n\nHow the game interacts with the MongoDB Realm webhooks is where the fun really comes in! I explored a lot of this in a previous tutorial I wrote titled Sending and Requesting Data from MongoDB in a Unity Game, but it is worth exploring again for the context of The Untitled Leafy Game.\n\nBefore we get into the sending and receiving of data, we need to create a data model within Unity that roughly matches what we see in MongoDB. 
Create a **DatabaseModel.cs** script with the following C# code:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class DatabaseModel {\n\n    public string _id;\n    public string question_text;\n    public string problem_id;\n    public string subject_area;\n    public bool answer;\n\n    public string Stringify() {\n        return JsonUtility.ToJson(this);\n    }\n\n    public static DatabaseModel Parse(string json) {\n        return JsonUtility.FromJson<DatabaseModel>(json);\n    }\n\n}\n```\n\nThe above script is not one that we plan to add to a game object. We'll be able to instantiate it from any script. Notice each of the public variables and how they are named based on the fields that we're using within MongoDB. Unity offers a JsonUtility class that allows us to take public variables and either convert them into a JSON string or parse a JSON string and load the data into our public variables. It's very convenient, but the public variables need to match the field names to be effective.\n\nThe process of game to MongoDB interaction is going to be as follows:\n\n1. Player collides with question box\n2. Question box, which has a `problem_id` associated, launches the modal\n3. Question box sends an HTTP request to MongoDB Realm\n4. Question box populates the fields in the modal based on the HTTP response\n5. Question box sends an HTTP request with the player answer to MongoDB Realm\n6. The modal closes and the game continues\n\nWith that chain of events in mind, we can start making this happen. Take a **Question.cs** script that would exist on any particular question box game object:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\nusing UnityEngine.Networking;\nusing System.Text;\nusing UnityEngine.UI;\n\npublic class Question : MonoBehaviour {\n\n    private DatabaseModel question;\n\n    public string questionId;\n    public GameObject questionModal;\n    public Score score;\n\n    private Text questionText;\n    private Dropdown dropdownAnswer;\n    private Button submitButton;\n\n    void Start() {\n        GameObject questionTextGameObject = questionModal.transform.Find(\"QuestionText\").gameObject;\n        questionText = questionTextGameObject.GetComponent<Text>();\n        GameObject submitButtonGameObject = questionModal.transform.Find(\"SubmitButton\").gameObject;\n        submitButton = submitButtonGameObject.GetComponent<Button>();\n        GameObject dropdownAnswerGameObject = questionModal.transform.Find(\"Dropdown\").gameObject;\n        dropdownAnswer = dropdownAnswerGameObject.GetComponent<Dropdown>();\n    }\n\n    private void OnCollisionEnter2D(Collision2D collision) {\n        if (collision.collider.name == \"Player\") {\n            questionModal.SetActive(true);\n            Time.timeScale = 0;\n            StartCoroutine(GetQuestion(questionId, result => {\n                questionText.text = result.question_text;\n                submitButton.onClick.AddListener(() => { SubmitOnClick(result, dropdownAnswer); });\n            }));\n        }\n    }\n\n    void SubmitOnClick(DatabaseModel db, Dropdown dropdownAnswer) {\n        db.answer = dropdownAnswer.value == 0;\n        StartCoroutine(CheckAnswer(db.Stringify(), result => {\n            if(result == true) {\n                score.AddPoints(1);\n            }\n            questionModal.SetActive(false);\n            Time.timeScale = 1;\n            submitButton.onClick.RemoveAllListeners();\n        }));\n    }\n\n    IEnumerator GetQuestion(string id, System.Action<DatabaseModel> callback = null)\n    {\n        using (UnityWebRequest request = UnityWebRequest.Get(\"https://webhooks.mongodb-realm.com/api/client/v2.0/app/skunkworks-rptwf/service/webhooks/incoming_webhook/get_question?problem_id=\" + id))\n        {\n            yield return request.SendWebRequest();\n            if (request.isNetworkError 
|| request.isHttpError) {\n                Debug.Log(request.error);\n                if(callback != null) {\n                    callback.Invoke(null);\n                }\n            }\n            else {\n                if(callback != null) {\n                    callback.Invoke(DatabaseModel.Parse(request.downloadHandler.text));\n                }\n            }\n        }\n    }\n\n    IEnumerator CheckAnswer(string data, System.Action<bool> callback = null) {\n        using (UnityWebRequest request = new UnityWebRequest(\"https://webhooks.mongodb-realm.com/api/client/v2.0/app/skunkworks-rptwf/service/webhooks/incoming_webhook/checkanswer\", \"POST\")) {\n            request.SetRequestHeader(\"Content-Type\", \"application/json\");\n            byte[] bodyRaw = System.Text.Encoding.UTF8.GetBytes(data);\n            request.uploadHandler = (UploadHandler)new UploadHandlerRaw(bodyRaw);\n            request.downloadHandler = (DownloadHandler)new DownloadHandlerBuffer();\n            yield return request.SendWebRequest();\n            if (request.isNetworkError || request.isHttpError) {\n                Debug.Log(request.error);\n                if(callback != null) {\n                    callback.Invoke(false);\n                }\n            } else {\n                if(callback != null) {\n                    callback.Invoke(request.downloadHandler.text != \"{}\");\n                }\n            }\n        }\n    }\n\n}\n```\n\nOf the scripts that exist in the project, this is probably the most complex. It isn't complex because of the MongoDB interaction. It is just complex based on how questions are integrated into the game.\n\nLet's break it down, starting with the variables:\n\n``` csharp\nprivate DatabaseModel question;\n\npublic string questionId;\npublic GameObject questionModal;\npublic Score score;\n\nprivate Text questionText;\nprivate Dropdown dropdownAnswer;\nprivate Button submitButton;\n```\n\nThe `questionId`, `questionModal`, and `score` variables are assigned through the UI inspector in Unity. This allows us to give each question box a unique id while sharing the same modal and score widget across all of the question boxes. If we wanted, the modal and score items could be different, but it's best to recycle game objects for performance reasons.\n\nThe `questionText`, `dropdownAnswer`, and `submitButton` will be obtained from the attached `questionModal` game object.\n\nTo obtain each of the game objects and their components, we can look at the `Start` method:\n\n``` csharp\nvoid Start() {\n    GameObject questionTextGameObject = questionModal.transform.Find(\"QuestionText\").gameObject;\n    questionText = questionTextGameObject.GetComponent<Text>();\n    GameObject submitButtonGameObject = questionModal.transform.Find(\"SubmitButton\").gameObject;\n    submitButton = submitButtonGameObject.GetComponent<Button>();\n    GameObject dropdownAnswerGameObject = questionModal.transform.Find(\"Dropdown\").gameObject;\n    dropdownAnswer = dropdownAnswerGameObject.GetComponent<Dropdown>();\n}\n```\n\nRemember, game objects on their own don't mean a whole lot to us. We need to get the components that exist on each game object. 
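\n\nThose repeated `transform.Find` and `GetComponent` pairs can also be wrapped in a small helper. The following is a hedged sketch, not part of the original project, of a method you could add to **Question.cs**; the only thing it assumes is the `questionModal` field shown above:\n\n``` csharp\n// Hedged helper sketch, not from the original project: find a child of the modal\n// by name and return the requested component, warning if either piece is missing.\nprivate T FindInModal<T>(string childName) where T : Component {\n    Transform child = questionModal.transform.Find(childName);\n    if (child == null) {\n        Debug.LogWarning(\"No child named \" + childName + \" found under the question modal.\");\n        return null;\n    }\n    return child.GetComponent<T>();\n}\n```\n\nWith a helper like this, the `Start` method reduces to calls such as `questionText = FindInModal<Text>(\"QuestionText\");`.\n\n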
In the original `Start` method, we have the attached `questionModal`, so we can use Unity to find the child game objects that we need and their components.\n\nBefore we explore how the HTTP requests come together with the rest of the script, we should explore how these requests are made in general.\n\n``` csharp\nIEnumerator GetQuestion(string id, System.Action<DatabaseModel> callback = null)\n{\n    using (UnityWebRequest request = UnityWebRequest.Get(\"https://webhooks.mongodb-realm.com/api/client/v2.0/app/skunkworks-rptwf/service/webhooks/incoming_webhook/get_question?problem_id=\" + id))\n    {\n        yield return request.SendWebRequest();\n        if (request.isNetworkError || request.isHttpError) {\n            Debug.Log(request.error);\n            if(callback != null) {\n                callback.Invoke(null);\n            }\n        }\n        else {\n            if(callback != null) {\n                callback.Invoke(DatabaseModel.Parse(request.downloadHandler.text));\n            }\n        }\n    }\n}\n```\n\nIn the above `GetQuestion` method, we expect an `id`, which will be the `problem_id` that is attached to the question box. We also provide a `callback` which will be used when we get a response from the backend. With the `UnityWebRequest`, we can make a request to our MongoDB Realm webhook. Upon success, the `callback` is invoked and the parsed data is returned.\n\nYou can see this in action within the `OnCollisionEnter2D` method.\n\n``` csharp\nprivate void OnCollisionEnter2D(Collision2D collision) {\n    if (collision.collider.name == \"Player\") {\n        questionModal.SetActive(true);\n        Time.timeScale = 0;\n        StartCoroutine(GetQuestion(questionId, result => {\n            questionText.text = result.question_text;\n            submitButton.onClick.AddListener(() => { SubmitOnClick(result, dropdownAnswer); });\n        }));\n    }\n}\n```\n\nWhen a collision happens, we see if the **Player** game object is what collided. If true, then we set the modal to active so it displays, alter the time scale so the game pauses, and then execute `GetQuestion` from within a Unity coroutine. When we get a result for that particular `problem_id`, we set the text within the modal and add a special click listener to the button. We want the button to use the correct information from this particular instance of the question box. Remember, the modal is shared by all questions in this example, so it is important that the correct listener is used.\n\nSo we've displayed the question information in the modal. Now we need to submit it. The HTTP request is slightly different:\n\n``` csharp\nIEnumerator CheckAnswer(string data, System.Action<bool> callback = null) {\n    using (UnityWebRequest request = new UnityWebRequest(\"https://webhooks.mongodb-realm.com/api/client/v2.0/app/skunkworks-rptwf/service/webhooks/incoming_webhook/checkanswer\", \"POST\")) {\n        request.SetRequestHeader(\"Content-Type\", \"application/json\");\n        byte[] bodyRaw = System.Text.Encoding.UTF8.GetBytes(data);\n        request.uploadHandler = (UploadHandler)new UploadHandlerRaw(bodyRaw);\n        request.downloadHandler = (DownloadHandler)new DownloadHandlerBuffer();\n        yield return request.SendWebRequest();\n        if (request.isNetworkError || request.isHttpError) {\n            Debug.Log(request.error);\n            if(callback != null) {\n                callback.Invoke(false);\n            }\n        } else {\n            if(callback != null) {\n                callback.Invoke(request.downloadHandler.text != \"{}\");\n            }\n        }\n    }\n}\n```\n\nIn the `CheckAnswer` method, we do another `UnityWebRequest`, this time a POST request. We encode the JSON string, which is our data, and we send it to our MongoDB Realm webhook. 
The result for the `callback` is either going to be true or false, depending on whether or not the response is an empty object.\n\nWe can see this in action through the `SubmitOnClick` method:\n\n``` csharp\nvoid SubmitOnClick(DatabaseModel db, Dropdown dropdownAnswer) {\n    db.answer = dropdownAnswer.value == 0;\n    StartCoroutine(CheckAnswer(db.Stringify(), result => {\n        if(result == true) {\n            score.AddPoints(1);\n        }\n        questionModal.SetActive(false);\n        Time.timeScale = 1;\n        submitButton.onClick.RemoveAllListeners();\n    }));\n}\n```\n\nDropdown values in Unity are numeric, so we need to translate the selected index into a true or false value. Once we have this information, we can execute `CheckAnswer` through a coroutine, sending the document information with our user-defined answer. If the response is true, we add to the score. Regardless, we hide the modal, reset the time scale, and remove the listener on the button.\n\n## Conclusion\n\nWhile we didn't see the step-by-step process towards reproducing a side-scrolling platformer game like the MongoDB Skunkworks project, The Untitled Leafy Game, we did walk through each of the components that went into it. These components consisted of designing a scene for a possible game world, adding player logic, score keeping logic, and HTTP request logic.\n\nTo play around with the project that took Barry O'Neill and me (Nic Raboy) three days to complete, check it out on GitHub. After swapping the MongoDB Realm endpoints with your own, you'll be able to play the game.\n\nIf you're interested in getting more out of game development with MongoDB and Unity, check out a series that I'm doing with Adrienne Tacke, starting with Designing a Strategy to Develop a Game with Unity and MongoDB.\n\nQuestions? Comments? We'd love to connect with you. Join the conversation on the MongoDB Community Forums.\n", "format": "md", "metadata": {"tags": ["Realm", "C#", "Unity"], "pageDescription": "Learn how to create a 2D side-scrolling platformer game with MongoDB and Unity.", "contentType": "Tutorial"}, "title": "Developing a Side-Scrolling Platformer Game with Unity and MongoDB Realm", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-meetup-kotlin-multiplatform", "action": "created", "body": "# Realm Meetup - Realm Kotlin Multiplatform for Modern Mobile Apps\n\nDidn't get a chance to attend the Realm Kotlin Multiplatform for modern mobile apps Meetup? Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up.\n\n:youtube[]{vid=F1cEI9OKI-E}\n\nIn this meetup, Claus R\u00f8rbech, software engineer on the Realm Android team, will walk us through some of the constraints of the RealmJava SDK, the thought process that went into the decision to build a new SDK for Kotlin, the benefits developers will be able to leverage with the new APIs, and how the RealmKotlin SDK will evolve.\n\nIn this 50-minute recording, Claus spends about 35 minutes presenting an overview of Realm Kotlin Multiplatform. After this, we have about 15 minutes of live Q&A with Ian, Nabil and our Community. For those of you who prefer to read, below we have a full transcript of the meetup too. As this is verbatim, please excuse any typos or punctuation errors!\n\nThroughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. 
So you don't miss out in the future, join our Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.\n\nTo learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our [community forums. Come to learn. Stay to connect.\n\n### Transcript\n\nClaus:\nYeah, hello. I'm Claus Rorbech, welcome to today's talk on Realm Kotlin. I'm Claus Rorbech and a software engineer at MongoDB working in the Java team and today I'm going to talk about Realm Kotlin and why we decided to build a complete new SDK and I'll go over some of the concepts of this.\n\nWe'll do this with a highlight of what Realm is, what triggered this decision of writing a new SDK instead of trying to keep up with the Realm Java. Well go over some central concepts as it has some key significant changes compared to Realm Java. We'll look into status, where we are in the process. We'll look into where and how it can be used and of course peek a bit into the future.\n\nJust to recap, what is Realm? Realm is an object database with the conventional ACID properties. It's implemented in a C++ storage engine and exposed to various language through language specific SDKs. It's easy to use, as you can define your data model directly with language constructs. It's also performant. It utilizes zero copying and lazy loading to keep the memory footprint small. Which is still key for mobile development.\n\nHistorically we have been offering live objects, which is a consistent view of data within each iteration of your run loop. And finally we are offering infrastructure for notifications and easy on decide encryption and easy synchronization with MongoDB Realm. So, Realm Java already works for Kotlin on Android, so why bother doing a new SDK? The goal of Realm is to simplify app development. Users want to build apps and not persistent layers, so we need to keep up providing a good developer experience around Realm with this ecosystem.\n\nWhy not just keep up with the Realm Java? To understand the challenge of keeping up with Realm Java, we have to have in mind that it has been around for almost a decade. Throughout that period, Android development has changed significantly. Not only has the language changed from Java to Kotlin, but there's also been multiple iterations of design guidelines. Now, finally official Android architectural guidelines and components. We have kept up over the years. We have constantly done API adjustments. Both based on new language features, but also from a lot of community feedback. What users would like us to do. We have tried to accommodate this major new design approach by adding support for the reactive frameworks.\n\nBoth RX Java and lately with coroutine flows. But keeping up has become increasingly harder. Not only just by the growing features of Realm itself, but also trying to constantly fit into this widening set of frameworks. We thought it was a good time to reassess some of these key concepts of Realm Java. The fundamentals of Realm Java is built around live objects. They provide a consistent updated view of your data within each run loop iterations. Live data is actually quite nice, but it's also thread confined.\n\nThis means that each thread needs its own instance of Realm. This Realm instance needs to be explicitly close to free up resources and all objects obtained from this Realm are also thread confined. 
These things have been major user obstacles because accessing objects on other threads will flow, and failing to close instances of Realm on background threads can potentially lead to growing file sizes because we cannot clean up this updated data.\n\nSo, in general it was awesome for early days Android app architectures that were quite simple, but it's less attractive for the current dominant designs that rely heavily on mutable data streams in frameworks and very flexible execution and threading models. So, besides this wish of trying to redo some of the things, there're other motivations for doing this now. Namely, being Kotlin first. Android has already moved there and almost all other users are also there.\n\nWe want to take advantage of being able to provide the cleaner APIs with nice language features like null safety and in line and extension functions and also, very importantly, co routines for asynchronous operations. Another key feature of Kotlin is the Kotlin Compiler plugin mechanism. This is a very powerful mechanism that can substitute our current pre processor approach and byte manipulation approach.\n\nSo, instead of generating code along the user classes, we can do in place code transformation. This reduces the amount of code we need to generate and simplifies our code weaving approach and therefore also our internal APIs. The Kotlin Compiler plugin is also faster than our current approach because we can eliminate the very slow KAPT plugin that acts as an annotation processor, but does it by requiring multiple passes to first generate stops and then the actual code for the implementation.\n\nWe can also target more platforms with the Compiler plugin because we eliminate the need for Google's Transform API that was only available to Android. Another key opportunity for doing this now is that we can target the Kotlin multi platform ecosystem. In fact, Realm's C++ storage engine is already multi platform. It's already running on the Kotlin multi platform targets with the exception of JavaScript for web. Secondly, we also find Kotlin's multi platform approach very appealing. It allows code sharing but also allows you to target specific portions of your app in native platform frameworks when needed.\n\nFor example, UI. We think Realm fits well into this Kotlin multi platform library suite. There's often no need for developer platform differentiation for persistence and there's actually already quite good coverage in this library suite. Together with the Kotlin serialization and Realm, you can actually do full blown apps as shared code and only supply a platform specific UI.\n\nSo, let's look into some of the key concepts of this new SDK. We'll do that by comparing it to Realm Java and we'll start by defining a data model. Throughout all these code snippets I've used the Kotlin syntax even for the Realm Java examples just to highlight the semantic changes instead of bothering with the syntactical differences. So, I just need some water...\n\nSo, as you see it looks more or less like Java. But there are some key things there. The compiler plugins enable us access the classes directly. This means we no longer need the classes to be open, because we are not internally inheriting from the user classes. We can also just add a marker interface that we fill out by the compiler plugin instead of requiring some specific base classes. 
And we can derive nullability directly from the types in Kotlin instead of requiring this required annotation.\n\nNot all migration are as easy to cut down. For our Realm configurations, we're not opting in for pure Kotlin with the named parameters, but instead keeping the binder approach. This is because it's easier to discover from inside the ID and we also need to have in mind that we need to be interoperable with the Java. We only offer some specific constructors with named parameters for very common use cases. Another challenge from this new tooling is that the compiler plug-in has some constraints that complicates gathering default schemas.\n\nWe're not fully in place with the constraints for this yet, so for now, we're just required explicit schema definition in the configuration. For completion, I'll just also highlight the current API for perform inquiries on the realm. To get immediate full query capabilities with just exposed the string-based parcel of the underlying storage engine, this was a quick way to get full capabilities and we'll probably add a type safe query API later on when there's a bigger demand.\n\nActually, this string-based query parcel is also available from on Java recently, but users are probably more familiar with the type-based or type safe query system. All these changes are mostly syntactical but the most dominant change for realm Kotlin is the new object behavior. In Realm Kotlin, objects are no longer live, but frozen. Frozen objects are data objects tied to a specific version of the realm. They are immutable. You cannot update them and they don't change over time.\n\nWe still use the underlying zero-copying and lazy loading mechanism, so we still keep the memory footprints small. You can still use a frozen object to navigate the full object graph from this specific version. In Realm Kotlin, the queries also just returns frozen objects. Similarly, notifications also returns new instances of frozen objects, and with this, we have been able to lift the thread confinement constraint. This eases lifecycle management, because we can now only have a single global instance of the realm.\n\nWe can also pass these objects around between threads, which makes it way easier to use it in reactive frameworks. Again, let's look into some examples by comparing is to Realm Java. The shareable Realm instances eases this life cycle management. In Realm Java, we need to obtain an instance on each thread, and we also need to explicitly close this realm instance. On Realm Kotlin, we can now do this upfront with a global instance that can be passed around between the threads. We can finally close it later on this single instance. Of course, it has some consequences. With these shareable instances, changes are immediately available too on the threads.\n\nIn Realm Java, the live data implementation only updated our view of data with in between our run loop iterations. Same query in the same scope would always yield the same result. For Realm Kotlin with our frozen objects, so in Realm Kotlin, updates from different threads are immediately visible. This means that two consecutive queries might reveal different results. This also applies for local or blocking updates. Again, Realm Java with live results local updates were instantly visible, and didn't require refresh. 
For Realm Java, the original object is frozen and tied to a specific version of the Realm.\n\nWhich means that the update weren't reflected in our original object, and to actually inspect the updates, we would have to re-query the Realm again. In practice, we don't expect access to these different versions to be an issue. Because the primary way of reacting to changes in Realm Kotlin will be listening for changes.\n\nIn Realm Kotlin, updates are delivered as streams of immutable objects. It's implemented by coroutine flows of these frozen instances. In Realm Java, when you got notified about changes, you could basically throw away the notification object, because you could still access all data from your old existing live reference. With Realm Kotlin, your original instance is tied to a specific version of the Realm. It means that you for each update, you would need to access the notify or the new instance supplied by the coroutine flow.\n\nBut again, this updated instance, it gives you full access to the Realm version of that object. It gives you full access to the object graph from this new frozen instance. Here we start to see some of the advantage of coroutine based API. With Realm Java, this code snippet, it was run on a non-loop of thread. It would actually not even give you any notification because it was hard to tie the user code with our underlying notification mechanism. For Realm Kotlin, since we're using coroutines, we use the flexibilities of this. This gives the user flexibility to supply this loop like dispatching context and it's easy for us to hook our event delivery off with that.\n\nSecondly, we can also now spread the operations on a flow over varying contexts. This means that we can apply all the usual flow operations, but we can also actually change the context as our objects are not tied to a specific thread. Further, we can also with the structural concurrency of codes routines, it's easier to align our subscription with the existing scopes. This means that you don't have to bother with closing the underlying Realm instance here.\n\nWith the shareable Realm instances of Realm Kotlin, updates must be done atomically. With Realm Java, since we had a thread confined instance of the Realm, we could just modify it. In Realm Kotlin, we have to be very explicit on when the state is changed. We therefore provide this right transaction on a separate mutable Realm within a managed scope. Inside the scope, it allows us to create an update objects just as in Realm Java. But the changes are only visible to other when once the scope is exited. We actually had a similiar construct in Realm Java, but this is now the only way to update the Realm.\n\nInside the transaction blocks, things almost works as in Realm Java. It's the underlying same starch engine principles. Transactions are still performed on single thread confined live Realm, which means that inside this transaction block, the objects and queries are actually live. The major key difference for the transaction block is how to update existing objects when they are now frozen. For both STKs, it applies that we can only do one transaction at a time. This transaction must be done on the latest version of the Realm. Therefore, when updating existing objects, we need to ensure that we have an instance that is tied to the latest version of the object.\n\nFor Realm Java, we could just pass in the live objects to our transaction block but since it could actually have ... we always had to check the object for its validity. 
A key issue here was that since object couldn't be passed around on arbitrary threads, we had to get a non-local object, we would have to query the Realm and find out a good filtering pattern to identify objects uniquely. For Realm, we've just provided API for obtaining the latest version of an object. This works for both primary key and non-primary key objects due to some internal identifiers.\n\nSince we can now pass objects between a thread, we can just pass our frozen objects in and look up the latest version. To complete the tour, we'll just close the Realm. As you've already seen, it's just easier to manage the life cycle of Realm when there's one single instance, so closing an instance to free up resources and perform exclusive operations on the Realm, it's just a matter of closing the shared global instance.\n\nInteracting with any object instance after you closed the Realm will still flow, but again, the structural concurrency of coroutine flows should assist you in stopping accessing the objects following the use cases of your app. Besides the major shift to frozen objects, we're of course trying to improve the STKs in a lot of ways, and first of all, we're trying to be idiomatic Kotlin. We want to take advantage of all the new features of the language. We're also trying to reduce size both of our own library but also the generated code. This is possible with the new compiler plug-in as we've previously touched. We can just modify the user instance and not generate additional classes.\n\nWe're also trying to bundle up part of functionality and modularize it into support libraries. This is way easier with the extension methods. Now we should be able to avoid having everybody to ship apps with the JSON import and export functionality and stuff like that. This also makes it easier to target future frameworks by offering support libraries with extension functions. As we've already also seen, we are trying to ensure that our STK is as discoverable from the ID directly, and we're also trying to ensure that the API is backward compatible with the Java.\n\nThis might not be the most idiomatic Java, but at least we try to do it without causing major headaches. We also want to improve testability, so we're putting in places to inject dispatchers. But most notably, we're also supplying JBM support for off-device testing. Lastly, since we're redoing an STK, we of course have a lot of insight in the full feature set, so we also know what to target to make a more maintainable STK. You've already seen some of the compromises for these flights, but please feel free to provide feedback if we can improve something.\n\nWith this new multiplatform STK, where and how to use. We're providing our plug-in and library as a Kotlin multiplatform STK just to be absolutely clear for people not familiar with the multiplatform ecosystem. This still means that you can just apply your project or apply this library and plug-in on Android only projects. It just means that we can now also target iOS and especially multiplatform and later desktop JBM. And I said there's already libraries out there for sterilization and networking.\n\nWith Realm, we can already build full apps with the shared business logic, and only have to supply platform dependent UI. Thanks to the layout and metadata of our artifacts, there's actually no difference in how to apply the projects depending on which setting you are in. It integrates seamlessly with both Android and KMM projects. You just have to apply the plug-in. 
It's already available in plug-in portal, so you can just use this new plug-in syntax. Then you have to add the repository, but you most likely already have Maven Central as part of your setup, and then at our dependency. There's a small caveat for Android projects before Kotlin or using Kotlin, before 1.5, because the IR backend that triggers our compiler plug-in is not default before that.\n\nYou would have to enable this feature in the compiler also. Yeah. You've already seen a model definition that's a very tiny one. With this ability or this new ability to share our Realm instances, we can now supply one central instance and here have exemplified it by using a tiny coin module. We are able to share this instance throughout the app, and to show how it's all tied together, I have a very small view model example. These users are central Realm instance supplied by Kotlin.\n\nIt sets up some live data to feed the view. This live data is built up from our observable flows. You can apply the various flow operators on it, but most importantly you can also control this context for where it's executing. Lastly, you are handling this in the view model scope. It's just that subscription is also following the life cycle of your view model. Lastly, for completion, there's a tiny method to put some data in there.\n\nAs you might have read between the lines, this is not all in place yet, but I'll try to give a status. We're in the middle of maturing this prove of concept of the proposed frozen architecture. This is merged into our master branch bit by bit without exposing it to the public API. There's a lot of pieces that needs to fit together before we can trigger and migrate completely to this new architecture. But you can still try our library out. We have an initial developer preview in what we can version 0.1.0. Maybe the best label is Realm Kotlin Multiplatform bring-up.\n\nBecause it sort of qualifies the overall concept in this mutliplatform setting with our compiler plug-in being on multi platforms. Also a mutliplatform [inaudible 00:30:09] with collecting all these native objects in the various [inaudible 00:30:14] management domains. A set, it doesn't include the full frozen architecture yet, so the Realm instances are still thread confined, and objects are live. There's only limited support, but we have primitive types, links to other Realm objects and primary keys. You can also register for notifications.\n\nWe use this string-based queries also briefly mentioned. It operates on Kotlin Mutliplatform mobile, which means that it's both available for Android and iOS, but only for 64 bit on both platforms. It's already available on Maven Central, so you can go and try it out either by using our own KMM example in the repository or build your own project following the read me in the repository.\n\nWhat's next? Yeah. Of course in these weeks, we're stabilizing the full frozen architecture. Our upcoming milestones are first we want to target the release with the major Java features. It's a lot of the other features of Realm like lists, indexes, more detailed changelistener APIs, migration for schema updates and dynamic realms and also desktop JVM support. After that, we'll head off to build support for MongoDB Realm. To be able to sync this data remotely.\n\nWhen this is in place, we'll target the full feature set of Realm Java. 
There's a lot of more exotic types embedded objects and there's we also just introduced new types like sets and dictionaries and [inaudible 00:32:27] types to Realm Java. These will come in a later version on Kotlin. We're also following the evolution of Kotlin Mutliplatform. It's still only alpha so we have to keep track of what they're doing there, and most notably, following the memory management model of Kotlin native, there are constraints that once you pass objects around, Kotlin is freezing those.\n\nRight now, you cannot just pass Realm instances around because they have to be updated. But these frozen objects can be passed around threads and throughout this process, we'll do incremental releases, so please keep an eye open and provide feedback.\n\nTo keep up with our progress, follow us on GitHub. This is our main communication channel with the community. You can try out the sample, a set. You can also ... there's instructions how to do your own Kotlin Mutliplatform project, and you can peek into our public design docs. They're also linked from our repository. If you're more interested into the details of building this Mutliplatform STK, you can read a blog post on how we've addressed some of this challenge with the compiler plug-in and handling Mutliplatform C Interrupts, memory management, and all this.\n\nThank you for ... that's all.\n\n**Ian:**\nThank you Claus, that was very enlightening. Now, we'll take some of your questions, so if you have any questions, please put them in the chat. The first one here will mention, we've answered some of them, but the first one here is regarding the availability of the. It is available now. You can go to GitHub/Realm/Realm.Kotlin, and get our developer preview. We plan to have iterative releases over the next few quarters. That will add more and more functionality.\n\nThe next one is regarding the migration from I presume this user has ... or James, you have a Realm Java application using Realm Java and potentially, you would be looking to migrate to Realm Kotlin. We don't plan to have an automatic feature that would scan your code and change the APIs. Because the underlying semantics have changed so much. But it is something that we can look to have a migration guide or something like that if more users are asking about it.\n\nReally the objects have changed from being live objects to now being frozen objects. We've also removed the threading constraint and we've also have a single shared Realm instance. Whereas before, with every thread, you had to open up a new Realm instance in order to do work on that thread. The semantics have definitely changed, so you'll have to do this with developer care in order to migrate your applications over. Okay. Next question here, we'll go through some of these.\n\nDoes the Kotlin STK support just KMM for syncing or just local operations? I can answer this one. We do plan to offer our sync, and so if you're not familiar, Realm also offers a synchronization from the local store Realm file to MongoDB Atlas through Realm Sync, through the Realm Cloud. This is a way to bidirectionally sync any documents that you have stored on MongoDB Atlas down and transformed into Realm objects and vice versa.\n\nWe don't have that today, but it is something that you can look forward to the future in next quarters, we will be releasing our sync support for the new Realm Kotlin STK. Other questions here, so are these transactions required to be scoped or suspended? 
I presume this is using the annotations for the Kotlin coroutines keywords. The suspend functions, the functions, Claus, do you have any thoughts on that one?\n\n**Claus:**\nYeah. We are providing a default mechanism but we are also probably adding at least already in our current prototype, we already have a blocking right. You will be able to do it without suspending. Yeah.\n\n**Ian:**\nOkay. Perfect. Also, in the same vein, when running a right transaction, do you get any success or failed result back in the code if the transaction was successful? I presume this is having some sort of callback or on success or on failure if the right transaction succeeded or failed. We plan that to our API at all?\n\n**Claus:**\nUsually, we just have exceptions if things doesn't go right, and they will propagate throughout normal suspend ... throughout coroutine mechanisms. Yeah.\n\n**Ian:**\nYeah. It is a good thought though. We have had other users request this, so it's something if we continue to get more user feedback on this, potentially we could add in the future. Another question here, is it possible to specify a path to where all the entities are loaded instead of declaring each on a set method?\n\n**Ian:**\nNot sure I fully follow that.\n\n**Claus:**\nIt's when defining the schema. Yeah. We have some options of gathering this schema information, but as I stated, we are not completely on top of which constraints we want to put into it. Right now, we are forced to actually define all the classes, but we have issues for addressing this, and we have investigating various options. But some of these come with different constraints, and we have to judge them along the way to see which fits best or maybe we'll find some other ways around this hopefully.\n\n**Nabil:**\nJust to add on top of this, we could use listOf or other data structure. Just the only constraint at the compiler level are using class literal to specify the. Since there's no reflection in Kotlin Native, we don't have the mechanism like we do in Java to infer from your class path, the class that are annotating to Java that will participate in your schema. The lack of reflection in Kotlin Native forces us to just use class and use some compiler classes to build the schema for you constraint.\n\n**Ian:**\nYeah. Just to introduce everyone, this is Nabil. Nabil also works on the Realm Android team. He's one of the lead architects and designers of our new Realm Kotlin STK, so thank you Nabil.\n\n**Ian:**\nIs there any known limitations with the object relation specifically running on a KMM project? My understanding is no. There shouldn't be any restrictions on object relations. Also, for our types, because Realm is designed to be a cross-platform SDK where you can use the same Realm file in both iOS, JavaScript, Android applications, the types are also cross-platform. My understanding is we shouldn't have any restrictions for object relations. I don't know Nabil or Claus if you can confirm or deny that.\n\n**Nabil:**\nSorry. I lost you for a bit.\n\n**Claus:**\nI also lost the initial. No, but we are supporting the full feature set, so I can't immediately come up with any constraints \naround Java.\n\n**Ian:**\nOther questions here, will the Realm SDK support other platforms not just mobile? We talked a little bit about desktop, so I think JVM is something that we think we can get out of the box with our implementations of course. 
I think for desktop JVM applications is possible.\n\n**Nabil:**\nInternally like I mentioned already compiling for JVM and and also for. But we didn't expose it other public API yet. We just wanted object support in iOS and Android. The only issue for JVM, we have the tool chain compiling on desktop file, is to add the Android specific component, which are like the which you don't have for desktops. We need to find way either to integrate with Swing or to provide a hook for you to provide your looper, so it can deliver a notification for you. That's the only constraint since we're not using any other major Android specific API besides the context. The next target that we'll try to support is JVM, but we're already supported for internally, so it's not going to be a big issue.\n\n**Ian:**\nI guess in terms of web, we do have ... Realm Core, the core database is written in C++. We do have some projects to explore what it would take to compile Realm Core into Wasm so that it could then be run into a browser, and if that is successful, then we could potentially look to have web as a target. But currently, it's not part of our target right now.\n\nOther questions here, will objects still be proxies or will they now be the same object at runtime? For example, the object Realm proxy.\n\n**Nabil:**\nThere will be no proxy for Realm Kotlin. It's one of the benefit of using a compiler, is we're modifying the object itself. Similar to what composes doing with the add compose when you write a compose UI. It's modifying your function by adding some behavior. We do the same thing. Similar to what Kotlin sterilization compiler is doing, so we don't use proxy objects anymore.\n\n**Ian:**\nOkay. Perfect. And then I heard at the end that Realm instances can't be frozen on Native, just wanted to confirm that. Will that throw off if a freeze happens? Also wondering if that's changing or if you're waiting on the Native memory model changes.\n\n**Nabil:**\nThere's two aspects to what we're observing to what are doing. It's like first it's the garbage collector. They introduce an approach, where you could have some callbacks when you finalize the objects and we'll rely on this to free the native resources. By native, I mean the C++ pointer. The other aspect of this is the memory model itself of the Kotlin Native, which based on frozen object similiar concept as what we used to do in Realm Java, which is a thread confinement model to achieve.\n\n**Nabil:**\nActually, what we're doing is like we're trying to freeze the object graph similarly to what Kotlin it does. You can pass object between threads. The only sometimes issue you is like how you interface with multi-threaded coroutine on Kotlin Native. This part is not I think stable yet. But in theory, our should overlap, should work in a similiar way.\n\n**Claus:**\nI guess we don't really expect this memory management scheme from Kotlin Native to be instantly solved, but we have maybe options of providing the Realm instance in like one specific instance that doesn't need to be closed on each thread, acting centrally with if our Native writer and notification thread. It might be possible in the future to define Realms on each thread and interact with the central mechanisms. But I wouldn't expect the memory management constraints on Native to just go away.\n\n**Ian:**\nRight. Other question here will Realm plan on supporting the Android Paging 3 API?\n\n**Nabil:**\nYou could ask the same question with Realm Java. 
The actual lazy loading capability of Realm doesn't require you to implement the paging library. The paging library problem is like how you can load efficiently pages of data without loading the entire dataset. Since both Realm Java and Realm Kotlin uses just native pointers to the data, so as you traverse your list or collection will only loads this object you're trying to access. And not like 100 or 200 similiar to what a cursor does in SQLite for instance. There's no similiar problem in Realm in general, so it didn't try to come up with a paging solution in the first place.\n\n**Ian:**\nThat's just from our lazy loading and memory map architecture, right?\n\n**Nabil:**\nCorrect.\n\n**Ian:**\nYeah. Okay. Other question here, are there any plans to support polymorphic objects in the Kotlin SDK? I can answer this. I have just finished a product description of adding inheritance in polymorphism to not only the Kotlin SDK, but to all of our SDKs. This is targeted to be a medium term task. It is an expensive task, but it is something that has been highly requested for a while now. Now, we have the resources to implement it, so I'd expect we would get started in the medium term to implement that.\n\n**Nabil:**\nWe also have released for in Java what we call the polymorphic type, which is. You can and then install in it the supported Realm have JSON with the dynamic types, et cetera. Go look at it. But it's not like the polymorphic, a different polymorphic that Ian was referring -\n\n**Ian:**\nIt's a first step I guess you could say into polymorphism. What it enables is kind of what Nabil described is you could have an owner field and let's say that owner could be a business type, it could be a person, it could be an industrial, a commercial, each of these are different classes. You now have the ability to store that type as part of that field. But it doesn't allow for true inheritance, which I think is what most people are looking for for polymorphism. That's something that is after this is approved going to be underway. Look forward to that. Other questions here? Any other questions? Anything I missed? I think we've gone through all of them. Thank you to the Android team. Here's Christian, the lead of our Android team on as well. He's been answering a lot of questions. Thank you Christian. But if any other questions here, please reach out. Otherwise, we will close a little bit early.\n\nOkay. Well, thank you so much everyone. Claus, thank you. I really appreciate it. Thank you for putting this together. This will be posted on YouTube. Any other questions, please go to /Realm/Realm.Kotlin on our GitHub. You can file an issue there. You can ask questions. There's also Forums.Realm.io.\n\nWe look forward to hearing from you. Okay. Thanks everyone. Bye.\n\n**Nabil:**\nSee you online. 
Cheers.", "format": "md", "metadata": {"tags": ["Realm", "Kotlin", "Android"], "pageDescription": "In this talk, Claus R\u00f8rbech, software engineer on the Realm Android team, will walk us through some of the constraints of the RealmJava SDK, the thought process that went into the decision to build a new SDK for Kotlin, the benefits developers will be able to leverage with the new APIs, and how the RealmKotlin SDK will evolve.", "contentType": "Article"}, "title": "Realm Meetup - Realm Kotlin Multiplatform for Modern Mobile Apps", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/mongoimport-guide", "action": "created", "body": "# How to Import Data into MongoDB with mongoimport\n\nNo matter what you're building with MongoDB, at some point you'll want to import some data. Whether it's the majority of your data, or just some reference data that you want to integrate with your main data set, you'll find yourself with a bunch of JSON or CSV files that you need to import into a collection. Fortunately, MongoDB provides a tool called mongoimport which is designed for this task. This guide will explain how to effectively use mongoimport to get your data into your MongoDB database.\n\n>We also provide MongoImport Reference documentation, if you're looking for something comprehensive or you just need to look up a command-line option.\n## Prerequisites\n\nThis guide assumes that you're reasonably comfortable with the command-line. Most of the guide will just be running commands, but towards the end I'll show how to pipe data through some command-line tools, such as `jq`.\n\n>If you haven't had much experience on the command-line (also sometimes called the terminal, or shell, or bash), why not follow along with some of the examples? It's a great way to get started.\n\nThe examples shown were all written on MacOS, but should run on any unix-type system. If you're running on Windows, I recommend running the example commands inside the Windows Subsystem for Linux.\n\nYou'll need a temporary MongoDB database to test out these commands. If\nyou're just getting started, I recommend you sign up for a free MongoDB\nAtlas account, and then we'll take care of the cluster for you!\n\nAnd of course, you'll need a copy of `mongoimport`. If you have MongoDB\ninstalled on your workstation then you may already have `mongoimport`\ninstalled. If not, follow these instructions on the MongoDB website to install it.\n\nI've created a GitHub repo of sample data, containing an extract from the New York Citibike dataset in different formats that should be useful for trying out the commands in this guide.\n\n## Getting Started with `mongoimport`\n\n`mongoimport` is a powerful command-line tool for importing data from JSON, CSV, and TSV files into MongoDB collections. It's super-fast and multi-threaded, so in many cases will be faster than any custom script you might write to do the same thing. `mongoimport` use can be combined with some other command-line tools, such as `jq` for JSON manipulation, or `csvkit` for CSV manipulation, or even `curl` for dynamically downloading data files from servers on the internet. As with many command-line tools, the options are endless!\n\n## Choosing a Source Data Format\n\nIn many ways, having your source data in JSON files is better than CSV (and TSV). JSON is both a hierarchical data format, like MongoDB documents, and is also explicit about the types of data it encodes. 
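To make that contrast concrete, here's a small, hypothetical record built from the station values used later in this guide: the nested `location` object and the unquoted number and boolean can be expressed directly in JSON, whereas a CSV row would have to flatten the structure into separate columns and would carry no type information at all.

``` json
{
  "station": "Carroll St & Smith St",
  "location": { "latitude": 40.680611, "longitude": -73.99475825 },
  "docks": 24,
  "active": true
}
```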
On the other hand, source JSON data can be difficult to deal with - in many cases it is not in the structure you'd like, or it has numeric data encoded as strings, or perhaps the date formats are not in a form that `mongoimport` accepts.\n\nCSV (and TSV) data is tabular, and each row will be imported into MongoDB as a separate document. This means that these formats cannot support hierarchical data in the same way as a MongoDB document can. When importing CSV data into MongoDB, `mongoimport` will attempt to make sensible choices when identifying the type of a specific field, such as `int32` or `string`. This behaviour can be overridden with the use of some flags, and you can specify types if you want to. On top of that, `mongoimport` supplies some facilities for parsing dates and other types in different formats.\n\nIn many cases, the choice of source data format won't be up to you - it'll be up to the organisation generating the data and providing it to you. I recommend if the source data is in CSV form then you shouldn't attempt to convert it to JSON first unless you plan to restructure it.\n\n## Connect `mongoimport` to Your Database\n\nThis section assumes that you're connecting to a relatively straightforward setup - with a default authentication database and some authentication set up. (You should *always* create some users for authentication!)\n\nIf you don't provide any connection details to mongoimport, it will attempt to connect to MongoDB on your local machine, on port 27017 (which is MongoDB's default). This is the same as providing `--host=localhost:27017`.\n\n## One URI to Rule Them All\n\nThere are several options that allow you to provide separate connection information to mongoimport, but I recommend you use the `--uri` option. If you're using Atlas you can get the appropriate connection URI from the Atlas interface, by clicking on your cluster's \"Connect\" button and selecting \"Connect your Application\". (Atlas is being continuously developed, so these instructions may be slightly out of date.) Set the URI as the value of your `--uri` option, and replace the username and password with the appropriate values:\n\n``` bash\nmongoimport --uri 'mongodb+srv://MYUSERNAME:SECRETPASSWORD@mycluster-ABCDE.azure.mongodb.net/test?retryWrites=true&w=majority'\n```\n\n**Be aware** that in this form the username and password must be URL-encoded. If you don't want to worry about this, then provide the username and password using the `--username` and `--password` options instead:\n\n``` bash\nmongoimport --uri 'mongodb+srv://mycluster-ABCDE.azure.mongodb.net/test?retryWrites=true&w=majority' \\\n --username='MYUSERNAME' \\\n --password='SECRETPASSWORD'\n```\n\nIf you omit a password from the URI and do not provide a `--password` option, then `mongoimport` will prompt you for a password on the command-line. In all these cases, using single-quotes around values, as I've done, will save you problems in the long-run!\n\nIf you're *not* connecting to an Atlas database, then you'll have to generate your own URI. If you're connecting to a single server (i.e. you don't have a replicaset), then your URI will look like this: `mongodb://your.server.host.name:port/`. If you're running a replicaset (and you\nshould!) then you have more than one hostname to connect to, and you don't know in advance which is the primary. 
In this case, your URI will consist of a series of servers in your cluster (you don't need to provide all of your cluster's servers, providing one of them is available), and mongoimport will discover and connect to the primary automatically. A replicaset URI looks like this: `mongodb://username:password@host1:port,host2:port/?replicaSet=replicasetname`.\n\nFull details of the supported URI formats can be found in our reference documentation.\n\nThere are also many other options available and these are documented in the mongoimport reference documentation.\n\nOnce you've determined the URI, then the fun begins. In the rest of this guide, I'll leave those flags out. You'll need to add them in when trying out the various other options.\n\n## Import One JSON Document\n\nThe simplest way to import a single file into MongoDB is to use the `--file` option to specify a file. In my opinion, the very best situation is that you have a directory full of JSON files which need to be imported. Ideally each JSON file contains one document you wish to import into MongoDB, it's in the correct structure, and each of the values is of the correct type. Use this option when you wish to import a single file as a single document into a MongoDB collection.\n\nYou'll find data in this format in the 'file_per_document' directory in the sample data GitHub repo. Each document will look like this:\n\n``` json\n{\n\"tripduration\": 602,\n\"starttime\": \"2019-12-01 00:00:05.5640\",\n\"stoptime\": \"2019-12-01 00:10:07.8180\",\n\"start station id\": 3382,\n\"start station name\": \"Carroll St & Smith St\",\n\"start station latitude\": 40.680611,\n\"start station longitude\": -73.99475825,\n\"end station id\": 3304,\n\"end station name\": \"6 Ave & 9 St\",\n\"end station latitude\": 40.668127,\n\"end station longitude\": -73.98377641,\n\"bikeid\": 41932,\n\"usertype\": \"Subscriber\",\n\"birth year\": 1970,\n\"gender\": \"male\"\n}\n```\n\n``` bash\nmongoimport --collection='mycollectionname' --file='file_per_document/ride_00001.json'\n```\n\nThe command above will import all of the json file into a collection\n`mycollectionname`. You don't have to create the collection in advance.\n\nIf you use MongoDB Compass or another tool to connect to the collection you just created, you'll see that MongoDB also generated an `_id` value in each document for you. This is because MongoDB requires every document to have a unique `_id`, but you didn't provide one. I'll cover more on this shortly.\n\n## Import Many JSON Documents\n\nMongoimport will only import one file at a time with the `--file` option, but you can get around this by piping multiple JSON documents into mongoimport from another tool, such as `cat`. This is faster than importing one file at a time, running mongoimport from a loop, as mongoimport itself is multithreaded for faster uploads of multiple documents. 
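For comparison, the slower approach of looping over the files and running `mongoimport` once per file might look something like this (a sketch, reusing the hypothetical collection name from the earlier example and omitting the connection flags):

``` bash
# Starts a new mongoimport process (and connection) for every file -
# slower than piping all the documents into a single multi-threaded run.
for f in *.json; do
    mongoimport --collection='mycollectionname' --file="$f"
done
```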
With a directory full of JSON files, where each JSON file should be imported as a separate MongoDB document can be imported by `cd`-ing to the directory that contains the JSON files and running:\n\n``` bash\ncat *.json | mongoimport --collection='mycollectionname'\n```\n\nAs before, MongoDB creates a new `_id` for each document inserted into the MongoDB collection, because they're not contained in the source data.\n\n## Import One Big JSON Array\n\nSometimes you will have multiple documents contained in a JSON array in a single document, a little like the following:\n\n``` json\n\n { title: \"Document 1\", data: \"document 1 value\"},\n { title: \"Document 2\", data: \"document 2 value\"}\n]\n```\n\nYou can import data in this format using the `--file` option, using the `--jsonArray` option:\n\n``` bash\nmongoimport --collection='from_array_file' --file='one_big_list.json' --jsonArray\n```\n\nIf you forget to add the --jsonArray option, `mongoimport` will fail with the error \"cannot decode array into a Document.\" This is because documents are equivalent to JSON objects, not arrays. You can store an array as a \\_value\\_ on a document, but a document cannot be an array.\n\n## Import MongoDB-specific Types with JSON\n\nIf you import some of the JSON data from the [sample data github repo and then view the collection's schema in Compass, you may notice a couple of problems:\n\n- The values of `starttime` and `stoptime` should be \"date\" types, not \"string\".\n- MongoDB supports geographical points, but doesn't recognize the start and stop stations' latitudes and longitudes as such.\n\nThis stems from a fundamental difference between MongoDB documents and JSON documents. Although MongoDB documents often *look* like JSON data, they're not. MongoDB stores data as BSON. BSON has multiple advantages over JSON. It's more compact, it's faster to traverse, and it supports more types than JSON. Among those types are Dates, GeoJSON types, binary data, and decimal numbers. All the types are listed in the MongoDB documentation\n\nIf you want MongoDB to recognise fields being imported from JSON as specific BSON types, those fields must be manipulated so that they follow a structure we call Extended JSON. This means that the following field:\n\n``` json\n\"starttime\": \"2019-12-01 00:00:05.5640\"\n```\n\nmust be provided to MongoDB as:\n\n``` json\n\"starttime\": {\n \"$date\": \"2019-12-01T00:00:05.5640Z\"\n}\n```\n\nfor it to be recognized as a Date type. Note that the format of the date string has changed slightly, with the 'T' separating the date and time, and the Z at the end, indicating UTC timezone.\n\nSimilarly, the latitude and longitude must be converted to a GeoJSON Point type if you wish to take advantage of MongoDB's ability to search location data. 
The two values:\n\n``` json\n\"start station latitude\": 40.680611,\n\"start station longitude\": -73.99475825,\n```\n\nmust be provided to `mongoimport` in the following GeoJSON Point form:\n\n``` json\n\"start station location\": {\n \"type\": \"Point\",\n \"coordinates\": [ -73.99475825, 40.680611 ]\n}\n```\n\n**Note**: the pair of values is longitude *then* latitude, as this sometimes catches people out!\n\nOnce you have geospatial data in your collection, you can use MongoDB's geospatial queries to search for data by location.\n\nIf you need to transform your JSON data in this kind of way, see the section on JQ.\n\n## Importing Data Into Non-Empty Collections\n\nWhen importing data into a collection which already contains documents, your `_id` value is important. If your incoming documents don't contain `_id` values, then new values will be created and assigned to the new documents as they are added to the collection. If your incoming documents *do* contain `_id` values, then they will be checked against existing documents in the collection. The `_id` value must be unique within a collection. By default, if the incoming document has an `_id` value that already exists in the collection, then the document will be rejected and an error will be logged. This mode (the default) is called \"insert mode\". There are other modes, however, that behave differently when a matching document is imported using `mongoimport`.\n\n### Update Existing Records\n\nIf you are periodically supplied with new data files, you can use `mongoimport` to efficiently update the data in your collection. If your input data is supplied with a stable identifier, use that field as the `_id` field, and supply the option `--mode=upsert`. This mode will insert a new document if the `_id` value is not currently present in the collection. If the `_id` value already exists in a document, then that document will be overwritten by the new document data.\n\nIf you're upserting records that don't have stable IDs, you can specify some fields to use to match against documents in the collection, with the `--upsertFields` option. If you're using more than one field name, separate these values with a comma:\n\n``` bash\n--upsertFields=name,address,height\n```\n\nRemember to index these fields if you're using `--upsertFields`, otherwise it'll be slow!\n\n### Merge Data into Existing Records\n\nIf you are supplied with data files which *extend* your existing documents by adding new fields, or update certain fields, you can use `mongoimport` with \"merge mode\". If your input data is supplied with a stable identifier, use that field as the `_id` field, and supply the option `--mode=merge`. This mode will insert a new document if the `_id` value is not currently present in the collection. If the `_id` value already exists in a document, then the new data will be merged into that existing document rather than replacing it.\n\nYou can also use the `--upsertFields` option here, as well as when you're doing upserts, to match the documents you want to update.\n\n## Import CSV (or TSV) into a Collection\n\nIf you have CSV files (or TSV files - they're conceptually the same) to import, use the `--type=csv` or `--type=tsv` option to tell `mongoimport` what format to expect. It's also important to know whether your CSV file has a header row, where the first line doesn't contain data but instead contains the name of each column. 
If you *do* have a header row, you should use the `--headerline` option to tell `mongoimport` that the first line should not be imported as a document.\n\nWith CSV data, you may have to do some extra work to annotate the data to get it to import correctly. The primary issues are:\n\n- CSV data is \"flat\" - there is no good way to embed sub-documents in a row of a CSV file, so you may want to restructure the data to match the structure you wish to have in your MongoDB documents.\n- CSV data does not include type information.\n\nThe first problem is a probably bigger issue. You have two options. One is to write a script to restructure the data *before* using `mongoimport` to import the data. Another approach could be to import the data into MongoDB and then run an aggregation pipeline to transform the data into your required structure.\n\nBoth of these approaches are out of the scope of this blog post. If it's something you'd like to see more explanation of, head over to the MongoDB Community Forums.\n\nThe fact that CSV files don't specify the type of data in each field can be solved by specifying the field types when calling `mongoimport`.\n\n### Specify Field Types\n\nIf you don't have a header row, then you must tell `mongoimport` the name of each of your columns, so that `mongoimport` knows what to call each of the fields in each of the documents to be imported. There are two methods to do this: You can list the field names on the command-line with the `--fields` option, or you can put the field names in a file, and point to it with the `--fieldFile` option.\n\n``` bash\nmongoimport \\\n --collection='fields_option' \\\n --file=without_header_row.csv \\\n --type=csv \\\n --fields=\"tripduration\",\"starttime\",\"stoptime\",\"start station id\",\"start station name\",\"start station latitude\",\"start station longitude\",\"end station id\",\"end station name\",\"end station latitude\",\"end station longitude\",\"bikeid\",\"usertype\",\"birth year\",\"gender\"\n```\n\nThat's quite a long line! In cases where there are lots of columns it's a good idea to manage the field names in a field file.\n\n### Use a Field File\n\nA field file is a list of column names, with one name per line. So the equivalent of the `--fields` value from the call above looks like this:\n\n``` none\ntripduration\nstarttime\nstoptime\nstart station id\nstart station name\nstart station latitude\nstart station longitude\nend station id\nend station name\nend station latitude\nend station longitude\nbikeid\nusertype\nbirth year\ngender\n```\n\nIf you put that content in a file called 'field_file.txt' and then run the following command, it will use these column names as field names in MongoDB:\n\n``` bash\nmongoimport \\\n --collection='fieldfile_option' \\\n --file=without_header_row.csv \\\n --type=csv \\\n --fieldFile=field_file.txt\n```\n\nIf you open Compass and look at the schema for either 'fields_option' or 'fieldfile_option', you should see that `mongoimport` has automatically converted integer types to `int32` and kept the latitude and longitude values as `double` which is a real type, or floating-point number. In some cases, though, MongoDB may make an incorrect decision. In the screenshot above, you can see that the 'starttime' and 'stoptime' fields have been imported as strings. 
Ideally they would have been imported as a BSON date type, which is more efficient for storage and filtering.\n\nIn this case, you'll want to specify the type of some or all of your columns.\n\n### Specify Types for CSV Columns\n\nAll of the types you can specify are listed in our reference documentation.\n\nTo tell `mongoimport` you wish to specify the type of some or all of your fields, you should use the `--columnsHaveTypes` option. As well as using the `--columnsHaveTypes` option, you will need to specify the types of your fields. If you're using the `--fields` option, you can add type information to that value, but I highly recommend adding type data to the field file. This way it should be more readable and maintainable, and that's what I'll demonstrate here.\n\nI've created a file called `field_file_with_types.txt`, and entered the following:\n\n``` none\ntripduration.auto()\nstarttime.date(2006-01-02 15:04:05)\nstoptime.date(2006-01-02 15:04:05)\nstart station id.auto()\nstart station name.auto()\nstart station latitude.auto()\nstart station longitude.auto()\nend station id.auto()\nend station name.auto()\nend station latitude.auto()\nend station longitude.auto()\nbikeid.auto()\nusertype.auto()\nbirth year.auto()\ngender.auto()\n```\n\nBecause `mongoimport` already did the right thing with most of the fields, I've set them to `auto()` - the type information comes after a period (`.`). The two time fields, `starttime` and `stoptime` were being incorrectly imported as strings, so in these cases I've specified that they should be treated as a `date` type. Many of the types take arguments inside the parentheses. In the case of the `date` type, it expects the argument to be *a date* formatted in the same way you expect the column's values to be formatted. See the reference documentation for more details.\n\nNow, the data can be imported with the following call to `mongoimport`:\n\n``` bash\nmongoimport --collection='with_types' \\\n --file=without_header_row.csv \\\n --type=csv \\\n --columnsHaveTypes \\\n --fieldFile=field_file_with_types.txt\n```\n\n## And The Rest\n\nHopefully you now have a good idea of how to use `mongoimport` and of how flexible it is! I haven't covered nearly all of the options that can be provided to `mongoimport`, however, just the most important ones. Others I find useful frequently are:\n\n| Option| Description |\n| --- | --- |\n| `--ignoreBlanks`| Ignore fields or columns with empty values. |\n| `--drop` | Drop the collection before importing the new documents. This is particularly useful during development, but **will lose data** if you use it accidentally. |\n| `--stopOnError` | Another option that is useful during development, this causes `mongoimport` to stop immediately when an error occurs. |\n\nThere are many more! Check out the mongoimport reference documentation for all the details.\n\n## Useful Command-Line Tools\n\nOne of the major benefits of command-line programs is that they are designed to work with *other* command-line programs to provide more power. There are a couple of command-line programs that I *particularly* recommend you look at: `jq` a JSON manipulation tool, and `csvkit` a similar tool for working with CSV files.\n\n### JQ\n\nJQ is a processor for JSON data. It incorporates a powerful filtering and scripting language for filtering, manipulating, and even generating JSON data. 
A full tutorial on how to use JQ is out of scope for this guide, but to give you a brief taster:\n\nIf you create a JQ script called `fix_dates.jq` containing the following:\n\n``` none\n.starttime |= { \"$date\": (. | sub(\" \"; \"T\") + \"Z\") }\n| .stoptime |= { \"$date\": (. | sub(\" \"; \"T\") + \"Z\") }\n```\n\nYou can now pipe the sample JSON data through this script to modify the\n`starttime` and `stoptime` fields so that they will be imported into MongoDB as `Date` types:\n\n``` bash\necho '\n{\n \"tripduration\": 602,\n \"starttime\": \"2019-12-01 00:00:05.5640\",\n \"stoptime\": \"2019-12-01 00:10:07.8180\"\n}' \\\n| jq -f fix_dates.jq\n{\n \"tripduration\": 602,\n \"starttime\": {\n \"$date\": \"2019-12-01T00:00:05.5640Z\"\n },\n \"stoptime\": {\n \"$date\": \"2019-12-01T00:10:07.8180Z\"\n }\n}\n```\n\nThis can be used in a multi-stage pipe, where data is piped into `mongoimport` via `jq`.\n\nThe `jq` tool can be a little fiddly to understand at first, but once you start to understand how the language works, it is very powerful, and very fast. I've provided a more complex JQ script example in the sample data GitHub repo, called `json_fixes.jq`. Check it out for more ideas, and the full documentation on the JQ website.\n\n### CSVKit\n\nIn the same way that `jq` is a tool for filtering and manipulating JSON data, `csvkit` is a small collection of tools for filtering and manipulating CSV data. Some of the tools, while useful in their own right, are unlikely to be useful when combined with `mongoimport`. Tools like `csvgrep` which filters csv file rows based on expressions, and `csvcut` which can remove whole columns from CSV input, are useful tools for slicing and dicing your data before providing it to `mongoimport`.\n\nCheck out the csvkit docs for more information on how to use this collection of tools.\n\n### Other Tools\n\nAre there other tools you know of which would work well with\n`mongoimport`? Do you have a great example of using `awk` to handle tabular data before importing into MongoDB? Let us know on the community forums!\n\n## Conclusion\n\nIt's a common mistake to write custom code to import data into MongoDB. I hope I've demonstrated how powerful `mongoimport` is as a tool for importing data into MongoDB quickly and efficiently. Combined with other simple command-line tools, it's both a fast and flexible way to import your data into MongoDB.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to import different types of data into MongoDB, quickly and efficiently, using mongoimport.", "contentType": "Tutorial"}, "title": "How to Import Data into MongoDB with mongoimport", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-sync-migration", "action": "created", "body": "# Migrating Your iOS App's Synced Realm Schema in Production\n\n## Introduction\n\nIn the previous post in this series, we saw how to migrate your Realm data when you upgraded your iOS app with a new schema. But, that only handled the data in your local, standalone Realm database. What if you're using MongoDB Realm Sync to replicate your local Realm data with other instances of your mobile app and with MongoDB Atlas? That's what this article will focus on.\n\nWe'll start with the original RChat app. We'll then extend the iOS app and backend Realm schema to add a new feature that allows chat messages to be tagged as high priority. 
The next (and perhaps surprisingly more complicated from a Realm perspective) upgrade is to make the `author` attribute of the existing `ChatMessage` object non-optional.\n\nYou can find all of the code for this post in the RChat repo under these branches:\n\n- Starting point\n- Upgrade #1\n- Upgrade #2\n\n## Prerequisites\n\nRealm Cocoa 10.13.0 or later (for versions of the app that you're upgrading **to**)\n\n## Catch-Up \u2014 The RChat App\n\nRChat is a basic chat app:\n\n- Users can register and log in using their email address and a password.\n- Users can create chat rooms and include other users in those rooms.\n- Users can post messages to a chat room (optionally including their location and photos).\n- All members of a chatroom can see messages sent to the room by themselves or other users.\n\n:youtubeExisting RChat iOS app functionality]{vid=BlV9El_MJqk}\n\n## Upgrade #1: Add a High-Priority Flag to Chat Messages\n\nThe first update is to allow a user to tag a message as being high-priority as they post it to the chat room:\n\n![Screenshot showing the option to click a thermometer button to tag the message as urgent\n\nThat message is then highlighted with bold text and a \"hot\" icon in the list of chat messages:\n\n### Updating the Backend Realm Schema\n\nAdding a new field is an additive change\u2014meaning that you don't need to restart sync (which would require every deployed instance of the RChat mobile app to recognize the change and start sync from scratch, potentially losing local changes).\n\nWe add the new `isHighPriority` bool to our Realm schema through the Realm UI:\n\nWe also make `isHighPriority` a required (non-optional field).\n\nThe resulting schema looks like this:\n\n```js\n{\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"author\": {\n \"bsonType\": \"string\"\n },\n \"image\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"date\": {\n \"bsonType\": \"date\"\n },\n \"picture\": {\n \"bsonType\": \"binData\"\n },\n \"thumbNail\": {\n \"bsonType\": \"binData\"\n }\n },\n \"required\": \n \"_id\",\n \"date\"\n ],\n \"title\": \"Photo\"\n },\n \"isHighPriority\": {\n \"bsonType\": \"bool\"\n },\n \"location\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"double\"\n }\n },\n \"partition\": {\n \"bsonType\": \"string\"\n },\n \"text\": {\n \"bsonType\": \"string\"\n },\n \"timestamp\": {\n \"bsonType\": \"date\"\n }\n },\n \"required\": [\n \"_id\",\n \"partition\",\n \"text\",\n \"timestamp\",\n \"isHighPriority\"\n ],\n \"title\": \"ChatMessage\"\n }\n```\nNote that existing versions of our iOS RChat app can continue to work with our updated backend Realm app, even though their local `ChatMessage` Realm objects don't include the new field.\n\n### Updating the iOS RChat App\n\nWhile existing versions of the iOS RChat app can continue to work with the updated Realm backend app, they can't use the new `isHighPriority` field as it isn't part of the `ChatMessage` object.\n\nTo add the new feature, we need to update the mobile app after deploying the updated Realm backend application.\n\nThe first change is to add the `isHighPriority` field to the `ChatMessage` class:\n\n```swift\nclass ChatMessage: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id = UUID().uuidString\n @Persisted var partition = \"\" // \"conversation=\"\n @Persisted var author: String? 
// username\n @Persisted var text = \"\"\n @Persisted var image: Photo?\n @Persisted var location = List()\n @Persisted var timestamp = Date()\n @Persisted var isHighPriority = false\n ...\n}\n```\n\nAs seen in the [previous post in this series, Realm can automatically update the local realm to include this new attribute and initialize it to `false`. Unlike with standalone realms, we **don't** need to signal to the Realm SDK that we've updated the schema by providing a schema version.\n\nThe new version of the app will happily exchange messages with instances of the original app on other devices (via our updated backend Realm app).\n\n## Upgrade #2: Make `author` a Non-Optional Chat Message field\n\nWhen the initial version of RChat was written, the `author` field of `ChatMessage` was declared as being optional. We've since realized that there are no scenarios where we wouldn't want the author included in a chat message. To make sure that no existing or future client apps neglect to include the author, we need to update our schema to make `author` a required field.\n\nUnfortunately, changing a field from optional to required (or vice versa) is a destructive change, and so would break sync for any deployed instances of the RChat app.\n\nOops!\n\nThis means that there's extra work needed to make the upgrade seamless for the end users. We'll go through the process now.\n\n### Updating the Backend Realm Schema\n\nThe change we need to make to the schema is destructive. This means that the new document schema is incompatible with the schema that's currently being used in our mobile app.\n\nIf RChat wasn't already deployed on the devices of hundreds of millions of users (we can dream!), then we could update the Realm schema for the `ChatMessage` collection and restart Realm Sync. During development, we can simply remove the original RChat mobile app and then install an updated version on our test devices.\n\nTo avoid that trauma for our end users, we leave the `ChatMessage` collection's schema as is and create a partner collection. The partner collection (`ChatMessageV2`) will contain the same data as `ChatMessage`, except that its schema makes `author` a required field.\n\nThese are the steps we'll go through to create the partner collection:\n\n- Define a Realm schema for the `ChatMessageV2` collection.\n- Run an aggregation to copy all of the documents from `ChatMessage` to `ChatMessageV2`. If `author` is missing from a `ChatMessage` document, then the aggregation will add it.\n- Add a trigger to the `ChatMessage` collection to propagate any changes to `ChatMessageV2` (adding `author` if needed).\n- Add a trigger to the `ChatMessageV2` collection to propagate any changes to `ChatMessage`.\n\n#### Define the Schema for the Partner Collection\n\nFrom the Realm UI, copy the schema from the `ChatMessage` collection. 
\n\nClick the button to create a new schema:\n\nSet the database and collection name before clicking \"Add Collection\":\n\nPaste in the schema copied from `ChatMessage`, add `author` to the `required` section, change the `title` to `ChatMessageV2`, and the click the \"SAVE\" button:\n\nThis is the resulting schema:\n\n```js\n{\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"author\": {\n \"bsonType\": \"string\"\n },\n \"image\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"date\": {\n \"bsonType\": \"date\"\n },\n \"picture\": {\n \"bsonType\": \"binData\"\n },\n \"thumbNail\": {\n \"bsonType\": \"binData\"\n }\n },\n \"required\": \n \"_id\",\n \"date\"\n ],\n \"title\": \"Photo\"\n },\n \"isHighPriority\": {\n \"bsonType\": \"bool\"\n },\n \"location\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"double\"\n }\n },\n \"partition\": {\n \"bsonType\": \"string\"\n },\n \"text\": {\n \"bsonType\": \"string\"\n },\n \"timestamp\": {\n \"bsonType\": \"date\"\n }\n },\n \"required\": [\n \"_id\",\n \"partition\",\n \"text\",\n \"timestamp\",\n \"isHighPriority\",\n \"author\"\n ],\n \"title\": \"ChatMessageV2\"\n }\n```\n\n#### Copy Existing Data to the Partner Collection\n\nWe're going to use an [aggregation pipeline to copy and transform the existing data from the original collection (`ChatMessage`) to the partner collection (`ChatMessageV2`).\n\nYou may want to pause sync just before you run the aggregation, and then unpause it after you enable the trigger on the `ChatMessage` collection in the next step:\n\nThe end users can continue to create new messages while sync is paused, but those messages won't be published to other users until sync is resumed. By pausing sync, you can ensure that all new messages will make it into the partner collection (and so be visible to users running the new version of the mobile app).\n\nIf pausing sync is too much of an inconvenience, then you could create a temporary trigger on the `ChatMessage` collection that will copy and transform document inserts to the `ChatMessageV2` collection (it's a subset of the `ChatMessageProp` trigger we'll define in the next section.).\n\nFrom the Atlas UI, select \"Collections\" -> \"ChatMessage\", \"New Pipeline From Text\":\n\nPaste in this aggregation pipeline and click the \"Create New\" button:\n\n```js\n\n {\n '$addFields': {\n 'author': {\n '$convert': {\n 'input': '$author',\n 'to': 'string',\n 'onError': 'unknown',\n 'onNull': 'unknown'\n }\n }\n }\n },\n {\n '$merge': {\n into: \"ChatMessageV2\",\n on: \"_id\",\n whenMatched: \"replace\",\n whenNotMatched: \"insert\"\n }\n }\n]\n```\n\nThis aggregation will take each `ChatMessage` document, set `author` to \"unknown\" if it's not already set, and then add it to the `ChatMessageV2` collection.\n\nClick \"MERGE DOCUMENTS\":\n\n![Clicking the \"Merge Documents\" button in the Realm UI\n\n`ChatMessageV2` now contains a (possibly transformed) copy of every document from `ChatMessage`. But, changes to one collection won't be propagated to the other. To address that, we add a database trigger to each collection\u2026\n\n#### Add Database Triggers\n\nWe need to create two Realm Functions\u2014one to copy/transfer documents to `ChatMessageV2`, and one to copy documents to `ChatMessage`.\n\nFrom the \"Functions\" section of the Realm UI, click \"Create New Function\":\n\nName the function `copyToChatMessageV2`. 
Set the authentication method to \"System\"\u2014this will circumvent any access permissions on the `ChatMessageV2` collection. Ensure that the \"Private\" switch is turned on\u2014that means that the function can be called from a trigger, but not directly from a frontend app. Click \"Save\":\n\nPaste this code into the function editor and save:\n\n```js\nexports = function (changeEvent) {\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n\n if (changeEvent.operationType === \"delete\") {\n return db.collection(\"ChatMessageV2\").deleteOne({ _id: changeEvent.documentKey._id });\n }\n\n const author = changeEvent.fullDocument.author ? changeEvent.fullDocument.author : \"Unknown\";\n const pipeline = \n { $match: { _id: changeEvent.documentKey._id } },\n {\n $addFields: {\n author: author,\n }\n },\n { $merge: \"ChatMessageV2\" }];\n\n return db.collection(\"ChatMessage\").aggregate(pipeline);\n};\n```\n\nThis function will receive a `ChatMessage` document from our trigger. If the operation that triggered the function is a delete, then this function deletes the matching document from `ChatMessageV2`. Otherwise, the function either copies `author` from the incoming document or sets it to \"Unknown\" before writing the transformed document to `ChatMessageV2`. We could initialize `author` to any string, but I've used \"Unknown\" to tell the user that we don't know who the author was.\n\nCreate the `copyToChatMessage` function in the same way:\n\n```js\nexports = function (changeEvent) {\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n\n if (changeEvent.operationType === \"delete\") {\n return db.collection(\"ChatMessage\").deleteOne({ _id: changeEvent.documentKey._id })\n }\n const pipeline = [\n { $match: { _id: changeEvent.documentKey._id } },\n { $merge: \"ChatMessage\" }]\n return db.collection(\"ChatMessageV2\").aggregate(pipeline);\n};\n```\n\nThe final change needed to the backend Realm application is to add database triggers that invoke these functions.\n\nFrom the \"Triggers\" section of the Realm UI, click \"Add a Trigger\":\n\n![Click the \"Add a Trigger\" button in the Realm UI\n\nConfigure the `ChatMessageProp` trigger as shown:\n\nRepeat for `ChatMessageV2Change`:\n\nIf you paused sync in the previous section, then you can now unpause it.\n\n### Updating the iOS RChat App\n\nWe want to ensure that users still running the old version of the app can continue to exchange messages with users running the latest version.\n\nExisting versions of RChat will continue to work. They will create `ChatMessage` objects which will get synced to the `ChatMessage` Atlas collection. The database triggers will then copy/transform the document to the `ChatMessageV2` collection.\n\nWe now need to create a new version of the app that works with documents from the `ChatMessageV2` collection. We'll cover that in this section.\n\nRecall that we set `title` to `ChatMessageV2` in the partner collection's schema. That means that to sync with that collection, we need to rename the `ChatMessage` class to `ChatMessageV2` in the iOS app. \n\nChanging the name of the class throughout the app is made trivial by Xcode.\n\nOpen `ChatMessage.swift` and right-click on the class name (`ChatMessage`), select \"Refactor\" and then \"Rename\u2026\":\n\nOverride the class name with `ChatMessageV2` and click \"Rename\":\n\nThe final step is to make the author field mandatory. Remove the ? 
from the author attribute to make it non-optional:\n\n```swift\nclass ChatMessageV2: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id = UUID().uuidString\n @Persisted var partition = \"\" // \"conversation=\"\n @Persisted var author: String\n ...\n}\n```\n\n## Conclusion\n\nModifying a Realm schema is a little more complicated when you're using Realm Sync for a deployed app. You'll have end users who are using older versions of the schema, and those apps need to continue to work.\n\nFortunately, the most common schema changes (adding or removing fields) are additive. They simply require updates to the back end and iOS schema, together.\n\nThings get a little trickier for destructive changes, such as changing the type or optionality of an existing field. For these cases, you need to create and maintain a partner collection to avoid loss of data or service for your users.\n\nThis article has stepped through how to handle both additive and destructive schema changes, allowing you to add new features or fix issues in your apps without impacting users running older versions of your app.\n\nRemember, you can find all of the code for this post in the RChat repo under these branches:\n\n- Starting point\n- Upgrade #1\n- Upgrade #2\n\nIf you're looking to upgrade the Realm schema for an iOS app that **isn't** using Realm Sync, then refer to the previous post in this series.\n\nIf you have any questions or comments on this post (or anything else Realm-related), then please raise them on our community forum. To keep up with the latest Realm news, follow @realm on Twitter and join the Realm global community.\n", "format": "md", "metadata": {"tags": ["Realm", "iOS", "Mobile"], "pageDescription": "When you add features to your app, you may need to modify your Realm schema. Here, we step through how to migrate your synced schema and data.", "contentType": "Tutorial"}, "title": "Migrating Your iOS App's Synced Realm Schema in Production", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/live2020-keynote-summary", "action": "created", "body": "# MongoDB.Live 2020 Keynote In Less Than 10 Minutes\n\nDidn't get a chance to attend the MongoDB.Live 2020 online conference\nthis year? Don't worry. We have compiled a quick recap of all the\nhighlights to get you caught up.\n\n>\n>\n>MongoDB.Live 2020 - Everything from the Breakthroughs to the\n>Milestones - in 10 Minutes!\n>\n>:youtube]{vid=TB_EdovmBUo}\n>\n>\n\nAs you can see, we packed a lot of exciting news in this year's event.\n\nMongoDB Realm: \n- Bi-directional sync between mobile devices and data in an Atlas\n cluster\n- Easy integration with authentication, serverless functions and\n triggers\n- GraphQL support\n\nMongoDB Server 4.4: \n- Refinable shard keys\n- Hedged reads\n- New query language additions - union, custom aggregation expressions\n\nMongoDB Atlas: \n- Atlas Search GA\n- Atlas Online Archive (Beta)\n- Automated schema suggestions\n- AWS IAM authentication\n\nAnalytics \n- Atlas Data Lake\n- Federated queries\n- Charts embedding SDK\n\nDevOps Tools and Integrations \n- Community Kubernetes operator and containerized Ops Manager for\n Enterprise\n- New MongoDB shell\n- New integration for VS Code and other JetBrains IDEs\n\nMongoDB Learning & Community \n- University Learning Paths\n- Developer Hub\n- Community Forum\n\nOver 14,000 attendees attended our annual user conference online to\nexperience how we make data stunningly easy to work with. 
If you didn't\nget the chance to attend, check out our upcoming [MongoDB.live 2020\nregional series. These virtual\nconferences in every corner of the globe are the easiest way for you to\nsharpen your skills during a full day of interactive, virtual sessions,\ndeep dives, and workshops from the comfort of your home and the\nconvenience of your time zone. You'll discover new ways MongoDB removes\nthe developer pain of working with data, allowing you to focus on your\nvision and freeing your genius.\n\nTo learn more, ask questions, leave feedback or simply connect with\nother MongoDB developers, visit our community\nforums. Come to learn.\nStay to connect.\n\n>\n>\n>Get started with Atlas is easy. Sign up for a free MongoDB\n>Atlas account to start working with\n>all the exciting new features of MongoDB, including Realm and Charts,\n>today!\n>\n>\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Missed the MongoDB .Live 2020 online conference? From the breakthroughs to the milestones, here's what you missed - in less than 10 minutes!", "contentType": "Article"}, "title": "MongoDB.Live 2020 Keynote In Less Than 10 Minutes", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/outlier-pattern", "action": "created", "body": "# Building with Patterns: The Outlier Pattern\n\nSo far in this *Building with Patterns* series, we've looked at the\nPolymorphic,\nAttribute, and\nBucket patterns. While the document schema in\nthese patterns has slight variations, from an application and query\nstandpoint, the document structures are fairly consistent. What happens,\nhowever, when this isn't the case? What happens when there is data that\nfalls outside the \"normal\" pattern? What if there's an outlier?\n\nImagine you are starting an e-commerce site that sells books. One of the\nqueries you might be interested in running is \"who has purchased a\nparticular book\". This could be useful for a recommendation system to\nshow your customers similar books of interest. You decide to store the\n`user_id` of a customer in an array for each book. Simple enough, right?\n\nWell, this may indeed work for 99.99% of the cases, but what happens\nwhen J.K. Rowling releases a new Harry Potter book and sales spike in\nthe millions? The 16MB BSON document size limit could easily\nbe reached. Redesigning our entire application for this *outlier*\nsituation could result in reduced performance for the typical book, but\nwe do need to take it into consideration.\n\n## The Outlier Pattern\n\nWith the Outlier Pattern, we are working to prevent a few queries or\ndocuments driving our solution towards one that would not be optimal for\nthe majority of our use cases. Not every book sold will sell millions of\ncopies.\n\nA typical `book` document storing `user_id` information might look\nsomething like:\n\n``` javascript\n{\n \"_id\": ObjectID(\"507f1f77bcf86cd799439011\")\n \"title\": \"A Genealogical Record of a Line of Alger\",\n \"author\": \"Ken W. Alger\",\n ...,\n \"customers_purchased\": \"user00\", \"user01\", \"user02\"]\n\n}\n```\n\nThis would work well for a large majority of books that aren't likely to\nreach the \"best seller\" lists. 
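For the typical book, recording a purchase is just a matter of appending the buyer's id to that array. Here's a minimal sketch in the shell, assuming a `books` collection, `mongosh` syntax, and a hypothetical user id:

``` javascript
// Typical case: append the purchaser's id to the book's
// customers_purchased array.
db.books.updateOne(
  { "_id": ObjectId("507f1f77bcf86cd799439011") },
  { "$push": { "customers_purchased": "user42" } }
)
```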
To account for outliers, though, when the `customers_purchased` array expands beyond the 1,000-item limit we have set, we'll add a new field to \"flag\" the book as an outlier.\n\n``` javascript\n{\n \"_id\": ObjectID(\"507f191e810c19729de860ea\"),\n \"title\": \"Harry Potter, the Next Chapter\",\n \"author\": \"J.K. Rowling\",\n ...,\n \"customers_purchased\": [\"user00\", \"user01\", \"user02\", ..., \"user999\"],\n \"has_extras\": \"true\"\n}\n```\n\nWe'd then move the overflow information into a separate document linked with the book's `id`. Inside the application, we would be able to determine if a document has a `has_extras` field with a value of `true`. If that is the case, the application would retrieve the extra information. This could be handled so that it is rather transparent for most of the application code.\n\nMany design decisions will be based on the application workload, so this solution is intended to show an example of the Outlier Pattern. The important concept to grasp here is that the outliers have a substantial enough difference in their data that, if they were considered \"normal\", changing the application design for them would degrade performance for the more typical queries and documents.\n\n## Sample Use Case\n\nThe Outlier Pattern is an advanced pattern, but one that can result in large performance improvements. It is frequently used in situations where popularity is a factor, such as in social network relationships, book sales, movie reviews, etc. The Internet has transformed our world into a much smaller place, and when something becomes popular, it transforms the way we need to model the data around the item.\n\nOne example is a customer that has a video conferencing product. The list of authorized attendees in most video conferences can be kept in the same document as the conference. However, there are a few events, like a company's all-hands, that have thousands of expected attendees. For those outlier conferences, the customer implemented \"overflow\" documents to record those long lists of attendees.\n\n## Conclusion\n\nThe problem that the Outlier Pattern addresses is preventing a few documents or queries from determining an application's solution, especially when that solution would not be optimal for the majority of use cases. We can leverage MongoDB's flexible data model to add a field to the document \"flagging\" it as an outlier. Then, inside the application, we handle the outliers slightly differently. By tailoring your schema for the typical document or query, application performance will be optimized for those normal use cases, and the outliers will still be addressed.\n\nOne thing to consider with this pattern is that it often is tailored for specific queries and situations. Therefore, ad hoc queries may result in less than optimal performance. 
Additionally, as much of the work is done\nwithin the application code itself, additional code maintenance may be\nrequired over time.\n\nIn our next *Building with Patterns* post, we'll take a look at the\n[Computed Pattern and how to optimize schema for\napplications that can result in unnecessary waste of resources.\n \n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Over the course of this blog post series, we'll take a look at twelve common Schema Design Patterns that work well in MongoDB.", "contentType": "Tutorial"}, "title": "Building with Patterns: The Outlier Pattern", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/swift/realm-swiftui-scrumdinger-migration", "action": "created", "body": "# Adapting Apple's Scrumdinger SwiftUI Tutorial App to Use Realm\n\nApple published a great tutorial to teach developers how to create iOS apps using SwiftUI. I particularly like it because it doesn't make any assumptions about existing UIKit experience, making it ideal for developers new to iOS. That tutorial is built around an app named \"Scrumdinger,\" which is designed to facilitate daily scrum) meetings.\n\nApple's Scrumdinger implementation saves the app data to a local file whenever the user minimizes the app, and loads it again when they open the app. It seemed an interesting exercise to modify Scrumdinger to use Realm rather than a flat file to persist the data. This article steps through what changes were required to rebase Scrumdinger onto Realm.\n\nAn immediate benefit of the move is that changes are now persisted immediately, so nothing is lost if the device or app crashes. It's beyond the scope of this article, but now that the app data is stored in Realm, it would be straightforward to add enhancements such as:\n\n- Search meeting minutes for a string.\n- Filter minutes by date or attendees.\n- Sync data so that the same user can see all of their data on multiple iOS (and optionally, Android) devices.\n- Use Realm Sync Partitions to share scrum data between team members.\n- Sync the data to MongoDB Atlas so that it can be accessed by web apps or through a GraphQL API\n\n>\n>\n>This article was updated in July 2021 to replace `objc` and `dynamic` with the `@Persisted` annotation that was introduced in Realm-Cocoa 10.10.0.\n>\n>\n\n## Prerequisites\n\n- Mac (sorry Windows and Linux users).\n- Xcode 12.4+.\n\nI strongly recommend that you at least scan Apple's tutorial. I don't explain any of the existing app structure or code in this article.\n\n## Adding Realm to the Scrumdinger App\n\nFirst of all, a couple of notes about the GitHub repo for this project:\n\n- The main branch is the app as it appears in Apple's tutorial. This is the starting point for this article.\n- The realm branch contains a modified version of the Scrumdinger app that persists the application data in Realm. This is the finishing point for this article.\n- You can view the diff between the main and realm branches to see the changes needed to make the app run on Realm.\n\n### Install and Run the Original Scrumdinger App\n\n``` bash\ngit clone https://github.com/realm/Scrumdinger.git\ncd Scrumdinger\nopen Scrumdinger.xcodeproj\n```\n\nFrom Xcode, select a simulator:\n\n \n Select an iOS simulator in Xcode.\n\nBuild and run the app with `\u2318r`:\n\n \n Scrumdinger screen capture\n\nCreate a new daily scrum. Force close and restart the app with `\u2318r`. Note that your new scrum has been lost \ud83d\ude22. 
Don't worry, that's automatically fixed once we've migrated to Realm.\n\n### Add the Realm SDK to the Project\n\nTo use Realm, we need to add the Realm-Cocoa SDK to the Scrumdinger Xcode project using the Swift Package Manager. Select the \"Scrumdinger\" project and the \"Swift Packages\" tab, and then click the \"+\" button:\n\n \n\nPaste in `https://github.com/realm/realm-cocoa` as the package repository URL:\n\n \n\nAdd the `RealmSwift` package to the `Scrumdinger` target:\n\n \n\nWe can then start using the Realm SDK with `import RealmSwift`.\n\n### Update Model Classes to be Realm Objects\n\nTo store an object in Realm, its class must inherit from Realm's `Object` class. If the class contains sub-classes, those classes must conform to Realm's `EmbeddedObject` protocol.\n\n#### Color\n\nAs with the original app's flat file, Realm can't natively persist the SwiftUI `Color` class, and so colors need to be stored as components. To that end, we need a `Components` class. It conforms to `EmbeddedObject` so that it can be embedded in a higher-level Realm `Object` class. Fields are flagged with the `@Persisted` annotation to indicate that they should be persisted in Realm:\n\n``` swift\nimport RealmSwift\n\nclass Components: EmbeddedObject {\n @Persisted var red: Double = 0\n @Persisted var green: Double = 0\n @Persisted var blue: Double = 0\n @Persisted var alpha: Double = 0\n\n convenience init(red: Double, green: Double, blue: Double, alpha: Double) {\n self.init()\n self.red = red\n self.green = green\n self.blue = blue\n self.alpha = alpha\n }\n}\n```\n\n#### DailyScrum\n\n`DailyScrum` is converted from a `struct` to an `Object` `class` so that it can be persisted in Realm. By conforming to `ObjectKeyIdentifiable`, lists of `DailyScrum` objects can be used within SwiftUI `ForEach` views, with Realm managing the `id` identifier for each instance.\n\nWe use the Realm `List` class to store arrays.\n\n``` swift\nimport RealmSwift\n\nclass DailyScrum: Object, ObjectKeyIdentifiable {\n @Persisted var title = \"\"\n @Persisted var attendeeList = RealmSwift.List()\n @Persisted var lengthInMinutes = 0\n @Persisted var colorComponents: Components?\n @Persisted var historyList = RealmSwift.List()\n\n var color: Color { Color(colorComponents ?? 
Components()) }\n var attendees: String] { Array(attendeeList) }\n var history: [History] { Array(historyList) }\n\n convenience init(title: String, attendees: [String], lengthInMinutes: Int, color: Color, history: [History] = []) {\n self.init()\n self.title = title\n attendeeList.append(objectsIn: attendees)\n self.lengthInMinutes = lengthInMinutes\n self.colorComponents = color.components\n for entry in history {\n self.historyList.insert(entry, at: 0)\n }\n }\n}\n\nextension DailyScrum {\n struct Data {\n var title: String = \"\"\n var attendees: [String] = []\n var lengthInMinutes: Double = 5.0\n var color: Color = .random\n }\n\n var data: Data {\n return Data(title: title, attendees: attendees, lengthInMinutes: Double(lengthInMinutes), color: color)\n }\n\n func update(from data: Data) {\n title = data.title\n for attendee in data.attendees {\n if !attendees.contains(attendee) {\n self.attendeeList.append(attendee)\n }\n }\n lengthInMinutes = Int(data.lengthInMinutes)\n colorComponents = data.color.components\n }\n}\n```\n\n#### History\n\nThe `History` struct is replaced with a Realm `Object` class:\n\n``` swift\nimport RealmSwift\n\nclass History: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var date: Date?\n @Persisted var attendeeList = List()\n @Persisted var lengthInMinutes: Int = 0\n @Persisted var transcript: String?\n var attendees: [String] { Array(attendeeList) }\n\n convenience init(date: Date = Date(), attendees: [String], lengthInMinutes: Int, transcript: String? = nil) {\n self.init()\n self.date = date\n attendeeList.append(objectsIn: attendees)\n self.lengthInMinutes = lengthInMinutes\n self.transcript = transcript\n }\n}\n```\n\n#### ScrumData\n\nThe `ScrumData` `ObservableObject` class was used to manage the copying of scrum data between the in-memory copy and a local iOS file (including serialization and deserialization). 
This is now handled automatically by Realm, and so this class can be deleted.\n\nNothing feels better than deleting boiler-plate code!\n\n### Top-Level SwiftUI App\n\nOnce the data is being stored in Realm, there's no need for lifecycle code to load data when the app starts or save it when it's minimized, and so `ScrumdingerApp` becomes a simple wrapper for the top-level view (`ScrumsView`):\n\n``` swift\nimport SwiftUI\n\n@main\nstruct ScrumdingerApp: App {\n var body: some Scene {\n WindowGroup {\n NavigationView {\n ScrumsView()\n }\n }\n }\n}\n```\n\n### SwiftUI Views\n\n#### ScrumsView\n\nThe move from a file to Realm simplifies the top-level view.\n\n``` swift\nimport RealmSwift\n\nstruct ScrumsView: View {\n @ObservedResults(DailyScrum.self) var scrums\n @State private var isPresented = false\n @State private var newScrumData = DailyScrum.Data()\n @State private var currentScrum = DailyScrum()\n\n var body: some View {\n List {\n if let scrums = scrums {\n ForEach(scrums) { scrum in\n NavigationLink(destination: DetailView(scrum: scrum)) {\n CardView(scrum: scrum)\n }\n .listRowBackground(scrum.color)\n }\n }\n }\n .navigationTitle(\"Daily Scrums\")\n .navigationBarItems(trailing: Button(action: {\n isPresented = true\n }) {\n Image(systemName: \"plus\")\n })\n .sheet(isPresented: $isPresented) {\n NavigationView {\n EditView(scrumData: $newScrumData)\n .navigationBarItems(leading: Button(\"Dismiss\") {\n isPresented = false\n }, trailing: Button(\"Add\") {\n let newScrum = DailyScrum(\n title: newScrumData.title,\n attendees: newScrumData.attendees,\n lengthInMinutes: Int(newScrumData.lengthInMinutes),\n color: newScrumData.color)\n $scrums.append(newScrum)\n isPresented = false\n })\n }\n }\n }\n}\n```\n\nThe `DailyScrum` objects are automatically loaded from the default Realm using the `@ObservedResults` annotation.\n\nNew scrums can be added to Realm by appending them to the `scrums` result set with `$scrums.append(newScrum)`. Note that there's no need to open a Realm transaction explicitly. That's now handled under the covers by the Realm SDK.\n\n### DetailView\n\nThe main change to `DetailView` is that any edits to a scrum are persisted immediately. At the time of writing ([Realm-Cocoa 10.7.2), the view must open a transaction to store the change:\n\n``` swift\ndo {\n try Realm().write() {\n guard let thawedScrum = scrum.thaw() else {\n print(\"Unable to thaw scrum\")\n return\n }\n thawedScrum.update(from: data)\n }\n} catch {\n print(\"Failed to save scrum: \\(error.localizedDescription)\")\n}\n```\n\n### MeetingView\n\nAs with `DetailView`, `MeetingView` is enhanced so that meeting notes are added as soon as they've been created (rather than being stored in volatile RAM until the app is minimized):\n\n``` swift\ndo {\n try Realm().write() {\n guard let thawedScrum = scrum.thaw() else {\n print(\"Unable to thaw scrum\")\n return\n }\n thawedScrum.historyList.insert(newHistory, at: 0)\n }\n} catch {\n print(\"Failed to add meeting to scrum: \\(error.localizedDescription)\")\n}\n```\n\n### CardView (+ Other Views)\n\nThere are no changes needed to the view that's responsible for displaying a summary for a scrum. 
The changes we made to the `DailyScrum` model in order to store it in Realm don't impact how it's used within the app.\n\n \n Cardview\n\nSimilarly, there are no significant changes needed to `EditView`, `HistoryView`, `MeetingTimerView`, `MeetingHeaderView`, or `MeetingFooterView`.\n\n## Summary\n\nI hope that this post has shown that moving an iOS app to Realm is a straightforward process. The Realm SDK abstracts away the complexity of serialization and persisting data to disk. This is especially true when developing with SwiftUI.\n\nNow that Scrumdinger uses Realm, very little extra work is needed to add new features based on filtering, synchronizing, and sharing data. Let me know in the community forum if you try adding any of that functionality.\n\n## Resources\n\n- Apple's tutorial\n- Pre-Realm Scrumdinger code\n- Realm Scrumdinger code\n- Diff - all changes required to migrate Scrumdinger to Realm\n\n>\n>\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n>\n>\n", "format": "md", "metadata": {"tags": ["Swift", "Realm", "iOS", "React Native"], "pageDescription": "Learn how to add Realm to an iOS/SwiftUI app to add persistence and flexibility. Uses Apple's Scrumdinger tutorial app as the starting point.", "contentType": "Code Example"}, "title": "Adapting Apple's Scrumdinger SwiftUI Tutorial App to Use Realm", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/update-array-elements-document-mql-positional-operators", "action": "created", "body": "# Update Array Elements in a Document with MQL Positional Operators\n\nMongoDB offers a rich query language that's great for create, read, update, and delete operations as well as complex multi-stage aggregation pipelines. There are many ways to model your data within MongoDB and regardless of how it looks, the MongoDB Query Language (MQL) has you covered.\n\nOne of the lesser recognized but extremely valuable features of MQL is in the positional operators that you'd find in an update operation.\n\nLet's say that you have a document and inside that document, you have an array of objects. You need to update one or more of those objects in the array, but you don't want to replace the array or append to it. This is where a positional operator might be valuable.\n\nIn this tutorial, we're going to look at a few examples that would benefit from a positional operator within MongoDB.\n\n## Use the $ Operator to Update the First Match in an Array\n\nLet's use the example that we have an array in each of our documents and we want to update only the first match within that array, even if there's a potential for numerous matches.\n\nTo do this, we'd probably want to use the `$` operator which acts as a placeholder to update the first element matched.\n\nFor this example, let's use an old-school Pokemon video game. Take look at the following MongoDB document:\n\n``` json\n{\n \"_id\": \"red\",\n \"pokemon\": \n {\n \"number\": 6,\n \"name\": \"Charizard\"\n }\n {\n \"number\": 25,\n \"name\": \"Pikachu\",\n },\n {\n \"number\": 0,\n \"name\": \"MissingNo\"\n }\n ]\n}\n```\n\nLet's assume that the above document represents the Pokemon information for the Pokemon Red video game. The document is not a true reflection and it is very much incomplete. 
However, if you're a fan of the game, you'll probably remember the glitch Pokemon named \"MissingNo.\" To make up a fictional story, let's assume the developer, at some point in time, wanted to give that Pokemon an actual name, but forgot.\n\nWe can update that particular element in the array by doing something like the following:\n\n``` javascript\ndb.pokemon_game.update(\n { \"pokemon.name\": \"MissingNo\" },\n {\n \"$set\": {\n \"pokemon.$.name\": \"Agumon\"\n }\n }\n);\n```\n\nIn the above example, we are doing a filter for documents that have an array element with a `name` field set to `MissingNo`. With MongoDB, you don't need to specify the array index in your filter for the `update` operator. In the manipulation step, we are using the `$` positional operator to change the first occurrence of the match in the filter. Yes, in my example, I am renaming the \"MissingNo\" Pokemon to that of a Digimon, which is an entirely different brand.\n\nThe new document would look like this:\n\n``` json\n{\n \"_id\": \"red\",\n \"pokemon\": [\n {\n \"number\": 6,\n \"name\": \"Charizard\"\n }\n {\n \"number\": 25,\n \"name\": \"Pikachu\",\n },\n {\n \"number\": 0,\n \"name\": \"Agumon\"\n }\n ]\n}\n```\n\nHad \"MissingNo\" appeared numerous times within the array, only the first occurrence would be updated. If \"MissingNo\" appeared numerous times, but the surrounding fields were different, you could match on multiple fields using the `$elemMatch` operator to narrow down which particular element should be updated.\n\nMore information on the `$` positional operator can be found in the [documentation.\n\n## Use the $\\\\] Operator to Update All Array Elements Within a Document\n\nLet's say that you have an array in your document and you need to update every element in that array using a single operation. To do this, we might want to take a look at the `$[]` operator which does exactly that.\n\nUsing the same Pokemon video game example, let's imagine that we have a team of Pokemon and we've just finished a battle in the game. The experience points gained from the battle need to be distributed to all the Pokemon on your team.\n\nThe document that represents our team might look like the following:\n\n``` json\n{\n \"_id\": \"red\",\n \"team\": [\n {\n \"number\": 1,\n \"name\": \"Bulbasaur\",\n \"xp\": 5\n },\n {\n \"number\": 25,\n \"name\": \"Pikachu\",\n \"xp\": 32\n }\n ]\n}\n```\n\nAt the end of the battle, we want to make sure every Pokemon on our team receives 10 XP. To do this with the `$[]` operator, we can construct an `update` operation that looks like the following:\n\n``` javascript\ndb.pokemon_game.update(\n { \"_id\": \"red\" },\n {\n \"$inc\": {\n \"team.$[].xp\": 10\n }\n }\n);\n```\n\nIn the above example, we use the `$inc` modifier to increase all `xp` fields within the `team` array by a constant number. To learn more about the `$inc` operator, check out the [documentation.\n\nOur new document would look like this:\n\n``` json\n\n {\n \"_id\": \"red\",\n \"team\": [\n {\n \"number\": 1,\n \"name\": \"Bulbasaur\",\n \"xp\": 15\n },\n {\n \"number\": 25,\n \"name\": \"Pikachu\",\n \"xp\": 42\n }\n ]\n }\n]\n```\n\nWhile useful for this example, we don't exactly get to provide criteria in case one of your Pokemon shouldn't receive experience points. 
If your Pokemon has fainted, maybe they shouldn't get the increase.\n\nWe'll learn about filters in the next part of the tutorial.\n\nTo learn more about the `$[]` operator, check out the [documentation.\n\n## Use the $\\\\\\] Operator to Update Elements that Match a Filter Condition\n\nLet's use the example that we have several array elements that we want to update in a single operation and we don't want to worry about excessive client-side code paired with a replace operation.\n\nTo do this, we'd probably want to use the `$[]` operator which acts as a placeholder to update all elements that match an `arrayFilters` condition.\n\nTo put things into perspective, let's say that we're dealing with Pokemon trading cards, instead of video games, and tracking their values. Our documents might look like this:\n\n``` javascript\ndb.pokemon_collection.insertMany(\n [\n {\n _id: \"nraboy\",\n cards: [\n {\n \"name\": \"Charizard\",\n \"set\": \"Base\",\n \"variant\": \"1st Edition\",\n \"value\": 200000\n },\n {\n \"name\": \"Pikachu\",\n \"set\": \"Base\",\n \"variant\": \"Red Cheeks\",\n \"value\": 300\n }\n ]\n },\n {\n _id: \"mraboy\",\n cards: [\n {\n \"name\": \"Pikachu\",\n \"set\": \"Base\",\n \"variant\": \"Red Cheeks\",\n \"value\": 300\n },\n {\n \"name\": \"Pikachu\",\n \"set\": \"McDonalds 25th Anniversary Promo\",\n \"variant\": \"Holo\",\n \"value\": 10\n }\n ]\n }\n ]\n);\n```\n\nOf course, the above snippet isn't a document, but an operation to insert two documents into some `pokemon_collection` collection within MongoDB. In the above scenario, each document represents a collection of cards for an individual. The `cards` array has information about the card in the collection as well as the current value.\n\nIn our example, we need to update prices of cards, but we don't want to do X number of update operations against the database. We only want to do a single operation to update the values of each of our cards.\n\nTake the following query:\n\n``` javascript\ndb.pokemon_collection.update(\n {},\n {\n \"$set\": {\n \"cards.$[elemX].value\": 350,\n \"cards.$[elemY].value\": 500000\n }\n },\n {\n \"arrayFilters\": [\n {\n \"elemX.name\": \"Pikachu\",\n \"elemX.set\": \"Base\",\n \"elemX.variant\": \"Red Cheeks\"\n },\n {\n \"elemY.name\": \"Charizard\",\n \"elemY.set\": \"Base\",\n \"elemY.variant\": \"1st Edition\"\n }\n ],\n \"multi\": true\n }\n);\n```\n\nThe above `update` operation is like any other, but with an extra step for our positional operator. The first parameter, which is an empty object, represents our match criteria. Because it is empty, we'll be updating all documents within the collection.\n\nThe next parameter is the manipulation we want to do to our documents. Let's skip it for now and look at the `arrayFilters` in the third parameter.\n\nImagine that we want to update the price for two particular cards that might exist in any person's Pokemon collection. In this example, we want to update the price of the Pikachu and Charizard cards. If you're a Pokemon trading card fan, you'll know that there are many variations of the Pikachu and Charizard card, so we get specific in our `arrayFilters` array. For each object in the array, the fields of those objects represent an `and` condition. So, for `elemX`, which has no specific naming convention, all three fields must be satisfied.\n\nIn the above example, we are using `elemX` and `elemY` to represent two different filters.\n\nLet's go back to the second parameter in the `update` operation. 
If the filter for `elemX` comes back as true because an array item in a document matched, then the `value` field for that object will be set to a new value. Likewise, the same thing could happen for the `elemY` filter. If a document has an array and one of the filters does not ever match an element in that array, it will be ignored.\n\nIf looking at our example, the documents would now look like the following:\n\n``` json\n[\n {\n \"_id\": \"nraboy\",\n \"cards\": [\n {\n \"name\": \"Charizard\",\n \"set\": \"Base\",\n \"variant\": \"1st Edition\",\n \"value\": 500000\n },\n {\n \"name\": \"Pikachu\",\n \"set\": \"Base\",\n \"variant\": \"Red Cheeks\",\n \"value\": 350\n }\n ]\n },\n {\n \"_id\": \"mraboy\",\n \"cards\": [\n {\n \"name\": \"Pikachu\",\n \"set\": \"Base\",\n \"variant\": \"Red Cheeks\",\n \"value\": 350\n },\n {\n \"name\": \"Pikachu\",\n \"set\": \"McDonalds 25th Anniversary Promo\",\n \"variant\": \"Holo\",\n \"value\": 10\n }\n ]\n }\n]\n```\n\nIf any particular array contained multiple matches for one of the `arrayFilter` criteria, all matches would have their price updated. This means that if I had, say, 100 matching Pikachu cards in my Pokemon collection, all 100 would now have new prices.\n\nMore information on the `$[]` operator can be found in the [documentation.\n\n## Conclusion\n\nYou just saw how to use some of the positional operators within the MongoDB Query Language (MQL). These operators are useful when working with arrays because they prevent you from having to do full replaces on the array or extended client-side manipulation.\n\nTo learn more about MQL, check out my previous tutorial titled, Getting Started with Atlas and the MongoDB Query Language (MQL).\n\nIf you have any questions, take a moment to stop by the MongoDB Community Forums.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to work with the positional array operators within the MongoDB Query Language (MQL).", "contentType": "Tutorial"}, "title": "Update Array Elements in a Document with MQL Positional Operators", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/retail-search-mongodb-databricks", "action": "created", "body": "# Learn to Build AI-Enhanced Retail Search Solutions with MongoDB and Databricks\n\nIn the rapidly evolving retail landscape, businesses are constantly seeking ways to optimize operations, improve customer experience, and stay ahead of competition. One of the key strategies to achieve this is through leveraging the opportunities search experiences provide. \n\nImagine this: You walk into a department store filled with products, and you have something specific in mind. You want a seamless and fast shopping experience \u2014 this is where product displays play a pivotal role. In the digital world of e-commerce, the search functionality of your site is meant to be a facilitating tool to efficiently display what users are looking for.\n\nShockingly, statistics reveal that only about 50% of searches on retail websites yield the results customers seek. Think about it \u2014 half the time, customers with a strong buying intent are left without an answer to their queries.\n\nThe search component of your e-commerce site is not merely a feature; it's the bridge between customers and the products they desire. Enhancing your search engine logic with artificial intelligence is the best way to ensure that the bridge is sturdy. 
\n\nIn this article, we'll explore how MongoDB and Databricks can be integrated to provide robust solutions for the retail industry, with a particular focus on the MongoDB Apache Spark Streaming processor; orchestration with Databricks workflows; data transformation and featurization with MLFlow and the Spark User Defined Functions; and by building a product catalog index, sorting, ranking, and autocomplete with Atlas Search.\n\nLet\u2019s get to it! \n\n### Solution overview\n\nA modern e-commerce-backed system should be able to collate data from multiple sources in real-time, as well as batch loads, and be able to transform this data into a schema upon which a Lucene search index can be built. This enables discovery of the added inventory. \n\nThe solution should integrate website customer behavior events data in real-time to feed an \u201cintelligence layer\u201d that will create the criteria to display and order the most interesting products in terms of both relevance to the customer and relevance to the business.\n\nThese features are nicely captured in the above-referenced e-commerce architecture. We\u2019ll divide it into four different stages or layers: \n\n 1. **Multi-tenant streaming ingestion:** With the help of the MongoDB Kafka connector, we are able to sync real-time data from multiple sources to Mongodb. For the sake of simplicity, in this tutorial, we will not focus on this stage.\n\n 2. **Stream processing:** With the help of the MongoDB Spark connector and Databricks jobs and notebooks, we are able to ingest data and transform it to create machine learning model features.\n\n 3. **AI/ML modeling:** All the generated streams of data are transformed and written into a unified view in a MongoDB collection called catalog, which is used to build search indexes and support querying and discovery of products. \n\n 4. **Building the search logic:** With the help of Atlas Search capabilities and robust aggregation pipelines, we can power features such as search/discoverability, hyper-personalization, and featured sort on mobile/web applications.\n\n## Prerequisites\n\nBefore running the app, you'll need to have the following installed on your system:\n* MongoDB Atlas cluster\n* Databricks cluster\n* python>=3.7\n* pip3\n* Node.js and npm\n* Apache Kafka\n* GitHub repository\n\n## Streaming data into Databricks\n\nIn this tutorial, we\u2019ll focus on explaining how to orchestrate different ETL pipelines in real time using Databricks Jobs. A Databricks job represents a single, standalone execution of a Databricks notebook, script, or task. It is used to run specific code or analyses at a scheduled time or in response to an event.\n\nOur search solution is meant to respond to real-time events happening in an e-commerce storefront, so the search experience for a customer can be personalized and provide search results that fit two criteria: \n\n 1. **Relevant for the customer:** We will define a static score comprising behavioral data (click logs) and an Available to Promise status, so search results are products that we make sure are available and relevant based off of previous demand.\n 2. **Relevant for the business:** The results will be scored based on which products are more price sensitive, so higher price elasticity means they appear first on the product list page and as search results. We will also compute an optimal suggested price for the product. 
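To make these two criteria concrete, here is a purely illustrative sketch of what an enriched product document in the final catalog collection could end up looking like once the workflows below have run. The field names (`atp`, `score`, `pred_price`, `discountedPrice`, `price_elasticity`, `vec`) follow the ones produced later in this tutorial; the values are invented, and the embedding vector is truncated:

``` json
{
  "id": "12345",
  "title": "Classic cotton t-shirt",
  "price": 20.0,
  "atp": 1,
  "score": 82.4,
  "pred_price": 0.85,
  "discountedPrice": 17.0,
  "price_elasticity": 1.6,
  "vec": [0.018, -0.042, 0.117]
}
```

A document shaped like this gives Atlas Search everything it needs to rank results by both customer relevance and business relevance in a single query.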
\n\nSo let\u2019s check out how to configure these ETL processes over Databricks notebooks and orchestrate them using Databricks jobs to then fuel our MongoDB collections with the intelligence that we will use to build our search experience. \n\n## Databricks jobs for product stream processing, static score, and pricing\n\nWe\u2019ll start by explaining how to configure notebooks in Databricks. Notebooks are a key tool for data science and machine learning, allowing collaboration, real-time coauthoring, versioning, and built-in data visualization. You can also make them part of automated tasks, called jobs in Databricks. A series of jobs are called workflows. Your notebooks and workflows can be attached to computing resources that you can set up at your convenience, or they can be run via autoscale.\n\nLearn more about how to configure jobs in Databricks using JSON configuration files.\n\nYou can find our first job JSON configuration files in our GitHub. In these JSON files, we specify the different parameters on how to run the various jobs in our Databricks cluster. We specify different parameters such as the user, email notifications, task details, cluster information, and notification settings for each task within the job. This configuration is used to automate and manage data processing and analysis tasks within a specified environment.\n\nNow, without further ado, let\u2019s start with our first workflow, the \u201cCatalog collection indexing workflow.\u201d \n\n## Catalog collection indexing workflow\n\nThe above diagram shows how our solution will run two different jobs closely related to each other in two separate notebooks. Let\u2019s unpack this job with the code and its explanation: \n\nThe first part of your notebook script is where you\u2019ll define and install different packages. In the code below, we have all the necessary packages, but the main ones \u2014 `pymongo` and `tqdm` \u2014 are explained below: \n* PyMongo is commonly used in Python applications that need to store, retrieve, or analyze data stored in MongoDB, especially in web applications, data pipelines, and analytics projects.\n\n* tqdm is often used in Python scripts or applications where there's a need to provide visual feedback to users about the progress of a task.\n\nThe rest of the packages are pandas, JSON, and PySpark. In this part of the snippet, we also define a variable for the MongoDB connection string to our cluster.\n\n```\n%pip install pymongo tqdm\n \n\nimport pandas as pd\nimport json\nfrom collections import Counter\nfrom tqdm import tqdm\nfrom pymongo import MongoClient\nfrom pyspark.sql import functions as F\nfrom pyspark.sql import types as T\nfrom pyspark.sql import Window\nimport pyspark\nfrom pyspark import SparkContext\nfrom pyspark.sql import SparkSession\nconf = pyspark.SparkConf()\n\nimport copy\nimport numpy as np\n\ntqdm.pandas()\n\nMONGO_CONN = 'mongodb+srv://:@retail-demo.2wqno.mongodb.net/?retryWrites=true&w=majority' \n\n```\n## Data streaming from MongoDB\nThe script reads data streams from various MongoDB collections using the spark.readStream.format(\"mongodb\") method. \n\nFor each collection, specific configurations are set, such as the MongoDB connection URI, database name, collection name, and other options related to change streams and aggregation pipelines.\n\nThe snippet below is the continuation of the code from above. It can be put in a different cell in the same notebook. 
\n\n```\natp = spark.readStream.format(\"mongodb\").\\ option('spark.mongodb.connection.uri', MONGO_CONN).\\ option('spark.mongodb.database', \"search\").\\ option('spark.mongodb.collection', \"atp_status_myn\").\\ option('spark.mongodb.change.stream.publish.full.document.only','true').\\ option('spark.mongodb.aggregation.pipeline',]).\\ option(\"forceDeleteTempCheckpointLocation\", \"true\").load()\n```\nIn this specific case, the code is reading from the atp_status collection. It specifies options for the MongoDB connection, including the URI, and enables the capture of the full document when changes occur in the MongoDB collection. The empty aggregation pipeline indicates that no specific transformations are applied at this stage.\n\nFollowing with the next stage of the job for the atp_status collection, we can break down the code snippet into three different parts: \n\n#### Data transformation and data writing to MongoDB\n\nAfter reading the data streams, we drop the ``_id`` field. This is a special field that serves as the primary key for a document within a collection. Every document in a MongoDB collection must have a [unique _id field, which distinguishes it from all other documents in the same collection. As we are going to create a new collection, we need to drop the previous _id field of the original documents, and when we insert it into a new collection, a new _id field will be assigned.\n\n```\natp = atp.drop(\"_id\")\n```\n\n#### Data writing to MongoDB\n\nThe transformed data streams are written back to MongoDB using the **writeStream.format(\"mongodb\")** method.\n\nThe data is written to the catalog_myn collection in the search database.\n\nSpecific configurations are set for each write operation, such as the MongoDB connection URI, database name, collection name, and other options related to upserts, checkpoints, and output modes. \n\nThe below code snippet is a continuation of the notebook from above.\n\n```\natp.writeStream.format(\"mongodb\").\\ option('spark.mongodb.connection.uri', MONGO_CONN).\\ option('spark.mongodb.database', \"search\").\\ option('spark.mongodb.collection', \"catalog_myn\").\\ option('spark.mongodb.operationType', \"update\").\\ option('spark.mongodb.upsertDocument', True).\\ option('spark.mongodb.idFieldList', \"id\").\\\n```\n\n#### Checkpointing\n\nCheckpoint locations are specified for each write operation. Checkpoints are used to maintain the state of streaming operations, allowing for recovery in case of failures. The checkpoints are stored in the /tmp/ directory with specific subdirectories for each collection.\n\nHere is an example of checkpointing. It\u2019s included in the script right after the code from above.\n\n```\noption(\"forceDeleteTempCheckpointLocation\", \"true\").\\ option(\"checkpointLocation\", \"/tmp/retail-atp-myn4/_checkpoint/\").\\ outputMode(\"append\").\\ start()\n```\n\nThe full snippet of code performs different data transformations for the various collections we are ingesting into Databricks, but they all follow the same pattern of ingestion, transformation, and rewriting back to MongoDB. Make sure to check out the full first indexing job notebook.\n\nFor the second part of the indexing job, we will use a user-defined function (UDF) in our code to embed our product catalog data using a transformers model. This is useful to be able to build Vector Search features. \n\nThis is an example of how to define a user-defined function. 
You can define your functions early in your notebook so you can reuse them later for running your data transformations or analytics calculations. In this case, we are using it to embed text data from a document. \n\nThe **\u2018@F.udf()\u2019** decorator is used to define a user-defined function in PySpark using the F object, which is an alias for the pyspark.sql.functions module. In this specific case, it is defining a UDF named \u2018get_vec\u2019 that takes a single argument text and returns the result of calling \u2018model.encode(text)\u2019.\n\nThe code from below is a continuation of the same notebook.\n\n```\n@F.udf() def get_vec(text): \n return model.encode(text)\n```\n\nOur notebook code continues with similar snippets to previous examples. We'll use the MongoDB Connector for Spark to ingest data from the previously built catalog collection.\n\n```\ncatalog_status = spark.readStream.format(\"mongodb\").\\ option('spark.mongodb.connection.uri', MONGO_CONN).\\ option('spark.mongodb.database', \"search\").\\ option('spark.mongodb.collection', \"catalog_myn\").\\ option('spark.mongodb.change.stream.publish.full.document.only','true').\\ option('spark.mongodb.aggregation.pipeline',]).\\ option(\"forceDeleteTempCheckpointLocation\", \"true\").load()\n```\n\nThen, it performs data transformations on the catalog_status DataFrame, including adding a new column, the atp_status that is now a boolean value, 1 for available, and 0 for unavailable. This is useful for us to be able to define the business logic of the search results showcasing only the products that are available. \n\nWe also calculate the discounted price based on data from another job we will explain further along. \n\nThe below snippet is a continuation of the notebook code from above:\n\n```\ncatalog_status = catalog_status.withColumn(\"discountedPrice\", F.col(\"price\") * F.col(\"pred_price\")) catalog_status = catalog_status.withColumn(\"atp\", (F.col(\"atp\").cast(\"boolean\") & F.lit(1).cast(\"boolean\")).cast(\"integer\")) \n```\n\nWe vectorize the title of the product and we create a new field called \u201cvec\u201d. We then drop the \"_id\" field, indicating that this field will not be updated in the target MongoDB collection.\n\n```\ncatalog_status.withColumn(\"vec\", get_vec(\"title\")) catalog_status = catalog_status.drop(\"_id\")\n\n```\n\nFinally, it sets up a structured streaming write operation to write the transformed data to a MongoDB collection named \"catalog_final_myn\" in the \"search\" database while managing query state and checkpointing.\n\n```\ncatalog_status.writeStream.format(\"mongodb\").\\ option('spark.mongodb.connection.uri', MONGO_CONN).\\ option('spark.mongodb.database', \"search\").\\ option('spark.mongodb.collection', \"catalog_final_myn\").\\ option('spark.mongodb.operationType', \"update\").\\ option('spark.mongodb.idFieldList', \"id\").\\ option(\"forceDeleteTempCheckpointLocation\", \"true\").\\ option(\"checkpointLocation\", \"/tmp/retail-atp-myn5/_checkpoint/\").\\ outputMode(\"append\").\\ start()\n```\n\nLet\u2019s see how to configure the second workflow to calculate a BI score for each product in the collection and introduce the result back into the same document so it\u2019s reusable for search scoring. 
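Before moving on to that workflow, one note on the embedding UDF above: the notebook assumes a `model` object with an `encode()` method is already defined in scope. A minimal sketch of loading such a model, assuming the `sentence-transformers` package and an illustrative model choice (not necessarily what the repository uses), could look like this:

```
%pip install sentence-transformers

from sentence_transformers import SentenceTransformer

# Load a small, general-purpose sentence embedding model.
# "all-MiniLM-L6-v2" is an illustrative choice, not necessarily the repository's model.
model = SentenceTransformer("all-MiniLM-L6-v2")

# model.encode() returns a numpy array; the get_vec UDF defined earlier simply wraps this call.
print(len(model.encode("Classic cotton t-shirt")))  # 384 dimensions for this particular model
```

With `model` defined, the `get_vec` UDF can be registered and used in the streaming transformation that writes the `vec` field.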
\n\n## BI score computing logic workflow\n\n![Diagram overview of the BI score computing job logic using materialized views to ingest data from a MongoDB collection and process user click logs with Empirical Bayes algorithm.\n\nIn this stage, we will explain the script to be run in our Databricks notebook as part of the BI score computing job. Please bear in mind that we will only explain what makes this code snippet different from the previous, so make sure to understand how the complete snippet works. Please feel free to clone our complete repository so you can get a full view on your local machine.\n\nWe start by setting up the configuration for Apache Spark using the SparkConf object and specify the necessary package dependency for our MongoDB Spark connector.\n\n```\nconf = pyspark.SparkConf() conf.set(\"spark.jars.packages\", \"org.mongodb.spark:mongo-spark-connector_2.12:10.1.0\")\n\n```\n\nThen, we initialize a Spark session for our Spark application named \"test1\" running in local mode. It also configures Spark with the MongoDB Spark connector package dependency, which is set up in the conf object defined earlier. This Spark session can be used to perform various data processing and analytics tasks using Apache Spark.\n\nThe below code is a continuation to the notebook snippet explained above:\n\n```\nspark = SparkSession.builder \\\n .master(\"local\") \\\n .appName(\"test1\") \\\n .config(conf = conf) \\\n .getOrCreate()\n```\n\nWe\u2019ll use MongoDB Aggregation Pipelines in our code snippet to get a set of documents, each representing a unique \"product_id\" along with the corresponding counts of total views, purchases, and cart events. We\u2019ll use the transformed resulting data to feed an Empirical Bayes algorithm and calculate a value based on the cumulative distribution function (CDF) of a beta distribution.\n\nMake sure to check out the entire .ipynb file in our repository.\n\n This way, we can calculate the relevance of a product based on the behavioral data described before. We\u2019ll also use window functions to calculate different statistics on each one of the products \u2014 like the average of purchases and the purchase beta (the difference between the average total clicks and average total purchases) \u2014 to use as input to create a BI relevance score. This is what is shown in the below code:\n\n```\n@F.udf(T.FloatType()) \n\ndef beta_fn(pct,a,b):\n return float(100*beta.cdf(pct, a,b)) w = Window().partitionBy() df =\ndf.withColumn(\"purchase_alpha\", F.avg('purchase').over(w)) df = df.withColumn(\"cart_alpha\", F.avg('cart').over(w)) df = df.withColumn(\"total_views_mean\", F.avg('total_views').over(w)) df = df.withColumn(\"purchase_beta\", F.expr('total_views_mean - purchase_alpha')) \n\ndf = df.withColumn(\"cart_beta\", F.expr('total_views_mean - cart_alpha')) df = df.withColumn(\"purchase_pct\", F.expr('(purchase+purchase_alpha)/(total_views+purchase_alpha+purchase_beta)')) \ndf = df.withColumn(\"cart_pct\", F.expr('(purchase+cart_alpha)/(total_views+cart_alpha+cart_beta)'))\n```\nAfter calculating the BI score for our product, we want to use a machine learning algorithm to calculate the price elasticity of demand for the product and the optimal price.\n\n## Calculating optimal price workflow\n\nFor calculating the optimal recommended price, first, we need to figure out a pipeline that will shape the data according to what we need. 
Get the pipeline definition in our repository.\n\nWe\u2019ll first take in data from the MongoDB Atlas click logs (clog) collection that\u2019s being ingested in the database in real-time, and create a DataFrame that will be used as input for a Random Forest regressor machine learning model. We\u2019ll leverage the MLFlow library to be able to run MLOps stages, run tests, and register the best-performing model that will be used in the second job to calculate the price elasticity of demand, the suggested discount, and optimal price for each product. Let\u2019s see what the code looks like! \n\n```\nmodel_name = \"retail_competitive_pricing_model_1\"\nwith mlflow.start_run(run_name=model_name):\n # Create and fit a linear regression model\n model = RandomForestRegressor(n_estimators=50, max_depth=3)\n model.fit(X_train, y_train)\n wrappedModel = CompPriceModelWrapper(model)\n\n # Log model parameters and metrics\n mlflow.log_params(model.get_params())\n mlflow.log_metric(\"mse\", np.mean((model.predict(X_test) - y_test) ** 2))\n \n # Log the model with a signature that defines the schema of the model's inputs and outputs. \n # When the model is deployed, this signature will be used to validate inputs.\n signature = infer_signature(X_train, wrappedModel.predict(None,X_train))\n\n # MLflow contains utilities to create a conda environment used to serve models.\n # The necessary dependencies are added to a conda.yaml file which is logged along with the model.\n conda_env = _mlflow_conda_env(\n additional_conda_deps=None,\n additional_pip_deps=\"scikit-learn=={}\".format(sklearn.__version__)],\n additional_conda_channels=None,\n )\n mlflow.pyfunc.log_model(model_name, python_model=wrappedModel, conda_env=conda_env, signature=signature)\n\n```\nAfter we\u2019ve done the test and train split required for fitting the model, we leverage the mlFlow model wrapping to be able to log model parameters, metrics, and dependencies. \n\nFor the next stage, we apply the previously trained and registered model to the sales data:\n\n```\nmodel_name = \"retail_competitive_pricing_model_1\"\napply_model_udf = mlflow.pyfunc.spark_udf(spark, f\"models:/{model_name}/staging\")\n \n# Apply the model to the new data\ncolumns = ['old_sales','total_sales','min_price','max_price','avg_price','old_avg_price']\nudf_inputs = struct(*columns)\nudf_inputs\n\n```\nThen, we just need to create the sales DataFrame with the resulting data. But first, we use the [.fillna function to make sure all our null values are cast into floats 0.0. We need to perform this so our model has proper data and because most machine learning models return an error if you pass null values.\n\nNow, we can calculate new columns to add to the sales DataFrame: the predicted optimal price, the price elasticity of demand per product, and a discount column which will be rounded up to the next nearest integer. The below code is a continuation of the code from above \u2014 they both reside in the same notebook:\n\n```\nsales = sales.fillna(0.0)\nsales = sales.withColumn(\"pred_price\",apply_model_udf(udf_inputs))\n\nsales = sales.withColumn(\"price_elasticity\", F.expr(\"((old_sales - total_sales)/(old_sales + total_sales))/(((old_avg_price - avg_price)+1)/(old_avg_price + avg_price))\"))\n\nsales = sales.withColumn(\"discount\", F.ceil((F.lit(1) - F.col(\"pred_price\"))*F.lit(100)))\n```\nThen, we push the data back using the MongoDB Connector for Spark into the proper MongoDB collection. 
These will be used together with the rest as the baseline on top of which we\u2019ll build our application\u2019s search business logic.\n\n```\nsales.select(\"id\", \"pred_price\", \"price_elasticity\").write.format(\"mongodb\").\\ option('spark.mongodb.connection.uri', MONGO_CONN).\\ option('spark.mongodb.database', \"search\").\\ option('spark.mongodb.collection', \"price_myn\").\\ option('spark.mongodb.idFieldList', 'id').\\ mode('overwrite').\\ save()\n```\nAfter these workflows are configured, you should be able to see the new collections and updated documents for your products.\n\n## Building the search logic\n\nTo build the search logic, first, you\u2019ll need to create an index. This is how we\u2019ll make sure that our application runs smoothly as a search query, instead of having to look into all the documents in the collection. We will limit the scan by defining the criteria for those scans. \n\nTo understand more about indexing in MongoDB, you can check out the article from the documentation. But for the purposes of this tutorial, let\u2019s dive into the two main parameters you\u2019ll need to define for building our solution:\n\n**Mappings:** This key dictates how fields in the index should be stored and how they should be treated when queries are made against them.\n\n**Fields:** The fields describe the attributes or columns of the index. Each field can have specific data types and associated settings. We implement the sortable number functionality for the fields \u2018pred_price\u2019, \u2018price_elasticity\u2019, and \u2018score\u2019. So in this way, our search results are organized by relevance.\n\nThe latter steps of building the solution come to defining the index mapping for the application. You can find the full mappings snippet in our GitHub repository.\n\nTo configure the index, you can insert the snippet in MongoDB Atlas by browsing your cluster splash page and clicking over the \u201cSearch\u201d tab:\n\nNext, you can click over \u201cCreate Index.\u201d Make sure you select \u201cJSON Editor\u201d:\n\nPaste the JSON snippet from above \u2014 make sure you select the correct database and collection! In our case, the collection name is **`catalog_final_myn`**.\n\n## Autocomplete\nTo define autocomplete indexes, you can follow the same browsing instructions from the Building the search logic stage, but in the JSON editor, your code snippet may vary. Follow our tutorial to learn how to fully configure autocomplete in Atlas Search.\n\nFor our search solution, check out the code below. We define how the data should be treated and indexed for autocomplete features. \n\n```\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"query\": \n {\n \"foldDiacritics\": false,\n \"maxGrams\": 7,\n \"minGrams\": 3,\n \"tokenization\": \"edgeGram\",\n \"type\": \"autocomplete\"\n }\n ]\n }\n }\n}\n```\nLet\u2019s break down each of the parameters: \n\n**foldDiacritics:** Setting this to false means diacritic marks on characters (like accents on letters) are treated distinctly. For instance, \"r\u00e9sum\u00e9\" and \"resume\" would be treated as different words.\n\n**minGrams and maxGrams:** These specify the minimum and maximum lengths of the edge n-grams. In this case, it would index substrings (edgeGrams) with lengths ranging from 3 to 7.\n\n**Tokenization:** The value edgeGram means the text is tokenized into substrings starting from the beginning of the string. 
For instance, for the word \"example\", with minGrams set to 3, the tokens would be \"exa\", \"exam\", \"examp\", etc. This is commonly used in autocomplete scenarios to match partial words.\n\nAfter all of this, you should have an AI-enhanced search functionality for your e-commerce storefront! \n\n### Conclusion\n\nIn summary, we\u2019ve covered how to integrate MongoDB Atlas and Databricks to build a performant and intelligent search feature for an e-commerce application.\n\nBy using the MongoDB Connector for Spark and Databricks, along with MLFlow for MLOps, we've created real-time pipelines for AI. Additionally, we've configured MongoDB Atlas Search indexes, utilizing features like Autocomplete, to build a cutting-edge search engine.\n\nGrasping the complexities of e-commerce business models is complicated enough without also having to handle knotty integrations and operational overhead! Counting on the right tools for the job gets you several months ahead out-innovating the competition.\n\nCheck out the [GitHub repository or reach out over LinkedIn if you want to discuss search or any other retail functionality!\n", "format": "md", "metadata": {"tags": ["Atlas", "Python", "Node.js", "Kafka"], "pageDescription": "Learn how to utilize MongoDB and Databricks to build ai-enhanced retail search solutions.", "contentType": "Tutorial"}, "title": "Learn to Build AI-Enhanced Retail Search Solutions with MongoDB and Databricks", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/rust/rust-mongodb-crud-tutorial", "action": "created", "body": "# Get Started with Rust and MongoDB\n\n \n\nThis Quick Start post will help you connect your Rust application to a MongoDB cluster. It will then show you how to do Create, Read, Update, and Delete (CRUD) operations on a collection. Finally, it'll cover how to use serde to map between MongoDB's BSON documents and Rust structs.\n\n## Series Tools & Versions\n\nThis series assumes that you have a recent version of the Rust toolchain installed (v1.57+), and that you're comfortable with Rust syntax. It also assumes that you're reasonably comfortable using the command-line and your favourite code editor.\n\n>Rust is a powerful systems programming language with high performance and low memory usage which is suitable for a wide variety of tasks. Although currently a niche language for working with data, its popularity is quickly rising.\n\nIf you use Rust and want to work with MongoDB, this blog series is the place to start! I'm going to show you how to do the following:\n\n- Install the MongoDB Rust driver. The Rust driver is the mongodb crate which allows you to communicate with a MongoDB cluster.\n- Connect to a MongoDB instance.\n- Create, Read, Update & Delete (CRUD) documents in your database.\n\nLater blog posts in the series will cover things like *Change Streams*, *Transactions* and the amazing *Aggregation Pipeline* feature which allows you to run advanced queries on your data.\n\n## Prerequisites\n\nI'm going to assume you have a working knowledge of Rust. I won't use any complex Rust code - this is a MongoDB tutorial, not a Rust tutorial - but you'll want to know the basics of error-handling and borrowing in Rust, at least! You may want to run `rustup update` if you haven't since January 2022 because I'll be working with a recent release.\n\nYou'll need the following:\n\n- An up-to-date Rust toolchain, version 1.47+. 
I recommend you install it with Rustup if you haven't already.\n- A code editor of your choice. I recommend either IntelliJ Rust or the free VS Code with the official Rust plugin\n\nThe MongoDB Rust driver uses Tokio by default - and this tutorial will do that too. If you're interested in running under async-std, or synchronously, the changes are straightforward. I'll cover them at the end.\n\n## Creating your database\n\nYou'll use MongoDB Atlas to host a MongoDB cluster, so you don't need to worry about how to configure MongoDB itself.\n\n> Get started with an M0 cluster on Atlas. It's free forever, and it's the easiest way to try out the steps in this blog series. You won't even need to provide payment details.\n\nYou'll need to create a new cluster and load it with sample data My awesome colleague Maxime Beugnet has created a video tutorial to help you out, but I also explain the steps below:\n\n- Click \"Start free\" on the MongoDB homepage.\n- Enter your details, or just sign up with your Google account, if you have one.\n- Accept the Terms of Service\n- Create a *Starter* cluster.\n - Select the same cloud provider you're used to, or just leave it as-is. Pick a region that makes sense for you.\n - You can change the name of the cluster if you like. I've called mine \"RustQuickstart\".\n\nIt will take a couple of minutes for your cluster to be provisioned, so while you're waiting you can move on to the next step.\n\n## Starting your project\n\nIn your terminal, change to the directory where you keep your coding projects and run the following command:\n\n``` bash\ncargo new --bin rust_quickstart\n```\n\nThis will create a new directory called `rust_quickstart` containing a new, nearly-empty project. In the directory, open `Cargo.toml` and change the `dependencies]` section so it looks like this:\n\n``` toml\n[dependencies]\nmongodb = \"2.1\"\nbson = { version = \"2\", features = [\"chrono-0_4\"] } # Needed for using chrono datetime in doc\ntokio = \"1\"\nchrono = \"0.4\" # Used for setting DateTimes\nserde = \"1\" # Used in the Map Data into Structs section\n```\n\nNow you can download and build the dependencies by running:\n\n``` bash\ncargo run\n```\n\nYou should see *lots* of dependencies downloaded and compiled. Don't worry, most of this only happens the first time you run it! At the end, if everything went well, it should print \"Hello, World!\" in your console.\n\n## Set up your MongoDB instance\n\nYour MongoDB cluster should have been set up and running for a little while now, so you can go ahead and get your database set up for the next steps.\n\nIn the Atlas web interface, you should see a green button at the bottom-left of the screen, saying \"Get Started\". If you click on it, it'll bring up a checklist of steps for getting your database set up. Click on each of the items in the list (including the optional \"Load Sample Data\" item), and it'll help you through the steps to get set up.\n\n### Create a User\n\nFollowing the \"Get Started\" steps, create a user with \"Read and write access to any database\". You can give it a username and password of your choice - take a note of them, you'll need them in a minute. Use the \"autogenerate secure password\" button to ensure you have a long random password which is also safe to paste into your connection string later.\n\n### Allow an IP address\n\nWhen deploying an app with sensitive data, you should only allow the IP address of the servers which need to connect to your database. 
Click the 'Add IP Address' button, then click 'Add Current IP Address' and finally, click 'Confirm'. You can also set a time-limit on an access list entry, for added security. Note that sometimes your IP address may change, so if you lose the ability to connect to your MongoDB cluster during this tutorial, go back and repeat these steps.\n\n## Connecting to MongoDB\n\nNow you've got the point of this tutorial - connecting your Rust code to a MongoDB database! The last step of the \"Get Started\" checklist is \"Connect to your Cluster\". Select \"Connect your application\".\n\nUsually, in the dialog that shows up, you'd select \"Rust\" in the \"Driver\" menu, but because the Rust driver has only just been released, it may not be in the list! You should select \"Python\" with a version of \"3.6 or later\".\n\nEnsure Step 2 has \"Connection String only\" highlighted, and press the \"Copy\" button to copy the URL to your pasteboard (just storing it temporarily in a text file is fine). Paste it to the same place you stored your username and password. Note that the URL has `` as a placeholder for your password. You should paste your password in here, replacing the whole placeholder including the '\\<' and '>' characters.\n\nBack in your Rust project, open `main.rs` and replace the contents with the following:\n\n``` rust\nuse mongodb::{Client, options::{ClientOptions, ResolverConfig}};\nuse std::env;\nuse std::error::Error;\nuse tokio;\n\n#[tokio::main]\nasync fn main() -> Result<(), Box> {\n // Load the MongoDB connection string from an environment variable:\n let client_uri =\n env::var(\"MONGODB_URI\").expect(\"You must set the MONGODB_URI environment var!\");\n\n // A Client is needed to connect to MongoDB:\n // An extra line of code to work around a DNS issue on Windows:\n let options =\n ClientOptions::parse_with_resolver_config(&client_uri, ResolverConfig::cloudflare())\n .await?;\n let client = Client::with_options(options)?;\n\n // Print the databases in our MongoDB cluster:\n println!(\"Databases:\");\n for name in client.list_database_names(None, None).await? {\n println!(\"- {}\", name);\n }\n\n Ok(())\n}\n```\n\nIn order to run this, you'll need to set the MONGODB_URI environment variable to the connection string you obtained above. Run one of the following in your terminal window, depending on your platform:\n\n``` bash\n# Unix (including MacOS):\nexport MONGODB_URI='mongodb+srv://yourusername:yourpasswordgoeshere@rustquickstart-123ab.mongodb.net/test?retryWrites=true&w=majority'\n\n# Windows CMD shell:\nset MONGODB_URI='mongodb+srv://yourusername:yourpasswordgoeshere@rustquickstart-123ab.mongodb.net/test?retryWrites=true&w=majority'\n\n# Powershell:\n$Env:MONGODB_URI='mongodb+srv://yourusername:yourpasswordgoeshere@rustquickstart-123ab.mongodb.net/test?retryWrites=true&w=majority'\n```\n\nOnce you've done that, you can `cargo run` this code, and the result should look like this:\n\n``` none\n$ cargo run\n Compiling rust_quickstart v0.0.1 (/Users/judy2k/development/rust_quickstart)\n Finished dev [unoptimized + debuginfo] target(s) in 3.35s\n Running `target/debug/rust_quickstart`\nDatabases:\n- sample_airbnb\n- sample_analytics\n- sample_geospatial\n- sample_mflix\n- sample_supplies\n- sample_training\n- sample_weatherdata\n- admin\n- local\n```\n\n**Congratulations!** You just connected your Rust program to MongoDB and listed the databases in your cluster. 
If you don't see this list then you may not have successfully loaded sample data into your cluster - you'll want to go back a couple of steps until running this command shows the list above.\n\n## BSON - How MongoDB understands data\n\nBefore you go ahead querying & updating your database, it's useful to have an overview of BSON and how it relates to MongoDB. BSON is the binary data format used by MongoDB to store all your data. BSON is also the format used by the MongoDB query language and aggregation pipelines (I'll get to these later).\n\nIt's analogous to JSON and handles all the same core types, such as numbers, strings, arrays, and objects (which are called Documents in BSON), but BSON supports more types than JSON. This includes things like dates & decimals, and it has a special ObjectId type usually used for identifying documents in a MongoDB collection. Because BSON is a binary format it's not human readable - usually when it's printed to the screen it'll be printed to look like JSON.\n\nBecause of the mismatch between BSON's dynamic schema and Rust's static type system, dealing with BSON in Rust can be tricky. Fortunately the `bson` crate provides some useful tools for dealing with BSON data, including the `doc!` macro for generating BSON documents, and it implements [serde for the ability to serialize and deserialize between Rust structs and BSON data.\n\nCreating a document structure using the `doc!` macro looks like this:\n\n``` rust\nuse chrono::{TimeZone, Utc};\nuse mongodb::bson::doc;\n\nlet new_doc = doc! {\n \"title\": \"Parasite\",\n \"year\": 2020,\n \"plot\": \"A poor family, the Kims, con their way into becoming the servants of a rich family, the Parks. But their easy life gets complicated when their deception is threatened with exposure.\",\n \"released\": Utc.ymd(2020, 2, 7).and_hms(0, 0, 0),\n};\n```\n\nIf you use `println!` to print the value of `new_doc` to the console, you should see something like this:\n\n``` none\n{ title: \"Parasite\", year: 2020, plot: \"A poor family, the Kims, con their way into becoming the servants of a rich family, the Parks. But their easy life gets complicated when their deception is threatened with exposure.\", released: Date(\"2020-02-07 00:00:00 UTC\") }\n```\n\n(Incidentally, Parasite is an absolutely amazing movie. It isn't already in the database you'll be working with because it was released in 2020 but the dataset was last updated in 2015.)\n\nAlthough the above output looks a bit like JSON, this is just the way the BSON library implements the `Display` trait. The data is still handled as binary data under the hood.\n\n## Creating Documents\n\nThe following examples all use the sample_mflix dataset that you loaded into your Atlas cluster. It contains a fun collection called `movies`, with the details of a whole load of movies with releases dating back to 1903, from IMDB's database.\n\nThe Client type allows you to get the list of databases in your cluster, but not much else. In order to actually start working with data, you'll need to get a Database using either Client's `database` or `database_with_options` methods. You'll do this in the next section.\n\nThe code in the last section constructs a Document in memory, and now you're going to persist it in the movies database. The first step before doing anything with a MongoDB collection is to obtain a Collection object from your database. 
This is done as follows:\n\n``` rust\n// Get the 'movies' collection from the 'sample_mflix' database:\nlet movies = client.database(\"sample_mflix\").collection(\"movies\");\n```\n\nIf you've browsed the movies collection with Compass or the \"Collections\" tab in Atlas, you'll see that most of the records have more fields than the document I built above using the `doc!` macro. Because MongoDB doesn't enforce a schema within a collection by default, this is perfectly fine, and I've just cut down the number of fields for readability. Once you have a reference to your MongoDB collection, you can use the `insert_one` method to insert a single document:\n\n``` rust\nlet insert_result = movies.insert_one(new_doc.clone(), None).await?;\nprintln!(\"New document ID: {}\", insert_result.inserted_id);\n```\n\nThe `insert_one` method returns the type `Result` which can be used to identify any problems inserting the document, and can be used to find the id generated for the new document in MongoDB. If you add this code to your main function, when you run it, you should see something like the following:\n\n``` none\nNew document ID: ObjectId(\"5e835f3000415b720028b0ad\")\n```\n\nThis code inserts a single `Document` into a collection. If you want to insert multiple Documents in bulk then it's more efficient to use `insert_many` which takes an `IntoIterator` of Documents which will be inserted into the collection.\n\n## Retrieve Data from a Collection\n\nBecause I know there are no other documents in the collection with the name Parasite, you can look it up by title using the following code, instead of the ID you retrieved when you inserted the record:\n\n``` rust\n// Look up one document:\nlet movie: Document = movies\n .find_one(\n doc! {\n \"title\": \"Parasite\"\n },\n None,\n ).await?\n .expect(\"Missing 'Parasite' document.\");\nprintln!(\"Movie: {}\", movie);\n```\n\nThis code should result in output like the following:\n\n``` none\nMovie: { _id: ObjectId(\"5e835f3000415b720028b0ad\"), title: \"Parasite\", year: 2020, plot: \"A poor family, the Kims, con their way into becoming the servants of a rich family, the Parks. But their easy life gets complicated when their deception is threatened with exposure.\", released: Date(\"2020-02-07 00:00:00 UTC\") }\n```\n\nIt's very similar to the output above, but when you inserted the record, the MongoDB driver generated a unique ObjectId for you to identify this document. Every document in a MongoDB collection has a unique `_id` value. You can provide a value yourself if you have a value that is guaranteed to be unique, or MongoDB will generate one for you, as it did in this case. It's usually good practice to explicitly set a value yourself.\n\nThe find_one method is useful to retrieve a single document from a collection, but often you will need to search for multiple records. In this case, you'll need the find method, which takes similar options as this call, but returns a `Result`. The `Cursor` is used to iterate through the list of returned data.\n\nThe find operations, along with their accompanying filter documents are very powerful, and you'll probably use them a lot. If you need more flexibility than `find` and `find_one` can provide, then I recommend you check out the documentation on Aggregation Pipelines which are super-powerful and, in my opinion, one of MongoDB's most powerful features. 
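To give you a taste of what that looks like with the Rust driver, here's a rough sketch of a small pipeline run against the `movies` collection. It assumes you add the `futures` crate to your `Cargo.toml` so that `TryStreamExt` is available for iterating the returned cursor:

``` rust
use futures::stream::TryStreamExt;
use mongodb::bson::doc;

// Count how many movies were released in each year of the 1990s:
let pipeline = vec![
    doc! { "$match": { "year": { "$gte": 1990, "$lt": 2000 } } },
    doc! { "$group": { "_id": "$year", "movie_count": { "$sum": 1 } } },
    doc! { "$sort": { "_id": 1 } },
];

let mut results = movies.aggregate(pipeline, None).await?;
while let Some(result) = results.try_next().await? {
    println!("{}", result);
}
```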
I'll write another blog post in this series just on that topic - I'm looking forward to it!\n\n## Update Documents in a Collection\n\nOnce a document is stored in a collection, it can be updated in various ways. If you would like to completely replace a document with another document, you can use the find_one_and_replace method, but it's more common to update one or more parts of a document, using update_one or update_many. Each separate document update is atomic, which can be a useful feature to keep your data consistent within a document. Bear in mind though that `update_many` is not itself an atomic operation - for that you'll need to use multi-document ACID Transactions, available in MongoDB since version 4.0 (and available for sharded collections since 4.2). Version 2.x of the Rust driver supports transactions for replica sets.\n\nTo update a single document in MongoDB, you need two BSON Documents: The first describes the query to find the document you'd like to update; The second Document describes the update operations you'd like to conduct on the document in the collection. Although the \"release\" date for Parasite was in 2020, I think this refers to the release in the USA. The *correct* year of release was 2019, so here's the code to update the record accordingly:\n\n``` rust\n// Update the document:\nlet update_result = movies.update_one(\n doc! {\n \"_id\": &movie.get(\"_id\")\n },\n doc! {\n \"$set\": { \"year\": 2019 }\n },\n None,\n).await?;\nprintln!(\"Updated {} document\", update_result.modified_count);\n```\n\nWhen you run the above, it should print out \"Updated 1 document\". If it doesn't then something has happened to the movie document you inserted earlier. Maybe you've deleted it? Just to check that the update has updated the year value correctly, here's a `find_one` command you can add to your program to see what the updated document looks like:\n\n``` rust\n// Look up the document again to confirm it's been updated:\nlet movie = movies\n .find_one(\n doc! {\n \"_id\": &movie.get(\"_id\")\n },\n None,\n ).await?\n .expect(\"Missing 'Parasite' document.\");\nprintln!(\"Updated Movie: {}\", &movie);\n```\n\nWhen I ran these blocks of code, the result looked like the text below. See how it shows that the year is now 2019 instead of 2020.\n\n``` none\nUpdated 1 document\nUpdated Movie: { _id: ObjectId(\"5e835f3000415b720028b0ad\"), title: \"Parasite\", year: 2019, plot: \"A poor family, the Kims, con their way into becoming the servants of a rich family, the Parks. But their easy life gets complicated when their deception is threatened with exposure.\", released: Date(\"2020-02-07 00:00:00 UTC\") }\n```\n\n## Delete Documents from a Collection\n\nIn the above sections you learned how to create, read and update documents in the collection. If you've run your program a few times, you've probably built up quite a few documents for the movie Parasite! It's now a good time to clear that up using the `delete_many` method. The MongoDB rust driver provides 3 methods for deleting documents:\n\n- `find_one_and_delete` will delete a single document from a collection and return the document that was deleted, if it existed.\n- `delete_one` will find the documents matching a provided filter and will delete the first one found (if any).\n- `delete_many`, as you might expect, will find the documents matching a provided filter, and will delete *all* of them.\n\nIn the code below, I've used `delete_many` because you may have created several records when testing the code above. 
The filter just searches for the movie by name, which will match and delete *all* the inserted documents, whereas if you searched by an `_id` value it would delete just one, because ids are unique.\n\nIf you're constantly filtering or sorting on a field, you should consider adding an index to that field to improve performance as your collection grows. Check out the MongoDB Manual for more details.\n\n``` rust\n// Delete all documents for movies called \"Parasite\":\nlet delete_result = movies.delete_many(\n doc! {\n \"title\": \"Parasite\"\n },\n None,\n).await?;\nprintln!(\"Deleted {} documents\", delete_result.deleted_count);\n```\n\nYou did it! Create, read, update and delete operations are the core operations you'll use again and again for accessing and managing the data in your MongoDB cluster. After the taster that this tutorial provides, it's definitely worth reading up in more detail on the following:\n\n- Query Documents which are used for all read, update and delete operations.\n- The MongoDB crate and docs which describe all of the operations the MongoDB driver provides for accessing and modifying your data.\n- The bson crate and its accompanying docs describe how to create and map data for insertion or retrieval from MongoDB.\n- The serde crate provides the framework for mapping between Rust data types and BSON with the bson crate, so it's important to learn how to take advantage of it.\n\n## Using serde to Map Data into Structs\n\nOne of the features of the bson crate which may not be readily apparent is that it provides a BSON data format for the `serde` framework. This means you can take advantage of the serde crate to map between Rust datatypes and BSON types for persistence in MongoDB.\n\nFor an example of how this is useful, see the following example of how to access the `title` field of the `new_movie` document (*without* serde):\n\n``` rust\nuse serde::{Deserialize, Serialize};\nuse mongodb::bson::{Bson, oid::ObjectId};\n\n// Working with Document can be verbose:\nif let Ok(title) = new_doc.get_str(\"title\") {\n println!(\"title: {}\", title);\n} else {\n println!(\"no title found\");\n}\n```\n\nThe first line of the code above retrieves the value of `title` and then attempts to retrieve it *as a string* (`Bson::as_str` returns `None` if the value is a different type). There's quite a lot of error-handling and conversion involved. The serde framework provides the ability to define a struct like the one below, with fields that match the document you're expecting to receive.\n\n``` rust\n// You use `serde` to create structs which can serialize & deserialize between BSON:\n#derive(Serialize, Deserialize, Debug)]\nstruct Movie {\n #[serde(rename = \"_id\", skip_serializing_if = \"Option::is_none\")]\n id: Option,\n title: String,\n year: i32,\n plot: String,\n #[serde(with = \"bson::serde_helpers::chrono_datetime_as_bson_datetime\")]\n released: chrono::DateTime,\n}\n```\n\nNote the use of the `Serialize` and `Deserialize` macros which tell serde that this struct can be serialized and deserialized. The `serde` attribute is also used to tell serde that the `id` struct field should be serialized to BSON as `_id`, which is what MongoDB expects it to be called. The parameter `skip_serializing_if = \"Option::is_none\"` also tells serde that if the optional value of `id` is `None` then it should not be serialized at all. 
(If you provide `_id: None` BSON to MongoDB it will store the document with an id of `NULL`, whereas if you do not provide one, then an id will be generated for you, which is usually the behaviour you want.) Also, we need to use an attribute to point ``serde`` to the helper that it needs to serialize and deserialize timestamps as defined by ``chrono``.\n\nThe code below creates an instance of the `Movie` struct for the Captain Marvel movie. (Wasn't that a great movie? I loved that movie!) After creating the struct, before you can save it to your collection, it needs to be converted to a BSON *document*. This is done in two steps: First it is converted to a Bson value with `bson::to_bson`, which returns a `Bson` instance; then it's converted specifically to a `Document` by calling `as_document` on it. It is safe to call `unwrap` on this result because I already know that serializing a struct to BSON creates a BSON document type.\n\nOnce your program has obtained a bson `Document` instance, you can call `insert_one` with it in exactly the same way as you did in the section above called [Creating Documents.\n\n``` rust\n// Initialize struct to be inserted:\nlet captain_marvel = Movie {\n id: None,\n title: \"Captain Marvel\".to_owned(),\n year: 2019,\n};\n\n// Convert `captain_marvel` to a Bson instance:\nlet serialized_movie = bson::to_bson(&captain_marvel)?;\nlet document = serialized_movie.as_document().unwrap();\n\n// Insert into the collection and extract the inserted_id value:\nlet insert_result = movies.insert_one(document.to_owned(), None).await?;\nlet captain_marvel_id = insert_result\n .inserted_id\n .as_object_id()\n .expect(\"Retrieved _id should have been of type ObjectId\");\nprintln!(\"Captain Marvel document ID: {:?}\", captain_marvel_id);\n```\n\nWhen I ran the code above, the output looked like this:\n\n``` none\nCaptain Marvel document ID: ObjectId(5e835f30007760020028b0ae)\n```\n\nIt's great to be able to create data using Rust's native datatypes, but I think it's even more valuable to be able to deserialize data into structs. This is what I'll show you next. In many ways, this is the same process as above, but in reverse.\n\nThe code below retrieves a single movie document, converts it into a `Bson::Document` value, and then calls `from_bson` on it, which will deserialize it from BSON into whatever type is on the left-hand side of the expression. This is why I've had to specify that `loaded_movie` is of type `Movie` on the left-hand side, rather than just allowing the rust compiler to derive that information for me. An alternative is to use the turbofish notation on the `from_bson` call, explicitly calling `from_bson::(loaded_movie)`. At the end of the day, as in many things Rust, it's your choice.\n\n``` rust\n// Retrieve Captain Marvel from the database, into a Movie struct:\n// Read the document from the movies collection:\nlet loaded_movie = movies\n .find_one(Some(doc! 
{ \"_id\": captain_marvel_id.clone() }), None)\n .await?\n .expect(\"Document not found\");\n\n// Deserialize the document into a Movie instance\nlet loaded_movie_struct: Movie = bson::from_bson(Bson::Document(loaded_movie))?;\nprintln!(\"Movie loaded from collection: {:?}\", loaded_movie_struct);\n```\n\nAnd finally, here's what I got when I printed out the debug representation of the Movie struct (this is why I derived `Debug` on the struct definition above):\n\n``` none\nMovie loaded from collection: Movie { id: Some(ObjectId(5e835f30007760020028b0ae)), title: \"Captain Marvel\", year: 2019 }\n```\n\nYou can check out the full Tokio code example on github.\n\n## When You Don't Want To Run Under Tokio\n\n### Async-std\n\nIf you prefer to use `async-std` instead of `tokio`, you're in luck! The changes are trivial. First, you'll need to disable the defaults features and enable the `async-std-runtime` feature:\n\n``` none\ndependencies]\nasync-std = \"1\"\nmongodb = { version = \"2.1\", default-features = false, features = [\"async-std-runtime\"] }\n```\n\nThe only changes you'll need to make to your rust code is to add `use async_std;` to the imports and tag your async main function with `#[async_std::main]`. All the rest of your code should be identical to the Tokio example.\n\n``` rust\nuse async_std;\n\n#[async_std::main]\nasync fn main() -> Result<(), Box> {\n // Your code goes here.\n}\n```\n\nYou can check out the full async-std code example [on github.\n\n### Synchronous Code\n\nIf you don't want to run under an async framework, you can enable the sync feature. In your `Cargo.toml` file, disable the default features and enable `sync`:\n\n``` none\ndependencies]\nmongodb = { version = \"2.1\", default-features = false, features = [\"sync\"] }\n```\n\nYou won't need your enclosing function to be an `async fn` any more. You'll need to use a different `Client` interface, defined in `mongodb::sync` instead, and you don't need to await the result of any of the IO functions:\n\n``` rust\nuse mongodb::sync::Client;\n\n// Use mongodb::sync::Client, instead of mongodb::Client:\nlet client = Client::with_uri_str(client_uri.as_ref())?;\n\n// .insert_one().await? becomes .insert_one()?\nlet insert_result = movies.insert_one(new_doc.clone(), None)?;\n```\n\nYou can check out the full synchronous code example [on github.\n\n## Further Reading\n\nThe documentation for the MongoDB Rust Driver is very good. Because the BSON crate is also leveraged quite heavily, it's worth having the docs for that on-hand too. I made lots of use of them writing this quick start.\n\n- Rust Driver Crate\n- Rust Driver Reference Docs\n- Rust Driver GitHub Repository\n- BSON Crate\n- BSON Reference Docs\n- BSON GitHub Repository\n- The BSON Specification\n- Serde Documentation\n\n## Conclusion\n\nPhew! That was a pretty big tutorial, wasn't it? 
The operations described here will be ones you use again and again, so it's good to get comfortable with them.\n\nWhat *I* learned writing the code for this tutorial is how much value the `bson` crate provides to you and the mongodb driver - it's worth getting to know that at least as well as the `mongodb` crate, as you'll be using it for data generation and conversion *a lot* and it's a deceptively rich library.\n\nThere will be more Rust Quick Start posts on MongoDB Developer Hub, covering different parts of MongoDB and the MongoDB Rust Driver, so keep checking back!\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Rust"], "pageDescription": "Learn how to perform CRUD operations using Rust for MongoDB databases.", "contentType": "Quickstart"}, "title": "Get Started with Rust and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/querying-mongodb-browser-realm-react", "action": "created", "body": "# Querying MongoDB in the Browser with React and the Web SDK\n\nWhen we think of connecting to a database, we think of serverside. Our application server connects to our database server using the applicable driver for our chosen language. But with the Atlas App Service's Realm Web SDK, we can run queries against our Atlas cluster from our web browser, no server required.\n\n## Security and Authentication\n\nOne of the primary reasons database connections are traditionally server-to-server is so that we do not expose any admin credentials. Leaking credentials like this is not a concern with the Web SDK as it has APIs for user management, authentication, and access control. There are no admin credentials to expose as each user has a separate account. Then, using Rules, we control what data each user has permission to access.\n\nThe MongoDB service uses a strict rules system that prevents all operations unless they are specifically allowed. MongoDB Atlas App Services determines if each operation is allowed when it receives the request from the client, based on roles that you define. Roles are sets of document-level and field-level CRUD permissions and are chosen individually for each document associated with a query.\n\nRules have the added benefit of enforcing permissions at the data access level, so you don't need to include any permission checks in your application logic.\n\n## Creating an Atlas Cluster and Realm App\n\nYou can find instructions for creating a free MongoDB Atlas cluster and App Services App in our documentation: Create a App (App Services UI).\n\nWe're going to be using one of the sample datasets in this tutorial, so after creating your free cluster, click on Collections and select the option to load a sample dataset. Once the data is loaded, you should see several new databases in your cluster. We're going to be using the `sample_mflix` database in our code later.\n\n## Users and Authentication Providers\n\nAtlas Application Services supports a multitude of different authentication providers, including Google, Apple, and Facebook. For this tutorial, we're going to stick with regular email and password authentication.\n\nIn your App, go to the Users section and enable the Email/Password provider, the user confirmation method should be \"automatic\", and the password reset method should be a reset function. 
You can use the provided stubbed reset function for now.\n\nIn a real-world application, we would have a registration flow so that users could create accounts. But for the sake of this tutorial, we're going to create a new user manually. While still in the \"Users\" section of your App, click on \"Add New User\" and enter the email and password you would like to use.\n\n## Rules and Roles\n\nRules and Roles govern what operations a user can perform. If an operation has not been explicitly allowed, Atlas App Services will reject it. At the moment, we have no Rules or Roles, so our users can't access anything. We need to configure our first set of permissions.\n\nNavigate to the \"Rules\" section and select the `sample_mflix` database and the `movies` collection. App Services has several permissions templates ready for you to use.\n\n- Users can only read and write their own data.\n- Users can read all data, but only write their own data.\n- Users can only read all data.\n\nThese are just the most common types of permissions; you can create your own much more advanced rules to match your requirements.\n\n- Configure a role that can only insert documents.\n- Define field-level read or write permissions for a field in an embedded document.\n- Determine field-level write permissions dynamically using a JSON expression.\n- Invoke an Atlas Function to perform more involved checks, such as checking data from a different collection.\n\nRead the documentation \"Configure Advanced Rules\" for more information.\n\nWe only want our users to be able to access their data and nothing else, so select the \"Users can only read and write their own data\" template.\n\nApp Services does not stipulate what field name you must use to store your user ID; we must enter it when creating our configuration. Enter `authorId` as the field name in this example.\n\nBy now, you should have the Email/Password provider enabled, a new user created, and rules configured to allow users to access any data they own. Ensure you deploy all your changes, and we can move on to the code.\n\n## Creating Our Web Application with React\n\nDownload the source for our demo application from GitHub.\n\nOnce you have the code downloaded, you will need to install a couple of dependencies.\n\n``` shell\nnpm install\n```\n\n## The App Provider\n\nAs we're going to require access to our App client throughout our React component tree, we use a Context Provider. 
You can find the context providers for this project in the `providers` folder in the repo.\n\n``` javascript\nimport * as RealmWeb from \"realm-web\"\n\nimport React, { useContext, useState } from \"react\"\n\nconst RealmAppContext = React.createContext(null)\n\nconst RealmApp = ({ children }) => {\n const REALM_APP_ID = \"realm-web-demo\"\n const app = new RealmWeb.App({ id: REALM_APP_ID })\n const user, setUser] = useState(null)\n\n const logIn = async (email, password) => {\n const credentials = RealmWeb.Credentials.emailPassword(email, password)\n try {\n await app.logIn(credentials)\n setUser(app.currentUser)\n return app.currentUser\n } catch (e) {\n setUser(null)\n return null\n }\n }\n\n const logOut = () => {\n if (user !== null) {\n app.currentUser.logOut()\n setUser(null)\n }\n }\n\n return (\n \n {children}\n \n )\n}\n\nexport const useRealmApp = () => {\n const realmContext = useContext(RealmAppContext)\n if (realmContext == null) {\n throw new Error(\"useRealmApp() called outside of a RealmApp?\")\n }\n return realmContext\n}\n\nexport default RealmApp\n```\n\nThis provider handles the creation of our Web App client, as well as providing methods for logging in and out. Let's look at these parts in more detail.\n\n``` javascript\nconst RealmApp = ({ children }) => {\n const REALM_APP_ID = \"realm-web-demo\"\n const app = new RealmWeb.App({ id: REALM_APP_ID })\n const [user, setUser] = useState(null)\n```\n\nThe value for `REALM_APP_ID` is on your Atlas App Services dashboard. We instantiate a new Web App with the relevant ID. It is this App which allows us to access the different Atlas App Services services. You can find all required environment variables in the `.envrc.example` file.\n\nYou should ensure these variables are available in your environment in whatever manner you normally use. My personal preference is [direnv.\n\n``` javascript\nconst logIn = async (email, password) => {\n const credentials = RealmWeb.Credentials.emailPassword(email, password)\n try {\n await app.logIn(credentials)\n setUser(app.currentUser)\n return app.currentUser\n } catch (e) {\n setUser(null)\n return null\n }\n}\n```\n\nThe `logIn` method accepts the email and password provided by the user and creates an App Services credentials object. We then use this to attempt to authenticate with our App. If successful, we store the authenticated user in our state.\n\n## The MongoDB Provider\n\nJust like the App context provider, we're going to be accessing the Atlas service throughout our component tree, so we create a second context provider for our database.\n\n``` javascript\nimport React, { useContext, useEffect, useState } from \"react\"\n\nimport { useRealmApp } from \"./realm\"\n\nconst MongoDBContext = React.createContext(null)\n\nconst MongoDB = ({ children }) => {\n const { user } = useRealmApp()\n const db, setDb] = useState(null)\n\n useEffect(() => {\n if (user !== null) {\n const realmService = user.mongoClient(\"mongodb-atlas\")\n setDb(realmService.db(\"sample_mflix\"))\n }\n }, [user])\n\n return (\n \n {children}\n \n )\n}\n\nexport const useMongoDB = () => {\n const mdbContext = useContext(MongoDBContext)\n if (mdbContext == null) {\n throw new Error(\"useMongoDB() called outside of a MongoDB?\")\n }\n return mdbContext\n}\n\nexport default MongoDB\n```\n\nThe Web SDK provides us with access to some of the different Atlas App Services, as well as [our custom functions. 
For this example, we are only interested in the `mongodb-atlas` service as it provides us with access to the linked MongoDB Atlas cluster.\n\n``` javascript\nuseEffect(() => {\n if (user !== null) {\n const realmService = user.mongoClient(\"mongodb-atlas\")\n setDb(realmService.db(\"sample_mflix\"))\n }\n}, user])\n```\n\nIn this React hook, whenever our user variable updates\u2014and is not null, so we have an authenticated user\u2014we set our db variable equal to the database service for the `sample_mflix` database.\n\nOnce the service is ready, we can begin to run queries against our MongoDB database in much the same way as we would with the Node.js driver.\n\nHowever, it is a subset of actions, so not all are available\u2014the most notable absence is `collection.watch()`, but that is being actively worked on and should be released soon\u2014but the common CRUD actions will work.\n\n## Wrap the App in Index.js\n\nThe default boilerplate generated by `create-react-app` places the DOM renderer in `index.js`, so this is a good place for us to ensure that we wrap the entire component tree within our `RealmApp` and `MongoDB` contexts.\n\n``` javascript\nReactDOM.render(\n \n \n \n \n \n \n ,\n document.getElementById(\"root\")\n)\n```\n\nThe order of these components is essential. We must create our Web App first before we attempt to access the `mongodb-atlas` service. So, you must ensure that `` is before ``. Now that we have our `` component nestled within our App and MongoDB contexts, we can query our Atlas cluster from within our React component!\n\n## The Demo Application\n\nOur demo has two main components: a login form and a table of movies, both of which are contained within the `App.js`. Which component we show depends upon whether the current user has authenticated or not.\n\n``` javascript\nfunction LogInForm(props) {\n return (\n \n \n \n Log in\n \n \n props.setEmail(e.target.value)}\n value={props.email}\n />\n \n \n props.setPassword(e.target.value)}\n value={props.password}\n />\n \n \n Log in\n \n \n \n )\n}\n```\n\nThe login form consists of two controlled text inputs and a button to trigger the handleLogIn function.\n\n``` javascript\nfunction MovieList(props) {\n return (\n \n \n \n\n \n Title\n Plot\n Rating\n Year\n \n \n {props.movies.map((movie) => (\n \n {movie.title}\n {movie.plot}\n {movie.rated}\n {movie.year}\n \n ))}\n \n \n\n \n Log Out\n \n \n \n )\n}\n```\n\nThe MovieList component renders an HTML table with a few details about each movie, and a button to allow the user to log out.\n\n``` javascript\nfunction App() {\n const { logIn, logOut, user } = useRealmApp()\n const { db } = useMongoDB()\n const [email, setEmail] = useState(\"\")\n const [password, setPassword] = useState(\"\")\n const [movies, setMovies] = useState([])\n\n useEffect(() => {\n async function wrapMovieQuery() {\n if (user && db) {\n const authoredMovies = await db.collection(\"movies\").find()\n setMovies(authoredMovies)\n }\n }\n wrapMovieQuery()\n }, [user, db])\n\n async function handleLogIn() {\n await logIn(email, password)\n }\n\n return user && db && user.state === \"active\" ? (\n \n ) : (\n \n )\n}\n\nexport default App\n```\n\nHere, we have our main `` component. 
Let's look at the different sections in order.\n\n``` javascript\nconst { logIn, logOut, user } = useRealmApp()\nconst { db } = useMongoDB()\nconst [email, setEmail] = useState(\"\")\nconst [password, setPassword] = useState(\"\")\nconst [movies, setMovies] = useState([])\n```\n\nWe're going to use the App and the MongoDB provider in this component: App for authentication, MongoDB to run our query. We also set up some state to store our email and password for logging in, and hopefully later, any movie data associated with our account.\n\n``` javascript\nuseEffect(() => {\n async function wrapMovieQuery() {\n if (user && db) {\n const authoredMovies = await db.collection(\"movies\").find()\n setMovies(authoredMovies)\n }\n }\n wrapMovieQuery()\n}, [user, db])\n```\n\nThis React hook runs whenever our user or db updates, which occurs whenever we successfully log in or out. When the user logs in\u2014i.e., we have a valid user and a reference to the `mongodb-atlas` service\u2014then we run a find on the movies collection.\n\n``` javascript\nconst authoredMovies = await db.collection(\"movies\").find()\n```\n\nNotice we do not need to specify the User Id to filter by in this query. Because of the rules, we configured earlier only those documents owned by the current user will be returned without any additional filtering on our part.\n\n## Taking Ownership of Documents\n\nIf you run the demo and log in now, the movie table will be empty. We're using the sample dataset, and none of the documents within it belongs to our current user. Before trying the demo, modify a few documents in the movies collection and add a new field, `authorId`, with a value equal to your user's ID. You can find their ID in the App Users section.\n\nOnce you have given ownership of some documents to your current user, try running the demo application and logging in.\n\nCongratulations! You have successfully queried your database from within your browser, no server required!\n\n## Change the Rules\n\nTry modifying the rules and roles you created to see how it impacts the demo application.\n\nIgnore the warning and delete the configuration for the movies collection. Now, your App should die with a 403 error: \"no rule exists for namespace' sample_mflix.movies.'\"\n\nUse the \"Users can read all data, but only write their own data\" template. I would suggest also modifying the `find()` or adding a `limit()` as otherwise, the demo will try to show every movie in your table!\n\nAdd field-level permissions. 
In this example, non-owners cannot write to any documents, but they can read the title and year fields for all documents.\n\n``` javascript\n{\n \"roles\": [\n {\n \"name\": \"owner\",\n \"apply_when\": {\n \"authorId\": \"%%user.id\"\n },\n \"insert\": true,\n \"delete\": true,\n \"search\": true,\n \"read\": true,\n \"write\": true,\n \"fields\": {\n \"title\": {},\n \"year\": {}\n },\n \"additional_fields\": {}\n },\n {\n \"name\": \"non-owner\",\n \"apply_when\": {},\n \"insert\": false,\n \"delete\": false,\n \"search\": true,\n \"write\": false,\n \"fields\": {\n \"title\": {\n \"read\": true\n },\n \"year\": {\n \"read\": true\n }\n },\n \"additional_fields\": {}\n }\n ],\n \"filters\": [\n {\n \"name\": \"filter 1\",\n \"query\": {},\n \"apply_when\": {},\n \"projection\": {}\n }\n ],\n \"schema\": {}\n}\n```\n\n## Further Reading\n\nFor more information on MongoDB Atlas App Services and the Web SDK, I recommend reading our documentation:\n\n- [Introduction to MongoDB Atlas App Services for Backend and Web Developers\n- Users & Authentication\n- Realm Web SDK\n\n>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "React"], "pageDescription": "Learn how to run MongoDB queries in the browser with the Web SDK and React", "contentType": "Tutorial"}, "title": "Querying MongoDB in the Browser with React and the Web SDK", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/introduction-indexes-mongodb-atlas-search", "action": "created", "body": "# An Introduction to Indexes for MongoDB Atlas Search\n\nImagine reading a long book like \"A Song of Fire and Ice,\" \"The Lord of\nthe Rings,\" or \"Harry Potter.\" Now imagine that there was a specific\ndetail in one of those books that you needed to revisit. You wouldn't\nwant to search every page in those long books to find what you were\nlooking for. Instead, you'd want to use some sort of book index to help\nyou quickly locate what you were looking for. This same concept of\nindexing content within a book can be carried to MongoDB Atlas\nSearch with search indexes.\n\nAtlas Search makes it easy to build fast, relevant, full-text search on\ntop of your data in the cloud. It's fully integrated, fully managed, and\navailable with every MongoDB Atlas cluster running MongoDB version 4.2\nor higher.\n\nCorrectly defining your indexes is important because they are\nresponsible for making sure that you're receiving relevant results when\nusing Atlas Search. There is no one-size-fits-all solution and different\nindexes will bring you different benefits.\n\nIn this tutorial, we're going to get a gentle introduction to creating\nindexes that will be valuable for various full-text search use cases.\n\nBefore we get too invested in this introduction, it's important to note\nthat Atlas Search uses Apache Lucene. This\nmeans that search indexes are not unique to Atlas Search and if you're\nalready comfortable with Apache Lucene, your existing knowledge of\nindexing will transfer. 
However, the tutorial could act as a solid\nrefresher regardless.\n\n## Understanding the Data Model for the Documents in the Example\n\nBefore we start creating indexes, we should probably define what our\ndata model will be for the example. In an effort to cover various\nindexing scenarios, the data model will be complex.\n\nTake the following for example:\n\n``` json\n{\n \"_id\": \"cea29beb0b6f7b9187666cbed2f070b3\",\n \"name\": \"Pikachu\",\n \"pokedex_entry\": {\n \"red\": \"When several of these Pokemon gather, their electricity could build and cause lightning storms.\",\n \"yellow\": \"It keeps its tail raised to monitor its surroundings. If you yank its tail, it will try to bite you.\"\n },\n \"moves\": \n {\n \"name\": \"Thunder Shock\",\n \"description\": \"A move that may cause paralysis.\"\n },\n {\n \"name\": \"Thunder Wave\",\n \"description\": \"An electrical attack that may paralyze the foe.\"\n }\n ],\n \"location\": {\n \"type\": \"Point\",\n \"coordinates\": [-127, 37]\n }\n}\n```\n\nThe above example document is around Pokemon, but Atlas Search can be\nused on whatever documents are part of your application.\n\nExample documents like the one above allow us to use text search, geo\nsearch, and potentially others. For each of these different search\nscenarios, the index might change.\n\nWhen we create an index for Atlas Search, it is created at the\ncollection level.\n\n## Statically Mapping Fields in a Document or Dynamically Mapping Fields as the Schema Evolves\n\nThere are two ways to map fields within a document when creating an\nindex:\n\n- Dynamic Mappings\n- Static Mappings\n\nIf your document schema is still changing or your use case doesn't allow\nfor it to be rigidly defined, you might want to choose to dynamically\nmap your document fields. A dynamic mapping will automatically assign\nfields when new data is inserted.\n\nTake the following for example:\n\n``` json\n{\n \"mappings\": {\n \"dynamic\": true\n }\n}\n```\n\nThe above JSON represents a valid index. When you add it to a\ncollection, you are essentially mapping every field that exists in the\ndocuments and any field that might exist in the future.\n\nWe can do a simple search using this index like the following:\n\n``` javascript\ndb.pokemon.aggregate([\n {\n \"$search\": {\n \"text\": {\n \"query\": \"thunder\",\n \"path\": [\"moves.name\"]\n }\n }\n }\n]);\n```\n\nWe didn't explicitly define the fields for this index, but attempting to\nsearch for \"thunder\" within the `moves` array will give us matching\nresults based on our example data.\n\nTo be clear, dynamic mappings can be applied at the document level or\nthe field level. At the document level, a dynamic mapping automatically\nindexes all common data types. At both levels, it automatically indexes\nall new and existing data.\n\nWhile convenient, having a dynamic mapping index on all fields of a\ndocument comes at a cost. These indexes will take up more disk space and\nmay be less performant.\n\nThe alternative is to use a static mapping, in which case you specify\nthe fields to map and what type of fields they are. 
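Before looking at a static mapping definition, it's worth noting that the index definitions in this tutorial are just JSON documents that you'd normally paste into the Atlas Search UI. As an aside not covered by this tutorial, on recent cluster versions that expose the search index commands you could also create the dynamic index shown above directly from `mongosh`:\n\n``` javascript\n// Assumes a MongoDB version / Atlas cluster where the search index commands\n// are available in mongosh; otherwise, create the index through the Atlas UI or API.\ndb.pokemon.createSearchIndex(\n  \"default\",\n  { \"mappings\": { \"dynamic\": true } }\n);\n```\n\nHowever the index is created, a static mapping is written in the same JSON format.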
Take the following\nfor example:\n\n``` json\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"string\"\n }\n }\n }\n}\n```\n\nIn the above example, the only field within our document that is being\nindexed is the `name` field.\n\nThe following search query would return results:\n\n``` javascript\ndb.pokemon.aggregate([\n {\n \"$search\": {\n \"text\": {\n \"query\": \"pikachu\",\n \"path\": [\"name\"]\n }\n }\n }\n]);\n```\n\nIf we try to search on any other field within our document, we won't end\nup with results because those fields are not statically mapped nor is\nthe document schema dynamically mapped.\n\nThere is, however, a way to get the best of both worlds if we need it.\n\nTake the following which uses static and dynamic mappings:\n\n``` json\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"pokedex_entry\": {\n \"type\": \"document\",\n \"dynamic\": true\n }\n }\n }\n}\n```\n\nIn the above example, we are still using a static mapping for the `name`\nfield. However, we are using a dynamic mapping on the `pokedex_entry`\nfield. The `pokedex_entry` field is an object so any field within that\nobject will get the dynamic mapping treatment. This means all sub-fields\nare automatically mapped, as well as any new fields that might exist in\nthe future. This could be useful if you want to specify what top level\nfields to map, but map all fields within a particular object as well.\n\nTake the following search query as an example:\n\n``` javascript\ndb.pokemon.aggregate([\n {\n \"$search\": {\n \"text\": {\n \"query\": \"pokemon\",\n \"path\": [\"name\", \"pokedex_entry.red\"]\n }\n }\n }\n]);\n```\n\nThe above search will return results if \"pokemon\" appears in the `name`\nfield or the `red` field within the `pokedex_entry` object.\n\nWhen using a static mapping, you need to specify a type for the field or\nhave `dynamic` set to true on the field. If you only specify a type,\n`dynamic` defaults to false. If you only specify `dynamic` as true, then\nAtlas Search can automatically default certain field types (e.g.,\nstring, date, number).\n\n## Atlas Search Indexes for Complex Fields within a Document\n\nWith the basic dynamic versus static mapping discussion out of the way\nfor MongoDB Atlas Search indexes, now we can focus on more complicated\nor specific scenarios.\n\nLet's first take a look at what our fully mapped index would look like\nfor the document in our example:\n\n``` json\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"moves\": {\n \"type\": \"document\",\n \"fields\": {\n \"name\": {\n \"type\": \"string\"\n },\n \"description\": {\n \"type\": \"string\"\n }\n }\n },\n \"pokedex_entry\": {\n \"type\": \"document\",\n \"fields\": {\n \"red\": {\n \"type\": \"string\"\n },\n \"yellow\": {\n \"type\": \"string\"\n }\n }\n },\n \"location\": {\n \"type\": \"geo\"\n }\n }\n }\n}\n```\n\nIn the above example, we are using a static mapping for every field\nwithin our documents. An interesting thing to note is the `moves` array\nand the `pokedex_entry` object in the example document. Even though one\nis an array and the other is an object, the index is a `document` for\nboth. While writing searches isn't the focus of this tutorial, searching\nan array and object would be similar using dot notation.\n\nHad any of the fields been nested deeper within the document, the same\napproach would be applied. 
For example, we could have something like\nthis:\n\n``` json\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"pokedex_entry\": {\n \"type\": \"document\",\n \"fields\": {\n \"gameboy\": {\n \"type\": \"document\",\n \"fields\": {\n \"red\": {\n \"type\": \"string\"\n },\n \"yellow\": {\n \"type\": \"string\"\n }\n }\n }\n }\n }\n }\n }\n}\n```\n\nIn the above example, the `pokedex_entry` field was changed slightly to\nhave another level of objects. Probably not a realistic way to model\ndata for this dataset, but it should get the point across about mapping\ndeeper nested fields.\n\n## Changing the Options for Specific Mapped Fields\n\nUp until now, each of the indexes have only had their types defined in\nthe mapping. The default options are currently being applied to every\nfield. Options are a way to refine the index further based on your data\nto ultimately get more relevant search results. Let's play around with\nsome of the options within the mappings of our index.\n\nMost of the fields in our example use the\n[string\ndata type, so there's so much more we can do using options. Let's see\nwhat some of those are.\n\n``` json\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"string\",\n \"searchAnalyzer\": \"lucene.spanish\",\n \"ignoreAbove\": 3000\n }\n }\n }\n}\n```\n\nIn the above example, we are specifying that we want to use a\nlanguage\nanalyzer on the `name` field instead of the default\nstandard\nanalyzer. We're also saying that the `name` field should not be indexed\nif the field value is greater than 3000 characters.\n\nThe 3000 characters is just a random number for this example, but adding\na limit, depending on your use case, could improve performance or the\nindex size.\n\nIn a future tutorial, we're going to explore the finer details in\nregards to what the search analyzers are and what they can accomplish.\n\nThese are just some of the available options for the string data type.\nEach data type will have its own set of options. If you want to use the\ndefault for any particular option, it does not need to be explicitly\nadded to the mapped field.\n\nYou can learn more about the data types and their indexing options in\nthe official\ndocumentation.\n\n## Conclusion\n\nYou just received what was hopefully a gentle introduction to creating\nindexes to be used in Atlas Search. To use Atlas Search, you will need\nat least one index on your collection, even if it is a default dynamic\nindex. However, if you know your schema and are able to create static\nmappings, it is usually the better way to go to fine-tune relevancy and\nperformance.\n\nTo learn more about Atlas Search indexes and the various data types,\noptions, and analyzers available, check out the official\ndocumentation.\n\nTo learn how to build more on Atlas Search, check out my other\ntutorials: Building an Autocomplete Form Element with Atlas Search and\nJavaScript\nand Visually Showing Atlas Search Highlights with JavaScript and\nHTML.\n\nHave a question or feedback about this tutorial? 
Head to the MongoDB\nCommunity Forums and let's chat!\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Get a gentle introduction for creating a variety of indexes to be used with MongoDB Atlas Search.", "contentType": "Tutorial"}, "title": "An Introduction to Indexes for MongoDB Atlas Search", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-data-types", "action": "created", "body": "# Realm Data Types\n\n## Introduction\n\nA key feature of Realm is you don\u2019t have to think about converting data to/from JSON, or using ORMs. Just create your objects using the data types your language natively supports. We\u2019re adding new supported types to all our SDKs, here is a refresher and a taste of the new supported types.\n\n## Swift: Already supported types\n\nThe complete reference of supported data types for iOS can be found here.\n\n| Type Name | Code Sample |\n| --------- | ----------- |\n| Bool\nA value type whose instances are either true or false. | `// Declaring as Required`\n`@objc dynamic var value = false`\n\n`// Declaring as Optional`\n`let value = RealmProperty()` |\n| Int, Int8, Int16, Int32, Int64\nA signed integer value type. | `// Declaring as Required`\n`@objc dynamic var value = 0`\n\n`// Declaring as Optional`\n`let value = RealmProperty()` |\n| Float\nA single-precision, floating-point value type. | `// Declaring as Required` \n`@objc dynamic var value: Float = 0.0`\n\n`// Declaring as Optional` `let value = RealmProperty()` |\n| Double\nA double-precision, floating-point value type. | `// Declaring as Required`\n`@objc dynamic var value: Double = 0.0`\n\n`// Declaring as Optional`\n`let value = RealmProperty()` |\n| String\nA Unicode string value that is a collection of characters. | `// Declaring as Required`\n`@objc dynamic var value = \"\"`\n\n`// Declaring as Optional`\n`@objc dynamic var value: String? = nil` |\n| Data\nA byte buffer in memory. | `// Declaring as Required`\n`@objc dynamic var value = Data()`\n\n`// Declaring as Optional`\n`@objc dynamic var value: Data? = nil` |\n| Date\nA specific point in time, independent of any calendar or time zone. | `// Declaring as Required`\n`@objc dynamic var value = Date()`\n\n`// Declaring as Optional`\n`@objc dynamic var value: Date? = nil` |\n| Decimal128\nA structure representing a base-10 number. | `// Declaring as Required`\n`@objc dynamic var decimal: Decimal128 = 0`\n\n`// Declaring as Optional`\n`@objc dynamic var decimal: Decimal128? = nil` |\n| List\nList is the container type in Realm used to define to-many relationships. | `let value = List()` |\n| ObjectId\nA 12-byte (probably) unique object identifier. Compatible with the ObjectId type used in the MongoDB database. | `// Declaring as Required`\n`@objc dynamic var objectId = ObjectId.generate()`\n\n`// Declaring as Optional`\n`@objc dynamic var objectId: ObjectId? = nil` |\n| User-defined Object Your own classes. | `// Declaring as Optional`\n`@objc dynamic var value: MyClass? = nil` |\n\n## Swift: New Realm Supported Data Types\n\nStarting with **Realm iOS 10.8.0**\n\n| Type Name | Code Sample |\n| --------- | ----------- |\n| Maps \nStore data in arbitrary key-value pairs. They\u2019re used when a developer wants to add flexibility to data models that may evolve over time, or handle unstructured data from a remote endpoint. 
| `class Player: Object {` \n\u2003`@objc dynamic var name = String?`\n\u2003`@objc dynamic var email: String?`\n\u2003`@objc dynamic var playerHandle: String?`\n\u2003`let gameplayStats = Map()`\n\u2003`let competitionStats = Map()`\n`}`\n`try! realm.write {`\n\u2003`let player = Player()`\n\u2003`player.name = \"iDubs\"`\n\n\u2003`// get the RealmDictionary field from the object we just created and add stats`\n\u2003`let statsDictionary = player.gameplayStats`\n\u2003`statsDictioanry\"mostCommonRole\"] = \"Medic\"`\n\u2003`statsDictioanry[\"clan\"] = \"Realmers\"`\n\u2003`statsDictioanry[\"favoriteMap\"] = \"Scorpian bay\"`\n\u2003`statsDictioanry[\"tagLine\"] = \"Always Be Healin\"`\n\u2003`statsDictioanry[\"nemesisHandle\"] = \"snakeCase4Life\"`\n\u2003`let competitionStats = player.comeptitionStats`\n\n\u2003`competitionStats[\"EastCoastInvitational\"] = \"2nd Place\"`\n\u2003`competitionStats[\"TransAtlanticOpen\"] = \"4th Place\"`\n`}` |\n| [MutableSet \nMutableSet is the container type in Realm used to define to-many relationships with distinct values as objects. | `// MutableSet declaring as required` \n`let value = MutableSet()`\n\n`// Declaring as Optional`\n`let value: MutableSet? = nil `|\n| AnyRealmValue \nAnyRealmValue is a Realm property type that can hold different data types. | `// Declaring as Required` \n`let value = RealmProperty()`\n\n`// Declaring as Optional`\n`let value: RealmProperty? = nil` |\n| UUID \nUUID is a 16-byte globally-unique value. | `// Declaring as Required` \n`@objc dynamic var uuid = UUID()`\n\n`// Declaring as Optional`\n`@objc dynamic var uuidOpt: UUID? = nil` |\n\n## Android/Kotlin: Already supported types\n\nYou can use these types in your RealmObject subclasses. The complete reference of supported data types for Kotlin can be found here.\n\n| Type Name | Code Sample |\n| --------- | ----------- |\n| Boolean or boolean\nRepresents boolean objects that can have two values: true and false. | `// Declaring as Required` \n`var visited = false`\n\n`// Declaring as Optional`\n`var visited = false` |\n| Integer or int\nA 32-bit signed number. | `// Declaring as Required` \n`var number: Int = 0`\n\n`// Declaring as Optional`\n`var number: Int? = 0` |\n| Short or short\nA 16-bit signed number. | `// Declaring as Required` \n`var number: Short = 0`\n\n`// Declaring as Optional`\n`var number: Short? = 0` |\n| Long or long\nA 64-bit signed number. | `// Declaring as Required` \n`var number: Long = 0`\n\n`// Declaring as Optional`\n`var number: Long? = 0` |\n| Byte or byte]\nA 8-bit signed number. | `// Declaring as Required` \n`var number: Byte = 0`\n\n`// Declaring as Optional`\n`var number: Byte? = 0` |\n| [Double or double\nFloating point number(IEEE 754 double precision) | `// Declaring as Required` \n`var number: Double = 0`\n\n`// Declaring as Optional`\n`var number: Double? = 0.0` |\n| Float or float\nFloating point number(IEEE 754 single precision) | `// Declaring as Required` \n`var number: Float = 0`\n\n`// Declaring as Optional`\n`var number: Float? = 0.0` |\n| String | `// Declaring as Required` \n`var sdkName: String = \"Realm\"`\n\n`// Declaring as Optional`\n`var sdkName: String? = \"Realm\"` |\n| Date | `// Declaring as Required` \n`var visited: Date = Date()`\n\n`// Declaring as Optional`\n`var visited: Date? 
= null` |\n| Decimal128 from org.bson.types\nA binary integer decimal representation of a 128-bit decimal value | `var number: Decimal128 = Decimal128.POSITIVE_INFINITY` |\n| ObjectId from org.bson.types\nA globally unique identifier for objects. | `var oId = ObjectId()` |\n| Any RealmObject subclass | `// Define an embedded object` \n`@RealmClass(embedded = true)`\n`open class Address(`\n\u2003`var street: String? = null,`\n\u2003`var city: String? = null,`\n\u2003`var country: String? = null,`\n\u2003`var postalCode: String? = null`\n`): RealmObject() {}`\n`// Define an object containing one embedded object`\n`open class Contact(_name: String = \"\", _address: Address? = null) : RealmObject() {`\n\u2003`@PrimaryKey var _id: ObjectId = ObjectId()`\n\u2003`var name: String = _name`\n\n\u2003`// Embed a single object.`\n\u2003`// Embedded object properties must be marked optional`\n\u2003`var address: Address? = _address`\n`}` |\n| RealmList \nRealmList is used to model one-to-many relationships in a RealmObject. | `var favoriteColors : RealmList? = null` |\n\n## Android/Kotlin: New Realm Supported Data Types\nStarting with **Realm Android 10.6.0**\n\n| Type Name | Code Sample |\n| --------- | ----------- |\n| RealmDictionary \nManages a collection of unique String keys paired with values. | `import io.realm.RealmDictionary` \n`import io.realm.RealmObject`\n\n`open class Frog: RealmObject() {`\n\u2003`var name: String? = null`\n\u2003`var nicknamesToFriends: RealmDictionary = RealmDictionary()`\n`}` |\n| RealmSet \nYou can use the RealmSet data type to manage a collection of unique keys. | `import io.realm.RealmObject` \n`import io.realm.RealmSet`\n\n`open class Frog: RealmObject() {`\n\u2003`var name: String = \"\"`\n\u2003`var favoriteSnacks: RealmSet = RealmSet();`\n`}` |\n| Mixed\nRealmAny\nYou can use the RealmAny data type to create Realm object fields that can contain any of several underlying types. | `import io.realm.RealmAny` \n`import io.realm.RealmObject`\n\n`open class Frog(var bestFriend: RealmAny? = RealmAny.nullValue()) : RealmObject() {`\n\u2003`var name: String? = null`\n\u2003`open fun bestFriendToString(): String {`\n\u2003\u2003`if (bestFriend == null) {`\n\u2003\u2003\u2003`return \"null\"`\n\u2003\u2003`}`\n\u2003\u2003`return when (bestFriend!!.type) {`\n\u2003\u2003\u2003`RealmAny.Type.NULL -> {`\n\u2003\u2003\u2003\u2003`\"no best friend\"`\n\u2003\u2003\u2003`}`\n\u2003\u2003\u2003`RealmAny.Type.STRING -> {`\n\u2003\u2003\u2003\u2003`bestFriend!!.asString()`\n\u2003\u2003\u2003`}`\n\u2003\u2003\u2003`RealmAny.Type.OBJECT -> {`\n\u2003\u2003\u2003\u2003`if (bestFriend!!.valueClass == Person::class.java) {`\n\u2003\u2003\u2003\u2003\u2003`val person = bestFriend!!.asRealmModel(Person::class.java)`\n\u2003\u2003\u2003\u2003\u2003`person.name`\n\u2003\u2003\u2003\u2003`}`\n\u2003\u2003\u2003\u2003`\"unknown type\"`\n\u2003\u2003\u2003`}`\n\u2003\u2003\u2003`else -> {`\n\u2003\u2003\u2003\u2003`\"unknown type\"`\n\u2003\u2003\u2003`}`\n\u2003\u2003`}`\n\u2003`}`\n`}` |\n| UUID from java.util.UUID | `var id = UUID.randomUUID()` |\n\n## JavaScript - React Native SDK: : Already supported types\n\nThe complete reference of supported data types for JavaScript Node.js can be found here.\n\n| Type Name | Code Sample |\n| --------- | ----------- |\n| `bool` maps to the JavaScript Boolean type | `var x = new Boolean(false);` |\n| `int` maps to the JavaScript Number type. Internally, Realm Database stores int with 64 bits. 
| `Number('123')` |\n| `float` maps to the JavaScript Number type. Internally, Realm Database stores float with 32 bits. | `Number('123.0')` |\n| `double` maps to the JavaScript Number type. Internally, Realm Database stores double with 64 bits. | `Number('123.0')` |\n| `string` `maps` to the JavaScript String type. | `const string1 = \"A string primitive\";` |\n| `decimal128` for high precision numbers. | |\n| `objectId` maps to BSON `ObjectId` type. | `ObjectId(\"507f1f77bcf86cd799439011\")` |\n| `data` maps to the JavaScript ArrayBuffer type. | `const buffer = new ArrayBuffer(8);` |\n| `date` maps to the JavaScript Date type. | `new Date()` |\n| `list` maps to the JavaScript Array type. You can also specify that a field contains a list of primitive value type by appending ] to the type name. | `let fruits = ['Apple', 'Banana']` |\n| `linkingObjects` is a special type used to define an inverse relationship. | |\n\n## JavaScript - React Native SDK: New Realm supported types\n\nStarting with __Realm JS 10.5.0__\n\n| Type Name | Code Sample |\n| --------- | ----------- |\n| [dictionary used to manage a collection of unique String keys paired with values. | `let johnDoe;` \n`let janeSmith;`\n`realm.write(() => {`\n\u2003`johnDoe = realm.create(\"Person\", {`\n\u2003\u2003`name: \"John Doe\",`\n\u2003\u2003`home: {`\n\u2003\u2003\u2003`windows: 5,`\n\u2003\u2003\u2003`doors: 3,`\n\u2003\u2003\u2003`color: \"red\",`\n\u2003\u2003\u2003`address: \"Summerhill St.\",`\n\u2003\u2003\u2003`price: 400123,`\n\u2003\u2003`},`\n\u2003`});`\n\u2003`janeSmith = realm.create(\"Person\", {`\n\u2003\u2003`name: \"Jane Smith\",`\n\u2003\u2003`home: {`\n\u2003\u2003\u2003`address: \"100 northroad st.\",`\n\u2003\u2003\u2003`yearBuilt: 1990,`\n\u2003\u2003`},`\n\u2003`});`\n`});` |\n| set is based on the JavaScript Set type.\nA Realm Set is a special object that allows you to store a collection of unique values. Realm Sets are based on JavaScript sets, but can only contain values of a single type and can only be modified within a write transaction. | `let characterOne, characterTwo;` \n`realm.write(() => {`\n\u2003`characterOne = realm.create(\"Character\", {`\n\u2003\u2003`_id: new BSON.ObjectId(),`\n\u2003\u2003`name: \"CharacterOne\",`\n\u2003\u2003`inventory: \"elixir\", \"compass\", \"glowing shield\"],`\n\u2003\u2003`levelsCompleted: [4, 9],`\n\u2003`});`\n`characterTwo = realm.create(\"Character\", {`\n\u2003\u2003`_id: new BSON.ObjectId(),`\n\u2003`name: \"CharacterTwo\",`\n\u2003\u2003`inventory: [\"estus flask\", \"gloves\", \"rune\"],`\n\u2003\u2003`levelsCompleted: [1, 2, 5, 24],`\n\u2003`});`\n`});` |\n| [mixed is a property type that can hold different data types.\nThe mixed data type is a realm property type that can hold any valid Realm data type except a collection. You can create collections (lists, sets, and dictionaries) of type mixed, but a mixed itself cannot be a collection. Properties using the mixed data type can also hold null values. 
| `realm.write(() => {` \n\u2003`// create a Dog with a birthDate value of type string`\n\u2003`realm.create(\"Dog\", { name: \"Euler\", birthDate: \"December 25th, 2017\" });`\n\u2003`// create a Dog with a birthDate value of type date`\n`realm.create(\"Dog\", {`\n\u2003\u2003`name: \"Blaise\",`\n\u2003\u2003`birthDate: new Date(\"August 17, 2020\"),`\n\u2003`});`\n\u2003`// create a Dog with a birthDate value of type int`\n\u2003`realm.create(\"Dog\", {`\n\u2003\u2003`name: \"Euclid\",`\n\u2003\u2003`birthDate: 10152021,`\n\u2003`});`\n\u2003`// create a Dog with a birthDate value of type null`\n\u2003\u2003`realm.create(\"Dog\", {`\n\u2003\u2003`name: \"Pythagoras\",`\n\u2003\u2003`birthDate: null,`\n\u2003`});`\n`});` |\n| uuid is a universally unique identifier from Realm.BSON.\nUUID (Universal Unique Identifier) is a 16-byte unique value. You can use UUID as an identifier for objects. UUID is indexable and you can use it as a primary key. | `const { UUID } = Realm.BSON;` \n`const ProfileSchema = {`\n\u2003`name: \"Profile\",`\n\u2003`primaryKey: \"_id\",`\n\u2003`properties: {`\n\u2003\u2003`_id: \"uuid\",`\n\u2003\u2003`name: \"string\",`\n\u2003`},`\n`};`\n`const realm = await Realm.open({`\n\u2003`schema: ProfileSchema],`\n`});`\n`realm.write(() => {`\n\u2003`realm.create(\"Profile\", {`\n\u2003\u2003`name: \"John Doe.\",`\n\u2003\u2003`_id: new UUID(), // create a _id with a randomly generated UUID`\n\u2003`});`\n`realm.create(\"Profile\", {`\n\u2003\u2003`name: \"Tim Doe.\",`\n\u2003\u2003`_id: new UUID(\"882dd631-bc6e-4e0e-a9e8-f07b685fec8c\"), // create a _id with a specific UUID value`\n\u2003`});`\n`});` |\n\n## .NET Field Types \n The complete reference of supported data types for .Net/C# can be found [here.\n\n| Type Name | Code Sample |\n| --- | --- |\n| Realm Database supports the following .NET data types and their nullable counterparts:\nbool\nbyte\nshort\nint\nlong\nfloat\ndouble\ndecimal\nchar\nstring\nbyte]\nDateTimeOffset\nGuid\nIList, where T is any of the supported data types | Regular C# code, nothing special to see here! |\n| ObjectId maps to [BSON `ObjectId` type. | |\n\n## .Net Field Types: New supported types\n\nStarting with __.NET SDK 10.2.0__\n\n| Type Name | Code Sample |\n| --------- | ----------- |\n| Dictionary \nA Realm dictionary is an implementation of IDictionary that has keys of type String and supports values of any Realm type except collections. To define a dictionary, use a getter-only IDictionary property, where TValue is any of the supported types. | `public class Inventory : RealmObject` \n`{`\n\u2003`// The key must be of type string; the value can be`\n\u2003`// of any Realm-supported type, including objects`\n\u2003`// that inherit from RealmObject or EmbeddedObject`\n\u2003`public IDictionary PlantDict { get; }`\n\u2003`public IDictionary BooleansDict { get; }`\n\u2003`// Nullable types are supported in local-only`\n\u2003`// Realms, but not with Sync`\n\u2003`public IDictionary NullableIntDict { get; }`\n\u2003`// For C# types that are implicitly nullable, you can`\n\u2003`// use the Required] attribute to prevent storing null values`\n\u2003`[Required]`\n\u2003`public IDictionary RequiredStringsDict { get; }`\n`}` |\n| [Sets \nA Realm set, like the C# HashSet<>, is an implementation of ICollection<> and IEnumerable<>. It supports values of any Realm type except collections. To define a set, use a getter-only ISet property, where TValue is any of the supported types. 
| `public class Inventory : RealmObject` \n`{`\n\u2003`// A Set can contain any Realm-supported type, including`\n\u2003`// objects that inherit from RealmObject or EmbeddedObject`\n\u2003`public ISet PlantSet { get; }\npublic ISet DoubleSet { get; }`\n\u2003`// Nullable types are supported in local-only`\n\u2003`// Realms, but not with Sync`\n\u2003`public ISet NullableIntsSet { get; }`\n\u2003`// For C# types that are implicitly nullable, you can`\n\u2003`// use the Required] attribute to prevent storing null values`\n\u2003`[Required]`\n\u2003`public ISet RequiredStrings { get; }`\n`}` |\n| [RealmValue \nThe RealmValue data type is a mixed data type, and can represent any other valid Realm data type except a collection. You can create collections (lists, sets and dictionaries) of type RealmValue, but a RealmValue itself cannot be a collection. | `public class MyRealmValueObject : RealmObject` \n`{`\n\u2003`PrimaryKey]`\n\u2003`[MapTo(\"_id\")]`\n\u2003`public Guid Id { get; set; }`\n\u2003`public RealmValue MyValue { get; set; }`\n\u2003`// A nullable RealmValue preoprtrty is *not supported*`\n\u2003`// public RealmValue? NullableRealmValueNotAllowed { get; set; }`\n`}`\n`private void TestRealmValue()`\n`{`\n\u2003`var obj = new MyRealmValueObject();`\n\u2003`// set the value to null:`\n\u2003`obj.MyValue = RealmValue.Null;`\n\u2003`// or an int...`\n\u2003`obj.MyValue = 1;`\n\u2003`// or a string...`\n\u2003`obj.MyValue = \"abc\";`\n\u2003`// Use RealmValueType to check the type:`\n\u2003`if (obj.MyValue.Type == RealmValueType.String)`\n\u2003`{`\n\u2003\u2003`var myString = obj.MyValue.AsString();`\n\u2003`}`\n`}` |\n| [Guid and ObjectId Properties \nMongoDB.Bson.ObjectId is a MongoDB-specific 12-byte unique value, while the built-in .NET type Guid is a 16-byte universally-unique value. Both types are indexable, and either can be used as a Primary Key. | |", "format": "md", "metadata": {"tags": ["Realm"], "pageDescription": "Review of existing and supported Realm Data Types for the different SDKs.", "contentType": "Tutorial"}, "title": "Realm Data Types", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/creating-user-profile-store-game-nodejs-mongodb", "action": "created", "body": "# Creating a User Profile Store for a Game With Node.js and MongoDB\n\nWhen it comes to game development, or at least game development that has an online component to it, you're going to stumble into the territory of user profile stores. These are essentially records for each of your players and these records contain everything from account information to what they've accomplished in the game.\n\nTake the game Plummeting People that some of us at MongoDB (Karen Huaulme, Adrienne Tacke, and Nic Raboy) are building, streaming, and writing about. The idea behind this game, as described in a previous article, is to create a Fall Guys: Ultimate Knockout tribute game with our own spin on it.\n\nSince this game will be an online multiplayer game, each player needs to retain game-play information such as how many times they've won, what costumes they've unlocked, etc. 
This information would exist inside a user profile document.\n\nIn this tutorial, we're going to see how to design a user profile store and then build a backend component using Node.js and MongoDB Realm for interacting with it.\n\n## Designing a Data Model for the Player Documents of a Game\n\nTo get you up to speed, Fall Guys: Ultimate Knockout is a battle royale style game where you compete for first place in several obstacle courses. As you play the game, you get karma points, crowns, and costumes to make the game more interesting.\n\nSince we're working on a tribute game and not a straight up clone, we determined our Plummeting People game should have the following data stored for each player:\n\n- Experience points (XP)\n- Falls\n- Steps taken\n- Collisions with players or objects\n- Losses\n- Wins\n- Pineapples (Currency)\n- Achievements\n- Inventory (Outfits)\n- Plummie Tag (Username)\n\nOf course, there could be much more information or much less information stored per player in any given game. In all honesty, the things we think we should store may evolve as we progress further in the development of the game. However, this is a good starting point.\n\nNow that we have a general idea of what we want to store, it makes sense to convert these items into an appropriate data model for a document within MongoDB.\n\nTake the following, for example:\n\n``` json\n{\n \"_id\": \"4573475234234\",\n \"plummie_tag\": \"nraboy\",\n \"xp\": 298347234,\n \"falls\": 328945783957,\n \"steps\": 438579348573,\n \"collisions\": 2345325,\n \"losses\": 3485,\n \"wins\": 3,\n \"created_at\": 3498534,\n \"updated_at\": 4534534,\n \"lifetime_hours_played\": 5,\n \"pineapples\": 24532,\n \"achievements\": \n {\n \"name\": \"Super Amazing Person\",\n \"timestamp\": 2345435\n }\n ],\n \"inventory\": {\n \"outfits\": [\n {\n \"id\": 34345,\n \"name\": \"The Kilowatt Huaulme\",\n \"timestamp\": 2345345\n }\n ]\n }\n}\n```\n\nNotice that we have the information previously identified. However, the structure is a bit different. In addition, you'll notice extra fields such as `created_at` and other timestamp-related data that could be helpful behind the scenes.\n\nFor achievements, an array of objects might be a good idea because the achievements might change over time, and each player will likely receive more than one during the lifetime of their gaming experience. Likewise, the `inventory` field is an object with arrays of objects because, while the current plan is to have an inventory of player outfits, that could later evolve into consumable items to be used within the game, or anything else that might expand beyond outfits.\n\nOne thing to note about the above user profile document model is that we're trying to store everything about the player in a single document. We're not trying to maintain relationships to other documents unless absolutely necessary. The document for any given player is like a log of their lifetime experience with the game. 
It can very easily evolve over time due to the flexible nature of having a JSON document model in a NoSQL database like MongoDB.\n\nTo get more insight into the design process of our user profile store documents, check out the [on-demand Twitch recording we created.\n\n## Create a Node.js Backend API With MongoDB Atlas to Interact With the User Profile Store\n\nWith a general idea of how we chose to model our player document, we could start developing the backend responsible for doing the create, read, update, and delete (CRUD) spectrum of operations against our database.\n\nSince Express.js is a common, if not the most common, way to work with Node.js API development, it made sense to start there. What comes next will reproduce what we did during the Twitch stream.\n\nFrom the command line, execute the following commands in a new directory:\n\n``` none\nnpm init -y\nnpm install express mongodb body-parser --save\n```\n\nThe above commands will initialize a new **package.json** file within the current working directory and then install Express.js, the MongoDB Node.js driver, and the Body Parser middleware for accepting JSON payloads.\n\nWithin the same directory as the **package.json** file, create a **main.js** file with the following Node.js code:\n\n``` javascript\nconst { MongoClient, ObjectID } = require(\"mongodb\");\nconst Express = require(\"express\");\nconst BodyParser = require('body-parser');\n\nconst server = Express();\n\nserver.use(BodyParser.json());\nserver.use(BodyParser.urlencoded({ extended: true }));\n\nconst client = new MongoClient(process.env\"ATLAS_URI\"]);\n\nvar collection;\n\nserver.post(\"/plummies\", async (request, response, next) => {});\nserver.get(\"/plummies\", async (request, response, next) => {});\nserver.get(\"/plummies/:id\", async (request, response, next) => {});\nserver.put(\"/plummies/:plummie_tag\", async (request, response, next) => {});\n\nserver.listen(\"3000\", async () => {\n try {\n await client.connect();\n collection = client.db(\"plummeting-people\").collection(\"plummies\");\n console.log(\"Listening at :3000...\");\n } catch (e) {\n console.error(e);\n }\n});\n```\n\nThere's quite a bit happening in the above code. Let's break it down!\n\nYou'll first notice the following few lines:\n\n``` javascript\nconst { MongoClient, ObjectID } = require(\"mongodb\");\nconst Express = require(\"express\");\nconst BodyParser = require('body-parser');\n\nconst server = Express();\n\nserver.use(BodyParser.json());\nserver.use(BodyParser.urlencoded({ extended: true }));\n```\n\nWe had previously downloaded the project dependencies, but now we are importing them for use in the project. Once imported, we're initializing Express and are telling it to use the body parser for JSON and URL encoded payloads coming in with POST, PUT, and similar requests. These requests are common when it comes to creating or modifying data.\n\nNext, you'll notice the following lines:\n\n``` javascript\nconst client = new MongoClient(process.env[\"ATLAS_URI\"]);\n\nvar collection;\n```\n\nThe `client` in this example assumes that your MongoDB Atlas connection string exists in your environment variables. 
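For example, you could export it in your shell before starting the server (the `ATLAS_URI` name matches what the code above reads from `process.env`):\n\n``` none\nexport ATLAS_URI=\"your-atlas-connection-string\"\nnode main.js\n```\n\n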
To be clear, the connection string would look something like this:\n\n``` none\nmongodb+srv://:@plummeting-us-east-1.hrrxc.mongodb.net/\n```\n\nYes, you could hard-code that value, but because the connection string will contain your username and password, it makes sense to use an environment variable or configuration file for security reasons.\n\nThe `collection` variable is being defined because it will have our collection handle for use within each of our endpoint functions.\n\nSpeaking of endpoint functions, we're going to skip those for a moment. Instead, let's look at serving our API:\n\n``` javascript\nserver.listen(\"3000\", async () => {\n try {\n await client.connect();\n collection = client.db(\"plummeting-people\").collection(\"plummies\");\n console.log(\"Listening at :3000...\");\n } catch (e) {\n console.error(e);\n }\n});\n```\n\nIn the above code we are serving our API on port 3000. When the server starts, we establish a connection to our MongoDB Atlas cluster. Once connected, we make use of the `plummeting-people` database and the `plummies` collection. In this circumstance, we're calling each player a **plummie**, hence the name of our user profile store collection. Neither the database or collection need to exist prior to starting the application.\n\nTime to focus on those endpoint functions.\n\nTo create a player \u2014 or plummie, in this case \u2014 we need to take a look at the POST endpoint:\n\n``` javascript\nserver.post(\"/plummies\", async (request, response, next) => {\n try {\n let result = await collection.insertOne(request.body);\n response.send(result);\n } catch (e) {\n response.status(500).send({ message: e.message });\n }\n});\n```\n\nThe above endpoint expects a JSON payload. Ideally, it should match the data model that we had defined earlier in the tutorial, but we're not doing any data validation, so anything at this point would work. With the JSON payload an `insertOne` operation is done and that payload is turned into a user profile. The result of the create is sent back to the user.\n\nIf you want to handle the validation of data, check out database level [schema validation or using a client facing validation library like Joi.\n\nWith the user profile document created, you may need to fetch it at some point. To do this, take a look at the GET endpoint:\n\n``` javascript\nserver.get(\"/plummies\", async (request, response, next) => {\n try {\n let result = await collection.find({}).toArray();\n response.send(result);\n } catch (e) {\n response.status(500).send({ message: e.message });\n }\n});\n```\n\nIn the above example, all documents in the collection are returned because there is no filter specified. The above endpoint is useful if you want to find all user profiles, maybe for reporting purposes. If you want to find a specific document, you might do something like this:\n\n``` javascript\nserver.get(\"/plummies/:plummie_tag\", async (request, response, next) => {\n try {\n let result = await collection.findOne({ \"plummie_tag\": request.params.plummie_tag });\n response.send(result);\n } catch (e) {\n response.status(500).send({ message: e.message });\n }\n});\n```\n\nThe above endpoint takes a `plummie_tag`, which we're expecting to be a unique value. As long as the value exists on the `plummie_tag` field for a document, the profile will be returned.\n\nEven though there isn't a game to play yet, we know that we're going to need to update these player profiles. Maybe the `xp` increased, or new `achievements` were gained. 
Whatever the reason, a PUT request is necessary and it might look like this:\n\n``` javascript\nserver.put(\"/plummies/:plummie_tag\", async (request, response, next) => {\n try {\n let result = await collection.updateOne(\n { \"plummie_tag\": request.params.plummie_tag },\n { \"$set\": request.body }\n );\n response.send(result);\n } catch (e) {\n response.status(500).send({ message: e.message });\n }\n});\n```\n\nIn the above request, we are expecting a `plummie_tag` to be passed to represent the document we want to update. We are also expecting a payload to be sent with the data we want to update. Like with the `insertOne`, the `updateOne` is experiencing no prior validation. Using the `plummie_tag` we can filter for a document to change and then we can use the `$set` operator with a selection of changes to make.\n\nThe above endpoint will update any field that was passed in the payload. If the field doesn't exist, it will be created.\n\nOne might argue that user profiles can only be created or changed, but never removed. It is up to you whether or not the profile should have an `active` field or just remove it when requested. For our game, documents will never be deleted, but if you wanted to, you could do the following:\n\n``` javascript\nserver.delete(\"/plummies/:plummie_tag\", async (request, response, next) => {\n try {\n let result = await collection.deleteOne({ \"plummie_tag\": request.params.plummie_tag });\n response.send(result);\n } catch (e) {\n response.status(500).send({ message: e.message });\n }\n});\n```\n\nThe above code will take a `plummie_tag` from the game and delete any documents that match it in the filter.\n\nIt should be reiterated that these endpoints are expected to be called from within the game. So when you're playing the game and you create your player, it should be stored through the API.\n\n## Realm Webhook Functions: An Alternative Method for Interacting With the User Profile Store:\n\nWhile Node.js with Express.js might be popular, it isn't the only way to build a user profile store API. In fact, it might not even be the easiest way to get the job done.\n\nDuring the Twitch stream, we demonstrated how to offload the management of Express and Node.js to Realm.\n\nAs part of the MongoDB data platform, Realm offers many things Plummeting People can take advantage of as we build out this game, including triggers, functions, authentication, data synchronization, and static hosting. We very quickly showed how to re-create these APIs through Realm's HTTP Service from right inside of the Atlas UI.\n\nTo create our GET, POST, and DELETE endpoints, we first had to create a Realm application. Return to your Atlas UI and click **Realm** at the top. Then click the green **Start a New Realm App** button.\n\nWe named our Realm application **PlummetingPeople** and linked to the Atlas cluster holding the player data. All other default settings are fine:\n\nCongrats! Realm Application Creation Achievment Unlocked! \ud83d\udc4f\n\nNow click the **3rd Party Services** menu on the left and then **Add a Service**. Select the HTTP service. We named ours **RealmOfPlummies**:\n\nClick the green **Add a Service** button, and you'll be directed to **Add Incoming Webhook**.\n\nLet's re-create our GET endpoint first. Once in the **Settings** tab, name your first webhook **getPlummies**. Enable **Respond with Result** set the HTTP Method to **GET**. 
To make things simple, let's just run the webhook as the System and skip validation with **No Additional Authorization.** Make sure to click the **Review and Deploy** button at the top along the way.\n\nIn this service function editor, replace the example code with the following:\n\n``` javascript\nexports = async function(payload, response) {\n\n // get a reference to the plummies collection\n const collection = context.services.get(\"mongodb-atlas\").db(\"plummeting-people\").collection(\"plummies\");\n\n var plummies = await collection.find({}).toArray();\n\n return plummies;\n};\n```\n\nIn the above code, note that MongoDB Realm interacts with our `plummies` collection through the global `context` variable. In the service function, we use that context variable to access all of our `plummies.` We can also add a filter to find a specific document or documents, just as we did in the Express + Node.js endpoint above.\n\nSwitch to the **Settings** tab of `getPlummies`, and you'll notice a Webhook URL has been generated.\n\nWe can test this endpoint out by executing it in our browser. However, if you have tools like Postman installed, feel free to try that as well. Click the **COPY** button and paste the URL into your browser.\n\nIf you receive an output showing your plummies, you have successfully created an API endpoint in Realm! Very cool. \ud83d\udcaa\ud83d\ude0e\n\nNow, let's step through that process again to create an endpoint to add new plummies to our game. In the same **RealmOfPlummies** service, add another incoming webhook. Name it `addPlummie` and set it as a **POST**. Switch to the function editor and replace the example code with the following:\n\n``` javascript\nexports = function(payload, response) {\n\n console.log(\"Adding Plummie...\");\n const plummies = context.services.get(\"mongodb-atlas\").db(\"plummeting-people\").collection(\"plummies\");\n\n // parse the body to get the new plummie\n const plummie = EJSON.parse(payload.body.text());\n\n return plummies.insertOne(plummie);\n\n};\n```\n\nIf you go back to Settings and grab the Webhook URL, you can now use this to POST new plummies to our Atlas **plummeting-people** database.\n\nAnd finally, the last two endpoints to `DELETE` and to `UPDATE` our players.\n\nName a new incoming webhook `removePlummie` and set as a POST. 
The following code will remove the `plummie` from our user profile store:\n\n``` javascript\nexports = async function(payload) {\n console.log(\"Removing plummie...\");\n\n const ptag = EJSON.parse(payload.body.text());\n\n let plummies = context.services.get(\"mongodb-atlas\").db(\"plummeting-people\").collection(\"plummies_kwh\");\n\n return plummies.deleteOne({\"plummie_tag\": ptag});\n\n};\n```\n\nThe final new incoming webhook `updatePlummie` and set as a PUT:\n\n``` javascript\nexports = async function(payload, response) {\n\n console.log(\"Updating Plummie...\");\n var result = {};\n\n if (payload.body) {\n\n const plummies = context.services.get(\"mongodb-atlas\").db(\"plummeting-people\").collection(\"plummies_kwh\");\n\n const ptag = payload.query.plummie_tag;\n console.log(\"plummie_tag : \" + ptag);\n\n // parse the body to get the new plummie update\n var updatedPlummie = EJSON.parse(payload.body.text());\n console.log(JSON.stringify(updatedPlummie));\n\n return plummies.updateOne(\n {\"plummie_tag\": ptag},\n {\"$set\": updatedPlummie}\n );\n }\n\n return ({ok:true});\n};\n```\n\nWith that, we have another option to handle all four endpoints allowing complete CRUD operations to our `plummie` data - without needing to spin-up and manage a Node.js and Express backend.\n\n## Conclusion\n\nYou just saw some examples of how to design and create a user profile store for your next game. The user profile store used in this tutorial is an active part of a game that some of us at MongoDB (Karen Huaulme, Adrienne Tacke, and Nic Raboy) are building. It is up to you whether or not you want develop your own backend using the MongoDB Node.js driver or take advantage of MongoDB Realm with webhook functions.\n\nThis particular tutorial is part of a series around developing a Fall Guys: Ultimate Knockout tribute game using Unity and MongoDB.", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB", "Node.js"], "pageDescription": "Learn how to create a user profile store for a game using MongoDB, Node.js, and Realm.", "contentType": "Tutorial"}, "title": "Creating a User Profile Store for a Game With Node.js and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/adl-sql-integration", "action": "created", "body": "# Atlas Data Lake SQL Integration to Form Powerful Data Interactions\n\n>As of June 2022, the functionality previously known as Atlas Data Lake is now named Atlas Data Federation. Atlas Data Federation\u2019s functionality is unchanged and you can learn more about it here. Atlas Data Lake will remain in the Atlas Platform, with newly introduced functionality that you can learn about here.\n\nModern platforms have a wide variety of data sources. As businesses grow, they have to constantly evolve their data management and have sophisticated, scalable, and convenient tools to analyse data from all sources to produce business insights.\n\nMongoDB has developed a rich and powerful query language, including a very robust aggregation framework. \n\nThese were mainly done to optimize the way developers work with data and provide great tools to manipulate and query MongoDB documents.\n\nHaving said that, many developers, analysts, and tools still prefer the legacy SQL language to interact with the data sources. SQL has a strong foundation around joining data as this was a core concept of the legacy relational databases normalization model. 
\n\nThis makes SQL have a convenient syntax when it comes to describing joins. \n\nProviding MongoDB users the ability to leverage SQL to analyse multi-source documents while having a flexible schema and data store is a compelling solution for businesses.\n\n## Data Sources and the Challenge\n\nConsider a requirement to create a single view to analyze data from operative different systems. For example:\n\n- Customer data is managed in the user administration systems (REST API).\n- Financial data is managed in a financial cluster (Atlas cluster).\n- End-to-end transactions are stored in files on cold storage gathered from various external providers (cloud object storage - Amazon S3 or Microsoft Azure Blob Storage).\n\nHow can we combine and best join this data? \n\nMongoDB Atlas Data Lake connects multiple data sources using the different source types. Once the data sources are mapped, we can create collections consuming this data. Those collections can have SQL schema generated, allowing us to perform sophisticated joins and do JDBC queries from various BI tools.\n\nIn this article, we will showcase the extreme power hidden in the Data Lake SQL interface.\n\n## Setting Up My Data Lake\nIn the following view, I have created three main data sources: \n- S3 Transaction Store (S3 sample data).\n- Accounts from my Atlas clusters (Sample data sample_analytics.accounts).\n- Customer data from a secure https source.\n\nI mapped the stores into three collections under `FinTech` database:\n\n- `Transactions`\n- `Accounts`\n- `CustomerDL`\n\nNow, I can see them through a data lake connection as MongoDB collections.\n\nLet's grab our data lake connection string from the Atlas UI.\n\nThis connection string can be used with our BI tools or client applications to run SQL queries.\n\n## Connecting and Using $sql\n\nOnce we connect to the data lake via a mongosh shell, we can generate a SQL schema for our collections. 
This is required for the JDBC or $sql operators to recognise collections as SQL \u201ctables.\u201d\n\n#### Generate SQL schema for each collection:\n```js\nuse admin;\ndb.runCommand({sqlGenerateSchema: 1, sampleNamespaces: \"FinTech.customersDL\"], sampleSize: 1000, setSchemas: true})\n{\n ok: 1,\n schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]\n}\ndb.runCommand({sqlGenerateSchema: 1, sampleNamespaces: [\"FinTech.accounts\"], sampleSize: 1000, setSchemas: true})\n{\n ok: 1,\n schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]\n}\ndb.runCommand({sqlGenerateSchema: 1, sampleNamespaces: [\"FinTech.transactions\"], sampleSize: 1000, setSchemas: true})\n{\n ok: 1,\n schemas: [ { databaseName: 'FinTech', namespaces: [Array] } ]\n}\n```\n#### Running SQL queries and joins using $sql stage:\n```js\nuse FinTech;\ndb.aggregate([{\n $sql: {\n statement: \"SELECT a.* , t.transaction_count FROM accounts a, transactions t where a.account_id = t.account_id SORT BY t.transaction_count DESC limit 2\",\n format: \"jdbc\",\n formatVersion: 2,\n dialect: \"mysql\",\n }\n}])\n```\n\nThe above query will prompt account information and the transaction counts of each account.\n\n## Connecting Via JDBC\n\nLet\u2019s connect a powerful BI tool like Tableau with the JDBC driver.\n\n[Download JDBC Driver.\n\nSetting `connection.properties` file.\n```\nuser=root\npassword=*******\nauthSource=admin\ndatabase=FinTech\nssl=true\ncompressors=zlib\n```\n\n#### Connect to Tableau\n\nClick the \u201cOther Databases (JDBC)\u201d connector and load the connection.properties file pointing to our data lake URI.\n\nOnce the data is read successfully, the collections will appear on the right side.\n\n#### Setting and Joining Data\n\nWe can drag and drop collections from different sources and link them together.\n\nIn my case, I connected `Transactions` => `Accounts` based on the `Account Id` field, and accounts and users based on the `Account Id` to `Accounts` field.\n\nIn this view, we will see a unified table for all accounts with usernames and their transactions start quarter. \n\n## Summary\n\nMongoDB has all the tools to read, transform, and analyse your documents for almost any use-case. \n\nWhether your data is in an Atlas operational cluster, in a service, or on cold storage like cloud object storage, Atlas Data Lake will provide you with the ability to join the data in real time. With the option to use powerful join SQL syntax and SQL-based BI tools like Tableau, you can get value out of the data in no time.\n\nTry Atlas Data Lake with your BI tools and SQL today.\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how new SQL-based syntax can power your data lake insights in minutes. Integrate this capability with powerful BI tools like Tableau to get immediate value out of your data. ", "contentType": "Article"}, "title": "Atlas Data Lake SQL Integration to Form Powerful Data Interactions", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/swift/realm-minesweeper", "action": "created", "body": "# Building a Collaborative iOS Minesweeper Game with Realm\n\n## Introduction\n\nI wanted to build an app that we could use at events to demonstrate Realm Sync. It needed to be fun to interact with, and so a multiplayer game made sense. Tic-tac-toe is too simple to get excited about. I'm not a game developer and so _Call Of Duty_ wasn't an option. Then I remembered Microsoft's Minesweeper. 
\n\nMinesweeper was a Windows fixture from 1990 until Windows 8 relegated it to the app store in 2012. It was a single-player game, but it struck me as something that could be a lot of fun to play with others. Some family beta-testing of my first version while waiting for a ferry proved that it did get people to interact with each other (even if most interactions involved shouting, \"Which of you muppets clicked on that mine?!\").\n\nYou can download the back end and iOS apps from the Realm-Sweeper repo, and get it up and running in a few minutes if you want to play with it.\n\nThis article steps you through some of the key aspects of setting up the backend Realm app, as well as the iOS code. Hopefully, you'll see how simple it is and try building something for yourself. If anyone's looking for ideas, then Sokoban could be interesting.\n\n## Prerequisites\n\n- Realm-Cocoa 10.20.1+\n- iOS 15+\n\n## The Minesweeper game\n\nThe gameplay for Minesweeper is very simple.\n\nYou're presented with a grid of gray tiles. You tap on a tile to expose what's beneath. If you expose a mine, game over. If there isn't a mine, then you'll be rewarded with a hint as to how many mines are adjacent to that tile. If you deduce (or guess) that a tile is covering a mine, then you can plant a flag to record that.\n\nYou win the game when you correctly flag every mine and expose what's behind every non-mined tile.\n\n### What Realm-Sweeper adds\n\nMinesweeper wasn't designed for touchscreen devices; you had to use a physical mouse. Realm-Sweeper brings the game into the 21st century by adding touch controls. Tap a tile to reveal what's beneath; tap and hold to plant a flag.\n\nMinesweeper was a single-player game. All people who sign into Realm-Sweeper with the same user ID get to collaborate on the same game in real time.\n\nYou also get to configure the size of the grid and how many mines you'd like to hide.\n\n## The data model\n\nI decided to go for a simple data model that would put Realm sync to the test. \n\nEach game is a single document/object that contains meta data (score, number of rows/columns, etc.) together with the grid of tiles (the board):\n\nThis means that even a modestly sized grid (20x20 tiles) results in a `Game` document/object with more than 2,000 attributes. \n\nEvery time you tap on a tile, the `Game` object has to be synced with all other players. Those players are also tapping on tiles, and those changes have to be synced too. If you tap on a tile which isn't adjacent to any mines, then the app will recursively ripple through exposing similar, connected tiles. That's a lot of near-simultaneous changes being made to the same object from different devices\u2014a great test of Realm's automatic conflict resolution!\n\n## The backend Realm app\n\nIf you don't want to set this up yourself, simply follow the instructions from the repo to import the app.\n\nIf you opt to build the backend app yourself, there are only two things to configure once you create the empty Realm app:\n\n1. Enable email/password authentication. I kept it simple by opting to auto-confirm new users and sticking with the default password-reset function (which does nothing).\n2. Enable partitioned Realm sync. 
Set the partition key to `partition` and enable developer mode (so that the schema will be created automatically when the iOS app syncs for the first time).\n\nThe `partition` field will be set to the username\u2014allowing anyone who connects as that user to sync all of their games.\n\nYou can also add sync rules to ensure that a user can only sync their own games (in case someone hacks the mobile app). I always prefer using Realm functions for permissions. You can add this for both the read and write rules:\n\n```json\n{\n \"%%true\": {\n \"%function\": {\n \"arguments\": \n \"%%partition\"\n ],\n \"name\": \"canAccessPartition\"\n }\n }\n}\n```\n\nThe `canAccessPartition` function is:\n\n```js\nexports = function(partition) {\n const user = context.user.data.email;\n return partition === user;\n};\n```\n\n## The iOS app\n\nI'd suggest starting by downloading, configuring, and running the app\u2014just follow the [instructions from the repo. That way, you can get a feel for how it works.\n\nThis isn't intended to be a full tutorial covering every line of code in the app. Instead, I'll point out some key components. \n\nAs always with Realm and MongoDB, it all starts with the data\u2026\n\n### Model\n\nThere's a single top-level Realm Object\u2014`Game`:\n\n```swift\nclass Game: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var numRows = 0\n @Persisted var numCols = 0\n @Persisted var score = 0\n @Persisted var startTime: Date? = Date()\n @Persisted var latestMoveTime: Date?\n @Persisted var secondsTakenToComplete: Int?\n @Persisted var board: Board?\n @Persisted var gameStatus = GameStatus.notStarted\n @Persisted var winningTimeInSeconds: Int?\n \u2026\n}\n```\n\nMost of the fields are pretty obvious.The most interesting is `board`, which contains the grid of tiles:\n\n```swift\nclass Board: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var rows = List()\n @Persisted var startingNumberOfMines = 0\n ... \n}\n```\n\n`row` is a list of `Cells`:\n\n```swift\nclass Row: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var cells = List()\n ...\n}\n\nclass Cell: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var isMine = false\n @Persisted var numMineNeigbours = 0\n @Persisted var isExposed = false\n @Persisted var isFlagged = false\n @Persisted var hasExploded = false\n ...\n}\n```\n\nThe model is also where the ~~business~~ game logic is implemented. This means that the views can focus on the UI. For example, `Game` includes a computed variable to check whether the game has been solved:\n\n```swift\nvar hasWon: Bool {\n guard let board = board else { return false }\n if board.remainingMines != 0 { return false }\n\n var result = true\n\n board.rows.forEach() { row in\n row.cells.forEach() { cell in\n if !cell.isExposed && !cell.isFlagged {\n result = false\n return\n }\n }\n if !result { return }\n }\n return result\n}\n```\n\n### Views\n\nAs with any SwiftUI app, the UI is built up of a hierarchy of many views.\n\nHere's a quick summary of the views that make up Real-Sweeper:\n\n**`ContentView`** is the top-level view. When the app first runs, it will show the `LoginView`. Once the user has logged in, it shows `GameListView` instead. 
It's here that we set the Realm Sync partition (to be the `username` of the user that's just logged in):\n\n```swift\nGameListView()\n .environment(\\.realmConfiguration,\n realmApp.currentUser!.configuration(partitionValue: username))\n```\n\n`ContentView` also includes the `LogoutButton` view.\n\n**`LoginView`** allows the user to provide a username and password:\n\nThose credentials are then used to register or log into the backend Realm app:\n\n```swift\nfunc userAction() {\n Task {\n do {\n if newUser {\n try await realmApp.emailPasswordAuth.registerUser(\n email: email, password: password)\n }\n let _ = try await realmApp.login(\n credentials: .emailPassword(email: email, password: password))\n username = email\n } catch {\n errorMessage = error.localizedDescription\n }\n }\n}\n```\n\n**`GameListView`** reads the list of this user's existing games.\n\n```swift\n@ObservedResults(Game.self, \n sortDescriptor: SortDescriptor(keyPath: \"startTime\", ascending: false)) var games\n```\n\nIt displays each of the games within a `GameSummaryView`. If you tap one of the games, then you jump to a `GameView` for that game:\n\n```swift\nNavigationLink(destination: GameView(game: game)) {\n GameSummaryView(game: game)\n}\n```\n\nTap the settings button and you're sent to `SettingsView`.\n\nTap the \"New Game\" button and a new `Game` object is created and then stored in Realm by appending it to the `games` live query:\n\n```swift\nprivate func createGame() {\n numMines = min(numMines, numRows * numColumns)\n game = Game(rows: numRows, cols: numColumns, mines: numMines)\n if let game = game {\n $games.append(game)\n }\n startGame = true\n}\n```\n\n**`SettingsView`** lets the user choose the number of tiles and mines to use:\n\nIf the user uses multiple devices to play the game (e.g., an iPhone and an iPad), then they may want different-sized boards (taking advantage of the extra screen space on the iPad). Because of that, the view uses the device's `UserDefaults` to locally persist the settings rather than storing them in a synced realm:\n\n```swift\n@AppStorage(\"numRows\") var numRows = 10\n@AppStorage(\"numColumns\") var numColumns = 10\n@AppStorage(\"numMines\") var numMines = 15\n```\n\n**`GameSummaryView`** displays a summary of one of the user's current or past games.\n\n**`GameView`** shows the latest stats for the current game at the top of the screen: \n \n\nIt uses the `LEDCounter` and `StatusButton` views for the summary.\n\nBelow the summary, it displays the `BoardView` for the game.\n\n**`LEDCounter`** displays the provided number as three digits using a retro LED font:\n\n**`StatusButton`** uses a `ZStack` to display the symbol for the game's status on top of a tile image:\n\nThe view uses SwiftUI's `GeometryReader` function to discover how much space is available so that it can select an appropriate font size for the symbol:\n\n```swift\nGeometryReader { geo in\n Text(status)\n .font(.system(size: geo.size.height * 0.7))\n}\n```\n\n**`BoardView`** displays the game's grid of tiles:\n\nEach of the tiles is represented by a `CellView` view.\n\nWhen a tile is tapped, this view exposes its contents:\n\n```swift\n.onTapGesture() {\n expose(row: row, col: col)\n}\n```\n\nOn a tap-and-hold, a flag is dropped:\n\n```swift\n.onLongPressGesture(minimumDuration: 0.1) {\n flag(row: row, col: col)\n}\n```\n\nWhen my family tested the first version of the app, they were frustrated that they couldn't tell whether they'd held long enough for the flag to be dropped. 
This was an easy mistake to make as their finger was hiding the tile at the time\u2014an example of where testing with a mouse and simulator wasn't a substitute for using real devices. It was especially frustrating as getting it wrong meant that you revealed a mine and immediately lost the game. Fortunately, this is easy to fix using iOS's haptic feedback:\n\n```swift\nfunc hapticFeedback(_ isSuccess: Bool) {\n let generator = UINotificationFeedbackGenerator()\n generator.notificationOccurred(isSuccess ? .success : .error)\n}\n```\n\nYou now feel a buzz when the flag has been dropped.\n\n**`CellView`** displays an individual tile: \n\nWhat's displayed depends on the contents of the `Cell` and the state of the game. It uses four further views to display different types of tile: `FlagView`, `MineCountView`, `MineView`, and `TileView`.\n\n**`FlagView`**\n\n**`MineCountView`**\n\n**`MineView`**\n\n**`TileView`**\n\n## Conclusion\n\nRealm-Sweeper gives a real feel for how quickly Realm is able to synchronize data over the internet.\n\nI intentionally avoided optimizing how I updated the game data in Realm. When you see a single click exposing dozens of tiles, each cell change is an update to the `Game` object that needs to be synced.\n\nNote that both instances of the game are running in iPhone simulators on an overworked Macbook in England. The Realm backend app is running in the US\u2014that's a 12,000 km/7,500 mile round trip for each sync.\n\nI took this approach as I wanted to demonstrate the performance of Realm synchronization. If an app like this became super-popular with millions of users, then it would put a lot of extra strain on the backend Realm app.\n\nAn obvious optimization would be to condense all of the tile changes from a single tap into a single write to the Realm object. If you're interested in trying that out, just fork the repo and make the changes. If you do implement the optimization, then please create a pull request. (I'd probably add it as an option within the settings so that the \"slow\" mode is still an option.)\n\nGot questions? Ask them in our Community forum.", "format": "md", "metadata": {"tags": ["Swift", "Realm", "iOS"], "pageDescription": "Using MongoDB Realm Sync to build an iOS multi-player version of the classic Windows game", "contentType": "Tutorial"}, "title": "Building a Collaborative iOS Minesweeper Game with Realm", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-java-to-kotlin-sdk", "action": "created", "body": "# How to migrate from Realm Java SDK to Realm Kotlin SDK\n\n> This article is targeted to existing Realm developers who want to understand how to migrate to Realm Kotlin SDK.\n\n## Introduction\n\nAndroid has changed a lot in recent years notably after the Kotlin language became a first-class \ncitizen, so does the Realm SDK. Realm has recently moved its much-awaited Kotlin SDK to beta \nenabling developers to use Realm more fluently with Kotlin and opening doors to the world of Kotlin \nMultiplatform.\n\nLet's understand the changes required when you migrate from Java to Kotlin SDK starting from setup\ntill its usage.\n\n## Changes in setup\n\nThe new Realm Kotlin SDK is based on Kotlin Multiplatform architecture which enables you to have one\ncommon module for all your data needs for all platforms. 
But this doesn't mean to use the new SDK\nyou would have to convert your existing Android app to KMM app right away, you can do that later.\n\nLet's understand the changes needed in the gradle file to use Realm Kotlin SDK, by comparing the\nprevious implementation with the new one.\n\nIn project level `build.gradle`\n\nEarlier with Java SDK\n\n```kotlin\n buildscript {\n\n repositories {\n google()\n jcenter()\n }\n dependencies {\n classpath \"com.android.tools.build:gradle:4.1.3\"\n classpath \"org.jetbrains.kotlin:kotlin-gradle-plugin:1.4.31\"\n\n // Realm Plugin \n classpath \"io.realm:realm-gradle-plugin:10.10.1\"\n\n // NOTE: Do not place your application dependencies here; they belong\n // in the individual module build.gradle files\n }\n}\n```\n\nWith Kotlin SDK, we can **delete the Realm plugin** from `dependencies`\n\n```kotlin\n buildscript {\n\n repositories {\n google()\n jcenter()\n }\n dependencies {\n classpath \"com.android.tools.build:gradle:4.1.3\"\n classpath \"org.jetbrains.kotlin:kotlin-gradle-plugin:1.4.31\"\n\n // NOTE: Do not place your application dependencies here; they belong\n // in the individual module build.gradle files\n }\n}\n```\n\nIn the module-level `build.gradle`\n\nWith Java SDK, we \n\n1. Enabled Realm Plugin\n2. Enabled Sync, if applicable\n\n```groovy\n\nplugins {\n id 'com.android.application'\n id 'kotlin-android'\n id 'kotlin-kapt'\n id 'realm-android'\n}\n```\n\n```groovy\n android {\n\n ... ....\n\n realm {\n syncEnabled = true\n }\n}\n```\n\nWith Kotlin SDK,\n\n1. Replace ``id 'realm-android'`` with ``id(\"io.realm.kotlin\") version \"0.10.0\"``\n\n```groovy\n plugins {\n id 'com.android.application'\n id 'kotlin-android'\n id 'kotlin-kapt'\n id(\"io.realm.kotlin\") version \"0.10.0\"\n}\n```\n\n2. Remove the Realm block under android tag\n\n```groovy\n android {\n ... ....\n\n }\n```\n\n3. Add Realm dependency under `dependencies` tag\n\n```groovy\n dependencies {\n\n implementation(\"io.realm.kotlin:library-sync:0.10.0\")\n\n }\n```\n\n> If you are using only Realm local SDK, then you can add\n> ```groovy\n> dependencies {\n> implementation(\"io.realm.kotlin:library-base:0.10.0\")\n> }\n>```\n\nWith these changes, our Android app is ready to use Kotlin SDK.\n\n## Changes in implementation\n\n### Realm Initialization\n\nTraditionally before using Realm for querying information in our project, we had to initialize and\nset up few basic properties like name, version with sync config for database, let's update them \nas well.\n\nSteps with JAVA SDK :\n\n1. Call `Realm.init()`\n2. Setup Realm DB properties like name, version, migration rules etc using `RealmConfiguration`.\n3. Setup logging\n4. Configure Realm Sync\n\nWith Kotlin SDK :\n\n1. Call `Realm.init()` _Is not needed anymore_.\n2. Setup Realm DB properties like db name, version, migration rules etc. using `RealmConfiguration`-\n _This remains the same apart from a few minor changes_.\n3. Setup logging - _This is moved to `RealmConfiguration`_\n4. 
Configure Realm Sync - _No changes_\n\n### Changes to Models\n\nNo changes are required in model classes, except you might have to remove a few currently \nunsupported annotations like `@RealmClass` which is used for the embedded object.\n\n> Note: You can remove `Open` keyword against `class` which was mandatory for using Java SDK in\n> Kotlin.\n\n### Changes to querying\n\nThe most exciting part starts from here \ud83d\ude0e(IMO).\n\nTraditionally Realm SDK has been on the top of the latest programming trends like Reactive\nprogramming (Rx), LiveData and many more but with the technological shift in Android programming\nlanguage from Java to Kotlin, developers were not able to fully utilize the power of the language \nwith Realm as underlying SDK was still in Java, few of the notable were support for the Coroutines, \nKotlin Flow, etc.\n\nBut with the Kotlin SDK that all has changed and further led to the reduction of boiler code. \nLet's understand these by examples.\n\nExample 1: As a user, I would like to register my visit as soon as I open the app or screen.\n\nSteps to complete this operation would be\n\n1. Authenticate with Realm SDK.\n2. Based on the user information, create a sync config with the partition key.\n3. Open Realm instance.\n4. Start a Realm Transaction.\n5. Query for current user visit count and based on that add/update count.\n\nWith JAVA SDK:\n\n```kotlin\nprivate fun updateData() {\n _isLoading.postValue(true)\n\n fun onUserSuccess(user: User) {\n val config = SyncConfiguration.Builder(user, user.id).build()\n\n Realm.getInstanceAsync(config, object : Realm.Callback() {\n override fun onSuccess(realm: Realm) {\n realm.executeTransactionAsync {\n var visitInfo = it.where(VisitInfo::class.java).findFirst()\n visitInfo = visitInfo?.updateCount() ?: VisitInfo().apply {\n partition = user.id\n }.updateCount()\n\n it.copyToRealmOrUpdate(visitInfo).apply {\n _visitInfo.postValue(it.copyFromRealm(this))\n }\n _isLoading.postValue(false)\n }\n }\n\n override fun onError(exception: Throwable) {\n super.onError(exception)\n // some error handling \n _isLoading.postValue(false)\n }\n })\n }\n\n realmApp.loginAsync(Credentials.anonymous()) {\n if (it.isSuccess) {\n onUserSuccess(it.get())\n } else {\n _isLoading.postValue(false)\n }\n }\n}\n```\n\nWith Kotlin SDK:\n\n```kotlin\nprivate fun updateData() {\n viewModelScope.launch(Dispatchers.IO) {\n _isLoading.postValue(true)\n val user = realmApp.login(Credentials.anonymous())\n val config = SyncConfiguration.Builder(\n user = user,\n partitionValue = user.identity,\n schema = setOf(VisitInfo::class)\n ).build()\n\n val realm = Realm.open(configuration = config)\n realm.write {\n val visitInfo = this.query().first().find()\n copyToRealm(visitInfo?.updateCount()\n ?: VisitInfo().apply {\n partition = user.identity\n visitCount = 1\n })\n }\n _isLoading.postValue(false)\n }\n}\n```\n\nUpon quick comparing, you would notice that lines of code have decreased by 30%, and we are using\ncoroutines for doing the async call, which is the natural way of doing asynchronous programming in\nKotlin. Let's check this with one more example.\n\nExample 2: As user, I should be notified immediately about any change in user visit info. 
This is\nmore like observing the change to visit count.\n\nWith Java SDK:\n\n```kotlin\n fun onRefreshCount() {\n _isLoading.postValue(true)\n\n fun getUpdatedCount(realm: Realm) {\n val visitInfo = realm.where(VisitInfo::class.java).findFirst()\n visitInfo?.let {\n _visitInfo.value = it\n _isLoading.postValue(false)\n }\n }\n\n fun onUserSuccess(user: User) {\n val config = SyncConfiguration.Builder(user, user.id).build()\n\n Realm.getInstanceAsync(config, object : Realm.Callback() {\n override fun onSuccess(realm: Realm) {\n getUpdatedCount(realm)\n }\n\n override fun onError(exception: Throwable) {\n super.onError(exception)\n //TODO: Implementation pending\n _isLoading.postValue(false)\n }\n })\n }\n\n realmApp.loginAsync(Credentials.anonymous()) {\n if (it.isSuccess) {\n onUserSuccess(it.get())\n } else {\n _isLoading.postValue(false)\n }\n }\n}\n```\n\nWith Kotlin SDK :\n\n```kotlin\n\nfun onRefreshCount(): Flow {\n\n val user = runBlocking { realmApp.login(Credentials.anonymous()) }\n val config = SyncConfiguration.Builder(\n user = user,\n partitionValue = user.identity,\n schema = setOf(VisitInfo::class)\n ).build()\n\n val realm = Realm.open(config)\n return realm.query().first().asFlow()\n}\n\n```\n\nAgain upon quick comparing you would notice that lines of code have decreased drastically, by more\nthan **60%**, and apart coroutines for doing async call, we are using _Kotlin Flow_ to observe the\nvalue changes.\n\nWith this, as mentioned earlier, we are further able to reduce our boilerplate code,\nno callback hell and writing code is more natural now.\n\n## Other major changes\n\nApart from Realm Kotlin SDK being written in Kotlin language, it is fundamentally little different \nfrom the JAVA SDK in a few ways:\n\n- **Frozen by default**: All objects are now frozen. Unlike live objects, frozen objects do not\n automatically update after the database writes. You can still access live objects within a write\n transaction, but passing a live object out of a write transaction freezes the object.\n- **Thread-safety**: All realm instances, objects, query results, and collections can now be\n transferred across threads.\n- **Singleton**: You now only need one instance of each realm.\n\n## Should you migrate now?\n\nThere is no straight answer to question, it really depends on usage, complexity of the app and time.\nBut I think so this the perfect time to evaluate the efforts and changes required to migrate as \nRealm Kotlin SDK would be the future.\n", "format": "md", "metadata": {"tags": ["Realm", "Kotlin", "Java"], "pageDescription": "This article is targeted to existing Realm developers who want to understand how to migrate to Realm Kotlin SDK.", "contentType": "Tutorial"}, "title": "How to migrate from Realm Java SDK to Realm Kotlin SDK", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/advanced-modeling-realm-dotnet", "action": "created", "body": "# Advanced Data Modeling with Realm .NET\n\nRealm's intuitive data model approach means that in most cases, you don't even think of Realm models as entities. You just declare your POCOs, have them inherit from `RealmObject`, and you're done. Now you have persistable models, with `INotifyPropertyChanged` capabilities all wired up, that are also \"live\"\u2014i.e., every time you access a property, you get the latest state and not some snapshot from who knows how long ago. This is great and most of our users absolutely love the simplicity. 
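For reference, that baseline looks something like the following minimal, hypothetical model (the class and property names are made up for illustration):\n\n```csharp\nclass Dog : RealmObject\n{\n    // A plain POCO; inheriting from RealmObject is all Realm needs to persist it.\n    public string Name { get; set; }\n\n    public int Age { get; set; }\n}\n```\n\n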
Still, there are some use cases where being aware that you're working with a database can really bring your data models to the next level. In this blog post, we'll evaluate three techniques you can apply to make your models fit your needs even better.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n\n## Constructor Validation\n\nOne of the core requirements of Realm is that all models need to have a parameterless constructor. This is needed because Realm needs to be able to instantiate an object without figuring out what arguments to pass to the constructor. What not many people know is that you can make this parameterless constructor private to communicate expectations to your callers. This means that if you have a `Person` class where you absolutely expect that a `Name` is provided upon object creation, you can have a public constructor with a `name` argument and a private parameterless one for use by Realm:\n\n```csharp\nclass Person : RealmObject\n{\n public string Name { get; set; }\n\n public Person(string name)\n {\n ValidateName(name);\n\n Name = name;\n }\n\n // This is used by Realm, even though it's private\n private Person()\n {\n }\n}\n```\n\nAnd I know what some of you may be thinking: \"Oh no! \ud83d\ude31 Does that mean Realm uses the suuuuper slow reflection to create object instances?\" Fortunately, the answer is no. Instead, at compile time, Realm injects a nested helper class in each model that has a `CreateInstance` method. Since the helper class is nested in the model classes, it has access to private members and is thus able to invoke the private constructor.\n\n## Property Access Modifiers\n\nSimilar to the point above, another relatively unknown feature of Realm is that persisted properties don't need to be public. You can either have the entire property be private or just one of the accessors. This synergizes nicely with the private constructor technique that we mentioned above. If you expose a constructor that explicitly validates the person's name, it would be fairly annoying to do all that work and have some code accidentally set the property to `null` the very next line. So it would make sense to make the setter of the name property above private:\n\n```csharp\nclass Person : RealmObject\n{\n public string Name { get; private set; }\n\n public Person(string name)\n {\n // ...\n }\n}\n```\n\nThat way, you're communicating clearly to the class consumers that they need to provide the name at object creation time and that it can't be changed later. A very common use case here is to make the `Id` setter private and generate a random `Id` at object creation time:\n\n```csharp\nclass Transaction : RealmObject\n{\n public Guid Id { get; private set; } = Guid.NewGuid();\n}\n```\n\nSometimes, it makes sense to make the entire property private\u2014typically, when you want to expose a different public property that wraps it. If we go back to our `Person` and `Name` example, perhaps we want to allow changing the name, but we want to still validate the new name before we persist it. 
Then, we create a private autoimplemented property that Realm will use for persistence, and a public one that does the validation:\n\n```csharp\nclass Person : RealmObject\n{\n [MapTo(\"Name\")]\n private string _Name { get; set; }\n\n public string Name\n {\n get => _Name;\n set\n {\n ValidateName(value);\n _Name = value;\n }\n }\n}\n```\n\nThis is quite neat as it makes the public API of your model safe, while preserving its persistability. Of note is the `MapTo` attribute applied to `_Name`. It is not strictly necessary. I just added it to avoid having ugly column names in the database. You can use it or not use it. It's totally up to you. One thing to note when utilizing this technique is that Realm is completely unaware of the relationship between `Name` and `_Name`. This has two implications. 1) Notifications will be emitted for `_Name` only, and 2) You can't use LINQ queries to filter `Person` objects by name. Let's see how we can mitigate both:\n\nFor notifications, we can override `OnPropertyChanged` and raise a notification for `Name` whenever `_Name` changes:\n\n```csharp\nclass Person : RealmObject\n{\n protected override void OnPropertyChanged(string propertyName)\n {\n base.OnPropertyChanged(propertyName);\n\n if (propertyName == nameof(_Name))\n {\n RaisePropertyChanged(nameof(Name));\n }\n }\n}\n```\n\nThe code is fairly straightforward. `OnPropertyChanged` will be invoked whenever any property on the object changes and we just re-raise it for the related `Name` property. Note that, as an optimization, `OnPropertyChanged` will only be invoked if there are subscribers to the `PropertyChanged` event. So if you're testing this out and don't see the code get executed, make sure you added a subscriber first.\n\nThe situation with queries is slightly harder to work around. The main issue is that because the property is private, you can't use it in a LINQ query\u2014e.g., `realm.All().Where(p => p._Name == \"Peter\")` will result in a compile-time error. On the other hand, because Realm doesn't know that `Name` is tied to `_Name`, you can't use `p.Name == \"Peter\"` either. You can still use the string-based queries, though. Just remember to use the name that Realm knows about\u2014i.e., the string argument of `MapTo` if you remapped the property name or the internal property (`_Name`) if you didn't:\n\n```csharp\n// _Name is mapped to 'Name' which is what we use here\nvar peters = realm.All().Filter(\"Name == 'Peter'\");\n```\n\n## Using Unpersistable Data Types\n\nRealm has a wide variety of supported data types\u2014most primitive types in the Base Class Library (BCL), as well as advanced collections, such as sets and dictionaries. But sometimes, you'll come across a data type that Realm can't store yet, the most obvious example being enums. In such cases, you can build on top of the previous technique to expose enum properties in your models and have them be persisted as one of the supported data types:\n\n```csharp\nenum TransactionState\n{\n Pending,\n Settled,\n Error\n}\n\nclass Transaction : RealmObject\n{\n private string _State { get; set; }\n\n public TransactionState State\n {\n get => Enum.Parse(_State);\n set => _State = value.ToString();\n }\n}\n```\n\nUsing this technique, you can persist many other types, as long as they can be converted losslessly to a persistable primitive type. In this case, we chose `string`, but we could have just as easily used integer. 
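If you went down the integer route instead, a minimal sketch might look like the following (an alternative to, not part of, the model above):\n\n```csharp\nclass Transaction : RealmObject\n{\n    // Realm persists the private int; consumers only ever see the enum.\n    private int _State { get; set; }\n\n    public TransactionState State\n    {\n        get => (TransactionState)_State;\n        set => _State = (int)value;\n    }\n}\n```\n\n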
The string representation takes a bit more memory but is also more explicit and less error prone\u2014e.g., if you rearrange the enum members, the data will still be consistent.\n\nAll that is pretty cool, but we can take it up a notch. By building on top of this idea, we can also devise a strategy for representing complex data types, such as `Vector3` in a Unity game or a `GeoCoordinate` in a location-aware app. To do so, we'll take advantage of embedded objects\u2014a Realm concept that represents a complex data structure that is owned entirely by its parent. Embedded objects are a great fit for this use case because we want to have a strict 1:1 relationship and we want to make sure that deleting the parent also cleans up the embedded objects it owns. Let's see this in action:\n\n```csharp\nclass Vector3Model : EmbeddedObject\n{\n    // Casing of the properties here is unusual for C#,\n    // but consistent with the Unity casing.\n    private float x { get; set; }\n    private float y { get; set; }\n    private float z { get; set; }\n\n    public Vector3Model(Vector3 vector)\n    {\n        x = vector.x;\n        y = vector.y;\n        z = vector.z;\n    }\n\n    private Vector3Model()\n    {\n    }\n\n    public Vector3 ToVector3() => new Vector3(x, y, z);\n}\n\nclass Powerup : RealmObject\n{\n    [MapTo(\"Position\")]\n    private Vector3Model _Position { get; set; }\n\n    public Vector3 Position\n    {\n        get => _Position?.ToVector3() ?? Vector3.zero;\n        set => _Position = new Vector3Model(value);\n    }\n\n    protected override void OnPropertyChanged(string propertyName)\n    {\n        base.OnPropertyChanged(propertyName);\n\n        if (propertyName == nameof(_Position))\n        {\n            RaisePropertyChanged(nameof(Position));\n        }\n    }\n}\n```\n\nIn this example, we've defined a `Vector3Model` that roughly mirrors Unity's `Vector3`. It has three float properties representing the three components of the vector. We've also utilized what we learned in the previous sections. It has a private constructor to force consumers to always construct it with a `Vector3` argument. We've also marked its properties as private as we don't want consumers directly interacting with them. We want users to always call `ToVector3` to obtain the Unity type. And for our `Powerup` model, we're doing exactly that in the publicly exposed `Position` property. Note that similarly to our `Person` example, we're making sure to raise a notification for `Position` whenever `_Position` changes.\n\nAnd similarly to the example in the previous section, this approach makes querying via LINQ impossible and we have to fall back to the string query syntax if we want to find all powerups in a particular area:\n\n```csharp\nIQueryable<Powerup> PowerupsAroundLocation(Vector3 location, float radius)\n{\n    // Note that this query returns a cube around the location, not a sphere.\n    return realm.All<Powerup>().Filter(\n        \"Position.x > $0 AND Position.x < $1 AND Position.y > $2 AND Position.y < $3 AND Position.z > $4 AND Position.z < $5\",\n        location.x - radius, location.x + radius,\n        location.y - radius, location.y + radius,\n        location.z - radius, location.z + radius);\n}\n```\n\n## Conclusion\n\nThe list of techniques above is by no means meant to be exhaustive. Neither is it meant to imply that this is the only, or even \"the right,\" way to use Realm. For most apps, simple POCOs with a list of properties is perfectly sufficient. But if you need to add extra validations or persist complex data types that you're using a lot, but Realm doesn't support natively, we hope that these examples will give you ideas for how to do that.
And if you do come up with an ingenious way to use Realm, we definitely want to hear about it. Who knows? Perhaps we can feature it in our \"Advanced^2 Data Modeling\" article!", "format": "md", "metadata": {"tags": ["Realm", "C#"], "pageDescription": "Learn how to structure your Realm models to add validation, protect certain properties, and even persist complex objects coming from third-party packages.", "contentType": "Article"}, "title": "Advanced Data Modeling with Realm .NET", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/resumable-initial-sync", "action": "created", "body": "# Resumable Initial Sync in MongoDB 4.4\n\n## Introduction\n\nHello, everyone. My name is Nuno and I have been working with MongoDB databases for almost eight years now as a sysadmin and as a Technical Services Engineer.\n\nOne of the most common challenges in MongoDB environments is when a replica set member requires a resync and the Initial Sync process is interrupted for some reason.\n\nInterruptions like network partitions between the sync source and the node doing the initial sync cause the process to fail, forcing it to restart from scratch to ensure database consistency.\n\nThis was particularly problematic with large data sets, where an initial sync over terabytes of data can take several days.\n\nYou may have already noticed that I am talking in the past tense, as this is no longer a problem you need to face. I am very happy to share with you one of the latest enhancements introduced by MongoDB in v4.4: Resumable Initial Sync.\n\nResumable Initial Sync now enables nodes doing initial sync to survive events like transient network errors or a sync source restart when fetching data from the sync source node.\n\n## Resumable Initial Sync\n\nRecovering replica set members with an Initial Sync in large data environments comes with two common challenges:\n\n- Falling off the oplog\n- Transient network failures\n\nMongoDB became more resilient to these types of failures with MongoDB v3.4 by adding the ability to pull newly added oplog records during the data copy phase, and more recently with MongoDB v4.4, which adds the ability to resume the initial sync where it left off.\n\n## Behavioral Description\n\nThe initial sync process will restart the interrupted or failed command and keep retrying until the command succeeds, a non-resumable error occurs, or a period specified by the parameter `initialSyncTransientErrorRetryPeriodSeconds` passes (default: 24 hours). These restarts are constrained to use the same sync source, and are not tolerant to rollbacks on the sync source. That is, if the sync source experiences a rollback, the entire initial sync attempt will fail.\n\nResumable errors include retriable errors for which `ErrorCodes::isRetriableError` returns `true`, which covers all network errors as well as some other transient errors.\n\nErrors like `ErrorCodes::NamespaceNotFound`, `ErrorCodes::OperationFailed`, `ErrorCodes::CursorNotFound`, or `ErrorCodes::QueryPlanKilled` mean the collection may have been dropped, renamed, or modified in a way that caused the cursor to be killed.
These errors will cause `ErrorCodes::InitialSyncFailure` and will be treated the same as transient retriable errors (except for not killing the cursor), mark `ErrorCodes::isRetriableError` as `true`, and will allow the initial sync to resume where it left off.\n\nOn `ErrorCodes::NamespaceNotFound`, it will skip this entire collection and return success. Even if the collection has been renamed, simply resuming the query is sufficient since we are querying by `UUID`; the name change will be handled during `oplog` application.\n\nAll other errors are `non-resumable`.\n\n## Configuring Custom Retry Period\n\nThe default retry period is 24 hours (86,400 seconds). A database administrator can choose to increase this period with the following command:\n\n``` javascript\n// Default is 86400\ndb.adminCommand({\n setParameter: 1,\n initialSyncTransientErrorRetryPeriodSeconds: 86400\n})\n```\n\n>Note: The 24-hour value is the default period estimated for a database administrator to detect any ongoing failure and be able to act on restarting the sync source node.\n\n## Upgrade/Downgrade Requirements and Behaviors\n\nThe full resumable behavior will always be available between 4.4 nodes regardless of FCV - Feature Compatibility Version. Between 4.2 and 4.4 nodes, the initial sync will not be resumable during the query phase of the `CollectionCloner` (where we are actually reading data from collections), nor will it be resumable after collection rename, regardless of which node is 4.4. Resuming after transient failures in other commands will be possible when the syncing node is 4.4 and the sync source is 4.2.\n\n## Diagnosis/Debuggability\n\nDuring initial sync, the sync source node can become unavailable (either due to a network failure or process restart) and still, be able to resume and complete.\n\nHere are examples of what messages to expect in the logs.\n\nInitial Sync attempt successfully started:\n\n``` none\n{\"t\":{\"$date\":\"2020-11-10T19:49:21.826+00:00\"},\"s\":\"I\", \"c\":\"INITSYNC\", \"id\":21164, \"ctx\":\"ReplCoordExtern-0\",\"msg\":\"Starting initial sync attempt\",\"attr\":{\"initialSyncAttempt\":1,\"initialSyncMaxAttempts\":10}}\n{\"t\":{\"$date\":\"2020-11-10T19:49:22.905+00:00\"},\"s\":\"I\", \"c\":\"INITSYNC\", \"id\":21173, \"ctx\":\"ReplCoordExtern-1\",\"msg\":\"Initial syncer oplog truncation finished\",\"attr\":{\"durationMillis\":0}}\n```\n\nMessages caused by network failures (or sync source node restart):\n\n``` none\n{\"t\":{\"$date\":\"2020-11-10T19:50:04.822+00:00\"},\"s\":\"D1\", \"c\":\"INITSYNC\", \"id\":21078, \"ctx\":\"ReplCoordExtern-0\",\"msg\":\"Transient error occurred during cloner stage\",\"attr\":{\"cloner\":\"CollectionCloner\",\"stage\":\"query\",\"error\":{\"code\":6,\"codeName\":\"HostUnreachable\",\"errmsg\":\"recv failed while exhausting cursor :: caused by :: Connection closed by peer\"}}}\n{\"t\":{\"$date\":\"2020-11-10T19:50:04.823+00:00\"},\"s\":\"I\", \"c\":\"INITSYNC\", \"id\":21075, \"ctx\":\"ReplCoordExtern-0\",\"msg\":\"Initial Sync retrying cloner stage due to error\",\"attr\":{\"cloner\":\"CollectionCloner\",\"stage\":\"query\",\"error\":{\"code\":6,\"codeName\":\"HostUnreachable\",\"errmsg\":\"recv failed while exhausting cursor :: caused by :: Connection closed by peer\"}}}\n```\n\nInitial Sync is resumed after being interrupted:\n\n``` none\n{\"t\":{\"$date\":\"2020-11-10T19:51:43.996+00:00\"},\"s\":\"D1\", \"c\":\"INITSYNC\", \"id\":21139, \"ctx\":\"ReplCoordExtern-0\",\"msg\":\"Attempting to kill old remote cursor with id: 
{id}\",\"attr\":{\"id\":118250522569195472}}\n{\"t\":{\"$date\":\"2020-11-10T19:51:43.997+00:00\"},\"s\":\"D1\", \"c\":\"INITSYNC\", \"id\":21133, \"ctx\":\"ReplCoordExtern-0\",\"msg\":\"Collection cloner will resume the last successful query\"}\n```\n\nData cloners resume:\n\n``` none\n{\"t\":{\"$date\":\"2020-11-10T19:53:27.345+00:00\"},\"s\":\"D1\", \"c\":\"INITSYNC\", \"id\":21072, \"ctx\":\"ReplCoordExtern-0\",\"msg\":\"Cloner finished running stage\",\"attr\":{\"cloner\":\"CollectionCloner\",\"stage\":\"query\"}}\n{\"t\":{\"$date\":\"2020-11-10T19:53:27.347+00:00\"},\"s\":\"D1\", \"c\":\"INITSYNC\", \"id\":21069, \"ctx\":\"ReplCoordExtern-0\",\"msg\":\"Cloner running stage\",\"attr\":{\"cloner\":\"CollectionCloner\",\"stage\":\"setupIndexBuildersForUnfinishedIndexes\"}}\n{\"t\":{\"$date\":\"2020-11-10T19:53:27.349+00:00\"},\"s\":\"D1\", \"c\":\"INITSYNC\", \"id\":21072, \"ctx\":\"ReplCoordExtern-0\",\"msg\":\"Cloner finished running stage\",\"attr\":{\"cloner\":\"CollectionCloner\",\"stage\":\"setupIndexBuildersForUnfinishedIndexes\"}}\n{\"t\":{\"$date\":\"2020-11-10T19:53:27.350+00:00\"},\"s\":\"D1\", \"c\":\"INITSYNC\", \"id\":21148, \"ctx\":\"ReplCoordExtern-0\",\"msg\":\"Collection clone finished\",\"attr\":{\"namespace\":\"test.data\"}}\n{\"t\":{\"$date\":\"2020-11-10T19:53:27.351+00:00\"},\"s\":\"D1\", \"c\":\"INITSYNC\", \"id\":21057, \"ctx\":\"ReplCoordExtern-0\",\"msg\":\"Database clone finished\",\"attr\":{\"dbName\":\"test\",\"status\":{\"code\":0,\"codeName\":\"OK\"}}}\n```\n\nData cloning phase completes successfully. Oplog cloning phase starts:\n\n``` none\n{\"t\":{\"$date\":\"2020-11-10T19:53:27.352+00:00\"},\"s\":\"I\", \"c\":\"INITSYNC\", \"id\":21183, \"ctx\":\"ReplCoordExtern-0\",\"msg\":\"Finished cloning data. Beginning oplog replay\",\"attr\":{\"databaseClonerFinishStatus\":\"OK\"}}\n{\"t\":{\"$date\":\"2020-11-10T19:53:27.353+00:00\"},\"s\":\"I\", \"c\":\"INITSYNC\", \"id\":21195, \"ctx\":\"ReplCoordExtern-3\",\"msg\":\"Writing to the oplog and applying operations until stopTimestamp before initial sync can complete\",\"attr\":{\"stopTimestamp\":{\"\":{\"$timestamp\":{\"t\":1605038002,\"i\":1}}},\"beginFetchingTimestamp\":{\"\":{\"$timestamp\":{\"t\":1605037760,\"i\":1}}},\"beginApplyingTimestamp\":{\"\":{\"$timestamp\":{\"t\":1605037760,\"i\":1}}}}}\n{\"t\":{\"$date\":\"2020-11-10T19:53:27.359+00:00\"},\"s\":\"I\", \"c\":\"INITSYNC\", \"id\":21181, \"ctx\":\"ReplCoordExtern-1\",\"msg\":\"Finished fetching oplog during initial sync\",\"attr\":{\"oplogFetcherFinishStatus\":\"CallbackCanceled: oplog fetcher shutting down\",\"lastFetched\":\"{ ts: Timestamp(1605038002, 1), t: 296 }\"}}\n```\n\nInitial Sync completes successfully and statistics are provided:\n\n``` none\n{\"t\":{\"$date\":\"2020-11-10T19:53:27.360+00:00\"},\"s\":\"I\", \"c\":\"INITSYNC\", \"id\":21191, \"ctx\":\"ReplCoordExtern-1\",\"msg\":\"Initial sync attempt finishing up\"}\n{\"t\":{\"$date\":\"2020-11-10T19:53:27.360+00:00\"},\"s\":\"I\", \"c\":\"INITSYNC\", \"id\":21192, \"ctx\":\"ReplCoordExtern-1\",\"msg\":\"Initial Sync Attempt 
Statistics\",\"attr\":{\"statistics\":{\"failedInitialSyncAttempts\":0,\"maxFailedInitialSyncAttempts\":10,\"initialSyncStart\":{\"$date\":\"2020-11-10T19:49:21.826Z\"},\"initialSyncAttempts\":],\"appliedOps\":25,\"initialSyncOplogStart\":{\"$timestamp\":{\"t\":1605037760,\"i\":1}},\"initialSyncOplogEnd\":{\"$timestamp\":{\"t\":1605038002,\"i\":1}},\"totalTimeUnreachableMillis\":203681,\"databases\":{\"databasesCloned\":3,\"admin\":{\"collections\":2,\"clonedCollections\":2,\"start\":{\"$date\":\"2020-11-10T19:49:23.150Z\"},\"end\":{\"$date\":\"2020-11-10T19:49:23.452Z\"},\"elapsedMillis\":302,\"admin.system.keys\":{\"documentsToCopy\":2,\"documentsCopied\":2,\"indexes\":1,\"fetchedBatches\":1,\"start\":{\"$date\":\"2020-11-10T19:49:23.150Z\"},\"end\":{\"$date\":\"2020-11-10T19:49:23.291Z\"},\"elapsedMillis\":141,\"receivedBatches\":1},\"admin.system.version\":{\"documentsToCopy\":1,\"documentsCopied\":1,\"indexes\":1,\"fetchedBatches\":1,\"start\":{\"$date\":\"2020-11-10T19:49:23.291Z\"},\"end\":{\"$date\":\"2020-11-10T19:49:23.452Z\"},\"elapsedMillis\":161,\"receivedBatches\":1}},\"config\":{\"collections\":3,\"clonedCollections\":3,\"start\":{\"$date\":\"2020-11-10T19:49:23.452Z\"},\"end\":{\"$date\":\"2020-11-10T19:49:23.976Z\"},\"elapsedMillis\":524,\"config.system.indexBuilds\":{\"documentsToCopy\":0,\"documentsCopied\":0,\"indexes\":1,\"fetchedBatches\":0,\"start\":{\"$date\":\"2020-11-10T19:49:23.452Z\"},\"end\":{\"$date\":\"2020-11-10T19:49:23.591Z\"},\"elapsedMillis\":139,\"receivedBatches\":0},\"config.system.sessions\":{\"documentsToCopy\":1,\"documentsCopied\":1,\"indexes\":2,\"fetchedBatches\":1,\"start\":{\"$date\":\"2020-11-10T19:49:23.591Z\"},\"end\":{\"$date\":\"2020-11-10T19:49:23.801Z\"},\"elapsedMillis\":210,\"receivedBatches\":1},\"config.transactions\":{\"documentsToCopy\":0,\"documentsCopied\":0,\"indexes\":1,\"fetchedBatches\":0,\"start\":{\"$date\":\"2020-11-10T19:49:23.801Z\"},\"end\":{\"$date\":\"2020-11-10T19:49:23.976Z\"},\"elapsedMillis\":175,\"receivedBatches\":0}},\"test\":{\"collections\":1,\"clonedCollections\":1,\"start\":{\"$date\":\"2020-11-10T19:49:23.976Z\"},\"end\":{\"$date\":\"2020-11-10T19:53:27.350Z\"},\"elapsedMillis\":243374,\"test.data\":{\"documentsToCopy\":29000000,\"documentsCopied\":29000000,\"indexes\":1,\"fetchedBatches\":246,\"start\":{\"$date\":\"2020-11-10T19:49:23.976Z\"},\"end\":{\"$date\":\"2020-11-10T19:53:27.349Z\"},\"elapsedMillis\":243373,\"receivedBatches\":246}}}}}}\n{\"t\":{\"$date\":\"2020-11-10T19:53:27.451+00:00\"},\"s\":\"I\", \"c\":\"INITSYNC\", \"id\":21163, \"ctx\":\"ReplCoordExtern-3\",\"msg\":\"Initial sync done\",\"attr\":{\"durationSeconds\":245}}\n```\n\nThe new InitialSync statistics from [replSetGetStatus.initialSyncStatus can be useful to review the initial sync progress status.\n\nStarting in MongoDB 4.2.1, replSetGetStatus.initialSyncStatus metrics are only available when run on a member during its initial sync (i.e., STARTUP2 state).\n\nThe metrics are:\n\n- syncSourceUnreachableSince - The date and time at which the sync source became unreachable.\n- currentOutageDurationMillis - The time in milliseconds that the sync source has been unavailable.\n- totalTimeUnreachableMillis - The total time in milliseconds that the member has been unavailable during the current initial sync.\n\nFor each Initial Sync attempt from replSetGetStatus.initialSyncStatus.initialSyncAttempts:\n\n- totalTimeUnreachableMillis - The total time in milliseconds that the member has been unavailable during the current initial sync.\n- 
operationsRetried - Total number of all operation retry attempts.\n- rollBackId - The sync source's rollback identifier at the start of the initial sync attempt.\n\nAn example of this output is:\n\n``` none\nreplset:STARTUP2> db.adminCommand( { replSetGetStatus: 1 } ).initialSyncStatus\n{\n \"failedInitialSyncAttempts\" : 0,\n \"maxFailedInitialSyncAttempts\" : 10,\n \"initialSyncStart\" : ISODate(\"2020-11-06T20:16:21.649Z\"),\n \"initialSyncAttempts\" : ],\n \"appliedOps\" : 0,\n \"initialSyncOplogStart\" : Timestamp(1604693779, 1),\n \"syncSourceUnreachableSince\" : ISODate(\"2020-11-06T20:16:32.950Z\"),\n \"currentOutageDurationMillis\" : NumberLong(56514),\n \"totalTimeUnreachableMillis\" : NumberLong(56514),\n \"databases\" : {\n \"databasesCloned\" : 2,\n \"admin\" : {\n \"collections\" : 2,\n \"clonedCollections\" : 2,\n \"start\" : ISODate(\"2020-11-06T20:16:22.948Z\"),\n \"end\" : ISODate(\"2020-11-06T20:16:23.219Z\"),\n \"elapsedMillis\" : 271,\n \"admin.system.keys\" : {\n \"documentsToCopy\" : 2,\n \"documentsCopied\" : 2,\n \"indexes\" : 1,\n \"fetchedBatches\" : 1,\n \"start\" : ISODate(\"2020-11-06T20:16:22.948Z\"),\n \"end\" : ISODate(\"2020-11-06T20:16:23.085Z\"),\n \"elapsedMillis\" : 137,\n \"receivedBatches\" : 1\n },\n \"admin.system.version\" : {\n \"documentsToCopy\" : 1,\n \"documentsCopied\" : 1,\n \"indexes\" : 1,\n \"fetchedBatches\" : 1,\n \"start\" : ISODate(\"2020-11-06T20:16:23.085Z\"),\n \"end\" : ISODate(\"2020-11-06T20:16:23.219Z\"),\n \"elapsedMillis\" : 134,\n \"receivedBatches\" : 1\n }\n },\n \"config\" : {\n \"collections\" : 3,\n \"clonedCollections\" : 3,\n \"start\" : ISODate(\"2020-11-06T20:16:23.219Z\"),\n \"end\" : ISODate(\"2020-11-06T20:16:23.666Z\"),\n \"elapsedMillis\" : 447,\n \"config.system.indexBuilds\" : {\n \"documentsToCopy\" : 0,\n \"documentsCopied\" : 0,\n \"indexes\" : 1,\n \"fetchedBatches\" : 0,\n \"start\" : ISODate(\"2020-11-06T20:16:23.219Z\"),\n \"end\" : ISODate(\"2020-11-06T20:16:23.348Z\"),\n \"elapsedMillis\" : 129,\n \"receivedBatches\" : 0\n },\n \"config.system.sessions\" : {\n \"documentsToCopy\" : 1,\n \"documentsCopied\" : 1,\n \"indexes\" : 2,\n \"fetchedBatches\" : 1,\n \"start\" : ISODate(\"2020-11-06T20:16:23.348Z\"),\n \"end\" : ISODate(\"2020-11-06T20:16:23.538Z\"),\n \"elapsedMillis\" : 190,\n \"receivedBatches\" : 1\n },\n \"config.transactions\" : {\n \"documentsToCopy\" : 0,\n \"documentsCopied\" : 0,\n \"indexes\" : 1,\n \"fetchedBatches\" : 0,\n \"start\" : ISODate(\"2020-11-06T20:16:23.538Z\"),\n \"end\" : ISODate(\"2020-11-06T20:16:23.666Z\"),\n \"elapsedMillis\" : 128,\n \"receivedBatches\" : 0\n }\n },\n \"test\" : {\n \"collections\" : 1,\n \"clonedCollections\" : 0,\n \"start\" : ISODate(\"2020-11-06T20:16:23.666Z\"),\n \"test.data\" : {\n \"documentsToCopy\" : 29000000,\n \"documentsCopied\" : 714706,\n \"indexes\" : 1,\n \"fetchedBatches\" : 7,\n \"start\" : ISODate(\"2020-11-06T20:16:23.666Z\"),\n \"receivedBatches\" : 7\n }\n }\n }\n}\nreplset:STARTUP2>\n```\n\n## Wrap Up\n\nUpgrade your MongoDB database to the new v4.4 and take advantage of the new Resumable Initial Sync feature. 
Your deployment will now survive transient network errors or a sync source restarts.\n\n> If you have questions, please head to our [developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Discover the new Resumable Initial Sync feature in MongoDB v4.4", "contentType": "Article"}, "title": "Resumable Initial Sync in MongoDB 4.4", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/mongodb-visual-studio-code-plugin", "action": "created", "body": "# How To Use The MongoDB Visual Studio Code Plugin\n\nTo make developers more productive when working with MongoDB, we built\nMongoDB for Visual Studio\nCode,\nan extension that allows you to quickly connect to MongoDB and MongoDB\nAtlas and work with your data to\nbuild applications right inside your code editor. With MongoDB for\nVisual Studio Code you can:\n\n- Connect to a MongoDB or MongoDB\n Atlas cluster, navigate\n through your databases and collections, get a quick overview of your\n schema, and see the documents in your collections;\n- Create MongoDB Playgrounds, the fastest way to prototype CRUD\n operations and MongoDB commands;\n- Quickly access the MongoDB Shell, to launch the MongoDB Shell from\n the command palette and quickly connect to the active cluster.\n\n## Getting Started with MongoDB Atlas\n\n### Create an Atlas Account\n\nFirst things first, we will need to set up a MongoDB\nAtlas account. And don't worry,\nyou can create an M0 MongoDB Atlas cluster for free. No credit card is\nrequired to get started! To get up and running with a free M0 cluster,\nfollow the MongoDB Atlas Getting Started\nguide, or follow the\nsteps below. First you will need to start at the MongoDB Atlas\nregistration page, and\nfill in your account information. You can find more information about\nhow to create a MongoDB Atlas account in our\ndocumentation\n\n### Deploy a Free Tier Cluster\n\nOnce you log in, Atlas prompts you to build your first cluster. You need\nto click \"Build a Cluster.\" You will then select the Starter Cluster.\nStarter clusters include the M0, M2, and M5 cluster tiers. These\nlow-cost clusters are suitable for users who are learning MongoDB or\ndeveloping small proof-of-concept applications.\n\nAtlas supports M0 Free Tier clusters on Amazon Web Services\n(AWS),\nGoogle Cloud Platform\n(GCP),\nand Microsoft\nAzure.\nAtlas displays only the regions that support M0 Free Tier and M2/M5\nShared tier clusters.\n\nOnce you deploy your cluster, it can take up to 10 minutes for your\ncluster to provision and become ready to use.\n\n### Add Your Connection IP Address to Your IP Access List\n\nYou must add your IP address to the IP access\nlist\nbefore you can connect to your cluster. To add your IP address to the IP\naccess list. This is important, as it ensures that only you can access\nthe cluster in the cloud from your IP address. You also have the option\nof allowing access from anywhere, though this means that anyone can have\nnetwork access to your cluster. This is a potential security risk if\nyour password and other credentials leak. 
From your Clusters view, click\nthe Connect button for your cluster.\n\n### Configure your IP access list entry\n\nClick Add Your Current IP Address.\n\n### Create a Database User for Your Cluster\n\nFor security purposes, you must create a database user to access your\ncluster.\nEnter the new username and password. You'll then have the option of\nselecting user privileges, including admin, read/write access, or\nread-only access. From your Clusters view, click the Connect button for\nyour cluster.\n\nIn the **Create a MongoDB User** step of the dialog, enter a Username\nand a Password for your database user. You'll use this username and\npassword combination to access data on your cluster.\n\n>\n>\n>For information on configuring additional database users on your\n>cluster, see Configure Database\n>Users.\n>\n>\n\n## Install MongoDB for Visual Studio Code\n\nNext, we are going to connect to our new MongoDB Atlas database cluster\nusing the Visual Studio Code MongoDB\nPlugin.\nTo install MongoDB for Visual Studio Code, simply search for it in the\nExtensions list directly inside Visual Studio Code or head to the\n\"MongoDB for Visual Studio Code\"\nhomepage\nin the Visual Studio Code Marketplace.\n\n## Connect Your MongoDB Data\n\nMongoDB for Visual Studio Code can connect to MongoDB standalone\ninstances or clusters on MongoDB Atlas or self-hosted. Once connected,\nyou can **browse databases**, **collections**, and **read-only views**\ndirectly from the tree view.\n\nFor each collection, you will see a list of sample documents and a quick\noverview of the schema. This is very useful as a reference while writing\nqueries and aggregations.\n\nOnce installed there will be a new MongoDB tab that we can use to add\nour connections by clicking \"Add Connection\". If you've used MongoDB\nCompass before, then the form\nshould be familiar. You can enter your connection details in the form,\nor use a connection string. I went with the latter as my database is\nhosted on MongoDB Atlas.\n\nTo obtain your connection string, navigate to your \"Clusters\" page and\nselect \"Connect\".\n\nChoose the \"Connect using MongoDB Compass\" option and copy the\nconnection string. Make sure to add your username and password in their\nrespective places before entering the string in Visual Studio Code.\n\nThen paste this string into Visual Studio Code.\n\nOnce you've connected successfully, you should see an alert. At this\npoint, you can explore the data in your cluster, as well as your\nschemas.\n\n## Navigate Your Data\n\nOnce you connect to your deployment using MongoDB for Visual Studio\nCode, use the left navigation to:\n\n- Explore your databases, collections, read-only views, and documents.\n- Create new databases and collections.\n- Drop databases and collections.\n\n## Databases and Collections\n\nWhen you expand an active connection, MongoDB for Visual Studio Code\nshows the databases in that deployment. Click a database to view the\ncollections it contains.\n\n### View Collection Documents and Schema\n\nWhen you expand a collection, MongoDB for Visual Studio Code displays\nthat collection's document count next to the Documents label in the\nnavigation panel.\n\nWhen you expand a collection's documents, MongoDB for Visual Studio Code\nlists the `_id` of each document in the collection. 
Click an `_id` value\nto open that document in Visual Studio Code and view its contents.\n\nAlternatively, right-click a collection and click View Documents to view\nall the collection's documents in an array.\n\nOpening collection documents provides a **read-only** view of your data.\nTo modify your data using MongoDB for Visual Studio Code, use a\nJavaScript\nPlayground\nor launch a shell by right-clicking your active deployment in the\nMongoDB view in the Activity Bar.\n\n#### Schema\n\nYour collection's schema defines the fields and data types within the\ncollection. Due to MongoDB's flexible schema model, different documents\nin a collection may contain different fields, and data types may vary\nwithin a field. MongoDB can enforce schema\nvalidation to\nensure your collection documents have the same shape.\n\nWhen you expand a collection's schema, MongoDB for Visual Studio Code\nlists the fields which appear in that collection's documents. If a field\nexists in all documents and its type is consistent throughout the\ncollection, MongoDB for Visual Studio Code displays an icon indicating\nthat field's data type.\n\n### Create a New Database\n\nWhen you create a new database, you must populate it with an initial\ncollection. To create a new database:\n\n1. Hover over the connection for the deployment where you want your\n database to exist.\n2. Click the Plus icon that appears.\n3. In the prompt, enter a name for your new database.\n4. Press the enter key.\n5. Enter a name for the first collection in your new database.\n6. Press the enter key.\n\n### Create a New Collection\n\nTo create a new collection:\n\n1. Hover over the database where you want your collection to exist.\n2. Click the Plus icon that appears.\n3. In the prompt, enter a name for your new collection.\n4. Press the enter key to confirm your new collection.\n\n## Explore Your Data with Playgrounds\n\nMongoDB Playgrounds are the most convenient way to prototype and execute\nCRUD operations and other MongoDB commands directly inside Visual Studio\nCode. Use JavaScript environments to interact your data. Prototype\nqueries, run aggregations, and more.\n\n- Prototype your queries, aggregations, and MongoDB commands with\n MongoDB syntax highlighting and intelligent autocomplete for MongoDB\n shell API, MongoDB operators, and for database, collection, and\n field names.\n- Run your playgrounds and see the results instantly. Click the play\n button in the tab bar to see the output.\n- Save your playgrounds in your workspace and use them to document how\n your application interacts with MongoDB\n- Build aggregations quickly with helpful and well-commented stage\n snippets\n\n### Open the Visual Studio Code Command Palette.\n\nTo open a playground and begin interacting with your data, open Visual\nStudio Code and press one of the following key combinations:\n\n- Control + Shift + P on Windows or Linux.\n- Command + Shift + P on macOS.\n\nThe Command Palette provides quick access to commands and keyboard\nshortcuts.\n\n### Find and run the \"Create MongoDB Playground\" command.\n\nUse the Command Palette search bar to search for commands. 
All commands\nrelated to MongoDB for Visual Studio Code are prefaced with MongoDB:.\n\nWhen you run the MongoDB: Create MongoDB Playground command, MongoDB for\nVisual Studio Code opens a playground pre-configured with a few\ncommands.\n\n## Run a Playground\n\nTo run a playground, click the Play Button in Visual Studio Code's top\nnavigation bar.\n\nYou can use a MongoDB Playground to perform CRUD (create, read, update,\nand delete) operations on documents in a collection on a connected\ndeployment. Use the\nMongoDB CRUD Operators and\nshell methods to\ninteract with your databases in MongoDB Playgrounds.\n\n### Perform CRUD Operations\n\nLet's run through the default MongoDB Playground template that's created\nwhen you initialize a new Playground. In the default template, it\nexecutes the following:\n\n1. `use('mongodbVSCodePlaygroundDB')` switches to the\n `mongodbVSCodePlaygroundDB` database.\n2. db.sales.drop()\n drops the sales collection, so the playground will start from a\n clean slate.\n3. Inserts eight documents into the mongodbVSCodePlaygroundDB.sales\n collection.\n 1. Since the collection was dropped, the insert operations will\n create the collection and insert the data.\n 2. For a detailed description of this method's parameters, see\n insertOne()\n in the MongoDB Manual.\n4. Runs a query to read all documents sold on April 4th, 2014.\n 1. For a detailed description of this method's parameters, see\n find()\n in the MongoDB Manual.\n\n``` javascript\n// MongoDB Playground\n// To disable this template go to Settings \\| MongoDB \\| Use Default Template For Playground.\n// Make sure you are connected to enable completions and to be able to run a playground.\n// Use Ctrl+Space inside a snippet or a string literal to trigger completions.\n\n// Select the database to use.\nuse('mongodbVSCodePlaygroundDB');\n\n// The drop() command destroys all data from a collection.\n// Make sure you run it against proper database and collection.\ndb.sales.drop();\n\n// Insert a few documents into the sales collection.\ndb.sales.insertMany(\n { '_id' : 1, 'item' : 'abc', 'price' : 10, 'quantity' : 2, 'date' : new Date('2014-03-01T08:00:00Z') },\n { '_id' : 2, 'item' : 'jkl', 'price' : 20, 'quantity' : 1, 'date' : new Date('2014-03-01T09:00:00Z') },\n { '_id' : 3, 'item' : 'xyz', 'price' : 5, 'quantity' : 10, 'date' : new Date('2014-03-15T09:00:00Z') },\n { '_id' : 4, 'item' : 'xyz', 'price' : 5, 'quantity' : 20, 'date' : new Date('2014-04-04T11:21:39.736Z') },\n { '_id' : 5, 'item' : 'abc', 'price' : 10, 'quantity' : 10, 'date' : new Date('2014-04-04T21:23:13.331Z') },\n { '_id' : 6, 'item' : 'def', 'price' : 7.5, 'quantity': 5, 'date' : new Date('2015-06-04T05:08:13Z') },\n { '_id' : 7, 'item' : 'def', 'price' : 7.5, 'quantity': 10, 'date' : new Date('2015-09-10T08:43:00Z') },\n { '_id' : 8, 'item' : 'abc', 'price' : 10, 'quantity' : 5, 'date' : new Date('2016-02-06T20:20:13Z') },\n]);\n\n// Run a find command to view items sold on April 4th, 2014.\ndb.sales.find({\n date: {\n $gte: new Date('2014-04-04'),\n $lt: new Date('2014-04-05')\n }\n});\n```\n\nWhen you press the Play Button, this operation outputs the following\ndocument to the Output view in Visual Studio Code:\n\n``` javascript\n{\n acknowleged: 1,\n insertedIds: {\n '0': 2,\n '1': 3,\n '2': 4,\n '3': 5,\n '4': 6,\n '5': 7,\n '6': 8,\n '7': 9\n }\n}\n```\n\nYou can learn more about the basics of MQL and CRUD operations in the\npost, [Getting Started with Atlas and the MongoDB Query Language\n(MQL).\n\n### Run Aggregation 
Pipelines\n\nLet's run through the last statement of the default MongoDB Playground\ntemplate. You can run aggregation\npipelines on your\ncollections in MongoDB for Visual Studio Code. Aggregation pipelines\nconsist of\nstages\nthat process your data and return computed results.\n\nCommon uses for aggregation include:\n\n- Grouping data by a given expression.\n- Calculating results based on multiple fields and storing those\n results in a new field.\n- Filtering data to return a subset that matches a given criteria.\n- Sorting data.\n\nWhen you run an aggregation, MongoDB for Visual Studio Code conveniently\noutputs the results directly within Visual Studio Code.\n\nThis pipeline performs an aggregation in two stages:\n\n1. The\n $match\n stage filters the data such that only sales from the year 2014 are\n passed to the next stage.\n2. The\n $group\n stage groups the data by item. The stage adds a new field to the\n output called totalSaleAmount, which is the culmination of the\n item's price and quantity.\n\n``` javascript\n// Run an aggregation to view total sales for each product in 2014.\nconst aggregation = \n { $match: {\n date: {\n $gte: new Date('2014-01-01'),\n $lt: new Date('2015-01-01')\n }\n } },\n { $group: {\n _id : '$item', totalSaleAmount: {\n $sum: { $multiply: [ '$price', '$quantity' ] }\n }\n } },\n];\n\ndb.sales.aggregate(aggregation);\n```\n\nWhen you press the Play Button, this operation outputs the following\ndocuments to the Output view in Visual Studio Code:\n\n``` javascript\n[\n {\n _id: 'abc',\n totalSaleAmount: 120\n },\n {\n _id: 'jkl',\n totalSaleAmount: 20\n },\n {\n _id: 'xyz',\n totalSaleAmount: 150\n }\n]\n```\n\nSee [Run Aggregation\nPipelines\nfor more information on running the aggregation pipeline from the\nMongoDB Playground.\n\n## Terraform snippet for MongoDB Atlas\n\nIf you use Terraform to manage your infrastructure, MongoDB for Visual\nStudio Code helps you get started with the MongoDB Atlas\nProvider.\nWe aren't going to cover this feature today, but if you want to learn\nmore, check out Create an Atlas Cluster from a Template using\nTerraform,\nfrom the MongoDB manual.\n\n## Summary\n\nThere you have it! MongoDB for Visual Studio Code Extension allows you\nto connect to your MongoDB instance and enables you to interact in a way\nthat fits into your native workflow and development tools. You can\nnavigate and browse your MongoDB databases and collections, and\nprototype queries and aggregations for use in your applications.\n\nIf you are a Visual Studio Code user, getting started with MongoDB for\nVisual Studio Code is easy:\n\n1. Install the extension from the\n marketplace;\n2. Get a free Atlas cluster\n if you don't have a MongoDB server already;\n3. Connect to it and start building a playground.\n\nYou can find more information about MongoDB for Visual Studio Code and\nall its features in the\ndocumentation.\n\n>\n>\n>If you have any questions on MongoDB for Visual Studio Code, you can\n>join in the discussion at the MongoDB Community\n>Forums, and you can\n>share feature requests using the MongoDB Feedback\n>Engine.\n>\n>\n\n>\n>\n>When you're ready to try out the MongoDB Visual Studio Code plugin for\n>yourself, check out MongoDB Atlas, MongoDB's\n>fully managed database-as-a-service. 
Atlas is the easiest way to get\n>started with MongoDB and has a generous, forever-free tier.\n>\n>\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- Ready to install MongoDB for Visual Studio\n Code?\n- MongoDB for Visual Studio Code\n Documentation\n- Getting Started with Atlas and the MongoDB Query Language\n (MQL)\n- Want to learn more about MongoDB? Be sure to take a class on the\n MongoDB University\n- Have a question, feedback on this post, or stuck on something be\n sure to check out and/or open a new post on the MongoDB Community\n Forums\n- Want to check out more cool articles about MongoDB? Be sure to\n check out more posts like this on the MongoDB Developer\n Hub\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to connect to MongoDB from VS Code! Navigate your databases, use playgrounds to prototype queries and aggregations, and more!", "contentType": "Tutorial"}, "title": "How To Use The MongoDB Visual Studio Code Plugin", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/joining-collections-mongodb-dotnet-core-aggregation-pipeline", "action": "created", "body": "# Joining Collections in MongoDB with .NET Core and an Aggregation Pipeline\n\nIf you've been keeping up with my .NET Core series on MongoDB, you'll remember that we explored creating a simple console application as well as building a RESTful API with basic CRUD support. In both examples, we used basic filters when interacting with MongoDB from our applications.\n\nBut what if we need to do something a bit more complex, like join data from two different MongoDB collections?\n\nIn this tutorial, we're going to take a look at aggregation pipelines and some of the ways that you can work with them in a .NET Core application.\n\n## The Requirements\n\nBefore we get started, there are a few requirements that must be met to be successful:\n\n- Have a MongoDB Atlas cluster deployed and configured.\n- Install .NET Core 6+.\n- Install the MongoDB sample data sets.\n\nWe will be using .NET Core 6.0 for this particular tutorial. Older or newer versions might work, but there's a chance that some of the commands may be a little different. The expectation is that you already have a MongoDB Atlas cluster ready to go. This could be a free M0 cluster or better, but you'll need it properly configured with user roles and network access rules. You'll also need the MongoDB sample data sets to be attached.\n\nIf you need help with this, check out a previous tutorial I wrote on the topic.\n\n## A Closer Look at the Data Model and Expected Outcomes\n\nBecause we're expecting to accomplish some fairly complicated things in this tutorial, it's probably a good idea to break down the data going into it and the data that we're expecting to come out of it.\n\nIn this tutorial, we're going to be using the **sample_mflix** database and the **movies** collection. We're also going to be using a custom **playlist** collection that we're going to add to the **sample_mflix** database.\n\nTo give you an idea of the data that we're going to be working with, take the following document from the **movies** collection:\n\n```json\n{\n \"_id\": ObjectId(\"573a1390f29313caabcd4135\"),\n \"title\": \"Blacksmith Scene\",\n \"plot\": \"Three men hammer on an anvil and pass a bottle of beer around.\",\n \"year\": 1893,\n // ...\n}\n```\n\nAlright, so I didn't include the entire document because it is actually quite huge. 
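If you'd like to look at a complete movie document yourself, one quick way (assuming the sample data set is loaded on your cluster) is to pull one up in the MongoDB shell:

```javascript
// Run in mongosh against the cluster that has the sample data loaded.
use sample_mflix
db.movies.findOne({ _id: ObjectId("573a1390f29313caabcd4135") })
```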
Knowing every single field is not going to help or hurt the example as long as we're familiar with the `_id` field.\n\nNext, let's look at a document in the proposed **playlist** collection:\n\n```json\n{\n \"_id\": ObjectId(\"61d8bb5e2d5fe0c2b8a1007d\"),\n \"username\": \"nraboy\",\n \"items\": \n \"573a1390f29313caabcd42e8\",\n \"573a1391f29313caabcd8a82\"\n ]\n}\n```\n\nKnowing the fields in the above document is important as they'll be used throughout our aggregation pipelines.\n\nOne of the most important things to take note of between the two collections is the fact that the `_id` fields are `ObjectId` and the values in the `items` field are strings. More on this as we progress.\n\nNow that we know our input documents, let's take a look at what we're expecting as a result of our queries. If I were to query for a playlist, I don't want the id values for each of the movies. I want them fully expanded, like the following:\n\n```json\n{\n \"_id\": ObjectId(\"61d8bb5e2d5fe0c2b8a1007d\"),\n \"username\": \"nraboy\",\n \"items\": [\n {\n \"_id\": ObjectId(\"573a1390f29313caabcd4135\"),\n \"title\": \"Blacksmith Scene\",\n \"plot\": \"Three men hammer on an anvil and pass a bottle of beer around.\",\n \"year\": 1893,\n // ...\n },\n {\n \"_id\": ObjectId(\"573a1391f29313caabcd8a82\"),\n \"title\": \"The Terminator\",\n \"plot\": \"A movie about some killer robots.\",\n \"year\": 1984,\n // ...\n }\n ]\n}\n```\n\nThis is where the aggregation pipelines come in and some joining because we can't just do a normal filter on a `Find` operation, unless we wanted to perform multiple `Find` operations.\n\n## Creating a New .NET Core Console Application with MongoDB Support\n\nTo keep things simple, we're going to be building a console application that uses our aggregation pipeline. You can take the logic and apply it towards a web application if that is what you're interested in.\n\nFrom the CLI, execute the following:\n\n```bash\ndotnet new console -o MongoExample\ncd MongoExample\ndotnet add package MongoDB.Driver\n```\n\nThe above commands will create a new .NET Core project and install the latest MongoDB driver for C#. Everything we do next will happen in the project's \"Program.cs\" file.\n\nOpen the \"Program.cs\" file and add the following C# code:\n\n```csharp\nusing MongoDB.Driver;\nusing MongoDB.Bson;\n\nMongoClient client = new MongoClient(\"ATLAS_URI_HERE\");\n\nIMongoCollection playlistCollection = client.GetDatabase(\"sample_mflix\").GetCollection(\"playlist\");\n\nList results = playlistCollection.Find(new BsonDocument()).ToList();\n\nforeach(BsonDocument result in results) {\n Console.WriteLine(result[\"username\"] + \": \" + string.Join(\", \", result[\"items\"]));\n}\n```\n\nThe above code will connect to a MongoDB cluster, get a reference to our **playlist** collection, and dump all the documents from that collection into the console. Finding and returning all the documents in the collection is not a requirement for the aggregation pipeline, but it might help with the learning process.\n\nThe `ATLAS_URI_HERE` string can be obtained from the [MongoDB Atlas Dashboard after clicking \"Connect\" for a particular cluster.\n\n## Building an Aggregation Pipeline with .NET Core Using Raw BsonDocument Stages\n\nWe're going to explore a few different options towards creating an aggregation pipeline query with .NET Core. 
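One practical note before we dig into the stages: the **playlist** collection isn't part of the Atlas sample data sets, so if you're following along you'll likely need to create it yourself. Here's a minimal, illustrative way to seed it from the same console application; the document shape and movie id strings come from the examples above, and this seeding step is my own addition rather than part of the original tutorial:

```csharp
// One-time seed for the custom playlist collection, reusing the
// playlistCollection reference from the snippet above.
BsonDocument playlist = new BsonDocument
{
    { "username", "nraboy" },
    { "items", new BsonArray { "573a1390f29313caabcd42e8", "573a1391f29313caabcd8a82" } }
};

playlistCollection.InsertOne(playlist);
```

With at least one playlist document in place, the pipeline options below have something to work with.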
The first will use raw `BsonDocument` type data.\n\nWe know our input data and we know our expected outcome, so we need to come up with a few pipeline stages to bring it together.\n\nLet's start with the first stage:\n\n```csharp\nBsonDocument pipelineStage1 = new BsonDocument{\n {\n \"$match\", new BsonDocument{\n { \"username\", \"nraboy\" }\n }\n }\n};\n```\n\nThe first stage of this pipeline uses the `$match` operator to find only documents where the `username` is \"nraboy.\" This could be more than one because we're not treating `username` as a unique field.\n\nWith the filter in place, let's move to the next stage:\n\n```csharp\nBsonDocument pipelineStage2 = new BsonDocument{\n { \n \"$project\", new BsonDocument{\n { \"_id\", 1 },\n { \"username\", 1 },\n { \n \"items\", new BsonDocument{\n {\n \"$map\", new BsonDocument{\n { \"input\", \"$items\" },\n { \"as\", \"item\" },\n {\n \"in\", new BsonDocument{\n {\n \"$convert\", new BsonDocument{\n { \"input\", \"$$item\" },\n { \"to\", \"objectId\" }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n }\n};\n```\n\nRemember how the document `_id` fields were ObjectId and the `items` array were strings? For the join to be successful, they need to be of the same type. The second pipeline stage is more of a manipulation stage with the `$project` operator. We're defining the fields we want passed to the next stage, but we're also modifying some of the fields, in particular the `items` field. Using the `$map` operator we can take the string values and convert them to ObjectId values.\n\nIf your `items` array contained ObjectId instead of string values, this particular stage wouldn't be necessary. It might also not be necessary if you're using POCO classes instead of `BsonDocument` types. That is a lesson for another day though.\n\nWith our item values mapped correctly, we can push them to the next stage in the pipeline:\n\n```csharp\nBsonDocument pipelineStage3 = new BsonDocument{\n {\n \"$lookup\", new BsonDocument{\n { \"from\", \"movies\" },\n { \"localField\", \"items\" },\n { \"foreignField\", \"_id\" },\n { \"as\", \"movies\" }\n }\n }\n};\n```\n\nThe above pipeline stage is where the JOIN operation actually happens. We're looking into the **movies** collection and we're using the ObjectId fields from our **playlist** collection to join them to the `_id` field of our **movies** collection. The output from this JOIN will be stored in a new `movies` field.\n\nThe `$lookup` is like saying the following:\n\n```\nSELECT movies\nFROM playlist\nJOIN movies ON playlist.items = movies._id\n```\n\nOf course there is more to it than the above SQL statement because `items` is an array, something you can't natively work with in most SQL databases.\n\nSo as of right now, we have our joined data. However, its not quite as elegant as what we wanted in our final outcome. This is because the `$lookup` output is an array which will leave us with a multidimensional array. Remember, `items` was an array and each `movies` is an array. Not the most pleasant thing to work with, so we probably want to further manipulate the data in another stage.\n\n```csharp\nBsonDocument pipelineStage4 = new BsonDocument{\n { \"$unwind\", \"$movies\" }\n};\n```\n\nThe above stage will take our new `movies` field and flatten it out with the `$unwind` operator. The `$unwind` operator basically takes each element of an array and creates a new result item to sit adjacent to the rest of the fields of the parent document. 
So if you have, for example, one document that has an array with two elements, after doing an `$unwind`, you'll have two documents.\n\nOur end goal, though, is to end up with a single dimension array of movies, so we can fix this with another pipeline stage.\n\n```csharp\nBsonDocument pipelineStage5 = new BsonDocument{\n {\n \"$group\", new BsonDocument{\n { \"_id\", \"$_id\" },\n { \n \"username\", new BsonDocument{\n { \"$first\", \"$username\" }\n } \n },\n { \n \"movies\", new BsonDocument{\n { \"$addToSet\", \"$movies\" }\n }\n }\n }\n }\n};\n```\n\nThe above stage will group our documents and add our unwound movies to a new `movies` field, one that isn't multidimensional.\n\nSo let's bring the pipeline stages together so they can be run in our application.\n\n```csharp\nBsonDocument] pipeline = new BsonDocument[] { \n pipelineStage1, \n pipelineStage2, \n pipelineStage3, \n pipelineStage4, \n pipelineStage5 \n};\n\nList pResults = playlistCollection.Aggregate(pipeline).ToList();\n\nforeach(BsonDocument pResult in pResults) {\n Console.WriteLine(pResult);\n}\n```\n\nExecuting the code thus far should give us our expected outcome in terms of data and format.\n\nNow, you might be thinking that the above five-stage pipeline was a lot to handle for a JOIN operation. There are a few things that you should be aware of:\n\n- Our id values were not of the same type, which resulted in another stage.\n- Our values to join were in an array, not a one-to-one relationship.\n\nWhat I'm trying to say is that the length and complexity of your pipeline is going to depend on how you've chosen to model your data.\n\n## Using a Fluent API to Build Aggregation Pipeline Stages\n\nLet's look at another way to accomplish our desired outcome. We can make use of the Fluent API that MongoDB offers instead of creating an array of pipeline stages.\n\nTake a look at the following:\n\n```csharp\nvar pResults = playlistCollection.Aggregate()\n .Match(new BsonDocument{{ \"username\", \"nraboy\" }})\n .Project(new BsonDocument{\n { \"_id\", 1 },\n { \"username\", 1 },\n {\n \"items\", new BsonDocument{\n {\n \"$map\", new BsonDocument{\n { \"input\", \"$items\" },\n { \"as\", \"item\" },\n {\n \"in\", new BsonDocument{\n {\n \"$convert\", new BsonDocument{\n { \"input\", \"$$item\" },\n { \"to\", \"objectId\" }\n }\n }\n }\n }\n }\n }\n }\n }\n })\n .Lookup(\"movies\", \"items\", \"_id\", \"movies\")\n .Unwind(\"movies\")\n .Group(new BsonDocument{\n { \"_id\", \"$_id\" },\n {\n \"username\", new BsonDocument{\n { \"$first\", \"$username\" }\n }\n },\n {\n \"movies\", new BsonDocument{\n { \"$addToSet\", \"$movies\" }\n }\n }\n })\n .ToList();\n\nforeach(var pResult in pResults) {\n Console.WriteLine(pResult);\n}\n```\n\nIn the above example, we used methods such as `Match`, `Project`, `Lookup`, `Unwind`, and `Group` to get our final result. For some of these methods, we didn't need to use a `BsonDocument` like we saw in the previous example.\n\n## Conclusion\n\nYou just saw two ways to do a MongoDB aggregation pipeline for joining collections within a .NET Core application. 
Like previously mentioned, there are a few ways to accomplish what we want, all of which are going to be dependent on how you've chosen to model the data within your collections.\n\nThere is a third way, which we'll explore in another tutorial, and this uses LINQ to get the job done.\n\nIf you have questions about anything you saw in this tutorial, drop by the [MongoDB Community Forums and get involved!", "format": "md", "metadata": {"tags": ["C#", "MongoDB"], "pageDescription": "Learn how to use the MongoDB aggregation pipeline to create stages that will join documents and collections in a .NET Core application.", "contentType": "Tutorial"}, "title": "Joining Collections in MongoDB with .NET Core and an Aggregation Pipeline", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-asyncopen-autoopen", "action": "created", "body": "# Open Synced Realms in SwiftUI using @Auto/AsyncOpen\n\n## Introduction\n\nWe\u2019re very happy to announce that v10.12.0 of the Realm Cocoa SDK includes our two new property wrappers `@AutoOpen` and `@AsyncOpen` for asynchronous opening of a realm for Realm Sync users. This new feature, which is a response to your community feedback, aligns with our goal to make our developer experience better and more effortless, integrating it with SwiftUI, and removing boilerplate code.\n\nUp until now, the standard approach for opening a realm for any sync user is to call `Realm.asyncOpen()` using a user\u2019s sync configuration, then publish the opened Realm to the view:\n\n``` swift\nenum AsyncOpenState {\n case waiting\n case inProgress(Progress)\n case open(Realm)\n case error(Error)\n}\n\nstruct AsyncView: View {\n @State var asyncOpenState: AsyncOpenState = .waiting\n\n var body: some View {\n switch asyncOpenState {\n case .waiting:\n ProgressView()\n .onAppear(perform: initAsyncOpen)\n case .inProgress(let progress):\n ProgressView(progress)\n case .open(let realm):\n ContactsListView()\n .environment(\\.realm, realm)\n case .error(let error):\n ErrorView(error: error)\n }\n }\n\n func initAsyncOpen() {\n let app = App(id: \"appId\")\n guard let currentUser = app.currentUser else { return }\n let realmConfig = currentUser.configuration(partitionValue: \"myPartition\")\n Realm.asyncOpen(configuration: realmConfig,\n callbackQueue: DispatchQueue.main) { result in\n switch result {\n case .success(let realm):\n asyncOpenState = .open(realm)\n case .failure(let error):\n asyncOpenState = .error(error)\n }\n }.addProgressNotification { syncProgress in\n let progress = Progress(totalUnitCount: Int64(syncProgress.transferredBytes))\n progress.completedUnitCount = Int64(syncProgress.transferredBytes)\n asyncOpenState = .inProgress(progress)\n }\n }\n}\n```\n\nWith `@AsyncOpen` and `@AutoOpen`, we are reducing development time and boilerplate, making it easier, faster, and cleaner to implement Realm.asyncOpen(). `@AsyncOpen` and `@AutoOpen` give the user the possibility to cover two common use cases in synced apps.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. 
Get started now by build: Deploy Sample for Free!\n\n## Prerequisites\n\n- Realm Cocoa 10.12.0+\n\n## @AsyncOpen\n\nWith the `@AsyncOpen` property wrapper, we have the same behavior as using `Realm.asyncOpen()`, but with a much more natural API for SwiftUI developers. Using this property wrapper prevents your app from trying to fetch the Realm file if there is no network connection, and it will only return a realm when it's synced with MongoDB Realm data. If there is no internet connection, then @AsyncOpen< will throw an error.\n\nLet\u2019s take, for example, a game app, which the user can play both on an iPhone and iPad. Having the data not updated would result in losing track of the current status of the player. In this case, it\u2019s very important to have our data updated with any latest changes. This is the perfect use case for `@AsyncOpen`. \n\nThis property wrapper's API gives you the flexibility to optionally specify a MongoDB Realm AppId. If no AppId is provided, and you\u2019ve only used one ID within your App, then that will be used. You can also provide a timeout for your asynchronous operation:\n\n```swift\n@AsyncOpen(appId: \"appId\",\n partitionValue: \"myPartition\",\n configuration: Realm.Configuration(objectTypes: SwiftPerson.self])\n timeout: 20000)\nvar asyncOpen\n```\n\nAdding it to your SwiftUI App is as simple as declaring it in your view and have your view react to the state of the sync operation:\n\n- Display a progress view while downloading or waiting for a user to be logged in.\n- Display an error view if there is a failure during sync.\n- Navigate to a new view after our realm is opened\n\nOnce the synced realm has been successfully opened, you can pass it to another view (embedded or via a navigation link):\n\n```swift\nstruct AsyncOpenView: View {\n @AsyncOpen(appId: \"appId\",\n partitionValue: \"myPartition\",\n configuration: Realm.Configuration(objectTypes: [SwiftPerson.self])\n timeout: 20000)\n var asyncOpen\n\n var body: some View {\n VStack {\n switch asyncOpen {\n case .connecting:\n ProgressView()\n case .waitingForUser:\n ProgressView(\"Waiting for user to logged in...\")\n case .open(let realm):\n ListView()\n .environment(\\.realm, realm)\n case .error(let error):\n ErrorView(error: error)\n case .progress(let progress):\n ProgressView(progress)\n }\n }\n }\n}\n```\n\nIf you have been using Realm.asyncOpen() in your current SwiftUI App and want to maintain the same behavior, you may want to migrate to `@AsyncOpen`. It will simplify your code and make it more intuitive.\n\n## @AutoOpen\n\n`@AutoOpen` should be used when you want to work with the synced realm file even when there is no internet connection.\n\nLet\u2019s take, for example, Apple\u2019s Notes app, which tries to sync your data if there is internet access and shows you all the notes synced from other devices. If there is no internet connection, then Notes shows you your local (possibly stale) data. This use case is perfect for the `@AutoOpen` property wrapper. 
When the user recovers a network connection, Realm will sync changes in the background, without the need to add any extra code.\n\nThe syntax for using `@AutoOpen` is the same as for `@AsyncOpen`:\n\n```swift\nstruct AutoOpenView: View {\n @AutoOpen(appId: \"appId\",\n partitionValue: \"myPartition\",\n configuration: Realm.Configuration(objectTypes: [SwiftPerson.self])\n timeout: 10000)\n var autoOpen\n\n var body: some View {\n VStack {\n switch autoOpen {\n case .connecting:\n ProgressView()\n case .waitingForUser:\n ProgressView(\"Waiting for user to logged in...\")\n case .open(let realm):\n ContactView()\n .environment(\\.realm, realm)\n case .error(let error):\n ErrorView(error: error)\n case .progress(let progress):\n ProgressView(progress)\n }\n }\n }\n}\n```\n\n## One Last Thing\u2026 \n\nWe added a new key to our set of Environment Values: a \u201cpartition value\u201d environment key which is used by our new property wrappers `@AsyncOpen` and `@AutoOpen` to dynamically inject a partition value when it's derived and not static. For example, in the case of using the user id as a partition value, you can pass this environment value to the view where `@AsyncOpen` or `@AutoOpen` are used:\n\n```swift\nAsyncView()\n .environment(\\.partitionValue, user.id!)\n```\n\n## Conclusion\n\nWith these property wrappers, we continue to better integrate Realm into your SwiftUI apps. With the release of this feature, and more to come, we want to make it easier for you to incorporate our SDK and sync functionality into your apps, no matter whether you\u2019re using UIKit or SwiftUI.\n\nWe are excited for our users to test these new features. Please share any feedback or ideas for new features in our [community forum.\n\nDocumentation on both of these property wrappers can be found in our docs. \n", "format": "md", "metadata": {"tags": ["Realm"], "pageDescription": "Learn how to use the new Realm @AutoOpen and @AsyncOpen property wrappers to open synced realms from your SwiftUI apps.", "contentType": "News & Announcements"}, "title": "Open Synced Realms in SwiftUI using @Auto/AsyncOpen", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/nextjs-building-modern-applications", "action": "created", "body": "# Building Modern Applications with Next.js and MongoDB\n\n>\n>\n>This article is out of date. Check out the official Next.js with MongoDB tutorial for the latest guide on integrating MongoDB with Next.js.\n>\n>\n\nDevelopers have more choices than ever before when it comes to choosing the technology stack for their next application. Developer productivity is one of the most important factors in choosing a modern stack and I believe that Next.js coupled with MongoDB can get you up and running on the next great application in no time at all. Let's find out how and why!\n\nIf you would like to follow along with this tutorial, you can get the code from the GitHub repo. Also, be sure to sign up for a free MongoDB Atlas account to make it easier to connect your MongoDB database.\n\n## What is Next.js\n\nNext.js is a React based framework for building modern web applications. The framework comes with a lot of powerful features such as server side rendering, automatic code splitting, static exporting and much more that make it easy to build scalable and production ready apps. 
Its opinionated nature means that the framework is focused on developer productivity, but still flexible enough to give developers plenty of choice when it comes to handling the big architectural decisions.\n\nFor this tutorial, I'll assume that you are already familiar with React, and if so, you'll be up and running with Next.js in no time at all. If you are not familiar with React, I would suggest looking at resources such as the official React docs or taking a free React starter course to get familiar with the framework first.\n\n## What We're Building: Macro Compliance Tracker\n\nThe app we're building today is called the Macro Compliance Tracker. If you're like me, you probably had a New Years Resolution of *\"I'm going to get in better shape!\"* This year, I am taking that resolution seriously, and have gotten a person trainer and nutritionist. One interesting thing that I learned is that while the old adage of calories in needs to be less than calories out to lose weight is generally true, your macronutrients also play just as an important role in weight loss.\n\nThere are many great apps that help you track your calories and macros. Unfortunately, most apps do not allow you to track a range and another interesting thing that I learned in my fitness journey this year is that for many beginners trying to hit their daily macro goals is a challenge and many folks end up giving up when they fail to hit the exact targets consistently. For that reason, my coach suggests a target range for calories and macros rather than a hard set number.\n\nSo that's what we're building today. We'll use Next.js to build our entire application and MongoDB as our database to store our progress. Let's get into it!\n\n## Setting up a Next.js Application\n\nThe easiest way to create a Next.js application is by using the official create-next-app npx command. To do that we'll simply open up our Terminal window and type: `npx create-next-app mct`. \"mct\" is going to be the name of our application as well as the directory where our code is going to live.\n\nExecute this command and a default application will be created. Once the files are created navigate into the directory by running `cd mct` in the Terminal window and then execute `npm run dev`. This will start a development server for your Next.js application which you'll be able to access at `localhost:3000`.\n\nNavigate to `localhost:3000` and you should see a page very similar to the one in the above screenshot. If you see the Welcome to Next.js page you are good to go. If not, I would suggest following the Next.js docs and troubleshooting tips to ensure proper setup.\n\n## Next.js Directory Structure\n\nBefore we dive into building our application any further, let's quickly look at how Next.js structures our application. The default directory structure looks like this:\n\nThe areas we're going to be focused on are the pages, components, and public directories. The .next directory contains the build artifacts for our application, and we should generally avoid making direct changes to it.\n\nThe pages directory will contain our application pages, or another way to think of these is that each file here will represent a single route in our application. Our default app only has the index.js page created which corresponds with our home route. If we wanted to add a second page, for example, an about page, we can easily do that by just creating a new file called about.js. The name we give to the filename will correspond to the route. 
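As a quick illustration (the file names other than `index.js` and `about.js` are hypothetical), the file-to-route mapping works like this:

```javascript
// pages/index.js    ->  rendered at /
// pages/about.js    ->  rendered at /about
// pages/profile.js  ->  rendered at /profile (hypothetical example)
```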
So let's go ahead and create an `about.js` file in the pages directory.\n\nAs I mentioned earlier, Next.js is a React based framework, so all your React knowledge is fully transferable here. You can create components using either as functions or as classes. I will be using the function based approach. Feel free to grab the complete GitHub repo if you would like to follow along. Our About.js component will look like this:\n\n``` javascript\nimport React from 'react'\nimport Head from 'next/head'\nimport Nav from '../components/nav'\n\nconst About = () => (\n \n\n \n About\n \n \n\n \n\n \n\n \n\nMACRO COMPLIANCE TRACKER!\n\n \n\n This app will help you ensure your macros are within a selected range to help you achieve your New Years Resolution!\n \n\n \n\n \n\n)\n\nexport default About\n```\n\nGo ahead and save this file. Next.js will automatically rebuild the application and you should be able to navigate to `http://localhost:3000/about` now and see your new component in action.\n\nNext.js will automatically handle all the routing plumbing and ensure the right component gets loaded. Just remember, whatever you name your file in the pages directory is what the corresponding URL will be.\n\n## Adding Some Style with Tailwind.css\n\nOur app is looking good, but from a design perspective, it's looking pretty bare. Let's add Tailwind.css to spruce up our design and make it a little easier on the eyes. Tailwind is a very powerful CSS framework, but for brevity we'll just import the base styles from a CDN and won't do any customizations. To do this, we'll simply add `` in the Head components of our pages.\n\nLet's do this for our About component and also add some Tailwind classes to improve our design. Our next component should look like this:\n\n``` javascript\nimport React from 'react'\nimport Head from 'next/head'\nimport Nav from '../components/nav'\n\nconst About = () => (\n \n\n \n About\n \n \n \n\n \n\n \n Macro Compliance Tracker!\n \n This app will help you ensure your macros are within a selected range to help you achieve your New Years Resolution!\n \n\n \n\n \n)\n\nexport default About\n```\n\nIf we go and refresh our browser, the About page should look like this:\n\nGood enough for now. If you want to learn more about Tailwind, check out their official docs here.\n\nNote: If when you make changes to your Next.js application such as adding the `className`'s or other changes, and they are not reflected when you refresh the page, restart the dev server.\n\n## Creating Our Application\n\nNow that we have our Next.js application setup, we've gone through and familiarized ourselves with how creating components and pages works, let's get into building our Macro Compliance Tracker app. For our first implementation of this app, we'll put all of our logic in the main index.js page. Open the page up and delete all the existing Next.js boilerplate.\n\nBefore we write the code, let's figure out what features we'll need. We'll want to show the user their daily calorie and macro goals, as well as if they're in compliance with their targeted range or not. Additionally, we'll want to allow the user to update their information every day. Finally, we'll want the user to be able to view previous days and see how they compare.\n\nLet's create the UI for this first. We'll do it all in the Home component, and then start breaking it up into smaller individual components. 
Our code will look like this:\n\n``` javascript\nimport React from 'react'\nimport Head from 'next/head'\nimport Nav from '../components/nav'\n\nconst Home = () => (\n \n\n \n Home\n \n \n \n\n \n\n \n \n Macro Compliance Tracker\n \n\n \n\n \n Previous Day\n 1/23/2020\n Next Day\n \n\n \n \n 1850\n \n 1700\n 1850\n 2000\n \n \n Calories\n \n \n 195\n \n 150\n 160\n 170\n \n \n Carbs\n \n \n 55\n \n 50\n 60\n 70\n \n \n Fat\n \n \n 120\n \n 145\n 160\n 175\n \n \n Protein\n \n \n\n \n \n Results\n \n Calories\n \n \n \n Carbs\n \n \n \n Fat\n \n \n \n Protein\n \n \n \n \n Save\n \n \n \n \n Target\n \n Calories\n \n \n \n Carbs\n \n \n \n Fat\n \n \n \n Protein\n \n \n \n \n Save\n \n \n \n \n Variance\n \n Calories\n \n \n \n Carbs\n \n \n \n Fat\n \n \n \n Protein\n \n \n \n \n Save\n \n \n \n \n \n \n)\n\nexport default Home\n```\n\nAnd this will result in our UI looking like this:\n\nThere is a bit to unwind here. So let's take a look at it piece by piece. At the very top we have a simple header that just displays the name of our application. Next, we have our day information and selection options. After that, we have our daily results showing whether we are in compliance or not for the selected day. If we are within the suggested range, the background is green. If we are over the range, meaning we've had too much of a particular macro, the background is red, and if we under-consumed a particular macro, the background is blue. Finally, we have our form which allows us to update our daily results, our target calories and macros, as well as variance for our range.\n\nOur code right now is all in one giant component and fairly static. Next let's break up our giant component into smaller parts and add our front end functionality so we're at least working with non-static data. We'll create our components in the components directory and then import them into our index.js page component. Components we create in the components directory can be used across multiple pages with ease allowing us reusability if we add multiple pages to our application.\n\nThe first component that we'll create is the result component. The result component is the green, red, or blue block that displays our result as well as our target and variance ranges. Our component will look like this:\n\n``` javascript\nimport React, {useState, useEffect} from 'react'\nconst Result = ({results}) => {\n let bg, setBg] = useState(\"\");\n\n useEffect(() => {\n setBackground()\n });\n\n const setBackground = () => {\n let min = results.target - results.variant;\n let max = results.target + results.variant;\n\n if(results.total >= min && results.total <= max) {\n setBg(\"bg-green-500\");\n } else if ( results.total < min){\n setBg(\"bg-blue-500\");\n } else {\n setBg(\"bg-red-500\")\n }\n }\n\n return (\n \n {results.total}\n \n {results.target - results.variant}\n {results.target}\n {results.target + results.variant}\n \n \n {results.label}\n \n )\n }\n\nexport default Result\n```\n\nThis will allow us to feed this component dynamic data and based on the data provided, we'll display the correct background, as well as target ranges for our macros. We can now simplify our index.js page component by removing all the boilerplate code and replacing it with:\n\n``` xml\n\n \n \n \n \n\n```\n\nLet's also go ahead and create some dummy data for now. 
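Before we do, it may help to see the shape of the props the new `Result` component expects. A minimal, hypothetical usage from the page would look something like this (the numbers are placeholders):

```javascript
// Hypothetical usage of the Result component.
// Each results object needs a label, total, target, and variant;
// the component uses target and variant to compute the allowed range
// and pick the green, blue, or red background.
<Result results={{ label: "Calories", total: 1850, target: 1850, variant: 150 }} />
```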
We'll get to retrieving live data from MongoDB soon, but for now let's just create some data in-memory like so:\n\n``` javascript\nconst Home = () => {\n let data = {\n calories: {\n label: \"Calories\",\n total: 1840,\n target: 1840,\n variant: 15\n },\n carbs: {\n label: \"Carbs\",\n total: 190,\n target: 160,\n variant: 15\n },\n fat: {\n label: \"Fat\",\n total: 55,\n target: 60,\n variant: 10\n },\n protein: {\n label: \"Protein\",\n total: 120,\n target: 165,\n variant: 10\n }\n }\n\nconst [results, setResults] = useState(data);\n\nreturn ( ... )}\n```\n\nIf we look at our app now, it won't look very different at all. And that's ok. All we've done so far is change how our UI is rendered, moving it from hard coded static values, to an in-memory object. Next let's go ahead and make our form work with this in-memory data. Since our forms are very similar, we can create a component here as well and re-use the same component.\n\nWe will create a new component called MCTForm and in this component we'll pass in our data, a name for the form, and an onChange handler that will update the data dynamically as we change the values in the input boxes. Also, for simplicity, we'll remove the Save button and move it outside of the form. This will allow the user to make changes to their data in the UI, and when the user wants to lock in the changes and save them to the database, then they'll hit the Save button. So our Home component will now look like this:\n\n``` javascript\nconst Home = () => {\n let data = {\n calories: {\n label: \"Calories\",\n total: 1840,\n target: 1850,\n variant: 150\n },\n carbs: {\n label: \"Carbs\",\n total: 190,\n target: 160,\n variant: 15\n },\n fat: {\n label: \"Fat\",\n total: 55,\n target: 60,\n variant: 10\n },\n protein: {\n label: \"Protein\",\n total: 120,\n target: 165,\n variant: 10\n }\n }\n\n const [results, setResults] = useState(data);\n\n const onChange = (e) => {\n const data = { ...results };\n\n let name = e.target.name;\n\n let resultType = name.split(\" \")[0].toLowerCase();\n let resultMacro = name.split(\" \")[1].toLowerCase();\n\n data[resultMacro][resultType] = e.target.value;\n\n setResults(data);\n }\n\n return (\n \n\n \n Home\n \n \n \n\n \n\n \n \n Macro Compliance Tracker\n \n\n \n\n \n Previous Day\n 1/23/2020\n Next Day\n \n\n \n \n \n \n \n \n\n \n \n \n \n \n\n \n \n \n Save\n \n \n \n \n \n)}\n\nexport default Home\n```\n\nAside from cleaning up the UI code, we also added an onChange function that will be called every time the value of one of the input boxes changes. The onChange function will determine which box was changed and update the data value accordingly as well as re-render the UI to show the new changes.\n\nNext, let's take a look at our implementation of the `MCTForm` component.\n\n``` javascript\nimport React from 'react'\n\nconst MCTForm = ({data, item, onChange}) => {\n return(\n \n {item}\n \n Calories\n onChange(e)}>\n \n \n Carbs\n onChange(e)}>\n \n \n Fat\n onChange(e)}>\n \n \n Protein\n onChange(e)}>\n \n \n )\n}\n\nexport default MCTForm\n```\n\nAs you can see this component is in charge of rendering our forms. Since the input boxes are the same for all three types of forms, we can reuse the component multiple times and just change the type of data we are working with.\n\nAgain if we look at our application in the browser now, it doesn't look much different. But now the form works. 
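One detail worth calling out is the naming convention the `onChange` handler above depends on: it assumes each input's `name` attribute is a two-word string whose first word matches a key in our data model (total, target, or variant) and whose second word is the macro. A small sketch of that lookup, with an invented input name:

```javascript
// Hypothetical input name, e.g. from a form's Calories field.
const name = "Total Calories";

const resultType = name.split(" ")[0].toLowerCase();  // "total"
const resultMacro = name.split(" ")[1].toLowerCase(); // "calories"

// The handler then writes the new input value into data["calories"]["total"],
// which is the same object the Result components render from.
```

Because the handler writes straight back into the state that drives the `Result` components, the page re-renders as soon as a value changes.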
We can replace the values and the application will be dynamically updated showing our new total calories and macros and whether or not we are in compliance with our goals. Go ahead and play around with it for a little bit to make sure it all works.\n\n## Connecting Our Application to MongoDB\n\nOur application is looking good. It also works. But, the data is all in memory. As soon as we refresh our page, all the data is reset to the default values. In this sense, our app is not very useful. So our next step will be to connect our application to a database so that we can start seeing our progress over time. We'll use MongoDB and [MongoDB Atlas to accomplish this.\n\n## Setting Up Our MongoDB Database\n\nBefore we can save our data, we'll need a database. For this I'll use MongoDB and MongoDB Atlas to host my database. If you don't already have MongoDB Atlas, you can sign up and use it for free here, otherwise go into an existing cluster and create a new database. Inside MongoDB Atlas, I will use an existing cluster and set up a new database called MCT. With this new database created, I will create a new collection called daily that will store my daily results, target macros, as well as allowed variants.\n\nWith my database set up, I will also add a few days worth of data. Feel free to add your own data or if you'd like the dataset I'm using, you can get it here. I will use MongoDB Compass to import and view the data, but you can import the data however you want: use the CLI, add in manually, or use Compass.\n\nThanks to MongoDB's document model, I can represent the data exactly as I had it in-memory. The only additional fields I will have in my MongoDB model is an `_id` field that will be a unique identifier for the document and a date field that will represent the data for a specific date. The image below shows the data model for one document in MongoDB Compass.\n\nNow that we have some real data to work with, let's go ahead and connect our Next.js application to our MongoDB Database. Since Next.js is a React based framework that's running Node server-side we will use the excellent Mongo Node Driver to facilitate this connection.\n\n## Connecting Next.js to MongoDB Atlas\n\nOur pages and components directory renders both server-side on the initial load as well as client-side on subsequent page changes. The MongoDB Node Driver works only on the server side and assumes we're working on the backend. Not to mention that our credentials to MongoDB need to be secure and not shared to the client ever.\n\nNot to worry though, this is where Next.js shines. In the pages directory, we can create an additional special directory called api. In this API directory, as the name implies, we can create api endpoints that are executed exclusively on the backend. The best way to see how this works is to go and create one, so let's do that next. In the pages directory, create an api directory, and there create a new file called daily.js.\n\nIn the `daily.js` file, add the following code:\n\n``` javascript\nexport default (req, res) => {\n res.statusCode = 200\n res.setHeader('Content-Type', 'application/json')\n res.end(JSON.stringify({ message: 'Hello from the Daily route' }))\n}\n```\n\nSave the file, go to your browser and navigate to `localhost:3000/api/daily`. What you'll see is the JSON response of `{message:'Hello from the Daily route'}`. This code is only ever run server side and the only thing the browser receives is the response we send. 
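If you'd rather check the route without a browser, you can hit it from a tiny script as well. Here's a minimal sketch using Node's built-in `http` module (it assumes the dev server is still running on port 3000):

```javascript
// check-daily.js - quick sanity check of the new API route.
// Assumes `npm run dev` is serving the app at http://localhost:3000.
const http = require('http');

http.get('http://localhost:3000/api/daily', (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log(res.statusCode, body));
});
```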
This seems like the perfect place to set up our connection to MongoDB.\n\nWhile we can set the connection in this daily.js file, in a real world application, we are likely to have multiple API endpoints and for that reason, it's probably a better idea to establish our connection to the database in a middleware function that we can pass to all of our api routes. So as a best practice, let's do that here.\n\nCreate a new middleware directory at the root of the project structure alongside pages and components and call it middleware. The middleware name is not reserved so you could technically call it whatever you want, but I'll stick to middleware for the name. In this new directory create a file called database.js. This is where we will set up our connection to MongoDB as well as instantiate the middleware so we can use it in our API routes.\n\nOur `database.js` middleware code will look like this:\n\n``` javascript\nimport { MongoClient } from 'mongodb';\nimport nextConnect from 'next-connect';\n\nconst client = new MongoClient('{YOUR-MONGODB-CONNECTION-STRING}', {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\n\nasync function database(req, res, next) {\n req.dbClient = client;\n req.db = client.db('MCT');\n return next();\n}\n\nconst middleware = nextConnect();\n\nmiddleware.use(database);\n\nexport default middleware;\n```\n\nIf you are following along, be sure to replace the `{YOUR-MONGODB-CONNECTION-STRING}` variable with your connection string, as well as ensure that the client.db matches the name you gave your database. Also be sure to run `npm install --save mongodb next-connect` to ensure you have all the correct dependencies. Database names are case sensitive by the way. Save this file and now open up the daily.js file located in the pages/api directory.\n\nWe will have to update this file. Since now we want to add a piece of middleware to our function, we will no longer be using an anonymous function here. We'll utility next-connect to give us a handler chain as well as allow us to chain middleware to the function. Let's take a look at what this will look like.\n\n``` javascript\nimport nextConnect from 'next-connect';\nimport middleware from '../../middleware/database';\n\nconst handler = nextConnect();\n\nhandler.use(middleware);\n\nhandler.get(async (req, res) => {\n\n let doc = await req.db.collection('daily').findOne()\n console.log(doc);\n res.json(doc);\n});\n\nexport default handler;\n```\n\nAs you can see we now have a handler object that gives us much more flexibility. We can use different HTTP verbs, add our middleware, and more. What the code above does, is that it connects to our MongoDB Atlas cluster and from the MCT database and daily collection, finds and returns one item and then renders it to the screen. If we hit `localhost:3000/api/daily` now in our browser we'll see this:\n\nWoohoo! We have our data and the data model matches our in-memory data model, so our next step will be to use this real data instead of our in-memory sample. To do that, we'll open up the index.js page.\n\nOur main component is currently instantiated with an in-memory data model that the rest of our app acts upon. Let's change this. Next.js gives us a couple of different ways to do this. 
We can always get the data async from our React component, and if you've used React in the past this should be second nature, but since we're using Next.js I think there is a different and perhaps better way to do it.\n\nEach Next.js page component allows us to fetch data server-side thanks to a function called `getStaticProps`. When this function is called, the initial page load is rendered server-side, which is great for SEO. The page doesn't render until this function completes. In `index.js`, we'll make the following changes:\n\n``` javascript\nimport fetch from 'isomorphic-unfetch'\nconst Home = ({data}) => { ... }\n\nexport async function getStaticProps(context) {\n const res = await fetch(\"http://localhost:3000/api/daily\");\n const json = await res.json();\n return {\n props: {\n data: json,\n },\n };\n}\n\nexport default Home\n```\n\nInstall the `isomorphic-unfetch` library by running `npm install --save isomorphic-unfetch`, then below your Home component add the `getStaticProps` method. In this method we're just making a fetch call to our daily API endpoint and storing that json data in a prop called data. Since we created a data prop, we then pass it into our Home component, and at this point, we can go and remove our in-memory data variable. Do that, save the file, and refresh your browser.\n\nCongrats! Your data is now coming live from MongoDB. But at the moment, it's only giving us one result. Let's make a few final tweaks so that we can see daily results, as well as update the data and save it in the database.\n\n## View Macro Compliance Tracker Data By Day\n\nThe first thing we'll do is add the ability to hit the Previous Day and Next Day buttons and display the corresponding data. We won't be creating a new endpoint since I think our daily API endpoint can do the job, we'll just have to make a few enhancements. Let's do those first.\n\nOur new daily.js API file will look as such:\n\n``` javascript\nhandler.get(async (req, res) => {\n const { date } = req.query;\n\n const dataModel = { \"_id\": new ObjectID(), \"date\": date, \"calories\": { \"label\": \"Calories\", \"total\": 0, \"target\": 0, \"variant\": 0 }, \"carbs\": { \"label\": \"Carbs\", \"total\": 0, \"target\": 0, \"variant\": 0 }, \"fat\": { \"label\" : \"Fat\", \"total\": 0, \"target\": 0, \"variant\": 0 }, \"protein\": { \"label\" : \"Protein\", \"total\": 0, \"target\": 0, \"variant\": 0 }}\n\n let doc = {}\n\n if(date){\n doc = await req.db.collection('daily').findOne({date: new Date(date)})\n } else {\n doc = await req.db.collection('daily').findOne()\n }\n if(doc == null){\n doc = dataModel\n }\n res.json(doc)\n});\n```\n\nWe made a couple of changes here so let's go through them one by one. The first thing we did was we are looking for a date query parameter to see if one was passed to us. If a date parameter was not passed, then we'll just pick a random item using the `findOne` method. But, if we did receive a date, then we'll query our MongoDB database against that date and return the data for that specified date.\n\nNext, as our data set is not exhaustive, if we go too far forwards or backwards, we'll eventually run out of data to display, so we'll create an empty in-memory object that serves as our data model. If we don't have data for a specified date in our database, we'll just set everything to 0 and serve that. 
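One small thing the snippet above takes for granted: it calls `new ObjectID()` when building the empty data model, so the route file also needs that class imported from the MongoDB driver. A minimal sketch of the extra import (newer driver versions spell it `ObjectId`, so adjust to whatever version you have installed):

```javascript
// At the top of pages/api/daily.js, alongside the other imports.
// Note: recent versions of the Node.js driver export this as `ObjectId`.
import { ObjectID } from 'mongodb';
```

With that in place, any date we have no record for simply comes back as the zeroed-out model.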
This way we don't have to do a whole lot of error handling on the front and can always count on our backend to serve some type of data.\n\nNow, open up the `index.js` page and let's add the functionality to see the previous and next days. We'll make use of dayjs to handle our dates, so install it by running `npm install --save dayjs` first. Then make the following changes to your `index.js` page:\n\n``` javascript\n// Other Imports ...\nimport dayjs from 'dayjs'\n\nconst Home = ({data}) => {\n const results, setResults] = useState(data);\n\n const onChange = (e) => {\n }\n\n const getDataForPreviousDay = async () => {\n let currentDate = dayjs(results.date);\n let newDate = currentDate.subtract(1, 'day').format('YYYY-MM-DDTHH:mm:ss')\n const res = await fetch('http://localhost:3000/api/daily?date=' + newDate)\n const json = await res.json()\n\n setResults(json);\n }\n\n const getDataForNextDay = async () => {\n let currentDate = dayjs(results.date);\n let newDate = currentDate.add(1, 'day').format('YYYY-MM-DDTHH:mm:ss')\n const res = await fetch('http://localhost:3000/api/daily?date=' + newDate)\n const json = await res.json()\n\n setResults(json);\n }\n\nreturn (\n \n Previous Day\n {dayjs(results.date).format('MM/DD/YYYY')}\n Next Day\n \n\n)}\n```\n\nWe added two new methods, one to get the data from the previous day and one to get the data from the following day. In our UI we also made the date label dynamic so that it displays and tells us what day we are currently looking at. With these changes go ahead and refresh your browser and you should be able to see the new data for days you have entered in your database. If a particular date does not exist, it will show 0's for everything.\n\n![MCT No Data\n\n## Saving and Updating Data In MongoDB\n\nFinally, let's close out this tutorial by adding the final piece of functionality to our app, which will be to make updates and save new data into our MongoDB database. Again, I don't think we need a new endpoint for this, so we'll use our existing daily.js API. Since we're using the handler convention and currently just handle the GET verb, let's add onto it by adding logic to handle a POST to the endpoint.\n\n``` javascript\nhandler.post(async (req, res) => {\n let data = req.body;\n data = JSON.parse(data);\n data.date = new Date(data.date);\n let doc = await req.db.collection('daily').updateOne({date: new Date(data.date)}, {$set:data}, {upsert: true})\n\n res.json({message: 'ok'});\n})\n```\n\nThe code is pretty straightforward. We'll get our data in the body of the request, parse it, and then save it to our MongoDB daily collection using the `updateOne()` method. Let's take a closer look at the values we're passing into the `updateOne()` method.\n\nThe first value we pass will be what we match against, so in our collection if we find that the specific date already has data, we'll update it. The second value will be the data we are setting and in our case, we're just going to set whatever the front-end client sends us. Finally, we are setting the upsert value to true. What this will do is, if we cannot match on an existing date, meaning we don't have data for that date already, we'll go ahead and create a new record.\n\nWith our backend implementation complete, let's add the functionality on our front end so that when the user hits the Save button, the data gets properly updated. 
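If you'd like to sanity-check the new POST handler before wiring up the UI, you can send it a request directly. Here's a rough sketch you could run from the browser console (the date and numbers are made up, and the body is sent as a plain string because the handler calls `JSON.parse` on it):

```javascript
// Hypothetical test of POST /api/daily - all values below are placeholders.
fetch('http://localhost:3000/api/daily', {
  method: 'post',
  body: JSON.stringify({
    date: '2020-01-23T00:00:00',
    calories: { label: 'Calories', total: 1800, target: 1850, variant: 150 },
    carbs: { label: 'Carbs', total: 150, target: 160, variant: 15 },
    fat: { label: 'Fat', total: 60, target: 60, variant: 10 },
    protein: { label: 'Protein', total: 120, target: 165, variant: 10 }
  })
})
  .then((res) => res.json())
  .then((json) => console.log(json)); // the handler above responds with { message: 'ok' }
```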
Open up the index.js file and make the following\nchanges:\n\n``` javascript\nconst Home = ({data}) => {\n const updateMacros = async () => {\n const res = await fetch('http://localhost:3000/api/daily', {\n method: 'post',\n body: JSON.stringify(results)\n })\n }\n\n return (\n \n \n \n Save\n \n \n \n)}\n```\n\nOur new updateMacros method will make a POST request to our daily API endpoint with the new data. Try it now! You should be able to update existing macros or create data for new days that you don't already have any data for. We did it!\n\n## Putting It All Together\n\nWe went through a lot in today's tutorial. Next.js is a powerful framework for building modern web applications and having a flexible database powered by MongoDB made it possible to build a fully fledged application in no time at all. There were a couple of items we omitted for brevity such as error handling and deployment, but feel free to clone the application from GitHub, sign up for MongoDB Atlas for free, and build on top of this foundation.", "format": "md", "metadata": {"tags": ["JavaScript", "Next.js"], "pageDescription": "Learn how to couple Next.js and MongoDB for your next-generation applications.", "contentType": "Tutorial"}, "title": "Building Modern Applications with Next.js and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/integrating-mongodb-amazon-apache-kafka", "action": "created", "body": "# Integrating MongoDB with Amazon Managed Streaming for Apache Kafka (MSK)\n\nAmazon Managed Streaming for Apache Kafka (MSK) is a fully managed, highly available Apache Kafka service. MSK makes it easy to ingest and process streaming data in real time and leverage that data easily within the AWS ecosystem. By being able to quickly stand up a Kafka solution, you spend less time managing infrastructure and more time solving your business problems, dramatically increasing productivity. MSK also supports integration of data sources such as MongoDB via the AWS MSK Connect (Connect) service. This Connect service works with the MongoDB Connector for Apache Kafka, enabling you to easily integrate MongoDB data. \n\nIn this blog post, we will walk through how to set up MSK, configure the MongoDB Connector for Apache Kafka, and create a secured VPC Peered connection with MSK and a MongoDB Atlas cluster. The high-level process is as follows:\n\n* Configure Amazon Managed Streaming for Apache Kafka\n* Configure EC2 client \n* Configure a MongoDB Atlas Cluster\n* Configure Atlas Private Link including VPC and subnet of the MSK\n* Configure plugin in MSK for MongoDB Connector\n* Create topic on MSK Cluster\n* Install MongoSH command line tool on client\n* Configure MongoDB Connector as a source or sink\n\nIn this example, we will have two collections in the same MongoDB cluster\u2014the \u201csource\u201d and the \u201csink.\u201d We will insert sample data into the source collection from the client, and this data will be consumed by MSK via the MongoDB Connector for Apache Kafka running as an MSK connector. As data arrives in the MSK topic, another instance of the MongoDB Connector for Apache Kafka will write the data to the MongoDB Atlas cluster \u201csink\u201d collection. To align with best practices for secure configuration, we will set up an AWS Network Peered connection between the MongoDB Atlas cluster and the VPC containing MSK and the client EC2 instance. 
\n\n## Configure AWS Managed Service for Kafka\n\nTo create an Amazon MSK cluster using the AWS Management Console, sign in to the AWS Management Console, and open the Amazon MSK console.\n\n* Choose Create cluster and select Quick create.\n\nFor Cluster name, enter MongoDBMSKCluster.\n\nFor Apache Kafka version, select one that is 2.6.2 or above.\n\nFor broker type, select kafla.t3.small.\n\nFrom the table under All cluster settings, copy the values of the following settings and save them because you will need them later in this blog:\n\n* VPC\n* Subnets\n* Security groups associated with VPC\n\n* Choose \u201cCreate cluster.\u201d\n\n## Configure an EC2 client\n\nNext, let's configure an EC2 instance to create a topic. This is where the MongoDB Atlas source collection will write to. This client can also be used to query the MSK topic and monitor the flow of messages from the source to the sink.\n\nTo create a client machine, open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.\n\n* Choose Launch instances.\n* Choose Select to create an instance of Amazon Linux 2 AMI (HVM) - Kernel 5.10, SSD Volume Type.\n* Choose the t2.micro instance type by selecting the check box.\n* Choose Next: Configure Instance Details.\n* Navigate to the Network list and choose the VPC whose ID you saved in the previous step.\n* Go to Auto-assign Public IP list and choose Enable.\n* In the menu near the top, select Add Tags.\n* Enter Name for the Key and MongoDBMSKCluster for the Value.\n* Choose Review and Launch, and then click Launch.\n* Choose Create a new key pair, enter MongoDBMSKKeyPair for Key pair name, and then choose Download Key Pair. Alternatively, you can use an existing key pair if you prefer.\n* Start the new instance by pressing Launch Instances.\n\nNext, we will need to configure the networking to allow connectivity between the client instance and the MSK cluster.\n\n* Select View Instances. Then, in the Security Groups column, choose the security group that is associated with the MSKTutorialClient instance.\n* Copy the name of the security group, and save it for later.\n* Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.\n* In the navigation pane, click on Security Groups. Find the security group whose ID you saved in Step 1 (Create an Amazon MSK Cluster). Choose this row by selecting the check box in the first column.\n* In the Inbound Rules tab, choose Edit inbound rules.\n* Choose Add rule.\n* In the new rule, choose All traffic in the Type column. In the second field in the Source column, select the security group of the client machine. This is the group whose name you saved earlier in this step.\n* Click Save rules.\n\nThe cluster's security group can now accept traffic that comes from the client machine's security group.\n\n## Create MongoDB Atlas Cluster\n\nTo create a MongoDB Atlas Cluster, follow the Getting Started with Atlas tutorial. Note that in this blog, you will need to create an M30 Atlas cluster or above\u2014as VPC peering is not available for M0, M2, and M5 clusters.\n\nOnce the cluster is created, configure an AWS private endpoint in the Atlas Network Access UI supplying the same subnets and VPC. 
\n\n* Click on Network Access.\n* Click on Private Endpoint, and then the Add Private Endpoint button.\n\n* Fill out the VPC and subnet IDs from the previous section.\n\n* SSH into the client machine created earlier and issue the following command in the Atlas portal: **aws ec2 create-vpc-endpoint **\n * Note that you may have to first configure the AWS CLI command using **aws configure** before you can create the VPC through this tool. See Configuration Basics for more information.\n\n## Configure MSK plugin\n\nNext, we need to create a custom plugin for MSK. This custom plugin will be the MongoDB Connector for Apache Kafka. For reference, note that the connector will need to be uploaded to an S3 repository **before** you can create the plugin. You can download the MongoDB Connector for Apache Kafka from Confluent Hub.\n\n* Select \u201cCreate custom plugin\u201d from the Custom Plugins menu within MSK.\n* Fill out the custom plugin form, including the S3 location of the downloaded connector, and click \u201cCreate custom plugin.\u201d\n\n## Create topic on MSK cluster\n\nWhen we start reading the data from MongoDB, we also need to create a topic in MSK to accept the data. On the client EC2 instance, let\u2019s install Apache Kafka, which includes some basic tools. \n\nTo begin, run the following command to install Java:\n\n**sudo yum install java-1.8.0**\n\nNext, run the command below to download Apache Kafka.\n\n**wget https://archive.apache.org/dist/kafka/2.6.2/kafka_2.12-2.6.2.tgz**\n\nBuilding off the previous step, run this command in the directory where you downloaded the TAR file:\n\n**tar -xzf kafka_2.12-2.6.2.tgz**\n\nThe distribution of Kafka includes a **bin** folder with tools that can be used to manage topics. Go to the **kafka_2.12-2.6.2** directory.\n\nTo create the topic that will be used to write MongoDB events, issue this command:\n\n`bin/kafka-topics.sh --create --zookeeper (INSERT YOUR ZOOKEEPER INFO HERE)--replication-factor 1 --partitions 1 --topic MongoDBMSKDemo.Source`\n\nAlso, remember that you can copy the Zookeeper server endpoint from the \u201cView Client Information\u201d page on your MSK Cluster. In this example, we are using plaintext.\n\n## Configure source connector\n\nOnce the plugin is created, we can create an instance of the MongoDB connector by selecting \u201cCreate connector\u201d from the Connectors menu.\n\n* Select the MongoDB plug in and click \u201cNext.\u201d\n* Fill out the form as follows:\n\nConector name: **MongoDB Source Connector**\n\nCluster Type: **MSK Connector**\n\nSelect the MSK cluster that was created previously, and select \u201cNone\u201d under the authentication drop down menu.\n\nEnter your connector configuration (shown below) in the configuration settings text area.\n\n`connector.class=com.mongodb.kafka.connect.MongoSourceConnector\ndatabase=MongoDBMSKDemo\ncollection=Source\ntasks.max=1\nconnection.uri=(MONGODB CONNECTION STRING HERE)\nvalue.converter=org.apache.kafka.connect.storage.StringConverter\nkey.converter=org.apache.kafka.connect.storage.StringConverter`\n\n**Note**: You can find your Atlas connection string by clicking on the Connect button on your Atlas cluster. 
Select \u201cPrivate Endpoint\u201d if you have already configured the Private Endpoint above, then press \u201cChoose a connection method.\u201d Next, select \u201cConnect your application\u201d and copy the **mongodb+srv** connection string.\n\nIn the \u201cAccess Permissions\u201d section, you will need to create an IAM role with the required trust policy.\n\nOnce this is done, click \u201cNext.\u201d The last section will offer you the ability to use logs\u2014which we highly recommend, as it will simplify the troubleshooting process. \n\n## Configure sink connector\n\nNow that we have the source connector up and running, let\u2019s configure a sink connector to complete the round trip. Create another instance of the MongoDB connector by selecting \u201cCreate connector\u201d from the Connectors menu.\n\nSelect the same plugin that was created previously, and fill out the form as follows: \n\nConnector name: **MongoDB Sink Connector**\nCluster type: **MSK Connector**\nSelect the MSK cluster that was created previously and select \u201cNone\u201d under the authentication drop down menu.\nEnter your connector configuration (shown below) in the Configuration Settings text area.\n\n`connector.class=com.mongodb.kafka.connect.MongoSinkConnector\ndatabase=MongoDBMSKDemo\ncollection=Sink\ntasks.max=1\ntopics=MongoDBMSKDemo.Source\nconnection.uri=(MongoDB Atlas Connection String Goes Here)\nvalue.converter=org.apache.kafka.connect.storage.StringConverter\nkey.converter=org.apache.kafka.connect.storage.StringConverter`\n\nIn the Access Permissions section, select the IAM role created earlier that has the required trust policy. As with the previous connector, be sure to leverage a log service like CloudWatch. \n\nOnce the connector is successfully configured, we can test the round trip by writing to the Source collection and seeing the same data in the Sink collection. \n\nWe can insert data in one of two ways: either through the intuitive Atlas UI, or with the new MongoDB Shell (mongosh) command line tool. Using mongosh, you can interact directly with a MongoDB cluster to test queries, perform ad hoc database operations, and more. \n\nFor your reference, we\u2019ve added a section below on how to use mongosh on your client EC2 instance. \n\n## Install MongoDB shell on client\n\nOn the client EC2 instance, create a **/etc/yum.repos.d/mongodb-org-5.0.repo** file by typing:\n\n`sudo nano /etc/yum.repos.d/mongodb-org-5.0.repo`\n\nPaste in the following:\n\n`[mongodb-org-5.0]\nname=MongoDB Repository\nbaseurl=https://repo.mongodb.org/yum/amazon/2/mongodb-org/5.0/x86_64/\ngpgcheck=1\nenabled=1\ngpgkey=https://www.mongodb.org/static/pgp/server-5.0.asc`\n\nNext, install the mongosh shell with this command:\n\n`sudo yum install -y mongodb-mongosh`\n\nUse the template below to connect to your MongoDB cluster via mongosh: \n\n`mongosh \"(paste in your Atlas connection string here)\"`\n\nOnce connected, type:\n`use MongoDBMSKDemo\ndb.Source.insertOne({\"Testing\":123})`\n\nTo check the data in the sink collection, use this command:\n`db.Sink.find({})`\n\nIf you run into any issues, be sure to check the log files. In this example, we used CloudWatch to read the events that were generated from MSK and the MongoDB Connector for Apache Kafka.\n\n## Summary\n\nAmazon Managed Streaming for Apache Kafka (MSK) is a fully managed, secure, and highly available Apache Kafka service that makes it easy to ingest and process streaming data in real time. 
MSK allows you to import Kafka connectors such as the MongoDB Connector for Apache Kafka. These connectors make working with data sources seamless within MSK. In this article, you learned how to set up MSK, MSK Connect, and the MongoDB Connector for Apache Kafka. You also learned how to set up a MongoDB Atlas cluster and configure it to use AWS network peering. To continue your learning, check out the following resources:\n\nMongoDB Connector for Apache Kafka Documentation\nAmazon MSK Getting Started\nAmazon MSK Connect Getting Started", "format": "md", "metadata": {"tags": ["Java", "MongoDB", "Kafka"], "pageDescription": "In this article, learn how to set up Amazon MSK, configure the MongoDB Connector for Apache Kafka, and how it can be used as both a source and sink for data integration with MongoDB Atlas running in AWS.", "contentType": "Tutorial"}, "title": "Integrating MongoDB with Amazon Managed Streaming for Apache Kafka (MSK)", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/christmas-2021-mongodb-data-api", "action": "created", "body": "# Christmas Lights and Webcams with the MongoDB Data API\n\n> This tutorial discusses the preview version of the Atlas Data API which is now generally available with more features and functionality. Learn more about the GA version here.\n\nWhen I set out to demonstrate how the MongoDB Atlas Data API allows modern microcontrollers to communicate directly with MongoDB Atlas, I initially built a pager. You know, those buzzy things that go off during the holiday season to let you know there is bad news and you need to go fix something right now. I quickly realized this was not what people wanted to be reminded of during the holiday season, nor did it allow everyone viewing to interact\u2026 Well, I could let them page me and interrupt my holiday, but how would they know I got it? In the end, I decided to put my Christmas tree on the internet instead.\n\nLooking at the problem, it meant I needed to provide two parts: a way to control the tree lights using Atlas and an API, and a way to view the tree. In this holiday special article, I describe how to do both: create API-controlled fairy lights, and build a basic MongoDB-powered IP surveillance camera.\n\nBefore I bore you with details of breadboard voltages, SRAM banks, and base64 encoding, here is a link to the live view of the tree with details of how you can change the light colours. It may take a few seconds before you see your change.\n\n \n \n#### https://xmastree-lpeci.mongodbstitch.com/\n \nThis is not a step-by-step tutorial. I'm sorry but that would be too long. However, if you are familiar with Arduino and other Maker tools, or are prepared to google a few how-tos, it should provide you with all the information you need to create your own setup. Otherwise, it's simply a fascinating read about tiny computers and their challenges.\n\nThe MongoDB Atlas Data APIis an HTTPS-based API that allows us to read and write data in Atlas, where a MongoDB driver library is either not available or not desirable. In this case, I am looking at how to call it from an ESP32 Microcontroller using the Arduino APIs and C++/Wiring.\n\n## Prerequisites\n\nYou will need the Arduino IDE to upload code to our microcontrollers.\n\nYou will also need an Atlas cluster for which you have enabled the Data API, and our endpoint URL and API key. 
You can learn how to get these in this article or this video if you do not have them already.\n\nIf you want to directly upload the Realm application to enable the API and viewer, you will need the Realm command-line interface.\n\nYou will also need the following hardware or similar:\n\n##### Lights\n\n* JZK ESP-32S ESP32 Development Board ($10 Here)\n* Neopixel compatible light string ($20 Here)\n* Breadboard, Power Regulator, and Dupont Cables ($5 here)\n* 9v x 3A Power supply ($14 here)\n* 1000 microfarad capacitor\n* 330 ohm resistor\n\n##### Webcam\n\n* ESP32 AI Thinker Camera ($10 here)\n* USB Power Supply\n\n### Creating the Christmas Light Hardware\n\nNeopixel 24-bit RGB individually addressable LEDs have become somewhat ubiquitous in maker projects. Aside from occasional finicky power requirements, they are easy to use and well supported and need only power and one data line to be connected. Neopixels and clones have also dropped in price dramatically since I first tried to make smart Christmas lights back in (checks email\u2026) 2014, when I paid $14 for four of them and even then struggled to solder them nicely. I saw a string of 50 LEDs at under $20 on very fine wire and thought I had to try again with the Christmas tree idea.\n\nNeopixels on a String\n\nOverall, the circuit for the tree is very simple. Neopixels don't need a lot of supporting hardware, just a capacitor across the power cables to avoid any sudden power spikes. They do need to be run at 5v, though. Initially, I was using 3.3v as that was what the ESP board data pins output, but that resulted in the blue colour being dim as underpowered and equal red, green, and blue values giving an orange colour rather than white.\n\nSince moving to an all 5v power circuit and 3.3v just on the data line, it's a much better colour, although given the length and fineness of the wire, you can see the furthest away neopixels are dimmer, especially in blue. Looping wires from the end back to the start like a household ring-main would be a good fix for this but I'd already cut the JST connector off at the top.\n\nMy board isn't quite as neatly laid out. I had to take 5v from the power pins directly. It works just as well though. (DC Power Supply not shown in the image, Pixel string out of shot.)\n\n## Developing the Controller Software for the ESP32\n\nSource Code Location: \nhttps://github.com/mongodb-developer/xmas_2021_tree_camera/tree/main/ESP32-Arduino/mongo_xmastree\n\nI used the current Arduino IDE to develop the controller software as it is so well supported and there are many great resources to help. I only learned about the 2.0 beat after I finished it.\n\nArduino IDE is designed to be simple and easy to use\n\nAfter the usual messing about and selecting a neopixel library (my first two choices didn't work correctly as neopixels are unusually sensitive to the specific processor and board due to strict timing requirements), I got them to light up and change colour, so I set about pulling the colour values from Atlas.\n\nUnlike the original Arduino boards with their slow 16 bit CPUs, today's ESP32 are fast and have plenty of RAM (500KB!), and built-in WiFi. It also has some hardware support for TLS encryption calculations. For a long time, if you wanted to have a Microcontroller talk to the Internet, you had to use an unencrypted HTTP connection, but no more. ESP32 boards can talk to anything.\n\nLike most Makers, I took some code I already had, read some tutorial hints and tips, and mashed it all together. 
And it was surprisingly easy to put this code together. You can see what I ended up with here (and no, that's not my real WiFi password).\n\nThe `setup() `function starts a serial connection for debugging, connects to the WiFi network, initialises the LED string, and sets the clock via NTP. I'm not sure that's required but HTTPS might want to check certificate expiry and ESP32s have no real-time clock.\n\nThe` loop()` function just checks if 200ms have passed, and if so, calls the largest function, getLightDefinition().\n\nI have to confess, the current version of this code is sub-optimal. I optimised the camera code as you will see later but didn't backport the changes. This means it creates and destroys an SSL and then HTTPS connection every time it's called, which on this low powered hardware can take nearly a second. But I didn't need a super-fast update time here.\n\n## Making an HTTPS Call from an ESP32\n\nOnce it creates a WiFiClientSecure, it then sets a root CA certificate for it. This is required to allow it to make an HTTPS connection. What I don't understand is why *this* certificate works as it's not in the chain of trust for Atlas. I suspect the ESP32 just ignores cases where it cannot validate the server, but it does demand to be given some form of root CA. Let me know if you have an answer to that.\n\nOnce you have a WiFiClientSecure, which encapsulates TLS rather than TCP, the rest of the code is the same as an HTTP connection, but you pass the TLS-enabled WiFiClientSecure in the constructor of the HTTPClient object. And of course, give it an HTTPS URL to work with.\n\nTo authenticate to the Data API, all I need to do is pass a header called \"api-key\" with the Atlas API key. This is very simple and can be done with the following fragment.\n\n```\nHTTPClient https;\n\n if (https.begin(*client, AtlasAPIEndpoint)) { // HTTPS\n /* Headers Required for Data API*/\n https.addHeader(\"Content-Type\", \"application/json\");\n https.addHeader(\"api-key\", AtlasAPIKey);\n```\n\nThe Data API uses POSTed JSON for all calls. This makes it cleaner for more complex requests and avoids any caching issues. You could argue that find() operations, which are read-only, should use GET, but that would require passing JSON as part of the URL and that is ugly and has security and size limitations, so all the calls use POST and a JSON Body.\n\n## Writing JSON on an ESP32 Using Arduino JSON\n\nI was amazed at how easy and efficient the ArduinoJSON library was to use. If you aren't used to computers with less than 1MB of total RAM and a 240MHz CPU, you may think of JSON as just a good go-to data format. But the truth is JSON is far from efficient when it comes to processing. This is one reason MongoDB uses BSON. I think only XML takes more CPU cycles to read and write than JSON does. Beno\u00eet Blanchon has done an amazing job developing this lightweight and efficient but comprehensive library.\n\nThis might be a good time to mention that although ESP32-based systems can run https://micropython.org/, I chose to build this using the Arduino IDE and C++/Wiring. 
This is a bit more work but possibly required for some of the libraries I used.\n\nThis snippet shows what a relatively small amount of code is required to create a JSON payload and call the Atlas Data API to get the latest light pattern.\n\n```\n DynamicJsonDocument payload (1024);\n payload\"dataSource\"] = \"Cluster0\";\n payload[\"database\"] = \"xmastree\";\n payload[\"collection\"] = \"patterns\";\n payload[\"filter\"][\"device\"] = \"tree_1\";\n if(strcmp(lastid,\"0\")) payload[\"filter\"][\"_id\"][\"$gt\"][\"$oid\"] = lastid;\n payload[\"limit\"] = 1;\n payload[\"sort\"][\"_id\"] = -1;\n\n String JSONText;\n size_t JSONlength = serializeJson(payload, JSONText);\n Serial.println(JSONText);\n int httpCode = https.sendRequest(\"POST\", JSONText);\n\n```\n\n## Using Explicit BSON Data Types to Search via EJSON\n\nTo avoid fetching the light pattern every 500ms, I included a query to say only fetch the latest pattern` sort({_id:1).limit(1)` and only if the _id field is greater than the last one I fetched. My _id field is using the default ObjectID data type, which means as I insert them, they are increasing in value automatically.\n\nNote that to search for a field of type ObjectID, a MongoDB-specific Binary GUID data type, I had to use Extended JSON (EJSON) and construct a query that goes` { _id : { $gt : {$oid : \"61bb4a79ee3a9009e25f9111\"}}}`. If I used just `{_id:61bb4a79ee3a9009e25f9111\"}`, Atlas would be searching for that string, not for an ObjectId with that binary value.\n\n## Parsing a JSON Payload with Arduino and ESP32\n\nThe [ArduinoJSON library also made parsing my incoming response very simple too\u2014both to get the pattern of lights but also to get the latest value of _id to use in future queries. Currently, the Data API only returns JSON, not EJSON, so you don't need to worry about parsing any BSON types\u2014for example, our ObjectId. \n\n```\nif (httpCode == HTTP_CODE_OK || httpCode == HTTP_CODE_MOVED_PERMANENTLY) {\n String payload = https.getString();\n DynamicJsonDocument description(32687);\n DeserializationError error = deserializeJson(description, payload);\n if (error) {\n Serial.println(error.f_str());\n delete client; \n return;\n }\n\n if(description\"documents\"].size() == 0) {\n Serial.println(\"No Change to Lights\"); \n delete client; return;}\n \n JsonVariant lights = description[\"documents\"][0][\"state\"];\n if(! lights.is()) {\n Serial.println(\"state is not an array\");\n delete client;\n return;\n }\n \n setLights(lights.as());\n strncpy(lastid,description[\"documents\"][0][\"_id\"],24); \n```\n\nUsing ArduinoJSON, I can even pass a JSONArray to a function without needing to know what it is an array of. I can inspect the destination function and deal with different data types appropriately. I love when libraries in strongly typed languages like C++ that deal with dynamic data structures provide this type of facility.\n\nThis brings us to the last part of our lights code: setting the lights. 
This makes use of JsonVariant, a type you can inspect and convert at runtime to the C++ type you need.\n\n```\n\nvoid setLights(const JsonArray& lights)\n{\n Serial.println(lights.size());\n int light_no;\n for (JsonVariant v : lights) {\n int r = (int) v[\"r\"].as();\n int g = (int) v[\"g\"].as();\n int b = (int) v[\"b\"].as();\n RgbColor light_colour(r,g,b);\n strip.SetPixelColor(light_no,light_colour); \n light_no++;\n }\n Serial.println(\"Showing strip\");\n strip.Show();\n}\n\n```\n\n## Creating a Pubic Lighting Control API with MongoDB Realm\n\nWhilst I could set the values of the lights with the MongoDB Shell, I wanted a safe way to allow anyone to set the lights. The simplest and safest way to do that was to create an API. And whilst I could have used the Amazon API Gateway and AWS Lambda, I chose to use hosted functions in Realm instead. After all, I do work for MongoDB.\n\nI created an HTTPS endpoint in the Realm GUI, named it /lights, marked it as requiring a POST, and that it sends a response. I then, in the second part, said it should call a function.\n\n![\n\nI then added the following function, running as system, taking care to sanitise the input and take only what I was expecting from it.\n\n```\n// This function is the endpoint's request handler.\nexports = async function({ query, headers, body}, response) {\n\n try {\n const payload = JSON.parse(body.text());\n console.log(JSON.stringify(payload))\n state = payload.state;\n if(!state) { response.StatusCode = 400; return \"Missing state\" }\n \n if(state.length != 50) { response.StatusCode = 400; return \"Must be 50 states\"}\n \n newstate = ];\n for(x=0;x255||g<0||g>255||b<0||b>255) { response.StatusCode = 400; return \"Value out of range\"}\n newstate.push({r,g,b})\n }\n \n \n doc={device:\"tree_1\",state:newstate};\n const collection = context.services.get(\"mongodb-atlas\").db(\"xmastree\").collection(\"patterns\")\n rval = await collection.insertOne(doc)\n response.StatusCode = 201; return rval;\n } catch(e) {\n console.error(e);\n response.StatusCode = 500; return `Internal error, Sorry. ${e}`;\n }\n return \"Eh?\"\n};\n```\n\nI now had the ability to change the light colours by posting to the URL shown on the web page.\n\n## Creating the Webcam Hardware\n\nThis was all good, and if you were making a smart Christmas tree for yourself, you could stop there. But I needed to allow others to see it. I had honestly considered just an off-the-shelf webcam and a Twitch stream, but I stumbled across what must be the bargain of the decade: the AI Thinker ESP32 Cam. These are super low-cost ESP32 chips with a camera, an SD card slot, a bright LED light, and enough CPU and RAM to do some slow but capable AI inferencing\u2014for example, to recognize faces\u2014and they cost $10 or less. They are in two parts. The camera board is ready to plug into a breadboard, which has no USB circuitry, so you need a USB to FTDI programmer or similar and a nice USB to FTDI docking station you can use to program it. 
And if you just want to power it from USB as I do, this adds a reset button too.\n\n**ESP CAM: 160MHz CPU, 4MB RAM + 520K Fast RAM, Wifi, Bluetooth, Camera, SD Card slot for $10**\n\nThere was nothing I had to do for the hardware except clip these two components together, plug in a USB cable (being careful not to snap the USB socket off as I did the first time I tried), and mount it on a tripod.\n\n## Writing the Webcam Software\n\nSource Code: https://github.com/mongodb-developer/xmas_2021_tree_camera/tree/main/ESP32-Arduino/mongo_cam\n\nCalling the Data API with a POST should have been just the same as it was for the lights: a different endpoint to insert images into a collection rather than find them, but otherwise the same. However, this time I hit some other challenges. After a lot of searching, debugging, and reading the library source, I'd like to highlight the difficult parts so that if you do anything like this, it will help.\n\n## Sending a Larger Payload with ESP32 HTTP POST by Using a JSONStream\n\nI quickly discovered that unless the image resolution was configured to be tiny, the POST requests failed, arriving mangled at the Data API. Researching, I found there was a size limit on the size of a POST imposed by the HTTP library. If the payload was supplied as a string, it would be passed to the TLS layer, which had a limit and only posted part of it. The HTTP layer, then, rather than sending the next part, simply returned an error. This seemed to kick in at about 14KB of data.\n\nReading the source, I realised this did not happen if, instead of posting the body as a string, you sent a stream\u2014a class like a filehandle or a queue that the consumer can query for data until it's empty. The HTTP library, in this case, would send the whole buffer\u2014only 1.4KB at a time, but it would keep sending as long as the latency to the Data API was low. This would work admirably.\n\nI, therefore, wrote a stream class that converted a JSONObject to a stream of its string representation. 
\n\n```\nclass JSONStream: public Stream {\n private:\n uint8_t *buffer;\n size_t buffer_size;\n size_t served;\n int start;\n int end;\n\n public:\n JSONStream(DynamicJsonDocument &payload ) {\n int jsonlen = measureJson(payload);\n this->buffer = (uint8_t*) heap_caps_calloc(jsonlen + 1, 1, MALLOC_CAP_8BIT);\n this->buffer_size = serializeJson(payload, this->buffer, jsonlen + 1);\n this->served = 0;\n this->start = millis();\n }\n ~JSONStream() {\n heap_caps_free((void*)this->buffer);\n }\n\n void clear() {}\n size_t write(uint8_t) { return 0; } // Required by Stream but unused; we only read from this stream\n int available() {\n size_t whatsleft = buffer_size - served;\n if (whatsleft == 0) return -1;\n return whatsleft;\n }\n int peek() {\n return 0;\n }\n void flush() { }\n int read() { return -1; } // Single-byte reads are not used; readBytes() does the work\n size_t readBytes(uint8_t *outbuf, size_t nbytes) {\n //Serial.println(millis()-this->start);\n if (nbytes > buffer_size - served) {\n nbytes = buffer_size - served;\n }\n memcpy(outbuf, buffer + served, nbytes);\n served = served + nbytes;\n return nbytes;\n }\n\n};\n```\n\nI could then pass an ArduinoJson document to it and stream out its JSON string.\n\n```\nDynamicJsonDocument payload (1024);\npayload[\"dataSource\"] = \"Cluster0\";\npayload[\"database\"] = \"espcam\";\npayload[\"collection\"] = \"frames\";\ntime_t nowSecs = time(nullptr);\n\nchar datestring[32];\nsprintf(datestring, \"%lu000\", nowSecs);\n\npayload[\"document\"][\"time\"][\"$date\"][\"$numberLong\"] = datestring; /*Encode Date() as EJSON*/\n\nconst char* base64Image = base64EncodeImage(fb) ;\npayload[\"document\"][\"img\"][\"$binary\"][\"base64\"] = base64Image; /*Encode as a Binary() */\npayload[\"document\"][\"img\"][\"$binary\"][\"subType\"] = \"07\";\n\nJSONStream *buffer = new JSONStream(payload);\n\nint httpCode = https.sendRequest(\"POST\", buffer, buffer->available());\n```\n\n## Allocating more than 32KB RAM on an ESP32 Using Capability Defined RAM\n\nThis was simple, generally, except where I tried to allocate 40KB of RAM using malloc and discovered that the default allocation came from a region too small to hold it. I, therefore, had to use heap_caps_calloc() with MALLOC_CAP_8BIT to be more specific about the exact place I wanted my RAM allocated. And of course, I had to use the associated heap_caps_free() to free it. This is doubly important in something that has both SRAM and PSRAM with different speeds and hardware access paths.\n\n## Sending Dates to the Data API with a Microcontroller and C++\n\nA pair of related challenges I ran into involved sending data that wasn't text or numbers. I needed a date in my documents so I could use a TTL index to delete them once they were a few days old. Holding a huge number of images would quickly fill my free tier data quota. This is easy with EJSON. You send JSON of the form `{ $date: { $numberLong: \"xxxxxxxxx\"}}` where the string is the number of milliseconds since 1-1-1970. Sounds easy enough. However, being a 32-bit machine, the ESP32 really didn't like printing 64-bit numbers, and I tried a lot of bit-shifting, masking, and printing two 32-bit unsigned numbers until I realised I could simply print the 32-bit seconds since 1-1-1970 and add \"000\" on the end.\n\n## Base 64 Encoding on the ESP32 to Send Via JSON\n\nThe other was how to send a Binary() datatype to MongoDB to hold the image. The EJSON representation of that is `{$binary: {base64: \"Base 64 String of Data\"}}` but it was very unclear how to get an ESP32 to do base64 encoding. 
Many people seemed to have written their own, and things I tried failed until I eventually found a working library and applied what I know about allocating capability memory. That led me to the code below. This can be easily adapted to any binary buffer if you also know the length.\n\n```\n#include \"mbedtls/base64.h\"\n\nconst char* base64EncodeImage(camera_fb_t *fb)\n{\n /* Base 64 encode the image - this was the simplest way*/\n unsigned char* src = fb->buf;\n size_t slen = fb->len;\n size_t dlen = 0;\n\n int err = mbedtls_base64_encode(NULL, 0 , &dlen, src, slen);\n /* For a larger allocation like thi you need to use capability allocation*/\n const char *dst = (char*) heap_caps_calloc(dlen, 1, MALLOC_CAP_8BIT);\n\n size_t olen;\n err = mbedtls_base64_encode((unsigned char*)dst, dlen , &olen, src, slen);\n\n if (err != 0) {\n Serial.printf(\"error base64 encoding, error %d, buff size: %d\", err, olen);\n return NULL;\n }\n return dst;\n}\n```\n\n## Viewing the Webcam Images with MongoDB Realm\n\nHaving put all that together, I needed a way to view it. And for this, I decided rather than create a web service, I would use Realm Web and QueryAnywhere with Read-only security rules and anonymous users.\n\nThis is easy to set up by clicking a few checkboxes in your Realm app. Then in a web page (hosted for free in Realm Hosting), I can simply add code as follows, to poll for new images (again, using the only fetch if changes trick with _id). \n\n```\n\n \n \n\n```\n\nYou can see this in action at https://xmastree-lpeci.mongodbstitch.com/. Use *view-source *or the developer console in Chrome to see the code. Or look at it in GitHub [here.\n\n## \n## Conclusion\n## \nI don't have a deep or dramatic conclusion for this as I did this mostly for fun. I personally learned a lot about connecting the smallest computers to the modern cloud. Some things we can take for granted on our desktops and laptops due to an abundance of RAM and CPU still need thought and consideration. The Atlas Data API, though, worked exactly the same way as it does on these larger platforms, which is awesome. Next time, I'll use Micropython or even UIFlow Block coding and see if it's even easier.\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "I built a Christmas tree with an API so you, dear reader, can control the lights as well as a webcam to view it. All built using ESP32 Microcontrollers, Neopixels and the MongoDB Atlas Data API.", "contentType": "Article"}, "title": "Christmas Lights and Webcams with the MongoDB Data API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/real-time-location-updates-stitch-change-streams-mapbox", "action": "created", "body": "# Real-Time Location Updates with MongoDB Stitch, Change Streams, and Mapbox\n\n>\n>\n>Please note: This article discusses Stitch. Stitch is now MongoDB Realm. All the same features and functionality, now with a new name. Learn more here. We will be updating this article in due course.\n\nWhen it comes to modern web applications, interactions often need to be done in real-time. This means that instead of periodically checking in for changes, watching or listening for changes often makes more sense.\n\nTake the example of tracking something on a map. When it comes to package shipments, device tracking, or anything else where you need to know the real-time location, watching for those changes in location is great. 
Imagine needing to know where your fleet is so that you can dispatch them to a nearby incident?\n\nWhen it comes to MongoDB, watching for changes can be done through change streams. These change streams can be used in any of the drivers, including front-end applications with MongoDB Stitch.\n\nIn this tutorial, we're going to leverage MongoDB Stitch change streams. When the location data in our NoSQL documents change, we're going to update the information on an interactive map powered by Mapbox.\n\nTake the following animated image for example:\n\nRather than building an Internet of Things (IoT) device to track and submit GPS data, we're going to simulate the experience by directly changing our documents in MongoDB. When the update operations are complete, the front-end application with the interactive map is watching for those changes and responding appropriately.\n\n## The Requirements\n\nTo be successful with this example, you'll need to have a few things ready to go prior:\n\n- A MongoDB Atlas cluster\n- A MongoDB Stitch application\n- A Mapbox account\n\nFor this example, the data will exist in MongoDB Atlas. Since we're planning on interacting with our data using a front-end application, we'll be using MongoDB Stitch. A Stitch application should be created within the MongoDB Cloud and connected to the MongoDB Atlas cluster prior to exploring this tutorial.\n\n>Get started with MongoDB Atlas and Stitch for FREE in the MongoDB Cloud.\n\nMapbox will be used as our interactive map. Since Mapbox is a service, you'll need to have created an account and have access to your access token.\n\nIn the animated image, I'm using the MongoDB Visual Studio Code plugin for interacting with the documents in my collection. You can do the same or use another tool such as Compass, the CLI, or the data explorer within Atlas to get the job done.\n\n## Understanding the Document Model for the Location Tracking Example\n\nBecause we're only planning on moving a marker around on a map, the data model that we use doesn't need to be extravagant. For this example, the following is more than acceptable:\n\n``` json\n{\n \"_id\": \"5ec44f70fa59d66ba0dd93ae\",\n \"coordinates\": [\n -121.4252,\n 37.7397\n ],\n \"username\": \"nraboy\"\n}\n```\n\nIn the above example, the coordinates array has the first item representing the longitude and the second item representing the latitude. We're including a username to show that we are going to watch for changes based on a particular document field. In a polished application, all users probably wouldn't be watching for changes for all documents. Instead they'd probably be watching for changes of documents that belong to them.\n\nWhile we could put authorization rules in place for users to access certain documents, it is out of the scope of this example. Instead, we're going to mock it.\n\n## Building a Real-Time Location Tracking Application with Mapbox and the Stitch SDK\n\nNow we're going to build our client-facing application which consists of Mapbox, some basic HTML and JavaScript, and MongoDB Stitch.\n\nLet's start by adding the following boilerplate code:\n\n``` xml\n\n \n \n \n\n \n \n \n \n\n```\n\nThe above code sets us up by including the Mapbox and MongoDB Stitch SDKs. 
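\n\nTo give a sense of where this is heading, here is a minimal sketch of the kind of initialization that will live inside that boilerplate. This is not the tutorial's final code: the Stitch app ID, database name, and collection name are placeholders, and the exact change stream wiring may differ slightly from what we build below.\n\n``` javascript\n// A rough sketch only: initialize Stitch, authenticate anonymously, draw a Mapbox\n// map, and move a marker whenever a change event arrives for our document.\nconst client = stitch.Stitch.initializeDefaultAppClient(\"location-tracking-abcde\"); // placeholder app ID\nconst db = client\n    .getServiceClient(stitch.RemoteMongoClient.factory, \"mongodb-atlas\")\n    .db(\"location_tracking\"); // placeholder database name\n\nmapboxgl.accessToken = \"YOUR_MAPBOX_ACCESS_TOKEN\";\nconst map = new mapboxgl.Map({\n    container: \"map\",\n    style: \"mapbox://styles/mapbox/streets-v11\",\n    center: [-121.4252, 37.7397],\n    zoom: 9\n});\nconst marker = new mapboxgl.Marker().setLngLat([-121.4252, 37.7397]).addTo(map);\n\nclient.auth\n    .loginWithCredential(new stitch.AnonymousCredential())\n    .then(() => db.collection(\"locations\").watch()) // placeholder collection name\n    .then(stream => {\n        stream.onNext(event => {\n            // Move the marker to the document's new [longitude, latitude] pair\n            marker.setLngLat(event.fullDocument.coordinates);\n        });\n    });\n```\n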
When it comes to querying MongoDB and interacting with the map, we're going to be doing that from within the `", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to use change streams with MongoDB Stitch to update location on a Mapbox map in real-time.", "contentType": "Article"}, "title": "Real-Time Location Updates with MongoDB Stitch, Change Streams, and Mapbox", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/ops-manager/enterprise-operator-kubernetes-openshift", "action": "created", "body": "# Introducing the MongoDB Enterprise Operator for Kubernetes and OpenShift\n\nToday more DevOps teams are leveraging the power of containerization,\nand technologies like Kubernetes and Red Hat OpenShift, to manage\ncontainerized database clusters. To support teams building cloud-native\napps with Kubernetes and OpenShift, we are introducing a Kubernetes\nOperator (beta) that integrates with Ops Manager, the enterprise\nmanagement platform for MongoDB. The operator enables a user to deploy\nand manage MongoDB clusters from the Kubernetes API, without having to\nmanually configure them in Ops Manager.\n\nWith this Kubernetes integration, you can consistently and effortlessly\nrun and deploy workloads wherever they need to be, standing up the same\ndatabase configuration in different environments, all controlled with a\nsimple, declarative configuration. Operations teams can also offer\ndevelopers new services like MongoDB-as-a-Service, that could provide\nfor them a fully managed database, alongside other products and\nservices, managed by Kubernetes and OpenShift.\n\nIn this blog, we'll cover the following:\n\n- Brief discussion on the container revolution\n- Overview of MongoDB Ops Manager\n- How to Install and configure the MongoDB Enterprise Operator for\n Kubernetes\n- Troubleshooting\n- Where to go for more information\n\n## The containerization movement\n\nIf you ever visited an international shipping port or drove down an\ninterstate highway you may have seen large rectangular metal containers\ngenerally referred to as intermodal containers. These containers are\ndesigned and built using the same specifications even though the\ncontents of these boxes can vary greatly. The consistent design not only\nenables these containers to freely move from ship, to rail, and to\ntruck, they also allow this movement without unloading and reloading the\ncargo contents.\n\nThis same concept of a container can be applied to software applications\nwhere the application is the contents of the container along with its\nsupporting frameworks and libraries. The container can be freely moved\nfrom one platform to another all without disturbing the application.\nThis capability makes it easy to move an application from an on-premise\ndatacenter server to a public cloud provider, or to quickly stand up\nreplica environments for development, test, and production usage.\n\nMongoDB 4.0 introduces the MongoDB Enterprise Operator for Kubernetes\nwhich enables a user to deploy and manage MongoDB clusters from the\nKubernetes API, without the user having to connect directly to Ops\nManager or Cloud Manager\n(the hosted version of Ops Manager, delivered as a\nservice.\n\nWhile MongoDB is fully supported in a containerized environment, you\nneed to make sure that the benefits you get from containerizing the\ndatabase exceed the cost of managing the configuration. 
As with any\nproduction database workload, these containers should use persistent\nstorage and will require additional configuration depending on the\nunderlying container technology used. To help facilitate the management\nof the containers themselves, DevOps teams are leveraging the power of\norchestration technologies like Kubernetes and Red Hat OpenShift. While\nthese technologies are great at container management, they are not aware\nof application specific configurations and deployment topologies such as\nMongoDB replica sets and sharded clusters. For this reason, Kubernetes\nhas Custom Resources and Operators which allow third-parties to extend\nthe Kubernetes API and enable application aware deployments.\n\nLater in this blog you will learn how to install and get started with\nthe MongoDB Enterprise Operator for Kubernetes. First let's cover\nMongoDB Ops Manager, which is a key piece in efficient MongoDB cluster\nmanagement.\n\n## Managing MongoDB\n\nOps Manager is an\nenterprise class management platform for MongoDB clusters that you run\non your own infrastructure. The capabilities of Ops Manager include\nmonitoring, alerting, disaster recovery, scaling, deploying and\nupgrading of replica sets and sharded clusters, and other MongoDB\nproducts, such as the BI Connector. While a thorough discussion of Ops\nManager is out of scope of this blog it is important to understand the\nbasic components that make up Ops Manager as they will be used by the\nKubernetes Operator to create your deployments.\n\nA simplified Ops Manager architecture is shown in Figure 2 below. Note\nthat there are other agents that Ops Manager uses to support features\nlike backup but these are outside the scope of this blog and not shown.\nFor complete information on MongoDB Ops Manager architecture see the\nonline documentation found at the following URL:\n\nThe MongoDB HTTP Service provides a web application for administration.\nThese pages are simply a front end to a robust set of Ops Manager REST\nAPIs that are hosted in the Ops Manager HTTP Service. It is through\nthese REST\nAPIs that\nthe Kubernetes Operator will interact with Ops Manager.\n\n## MongoDB Automation Agent\n\nWith a typical Ops Manager deployment there are many management options\nincluding upgrading the cluster to a different version, adding\nsecondaries to an existing replica set and converting an existing\nreplica set into a sharded cluster. So how does Ops Manager go about\nupgrading each node of a cluster or spinning up new MongoD instances? It\ndoes this by relying on a locally installed service called the Ops\nManager Automation Agent which runs on every single MongoDB node in the\ncluster. This lightweight service is available on multiple operating\nsystems so regardless if your MongoDB nodes are running in a Linux\nContainer or Windows Server virtual machine or your on-prem PowerPC\nServer, there is an Automation Agent available for that platform. The\nAutomation Agents receive instructions from Ops Manager REST APIs to\nperform work on the cluster node.\n\n## MongoDB Monitoring Agent\n\nWhen Ops Manager shows statistics such as database size and inserts per\nsecond it is receiving this telemetry from the individual nodes running\nMongoDB. Ops Manager relies on the Monitoring Agent to connect to your\nMongoDB processes, collect data about the state of your deployment, then\nsend that data to Ops Manager. 
There can be one or more Monitoring\nAgents deployed in your infrastructure for reliability but only one\nprimary agent per Ops Manager Project is collecting data. Ops Manager is\nall about automation and as soon as you have the automation agent\ndeployed, other supporting agents like the Monitoring agent are deployed\nfor you. In the scenario where the Kubernetes Operator has issued a\ncommand to deploy a new MongoDB cluster in a new project, Ops Manager\nwill take care of deploying the monitoring agent into the containers\nrunning your new MongoDB cluster.\n\n## Getting started with MongoDB Enterprise Operator for Kubernetes\n\nOps Manager is an integral part of automating a MongoDB cluster with\nKubernetes. To get started you will need access to an Ops Manager 4.0+\nenvironment or MongoDB Cloud Manager.\n\nThe MongoDB Enterprise Operator for Kubernetes is compatible with\nKubernetes v1.9 and above. It also has been tested with Openshift\nversion 3.9. You will need access to a Kubernetes environment. If you do\nnot have access to a Kubernetes environment, or just want to stand up a\ntest environment, you can use minikube which deploys a local single node\nKubernetes cluster on your machine. For additional information and setup\ninstructions check out the following URL:\nhttps://kubernetes.io/docs/setup/minikube.\n\nThe following sections will cover the three step installation and\nconfiguration of the MongoDB Enterprise Operator for Kubernetes. The\norder of installation will be as follows:\n\n- Step 1: Installing the MongoDB Enterprise Operator via a helm or\n yaml file\n- Step 2: Creating and applying a Kubernetes ConfigMap file\n- Step 3: Create the Kubernetes secret object which will store the Ops\n Manager API Key\n\n## Step 1: Installing MongoDB Enterprise Operator for Kubernetes\n\nTo install the MongoDB Enterprise Operator for Kubernetes you can use\nhelm, the Kubernetes package manager, or pass a yaml file to kubectl.\nThe instructions for both of these methods is as follows, pick one and\ncontinue to step 2.\n\nTo install the operator via Helm:\n\nTo install with Helm you will first need to clone the public repo\n\nChange directories into the local copy and run the following command on\nthe command line:\n\n``` shell\nhelm install helm_chart/ --name mongodb-enterprise\n```\n\nTo install the operator via a yaml file:\n\nRun the following command from the command line:\n\n``` shell\nkubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-enterprise-kubernetes/master/mongodb-enterprise.yaml\n```\n\nAt this point the MongoDB Enterprise Operator for Kubernetes is\ninstalled and now needs to be configured. First, we must create and\napply a Kubernetes ConfigMap file. A Kubernetes ConfigMap file holds\nkey-value pairs of configuration data that can be consumed in pods or\nused to store configuration data. In this use case the ConfigMap file\nwill store configuration information about the Ops Manager deployment we\nwant to use.\n\n## Step 2: Creating the Kubernetes ConfigMap file\n\nFor the Kubernetes Operator to know what Ops Manager you want to use you\nwill need to obtain some properties from the Ops Manager console and\ncreate a ConfigMap file. 
These properties are as follows:\n\n- **Base Url**: The URL of your Ops Manager or Cloud Manager.\n- **Project Id**: The id of an Ops Manager Project which the\n Kubernetes Operator will deploy into.\n- **User**: An existing Ops Manager username.\n- **Public API Key**: Used by the Kubernetes Operator to connect to\n the Ops Manager REST API endpoint.\n- **Base Url**: The Base Uri is the URL of your Ops Manager or Cloud\n Manager.\n\nIf you already know how to obtain these fellows, copy them down and\nproceed to Step 3.\n\n>\n>\n>Note: If you are using Cloud Manager the Base Url is\n>\n>\n>\n\nTo obtain the Base Url in Ops Manager copy the Url used to connect to\nyour Ops Manager server from your browser's navigation bar. It should be\nsomething similar to . You can also perform the\nfollowing:\n\nLogin to Ops Manager and click on the Admin button. Next select the \"Ops\nManager Config\" menu item. You will be presented with a screen similar\nto the figure below:\n\nCopy down the value displayed in the URL To Access Ops Manager box.\nNote: If you don't have access to the Admin drop down you will have to\ncopy the Url used to connect to your Ops Manager server from your\nbrowser's navigation bar.\n\n**Project Id**\n\nThe Project Id is the id of an Ops Manager Project which the Kubernetes\nOperator will deploy into.\n\nAn Ops Manager Project is a logical organization of MongoDB clusters and\nalso provides a security boundary. One or more\nProjects\nare apart of an Ops Manager Organization. If you need to create an\nOrganization click on your user name at the upper right side of the\nscreen and select, \"Organizations\". Next click on the \"+ New\nOrganization\" button and provide a name for your Organization. Once you\nhave an Organization you can create a Project.\n\nTo create a new Project, click on your Organization name. This will\nbring you to the Projects page and from here click on the \"+ New\nProject\" button and provide a unique name for your Project. If you are\nnot an Ops Manager administrator you may not have this option and will\nhave to ask your administrator to create a Project.\n\nOnce the Project is created or if you already have a Project created on\nyour behalf by an administrator you can obtain the Project Id by\nclicking on the Settings menu option as shown in the Figure below.\n\nCopy the Project ID.\n\n**User**\n\nThe User is an existing Ops Manager username.\n\nTo see the list of Ops Manager users return to the Project and click on\nthe \"Users & Teams\" menu. You can use any Ops Manager user who has at\nleast Project Owner access. If you'd like to create another username\nclick on the \"Add Users & Team\" button as shown in Figure 6.\n\nCopy down the email of the user you would like the Kubernetes Operator\nto use when connecting to Ops Manager.\n\n**Public API Key**\n\nThe Ops Manager API Key is used by the Kubernetes Operator to connect to\nthe Ops Manager REST API endpoint. You can create a API Key by clicking\non your username on the upper right hand corner of the Ops Manager\nconsole and selecting, \"Account\" from the drop down menu. This will open\nthe Account Settings page as shown in Figure 7.\n\nClick on the \"Public API Access\" tab. To create a new API key click on\nthe \"Generate\" button and provide a description. Upon completion you\nwill receive an API key as shown in Figure 8.\n\nBe sure to copy the API Key as it will be used later as a value in a\nconfiguration file. 
**It is important to copy this value while the\ndialog is up since you can not read it back once you close the dialog**.\nIf you missed writing the value down you will need to delete the API Key\nand create a new one.\n\n*Note: If you are using MongoDB Cloud Manager or have Ops Manager\ndeployed in a secured network you may need to allow the IP range of your\nKubernetes cluster so that the Operator can make requests to Ops Manager\nusing this API Key.*\n\nNow that we have acquired the necessary Ops Manager configuration\ninformation we need to create a Kubernetes ConfigMap file for the\nKubernetes Project. To do this use a text editor of your choice and\ncreate the following yaml file, substituting the bold placeholders for\nthe values you obtained in the Ops Manager console. For sample purposes\nwe can call this file \"my-project.yaml\".\n\n``` yaml\napiVersion: v1\nkind: ConfigMap\nmetadata:\n name: <>\n namespace: mongodb\ndata:\n projectId: <>\n baseUrl: <>\n```\n\nFigure 9: Sample ConfigMap file\n\nNote: The format of the ConfigMap file may change over time as features\nand capabilities get added to the Operator. Be sure to check with the\nMongoDB documentation if you are having problems submitting the\nConfigMap file.\n\nOnce you create this file you can apply the ConfigMap to Kubernetes\nusing the following command:\n\n``` shell\nkubectl apply -f my-project.yaml\n```\n\n## Step 3: Creating the Kubernetes Secret\n\nFor a user to be able to create or update objects in an Ops Manager\nProject they need a Public API Key. Earlier in this section we created a\nnew API Key and you hopefully wrote it down. This API Key will be held\nby Kubernetes as a Secret object. You can create this Secret with the\nfollowing command:\n\n``` shell\nkubectl -n mongodb create secret generic <> --from-literal=\"user=<>\" --from-literal=\"publicApiKey=<>\"\n```\n\nMake sure you replace the User and Public API key values with those you\nobtained from your Ops Manager console. You can pick any name for the\ncredentials - just make a note of it as you will need it later when you\nstart creating MongoDB clusters.\n\nNow we're ready to start deploying MongoDB Clusters!\n\n## Deploying a MongoDB Replica Set\n\nKubernetes can deploy a MongoDB standalone, replica set or a sharded\ncluster. To deploy a 3 node replica set create the following yaml file:\n\n``` shell\napiVersion: mongodb.com/v1\nkind: MongoDbReplicaSet\nmetadata:\nname: <>\nnamespace: mongodb\nspec:\nmembers: 3\nversion: 3.6.5\n\npersistent: false\n\nproject: <>\ncredentials: <>\n```\n\nFigure 10: simple-rs.yaml file describing a three node replica set\n\nThe name of your new cluster can be any name you chose. The name of the\nOpsManager Project config map and the name of credentials secret were\ndefined previously.\n\nTo submit the request for Kubernetes to create this cluster simply pass\nthe name of the yaml file you created to the following kubectl command:\n\n``` shell\nkubectl apply -f simple-rs.yaml\n```\n\nAfter a few minutes your new cluster will show up in Ops Manager as\nshown in Figure 11.\n\nNotice that Ops Manager installed not only the Automation Agents on\nthese three containers running MongoDB, it also installed Monitoring\nAgent and Backup Agents.\n\n## A word on persistent storage\n\nWhat good would a database be if anytime the container died your data\nwent to the grave as well? Probably not a good situation and maybe one\nwhere tuning up the resum\u00e9 might be a good thing to do as well. 
Up until\nrecently, the lack of persistent storage and consistent DNS mappings\nwere major issues with running databases within containers. Fortunately,\nrecent work in the Kubernetes ecosystem has addressed this concern and\nnew features like `PersistentVolumes` and `StatefulSets` have emerged\nallowing you to deploy databases like MongoDB without worrying about\nlosing data because of hardware failure or the container moved elsewhere\nin your datacenter. Additional configuration of the storage is required\non the Kubernetes cluster before you can deploy a MongoDB Cluster that\nuses persistent storage. In Kubernetes there are two types of persistent\nvolumes: static and dynamic. The Kubernetes Operator can provision\nMongoDB objects (i.e. standalone, replica set and sharded clusters)\nusing either type.\n\n## Connecting your application\n\nConnecting to MongoDB deployments in Kubernetes is no different than\nother deployment topologies. However, it is likely that you'll need to\naddress the network specifics of your Kubernetes configuration. To\nabstract the deployment specific information such as hostnames and ports\nof your MongoDB deployment, the Kubernetes Enterprise Operator for\nKubernetes uses Kubernetes Services.\n\n### Services\n\nEach MongoDB deployment type will have two Kubernetes services generated\nautomatically during provisioning. For example, suppose we have a single\n3 node replica set called \"my-replica-set\", then you can enumerate the\nservices using the following statement:\n\n``` shell\nkubectl get all -n mongodb --selector=app=my-replica-set-svc\n```\n\nThis statement yields the following results:\n\n``` shell\nNAME READY STATUS RESTARTS AGE\npod/my-replica-set-0 1/1 Running 0 29m\npod/my-replica-set-1 1/1 Running 0 29m\npod/my-replica-set-2 1/1 Running 0 29m\n\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\nservice/my-replica-set-svc ClusterIP None 27017/TCP 29m\nservice/my-replica-set-svc-external NodePort 10.103.220.236 27017:30057/TCP 29m\n\nNAME DESIRED CURRENT AGE\nstatefulset.apps/my-replica-set 3 3 29m\n```\n\n**Note the appended string \"-svc\" to the name of the replica set.**\n\nThe service with \"-external\" is a NodePort - which means it's exposed to\nthe overall cluster DNS name on port 30057.\n\nNote: If you are using Minikube you can obtain the IP address of the\nrunning replica set by issuing the following:\n\n``` shell\nminikube service list\n```\n\nIn our example which used minikube the result set contained the\nfollowing information: mongodb my-replica-set-svc-external\n\nNow that we know the IP of our MongoDB cluster we can connect using the\nMongo Shell or whatever application or tool you would like to use.\n\n## Basic Troubleshooting\n\nIf you are having problems submitting a deployment you should read the\nlogs. Issues like authentication issues and other common problems can be\neasily detected in the log files. You can view the MongoDB Enterprise\nOperator for Kubernetes log files via the following command:\n\n``` shell\nkubectl logs -f deployment/mongodb-enterprise-operator -n mongodb\n```\n\nYou can also use kubectl to see the logs of the database pods. The main\ncontainer processes is continually tailing the Automation Agent logs and\ncan be seen with the following statement:\n\n``` shell\nkubectl logs <> -n mongodb\n```\n\nNote: You can enumerate the list of pods using\n\n``` shell\nkubectl get pods -n mongodb\n```\n\nAnother common troubleshooting technique is to shell into one of the\ncontainers running MongoDB. 
Here you can use common Linux tools to view\nthe processes, troubleshoot, or even check mongo shell connections\n(sometimes helpful in diagnosing network issues).\n\n``` shell\nkubectl exec -it <> -n mongodb -- /bin/bash\n```\n\nAn example output of this command is as follows:\n\n``` shell\nUID PID PPID C STIME TTY TIME CMD\nmongodb 1 0 0 16:23 ? 00:00:00 /bin/sh -c supervisord -c /mongo\nmongodb 6 1 0 16:23 ? 00:00:01 /usr/bin/python /usr/bin/supervi\nmongodb 9 6 0 16:23 ? 00:00:00 bash /mongodb-automation/files/a\nmongodb 25 9 0 16:23 ? 00:00:00 tail -n 1000 -F /var/log/mongodb\nmongodb 26 1 4 16:23 ? 00:04:17 /mongodb-automation/files/mongod\nmongodb 45 1 0 16:23 ? 00:00:01 /var/lib/mongodb-mms-automation/\nmongodb 56 1 0 16:23 ? 00:00:44 /var/lib/mongodb-mms-automation/\nmongodb 76 1 1 16:23 ? 00:01:23 /var/lib/mongodb-mms-automation/\nmongodb 8435 0 0 18:07 pts/0 00:00:00 /bin/bash\n```\n\nFrom inside the container we can make a connection to the local MongoDB\nnode easily by running the mongo shell via the following command:\n\n``` shell\n/var/lib/mongodb-mms-automation/mongodb-linux-x86_64-3.6.5/bin/mongo --port 27017\n```\n\nNote: The version of the automation agent may be different than 3.6.5,\nbe sure to check the directory path\n\n## Where to go for more information\n\nMore information will be available on the MongoDB documentation\nwebsite in the near future. Until\nthen check out these resources for more information:\n\nGitHub: \n\nTo see all MongoDB operations best practices, download our whitepaper:\n\n", "format": "md", "metadata": {"tags": ["Ops Manager", "Kubernetes"], "pageDescription": "Introducing a Kubernetes Operator (beta) that integrates with Ops Manager, the enterprise management platform for MongoDB.", "contentType": "News & Announcements"}, "title": "Introducing the MongoDB Enterprise Operator for Kubernetes and OpenShift", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/polymorphic-pattern", "action": "created", "body": "# Building with Patterns: The Polymorphic Pattern\n\n## Introduction\n\nOne frequently asked question when it comes to MongoDB is \"How do I\nstructure my schema in MongoDB for my application?\" The honest answer\nis, it depends. Does your application do more reads than writes? What\ndata needs to be together when read from the database? What performance\nconsiderations are there? How large are the documents? How large will\nthey get? How do you anticipate your data will grow and scale?\n\nAll of these questions, and more, factor into how one designs a database\nschema in MongoDB. It has been said that MongoDB is schemaless. In fact,\nschema design is very important in MongoDB. The hard fact is that most\nperformance issues we've found trace back to poor schema design.\n\nOver the course of this series, Building with Patterns, we'll take a\nlook at twelve common Schema Design Patterns that work well in MongoDB.\nWe hope this series will establish a common methodology and vocabulary\nyou can use when designing schemas. Leveraging these patterns allows for\nthe use of \"building blocks\" in schema planning, resulting in more\nmethodology being used than art.\n\nMongoDB uses a document data\nmodel. This\nmodel is inherently flexible, allowing for data models to support your\napplication needs. The flexibility also can lead to schemas being more\ncomplex than they should. 
When thinking of schema design, we should be\nthinking of performance, scalability, and simplicity.\n\nLet's start our exploration into schema design with a look at what can\nbe thought as the base for all patterns, the *Polymorphic Pattern*. This\npattern is utilized when we have documents that have more similarities\nthan differences. It's also a good fit for when we want to keep\ndocuments in a single collection.\n\n## The Polymorphic Pattern\n\nWhen all documents in a collection are of similar, but not identical,\nstructure, we call this the Polymorphic Pattern. As mentioned, the\nPolymorphic Pattern is useful when we want to access (query) information\nfrom a single collection. Grouping documents together based on the\nqueries we want to run (instead of separating the object across tables\nor collections) helps improve performance.\n\nImagine that our application tracks professional sports athletes across\nall different sports.\n\nWe still want to be able to access all of the athletes in our\napplication, but the attributes of each athlete are very different. This\nis where the Polymorphic Pattern shines. In the example below, we store\ndata for athletes from two different sports in the same collection. The\ndata stored about each athlete does not need to be the same even though\nthe documents are in the same collection.\n\nProfessional athlete records have some similarities, but also some\ndifferences. With the Polymorphic Pattern, we are easily able to\naccommodate these differences. If we were not using the Polymorphic\nPattern, we might have a collection for Bowling Athletes and a\ncollection for Tennis Athletes. When we wanted to query on all athletes,\nwe would need to do a time-consuming and potentially complex join.\nInstead, since we are using the Polymorphic Pattern, all of our data is\nstored in one Athletes collection and querying for all athletes can be\naccomplished with a simple query.\n\nThis design pattern can flow into embedded sub-documents as well. In the\nabove example, Martina Navratilova didn't just compete as a single\nplayer, so we might want to structure her record as follows:\n\nFrom an application development standpoint, when using the Polymorphic\nPattern we're going to look at specific fields in the document or\nsub-document to be able to track differences. We'd know, for example,\nthat a tennis player athlete might be involved with different events,\nwhile a different sports player may not be. This will, typically,\nrequire different code paths in the application code based on the\ninformation in a given document. Or, perhaps, different classes or\nsubclasses are written to handle the differences between tennis,\nbowling, soccer, and rugby players.\n\n## Sample Use Case\n\nOne example use case of the Polymorphic Pattern is Single View\napplications. Imagine\nworking for a company that, over the course of time, acquires other\ncompanies with their technology and data patterns. For example, each\ncompany has many databases, each modeling \"insurances with their\ncustomers\" in a different way. Then you buy those companies and want to\nintegrate all of those systems into one. Merging these different systems\ninto a unified SQL schema is costly and time-consuming.\n\nMetLife was able to leverage MongoDB and the\nPolymorphic Pattern to build their single view application in a few\nmonths. 
Their Single View application aggregates data from multiple\nsources into a central repository allowing customer service, insurance\nagents, billing, and other departments to get a 360\u00b0 picture of a\ncustomer. This has allowed them to provide better customer service at a\nreduced cost to the company. Further, using MongoDB's flexible data\nmodel and the Polymorphic Pattern, the development team was able to\ninnovate quickly to bring their product online.\n\nA Single View application is one use case of the Polymorphic Pattern. It\nalso works well for things like product catalogs where a bicycle has\ndifferent attributes than a fishing rod. Our athlete example could\neasily be expanded into a more full-fledged content management system\nand utilize the Polymorphic Pattern there.\n\n## Conclusion\n\nThe Polymorphic Pattern is used when documents have more similarities\nthan they have differences. Typical use cases for this type of schema\ndesign would be:\n\n- Single View applications\n- Content management\n- Mobile applications\n- A product catalog\n\nThe Polymorphic Pattern provides an easy-to-implement design that allows\nfor querying across a single collection and is a starting point for many\nof the design patterns we'll be exploring in upcoming posts. The next\npattern we'll discuss is the\nAttribute Pattern.\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Over the course of this blog post series, we'll take a look at twelve common Schema Design Patterns that work well in MongoDB.", "contentType": "Article"}, "title": "Building with Patterns: The Polymorphic Pattern", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/triggers-tricks-auto-increment-fields", "action": "created", "body": "# Triggers Treats and Tricks - Auto-Increment a Running ID Field\n\nIn this blog series, we are trying to inspire you with some reactive Realm trigger use cases. We hope these will help you bring your application pipelines to the next level.\n\nEssentially, triggers are components in our Atlas projects/Realm apps that allow a user to define a custom function to be invoked on a specific event.\n\n- **Database triggers:** We have triggers that can be scheduled based on database events\u2014like `deletes`, `inserts`, `updates`, and `replaces`\u2014called database triggers.\n- **Scheduled triggers:** We can schedule a trigger based on a `cron` expression via scheduled triggers.\n- **Authentication triggers:** These triggers are only relevant for Realm authentication. They are triggered by one of the Realm auth providers' authentication events and can be configured only via a Realm application.\n\nFor this blog post, I would like to showcase an auto-increment of a running ID in a collection similar to relational database sequence use. 
A sequence in relational databases like Oracle or SqlServer lets you use it to maintain a running ID for your table rows.\n\nIf we translate this into a `students` collection example, we would like to get the `studentId` field auto incremented.\n\n``` javascript\n{\n studentId : 1,\n studentName : \"Mark Olsen\",\n age : 15,\n phone : \"+1234546789\",\n},\n{\n studentId : 2,\n studentName : \"Peter Parker\",\n age : 17,\n phone : \"+1234546788\",\n}\n```\n\nI wanted to share an interesting solution based on triggers, and throughout this article, we will use a students collection example with `studentsId` field to explain the discussed approach.\n\n## Prerequisites\n\nFirst, verify that you have an Atlas project with owner privileges to create triggers.\n\n- MongoDB Atlas account, Atlas cluster\n- A MongoDB Realm application or access to MongoDB Atlas triggers.\n\n> If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.\n> \n## The Idea Behind the Main Mechanism\n\nThree main components allow our auto-increment mechanism to function.\n\n## 1. Define a Source Collection\n\nWe should pick the collection that we need the auto-increment to work upon (`students`) and we can define a unique index on that field. This is not a must but it makes sense:\n\n``` javascript\ndb.students.createIndex({studentsId : 1}, {unique : true});\n```\n\n## 2. Define a Generic Function to Auto-Increment the ID\n\nIn order for us to reuse the auto-increment code for more than one collection, I've decided to build a generic function and later associate it with the relevant triggers. Let's call the function `autoIncrement`. This function will receive an \"insert\" event from the source collection and increment a helper `counters` collection document that stores the current counter per collection. It uses `findOneAndUpdate` to return an automatically incremented value per the relevant source namespace, using the \\_id as the namespace identifier. Once retrieved, the source collection is being set with a generic field called `Id` (in this example, `studentsId`).\n\n```javascript\nexports = async function(changeEvent) {\n // Source document _id\n const docId = changeEvent.fullDocument._id;\n\n // Get counter and source collection instances\n const counterCollection = context.services.get(\"\").db(changeEvent.ns.db).collection(\"counters\");\n const targetCollection = context.services.get(\"\").db(changeEvent.ns.db).collection(changeEvent.ns.coll);\n\n // automically increment and retrieve a sequence relevant to the current namespace (db.collection)\n const counter = await counterCollection.findOneAndUpdate({_id: changeEvent.ns },{ $inc: { seq_value: 1 }}, { returnNewDocument: true, upsert : true});\n\n // Set a generic field Id \n const doc = {};\n doc`${changeEvent.ns.coll}Id`] = counter.seq_value;\n const updateRes = await targetCollection.updateOne({_id: docId},{ $set: doc});\n\n console.log(`Updated ${JSON.stringify(changeEvent.ns)} with counter ${counter.seq_value} result: ${JSON.stringify(updateRes)}`);\n};\n```\n\n>Important: Replace \\ with your linked service. 
The default value is \"mongodb-atlas\" if you have only one cluster linked to your Realm application.\n\nNote that when we query and increment the counter, we expect to get the new version of the document `returnNewDocument: true` and `upsert: true` in case this is the first document.\n\nThe `counter` collection document after the first run on our student collection will look like this:\n\n``` javascript\n{\n _id: {\n db: \"app\",\n coll: \"students\"\n },\n seq_value: 1\n}\n```\n\n## 3. Building the Trigger on Insert Operation and Associating it with Our Generic Function\n\nNow let's define our trigger based on our Atlas cluster service and our database and source collection, in my case, `app.students`.\n\nPlease make sure to select \"Event Ordering\" toggled to \"ON\" and the \"insert\" operation.\n\nNow let's associate it with our pre-built function: `autoIncrement`.\n\nOnce we will insert our document into the collection, it will be automatically updated with a running unique number for `studentsId`.\n\n## Wrap Up\n\nWith the presented technique, we can leverage triggers to auto-increment and populate id fields. This may open your mind to other ideas to design your next flows on MongoDB Realm.\n\nIn the following article in this series, we will use triggers to auto-translate our documents and benefit from Atlas Search's multilingual abilities.\n\n> If you have questions, please head to our [developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "In this article, we will explore a trick that lets us auto-increment a running ID using a trigger.", "contentType": "Article"}, "title": "Triggers Treats and Tricks - Auto-Increment a Running ID Field", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/use-function-accumulator-operators", "action": "created", "body": "# How to Use Custom Aggregation Expressions in MongoDB 4.4\n\nThe upcoming release of MongoDB 4.4 makes it easier than ever to work with, transform, access, and make sense of your data. This release, the beta of which you can try right now, comes with a couple of new operators that make it possible to write custom functions to extend the MongoDB Query Language. This feature, called Custom Aggregation Expressions, allows you to write JavaScript functions that execute as part of an aggregation pipeline stage. These come in handy when you need to implement behavior that is not supported by the MongoDB Query Language by default.\n\nThe MongoDB Query Language has many operators, or functions, that allow you to manipulate and transform your data to fit your application's use case. Operators such as $avg , $concat , and $filter make it easy for developers to query, manipulate, and transform their dataset directly at the database level versus having to write additional code and transforming the data elsewhere. While there are operators for almost anything you can think of, there are a few edge cases where a provided operator or series of operators won't be sufficient, and that's where custom aggregation expressions come in.\n\nIn this blog post, we'll learn how we can extend the MongoDB Query Language to suit our needs by writing our own custom aggregation expressions using the new $function and $accumulator operators. 
Let's dive in!\n\n## Prerequisites\n\nFor this tutorial you'll need:\n\n- MongoDB 4.4.\n- MongoDB Compass.\n- Familiarity with MongoDB Aggregation Framework.\n\n## Custom Aggregation Expressions\n\nMongoDB 4.4 comes with two new operators: $function and $accumulator . These two operators allow us to write custom JavaScript functions that can be used in a MongoDB aggregation pipeline. We are going to look at examples of how to use both by implementing our own custom aggregation expressions.\n\nTo get the most value out of this blog post, I will assume that you are already familiar with the MongoDB aggregation framework. If not, I suggest checking out the docs and following a tutorial or two and becoming familiar with how this feature works before diving into this more advanced topic.\n\nBefore we get into the code, I want to briefly talk about why you would care about this feature in the first place. The first reason is delivering higher performance to your users. If you can get the exact data you need directly out of the database in one trip, without having to do additional processing and manipulating, you will be able to serve and fulfill requests quicker. Second, custom aggregation expressions allow you to take care of edge cases directly in your aggregation pipeline stage. If you've worked with the aggregation pipeline in the past, you'll feel right at home and be productive in no time. If you're new to the aggregation pipeline, you'll only have to learn it once. By the time you find yourself with a use case for the `$function` or `$accumulator` operators, all of your previous knowledge will transfer over. I think those are two solid reasons to care about custom aggregation expressions: better performance for your users and increased developer productivity.\n\nThe one caveat to the liberal use of the `$function` and `$accumulator` operators is performance. Executing JavaScript inside of an aggregation expression is resource intensive and may reduce performance. You should always opt to use existing, highly optimized operators first, especially if they can get the job done for your use case. Only consider using `$function` and `$accumulator` if an existing operator cannot fulfill your application's needs.\n\n## $function Operator\n\nThe first operator we'll take a look at is called `$function`. As the name implies, this operator allows you to implement a custom JavaScript function to implement any sort of behavior. The syntax for this operator is:\n\n``` \n{\n $function: {\n body: ,\n args: ,\n lang: \"js\"\n }\n}\n```\n\nThe `$function` operator has three properties. The `body` , which is going to be our JavaScript function, an `args` array containing the arguments we want to pass into our function, and a `lang` property specifying the language of our `$function`, which as of MongoDB 4.4 only supports JavaScript.\n\nThe `body` property holds our JavaScript function as either a type of BSON Code or String. In our examples in this blog post, we'll write our code as a String. Our JavaScript function will have a signature that looks like this:\n\n``` \nfunction(arg){\n return arg\n}\n```\n\nFrom a cursory glance, it looks like a standard JavaScript function. You can pass in `n` number of arguments, and the function returns a result. 
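\n\nAs a quick throwaway sketch (not part of the tutorial's pipeline), here is what a two-argument call looks like when paired with `$addFields`; it labels each movie in the `sample_mflix.movies` collection with its title and year:\n\n``` \n{\n $addFields: {\n label: {\n $function: {\n body: \"function(title, year){ return title + ' (' + year + ')' }\",\n args: [\"$title\", \"$year\"],\n lang: \"js\"\n }\n }\n }\n}\n```\n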
The arguments within the `body` property will be mapped to the arguments provided in the `args` array property, so you'll need to make sure you pass in and capture all of the provided arguments.\n\n### Implementing the $function Operator\n\nNow that we know the properties of the `$function` operator, let's use it in an aggregation pipeline. To get started, let's choose a data set to work from. We'll use one of the provided MongoDB sample datasets that you can find on MongoDB Atlas. If you don't already have a cluster set up, you can do so by creating a free MongoDB Atlas account. Loading the sample datasets is as simple as clicking the \"...\" button on your cluster and selecting the \"Load Sample Dataset\" option.\n\nOnce you have the sample dataset loaded, let's go ahead and connect to our MongoDB cluster. Whenever learning something new, I prefer to use a visual approach, so for this tutorial, I'll rely on MongoDB Compass. If you already have MongoDB Compass installed, connect to your cluster that has the sample dataset loaded, otherwise download the latest version here, and then connect.\n\nWhether you are using MongoDB Compass or connecting via the mongo shell, you can find your MongoDB Atlas connection string by clicking the \"Connect\" button on your cluster, choosing the type of app you'll be using to connect with, and copying the string which will look like this: `mongodb+srv://mongodb:@cluster0-tdm0q.mongodb.net/test`.\n\nOnce you are connected, the dataset that we will work with is called `sample_mflix` and the collection `movies`. Go ahead and connect to that collection and then navigate to the \"Aggregations\" tab. To ensure that everything works fine, let's write a very simple aggregation pipeline using the new `$function` operator. From the dropdown, select the `$addFields` operator and add the following code as its implementation:\n\n``` \n{\n fromFunction: {$function: {body: \"function(){return 'hello'}\", args: ], lang: 'js'}}\n}\n```\n\nIf you are using the mongo shell to execute these queries the code will look like this:\n\n``` \ndb.movies.aggregate([\n { \n $addFields: {\n fromFunction: {\n $function: {\n body: \"function(){return 'hello'}\",\n args: [], \n lang: 'js'\n }\n }\n }\n }\n])\n```\n\nIf you look at the output in MongoDB Compass and scroll to the bottom of each returned document, you'll see that each document now has a field called `fromFunction` with the text `hello` as its value. We could have simply passed the string \"hello\" instead of using the `$function` operator, but the reason I wanted to do this was to ensure that your version of MongoDB Compass supports the `$function` operator and this is a minimal way to test it.\n\n![Basic Example of $function operator\n\nNext, let's implement a custom function that actually does some work. Let's add a new field to every movie that has Ado's review score, or perhaps your own?\n\nI'll name my field `adoScore`. Now, my rating system is unique. Depending on the day and my mood, I may like a movie more or less, so we'll start figuring out Ado's score of a movie by randomly assigning it a value between 0 and 5. So we'll have a base that looks like this: `let base = Math.floor(Math.random() * 6);`.\n\nNext, if critics like the movie, then I do too, so let's say that if a movie has an IMDB score of over 8, we'll give it +1 to Ado's score. Otherwise, we'll leave it as is. 
For this, we'll pass in the `imdb.rating` field into our function.\n\nFinally, movies that have won awards also get a boost in Ado's scoring system. So for every award nomination a movie receives, the total Ado score will increase by 0.25, and for every award won, the score will increase by 0.5. To calculate this, we'll have to provide the `awards` field into our function as well.\n\nSince nothing is perfect, we'll add a custom rule to our function: if the total score exceeds 10, we'll just output the final score to be 9.9. Let's see what this entire function looks like:\n\n``` \n{\n adoScore: {$function: {\n body: \"function(imdb, awards){let base = Math.floor(Math.random() * 6) \\n let imdbBonus = 0 \\n if(imdb > 8){ imdbBonus = 1} \\n let nominations = (awards.nominations * 0.25) \\n let wins = (awards.wins * 0.5) \\n let final = base + imdbBonus + nominations + wins \\n if(final > 10){final = 9.9} \\n return final}\", \n args: \"$imdb.rating\", \"$awards\"], \n lang: 'js'}}\n}\n```\n\nTo make the JavaScript function easier to read, here it is in non-string form:\n\n``` \nfunction(imdb, awards){\n let base = Math.floor(Math.random() * 6)\n let imdbBonus = 0 \n\n if(imdb > 8){ imdbBonus = 1} \n\n let nominations = awards.nominations * 0.25 \n let wins = awards.wins * 0.5 \n\n let final = base + imdbBonus + nominations + wins \n if(final > 10){final = 9.9} \n\n return final\n}\n```\n\nAnd again, if you are using the mongo shell, the code will look like:\n\n``` \ndb.movies.aggregate([\n { \n $addFields: {\n adoScore: {\n $function: {\n body: \"function(imdb, awards){let base = Math.floor(Math.random() * 6) \\n let imdbBonus = 0 \\n if(imdb > 8){ imdbBonus = 1} \\n let nominations = (awards.nominations * 0.25) \\n let wins = (awards.wins * 0.5) \\n let final = base + imdbBonus + nominations + wins \\n if(final > 10){final = 9.9} \\n return final}\", \n args: [\"$imdb.rating\", \"$awards\"], \n lang: 'js'\n }\n }\n }\n }\n])\n```\n\nRunning the above `$addFields` aggregation , which uses the `$function` operator, will produce a result that adds a new `adoScore` field to the end of each document. This field will contain a numeric value ranging from 0 to 9.9. In the `$function` operator, we passed our custom JavaScript function into the `body` property. As we iterated through our documents, the `$imdb.rating` and `$awards` fields from each document were passed into our custom function.\n\nUsing dot notation, we've seen how to specify any sub-document you may want to use in an aggregation. We also learned how to use an entire field and it's subfields in an aggregation, as we've seen with the `$awards` parameter in our earlier example. Our final result looks like this:\n\n![Ado Review Score using $function\n\nThis is just scratching the surface of what we can do with the `$function` operator. In our above example, we paired it with the `$addFields` operator, but we can also use `$function` as an alternative to the `$where` operator, or with other operators as well. Check out the `$function` docs for more information.\n\n## $accumulator Operator\n\nThe next operator that we'll look at, which also allows us to write custom JavaScript functions, is called the $accumulator operator and is a bit more complex. This operator allows us to define a custom accumulator function with JavaScript. Accumulators are operators that maintain their state as documents progress through the pipeline. Much of the same rules apply to the `$accumulator` operator as they do to `$function`. 
We'll start by taking a look at the syntax for the `$accumulator` operator:

```
{
  $accumulator: {
    init: <code>,
    initArgs: <array expression>,       // Optional
    accumulate: <code>,
    accumulateArgs: <array expression>,
    merge: <code>,
    finalize: <code>,                   // Optional
    lang: <string>
  }
}
```

We have a couple of additional fields to discuss. Rather than just one `body` field that holds a JavaScript function, the `$accumulator` operator gives us four additional places to write JavaScript:

- The `init` field that initializes the state of the accumulator.
- The `accumulate` field that accumulates documents coming through the pipeline.
- The `merge` field that is used to merge multiple states.
- The `finalize` field that is used to update the result of the accumulation.

For arguments, we have two places to provide them: the `initArgs` that get passed into our `init` function, and the `accumulateArgs` that get passed into our `accumulate` function. The process for defining and passing the arguments is the same here as it is for the `$function` operator. It's important to note that, for the `accumulate` function, the first argument is the `state` rather than the first item in the `accumulateArgs` array.

Finally, we have to specify the `lang` field. As before, it will be `js`, as that's the only supported language as of the MongoDB 4.4 release.

### Implementing the $accumulator Operator

To see a concrete example of the `$accumulator` operator in action, we'll continue to use our `sample_mflix` dataset. We'll also build on top of the `adoScore` we added with the `$function` operator. We'll pair our `$accumulator` with a `$group` operator and return the number of movies released each year from our dataset, as well as how many movies are deemed watchable by Ado's scoring system (meaning they have a score greater than 8). Before we get to that, the short warm-up sketch below shows the moving parts of an accumulator in isolation.
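This warm-up is not part of the original walkthrough — it's a minimal sketch that counts movies and averages `imdb.rating` per year, something you would normally do with `$sum` and `$avg`. It's shown here only to illustrate `init`, `accumulate`, `merge`, and `finalize` working together:

```
db.movies.aggregate([
  {
    $group: {
      _id: "$year",
      ratingStats: {
        $accumulator: {
          // State shared by every document in a given year's group.
          init: "function(){ return { count: 0, total: 0 } }",
          // Non-numeric ratings (some documents store an empty string) are treated as 0.
          accumulate: "function(state, rating){ let r = typeof rating === 'number' ? rating : 0; return { count: state.count + 1, total: state.total + r } }",
          accumulateArgs: ["$imdb.rating"],
          merge: "function(s1, s2){ return { count: s1.count + s2.count, total: s1.total + s2.total } }",
          // Turn the raw totals into an average once the group is complete.
          finalize: "function(state){ return { count: state.count, avgRating: state.count > 0 ? state.total / state.count : null } }",
          lang: "js"
        }
      }
    }
  }
])
```

With those pieces in place, let's get back to the movie-watchability example.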
Our `$accumulator` function will look like this:\n\n``` \n{\n _id: \"$year\",\n consensus: {\n $accumulator: {\n init: \"function(){return {total:0, worthWatching: 0}}\",\n accumulate: \"function(state, adoScore){let worthIt = 0; if(adoScore > 8){worthIt = 1}; return {total:state.total + 1, worthWatching: state.worthWatching + worthIt }}\",\n accumulateArgs:\"$adoScore\"],\n merge: \"function(state1, state2){return {total: state1.total + state2.total, worthWatching: state1.worthWatching + state2.worthWatching}}\",\n } \n }\n}\n```\n\nAnd just to display the JavaScript functions in non-string form for readability:\n\n``` \n// Init\nfunction(){\n return { total:0, worthWatching: 0 }\n}\n\n// Accumulate\nfunction(state, adoScore){\n let worthIt = 0; \n if(adoScore > 8){ worthIt = 1}; \n return {\n total: state.total + 1, \n worthWatching: state.worthWatching + worthIt }\n}\n\n// Merge\nfunction(state1, state2){\n return {\n total: state1.total + state2.total, \n worthWatching: state1.worthWatching + state2.worthWatching \n }\n}\n```\n\nIf you are running the above aggregation using the mongo shell, the query will look like this:\n\n``` \ndb.movies.aggregate([\n { \n $group: {\n _id: \"$year\",\n consensus: {\n $accumulator: {\n init: \"function(){return {total:0, worthWatching: 0}}\",\n accumulate: \"function(state, adoScore){let worthIt = 0; if(adoScore > 8){worthIt = 1}; return {total:state.total + 1, worthWatching: state.worthWatching + worthIt }}\",\n accumulateArgs:[\"$adoScore\"],\n merge: \"function(state1, state2){return {total: state1.total + state2.total, worthWatching: state1.worthWatching + state2.worthWatching}}\",\n } \n }\n }\n }\n ])\n```\n\nThe result of running this query on the `sample_mflix` database will look like this:\n\n![$accumulator function\n\nNote: Since the `adoScore` function does rely on `Math.random()` for part of its calculation, you may get varying results each time you run the aggregation.\n\nJust like the `$function` operator, writing a custom accumulator and using the `$accumulator` operator should only be done when existing operators cannot fulfill your application's use case. Similarly, we are also just scratching the surface of what is achievable by writing your own accumulator. Check out the docs for more.\n\nBefore we close out this blog post, let's take a look at what our completed aggregation pipeline will look like combining both our `$function` and `$accumulator` operators. 
If you are using the `sample_mflix` dataset, you should be able to run both examples with the following aggregation pipeline code:\n\n``` \ndb.movies.aggregate(\n {\n '$addFields': {\n 'adoScore': {\n '$function': {\n 'body': 'function(imdb, awards){let base = Math.floor(Math.random() * 6) \\n let imdbBonus = 0 \\n if(imdb > 8){ imdbBonus = 1} \\n let nominations = (awards.nominations * 0.25) \\n let wins = (awards.wins * 0.5) \\n let final = base + imdbBonus + nominations + wins \\n if(final > 10){final = 9.9} \\n return final}', \n 'args': [\n '$imdb.rating', '$awards'\n ], \n 'lang': 'js'\n }\n }\n }\n }, {\n '$group': {\n '_id': '$year', \n 'consensus': {\n '$accumulator': {\n 'init': 'function(){return {total:0, worthWatching: 0}}', \n 'accumulate': 'function(state, adoScore){let worthIt = 0; if(adoScore > 8){worthIt = 1}; return {total:state.total + 1, worthWatching: state.worthWatching + worthIt }}', \n 'accumulateArgs': [\n '$adoScore'\n ], \n 'merge': 'function(state1, state2){return {total: state1.total + state2.total, worthWatching: state1.worthWatching + state2.worthWatching}}'\n }\n }\n }\n }\n]) \n```\n\n## Conclusion\n\nThe new `$function` and `$accumulator` operators released in MongoDB 4.4 improve developer productivity and allow MongoDB to handle many more edge cases out of the box. Just remember that these new operators, while powerful, should only be used if existing operators cannot get the job done as they may degrade performance!\n\nWhether you are trying to use new functionality with these operators, fine-tuning your MongoDB cluster to get better performance, or are just trying to get more done with less, MongoDB 4.4 is sure to provide a few new and useful things for you. You can try all of these features out today by deploying a MongoDB 4.4 beta cluster on [MongoDB Atlas for free.\n\nIf you have any questions about these new operators or this blog post, head over to the MongoDB Community forums and I'll see you there.\n\nHappy experimenting!\n\n>\n>\n>**Safe Harbor Statement**\n>\n>The development, release, and timing of any features or functionality\n>described for MongoDB products remains at MongoDB's sole discretion.\n>This information is merely intended to outline our general product\n>direction and it should not be relied on in making a purchasing decision\n>nor is this a commitment, promise or legal obligation to deliver any\n>material, code, or functionality. Except as required by law, we\n>undertake no obligation to update any forward-looking statements to\n>reflect events or circumstances after the date of such statements.\n>\n>\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to use custom aggregation expressions in your MongoDB aggregation pipeline operations.", "contentType": "Tutorial"}, "title": "How to Use Custom Aggregation Expressions in MongoDB 4.4", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/learn-mongodb-university-online-free-mooc", "action": "created", "body": "# Learn MongoDB with MongoDB University Free Courses\n\n## Introduction\n\nYour cheap boss doesn't want to pay for this awesome MongoDB Training\nyou found online? In-person trainings are quite challenging given the\ncurrent situation we are all facing with COVID-19. Also, is this\ntraining even up-to-date with the most recent MongoDB release?\n\nWho is better than MongoDB to teach you MongoDB? 
MongoDB\nUniversity offers free courses for\nbeginners and more advanced MongoDB users. Our education team is\ndedicated to keep these courses up-to-date and build new content around\nour latest features.\n\nIn this blog post, we will have a look at these\ncourses and what you\ncan learn from each of them.\n\n## Course Format\n\nMongoDB courses are online and free. You can do them at your pace and at\nthe most convenient time for you.\n\nEach course contains a certain number of chapters, and each chapter\ncontains a few items.\n\nAn item usually contains a five- to 10-minute video in English, focussed\naround one specific topic you are learning, and a little quiz or a\nguided lab, which are here to make sure that you understood the core\nconcepts of that particular item.\n\n## Learning Paths\n\nMongoDB University proposes two\ndifferent learning paths to suit you best. There are also some more\ncourses that are not in the learning paths, but you are completely free\nto create your own learning path and follow whichever courses you want\nto. These two are just general guidelines if you don't know where to\nstart.\n\n- If you are a developer, you will probably want to start with the\n developer\n path.\n- If you are a DBA, you will more likely prefer the DBA\n path.\n\n### Developer Path\n\nThe developer path contains six recommended trainings, which I will\ndescribe more in detail in the next section.\n\n- M001: MongoDB Basics.\n- M103: Basic Cluster Administration.\n- M121: Aggregation Framework.\n- M220: MongoDB for Developers.\n- M201: MongoDB Performance.\n- M320: MongoDB Data Modeling.\n\n### DBA Path\n\nThe DBA path contains five recommended trainings, which I will also\ndescribe in detail in the next section.\n\n- M001: MongoDB Basics.\n- M103: Basic Cluster Administration.\n- M201: MongoDB Performance.\n- M301: MongoDB Security.\n- M312: Diagnostics and Debugging.\n\n## MongoDB University Courses\n\nLet's see all the courses available in more details.\n\n### M001 - MongoDB Basics\n\nLevel: Introductory\n\nIn this six-chapter course, you will get your hands on all the basics,\nincluding querying, computing, connecting to, storing, indexing, and\nanalyzing your data.\n\nLearn more and\nregister.\n\n### M100 - MongoDB for SQL Pros\n\nLevel: Introductory\n\nIn this four-chapter course, you will build a solid understanding of how\nMongoDB differs from relational databases. You will learn how to model\nin terms of documents and how to use MongoDB's drivers to easily access\nthe database.\n\nLearn more and\nregister.\n\n### M103 - Basic Cluster Administration\n\nLevel: Introductory\n\nIn this four-chapter course, you'll build standalone nodes, replica\nsets, and sharded clusters from scratch. These will serve as platforms\nto learn how administration varies depending on the makeup of a cluster.\n\nLearn more and\nregister.\n\n### M121 - The MongoDB Aggregation Framework\n\nLevel: Introductory\n\nIn this seven-chapter course, you'll build an understanding of how to\nuse MongoDB Aggregation Framework pipeline, document transformation, and\ndata analysis. 
We will look into the internals of the Aggregation\nFramework alongside optimization and pipeline building practices.\n\nLearn more and\nregister.\n\n### A300 - Atlas Security\n\nLevel: Intermediate\n\nIn this one-chapter course, you'll build a solid understanding of Atlas\nsecurity features such as:\n\n- Threat Modeling and Security Concepts\n- Data Flow\n- Network Access Control\n- Authentication and Authorization\n- Encryption\n- Logging\n- Compliance\n- Configuring VPC Peering\n- VPC Peering Lab\n\nLearn more and\nregister.\n\n### M201 - MongoDB Performance\n\nLevel: Intermediate\n\nIn this five-chapter course, you'll build a good understanding of how to\nanalyze the different trade-offs of commonly encountered performance\nscenarios.\n\nLearn more and\nregister.\n\n### M220J - MongoDB for Java Developers\n\nLevel: Intermediate\n\nIn this five-chapter course, you'll build the back-end for a\nmovie-browsing application called MFlix.\n\nUsing the MongoDB Java Driver, you will implement MFlix's basic\nfunctionality. This includes basic and complex movie searches,\nregistering new users, and posting comments on the site.\n\nYou will also add more features to the MFlix application. This includes\nwriting analytical reports, increasing the durability of MFlix's\nconnection with MongoDB, and implementing security best practices.\n\nLearn more and\nregister.\n\n### M220JS - MongoDB for JavaScript Developers\n\nLevel: Intermediate\n\nSame as the one above but with JavaScript and Node.js.\n\nLearn more and\nregister.\n\n### M220JS - MongoDB for .NET Developers\n\nLevel: Intermediate\n\nSame as the one above but with C# and .NET.\n\nLearn more and\nregister.\n\n### M220P - MongoDB for Python Developers\n\nLevel: Intermediate\n\nSame as the one above but with Python.\n\nLearn more and\nregister.\n\n### M310 - MongoDB Security\n\nLevel: Advanced\n\nIn this three-chapter course, you'll build an understanding of how to\ndeploy a secure MongoDB cluster, configure the role-based authorization\nmodel to your needs, set up encryption, do proper auditing, and follow\nsecurity best practices.\n\nLearn more and\nregister.\n\n### M312 - Diagnostics and Debugging\n\nLevel: Advanced\n\nIn this five-chapter course, you'll build a good understanding of the\ntools you can use to diagnose the most common issues that arise in\nproduction deployments, and how to fix those problems when they arise.\n\nLearn more and\nregister.\n\n### M320 - Data Modeling\n\nLevel: Advanced\n\nIn this five-chapter course, you'll build a solid understanding of\nfrequent patterns to apply when modeling and will be able to apply those\nin your designs.\n\nLearn more and\nregister.\n\n## Get MongoDB Certified\n\nIf you have built enough experience with MongoDB, you can get\ncertified and be\nofficially recognised as a MongoDB expert.\n\nTwo certifications are available:\n\n- C100DEV:\n MongoDB Certified Developer Associate Exam.\n- C100DBA:\n MongoDB Certified DBA Associate Exam.\n\nOnce certified, you will appear in the list of MongoDB Certified\nProfessionals which can be found in the MongoDB Certified Professional\nFinder.\n\n## Wrap-Up\n\nMongoDB University is the best place\nto learn MongoDB. 
There is content available for beginners and more\nadvanced users.\n\nMongoDB official certifications are definitely a great addition to your\nLinkedIn profile too once you have built enough experience with MongoDB.\n\n>\n>\n>If you have questions, please head to our developer community\n>website where the MongoDB engineers and\n>the MongoDB community will help you build your next big idea with\n>MongoDB.\n>\n>\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Presentation of MongoDB's free courses in the MongoDB University online.", "contentType": "News & Announcements"}, "title": "Learn MongoDB with MongoDB University Free Courses", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-sample-datasets", "action": "created", "body": "# The MongoDB Atlas Sample Datasets\n\nDid you know that MongoDB Atlas provides a complete set of example data to help you learn faster? The Load Sample Data feature enables you to load eight datasets into your database to explore. You can use this with the MongoDB Atlas M0 free tier to try out MongoDB Atlas and MongoDB's features. The sample data helps you try out features such as indexing, querying including geospatial, and aggregations, as well as using MongoDB Tooling such as MongoDB Charts and MongoDB Compass.\n\nIn the rest of this post, we'll explore why it was created, how to first load the sample data, and then we'll outline what the datasets contain. We'll also cover how you can download these datasets to use them on your own local machine.\n\n## Table of Contents\n\n- Why Did We Create This Sample Data Set?\n- Loading the Sample Data Set into Your Atlas Cluster\n- A Deeper Dive into the Atlas Sample Data\n - Sample AirBnB Listings Dataset\n - Sample Analytics Dataset\n - Sample Geospatial Dataset\n - Sample Mflix Dataset\n - Sample Restaurants Dataset\n - Sample Supply Store Dataset\n - Sample Training Dataset\n - Sample Weather Dataset\n- Downloading the Dataset for Use on Your Local Machine\n- Wrap Up\n\n## Why Did We Create This Sample Data Set?\n\nBefore diving into how we load the sample data, it's worth highlighting why we built the feature in the first place. We built this feature because often people would create a new empty Atlas cluster and they'd then have to wait until they wrote their application or imported data into it before they were able to learn and explore the platform. Atlas's Sample Data was the solution. It removes this roadblock and quickly allows you to get a feel for how MongoDB works with different types of data.\n\n## Loading the Sample Data Set into Your Atlas Cluster\n\nLoading the Sample Data requires an existing Atlas cluster and three steps.\n\n- In your left navigation pane in Atlas, click Clusters, then choose which cluster you want to load the data into.\n- For that cluster, click the Ellipsis (...) button.\n\n- Then, click the button \"Load Sample Dataset.\"\n\n- Click the correspondingly named button, \"Load Sample Dataset.\"\n\nThis process will take a few minutes to complete, so let's look at exactly what kind of data we're going to load. Once the process is completed, you should see a banner on your Atlas Cluster similar to this image below.\n\n## A Deeper Dive into the Atlas Sample Data\n\nThe Atlas Sample Datasets are comprised of eight databases and their associated collections. 
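As a quick sanity check (not part of the loading steps themselves), once the load completes you can confirm the databases are present from `mongosh` or the shell built into MongoDB Compass:

``` javascript
// Lists database names only; after the load you should see the eight sample
// databases (sample_airbnb, sample_analytics, sample_geospatial, sample_mflix,
// sample_restaurants, sample_supplies, sample_training, sample_weatherdata)
// alongside any databases of your own.
db.adminCommand({ listDatabases: 1, nameOnly: true })
```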
Each individual dataset is documented to illustrate the schema, the collections, the indexes, and a sample document from each collection.\n\n### Sample AirBnB Listings Dataset\n\nThis dataset consists of a single collection of AirBnB reviews and listings. There are indexes on the `property type`, `room type`, `bed`, `name`, and on the `location` fields as well as on the `_id` of the documents.\n\nThe data is a randomized subset of the original publicly available AirBnB dataset. It covers several different cities around the world. This dataset is used extensively in MongoDB University courses.\n\nYou can find more details on the Sample AirBnB Documentation page.\n\n### Sample Analytics Dataset\n\nThis dataset consists of three collections of randomly generated financial services data. There are no additional indexes beyond the `_id` index on each collection. The collections represent accounts, transactions, and customers.\n\nThe transactions collection uses the Bucket Pattern to hold a set of transactions for a period. It was built for MongoDB's private training, specifically for the MongoDB for Data Analysis course.\n\nThe advantages in using this pattern are a reduction in index size when compared to storing each transaction in a single document. It can potentially simplify queries and it provides the ability to use pre-aggregated data in our documents.\n\n``` json\n// transaction collection document example\n{\n\"account_id\": 794875,\n\"transaction_count\": 6,\n\"bucket_start_date\": {\"$date\": 693792000000},\n\"bucket_end_date\": {\"$date\": 1473120000000},\n\"transactions\": \n {\n \"date\": {\"$date\": 1325030400000},\n \"amount\": 1197,\n \"transaction_code\": \"buy\",\n \"symbol\": \"nvda\",\n \"price\": \"12.7330024299341033611199236474931240081787109375\",\n \"total\": \"15241.40390863112172326054861\"\n },\n {\n \"date\": {\"$date\": 1465776000000},\n \"amount\": 8797,\n \"transaction_code\": \"buy\",\n \"symbol\": \"nvda\",\n \"price\": \"46.53873172406391489630550495348870754241943359375\",\n \"total\": \"409401.2229765902593427995271\"\n },\n {\n \"date\": {\"$date\": 1472601600000},\n \"amount\": 6146,\n \"transaction_code\": \"sell\",\n \"symbol\": \"ebay\",\n \"price\": \"32.11600884852845894101847079582512378692626953125\",\n \"total\": \"197384.9903830559086514995215\"\n },\n {\n \"date\": {\"$date\": 1101081600000},\n \"amount\": 253,\n \"transaction_code\": \"buy\",\n \"symbol\": \"amzn\",\n \"price\": \"37.77441226157566944721111212857067584991455078125\",\n \"total\": \"9556.926302178644370144411369\"\n },\n {\n \"date\": {\"$date\": 1022112000000},\n \"amount\": 4521,\n \"transaction_code\": \"buy\",\n \"symbol\": \"nvda\",\n \"price\": \"10.763069758141103449133879621513187885284423828125\",\n \"total\": \"48659.83837655592869353426977\"\n },\n {\n \"date\": {\"$date\": 936144000000},\n \"amount\": 955,\n \"transaction_code\": \"buy\",\n \"symbol\": \"csco\",\n \"price\": \"27.992136535152877030441231909207999706268310546875\",\n \"total\": \"26732.49039107099756407137647\"\n }\n]\n}\n```\n\nYou can find more details on the [Sample Analytics Documentation page.\n\n### Sample Geospatial Dataset\n\nThis dataset consists of a single collection with information on shipwrecks. It has an additional index on the `coordinates` field (GeoJSON). This index is a Geospatial 2dsphere index. 
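Because the 2dsphere index is already in place, geospatial operators work against the `coordinates` field out of the box. As a hedged illustration (the point and radius below are arbitrary and not taken from the documentation), you could look for shipwrecks within roughly 100 km of a position like this:

``` javascript
// $centerSphere takes [centre, radius-in-radians]; divide the distance by the
// Earth's radius (~6378.1 km) to convert kilometres to radians.
// The example point is arbitrary.
db.shipwrecks.find({
  coordinates: {
    $geoWithin: {
      $centerSphere: [[-80.0, 32.7], 100 / 6378.1]
    }
  }
}).limit(5)
```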
This dataset was created to help explore the possibility of geospatial queries within MongoDB.

The image below was created in MongoDB Charts and shows all of the shipwrecks on the eastern seaboard of North America.

You can find more details on the Sample Geospatial Documentation page.

### Sample Mflix Dataset

This dataset consists of five collections with information on movies, movie theatres, movie metadata, and user movie reviews and their ratings for specific movies. The data is a subset of the IMDB dataset. There are three additional indexes beyond `_id`: on the sessions collection on the `user_id` field, on the theatres collection on the `location.geo` field, and on the users collection on the `email` field. You can see this dataset used in this MongoDB Charts tutorial.

The Atlas Search Movies site uses this data and MongoDB's Atlas Search to provide a searchable movie catalog.

This dataset is the basis of our Atlas Search tutorial.

You can find more details on the Sample Mflix Documentation page.

### Sample Restaurants Dataset

This dataset consists of two collections with information on restaurants and neighbourhoods in New York. There are no additional indexes. This dataset is the basis of our Geospatial tutorial. The restaurant document only contains the location and the name for a given restaurant.

``` json
// restaurants collection document example
{
  location: {
    type: "Point",
    coordinates: [-73.856077, 40.848447]
  },
  name: "Morris Park Bake Shop"
}
```

In order to use the collections for geographical searching, we need to add an index, specifically a 2dsphere index. We can add this index and then search for all restaurants in a one-kilometer radius of a given location, with the results sorted from closest to furthest away. The code below creates the index, then adds a helper variable to represent 1km, which our query then uses with the $nearSphere criteria to return the list of restaurants within 1km of that location.

``` javascript
db.restaurants.createIndex({ location: "2dsphere" })
var ONE_KILOMETER = 1000
db.restaurants.find({ location: { $nearSphere: { $geometry: { type: "Point", coordinates: [-73.93414657, 40.82302903] }, $maxDistance: ONE_KILOMETER } } })
```

You can find more details on the Sample Restaurants Documentation page.

### Sample Supply Store Dataset

This dataset consists of a single collection with information on mock sales data for a hypothetical office supplies company. There are no additional indexes. This is the second dataset used in the MongoDB Charts tutorials.

The sales collection uses the Extended Reference pattern to hold both the items sold and their details as well as information on the customer who purchased these items.
This pattern includes frequently accessed fields in the main document to improve performance at the cost of additional data duplication.\n\n``` json\n// sales collection document example\n{\n \"_id\": {\n \"$oid\": \"5bd761dcae323e45a93ccfe8\"\n },\n \"saleDate\": {\n \"$date\": { \"$numberLong\": \"1427144809506\" }\n },\n \"items\": \n {\n \"name\": \"notepad\",\n \"tags\": [ \"office\", \"writing\", \"school\" ],\n \"price\": { \"$numberDecimal\": \"35.29\" },\n \"quantity\": { \"$numberInt\": \"2\" }\n },\n {\n \"name\": \"pens\",\n \"tags\": [ \"writing\", \"office\", \"school\", \"stationary\" ],\n \"price\": { \"$numberDecimal\": \"56.12\" },\n \"quantity\": { \"$numberInt\": \"5\" }\n },\n {\n \"name\": \"envelopes\",\n \"tags\": [ \"stationary\", \"office\", \"general\" ],\n \"price\": { \"$numberDecimal\": \"19.95\" },\n \"quantity\": { \"$numberInt\": \"8\" }\n },\n {\n \"name\": \"binder\",\n \"tags\": [ \"school\", \"general\", \"organization\" ],\n \"price\": { \"$numberDecimal\": \"14.16\" },\n \"quantity\": { \"$numberInt\": \"3\" }\n }\n ],\n \"storeLocation\": \"Denver\",\n \"customer\": {\n \"gender\": \"M\",\n \"age\": { \"$numberInt\": \"42\" },\n \"email\": \"cauho@witwuta.sv\",\n \"satisfaction\": { \"$numberInt\": \"4\" }\n },\n \"couponUsed\": true,\n \"purchaseMethod\": \"Online\"\n}\n```\n\nYou can find more details on the [Sample Supply Store Documentation page.\n\n### Sample Training Dataset\n\nThis dataset consists of nine collections with no additional indexes. It represents a selection of realistic data and is used in the MongoDB private training courses.\n\nIt includes a number of public, well-known data sources such as the OpenFlights, NYC's OpenData, and NYC's Citibike Data.\n\nThe routes collection uses the Extended Reference pattern to hold OpenFlights data on airline routes between airports. It references airline information in the `airline` sub document, which has details about the specific plane on the route. This is another example of improving performance at the cost of minor data duplication for fields that are likely to be frequently accessed.\n\n``` json\n// routes collection document example\n{\n \"_id\": {\n \"$oid\": \"56e9b39b732b6122f877fa5c\"\n },\n \"airline\": {\n \"alias\": \"2G\",\n \"iata\": \"CRG\",\n \"id\": 1654,\n \"name\": \"Cargoitalia\"\n },\n \"airplane\": \"A81\",\n \"codeshare\": \"\",\n \"dst_airport\": \"OVB\",\n \"src_airport\": \"BTK\",\n \"stops\": 0\n}\n```\n\nYou can find more details on the Sample Training Documentation page.\n\n### Sample Weather Dataset\n\nThis dataset consists of a single collection with no additional indexes. It represents detailed weather reports from locations across the world. It holds geospatial data on the locations in the form of legacy coordinate pairs.\n\nYou can find more details on the Sample Weather Documentation page.\n\nIf you have ideas or suggestions for new datasets, we are always interested. Let us know on the developer community website.\n\n### Downloading the Dataset for Use on Your Local Machine\n\nIt is also possible to download and explore these datasets on your own local machine. 
You can download the complete sample dataset via the wget command:\n\n``` shell\nwget https://atlas-education.s3.amazonaws.com/sampledata.archive\n```\n\nNote: You can also use the curl command:\n\n``` shell\ncurl https://atlas-education.s3.amazonaws.com/sampledata.archive -o sampledata.archive\n```\n\nYou should check you are running a local `mongod` instance or you should start a new `mongod` instance at this point. This `mongod` will be used in conjunction with `mongorestore` to unpack and host a local copy of the sample dataset. You can find more details on starting mongod instances on this documentation page.\n\nThis section assumes that you're connecting to a relatively straightforward setup, with a default authentication database and some authentication set up. (You should *always* create some users for authentication!)\n\nIf you don't provide any connection details to `mongorestore`, it will attempt to connect to MongoDB on your local machine, on port 27017 (which is MongoDB's default). This is the same as providing `--host localhost:27017`.\n\n``` bash\nmongorestore --archive=sampledata.archive\n```\n\nYou can use a variety of tools to view your documents. You can use MongoDB Compass, the CLI, or the MongoDB Visual Studio Code (VSCode) plugin to interact with the documents in your collections. You can find out how to use MongoDB Playground for VSCode and integrate MongoDB into a Visual Studio Code environment.\n\nIf you find the sample data useful for building or helpful, let us know on the community forums!\n\n## Wrap Up\n\nThese datasets offer a wide selection of data that you can use to both explore MongoDB's features and prototype your next project without having to worry about where you'll find the data.\n\nCheck out the documentation on Load Sample Data to learn more on these datasets and load it into your Atlas Cluster today to start exploring it!\n\nTo learn more about schema patterns and MongoDB, please check out our blog series Building with Patterns and the free MongoDB University Course M320: Data Modeling to level up your schema design skills.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Explaining the MongoDB Atlas Sample Data and diving into its various datasets", "contentType": "Article"}, "title": "The MongoDB Atlas Sample Datasets", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/using-expo-realm-expo-dev-client", "action": "created", "body": "# Using Expo and Realm React Native with expo-dev-client\n\nIn our last post on how to build an offline-first React Native mobile app with Expo and Realm React Native, we talked about a limitation of using Realm React Native and Expo where we stated that Realm React Native is not compatible with Expo-managed workflows. Well, wait no more, because now Expo works with Realm React Native and we have a nice custom development client that will have roughly the same functionality as Expo Go.\n\n## Creating a React Native app using Expo and Realm React Native in one simple step\n\nYes, it sounds like clickbait, but it's true. 
If you want to build a full application that uses TypeScript, just type in your terminal:\n\n```bash \nnpx expo-cli init ReactRealmTSTemplateApp -t @realm/expo-template-js\n```\n\nIf you'd rather do JavaScript, just type:\n\n```bash\nnpx expo-cli init ReactRealmJSTemplateApp -t @realm/expo-template-js\n```\n\nAfter either of these two, change to the directory containing the project that has just been created and start the iOS or Android app:\n\n```bash\ncd ReactRealmJSTemplateApp\nyarn android\n```\n\nOr\n\n```bash\ncd ReactRealmJSTemplateApp\nyarn ios\n``` \n\nThis will create a prebuilt Expo app. That is, you'll see `ios` and `android` folders in your project and this won't be a managed Expo app, where all the native details are hidden and Expo takes care of everything. Having said that, you don't need to go into the `ios` or `android` folders unless you need to add some native code in Swift or Kotlin.\n\nOnce launched, the app will ask to open in `ReactRealmJSTemplateApp`, not in Expo Go. This means we're running this nice, custom, dev client that will bring us most of the Expo Go experience while also working with Realm React Native.\n\nWe can install our app and use it using `yarn ios/android`. If we want to start the dev-client to develop, we can also use `yarn start`.\n\n## Adding our own code\n\nThis template is a quick way to start with Realm React Native, so it includes all code you'll need to write your own Realm React Native application:\n\n* It adds the versions of Expo (^44.0.6), React Native (0.64.3), and Realm (^10.13.0) that work together.\n* It also adds `expo-dev-client` and `@realm/react` packages, to make the custom development client part work.\n* Finally, in `app`, you'll find sample code to create your own model object, initialize a connection with Atlas Device Sync, save and fetch data, etc.\n\nBut I want to reuse the Read it Later - Maybe app I wrote for the last post on Expo and Realm React Native. Well, I just need to delete all JavaScript files inside `app`, copy over all my code from that App, and that's all. Now my old app's code will work with this custom dev client!\n\n## Putting our new custom development client to work\n\nShowing the debug menu is explained in the React Native debug documentation, but you just need to:\n\n> Use the \u2318D keyboard shortcut when your app is running in the iOS Simulator, or \u2318M when running in an Android emulator on macOS, and Ctrl+M on Windows and Linux.\n\n| Android Debug Menu | iOS Debug Menu |\n|--------------|-----------|\n| | | \n\nAs this is an Expo app, we can also show the Expo menu by just pressing `m` from terminal while our app is running.\n \n\n## Now do Hermes and react-native-reanimated\nThe Realm React Native SDK has a `hermes` branch that is indeed compatible with Hermes. So, it'll work with `react-native-reanimated` v2 but not with Expo, due to the React Native version the Expo SDK is pinned to. \n\nSo, right now, you have to choose: \n* Have Expo + Realm working out of the box. \n* Or start your app using Realm React Native+ Hermes (not using Expo).\n\nBoth the Expo team and the Realm JavaScript SDK teams are working hard to make everything work together, and we'll update you with a new post in the future on using React Native Reanimated + Expo + Hermes + Realm (when all required dependencies are in place).\n\n## Recap\n\nIn this post, we've shown how simple it is now to create a React Native application that uses Expo + Realm React Native. 
This still won't work with Hermes, but watch this space as Realm is already compatible with it!\n\n## One more thing\n\nOur community has also started to leverage our new capabilities here. Watch this video from Aaron Saunders explaining how to use Realm React Native + Expo building a React Native app.\n\nAnd, as always, you can hang out in our Community Forums and ask questions (and get answers) about your React Native development with Expo, Realm React Native and MongoDB.\n", "format": "md", "metadata": {"tags": ["Realm", "JavaScript", "TypeScript", "React Native"], "pageDescription": "Now we can write our React Native Expo Apps using Realm, React Native and use a custom-dev-client to get most of the functionality of Expo Go, in just one simple step.", "contentType": "Tutorial"}, "title": "Using Expo and Realm React Native with expo-dev-client", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/use-atlas-on-heroku", "action": "created", "body": "# How to Deploy MongoDB on Heroku\n\n## Can I deploy MongoDB on Heroku?\n\nYes! It's easy to set up and free to use with MongoDB Atlas.\n\nAs we begin building more cloud-native applications, choosing the right services and tools can be quite overwhelming. Luckily, when it comes to choosing a cloud database service, MongoDB Atlas may be the easiest choice yet!\n\nWhen paired with Heroku, one of the most popular PaaS solutions for developers, you'll be able to build and deploy fully managed cloud applications in no time. The best part? MongoDB Atlas integrates easily with Heroku applications. All you need to do is set your Atlas cluster's connection string to a Heroku config variable. That's really all there is to it!\n\nIf you're already familiar with MongoDB, using MongoDB Atlas with your cloud applications is a natural choice. MongoDB Atlas is a fully-managed cloud database service for MongoDB that automates the management of MongoDB clusters in the cloud. Offering features such as automated backup, auto-scaling, multi-AZ fault tolerance, and a full suite of management and analytics tools, Atlas is the most sophisticated DBaaS anywhere, and is just a few clicks away.\n\nTo see how quick it is to get up and running with MongoDB Atlas, just follow the next few steps to set up your first free cluster. Then, see how quickly you can connect your new Atlas cluster to your Heroku application by following the step-by-step instructions later on in this tutorial.\n\n## Prerequisites\n\nThis tutorial assumes the following:\n\n- You are familiar with MongoDB and have written applications that use MongoDB.\n- You are familiar with Heroku and know how to deploy Heroku apps. - You have the Heroku CLI installed.\n- You are familiar with and have Git installed.\n\nWith these assumptions in mind, let's get started!\n\n## Setting up your Atlas Cluster in 5 steps (or less!)\n\n### Step 1: Create an Atlas account\n\n>\ud83d\udca1 If you already created a MongoDB account using your email address, you can skip this step! Sign into your account instead.\n\nYou can register for an Atlas account with your email address or your Google Account.\n\n### Step 2: Create your organization and project\n\nAfter registering, Atlas will prompt you to create an organization and project where you can deploy your cluster.\n\n### Step 3: Deploy Your first cluster\n\nYou'll now be able to select from a range of cluster options. 
For this tutorial, we'll select the Shared Clusters option, which is Atlas's Free Tier cluster. Click \"Create a cluster\" under the Shared Clusters option:\n\nOn the next page, you'll be prompted to choose a few options for your cluster:\n\n*Cloud provider & region*\n\nChoose where you want to deploy your cluster to. It is important to select the available region closest to your application, and ideally the same region, in order to minimize latency. In our case, let's choose the N. Virginia (us-east-1) region, with AWS as our cloud provider (since we're deploying on Heroku, and that is where Heroku hosts its infrastructure):\n\n*Cluster tier*\n\nHere, you'll see the cluster tiers available for the shared clusters option. You can view a comparison of RAM, Storage, vCPU, and Base Price between the tiers to help you choose the right tier. For our tutorial, leave the default M0 Sandbox tier selected:\n\n*Additional settings*\n\nDepending on the tier you choose, some additional options may be available for you. This includes the MongoDB version you wish to deploy and, for M2 clusters and up, Backup options. For this tutorial, select the latest version, MongoDB 4.4:\n\n*Cluster name*\n\nLastly, you can give your cluster a name. Keep in mind that once your cluster is created, you won't be able to change it! Here, we'll name our cluster `leaflix-east` to help us know which project and region this cluster will be supporting:\n\nThat's it! Be sure to review your options one last time before clicking the \"Create Cluster\" button.\n\n### Step 4: Create a database user for your cluster\n\nAtlas requires clients to authenticate as MongoDB database users to access clusters, so let's create one real quick for your cluster.\n\nAs you can see in the GIF above, creating a database user is straightforward. First navigate to the \"Database Access\" section (located under \"Security\" in the left-hand navigation bar). Click on \"Create a new Database User\". A prompt will appear where you can choose this user's authentication method and database user privileges.\n\nSelect the \"Password\" authentication method and give this user a username and password. As a convenience, you can even autogenerate a secure password right in Atlas, which we highly recommend.\n\n>\ud83d\udca1 After autogenerating your password, be sure to click Copy and store it in a safe place for now. We'll need it later when connecting to our cluster!\n\nChoose a built-in role for this user. For this tutorial, I'm choosing \"Atlas admin\" which grants the most privileges.\n\nFinally, click the \"Add User\" button. You've created your cluster's first database user!\n\n### Step 5: Grant authorized IP addresses access to your cluster\n\nThe last step in setting up your cluster is to choose which IP addresses are allowed to access it. To quickly get up and running, set your cluster to allow access from anywhere:\n\n**Congratulations! You've just successfully set up your Atlas cluster!**\n\n>\ud83d\udca1 Note: You probably don't want to allow this type of access in a production environment. Instead, you'll want to identify the exact IP addresses you know your application will be hosted on and explicitly set which IP addresses, or IP ranges, should have access to your cluster. 
After setting up your Heroku app, follow the steps in the \"Configuring Heroku IP Addresses in Atlas\" section below to see how to add the proper IP addresses for your Heroku app.\n\n## Configuring Heroku to point to MongoDB Atlas Cluster using config vars\n\nQuickly setting up our Atlas cluster was pretty exciting, but we think you'll find this section even more thrilling!\n\nAtlas-backed, Heroku applications are simple to set up. All you need to do is create an application-level config var that holds your cluster's connection string. Once set up, you can securely access that config var within your application!\n\nHere's how to do it:\n\n### Step 1: Log into the Heroku CLI\n\n``` bash\nheroku login\n```\n\nThis command opens your web browser to the Heroku login page. If you're already logged in, just click the \"Log in\" button. Alternatively, you can use the -i flag to log in via the command line.\n\n### Step 2: Clone My Demo App\n\nTo continue this tutorial, I've created a demo Node application that uses MongoDB Atlas and is an app I'd like to deploy to Heroku. Clone it, then navigate to its directory:\n\n``` bash\ngit clone https://github.com/adriennetacke/mongodb-atlas-heroku-leaflix-demo.git\n\ncd mongodb-atlas-heroku-leaflix-demo\n```\n\n### Step 3: Create the Heroku app\n\n``` bash\nheroku create leaflix\n```\n\nAs you can see, I've named mine `leaflix`.\n\n### Get your Atlas Cluster connection string\n\nHead back to your Atlas cluster's dashboard as we'll need to grab our connection string.\n\nClick the \"Connect\" button.\n\nChoose the \"Connect your application\" option.\n\nHere, you'll see the connection string we'll need to connect to our cluster. Copy the connection string.\n\nPaste the string into an editor; we'll need to modify it a bit before we can set it to a Heroku config variable.\n\nAs you can see, Atlas has conveniently added the username of the database user we previously created. To complete the connection string and make it valid, replace the \\ with your own database user's password and `` with `sample_mflix`, which is the sample dataset our demo application will use.\n\n>\ud83d\udca1 If you don't have your database user's password handy, autogenerate a new one and use that in your connection string. Just remember to update it if you autogenerate it again! You can find the password by going to Database Access \\> Clicking \"Edit\" on the desired database user \\> Edit Password \\> Autogenerate Secure Password\n\n### Set a MONGODB_URI config var\n\nNow that we've properly formed our connection string, it's time to store it in a Heroku config variable. Let's set our connection string to a config var called MONGODB_URI:\n\n``` bash\nheroku config:set MONGODB_URI=\"mongodb+srv://yourUsername:yourPassword@yourClusterName.n9z04.mongodb.net/sample_mflix?retryWrites=true&w=majority\"\n```\n\nSome important things to note here:\n\n- This command is all one line.\n- Since the format of our connection string contains special characters, it is necessary to wrap it within quotes.\n\nThat's all there is to it! You've now properly added your Atlas cluster's connection string as a Heroku config variable, which means you can securely access that string once your application is deployed to Heroku.\n\n>\ud83d\udca1 Alternatively, you can also add this config var via your app's \"Settings\" tab in the Heroku Dashboard. Head to your apps \\> leaflix \\> Settings. 
Within the Config Vars section, click the \"Reveal Config Vars\" button, and add your config var there.\n\nThe last step is to modify your application's code to access these variables.\n\n## Connecting your app to MongoDB Atlas Cluster using Heroku config var values\n\nIn our demo application, you'll see that we have hard-coded our Atlas cluster connection string. We should refactor our code to use the Heroku config variable we previously created.\n\nConfig vars are exposed to your application's code as environment variables. Accessing these variables will depend on your application's language; for example, you'd use `System.getenv('key')` calls in Java or `ENV'key']` calls in Ruby.\n\nKnowing this, and knowing our application is written in Node, we can access our Atlas cluster via the `process.env` property, made available to us in Node.js. In the `server.js` file, change the uri constant to this:\n\n``` bash\nconst uri = process.env.MONGODB_URI;\n```\n\nThat's it! Since we've added our Atlas cluster connection string as a Heroku config var, our application will be able to access it securely once it's deployed.\n\nSave that file, commit that change, then deploy your code to Heroku.\n\n``` bash\ngit commit -am \"fix: refactor hard coded connection string to Heroku config var\"\n\ngit push heroku master\n```\n\nYour app is now deployed! You can double check that at least one instance of Leaflix is running by using this command:\n\n``` bash\nheroku ps:scale web=1\n```\n\nIf you see a message that says `Scaling dynos... done, now running web at 1:Free`, you'll know that at least one instance is up and running.\n\nFinally, go visit your app. You can do so with this useful command:\n\n``` bash\nheroku open\n```\n\nIf all is well, you'll see something like this:\n\n![Leaflix App\n\nWhen you click on the \"Need a Laugh?\" button, our app will randomly choose a movie that has the \"Comedy\" genre in its genres field. This comes straight from our Atlas cluster and uses the `sample_mflix` dataset.\n\n## Configuring Heroku IP addresses in MongoDB Atlas\n\nWe have our cluster up and running and our app is deployed to Heroku!\n\nTo get us through the tutorial, we initially configured our cluster to accept connections from any IP address. Ideally you would like to restrict access to only your application, and there are a few ways to do this on Heroku.\n\nThe first way is to use an add-on to provide a static outbound IP address for your application that you can use to restrict access in Atlas. You can find some listed here:\n\nAnother way would be to use Heroku Private Spaces and use the static outbound IPs for your space. This is a more expensive option, but does not require a separate add-on.\n\nThere are some documents and articles out there that suggest you can use IP ranges published by either AWS or Heroku to allow access to IPs originating in your AWS region or Heroku Dynos located in those regions. While this is possible, it is not recommended as those ranges are subject to change over time. Instead we recommend one of the two methods above.\n\nOnce you have the IP address(es) for your application, you can use them to configure your firewall in Atlas.\n\nHead to your Atlas cluster, delete any existing IP ranges, then add them to your allow list:\n\nOf course, at all times you will be communicating between your application and your Atlas database securely via TLS encryption.\n\n## Conclusion\n\nWe've accomplished quite a bit in a relatively short time! 
As a recap:\n\n- We set up and deployed an Atlas cluster in five steps or less.\n- We created a Heroku config variable to securely store our Atlas connection string, enabling us to connect our Atlas cluster to our Heroku application.\n- We learned that Heroku config variables are exposed to our application's code as environment variables.\n- We refactored the hard-coded URI string in our code to point to a `process.env.MONGODB_URI` variable instead.\n\nHave additional questions or a specific use case not covered here? Head over to MongoDB Developer's Community Forums and start a discussion! We look forward to hearing from you.\n\nAnd to learn more about MongoDB Atlas, check out this great Intro to MongoDB Atlas in 10 Minutes by fellow developer advocate Jesse Hall!\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to deploy MongoDB Atlas on Heroku for fully managed cloud applications.", "contentType": "Tutorial"}, "title": "How to Deploy MongoDB on Heroku", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/javascript/anonytexts", "action": "created", "body": "# Anonytexts\n\n## Creators\nMaryam Mudashiru and Idris Aweda Zubair contributed this project.\n\n## About the Project\n\nAnonytexts lets you message friends and family completely anonymously. Pull a prank with your friends or send your loved one a secret message.\n\n## Inspiration\n\nIt's quite a popular way to have fun amongst students in Nigeria to create profiles on anonymous messaging platforms to be shared amongst their peers so they may speak their minds.\n\nBeing a student, and having used a couple of these, most of them don't make it as easy and fun as it should be.\n\n## Why MongoDB?\n\nWe wanted to stand out by adding giving users more flexibility and customization while also considering the effects in the long run. We needed a database with a flexible structure that allows for scalability with zero deployment issues. MongoDB Atlas was the best bet.\n\n## How It Works\n\nYou create an account on the platform, with just your name, email and password. You choose to set a username or not. You get access to your dashboard where you can share your unique link to friends. People message you completely anonymously, leaving you to have to figure out which message is from which person. You may reply messages from users who have an account on the platform too.", "format": "md", "metadata": {"tags": ["JavaScript"], "pageDescription": "A web application to help users message and be messaged completely anonymously.", "contentType": "Code Example"}, "title": "Anonytexts", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/build-offline-first-react-native-mobile-app-with-expo-and-realm", "action": "created", "body": "# Build an Offline-First React Native Mobile App with Expo and Realm React Native\n\n* * *\n> Atlas App Services (Formerly MongoDB Realm )\n> \n> Atlas Device Sync (Formerly Realm Sync)\n> \n* * *\n## Introduction\n\nBuilding Mobile Apps that work offline and sync between different devices is not an easy task. You have to write code to detect when you\u2019re offline, save data locally, detect when you\u2019re back online, compare your local copy of data with that in the server, send and receive data, parse JSON, etc. \n\nIt\u2019s a time consuming process that\u2019s needed, but that appears over and over in every single mobile app. 
You end up solving the same problem for each new project you write. And it\u2019s worse if you want to run your app in iOS and Android. This means redoing everything twice, with two completely different code bases, different threading libraries, frameworks, databases, etc.\n\nTo help with offline data management and syncing between different devices, running different OSes, we can use MongoDB\u2019s client-side datastore Realm and Atlas Device Sync. To create a single code base that works well in both platforms we can use React Native. And the simplest way to create React Native Apps is using Expo. \n\n### React Native Apps\n\nThe React Native Project, allows you to create iOS and Android apps using React _\u201ca best-in-class JavaScript library for building user interfaces_\u201d. So if you\u2019re an experienced Web developer who already knows React, using React Native will be the natural next step to create native Mobile Apps.\n\nBut even if you\u2019re a native mobile developer with some experience using SwiftUI in iOS or Compose in Android, you\u2019ll find lots of similarities here.\n\n### Expo and React Native\n\nExpo is a set of tools built around React Native. Using Expo you can create React Native Apps quickly and easily. For that, we need to install Expo using Node.js package manager `npm`:\n\n```\nnpm install --global expo-cli\n```\n\nThis will install `expo-cli` globally so we can call it from anywhere in our system. In case we need to update Expo we\u2019ll use that very same command. __For this tutorial we\u2019ll need the latest version of Expo, that\u2019s been updated to support the Realm React Native__. You can find all the new features and changes in the Expo SDK 44 announcement blog post.\n\nTo ensure you have the latest Expo version run:\n```\nexpo --version\n```\nShould return at least `5.0.1`. If not, run again `npm install --global expo-cli`\n\n## Prerequisites\n\nNow that we have the latest Expo installed, let\u2019s check out that we have everything we need to develop our application:\n\n* Xcode 13, including Command Line Tools, if we want to develop an iOS version. We\u2019ll also need a macOS computer running at least macOS 11/Big Sur in order to run Xcode.\n* Android Studio, to develop for Android and at least one Android Emulator ready to test our apps.\n* Any code editor. I\u2019ll be using Visual Studio Code as it has plugins to help with React Native Development, but you can use any other editor.\n* Check that you have the latest version of yarn running `npm install -g yarn`\n* Make sure you are NOT on the latest version of node, however, or you will see errors about unsupported digital envelope routines. You need the LTS version instead. Get the latest LTS version number from https://nodejs.org/ and then run:\n```\nnvm install 16.13.1 # swap for latest LTS version\n```\n\nIf you don\u2019t have Xcode or Android Studio, and need to build without installing anything locally you can also try Expo Application Services, a cloud-based building service that allows you to build your Expo Apps remotely.\n\n### MongoDB Atlas and App Services App\n\nOur App will store data in a cloud-backed MongoDB Atlas cluster. So we need to create a free MongoDB account and set up a cluster. For this tutorial, a Free-forever, M0 cluster will be enough.\n\nOnce we have our cluster created we can go ahead and create an app in Atlas Application Services. 
The app will sync our data from a mobile device into a MongoDB Atlas database, although it has many other uses: manages authentication, can run serverless functions, host static sites, etc. Just follow this quick tutorial (select the React Native template) but don\u2019t download any code, as we\u2019re going to use Expo to create our app from scratch. That will configure our app correctly to use Sync and set it into Development Mode.\n\n## Read It Later - Maybe\n\nNow we can go ahead and create our app, a small \u201cread it later\u201d kind of app to store web links we save for later reading. As sometimes we never get back to those links I\u2019ll call it Read It Later - _Maybe_. \n\nYou can always clone the repo and follow along.\n\n| Login | Adding a Link | \n| :-------------: | :----------: | \n| | | \n\n| All Links | Deleting a Link | \n| :-------------: | :----------: | \n| | | \n\n### Install Expo and create the App\n\nWe\u2019ll use Expo to create our app using `expo init read-later-maybe`. This will ask us which template we want to use for our app. Using up and down cursors we can select the desired template, in this case, from the Managed Workflows we will choose the `blank` one, that uses JavaScript. This will create a `read-later-maybe` directory for us containing all the files we need to get started.\n\nTo start our app, just enter that directory and start the React Native Metro Server using ` yarn start`. This will tell Expo to install any dependencies and start the Metro Server.\n\n```bash\ncd read-later-maybe\nyarn start\n```\n\nThis will open our default browser, with the Expo Developer Tools at http://localhost:19002/. If your browser doesn't automatically open, press `d` to open Developer Tools in the browser. From this web page we can:\n\n* Start our app in the iOS Simulator\n* Start our app in the Android Emulator\n* Run it in a Web browser (if our app is designed to do that)\n* Change the connection method to the Developer Tools Server\n* Get a link to our app. (More on this later when we talk about Expo Go)\n\nWe can also do the same using the developer menu that\u2019s opened in the console, so it\u2019s up to you to use the browser and your mouse or your Terminal and the keyboard.\n\n## Running our iOS App\n\nTo start the iOS App in the Simulator, we can either click \u201cStart our app in the iOS Simulator\u201d on Expo Developer Tools or type `i` in the console, as starting expo leaves us with the same interface we have in the browser, replicated in the console. We can also directly run the iOS app in Simulator by typing `yarn ios` if we don\u2019t want to open the development server. \n\n### Expo Go\n\nThe first time we run our app Expo will install Expo Go. This is a native application (both for iOS and Android) that will take our JavaScript and other resources bundled by Metro and run it in our devices (real or simulated/emulated). Once run in Expo Go, we can make changes to our JavaScript code and Expo will take care of updating our app on the fly, no reload needed.\n\n| Open Expo Go | 1st time Expo Go greeting | Debug menu |\n| :-------------: | :----------: | :----------: | \n| | | |\n\nExpo Go apps have a nice debugging menu that can be opened pressing \u201cm\u201d in the Expo Developer console.\n\n### Structure of our App\n\nNow our app is working, but it only shows a simple message: \u201cOpen up App.js to start working on your app!\u201d. So we\u2019ll open the app using our code editor. 
These are the main files and folders we have so far:\n\n```\n.\n\u251c\u2500\u2500 .expo-shared\n\u2502 \u2514\u2500\u2500 assets.json\n\u251c\u2500\u2500 assets\n\u2502 \u251c\u2500\u2500 adaptive-icon.png\n\u2502 \u251c\u2500\u2500 favicon.png\n\u2502 \u251c\u2500\u2500 icon.png\n\u2502 \u2514\u2500\u2500 splash.png\n\u251c\u2500\u2500 .gitignore\n\u251c\u2500\u2500 App.js\n\u251c\u2500\u2500 app.json\n\u251c\u2500\u2500 babel.config.js\n\u251c\u2500\u2500 package.json\n\u2514\u2500\u2500 yarn.lock\n```\n\nThe main three files here are:\n\n* `package.json`, where we can check / add / delete our app\u2019s dependencies\n* `app.json`: configuration file for our app \n* `App.js`: the starting point for our JavaScript code \n\nThese changes can be found in tag `step-0` of the repo.\n\n## Let\u2019s add some navigation\n\nOur App will have a Login / Register Screen and then will show the list of Links for that particular User. We\u2019ll navigate from the Login Screen to the list of Links and when we decide to Log Out our app we\u2019ll navigate back to the Login / Register Screen. So first we need to add the React Native Navigation Libraries, and the gesture handler (for swipe & touch detection, etc). Enter the following commands in the Terminal:\n\n```bash\nexpo install @react-navigation/native\nexpo install @react-navigation/stack\nexpo install react-native-gesture-handler\nexpo install react-native-safe-area-context\nexpo install react-native-elements\n```\n\nThese changes can be found in tag `step-1` of the repo.\n\nNow, we\u2019ll create a mostly empty LoginView in `views/LoginView.js` (the `views` directory does not exist yet, we need to create it first) containing:\n\n```javascript\nimport React from \"react\";\nimport { View, Text, TextInput, Button, Alert } from \"react-native\";\n\nexport function LoginView({ navigation }) {\n return (\n \n Sign Up or Sign In:\n \n \n \n \n \n \n \n \n \n );\n}\n```\n\nThis is just the placeholder for our Login screen. We open it from App.js. Change the `App` function to:\n\n```javascript\nexport default function App() {\n return (\n \n \n \n \n \n );\n}\n```\n\nAnd add required `imports` to the top of the file, below the existing `import` lines.\n\n```javascript\nimport { NavigationContainer } from \"@react-navigation/native\";\nimport { createStackNavigator } from \"@react-navigation/stack\";\nimport { LoginView } from './views/LoginView';\nconst Stack = createStackNavigator();\n```\n\nAll these changes can be found in tag `step-2` of the repo.\n\n## Adding the Realm React Native\n\n### Installing Realm React Native\n\nTo add our Realm React Native SDK to the project we\u2019ll type in the Terminal:\n\n```bash\nexpo install realm\n```\n\nThis will add Realm as a dependency in our React Native Project. Now we can also create a file that will hold the Realm initialization code, we\u2019ll call it `RealmApp.js` and place it in the root of the directory, alongside `App.js`.\n\n```javascript\nimport Realm from \"realm\";\nconst app = new Realm.App({id: \"your-atlas-app-id-here\"});\nexport default app;\n```\n\nWe need to add a App ID to our code. Here are instructions on how to do so. In short, we will use a local database to save changes and will connect to MongoDB Atlas using a App Services applicaation that we create in the cloud. We have Realm React Native as a library in our Mobile App, doing all the heavy lifting (sync, offline, etc.) 
for our React Native app, and an App Services App in the cloud that connects to MongoDB Atlas, acting as our backend. This way, if we go offline we\u2019ll be using our local database on device and when online, all changes will propagate in both directions.\n\nAll these changes can be found in tag `step-3` of the repo.\n\n> \n> __Update 24 January 2022__\n> \n> A simpler way to create a React Native App that uses Expo & Realm is just to create it using a template. \n> For JavaScript based apps:\n> `npx expo-cli init ReactRealmJsTemplateApp -t @realm/expo-template-js`\n> \n> For TypeScript based apps:\n> `npx create-react-native-app ReactRealmTsTemplateApp -t with-realm`\n> \n## Auth Provider\n\nAll Realm related code to register a new user, log in and log out is inside a Provider. This way we can provide all descendants of this Provider with a context that will hold a logged in user. All this code is in `providers/AuthProvider.js`. You\u2019ll need to create the `providers` folder and then add `AuthProvider.js` to it.\n\nWith Realm mobile database you can store data offline and with Atlas Device Sync, you can sync across multiple devices and stores all your data in MongoDB Atlas, but can also run Serverless Functions, host static html sites or authenticate using multiple providers. In this case we\u2019ll use the simpler email/password authentication.\n\nWe create the context with:\n\n```javascript\nconst AuthContext = React.createContext(null);\n```\n\nThe SignIn code is asynchronous:\n\n```javascript\nconst signIn = async (email, password) => {\n const creds = Realm.Credentials.emailPassword(email, password);\n const newUser = await app.logIn(creds);\n setUser(newUser);\n };\n```\n\nAs is the code to register a new user:\n\n```javascript\n const signUp = async (email, password) => {\n await app.emailPasswordAuth.registerUser({ email, password });\n };\n```\n\nTo log out we simply check if we\u2019re already logged in, in that case call `logOut`\n\n```javascript\nconst signOut = () => {\n if (user == null) {\n console.warn(\"Not logged in, can't log out!\");\n return;\n }\n user.logOut();\n setUser(null);\n };\n```\n\nAll these changes can be found in tag `step-4` of the repo.\n\n### Login / Register code\n\nTake a moment to have a look at the styles we have for the app in the `stylesheet.js` file, then modify the styles to your heart\u2019s content. \n\nNow, for Login and Logout we\u2019ll add a couple `states` to our `LoginView` in `views/LoginView.js`. We\u2019ll use these to read both email and password from our interface.\n\nPlace the following code inside `export function LoginView({ navigation }) {`:\n\n```javascript\n const email, setEmail] = useState(\"\");\n const [password, setPassword] = useState(\"\");\n``` \n\nThen, we\u2019ll add the UI code for Login and Sign up. 
Here we use `signIn` and `signUp` from our `AuthProvider`.\n\n```javascript\n const onPressSignIn = async () => {\n console.log(\"Trying sign in with user: \" + email);\n try {\n await signIn(email, password);\n } catch (error) {\n const errorMessage = `Failed to sign in: ${error.message}`;\n console.error(errorMessage);\n Alert.alert(errorMessage);\n }\n };\n\n const onPressSignUp = async () => {\n console.log(\"Trying signup with user: \" + email);\n try {\n await signUp(email, password);\n signIn(email, password);\n } catch (error) {\n const errorMessage = `Failed to sign up: ${error.message}`;\n console.error(errorMessage);\n Alert.alert(errorMessage);\n }\n };\n```\n\nAll changes can be found in [`step-5`.\n\n## Prebuilding our Expo App\n\nOn save we\u2019ll find this error:\n\n```\nError: Missing Realm constructor. Did you run \"pod install\"? Please see https://realm.io/docs/react-native/latest/#missing-realm-constructor for troubleshooting\n```\n\nRight now, Realm React Native is not compatible with Expo Managed Workflows. In a managed Workflow Expo hides all iOS and Android native details from the JavaScript/React developer so they can concentrate on writing React code. Here, we need to prebuild our App, which will mean that we lose the nice Expo Go App that allows us to load our app using a QR code.\n\nThe Expo Team is working hard on improving the compatibility with Realm React Native, as is our React Native SDK team, who are currently working on improving the compatibility with Expo, supporting the Hermes JavaScript Engine and expo-dev-client. Watch this space for all these exciting announcements!\n\nSo to run our app in iOS we\u2019ll do:\n\n```\nexpo run:ios\n```\n\nWe need to provide a Bundle Identifier to our iOS app. In this case we\u2019ll use `com.realm.read-later-maybe`\n\nThis will install all needed JavaScript libraries using `yarn`, then install all native libraries using CocoaPods, and finally will compile and run our app. To run on Android we\u2019ll do:\n\n```\nexpo run:android\n```\n\n## Navigation completed\n\nNow we can register and login in our App. Our `App.js` file now looks like:\n\n```javascript\nexport default function App() {\n return (\n \n \n \n \n \n \n \n );\n}\n``` \n\nWe have an AuthProvider that will provide the user logged in to all descendants. Inside is a Navigation Container with one Screen: Login View. But we need to have two Screens: our \u201cLogin View\u201d with the UI to log in/register and \u201cLinks Screen\u201d, which will show all our links. \n\nSo let\u2019s create our LinksView screen:\n\n```javascript\nimport React, { useState, useEffect } from \"react\";\nimport { Text } from \"react-native\";\n\nexport function LinksView() {\n return (\n Links go here\n );\n}\n```\n\nRight now only shows a simple message \u201cLinks go here\u201d, as you can check in `step-6`\n\n## Log out\n\nWe can register and log in, but we also need to log out of our app. To do so, we\u2019ll add a Nav Bar item to our Links Screen, so instead of having \u201cBack\u201d we\u2019ll have a logout button that closes our Realm, calls logout and pops out our Screen from the navigation, so we go back to the Welcome Screen.\n\nIn our LinksView Screen in we\u2019ll add:\n\n```javascript\nReact.useLayoutEffect(() => {\n navigation.setOptions({\n headerBackTitle: \"Log out\",\n headerLeft: () => \n });\n }, navigation]); \n```\n\nHere we use a `components/Logout` component that has a button. This button will call `signOut` from our `AuthProvider`. 
You\u2019ll need to add the `components` folder.\n\n```javascript\n return (\n {\n Alert.alert(\"Log Out\", null, [\n {\n text: \"Yes, Log Out\",\n style: \"destructive\",\n onPress: () => {\n navigation.popToTop();\n closeRealm();\n signOut();\n },\n },\n { text: \"Cancel\", style: \"cancel\" },\n ]);\n }}\n />\n );\n```\n\nNice! Now we have Login, Logout and Register! You can follow along in [`step-7`.\n\n## Links\n\n### CRUD\n\nWe want to store Links to read later. So we\u2019ll start by defining how our Link class will look like. We\u2019ll store a Name and a URL for each link. Also, we need an `id` and a `partition` field to avoid pulling all Links for all users. Instead we\u2019ll just sync Links for the logged in user. These changes are in `schemas.js`\n\n```javascript\nclass Link {\n constructor({\n name,\n url,\n partition,\n id = new ObjectId(),\n }) {\n\n this._partition = partition;\n this._id = id;\n this.name = name;\n this.url = url;\n }\n\n static schema = {\n name: 'Link',\n properties: {\n _id: 'objectId',\n _partition: 'string',\n name: 'string',\n url: 'string',\n },\n\n primaryKey: '_id',\n };\n}\n```\n\nYou can get these changes in `step-8` of the repo.\n\nAnd now, we need to code all the CRUD methods. For that, we\u2019ll go ahead and create a `LinksProvider` that will fetch Links and delete them. But first, we need to open a Realm to read the Links for this particular user:\n\n```javascript\n realm.open(config).then((realm) => {\n realmRef.current = realm;\n const syncLinks = realm.objects(\"Link\");\n let sortedLinks = syncLinks.sorted(\"name\");\n setLinks(...sortedLinks]);\n\n // we observe changes on the Links, in case Sync informs us of changes\n // started in other devices (or the cloud)\n sortedLinks.addListener(() => {\n console.log(\"Got new data!\");\n setLinks([...sortedLinks]);\n });\n });\n```\n\nTo add a new Link we\u2019ll have this function that uses `[realm.write` to add a new Link. This will also be observed by the above listener, triggering a UI refresh.\n\n```javascript\nconst createLink = (newLinkName, newLinkURL) => {\n\n const realm = realmRef.current;\n\n realm.write(() => {\n // Create a new link in the same partition -- that is, using the same user id.\n realm.create(\n \"Link\",\n new Link({\n name: newLinkName || \"New Link\",\n url: newLinkURL || \"http://\",\n partition: user.id,\n })\n );\n });\n };\n```\n\nFinally to delete Links we\u2019ll use `realm.delete`.\n\n```javascript\n const deleteLink = (link) => {\n\n const realm = realmRef.current;\n\n realm.write(() => {\n realm.delete(link);\n // after deleting, we get the Links again and update them\n setLinks(...realm.objects(\"Link\").sorted(\"name\")]);\n });\n };\n```\n\n### Showing Links\n\nOur `LinksView` will `map` the contents of the `links` array of `Link` objects we get from `LinkProvider` and show a simple List of Views to show name and URL of each Link. 
We do that using:\n\n```javascript\n{links.map((link, index) =>\n \n \n \n {link.name}\n \n \n {link.url}\n \n \n \n \n```\n\n### UI for deleting Links\n\nAs we want to delete links we\u2019ll use a swipe right-to-left gesture to show a button to delete that Link\n\n```javascript\n onClickLink(link)}\n bottomDivider\n key={index} \n rightContent={\n deleteLink(link)}\n />\n }\n>\n```\n\nWe get `deleteLink` from the `useLinks` hook in `LinksProvider`:\n\n```javascript\n const { links, createLink, deleteLink } = useLinks();\n```\n\n### UI for adding Links\n\nWe\u2019ll have a [TextInput for entering name and URL, and a button to add a new Link directly at the top of the List of Links. We\u2019ll use an accordion to show/hide this part of the UI:\n\n```javascript\n\n Create new Link\n \n }\n isExpanded={expanded}\n onPress={() => {\n setExpanded(!expanded);\n }}\n >\n {\n <>\n \n \n { createLink(linkDescription, linkURL); }}\n />\n \n }\n \n```\n\n## Adding Links in the main App\n\nFinally, we\u2019ll integrate the new `LinksView` inside our `LinksProvider` in `App.js`\n\n```javascript\n\n {() => {\n return (\n \n \n \n );\n }} \n\n```\n\n## The final App\n\nWow! That was a lot, but now we have a React Native App, that works with the same code base in both iOS and Android, storing data in a MongoDB Atlas Database in the cloud thanks to Atlas Device Sync. And what\u2019s more, any changes in one device syncs in all other devices with the same user logged-in. But the best part is that Atlas Device Sync works even when offline!\n\n| Syncing iOS and Android | Offline Syncing! | \n| :-------------: | :----------: | \n| | | \n\n## Recap\n\nIn this tutorial we\u2019ve seen how to build a simple React Native application using Expo that takes advantage of Atlas Device Sync for their offline and syncing capabilities. This App is a prebuilt app as right now Managed Expo Workflows won\u2019t work with Realm React Native (yet, read more below). But you still get all the simplicity of use that Expo gives you, all the Expo libraries and the EAS: build your app in the cloud without having to install Xcode or Android Studio.\n\nThe Realm React Native team is working hard to make the SDK fully compatible with Hermes. Once we release an update to the Realm React Native SDK compatible with Hermes, we\u2019ll publish a new post updating this app. Also, we\u2019re working to finish an Expo Custom Development Client. This will be our own Expo Development Client that will substitute Expo Go while developing with Realm React Native. Expect also a piece of news when that is approved!\n\nAll the code for this tutorial can be found in this repo.\n", "format": "md", "metadata": {"tags": ["Realm", "JavaScript", "React Native"], "pageDescription": "In this post we'll build, step by step, a simple React Native Mobile App for iOS and Android using Expo and Realm React Native. The App will use Atlas Device Sync to store data in MongoDB Atlas, will Sync automatically between devices and will work offline.", "contentType": "Tutorial"}, "title": "Build an Offline-First React Native Mobile App with Expo and Realm React Native", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/saving-data-in-unity3d-using-files", "action": "created", "body": "# Saving Data in Unity3D Using Files\n\n*(Part 2 of the Persistence Comparison Series)*\n\nPersisting data is an important part of most games. 
Unity offers only a limited set of solutions, which means we have to look around for other options as well.\n\nIn Part 1 of this series, we explored Unity's own solution: `PlayerPrefs`. This time, we look into one of the ways we can use the underlying .NET framework by saving files. Here is an overview of the complete series:\n\n- Part 1: PlayerPrefs\n- Part 2: Files *(this tutorial)*\n- Part 3: BinaryReader and BinaryWriter *(coming soon)*\n- Part 4: SQL\n- Part 5: Realm Unity SDK\n- Part 6: Comparison of all those options\n\nLike Part 1, this tutorial can also be found in the https://github.com/realm/unity-examples repository on the persistence-comparison branch.\n\nEach part is sorted into a folder. The three scripts we will be looking at are in the `File` sub folder. But first, let's look at the example game itself and what we have to prepare in Unity before we can jump into the actual coding.\n\n## Example game\n\n*Note that if you have worked through any of the other tutorials in this series, you can skip this section since we are using the same example for all parts of the series so that it is easier to see the differences between the approaches.*\n\nThe goal of this tutorial series is to show you a quick and easy way to make some first steps in the various ways to persist data in your game.\n\nTherefore, the example we will be using will be as simple as possible in the editor itself so that we can fully focus on the actual code we need to write.\n\nA simple capsule in the scene will be used so that we can interact with a game object. We then register clicks on the capsule and persist the hit count.\n\nWhen you open up a clean 3D template, all you need to do is choose `GameObject` -> `3D Object` -> `Capsule`.\n\nYou can then add scripts to the capsule by activating it in the hierarchy and using `Add Component` in the inspector.\n\nThe scripts we will add to this capsule showcasing the different methods will all have the same basic structure that can be found in `HitCountExample.cs`.\n\n```cs\nusing UnityEngine;\n\n/// \n/// This script shows the basic structure of all other scripts.\n/// \npublic class HitCountExample : MonoBehaviour\n{\n // Keep count of the clicks.\n SerializeField] private int hitCount; // 1\n\n private void Start() // 2\n {\n // Read the persisted data and set the initial hit count.\n hitCount = 0; // 3\n }\n\n private void OnMouseDown() // 4\n {\n // Increment the hit count on each click and save the data.\n hitCount++; // 5\n }\n}\n```\n\nThe first thing we need to add is a counter for the clicks on the capsule (1). Add a `[SerilizeField]` here so that you can observe it while clicking on the capsule in the Unity editor.\n\nWhenever the game starts (2), we want to read the current hit count from the persistence and initialize `hitCount` accordingly (3). This is done in the `Start()` method that is called whenever a scene is loaded for each game object this script is attached to.\n\nThe second part to this is saving changes, which we want to do whenever we register a mouse click. The Unity message for this is `OnMouseDown()` (4). This method gets called every time the `GameObject` that this script is attached to is clicked (with a left mouse click). 
In this case, we increment the `hitCount` (5) which will eventually be saved by the various options shown in this tutorials series.\n\n## File\n\n(See `FileExampleSimple.cs` in the repository for the finished version.)\n\nOne of the ways the .NET framework offers us to save data is using the [`File` class:\n\n> Provides static methods for the creation, copying, deletion, moving, and opening of a single file, and aids in the creation of FileStream objects.\n\nBesides that, the `File` class is also used to manipulate the file itself, reading and writing data. On top of that, it offers ways to read meta data of a file, like time of creation.\n\nWhen working with a file, you can also make use of several options to change `FileMode` or `FileAccess.`\n\nThe `FileStream` mentioned in the documentation is another approach to work with those files, providing additional options. In this tutorial, we will just use the plain `File` class.\n\nLet's have a look at what we have to change in the example presented in the previous section to save the data using `File`:\n\n```cs\nusing System;\nusing System.IO;\nusing UnityEngine;\n\npublic class FileExampleSimple : MonoBehaviour\n{\n // Resources:\n // https://docs.microsoft.com/en-us/dotnet/api/system.io.file?view=net-5.0\n\n SerializeField] private int hitCount = 0;\n\n private const string HitCountFile = \"hitCountFile.txt\";\n\n private void Start()\n {\n if (File.Exists(HitCountFile))\n {\n var fileContent = File.ReadAllText(HitCountFile);\n hitCount = Int32.Parse(fileContent);\n }\n }\n\n private void OnMouseDown()\n {\n hitCount++;\n\n // The easiest way when working with Files is to use them directly.\n // This writes all input at once and overwrites a file if executed again.\n // The File is opened and closed right away.\n File.WriteAllText(HitCountFile, hitCount.ToString());\n }\n\n}\n```\n\nFirst we define a name for the file that will hold the data (1). If no additional path is provided, the file will just be saved in the project folder when running the game in the Unity editor or the game folder when running a build. This is fine for the example.\n\nWhenever we click on the capsule (2) and increment the hit count (3), we need to save that change. Using `File.WriteAllText()` (4), the file will be opened, data will be saved, and it will be closed right away. Besides the file name, this function expects the contents as a string. Therefore, we have to transform the `hitCount` by calling `ToString()` before passing it on.\n\nThe next time we start the game (5), we want to load the previously saved data. First we check if the file already exists (6). If it does not exist, we never saved before and can just keep the default value for `hitCount`. If the file exists, we use `ReadAllText()` to get that data (7). Since this is a string again, we need to convert here as well using `Int32.Parse()` (8). Note that this means we have to be sure about what we read. If the structure of the file changes or the player edits it, this might lead to problems during the parsing of the file.\n\nLet's look into extending this simple example in the next section.\n\n## Extended example\n\n(See `FileExampleExtended.cs` in the repository for the finished version.)\n\nThe previous section showed the most simple example, using just one variable that needs to be saved. What if we want to save more than that?\n\nDepending on what needs to saved, there are several different approaches. You could use multiple files or you can write multiple lines inside the same file. 
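The first option is simply one `File.WriteAllText` call per value, each with its own file name — a quick sketch with hypothetical variables (not part of the tutorial's repository), just as a point of comparison:

```cs
// Sketch: the "multiple files" approach — one file per value.
// Assumes `using System.IO;` as in the examples above.
File.WriteAllText("hitCount.txt", hitCount.ToString());
File.WriteAllText("highScore.txt", highScore.ToString());
```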
The latter shall be shown in this section by extending the game to recognize modifier keys. We want to detect normal clicks, Shift+Click, and Control+Click.\n\nFirst, update the hit counts so that we can save three of them:\n\n```cs\n[SerializeField] private int hitCountUnmodified = 0;\n[SerializeField] private int hitCountShift = 0;\n[SerializeField] private int hitCountControl = 0;\n```\n\nWe also want to use a different file name so we can look at both versions next to each other:\n\n```cs\nprivate const string HitCountFileUnmodified = \"hitCountFileExtended.txt\";\n```\n\nThe last field we need to define is the key that is pressed:\n\n```cs\nprivate KeyCode modifier = default;\n```\n\nThe first thing we need to do is check if a key was pressed and which key it was. Unity offers an easy way to achieve this using the [`Input` class's `GetKey` function. It checks if the given key was pressed or not. You can pass in the string for the key or to be a bit more safe, just use the `KeyCode` enum. We cannot use this in the `OnMouseClick()` when detecting the mouse click though:\n\n> Note: Input flags are not reset until Update. You should make all the Input calls in the Update Loop.\n\nAdd a new method called `Update()` (1) which is called in every frame. Here we need to check if the `Shift` or `Control` key was pressed (2) and if so, save the corresponding key in `modifier` (3). In case none of those keys was pressed (4), we consider it unmodified and reset `modifier` to its `default` (5).\n\n```cs\nprivate void Update() // 1\n{\n // Check if a key was pressed.\n if (Input.GetKey(KeyCode.LeftShift)) // 2\n {\n // Set the LeftShift key.\n modifier = KeyCode.LeftShift; // 3\n }\n else if (Input.GetKey(KeyCode.LeftControl)) // 2\n {\n // Set the LeftControl key.\n modifier = KeyCode.LeftControl; // 3\n }\n else // 4\n {\n // In any other case reset to default and consider it unmodified.\n modifier = default; // 5\n }\n}\n```\n\nNow to saving the data when a click happens:\n\n```cs\nprivate void OnMouseDown() // 6\n{\n // Check if a key was pressed.\n switch (modifier)\n {\n case KeyCode.LeftShift: // 7\n // Increment the Shift hit count.\n hitCountShift++; // 8\n break;\n case KeyCode.LeftCommand: // 7\n // Increment the Control hit count.\n hitCountControl++; // 8\n break;\n default: // 9\n // If neither Shift nor Control was held, we increment the unmodified hit count.\n hitCountUnmodified++; // 10\n break;\n }\n\n // 11\n // Create a string array with the three hit counts.\n string] stringArray = {\n hitCountUnmodified.ToString(),\n hitCountShift.ToString(),\n hitCountControl.ToString()\n };\n\n // 12\n // Save the entries, line by line.\n File.WriteAllLines(HitCountFileUnmodified, stringArray);\n}\n```\n\nWhenever a mouse click is detected on the capsule (6), we can then perform a similar check to what happened in `Update()`, only we use `modifier` instead of `Input.GetKey()` here.\n\nCheck if `modifier` was set to `KeyCode.LeftShift` or `KeyCode.LeftControl` (7) and if so, increment the corresponding hit count (8). If no modifier was used (9), increment the `hitCountUnmodified`.\n\nAs seen in the last section, we need to create a string that can be saved in the file. There is a second function on `File` that accepts a string array and then saves each entry in one line: `WriteAllLines()`.\n\nKnowing this, we create an array containing the three hit counts (11) and pass this one on to `File.WriteAllLines()`.\n\nStart the game, and click the capsule using Shift and Control. 
You should see the three counters in the Inspector.\n\n![\n\nAfter stopping the game and therefore saving the data, a new file `hitCountFileExtended.txt` should exist in your project folder. Have a look at it. It should look something like this:\n\nLast but not least, let's look at how to load the file again when starting the game:\n\n```cs\nprivate void Start()\n{\n // 12\n // Check if the file exists. If not, we never saved before.\n if (File.Exists(HitCountFileUnmodified))\n {\n // 13\n // Read all lines.\n string] textFileWriteAllLines = File.ReadAllLines(HitCountFileUnmodified);\n\n // 14\n // For this extended example we would expect to find three lines, one per counter.\n if (textFileWriteAllLines.Length == 3)\n {\n // 15\n // Set the counters correspdoning to the entries in the array.\n hitCountUnmodified = Int32.Parse(textFileWriteAllLines[0]);\n hitCountShift = Int32.Parse(textFileWriteAllLines[1]);\n hitCountControl = Int32.Parse(textFileWriteAllLines[2]);\n }\n }\n}\n```\n\nFirst, we check if the file even exists (12). If we ever saved data before, this should be the case. If it exists, we read the data. Similar to writing with `WriteAllLines()`, we use `ReadAllLines` (13) to create a string array where each entry represents one line in the file.\n\nWe do expect there to be three lines, so we should expect the string array to have three entries (14).\n\nUsing this knowledge, we can then assign the three entries from the array to the corresponding hit counts (15).\n\nAs long as all the data saved to those lines belongs together, the file can be one option. If you have several different properties, you might create multiple files. Alternatively, you can save all the data into the same file using a bit of structure. Note, though, that the numbers will not be associated with the properties. If the structure of the object changes, we would need to migrate the file as well and take this into account the next time we open and read the file.\n\nAnother possible approach to structuring your data will be shown in the next section using JSON.\n\n## More complex data\n\n(See `FileExampleJson.cs` in the repository for the finished version.)\n\nJSON is a very common approach when saving structured data. It's easy to use and there are frameworks for almost every language. The .NET framework provides a [`JsonSerializer`. Unity has its own version of it: `JsonUtility`.\n\nAs you can see in the documentation, the functionality boils down to these three methods:\n\n- *FromJson*: Create an object from its JSON representation.\n- *FromJsonOverwrite*: Overwrite data in an object by reading from its JSON representation.\n- *ToJson*: Generate a JSON representation of the public fields of an object.\n\nThe `JsonUtility` transforms JSON into objects and back. Therefore, our first change to the previous section is to define such an object with public fields:\n\n```cs\nprivate class HitCount\n{\n public int Unmodified;\n public int Shift;\n public int Control;\n}\n```\n\nThe class itself can be `private` and just be added inside the `FileExampleJson` class, but its fields need to be public.\n\nAs before, we use a different file to save this data. 
Update the filename to:\n\n```cs\nprivate const string HitCountFileJson = \"hitCountFileJson.txt\";\n```\n\nWhen saving the data, we will use the same `Update()` method as before to detect which key was pressed.\n\nThe first part of `OnMouseDown()` (1) can stay the same as well, since this part only increments the hit count in depending on the modifier used.\n\n```cs\nprivate void OnMouseDown()\n{\n // 1\n // Check if a key was pressed.\n switch (modifier)\n {\n case KeyCode.LeftShift:\n // Increment the Shift hit count.\n hitCountShift++;\n break;\n case KeyCode.LeftCommand:\n // Increment the Control hit count.\n hitCountControl++;\n break;\n default:\n // If neither Shift nor Control was held, we increment the unmodified hit count.\n hitCountUnmodified++;\n break;\n }\n\n // 2\n // Create a new HitCount object to hold this data.\n var updatedCount = new HitCount\n {\n Unmodified = hitCountUnmodified,\n Shift = hitCountShift,\n Control = hitCountControl,\n };\n\n // 3\n // Create a JSON using the HitCount object.\n var jsonString = JsonUtility.ToJson(updatedCount, true);\n\n // 4\n // Save the json to the file.\n File.WriteAllText(HitCountFileJson, jsonString);\n}\n```\n\nHowever, we need to update the second part. Instead of a string array, we create a new `HitCount` object and set the three public fields to the values of the hit counters (2).\n\nUsing `JsonUtility.ToJson()`, we can transform this object to a string (3). If you pass in `true` for the second, optional parameter, `prettyPrint`, the string will be formatted in a nicely readable way.\n\nFinally, as in `FileExampleSimple.cs`, we just use `WriteAllText()` since we're only saving one string, not an array (4).\n\nThen, when the game starts, we need to read the data back into the hit count:\n\n```cs\nprivate void Start()\n{\n // Check if the file exists to avoid errors when opening a non-existing file.\n if (File.Exists(HitCountFileJson)) // 5\n {\n // 6\n var jsonString = File.ReadAllText(HitCountFileJson);\n var hitCount = JsonUtility.FromJson(jsonString);\n\n // 7\n if (hitCount != null)\n {\n // 8\n hitCountUnmodified = hitCount.Unmodified;\n hitCountShift = hitCount.Shift;\n hitCountControl = hitCount.Control;\n }\n }\n}\n```\n\nWe check if the file exists first (5). In case it does, we saved data before and can proceed reading it.\n\nUsing `ReadAllText`, we read the string from the file and transform it via `JsonUtility.FromJson<>()` into an object of type `HitCount` (6).\n\nIf this happened successfully (7), we can then assign the three properties to their corresponding hit count (8).\n\nWhen you run the game, you will see that in the editor, it looks identical to the previous section since we are using the same three counters. If you open the file `hitCountFileJson.txt`, you should then see the three counters in a nicely formatted JSON.\n\nNote that the data is saved in plain text. In a future tutorial, we will look at encryption and how to improve safety of your data.\n\n## Conclusion\n\nIn this tutorial, we learned how to utilize `File` to save data. `JsonUtility` helps structure this data. They are simple and easy to use, and not much code is required.\n\nWhat are the downsides, though?\n\nFirst of all, we open, write to, and save the file every single time the capsule is clicked. 
While not a problem in this case and certainly applicable for some games, this will not perform very well when many save operations are made.\n\nAlso, the data is saved in plain text and can easily be edited by the player.\n\nThe more complex your data is, the more complex it will be to actually maintain this approach. What if the structure of the `HitCount` object changes? You have to change account for that when loading an older version of the JSON. Migrations are necessary.\n\nIn the following tutorials, we will (among other things) have a look at how databases can make this job a lot easier and take care of the problems we face here.\n\nPlease provide feedback and ask any questions in the Realm Community Forum.", "format": "md", "metadata": {"tags": ["C#", "Realm", "Unity"], "pageDescription": "Persisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well. In this tutorial series, we will explore the options given to us by Unity and third-party libraries.", "contentType": "Tutorial"}, "title": "Saving Data in Unity3D Using Files", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/can-you-keep-a-secret", "action": "created", "body": "# Can You Keep a Secret?\n\nThe median time to discovery for a secret key leaked to GitHub is 20 seconds. By the time you realise your mistake and rotate your secrets, it could be too late. In this talk, we'll look at some techniques for secret management which won't disrupt your workflow, while keeping your services safe.\n\n>:youtube]{vid=2XNIbOMYr_Q}\n>\n>This is a complete transcript of the [2020 PyCon Australia conference talk \"Can you keep a secret?\" Slides are also available to download on Notist.\n\nHey, everyone. Thank you for joining me today.\n\nBefore we get started, I would just like to take a moment to express my heartfelt thanks, gratitude, and admiration to everyone involved with this year's PyCon Australia. They have done such an amazing job, in a really very difficult time.\n\nIt would have been so easy for them to have skipped putting on a conference at all this year, and no one would have blamed them if they did, but they didn't, and what they achieved should really be celebrated. So a big thank you to them.\n\nWith that said, let's get started!\n\nSo, I'm Aaron Bassett.\n\nYou can find me pretty much everywhere as Aaron Bassett, because I have zero imagination. Twitter, GitHub, LinkedIn, there's probably an old MySpace and Bebo account out there somewhere too. You can find me on them all as Aaron Bassett.\n\nI am a Senior Developer Advocate at MongoDB.\n\nFor anyone who hasn't heard of MongoDB before, it is a general purpose, document-based, distributed database, often referred to as a No-SQL database. We have a fully managed cloud database service called Atlas, an on-premise Enterprise Server, an on device database called Realm, but we're probably most well known for our free and open source Community Server.\n\nIn fact, much of what we do at MongoDB is open source, and as a developer advocate, almost the entirety of what I produce is open source and publicly available. Whether it is a tutorial, demo app, conference talk, Twitch stream, and so on. It's all out there to use.\n\nHere's an example of the type of code I write regularly. 
This is a small snippet to perform a geospatial query.\n\n``` python\nimport pprint\nfrom pymongo import MongoClient\n\nclient = MongoClient(\n \"C01.5tsil.mongodb.net\",\n username=\"admin\", password=\"hunter2\"\n)\ndb = client.geo_example\n\nquery = {\"loc\": {\"$within\": {\"$center\": [0, 0], 6]}}}\nfor doc in db.places.find(query).sort(\"_id\"):\n pprint.pprint(doc)\n```\n\nFirst, we import our MongoDB Python Driver. Then, we instantiate our database client. And finally, we execute our query. Here, we're trying to find all documents whose location is within a defined radius of a chosen point.\n\nBut even in this short example, we have some secrets that we really shouldn't be sharing. The first line highlighted here is the URI. This isn't so much a secret as a configuration variable.\n\nSomething that's likely to change between your development, staging, and production environments. So, you probably don't want this hard coded either. The next line, however, is the real secrets. Our database username and password. These are the types of secrets you never want to hard code in your scripts, not even for a moment.\n\n``` python\nimport pprint\nfrom pymongo import MongoClient\n\nDB_HOST = \"C01.5tsil.mongodb.net\"\nDB_USERNAME = \"admin\"\nDB_PASSWORD = \"hunter2\"\n\nclient = MongoClient(DB_HOST, username=DB_USERNAME, password=DB_PASSWORD)\ndb = client.geo_example\n\nquery = {\"loc\": {\"$within\": {\"$center\": [[0, 0], 6]}}}\nfor doc in db.places.find(query).sort(\"_id\"):\n pprint.pprint(doc)\n```\n\nSo often I see it where someone has pulled out their secrets into variables, either at the top of their script\u00a7 or sometimes they'll hard code them in a settings.py or similar. I've been guilty of this as well.\n\nYou have every intention of removing the secrets before you publish your code, but then it's a couple of days later, the kids are trying to get your attention, you **need** to go make your morning coffee, or there's one of the million other things that happen in our day-to-day lives distracting you, and as you get up, you decide to save your working draft, muscle memory kicks in...\n\n``` shell\ngit add .\ngit commit -m \"wip\"\ngit push\n```\n\nAnd... well... that's all it takes.\n\nAll it takes is that momentary lapse and now your secrets are public, and as soon as those secrets hit GitHub or another public repository, you have to assume they're immediately breached.\n\nMichael Meli, Matthew R. McNiece, and Bradley Reaves from North Carolina State University published a research paper titled [\"How Bad Can It Git? Characterizing Secret Leakage in Public GitHub Repositories\".\n\nThis research showed that the median time for discovery for a secret published to GitHub was 20 seconds, and it could be as low as half a second. It appeared to them that the only limiting factor on how fast you could discover secrets on GitHub was how fast GitHub was able to index new code as it was pushed up.\n\nThe longest time in their testing from secrets being pushed until they could potentially be compromised was four minutes. There was no correlation between time of day, etc. It most likely would just depend on how many other people were pushing code at the same time. But once the code was indexed, then they were able to locate the secrets using some well-crafted search queries.\n\nBut this is probably not news to most developers. 
Okay, the speed of which secrets can be compromised might be surprising, but most developers will know the perils of publishing their secrets publicly.\n\nMany of us have likely heard or read horror stories of developers accidentally committing their AWS keys and waking up to a huge bill as someone has been spinning up EC2 instances on their account. So why do we, and I'm including myself in that we, why do we keep doing it?\n\nBecause it is easy. We know it's not safe. We know it is likely going to bite us in the ass at some point. But it is so very, very easy. And this is the case in most software.\n\nThis is the security triangle. It represents the balance between security, functionality, and usability. It's a trade-off. As two points increase, one will always decrease. If we have an app that is very, very secure and has a lot of functionality, it's probably going to feel pretty restrictive to use. If our app is very secure and very usable, it probably doesn't have to do much.\n\nA good example of where a company has traded some security for additional functionality and usability is Amazon's One Click Buy button.\n\nIt functions very much as the name implies. When you want to order a product, you can click a single button and Amazon will place your order using your default credit card and shipping address from their records. What you might not be aware of is that Amazon cannot send the CVV with that order. The CVV is the normally three numbers on the back of your card above the signature strip.\n\nCard issuers say that you should send the CVV for each Card Not Present transaction. Card Not Present means that the retailer cannot see that you have the physical card in your possession, so every online transaction is a Card Not Present transaction.\n\nOkay, so the issuers say that you should send the CVV each time, but they also say that you MUST not store it. This is why for almost all retailers, even if they have your credit card stored, you will still need to enter the CVV during checkout, but not Amazon. Amazon simply does not send the CVV. They know that decreases their security, but for them, the trade-off for additional functionality and ease of use is worth it.\n\nA bad example of where a company traded sanity\u2014sorry, I mean security\u2014for usability happened at a, thankfully now-defunct, agency I worked at many, many years ago. They decided that while storing customer's passwords in plaintext lowered their security, being able to TELL THE CUSTOMER THEIR PASSWORD OVER THE TELEPHONE WHEN THEY CALLED was worth it in usability.\n\nIt really was the wild wild west of the web in those days...\n\nSo a key tenant of everything I'm suggesting here is that it has to be as low friction as possible. If it is too hard, or if it reduces the usability side of our triangle too much, then people will not adopt it.\n\nIt also has to be easy to implement. I want these to be techniques which you can start using personally today, and have them rolled out across your team by this time next week.\n\nIt can't have any high costs or difficult infrastructure to set up and manage. Because again, we are competing with hard code variables, without a doubt the easiest method of storing secrets.\n\nSo how do we know when we're done? How do we measure success for this project? Well, for that, I'm going to borrow from the 12 factor apps methodology.\n\nThe 12 factor apps methodology is designed to enable web applications to be built with portability and resilience when deployed to the web. 
And it covers 12 different factors.\n\nCodebase, dependencies, config, backing services, build, release, run, and so on. We're only interested in number 3: Config.\n\nHere's what 12 factor apps has to say about config;\n\n\"A litmus test for whether an app has all config correctly factored out of the code is whether the codebase could be made open source at any moment, without compromising any credentials\"\n\nAnd this is super important even for those of you who may never publish your code publicly. What would happen if your source code were to leak right now? In 2015, researchers at internetwache found that 9700 websites in Alexa's top one million had their .git folder publicly available in their site root. This included government websites, NGOs, banks, crypto exchanges, large online communities, a few porn sites, oh, and MTV.\n\nDeploying websites via Git pulls isn't as uncommon as you might think, and for those websites, they're just one server misconfiguration away from leaking their source code. So even if your application is closed source, with source that will never be intentionally published publicly, it is still imperative that you do not hard code secrets.\n\nLeaking your source code would be horrible. Leaking all the keys to your kingdom would be devastating.\n\nSo if we can't store our secrets in our code, where do we put them? Environment variables are probably the most common place.\n\nNow remember, we're going for ease of use and low barrier to entry. There are better ways for managing secrets in production. And I would highly encourage you to look at products like HashiCorp's Vault. It will give you things like identity-based access, audit logs, automatic key rotation, encryption, and so much more. But for most people, this is going to be overkill for development, so we're going to stick to environment variables.\n\nBut what is an environment variable? It is a variable whose value is set outside of your script, typically through functionality built into your operating system and are part of the environment in which a process runs. And we have a few different ways these can be accessed in Python.\n\n``` python\nimport os\nimport pprint\nfrom pymongo import MongoClient\n\nclient = MongoClient(\n os.environ\"DB_HOST\"],\n username=os.environ[\"DB_USERNAME\"],\n password=os.environ[\"DB_PASSWORD\"],\n)\ndb = client.geo_example\n\nquery = {\"loc\": {\"$within\": {\"$center\": [[0, 0], 6]}}}\nfor doc in db.places.find(query).sort(\"_id\"):\n pprint.pprint(doc)\n```\n\nHere we have the same code as earlier, but now we've removed our hard coded values and instead we're using environment variables in their place. Environ is a mapping object representing the environment variables. It is worth noting that this mapping is captured the first time the os module is imported, and changes made to the environment after this time will not be reflected in environ. Environ behaves just like a Python dict. We can reference a value by providing the corresponding key. 
Or we can use get.\n\n``` python\nimport os\nimport pprint\nfrom pymongo import MongoClient\n\nclient = MongoClient(\n os.environ.get(\"DB_HOST\"),\n username=os.environ.get(\"DB_USERNAME\"),\n password=os.environ.get(\"DB_PASSWORD\"),\n)\ndb = client.geo_example\n\nquery = {\"loc\": {\"$within\": {\"$center\": [[0, 0], 6]}}}\nfor doc in db.places.find(query).sort(\"_id\"):\n pprint.pprint(doc)\n```\n\nThe main difference between the two approaches is when using get, if an environment variable does not exist, it will return None, whereas if you are attempting to access it via its key, then it will raise a KeyError exception. Also, get allows you to provide a second argument to be used as a default value if the key does not exist. There is a third way you can access environment variables: getenv.\n\n``` python\nimport os\nimport pprint\nfrom pymongo import MongoClient\n\nclient = MongoClient(\n os.getenv(\"DB_HOST\"),\n username=os.getenv(\"DB_USERNAME\"),\n password=os.getenv(\"DB_PASSWORD\"),\n)\ndb = client.geo_example\n\nquery = {\"loc\": {\"$within\": {\"$center\": [[0, 0], 6]}}}\nfor doc in db.places.find(query).sort(\"_id\"):\n pprint.pprint(doc)\n```\n\ngetenv behaves just like environ.get. In fact, it behaves so much like it I dug through the source to try and figure out what the difference was between the two and the benefits of each. But what I found is that there is no difference. None.\n\n``` python\ndef getenv(key, default=None):\n \"\"\"Get an environment variable, return None if it doesn't exist.\n The optional second argument can specify an alternate default.\n key, default and the result are str.\"\"\"\n return environ.get(key, default)\n```\n\ngetenv is simply a wrapper around environ.get. I'm sure there is a reason for this beyond saving a few key strokes, but I did not uncover it during my research. If you know the reasoning behind why getenv exists, I would love to hear it.\n\n>[Joe Drumgoole has put forward a potential reason for why `getenv` might exist: \"I think it exists because the C library has an identical function called getenv() and it removed some friction for C programmers (like me, back in the day) who were moving to Python.\"\n\nNow we know how to access environment variables, how do we create them? They have to be available in the environment whenever we run our script, so most of the time, this will mean within our terminal. We could manually create them each time we open a new terminal window, but that seems like way too much work, and very error-prone. So, where else can we put them?\n\nIf you are using virtualenv, you can manage your environment variables within your activate script.\n\n``` shell\n#\u00a0This\u00a0file\u00a0must\u00a0be\u00a0used\u00a0with\u00a0\"source\u00a0bin/activate\"\u00a0*from\u00a0bash*\n#\u00a0you\u00a0cannot\u00a0run\u00a0it\u00a0directly\n\ndeactivate\u00a0()\u00a0{\n\u00a0\u00a0\u00a0\u00a0...\n\n\u00a0\u00a0\u00a0\u00a0#\u00a0Unset\u00a0variables\n\u00a0\u00a0\u00a0\u00a0unset\u00a0NEXMO_KEY\n\u00a0\u00a0\u00a0\u00a0unset\u00a0NEXMO_SECRET\n\u00a0\u00a0\u00a0\u00a0unset\u00a0MY_NUMBER\n}\n\n...\n\nexport\u00a0NEXMO_KEY=\"a925db1ar392\"\nexport\u00a0NEXMO_SECRET=\"01nd637fn29oe31mc721\"\nexport\u00a0MY_NUMBER=\"447700900981\"\n```\n\nIt's a little back to front in that you'll find the deactivate function at the top, but this is where you can unset any environment variables and do your housekeeping. Then at the bottom of the script is where you can set your variables. 
This way, when you activate your virtual environment, your variables will be automatically set and available to your scripts. And when you deactivate your virtual environment, it'll tidy up after you and unset those same variables.\n\nPersonally, I am not a fan of this approach.\n\nI never manually alter files within my virtual environment. I do not keep them under source control. I treat them as wholly disposable. At any point, I should be able to delete my entire environment and create a new one without fear of losing anything. So, modifying the activate script is not a viable option for me.\n\nInstead, I use direnv. direnv is an extension for your shell. It augments existing shells with a new feature that can load and unload environment variables depending on the current directory. What that means is when I cd into a directory containing an .envrc file, direnv will automatically set the environment variables contained within for me.\n\nLet's look at a typical direnv workflow. First, we create an .envrc file and add some export statements, and we get an error. For security reasons, direnv will not load an .envrc file until you have allowed it. Otherwise, you might end up executing malicious code simply by cd'ing into a directory. So, let's tell direnv to allow this directory.\n\nNow that we've allowed the .envrc file, direnv has automatically loaded it for us and set the DB_PASSWORD environment variable. Then, if we leave the directory, direnv will unload and clean up after us by unsetting any environment variables it set.\n\nNow, you should NEVER commit your envrc file. I advise adding it to your projects gitignore file and your global gitignore file. There should be no reason why you should ever commit an .envrc file.\n\nYou will, however, want to share a list of what environment variables are required with your team. The convention for this is to create a .envrc.example file which only includes the variable names, but no values. You could even automate this grep or similar.\n\nWe covered keeping simple secrets out of your source code, but what about if you need to share secret files with coworkers? Let's take an example of when you might need to share a file in your repo, but ensure that even if your repository becomes public, only those authorised to access the file can do so.\n\nMongoDB supports Encryption at Rest and Client side field level encryption.\n\nWith encryption at rest, the encryption occurs transparently in the storage layer; i.e. all data files are fully encrypted from a filesystem perspective, and data only exists in an unencrypted state in memory and during transmission.\n\nWith client-side field level encryption, applications can encrypt fields in documents prior to transmitting data over the wire to the server.\n\nOnly applications with access to the correct encryption keys can decrypt and read the protected data. Deleting an encryption key renders all data encrypted using that key as permanently unreadable. So. with Encryption at Rest. each database has its own encryption key and then there is a master key for the server. But with client-side field level encryption. you can encrypt individual fields in documents with customer keys.\n\nI should point out that in production, you really should use a key management service for either of these. Like, really use a KMS. But for development, you can use a local key.\n\nThese commands generate a keyfile to be used for encryption at rest, set the permissions, and then enables encryption on my server. 
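A rough sketch of what those commands look like — note that encryption at rest requires the MongoDB Enterprise server, a local keyfile is for development only, and the file name here is just an example:

``` shell
# Generate a random base64 key and lock down its permissions.
openssl rand -base64 32 > mongodb-keyfile
chmod 600 mongodb-keyfile

# Start mongod with encryption at rest enabled, using that keyfile.
mongod --enableEncryption --encryptionKeyFile mongodb-keyfile
```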
Now, if multiple developers needed to access this encrypted server, we would need to share this keyfile with them.\n\nAnd really, no one is thinking, \"Eh... just Slack it to them...\" We're going to store the keyfile in our repo, but we'll encrypt it first.\n\ngit-secret encrypts files and stores them inside the git repository. Perfect. Exactly what we need. With one little caveat...\n\nRemember these processes all need to be safe and EASY. Well, git-secret is easy... ish.\n\nGit-secret itself is very straightforward to use. But it does rely upon PGP. PGP, or pretty good privacy, is an encryption program that provides cryptographic privacy and authentication via public and private key pairs. And it is notoriously fiddly to set up.\n\nThere's also the problem of validating a public key belongs to who you think it does. Then there are key signing parties, then web of trust, and lots of other things that are way out of scope of this talk.\n\nThankfully, there are pretty comprehensive guides for setting up PGP on every OS you can imagine, so for the sake of this talk, I'm going to assume you already have PGP installed and you have your colleagues' public keys.\n\nSo let's dive into git-secret. First we initiate it, much the same as we would a git repository. This will create a hidden folder .gitsecret. Now we need to add some users who should know our secrets. This is done with git secret tell followed by the email address associated with their public key.\n\nWhen we add a file to git-secret, it creates a new file. It does not change the file in place. So, our unencrypted file is still within our repository! We must ensure that it is not accidentally committed. Git-secret tries to help us with this. If you add a file to git-secret, it'll automatically add it to your .gitignore, if it's not already there.\n\nIf we take a look at our gitignore file after adding our keyfile to our list of secrets, we can see that it has been added, along with some files which .gitsecret needs to function but which should not be shared.\n\nAt this point if we look at the contents of our directory we can see our unencrypted file, but no encrypted version. First we have to tell git secret to hide all the files we've added. Ls again and now we can see the encrypted version of the file has been created. We can now safely add that encrypted version to our repository and push it up.\n\nWhen one of our colleagues pulls down our encrypted file, they run reveal and it will use their private key to decrypt it.\n\nGit-secret comes with a few commands to make managing secrets and access easier.\n\n- Whoknows will list all users who are able to decrypt secrets in a repository. Handy if someone leaves your team and you need to figure out which secrets need to be rotated.\n- List will tell you which files in a repository are secret.\n- And if someone does leave and you need to remove their access, there is the rather morbidly named killperson.\n\nThe killperson command will ensure that the person cannot decrypt any new secrets which are created, but it does not re-encrypt any existing secrets, so even though the person has been removed, they will still be able to decrypt any existing secrets.\n\nThere is little point in re-encrypting the existing files as they will need to be rotated anyways. 
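Pulling the workflow described above together, it looks roughly like this on the command line (the email address and file name are examples):

``` shell
git secret init                              # creates the hidden .gitsecret folder
git secret tell colleague@example.com        # add a user by the email on their public key
git secret add mongodb-keyfile               # mark the file as secret (and gitignore it)
git secret hide                              # write the encrypted copy of each secret
git secret reveal                            # decrypt secrets with your private key

git secret whoknows                          # who can decrypt secrets in this repo
git secret list                              # which files are secret
git secret killperson colleague@example.com  # stop encrypting new secrets for this user
```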
Then, once the secret has been rotated, when you run hide on the new secret, the removed user will not be able to access the new version.\n\nAnother tool I want to look at is confusingly called git secrets, because the developers behind git tools have apparently even less imagination than I do.\n\ngit-secrets scans commits, commit messages, and --no-ff merges to prevent adding secrets into your git repositories\n\nAll the tools and processes we've looked at so far have attempted to make it easier to safely manage secrets. This tool, however, attacks the problem in a different way. Now we're going to make it more difficult to hard code secrets in your scripts.\n\nGit-secrets uses regexes to attempt to detect secrets within your commits. It does this by using git hooks. Git secrets install will generate some Git templates with hooks already configured to check each commit. We can then specify these templates as the defaults for any new git repositories.\n\n``` shell\n$\u00a0git\u00a0secrets\u00a0--register-aws\u00a0--global\nOK\n\n$\u00a0git\u00a0secrets\u00a0--install\u00a0~/.git-templates/git-secrets\n\u2713\u00a0Installed\u00a0commit-msg\u00a0hook\u00a0to\u00a0/Users/aaronbassett/.git-templates/git-secrets/hooks/commit-msg\n\u2713\u00a0Installed\u00a0pre-commit\u00a0hook\u00a0to\u00a0/Users/aaronbassett/.git-templates/git-secrets/hooks/pre-commit\n\u2713\u00a0Installed\u00a0prepare-commit-msg\u00a0hook\u00a0to\u00a0/Users/aaronbassett/.git-templates/git-secrets/hooks/prepare-commit-msg\n\n$\u00a0git\u00a0config\u00a0--global\u00a0init.templateDir\u00a0~/.git-templates/git-secrets\n```\n\nGit-secrets is from AWS labs, so it comes with providers to detect AWS access keys, but you can also add your own. A provider is simply a list of regexes, one per line. Their recommended method is to store them all in a file and then cat them. But this has some drawbacks.\n\n``` shell\n$ git\u00a0secrets\u00a0--add-provider\u00a0--\u00a0cat\u00a0/secret/file/patterns\n```\n\nSo some regexes are easy to recognise. This is the regex for an RSA key. Straight forward. But what about this one? I'd love to know if anyone recognises this right away. It's a regex for detecting Google oAuth access tokens. This one? Facebook access tokens.\n\nSo as you can see, having a single large file with undocumented regexes could quickly become very difficult to maintain. Instead, I place mine in a directory, neatly organised. Seperate files depending on the type of secret I want to detect. Then in each file, I have comments and whitespace to help me group regexes together and document what secret they're going to detect.\n\nBut, git-secrets will not accept these as a provider, so we need to get a little creative with egrep.\n\n``` shell\ngit\u00a0secrets\u00a0--add-provider\u00a0--\u00a0egrep\u00a0-rhv\u00a0\"(^#|^$)\"\u00a0/secret/file/patterns\n```\n\nWe collect all the files in our directory, strip out any lines which start with a hash or which are empty, and then return the result of this transformation to git-secrets. Which is exactly the input we had before, but now much more maintainable than one long undocumented list!\n\nWith git-secrets and our custom providers installed, if we try to commit a private key, it will throw an error. Now, git-secrets can produce false positives. The error message gives you some examples of how you can force your commit through. So if you are totally committed to shooting yourself in the foot, you still can. 
But hopefully, it introduces just enough friction to make hardcoding secrets more of a hassle than just using environment variables.\n\nFinally, we're going to look at a tool for when all else fails. Gitleaks\n\nAudit git repos for secrets. Gitleaks provides a way for you to find unencrypted secrets and other unwanted data types in git source code repositories. Git leaks is for when even with all of your best intentions, a secret has made it into your repo. Because the only thing worse than leaking a secret is not knowing you've leaked a secret.\n\nIt works in much the same way as git-secrets, but rather than inspecting individual commits you can inspect a multitude of things.\n\n- A single repo\n- All repos by a single user\n- All repos under an organisation\n- All code in a GitHub PR\n- And it'll also inspect Gitlab users and groups, too\n\nI recommend using it in a couple of different ways.\n\n1. Have it configured to run as part of your PR process. Any leaks block the merge.\n2. Run it against your entire organisation every hour/day/week, or at whatever frequency you feel is sufficient. Whenever it detects a leak, you'll get a nice report showing which rule was triggered, by which commit, in which file, and who authored it.\n\nIn closing...\n\n- Keep secrets and code separate.\n- If you must share secrets, encrypt them first. Yes, PGP can be fiddly, but it's worth it in the long run.\n- Automate, automate, automate. If your secret management requires lots of manual work for developers, they will skip it. I know I would. It's so easy to justify to yourself. It's just this once. It's just a little proof of concept. You'll totally remember to remove them before you push. I've made all the same excuses to myself, too. So, keep it easy. Automate where possible.\n- And late is better than never. Finding out you've accidentally leaked a secret is a stomach-dropping, heart-racing, breath-catching experience. But leaking a secret and not knowing until after it has been compromised is even worse. So, run your gitleak scans. Run them as often as you can. And have a plan in place for when you do inevitably leak a secret so you can deal with it quickly.\n\nThank you very much for your attention.\n\nPlease do add me on Twitter at aaron bassett. I would love to hear any feedback or questions you might have! If you would like to revisit any of my slides later, they will all be published at Notist shortly after this talk.\n\nI'm not sure how much time we have left for questions, but I will be available in the hallway chat if anyone would like to speak to me there. I know I've been sorely missing seeing everyone at conferences this year, so it will be nice to catch up.\n\nThanks again to everyone who attended my talk and to the PyCon Australia organisers.", "format": "md", "metadata": {"tags": ["Python", "MongoDB"], "pageDescription": "Can you keep a secret? Here are some techniques that you can use to properly store, share, and manage your secrets.", "contentType": "Article"}, "title": "Can You Keep a Secret?", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/non-root-user-mongod-process", "action": "created", "body": "# Procedure to Allow Non-Root Users to Stop/Start/Restart \"mongod\" Process\n\n## Introduction\n\nSystems' security plays a fundamental role in today's modern\napplications. It is very important to restrict non-authorized users'\naccess to root capabilities. 
With this blog post, we intend to document\nhow to avoid jeopardizing root system resources, but allow authorized,\nnon-root users, to perform administrative operations on `mongod`\nprocesses such as starting or stopping the daemon.\n\nThe methodology is easily extensible to other administrative operations\nsuch as preventing non-authorized users from modifying `mongod` audit\nlogs.\n\nUse this procedure for Linux based systems to allow users with\nrestricted permissions to stop/start/restart `mongod` processes. These\nusers are set up under a non-root Linux group. Further, the Linux group\nof these users is different from the Linux user group under which the\n`mongod` process runs.\n\n## Considerations\n\n>\n>\n>WARNING: The procedure requires root access for the setup. Incorrect\n>settings can lead to an unresponsive system, so always test on a\n>development environment before implementing in production. Ensure you\n>have a current backup of your data.\n>\n>\n\nIt's recommended to perform this procedure while setting up a new\nsystem. If it is not possible, perform the procedure during the\nmaintenance window.\n\nThe settings will impact only one local system, thus in case of replica\nset or a sharded cluster perform the procedure in a rolling matter and\nnever change all nodes at once.\n\n## Tested Linux flavors\n\n- CentOS 6\\|7\n- RHEL 6\\|7\n- Ubuntu 18.04\n- Amazon Linux 2\n\n>\n>\n>Disclaimer: For other Linux distributions the procedure should work in a\n>similar way however, only the above versions were tested while writing\n>this article.\n>\n>\n\n## Procedure\n\n- Add the user with limited permissions (replace testuser with your\n user):\n\n``` bash\n$ adduser testuser\n$ groupadd testgroup\n```\n\n- Install MongoDB\n Community\n \\|\n Enterprise\n following our recommended procedures.\n- Edit the MongoDB configuration file `/etc/mongod.conf` permissions:\n\n``` none\n$ sudo chown mongod:mongod /etc/mongod.conf\n$ sudo chmod 600 /etc/mongod.conf\n$ ls -l /etc/mongod.conf\n-rw-------. 1 mongod mongod 330 Feb 27 18:43 /etc/mongod.conf\n```\n\nWith this configuration, only the mongod user (and root) will have\npermissions to access and edit the `mongod.conf` file. 
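As a quick sanity check (a sketch using the example `testuser` account created earlier), trying to read the file as the restricted user should now fail:

``` none
$ sudo -u testuser cat /etc/mongod.conf
cat: /etc/mongod.conf: Permission denied
```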
No other user\nwill be allowed to read/write and have access to its content.\n\n### Systems running with systemd\n\nThis procedure works for CentOS 7 and RHEL 7.\n\n- Add the following configuration lines to the\n sudoers file with\n visudo:\n\n``` bash\n%mongod ALL =(ALL) NOPASSWD: /bin/systemctl start mongod.service, /bin/systemctl stop mongod.service, /bin/systemctl restart mongod.service\n%testuser ALL =(ALL) NOPASSWD: /bin/systemctl start mongod.service, /bin/systemctl stop mongod.service, /bin/systemctl restart mongod.service\n```\n\n>\n>\n>Note: The root user account may become non-functional if a syntax error\n>is introduced in the sudoers file.\n>\n>\n\n### Systems running with System V Init\n\nThis procedure works for CentOS 6, RHEL 6, Amazon Linux 2 and Ubuntu\n18.04.\n\n- MongoDB init.d-mongod script is available on our repository\n here\n in case manual download is required (make sure you save it in the\n /etc/init.d/ directory with permissions set to 755).\n- Add the following configuration lines to the\n sudoers file with\n visudo:\n\nFor CentOS 6, RHEL 6 and Amazon Linux 2:\n\n``` bash\n%mongod ALL =(ALL) NOPASSWD: /sbin/service mongod start, /sbin/service mongod stop, /sbin/service mongod restart\n%testuser ALL =(ALL) NOPASSWD: /sbin/service mongod start, /sbin/service mongod stop, /sbin/service mongod restart\n```\n\nFor Ubuntu 18.04:\n\n``` bash\n%mongod ALL =(ALL) NOPASSWD: /usr/sbin/service mongod start, /usr/sbin/service mongod stop, /usr/sbin/service mongod restart\n%testuser ALL =(ALL) NOPASSWD: /usr/sbin/service mongod start, /usr/sbin/service mongod stop, /usr/sbin/service mongod restart\n```\n\n>\n>\n>Note: The root may become non-functional if a syntax error is introduced\n>in the sudoers file.\n>\n>\n\n## Testing procedure\n\n### Systems running with systemd (systemctl service)\n\nSo with these settings testuser has no permissions to read\n/etc/mongod.conf but can start and stop the mongod service:\n\n``` none\ntestuser@localhost ~]$ sudo /bin/systemctl start mongod.service\n[testuser@localhost ~]$ sudo /bin/systemctl stop mongod.service\n[testuser@localhost ~]$ vi /etc/mongod.conf\n\"/etc/mongod.conf\" [Permission Denied]\n[testuser@localhost ~]$ sudo vi /etc/mongod.conf\n\"/etc/mongod.conf\" [Permission Denied]\n```\n\n>\n>\n>Note: The authorization is given when using the `/bin/systemctl`\n>command. With this procedure, the `sudo systemctl start mongod` will\n>prompt the sudo password for the testuser.\n>\n>\n\n### Systems running with System V Init\n\nUse sudo service mongod \\[start|stop|restart\\]:\n\n``` none\n[testuser@localhost ~]$ sudo service mongod start\nStarting mongod: [ OK ]\n[testuser@localhost ~]$ sudo service mongod stop\nStopping mongod: [ OK ]\n[testuser@localhost ~]$ vi /etc/mongod.conf\n\"/etc/mongod.conf\" [Permission Denied]\n[testuser@localhost ~]$ sudo vi /etc/mongod.conf\n[sudo] password for testuser:\nSorry, user testuser is not allowed to execute '/bin/vi /etc/mongod.conf' as root on localhost.\n```\n\n>\n>\n>Note: Additionally, test restarting other services with the testuser\n>with (and without) the required permissions.\n>\n>\n\n## Wrap Up\n\nIt is one of the critical security requirements, not to give\nunauthorized users full root privileges. 
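One way to verify that this procedure honors that requirement is to list exactly which commands sudo will let the restricted account run; it should show only the start/stop/restart entries added to the sudoers file (a sketch with the example `testuser`):

``` bash
# Run as root: list the sudo rules that apply to testuser
$ sudo -l -U testuser
```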
With that requirement in mind,\nit is important for system administrators to know that it is possible to\ngive access to actions like restart/stop/start for a `mongod` process\n(or any other process) without giving root privileges, using Linux\nsystems capabilities.\n\n>\n>\n>If you have questions, please head to our [developer community\n>website where the MongoDB engineers and\n>the MongoDB community will help you build your next big idea with\n>MongoDB.\n>\n>\n\n", "format": "md", "metadata": {"tags": ["MongoDB", "Bash"], "pageDescription": "Secure your MongoDB installation by allowing non-root users to stop/start/restart your mongod process.", "contentType": "Tutorial"}, "title": "Procedure to Allow Non-Root Users to Stop/Start/Restart \"mongod\" Process", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/bash/wordle-bash-data-api", "action": "created", "body": "# Build Your Own Wordle in Bash with the Data API\n\n> This tutorial discusses the preview version of the Atlas Data API which is now generally available with more features and functionality. Learn more about the GA version here.\n\nBy now, you have most certainly heard about Wordle, the new word game that was created in October 2021 by a former Reddit engineer, Josh Wardle. It gained so much traction at the beginning of the year even Google has a secret easter egg for it when you search for the game.\n\nI wanted to brush up on my Bash scripting skills, so I thought, \u201cWhy not create the Wordle game in Bash?\u201d I figured this would be a good exercise that would include some `if` statements and loops. However, the word list I have available for the possible Wordles is in a MongoDB collection. Well, thanks to the new Atlas Data API, I can now connect to my MongoDB database directly from a Bash script.\n\nLet\u2019s get to work!\n\n## Requirements\nYou can find the complete source code for this repository on Github. You can use any MongoDB Atlas cluster for the data API part; a free tier would work perfectly. \n\nYou will need Bash Version 3 or more to run the Bash script. \n\n```bash\n$ bash --version\nGNU bash, version 3.2.57(1)-release (x86_64-apple-darwin20)\n```\n\nYou will need curl to access the Data API. \n\n```bash\n$ curl --version\ncurl 7.64.1 (x86_64-apple-darwin20.0) libcurl/7.64.1 (SecureTransport) LibreSSL/2.8.3 zlib/1.2.11 nghttp2/1.41.0\n```\n\nFinally, you will use jq to manipulate JSON objects directly in the command line. \n\n```bash\njq --version\njq-1.6\n```\n\n## Writing the game\n\nThe game will run inside a while loop that will accept user inputs. The loop will go on until either the user finds the right word or has reached five tries without finding the right word. \n\nFirst, we\u2019ll start by creating a variable that will hold the word that needs to be guessed by the user. In Bash, you don\u2019t need to initialize variables; you can simply assign a value to it. To access the variable, you use the dollar sign followed by the variable's name.\n\n```bash\nWORD=MONGO\necho Hello $WORD\n# Hello MONGO\n```\n\nNext up, we will need a game loop. In Bash, a `while` loop uses the following syntax.\n\n```bash\nwhile ]\ndo\n # Stuff\ndone\n```\n\nFinally, we will also need an if statement to compare the word. 
The syntax for `if` in Bash is as follows.\n\n```bash\nif [ ]\nthen\n # Stuff\nelif [ ]\nthen\n # Optional else-if block\nelse \n # Else block\nfi\n```\n\nTo get started with the game, we will create a variable for the while condition, ask the user for input with the `read` command, and exit if the user input matches the word we have hard-coded.\n\n```bash\nWORD=MONGO\nGO_ON=1\nwhile [ $GO_ON -eq 1 ]\ndo\n read -n 5 -p \"What is your guess: \" USER_GUESS\n if [ $USER_GUESS == $WORD ]\n then\n echo -e \"You won!\"\n GO_ON=0\n fi\ndone\n```\n\nSave this code in a file called `wordle.sh`, set the execute permission on the file, and then run it.\n\n```bash\n$ chmod +x ./wordle.sh\n$ ./wordle.sh\n```\n\nSo far, so good; we now have a loop that users can only exit if they find the right word. Let\u2019s now make sure that they can only have five guesses. To do so, we will use a variable called TRIES, which will be incremented using `expr` at every guess. If it reaches five, then we change the value of the GO_ON variable to stop the main loop.\n\n```bash\nGO_ON=1\nTRIES=0\nwhile [ $GO_ON -eq 1 ]\ndo\n TRIES=$(expr $TRIES + 1)\n read -n 5 -p \"What is your guess: \" USER_GUESS\n if [ $USER_GUESS == $WORD ]\n then\n echo -e \"You won!\"\n GO_ON=0\n elif [ $TRIES == 5 ]\n then\n echo -e \"You failed.\\nThe word was \"$WORD\n GO_ON=0\n fi\ndone\n```\n\nLet\u2019s now compare the value that we got from the user and compare it with the word. Because we want the coloured squares, we will need to compare the two words letter by letter. We will use a for loop and use the index `i` of the character we want to compare. For loops in Bash have the following syntax.\n\n```bash\nfor i in {0\u202610}\ndo\n # stuff\ndone\n```\n\nWe will start with an empty `STATE` variable for our round result. We will add a green square for each letter if it\u2019s a match, a yellow square if the letter exists elsewhere, or a black square if it\u2019s not part of the solution. Add the following block after the `read` line and before the `if` statement.\n\n```bash\n STATE=\"\"\n for i in {0..4}\n do\n if [ \"${WORD:i:1}\" == \"${USER_GUESS:i:1}\" ]\n then\n STATE=$STATE\"\ud83d\udfe9\"\n elif [[ $WORD =~ \"${USER_GUESS:i:1}\" ]]\n then\n STATE=$STATE\"\ud83d\udfe8\"\n else\n STATE=$STATE\"\u2b1b\ufe0f\"\n fi\n done\n echo \" \"$STATE\n```\n\nNote how we then output the five squares using the `echo` command. This output will tell the user how close they are to finding the solution.\n\nWe have a largely working game already, and you can run it to see it in action. The only major problem left now is that the comparison is case-sensitive. To fix this issue, we can transform the user input into uppercase before starting the comparison. We can achieve this with a tool called `awk` that is frequently used to manipulate text in Bash. Right after the `read` line, and before we initialize the empty STATE variable, add the following line to uppercase the user input.\n\n```bash\n USER_GUESS=$(echo \"$USER_GUESS\" | awk '{print toupper($0)}')\n```\n\nThat\u2019s it; we now have a fully working Wordle clone. \n\n## Connecting to MongoDB\n\nWe now have a fully working game, but it always uses the same start word. In order for our application to use a random word, we will start by populating our database with a list of words, and then pick one randomly from that collection. \n\nWhen working with MongoDB Atlas, I usually use the native driver available for the programming language I\u2019m using. Unfortunately, no native drivers exist for Bash. 
That does not mean we can\u2019t access the data, though. We can use curl (or another command-line tool to transfer data) to access a MongoDB collection using the new Data API.\n\nTo enable the data API on your MongoDB Atlas cluster, you can follow the instructions from the [Getting Started with the Data API article.\n\nLet\u2019s start with adding a single word to our `words` collection, in the `wordle` database. Each document will have a single field named `word`, which will contain one of the possible Wordles. To add this document, we will use the `insertOne` endpoint of the Data API.\n\nCreate a file called `insert_words.sh`. In that file, create three variables that will hold the URL endpoint, the API key to access the data API, and the cluster name.\n\n```bash\nAPI_KEY=\"\"\nURL=\"\"\nCLUSTER=\"\"\n```\n\nNext, use a curl command to insert a single document. As part of the payload for this request, you will add your document, which, in this case, is a JSON object with the word \u201cMONGO.\u201d Add the following to the `insert_words.sh` file.\n\n```bash\ncurl --location --request POST $URL'/action/insertOne' \\\n --header 'Content-Type: application/json' \\\n --header 'Access-Control-Request-Headers: *' \\\n --header 'api-key: '$API_KEY \\\n --data-raw '{\n \"collection\":\"words\",\n \"database\":\"wordle\",\n \"dataSource\":\"'$CLUSTER'\",\n \"document\": { \"word\": \"MONGO\" }\n}'\n```\n\nRunning this file, you should see a result similar to\n\n```\n{\"insertedId\":\"620275c014c4be86ede1e4e7\"}\n```\n\nThis tells you that the insert was successful, and that the new document has this `_id`.\n\nYou can add more words to the list, or you can import the official list of words to your MongoDB cluster. You can find that list in the `words.json` file in this project\u2019s repository. You can change the `insert_words.sh` script to use the raw content from Github to import all the possible Wordles at once with the following curl command. This command will use the `insertMany` endpoint to insert the array of documents from Github.\n\n```bash\ncurl --location --request POST $URL'/action/insertMany' \\\n --header 'Content-Type: application/json' \\\n --header 'Access-Control-Request-Headers: *' \\\n --header 'api-key: '$API_KEY \\\n --data-raw '{\n \"collection\":\"words\",\n \"database\":\"wordle\",\n \"dataSource\":\"'$CLUSTER'\",\n \"documents\": '$(curl -s https://raw.githubusercontent.com/mongodb-developer/bash-wordle/main/words.json)'\n}'\n```\n\nNow back to the `wordle.sh` file, add two variables that will hold the URL endpoint, the API key to access the data API, and cluster name at the top of the file.\n\n```bash\nAPI_KEY=\"\"\nURL_ENDPOINT=\"\"\nCLUSTER=\"\"\n```\n\nNext, we\u2019ll use a curl command to run an aggregation pipeline on our Wordle database. This aggregation pipeline will use the `$sample` stage to return one random word. The curl result will then be piped into `jq`, a tool to extract JSON data from the command line. Jq will return the actual value for the `word` field in the document we get from the aggregation pipeline. All of this is then assigned to the WORD variable. 
\n\nRight after the two new variables, you can add this code.\n\n```bash\nWORD=$(curl --location --request POST -s $URL'/action/aggregate' \\\n--header 'Content-Type: application/json' \\\n--header 'Access-Control-Request-Headers: *' \\\n--header 'api-key: '$API_KEY \\\n--data-raw '{\n \"collection\":\"words\",\n \"database\":\"wordle\",\n \"dataSource\":\"Cluster0\",\n \"pipeline\": [{\"$sample\": {\"size\": 1}}]\n}' | jq -r .documents[0].word)\n\n```\n\nAnd that\u2019s it! Now, each time you run the `wordle.sh` file, you will get to try out a new word.\n\n```\nWhat is your guess: mongo \u2b1b\ufe0f\ud83d\udfe8\ud83d\udfe8\u2b1b\ufe0f\ud83d\udfe8\nWhat is your guess: often \ud83d\udfe8\u2b1b\ufe0f\u2b1b\ufe0f\u2b1b\ufe0f\ud83d\udfe9\nWhat is your guess: adorn \ud83d\udfe8\u2b1b\ufe0f\ud83d\udfe8\ud83d\udfe8\ud83d\udfe9\nWhat is your guess: baron \u2b1b\ufe0f\ud83d\udfe9\ud83d\udfe8\ud83d\udfe9\ud83d\udfe9\nWhat is your guess: rayon \ud83d\udfe9\ud83d\udfe9\ud83d\udfe9\ud83d\udfe9\ud83d\udfe9\nYou won!\n```\n\n## Summary\n\nThat\u2019s it! You now have your very own version of Wordle so that you can practice over and over directly in your favorite terminal. This version only misses one feature if you\u2019re up to a challenge. At the moment, any five letters are accepted as input. Why don\u2019t you add a validation step so that any word input by the user must have a match in the collection of valid words? You could do this with the help of the data API again. Don\u2019t forget to submit a pull request to the repository if you manage to do it!\n\n", "format": "md", "metadata": {"tags": ["Bash", "Atlas"], "pageDescription": "Learn how to build a Wordle clone using bash and the MongoDB Data API.", "contentType": "Code Example"}, "title": "Build Your Own Wordle in Bash with the Data API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/php/exploring-php-driver-jeremy-mikola", "action": "created", "body": "# Exploring the PHP Driver with Jeremy Mikola - Podcast Episode\n\nJeremy Mikola is a Staff Engineer at MongoDB and helps maintain the MongoDB PHP Driver and Extension. In this episode of the podcast, Jesse Hall and Michael Lynn sit down with Jeremy to talk about the PHP Driver and some of the history of PHP and Mongodb.\n\n:youtube]{vid=qOuGM6dNDm8}\n\nMichael: [00:00:00] Hey, Jesse, how are you doing today? \n\nJesse: [00:00:02] Good. How are you?\n\nMichael: [00:00:02] Fantastic. It's good to have you back on the podcast. Hey, what's your experience with PHP? \n\nJesse: I've done a little bit of PHP in the past. Mostly JavaScript though, so not too much, but today we do have a special guest. Jeremy Mikola is a staff engineer with Mongo DB, and he knows all about the PHP driver. Why don't you give us a little bit of background on how long have you been with MongoDB?\n\nJeremy: [00:00:26] Hi, nice to be here. So I joined MongoDB just over nine years. So in the middle of May was my nine-year anniversary. And the entire time of year, a lot of employees been here that long. They tend to shuffle around in different departments and get new experiences. I've been on the drivers team the entire time.\nSo when I find a place that you're comfortable with, you stick there. So when I came on board team was maybe 10 or 12 people, maybe one or two people per language. We didn't have nearly as many officially supported languages as we do today. 
But the PHP driver was one of the first ones.\n\nIt was developed actually by some of the server engineers. Christina, she was one of the early employees, no longer at MongoDB now, but. So yeah, back then it was PHP, Python, Ruby, C# Java, and I think Node. And we've kind of grown out since then. \n\nMichael: [00:01:05] Fantastic. And what's your personal experience with PHP? How did you get involved in PHP? \n\nJeremy: [00:01:11] So I picked up PHP as a hobby in high school. Date myself here in high school graduation was around 2001. It's kind of the mid nineties getting home from school, load up Napster work on a personal, had a personal SimCity website. We started off around this time of PHP. Nuke was one of the early CMS frameworks back then.\n\nAnd a lot of it was just tinkering, copy/pasting and finding out how stuff works, kind of self-taught until you get to college and then actually have real computer science classes and you understand there's math behind programming and all these other things, concepts. So it's definitely, it was a hobby through most of college.\n\nMy college curriculum was not PHP at all. And then afterwards I was able to, ended up getting a full-time job I working on, and that was with a Symfony 1.0 at the time around like 2007 and followed a couple of companies in the role after that. Ended up being the Symfony 2.0 framework, I was just coming out and that was around the time that PHP really started maturing with like package managers and much more object oriented, kind of shedding the some of the old\n\nbad publicity had had of the early years. And from there, that was also the that second PHP job was where I got started with MongoDB. So we were actually across the street from MongoDB's office in Midtown, New York on the flat iron district and customer support back then used to be go downstairs, go across the street and go up to Elliot's desk and the ShopWiki offices and the Mongo old 10gen offices.\nAnd you'd go ask your question. That kind of works when you only have a few official customers. \n\nMichael: [00:02:36] Talking about Elliot Horowitz. \n\nJeremy: [00:02:37] Yes, as Elliot Horowitz, the co-founder was much more accessible then when the company was a lot smaller. And from that role ended up jumping to a second PHP company kind of the same framework, also using MongoDB.\nIt was the same tech stack. And after that role, I was approached by an old coworker from the first company that used MongoDB. He had ended up at the drivers team, Steve Franzia. He was one of the first engineering managers, the drivers team help build the initial, a lot of the employees that are still on the drivers team\n\nnow, a lot of the folks leading the teams were hired by him or came around the same time. So the early developers of the Python, the Java driver and so he, we had a interview came back, wasn't allowed to recruit me out of the first job whatever paperwork you signed, you can't recruit your old coworkers.\n\nBut after I spent some time somewhere else, he was happy to bring me on. I learned about the opportunity to come on the drivers team. And I was really excited to go from working on applications, to going and developing libraries suited for other developers instead of like a customer facing product. And so that's kind of been the story since then, just really enjoyed working on APIs as well as it was working on the ODM library at the time, which we can talk about a little bit later. 
So kind of was already involved in a lot of MongoDB PHP ecosystem.\n\nJesse: [00:03:46] Cool. So let's, let's talk more about that, that PHP driver. So, what is it, why is it useful to our listeners? How does it work?\n\nJeremy: [00:03:54] okay. Yep. So level set for the basic explanation. So every language since MongoDB to be doesn't expose a ... it's. Not like some databases, that might have a REST protocol or you just have a web client accessing it. So you do need a driver to speak the wire protocol language, and the MongoDB drivers are particularly different from some other database drivers.\n\nWe do a lot of the monitoring of servers and kind of a lot more heavy than you might find in an equivalent like SQL driver especially like the PHP SQL drivers. So the drivers much more intensive library with a lot of the network programming. We're also responsible for converting, how MongoDB stores documents in a it's binary JSON format BSON converting that to whatever the language's\nnatural representation is. So I can Java that may be just be mapping at the Java classes with PHP. The original driver would turn everything into associative arrays. Make sure that a MongoDB string becomes a PHP string, vice versa. And so the original PHP driver you had familiar concepts across all drivers.\n\nYou have your client object that you connect to the database with, and then you have your database, your collection. And the goal is to make whatever language to users. Running their application, make the drivers as idiomatic as possible. And this kind of bit us early on because the drivers may be too idiomatic and they're inconsistent with each other, which becomes a struggle with someone that's writing a MongoDB application, in say C# and PHP.\n\nThere might be two very different experiences over Python and NodeJS. And the one thing that we hadn't since then was writing specifications to kind of codify what are the areas that we want to be idiomatic, but we also want to have consistent APIs. And this has also been a boon to our support team because if the drivers can behave predictably, both kind of have a familiar API in the outside that our users develop with.\nAnd then also internally, how do they behave when they connect to MongoDO, so kind of being able to enforce that and having internal tests that are shared across all the different drivers has been a huge plus to our support team.\nMichael: [00:05:38] So talk, talk a little bit about that, the balance between a standards-based approach and the idiomatic approach, how does that come together? \n\nJeremy: [00:05:48] Right. So this has definitely been a learning process from the, some of the early specifications. One of the first specifications we had was for the CRUD API which stands acronym for create, read, update, delete. And that was one of the, that's an essential component of every API. Like how do you insert data into MongoDB and read it back? And having that API let's us standardize on a this is a fine method. What are the options that should take how does this map to the servers? And the MongoDB shell API as well.\nThat was another project that exists outside of the driver's team's control. But from our customer standpoint, the Mongo shell is also something that they're common to use. So we try to enforce some consistency with that as well. And the specifications we want to, at a functional level provide a consistent experience.\n\nBut in terms of honoring that every language should be idiomatic. 
We're going to make allowances that say in C# you have special types to represent time units of time. Whereas other languages like C or Python, you might just use integers or numeric types. So having the specifications say if you're going to express\n\nlike the query time or a time limit on the query will allow like C# driver will say, if you have a time object, you can certainly make use of that type. And another language or students providing guidance and also consistent naming. So we'll say this method should be called find or findOne in your language, if you use camel case or you use snake case like Python with underscores, we're going to let you use that variation.\nAnd that'll keep things idiomatic, so that a user using a Python library doesn't expect to see Pascal style method names in their application. They're going to want it to blend in with other libraries in that languages ecosystem. But the behaviors should be predictable. And there should be a common sense of what functionality is supported across all the different the drivers. \n\nMichael: [00:07:26] Is that supported through synonyms in the language itself? So for, you mentioned, find and find one and maybe some people are used to other, other words to that stand for the, the read functionality in CRUD. \n\nJeremy: [00:07:41] So, this is, that's a point where we do need to be opinionated about, because this overlaps with also the MongoDB documentation. So if you go to the MongoDB server manual that the driver's team doesn't maintain you'll find language examples in there. An initiative we started a few years ago and that's code that we keep in the driver project that the docs team will then parse out and be able to embed in them are going to be manual.\n\nSo the benefit of a, we get to test it in C.I. Environments. And then the MongoDB manual you're browsing. You can say, I use this language and then all the code examples, instead of the MongoDB shell might be in your, in C# or Java or PHP. And so having consistent having, being able to enforce the actual names, we have to be opinionated that we want a method that reads the database instead of calling it query or select.\n\nWe want that to be called find. So we want that to be consistently named and we'll just leave flexibility in terms of the, the casing or if you need prefixing or something like that, but there's certain common or certain core words. We want users to think, oh, this is a find, this is a find operation.\n\nIt also maps to the find command in the database. That's the same thing with inserts and updates. One of the other changes with the old drivers. We would have an update method and in MongoDB different ways that you work with documents, you can update them in place, or you can replace the document.\n\nAnd both of those in the server's perspective happened to be called an update command. So you had original drivers that would just have an update method with a bunch of options. And depending what options you pass in, they could do myriad different behaviors. You might be overwriting the entire document.\n\nYou might be incrementing a value inside of it. So one of the things that CRUD API implemented was saying, we're going to kind of, it's a kind of a poor design pattern to have an overloaded method name that changes behavior wildly based on the arguments. 
So let's create an updateOne method I replaced one method and updateMany method.\n\n> For more information about the PHP Driver's implementation of CRUD, refer to the [PHP Quickstart series.\n\nSo now that when the users write their applications instead of having to infer, what are the options that I'm passing into this method? The method name itself leads to more by self-documenting code in that user's application.\n\nJesse: 00:09:31] awesome. So how do users get started using the driver?\n\nJeremy: [00:09:35] Yeah, so I think a lot of users some, maybe their first interaction might be through the online education courses that we have through MongoDB university. Not every driver, I don't believe there's a PHP class for that. There's definitely a Python Java node, a few others and just kind of a priority list of limited resources to produce that content.\n\nBut a lot of users are introduced, I would say through MongoDB University. Probably also through going back nine years early on in the company. MongoDB had a huge presence at like college hackathons going to conferences and doing booths, try out MongoDB and that definitely more appropriate when we were sa maller company, less people had heard about MongoDB now where it's kind of a different approach to capturing the developers.\n\nI think in this case, a lot of developers already heard about MongoDB and maybe it's less of a. Maybe the focus has shifted towards find out how this database works to changing maybe misconceptions they might have about it, or getting them to learn about some new features that we're implementing. I think another way that users pick up using databases sometimes through projects that have MongoDB integrations. \nSo at the first company where I was using MongoDB and Symfony to in both of them were, it was like a really early time to be using both of those technologies much less together. There was the concept of ORM libraries for PHP, which would kind of map your PHP classes to relational database.\n\nAnd at the time I don't know who made this decision, but early startup, the worst thing you can possibly do is use two very new technologies that are changing constantly and are arguably unproven. Someone higher up than me decided let's use MongoDB with this new web framework. It was still being actively developed and not formally released yet.\n\nAnd we need an ORM library for MongoDB cause we don't want to just write raw database queries back and forth. And so we developed a ODM library, object document mapper instead of object relational mapper. And that was based on the same common interfaces as the corresponding ORM library. So that was the doctrine ODM.\n\nAnd so this was really early time to be writing that. But it integrated so well. It was into the framework and from a such an early point that a lot of users when picking up the Symphony two framework, they realized, oh, we have this ORM library that's integrated in an ODM library. They both have\n\nbasically the same kind of support for all the common features, both in terms of integrating with the web forms all the bundles for like storing user accounts and user sessions and things like that. So in all those fleet or functionalities is kind of a drop-in replacement. And maybe those users said, oh MongoDB's new.\n\nI want to try this out. And so that being able to. Have a very low barrier of entry to switch into it. Probably drove some users to to certainly try it out and stick with it. We definitely that's. 
The second company was at was kind of using it in the same vein. It was available as a drop-in replacement and they were excited about the not being bound to a relational schema.\nSo definitely had its use as a first company. It was an e-commerce product. So it definitely made use of storing like flexible the flexible schema design for storing like product information and stuff. And then the, we actually used SQL database side by side there just to do all the order, transactional stuff.\n\nBecause certainly at the time MongoDB did not have the same kind of level of transactions and stuff that it does today. So that was, I credit that experience of using the right tool for the job and the different part of the company like using MongoDB to represent products and using the relational database to do the order processing and transactions with time.\n\nDefinitely left me with a positive experience of using MongoDB versus like trying to shoehorn everything into the database at the time and realizing, oh, it doesn't work for, for this use case. I'm gonna write an angry blog post about it. \n\nMichael: [00:12:53] Yeah, I can relate. So if listeners are looking to get started today, you mentioned the ODM, you mentioned the driver what's the best way to get started today?\n\nJeremy: [00:13:04] So I definitely would suggest users not jump right in with an ODM library. Because while that's going to help you ramp up and quickly develop an application, it's also going to extract a lot of the components of the database away from you. So you're not going to get an understanding of how the query language works completely, or maybe how to interact with aggregation pipelines, which are some of the richer features of MongoDB.\n\nThat said there's going to be some users that like, when you need to you're rapidly developing something, you don't want to think about that. Like you're deciding like uncomfortable and maybe I want to use Atlas and use all the infrastructure behind it with the scaling and being able to easily set up backups and all that functionality.\n\nAnd so I just want to get down and start writing my application, crank out these model classes and just tap them, store to MongoDB. So different use cases, I would say, but if you really want to learn MongoDB, install the PHP driver comes in two parts. There's the PHP extension, which is implemented in C.\n\nSo that's gonna be the first thing you're gonna install. And that's published as a pickle package, like a lot of third-party PHP extensions. So you will install that and that's going to provide a very basic API on top of that. We have a higher level package written in PHP code itself. And that's kind of the offload, like what is the essential heavy lifting code that we have to do in C and what is the high level API that we can implement in PHP? It's more maintainable for us. And then also users can read the code or easily contribute to it if they wish. And so those two components collectively form what we call it, the PHP driver. And so using once those are both installed getting familiar with the API in terms of our documentation for that high-level library kind of goes through all the methods.\n\nWe don't, I would say where there's never nearly enough tutorials, but there's a bunch of tutorials in there to introduce the CRUD methods. Kind of explain the basics of inserting and reading and writing documents. MongoDB writing queries at patient pipelines iterating cursors. 
When you do a query, you get this cursor object back, how you read your results back.\n\nSo that would hopefully give users enough of a kind of a launchpad to get started. And I was certainly biased from having been exposed to MongoDB so long, but I think the driver APIs are mostly intuitive. And that's been, certainly been the goal with a lot of the specifications we write. And I'll say this, this does fall apart\n\nwhen we get into things like say client-side encryption, these advanced features we're even being a long-term employee. Some of these features don't make complete sense to me because I'm not writing applications with them the same way our users are. We would kind of, a driver engineer, we might have a portion of the, the average team work on a, on a new feature, a new specification for it.\nSo not every driver engineer has the same benefit of being, having the same holistic experience of the database platform as is was easy to do so not years ago where going to say oh, I came in, I was familiar with all these aspects of MongoDB, and now there's like components of MongoDB that I've never interacted with.\n\nLike some of the authentication mechanisms. Some of that, like the Atlas, a full text search features there's just like way too much for us to wrap our heads around.\n\nJesse: [00:15:49] Awesome. Yeah. And if the users want to get started, be sure to check the show notes. We'll include links to everything there. Let's talk about the development process. So, how does that work? And is there any community participation there?\n\nJeremy: [00:16:02] Yep. So the drivers spec process That's something that's definitely that's changed over the time is that I mentioned the specifications. So all the work that I mean kind of divide the drivers workload into two different things. We have the downstream work that comes from the server or other teams like Atlas has a new feature.\n\nThe server has a new feature, something like client side encryption or the full text search. And so the, for that to be used by our community, we need support for that in the driver. Right? So we're going to have downstream tickets be created and a driver engineer or two, a small team is going to spec out what the driver API for that feature should be.\n\nAnd that's going to come on our plate for the next so if you consider like MongoDB 5.0, I was coming out soon. Or so if we look at them, MongoDB 5.0, which should be out within the summer that's going to have a bunch of new features that need to end up in the driver API. And we're in the process of designing those and writing our tests for those.\n\nAnd then there's going to be another handful of features that are maybe fully contained within the driver, or maybe a single language as a new feature we want to write, let's give you an example, a PHP, we have a desire to improve the API is around mapping these on to PHB classes and back and forth.\n\nSo that's something that tied back to the doctorate ODM library. That was something that was. The heavy lifting and that was done. That doctor did entirely at PHB there's ways that we can use the C extension to do that. And it's a matter of writing enough C code to get the job done that said doctrine can fully rely on it instead of having to do a lot of it still on its own.\n\n So the two of us working on the PHP driver now, myself and Andres Broan we both have a history of working on Doctrine, ODM project. 
So we know what the needs of that library are.\n \nAnd we're a good position to spec out the kind of features. And more importantly, in this case, it involves a lot of prototyping to find out the right balance of how much code we want to write. And what's the performance improvement that we'll be able to give the third, the higher level libraries that can use the driver.\n\nThat's something that we're going to be. Another example for other drivers is implementing a client side operations timeout. So that's, this is an example of a cross driver project that is basically entirely on the language drivers. And this is to give users a better API. Then so right now MongoDB\n\nhas a whole bunch of options. If you want to use socket timeout. So we can say run this operation X amount of time, but in terms of what we want to give our users and the driver is just think about a logical amount of time that you want something to complete in and not have to set five different timeout options at various low levels.\n\nAnd so this is something that's being developed inside. We're specing out a common driver API to provide this and this feature really kind of depends entirely on the drivers and money and it's not really a server feature or an Atlas feature. So those are two examples of the tickets that aren't downstream changes at all.\n\nWe are the originators of that feature. And so you've got, we have a mix of both, and it's always a lack of, not enough people to get all the work done. And so what do we prioritize? What gets punted? And fortunately, it's usually the organic drivers projects that have to take a back seat to the downstream stuff coming from other departments, because there's a, we have to think in terms of the global MongoDB ecosystem.\n\nAnd so if an Atlas team is going to develop a new feature and folks can't use that from drivers, no one's going to be writing their application with the MongoDB shell directly. So if we need, there are certain things we need to have and drivers, and then we've just kind of solved this by finding enough resources and staff to get the job done. \n\nMichael: [00:19:12] I'm curious about the community involvement, are there a lot of developers contributing code? \n\nJeremy: [00:19:19] So I can say definitely on the PHP driver, there's looking at the extension side and see there's a high barrier of entry in terms of like, when I joined the company, I didn't know how to write C extensions and see, it's not just a matter of even knowing C. It's knowing all the macros that PHP itself uses.\n\nWe've definitely had a few smaller contributions for the library that's written in PHP. But I would say even then it's not the same as if we compare it to like the Symfony project or other web frameworks, like Laravel where there's a lot of community involvement. Like people aren't running an application, they want a particular feature.\n\nOr there's a huge list of bugs. That there's not enough time for the core developers to work on. And so users pick up the low-hanging fruit and or the bigger projects, depending on what time. And they make a contribution back to the framework and that's what I was doing. And that for the first company, when you use Symphone and Mongo. But I'd say in terms of the drivers speaking for PHP, there's not a lot of community involvement in terms of us.\n\nDefinitely for, we get issues reported, but in terms of submitting patches or requesting new features, I don't kind of see that same activity. And I don't remember that. 
I'd say what the PHP driver, I don't see the same kind of user contribution activity that you'd see in popular web frameworks and things.\n\nI don't know if that's a factor of the driver does what it needs to do or people are just kind of considered a black box. It's this is the API I'm going to do its functionally here and not try and add in new features. Every now and then we do get feature requests, but I don't think they materialize in, into code contributions.\n\nIt might be like someone wants this functionality. They're not sure how we would design it. Or they're not sure, like what, what internal refactorings or what, what is it? What is the full scope of work required to get this feature done? But they've voiced to us that oh, it'd be nice if maybe going like MongoDB's date type was more usable with, with time zones or something like that.\nSo can you provide us with a better way to this is identifiable identify a pain point for us, and that will point us to say, develop some resources into thinking it through. And maybe that becomes a general drivers spec. Maybe that just becomes a project for the PHP driver. Could say a little bit of both.\n\nI do want to point out with community participation in drivers versus existing drivers. We definitely have a lot of community developed drivers, so that MongoDB as a company limited staffing. We have maybe a dozen or so languages that we actively support with drivers. There's many more than that in terms of community developed drivers.\n\nAnd so that's one of the benefits of us publishing specifications to develop our drivers kind of like open sourcing our development process. Is also a boon for community drivers, whether they have the resources to follow along with every feature or not, they might decide some of these features like the more enterprise features, maybe a community driver doesn't doesn't care about that.\nBut if we're updating the CRUD API or one of the more essential and generally useful features, they can follow along the development processes and see what changes are coming for new server versions and implement that into the community driver. And so that's kind of in the most efficient way that we've come up with to both support them without having the resources to actually contribute on all those community projects.\n\nCause I think if we could, it would be great to have MongoDB employees working on a driver for every possible language just isn't feasible. So it's the second best thing we can do. And maybe in lieu of throwing venture capital money at them and sponsoring the work, which we've done in the past with some drivers at different degrees.\n\nBut is this open sourcing the design process, keeping that as much the, not just the finished product, but also the communication, the review process and keeping that in it to give up yards as much as possible so people can follow the design rationale that goes into the specifications and keep up to date with the driver changes. \n\nMichael: [00:22:46] I'm curious about the about the decline in the PHP community, there's been obviously a number of factors around that, right? The advent of Node JS and the popularity of frameworks around JavaScript, it's probably contributing to it. 
But I'm curious as someone who works in the PHP space, what are your thoughts around the, the general decline\nof, or also I say the decrease in the number of new programmers leveraging PHP, do you see that continuing or do you think that maybe PHP has some life left?\n\nJeremy: [00:23:24] so I think the it's hard for me to truly identify this cause I've been disconnected from developing PHP applications for a long time. But in my time at MongoDB, I'd say maybe with the first seven or eight years of my time here, COVID kind of disrupted everything, but I was reasonably active in attending conferences in the community and watching the changes in the PHP ecosystem with the frameworks like Symfony and Laravel I think Laravel particularly. And some of these are kind of focused on region where you might say so Symfony is definitely like more active in, in Europe. Laravel I think if you look at like USB HP users and they may be there versus if they didn't catch on in the US quite the same way that Laravel did, I'm like, excuse me where the Symfony community, maybe didn't develop in\n\nat the same pace that laravel did it in the United States. The, if you go to these conferences, you'll see there's huge amounts of people excited about the language and actively people still giving testimonies that they like taught themselves programming, wrote their first application in one of these frameworks and our supporting their families, or transitioned from a non-tech job into those.\nSo you definitely still have people learning PHP. I'd say it doesn't have the same story that we get from thinking about Node JS where there's like these bootcamps that exists. I don't think you kind of have that same experience for PHP. But there's definitely still a lot of people learning PHP and then making careers out of it.\n\nAnd even in the shift of, in terms of the language maturity, you could say. Maybe it's a bit of a stereotype that you'd say PHP is a relic of the early nineties. And when people think about the older CMS platforms and maybe projects like a WordPress or Drupal which if we focused on the numbers are still in like using an incredible numbers in terms of the number of websites they power.\n\nBut it's also, I don't think people necessarily, they look at WordPress deployments and things like, oh, this is the they might look at that as a more data platform and that's a WordPress. It was more of a software that you deploy as well as a web framework. But like in terms of them supporting older PHP installations and things, and then looking at the newer frameworks where they can do cutting edge, like we're only going to support PHP.\n\nThe latest three-year releases of PHP, which is not a luxury that an established platform like WordPress or Drupal might have. But even if we consider Drupal or in the last, in the time I've been at MongoDB, they went from being a kind of a roll their own framework to redeveloping themselves on top of the Symphony framework and kind of modernizing their innards.\n\nAnd that brought a lot of that. We could say the siloed communities where someone might identify as a Drupal developer and just only work in the Drupal ecosystem. And then having that framework change now be developed upon a Symfony and had more interoperability with other web frameworks and PHP packages.\n\nSome of those only triple developers transitioned to becoming a kind of a Jack of all trades, PHP developer and more of a, kind of a well-balanced software engineer in that respect. 
And I think you'll find people in both camps, like you could certainly be incredibly successful, writing WordPress plugins.\n\nSo you could be incredibly successful writing pumping out websites for clients on web frameworks. The same way that you can join a full-time company that signs this entire platform is going to be built on a particular web framework.\n\nJesse: [00:26:28] Yeah, that's kind of a loaded question there. I don't think that PHP is a, is going to go anywhere. I think JavaScript gets a lot of publicity. But PHP has a strong foothold in the community and that's where I have some experience there with WordPress. That's kind of where I got introduced to PHP as well.\n\nBut PHP is, yeah, it's not going to go anywhere.\n\nJeremy: [00:26:49] I think from our perspective on the drivers, it's also, we get to look longingly at a lot of the new PHP versions that come out. So like right now they're working on kind of an API for async support, a lot of the new we have typing a lot more strictly type systems, which as a software engineer, you appreciate, you realize in terms of the flexibility of a scripting language, you don't want typing, but depending which way you're approaching it, as it says, it.\n\nWorking on the MongoDB driver, there's a lot of new features we want to use. And we're kind of limited in terms of we have customers that are still on earlier versions of PHP seven are definitely still some customers maybe on PHP five. So we have to do the dance in terms of when did we cut off support for older PHP versions or even older MongoDB versions?\n\nSo it's not quite as not quite the same struggle that maybe WordPress has to do with being able to be deployed everywhere. But I think when you're developing a project for your own company and you have full control of the tech stack, you can use the latest new features and like some new technology comes off.\n\nYou want to integrate it, you control your full tech stack. When you're writing a library, you kind of have to walk the balance of what is the lowest common denominator reasonably that we're going to support? Because we still have a user base. And so that's where the driver's team we make use of our, our product managers to kind of help us do that research.\n\nWe collect stats on Atlas users to find out what PHP versions they're using, what MongoDB versions are using as well. And so that gives us some kind of intelligence to say should we still be supporting this old PHP version while we have one, one or 2% of users? Is that, is that worth the amount of time or the sacrifice of features?\n\nThat we're not being able to take advantage of.\n\nJesse: [00:28:18] Sure. So I think you talked a little bit about this already, but what's on the roadmap? What's coming up?\n\nJeremy: [00:28:24] So Andreas and I are definitely looking forward when we have time to focus on just the PHP project development revisiting some of the BSON integration on coming up with better API is to not just benefit doctrine, but I'd say any library that integrates step provides Object mapper on top of the driver.\n\nFind something generally useful. There's also framework integrations that so I mentioned I alluded to Laravel previously. So for Laravel, as a framework is kind of a PDA around there or on that ships with the framework is based around relational databases. 
And so there is a MongoDB integration for laravel that's Kind of community developed and that kind of deals with the least common denominator problem over well, we can't take advantage of all the MongoDB features because we have to provide a consistent API with the relational ORM that ships with Laravel. And this is a similar challenge when in the past, within the drivers team or people outside, the other departments in MongoDB have said, oh, why don't we get WordPress working on them? MongoDB around, we get Drupal running on MongoDB, and it's not as easy as it seems, because if the entire platform assumes because... same thing has come up before with a very long time ago with the Django Python framework. It was like, oh, let's get Django running on MongoDB. And this was like 10 years ago. And I think it's certainly a challenge when the framework itself has you can't fight the inertia of the opinionated decisions of the framework.\n\nSo in Laravel's case they have this community supported MongoDB integration and it struggles with implementing a lot of MongoDB features that just kind of can't be shoehorned into that. And so that's a project that is no longer in the original developers' hands. It kind of as a team behind it, of people in the community that have varying levels of amount of time to focus on these features.\nSo that project is now in the hands of a team, not the original maintainer. And there, I think, I mean, they all have jobs. They all have other things that they're doing this in their spare time offer free. So is this something that we can provide some guidance on in the past, like we've chipped in on code reviews and try to answer some difficult questions about MongoDB.\n\nI think the direction they're going now is kind of, they want to remove features for next future version and kind of simplify things and get what they have really stable. But if that's something when, if we can build up our staff here and devote more time to, because. We look at our internal stats.\n\nWe definitely have a lot of MongoDB customers happen to be using Laravel with PHP or the Symfony framework. So I think a lot of our, given how many PHP users use things like Drupal and WordPress, we're not seeing them on MongoDB the same way that people using the raw frameworks and developing applications themselves might choose that in that case, they're in full control of what they deploy on.\nAnd when they choose to use MongoDB, we want to make sure that they have as. It may not be the first class because it's that can't be the same experience as the aura that ships with the framework. But I think it's definitely there's if we strategize and think about what are the features that we can support.\n\nBut that, and that's definitely gonna require us familiarizing ourselves with the framework, because I'd say that the longer we spend at MongoDB working on the driver directly. We become more disconnected from the time when we were application developers. And so we can approach this two ways.\n\nWe can devote our time to spending time running example applications and finding those pain points for ourselves. We can try and hire someone familiar with the library, which is like the benefit of when I was hired or Andreas was hired coming out of a PHP application job. And then you get to bring that experience and then it's a matter of time before they become disconnected over the next 10 years.\n\nEither, yeah. 
Either recruiting someone with the experience or spending time to experiment the framework and find out the pain points or interview users is another thing that our product managers do. And that'll give us some direction in terms of one of the things we want to focus on time permitting and where can we have the most impact to give our users a better experience? \n\nMichael: [00:31:59] Folks listening that want to give feedback. What's the best way to do that? Are are you involved in the forums, the community.MongoDB.com forums? \n\nJeremy: [00:32:09] so we do monitor those. I'd say a lot of the support questions there because the drivers team itself is just a few people for language versus the entirety of our community support team and the technical services department. So I'm not certainly not going on there every day to check for a new user questions.\n\nAnd to give credit to our community support team. Like they're able to answer a lot of the language questions themselves. That's something, and then they come, they escalate stuff to us. If there's, if there's a bigger question, just like our paids commercial support team does they feel so many things ourselves.\n\nAnd then maybe once or twice a month, we'll get like a language question come to us. And it's just we're, we're kind of stumped here. Can you explain what the driver's doing here? Tell us that this is a bug. But I would say the community forums is the best way to if you're posting there. The information will definitely reach us because certainly our product managers the people that are kind of a full-time focus to dealing with the community are going to see that first in terms of, for the drivers we're definitely active on JIRA and our various GitHub projects.\nAnd I think those are best used for reporting actual bugs instead of general support inquiries. I know like some other open source projects, they'll use GitHub to track like ideas and then the whole not just bug reports and things like that. In our case, we kind of, for our make best use of our time, we kind of silo okay, we want to keep JIRA and GitHub for bugs and the customer support issues.\n\nIf there is open discussion to have, we have these community forums and that helps us efficiently kind of keep the information in the, in the best forum, no pun intended to discuss it. \n\nMichael: [00:33:32] Yeah, this has been a great discussion. Thank you so much for sharing all of the details about the PHP driver and the extension. Is there anything we missed? Anything you want to make sure that the listeners know about the about PHP and Mongo DB? \n\nJeremy: [00:33:46] I guess an encouragement to share a feedback and if there are, if there are pain points, we definitely like we're I definitely say like other like over languages have more vocal people. And so it's always unsure. It's do we just have not had people talking to us or is it a matter of or the users don't think that they should be raising the concerns, so just reiterate and encourage people to share the feedback?\n\nJesse: [00:34:10] Or there's no concerns.\nJeremy: [00:34:12] Yeah. Or maybe they're actually in terms of our, our bug reports are like very, we have very few bug reports relatively compared to some other drivers. \n\nMichael: [00:34:19] That's a good thing. Yeah.\n\nAwesome. Jeremy, thank you so much once again, truly appreciate your time, Jesse. Thanks for for helping out with the interview. \n\nJesse: [00:34:28] Thanks for having me.\n\nJeremy: [00:34:29] Great talking to you guys. 
Thanks.\n\nWe hope you enjoyed this podcast episode. If you're interested in learning more about the PHP Driver, please visit our [documentation page, and the GitHub Repository. I would also encourage you to visit our forums, where we have a category specifically for PHP.\n\nI would also encourage you to check out the PHP Quickstart articles I wrote recently on our Developer Hub. Feedback is always welcome!", "format": "md", "metadata": {"tags": ["PHP"], "pageDescription": "Jeremy Mikola is a Senior Software Engineer at MongoDB and helps maintain the MongoDB PHP Driver and Extension. In this episode of the podcast, Jesse Hall and Michael Lynn sit down with Jeremy to talk about the PHP Driver and some of the history of PHP and MongoDB.", "contentType": "Podcast"}, "title": "Exploring the PHP Driver with Jeremy Mikola - Podcast Episode", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-search-demo-restaurant-app", "action": "created", "body": "# Atlas Search from Soup to Nuts: The Restaurant Finder Demo App\n\n Hey! Have you heard about the trendy, new restaurant in Manhattan named Karma? No need to order off the menu. You just get what you deserve. \ud83d\ude0b \ud83e\udd23 And with MongoDB Atlas Search, you also get exactly what you deserve by using a modern application development platform. You get the lightning-fast, relevance-based search capabilities that come with Apache Lucene on top of the developer productivity, resilience, and scale of MongoDB Atlas. Apache Lucene is the world\u2019s most popular search engine library. Now together with Atlas, making sophisticated, fine-grained search queries is a piece of cake. \n\nIn this video tutorial, I am going to show you how to build out Atlas Search queries quickly with our Atlas Search Restaurant Finder demo application, which you will find at www.atlassearchrestaurants.com. This app search demo is based on a partially mocked dataset of over 25,000 restaurants in the New York City area. In it, you can search for restaurants based on a wide variety of search criteria, such as name, menu items, location, and cuisine. \n\nThis sample search app serves up all the Atlas Search features and also gives away the recipe by providing live code examples. As you interact with the What\u2019s Cooking Restaurant Finder, see how your search parameters are blended together with varying operators within the $search stage of a MongoDB aggregation pipeline. Like combining the freshest ingredients for your favorite dish, Atlas Search lets you easily mix simple searches together using the compound operator.\n\nI named this application \u201cWhat\u2019s Cooking,\u201d but I should have called it \u201cThe Kitchen Sink\u201d because it offers a smorgasbord of so many popular Atlas Search features:\n\n* Fuzzy Search - to tolerate typos and misspellings. Desert, anyone? \n* Autocomplete - to search-as-you-type\n* Highlighting - to extract document snippets that display search terms in their original context \n* Geospatial search - to search within a location\u2019s radius or shape\n* Synonyms - Wanna Coke or a Pop? Search for either one with defined synonyms\n* Custom Scoring - that extra added flavor to modify search results rankings or to boost promoted content \n* Facets and Counts - slice and dice your returned data into different categories\n\nLooking for some killer New York pizza within a few blocks of MongoDB\u2019s New York office in Midtown? 
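A query like that is exactly what the compound operator is for. As a purely illustrative sketch (the collection and field names here - `restaurants`, `name`, `menu`, `location` - are assumptions, not the demo's exact schema; the app itself shows you the real pipelines it runs), a text clause and a geoWithin filter can be combined in a single $search stage like this:\n\n``` javascript\n// Illustrative only: combine a text search for \"pizza\" with a geo filter around Midtown.\ndb.restaurants.aggregate([\n {\n $search: {\n compound: {\n must: [\n { text: { query: \"pizza\", path: [\"name\", \"menu\"] } }\n ],\n filter: [\n {\n geoWithin: {\n circle: {\n center: { type: \"Point\", coordinates: [-73.986, 40.757] },\n radius: 500 // meters\n },\n path: \"location\"\n }\n }\n ]\n }\n }\n },\n { $limit: 10 }\n])\n```\n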
How about some savory search synonyms! Any special restaurants with promotions? We have search capabilities for every appetite. And to kick it up a notch, we let you sample the speed of fast, native Lucene facets and counts - currently in public preview. \n\nFeast your eyes! \n\n>\n\n>\n>Atlas Search queries are built using $search in a MongoDB aggregation pipeline\n>\n\nNotice that even though searches are based in Lucene, Atlas Search queries look like any other aggregation stage, easily integrated into whatever programming language without any extra transformation code. No more half-baked context switching as needed with any other stand-alone search engine. This boils down to an instant productivity boost!\n\nWhat is in our secret sauce, you ask? We have embedded an Apache Lucene search engine alongside your Atlas database. This synchronizes your data between the database and search index automatically. This also takes off your plate the operational burden and additional cost of setting-up, maintaining, and scaling a separate search platform. Now not only is your data architecture simplified, but also your developer workload, as now developers can work with a single API. Simply stated, you can now have champagne taste on a beer budget. \ud83e\udd42\ud83c\udf7e\n\nIf this application has whet your appetite for development, the code for the What\u2019s Cooking Restaurant Finder application can be found here: (\nhttps://github.com/mongodb-developer/whatscooking)\nThis repo has everything from soup to nuts to recreate the What\u2019s Cooking Restaurant Finder:\n\n* React and Tailwind CSS on the front-end\n* The code for the backend APIs\n* The whatscooking.restaurants dataset mongodb+srv://mongodb:atlassearch@shareddemoapps.dezdl.mongodb.net/whatscooking\n\nThis recipe - like all recipes - is merely a starting point. Like any chef, experiment. Use different operators in different combinations to see how they affect your scores and search results. Try different things to suit your tastes. We\u2019ll even let you eat for free - forever! Most of these Atlas Search capabilities are available on free clusters in Atlas. \n\nBon appetit and happy coding!", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "In this video tutorial, I'm going to show you how to build out Atlas Search queries with our Atlas Search Restaurant Finder demo application.", "contentType": "Article"}, "title": "Atlas Search from Soup to Nuts: The Restaurant Finder Demo App", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-data-lake-online-archive", "action": "created", "body": "# How to Archive Data to Cloud Object Storage with MongoDB Online Archive\n\nMongoDB Atlas Online Archive is a new feature of the MongoDB Cloud Data Platform. It allows you to set a rule to automatically archive data off of your Atlas cluster to fully-managed cloud object storage. 
In this blog post, I'll demonstrate how you can use Online Archive to tier your data for a cost-effective data management strategy.\n\nThe MongoDB Cloud data platform with Atlas Data Federation provides a serverless and scalable Federated Database Instance which allows you to natively query your data across cloud object storage and MongoDB Atlas clusters in-place.\n\nIn this blog post, I will use one of the MongoDB Open Data COVID-19 time series collections to demonstrate how you can combine Online Archive and Atlas Data Federation to save on storage costs while retaining easy access to query all of your data.\n\n## Prerequisites\n\nFor this tutorial, you will need:\n\n- a MongoDB Atlas M10 cluster or higher, as Online Archive is currently not available for the shared tiers,\n- MongoDB Compass or MongoDB Shell to access your cluster.\n\n## Let's get some data\n\nTo begin with, let's retrieve a time series collection. For this tutorial, I will use one of the time series collections that I built for the MongoDB Open Data COVID19 project.\n\nThe `covid19.global_and_us` collection is the most complete COVID-19 time series in our open data cluster as it combines all the data that JHU keeps in separate CSV files.\n\nAs I would like to retrieve the entire collection and its indexes, I will use `mongodump`.\n\n``` shell\nmongodump --uri=\"mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19\" --collection='global_and_us'\n```\n\nThis will create a `dump` folder in your current directory. Let's now import this collection into our cluster, replacing the placeholders with your own user, password, and cluster URI:\n\n``` shell\nmongorestore --uri=\"mongodb+srv://<USERNAME>:<PASSWORD>@<YOUR-CLUSTER-URI>\"\n```\n\n>\n>Note here that the **date** field is an IsoDate in extended JSON relaxed notation.\n>\n>\n\nThis time series collection is fairly simple. For each day and each place, we have a measurement of the number of `confirmed`, `deaths` and `recovered` if it's available. More details in our documentation.\n\n## What's the problem?\n\nProblem is, it's a time series! So each day, we add a new entry for each place in the world and our collection will get bigger and bigger every single day. But as time goes on, it's likely that the older data is less important and less frequently accessed, so we could benefit from archiving it off of our Atlas cluster.\n\nToday, July 10th 2020, this collection contains 599760 documents, which corresponds to 3528 places times 170 days, and it's only 181.5 MB thanks to the WiredTiger compression algorithm.\n\nWhile this would not really be an issue with this trivial example, it will definitely force you to upgrade your MongoDB Atlas cluster to a higher tier if an extra GB of data were going into your cluster each day.\n\nUpgrading to a higher tier would cost more money and maybe you don't need to keep all this cold data in your cluster.\n\n## Online Archive to the Rescue!\n\nManually archiving a subset of this dataset is tedious. I actually wrote a blog post about this.\n\nIt works, but you will need to extract and remove the documents from your MongoDB Atlas cluster yourself and then use the new $out operator or the s3.PutObject MongoDB Realm function to write your documents to cloud object storage - Amazon S3 or Microsoft Azure Blob Storage.\n\nLucky for you, MongoDB Atlas Online Archive does this for you automatically!\n\nLet's head to MongoDB Atlas and click on our cluster to access our cluster details. 
Currently, Online Archive is not set up on this cluster.\n\nNow let's click on **Online Archive** then **Configure Online Archive**.\n\nThe next page will give you some information and documentation about MongoDB Atlas Online Archive and in the next step you will have to configure your archiving rule.\n\nIn our case, it will look like this:\n\nAs you can see, I'm using the **date** field I mentioned above and if this document is more than 60 days old, it will be automatically moved to my cloud object storage for me.\n\nNow, for the next step, I need to think about my access pattern. Currently, I'm using this dataset to create awesome COVID-19 charts.\n\nAnd each time, I have to first filter by date to reduce the size of my chart and then optionally I filter by country then state if I want to zoom on a particular country or region.\n\nAs these fields will convert into folder names into my cloud object storage, they need to exist in all the documents. It's not the case for the field \"state\" because some countries don't have sub-divisions in this dataset.\n\nAs the date is always my first filter, I make sure it's at the top. Folders will be named and organised this way in my cloud object storage and folders that don't need to be explored will be eliminated automatically to speed up the data retrieval process.\n\nFinally, before starting the archiving process, there is a final step: making sure Online Archive can efficiently find and remove the documents that need to be archived.\n\nI already have a few indexes on this collection, let's see if this is really needed. Here are the current indexes:\n\nAs we can see, I don't have the recommended index. I have its opposite: `{country: 1, date: 1}` but they are **not** equivalent. Let's see how this query behaves in MongoDB Compass.\n\nWe can note several things in here:\n\n- We are using the **date** index. Which is a good news, at least it's not a collection scan!\n- The final sort operation is `{ date: 1, country: 1}`\n- Our index `{date:1}` doesn't contain the information about country so an in-memory sort is required.\n- Wait a minute... Why do I have 0 documents returned?!\n\nI have 170 days of data. I'm filtering all the documents older than 60 days so I should match `3528 places * 111 days = 391608` documents.\n\n>\n>\n>111 days (not 170-60=110) because we are July 10th when I'm writing this\n>and I don't have today's data yet.\n>\n>\n\nWhen I check the raw json output in Compass, I actually see that an\nerror has occurred.\n\nSadly, it's trimmed. Let's run this again in the new\nmongosh to see the complete\nerror:\n\n``` none\nerrorMessage: 'Exec error resulting in state FAILURE :: caused by :: Sort operation used more than the maximum 33554432 bytes of RAM. Add an index, or specify a smaller limit.'\n```\n\nI ran out of RAM...oops! I have a few other collections in my cluster\nand the 2GB of RAM of my M10 cluster are almost maxed out.\n\nIn-memory\nsorts\nactually use a lot of RAM and if you can avoid these, I would definitely\nrecommend that you get rid of them. They are forcing some data from your\nworking set out of your cache and that will result in cache pressure and\nmore IOPS.\n\nLet's create the recommended index and see how the situation improves:\n\n``` javascript\ndb.global_and_us.createIndex({ date: 1, country: 1})\n```\n\nLet's run our query again in the Compass explain plan:\n\nThis time, in-memory sort is no longer used, as we can return documents\nin the same order they appear in our index. 
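If you are following along in mongosh rather than Compass, you can run the same check with an explain plan; the 60-day cutoff below simply mirrors the archiving rule we configured:\n\n``` javascript\n// Verify that the { date: 1, country: 1 } index is used and no in-memory sort is needed.\ndb.global_and_us.find(\n { date: { $lt: new Date(Date.now() - 60 * 24 * 60 * 60 * 1000) } }\n).sort(\n { date: 1, country: 1 }\n).explain(\"executionStats\")\n```\n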
391608 documents are\nreturned and we are using the correct index. This query is **MUCH** more\nmemory efficient than the previous one.\n\nNow that our index is created, we can finally start the archiving\nprocess.\n\nJust before we start our archiving process, let's run an aggregation\npipeline in MongoDB Compass to check the content of our collection.\n\n``` javascript\n\n {\n '$sort': {\n 'date': 1\n }\n }, {\n '$group': {\n '_id': {\n 'country': '$country',\n 'state': '$state',\n 'county': '$county'\n },\n 'count': {\n '$sum': 1\n },\n 'first_date': {\n '$first': '$date'\n },\n 'last_date': {\n '$last': '$date'\n }\n }\n }, {\n '$count': 'number_places'\n }\n]\n```\n\n![Aggregation pipeline in MongoDB Compass\n\nAs you can see, by grouping the documents by country, state and county,\nwe can see:\n\n- how many days are reported: `170`,\n- the first date: `2020-01-22T00:00:00.000+00:00`,\n- the last date: `2020-07-09T00:00:00.000+00:00`,\n- the number of places being monitored: `3528`.\n\nOnce started, your Online Archive will look like this:\n\nWhen the initialisation is done, it will look like this:\n\nAfter some times, all your documents will be migrated in the underlying\ncloud object storage.\n\nIn my case, as I had 599760 in my collection and 111 days have been\nmoved to my cloud object storage, I have `599760 - 111 * 3528 = 208152`\ndocuments left in my collection in MongoDB Atlas.\n\n``` none\nPRIMARY> db.global_and_us.count()\n208152\n```\n\nGood. Our data is now archived and we don't need to upgrade our cluster\nto a higher cluster tier!\n\n## How to access my archived data?\n\nUsually, archiving data rhymes with \"bye bye data\". The minute you\ndecide to archive it, it's gone forever and you just take it out of the\nold dusty storage system when the actual production system just burnt to\nthe ground.\n\nLet me show you how you can keep access to the **ENTIRE** dataset we\njust archived on my cloud object storage using MongoDB Atlas Data\nFederation.\n\nFirst, let's click on the **CONNECT** button. Either directly in the\nOnline Archive tab:\n\nOr head to the Data Federation menu on the left to find your automatically\nconfigured Data Lake environment.\n\nRetrieve the connection command line for the Mongo Shell:\n\nMake sure you replace the database and the password in the command. 
Once\nyou are connected, you can run the following aggregation pipeline:\n\n``` javascript\n\n {\n '$match': {\n 'country': 'France'\n }\n }, {\n '$sort': {\n 'date': 1\n }\n }, {\n '$group': {\n '_id': '$uid',\n 'first_date': {\n '$first': '$date'\n },\n 'last_date': {\n '$last': '$date'\n },\n 'count': {\n '$sum': 1\n }\n }\n }\n]\n```\n\nAnd here is the same query in command line - easier for a quick copy &\npaste.\n\n``` shell\ndb.global_and_us.aggregate([ { '$match': { 'country': 'France' } }, { '$sort': { 'date': 1 } }, { '$group': { '_id': { 'country': '$country', 'state': '$state', 'county': '$county' }, 'first_date': { '$first': '$date' }, 'last_date': { '$last': '$date' }, 'count': { '$sum': 1 } } } ])\n```\n\nHere is the result I get:\n\n``` json\n{ \"_id\" : { \"country\" : \"France\", \"state\" : \"Reunion\" }, \"first_date\" : ISODate(\"2020-01-22T00:00:00Z\"), \"last_date\" : ISODate(\"2020-07-09T00:00:00Z\"), \"count\" : 170 }\n{ \"_id\" : { \"country\" : \"France\", \"state\" : \"Saint Barthelemy\" }, \"first_date\" : ISODate(\"2020-01-22T00:00:00Z\"), \"last_date\" : ISODate(\"2020-07-09T00:00:00Z\"), \"count\" : 170 }\n{ \"_id\" : { \"country\" : \"France\", \"state\" : \"Martinique\" }, \"first_date\" : ISODate(\"2020-01-22T00:00:00Z\"), \"last_date\" : ISODate(\"2020-07-09T00:00:00Z\"), \"count\" : 170 }\n{ \"_id\" : { \"country\" : \"France\", \"state\" : \"Mayotte\" }, \"first_date\" : ISODate(\"2020-01-22T00:00:00Z\"), \"last_date\" : ISODate(\"2020-07-09T00:00:00Z\"), \"count\" : 170 }\n{ \"_id\" : { \"country\" : \"France\", \"state\" : \"French Guiana\" }, \"first_date\" : ISODate(\"2020-01-22T00:00:00Z\"), \"last_date\" : ISODate(\"2020-07-09T00:00:00Z\"), \"count\" : 170 }\n{ \"_id\" : { \"country\" : \"France\", \"state\" : \"Guadeloupe\" }, \"first_date\" : ISODate(\"2020-01-22T00:00:00Z\"), \"last_date\" : ISODate(\"2020-07-09T00:00:00Z\"), \"count\" : 170 }\n{ \"_id\" : { \"country\" : \"France\", \"state\" : \"New Caledonia\" }, \"first_date\" : ISODate(\"2020-01-22T00:00:00Z\"), \"last_date\" : ISODate(\"2020-07-09T00:00:00Z\"), \"count\" : 170 }\n{ \"_id\" : { \"country\" : \"France\", \"state\" : \"St Martin\" }, \"first_date\" : ISODate(\"2020-01-22T00:00:00Z\"), \"last_date\" : ISODate(\"2020-07-09T00:00:00Z\"), \"count\" : 170 }\n{ \"_id\" : { \"country\" : \"France\" }, \"first_date\" : ISODate(\"2020-01-22T00:00:00Z\"), \"last_date\" : ISODate(\"2020-07-09T00:00:00Z\"), \"count\" : 170 }\n{ \"_id\" : { \"country\" : \"France\", \"state\" : \"Saint Pierre and Miquelon\" }, \"first_date\" : ISODate(\"2020-01-22T00:00:00Z\"), \"last_date\" : ISODate(\"2020-07-09T00:00:00Z\"), \"count\" : 170 }\n{ \"_id\" : { \"country\" : \"France\", \"state\" : \"French Polynesia\" }, \"first_date\" : ISODate(\"2020-01-22T00:00:00Z\"), \"last_date\" : ISODate(\"2020-07-09T00:00:00Z\"), \"count\" : 170 }\n```\n\nAs you can see, even if our cold data is archived, we can still access\nour **ENTIRE** dataset even though it was partially archived. 
The first\ndate is still January 22nd and the last date is still July 9th for a\ntotal of 170 days.\n\n## Wrap Up\n\nMongoDB Atlas Online Archive is your new best friend to retire and store\nyour cold data safely in cloud object storage with just a few clicks.\n\nIn this tutorial, I showed you how to set up an Online Archive to automatically archive your data to fully-managed cloud object storage while retaining easy access to query the entirety of the dataset in-place, across sources, using Atlas Data Federation.\n\nJust in case this blog post didn't make it clear, Online Archive is\n**NOT** a replacement for backups or a [backup\nstrategy. These are 2\ncompletely different topics and they should not be confused.\n\nIf you have questions, please head to our developer community\nwebsite where the MongoDB engineers and\nthe MongoDB community will help you build your next big idea with\nMongoDB.\n\nTo learn more about MongoDB Atlas Data\nFederation, read the other blogs\nposts in this series below, or check out the\ndocumentation.\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "Automatically tier your data across Atlas clusters and cloud object storage while retaining access to query it all with Atlas Data Federation.", "contentType": "Tutorial"}, "title": "How to Archive Data to Cloud Object Storage with MongoDB Online Archive", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/connectors/go-kafka-confluent-atlas", "action": "created", "body": "# Go to MongoDB Using Kafka Connectors - Ultimate Agent Guide\n\nGo is a modern language built on typed and native code compiling concepts while feeling and utilizing some benefits of dynamic languages. It is fairly simple to install and use, as it provides readable and robust code for many application use cases.\n\nOne of those use cases is building agents that report to a centralized data platform via streaming. A widely accepted approach is to communicate the agent data through subscription of distributed queues like Kafka. The Kafka topics can then propagate the data to many different sources, such as a MongoDB Atlas cluster. \n\nHaving a Go agent allows us to utilize the same code base for various operating systems, and the fact that it has good integration with JSON data and packages such as a MongoDB driver and Confluent Go Kafka Client makes it a compelling candidate for the presented use case.\n\nThis article will demo how file size data on a host is monitored from a cross-platform agent written in Golang via a Kafka cluster using a Confluent hosted sink connector to MongoDB Atlas. MongoDB Atlas stores the data in a time series collection. The MongoDB Charts product is a convenient way to show the gathered data to the user.\n\n## Preparing the Golang project, Kafka cluster, and MongoDB Atlas\n\n### Configuring a Go project\n\nOur agent is going to run Go. Therefore, you will need to install the Go language software on your host.\n\nOnce this step is done, we will create a Go module to begin our project in our working directory:\n``` shell\ngo mod init example/main\n```\nNow we will need to add the Confluent Kafka dependency to our Golang project:\n``` shell\ngo get -u gopkg.in/confluentinc/confluent-kafka-go.v1/kafka\n```\n\n### Configuring a Kafka cluster \nCreating a Confluent Kafka Cluster is done via the Confluent UI. Start by creating a basic Kafka cluster in the Confluent Cloud. 
Once ready, create a topic to be used in the Kafka cluster. I created one named \u201cfiles.\u201d \n\nGenerate an api-key and api-secret to interact with this Kafka cluster. For the simplicity of this tutorial, I have selected the \u201cGlobal Access\u201d api-key. For production, it is recommended to give as minimum permissions as possible for the api-key used. Get a hold of the generated keys for future use.\n\nObtain the Kafka cluster connection string via Cluster Overview > Cluster Settings > Identification > Bootstrap server for future use. Basic clusters are open to the internet and in production, you will need to amend the access list for your specific hosts to connect to your cluster via advanced cluster ACLs.\n\n> **Important:** The Confluent connector requires that the Kafka cluster and the Atlas cluster are deployed in the same region.\n>\n### Configuring Atlas project and cluster\nCreate a project and cluster or use an existing Atlas cluster in your project. \n\nSince we are using a time series collection, the clusters must use a 5.0+ version. Prepare your Atlas cluster for a Confluent sink Atlas connection. Inside your project\u2019s access list, enable user and relevant IP addresses of your connector IPs. The access list IPs should be associated to the Atlas Sink Connector, which we will configure in a following section. Finally, get a hold of the Atlas connection string and the main cluster DNS. For more information about best securing and getting the relevant IPs from your Confluent connector, please read the following article: MongoDB Atlas Sink Connector for Confluent Cloud.\n\n## Adding agent main logic\nNow that we have our Kafka cluster and Atlas clusters created and prepared, we can initialize our agent code by building a small main file that will monitor my `./files` directory and capture the file names and sizes. I\u2019ve added a file called `test.txt` with some data in it to bring it to ~200MB.\n\nLet\u2019s create a file named `main.go` and write a small logic that performs a constant loop with a 1 min sleep to walk through the files in the `files` folder:\n``` go\npackage main\n\nimport (\n \"fmt\"\n \"encoding/json\"\n \"time\"\n \"os\"\n \"path/filepath\"\n)\n\ntype Message struct {\n Name string\n Size float64\n Time int64\n}\n\nfunc samplePath (startPath string) error {\n err := filepath.Walk(startPath,\n func(path string, info os.FileInfo, err error) error {\n \n var bytes int64\n bytes = info.Size()\n\n var kilobytes int64\n kilobytes = (bytes / 1024)\n\n var megabytes float64\n megabytes = (float64)(kilobytes / 1024) // cast to type float64\n\n var gigabytes float64\n gigabytes = (megabytes / 1024)\n\n now := time.Now().Unix()*1000\n\n m := Message{info.Name(), gigabytes, now}\n value, err := json.Marshal(m)\n\n \n if err != nil {\n panic(fmt.Sprintf(\"Failed to parse JSON: %s\", err))\n }\n\n fmt.Printf(\"value: %v\\n\", string(value))\n return nil;\n })\n if err != nil {\n return err\n }\n return nil;\n}\n\nfunc main() {\n for {\n err := samplePath(\"./files\");\n if err != nil {\n panic(fmt.Sprintf(\"Failed to run sample : %s\", err))\n }\n time.Sleep(time.Minute)\n }\n\n}\n```\nThe above code simply imports helper modules to traverse the directories and for JSON documents out of the files found. \n\nSince we need the data to be marked with the time of the sample, it is a great fit for time series data and therefore should eventually be stored in a time series collection on Atlas. 
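For reference, the collection this data will eventually land in, `hostMonitor.files`, is a time series collection keyed on the `Time` and `Name` fields. You won't need to create it by hand - the sink connector we configure later can create it from its time series settings - but a rough mongosh equivalent is sketched below. Note that the time field of a time series collection must hold BSON dates, while the agent emits `Time` as epoch milliseconds, so the connector's time series options are expected to handle that conversion on the way in.\n\n``` javascript\n// Rough sketch of the time series collection the sink connector will create.\ndb.getSiblingDB(\"hostMonitor\").createCollection(\"files\", {\n timeseries: {\n timeField: \"Time\", // must be a BSON date in the stored documents\n metaField: \"Name\", // one series per monitored file\n granularity: \"minutes\" // the agent samples once per minute\n }\n})\n```\n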
If you want to learn more about time series collections and data, please read our article, MongoDB Time Series Data.\n\nWe can test this agent by running the following command:\n``` shell\ngo run main.go\n```\nThe agent will produce JSON documents similar to the following format:\n``` shell\nvalue: {\"Name\":\"files\",\"Size\":0,\"Time\":1643881924000}\nvalue: {\"Name\":\"test.txt\",\"Size\":0.185546875,\"Time\":1643881924000}\n```\n## Creating a Confluent MongoDB connector for Kafka\nNow we are going to create a Kafka Sink connector to write the data coming into the “files” topic to our Atlas Cluster’s time series collection.\n\nConfluent Cloud has a very popular integration running MongoDB’s Kafka connector as a hosted solution integrated with their Kafka clusters. Follow these steps to initiate a connector deployment.\n\nThe following are the inputs provided to the connector:\n\nOnce you set it up, following the guide, you will eventually have a similar launch summary page:\n\nAfter provisioning, every document produced into the `files` topic will be pushed to a time series collection, `hostMonitor.files`, where the date field is `Time` and the metadata field is `Name`.\n## Pushing data to Kafka\nNow let’s edit the `main.go` file to use a Kafka client and push each file measurement into the “files” topic.\n\nAdd the client library as an imported module:\n``` go\nimport (\n \"fmt\"\n \"encoding/json\"\n \"time\"\n \"os\"\n \"path/filepath\"\n \"github.com/confluentinc/confluent-kafka-go/kafka\"\n)\n```\nAdd the Confluent cloud credentials and cluster DNS information. Replace `<BOOTSTRAP-SERVER>:<PORT>` with the bootstrap server found on the Kafka Cluster details page, and `<API-KEY>`, `<API-SECRET>` with the values generated in the Kafka Cluster:\n``` go\nconst (\n bootstrapServers = \"<BOOTSTRAP-SERVER>:<PORT>\"\n ccloudAPIKey = \"<API-KEY>\"\n ccloudAPISecret = \"<API-SECRET>\"\n)\n```\nThe following code will initiate the producer and produce a message out of the marshaled JSON document: \n``` go\n topic := \"files\"\n // Produce a new record to the topic...\n producer, err := kafka.NewProducer(&kafka.ConfigMap{\n \"bootstrap.servers\": bootstrapServers,\n \"sasl.mechanisms\": \"PLAIN\",\n \"security.protocol\": \"SASL_SSL\",\n \"sasl.username\": ccloudAPIKey,\n \"sasl.password\": ccloudAPISecret})\n\n if err != nil {\n panic(fmt.Sprintf(\"Failed to create producer: %s\", err))\n }\n\n producer.Produce(&kafka.Message{\n TopicPartition: kafka.TopicPartition{Topic: &topic,\n Partition: kafka.PartitionAny},\n Value: []byte(value)}, nil)\n\n // Wait for delivery report\n e := <-producer.Events()\n\n message := e.(*kafka.Message)\n if message.TopicPartition.Error != nil {\n fmt.Printf(\"failed to deliver message: %v\\n\",\n message.TopicPartition)\n } else {\n fmt.Printf(\"delivered to topic %s [%d] at offset %v\\n\",\n *message.TopicPartition.Topic,\n message.TopicPartition.Partition,\n message.TopicPartition.Offset)\n }\n\n producer.Close()\n ```\n\nThe entire `main.go` file will look as follows:\n``` go \npackage main\n\nimport (\n \"fmt\"\n \"encoding/json\"\n \"time\"\n \"os\"\n \"path/filepath\"\n \"github.com/confluentinc/confluent-kafka-go/kafka\")\n\ntype Message struct {\n Name string\n Size float64\n Time int64\n}\n\nconst (\n bootstrapServers = \"<BOOTSTRAP-SERVER>:<PORT>\"\n ccloudAPIKey = \"<API-KEY>\"\n ccloudAPISecret = \"<API-SECRET>\"\n)\n\nfunc samplePath (startPath string) error {\n \n err := filepath.Walk(startPath,\n func(path string, info os.FileInfo, err error) error {\n if err != nil {\n return err\n }\n fmt.Println(path, info.Size())\n\n var bytes int64\n bytes = info.Size()\n\n var kilobytes int64\n
kilobytes = (bytes / 1024)\n\n var megabytes float64\n megabytes = (float64)(kilobytes / 1024) // cast to type float64\n\n var gigabytes float64\n gigabytes = (megabytes / 1024)\n\n now := time.Now().Unix()*1000\n\n \n\n m := Message{info.Name(), gigabytes, now}\n value, err := json.Marshal(m)\n\n \n if err != nil {\n panic(fmt.Sprintf(\"Failed to parse JSON: %s\", err))\n }\n\n fmt.Printf(\"value: %v\\n\", string(value))\n\n topic := \"files\"\n // Produce a new record to the topic...\n producer, err := kafka.NewProducer(&kafka.ConfigMap{\n \"bootstrap.servers\": bootstrapServers,\n \"sasl.mechanisms\": \"PLAIN\",\n \"security.protocol\": \"SASL_SSL\",\n \"sasl.username\": ccloudAPIKey,\n \"sasl.password\": ccloudAPISecret})\n \n if err != nil {\n panic(fmt.Sprintf(\"Failed to create producer: %s\", err))\n }\n \n producer.Produce(&kafka.Message{\n TopicPartition: kafka.TopicPartition{Topic: &topic,\n Partition: kafka.PartitionAny},\n Value: []byte(value)}, nil)\n \n // Wait for delivery report\n e := <-producer.Events()\n \n message := e.(*kafka.Message)\n if message.TopicPartition.Error != nil {\n fmt.Printf(\"failed to deliver message: %v\\n\",\n message.TopicPartition)\n } else {\n fmt.Printf(\"delivered to topic %s [%d] at offset %v\\n\",\n *message.TopicPartition.Topic,\n message.TopicPartition.Partition,\n message.TopicPartition.Offset)\n }\n \n producer.Close()\n\n return nil;\n})\nif err != nil {\n return err\n}\n return nil;\n}\n\nfunc main() {\n for {\n err := samplePath(\"./files\");\n if err != nil {\n panic(fmt.Sprintf(\"Failed to run sample : %s\", err))\n }\n time.Sleep(time.Minute)\n \n }\n\n}\n```\nNow when we run the agent while the Confluent Atlas sink connector is fully provisioned, we will see messages produced into the `hostMonitor.files` time series collection:\n![Atlas Data\n## Analyzing the data using MongoDB Charts\nTo put our data into use, we can create some beautiful charts on top of the time series data. In a line graph, we configure the X axis to use the Time field, the Y axis to use the Size field, and the series to use the Name field. The following graph shows the colored lines represented as the evolution of the different file sizes over time.\n\nNow we have an agent and a fully functioning Charts dashboard to analyze growing files trends. This architecture allows big room for extensibility as the Go agent can have further functionalities, more subscribers can consume the monitored data and act upon it, and finally, MongoDB Atlas and Charts can be used by various applications and embedded to different platforms.\n\n## Wrap Up\nBuilding Go applications is simple yet has big benefits in terms of performance, cross platform code, and a large number of supported libraries and clients. Adding MongoDB Atlas via a Confluent Cloud Kafka service makes the implementation a robust and extensible stack, streaming data and efficiently storing and presenting it to the end user via Charts.\n\nIn this tutorial, we have covered all the basics you need to know in order to start using Go, Kafka, and MongoDB Atlas in your next streaming projects.\n\nTry MongoDB Atlas and Go today!", "format": "md", "metadata": {"tags": ["Connectors", "Go", "Kafka"], "pageDescription": "Go is a cross-platform language. When combined with the power of Confluent Kafka streaming to MongoDB Atlas, you\u2019ll be able to form tools and applications with real-time data streaming and analytics. 
Here\u2019s a step-by-step tutorial to get you started.", "contentType": "Tutorial"}, "title": "Go to MongoDB Using Kafka Connectors - Ultimate Agent Guide", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/announcing-realm-kotlin-beta", "action": "created", "body": "# Announcing the Realm Kotlin Beta: A Database for Multiplatform Apps\n\nThe Realm team is happy to announce the beta release of our Realm Kotlin SDK\u2014with support for both Kotlin for Android and Kotlin Multiplatform apps. With this release, you can deploy and maintain a data layer across iOS, Android, and desktop platforms from a single Kotlin codebase.\n\n:youtube]{vid=6vL5h8pbt5g}\n\nRealm is a super fast local data store that makes storing, querying, and syncing data simple for modern applications. With Realm, working with your data is as simple as interacting with objects from your data model. Any updates to the underlying data store will automatically update your objects, enabling you to refresh the UI with first-class support for Kotlin\u2019s programming primitives such as Coroutines, Channels, and Flows. \n\n## Introduction\n\nOur goal with Realm has always been to provide developers with the tools they need to easily build data-driven, reactive mobile applications. Back in 2014, this meant providing Android developers with a first-class Java SDK. But the Android community is changing. With the growing importance of Kotlin, our team had two options: refactor the existing Realm Java SDK to make it Kotlin-friendly or use this as an opportunity to build a new SDK from the ground up that is specifically tailored to the needs of the Kotlin community. After collecting seven years of feedback from Android developers, we chose to build a new Kotlin-first SDK that pursued the following directives:\n\n* Expose Realm APIs that are idiomatic and directly integrate with Kotlin design patterns such as Coroutines and Flows, eliminating the need to write glue code between the data layer and the UI.\n* Remove Realm Java\u2019s thread confinement for Realms and instead emit data objects as immutable structs, conforming to the prevalent design pattern on Android and other platforms.\n* Expose a single Realm singleton instance that integrates into Android\u2019s lifecycle management automatically, removing the custom code needed to spin up and tear down Realm instances on a per-activity or fragment basis.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. [Get started now by build: Deploy Sample for Free!\n\n## What is Realm?\n\nRealm is a fast, easy-to-use alternative to SQLite + Room with built-in cloud capabilities, including a real-time edge-to-cloud sync solution. Written from the ground up in C++, it is not a wrapper around SQLite or any other relational data store. Designed with the mobile environment in mind, it is lightweight and optimizes for constraints like compute, memory, bandwidth, and battery that do not exist on the server side. Realm uses lazy loading and memory mapping with each object reference pointing directly to the location on disk where the state is stored. This exponentially increases lookup and query speed as it eliminates the loading of state pages of disk space into memory to perform calculations. 
It also reduces the amount of memory pressure on the device while working with the data layer. Simply put, Realm makes it easy to store, query, and sync your mobile data across devices and the back end.\n\n## Realm for Kotlin developers\n\nRealm is an object database, so your schema is defined in the same way you define your object classes. Under the hood, Realm uses a Kotlin compiler plugin to generate the necessary getters, setters, and schema for your database, freeing Android developers from the monotony of Room’s DAOs and the pain of investigating inaccurate SQL query responses from SQLite. \n\nRealm also brings true relationships to your object class definitions, enabling you to have one-to-one, one-to-many, many-to-many, and even inverse relationships. And because Realm objects are memory-mapped, traversing the object graph across relationships is done in the blink of an eye. \n\nAdditionally, Realm delivers a simple and intuitive query system that will feel natural to Kotlin developers. No more context switching to SQL to instantiate your schema or looking behind the curtain when an ORM fails to translate your calls into SQL. \n\n```kotlin\n// Define your schema - Notice Project has a one-to-many relationship to Task\nclass Project : RealmObject {\n var name: String = \"\"\n var tasks: RealmList<Task> = realmListOf()\n}\n\nclass Task : RealmObject {\n var name: String = \"\"\n var status: String = \"Open\"\n var owner: String = \"\"\n}\n\n// Set the config and open the realm instance\nval easyConfig = RealmConfiguration.with(schema = setOf(Task::class, Project::class))\n\nval realm: Realm = Realm.open(easyConfig)\n\n// Write asynchronously using suspend functions\nrealm.write { // this: MutableRealm\n val project = Project().apply {\n name = \"Kotlin Beta\"\n }\n val task = Task().apply {\n name = \"Ship It\"\n status = \"InProgress\"\n owner = \"Christian\"\n }\n project.tasks.add(task)\n copyToRealm(project)\n}\n\n// Get a reference to the Project object\nval currentProject: Project =\n realm.query<Project>(\n \"name == $0\", \"Kotlin Beta\"\n ).first().find()!!\n\n// Or query multiple objects\nval allTasks: RealmResults<Task> = \n realm.query<Task>().find()\n\n// Get notified when data changes using Flows\ncurrentProject.tasks.asFlow().collect { change: ListChange<Task> ->\n when (change) {\n is InitialList -> {\n // Display initial data on UI\n updateUI(change.list)\n }\n is UpdatedList -> {\n // Get information about changes compared\n // to last version.\n Log.debug(\"Changes: ${change.changeRanges}\")\n Log.debug(\"Insertions: ${change.insertionRanges}\")\n Log.debug(\"Deletions: ${change.deletionRanges}\")\n updateUI(change.list)\n }\n is DeletedList -> {\n updateUI(change.list) // Empty list\n }\n }\n}\n\n// Write synchronously - this blocks execution on the caller thread \n// until the transaction is complete\nrealm.writeBlocking { // this: MutableRealm\n val newTask = Task().apply {\n name = \"Write Blog\"\n status = \"InProgress\"\n owner = \"Ian\"\n }\n\n findLatest(currentProject)?.apply {\n tasks.add(newTask)\n }\n}\n\n// The UI will now automatically display two tasks because \n// of the above defined Flow on currentProject\n```\nFinally, one of Realm’s main benefits is its out-of-the-box data synchronization solution with MongoDB Atlas. Realm Sync makes it easy for developers to build reactive mobile apps that stay up to date in real-time.\n\n## Looking ahead\n\nThe Realm Kotlin SDK is a free and open source database available for you to get started building applications with today! 
With this beta release, we believe that all of the critical components for building a production-grade data layer in an application are in place. In the future, we will look to add embedded objects; data types such as Maps, Sets, and the Mixed type; ancillary sync APIs as well as support for Flexible Sync; and finally, some highly optimized write and query helper APIs with the eventual goal of going GA later this year.\n\nGive it a try today and let us know what you think! Check out our samples, read our docs, and follow our repo.\n\n>For users already familiar with Realm Java, check out the video at the top of this article if you haven't already, or our migration blog post and documentation to see what is needed to port over your existing implementation.\n\n", "format": "md", "metadata": {"tags": ["Realm", "Kotlin"], "pageDescription": "Announcing the Realm Kotlin Beta\u2014making it easy to store, query, and sync data in your Kotlin for Android and Kotlin Multiplatform apps.", "contentType": "Article"}, "title": "Announcing the Realm Kotlin Beta: A Database for Multiplatform Apps", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/php/fifa-php-app", "action": "created", "body": "# Go-FIFA\n\n## Creators\nDhiren and Nirbhay contributed this project. \n\n## About the project\n\nGoFifa is a PHP-Mongo based Football Stats Client. GoFifa's data has been sourced from Kaggle, Sofifa and FifaIndex. Data has been stored in MongoDB on AWS. The application is hosted on Heroku and deployed using GitHub as a VCS.\n\n## Inspiration\n\nThe project was part of my course called \u2018knowledge representation techniques.\u2019 The assignment was to create a project that used the basic features of MongoDB. Together with my teammate, we decided to make the best use of this opportunity. We thought about what we could do; we needed proper structured and useful quality data, and also supported the idea behind MongoDB. We went to a lot of sites with sample data sites. \n\nWe went for soccer because my teammate is a huge soccer fan. We found this sample data, and we decided to use it for our project. \n\n:youtube]{vid=YGNDGTnQdNQ}\n\n## Why MongoDB?\n\nAs mentioned briefly, we were asked to use MongoDB in our project. We decided to dive deeper into everything that MongoDB has to offer. This project uses the most known querying techniques with MongoDB, and other features like geodata, leaflet js, grid fs, depth filtering, mapping crawled data to MongoDB, and references, deployment, etc. It can prove to be an excellent start for someone who wants to learn how to use MongoDB effectively and a rest client on top of it. \n\n![\n\n## How it works\n\nGoFifa is a web application where you can find soccer players and learn more about them. \n\n All in all, we created a project that was a full-stack. \n\nFirst, we started crawling the data, so we created a crawler that feeds the data into the database as chunks. And we also use the idea behind references. When querying, we also made sure what we were querying with wildcards next to the normal querying. \n\nWe wanted to create a feature that can be used in the real world. That\u2019s why we also decided to use geo queries and gridFS. It turned into a nice full-stack app. And top it off, the best part has been that since that project, we\u2019ve used MongoDB in so many places. \n\n## Challenges and learnings\n\nI (Nirbhay) learned a lot from this project. I was a more PHP centric person. 
Now that's not the case anymore, but I was. And it was a little difficult to integrate the PHP driver at the time. Now it's all become very easy. More and more articles are written about bothered about all those codes. So it's all become very easy. But at that time, it wasn't easy. But other than that, I would say: the documentation provided by MongoDB was pretty good. It helps understand things to a certain level. I don't think that I've used all the features yet, but I'll try to use them more in the future. \n\n", "format": "md", "metadata": {"tags": ["PHP"], "pageDescription": "GoFifa - A comprehensive soccer stats tracker.", "contentType": "Code Example"}, "title": "Go-FIFA", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/mongoose-versus-nodejs-driver", "action": "created", "body": "# MongoDB & Mongoose: Compatibility and Comparison\n\nIn this article, we\u2019ll explore the Mongoose library for MongoDB. Mongoose is a Object Data Modeling (ODM) library for MongoDB distributed as an npm package. We'll compare and contrast Mongoose to using the native MongoDB Node.js driver together with MongoDB Schema Validation.\n\nWe\u2019ll see how the MongoDB Schema Validation helps us enforce a database schema, while still allowing for great flexibility when needed. Finally, we\u2019ll see if the additional features that Mongoose provides are worth the overhead of introducing a third-party library into our applications.\n\n## What is Mongoose?\n\nMongoose is a Node.js-based Object Data Modeling (ODM) library for MongoDB. It is akin to an Object Relational Mapper (ORM) such as SQLAlchemy for traditional SQL databases. The problem that Mongoose aims to solve is allowing developers to enforce a specific schema at the application layer. In addition to enforcing a schema, Mongoose also offers a variety of hooks, model validation, and other features aimed at making it easier to work with MongoDB.\n\n## What is MongoDB Schema Validation?\n\nMongoDB Schema Validation makes it possible to easily enforce a schema against your MongoDB database, while maintaining a high degree of flexibility, giving you the best of both worlds. In the past, the only way to enforce a schema against a MongoDB collection was to do it at the application level using an ODM like Mongoose, but that posed significant challenges for developers.\n\n## Getting Started\n\nIf you want to follow along with this tutorial and play around with schema validations but don't have a MongoDB instance set up, you can set up a free MongoDB Atlas cluster here.\n\n## Object Data Modeling in MongoDB\n\nA huge benefit of using a NoSQL database like MongoDB is that you are not constrained to a rigid data model. You can add or remove fields, nest data multiple layers deep, and have a truly flexible data model that meets your needs today and can adapt to your ever-changing needs tomorrow. But being too flexible can also be a challenge. If there is no consensus on what the data model should look like, and every document in a collection contains vastly different fields, you're going to have a bad time.\n\n### Mongoose Schema and Model\n\nOn one end of the spectrum, we have ODM's like Mongoose, which from the get-go force us into a semi-rigid schema.\u00a0With Mongoose, you would define a `Schema` object in your application code that maps to a collection in your MongoDB database. The `Schema` object defines the structure of the documents in your collection. 
Then, you need to create a `Model` object out of the schema. The model is used to interact with the collection.\n\nFor example, let's say we're building a blog and want to represent a blog post. We would first define a schema and then create an accompanying Mongoose model:\n\n``` javascript\nconst blog = new Schema({\n title: String,\n slug: String,\n published: Boolean,\n content: String,\n tags: String],\n comments: [{\n user: String,\n content: String,\n votes: Number\n }]\n});\n \nconst Blog = mongoose.model('Blog', blog);\n```\n\n### Executing Operations on MongoDB with Mongoose\n\nOnce we have a Mongoose model defined, we could run queries for fetching,updating, and deleting data against a MongoDB collection that alignswith the Mongoose model. With the above model, we could do things like:\n\n``` javascript\n// Create a new blog post\nconst article = new Blog({\n title: 'Awesome Post!',\n slug: 'awesome-post',\n published: true,\n content: 'This is the best post ever',\n tags: ['featured', 'announcement'],\n});\n \n// Insert the article in our MongoDB database\narticle.save();\n \n// Find a single blog post\nBlog.findOne({}, (err, post) => {\n console.log(post);\n});\n```\n\n### Mongoose vs MongoDB Node.js Driver: A Comparison\n\nThe benefit of using Mongoose is that we have a schema to work against in our application code and an explicit relationship between our MongoDB documents and the Mongoose models within our application. The downside is that we can only create blog posts and they have to follow the above defined schema. If we change our Mongoose schema, we are changing the relationship completely, and if you're going through rapid development, this can greatly slow you down.\n\nThe other downside is that this relationship between the schema and model only exists within the confines of our Node.js application. Our MongoDB database is not aware of the relationship, it just inserts or retrieves data it is asked for without any sort of validation. In the event that we used a different programming language to interact with our database, all the constraints and models we defined in Mongoose would be worthless.\n\nOn the other hand, if we decided to use just the [MongoDB Node.js driver, we could\nrun queries against any collection in our database, or create new ones on the fly. The MongoDB Node.js driver does not have concepts of object data modeling or mapping.\n\nWe simply write queries against the database and collection we wish to work with to accomplish the business goals. If we wanted to insert a new blog post in our collection, we could simply execute a command like so:\n\n``` javascript\ndb.collection('posts').insertOne({\n title: 'Better Post!',\n slug: 'a-better-post',\n published: true,\n author: 'Ado Kukic',\n content: 'This is an even better post',\n tags: 'featured'],\n});\n```\n\nThis `insertOne()` operation would run just fine using the Node.js Driver. If we tried to save this data using our Mongoose `Blog` model, it would fail, because we don't have an `author` property defined in our Blog Mongoose model.\n\nJust because the Node.js driver doesn't have the concept of a model, does not mean we couldn't create models to represent our MongoDB data at the application level. We could just as easily create a generic model or use a library such as [objectmodel. 
We could create a `Blog` model like so:\n\n``` javascript\nfunction Blog(post) {\n this.title = post.title;\n this.slug = post.slug;\n ...\n}\n```\n\nWe could then use this model in conjunction with our MongoDB Node.js driver, giving us both the flexibility of using the model, but not being constrained by it.\n\n``` javascript\ndb.collection('posts').findOne({}).then((err, post) => {\n let article = new Blog(post);\n});\n```\n\nIn this scenario, our MongoDB database is still blissfully unaware of our Blog model at the application level, but our developers can work with it, add specific methods and helpers to the model, and would know that this model is only meant to be used within the confines of our Node.js application. Next, let's explore schema validation.\n\n## Adding Schema Validation\n\nWe can choose between two different ways of adding schema validation to our MongoDB collections. The first is to use application-level validators, which are defined in the Mongoose schemas. The second is to use MongoDB schema validation, which is defined in the MongoDB collection itself. The huge difference is that native MongoDB schema validation is applied at the database level. Let's see why that matters by exploring both methods.\n\n### Schema Validation with Mongoose\n\nWhen it comes to schema validation, Mongoose enforces it at the application layer as we've seen in the previous section. It does this in two ways.\n\nFirst, by defining our model, we are explicitly telling our Node.js application what fields and data types we'll allow to be inserted into a specific collection. For example, our Mongoose Blog schema defines a `title` property of type `String`. If we were to try and insert a blog post with a `title` property that was an array, it would fail. Anything outside of the defined fields, will also not be inserted in the database.\n\nSecond, we further validate that the data in the defined fields matches our defined set of criteria. For example, we can expand on our Blog model by adding specific validators such as requiring certain fields, ensuring a minimum or maximum length for a specific field, or coming up with our custom logic even. Let's see how this looks with Mongoose. In our code we would simply expand on the property and add our validators:\n\n``` javascript\nconst blog = new Schema({\n title: {\n type: String,\n required: true,\n },\n slug: {\n type: String,\n required: true,\n },\n published: Boolean,\n content: {\n type: String,\n required: true,\n minlength: 250\n },\n ...\n});\n \nconst Blog = mongoose.model('Blog', blog);\n```\n\nMongoose takes care of model definition and schema validation in one fell swoop. The downside though is still the same. These rules only apply at the application layer and MongoDB itself is none the wiser.\n\nThe MongoDB Node.js driver itself does not have mechanisms for inserting or managing validations, and it shouldn't. We can define schema validation rules for our MongoDB database using the MongoDB Shell or\u00a0Compass.\n\nWe can create a schema validation when creating our collection or after the fact on an existing collection. Since we've been working with this blog idea as our example, we'll add our schema validations to it. I will use Compass and MongoDB Atlas. 
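If you prefer to add the rule programmatically rather than through a UI, a minimal sketch with the MongoDB Node.js driver could look like the following. This is an illustration rather than the article's exact steps: it assumes `db` is a connected `Db` handle and mirrors the `author` rule we're about to build in Compass.

``` javascript
// Sketch: attach a $jsonSchema validator while creating the collection.
await db.createCollection('posts', {
  validator: {
    $jsonSchema: {
      bsonType: 'object',
      required: ['author']
    }
  }
});

// Or, for a collection that already exists, apply the same rule with collMod.
await db.command({
  collMod: 'posts',
  validator: {
    $jsonSchema: {
      bsonType: 'object',
      required: ['author']
    }
  }
});
```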
For a great resource on how to programmatically add schema validations, check out this series.\n\n> If you want to follow along with this tutorial and play around with\n> schema validations but don't have a MongoDB instance set up, you can\n> set up a free MongoDB Atlas cluster here.\n\nCreate a collection called `posts` and let's insert our two documents that we've been working with. The documents are:\n\n``` javascript\n{\"title\":\"Better Post!\",\"slug\":\"a-better-post\",\"published\":true,\"author\":\"Ado Kukic\",\"content\":\"This is an even better post\",\"tags\":[\"featured\"]}, {\"_id\":{\"$oid\":\"5e714da7f3a665d9804e6506\"},\"title\":\"Awesome Post\",\"slug\":\"awesome-post\",\"published\":true,\"content\":\"This is an awesome post\",\"tags\":[\"featured\",\"announcement\"]}]\n```\n\nNow, within the Compass UI, I will navigate to the **Validation** tab. As expected, there are currently no validation rules in place, meaning our database will accept any document as long as it is valid BSON. Hit the **Add a Rule** button and you'll see a user interface for creating your own validation rules.\n\n![Valid Document Schema\n\nBy default, there are no rules, so any document will be marked as passing. Let's add a rule to require the `author` property. It will look like this:\n\n``` javascript\n{\n $jsonSchema: {\n bsonType: \"object\",\n required: \"author\" ]\n }\n}\n```\n\nNow we'll see that our initial post, that does not have an `author` field has failed validation, while the post that does have the `author` field is good to go.\n\n![Invalid Document Schema\n\nWe can go further and add validations to individual fields as well. Say for SEO purposes we wanted all the titles of the blog posts to be a minimum of 20 characters and have a maximum length of 80 characters. We can represent that like this:\n\n``` javascript\n{\n $jsonSchema: {\n bsonType: \"object\",\n required: \"tags\" ],\n properties: {\n title: {\n type: \"string\",\n minLength: 20,\n maxLength: 80\n }\n }\n }\n}\n```\n\nNow if we try to insert a document into our `posts` collection either via the Node.js Driver or via Compass, we will get an error.\n\n![Validation Error\n\nThere are many more rules and validations you can add. Check out the full list here. For a more advanced guided approach, check out the articles on schema validation with arrays and dependencies.\n\n### Expanding on Schema Validation\n\nWith Mongoose, our data model and schema are the basis for our interactions with MongoDB. MongoDB itself is not aware of any of these constraints, Mongoose takes the role of judge, jury, and executioner on what queries can be executed and what happens with them.\n\nBut with MongoDB native schema validation, we have additional flexibility. When we implement a schema, validation on existing documents does not happen automatically. Validation is only done on updates and inserts. If we wanted to leave existing documents alone though, we could change the `validationLevel` to only validate new documents inserted in the database.\n\nAdditionally, with schema validations done at the MongoDB database level, we can choose to still insert documents that fail validation. The `validationAction` option allows us to determine what happens if a query fails validation. By default, it is set to `error`, but we can change it to `warn` if we want the insert to still occur. 
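As a rough sketch of what flipping that switch could look like (again assuming a connected `db` handle from the Node.js driver; this is not code from the article's own walkthrough):

``` javascript
// Sketch: change the posts collection to warn-only validation.
await db.command({
  collMod: 'posts',
  validationAction: 'warn',    // log a warning instead of rejecting the write
  validationLevel: 'moderate'  // only validate inserts and updates to already-valid documents
});
```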
Now instead of an insert or update erroring out, it would simply warn the user that the operation failed validation.\n\nAnd finally, if we needed to, we can bypass document validation altogether by passing the `bypassDocumentValidation` option with our query. To show you how this works, let's say we wanted to insert just a `title` in our `posts` collection and we didn't want any other data. If we tried to just do this...\n\n``` javascript\ndb.collection('posts').insertOne({ title: 'Awesome' });\n```\n\n... we would get an error saying that document validation failed. But if we wanted to skip document validation for this insert, we would simply do this:\n\n``` javascript\ndb.collection('posts').insertOne(\n { title: 'Awesome' },\n { bypassDocumentValidation: true }\n);\n```\n\nThis would not be possible with Mongoose. MongoDB schema validation is more in line with the entire philosophy of MongoDB where the focus is on a flexible design schema that is quickly and easily adaptable to your use cases.\n\n## Populate and Lookup\n\nThe final area where I would like to compare Mongoose and the Node.js MongoDB driver is its support for pseudo-joins. Both Mongoose and the native Node.js driver support the ability to combine documents from multiple collections in the same database, similar to a join in traditional relational databases.\n\nThe Mongoose approach is called **Populate**. It allows developers to create data models that can reference each other and then, with a simple API, request data from multiple collections. For our example, let's expand on the blog post and add a new collection for users.\n\n``` javascript\nconst user = new Schema({\n name: String,\n email: String\n});\n \nconst blog = new Schema({\n title: String,\n slug: String,\n published: Boolean,\n content: String,\n tags: String],\n comments: [{\n user: { Schema.Types.ObjectId, ref: 'User' },\n content: String,\n votes: Number\n }]\n});\n \nconst User = mongoose.model('User', user);\nconst Blog = mongoose.model('Blog', blog);\n```\n\nWhat we did above was we created a new model and schema to represent users leaving comments on blog posts. When a user leaves a comment, instead of storing information on them, we would just store that user\u2019s `_id`. So, an update operation to add a new comment to our post may look something like this:\n\n``` javascript\nBlog.updateOne({\n comments: [{ user: \"12345\", content: \"Great Post!!!\" }]\n});\n```\n\nThis is assuming that we have a user in our `User` collection with the `_id` of `12345`. Now, if we wanted to **populate** our `user` property when we do a query\u2014and instead of just returning the `_id` return the entire document\u2014we could do:\n\n``` javascript\nBlog.\n findOne({}).\n populate('comments.user').\n exec(function (err, post) {\n console.log(post.comments[0].user.name) // Name of user for 1st comment\n });\n```\n\nPopulate coupled with Mongoose data modeling can be very powerful, especially if you're coming from a relational database background. The drawback though is the amount of magic going on under the hood to make this happen. Mongoose would make two separate queries to accomplish this task and if you're joining multiple collections, operations can quickly slow down.\n\nThe other issue is that the populate concept only exists at the application layer. 
So while this does work, relying on it for your database management can come back to bite you in the future.\n\nMongoDB as of version 3.2 introduced a new operation called `$lookup` that allows to developers to essentially do a left outer join on collections within a single MongoDB database. If we wanted to populate the user information using the Node.js driver, we could create an aggregation pipeline to do it. Our starting point using the `$lookup` operator could look like this:\n\n``` javascript\ndb.collection('posts').aggregate([\n {\n '$lookup': {\n 'from': 'users', \n 'localField': 'comments.user', \n 'foreignField': '_id', \n 'as': 'users'\n }\n }, {}\n], (err, post) => {\n console.log(post.users); //This would contain an array of users\n});\n```\n\nWe could further create an additional step in our aggregation pipeline to replace the user information in the `comments` field with the users data, but that's a bit out of the scope of this article. If you wish to learn more about how aggregation pipelines work with MongoDB, check out the [aggregation docs.\n\n## Final Thoughts: Do I Really Need Mongoose?\n\nBoth Mongoose and the MongoDB Node.js driver support similar functionality. While Mongoose does make MongoDB development familiar to someone who may be completely new, it does perform a lot of magic under the hood that could have unintended consequences in the future.\n\nI personally believe that you don't need an ODM to be successful with MongoDB. I am also not a huge fan of ORMs in the relational database world. While they make initial dive into a technology feel familiar, they abstract away a lot of the power of a database.\n\nDevelopers have a lot of choices to make when it comes to building applications. In this article, we looked at the differences between using an ODM versus the native driver and showed that the difference between the two is not that big. Using an ODM like Mongoose can make development feel familiar but forces you into a rigid design, which is an anti-pattern when considering building with MongoDB.\n\nThe MongoDB Node.js driver works natively with your MongoDB database to give you the best and most flexible development experience. It allows the database to do what it's best at while allowing your application to focus on what it's best at, and that's probably not managing data models.", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB", "Node.js"], "pageDescription": "Learn why using an Object Data Modeling library may not be the best choice when building MongoDB apps with Node.js.", "contentType": "Article"}, "title": "MongoDB & Mongoose: Compatibility and Comparison", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-data-api-aws-gateway", "action": "created", "body": "# Creating an API with the AWS API Gateway and the Atlas Data API\n\n## Introduction\n\n> This tutorial discusses the preview version of the Atlas Data API which is now generally available with more features and functionality. Learn more about the GA version here.\n\nThis article will walk through creating an API using the Amazon API Gateway in front of the MongoDB Atlas Data API. When integrating with the Amazon API Gateway, it is possible but undesirable to use a driver, as drivers are designed to be long-lived and maintain connection pooling. 
Using serverless functions with a driver can result in either a performance hit \u2013 if the driver is instantiated on each call and must authenticate \u2013 or excessive connection numbers if the underlying mechanism persists between calls, as you have no control over when code containers are reused or created.\n\nTheMongoDB Atlas Data API is an HTTPS-based API that allows us to read and write data in Atlas where a MongoDB driver library is either not available or not desirable. For example, when creating serverless microservices with MongoDB.\n\nAWS (Amazon Web Services) describe their API Gateway as:\n\n> \"A fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the \"front door\" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.\n> API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.\"\n\n## Prerequisites.\n\nA core requirement for this walkthrough is to have an Amazon Web Services account, the API Gateway is available as part of the AWS free tier, allowing up to 1 million API calls per month, at no charge, in your first 12 months with AWS.\n\nWe will also need an Atlas Cluster for which we have enabled the Data API \u2013 and our endpoint URL and API Key. You can learn how to get these in this Article or this Video if you do not have them already.\n\n\u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\nA common use of Atlas with the Amazon API Gateway might be to provide a managed API to a restricted subset of data in our cluster, which is a common need for a microservice architecture. To demonstrate this, we first need to have some data available in MongoDB Atlas. This can be added by selecting the three dots next to our cluster name and choosing \"Load Sample Dataset\", or following instructions here. \n\n## Creating an API with the Amazon API Gateway and the Atlas Data API\n## \nThe instructions here are an extended variation from Amazon's own \"Getting Started with the API Gateway\" tutorial. I do not presume to teach you how best to use Amazon's API Gateway as Amazon itself has many fine resources for this, what we will do here is use it to get a basic Public API enabled that uses the Data API.\n\n> The Data API itself is currently in an early preview with a flat security model allowing all users who have an API key to query or update any database or collection. Future versions will have more granular security. We would not want to simply expose the current data API as a 'Public' API but we can use it on the back-end to create more restricted and specific access to our data. 
\n> \nWe are going to create an API which allows users to GET the ten films for any given year which received the most awards - a notional \"Best Films of the Year\". We will restrict this API to performing only that operation and supply the year as part of the URL\n\nWe will first create the API, then analyze the code we used for it.\n\n## Create a AWS Lambda Function to retrieve data with the Data API\n\n1. Sign in to the Lambda console athttps://console.aws.amazon.com/lambda.\n2. Choose **Create function**.\n3. For **Function name**, enter top-movies-for-year.\n4. Choose **Create function**.\n\nWhen you see the Javascript editor that looks like this\n\nReplace the code with the following, changing the API-KEY and APP-ID to the values for your Atlas cluster. Save and click **Deploy** (In a production application you might look to store these in AWS Secrets manager , I have simplified by putting them in the code here).\n\n```\nconst https = require('https');\n \nconst atlasEndpoint = \"/app/APP-ID/endpoint/data/beta/action/find\";\nconst atlasAPIKey = \"API-KEY\";\n \n \nexports.handler = async(event) => {\n \n if (!event.queryStringParameters || !event.queryStringParameters.year) {\n return { statusCode: 400, body: 'Year not specified' };\n }\n \n //Year is a number but the argument is a string so we need to convert as MongoDB is typed\n \n \n let year = parseInt(event.queryStringParameters.year, 10);\n console.log(`Year = ${year}`)\n if (Number.isNaN(year)) { return { statusCode: 400, body: 'Year incorrectly specified' }; }\n \n \n const payload = JSON.stringify({\n dataSource: \"Cluster0\",\n database: \"sample_mflix\",\n collection: \"movies\",\n filter: { year },\n projection: { _id: 0, title: 1, awards: \"$awards.wins\" },\n sort: { \"awards.wins\": -1 },\n limit: 10\n });\n \n \n const options = {\n hostname: 'data.mongodb-api.com',\n port: 443,\n path: atlasEndpoint,\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n 'Content-Length': payload.length,\n 'api-key': atlasAPIKey\n }\n };\n \n let results = '';\n \n const response = await new Promise((resolve, reject) => {\n const req = https.request(options, res => {\n res.on('data', d => {\n results += d;\n });\n res.on('end', () => {\n console.log(`end() status code = ${res.statusCode}`);\n if (res.statusCode == 200) {\n let resultsObj = JSON.parse(results)\n resolve({ statusCode: 200, body: JSON.stringify(resultsObj.documents, null, 4) });\n }\n else {\n reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Backend Problem like 404 or wrong API key\n }\n });\n });\n //Do not give the user clues about backend issues for security reasons\n req.on('error', error => {\n reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Issue like host unavailable\n });\n \n req.write(payload);\n req.end();\n });\n return response;\n \n};\n\n```\n\nAlternatively, if you are familiar with working with packages and Lambda, you could upload an HTTP package like Axios to Lambda as a zipfile, allowing you to use the following simplified code.\n\n```\n\nconst axios = require('axios');\n\nconst atlasEndpoint = \"https://data.mongodb-api.com/app/APP-ID/endpoint/data/beta/action/find\";\nconst atlasAPIKey = \"API-KEY\";\n\nexports.handler = async(event) => {\n\n if (!event.queryStringParameters || !event.queryStringParameters.year) {\n return { statusCode: 400, body: 'Year not specified' };\n }\n\n //Year is a number but the argument is a string so we need to convert as MongoDB is 
typed\n\n let year = parseInt(event.queryStringParameters.year, 10);\n console.log(`Year = ${year}`)\n if (Number.isNaN(year)) { return { statusCode: 400, body: 'Year incorrectly specified' }; }\n\n const payload = {\n dataSource: \"Cluster0\",\n database: \"sample_mflix\",\n collection: \"movies\",\n filter: { year },\n projection: { _id: 0, title: 1, awards: \"$awards.wins\" },\n sort: { \"awards.wins\": -1 },\n limit: 10\n };\n\n try {\n const response = await axios.post(atlasEndpoint, payload, { headers: { 'api-key': atlasAPIKey } });\n return response.data.documents;\n }\n catch (e) {\n return { statusCode: 500, body: 'Unable to service request' }\n }\n};\n```\n\n## Create an HTTP endpoint for our custom API function\n## \nWe now need to route an HTTP endpoint to our Lambda function using the HTTP API. \n\nThe HTTP API provides an HTTP endpoint for your Lambda function. API Gateway routes requests to your Lambda function, and then returns the function's response to clients.\n\n1. Go to the API Gateway console athttps://console.aws.amazon.com/apigateway.\n2. Do one of the following:\n To create your first API, for HTTP API, choose **Build**.\n If you've created an API before, choose **Create API**, and then choose **Build** for HTTP API.\n3. For Integrations, choose **Add integration**.\n4. Choose **Lambda**.\n5. For **Lambda function**, enter top-movies-for-year.\n6. For **API name**, enter movie-api.\n\n8. Choose **Next**.\n\n8. Review the route that API Gateway creates for you, and then choose **Next**.\n\n9. Review the stage that API Gateway creates for you, and then choose **Next**.\n\n10. Choose **Create**.\n\nNow you've created an HTTP API with a Lambda integration and the Atlas Data API that's ready to receive requests from clients.\n\n## Test your API\n\nYou should now be looking at API Gateway details that look like this, if not you can get to it by going tohttps://console.aws.amazon.com/apigatewayand clicking on **movie-api**\n\nTake a note of the **Invoke URL**, this is the base URL for your API\n\nNow, in a new browser tab, browse to `/top-movies-for-year?year=2001` . Changing ` `to the Invoke URL shown in AWS. You should see the results of your API call - JSON listing the top 10 \"Best\" films of 2001.\n\n## Reviewing our Function.\n## \nWe start by importing the Standard node.js https library - the Data API needs no special libraries to call it. We also define our API Key and the path to our find endpoint, You get both of these from the Data API tab in Atlas.\n\n```\nconst https = require('https');\n \nconst atlasEndpoint = \"/app/data-amzuu/endpoint/data/beta/action/find\";\nconst atlasAPIKey = \"YOUR-API-KEY\";\n```\n\nNow we check that the API call included a parameter for year and that it's a number - we need to convert it to a number as in MongoDB, \"2001\" and 2001 are different values, and searching for one will not find the other. 
The collection uses a number for the movie release year.\n\n \n```\nexports.handler = async (event) => {\n \n if (!event.queryStringParameters || !event.queryStringParameters.year) {\n return { statusCode: 400, body: 'Year not specified' };\n }\n //Year is a number but the argument is a string so we need to convert as MongoDB is typed\n let year = parseInt(event.queryStringParameters.year, 10);\n console.log(`Year = ${year}`)\n if (Number.isNaN(year)) { return { statusCode: 400, body: 'Year incorrectly specified' }; }\n \n \n const payload = JSON.stringify({\n dataSource: \"Cluster0\", database: \"sample_mflix\", collection: \"movies\",\n filter: { year }, projection: { _id: 0, title: 1, awards: \"$awards.wins\" }, sort: { \"awards.wins\": -1 }, limit: 10\n });\n\n```\n\nTHen we construct our payload - the parameters for the Atlas API Call, we are querying for year = year, projecting just the title and the number of awards, sorting by the numbers of awards descending and limiting to 10.\n\n \n```\n const payload = JSON.stringify({\n dataSource: \"Cluster0\", database: \"sample_mflix\", collection: \"movies\",\n filter: { year }, projection: { _id: 0, title: 1, awards: \"$awards.wins\" }, \n sort: { \"awards.wins\": -1 }, limit: 10\n });\n\n```\n\nWe then construct the options for the HTTPS POST request to the Data API - here we pass the Data API API-KEY as a header.\n\n```\n const options = {\n hostname: 'data.mongodb-api.com',\n port: 443,\n path: atlasEndpoint,\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n 'Content-Length': payload.length,\n 'api-key': atlasAPIKey\n }\n };\n\n```\n\nFinally we use some fairly standard code to call the API and handle errors. We can get Request errors - such as being unable to contact the server - or Response errors where we get any Response code other than 200 OK - In both cases we return a 500 Internal error from our simplified API to not leak any details of the internals to a potential hacker.\n\n \n```\n let results = '';\n \n const response = await new Promise((resolve, reject) => {\n const req = https.request(options, res => {\n res.on('data', d => {\n results += d;\n });\n res.on('end', () => {\n console.log(`end() status code = ${res.statusCode}`);\n if (res.statusCode == 200) {\n let resultsObj = JSON.parse(results)\n resolve({ statusCode: 200, body: JSON.stringify(resultsObj.documents, null, 4) });\n } else {\n reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Backend Problem like 404 or wrong API key\n }\n });\n });\n //Do not give the user clues about backend issues for security reasons\n req.on('error', error => {\n reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Issue like host unavailable\n });\n \n req.write(payload);\n req.end();\n });\n return response;\n \n};\n```\n\nOur Axios verison is just the same functionality as above but simplified by the use of a library.\n## Conclusion\n\nAs we can see, calling the Atlas Data API from AWS Lambda function is incredibly simple, especially if making use of a library like Axios. The Data API is also stateless, so there are no concerns about connection setup times or maintaining long lived connections as there would be using a Driver. \n", "format": "md", "metadata": {"tags": ["Atlas", "AWS"], "pageDescription": "In this article, we look at how the Atlas Data API is a great choice for accessing MongoDB Atlas from AWS Lambda Functions by creating a custom API with the AWS API Gateway. 
", "contentType": "Quickstart"}, "title": "Creating an API with the AWS API Gateway and the Atlas Data API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/javascript/kenya-hostels", "action": "created", "body": "# Hostels Kenya Example App\n\n## Creators\nDerrick Muteti and Felix Omuok from Kirinyaga University in Kenya contributed this project. \n\n## About the project\n\nHostels Kenya is a website that provides students the opportunity to find any hostel of their choice by filtering by distance from school, university name, room type, and even the monthly rent. It also provides the students with directions in case they are new to the area. Once they find a hostel that they like, they have the option to make a booking request, after which the landlord/landlady is automatically notified via SMS by our system. The students can also request to receive a notification when the hostel of their choice is fully occupied. Students have the opportunity to review and rate the hostels available in our system, helping other students make better decisions when looking for hostels. We launched the website on 1st September 2020, and so far, we have registered 26 hostels around our university and we are expanding to cover other universities.\n\n## Inspiration\n\nI come from Nyanza Province in Kenya and I study at Kirinyaga University, the university in Kenya's central region, which is around 529km from my home. Most universities in Kenya do not offer student accommodation, and if any, a tiny percentage of the students are accommodated by the school. Because of this reason, most students stay in privately owned hostels outside the school. Therefore, getting a hostel is always challenging, especially for students who are new to the area. In my case, I had to travel from home to Kirinyaga University a month before the admission date to book a hostel. Thus, I decided to develop hostels Kenya to help students from different parts of the country find student hostels easily and make booking requests. \n\n## Why MongoDB?\n \nMy journey of developing this project has had ups and downs. I started working on the project last year using PHP and MYSQL. After facing many challenges in storing my data and dealing with geospatial queries, I had to stop the project. The funny thing is that last year, I did not know MongoDB existed. But I saw that MongoDB was part of the GitHub Student Developer Pack. And now that I was faced with a problem, I had to take the time and learn MongoDB. \n\nIn April this year, I started the project from scratch using Node.js and MongoDB. \n\nMongoDB made it very easy for me to deal with geospatial queries and the fact that I was able to embed documents made it very fast when reading questions. This was not possible with MYSQL, and that is why I opted for a NoSQL database.\nLearning MongoDB was also straightforward, and it took me a short duration of time to set up my project. I love the fact that MongoDB handles most of the heavy tasks for me. To be sincere, I do not think I could have finished the project in time with all the functionalities had I not used MongoDB. \n\nSince the site's launch on 1st October 2020, the site has helped over 1 thousand students from my university find hostels, and we hope this number will grow once we expand to other universities. 
With the government's current COVID-19 regulations on traveling, many students have opted to use this site instead of traveling for long distances as they wait to resume in-person learning come January 2021.\n\n## How it works\n\nStudents can create an account on our website. Our search query uses the school they go to, the room type they're looking for, the monthly rent, and the school's distance. Once students fill out this search, it will return the hostels that match their wishes. We use Geodata, the school's longitude, latitude, and the hostels to come up with the closest hostels. Filtering and querying this is obviously where the MongoDB aggregation framework comes into place. We love it!\n\nHostel owners can register their hostel via the website. They will be added to our database, and students will be able to start booking a room via our website. \n\nStudents can also view all the hostels on a map and select one of their choices. It was beneficial that we could embed all of this data, and the best part was MongoDB's ability to deal with GeoData. \n\nToday hostel owners can register their hostel via the website; they can log in to their account and change pictures. But we're looking forward to implementing more features like a dashboard and making it more user friendly. \n\nWe're currently using mongoose, but we're thinking of expanding and using MongoDB Atlas in the future. I've been watching talks about Atlas at MongoDB.live Asia, and I was amazed. I'm looking forward to implementing this. I've also been watching some MongoDB YouTube videos on design patterns, and I realize that this is something that we can add in the future. \n\n \n## Challenges and learnings\n\nExcept for the whole change from PHP and SQL, to MongoDB & Node.js, finding hostels has been our challenge. I underestimated the importance of marketing. I never knew how difficult it would be until I had to go out and talk to hostel owners., trying to convince them to come on board. But I am seeing that the students who are using the application are finding it very useful. \n\nWe decided to bring another person on board to help us with marketing. And we are also trying to reach the school to see how they can help us engage with the hostels. \n\nFor the future, we want to create a desktop application for hostel owners. Something that can be installed on their computer makes it easy for them to manage their students' bookings.\n\nMost landlords are building many hostels around the school, so we're hoping to have them on board.\n\nBut first, we want to add more hostels into the system in December and create more data for our students. Especially now we might go back to school in January, it's essential to keep adding accommodations. \n\nAs for me, I\u2019m also following courses on MongoDB University. I noticed that there is no MongoDB Certified Professional in my country, and I would like to become the first one.\n", "format": "md", "metadata": {"tags": ["JavaScript", "Atlas", "Node.js"], "pageDescription": "Find hostels and student apartments all over Kenya", "contentType": "Code Example"}, "title": "Hostels Kenya Example App", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/zap-tweet-repeat-how-to-use-zapier-mongodb", "action": "created", "body": "# Zap, Tweet, and Repeat! How to Use Zapier with MongoDB\n\nI'm a huge fan of automation when the scenario allows for it. 
Maybe you need to keep track of guest information when they RSVP to your event, or maybe you need to monitor and react to feeds of data. These are two of many possible scenarios where you probably wouldn't want to do things manually.\n\nThere are quite a few tools that are designed to automate your life. Some of the popular tools include IFTTT, Zapier, and Automate. The idea behind these services is that given a trigger, you can do a\nseries of events.\n\nIn this tutorial, we're going to see how to collect Twitter data with Zapier, store it in MongoDB using a Realm webhook function, and then run aggregations on it using the MongoDB query language (MQL).\n\n## The Requirements\n\nThere are a few requirements that must be met prior to starting this tutorial:\n\n- A paid tier of Zapier with access to premium automations\n- A properly configured MongoDB Atlas cluster\n- A Twitter account\n\nThere is a Zapier free tier, but because we plan to use webhooks, which are premium in Zapier, a paid account is necessary. To consume data from Twitter in Zapier, a Twitter account is necessary, even if we plan to consume data that isn't related to our account. This data will be stored in MongoDB, so a cluster with properly configured IP access and user permissions is required.\n\n>You can get started with MongoDB Atlas by launching a free M0 cluster, no credit card required.\n\nWhile not necessary to create a database and collection prior to use, we'll be using a **zapier** database and a **tweets** collection throughout the scope of this tutorial.\n\n## Understanding the Twitter Data Model Within Zapier\n\nSince the plan is to store tweets from Twitter within MongoDB and then create queries to make sense of it, we should probably get an understanding of the data prior to trying to work with it.\n\nWe'll be using the \"Search Mention\" functionality within Zapier for Twitter. Essentially, it allows us to provide a Twitter query and trigger an automation when the data is found. More on that soon.\n\nAs a result, we'll end up with the following raw data:\n\n``` json\n{\n \"created_at\": \"Tue Feb 02 20:31:58 +0000 2021\",\n \"id\": \"1356701917603238000\",\n \"id_str\": \"1356701917603237888\",\n \"full_text\": \"In case anyone is interested in learning about how to work with streaming data using Node.js, I wrote a tutorial about it on the @MongoDB Developer Hub. https://t.co/Dxt80lD8xj #javascript\",\n \"truncated\": false,\n \"display_text_range\": 0, 188],\n \"metadata\": {\n \"iso_language_code\": \"en\",\n \"result_type\": \"recent\"\n },\n \"source\": \"TweetDeck\",\n \"in_reply_to_status_id\": null,\n \"in_reply_to_status_id_str\": null,\n \"in_reply_to_user_id\": null,\n \"in_reply_to_user_id_str\": null,\n \"in_reply_to_screen_name\": null,\n \"user\": {\n \"id\": \"227546834\",\n \"id_str\": \"227546834\",\n \"name\": \"Nic Raboy\",\n \"screen_name\": \"nraboy\",\n \"location\": \"Tracy, CA\",\n \"description\": \"Advocate of modern web and mobile development technologies. I write tutorials and speak at events to make app development easier to understand. 
I work @MongoDB.\",\n \"url\": \"https://t.co/mRqzaKrmvm\",\n \"entities\": {\n \"url\": {\n \"urls\": [\n {\n \"url\": \"https://t.co/mRqzaKrmvm\",\n \"expanded_url\": \"https://www.thepolyglotdeveloper.com\",\n \"display_url\": \"thepolyglotdeveloper.com\",\n \"indices\": [0, 23]\n }\n ]\n },\n \"description\": {\n \"urls\": \"\"\n }\n },\n \"protected\": false,\n \"followers_count\": 4599,\n \"friends_count\": 551,\n \"listed_count\": 265,\n \"created_at\": \"Fri Dec 17 03:33:03 +0000 2010\",\n \"favourites_count\": 4550,\n \"verified\": false\n },\n \"lang\": \"en\",\n \"url\": \"https://twitter.com/227546834/status/1356701917603237888\",\n \"text\": \"In case anyone is interested in learning about how to work with streaming data using Node.js, I wrote a tutorial about it on the @MongoDB Developer Hub. https://t.co/Dxt80lD8xj #javascript\"\n}\n```\n\nThe data we have access to is probably more than we need. However, it really depends on what you're interested in. For this example, we'll be storing the following within MongoDB:\n\n``` json\n{\n \"created_at\": \"Tue Feb 02 20:31:58 +0000 2021\",\n \"user\": {\n \"screen_name\": \"nraboy\",\n \"location\": \"Tracy, CA\",\n \"followers_count\": 4599,\n \"friends_count\": 551\n },\n \"text\": \"In case anyone is interested in learning about how to work with streaming data using Node.js, I wrote a tutorial about it on the @MongoDB Developer Hub. https://t.co/Dxt80lD8xj #javascript\"\n}\n```\n\nWithout getting too far ahead of ourselves, our analysis will be based off the `followers_count` and the `location` of the user. We want to be able to make sense of where our users are and give priority to users that meet a certain followers threshold.\n\n## Developing a Webhook Function for Storing Tweet Information with MongoDB Realm and JavaScript\n\nBefore we start connecting Zapier and MongoDB, we need to develop the middleware that will be responsible for receiving tweet data from Zapier.\n\nRemember, you'll need to have a properly configured MongoDB Atlas cluster.\n\nWe need to create a Realm application. Within the MongoDB Atlas dashboard, click the **Realm** tab.\n\n![MongoDB Realm Applications\n\nFor simplicity, we're going to want to create a new application. Click the **Create a New App** button and proceed to fill in the information about your application.\n\nFrom the Realm Dashboard, click the **3rd Party Services** tab.\n\nWe're going to want to create an **HTTP** service. The name doesn't matter, but it might make sense to name it **Twitter** based on what we're planning to do.\n\nBecause we plan to work with tweet data, it makes sense to call our webhook function **tweet**, but the name doesn't truly matter.\n\nWith the exception of the **HTTP Method**, the defaults are fine for this webhook. We want the method to be POST because we plan to create data with this particular webhook function. Make note of the **Webhook URL** because it will be used when we connect Zapier.\n\nThe next step is to open the **Function Editor** so we can add some logic behind this function. 
Add the following JavaScript code:\n\n``` javascript\nexports = function (payload, response) {\n\n const tweet = EJSON.parse(payload.body.text());\n\n const collection = context.services.get(\"mongodb-atlas\").db(\"zapier\").collection(\"tweets\");\n\n return collection.insertOne(tweet);\n\n};\n```\n\nIn the above code, we are taking the request payload, getting a handle to the **tweets** collection within the **zapier** database, and then doing an insert operation to store the data in the payload.\n\nThere are a few things to note in the above code:\n\n1. We are not validating the data being sent in the request payload. In a realistic scenario, you'd probably want some kind of validation logic in place to be sure about what you're storing.\n2. We are not authenticating the user sending the data. In this example, we're trusting that only Zapier knows about our URL.\n3. We aren't doing any error handling.\n\nWhen we call our function, a new document should be created within MongoDB.\n\nBy default, the function will not deploy when saving. After saving, make sure to review and deploy the changes through the notification at the top of the browser window.\n\n## Creating a \"Zap\" in Zapier to Connect Twitter to MongoDB\n\nSo, we know the data we'll be working with and we have a MongoDB Realm webhook function that is ready for receiving data. Now, we need to bring everything together with Zapier.\n\nFor clarity, new Twitter matches will be our trigger in Zapier, and the webhook function will be our event.\n\nWithin Zapier, choose to create a new \"Zap,\" which is an automation. The trigger needs to be a **Search Mention in Twitter**, which means that when a new Tweet is detected using a search query, our events happen.\n\nFor this example, we're going to use the following Twitter search query:\n\n``` none\nurl:developer.mongodb.com -filter:retweets filter:safe lang:en -from:mongodb -from:realm\n```\n\nThe above query says that we are looking for tweets that include a URL to developer.mongodb.com. The URL doesn't need to match exactly as long as the domain matches. The query also says that we aren't interested in retweets. We only want original tweets, they have to be in English, and they have to be detected as safe for work.\n\nIn addition to the mentioned search criteria, we are also excluding tweets that originate from one of the MongoDB accounts.\n\nIn theory, the above search query could be used to see what people are saying about the MongoDB Developer Hub.\n\nWith the trigger in place, we need to identify the next stage of the automation pipeline. The next stage is taking the data from the trigger and sending it to our Realm webhook function.\n\nAs the event, make sure to choose **Webhooks by Zapier** and specify a POST request. From here, you'll be prompted to enter your Realm webhook URL and the method, which should be POST. Realm is expecting the payload to be JSON, so it is important to select JSON within Zapier.\n\nWe have the option to choose which data from the previous automation stage to pass to our webhook. Select the fields you're interested in and save your automation.\n\nThe data I chose to send looks like this:\n\n``` json\n{\n \"created_at\": \"Tue Feb 02 20:31:58 +0000 2021\",\n \"username\": \"nraboy\",\n \"location\": \"Tracy, CA\",\n \"follower_count\": \"4599\",\n \"following_count\": \"551\",\n \"message\": \"In case anyone is interested in learning about how to work with streaming data using Node.js, I wrote a tutorial about it on the @MongoDB Developer Hub. 
https://t.co/Dxt80lD8xj #javascript\"\n}\n```\n\nThe fields do not match the original fields brought in by Twitter. It is because I chose to map them to what made sense for me.\n\nWhen deploying the Zap, anytime a tweet is found that matches our query, it will be saved into our MongoDB cluster.\n\n## Analyzing the Twitter Data in MongoDB with an Aggregation Pipeline\n\nWith tweet data populating in MongoDB, it's time to start querying it to make sense of it. In this fictional example, we want to know what people are saying about our Developer Hub and how popular these individuals are.\n\nTo do this, we're going to want to make use of an aggregation pipeline within MongoDB.\n\nTake the following, for example:\n\n``` json\n\n {\n \"$addFields\": {\n \"follower_count\": {\n \"$toInt\": \"$follower_count\"\n },\n \"following_count\": {\n \"$toInt\": \"$following_count\"\n }\n }\n }, {\n \"$match\": {\n \"follower_count\": {\n \"$gt\": 1000\n }\n }\n }, {\n \"$group\": {\n \"_id\": {\n \"location\": \"$location\"\n },\n \"location\": {\n \"$sum\": 1\n }\n }\n }\n]\n```\n\nThere are three stages in the above aggregation pipeline.\n\nWe want to understand the follower data for the individual who made the tweet, but that data comes into MongoDB as a string rather than an integer. The first stage of the pipeline takes the `follower_count` and `following_count` fields and converts them from string to integer. In reality, we are using `$addFields` to create new fields, but because they have the same name as existing fields, the existing fields are replaced.\n\nThe next stage is where we want to identify people with more than 1,000 followers as a person of interest. While people with fewer followers might be saying great things, in this example, we don't care.\n\nAfter we've filtered out people by their follower count, we do a group based on their location. It might be valuable for us to know where in the world people are talking about MongoDB. We might want to know where our target audience exists.\n\nThe aggregation pipeline we chose to use can be executed with any of the MongoDB drivers, through the MongoDB Atlas dashboard, or through the CLI.\n\n## Conclusion\n\nYou just saw how to use [Zapier with MongoDB to automate certain tasks and store the results as documents within the NoSQL database. In this example, we chose to store Twitter data that matched certain criteria, later to be analyzed with an aggregation pipeline. The automations and analysis options that you can do are quite limitless.\n\nIf you enjoyed this tutorial and want to get engaged with more content and like-minded developers, check out the MongoDB Community.", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript", "Node.js"], "pageDescription": "Learn how to create automated workflows with Zapier and MongoDB.", "contentType": "Tutorial"}, "title": "Zap, Tweet, and Repeat! How to Use Zapier with MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/mongodb-on-raspberry-pi", "action": "created", "body": "# Install & Configure MongoDB on the Raspberry Pi\n\nI've been a big fan of the Raspberry Pi since the first version was\nreleased in 2012. The newer generations are wonderful home-automation\nand IoT prototyping computers, with built in WiFi, and the most recent\nversions (the Pi 3 and Pi 4) are 64-bit. This means they can run the\nMongoDB server, mongod, locally! 
MongoDB even provides a pre-compiled\nversion for the Raspberry Pi processor, so it's relatively\nstraightforward to get it installed.\n\nI'm currently building a home-automation service on a Raspberry Pi 4.\nIts job is to run background tasks, such as periodically requesting data\nfrom the internet, and then provide the data to a bunch of small devices\naround my house, such as some smart displays, and (ahem) my coffee\ngrinder.\n\nThe service doesn't have super-complex data storage requirements, and I\ncould have used an embedded database, such as SQLite. But I've become\nresistant to modelling tables and joins in a relational database and\nworking with flat rows. The ability to store rich data structures in a\nsingle MongoDB database is a killer feature for me.\n\n## Prerequisites\n\nYou will need:\n\n- A Raspberry Pi 3 or 4\n- A suitably sized Micro SD card (I used a 16 Gb card)\n- A computer and SD card reader to write the SD card image. (This\n *can* be another Raspberry Pi, but I'm using my desktop PC)\n- A text editor on the host computer. (I recommend VS\n Code)\n\n## What This Tutorial Will Do\n\nThis tutorial will show you how to:\n\n- Install the 64-bit version of Ubuntu Server on your Raspberry Pi.\n- Configure it to connect to your WiFi.\n- *Correctly* install MongoDB onto your Pi.\n- Add a user account, so you can *safely* expose MongoDB on your home\n network.\n\nWhen you're done, you'll have a secured MongoDB instance available on\nyour home network.\n\n>\n>\n>Before we get too far into this, please bear in mind that you don't want\n>to run a production, web-scale database on a Raspberry Pi. Despite the\n>processor improvements on the Pi 4, it's still a relatively low-powered\n>machine, with a relatively low amount of RAM for a database server.\n>Still! For a local, offline MongoDB instance, with the ease of\n>development that MongoDB offers, a Raspberry Pi is a great low-cost\n>solution. If you *do* wish to serve your data to the Internet, you\n>should definitely check out\n>Atlas, MongoDB's cloud hosting\n>solution. MongoDB will host your database for you, and the service has a\n>generous (and permanent) free tier!\n>\n>\n\n## Things Not To Do\n\n*Do not* run `apt install mongodb` on your Raspberry Pi, or indeed any\nLinux computer! The versions of MongoDB shipped with Linux distributions\nare *very* out of date. They won't run as well, and some of them are so\nold they're no longer supported.\n\nMongoDB provide versions of the database, pre-packaged for many\ndifferent operating systems, and Ubuntu Server on Raspberry Pi is one of\nthem.\n\n## Installing Ubuntu\n\nDownload and install the Raspberry Pi\nImager for your host computer.\n\nRun the Raspberry Pi Imager, and select Ubuntu Server 20.04, 64-bit for\nRaspberry Pi 3/4.\n\nMake sure you *don't* accidentally select Ubuntu Core, or a 32-bit\nversion.\n\nInsert your Micro SD Card into your computer and select it in the\nRaspberry Pi Imager window.\n\nClick **Write** and wait for the image to be written to the SD Card.\nThis may take some time! When it's finished, close the Raspberry Pi\nImager. Then remove the Micro SD Card from your computer, and re-insert\nit.\n\nThe Ubuntu image for Raspberry Pi uses\ncloud-init to configure the system\nat boot time. This means that in your SD card `system-boot` volume,\nthere should be a YAML file, called `network-config`. Open this file in\nVS Code (or your favourite text editor).\n\nEdit it so that it looks like the following. 
The indentation is\nimportant, and it's the 'wifis' section that you're editing to match\nyour wifi configuration. Replace 'YOUR-WIFI-SSD' with your WiFi's name,\nand 'YOUR-WIFI-PASSWORD' with your WiFi password.\n\n``` yaml\nversion: 2\nethernets:\n eth0:\n dhcp4: true\n optional: true\nwifis:\n wlan0:\n dhcp4: true\n optional: true\n access-points:\n \"YOUR-WIFI-SSID\":\n password: \"YOUR-WIFI-PASSWORD\"\n```\n\nNow eject the SD card (safely!) from your computer, insert it into the\nPi, and power it up! It may take a few minutes to start up, at least the\nfirst time. You'll need to monitor your network to wait for the Pi to\nconnect. When it does, ssh into the Pi with\n`ssh ubuntu@`. The password is also `ubuntu`.\n\nYou'll be prompted to change your password to something secret.\n\nOnce you've set your password update the operating system by running the\nfollowing commands:\n\n``` bash\nsudo apt update\nsudo apt upgrade\n```\n\n## Install MongoDB\n\nNow let's install MongoDB. This is done as follows:\n\n``` bash\n# Install the MongoDB 4.4 GPG key:\nwget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -\n\n# Add the source location for the MongoDB packages:\necho \"deb arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list\n\n# Download the package details for the MongoDB packages:\nsudo apt-get update\n\n# Install MongoDB:\nsudo apt-get install -y mongodb-org\n```\n\nThe instructions above have mostly been taken from [Install MongoDB\nCommunity Edition on\nUbuntu\n\n## Run MongoDB\n\nUbuntu 20.04 uses Systemd to run background services, so to set up\nmongod to run in the background, you need to enable and start the\nservice:\n\n``` bash\n# Ensure mongod config is picked up:\nsudo systemctl daemon-reload\n\n# Tell systemd to run mongod on reboot:\nsudo systemctl enable mongod\n\n# Start up mongod!\nsudo systemctl start mongod\n```\n\nNow, you can check to see if the service is running correctly by\nexecuting the following command. You should see something like the\noutput below it:\n\n``` bash\n$ sudo systemctl status mongod\n\n\u25cf mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: active (running) since Tue 2020-08-09 08:09:07 UTC; 4s ago\n Docs: https://docs.mongodb.org/manual\nMain PID: 2366 (mongod)\n CGroup: /system.slice/mongod.service\n \u2514\u25002366 /usr/bin/mongod --config /etc/mongod.conf\n```\n\nIf your service is running correctly, you can run the MongoDB client,\n`mongo`, from the command-line to connect:\n\n``` bash\n# Connect to the local mongod, on the default port:\n$ mongo\nMongoDB shell version v4.4.0\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"576ec12b-6c1a-4382-8fae-8b6140e76d51\") }\nMongoDB server version: 4.4.0\n---\nThe server generated these startup warnings when booting:\n 2020-08-09T08:09:08.697+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\n 2020-08-09T08:09:10.712+00:00: Access control is not enabled for the database. 
Read and write access to data and configuration is unrestricted\n---\n---\n Enable MongoDB's free cloud-based monitoring service, which will then receive and display\n metrics about your deployment (disk utilization, CPU, operation statistics, etc).\n\n The monitoring data will be available on a MongoDB website with a unique URL accessible to you\n and anyone you share the URL with. MongoDB may use this information to make product\n improvements and to suggest MongoDB products and deployment options to you.\n\n To enable free monitoring, run the following command: db.enableFreeMonitoring()\n To permanently disable this reminder, run the following command: db.disableFreeMonitoring()\n---\n```\n\nFirst, check the warnings. You can ignore the recommendation to run the\nXFS filesystem, as this is just a small, local install. The warning\nabout access control not being enabled for the database is important\nthough! You'll fix that in the next section. At this point, if you feel\nlike it, you can enable the free\nmonitoring\nthat MongoDB provides, by running `db.enableFreeMonitoring()` inside the\nmongo shell.\n\n## Securing MongoDB\n\nHere's the next, essential steps, that other tutorials miss out, for\nsome reason. Recent versions of mongod won't connect to the network\nunless user authentication has been configured. Because of this, at the\nmoment your database is only accessible from the Raspberry Pi itself.\nThis may actually be fine, if like me, the services you're running with\nMongoDB are running on the same device. It's still a good idea to set a\nusername and password on the database.\n\nHere's how you do that, inside `mongo` (replace SUPERSECRETPASSWORD with\nan *actual* secret password!):\n\n``` javascript\nuse admin\ndb.createUser( { user: \"admin\",\n pwd: \"SUPERSECRETPASSWORD\",\n roles: \"userAdminAnyDatabase\",\n \"dbAdminAnyDatabase\",\n \"readWriteAnyDatabase\"] } )\nexit\n```\n\nThe three roles listed give the `admin` user the ability to administer\nall user accounts and data in MongoDB. Make sure your password is\nsecure. You can use a [random password\ngenerator to be safe.\n\nNow you need to reconfigure mongod to run with authentication enabled,\nby adding a couple of lines to `/etc/mongod.conf`. If you're comfortable\nwith a terminal text editor, such as vi or emacs, use one of those. I\nused nano, because it's a little simpler, with\n`sudo nano /etc/mongod.conf`. Add the following two lines somewhere in\nthe file. 
Like the `network-config` file you edited earlier, it's a YAML\nfile, so the indentation is important!\n\n``` yaml\n# These two lines must be uncommented and in the file together:\nsecurity:\n authorization: enabled\n```\n\nAnd finally, restart mongod:\n\n``` bash\nsudo systemctl restart mongod\n```\n\nEnsure that authentication is enforced by connecting `mongo` without\nauthentication:\n\n``` bash\n$ mongo\nMongoDB shell version v4.4.0\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"4002052b-1a39-4158-8a99-234cfd818e30\") }\nMongoDB server version: 4.4.0\n> db.adminCommand({listDatabases: 1})\n{\n \"ok\" : 0,\n \"errmsg\" : \"command listDatabases requires authentication\",\n \"code\" : 13,\n \"codeName\" : \"Unauthorized\"\n}\n> exit\n```\n\nEnsure you've exited `mongo` and now test that you can connect and\nauthenticate with the user details you created:\n\n``` bash\n$ mongo -u \"admin\" -p \"SUPERSECRETPASSWORD\"\nMongoDB shell version v4.4.0\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\nImplicit session: session { \"id\" : UUID(\"3dee8ec3-6e7f-4203-a6ad-976b55ea3020\") }\nMongoDB server version: 4.4.0\n> db.adminCommand({listDatabases: 1})\n{\n \"databases\" : \n {\n \"name\" : \"admin\",\n \"sizeOnDisk\" : 151552,\n \"empty\" : false\n },\n {\n \"name\" : \"config\",\n \"sizeOnDisk\" : 36864,\n \"empty\" : false\n },\n {\n \"name\" : \"local\",\n \"sizeOnDisk\" : 73728,\n \"empty\" : false\n },\n {\n \"name\" : \"test\",\n \"sizeOnDisk\" : 8192,\n \"empty\" : false\n }\n ],\n \"totalSize\" : 270336,\n \"ok\" : 1\n}\n> exit\n```\n\n## Make MongoDB Available to your Network\n\n**This step is optional!** Now that you've configured authentication on\nyour server, if you want your database to be available to other\ncomputers on your network, you need to:\n\n- Bind MongoDb to the Raspberry Pi's public IP address\n- Open up port `27017` on the Raspberry Pi's firewall.\n\n>\n>\n>If you *don't* want to access your data from your network, *don't*\n>follow these steps! It's always better to leave things more secure, if\n>possible.\n>\n>\n\nFirst, edit `/etc/mongod.conf` again, the same way as before. This time,\nchange the IP address to 0.0.0.0:\n\n``` yaml\n# Change the bindIp to '0.0.0.0':\nnet:\n port: 27017\n bindIp: 0.0.0.0\n```\n\nAnd restart `mongod` again:\n\n``` bash\nsudo systemctl restart mongod\n```\n\nOpen up port 27017 on your Raspberry Pi's firewall:\n\n``` bash\nsudo ufw allow 27017/tcp\n```\n\nNow, on *another computer on your network*, with the MongoDB client\ninstalled, run the following to ensure that `mongod` is available on\nyour network:\n\n``` bash\n# Replace YOUR-RPI-IP-ADDRESS with your Raspberry Pi's actual IP address:\nmongo --host 'YOUR-RPI-IP-ADDRESS'\n```\n\nIf it connects, then you've successfully installed and configured\nMongoDB on your Raspberry Pi!\n\n### Security Caveats\n\n*This short section is extremely important. Don't skip it.*\n\n- *Never* open up an instance of `mongod` to the internet without\n authentication enabled.\n- Configure your firewall to limit the IP addresses which can connect\n to your MongoDB port. 
(Your Raspberry Pi has just been configured to\n allow connections from *anywhere*, with the assumption that your\n home network has a firewall blocking access from outside.)\n- Ensure the database user password you created is secure!\n- Set up different database users for each app that connects to your\n database server, with *only* the permissions required by each app.\n\nMongoDB comes with sensible security defaults. It uses TLS, SCRAM-based\npassword authentication, and won't bind to your network port without\nauthentication being set up. It's still up to you to understand how to\nsecure your Raspberry Pi and any data you store within it. Go and read\nthe [MongoDB Security\nChecklist\nfor further information on keeping your data secure.\n\n## Wrapping Up\n\nAs you can see, there are a few steps to properly installing and\nconfiguring MongoDB yourself. I hadn't done it for a while, and I'd\nforgotten how complicated it can be! For this reason, you should\ndefinitely consider using MongoDB Atlas\nwhere a lot of this is taken care of for you. Not only is the\nfree-forever tier quite generous for small use-cases, there are also a\nbunch of extra services thrown in, such as serverless functions,\ncharting, free-text search, and more!\n\nYou're done! Go write some code in your favourite programming language,\nand if you're proud of it (or even if you're just having some trouble\nand would like some help) let us\nknow!. Check out all the cool blog\nposts on the MongoDB Developer Hub,\nand make sure to bookmark MongoDB\nDocumentation\n", "format": "md", "metadata": {"tags": ["MongoDB", "RaspberryPi"], "pageDescription": "Install and correctly configure MongoDB on Raspberry Pi", "contentType": "Tutorial"}, "title": "Install & Configure MongoDB on the Raspberry Pi", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/connectors/measuring-mongodb-kafka-connector-performance", "action": "created", "body": "# Measuring MongoDB Kafka Connector Performance\n\nWith today\u2019s need of flexible event-driven architectures, companies across the globe choose best of breed technologies like MongoDB and Apache Kafka to help solve these challenges. While these two complementary technologies provide the power and flexibility to solve these large scale challenges, performance has always been at the forefront of concerns. In this blog, we will cover how to measure performance of the MongoDB Connector for Apache Kafka in both a source and sink configuration. \n\n## Measuring Sink Performance\n\nRecall that the MongoDB sink connector writes data from a Kafka topic into MongoDB. Writes by default use the ReplaceOneModel where the data is either updated if it's present on the destination cluster or created as a new document if it is not present. You are not limited to this upsert behavior. In fact, you can change the sink to perform deletes or inserts only. These write behaviors are defined by the Write Model Strategy setting in the sink configuration.\n\nTo determine the performance of the sink connector, we need a timestamp of when the document was written to MongoDB. Currently, the only write model strategy that writes a timestamp field on behalf of the user is UpdateOneTimestampsStrategy and UpdateOneBusinessKeyTimestampStrategy. These two write models insert a new field named **_insertedTS**, which can be used to query the lag between Kafka and MongoDB.\n\nIn this example, we\u2019ll use MongoDB Atlas. 
MongoDB Atlas is a public cloud MongoDB data platform providing out-of-the-box capabilities such as MongoDB Charts, a tool to create visual representations of your MongoDB data. If you wish to follow along, you can create a free forever tier.\n\n### Generate Sample Data\n\nWe will generate sample data using the datagen Kafka Connector provided by Confluent. Datagen is a convenient way of creating test data in the Kafka ecosystem. There are a few quickstart schema specifications bundled with this connector. We will use a quickstart called **users**.\n\n```\ncurl -X POST -H \"Content-Type: application/json\" --data '\n {\"name\": \"datagen-users\",\n \"config\": { \"connector.class\": \"io.confluent.kafka.connect.datagen.DatagenConnector\",\n \"kafka.topic\": \"topic333\",\n \"quickstart\": \"users\",\n \"key.converter\": \"org.apache.kafka.connect.storage.StringConverter\",\n \"value.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"value.converter.schemas.enable\": \"false\",\n \"max.interval\": 50,\n \"iterations\": 5000,\n \"tasks.max\": \"2\"\n}}' http://localhost:8083/connectors -w \"\\n\"\n\n```\n\n### Configure Sink Connector\n\nNow that the data is generated and written to the Kafka topic, \u201ctopic333,\u201d let\u2019s create our MongoDB sink connector to write this topic data into MongoDB Atlas. As stated earlier, we will add a field **_insertedTS** for use in calculating the lag between the message timestamp and this value. To perform the insert, let\u2019s use the **UpdateOneTimestampsStrategy** write mode strategy.\n\n```\ncurl -X POST -H \"Content-Type: application/json\" --data '\n{\"name\": \"kafkametadata3\",\n \"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSinkConnector\",\n \"topics\": \"topic333\",\n \"connection.uri\": \"MONGODB CONNECTION STRING GOES HERE\",\n \"writemodel.strategy\": \"com.mongodb.kafka.connect.sink.writemodel.strategy.UpdateOneTimestampsStrategy\",\n \"database\": \"kafka\",\n \"collection\": \"datagen\",\n \"errors.log.include.messages\": true,\n \"errors.deadletterqueue.context.headers.enable\": true,\n \"key.converter\": \"org.apache.kafka.connect.storage.StringConverter\",\n \"value.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"document.id.strategy\": \"com.mongodb.kafka.connect.sink.processor.id.strategy.KafkaMetaDataStrategy\",\n \"tasks.max\": 2,\n \"value.converter.schemas.enable\":false,\n \"transforms\": \"InsertField\",\n \"transforms.InsertField.type\": \"org.apache.kafka.connect.transforms.InsertField$Value\",\n \"transforms.InsertField.offset.field\": \"offsetColumn\",\n \"transforms\": \"InsertField\",\n \"transforms.InsertField.type\": \"org.apache.kafka.connect.transforms.InsertField$Value\",\n \"transforms.InsertField.timestamp.field\": \"timestampColumn\"\n}}' http://localhost:8083/connectors -w \"\\n\"\n\n```\n\nNote: The field **_insertedTS** is populated with the time value of the Kafka connect server.\n\n### Viewing Results with MongoDB Charts\n\nTake a look at the MongoDB Atlas collection \u201cdatagen\u201d and familiarize yourself with the added fields.\n\nIn this blog, we will use MongoDB Charts to display a performance graph. 
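If you'd like a quick sanity check before building any charts, you can compute the lag directly in the mongo shell with an aggregation. This is only a sketch: it assumes the `kafka.datagen` collection and the `_insertedTS` and `timestampColumn` fields produced by the sink configuration above, and it mirrors the same `$subtract`/`$convert` logic used in the view created next.

```
use kafka
db.datagen.aggregate([
  // Per-document lag in milliseconds between the Kafka record timestamp
  // (timestampColumn) and the time the sink wrote the document (_insertedTS):
  { "$project": {
      "_id": 0,
      "lagMillis": {
        "$subtract": [
          "$_insertedTS",
          { "$convert": { "input": "$timestampColumn", "to": "date" } }
        ]
      }
  }},
  // Summarize the lag across the whole collection:
  { "$group": {
      "_id": null,
      "avgLagMillis": { "$avg": "$lagMillis" },
      "maxLagMillis": { "$max": "$lagMillis" }
  }}
])
```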
To make it easy to build the chart, we will create a view.

```
use kafka
db.createView("SinkView", "datagen", [
  {
    "$sort": {
      "_insertedTS": 1,
      "timestampColumn": 1
    }
  },
  {
    "$project": {
      "_insertedTS": 1,
      "timestampColumn": 1,
      "_id": 0
    }
  },
  {
    "$addFields": {
      "diff": {
        "$subtract": [
          "$_insertedTS",
          {
            "$convert": {
              "input": "$timestampColumn",
              "to": "date"
            }
          }
        ]
      }
    }
  }
])

```

To create a chart, click on the Charts tab in MongoDB Atlas.

Click on Datasources and “Add Data Source.” The dialog will show the view that was created.

Select the SinkView and click Finish.

Download the MongoDB Sink performance Chart from Gist.

```
curl https://gist.githubusercontent.com/RWaltersMA/555b5f17791ecb58e6e683c54bafd381/raw/748301bcb7ae725af4051d40b2e17a8882ef2631/sink-chart-performance.charts -o sink-performance.charts

```

Choose **Import Dashboard** from the Add Dashboard dropdown and select the downloaded file.

Load the sink-performance.charts file.

Select the kafka.SinkView as the data source at the destination, then click Save.

Now the KafkaPerformance chart is ready to view. When you click on the chart, you will see something like the following:

This chart shows statistics on the difference between the timestamp on the Kafka message and the time the document was written to MongoDB. In the above example, the maximum time delta is approximately one second (997ms) from inserting 40,000 documents.

## Measuring Source Performance

To measure the source, we will take a different approach: using KSQL to create a stream of the clusterTime timestamp from the MongoDB change stream and the time the row was written in the Kafka topic. From here, we can push this data into a MongoDB sink and display the results in a MongoDB Chart.

### Configure Source Connector

The first step will be to create the MongoDB Source connector that will be used to push data onto the Kafka topic.

```
curl -X POST -H "Content-Type: application/json" --data '
{"name": "mongo-source-perf",
 "config": {
 "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
 "errors.log.enable": "true",
 "errors.log.include.messages": "true",
 "connection.uri": "mongodb+srv://MONGODB CONNECTION STRING HERE",
 "database": "kafka",
 "collection": "source-perf-test",
 "mongo.errors.log.enable": "true",
 "topic.prefix":"mdb",
 "output.json.formatter" : "com.mongodb.kafka.connect.source.json.formatter.SimplifiedJson",
 "output.format.value":"schema",
 "output.schema.infer.value":true,
 "output.format.key":"json",
 "publish.full.document.only": "false",
 "change.stream.full.document": "updateLookup"
}}' http://localhost:8083/connectors -w "\n"
```

### Generate Sample Data

There are many ways to generate sample data on MongoDB.
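For example, if you only need a handful of test documents, a simple loop in the mongo shell will do. This is a rough sketch that writes to the `kafka.source-perf-test` namespace used by the source connector configuration above, with made-up field values:

```
use kafka
// Insert 1,000 small documents shaped like the users schema used below:
for (var i = 0; i < 1000; i++) {
  db["source-perf-test"].insertOne({
    "name": "user" + i,
    "email": "user" + i + "@example.com",
    "password": "not-a-real-password-hash"
  });
}
```

The doc-gen tool described next is more convenient once you want larger volumes or a specific schema.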
In this blog post, we will use the doc-gen tool (Github repo) to quickly create sample documents based upon the user\u2019s schema, which is defined as follows:\n\n```\n{\n \"_id\" : ObjectId(\"59b99db4cfa9a34dcd7885b6\"),\n \"name\" : \"Ned Stark\",\n \"email\" : \"sean_bean@gameofthron.es\",\n \"password\" : \"$2b$12$UREFwsRUoyF0CRqGNK0LzO0HM/jLhgUCNNIJ9RJAqMUQ74crlJ1Vu\"\n}\n\n```\n\nTo generate data in your MongoDB cluster, issue the following:\n\n```\ndocker run robwma/doc-gen:1.0 python doc-gen.py -s '{\"name\":\"string\",\"email\":\"string\",\"password\":\"string\"}' -c \"MONGODB CONNECTION STRING GOES HERE\" -t 1000 -db \"kafka\" -col \"source-perf-test\"\n```\n\n### Create KSQL Queries\n\nLaunch KSQL and create a stream of the clusterTime within the message. \n\nNote: If you do not have KSQL, you can run it as part of the Confluent Platform all in Docker using the following instructions.\n\nIf using Control Center, click ksQLDB, click Editor, and then paste in the following KSQL:\n\n```\nCREATE STREAM stats (\n clusterTime BIGINT\n ) WITH (\n KAFKA_TOPIC='kafka.source-perf-test',\n VALUE_FORMAT='AVRO'\n );\n\n```\n\nThe only information that we need from the message is the clusterTime. This value is provided within the change stream event. For reference, this is a sample event from change streams.\n\n```\n{\n _id: { },\n \"operationType\": \"\",\n \"fullDocument\": { },\n \"ns\": {\n \"db\": ,\n \"coll\": \n },\n \"to\": {\n \"db\": ,\n \"coll\": \n },\n \"documentKey\": {\n _id: \n },\n \"updateDescription\": {\n \"updatedFields\": { },\n \"removedFields\": , ... ]\n },\n \"clusterTime\": ,\n \"txnNumber\": ,\n \"lsid\": {\n \"id\": ,\n \"uid\": \n }\n}\n\n```\n\n**Step 3**\n\nNext, we will create a ksql stream that calculates the difference between the cluster time (time when it was created on MongoDB) and the time where it was inserted on the broker. \n\n```\nCREATE STREAM STATS2 AS\n select ROWTIME - CLUSTERTIME as diff, 1 AS ROW from STATS EMIT CHANGES;\n\n```\n\nAs stated previously, this diff value may not be completely accurate if the clocks on Kafka and MongoDB are different. \n\n**Step 4**\n\nTo see how the values change over time, we can use a window function and write the results to a table which can then be written into MongoDB via a sink connector.\n\n```\nSET 'ksql.suppress.enabled' = 'true';\n\nCREATE TABLE STATSWINDOW2 AS\n SELECT AVG( DIFF ) AS AVG, MAX(DIFF) AS MAX, count(*) AS COUNT, ROW FROM STATS2\n WINDOW TUMBLING (SIZE 10 SECONDS)\n GROUP BY ROW\n EMIT FINAL;\n\n```\n\nWindowing lets you control how to group records that have the same key for stateful operations, such as aggregations or joins into so-called windows. There are three ways to define time windows in ksqlDB: hopping windows, tumbling windows, and session windows. 
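For comparison only (it isn't used in this walkthrough), a hopping version of the same aggregation, with ten-second windows that advance every five seconds and therefore overlap, would look something like this:

```
CREATE TABLE STATSWINDOW_HOPPING AS
  SELECT AVG(DIFF) AS AVG, MAX(DIFF) AS MAX, COUNT(*) AS COUNT, ROW FROM STATS2
  WINDOW HOPPING (SIZE 10 SECONDS, ADVANCE BY 5 SECONDS)
  GROUP BY ROW
  EMIT CHANGES;

```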
In this example, we will use tumbling as it is a fixed-duration, non-overlapping, and gap-less window.\n\n![\n\n### Configure Sink Connector\n\nThe final step is to create a sink connector to insert all this aggregate data on MongoDB.\n\n```\ncurl -X POST -H \"Content-Type: application/json\" --data '\n{\n \"name\": \"MongoSource-SinkPerf\",\n \"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSinkConnector\",\n \"tasks.max\": \"1\",\n \"errors.log.enable\": true,\n \"errors.log.include.messages\": true,\n \"topics\": \"STATSWINDOW2\",\n \"errors.deadletterqueue.context.headers.enable\": true,\n \"connection.uri\": \"MONGODB CONNECTION STRING GOES HERE\",\n \"database\": \"kafka\",\n \"collection\": \"sourceStats\",\n \"mongo.errors.log.enable\": true,\n \"transforms\": \"InsertField\",\n \"transforms.InsertField.type\": \"org.apache.kafka.connect.transforms.InsertField$Value\",\n \"transforms.InsertField.timestamp.field\": \"timestampColumn\"\n}}' http://localhost:8083/connectors -w \"\\n\"\n\n```\n\n### Viewing Results with MongoDB Charts\n\nDownload the MongoDB Source performance Chart from Gist. \n\n```\ncurl https://gist.githubusercontent.com/RWaltersMA/011f1473cf937badc61b752a6ab769d4/raw/bc180b9c2db533536e6c65f34c30b2d2145872f9/mongodb-source-performance.chart -o source-performance.charts\n\n```\n\nChoose **Import Dashboard** from the Add Dashboard dropdown and select the downloaded file.\n\nYou will need to create a Datasource to the new sink collection, \u201ckafka.sourceStats.\u201d\n\nClick on the Kafka Performance Source chart to view the statistics.\n\nIn the above example, you can see the 10-second sliding window performance statistics for 1.5M documents. The average difference was 252s, with the maximum difference being 480s. Note that some of this delta could be differences in clocks between MongoDB and Kafka. While not taking these numbers as absolute, simply using this technique is good enough to determine trends and if the performance is getting worse or better.\n\nIf you have any opinions on features or functionality enhancements that you would like to see with respect to monitoring performance or monitoring the MongoDB Connector for Apache Kafka in general, please add a comment to KAFKA-64. \n\nHave any questions? Check out our Connectors and Integrations MongoDB community forum.", "format": "md", "metadata": {"tags": ["Connectors"], "pageDescription": "Learn about measuring the performance of the MongoDB Connector for Apache Kafka in both a source and sink configuration.", "contentType": "Article"}, "title": "Measuring MongoDB Kafka Connector Performance", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/getting-started-realm-sdk-unity", "action": "created", "body": "# Getting Started with the Realm SDK for Unity\n\nDid you know that MongoDB has a Realm\nSDK for the\nUnity game development framework that makes\nworking with game data effortless? The Realm SDK is currently an alpha\nrelease, but you can already start using it to build persistence into\nyour cross platform gaming projects.\n\nA few weeks ago I streamed about and\nwrote about creating an infinite\nrunner\ntype game using Unity and the Realm SDK for Unity. Realm was used for\nstoring the score between scenes and sessions within the game.\n\nThere were a lot of deep topics in the infinite runner (think Temple Run\nor Subway Surfer) example, so I wanted to take a step back. 
In this\ntutorial, we're going to spend less time making an interesting game and\nmore time including and using Realm within a Unity\nproject.\n\nTo get an idea of what we're going to accomplish, take a look at the\nfollowing animated image:\n\nIn the above example, we have three rectangles, each of a different\ncolor. When clicking our mouse on a rectangle, the numeric values\nincrease. If the game were to be closed and then opened again, the\nnumeric values would be retained.\n\n## The Requirements\n\nThere aren't many requirements to using Realm with Unity, and once Realm\nbecomes production ready, those requirements will be even less. However,\nfor now you need the following:\n\n- Unity 2020.2.4f1+\n- Realm SDK for\n Unity 10.1.1+\n\nFor now, the Realm SDK for Unity needs to be downloaded and imported\nmanually into a project. This will change when the SDK can be added\nthrough the Unity Asset Store.\n\nWhen you download Unity, you'll likely be using a different and\npotentially older version by default. Within the Unity Hub software, pay\nattention to the version you're using and either upgrade or downgrade as\nnecessary.\n\n## Adding the Realm SDK for Unity to a Project\n\nFrom GitHub, download the\nlatest Realm SDK for\nUnity tarball. If given the option, choose the **bundle** file. For\nexample, **realm.unity.bundle-10.1.1.tgz** is what I'm using.\n\nCreate a new Unity project and use the 2D template when prompted.\n\nWithin a Unity project, choose **Window -> Package Manager** and then\nclick the plus icon to add a tarball.\n\nThe process of importing the tarball should only take a minute or two.\nOnce it has been added, it is ready for use within the game. Do note\nthat adding the tarball to your project only adds a reference based on\nits current location on your disk. Moving or removing the tarball on\nyour filesystem will break the link.\n\n## Designing a Data Model for the Realm Objects Within the Game\n\nBefore we can start persisting data to Realm and then accessing it\nlater, we need to define a model of what our data will look like. Since\nRealm is an object-oriented database, we're going to define a class with\nappropriate member variables and methods. This will represent what the\ndata looks like when persisted.\n\nTo align with the basic example that we're interested in, we essentially\nwant to store various score information.\n\nWithin the Unity project, create a new script file titled\n**GameModel.cs** with the following C# code:\n\n``` csharp\nusing Realms;\n\npublic class GameModel : RealmObject {\n\n PrimaryKey]\n public string gamerTag { get; set; }\n\n public int redScore { get; set; }\n public int greenScore { get; set; }\n public int whiteScore { get; set; }\n\n public GameModel() { }\n\n public GameModel(string gamerTag, int redScore, int greenScore, int whiteScore) {\n this.gamerTag = gamerTag;\n this.redScore = redScore;\n this.greenScore = greenScore;\n this.whiteScore = whiteScore;\n }\n\n}\n```\n\nThe `redScore`, `greenScore`, and `whiteScore` variables will keep the\nscore for each square on the screen. Since a game is usually tied to a\nperson or a computer, we need to define a [primary\nkey\nfor the associated data. The Realm primary key uniquely identifies an\nobject within a Realm. 
For this example, we're use a `gamerTag` variable\nwhich represents a person or player.\n\nTo get an idea of what our model might look like as JSON, take the\nfollowing:\n\n``` json\n{\n \"gamerTag\": \"poketrainernic\",\n \"redScore\": 0,\n \"greenScore\": 0,\n \"whiteScore\": 0\n}\n```\n\nFor this example, and many Realm with Unity examples, we won't ever have\nto worry about how it looks like as JSON since everything will be done\nlocally as objects.\n\nWith the `RealmObject` class configured, we can make use of it inside\nthe game.\n\n## Interacting with Persisted Realm Data in the Game\n\nThe `RealmObject` only represents the storage model for our data. There\nare extra steps when it comes to interacting with the data that is\nmodeled using it.\n\nWithin the Unity project, create a **GameController.cs** file with the\nfollowing C# code:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\nusing Realms;\nusing UnityEngine.UI;\n\npublic class GameController : MonoBehaviour {\n\n private Realm _realm;\n private GameModel _gameModel;\n\n public Text scoreText;\n\n void OnEnable() {\n _realm = Realm.GetInstance();\n _gameModel = _realm.Find(\"poketrainernic\");\n if(_gameModel == null) {\n _realm.Write(() => {\n _gameModel = _realm.Add(new GameModel(\"poketrainernic\", 0, 0, 0));\n });\n }\n }\n\n void OnDisable() {\n _realm.Dispose();\n }\n\n public void SetButtonScore(string color, int inc) {\n switch(color) {\n case \"RedSquare\":\n _realm.Write(() => {\n _gameModel.redScore++;\n });\n break;\n case \"GreenSquare\":\n _realm.Write(() => {\n _gameModel.greenScore++;\n });\n break;\n case \"WhiteSquare\":\n _realm.Write(() => {\n _gameModel.whiteScore++;\n });\n break;\n default:\n Debug.Log(\"Color Not Found\");\n break;\n }\n }\n\n void Update() {\n scoreText.text = \"Red: \" + _gameModel.redScore + \"\\n\" + \"Green: \" + _gameModel.greenScore + \"\\n\" + \"White: \" + _gameModel.whiteScore;\n }\n\n}\n```\n\nIn the above code, we have a few things going on, all related to\ninteracting with Realm.\n\nIn the `OnEnable` method, we are getting an instance of our Realm\ndatabase and we are finding an object based on our `GameModel` class.\nThe primary key is the `gamerTag` string variable, so we are providing a\nvalue to query on. If the query returns a null value, it means that no\ndata exists based on the primary key used. In that circumstance, we\ncreate a `Write` block and add a new object based on the constructor\nwithin the `GameModel` class. By the end of the query or creation of our\ndata, we'll have a `_gameModel` object that we can work with in our\ngame.\n\nWe're hard coding the \"poketrainernic\" value because we don't plan to\nuse any kind of authentication in this example. Everyone who plays this\ngame is considered the \"poketrainernic\" player.\n\nThe `OnDisable` method is for cleanup. It is important to dispose of the\nRealm instance when the game ends to prevent any unexpected behavior.\n\nFor this particular game example, most of our logic happens in the\n`SetButtonScore` method. In the `SetButtonScore` method, we are checking\nto see which color should be incremented and then we are doing so. The\namazing thing is that changing the `_gameModel` object changes what is\npersisted, as long as the changes happen in a `Write` block. 
No having\nto write queries or do anything out of the ordinary beyond just working\nwith your objects as you would normally.\n\nWhile we don't have a `Text` object configured yet within our game, the\n`Update` method will update the text on the screen every frame. If one\nof the values in our Realm instance changes, it will be reflected on the\nscreen.\n\n## Adding Basic Logic to the Game Objects Within the Scene\n\nAt this point, we have a `RealmObject` data model for our persisted data\nand we have a class for interacting with that data. We don't have\nanything to tie it together visually like you'd expect in a game. In\nother words, we need to be able to click on a colored sprite and have it\npersist something new.\n\nWithin the Unity project, create a **Button.cs** file with the following\nC# code:\n\n``` csharp\nusing System.Collections;\nusing System.Collections.Generic;\nusing UnityEngine;\n\npublic class Button : MonoBehaviour {\n\n public GameController game;\n\n void OnMouseDown() {\n game.SetButtonScore(gameObject.name, 1);\n }\n\n}\n```\n\nThe `game` variable in the above code will eventually be from a game\nobject within the scene and configured through a series of dragging and\ndropping, but we're not there yet. As of right now, we're focusing on\nthe code, and less on game objects.\n\nThe `OnMouseDown` method is where the magic happens for this script. The\ngame object that this script will eventually be attached to will have a\ncollider\nwhich gives us access to the `OnMouseDown` method. When the game object\nis clicked, we use the `SetButtonScore` method to send the name of the\ncurrent game object as well as a value to increase the score by.\nRemember, inside the `SetButtonScore` method we are expecting a string\nvalue for our switch statement. In the next few steps, naming the game\nobjects appropriately is critical based on our already applied logic.\n\nIf you're not sure where `gameObject` is coming from, it is inherited as\npart of the `MonoBehavior` class, and it represents the current game\nobject to which the script is currently attached to.\n\n## Game Objects, Colliders, and the Gaming Wrap-Up\n\nThe Unity project has a bunch of short scripts sitting out in the ether.\nIt's time to add game objects to the scene so we can attach the scripts\nand do something interesting.\n\nBy the time we're done, our Unity editor should look something like the\nfollowing:\n\nWe need to add a few game objects, add the scripts to those game\nobjects, then reference a few other game objects. Yes, it sounds\ncomplicated, but it really isn't!\n\nWithin the Unity editor, add the following game objects to the scene.\nWe'll walk through adding them and some of the specifics next:\n\n- GameController\n- RedSquare\n- GreenSquare\n- WhiteSquare\n- Canvas\n - Scores\n- EventSystem\n\nNow the `Scores` game object is for our text. You can add a `Text` game\nobject from the menu and it will add the `Canvas` and `EventSystem` for\nyou. You don't need to add a `Canvas` or `EventSystem` manually if Unity\ncreated one for you. Just make sure you name and position the game\nobject for scores appropriately.\n\nIf the `Scores` text is too small or not visible, make sure the\nrectangular boundaries for the text is large enough.\n\nThe `RedSquare`, `GreenSquare`, and `WhiteSquare` game objects are\n`Square` sprites, each with a different color. These sprites can be\nadded using the **GameObject -> 2D Object -> Sprites -> Square** menu\nitem. 
You'll need to rename them to the desired name after adding them\nto the scene. Finally, the `GameController` is nothing more than an\nempty game object.\n\nDrag the **Button.cs** script to the inspector panel of each of the\ncolored square sprites. The sprites depend on being able to access the\n`SetButtonScore` method, so the `GameController` game object must be\ndragged onto the **Score Text** field within the script area on each of\nthe squares as well. Drag the **GameController.cs** script to the\n`GameController` game object. Next, drag the `Scores` game object into\nthe scripts section of the `GameController` game object so that the\n`GameController` game object can control the score text.\n\nWe just did a lot of drag and drop on the game objects within the scene.\nWe're not quite done yet though. In order to use the `OnMouseDown`\nmethod for our squares, they need to have a collider. Make sure to add a\n**Box Collider 2D** to each of the squares. The **Box Collider 2D** is a\ncomponent that can be added to the game objects through the inspector.\n\nYou should be able to run the game with success as of now! You can do\nthis by either creating and running a build from the **File** menu, or\nby using the play button within your editor to preview the game.\n\n## Conclusion\n\nYou just saw how to get started with the Realm SDK for\nUnity. I wrote another example of using Realm with\nUnity, but the game was a little more exciting, which added more\ncomplexity. Once you have a firm understanding of how Realm works in\nUnity, it is worth checking out Build an Infinite Runner Game with\nUnity and the Realm Unity\nSDK.\n\nAs previously mentioned, the Realm SDK for Unity is currently an alpha\nrelease. Expect that there will be problems at some point, so it\nprobably isn't best to use it in your production ready game. However,\nyou should be able to get comfortable including it.\n\nFor more examples on using the Realm SDK, check out the C#\ndocumentation.\n\nQuestions? Comments? We'd love to connect with you. Join the\nconversation on the MongoDB Community\nForums.\n\n", "format": "md", "metadata": {"tags": ["Realm", "C#", "Unity"], "pageDescription": "Learn how to get started with the Realm SDK for Unity for data persistance in your game.", "contentType": "Tutorial"}, "title": "Getting Started with the Realm SDK for Unity", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/under-used-features", "action": "created", "body": "# Three Underused MongoDB Features\n\nAs a Developer Advocate for MongoDB, I have quite a few conversations with developers. Many of these developers have never used MongoDB, and so the conversation is often around what kind of data MongoDB is particularly good for. (Spoiler: Nearly *all* of them! MongoDB is a general purpose database that just happens to be centered around documents instead of tables.)\n\nBut there are lots of developers out there who already use MongoDB every day, and in those situations, my job is to make sure they know how to use MongoDB effectively. I make sure, first and foremost, that these developers know about MongoDB's Aggregation Framework, which is, in my opinion, MongoDB's most powerful feature. It is relatively underused. 
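To give a flavour of what an aggregation pipeline looks like, here is a tiny example in PyMongo. The collection and field names are invented purely for illustration:

``` python
from pymongo import MongoClient

# An invented "orders" collection, purely for illustration:
coll = MongoClient()["shop"]["orders"]

# Count orders per customer and keep the five busiest customers:
pipeline = [
    {"$group": {"_id": "$customer_id", "order_count": {"$sum": 1}}},
    {"$sort": {"order_count": -1}},
    {"$limit": 5},
]

for doc in coll.aggregate(pipeline):
    print(doc)
```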
If you're not using the Aggregation Framework in your projects, then either your project is very simple, or you could probably be doing things more efficiently by adding some aggregation pipelines.\n\nBut this article is not about the Aggregation Framework! This article is about three *other* features of MongoDB that deserve to be better known: TTL Indexes, Capped Collections, and Change Streams.\n\n## TTL Indexes\n\nOne of the great things about MongoDB is that it's so *easy* to store data in it, without having to go through complex steps to map your data to the model expected by your database's schema expectations.\n\nBecause of this, it's quite common to use MongoDB as a cache as well as a database, to store things like session information, authentication data for third-party services, and other things that are relatively short-lived.\n\nA common idiom is to store an expiry date in the document, and then when retreiving the document, to compare the expiry date to the current time and only use it if it's still valid. In some cases, as with OAuth access tokens, if the token has expired, a new one can be obtained from the OAuth provider and the document can be updated.\n\n``` \ncoll.insert_one(\n {\n \"name\": \"Professor Bagura\",\n # This document will disappear before 2022:\n \"expires_at\": datetime.fromisoformat(\"2021-12-31 23:59:59\"),\n }\n)\n\n# Retrieve a valid document by filtering on docs where `expires_at` is in the future:\nif (doc := coll.find_one({\"expires_at\": {\"$gt\": datetime.now()}})) is None:\n # If no valid documents exist, create one (and probably store it):\n doc = create_document()\n\n# Code to use the retrieved or created document goes here.\nprint(doc)\n```\n\nAnother common idiom also involves storing an expiry date in the document, and then running code periodically that either deletes or refreshes expired documents, depending on what's correct for the use-case.\n\n``` python\nwhile True:\n # Delete all documents where `expires_at` is in the past:\n coll.delete_many({\"expires_at\": {\"$lt\": datetime.now()}})\n time.sleep(60)\n```\n\nAn alternative way to manage data that has an expiry, either absolute or relative to the time the document is stored, is to use a TTL index.\n\nTo use the definition from the documentation: \"TTL indexes are special single-field indexes that MongoDB can use to automatically remove documents from a collection after a certain amount of time or at a specific clock time.\" TTL indexes are why I like to think of MongoDB as\na platform for building data applications, not just a database. If you apply a TTL index to your documents' expiry field, MongoDB will automatically remove the document for you! This means that you don't need to write your own code for removing expired documents, and you don't need to remember to always filter documents based on whether their expiry is earlier than the current time. You also don't need to calculate the absolute expiry time if all you have is the number of seconds a document remains valid!\n\nLet me show you how this works. The code below demonstrates how to create an index on the `created_at` field. 
Because `expireAfterSeconds` is set to 3600 (which is one hour), any documents in the collection with `created_at` set to a date will be deleted one hour after that point in time.

``` python
coll = db.get_collection("ttl_collection")

# Creates a new index on `created_at`.
# The document will be deleted when the current time reaches one hour
# (3600 seconds) after the date stored in `created_at`:
coll.create_index([("created_at", 1)], expireAfterSeconds=3600)

coll.insert_one(
    {
        "name": "Professor Bagura",
        "created_at": datetime.now(),  # Document will disappear after one hour.
    }
)
```

Another common idiom is to explicitly set the time when the document should be deleted. This is done by setting `expireAfterSeconds` to 0:

``` python
coll = db.get_collection("expiry_collection")

# Creates a new index on `expires_at`.
# The document will be deleted when
# the current time reaches the date stored in `expires_at`:
coll.create_index([("expires_at", 1)], expireAfterSeconds=0)

coll.insert_one(
    {
        "name": "Professor Bagura",
        # This document will disappear before 2022:
        "expires_at": datetime.fromisoformat("2021-12-31 23:59:59"),
    }
)
```

Bear in mind that the background process that removes expired documents only runs every 60 seconds, and on a cluster under heavy load, maybe less frequently than that. So, if you're working with documents with very short-lived expiry durations, then this feature probably isn't for you. An alternative is to continue to filter by the expiry in your code, to benefit from finer-grained control over document validity, but allow the TTL expiry service to maintain the collection over time, removing documents that have very obviously expired.

If you're working with data that has a lifespan, then TTL indexes are a great feature for maintaining the documents in a collection.

## Capped Collections

Capped collections are an interesting feature of MongoDB, useful if you wish to efficiently store a ring buffer of documents.

A capped collection has a maximum size in bytes and optionally a maximum number of documents. (The lower of the two values is used at any time, so if you want to reach the maximum number of documents, make sure you set the byte size large enough to handle the number of documents you wish to store.) Documents are stored in insertion order, without the need for a specific index to maintain that order, and so can handle higher throughput than an indexed collection. When the collection reaches either the set byte `size` or the `max` number of documents, the oldest documents in the collection are purged.

Capped collections can be useful for buffering recent operations (application-level operations - MongoDB's oplog is a different kind of thing), and these can be queried when an error state occurs, in order to have a log of recent operations leading up to the error state.

Or, if you just wish to efficiently store a fixed number of documents in insertion order, then capped collections are the way to go.

Capped collections are created with the createCollection method, by setting the `capped`, `size`, and optionally the `max` parameters:

``` python
# Create a collection with a large size value that will store a max of 3 docs:
coll = db.create_collection("capped", capped=True, size=1000000, max=3)

# Insert 3 docs:
coll.insert_many([{"name": "Chico"}, {"name": "Harpo"}, {"name": "Groucho"}])

# Insert a fourth doc!
This will evict the oldest document to make space (Zeppo):\ncoll.insert_one({\"name\": \"Zeppo\"})\n\n# Print out the docs in the collection:\nfor doc in coll.find():\n print(doc)\n\n# {'_id': ObjectId('600e8fcf36b07f77b6bc8ecf'), 'name': 'Harpo'}\n# {'_id': ObjectId('600e8fcf36b07f77b6bc8ed0'), 'name': 'Groucho'}\n# {'_id': ObjectId('600e8fcf36b07f77b6bc8ed1'), 'name': 'Zeppo'}\n```\n\nIf you want a rough idea of how big your bson documents are in bytes, for calculating the value of `size`, you can either use your driver's [bsonSize method in the `mongo` shell, on a document constructed in code, or you can use MongoDB 4.4's new bsonSize aggregation operator, on documents already stored in MongoDB.\n\nNote that with the improved efficiency that comes with capped collections, there are also some limitations. It is not possible to explicitly delete a document from a capped collection, although documents will eventually be replaced by newly inserted documents. Updates in a capped collection also cannot change a document's size. You can't shard a capped collection. There are some other limitations around replacing and updating documents and transactions. Read the documentation for more details.\n\nIt's worth noting that this pattern is similar in feel to the Bucket Pattern, which allows you to store a capped number of items in an array, and automatically creates a new document for storing subsequent values when that cap is reached.\n\n## Change Streams and the `watch` method\n\nAnd finally, the biggest lesser-known feature of them all! Change streams are a live stream of changes to your database. The `watch` method, implemented in most MongoDB drivers, streams the changes made to a collection,\na database, or even your entire MongoDB replicaset or cluster, to your application in real-time. I'm always surprised by how few people have not heard of it, given that it's one of the first MongoDB features that really excited me. Perhaps it's just luck that I stumbled across it earlier.\n\nIn Python, if I wanted to print all of the changes to a collection as they're made, the code would look a bit like this:\n\n``` python\nwith my_database.my_collection.watch() as stream:\n for change in stream:\n print(change)\n```\n\nIn this case, `watch` returns an iterator which blocks until a change is made to the collection, at which point it will yield a BSON document describing the change that was made.\n\nYou can also filter the types of events that will be sent to the change stream, so if you're only interested in insertions or deletions, then those are the only events you'll receive.\n\nI've used change streams (which is what the `watch` method returns) to implement a chat app, where changes to a collection which represented a conversation were streamed to the browser using WebSockets.\n\nBut fundamentally, change streams allow you to implement the equivalent of a database trigger, but in your favourite programming language, using all the libraries you prefer, running on the servers you specify. It's a super-powerful feature and deserves to be better known.\n\n## Further Resources\n\nIf you don't already use the Aggregation Framework, definitely check out the documentation on that. 
It'll blow your mind (in a good way)!\n\nFurther documentation on the topics discussed here:\n\n- TTL Index Documentation,\n- Capped Collection Documentation\n- Change Streams\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n", "format": "md", "metadata": {"tags": ["MongoDB", "Python"], "pageDescription": "Go beyond CRUD with these 3 special features of MongoDB!", "contentType": "Article"}, "title": "Three Underused MongoDB Features", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-swiftui-combine-first-app", "action": "created", "body": "# Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine\n\nI'm relatively new to building iOS apps (a little over a year's experience), and so I prefer using the latest technologies that make me a more productive developer. That means my preferred app stack looks like this:\n\n>\n>\n>This article was updated in July 2021 to replace `objc` and `dynamic` with the `@Persisted` annotation that was introduced in Realm-Cocoa 10.10.0.\n>\n>\n\n##### Technologies Used by the App\n\n| In \ud83d\udd25 | Out \u2744\ufe0f |\n|-----------------------------------|-------------------------------------|\n| Swift | Objective C |\n| SwiftUI | UIKit |\n| Combine | RxSwift |\n| Realm | Core Data |\n| MongoDB Realm Sync (where needed) | Home-baked cross-platform data sync |\n\nThis article presents a simple task management app that I built on that stack. To continue my theme on being productive (lazy), I've borrowed heavily (stolen) from MongoDB's official iOS Swift tutorial:\n\n- I've refactored the original front end, adding Combine for event management, and replacing the UIKit ViewControllers with Swift views.\n- The back end Realm app is entirely unchanged. Note that once you've stood up this back end, then this app can share its data with the equivalent Android, React/JavaScript, and Node.js apps with no changes.\n\nI'm going to focus here on the iOS app. Check the official tutorial if you want to understand how the back end works.\n\nYou can download all of the code for the front end app from the GitHub repo.\n\n## Prerequisites\n\nI'm lucky that I don't have to support an existing customer base that's running on old versions of iOS, and so I can take advantage of the latest language, operating system, and SDK features:\n\n- A Mac (sorry Windows and Linux users)\n- iOS14+ / XCode 12.2+\n - It would be pretty easy to port the app back to iOS13, but iOS14 makes SwiftUI more of a first-class citizen (though there are still times when a more complex app would need to break out into UIKit code\u2014e.g., if you wanted to access the device's camera).\n - Apple introduced SwiftUI and Combine in iOS13, and so you'd be better sticking with the original tutorial if you need to support iOS12 or earlier.\n- Realm Cocoa SDK 10.1+\n - Realm Cocoa 10 adds support for Combine and the ability to \"Freeze\" Realm Objects, making it simpler and safer to embed them directly within SwiftUI views.\n - CocoaPods 1.10+\n\n## Running the App for Yourself\n\nI always prefer to build and run an app before being presented with code snippets; these are the steps:\n\n1. If you don't already have Xcode 12 installed, install it through the Apple App Store.\n2. Set up your back end Realm app. Make a note of the ID:\n\n \n \n \n\n3. 
Download the iOS app, install dependencies, and open the workspace in Xcode:\n\n ``` bash\n git clone https://github.com/ClusterDB/task-tracker-swiftui.git\n cd task-tracker-swiftui\n pod install --repo-update\n open task-tracker-swiftui.xcworkspace\n ```\n\n4. Within Xcode, edit `task-tracker-swiftui/task_tracker_swiftuiApp.swift` and set the Realm application ID to the value you noted in Step 2:\n\n ``` swift\n let app = App(id: \"tasktracker-xxxxx\")\n ```\n\n5. In Xcode, select an iOS simulator:\n\n \n \n Select an iOS simulator in Xcode\n \n\n6. Build and run the app using `\u2318-R`.\n7. Go ahead and play with the app:\n\n \n \n Demo of the app in an iOS simulator\n \n\n## Key Pieces of Code\n\nUsually, when people start explaining SwiftUI, they begin with, \"You know how you do X with UIKit? With SwiftUI, you do Y instead.\" But, I'm not going to assume that you're an experienced UIKit developer.\n\n### The Root of a SwiftUI App\n\nIf you built and ran the app, you've already seen the \"root\" of the app in `swiftui_realmApp.swift`:\n\n``` swift\nimport SwiftUI\nimport RealmSwift:\n\nlet app = App(id: \"tasktracker-xxxxx\") // TODO: Set the Realm application ID\n\n@main\nstruct swiftui_realmApp: SwiftUI.App {\n @StateObject var state = AppState()\n\n var body: some Scene {\n WindowGroup {\n ContentView()\n .environmentObject(state)\n }\n }\n}\n```\n\n`app` is the Realm application that will be used by our iOS app to store and retrieve data stored in Realm.\n\nSwiftUI works with views, typically embedding many views within other views (a recent iOS app I worked on has over 500 views), and you always start with a top-level view for the app\u2014in this case, `ContentView`.\n\nIndividual views contain their own state (e.g., the details of the task that's currently being edited, or whether a pop-up sheet should be displayed), but we store any app-wide state in the `state` variable. `@ObservedObject` is a SwiftUI annotation to indicate that a view should be refreshed whenever particular attributes within an object change. We pass state to `ContentView` as an `environmentOject` so that any of the app's views can access it.\n\n### Application-Wide State Management\n\nLike other declarative, state-driven frameworks (e.g., React or Vue.js), components/views can pass state up and down the hierarchy. However, it can simplify state management by making some state available application-wide. In this app, we centralize this app-wide state data storage and control in an instance of the `AppState` class:\n\n``` swift\nclass AppState: ObservableObject {\n var loginPublisher = PassthroughSubject()\n var logoutPublisher = PassthroughSubject()\n let userRealmPublisher = PassthroughSubject()\n var cancellables = Set()\n\n @Published var shouldIndicateActivity = false\n @Published var error: String?\n\n var user: User?\n}\n```\n\nWe use `shouldIndicateActivity` to control whether a \"working on it\" view should be displayed while the app is busy. error is set whenever we want to display an error message. Both of these variables are annotated with `@Published` to indicate that referencing views should be refreshed when their values change.\n\n`user` represents the Realm user that's currently logged into the app.\n\nThe app uses the Realm SDK to interact with the back end Realm application to perform actions such as logging into Realm. Those operations can take some time as they involve accessing resources over the internet, and so we don't want the app to sit busy-waiting for a response. 
Instead, we use \"Combine\" publishers and subscribers to handle these events. `loginPublisher`, `logoutPublisher`, and `userRealmPublisher` are publishers to handle logging in, logging out, and opening a Realm for a user.\n\nAs an example, when an event is sent to `loginPublisher` to indicate that the login process has completed, Combine will run this pipeline:\n\n``` swift\ninit() {\nloginPublisher\n .receive(on: DispatchQueue.main)\n .flatMap { user -> RealmPublishers.AsyncOpenPublisher in\n self.shouldIndicateActivity = true\n var realmConfig = user.configuration(partitionValue: \"user=\\(user.id)\")\n realmConfig.objectTypes = User.self, Project.self]\n return Realm.asyncOpen(configuration: realmConfig)\n }\n .receive(on: DispatchQueue.main)\n .map {\n self.shouldIndicateActivity = false\n return $0\n }\n .subscribe(userRealmPublisher)\n .store(in: &self.cancellables)\n}\n```\n\nThe pipeline receives the freshly-logged-in Realm user.\n\nThe `receive(on: DispatchQueue.main)` stage specifies that the next stage in the pipeline should run in the main thread (because it will update the UI).\n\nThe Realm user is passed to the `flatMap` stage which:\n\n- Updates the UI to show that the app is busy.\n- Opens a Realm for this user (requesting Objects where the partition matches the string `\"user=\\(user.id\"`).\n- Passes a publisher for the opening of the Realm to the next stage.\n\nThe `.subscribe` stage subscribes the `userRealmPublisher` to outputs from the publisher it receives from the previous stage. In that way, a pipeline associated with the `userRealmPublisher` publisher can react to an event indicating when the Realm has been opened.\n\nThe `.store` stage stores the publisher in the `cancellables` array so that it isn't removed when the `init()` function completes.\n\n### The Object Model\n\nYou'll find the Realm object model in the `Model` group in the Xcode workspace. These are the objects used in the iOS app and synced to MongoDB Atlas in the back end.\n\nThe `User` class represents application users. It inherits from `Object` which is a class in the Realm SDK and allows instances of the class to be stored in Realm:\n\n``` swift\nimport RealmSwift\n\nclass User: Object {\n @Persisted(primaryKey: true) var _id: String = UUID().uuidString\n @Persisted var _partition: String = \"\"\n @Persisted var name: String = \"\"\n @Persisted let memberOf = RealmSwift.List()\n}\n```\n\nNote that instances of classes that inherit from `Object` can be used as `@ObservedObjects` without inheriting from `ObservableObject` or annotating attributes with `@Public`.\n\nSummary of the attributes:\n\n- `_id` uniquely identifies a `User` object. We set it to be the Realm primary key.\n- `_partition` is used as the partition key, which can be used by the app to filter which `User` `Objects` it wants to access.\n- `name` is the username (email address).\n- `membersOf` is a Realm List of projects that the user can access. (It always contains its own project, but it may also include other users' projects if those users have added this user to their teams.)\n\nThe elements in `memberOf` are instances of the `Project` class. 
`Project` inherits from `EmbeddedObject` which means that instances of `Project` can be embedded within other Realm `Objects`:\n\n``` swift\nimport RealmSwift\n\nclass Project: EmbeddedObject {\n @Persisted var name: String?\n @Persisted var partition: String?\n convenience init(partition: String, name: String) {\n self.init()\n self.partition = partition\n self.name = name\n }\n}\n```\n\nSummary of the attributes:\n\n- `name` is the project's name.\n- `partition` is a string taking the form `\"project=project-name\"` where `project-name` is the `_id` of the project's owner.\n\nIndividual tasks are represented by the `Task` class:\n\n``` swift\nimport RealmSwift\n\nenum TaskStatus: String {\n case Open\n case InProgress\n case Complete\n}\n\nclass Task: Object {\n @Persisted(primaryKey: true) var _id: ObjectId = ObjectId.generate()\n @Persisted var _partition: String = \"\"\n @Persisted var name: String = \"\"\n @Persisted var owner: String?\n @Persisted var status: String = \"\"\n\n var statusEnum: TaskStatus {\n get {\n return TaskStatus(rawValue: status) ?? .Open\n }\n set {\n status = newValue.rawValue\n }\n }\n\n convenience init(partition: String, name: String) {\n self.init()\n self._partition = partition\n self.name = name\n }\n}\n```\n\nSummary of the attributes:\n\n- `_id` uniquely identifies a `Task` object. We set it to be the Realm primary key.\n- `_partition` is used as the partition key, which can be used by the app to filter which `Task` `Objects` it wants to access. It takes the form `\"project=project-id\"`.\n- `name` is the task's title.\n- `status` takes on the value \"Open\", \"InProgress\", or \"Complete\".\n\n### User Authentication\n\nWe want app users to only be able to access the tasks from their own project (or the projects of other users who have added them to their team). Our users need to see their tasks when they restart the app or run it on a different device. Realm's username/password authentication is a simple way to enable this.\n\nRecall that our top-level SwiftUI view is `ContentView` (`task-tracker-swiftui/Views/ContentView.swift`). `ContentView` selects whether to show the `LoginView` or `ProjectsView` view based on whether a user is already logged into Realm:\n\n``` swift\nstruct ContentView: View {\n @EnvironmentObject var state: AppState\n\n var body: some View {\n NavigationView {\n ZStack {\n VStack {\n if state.loggedIn && state.user != nil {\n if state.user != nil {\n ProjectsView()\n }\n } else {\n LoginView()\n }\n Spacer()\n if let error = state.error {\n Text(\"Error: \\(error)\")\n .foregroundColor(Color.red)\n }\n }\n if state.shouldIndicateActivity {\n ProgressView(\"Working With Realm\")\n }\n }\n .navigationBarItems(leading: state.loggedIn ? LogoutButton() : nil)\n }\n }\n}\n```\n\nNote that `ContentView` also renders the `state.error` message and the `ProgressView` views. 
These will kick in whenever a sub-view updates state.\n\n`LoginView` (`task-tracker-swiftui/Views/User Accounts/LoginView.swift`) presents a simple form for existing app users to log in:\n\n \n\nWhen the user taps \"Log In\", the `login` function is executed:\n\n``` swift\nprivate func login(username: String, password: String) {\n if username.isEmpty || password.isEmpty {\n return\n }\n self.state.error = nil\n state.shouldIndicateActivity = true\n app.login(credentials: .emailPassword(email: username, password: password))\n .receive(on: DispatchQueue.main)\n .sink(receiveCompletion: {\n state.shouldIndicateActivity = false\n switch $0 {\n case .finished:\n break\n case .failure(let error):\n self.state.error = error.localizedDescription\n }\n }, receiveValue: {\n self.state.error = nil\n state.loginPublisher.send($0)\n })\n .store(in: &state.cancellables)\n}\n```\n\n`login` calls `app.login` (`app` is the Realm app that we create when the app starts) which returns a Combine publisher. The results from the publisher are passed to a Combine pipeline which updates the UI and sends the resulting Realm user to `loginPublisher`, which can then complete the process.\n\nIf it's a first-time user, then they tap \"Register new user\" to be taken to `SignupView` which registers a new user with Realm (`app.emailPasswordAuth.registerUser`) before popping back to `loginView` (`self.presentationMode.wrappedValue.dismiss()`):\n\n``` swift\nprivate func signup(username: String, password: String) {\n if username.isEmpty || password.isEmpty {\n return\n }\n self.state.error = nil\n state.shouldIndicateActivity = true\n app.emailPasswordAuth.registerUser(email: username, password: password)\n .receive(on: DispatchQueue.main)\n .sink(receiveCompletion: {\n state.shouldIndicateActivity = false\n switch $0 {\n case .finished:\n break\n case .failure(let error):\n self.state.error = error.localizedDescription\n }\n }, receiveValue: {\n self.state.error = nil\n self.presentationMode.wrappedValue.dismiss()\n })\n .store(in: &state.cancellables)\n}\n```\n\nTo complete the user lifecycle, `LogoutButton` logs them out from Realm and then sends an event to `logoutPublisher`:\n\n``` swift\nstruct LogoutButton: View {\n @EnvironmentObject var state: AppState\n var body: some View {\n Button(\"Log Out\") {\n state.shouldIndicateActivity = true\n app.currentUser?.logOut()\n .receive(on: DispatchQueue.main)\n .sink(receiveCompletion: { _ in\n }, receiveValue: {\n state.shouldIndicateActivity = false\n state.logoutPublisher.send($0)\n })\n .store(in: &state.cancellables)\n }\n .disabled(state.shouldIndicateActivity)\n }\n}\n```\n\n### Projects View\n\n \n\nAfter logging in, the user is shown `ProjectsView` (`task-tracker-swiftui/Views/Projects & Tasks/ProjectsView.swift`) which displays a list of projects that they're a member of:\n\n``` swift\nvar body: some View {\n VStack(spacing: Dimensions.padding) {\n if let projects = state.user?.memberOf {\n ForEach(projects, id: \\.self) { project in\n HStack {\n LabeledButton(label: project.partition ?? \"No partition\",\n text: project.name ?? 
\"No project name\") {\n showTasks(project)\n }\n }\n }\n }\n Spacer()\n if let tasksRealm = tasksRealm {\n NavigationLink( destination: TasksView(realm: tasksRealm, projectName: projectName),\n isActive: $showingTasks) {\n EmptyView() }\n }\n }\n .navigationBarTitle(\"Projects\", displayMode: .inline)\n .toolbar {\n ToolbarItem(placement: .bottomBar) {\n Button(action: { self.showingSheet = true }) {\n ManageTeamButton()\n }\n }\n }\n .sheet(isPresented: $showingSheet) { TeamsView() }\n .padding(.all, Dimensions.padding)\n}\n```\n\nRecall that `state.user` is assigned the data retrieved from Realm when the pipeline associated with `userRealmPublisher` processes the event forwarded from the login pipeline:\n\n``` swift\nuserRealmPublisher\n .sink(receiveCompletion: { result in\n if case let .failure(error) = result {\n self.error = \"Failed to log in and open realm: \\(error.localizedDescription)\"\n }\n }, receiveValue: { realm in\n self.user = realm.objects(User.self).first\n })\n .store(in: &cancellables)\n```\n\nEach project in the list is a button that invokes `showTasks(project)`:\n\n``` swift\nfunc showTasks(_ project: Project) {\n state.shouldIndicateActivity = true\n let realmConfig = app.currentUser?.configuration(partitionValue: project.partition ?? \"\")\n guard var config = realmConfig else {\n state.error = \"Cannot get Realm config from current user\"\n return\n }\n config.objectTypes = [Task.self]\n Realm.asyncOpen(configuration: config)\n .receive(on: DispatchQueue.main)\n .sink(receiveCompletion: { result in\n state.shouldIndicateActivity = false\n if case let .failure(error) = result {\n self.state.error = \"Failed to open realm: \\(error.localizedDescription)\"\n }\n }, receiveValue: { realm in\n self.tasksRealm = realm\n self.projectName = project.name ?? \"\"\n self.showingTasks = true\n state.shouldIndicateActivity = false\n })\n .store(in: &self.state.cancellables)\n} \n```\n\n`showTasks` opens a new Realm and then sets up the variables which are passed to `TasksView` in body (note that the `NavigationLink` is automatically followed when `showingTasks` is set to `true`):\n\n``` swift\nNavigationLink(\n destination: TasksView(realm: tasksRealm, projectName: projectName),\n isActive: $showingTasks) {\n EmptyView()\n }\n```\n\n### Tasks View\n\n \n\n`TasksView` (`task-tracker-swiftui/Views/Projects & Tasks/TasksView.swift`) presents a list of the tasks within the selected project:\n\n``` swift\nvar body: some View {\n VStack {\n if let tasks = tasks {\n List {\n ForEach(tasks.freeze()) { task in\n if let tasksRealm = tasks.realm {\n TaskView(task: (tasksRealm.resolve(ThreadSafeReference(to: task)))!)\n }\n }\n .onDelete(perform: deleteTask)\n }\n } else {\n Text(\"Loading...\")\n }\n if let lastUpdate = lastUpdate {\n LastUpdate(date: lastUpdate)\n }\n }\n .navigationBarTitle(\"Tasks in \\(projectName)\", displayMode: .inline)\n .navigationBarItems(trailing: Button(action: { self.showingSheet = true }) {\n Image(systemName: \"plus.circle.fill\")\n .renderingMode(.original)\n\n })\n .sheet(isPresented: $showingSheet) { AddTaskView(realm: realm) }\n .onAppear(perform: loadData)\n .onDisappear(perform: stopWatching)\n}\n```\n\nTasks can be removed from the projects by other instances of the application or directly from Atlas in the back end. 
SwiftUI tends to crash if an item is removed from a list which is bound to the UI, and so we use Realm's \"freeze\" feature to isolate the UI from those changes:\n\n``` swift\nForEach(tasks.freeze()) { task in ...\n```\n\nHowever, `TaskView` can make changes to a task, and so we need to \"unfreeze\" `Task` `Objects` before passing them in:\n\n``` swift\nTaskView(task: (tasksRealm.resolve(ThreadSafeReference(to: task)))!)\n```\n\nWhen the view loads, we must fetch the latest list of tasks in the project. We want to refresh the view in the UI whenever the app observes a change in the list of tasks. The `loadData` function fetches the initial list, and then observes the Realm and updates the `lastUpdate` field on any changes (which triggers a view refresh):\n\n``` swift\nfunc loadData() {\n tasks = realm.objects(Task.self).sorted(byKeyPath: \"_id\")\n realmNotificationToken = realm.observe { _, _ in\n lastUpdate = Date()\n }\n}\n```\n\nTo conserve resources, we release the refresh token when leaving this view:\n\n``` swift\nfunc stopWatching() {\n if let token = realmNotificationToken {\n token.invalidate()\n }\n}\n```\n\nWe delete a task when the user swipes it to the left:\n\n``` swift\nfunc deleteTask(at offsets: IndexSet) {\n do {\n try realm.write {\n guard let tasks = tasks else {\n return\n }\n realm.delete(tasks[offsets.first!])\n }\n } catch {\n state.error = \"Unable to open Realm write transaction\"\n }\n}\n```\n\n### Task View\n\n \n\n`TaskView` (`task-tracker-swiftui/Views/Projects & Tasks/TaskView.swift`) is responsible for rendering a `Task` `Object`; optionally adding an image and format based on the task status:\n\n``` swift\nvar body: some View {\n Button(action: { self.showingUpdateSheet = true }) {\n HStack(spacing: Dimensions.padding) {\n switch task.statusEnum {\n case .Complete:\n Text(task.name)\n .strikethrough()\n .foregroundColor(.gray)\n Spacer()\n Image(systemName: \"checkmark.square\")\n .foregroundColor(.gray)\n case .InProgress:\n Text(task.name)\n .fontWeight(.bold)\n Spacer()\n Image(systemName: \"tornado\")\n case .Open:\n Text(task.name)\n Spacer()\n }\n }\n }\n .sheet(isPresented: $showingUpdateSheet) {\n UpdateTaskView(task: task)\n }\n .padding(.horizontal, Dimensions.padding)\n}\n```\n\nThe task in the UI is a button that exposes `UpdateTaskView` when tapped. That view doesn't cover any new ground, and so I won't dig into it here.\n\n### Teams View\n\n \n\nA user can add others to their team; all team members can view and edit tasks in the user's project. For the logged-in user to add another member to their team, they need to update that user's `User` `Object`. This isn't allowed by the Realm Rules in the back end app. 
Instead, we make use of Realm Functions that have been configured in the back end to make these changes securely.\n\n`TeamsView` (`task-tracker-swiftui/Views/Teams/TeamsView.swift`) presents a list of all the user's teammates:\n\n``` swift\nvar body: some View {\n NavigationView {\n VStack {\n List {\n ForEach(members) { member in\n LabeledText(label: member.id, text: member.name)\n }\n .onDelete(perform: removeTeamMember)\n }\n Spacer()\n }\n .navigationBarTitle(Text(\"My Team\"), displayMode: .inline)\n .navigationBarItems(\n leading: Button(\n action: { self.presentationMode.wrappedValue.dismiss() }) { Image(systemName: \"xmark.circle\") },\n trailing: Button(action: { self.showingAddTeamMember = true }) { Image(systemName: \"plus.circle.fill\")\n .renderingMode(.original)\n }\n )\n }\n .sheet(isPresented: $showingAddTeamMember) {\n // TODO: Not clear why we need to pass in the environmentObject, appears that it may\n // be a bug \u2013 should test again in the future.\n AddTeamMemberView(refresh: fetchTeamMembers)\n .environmentObject(state)\n }\n .onAppear(perform: fetchTeamMembers)\n} \n```\n\nWe invoke a Realm Function to fetch the list of team members, when this view is opened (`.onAppear`) through the `fetchTeamMembers` function:\n\n``` swift\nfunc fetchTeamMembers() {\n state.shouldIndicateActivity = true\n let user = app.currentUser!\n\n user.functions.getMyTeamMembers([]) { (result, error) in\n DispatchQueue.main.sync {\n state.shouldIndicateActivity = false\n guard error == nil else {\n state.error = \"Fetch team members failed: \\(error!.localizedDescription)\"\n return\n }\n guard let result = result else {\n state.error = \"Result from fetching members is nil\"\n return\n }\n self.members = result.arrayValue!.map({ (bson) in\n return Member(document: bson!.documentValue!)\n })\n }\n }\n}\n```\n\nSwiping left removes a team member using another Realm Function:\n\n``` swift\nfunc removeTeamMember(at offsets: IndexSet) {\n state.shouldIndicateActivity = true\n let user = app.currentUser!\n let email = members[offsets.first!].name\n user.functions.removeTeamMember([AnyBSON(email)]) { (result, error) in\n DispatchQueue.main.sync {\n state.shouldIndicateActivity = false\n if let error = error {\n self.state.error = \"Internal error, failed to remove member: \\(error.localizedDescription)\"\n } else if let resultDocument = result?.documentValue {\n if let resultError = resultDocument[\"error\"]??.stringValue {\n self.state.error = resultError\n } else {\n print(\"Removed team member\")\n self.fetchTeamMembers()\n }\n } else {\n self.state.error = \"Unexpected result returned from server\"\n }\n }\n }\n} \n```\n\nTapping on the \"+\" button opens up the `AddTeamMemberView` sheet/modal, but no new concepts are used there, and so I'll skip it here.\n\n## Summary\n\nOur app relies on the latest features in the Realm-Cocoa SDK (notably Combine and freezing objects) to bind the model directly to our SwiftUI views. You may have noticed that we don't have a view model.\n\nWe use Realm's username/password functionality and Realm Sync to ensure that each user can work with all of their tasks from any device.\n\nYou've seen how the front end app can delegate work to the back end app using Realm Functions. 
In this case, it was to securely work around the data access rules for the `User` object; other use-cases for Realm Functions are:\n\n- Securely access other network services without exposing credentials in the front end app.\n- Complex data wrangling using the MongoDB Aggregation Framework.\n- We've used Apple's Combine framework to handle asynchronous events, such as performing follow-on actions once the back end confirms that a user has been authenticated and logged in.\n\nThis iOS app reuses the back end Realm application from the official MongoDB Realm tutorials. This demonstrates how the same data and back end logic can be shared between apps running on iOS, Android, web, Node.js...\n\n## References\n\n- [GitHub Repo for this app\n- UIKit version of this app\n- Instructions for setting up the backend Realm app\n- Freezing Realm Objects\n- GitHub Repo for Realm-Cocoa SDK\n- Realm Cocoa SDK documentation\n- MongoDB's Realm documentation\n- WildAid O-FISH \u2013 an example of a **much** bigger app built on Realm and MongoDB Realm Sync\n\n>\n>\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n>\n>\n", "format": "md", "metadata": {"tags": ["Realm", "Swift", "iOS", "React Native", "Mobile"], "pageDescription": "Build your first iOS mobile app using Realm, SwiftUI, and Combine.", "contentType": "Tutorial"}, "title": "Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/compass/mongodb-compass-aggregation-improvements", "action": "created", "body": "# A Better MongoDB Aggregation Experience via Compass\n\n## Introduction\nMongoDB Compass has had an aggregation pipeline builder since 2018. Its primary focus has always been enabling developers to quickly prototype and troubleshoot aggregations. Aggregations would then be exported to the developer\u2019s preferred programming language and copy-pasted inside application code.\n \nAs of MongoDB World 2022, we are relaunching the aggregation experience within Compass, and we\u2019re happy to announce richer, more powerful functionality for developers.\n\nCompass 1.32.x series includes the following:\nIn-Use encryption, including providing options for KMS details in the connection form and CRUD support for encrypted collections, and more specifically for Queryable Encryption when creating collections\nSaved queries and aggregations in the My Queries tab\nExplain plan for aggregations\nRun aggregations against the whole collection\nExport aggregation results\n\nBelow, we will talk a bit more about each of these features and how they are useful to developers.\n\n## In-Use Encryption\nOur latest release of MongoDB Compass provides options for using KMS details in the connection form, as well as CRUD support for encrypted collections. Specifically, we also include functionality for Queryable Encryption when creating a collection. \n\n## My Queries Section\n\nUsers can already save aggregations in Compass for later use.\n\nHowever, saved aggregations are bound to a namespace and what we\u2019ve seen often is that our users had trouble finding again the queries and aggregations they\u2019ve saved and reuse them across namespaces. We decided the experience had to be improved: developers often reuse code they\u2019ve written in the past as a starting point for new code. 
Similarly, they\u2019ve told us that the queries and aggregations they saved are their \u201cbest queries\u201d, the ones they want to use as the basis to build new ones.\n\nWe took their input seriously and recently we added a new \u201cMy Queries\u201d screen to Compass. Now, you can find all your queries and aggregations in one place, you can filter them by namespace and search across all of them.\n\n## Explain Plan for Aggregations\nWhen building aggregations for collections with more than a few hundreds documents, performance best practices start to become important.\n\n\u201cExplain Plan\u201d is the most reliable way to understand the performance of an aggregation and ensure it\u2019s using the right indexes, so it was not surprising to see a feature request for explaining aggregations quickly rising up to be in the top five requests in our feedback portal.\n\nNow \u201cExplain Plan\u201d is finally available and built into the aggregation-building experience: with just one click you can dig into the performance metrics for your aggregations, monitor the execution time, and double-check that the right indexes are in place.\n\nHowever, the role of a developer in a modern engineering team is expanding to include tasks related to gathering users and product insights from live data (what we sometimes refer to as real-time analytics) and generating reports for other functions in the team or in the company.\n\nWhen this happens, users are puzzled about not having the ability to run the aggregations they have created on the full dataset and export results in the same way they can do with a query. This is understandable and it\u2019s reasonable that they assume this to be table-stakes functionality in a database GUI.\n\nNow this is finally possible. Once you are done building your aggregation, you can just click \u201cRun\u201d and wait for the results to appear. You can also export them as JSON or CSV, which is something really useful when you need to share the insights you\u2019ve extracted with other parts of the business.\n\n## Summary of Compass 1.32 release\nIn summary, the latest version of MongoDB Compass means that users of MongoDB that are exploring the aggregation framework can get started more easily and build their first aggregations in no time without reading a lot of documentation. Experts and established users will be able to get more out of the aggregation framework by creating and reuse aggregations more effectively, including sharing them with teammates, confirming their performance, and running the aggregation via Compass directly.\n\nIf you're interested in trying Compass, download for free here.", "format": "md", "metadata": {"tags": ["Compass"], "pageDescription": "MongoDB Compass is one of the most popular database GUIs", "contentType": "News & Announcements"}, "title": "A Better MongoDB Aggregation Experience via Compass", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/getting-started-mongodb-cpp", "action": "created", "body": "# Getting Started with MongoDB and C++\n\nThis article will show you how to utilize Microsoft Visual Studio to compile and install the MongoDB C and C++ drivers on Windows, and use these drivers to create a console application that can interact with your MongoDB data by performing basic CRUD operations.\n\nTools and libraries used in this tutorial:\n\n1. Microsoft Windows 11\n2. Microsoft Visual Studio 2022 17.3.6\n3. Language standard: C++17\n4. 
MongoDB C Driver version: 1.23\n5. MongoDB C++ Driver version: 3.7.0\n6. boost: 1.80.0\n7. Python: 3.10\n8. CMake: 3.25.0\n\n## Prerequisites\n\n1. MongoDB Atlas account with a cluster created.\n2. *(Optional)* Sample dataset loaded into the Atlas cluster.\n3. Your machine\u2019s IP address is whitelisted. Note: You can add *0.0.0.0/0* as the IP address, which should allow access from any machine. This setting is not recommended for production use.\n\n## Installation: IDE and tools\n\nStep 1: Install Visual Studio: Download Visual Studio Tools - Install Free for Windows, Mac, Linux.\n\nIn the Workloads tab during installation, select \u201cDesktop development with C++.\u201d\n\nStep 2: Install CMake: Download \\| CMake\n\n* For simplicity, choose the installer.\n* In the setup, make sure to select \u201cAdd CMake to the system PATH for all users.\u201d This enables the CMake executable to be easily accessible.\n\nStep 3: Install Python 3: Download Python.\n\nStep 4: *(Optional)* Download boost library from Boost Downloads and extract it to *C:\\boost*.\n\n## Installation: Drivers\n\n> Detailed instructions and configurations available here:\n> \n> \n> * Installing the mongocxx driver\n> * Installing the MongoDB C Driver (libmongoc) and BSON library (libbson)\n\n### Step 1: Install C Driver\n\nC++ Driver has a dependency on C driver. Hence, we need to install C Driver first.\n\n* Download C Driver\n * Check compatibility at Windows - Installing the mongocxx driver for the driver to download.\n * Download release tarball \u2014 Releases \u00b7 mongodb/mongo-c-driver \u2014 and extract it to *C:\\Repos\\mongo-c-driver-1.23.0*.\n* Setup build via CMake\n * Launch powershell/terminal as an administrator.\n * Navigate to *C:\\Repos\\mongo-c-driver-1.23.0* and create a new folder named *cmake-build* for the build files.\n * Navigate to *C: \\Repos\\mongo-c-driver-1.23.0\\cmake-build*.\n * Run the below command to configure and generate build files using CMake. \n \n```\ncmake -G \"Visual Studio 17 2022\" -A x64 -S \"C:\\Repos\\mongo-c-driver-1.23.0\" -B \"C:\\Repos\\mongo-c-driver-1.23.0\\cmake-build\"\n```\n\nNote: Build setup can be done with the CMake GUI application, as well.\n* Execute build\n * Visual Studio\u2019s default build type is Debug. A release build with debug info is recommended for production use.\n * Run the below command to build and install the driver\n\n```\ncmake --build . --config RelWithDebInfo --target install\n```\n\n* You should now see libmongoc and libbson installed in *C:/Program Files/mongo-c-driver*.\n\n* Move the *mongo-c-driver* to *C:/* for convenience. Hence, C Driver should now be present at *C:/mongo-c-driver*.\n\n### Step 2: Install C++ Driver\n\n* Download C++ Driver\n * Download release tarball \u2014 Releases \u00b7 mongodb/mongo-cxx-driver \u2014 and extract it to *C:\\Repos\\mongo-cxx-driver-r3.7.0*.\n* Set up build via CMake\n * Launch powershell/terminal as an administrator.\n * Navigate to *C:\\Repos\\mongo-cxx-driver-r3.7.0\\build*.\n * Run the below command to generate and configure build files via CMake.\n\n```\ncmake .. -G \"Visual Studio 17 2022\" -A x64 -DCMAKE_CXX_STANDARD=17 -DCMAKE_CXX_FLAGS=\"/Zc:__cplusplus /EHsc\" -DCMAKE_PREFIX_PATH=C:\\mongo-c-driver -DCMAKE_INSTALL_PREFIX=C:\\mongo-cxx-driver\n```\n\nNote: Setting *DCMAKE_CXX_FLAGS* should not be required for C++ driver version 3.7.1 and above.\n\n* Execute build\n * Run the below command to build and install the driver\n \n```\ncmake --build . 
--config RelWithDebInfo --target install\n```\n\n* You should now see C++ driver installed in *C:\\mongo-cxx-driver*.\n\n## Visual Studio: Setting up the dev environment\n\n* Create a new project in Visual Studio.\n* Select *Console App* in the templates.\n\n* Visual Studio should create a new project and open a .cpp file which prints \u201cHello World.\u201d Navigate to the Solution Explorer panel, right-click on the solution name (*MongoCXXGettingStarted*, in this case), and click Properties.\n\n* Go to *Configuration Properties > C/C++ > General > Additional Include Directories* and add the include directories from the C and C++ driver installation folders, as shown below.\n\n* Go to *Configuration Properties > C/C++ > Language* and change the *C++ Language Standard* to C++17.\n\n* Go to *Configuration Properties > C/C++ > Command Line* and add */Zc:\\_\\_cplusplus* in the *Additional Options* field. This flag is needed to opt into the correct definition of \\_\\_cplusplus.\n* Go to *Configuration Properties > Linker > Input* and add the driver libs in *Additional Dependencies* section, as shown below.\n\n* Go to *Configuration Properties > Debugging > Environment* to add a path to the driver executables, as shown below.\n\n## Building the console application\n\n> Source available here\n\nLet\u2019s build an application that maintains student records. We will input student data from the user, save them in the database, and perform different CRUD operations on the database.\n\n### Connecting to the database\n\nLet\u2019s start with a simple program to connect to the MongoDB Atlas cluster and access the databases. Get the connection string (URI) to the cluster and create a new environment variable with key as *\u201cMONGODB\\_URI\u201d* and value as the connection string (URI). It\u2019s a good practice to keep the connection string decoupled from the code.\n\nTip: Restart your machine after creating the environment variable in case the *\u201cgetEnvironmentVariable\u201d* function fails to retrieve the environment variable.\n\n```\n#include \n#include \n#include \n#include \n#include \n#include \n#include \n#include \nusing namespace std;\nstd::string getEnvironmentVariable(std::string environmentVarKey)\n{\nchar* pBuffer = nullptr;\nsize_t size = 0;\nauto key = environmentVarKey.c_str();\n// Use the secure version of getenv, ie. _dupenv_s to fetch environment variable.\nif (_dupenv_s(&pBuffer, &size, key) == 0 && pBuffer != nullptr)\n{\nstd::string environmentVarValue(pBuffer);\nfree(pBuffer);\nreturn environmentVarValue;\n}\nelse\n{\nreturn \"\";\n}\n}\nauto mongoURIStr = getEnvironmentVariable(\"MONGODB_URI\");\nstatic const mongocxx::uri mongoURI = mongocxx::uri{ mongoURIStr };\n// Get all the databases from a given client.\nvector getDatabases(mongocxx::client& client)\n{\nreturn client.list_database_names();\n}\nint main()\n{\n // Create an instance.\n mongocxx::instance inst{};\n mongocxx::options::client client_options;\n auto api = mongocxx::options::server_api{ mongocxx::options::server_api::version::k_version_1 };\n client_options.server_api_opts(api);\n mongocxx::client conn{ mongoURI, client_options}\n auto dbs = getDatabases(conn);\n for (auto db : dbs)\n {\n cout << db << endl;\n }\n return 0;\n}\n```\n\nClick on \u201cLaunch Debugger\u201d to launch the console application. 
The output should looks something like this:\n\n### CRUD operations\n\n> Full tutorial\n\nSince the database is successfully connected to our application, let\u2019s write some helper functions to interact with the database, performing CRUD operations.\n\n#### Create\n\n```\n// Create a new collection in the given database.\nvoid createCollection(mongocxx::database& db, const string& collectionName)\n{\ndb.create_collection(collectionName);\n}\n// Create a document from the given key-value pairs.\nbsoncxx::document::value createDocument(const vector>& keyValues)\n{\nbsoncxx::builder::stream::document document{};\nfor (auto& keyValue : keyValues)\n{\ndocument << keyValue.first << keyValue.second;\n}\nreturn document << bsoncxx::builder::stream::finalize;\n}\n// Insert a document into the given collection.\nvoid insertDocument(mongocxx::collection& collection, const bsoncxx::document::value& document)\n{\ncollection.insert_one(document.view());\n}\n```\n\n#### Read\n\n```\n// Print the contents of the given collection.\nvoid printCollection(mongocxx::collection& collection)\n{\n// Check if collection is empty.\nif (collection.count_documents({}) == 0)\n{\ncout << \"Collection is empty.\" << endl;\nreturn;\n}\nauto cursor = collection.find({});\nfor (auto&& doc : cursor)\n{\ncout << bsoncxx::to_json(doc) << endl;\n}\n}\n// Find the document with given key-value pair.\nvoid findDocument(mongocxx::collection& collection, const string& key, const string& value)\n{\n// Create the query filter\nauto filter = bsoncxx::builder::stream::document{} << key << value << bsoncxx::builder::stream::finalize;\n//Add query filter argument in find\nauto cursor = collection.find({ filter });\nfor (auto&& doc : cursor)\n{\n cout << bsoncxx::to_json(doc) << endl;\n}\n}\n```\n\n#### Update\n\n```\n// Update the document with given key-value pair.\nvoid updateDocument(mongocxx::collection& collection, const string& key, const string& value, const string& newKey, const string& newValue)\n{\ncollection.update_one(bsoncxx::builder::stream::document{} << key << value << bsoncxx::builder::stream::finalize,\nbsoncxx::builder::stream::document{} << \"$set\" << bsoncxx::builder::stream::open_document << newKey << newValue << bsoncxx::builder::stream::close_document << bsoncxx::builder::stream::finalize);\n}\n```\n\n#### Delete\n\n```\n// Delete a document from a given collection.\nvoid deleteDocument(mongocxx::collection& collection, const bsoncxx::document::value& document)\n{\ncollection.delete_one(document.view());\n}\n```\n\n### The main() function\n\nWith all the helper functions in place, let\u2019s create a menu in the main function which we can use to interact with the application.\n\n```\n// ********************************************** I/O Methods **********************************************\n// Input student record.\nvoid inputStudentRecord(mongocxx::collection& collection)\n{\nstring name, rollNo, branch, year;\ncout << \"Enter name: \";\ncin >> name;\ncout << \"Enter roll number: \";\ncin >> rollNo;\ncout << \"Enter branch: \";\ncin >> branch;\ncout << \"Enter year: \";\ncin >> year;\ninsertDocument(collection, createDocument({ {\"name\", name}, {\"rollNo\", rollNo}, {\"branch\", branch}, {\"year\", year} }));\n}\n// Update student record.\nvoid updateStudentRecord(mongocxx::collection& collection)\n{\nstring rollNo, newBranch, newYear;\ncout << \"Enter roll number: \";\ncin >> rollNo;\ncout << \"Enter new branch: \";\ncin >> newBranch;\ncout << \"Enter new year: \";\ncin >> 
newYear;\nupdateDocument(collection, \"rollNo\", rollNo, \"branch\", newBranch);\nupdateDocument(collection, \"rollNo\", rollNo, \"year\", newYear);\n}\n// Find student record.\nvoid findStudentRecord(mongocxx::collection& collection)\n{\nstring rollNo;\ncout << \"Enter roll number: \";\ncin >> rollNo;\nfindDocument(collection, \"rollNo\", rollNo);\n}\n// Delete student record.\nvoid deleteStudentRecord(mongocxx::collection& collection)\n{\nstring rollNo;\ncout << \"Enter roll number: \";\ncin >> rollNo;\ndeleteDocument(collection, createDocument({ {\"rollNo\", rollNo} }));\n}\n// Print student records.\nvoid printStudentRecords(mongocxx::collection& collection)\n{\nprintCollection(collection);\n}\n// ********************************************** Main **********************************************\nint main()\n{\n if(mongoURI.to_string().empty())\n {\ncout << \"URI is empty\";\nreturn 0;\n }\n // Create an instance.\n mongocxx::instance inst{};\n mongocxx::options::client client_options;\n auto api = mongocxx::options::server_api{ mongocxx::options::server_api::version::k_version_1 };\n client_options.server_api_opts(api);\n mongocxx::client conn{ mongoURI, client_options};\nconst string dbName = \"StudentRecords\";\nconst string collName = \"StudentCollection\";\nauto dbs = getDatabases(conn);\n// Check if database already exists.\nif (!(std::find(dbs.begin(), dbs.end(), dbName) != dbs.end()))\n{\n// Create a new database & collection for students.\nconndbName];\n}\nauto studentDB = conn.database(dbName);\nauto allCollections = studentDB.list_collection_names();\n// Check if collection already exists.\nif (!(std::find(allCollections.begin(), allCollections.end(), collName) != allCollections.end()))\n{\ncreateCollection(studentDB, collName);\n}\nauto studentCollection = studentDB.collection(collName);\n// Create a menu for user interaction\n int choice = -1;\n do while (choice != 0)\n {\n //system(\"cls\");\n cout << endl << \"**************************************************************************************************************\" << endl;\n cout << \"Enter 1 to input student record\" << endl;\n cout << \"Enter 2 to update student record\" << endl;\n cout << \"Enter 3 to find student record\" << endl;\n cout << \"Enter 4 to delete student record\" << endl;\n cout << \"Enter 5 to print all student records\" << endl;\n cout << \"Enter 0 to exit\" << endl;\n cout << \"Enter Choice : \"; \n cin >> choice;\n cout << endl;\n switch (choice)\n {\n case 1:\n inputStudentRecord(studentCollection);\n break;\n case 2:\n updateStudentRecord(studentCollection);\n break;\n case 3:\n findStudentRecord(studentCollection);\n break;\n case 4:\n deleteStudentRecord(studentCollection);\n break;\n case 5:\n printStudentRecords(studentCollection);\n break;\n case 0:\n break;\n default:\n cout << \"Invalid choice\" << endl;\n break;\n }\n } while (choice != 0);\n return 0;\n}\n```\n\n## Application in action\n\nWhen this application is executed, you can manage the student records via the console interface. Here\u2019s a demo:\n\nYou can also see the collection in Atlas reflecting any change made via the console application.\n\n![Student Records collection in Atlas\n\n## Wrapping up\nWith this article, we covered installation of C/C++ driver and creating a console application in Visual Studio that connects to MongoDB Atlas to perform basic CRUD operations.\n\nMore information about the C++ driver is available at MongoDB C++ Driver. 
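If you'd like a compact reference to adapt, here is a hedged sketch that reuses the helper functions defined earlier (it assumes the same includes plus `getEnvironmentVariable`, `createDocument`, `insertDocument`, and `findDocument` are in scope, and uses the same *StudentRecords*/*StudentCollection* namespace as above):

```
// Hedged sketch: a condensed insert-and-read flow built from the helpers above.
int main()
{
    // The mongocxx::instance must outlive all other mongocxx objects.
    mongocxx::instance inst{};
    mongocxx::client conn{ mongocxx::uri{ getEnvironmentVariable("MONGODB_URI") } };

    auto collection = conn.database("StudentRecords").collection("StudentCollection");

    // Insert one record, then read it back by roll number.
    insertDocument(collection, createDocument({ {"name", "Ada"}, {"rollNo", "42"},
                                                {"branch", "CS"}, {"year", "2023"} }));
    findDocument(collection, "rollNo", "42");
    return 0;
}
```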
", "format": "md", "metadata": {"tags": ["MongoDB", "C++"], "pageDescription": "This article will show you how to utilize Microsoft Visual Studio to compile and install the MongoDB C and C++ drivers on Windows, and use these drivers to create a console application that can interact with your MongoDB data.", "contentType": "Tutorial"}, "title": "Getting Started with MongoDB and C++", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/mongodb-schema-design-best-practices", "action": "created", "body": "# MongoDB Schema Design Best Practices\n\nHave you ever wondered, \"How do I model a schema for my application?\"\nIt's one of the most common questions devs have pertaining to MongoDB.\nAnd the answer is, *it depends*. This is because document databases have\na rich vocabulary that is capable of expressing data relationships in\nmore nuanced ways than SQL. There are many things to consider when\npicking a schema. Is your app read or write heavy? What data is\nfrequently accessed together? What are your performance considerations?\nHow will your data set grow and scale?\n\nIn this post, we will discuss the basics of data modeling using real\nworld examples. You will learn common methodologies and vocabulary you\ncan use when designing your database schema for your application.\n\nOkay, first off, did you know that proper MongoDB schema design is the\nmost critical part of deploying a scalable, fast, and affordable\ndatabase? It's true, and schema design is often one of the most\noverlooked facets of MongoDB administration. Why is MongoDB Schema\nDesign so important? Well, there are a couple of good reasons. In my\nexperience, most people coming to MongoDB tend to think of MongoDB\nschema design as the same as legacy relational schema design, which\ndoesn't allow you to take full advantage of all that MongoDB databases\nhave to offer. First, let's look at how legacy relational database\ndesign compares to MongoDB schema design.\n\n## Schema Design Approaches \u2013 Relational vs.\u00a0MongoDB\n\nWhen it comes to MongoDB database schema design, this is what most\ndevelopers think of when they are looking at designing a relational\nschema and a MongoDB schema.\n\nI have to admit that I understand the impulse to design your MongoDB\nschema the same way you have always designed your SQL schema. It's\ncompletely normal to want to split up your data into neat little tables\nas you've always done before. I was guilty of doing this when I first\nstarted learning how to use MongoDB. However, as we will soon see, you\nlose out on many of the awesome features of MongoDB when you design your\nschema like an SQL schema.\n\nAnd this is how that makes me feel.\n\nHowever, I think it's best to compare MongoDB schema design to\nrelational schema design since that's where many devs coming to MongoDB\nare coming from. So, let's see how these two design patterns differ.\n\n### Relational Schema Design\n\nWhen designing a relational schema, typically, devs model their schema\nindependent of queries. They ask themselves, \"What data do I have?\"\nThen, by using prescribed approaches, they will\nnormalize\n(typically in 3rd normal\nform).\nThe tl;dr of normalization is to split up your data into tables, so you\ndon't duplicate data. 
Let's take a look at an example of how you would\nmodel some user data in a relational database.\n\nIn this example, you can see that the user data is split into separate\ntables and it can be JOINED together using foreign keys in the `user_id`\ncolumn of the Professions and Cars table. Now, let's take a look at how\nwe might model this same data in MongoDB.\n\n### MongoDB Schema Design\n\nNow, MongoDB schema design works a lot differently than relational\nschema design. With MongoDB schema design, there is:\n\n- No formal process\n- No algorithms\n- No rules\n\nWhen you are designing your MongoDB schema design, the only thing that matters is that you design a schema that will work well for ___your___ application. Two different apps that use the same exact data might have very different schemas if the applications are used differently. When designing a schema, we want to take into consideration the following:\n\n- Store the data\n- Provide good query performance\n- Require reasonable amount of hardware\n\nLet's take a look at how we might model the relational User model in\nMongoDB.\n\n``` json\n{\n \"first_name\": \"Paul\",\n \"surname\": \"Miller\",\n \"cell\": \"447557505611\",\n \"city\": \"London\",\n \"location\": 45.123, 47.232],\n \"profession\": [\"banking\", \"finance\", \"trader\"],\n \"cars\": [\n {\n \"model\": \"Bentley\",\n \"year\": 1973\n },\n {\n \"model\": \"Rolls Royce\",\n \"year\": 1965\n }\n ]\n}\n```\n\nYou can see that instead of splitting our data up into separate\ncollections or documents, we take advantage of MongoDB's document based\ndesign to embed data into arrays and objects within the User object. Now\nwe can make one simple query to pull all that data together for our\napplication.\n\n## Embedding vs.\u00a0Referencing\n\nMongoDB schema design actually comes down to only two choices for every\npiece of data. You can either embed that data directly or reference\nanother piece of data using the\n[$lookup\noperator (similar to a JOIN). Let's look at the pros and cons of using\neach option in your schema.\n\n### Embedding\n\n#### Advantages\n\n- You can retrieve all relevant information in a single query.\n- Avoid implementing joins in application code or using\n $lookup.\n- Update related information as a single atomic operation. By\n default, all CRUD operations on a single document are ACID\n compliant.\n- However, if you need a transaction across multiple operations, you\n can use the transaction\n operator.\n- Though transactions are available starting\n 4.0,\n however, I should add that it's an anti-pattern to be overly reliant\n on using them in your application.\n\n#### Limitations\n\n- Large documents mean more overhead if most fields are not relevant.\n You can increase query performance by limiting the size of the\n documents that you are sending over the wire for each query.\n- There is a 16-MB document size limit in\n MongoDB. If you\n are embedding too much data inside a single document, you could\n potentially hit this limit.\n\n### Referencing\n\nOkay, so the other option for designing our schema is referencing\nanother document using a document's unique object\nID and\nconnecting them together using the\n$lookup\noperator. Referencing works similarly as the JOIN operator in an SQL\nquery. 
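As a quick illustration (a hedged sketch; the `parts` collection and the field names here are only for this example), a single `$lookup` stage in an aggregation pipeline pulls the referenced documents in alongside each parent document:

``` json
{
  "$lookup": {
    "from": "parts",
    "localField": "parts",
    "foreignField": "_id",
    "as": "part_details"
  }
}
```

Run against a products collection, this stage attaches every matching part document to the result under `part_details`.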
It allows us to split up data to make more efficient and scalable\nqueries, yet maintain relationships between data.\n\n#### Advantages\n\n- By splitting up data, you will have smaller documents.\n- Less likely to reach 16-MB-per-document\n limit.\n- Infrequently accessed information not needed on every query.\n- Reduce the amount of duplication of data. However, it's important to\n note that data duplication should not be avoided if it results in a\n better schema.\n\n#### Limitations\n\n- In order to retrieve all the data in the referenced documents, a\n minimum of two queries or\n $lookup\n required to retrieve all the information.\n\n## Type of Relationships\n\nOkay, so now that we have explored the two ways we are able to split up\ndata when designing our schemas in MongoDB, let's look at common\nrelationships that you're probably familiar with modeling if you come\nfrom an SQL background. We will start with the more simple relationships\nand work our way up to some interesting patterns and relationships and\nhow we model them with real-world examples. Note, we are only going to\nscratch the surface of modeling relationships in MongoDB here.\n\nIt's also important to note that even if your application has the same\nexact data as the examples listed below, you might have a completely\ndifferent schema than the one I outlined here. This is because the most\nimportant consideration you make for your schema is how your data is\ngoing to be used by your system. In each example, I will outline the\nrequirements for each application and why a given schema was used for\nthat example. If you want to discuss the specifics of your schema, be\nsure to open a conversation on the MongoDB\nCommunity Forum, and\nwe all can discuss what will work best for your unique application.\n\n### One-to-One\n\nLet's take a look at our User document. This example has some great\none-to-one data in it. For example, in our system, one user can only\nhave one name. So, this would be an example of a one-to-one\nrelationship. We can model all one-to-one data as key-value pairs in our\ndatabase.\n\n``` json\n{\n \"_id\": \"ObjectId('AAA')\",\n \"name\": \"Joe Karlsson\",\n \"company\": \"MongoDB\",\n \"twitter\": \"@JoeKarlsson1\",\n \"twitch\": \"joe_karlsson\",\n \"tiktok\": \"joekarlsson\",\n \"website\": \"joekarlsson.com\"\n}\n```\n\nDJ Khalid would approve.\n\nOne to One tl;dr:\n\n- Prefer key-value pair embedded in the document.\n- For example,\u00a0an employee can work in one and only one department.\n\n### One-to-Few\n\nOkay, now let's say that we are dealing a small sequence of data that's\nassociated with our users. For example, we might need to store several\naddresses associated with a given user. It's unlikely that a user for\nour application would have more than a couple of different addresses.\nFor relationships like this, we would define this as a *one-to-few\nrelationship.*\n\n``` json\n{\n \"_id\": \"ObjectId('AAA')\",\n \"name\": \"Joe Karlsson\",\n \"company\": \"MongoDB\",\n \"twitter\": \"@JoeKarlsson1\",\n \"twitch\": \"joe_karlsson\",\n \"tiktok\": \"joekarlsson\",\n \"website\": \"joekarlsson.com\",\n \"addresses\": \n { \"street\": \"123 Sesame St\", \"city\": \"Anytown\", \"cc\": \"USA\" }, \n { \"street\": \"123 Avenue Q\", \"city\": \"New York\", \"cc\": \"USA\" }\n ]\n}\n```\n\nRemember when I told you there are no rules to MongoDB schema design?\nWell, I lied. 
I've made up a couple of handy rules to help you design\nyour schema for your application.\n\n>\n>\n>**Rule 1**: Favor embedding unless there is a compelling reason not to.\n>\n>\n\nGenerally speaking, my default action is to embed data within a\ndocument. I pull it out and reference it only if I need to access it on\nits own, it's too big, I rarely need it, or any other reason.\n\nOne-to-few tl;dr:\n\n- Prefer embedding for one-to-few relationships.\n\n### One-to-Many\n\nAlright, let's say that you are building a product page for an\ne-commerce website, and you are going to have to design a schema that\nwill be able to show product information. In our system, we save\ninformation about all the many parts that make up each product for\nrepair services. How would you design a schema to save all this data,\nbut still make your product page performant? You might want to consider\na *one-to-many* schema since your one product is made up of many parts.\n\nNow, with a schema that could potentially be saving thousands of sub\nparts, we probably do not need to have all of the data for the parts on\nevery single request, but it's still important that this relationship is\nmaintained in our schema. So, we might have a Products collection with\ndata about each product in our e-commerce store, and in order to keep\nthat part data linked, we can keep an array of Object IDs that link to a\ndocument that has information about the part. These parts can be saved\nin the same collection or in a separate collection, if needed. Let's\ntake a look at how this would look.\n\nProducts:\n\n``` json\n{\n \"name\": \"left-handed smoke shifter\",\n \"manufacturer\": \"Acme Corp\",\n \"catalog_number\": \"1234\",\n \"parts\": [\"ObjectID('AAAA')\", \"ObjectID('BBBB')\", \"ObjectID('CCCC')\"]\n}\n```\n\nParts:\n\n``` json\n{\n \"_id\" : \"ObjectID('AAAA')\",\n \"partno\" : \"123-aff-456\",\n \"name\" : \"#4 grommet\",\n \"qty\": \"94\",\n \"cost\": \"0.94\",\n \"price\":\" 3.99\"\n}\n```\n\n>\n>\n>**Rule 2**: Needing to access an object on its own is a compelling\n>reason not to embed it.\n>\n>\n\n>\n>\n>**Rule 3**: Avoid joins/lookups if possible, but don't be afraid if they\n>can provide a better schema design.\n>\n>\n\n### One-to-Squillions\n\nWhat if we have a schema where there could be potentially millions of\nsubdocuments, or more? That's when we get to the one-to-squillions\nschema. And, I know what you're thinking: \\_Is squillions a real word?\\_\n[And the answer is yes, it is a real\nword.\n\nLet's imagine that you have been asked to create a server logging\napplication. Each server could potentially save a massive amount of\ndata, depending on how verbose you're logging and how long you store\nserver logs for.\n\nWith MongoDB, tracking data within an unbounded array is dangerous,\nsince we could potentially hit that 16-MB-per-document limit. Any given\nhost could generate enough messages to overflow the 16-MB document size,\neven if only ObjectIDs are stored in an array. So, we need to rethink\nhow we can track this relationship without coming up against any hard\nlimits.\n\nSo, instead of tracking the relationship between the host and the log\nmessage in the host document, let's let each log message store the host\nthat its message is associated with. By storing the data in the log, we\nno longer need to worry about an unbounded array messing with our\napplication! 
Let's take a look at how this might work.\n\nHosts:\n\n``` json\n{\n \"_id\": ObjectID(\"AAAB\"),\n \"name\": \"goofy.example.com\",\n \"ipaddr\": \"127.66.66.66\"\n}\n```\n\nLog Message:\n\n``` json\n{\n \"time\": ISODate(\"2014-03-28T09:42:41.382Z\"),\n \"message\": \"cpu is on fire!\",\n \"host\": ObjectID(\"AAAB\")\n}\n```\n\n>\n>\n>**Rule 4**: Arrays should not grow without bound. If there are more than\n>a couple of hundred documents on the \"many\" side, don't embed them; if\n>there are more than a few thousand documents on the \"many\" side, don't\n>use an array of ObjectID references. High-cardinality arrays are a\n>compelling reason not to embed.\n>\n>\n\n### Many-to-Many\n\nThe last schema design pattern we are going to be covering in this post\nis the *many-to-many* relationship. This is another very common schema\npattern that we see all the time in relational and MongoDB schema\ndesigns. For this pattern, let's imagine that we are building a to-do\napplication. In our app, a user may have *many* tasks and a task may\nhave *many* users assigned to it.\n\nIn order to preserve these relationships between users and tasks, there\nwill need to be references from the *one* user to the *many* tasks and\nreferences from the *one* task to the *many* users. Let's look at how\nthis could work for a to-do list application.\n\nUsers:\n\n``` json\n{\n \"_id\": ObjectID(\"AAF1\"),\n \"name\": \"Kate Monster\",\n \"tasks\": ObjectID(\"ADF9\"), ObjectID(\"AE02\"), ObjectID(\"AE73\")]\n}\n```\n\nTasks:\n\n``` json\n{\n \"_id\": ObjectID(\"ADF9\"),\n \"description\": \"Write blog post about MongoDB schema design\",\n \"due_date\": ISODate(\"2014-04-01\"),\n \"owners\": [ObjectID(\"AAF1\"), ObjectID(\"BB3G\")]\n}\n```\n\nFrom this example, you can see that each user has a sub-array of linked\ntasks, and each task has a sub-array of owners for each item in our\nto-do app.\n\n### Summary\n\nAs you can see, there are a ton of different ways to express your schema\ndesign, by going beyond normalizing your data like you might be used to\ndoing in SQL. By taking advantage of embedding data within a document or\nreferencing documents using the $lookup operator, you can make some\ntruly powerful, scalable, and efficient database queries that are\ncompletely unique to your application. In fact, we are only barely able\nto scratch the surface of all the ways that you could model your data in\nMongoDB. If you want to learn more about MongoDB schema design, be sure\nto check out our continued series on schema design in MongoDB:\n\n- [MongoDB schema design\n anti-patterns\n- MongoDB University - M320: Data\n Modeling\n- MongoDB Data Model Design\n Documentation\n- Building with Patterns: A\n Summary\n\nI want to wrap up this post with the most important rule to MongoDB\nschema design yet.\n\n>\n>\n>**Rule 5**: As always, with MongoDB, how you model your data depends \u2013\n>entirely \u2013 on your particular application's data access patterns. You\n>want to structure your data to match the ways that your application\n>queries and updates it.\n>\n>\n\nRemember, every application has unique needs and requirements, so the\nschema design should reflect the needs of that particular application.\nTake the examples listed in this post as a starting point for your\napplication. 
Reflect on what you need to do, and how you can use your\nschema to help you get there.\n\n>\n>\n>Recap:\n>\n>- **One-to-One** - Prefer key value pairs within the document\n>- **One-to-Few** - Prefer embedding\n>- **One-to-Many** - Prefer embedding\n>- **One-to-Squillions** - Prefer Referencing\n>- **Many-to-Many** - Prefer Referencing\n>\n>\n\n>\n>\n>General Rules for MongoDB Schema Design:\n>\n>- **Rule 1**: Favor embedding unless there is a compelling reason not to.\n>- **Rule 2**: Needing to access an object on its own is a compelling reason not to embed it.\n>- **Rule 3**: Avoid joins and lookups if possible, but don't be afraid if they can provide a better schema design.\n>- **Rule 4**: Arrays should not grow without bound. If there are more than a couple of hundred documents on the *many* side, don't embed them; if there are more than a few thousand documents on the *many* side, don't use an array of ObjectID references. High-cardinality arrays are a compelling reason not to embed.\n>- **Rule 5**: As always, with MongoDB, how you model your data depends **entirely** on your particular application's data access patterns. You want to structure your data to match the ways that your application queries and updates it.\n\nWe have only scratched the surface of design patterns in MongoDB. In\nfact, we haven't even begun to start exploring patterns that aren't even\nremotely possible to perform in a legacy relational model. If you want\nto learn more about these patterns, check out the resources below.\n\n## Additional Resources:\n\n- Now that you know how to design a scalable and performant MongoDB\n schema, check out our MongoDB schema design anti-pattern series to\n learn what NOT to do when building out your MongoDB database schema:\n \n- Video more your thing? Check out our video series on YouTube to\n learn more about MongoDB schema anti-patterns:\n \n- MongoDB University - M320: Data\n Modeling\n- 6 Rules of Thumb for MongoDB Schema Design: Part\n 1\n- MongoDB Data Model Design\n Documentation\n- MongoDB Data Model Examples and Patterns\n Documentation\n- Building with Patterns: A\n Summary\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Have you ever wondered, \"How do I model a MongoDB database schema for my application?\" This post answers all your questions!", "contentType": "Tutorial"}, "title": "MongoDB Schema Design Best Practices", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/python-quickstart-crud", "action": "created", "body": "# Basic MongoDB Operations in Python\n\n \n\nLike Python? Want to get started with MongoDB? Welcome to this quick start guide! I'll show you how to set up an Atlas database with some sample data to explore. Then you'll create some data and learn how to read, update and delete it.\n\n## Prerequisites\n\nYou'll need the following installed on your computer to follow along with this tutorial:\n\n- An up-to-date version of Python 3. I wrote the code in this tutorial in Python 3.8, but it should run fine in version 3.6+.\n- A code editor of your choice. 
I recommend either PyCharm or the free VS Code with the official Python extension.\n\n## Start a MongoDB cluster on Atlas\n\nNow you've got your local environment set up, it's time to create a MongoDB database to work with, and to load in some sample data you can explore and modify.\n\nYou could create a database on your development machine, but it's easier to get started on the Atlas hosted service without having to learn how to configure a MongoDB cluster.\n\n>Get started with an M0 cluster on Atlas today. It's free forever, and it's the easiest way to try out the steps in this blog series.\n\nYou'll need to create a new cluster and load it with sample data. My awesome colleague Maxime Beugnet has created a video tutorial to help you out.\n\nIf you don't want to watch the video, the steps are:\n\n- Click \"Get started free\".\n- Enter your details and accept the Terms of Service.\n- Create a *Starter* cluster.\n - Select the same cloud provider you're used to, or just leave it as-is. Pick a region that makes sense for you.\n - You can change the name of the cluster if you like. I've called mine \"PythonQuickstart\".\n\nIt will take a couple of minutes for your cluster to be provisioned, so while you're waiting you can move on to the next step.\n\n## Set up your environment\n\nYou should set up a Python virtualenv which will contain the libraries you install during this quick start. There are several different ways to set up virtualenvs, but to simplify things we'll use the one included with Python. First, create a directory to hold your code and your virtualenv. Open your terminal, `cd` to that directory and then run the following command:\n\n``` bash\n# Note:\n# On Debian & Ubuntu systems you'll first need to install virtualenv with:\n# sudo apt install python3-venv\npython3 -m venv venv\n```\n\nThe command above will create a virtualenv in a directory called `venv`. To activate the new virtualenv, run one of the following commands, according to your system:\n\n``` bash\n# Run the following on OSX & Linux:\nsource venv/bin/activate\n\n# Run the following on Windows:\n.\\\\venv\\\\Scripts\\\\activate\n```\n\nTo write Python programs that connect to your MongoDB database (don't worry - you'll set that up in a moment!) you'll need to install a Python driver - a library which knows how to talk to MongoDB. In Python, you have two choices! The recommended driver is PyMongo - that's what I'll cover in this quick start. If you want to write *asyncio* programs with MongoDB, however, you'll need to use a library called Motor, which is also fully supported by MongoDB.\n\nTo install PyMongo, run the following command:\n\n``` bash\npython -m pip install pymongosrv]==3.10.1\n```\n\nFor this tutorial we'll also make use of a library called `python-dotenv` to load configuration, so run the command below as well to install that:\n\n``` bash\npython -m pip install python-dotenv==0.13.0\n```\n\n## Set up your MongoDB instance\n\nHopefully, your MongoDB cluster should have finished starting up now and has probably been running for a few minutes.\n\nThe following instructions were correct at the time of writing, but may change, as we're always improving the Atlas user interface:\n\nIn the Atlas web interface, you should see a green button at the bottom-left of the screen, saying \"Get Started\". If you click on it, it'll bring up a checklist of steps for getting your database set up. 
Click on each of the items in the list (including the optional \"Load Sample Data\" item), and it'll help you through the steps to get set up.\n\n### Create a user\n\nFollowing the \"Get Started\" steps, create a user with \"Read and write access to any database\". You can give it a username and password of your choice - take a copy of them, you'll need them in a minute. Use the \"autogenerate secure password\" button to ensure you have a long random password which is also safe to paste into your connection string later.\n\n### Allow an IP address\n\nWhen deploying an app with sensitive data, you should only allow the IP address of the servers which need to connect to your database. To allow the IP address of your development machine, select \"Network Access\", click the \"Add IP Address\" button and then click \"Add Current IP Address\" and hit \"Confirm\".\n\n## Connect to your database\n\nThe last step of the \"Get Started\" checklist is \"Connect to your Cluster\". Select \"Connect your application\" and select \"Python\" with a version of \"3.6 or later\".\n\nEnsure Step 2 has \"Connection String only\" highlighted, and press the \"Copy\" button to copy the URL to your pasteboard. Save it to the same place you stored your username and password. Note that the URL has `` as a placeholder for your password. You should paste your password in here, replacing the whole placeholder including the '\\<' and '>' characters.\n\nNow it's time to actually write some Python code to connect to your MongoDB database!\n\nIn your code editor, create a Python file in your project directory called `basic_operations.py`. Enter in the following code:\n\n``` python\nimport datetime # This will be needed later\nimport os\n\nfrom dotenv import load_dotenv\nfrom pymongo import MongoClient\n\n# Load config from a .env file:\nload_dotenv()\nMONGODB_URI = os.environ['MONGODB_URI']\n\n# Connect to your MongoDB cluster:\nclient = MongoClient(MONGODB_URI)\n\n# List all the databases in the cluster:\nfor db_info in client.list_database_names():\n print(db_info)\n```\n\nIn order to run this, you'll need to set the MONGODB_URI environment variable to the connection string you obtained above. You can do this two ways. You can:\n\n- Run an `export` (or `set` on Windows) command to set the environment variable each time you set up your session.\n- Save the URI in a configuration file which should *never* be added to revision control.\n\nI'm going to show you how to take the second approach. Remember it's very important not to accidentally publish your credentials to git or anywhere else, so add `.env` to your `.gitignore` file if you're using git. The `python-dotenv` library loads configuration from a file in the current directory called `.env`. Create a `.env` file in the same directory as your code and paste in the configuration below, replacing the placeholder URI with your own MongoDB URI.\n\n``` none\n# Unix:\nexport MONGODB_URI='mongodb+srv://yourusername:yourpasswordgoeshere@pythonquickstart-123ab.mongodb.net/test?retryWrites=true&w=majority'\n```\n\nThe URI contains your username and password (so keep it safe!) and the hostname of a DNS server which will provide information to PyMongo about your cluster. 
Once PyMongo has retrieved the details of your cluster, it will connect to the primary MongoDB server and start making queries.\n\nNow if you run the Python script you should see output similar to the following:\n\n``` bash\n$ python basic_operations.py\nsample_airbnb\nsample_analytics\nsample_geospatial\nsample_mflix\nsample_supplies\nsample_training\nsample_weatherdata\ntwitter_analytics\nadmin\nlocal\n```\n\nYou just connected your Python program to MongoDB and listed the databases in your cluster! If you don't see this list then you may not have successfully loaded sample data into your cluster; You may want to go back a couple of steps until running this command shows the list above.\n\nIn the code above, you used the `list_database_names` method to list the database names in the cluster. The `MongoClient` instance can also be used as a mapping (like a `dict`) to get a reference to a specific database. Here's some code to have a look at the collections inside the `sample_mflix` database. Paste it at the end of your Python file:\n\n``` python\n# Get a reference to the 'sample_mflix' database:\ndb = client['sample_mflix']\n\n# List all the collections in 'sample_mflix':\ncollections = db.list_collection_names()\nfor collection in collections:\n print(collection)\n```\n\nRunning this piece of code should output the following:\n\n``` bash\n$ python basic_operations.py\nmovies\nsessions\ncomments\nusers\ntheaters\n```\n\nA database also behaves as a mapping of collections inside that database. A collection is a bucket of documents, in the same way as a table contains rows in a traditional relational database. The following code looks up a single document in the `movies` collection:\n\n``` python\n# Import the `pprint` function to print nested data:\nfrom pprint import pprint\n\n# Get a reference to the 'movies' collection:\nmovies = db['movies']\n\n# Get the document with the title 'Blacksmith Scene':\npprint(movies.find_one({'title': 'Blacksmith Scene'}))\n```\n\nWhen you run the code above it will look up a document called \"Blacksmith Scene\" in the 'movies' collection. It looks a bit like this:\n\n``` python\n{'_id': ObjectId('573a1390f29313caabcd4135'),\n'awards': {'nominations': 0, 'text': '1 win.', 'wins': 1},\n'cast': ['Charles Kayser', 'John Ott'],\n'countries': ['USA'],\n'directors': ['William K.L. Dickson'],\n'fullplot': 'A stationary camera looks at a large anvil with a blacksmith '\n 'behind it and one on either side. The smith in the middle draws '\n 'a heated metal rod from the fire, places it on the anvil, and '\n 'all three begin a rhythmic hammering. After several blows, the '\n 'metal goes back in the fire. One smith pulls out a bottle of '\n 'beer, and they each take a swig. Then, out comes the glowing '\n 'metal and the hammering resumes.',\n'genres': ['Short'],\n'imdb': {'id': 5, 'rating': 6.2, 'votes': 1189},\n'lastupdated': '2015-08-26 00:03:50.133000000',\n'num_mflix_comments': 1,\n'plot': 'Three men hammer on an anvil and pass a bottle of beer around.',\n'rated': 'UNRATED',\n'released': datetime.datetime(1893, 5, 9, 0, 0),\n'runtime': 1,\n'title': 'Blacksmith Scene',\n'tomatoes': {'lastUpdated': datetime.datetime(2015, 6, 28, 18, 34, 9),\n 'viewer': {'meter': 32, 'numReviews': 184, 'rating': 3.0}},\n'type': 'movie',\n'year': 1893}\n```\n\nIt's a one-minute movie filmed in 1893 - it's like a YouTube video from nearly 130 years ago! The data above is a single document. 
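PyMongo hands that document back to you as a plain Python dict, so (a small hedged aside, assuming the lookup above found the movie) you can pull individual fields out of it by key:

``` python
movie = movies.find_one({'title': 'Blacksmith Scene'})
print(movie['title'])    # 'Blacksmith Scene'
print(movie['year'])     # 1893
print(movie['cast'][0])  # 'Charles Kayser'
```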
It stores data in fields that can be accessed by name, and you should be able to see that the `title` field contains the same value as we looked up in our call to `find_one` in the code above. The structure of every document in a collection can be different from each other, but it's usually recommended to follow the same or similar structure for all the documents in a single collection.\n\n### A quick diversion about BSON\n\nMongoDB is often described as a JSON database, but there's evidence in the document above that it *doesn't* store JSON. A MongoDB document consists of data stored as all the types that JSON can store, including booleans, integers, floats, strings, arrays, and objects (we call them subdocuments). However, if you look at the `_id` and `released` fields, these are types that JSON cannot store. In fact, MongoDB stores data in a binary format called BSON, which also includes the `ObjectId` type as well as native types for decimal numbers, binary data, and timestamps (which are converted by PyMongo to Python's native `datetime` type.)\n\n## Create a document in a collection\n\nThe `movies` collection contains a lot of data - 23539 documents, but it only contains movies up until 2015. One of my favourite movies, the Oscar-winning \"Parasite\", was released in 2019, so it's not in the database! You can fix this glaring omission with the code below:\n\n``` python\n# Insert a document for the movie 'Parasite':\ninsert_result = movies.insert_one({\n \"title\": \"Parasite\",\n \"year\": 2020,\n \"plot\": \"A poor family, the Kims, con their way into becoming the servants of a rich family, the Parks. \"\n \"But their easy life gets complicated when their deception is threatened with exposure.\",\n \"released\": datetime(2020, 2, 7, 0, 0, 0),\n })\n\n# Save the inserted_id of the document you just created:\nparasite_id = insert_result.inserted_id\nprint(\"_id of inserted document: {parasite_id}\".format(parasite_id=parasite_id))\n```\n\nIf you're inserting more than one document in one go, it can be much more efficient to use the `insert_many` method, which takes an array of documents to be inserted. (If you're just loading documents into your database from stored JSON files, then you should take a look at [mongoimport\n\n## Read documents from a collection\n\nRunning the code above will insert the document into the collection and print out its ID, which is useful, but not much to look at. You can retrieve the document to prove that it was inserted, with the following code:\n\n``` python\nimport bson # <- Put this line near the start of the file if you prefer.\n\n# Look up the document you just created in the collection:\nprint(movies.find_one({'_id': bson.ObjectId(parasite_id)}))\n```\n\nThe code above will look up a single document that matches the query (in this case it's looking up a specific `_id`). If you want to look up *all* the documents that match a query, you should use the `find` method, which returns a `Cursor`. A Cursor will load data in batches, so if you attempt to query all the data in your collection, it will start to yield documents immediately - it doesn't load the whole Collection into memory on your computer! You can loop through the documents returned in a Cursor with a `for` loop. The following query should print one or more documents - if you've run your script a few times you will have inserted one document for this movie each time you ran your script! 
(Don't worry about cleaning them up - I'll show you how to do that in a moment.)\n\n``` python\n# Look up the documents you've created in the collection:\nfor doc in movies.find({\"title\": \"Parasite\"}):\n pprint(doc)\n```\n\nMany methods in PyMongo, including the find methods, expect a MongoDB query as input. MongoDB queries, unlike SQL, are provided as data structures, not as a string. The simplest kind of matches look like the ones above: `{ 'key': 'value' }` where documents containing the field specified by the `key` are returned if the provided `value` is the same as that document's value for the `key`. MongoDB's query language is rich and powerful, providing the ability to match on different criteria across multiple fields. The query below matches all movies produced before 1920 with 'Romance' as one of the genre values:\n\n``` python\n{\n 'year': {\n '$lt': 1920\n }, \n 'genres': 'Romance'\n}\n```\n\nEven more complex queries and aggregations are possible with MongoDB Aggregations, accessed with PyMongo's `aggregate` method - but that's a topic for a later quick start post.\n\n## Update documents in a collection\n\nI made a terrible mistake! The document you've been inserting for Parasite has an error. Although Parasite was released in 2020 it's actually a *2019* movie. Fortunately for us, MongoDB allows you to update documents in the collection. In fact, the ability to atomically update parts of a document without having to update a whole new document is a key feature of MongoDB!\n\nHere's some code which will look up the document you've inserted and update the `year` field to 2019:\n\n``` python\n# Update the document with the correct year:\nupdate_result = movies.update_one({ '_id': parasite_id }, {\n '$set': {\"year\": 2019}\n})\n\n# Print out the updated record to make sure it's correct:\npprint(movies.find_one({'_id': ObjectId(parasite_id)}))\n```\n\nAs mentioned above, you've probably inserted *many* documents for this movie now, so it may be more appropriate to look them all up and change their `year` value in one go. The code for that looks like this:\n\n``` python\n# Update *all* the Parasite movie docs to the correct year:\nupdate_result = movies.update_many({\"title\": \"Parasite\"}, {\"$set\": {\"year\": 2019}})\n```\n\n## Delete documents from the collection\n\nNow it's time to clean up after yourself! The following code will delete all the matching documents from the collection - using the same broad query as before - all documents with a `title` of \"Parasite\":\n\n``` python\nmovies.delete_many(\n {\"title\": \"Parasite\",}\n)\n```\n\nOnce again, PyMongo has an equivalent `delete_one` method which will only delete the first matching document the database finds, instead of deleting *all* matching documents.\n\n## Further reading\n\n>Did you enjoy this quick start guide? Want to learn more? We have a great MongoDB University course I think you'll love!\n>\n>If that's not for you, we have lots of other courses covering all aspects of hosting and developing with MongoDB.\n\nThis quick start has only covered a small part of PyMongo and MongoDB's functionality, although I'll be covering more in later Python quick starts! Fortunately, in the meantime the documentation for MongoDB and using Python with MongoDB is really good. 
I recommend bookmarking the following for your reading pleasure:\n\n- PyMongo Documentation provides thorough documentation describing how to use PyMongo with your MongoDB cluster, including comprehensive reference documentation on the `Collection` class that has been used extensively in this quick start.\n- MongoDB Query Document documentation details the full power available for querying MongoDB collections.", "format": "md", "metadata": {"tags": ["Python", "MongoDB"], "pageDescription": "Learn how to perform CRUD operations using Python for MongoDB databases.", "contentType": "Quickstart"}, "title": "Basic MongoDB Operations in Python", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/swift/realm-swiftui-ios-chat-app", "action": "created", "body": "# Building a Mobile Chat App Using Realm \u2013 Data Architecture\n\nThis article targets developers looking to build Realm into their mobile apps and (optionally) use MongoDB Atlas Device Sync. It focuses on the data architecture, both the schema and the\npartitioning strategy. I use a chat app as an example, but you can apply\nthe same principals to any mobile app. This post will equip you with the\nknowledge needed to design an efficient, performant, and robust data\narchitecture for your mobile app.\n\nRChat is a chat application. Members of a chat room share messages, photos, location, and presence information with each other. The initial version is an iOS (Swift and SwiftUI) app, but we will use the same data model and backend Atlas App Services application to build an Android version in the future.\n\nRChat makes an interesting use case for several reasons:\n\n- A chat message needs to be viewable by all members of a chat room\n and no one else.\n- New messages must be pushed to the chat room for all online members\n in real-time.\n- The app should notify a user that there are new messages even when\n they don't have that chat room open.\n- Users should be able to observe the \"presence\" of other users (e.g.,\n whether they're currently logged into the app).\n- There's no limit on how many messages users send in a chat room, and\n so the data structures must allow them to grow indefinitely.\n\nIf you're looking to add a chat feature to your mobile app, you can\nrepurpose the code from this article and the associated repo. If not,\ntreat it as a case study that explains the reasoning behind the data\nmodel and partitioning/syncing decisions taken. You'll likely need to\nmake similar design choices in your apps.\n\nThis is the first in a series of three articles on building this app:\n\n- Building a Mobile Chat App Using Realm \u2013 Integrating Realm into Your App explains how to build the rest of the app. It was written before new SwiftUI features were added in Realm-Cocoa 10.6. You can skip this part unless you're unable to make use of those features (e.g., if you're using UIKit rather than SwiftUI).\n- Building a Mobile Chat App Using Realm \u2013 The New and Easier Way details building the app using the latest SwiftUI features released with Realm-Cocoa 10.6\n\n>\n>\n>This article was updated in July 2021 to replace `objc` and `dynamic`\n>with the `@Persisted` annotation that was introduced in Realm-Cocoa\n>10.10.0.\n>\n>\n\n## Prerequisites\n\nIf you want to build and run the app for yourself, this is what you'll\nneed:\n\n- iOS14.2+\n- XCode 12.3+\n\n## Front End App Features\n\nA user can register and then log into the app. 
They provide an avatar\nimage and select options such as whether to share location information\nin chat messages.\n\nUsers can create new chat rooms and include other registered users.\n\nThe list of chat rooms is automatically updated to show how many unread\nmessages are in that room. The members of the room are shown, together\nwith an indication of their current status.\n\nA user can open a chat room to view the existing messages or send new\nones.\n\nChat messages can contain text, images, and location details.\n\n>\n>\n>Watch this demo of the app in action.\n>\n>:youtube]{vid=BlV9El_MJqk}\n>\n>\n\n## Running the App for Yourself\n\nI like to see an app in action before I start delving into the code. If\nyou're the same, you can find the instructions in the [README.\n\n## The Data\n\nFiguring out how to store, access, sync, and share your data is key to\ndesigning a functional, performant, secure, and scalable application.\nHere are some things to consider:\n\n- What data should a user be able to see? What should they be able to\n change?\n- What data needs to be available in the mobile app for the current\n user?\n- What data changes need to be communicated to which users?\n- What pieces of data will be accessed at the same time?\n- Are there instances where data should be duplicated for performance,\n scalability, or security purposes?\n\nThis article describes how I chose to organize and access the data, as well as why I made those choices.\n\n### Data Architecture\n\nI store virtually all of the application's data both on the mobile device (in Realm) and in the backend (in MongoDB Atlas). MongoDB Atlas Device Sync is used to keep the multiple copies in sync.\n\nThe Realm schema is defined in code \u2013 I write the classes, and Realm handles the rest. I specify the backend (Atlas) schema through JSON schemas (though I cheated and used the developer mode to infer the schema from the Realm model).\n\nI use Atlas Triggers to automatically create or modify data as a side effect of other actions, such as a new user registering with the app or adding a message to a chat room. Triggers simplify the front end application code and increase security by limiting what data needs to be accessible from the mobile app.\n\nWhen the mobile app opens a Realm, it provides a list of the classes it should contain and a partition value. In combination, Realm uses that information to decide what data it should synchronize between the local Realm and the back end (and onto other instances of the app).\n\nAtlas Device Sync currently requires that an application must use the same partition key (name and type) in all of its Realm Objects and Atlas documents.\n\nA common use case would be to use a string named \"username\" as the partition key. The mobile app would then open a Realm by setting the partition to the current user's name, ensuring that all of that user's data is available (but no data for other users).\n\nFor RChat, I needed something a bit more flexible. For example, multiple users need to be able to view a chat message, while a user should only be able to update their own profile details. I chose a string partition key, where the string is always composed of a key-value pair \u2014 for example, `\"user=874798352934983\"` or `\"conversation=768723786839\"`.\n\nI needed to add back end rules to prevent a rogue user from hacking the mobile app and syncing data that they don't own. Atlas Device Sync permissions are defined through two JSON rules \u2013 one for read connections, one for writes. 
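As a rough sketch (the JSON below is an assumption based on the partition-based Sync rule format rather than a copy of the app's configuration), a read rule that calls out to a Function named `canReadPartition` can look like this:\n\n``` json\n{\n  \"%%true\": {\n    \"%function\": {\n      \"name\": \"canReadPartition\",\n      \"arguments\": [\"%%partition\"]\n    }\n  }\n}\n```\n\nThe write rule takes the same shape, calling `canWritePartition` instead.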
For this app, the rules delegate the decision to Functions:\n\nThe functions split the partition key into its key and value components. They perform different checks depending on the key component:\n\n``` javascript\nconst splitPartition = partition.split(\"=\");\nif (splitPartition.length == 2) {\n partitionKey = splitPartition0];\n partitionValue = splitPartition[1];\n console.log(`Partition key = ${partitionKey}; partition value = ${partitionValue}`);\n} else {\n console.log(`Couldn't extract the partition key/value from ${partition}`);\n return;\n}\n\nswitch (partitionKey) {\n case \"user\":\n // ...\n case \"conversation\":\n // ...\n case \"all-users\":\n // ...\n default:\n console.log(`Unexpected partition key: ${partitionKey}`);\n return false;\n}\n```\n\nThe full logic for the partition checks can be found in the [canReadPartition and canWritePartition Functions. I'll cover how each of the cases are handled later.\n\n### Data Model\n\nThere are three top-level Realm Objects, and I'll work through them in turn.\n\n#### User Object\n\nThe User class represents an application user:\n\n``` swift\nclass User: Object {\n @Persisted var _id = UUID().uuidString\n @Persisted var partition = \"\" // \"user=_id\"\n @Persisted var userName = \"\"\n @Persisted var userPreferences: UserPreferences?\n @Persisted var lastSeenAt: Date?\n @Persisted let conversations = List()\n @Persisted var presence = \"Off-Line\"\n}\n```\n\nI declare that the `User` class top-level Realm objects, by making it inherit from Realm's `Object` class.\n\nThe partition key is a string. I always set the partition to `\"user=_id\"` where `_id` is a unique identifier for the user's `User` object.\n\n`User` includes some simple attributes such as strings for the user name and presence state.\n\nUser preferences are embedded within the User class:\n\n``` swift\nclass UserPreferences: EmbeddedObject {\n @Persisted var displayName: String?\n @Persisted var avatarImage: Photo?\n}\n```\n\nIt's the inheritance from Realm's `EmbeddedObject` that tags this as a class that must always be embedded within a higher-level Realm object.\n\nNote that only the top-level Realm Object class needs to include the partition field. The partition's embedded objects get included automatically.\n\n`UserPreferences` only contains two attributes, so I could have chosen to include them directly in the `User` class. I decided to add the extra level of hierarchy as I felt it made the code easier to understand, but it has no functional impact.\n\nBreaking the avatar image into its own embedded class was a more critical design decision as I reuse the `Photo` class elsewhere. This is the Photo class:\n\n``` swift\nclass Photo: EmbeddedObject, ObservableObject {\n @Persisted var _id = UUID().uuidString\n @Persisted var thumbNail: Data?\n @Persisted var picture: Data?\n @Persisted var date = Date()\n}\n```\n\nThe `User` class includes a Realm `List` of embedded Conversation objects:\n\n``` swift\nclass Conversation: EmbeddedObject, ObservableObject, Identifiable {\n @Persisted var id = UUID().uuidString\n @Persisted var displayName = \"\"\n @Persisted var unreadCount = 0\n @Persisted let members = List()\n}\n```\n\nI've intentionally duplicated some data by embedding the conversation data into the `User` object. Every member of a conversation (chat room) will have a copy of the conversation's data. 
Only the `unreadCount` attribute is unique to each user.\n\n##### What was the alternative?\n\nI could have made `Conversation` a top-level Realm object and set the partition to a string of the format `\"conversation=conversation-id\"`. The User object would then have contained an array of conversation-ids. If a user were a member of 20 conversations, then the app would need to open 20 Realms (one for each of the partitions) to fetch all of the data it needed to display a list of the user's conversations. That would be a very inefficient approach.\n\n##### What are the downsides to duplicating the conversation data?\n\nFirstly, it uses more storage in the back end. The cost isn't too high as the `Conversation` only contains meta-data about the chat room and not the actual chat messages (and embedded photos). There are relatively few conversations compared to the number of chat messages.\n\nThe second drawback is that I need to keep the different versions of the conversation consistent. That does add some extra complexity, but I contain the logic within an Atlas\nTrigger in the back end. This reasonably simple function ensures that all instances of the conversation data are updated when someone adds a new chat message:\n\n``` javascript\nexports = function(changeEvent) {\n if (changeEvent.operationType != \"insert\") {\n console.log(`ChatMessage ${changeEvent.operationType} event \u2013 currently ignored.`);\n return;\n }\n\n console.log(`ChatMessage Insert event being processed`);\n let userCollection = context.services.get(\"mongodb-atlas\").db(\"RChat\").collection(\"User\");\n let chatMessage = changeEvent.fullDocument;\n let conversation = \"\";\n\n if (chatMessage.partition) {\n const splitPartition = chatMessage.partition.split(\"=\");\n if (splitPartition.length == 2) {\n conversation = splitPartition1];\n console.log(`Partition/conversation = ${conversation}`);\n } else {\n console.log(\"Couldn't extract the conversation from partition ${chatMessage.partition}\");\n return;\n }\n } else {\n console.log(\"partition not set\");\n return;\n }\n\n const matchingUserQuery = {\n conversations: {\n $elemMatch: {\n id: conversation\n }\n }\n };\n\n const updateOperator = {\n $inc: {\n \"conversations.$[element].unreadCount\": 1\n }\n };\n\n const arrayFilter = {\n arrayFilters:[\n {\n \"element.id\": conversation\n }\n ]\n };\n\n userCollection.updateMany(matchingUserQuery, updateOperator, arrayFilter)\n .then ( result => {\n console.log(`Matched ${result.matchedCount} User docs; updated ${result.modifiedCount}`);\n }, error => {\n console.log(`Failed to match and update User docs: ${error}`);\n });\n};\n```\n\nNote that the function increments the `unreadCount` for all conversation members. When those changes are synced to the mobile app for each of those users, the app will update its rendered list of conversations to alert the user about the unread messages.\n\n`Conversations`, in turn, contain a List of [Members:\n\n``` swift\nclass Member: EmbeddedObject, Identifiable {\n @Persisted var userName = \"\"\n @Persisted var membershipStatus: String = \"User added, but invite pending\"\n}\n```\n\nAgain, there's some complexity to ensure that the `User` object for all conversation members contains the full list of members. 
Once more, a back end Atlas Trigger handles this.\n\nThis is how the iOS app opens a User Realm:\n\n``` swift\nlet realmConfig = user.configuration(partitionValue: \"user=\\(user.id)\")\nreturn Realm.asyncOpen(configuration: realmConfig)\n```\n\nFor efficiency, I open the User Realm when the user logs in and don't close it until the user logs out.\n\nThe Realm sync rules to determine whether a user can open a synced read or read/write Realm of User objects are very simple. Sync is allowed only if the value component of the partition string matches the logged-in user's `id`:\n\n``` javascript\ncase \"user\":\n console.log(`Checking if partitionValue(${partitionValue}) matches user.id(${user.id}) \u2013 ${partitionKey === user.id}`);\n return partitionValue === user.id;\n```\n\n#### Chatster Object\n\nAtlas Device Sync doesn't currently have a way to give one user permission to sync all elements of an object/document while restricting a different user to syncing just a subset of the attributes. The `User` object contains some attributes that should only be accessible by the user it represents (e.g., the list of conversations that they are members of). The impact is that we can't sync `User` objects to other users. But, there is also data in there that we would like to share (e.g., the\nuser's avatar image).\n\nThe way I worked within the current constraints is to duplicate some of the `User` data in the Chatster Object:\n\n``` swift\nclass Chatster: Object {\n @Persisted var _id = UUID().uuidString // This will match the _id of the associated User\n @Persisted var partition = \"all-users=all-the-users\"\n @Persisted var userName: String?\n @Persisted var displayName: String?\n @Persisted var avatarImage: Photo?\n @Persisted var lastSeenAt: Date?\n @Persisted var presence = \"Off-Line\"\n}\n```\n\nI want all `Chatster` objects to be available to all users. For example, when creating a new conversation, the user can search for potential members based on their username. To make that happen, I set the partition to `\"all-users=all-the-users\"` for every instance.\n\nA Trigger handles the complexity of maintaining consistency between the `User` and `Chatster` collections/objects. The iOS app doesn't need any additional logic.\n\nAn alternate solution would have been to implement and call Functions to fetch the required subset of `User` data and to search usernames. 
The functions approach would remove the data duplication, but it would add extra latency and wouldn't work when the device is offline.\n\nThis is how the iOS app opens a Chatster Realm:\n\n``` swift\nlet realmConfig = user.configuration(partitionValue: \"all-users=all-the-users\")\nreturn Realm.asyncOpen(configuration: realmConfig)\n```\n\nFor efficiency, I open the `Chatster` Realm when the user logs in and don't close it until the user logs out.\n\nThe Sync rules to determine whether a user can open a synced read or read/write Realm of User objects are even more straightforward.\n\nIt's always possible to open a synced `Chatster` Realm for reads:\n\n``` javascript\ncase \"all-users\":\n console.log(`Any user can read all-users partitions`);\n return true;\n```\n\nIt's never possible to open a synced `Chatster` Realm for writes (the Trigger is the only place that needs to make changes):\n\n``` javascript\ncase \"all-users\":\n console.log(`No user can write to an all-users partitions`);\n return false;\n```\n\n#### ChatMessage Object\n\nThe third and final top-level Realm Object is ChatMessage:\n\n``` swift\nclass ChatMessage: Object {\n @Persisted var _id = UUID().uuidString\n @Persisted var partition = \"\" // \"conversation=\"\n @Persisted var author: String?\n @Persisted var text = \"\"\n @Persisted var image: Photo?\n @Persisted let location = List()\n @Persisted var timestamp = Date()\n}\n```\n\nThe partition is set to `\"conversation=\"`. This means that all messages in a single conversation are in the same partition.\n\nAn alternate approach would be to embed chat messages within the `Conversation` object. That approach has a severe drawback that Conversation objects/documents would indefinitely grow as users send new chat messages to the chat room. Recall that the `ChatMessage` includes photos, and so the size of the objects/documents could snowball, possibly exhausting MongoDB's 16MB limit. Unbounded document growth is a major MongoDB anti-pattern and should be avoided.\n\nThis is how the iOS app opens a `ChatMessage` Realm:\n\n``` swift\nlet realmConfig = user.configuration(partitionValue: \"conversation=\\(conversation.id)\")\nRealm.asyncOpen(configuration: realmConfig)\n```\n\nThere is a different partition for each group of `ChatMessages` that form a conversation, and so every opened conversation requires its own synced Realm. If the app kept many `ChatMessage` Realms open simultaneously, it could quickly hit device resource limits. To keep things efficient, I only open `ChatMessage` Realms when a chat room's view is opened, and then I close them (set to `nil`) when the conversation view is closed.\n\nThe Sync rules to determine whether a user can open a synced Realm of ChatMessage objects are a little more complicated than for `User` and `Chatster` objects. 
A user can only open a synced `ChatMessage` Realm if their conversation list contains the value component of the partition key:\n\n``` javascript\ncase \"conversation\":\n console.log(`Looking up User document for _id = ${user.id}`);\n return userCollection.findOne({ _id: user.id })\n .then (userDoc => {\n if (userDoc.conversations) {\n let foundMatch = false;\n userDoc.conversations.forEach( conversation => {\n console.log(`Checking if conversaion.id (${conversation.id}) === ${partitionValue}`)\n if (conversation.id === partitionValue) {\n console.log(`Found matching conversation element for id = ${partitionValue}`);\n foundMatch = true;\n }\n });\n if (foundMatch) {\n console.log(`Found Match`);\n return true;\n } else {\n console.log(`Checked all of the user's conversations but found none with id == ${partitionValue}`);\n return false;\n }\n } else {\n console.log(`No conversations attribute in User doc`);\n return false;\n }\n }, error => {\n console.log(`Unable to read User document: ${error}`);\n return false;\n });\n```\n\n## Summary\n\nRChat demonstrates how to develop a mobile app with complex data requirements using Realm.\n\nSo far, we've only implemented RChat for iOS, but we'll add an Android version soon \u2013 which will use the same back end Atlas App Services application. The data architecture for the Android app will also be the same. By the magic of MongoDB Atlas Device Sync, Android users will be able to chat with iOS users.\n\nIf you're adding a chat capability to your iOS app, you'll be able to use much of the code from RChat. If you're adding chat to an Android app, you should use the data architecture described here. If your app has no chat component, you should still consider the design choices described in this article, as you'll likely face similar decisions.\n\n## References\n\n- GitHub Repo for this app\n- If you're building your first SwiftUI/Realm app, then check out Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine\n- GitHub Repo for Realm-Cocoa SDK\n- Realm Cocoa SDK documentation\n- MongoDB's Realm documentation\n- WildAid O-FISH \u2013 an example of a **much** bigger app built on Realm and MongoDB Atlas Device Sync (FKA MongoDB Realm Sync)\n\n>\n>\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n>\n>\n\n", "format": "md", "metadata": {"tags": ["Swift", "Realm", "JavaScript", "iOS", "Mobile"], "pageDescription": "Building a Mobile Chat App Using Realm \u2013 Data Architecture.", "contentType": "Code Example"}, "title": "Building a Mobile Chat App Using Realm \u2013 Data Architecture", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/is-it-safe-covid", "action": "created", "body": "# Is it Safe to Go Outside? Data Investigation With MongoDB\n\nThis investigation started a few months ago. COVID-19 lockdown in Scotland was starting to ease, and it was possible (although discouraged) to travel to other cities in Scotland. 
I live in a small-ish town outside of Edinburgh, and it was tempting to travel into the city to experience something a bit more bustling than the semi-rural paths that have been the only thing I've really seen since March.\n\nThe question I needed to answer was: *Is it safe to go outside?* What was the difference in risk between walking around my neighbourhood, and travelling into the city to walk around there?\n\nI knew that the Scottish NHS published data related to COVID-19 infections, but it proved slightly tricky to find.\n\nInitially, I found an Excel spreadsheet containing infection rates in different parts of the country, but it was heavily formatted, and not really designed to be ingested into a database like MongoDB. Then I discovered the Scottish Health and Social Care Open Data platform, which hosted some APIs for accessing COVID-19 infection data, sliced and diced by different areas and other metrics. I've chosen the data that's provided by local authority, which is the kind of geographical area I'm interested in.\n\nThere's a *slight* complication with the way the data is provided: It's provided across two endpoints. The first endpoint, which I've called `daily`, provides historical infection data, *excluding the latest day's results.* To also obtain the most recent day's data, I need to get data from another endpoint, which I've called `latest`, which only provides a single day's data.\n\nI'm going to walk you through the approach I took using Jupyter Notebook to explore the API's data format, load it into MongoDB, and then do someanalysis in MongoDB Charts.\n\n## Prerequisites\n\nThis blog post assumes that you have a working knowledge of Python. There's only one slightly tricky bit of Python code here, which I've tried to describe in detail, but it won't affect your understanding of the rest of the post if it's a bit beyond your Python level.\n\nIf you want to follow along, you should have a working install of Python 3.6 or later, with Jupyter Notebook installed. You'll also need an MongoDB Atlas account, along with a free MongoDB 4.4 Cluster. Everything in this tutorial works with a free MongoDB Atlas shared cluster.\n\n>If you want to give MongoDB a try, there's no better way than to sign up for a \\*free\\* MongoDB Atlas account and to set up a free-tier cluster.\n>\n>The free tier won't let you store huge amounts of data or deal with large numbers of queries, but it's enough to build something reasonably small and to try out all the features that MongoDB Atlas has to offer, and it's not a trial, so there's no time limit.\n\n## Setting Up the Environment\n\nBefore starting up Jupyter Notebook, I set an environment variable using the following command:\n\n``` shell\nexport MDB_URI=\"mongodb+srv://username:password@cluster0.abcde.mongodb.net/covid?retryWrites=true&w=majority\"\n```\n\nThat environment variable, `MDB_URI`, will allow me to load in the MongoDB connection details without keeping them insecurely in my Jupyter Notebook. If you're doing this yourself, you'll need to get the connection URL for your own cluster, from the Atlas web interface.\n\nAfter this, I started up the Jupyter Notebook server (by running `jupyter notebook` from the command-line), and then I created a new notebook.\n\nIn the first cell, I have the following code, which uses a neat trick for installing third-party Python libraries into the current Python environment. 
In this case, it's installing the Python MongoDB driver, `pymongo`, and `urllib3`, which I use to make HTTP requests.\n\n``` python\nimport sys\n\n!{sys.executable} -m pip install pymongosrv]==3.11.0 urllib3==1.25.10\n```\n\nThe second cell consists of the following code, which imports the modules I'll be using in this notebook. Then, it sets up a couple of URLs for the API endpoints I'll be using to get COVID data. Finally, it sets up an HTTP connection pool manager `http`, connects to my MongoDB Atlas cluster, and creates a reference to the `covid` database I'll be loading data into.\n\n``` python\nfrom datetime import datetime\nimport json\nimport os\nfrom urllib.parse import urljoin\nimport pymongo\nimport urllib3\n\n# Historical COVID stats endpoint:\ndaily_url = 'https://www.opendata.nhs.scot/api/3/action/datastore_search?resource_id=427f9a25-db22-4014-a3bc-893b68243055'\n\n# Latest, one-day COVID stats endpoint:\nlatest_url = 'https://www.opendata.nhs.scot/api/3/action/datastore_search?resource_id=e8454cf0-1152-4bcb-b9da-4343f625dfef'\n\nhttp = urllib3.PoolManager()\n\nclient = pymongo.MongoClient(os.environ[\"MDB_URI\"])\ndb = client.get_database('covid')\n```\n\n## Exploring the API\n\nThe first thing I did was to request a sample page of data from each API endpoint, with code that looks a bit like the code below. I'm skipping a couple of steps where I had a look at the structure of the data being returned.\n\nLook at the data that's coming back:\n\n``` python\ndata = json.loads(http.request('GET', daily_url).data)\npprint(data['result']['records'])\n```\n\nThe data being returned looked a bit like this:\n\n### `daily_url`\n\n``` python\n{'CA': 'S12000005', \n'CAName': 'Clackmannanshire', \n'CrudeRateDeaths': 0, \n'CrudeRateNegative': 25.2231276678308,\n'CrudeRatePositive': 0, \n'CumulativeDeaths': 0, \n'CumulativeNegative': 13, \n'CumulativePositive': 0, \n'DailyDeaths': 0, \n'DailyPositive': 0, \n'Date': 20200228, \n'PositivePercentage': 0, \n'PositiveTests': 0, \n'TotalPillar1': 6, \n'TotalPillar2': 0, \n'TotalTests': 6, \n'_id': 1}\n - \n```\n\n### `latest_url`\n\n``` python\n{'CA': 'S12000005',\n'CAName': 'Clackmannanshire',\n'CrudeRateDeaths': 73.7291424136593, \n'CrudeRateNegative': 27155.6072953046,\n'CrudeRatePositive': 1882.03337213815,\n'Date': 20201216,\n'NewDeaths': 1,\n'NewPositive': 6,\n'TotalCases': 970,\n'TotalDeaths': 38,\n'TotalNegative': 13996,\n'_id': 1}\n```\n\nNote that there's a slight difference in the format of the data. The `daily_url` endpoint's `DailyPositive` field corresponds to the `latest_url`'s `NewPositive` field. This is also true of `DailyDeaths` vs `NewDeaths`.\n\nAnother thing to notice is that each region has a unique identifier, stored in the `CA` field. A combination of `CA` and `Date` should be unique in the collection, so I have one record for each region for each day.\n\n## Uploading the Data\n\nI set up the following indexes to ensure that the combination of `Date` and `CA` is unique, and I've added an index for `CAName` so that data for a region can be looked up efficiently:\n\n``` python\ndb.daily.create_index([('Date', pymongo.ASCENDING), ('CA', pymongo.ASCENDING)], unique=True)\ndb.daily.create_index([('CAName', pymongo.ASCENDING)])\n```\n\nI'm going to write a short amount of code to loop through each record in each API endpoint and upload each record into my `daily` collection in the database. 
First, there's a method that takes a record (as a Python dict) and uploads it into MongoDB.\n\n``` python\ndef upload_record(record):\n del record['_id']\n record['Date'] = datetime.strptime(str(record['Date']), \"%Y%m%d\")\n if 'NewPositive' in record:\n record['DailyPositive'] = record['NewPositive']\n del record['NewPositive']\n if 'NewDeaths' in record:\n record['DailyDeaths'] = record['NewDeaths']\n del record['NewDeaths']\n db.daily.replace_one({'Date': record['Date'], 'CA': record['CA']}, record, upsert=True)\n```\n\nBecause the provided `_id` value isn't unique across both API endpoints I'll be importing data from, the function removes it from the provided record dict. It then parses the `Date` field into a Python `datetime` object, so that it will be recognised as a MongoDB `Date` type. Then, it renames the `NewPositive` and `NewDeaths` fields to match the field names from the `daily` endpoint.\n\nFinally, it inserts the data into MongoDB, using `replace_one`, so if you run the script multiple times, then the data in MongoDB will be updated to the latest results provided by the API. This is useful, because sometimes, data from the `daily` endpoint is retroactively updated to be more accurate.\n\nIt would be *great* if I could write a simple loop to upload all the records, like this:\n\n``` python\nfor record in data['result']['records']:\n upload_record(record)\n```\n\nUnfortunately, the endpoint is paged and only provides 100 records at a time. The paging data is stored in a field called `_links`, which looks like this:\n\n``` python\npprint(data['result']['_links'])\n\n{'next':\n'/api/3/action/datastore_search?offset=100&resource_id=e8454cf0-1152-4bcb-b9da-4343f625dfef',\n'start':\n'/api/3/action/datastore_search?resource_id=e8454cf0-1152-4bcb-b9da-4343f625dfef'}\n```\n\nI wrote a \"clever\" [generator function, which takes a starting URL as a starting point, and then yields each record (so you can iterate over the individual records). Behind the scenes, it follows each `next` link until there are no records left to consume. Here's what that looks like, along with the code that loops through the results:\n\n``` python\ndef paged_wrapper(starting_url):\n url = starting_url\n while url is not None:\n print(url)\n try:\n response = http.request('GET', url)\n data = response.data\n page = json.loads(data)\n except json.JSONDecodeError as jde:\n print(f\"\"\"\nFailed to decode invalid json at {url} (Status: {response.status}\n\n{response.data}\n\"\"\")\n raise\n records = page'result']['records']\n if records:\n for record in records:\n yield record\n else:\n return\n\n if n := page['result']['_links'].get('next'):\n url = urljoin(url, n)\n else:\n url = None\n```\n\nNext, I need to load all the records at the `latest_url` that holds the records for the most recent day. 
After that, I can load all the `daily_url` records that hold all the data since the NHS started to collect it, to ensure that any records that have been updated in the API are also reflected in the MongoDB collection.\n\nNote that I could store the most recent update date for the `daily_url` data in MongoDB and check to see if it's changed before updating the records, but I'm trying to keep the code simple here, and it's not a very large dataset to update.\n\nUsing the paged wrapper and `upload_record` function together now looks like this:\n\n``` python\n# This gets the latest figures, released separately:\nrecords = paged_wrapper(latest_url)\nfor record in records:\n upload_record(record)\n\n# This backfills, and updates with revised figures:\nrecords = paged_wrapper(daily_url)\nfor record in records:\n upload_record(record)\n```\n\nWoohoo! Now I have a Jupyter Notebook that will upload all this COVID data into MongoDB when it's executed.\n\nAlthough these Notebooks are great for writing code with data you're not familiar with, it's a little bit unwieldy to load up Jupyter and execute the notebook each time I want to update the data in my database. If I wanted to run this with a scheduler like `cron` on Unix, I could select `File > Download as > Python`, which would provide me with a python script I could easily run from a scheduler, or just from the command-line.\n\nAfter executing the notebook and waiting a while for all the data to come back, I then had a collection called `daily` containing all of the COVID data dating back to February 2020.\n\n## Visualizing the Data with Charts\n\nThe rest of this blog post *could* have been a breakdown of using the [MongoDB Aggregation Framework to query and analyse the data that I've loaded in. But I thought it might be more fun to *look* at the data, using MongoDB Charts.\n\nTo start building some charts, I opened a new browser tab, and went to . Before creating a new dashboard, I first added a new data source, by clicking on \"Data Sources\" on the left-hand side of the window. I selected my cluster, and then I ensured that my database and collection were selected.\n\nAdding a new data source.\n\nWith the data source set up, it was time to create some charts from the data! I selected \"Dashboards\" on the left, and then clicked \"Add dashboard\" on the top-right. I clicked through to the new dashboard, and pressed the \"Add chart\" button.\n\nThe first thing I wanted to do was to plot the number of positive test results over time. I selected my `covid.daily` data source at the top-left, and that resulted in the fields in the `daily` collection being listed down the left-hand side. These fields can be dragged and dropped into various other parts of the MongoDB Charts interface to change the data visualization.\n\nA line chart is a good visualization of time-series data, so I selected a `Line` Chart Type. Then I drag-and-dropped the `Date` field from the left-hand side to the X Axis box, and `DailyPositive` field to the Y Axis box.\n\nThis gave a really low-resolution chart. That's because the Date field is automatically selected with binning on, and set to `MONTH` binning. That means that all the `DailyPositive` values are aggregated together for each month, which isn't what I wanted to do. So, I deselected binning, and that gives me the chart below.\n\nIt's worth noting that the above chart was regenerated at the start of January, and so it shows a big spike towards the end of the chart. 
That's possibly due to relaxation of distancing rules over Christmas, combined with a faster-spreading mutation of the disease that has appeared in the UK.\n\nAlthough the data is separated by area (or `CAName`) in the collection, the data in the chart is automatically combined into a single line, showing the total figures across Scotland. I wanted to keep this chart, but also have a similar chart showing the numbers separated by area.\n\nI created a duplicate of this chart, by clicking \"Save & Close\" at the top-right. Then, in the dashboard, I click on the chart's \"...\" button and selected \"Duplicate chart\" from the menu. I picked one of the two identical charts and hit \"Edit.\"\n\nBack in the chart editing screen for the new chart, I drag-and-dropped `CAName` over to the `Series` box. This displays *nearly* the chart that I have in my head but reveals a problem...\n\nNote that although this chart was generated in early January, the data displayed only goes to early August. This is because of the problem described in the warning message at the top of the chart. \"This chart may be displaying incomplete data. The maximum query response size of 5,000 documents for Discrete type charts has been reached.\"\n\nThe solution to this problem is simple in theory: Reduce the number of documents being used to display the chart. In practice, it involves deciding on a compromise:\n\n- I could reduce the number of documents by binning the data by date (as happened automatically at the beginning!).\n- I could limit the date range used by the chart.\n- I could filter out some areas that I'm not interested in.\n\nI decided on the second option: to limit the date range. This *used* to require a custom query added to the \"Query\" text box at the top of the screen, but a recent update to charts allows you to filter by date, using point-and-click operations. So, I clicked on the \"Filter\" tab and then dragged the `Date` field from the left-hand column over to the \"+ filter\" box. I think it's probably useful to see the most recent figures, whenever they might be, so I left the panel with \"Relative\" selected, and chose to filter data from the past 90 days.\n\nFiltering by recent dates has the benefit of scaling the Y axis to the most recent figures. But there are still a lot of lines there, so I added `CAName` to the \"Filter\" box by dragging it from the \"Fields\" column, and then checked the `CAName` values I was interested in. Finally, I hit `Save & Close` to go back to the dashboard.\n\nIdeally, I'd have liked to normalize this data based on population, but I'm going to leave that out of this blog post, to keep this to a reasonable length.\n\n## Maps and MongoDB\n\nNext, I wanted to show how quick it can be to visualize geographical data in MongoDB Charts. I clicked on \"Add chart\" and selected `covid.daily` as my data source again, but this time, I selected \"Geospatial\" as my \"Chart Type.\" Then I dragged the `CAName` field to the \"Location\" box, and `DailyPositive` to the \"Color\" box.\n\nWhoops! It didn't recognize the shapes! What does that mean? The answer is in the \"Customize\" tab, under \"Shape Scheme,\" which is currently set to \"Countries and Regions.\" Change this value to \"UK Counties And Districts.\" You should immediately see a chart like this:\n\nWeirdly, there are unshaded areas over part of the country. 
It turns out that these correspond to \"Dumfries & Galloway\" and \"Argyll & Bute.\" These values are stored with the ampersand (&) in the `daily` collection, but the chart shapes are only recognized if they contain the full word \"and.\" Fortunately, I could fix this with a short aggregation pipeline in the \"Query\" box at the top of the window.\n\n>**Note**: The $replaceOne operator is only available in MongoDB 4.4! If you've set up an Atlas cluster with an older release of MongoDB, then this step won't work.\n\n``` javascript\n { $addFields: { CAName: { $replaceOne: { input: \"$CAName\", find: \" & \", replacement: \" and \"} } }}]\n```\n\nThis aggregation pipeline consists of a single [$addFields operation which replaces \" & \" in the `CAName` field with \"and.\" This corrects the map so it looks like this:\n\nI'm going to go away and import some population data into my collection, so that I can see what the *concentration* of infections are, and get a better idea of how safe my area is, but that's the end of this tutorial!\n\nI hope you enjoyed this rambling introduction to data massage and import with Jupyter Notebook, and the run-through of a collection of MongoDB Charts features. I find this workflow works well for me as I explore different datasets. I'm always especially amazed at how powerful MongoDB Charts can be, especially with a little aggregation pipeline magic.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Python", "Atlas"], "pageDescription": "In this post, I'll show how to load some data from API endpoints into MongoDB and then visualize the data in MongoDB Charts.", "contentType": "Tutorial"}, "title": "Is it Safe to Go Outside? Data Investigation With MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/key-developer-takeaways-hacktoberfest-2020", "action": "created", "body": "# 5 Key Takeaways from Hacktoberfest 2020\n\nHacktoberfest 2020 is over, and it was a resounding success. We had over\n100 contributions from more than 50 different contributors to the O-FISH\nproject. Learn about why the app is crucial in the NBC story about\nO-FISH.\n\nBefore we get to the Lessons Learned, let's look at what was\naccomplished during Hacktoberfest, and who we have to thank for all the\ngood work.\n\n## Wrap-Up Video\n\nIf you were a part of Hacktoberfest for the O-FISH\nproject, make sure you watch the\nwrap-up\nvideo\nbelow. It's about 10 minutes.\n\n:youtube]{vid=hzzvEy5tA5I}\n\nThe point of Hacktoberfest is to be a CELEBRATION of open source, not\njust \"to make and get contributions.\" All pull requests, no matter how\nbig or small, had a great impact. If you participated in Hacktoberfest,\ndo not forget to claim your [MongoDB Community\nforum badges! You can\nstill get an O-FISH\nbadge\nat any time, by contributing to the O-FISH\nproject. Here's what the badges\nlook like:\n\nJust go to the community forums post Open Source Contributors, Pick Up\nYour Badges\nHere!\n\nThere were lots of bug fixes, as well as feature additions both small\nand big\u2014like dark mode for our mobile applications!\n\n**Contributions by week per repository**\n| Merged/Closed | o-fish-android | o-fish-ios | o-fish-realm | o-fish-web | wildaid. 
github.io | Total |\n|---------------|--------------------------------------------------------------|------------------------------------------------------|----------------------------------------------------------|------------------------------------------------------|--------------------------------------------------------------------|---------|\n| 01 - 04 Oct | 6 | 6 | 0 | 7 | 14 | 33 |\n| 05 - 11 Oct | 9 | 5 | 0 | 10 | 3 | 27 |\n| 12 - 18 Oct | 15 | 6 | 1 | 11 | 2 | 35 |\n| 19 - 25 Oct | 4 | 1 | 1 | 4 | 1 | 11 |\n| 26 - 31 Oct | 2 | 4 | 2 | 3 | 0 | 11 |\n| **Total** | **36** | **22** | **4** | **35** | **20** | **117** |\n\n## Celebrating the Contributors\n\nHere are the contributors who made Hacktoberfest so amazing for us! This\nwould not have been possible without all these folks.\n\n**Hacktoberfest 2020 Contributors**\n| aayush287 | aayushi2883 | abdulbasit75 | antwonthegreat |\n:---------------------------------------------------:|:---------------------------------------------------------:|:-----------------------------------------------------------:|:-----------------------------------------------------:|\n| **ardlank** | **ashwinpilgaonkar** | **Augs0** | **ayushjainrksh** |\n| **bladebunny** | **cfsnsalazar** | **coltonlemmon** | **CR96** | \n| **crowtech7** | **czuria1** | **deveshchatuphale7** | **Dusch4593** |\n| **ericblancas23** | **evayde** | **evnik** | **fandok** | \n| **GabbyJ** | **gabrielhicks** | **haqiqiw** | **ippschi** |\n| **ismaeldcom** | **jessicasalbert** | **jkreller** | **jokopriyono** | \n| **joquendo** | **k-charette** | **kandarppatel28** | **lenmorld** |\n| **ljhaywar** | **mdegis** | **mfhan** | **newprtst** | \n| **nugmanoff** | **pankova** | **rh9891** | **RitikPandey1** |\n| **Roshanpaswan** | **RuchaYagnik** | **rupalkachhwaha** | **saribricka** |\n| **seemagawaradi** | **SEGH** | **sourabhbagrecha** | |\n| **stennie** | **subbramanil** | **sunny52525** | |\n| **thearavind** | **wlcreate** | **yoobi** | |\n\nHacktoberfest is not about closing issues and merging PRs. It's about\ncelebrating community, coming together and learning from each other. I\nlearned a lot about specific coding conventions, and I felt like we\nreally bonded together as a community that cares about the O-FISH\napplication.\n\nI also learned that some things we thought were code turned out to be\npermissions. That means that some folks did research only to find out\nthat the issue required an instance of their own to debug. And, we fixed\na lot of bugs we didn't even know existed by fixing permissions.\n\n## Lessons Learned\n\nSo, what did we learn from Hacktoberfest? These key takeaways are for\nproject maintainers and developers alike.\n\n### Realize That Project Maintainers are People Managers\n\nBeing a project maintainer means being a people manager. Behind every\npull request (PR) is a person. Unlike in a workplace where I communicate\nwith others all the time, there can be very few communications with\ncontributors. And those communications are public. So, I was careful to\nconsider the recipient of my feedback. There's a world of difference\nbetween, \"This doesn't work,\" and \"I tested this and here's a screenshot\nof what I see\u2014I don't see X, which the PR was supposed to fix. Can you\nhelp me out?\"\n\n>\n>\n>Tip 1: With fewer interactions and established relationships, each word\n>holds more weight. 
Project maintainers - make sure your feedback is\n>constructive, and your tone is appreciative, helpful and welcoming.\n>Developers - it's absolutely OK to communicate more - ask questions in\n>the Issues, go to any office hours, even comment on your own PR to\n>explain the choices you made or as a question like \"I did this with\n>inline CSS, should I move it to a separate file?\"\n>\n>\n\nPeople likely will not code or organize the way I would expect.\nSometimes that's a drawback - if the PR has code that introduces a\nmemory leak, for example. But often a different way of working is a good\nthing, and leads to discussion.\n\nFor example, we had two issues that were similar, and assigned to two\ndifferent people. One person misunderstood their issue, and submitted\ncode that fixed the first issue. The other person submitted code that\nfixed their issue, but used a different method. I had them talk it out\nwith each other in the comments, and we came to a mutual agreement on\nhow to do it. Which is also awesome, because I learned too - this\nparticular issue was about using onClick and Link in\nnode.js,\nand I didn't know why one was used over the other before this came up.\n\n>\n>\n>Tip 2: Project maintainers - Frame things as a problem, not a specific\n>solution. You'd be surprised what contributors come up with.\n>Developers - read the issue thoroughly to make sure you understand\n>what's being asked. If you have a different idea feel free to bring it\n>up in the issue.\n>\n>\n\nFraming issues as a problem, not a specific solution, is something I do\nall the time as a product person. I would say it is one of the most\nimportant changes that a developer who has been 'promoted' to project\nmaintainer (or team manager!) should internalize.\n\n### Lower the Barrier to Entry\n\nO-FISH has a great backend infrastructure that anyone can build for\nfree. However, it takes time to build\nand it is unrealistic to expect someone doing 30 minutes of work to fix\na bug will spend 2 hours setting up an infrastructure.\n\nSo, we set up a sandbox instance where people can fill out a\nform\nand automatically get a login to the sandbox server.\n\nThere are limitations on our sandbox, and some issues need your own\ninstance to properly diagnose and fix. The sandbox is not a perfect\nsolution, but it was a great way to lower the barrier for the folks who\nwanted to tackle smaller issues.\n\n>\n>\n>Tip 3: Project maintainers - Make it easy for developers to contribute\n>in meaningful ways. Developers - for hacktoberfest, if you've done work\n>but it did not result in a PR, ask if you can make a PR that will be\n>closed and marked as 'accepted' so you get the credit you deserve.\n>\n>\n\n### Cut Back On Development To Make Time For Administration\n\nThere's a lot of work to do, that is not coding work. Issues should be\nwell-described and defined as small amounts of work, with good titles.\nEven though I did this in September, I missed a few important items. For\nexample, we had an issue titled \"Localization Management System\" which\nsounded really daunting and nobody picked it up. During office hours, I\nexplained to someone wanting to do work that it was really 2 small shell\nscripts. They took on the work and did a great job! But if I had not\nexplained it during office hours, nobody would have taken it because the\ntitle sounds like a huge project.\n\nOffice hours were a great idea, and it was awesome that developers\nshowed up to ask questions. 
That really helped with something I touched\non earlier - being able to build relationships.\n\n>\n>\n>Tip 4: Project Maintainers - Make regular time to meet with contributors\n>in real-time - over video or real-time chat. Developers - Take any and\n>every opportunity you can to talk to other developers and the project\n>maintainer(s).\n>\n>\n\nWe hosted office hours for one hour, twice a week, on Tuesdays and\nThursdays, at different times to accommodate different time zones. Our\nlead developer attended a few office hours as well.\n\n### Open The Gates\n\nWhen I get a pull request, I want to accept it. It's heartbreaking to\nnot approve something. While I am technically the gatekeeper for the\ncode that gets accepted to the project, knowing what to let go of and\nwhat to be firm on is very important.\n\nIn addition to accepting code done differently than I would have done\nit, I also accepted code that was not quite perfect. Sometimes I\naccepted that it was good enough, and other times I suggested a quick\nchange that would fix it.\n\nThis is not homework and it is OK to give hints. If someone queried\nusing the wrong function, I'll explain what they did, and what issues\nthat might cause, and then say \"use this other function - here's how it\nworks, it takes in X and Y and returns A and B.\" And I'll link to that\nfunction in the code. It's more work on my part, but I'm more familiar\nwith the codebase and the contributor can see that I'm on their team -\nI'm not just rejecting their PR and saying \"use this other function\",\nI'm trying to help them out.\n\nAs a product manager, ultimately I hope I'm enabling contributors to\nlearn more than just the code. I hope folks learn the \"why\", and that\ndecisions are not necessarily made easily. There are reasons. Doing that\nkind of mentorship is a very different kind of work, and it can be\ndraining - but it is critical to a project's success.\n\nI was very liberal with the hacktoberfest-accepted label. Sometimes\nsomeone provided a fix that just didn't work due to the app's own\nquirkiness. They spent time on it, we discussed the issue, they\nunderstood. So I closed the PR and added the accepted label, because\nthey did good work and deserved the credit. In other cases, someone\nwould ask questions about an issue, and explain to me why it was not\npossible to fix, and I'd ask them to submit a PR anyway, and I would\ngive them the credit. Not all valuable contributions are in the form of\na PR, but you can have them make a PR to give them credit.\n\n>\n>\n>Tip 5: Project maintainers: Give developers as much credit as you can.\n>Thank them, and connect with them on social media. Developers: Know that\n>all forms of work are valuable, even if there's no tangible outcome. For\n>example, being able to rule out an option is extremely valuable.\n>\n>\n\n### Give People Freedom and They Will Amaze You\n\nThe PRs that most surprised me were ones that made me file additional\ntickets\u2014like folks who pointed out accessibility issues and fixed a few.\nThen, I went back and made tickets for all the rest.\n\n### tl;ra (Too Long; Read Anyway)\n\nAll in all, Hacktoberfest 2020 was successful\u2014for getting code written\nand bugs fixed, but also for building a community. Thanks to all who\nparticipated!\n\n>\n>\n>**It's Not Too Late to Get Involved!**\n>\n>O-FISH is open source and still accepting contributions. If you want to\n>work on O-FISH, just follow the contribution\n>guidelines -. 
To\n>contact me, message me from my forum\n>page -\n>you need to have the easy-to-achieve Sprout\n>level\n>for messaging.\n>\n>If you have any questions or feedback, hop on over to the MongoDB\n>Community Forums. We\n>love connecting!\n>\n>\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Participating in Hacktoberfest taught us what works and what does not work to build a happy community of contributors for an open source project.", "contentType": "Article"}, "title": "5 Key Takeaways from Hacktoberfest 2020", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/add-comments-section-eleventy-website-mongodb-netlify", "action": "created", "body": "# Add a Comments Section to an Eleventy Website with MongoDB and Netlify\n\nI'm a huge fan of static generated websites! From a personal level, I have The Polyglot Developer, Pok\u00e9 Trainer Nic, and The Tracy Developer Meetup, all three of which are static generated websites built with either Hugo or Eleventy. In addition to being static generated, all three are hosted on Netlify.\n\nI didn't start with a static generator though. I started on WordPress, so when I made the switch to static HTML, I got a lot of benefits, but I ended up with one big loss. The comments of my site, which were once stored in a database and loaded on-demand, didn't have a home.\n\nFast forward to now, we have options!\n\nIn this tutorial, we're going to look at maintaining a static generated website on Netlify with Eleventy, but the big thing here is that we're going to see how to have comments for each of our blog pages.\n\nTo get an idea of what we want to accomplish, let's look at the following scenario. You have a blog with X number of articles and Y number of comments for each article. You want the reader to be able to leave comments which will be stored in your database and you want those comments to be loaded from your database. The catch is that your website is static and you want performance.\n\nA few things are going to happen:\n\n- When the website is generated, all comments are pulled from our database and rendered directly in the HTML.\n- When someone loads a page on your website, all rendered comments will show, but we also want all comments that were created after the generation to show. We'll do that with timestamps and HTTP requests.\n- When someone creates a comment, we want that comment to be stored in our database, something that can be done with an HTTP request.\n\nIt may seem like a lot to take in, but the code involved is actually quite slick and reasonable to digest.\n\n## The Requirements\n\nThere are a few moving pieces in this tutorial, so we're going to assume you've taken care of a few things first. You'll need the following:\n\n- A properly configured MongoDB Atlas cluster, **free** tier or better.\n- A Netlify account connected to your GitHub, GitLab, or Bitbucket account.\n- Node.js 16+.\n- The Realm CLI.\n\nWe're going to be using MongoDB Atlas to store the comments. You'll need a cluster deployed and configured with proper user and network rules. If you need help with this, check out my previous tutorial on the subject.\n\nWe're going to be serving our static site on Netlify and using their build process. 
This build process will take care of deploying either Realm Functions (part of MongoDB Atlas) or Netlify Functions.\n\nNode.js is a requirement because we'll be using it for Eleventy and the creation of our serverless functions.\n\n## Build a static generated website or blog with Eleventy\n\nBefore we get into the comments side of things, we should probably get a foundation in place for our static website. We're not going to explore the ins and outs of Eleventy. We're just going to do enough so we can make sense of what comes next.\n\nExecute the following commands from your command line:\n\n```bash\nmkdir netlify-eleventy-comments\ncd netlify-eleventy-comments\n```\n\nThe above commands will create a new and empty directory and then navigate into it.\n\nNext we're going to initialize the project directory for Node.js development and install our project dependencies:\n\n```bash\nnpm init -y\nnpm install @11ty/eleventy @11ty/eleventy-cache-assets axios cross-var mongodb-realm-cli --save-dev\n```\n\nAlright, we have quite a few dependencies beyond just the base Eleventy in the above commands. Just roll with it for now because we're going to get into it more later.\n\nOpen the project's **package.json** file and add the following to the `scripts` section:\n\n```json\n\"scripts\": {\n \"clean\": \"rimraf public\",\n \"serve\": \"npm run clean; eleventy --serve\",\n \"build\": \"npm run clean; eleventy --input src --output public\"\n},\n```\n\nThe above script commands will make it easier for us to serve our Eleventy website locally or build it when it comes to Netlify.\n\nNow we can start the actual development of our Eleventy website. We aren't going to focus on CSS in this tutorial, so our final result will look quite plain. However, the functionality will be solid!\n\nExecute the following commands from the command line:\n\n```bash\nmkdir -p src/_data\nmkdir -p src/_includes/layouts\nmkdir -p src/blog\ntouch src/_data/comments.js\ntouch src/_data/config.js\ntouch src/_includes/layouts/base.njk\ntouch src/blog/article1.md\ntouch src/blog/article2.md\ntouch src/index.html\ntouch .eleventy.js\n```\n\nWe made quite a few directories and empty files with the above commands. However, that's going to be pretty much the full scope of our Eleventy website.\n\nMultiple files in our example will have a dependency on the **src/_includes/layouts/base.njk** file, so we're going to work on that file first. Open it and include the following code:\n\n```html\n\n \n \n {{ content | safe }}\n \n\n \n\nCOMMENTS\n\n \n \n \n\n \n \n \n \n \n \n Create Comment\n \n \n \n\n```\n\nAlright, so the above file is, like, 90% complete. I left some pieces out and replaced them with comments because we're not ready for them yet.\n\nThis file represents the base template for our entire site. 
All other pages will get rendered in this area:\n\n```\n{{ content | safe }}\n```\n\nThat means that every page will have a comments section at the bottom of it.\n\nWe need to break down a few things, particularly the `", "format": "md", "metadata": {"tags": ["JavaScript", "Atlas", "Netlify"], "pageDescription": "Learn how to add a comments section to your static website powered with MongoDB and either Realm Functions or Netlify Functions.", "contentType": "Tutorial"}, "title": "Add a Comments Section to an Eleventy Website with MongoDB and Netlify", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/7-things-learned-while-modeling-data-youtube-stats", "action": "created", "body": "# 7 Things I Learned While Modeling Data for YouTube Stats\n\nMark Smith, Maxime Beugnet, and I recently embarked on a project to automatically retrieve daily stats about videos on the MongoDB YouTube channel. Our management team had been painfully pulling these stats every month in a complicated spreadsheet. In an effort to win brownie points with our management team and get in a little programming time, we worked together as a team of three over two weeks to rapidly develop an app that pulls daily stats from the YouTube API, stores them in a MongoDB Atlas database, and displays them in a MongoDB Charts dashboard.\n\nScreenshot of the MongoDB Charts dashboard that contains charts about the videos our team has posted on YouTube\n\nMark, Max, and I each owned a piece of the project. Mark handled the OAuth authentication, Max created the charts in the dashboard, and I was responsible for figuring out how to retrieve and store the YouTube stats.\n\nIn this post, I'll share seven things I learned while modeling the data for this app. But, before I jump into what I learned, I'll share a bit of context about how I modeled the data.\n\n## Table of Contents\n\n- Related Videos\n- Our Data Model\n- What I Learned\n - 1. Duplicating data is scary\u2014even for those of us who have been coaching others to do so\n - 2. Use the Bucket Pattern only when you will benefit from the buckets\n - 3. Use a date field to label date-based buckets\n - 4. Cleaning data you receive from APIs will make working with the data easier\n - 5. Optimizing for your use case is really hard when you don't fully know what your use case will be\n - 6. There is no \"right way\" to model your data\n - 7. Determine how much you want to tweak your data model based on the ease of working with the data and your performance requirements\n- Summary\n\n## Related Videos\n\nIf you prefer to watch a video instead of read text, look no further.\n\nTo learn more about what we built and why we built it the way we did, check out the recording of the Twitch stream below where Mark, Max, and I shared about our app.\n\n:youtube]{vid=iftOOhVyskA}\n\nIf you'd like the video version of this article, check out the live stream Mark, Max, and I hosted. We received some fantastic questions from the audience, so you'll discover some interesting nuggets in the recording.\n\nIf you'd prefer a more concise video that only covers the contents of this article, check out the recording below.\n\n## Our Data Model\n\nOur project had a tight two-week deadline, so we made quick decisions in our effort to rapidly develop a minimum viable product. 
When we began, we didn't even know how we wanted to display the data, which made modeling the data even more challenging.\n\nI ended up creating two collections:\n\n- `youtube_videos`: stores metadata about each of the videos on the MongoDB YouTube channel.\n- `youtube_stats`: stores daily YouTube stats (bucketed by month) about every video in the `youtube_videos` collection.\n\nEvery day, a [scheduled trigger calls a Realm serverless function that is responsible for calling the YouTube PlaylistItems\nAPI. This API returns metadata about all of the videos on the MongoDB YouTube channel. The metadata is stored in the `youtube_videos` collection. Below is a document from the `youtube_videos` collection (some of the information is redacted):\n\n``` json\n{\n \"_id\":\"8CZs-0it9r4\",\n \"kind\": \"youtube#playlistItem\",\n \"isDA\": true,\n ...\n \"snippet\": {\n \"publishedAt\": 2020-09-30T15:05:30.000+00:00,\n \"channelId\": \"UCK_m2976Yvbx-TyDLw7n1WA\",\n \"title\": \"Schema Design Anti-Patterns - Part 1\",\n \"description\": \"When modeling your data in MongoDB...\",\n \"thumbnails\": {\n ...\n },\n \"channelTitle\": \"MongoDB\",\n ...\n }\n}\n```\n\nEvery day, another trigger calls a Realm serverless function that is responsible for calling the YouTube Reports API. The stats that this API returns are stored in the `youtube_stats`\ncollection. Below is a document from the collection (some of the stats are removed to keep the document short):\n\n``` json\n{\n \"_id\": \"8CZs-0it9r4_2020_12\",\n \"month\": 12,\n \"year\": 2020,\n \"videoId\": \"8CZs-0it9r4\",\n \"stats\": \n {\n \"date\": 2020-12-01T00:00:00.000+00:00,\n \"views\": 21,\n \"likes\": 1\n ...\n },\n {\n \"date\": 2020-12-02T00:00:00.000+00:00,\n \"views\": 29,\n \"likes\": 1\n ...\n },\n ...\n {\n \"date\": 2020-12-31T00:00:00.000+00:00,\n \"views\": 17,\n \"likes\": 0\n ...\n },\n ]\n}\n```\n\nTo be clear, I'm not saying this was the best way to model our data; this is the data model we ended up with after two weeks of rapid development. I'll discuss some of the pros and cons of our data model throughout the rest of this post.\n\nIf you'd like to take a peek at our code and learn more about our app, visit .\n\n## What I Learned\n\nWithout further ado, let's jump into the seven things I learned while rapidly modeling YouTube data.\n\n### 1. Duplicating data is scary\u2014even for those of us who have been coaching others to do so\n\nOne of the rules of thumb when modeling data for MongoDB is *data that is accessed together should be stored together*. We teach developers that duplicating data is OK, especially if you won't be updating it often.\n\nDuplicating data can feel scary at first\n\nWhen I began figuring out how I was going to use the YouTube API and what data I could retrieve, I realized I would need to make two API calls: one to retrieve a list of videos with all of their metadata and another to retrieve the stats for those videos. For ease of development, I decided to store the information from those two API calls in separate collections.\n\nI wasn't sure what data was going to need to be displayed alongside the stats (put another way, I wasn't sure what data was going to be accessed together), so I duplicated none of the data. I knew that if I were to duplicate the data, I would need to maintain the consistency of that duplicate data. 
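That bookkeeping is mostly mechanical, though. A database trigger watching the `youtube_videos` collection can copy any duplicated fields into the matching `youtube_stats` documents whenever a video changes. Here is a rough sketch of what such an Atlas trigger function could look like; it is illustrative only, not the app's actual code, and the linked data source name (`mongodb-atlas`), the database name (`mongodbtube`), and the duplicated `videoTitle` field are all assumptions:

``` javascript
// Hypothetical Atlas database trigger function. It fires on updates to the
// youtube_videos collection (with "Full Document" enabled in the trigger config)
// and re-copies the duplicated field into every matching youtube_stats document.
exports = async function (changeEvent) {
  const video = changeEvent.fullDocument; // the updated youtube_videos document

  const statsCollection = context.services
    .get("mongodb-atlas")      // linked data source name: an assumption
    .db("mongodbtube")         // database name: an assumption
    .collection("youtube_stats");

  // Only the fields we chose to duplicate get pushed across.
  await statsCollection.updateMany(
    { videoId: video._id },
    { $set: { videoTitle: video.snippet.title } } // videoTitle is an illustrative field name
  );
};
```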
And, to be completely honest, maintaining duplicate data was a little scary based on the time crunch we were under, and the lack of software development process we were following.\n\nIn the current data model, I can easily gather stats about likes, dislikes, views, etc, for a given video ID, but I will have to use [$lookup to join the data with the `youtube_videos` collection in order to tell you anything more. Even something that seems relatively simple like listing the video's name alongside the stats requires the use of `$lookup`. The `$lookup` operation required to join the data in the two collections isn't that complicated, but best practices suggest limiting `$lookup` as these operations can negatively impact performance.\n\nWhile we were developing our minimum viable product, I weighed the ease of development by avoiding data duplication against the potential performance impact of splitting our data. Ease of development won.\n\nNow that I know I need information like the video's name and publication date with the stats, I can implement the Extended Reference Pattern. I can duplicate some of the information from the `youtube_videos` collection in the `youtube_stats` collection. Then, I can create an Atlas trigger that will watch for changes in the `youtube_videos` collection and automatically push those changes to the `youtube_stats` collection. (Note that if I was using a self-hosted database instead of an Atlas-hosted database, I could use a change stream instead of an Atlas trigger to ensure the data remained consistent.)\n\nDuplicating data isn't as scary when (1) you are confident which data needs to be duplicated and (2) you use Atlas triggers or change streams to make sure the data remains consistent.\n\n### 2. Use the Bucket Pattern only when you will benefit from the buckets\n\nI love schema design patterns (check out this blog series or this free MongoDB University course to learn more) and schema design anti-patterns (check out this blog series or this YouTube video series to learn more).\n\nWhen I was deciding how to store the daily YouTube stats, I realized I had time-series data. I knew the Bucket Pattern was useful for time-series data, so I decided to implement that pattern. I decided to create a bucket of stats for a certain timeframe and store all of the stats for that timeframe for a single video in a document.\n\nI wasn't sure how big my buckets should be. I knew I didn't want to fall into the trap of the Massive Arrays Anti-Pattern, so I didn't want my buckets to be too large. In the spirit of moving quickly, I decided a month was a good bucket size and figured I could adjust as needed.\n\nHow big should your bucket be? Big enough to startle your mom.\n\nThe buckets turned out to be really handy during development as I could easily see all of the stats for a video for a given month to ensure they were being pulled correctly.\n\nHowever, the buckets didn't end up helping my teammates and I much in our app. We didn't have so much data that we were worried about reducing our index sizes. We didn't implement the Computed Pattern to pre-compute monthly stats. And we didn't run queries that benefited from having the data grouped by month.\n\nLooking back, creating a document for every video every day would have been fine. We didn't benefit from any of the advantages of the Bucket Pattern. If our requirements were to change, we certainly could benefit from the Bucket Pattern. 
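To see the trade-off concretely, here is a minimal sketch of what writing one day of stats looks like with and without the monthly bucket. It uses the Node.js driver; the `_id` convention and the `videoId`/`year`/`month`/`stats` fields match the `youtube_stats` document shown earlier, while the connection string, database name, and the shape of `dailyStats` are placeholder assumptions:

``` javascript
const { MongoClient } = require("mongodb");

// Bucketed (what we did): append today's stats to the video's monthly document.
// The upsert creates the month bucket the first time a stat arrives for it.
async function writeBucketed(coll, videoId, dailyStats) {
  const year = dailyStats.date.getUTCFullYear();
  const month = dailyStats.date.getUTCMonth() + 1;
  await coll.updateOne(
    { _id: `${videoId}_${year}_${month}` },
    { $setOnInsert: { videoId: videoId, year: year, month: month }, $push: { stats: dailyStats } },
    { upsert: true }
  );
}

// Unbucketed alternative: one document per video per day, inserted as-is.
async function writeFlat(coll, videoId, dailyStats) {
  await coll.insertOne({ videoId: videoId, ...dailyStats });
}

async function run() {
  const client = new MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net"); // placeholder URI
  await client.connect();
  try {
    const coll = client.db("mongodbtube").collection("youtube_stats"); // database name is an assumption
    await writeBucketed(coll, "8CZs-0it9r4", { date: new Date("2020-12-01"), views: 21, likes: 1 });
  } finally {
    await client.close();
  }
}

run().catch(console.error);
```

The bucketed write needs the upsert and the `_id` naming convention; the flat write is just an insert.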
However, in this case, I added the complexity of grouping the stats into buckets but didn't get the benefits, so it wasn't really worth it.\n\n### 3. Use a date field to label date-based buckets\n\nAs I described in the previous section, I decided to bucket my YouTube video stats by month. I needed a way to indicate the date range for each bucket, so each document contains a field named `year` and a field named `month`. Both fields store values of type `long`. For example, a document for the month of January 2021 would have `\"year\": 2021` and `\"month\": 1`.\n\nNo, I wasn't storing date information as a date. But perhaps I should have.\n\nMy thinking was that we might want to compare months from multiple years (for example, we could compare stats in January for 2019, 2020, and 2021), and this data model would allow us to do that.\n\nAnother option would have been to use a single field of type `date` to\nindicate the date range. For example, for the month of January, I could\nhave set `\"date\": new Date(\"2021-01\")`. This would allow me to perform\ndate-based calculations in my queries.\n\nAs with all data modeling considerations in MongoDB, the best option comes down to your use case and how you will query the data. Use a field of type `date` for date-based buckets if you want to query using dates.\n\n### 4. Cleaning data you receive from APIs will make working with the data easier\n\nAs I mentioned toward the beginning of this post, I was responsible for retrieving and storing the YouTube data. My teammate Max was responsible for creating the charts to visualize the data.\n\nI didn't pay too much attention to how the data I was getting from the API was formatted\u2014I just dumped it into the database. (Have I mentioned that we were working as fast as we could?)\n\nAs long as the data is being dumped into the database, who cares what format it's in?\n\nAs Max began building the charts, he raised a few concerns about the way the data was formatted. The date the video was published was being stored as a `string` instead of a `date`. Also, the month and year were being stored as `string` instead of `long`.\n\nMax was able to do type conversions in MongoDB Charts, but ultimately, we wanted to store the data in a way that would be easy to use whether we were visualizing the data in Charts or querying the data using the MongoDB Query Language\n(MQL).\n\nThe fixes were simple. After retrieving the data from the API, I converted the data to the ideal type before sending it to the database. Take a look at line 37 of my function if you'd like to see an example of how I did this.\n\nIf you're pulling data from an API, consider if it's worth remodeling or reformatting the data before storing it. It's a small thing that could make your and your teammates' jobs much easier in the future.\n\n### 5. Optimizing for your use case is really hard when you don't fully know what your use case will be\n\nOK, yes, this is kind of obvious.\n\nAllow me to elaborate.\n\nAs we began working on our application, we knew that we wanted to visually display YouTube stats on a dashboard. But we didn't know what stats we would be able to pull from the API or how we would want to visualize the data. Our approach was to put the data in the database and then figure it out.\n\nAs I modeled our data, I didn't know what our final use case would be\u2014I didn't know how the data would be accessed. 
So, instead of following the rule of thumb that data that is accessed together should be stored together, I modeled the data in the way that was easiest for me to work with while retrieving and storing the data.\n\nOne of the nice things about using MongoDB is that you have a lot of flexibility in your schema, so you can make changes as requirements develop and change. (The Schema Versioning Pattern provides a pattern for how to do this successfully.)\n\nAs Max was showing off how he created our charts, I learned that he created an aggregation pipeline inside of Charts that calculates the fiscal year quarter (for example, January of 2021 is in Q4 of Fiscal Year 2021) and adds it to each document in the `youtube_stats` collection. Several of our charts group the data by quarter, so we need this field.\n\nI was pretty impressed with the aggregation pipeline Max built to calculate the fiscal year. However, if I had known that calculating the quarter was one of our requirements when I was modeling the data, I could have calculated the fiscal year quarter and stored it inside of the `youtube_stats` collection so that any chart or query could leverage it. If I had gone this route, I would have been using the Computed Pattern.\n\nNow that I know we have a requirement to display the fiscal year quarter, I can write a script to add the `fiscal_year_quarter` field to the existing documents. I could also update the function that creates new documents in the `youtube_stats` collection to calculate the fiscal year quarter and store it in new documents.\n\nModeling data in MongoDB is all about your use case. When you don't know what your use case is, modeling data becomes a guessing game. Remember that it's OK if your requirements change; MongoDB's flexible schema allows you to update your data model as needed.\n\n### 6. There is no \"right way\" to model your data\n\nI confess that I've told developers who are new to using MongoDB this very thing: There is no \"right way\" to model your data. Two applications that utilize the same data may have different ideal data models based on how the applications use the data.\n\nHowever, the perfectionist in me went a little crazy as I modeled the data for this app. In more than one of our team meetings, I told Mark and Max that I didn't love the data model I had created. I didn't feel like I was getting it \"right.\"\n\nI just want my data model to be perfect. Is that too much to ask?\n\nAs I mentioned above, the problem was that I didn't know the use case that I was optimizing for as I was developing the data model. I was making guesses and feeling uncomfortable. Because I was using a non-relational database, I couldn't just normalize the data systematically and claim I had modeled the data correctly.\n\nThe flexibility of MongoDB gives you so much power but can also leave you wondering if you have arrived at the ideal data model. You may find, as I did, that you may need to revisit your data model as your requirements become more clear or change. And that's OK.\n\n(Don't let the flexibility of MongoDB's schema freak you out. You can use MongoDB's schema validation when you are ready to lock down part or all of your schema.)\n\n### 7. Determine how much you want to tweak your data model based on the ease of working with the data and your performance requirements\n\nBuilding on the previous thing I learned that there is no \"right way\" to model your data, data models can likely always be improved. 
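The fiscal year quarter requirement discussed above is a good example of that kind of incremental improvement. Once we knew the charts needed it, a small one-off script could backfill a `fiscal_year_quarter` field onto the existing `youtube_stats` documents. The sketch below is illustrative rather than the project's actual code: the connection string, database name, and stored string format are assumptions, and the fiscal year rule (a fiscal year that starts in February) is inferred from the "January of 2021 is in Q4 of Fiscal Year 2021" example:

``` javascript
const { MongoClient } = require("mongodb");

// Assumes a fiscal year starting in February, so January 2021 lands in Q4 of FY2021.
function fiscalYearQuarter(year, month) {
  const fiscalYear = month >= 2 ? year + 1 : year;
  const quarter = Math.floor(((month + 10) % 12) / 3) + 1; // Feb-Apr=1, May-Jul=2, Aug-Oct=3, Nov-Jan=4
  return `FY${fiscalYear} Q${quarter}`; // stored format is an assumption
}

async function backfill() {
  const client = new MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net"); // placeholder URI
  await client.connect();
  try {
    const coll = client.db("mongodbtube").collection("youtube_stats"); // database name is an assumption
    for await (const doc of coll.find({ fiscal_year_quarter: { $exists: false } })) {
      await coll.updateOne(
        { _id: doc._id },
        { $set: { fiscal_year_quarter: fiscalYearQuarter(doc.year, doc.month) } }
      );
    }
  } finally {
    await client.close();
  }
}

backfill().catch(console.error);
```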
As you identify what your queries will be or your queries change, you will likely find new ways you can optimize your data model.\n\nThe question becomes, \"When is your data model good enough?\" The perfectionist in me struggled with this question. Should I continue optimizing? Or is the data model we have good enough for our requirements?\n\nTo answer this question, I found myself asking two more questions:\n\n- Are my teammates and I able to easily work with the data?\n- Is our app's performance good enough?\n\nThe answers to the questions can be a bit subjective, especially if you don't have hard performance requirements, like a web page must load in X milliseconds.\n\nIn our case, we did not define any performance requirements. Our front end is currently a Charts dashboard. So, I wondered, \"Is our dashboard loading quickly enough?\" And the answer is yes: Our dashboard loads pretty quickly. Charts utilizes caching with a default one-hour refresh to ensure the charts load quickly. Once a user loads the dashboard in their browser, the charts remain displayed\u2014even while waiting for the charts to get the latest data when the cache is refreshed.\n\nIf your developers are able to easily work with the data and your app's performance is good enough, your data model is probably good enough.\n\n## Summary\n\nEvery time I work with MongoDB, I learn something new. In the process of working with a team to rapidly build an app, I learned a lot about data modeling in MongoDB:\n\n- 1. Duplicating data is scary\u2014even for those of us who have been coaching others to do so\n- 2. Use the Bucket Pattern only when you will benefit from the buckets\n- 3. Use a date field to label date-based buckets\n- 4. Cleaning data you receive from APIs will make working with the data easier\n- 5. Optimizing for your use case is really hard when you don't fully know what your use case will be\n- 6. There is no \"right way\" to model your data\n- 7. Determine how much you want to tweak your data model based on the ease of working with the data and your performance requirements\n\nIf you're interested in learning more about data modeling, I highly recommend the following resources:\n\n- Free MongoDB University Course: M320: Data Modeling\n- Blog Series: MongoDB Schema Design Patterns\n- YouTube Video Series: MongoDB Schema Design Anti-Patterns\n- Blog Series: MongoDB Schema Design Anti-Patterns\n\nRemember, every use case is different, so every data model will be different. Focus on how you will be using the data.\n\nIf you have any questions about data modeling, I encourage you to join the MongoDB Community. It's a great place to ask questions. MongoDB employees and community members are there every day to answer questions and share their experiences. I hope to see you there!\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Discover 7 things Lauren learned while modeling data in MongoDB.", "contentType": "Article"}, "title": "7 Things I Learned While Modeling Data for YouTube Stats", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-search-vs-regex", "action": "created", "body": "# A Decisioning Framework for MongoDB $regex and $text vs Atlas Search\n\nAre you using $text or $regex to provide search-like functionality in your application? 
If so, MongoDB Atlas\u2019 $search operator offers several advantages to $text and $regex, such as faster and more efficient search results, natural language queries, built-in relevance ranking, and better scalability. Getting started is super easy as $search is embedded as an aggregation stage right into MongoDB Atlas, providing you with full text search capabilities on all of your operational data.\n\nWhile the $text and $regex operators are your only options for on-premises or local deployment, and provide basic text matching and pattern searching, Atlas users will find that $search provides a more comprehensive and performant solution for implementing advanced search functionality in your applications. Features like fuzzy matching, partial word matching, synonyms search, More Like This, faceting, and the capability to search through large data sets are only available with Atlas Search.\n\nMigrating from $text or $regex to $search doesn't necessarily mean rewriting your entire codebase. It can be a gradual process where you start incorporating the $search operator in new features or refactoring existing search functionality in stages. \n\nThe table below explores the benefits of using Atlas Search compared to regular expressions for searching data. Follow along and experience the power of Atlas Search firsthand.\n\n**Create a Search Index Now**\n\n>Note: $text and $regex have had no major updates since 2015, and all future enhancements in relevance-based search will be delivered via Atlas Search. \n\nTo learn more about Atlas Search, check out the documentation.\n\n| App Requirements | $regex | $text | $search | Reasoning |\n| --- | --- | --- | --- | --- |\n| The datastore must respect write concerns | \u2705 | \ud83d\udeab | \ud83d\udeab | If you have a datastore that must respect write concerns for use cases like transactions with heavy reads after writes, $regex is a better choice. For search use cases, reads after writes should be rare. |\n| Language awareness (Spanish, Chinese, English, etc.) | \ud83d\udeab | \ud83d\udeab | \u2705 | Atlas Search natively supports over 40 languages so that you can better tokenize languages, remove stopwords, and interpret diacritics to support improved search relevance. |\n| Case-insensitive text search |\ud83d\udeab | \ud83d\udeab |\u2705 | Case-insensitive text search using $regex is one of the biggest sources of problems among our customer base, and $search offers far more capabilities than $text. |\n| Highlighting result text | \ud83d\udeab |\ud83d\udeab | \u2705 | The ability to highlight text fragments in result documents helps end users contextualize why some documents are returned compared to others. It's essential for user experiences powered by natural language queries. While developers could implement a crude version of highlighting with the other options, the $search aggregation stage provides an easy-to-consume API and a core engine that handles topics like tokenization and offsets. |\n| Geospatial-aware search queries | \u2705 | \ud83d\udeab | \u2705 | Both $regex and $search have geospatial capabilities. The differences between the two lie in the differences between how $regex and $search treat geospatial parameters. For instance, Lucene draws a straight line from one query coordinate to another, whereas MongoDB lines are spherical. Spherical queries are best for flights, whereas flat map queries might be better for short distances. 
|\n| On-premises or local deployment | \u2705 | \u2705 | \ud83d\udeab | Atlas Search is not available on-premise or for local deployment. The single deployment target enables our team to move fast and innovate at a more rapid pace than if we targeted many deployment models. For that reason, $regex and $text are the only options for people who do not have access to Atlas. |\n| Autocomplete of characters (nGrams) | \ud83d\udeab | \ud83d\udeab | \u2705 | End users typing in a search box have grown accustomed to an experience where their search queries are completed for them. Atlas Search offers edgeGrams for left-to-right autocomplete, nGrams for autocomplete with languages that do not have whitespace, and rightEdgeGram for languages that are written and read right-to-left. |\n| Autocomplete of words (wordGrams) | \ud83d\udeab | \ud83d\udeab | \u2705 | If you have a field with more than two words and want to offer word-based autocomplete as a feature of your application, then a shingle token filter with custom analyzers could be best for you. Custom analyzers offer developers a flexible way to index and modify how their data is stored. |\n| Fuzzy matching on text input | \ud83d\udeab | \ud83d\udeab |\u2705 | If you would like to filter on user generated input, Atlas Search\u2019s fuzzy offers flexibility. Issues like misspelled words are handled best by $search. |\n| Filtering based on more than 10 strings | \ud83d\udeab | \ud83d\udeab | \u2705 | It\u2019s tricky to filter on more than 10 strings in MongoDB due to the limitations of compound text indexes. The compound filter is again the right way to go here. |\n| Relevance score sorted search | \ud83d\udeab |\ud83d\udeab |\u2705 |Atlas Search uses the state-of-art BM25 algorithm for determining the search relevance score of documents and allows for advanced configuration through boost expressions like multiply and gaussian decay, as well as analyzers, search operators, and synonyms. |\n| Cluster needs to be optimized for write performance |\ud83d\udeab | \ud83d\udeab |\u2705 | When you add a database index in MongoDB, you should consider tradeoffs to write performance in cases where database write performance is important. Search Indexes don\u2019t degrade cluster write performance. |\n| Searching through large data sets | \ud83d\udeab |\ud83d\udeab |\u2705 | If you have lots of documents, your queries will linearly get slower. In Atlas Search, the inverted index enables fast document retrieval at very large scales. |\n| Partial indexes for simple text matching | \u2705 |\ud83d\udeab |\ud83d\udeab | Atlas Search does not yet support partial indexing. Today, $regex takes the cake. |\n| Single compound index on arrays |\ud83d\udeab | \ud83d\udeab |\u2705 | Atlas Search is partially designed for this use case, where term indexes are intersected in a single Search index, to eliminate the need for compound indexes for filtering on arrays. |\n| Synonyms search | \ud83d\udeab |\ud83d\udeab |\u2705 | The only option for robust synonyms search is Atlas Search, where synonyms are defined in a collection, and that collection is referenced in your search index. |\n| Fast faceting for counts | \ud83d\udeab |\ud83d\udeab |\u2705 | If you are looking for faceted navigation, or fast counts of documents based on text criteria, let Atlas Search do the bucketing. In our internal testing, it's 100x faster and also supports number and date buckets. |\n| Custom analyzers (stopwords, email/URL token, etc.) 
| \ud83d\udeab | \ud83d\udeab | \u2705 | Using Atlas Search, you can define a custom analyzer to suit your specific indexing needs. |\n| Partial match | \ud83d\udeab |\ud83d\udeab |\u2705 |MongoDB has a number of partial match options ranging from the wildcard operator to autocomplete, which can be useful for some partial match use cases. |\n| Phrase queries | \ud83d\udeab | \ud83d\udeab | \u2705 | Phrase queries are supported natively in Atlas Search via the phrase operator. |\n\n> Note: The green check mark sometimes does not appear in cases where the corresponding aggregation stage may be able to satisfy an app requirement, and in those cases, it\u2019s because one of the other stages (i.e., $search) is far superior for a given use case. \n\nIf we\u2019ve whetted your appetite to learn more about Atlas Search, we have some resources to get you started:\n\nThe Atlas Search documentation provides reference materials and tutorials, while the MongoDB Developer Hub provides sample apps and code. You can spin up Atlas Search at no cost on the Atlas Free Tier and follow along with the tutorials using our sample data sets, or load your own data for experimentation within your own sandbox.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn about the differences between using $regex, $text, and Atlas Search.", "contentType": "Article"}, "title": "A Decisioning Framework for MongoDB $regex and $text vs Atlas Search", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/3-things-to-know-switch-from-sql-mongodb", "action": "created", "body": "# 3 Things to Know When You Switch from SQL to MongoDB\n\nWelcome to the final post in my series on moving from SQL to MongoDB. In the first post, I mapped terms and concepts from SQL to MongoDB. In the second post, I discussed the top four reasons why you should use MongoDB.\n\nNow that we have an understanding of the terminology as well as why MongoDB is worth the effort of changing your mindset, let's talk about three key ways you need to change your mindset.\n\nYour first instinct might be to convert your existing columns and rows to fields and documents and stick with your old ways of modeling data. We've found that people who try to use MongoDB in the same way that they use a relational database struggle and sometimes fail.\n\n \n\nWe don't want that to happen to you.\n\nLet's discuss three key ways to change your mindset as you move from SQL to MongoDB.\n\n- Embrace Document Diversity\n- Data That is Accessed Together Should Be Stored Together\n- Tread Carefully with Transactions\n\n>\n>\n>This article is based on a presentation I gave at MongoDB World and MongoDB.local Houston entitled \"From SQL to NoSQL: Changing Your Mindset.\"\n>\n>If you prefer videos over articles, check out the recording. 
Slides are available here.\n>\n>\n\n## Embrace Document Diversity\n\nAs we saw in the first post in this series when we modeled documents for Leslie, Ron, and Lauren, not all documents in a collection need to have the same fields.\n\nUsers\n\n``` json\n{\n \"_id\": 1,\n \"first_name\": \"Leslie\",\n \"last_name\": \"Yepp\",\n \"cell\": \"8125552344\",\n \"city\": \"Pawnee\",\n \"location\": -86.536632, 39.170344 ],\n \"hobbies\": [\"scrapbooking\", \"eating waffles\", \"working\"],\n \"jobHistory\": [\n {\n \"title\": \"Deputy Director\",\n \"yearStarted\": 2004\n },\n {\n \"title\": \"City Councillor\",\n \"yearStarted\": 2012\n },\n {\n \"title\": \"Director, National Parks Service, Midwest Branch\",\n \"yearStarted\": 2014\n }\n ]\n},\n\n{\n \"_id\": 2,\n \"first_name\": \"Ron\",\n \"last_name\": \"Swandaughter\",\n \"cell\": \"8125559347\",\n \"city\": \"Pawnee\",\n \"hobbies\": [\"woodworking\", \"fishing\"],\n \"jobHistory\": [\n {\n \"title\": \"Director\",\n \"yearStarted\": 2002\n },\n {\n \"title\": \"CEO, Kinda Good Building Company\",\n \"yearStarted\": 2014\n },\n {\n \"title\": \"Superintendent, Pawnee National Park\",\n \"yearStarted\": 2018\n }\n ]\n},\n\n{\n \"_id\": 3,\n \"first_name\": \"Lauren\",\n \"last_name\": \"Burhug\",\n \"city\": \"Pawnee\",\n \"hobbies\": [\"soccer\"],\n \"school\": \"Pawnee Elementary\"\n}\n```\n\nFor those of us with SQL backgrounds, this is going to feel uncomfortable and probably a little odd at first. I promise it will be ok. Embrace document diversity. It gives us so much flexibility and power to model our data.\n\nIn fact, MongoDB has a data modeling pattern specifically for when your documents do not have the same fields. It's called the [Polymorphic Pattern. We use the Polymorphic Pattern when documents in a collection are of similar but not identical structures.\n\nLet's take a look at an example that builds on the Polymorphic Pattern. Let's say we decided to keep a list of each user's social media followers inside of each `User` document. Lauren and Leslie don't have very many followers, so we could easily list their followers in their documents. For example, Lauren's document might look something like this:\n\n``` json\n{\n \"_id\": 3,\n \"first_name\": \"Lauren\",\n \"last_name\": \"Burhug\",\n \"city\": \"Pawnee\",\n \"hobbies\": \"soccer\"],\n \"school\": \"Pawnee Elementary\",\n \"followers\": [\n \"Brandon\",\n \"Wesley\",\n \"Ciara\",\n ...\n ]\n}\n```\n\nThis approach would likely work for most of our users. However, since Ron built a chair that appeared in the very popular Bloosh Magazine, Ron has millions of followers. If we try to list all of his followers in his `User` document, it may exceed the [16 megabyte document size limit. The question arises: do we want to optimize our document model for the typical use case where a user has a few hundred followers or the outlier use case where a user has millions of followers?\n\nWe can utilize the Outlier Pattern to solve this problem. The Outlier Pattern allows us to model our data for the typical use case but still handle outlier use cases.\n\nWe can begin modeling Ron's document just like Lauren's and include a list of followers. When we begin to approach the document size limit, we can add a new `has_extras` field to Ron's document. 
(The field can be named anything we'd like.)\n\n``` json\n{\n \"_id\": 2,\n \"first_name\": \"Ron\",\n \"last_name\": \"Swandaughter\",\n \"cell\": \"8125559347\",\n \"city\": \"Pawnee\",\n \"hobbies\": \"woodworking\", \"fishing\"],\n \"jobHistory\": [\n {\n \"title\": \"Director\",\n \"yearStarted\": 2002\n },\n ...\n ], \n \"followers\": [\n \"Leslie\",\n \"Donna\",\n \"Tom\"\n ...\n ],\n \"has_extras\": true\n}\n```\n\nThen we can create a new document where we will store the rest of Ron's followers.\n\n``` json\n{\n \"_id\": 2.1,\n \"followers\": [\n \"Jerry\",\n \"Ann\",\n \"Ben\"\n ...\n ],\n \"is_overflow\": true\n}\n```\n\nIf Ron continues to gain more followers, we could create another overflow document for him.\n\nThe great thing about the Outlier Pattern is that we are optimizing for the typical use case but we have the flexibility to handle outliers.\n\nSo, embrace document diversity. Resist the urge to force all of your documents to have identical structures just because it's what you've always done.\n\nFor more on MongoDB data modeling design patterns, see [Building with Patterns: A Summary and the free MongoDB University Course M320: Data Modeling.\n\n## Data That is Accessed Together Should be Stored Together\n\nIf you have experience with SQL databases, someone probably drilled into your head that you should normalize your data. Normalization is considered good because it prevents data duplication. Let's take a step back and examine the motivation for database normalization.\n\nWhen relational databases became popular, disk space was extremely expensive. Financially, it made sense to normalize data and save disk space. Take a look at the chart below that shows the cost per megabyte over time.\n\n:charts]{url=\"https://charts.mongodb.com/charts-storage-costs-sbekh\" id=\"740dea93-d2da-44c3-8104-14ccef947662\"}\n\nThe cost has drastically gone down. Our phones, tablets, laptops, and flash drives have more storage capacity today than they did even five to ten years ago for a fraction of the cost. When was the last time you deleted a photo? I can't remember when I did. I keep even the really horribly unflattering photos. And I currently backup all of my photos on two external hard drives and multiple cloud services. Storage is so cheap.\n\nStorage has become so cheap that we've seen a shift in the cost of software development. Thirty to forty years ago storage was a huge cost in software development and developers were relatively cheap. Today, the costs have flipped: storage is a small cost of software development and developers are expensive.\n\nInstead of optimizing for storage, we need to optimize for developers' time and productivity.\n\nAs a developer, I like this shift. I want to be able to focus on implementing business logic and iterate quickly. Those are the things that matter to the business and move developers' careers forward. I don't want to be dragged down by data storage specifics.\n\nThink back to the [example in the previous post where I coded retrieving and updating a user's profile information. Even in that simple example, I was able to write fewer lines of code and move quicker when I used MongoDB.\n\nSo, optimize your data model for developer productivity and query optimization. Resist the urge to normalize your data for the sake of normalizing your data.\n\n*Data that is accessed together should be stored together*. 
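As a small illustration of that principle, the user documents shown earlier embed each person's hobbies and job history directly, so a single query returns everything a profile page needs. Here is a minimal sketch using the Node.js driver; the connection string, database name, and the `users` collection name are assumptions for the example:

``` javascript
const { MongoClient } = require("mongodb");

// One round trip returns the name, contact details, hobbies, and full job
// history together in a single document, with no joins across separate tables.
async function getProfile(userId) {
  const client = new MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net"); // placeholder URI
  await client.connect();
  try {
    return await client.db("pawnee").collection("users").findOne({ _id: userId }); // names are assumptions
  } finally {
    await client.close();
  }
}

getProfile(1).then((profile) => console.log(profile.first_name, profile.jobHistory));
```

In a fully normalized model, building the same profile page would typically mean several queries or a multi-table join.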
If you end up repeating data in your database, that's ok\u2014especially if you won't be updating the data very often.\n\n## Tread Carefully with Transactions\n\nWe discussed in a previous post that MongoDB supports transactions. The MongoDB engineering team did an amazing job of implementing transactions. They work so well!\n\nBut here's the thing. Relying on transactions is a bad design smell.\n\n \n\nWhy? This builds on our first two points in this section.\n\nFirst, not all documents need to have the same fields. Perhaps you're breaking up data between multiple collections because it's not all of identical structure. If that's the only reason you've broken the data up, you can probably put it back together in a single collection.\n\nSecond, data that is accessed together should be stored together. If you're following this principle, you won't need to use transactions. Some use cases call for transactions. Most do not. If you find yourself frequently using transactions, take a look at your data model and consider if you need to restructure it.\n\nFor more information on transactions and when they should be used, see the MongoDB MongoDB Multi-Document ACID Transactions Whitepaper.\n\n## Wrap Up\n\nToday we discussed the three things you need to know as you move from SQL to MongoDB:\n\n- Embrace Document Diversity\n- Data That is Accessed Together Should Be Stored Together\n- Tread Carefully with Transactions\n\nI hope you enjoy using MongoDB! If you want to jump in and start coding, my teammates and I have written Quick Start Tutorials for a variety of programming languages. I also highly recommend the free courses on MongoDB University.\n\nIn summary, don't be like Ron. (I mean, don't be like him in this particular case, because Ron is amazing.)\n\n \n\nChange your mindset and get the full value of MongoDB.\n\n \n\n", "format": "md", "metadata": {"tags": ["MongoDB", "SQL"], "pageDescription": "Discover the 3 things you need to know when you switch from SQL to MongoDB.", "contentType": "Article"}, "title": "3 Things to Know When You Switch from SQL to MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-eventbridge-slack", "action": "created", "body": "# Integrate Your Realm App with Amazon EventBridge\n\n>\n>\n>This post was developed with the help of AWS.\n>\n>\n\nRealm makes it easy to develop compelling mobile applications backed by a serverless MongoDB Realm back end and the MongoDB Atlas database service. You can enrich those applications by integrating with AWS's broad ecosystem of services. In this article, we'll show you how to configure Realm and AWS to turn Atlas database changes into Amazon EventBridge events \u2013 all without adding a single line of code. Once in EventBridge, you can route events to other services which can act on them.\n\nWe'll use an existing mobile chat application (RChat). RChat creates new `ChatMessage` objects which Realm Sync writes to the ChatMessage Atlas collection. Realm also syncs the chat message with all other members of the chat room.\n\nThis post details how to add a new feature to the RChat application \u2013 forwarding messages to a Slack channel.\n\nWe'll add a Realm Trigger that forwards any new `ChatMessage` documents to EventBridge. EventBridge stores those events in an event bus, and a rule will route it to a Lambda function. 
The Lambda function will use the Slack SDK (using credentials we'll store in AWS Secrets Manager).\n\n>\n>\n>Amazon EventBridge is a serverless event bus that makes it easier to connect applications together using data from your applications, integrated software as a service (SaaS) applications, and AWS services. It does so by delivering a stream of real-time data from various event sources. You can set up routing rules to send data to targets like AWS Lambda and build loosely coupled application architectures that react in near-real time to data sources.\n>\n>\n\n## Prerequisites\n\nIf you want to build and run the app for yourself, this is what you'll\nneed:\n\n- Mac OS 11+\n- Node.js 12.x+\n- Xcode 12.3+\n- iOS 14.2+ (a real device, or the simulator built into Xcode)\n- git command-line tool\n- realm-cli command-line tool\n- AWS account\n- (Free) MongoDB account\n- Slack account\n\nIf you're not interested in running the mobile app (or don't have access to a Mac), the article includes instructions on manually adding a document that will trigger an event being sent to EventBridge.\n\n## Walkthrough\n\nThis walkthrough shows you how to:\n\n- Set Up the RChat Back End Realm App\n- Create a Slack App\n- Receive MongoDB Events in Amazon EventBridge\n- Store Slack Credentials in AWS Secrets Manager\n- Write and Configure the AWS Lambda Function\n- Link the Lambda Function to the MongoDB Partner Event Bus\n- Run the RChat iOS App\n- Test the End-to-End Integration (With or Without the iOS App)\n\n### Set Up the RChat Back End Realm App\n\nIf you don't already have a MongoDB cloud account, create one. You'll also create an Atlas organization and project as you work through the wizard. For this walkthrough, you can use the free tier. Stick with the defaults (e.g., \"Cluster Name\" = \"Cluster0\") but set the version to MongoDB 4.4.\n\nWhile your database cluster is starting, select \"Project Access\" under \"Access Manager.\" Create an API key with \"Project Owner\" permissions. Add your current IP address to the access list. Make a note of the API keys; they're needed when using realm-cli.\n\nWait until the Atlas cluster is running.\n\nFrom a terminal, import the back end Realm application (substituting in your Atlas project's API keys) using realm-cli:\n\n``` bash\ngit clone https://github.com/realm/RChat.git\ncd RChat/RChat-Realm/RChat\nrealm-cli login --api-key --private-api-key \nrealm-cli import # Then answer prompts, naming the app \"RChat\"\n```\n\nFrom the Atlas UI, click on the Realm logo and you will see the RChat app. Open it and make a note of the Realm \"App Id\":\n\nOptionally, create database indexes by using mongorestore to import the empty database from the `dump` folder.\n\n### Create a Slack App\n\nThe Slack app simply allows us to send a message to a Slack channel.\n\nNavigate to the Slack API page. (You'll need to log in or register a new account if you don't have one.)\n\nClick on the button to create a new Slack app, name it \"RChat,\" and select one of your Slack workspaces. (If using your company's account, you may want or need to create a new workspace.)\n\nGive your app a short description and then click \"Save Changes.\"\n\nAfter creating your Slack app, select the \"OAuth & Permissions\" link. 
Scroll down to \"Bot Token Scopes\" and add the `chat.write` and `channels:read` scopes.\n\nClick on \"Install to Workspace\" and then \"Allow.\"\n\nTake a note of the new \"Bot User OAuth Access Token.\"\n\nFrom your Slack client, create a new channel named \"rchat-notifications.\" Invite your Slack app bot to the channel (i.e., send a message from the channel to \"@RChat Messenger\" or whatever Slack name you gave to your app):\n\nYou now need to find its channel ID from a terminal window (substituting in your Slack OAuth access token):\n\n``` bash\ncurl --location --request GET 'slack.com/api/conversations.list' \\\n--header 'Authorization: Bearer xoxb-XXXXXXXXXXXXXXX-XXXXXXXXXXXX-XXXXXXXXXXXXXXXXXXX'\n```\n\nIn the results, you'll find an entry for your new \"rchat-notifications\" channel. Take a note of its `id`; it will be stored in AWS Secrets Manager and then used from the Lambda function when calling the Slack SDK:\n\n``` json\n{\n \"name\" : \"rchat-notifications\",\n \"is_pending_ext_shared\" : false,\n \"is_ext_shared\" : false,\n \"is_general\" : false,\n \"is_private\" : false,\n \"is_member\" : false,\n \"name_normalized\" : \"rchat-notifications\",\n \"is_archived\" : false,\n \"is_channel\" : true,\n \"topic\" : {\n \"last_set\" : 0,\n \"creator\" : \"\",\n \"value\" : \"\"\n },\n \"unlinked\" : 0,\n \"is_org_shared\" : false,\n \"is_group\" : false,\n \"shared_team_ids\" : \n \"T01JUGHQXXX\"\n ],\n \"is_shared\" : false,\n \"is_mpim\" : false,\n \"is_im\" : false,\n \"pending_connected_team_ids\" : [],\n \"purpose\" : {\n \"last_set\" : 1610987122,\n \"creator\" : \"U01K7ET1XXX\",\n \"value\" : \"This is for testing the RChat app\"\n },\n \"creator\" : \"U01K7ET1XXX\",\n \"created\" : 1610987121,\n \"parent_conversation\" : null,\n \"id\" : \"C01K1NYXXXX\",\n \"pending_shared\" : [],\n \"num_members\" : 3,\n \"previous_names\" : []\n}\n```\n\n### Receive MongoDB Events in Amazon EventBridge\n\nEventBridge supports MongoDB as a partner event source; this makes it very easy to receive change events from Realm Triggers.\n\nFrom the [EventBridge console, select \"Partner event sources.\" Search for the \"MongoDB\" partner and click \"Set up\":\n\nTake a note of your AWS account ID.\n\nReturn to the Realm UI navigate to \"Triggers\" and click \"Add a trigger.\" Configure the trigger as shown here:\n\nRather than sticking with the default \"Function\" event type (which is\nRealm Function, not to be confused with Lambda), select \"EventBridge,\"\nadd your AWS Account ID from the previous section, and click \"Save\"\nfollowed by \"REVIEW & DEPLOY\":\n\nReturn to the AWS \"Partner event sources\" page, select the new source, and click \"Associate with event bus\":\n\nOn the next screen, leave the \"Resource-based policy\" empty.\n\nReturning to the \"Event buses\" page, you'll find the new MongoDB partner bus.\n\n### Store Slack Credentials in AWS Secrets Manager\n\nWe need a new Lambda function to be invoked on any MongoDB change events added to the event bus. That function will use the Slack API to send messages to our channel. The Lambda function must provide the OAuth token and channel ID to use the Slack SDK. 
Rather than storing that private information in the function, it's more secure to hold them in AWS Secrets Manager.\n\nNavigate to the Secrets Manager console and click \"Store a new secret.\" Add the values you took a note of when creating the Slack app:\n\nClick through the wizard, and apart from assigning a unique name to the secret (and take a note of it as it's needed when configuring the Lambda function), leave the other fields as they are. Take a note of the ARN for the new secret as it's required when configuring the Lambda function.\n\n### Write and Configure the AWS Lambda Function\n\nFrom the Lambda console, click \"Create Function.\" Name the function \"sendToSlack\" and set the runtime to \"Node.js 12.x.\"\n\nAfter creating the Lambda function, navigate to the \"Permissions\" tab and click on the \"Execution role\" role name. On the new page, click on the \"Policy name\" and then \"Edit policy.\"\n\nClick \"Add additional permissions\" and select the \"Secrets Manager\" service:\n\nSelect the \"ListSecrets\" action. This permission allows the Lambda function to see what secrets are available, but not to read our specific Slack secret. To remedy that, click \"Add additional permissions\" again. Once more, select the \"Secrets Manager\" service, but this time select the \"Read\" access level and specify your secret's ARN in the resources section:\n\nReview and save the new permissions.\n\nReturning to the Lambda function, select the \"Configuration\" tab and add an environment variable to set the \"secretName\" to the name you chose when creating the secret (the function will use this to access Secret Manager):\n\nIt can take some time for the function to fetch the secret for the first time, so set the timeout to 30 seconds in the \"Basic settings\" section.\n\nFinally, we can write the actual Lambda function.\n\nFrom a terminal, bootstrap the function definition:\n\n``` bash\nmkdir lambda\ncd lambda\nnpm install '@slack/web-api'\n```\n\nIn the same lambda directory, create a file called `index.js`:\n\n``` javascript\nconst {WebClient} = require('@slack/web-api');\nconst AWS = require('aws-sdk');\n\nconst secretName = process.env.secretName;\n\nlet slackToken = \"\";\nlet channelId = \"\";\nlet secretsManager = new AWS.SecretsManager();\n\nconst initPromise = new Promise((resolve, reject) => {\n secretsManager.getSecretValue(\n { SecretId: secretName },\n function(err, data) {\n if(err) {\n console.error(`Failed to fetch secrets: ${err}`);\n reject();\n } else {\n const secrets = JSON.parse(data.SecretString);\n slackToken = secrets.slackToken;\n channelId = secrets.channelId;\n resolve()\n }\n }\n )\n});\n\nexports.handler = async (event) => {\n await initPromise;\n const client = new WebClient({ token: slackToken });\n const blocks = \n {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": `*${event.detail.fullDocument.author} said...*\\n\\n${event.detail.fullDocument.text}`\n },\n \"accessory\": {\n \"type\": \"image\",\n \"image_url\": \"https://cdn.dribbble.com/users/27903/screenshots/4327112/69chat.png?compress=1&resize=800x600\",\n \"alt_text\": \"Chat logo\"\n }\n },\n {\n \"type\": \"section\",\n \"text\": {\n \"type\": \"mrkdwn\",\n \"text\": `Sent from `\n }\n },\n {\n \"type\": \"divider\"\n }\n ]\n\n await publishMessage(\n channelId, `Sent from RChat: ${event.detail.fullDocument.author} said \"${event.detail.fullDocument.text}\"`,\n blocks);\n\n const response = {\n statusCode: 200,\n body: JSON.stringify('Slack message sent')\n };\n return 
response;\n\n async function publishMessage(id, text, blocks) {\n try {\n const result = await client.chat.postMessage({\n token: slackToken,\n channel: id,\n text: text,\n blocks: blocks\n });\n }\n catch (error) {\n console.error(error);\n }\n }\n};\n```\n\nThere are a couple of things to call out in that code.\n\nThis is how the Slack credentials are fetched from Secret Manager:\n\n``` javascript\nconst secretName = process.env.secretName;\nvar MyPromise = new AWS.SecretsManager();\nconst secret = await MyPromise.getSecretValue({ SecretId: secretName}).promise();\nconst openSecret = JSON.parse(secret.SecretString);\nconst slackToken = openSecret.slackToken;\nconst channelId = openSecret.channelId;\n```\n\n`event` is passed in as a parameter, and the function retrieves the original MongoDB document's contents from `event.detail.fullDocument`.\n\n`blocks` is optional, and if omitted, the SDK uses text as the body of the Slack message.\n\nPackage up the Lambda function:\n\n``` bash\nzip -r ../lambda.zip .\n```\n\nFrom the Lambda console, upload the zip file and then deploy:\n\n![\"Upload Lambda Function\"\n\nThe Lambda function is now complete, and the next section will start routing events from the EventBridge partner message bus to it.\n\n### Link the Lambda Function to the MongoDB Partner Event Bus\n\nThe final step to integrate our Realm app with the new Lambda function is to have that function consume the events from the event bus. We do that by adding a new EventBridge rule.\n\nReturn to the EventBridge console and click the \"Rules\" link. Select the \"aws.partner/mongodb.com/stitch.trigger/xxx\" event bus and click \"Create rule.\"\n\nThe \"Name\" can be anything. You should use an \"Event pattern,\" set \"Pre-defined pattern by service,\" search for \"Service partner\" \"MongoDB,\" and leave the \"Event pattern\" as is. This rule matches all bus events linked to our AWS account (i.e., it will cover everything sent from our Realm function):\n\nSelect the new Lambda function as the target and click \"Create\":\n\n### Run the RChat iOS App\n\nAfter creating the back end Realm app, open the RChat iOS app in Xcode:\n\n``` bash\ncd ../../RChat-iOS\nopen RChat.xcodeproj\n```\n\nNavigate to `RChatApp.swift`. Replace `rchat-xxxxx` with your Realm App Id:\n\nSelect your target device (a connected iPhone/iPad or one of the built-in simulators) and build and run the app with `\u2318r`.\n\n### Test the End-to-End Integration (With or Without the iOS App)\n\nTo test a chat app, you need at least two users and two instances of the chat app running.\n\nFrom Xcode, run (`\u2318r`) the RChat app in one simulator, and then again in a second simulator after changing the target device. On each device, register a new user. As one user, create a new chat room (inviting the second user). Send messages to the chat room from either user, and observe that message also appearing in Slack:\n\n#### If You Don't Want to Use the iOS App\n\nSuppose you're not interested in using the iOS app or don't have access to a Mac. In that case, you can take a shortcut by manually adding documents to the `ChatMessage` collection within the `RChat` database. Do this from the \"Collections\" tab in the Atlas UI. Click on \"INSERT DOCUMENT\" and then ensure that you include fields for \"author\" and \"text\":\n\n## Summary\n\nThis post stepped through how to get your data changes from MongoDB into your AWS ecosystem with no new code needed. 
Once your EventBridge bus has received the change events, you can route them to one or more services. Here we took a common approach by sending them to a Lambda function which then has the freedom to import external libraries and work with other AWS or external services.\n\nTo understand more about the Realm chat app that was the source of the messages, read Building a Mobile Chat App Using Realm \u2013 Data Architecture.\n\n## References\n\n- RChat GitHub repo\n- Building a Mobile Chat App Using Realm \u2013 Data Architecture\n- Slack SDK\n- Sending Trigger Events to AWS EventBridge\n\n>\n>\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n>\n>\n", "format": "md", "metadata": {"tags": ["Realm"], "pageDescription": "Step through extending a Realm chat app to send messages to a Slack channel using Amazon EventBridge", "contentType": "Tutorial"}, "title": "Integrate Your Realm App with Amazon EventBridge", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-aws-kinesis-firehose-destination", "action": "created", "body": "# Using MongoDB Realm WebHooks with Amazon Kinesis Data Firehose\n\nWith MongoDB Realm's AWS integration, it has always been as simple as possible to use MongoDB as a Kinesis data stream. Now with the launch of third-party data destinations in Kinesis, you can also use MongoDB Realm and MongoDB Atlas as an AWS Kinesis Data Firehose destination.\n\n>Keep in mind that this is just an example. You do not need to use Atlas as both the source **and** destination for your Kinesis streams. I am only doing so in this example to demonstrate how you can use MongoDB Atlas as both an AWS Kinesis Data and Delivery Stream. But, in actuality, you can use any source for your data that AWS Kinesis supports, and still use MongoDB Atlas as the destination.\n\n## Prerequisites\n\nBefore we get started, you will need the following:\n\n- A MongoDB Atlas account with a deployed cluster; a free M0 cluster is perfectly adequate for this example. \u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n- A MongoDB Realm App. You can learn more about creating a Realm App and linking it to your Atlas cluster in our \"Create a Realm App\" guide\n- An AWS account and the AWS CLI. Check out \"What Is the AWS Command Line Interface?\" for a guide to installing and configuring the AWS CLI\n\n## Setting up our Kinesis Data Stream\n\nIn this example, the source of my data is a Raspberry Pi with a Sense HAT. The output from the Sense HAT is read by a Python script running on the Pi. 
This script then stores the sensor data such as temperature, humidity, and pressure in MongoDB Atlas.\n\n``` python\nimport platform\nimport time\nfrom datetime import datetime\nfrom pymongo import MongoClient\nfrom sense_hat import SenseHat\n\n# Setup the Sense HAT module and connection to MongoDB Atlas\nsense = SenseHat()\nclient = MongoClient(process.env.MONGODB_CONNECTION_STRING)\ndb = client.monitors\n\nsense.load_image(\"img/realm-sensehat.png\")\n\n# If the acceleration breaches 1G we assume the device is being moved\ndef is_moving(x, y, z):\n for acceleration in x, y, z]:\n if acceleration < -1 or acceleration > 1:\n return True\n\n return False\n\nwhile True:\n\n # prepare the object to save as a document in Atlas\n log = {\n \"nodeName\": platform.node(),\n \"humidity\": sense.get_humidity(),\n \"temperature\": sense.get_temperature(),\n \"pressure\": sense.get_pressure(),\n \"isMoving\": is_moving(**sense.get_accelerometer_raw()),\n \"acceleration\": sense.get_accelerometer_raw(),\n \"recordedAt\": datetime.now(),\n }\n\n # Write the report object to MongoDB Atlas\n report = db.reports.insert_one(log)\n\n # Pause for 0.5 seconds before capturing next round of sensor data\n time.sleep(0.5)\n```\n\nI then use a [Realm Database Trigger to transform this data into a Kinesis Data Stream.\n\n>Realm functions are useful if you need to transform or do some other computation with the data before putting the record into Kinesis. However, if you do not need to do any additional computation, it is even easier with the AWS Eventbridge. MongoDB offers an AWS Eventbridge partner event source that lets you send Realm Trigger events to an event bus instead of calling a Realm Function. You can configure any Realm Trigger to send events to EventBridge. You can find out more in the documentation: \"Send Trigger Events to AWS EventBridge\"\n\n``` javascript\n// Function is triggered anytime a document is inserted/updated in our collection\nexports = function (event) {\n\n // Access the AWS service in Realm\n const awsService = context.services.get(\"AWSKinesis\")\n\n try {\n awsService\n .kinesis()\n .PutRecord({\n /* this trigger function will receive the full document that triggered the event\n put this document into Kinesis\n */\n Data: JSON.stringify(event.fullDocument),\n StreamName: \"realm\",\n PartitionKey: \"1\",\n })\n .then(function (response) {\n return response\n })\n } catch (error) {\n console.log(JSON.parse(error))\n }\n}\n```\n\nYou can find out more details on how to do this in our blog post \"Integrating MongoDB and Amazon Kinesis for Intelligent, Durable Streams.\"\n\n## Amazon Kinesis Data Firehose Payloads\n\nAWS Kinesis HTTP(s) Endpoint Delivery Requests are sent via POST with a single JSON document as the request body. Delivery destination URLs must be HTTPS.\n\n### Delivery Stream Request Headers\n\nEach Delivery Stream Request contains essential information in the HTTP headers, some of which we'll use in our Realm WebHook in a moment.\n\n- `X-Amz-Firehose-Protocol-Version`: This header indicates the version of the request/response formats. Currently, the only version is 1.0, but new ones may be added in the future\n- `X-Amz-Firehose-Request-Id`: This value of this header is an opaque GUID used for debugging purposes. Endpoint implementations should log the value of this header if possible, for both successful and unsuccessful requests. 
The request ID is kept the same between multiple attempts of the same request\n- `X-Amz-Firehose-Source-Arn`: The ARN of the Firehose Delivery Stream represented in ASCII string format. The ARN encodes region, AWS account id, and the stream name\n- `X-Amz-Firehose-Access-Key`: This header carries an API key or other credentials. This value is set when we create or update the delivery stream. We'll discuss it in more detail later\n\n### Delivery Stream Request Body\n\nThe body carries a single JSON document, you can configure the max body size, but it has an upper limit of 64 MiB, before compression. The JSON document has the following properties:\n\n- `requestId`: Same as the value in the X-Amz-Firehose-Request-Id header, duplicated here for convenience\n- `timestamp`: The timestamp (milliseconds since epoch) at which the Firehose server generated this request\n- `records`: The actual records of the Delivery Stream, carrying your data. This is an array of objects, each with a single property of data. This property is a base64 encoded string of your data. Each request can contain a minimum of 1 record and a maximum of 10,000. It's worth noting that a record can be empty\n\n### Response Format\n\nWhen responding to a Delivery Stream Request, there are a few things you should be aware of.\n\n#### Status Codes\n\nThe HTTP status code must be in the 2xx, 4xx, 5xx range; they will not follow redirects, so nothing in the 3xx range. Only a status of 200 is considered a successful delivery of the records; all other statuses are regarded as a retriable error, except 413.\n\n413 (size exceeded) is considered a permanent failure, and will not be retried. In all other error cases, they will reattempt delivery of the same batch of records using an exponential back-off algorithm.\n\nThe retries are backed off using an initial back-off time of 1 second with a jitter factor of 15% . Each subsequent retry is backed off using the formula initial-backoff-time \\* (multiplier(2) ^ retry_count) with added jitter. The back-off time is capped by a maximum interval of 2 minutes. For example on the 'n'-th retry the back-off time is = MAX(120sec, (1 \\* (2^n)) \\* random(0.85, 1.15).\n\nThese parameters are subject to change. Please refer to the AWS Firehose documentation for exact initial back-off time, max back-off time, multiplier, and jitter percentages.\n\n#### Other Response Headers\n\nAs well as the HTTP status code your response should include the following headers:\n\n- `Content-Type`: The only acceptable content type is application/json\n- `Content-Length`: The Content-Length header must be present if the response has a body\n\nDo not send a `Content-Encoding` header, the body must be uncompressed.\n\n#### Response Body\n\nJust like the Request, the Response body is JSON, but it has a max filesize of 1MiB. This JSON body has two required properties:\n\n- `requestId`: This must match the requestId in the Delivery Stream Request\n- `timestamp`: The timestamp (milliseconds since epoch) at which the server processed this request\n\nIf there was a problem processing the request, you could optionally include an errorMessage property. If a request fails after exhausting all retries, the last Instance of this error message is copied to the error output S3 bucket, if one has been configured for the Delivery Stream.\n\n## Storing Shared Secrets\n\nWhen we configure our Kinesis Delivery Stream, we will have the opportunity to set an AccessKey value. 
This is the same value which is sent with each request as the `X-Amz-Firehose-Access-Key` header. We will use this shared secret to validate the source of the request.\n\nWe shouldn't hard-code this access key in our Realm function; instead, we will create a new secret named `FIREHOSE_ACCESS_KEY`. It can be any value, but keep a note of it as you'll need to reference it later when we configure the Kinesis Delivery Stream.\n\n## Creating our Realm WebHook\n\nBefore we can write the code for our WebHook, we first need to configure it. The \"Configure Service WebHooks guide in the Realm documentation goes into more detail, but you will need to configure the following options:\n\n- Authentication type must be set to system\n- The HTTP method is POST\n- \"Respond with result\" is disabled\n- Request validation must be set to \"No Additional Authorisation\"; we need to handle authenticating Requests ourselves using the X-Amz-Firehose-Access-Key header\n\n### The Realm Function\n\nFor our WebHook we need to write a function which:\n\n- Receives a POST request from Kinesis\n- Ensures that the `X-Amz-Firehose-Access-Key` header value matches the `FIREHOSE_ACCESS_KEY` secret\n- Parses the JSON body from the request\n- Iterates over the reports array and base64 decodes the data in each\n- Parses the base64 decoded JSON string into a JavaScript object\n- Writes the object to MongoDB Atlas as a new document\n- Returns the correct status code and JSON body to Kinesis in the response\n\n``` javascript\nexports = function(payload, response) {\n\n /* Using Buffer in Realm causes a severe performance hit\n this function is ~6 times faster\n */\n const decodeBase64 = (s) => {\n var e={},i,b=0,c,x,l=0,a,r='',w=String.fromCharCode,L=s.length\n var A=\"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\"\n for(i=0;i<64;i++){eA.charAt(i)]=i}\n for(x=0;x=8){((a=(b>>>(l-=8))&0xff)||(x<(L-2)))&&(r+=w(a))}\n }\n return r\n }\n\n // Get AccessKey from Request Headers\n const firehoseAccessKey = payload.headers[\"X-Amz-Firehose-Access-Key\"]\n\n // Check shared secret is the same to validate Request source\n if(firehoseAccessKey == context.values.get(\"FIREHOSE_ACCESS_KEY\")) {\n\n // Payload body is a JSON string, convert into a JavaScript Object\n const data = JSON.parse(payload.body.text())\n\n // Each record is a Base64 encoded JSON string\n const documents = data.records.map((record) => {\n const document = JSON.parse(decodeBase64(record.data))\n return {\n ...document,\n _id: new BSON.ObjectId(document._id)\n }\n })\n\n // Perform operations as a bulk\n const bulkOp = context.services.get(\"mongodb-atlas\").db(\"monitors\").collection(\"firehose\").initializeOrderedBulkOp()\n documents.forEach((document) => {\n bulkOp.find({ _id:document._id }).upsert().updateOne(document)\n })\n\n response.addHeader(\n \"Content-Type\",\n \"application/json\"\n )\n\n bulkOp.execute().then(() => {\n // All operations completed successfully\n response.setStatusCode(200)\n response.setBody(JSON.stringify({\n requestId: payload.headers['X-Amz-Firehose-Request-Id'][0],\n timestamp: (new Date()).getTime()\n }))\n return\n }).catch((error) => {\n // Catch any error with execution and return a 500 \n response.setStatusCode(500)\n response.setBody(JSON.stringify({\n requestId: payload.headers['X-Amz-Firehose-Request-Id'][0],\n timestamp: (new Date()).getTime(),\n errorMessage: error\n }))\n return\n })\n } else {\n // Validation error with Access Key\n response.setStatusCode(401)\n response.setBody(JSON.stringify({\n 
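// Firehose requires requestId and timestamp in the response body even when the request is rejected\n 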
requestId: payload.headers['X-Amz-Firehose-Request-Id'][0],\n timestamp: (new Date()).getTime(),\n errorMessage: \"Invalid X-Amz-Firehose-Access-Key\"\n }))\n return\n }\n}\n```\n\nAs you can see, Realm functions are mostly just vanilla JavaScript. We export a function which takes the request and response as arguments and returns the modified response.\n\nOne extra we do have within Realm functions is the global context object. This provides access to other Realm functions, values, and services; you may have noticed in the trigger function at the start of this article that we use the context object to access our AWS service. Whereas in the code above we're using the context object to access the `mongodb-atlas` service and to retrieve our secret value. You can read more about what's available in the Realm context in our documentation.\n\n#### Decoding and Parsing the Payload Body\n\n``` javascript\n// Payload body is a JSON string, convert into a JavaScript Object\nconst data = JSON.parse(payload.body.text())\n\n// Each record is a Base64 encoded JSON string\nconst documents = data.records.map((record) => {\n const document = JSON.parse(decodeBase64(record.data))\n return {\n ...document,\n _id: new BSON.ObjectId(document._id)\n }\n})\n```\n\nWhen we receive the POST request, we first have to convert the body\u2014which is a JSON string\u2014into a JavaScript object. Then we can iterate over each of the records.\n\nThe data in each of these records is Base64 encoded, so we have to decode it first.\n\n>Using `Buffer()` within Realm functions may currently cause a degradation in performance. Currently we do not recommend using Buffer to decode Base64 strings, but instead to use a function such as `decodeBase64()` in the example above.\n\nThis data could be anything, whatever you've supplied in your Delivery Stream, but in this example, it is the MongoDB document sent from our Realm trigger. This document is also a JSON string, so we'll need to parse it back into a JavaScript object.\n\n#### Writing the Reports to MongoDB Atlas\n\nOnce the parsing and decoding are complete, we're left with an array of between 1 and 10,000 objects, depending on the size of the batch. It's tempting to pass this array to `insertMany()`, but there is the possibility that some records might already exist as documents in our collection.\n\nRemember if Kinesis does not receive an HTTP status of 200 in response to a request it will, in the majority of cases, retry the batch. We have to take into account that there could be an issue after the documents have been written that prevents Kinesis from receiving the 200 OK status. 
If this occurs and we try to insert the document again, MongoDB will raise a `Duplicate key error` exception.\n\nTo prevent this we perform a `find()` and `updateOne()`, `with upsert()`.\n\nWhen updating/inserting a single document, you can use `updateOne()` with the `upsert` option.\n\n``` javascript\ncontext.services.get(\"mongodb-atlas\").db(\"monitors\").collection(\"firehose\").updateOne(\n {_id: document._id},\n document,\n {upsert: true}\n)\n```\n\nBut we could potentially have to update/insert 10,000 records, so instead, we perform a bulk write.\n\n``` javascript\n// Perform operations as a bulk\nconst bulkOp = context.services.get(\"mongodb-atlas\").db(\"monitors\").collection(\"firehose\").initializeOrderedBulkOp()\ndocuments.forEach((document) => {\n bulkOp.find({ _id:document._id }).upsert().updateOne(document)\n})\n```\n\n#### Sending the Response\n\n``` javascript\nbulkOp.execute().then(() => {\n // All operations completed successfully\n response.setStatusCode(200)\n response.setBody(JSON.stringify({\n requestId: payload.headers['X-Amz-Firehose-Request-Id'][0],\n timestamp: (new Date()).getTime()\n }))\n return\n})\n```\n\nIf our write operations have completed successfully, we return an HTTP 200 status code with our response. Otherwise, we return a 500 and include the error message from the exception in the response body.\n\n``` javascript\n).catch((error) => {\n // Catch any error with execution and return a 500 \n response.setStatusCode(500)\n response.setBody(JSON.stringify({\n requestId: payload.headers['X-Amz-Firehose-Request-Id'][0],\n timestamp: (new Date()).getTime(),\n errorMessage: error\n }))\n return\n})\n```\n\n### Our WebHook URL\n\nNow we've finished writing our Realm Function, save and deploy it. Then on the settings tab copy the WebHook URL, we'll need it in just a moment.\n\n## Creating an AWS Kinesis Delivery Stream\n\nTo create our Kinesis Delivery Stream we're going to use the AWS CLI, and you'll need the following information:\n\n- Your Kinesis Data Stream ARN\n- The ARN of your respective IAM roles, also ensure that service-principal firehose.amazonaws.com is allowed to assume these roles\n- Bucket and Role ARNs for the S3 bucket to be used for errors/backups\n- MongoDB Realm WebHook URL\n- The value of the `FIREHOSE_ACCESS_KEY`\n\nYour final AWS CLI command will look something like this:\n\n``` bash\naws firehose --endpoint-url \"https://firehose.us-east-1.amazonaws.com\" \\\ncreate-delivery-stream --delivery-stream-name RealmDeliveryStream \\\n--delivery-stream-type KinesisStreamAsSource \\\n--kinesis-stream-source-configuration \\\n\"KinesisStreamARN=arn:aws:kinesis:us-east-1:78023564309:stream/realm,RoleARN=arn:aws:iam::78023564309:role/KinesisRealmRole\" \\\n--http-endpoint-destination-configuration \\\n\"RoleARN=arn:aws:iam::78023564309:role/KinesisFirehoseFullAccess,\\\nS3Configuration={RoleARN=arn:aws:iam::78023564309:role/KinesisRealmRole, BucketARN=arn:aws:s3:::realm-kinesis},\\\nEndpointConfiguration={\\\nUrl=https://webhooks.mongodb-stitch.com/api/client/v2.0/app/realmkinesis-aac/service/kinesis/incoming_webhook/kinesisDestination,\\\nName=RealmCloud,AccessKey=sdhfjkdbf347fb3icb34i243orn34fn234r23c}\"\n```\n\nIf everything executes correctly, you should see your new Delivery Stream appear in your Kinesis Dashboard. 
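Once data starts flowing, a quick mongosh query against the destination collection used in this example (database `monitors`, collection `firehose`, both taken from the webhook code above) is a handy way to confirm that the upserts are arriving:\n\n``` javascript\nuse monitors;\n\n// The webhook upserts each decoded record by its original _id, so the newest ObjectIds correspond to the most recent sensor readings\ndb.firehose.find().sort({ _id: -1 }).limit(5)\n```\n\n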
Also, after a few moments, the WebHook event will appear in your Realm logs and documents will begin to populate your collection!\n\n![Screenshot Kinesis delivery stream dashboard\n\n## Next Steps\n\nWith the Kinesis data now in MongoDB Atlas, we have a wealth of possibilities. We can transform it with aggregation pipelines, visualise it with Charts, turn it into a GraphQL API, or even trigger more Realm functions or services.\n\n## Further reading\n\nNow you've seen how you can use MongoDB Realm as an AWS Kinesis HTTP Endpoint you might find our other articles on using MongoDB with Kinesis useful:\n\n- Integrating MongoDB and Amazon Kinesis for Intelligent, Durable Streams\n- Processing Data Streams with Amazon Kinesis and MongoDB Atlas\n- MongoDB Stitch Triggers & Amazon Kinesis \u2014 The AWS re\\:Invent Stitch Rover Demo\n- Near-real time MongoDB integration with AWS kinesis stream and Apache Spark Streaming\n\n>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.", "format": "md", "metadata": {"tags": ["Realm", "JavaScript", "AWS"], "pageDescription": "With the launch of third-party data destinations in Kinesis, you can use MongoDB Realm and MongoDB Atlas as an AWS Kinesis Data Firehose destination.", "contentType": "Tutorial"}, "title": "Using MongoDB Realm WebHooks with Amazon Kinesis Data Firehose", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/aggregation-pipeline-covid19-benford-law", "action": "created", "body": "# Aggregation Pipeline: Applying Benford's Law to COVID-19 Data\n\n## Introduction\n\nIn this blog post, I will show you how I built an aggregation\npipeline to\napply Benford's law on\nthe COVID-19 data set that we have made available in the following\ncluster:\n\n``` none\nmongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19\n```\n\nIf you want to know more about this cluster and how we transformed the\nCSV files from Johns Hopkins University's repository into clean MongoDB documents, check out this blog post.\n\nFinally, based on this pipeline, I was able to produce a dashboard in MongoDB Charts. For example, here is one Chart that applies Benford's law on the worldwide daily cases of COVID-19:\n\n:charts]{url=\"https://charts.mongodb.com/charts-open-data-covid-19-zddgb\" id=\"bff5cb5e-ce3d-4fe7-a208-be9da0502621\"}\n\n>\n>\n>**Disclaimer**: This article will focus on the aggregation pipeline and\n>the stages I used to produce the result I wanted to get to be able to\n>produce these charts\u2014not so much on the results themselves, which can be\n>interpreted in many different ways. One of the many issues here is the\n>lack of data. The pandemic didn't start at the same time in all the\n>countries, so many countries don't have enough data to make the\n>percentages accurate. 
But feel free to interpret these results the way\n>you want...\n>\n>\n\n## Prerequisites\n\nThis blog post assumes that you already know the main principles of the\n[aggregation pipeline\nand you are already familiar with the most common stages.\n\nIf you want to follow along, feel free to use the cluster mentioned\nabove or take a copy using mongodump or mongoexport, but the main takeaway from this blog post is the techniques I used to\nproduce the output I wanted.\n\nAlso, I can't recommend you enough to use the aggregation pipeline\nbuilder in MongoDB Atlas\nor Compass to build your pipelines and play with the ones you will see in this blog post.\n\nAll the code is available in this repository.\n\n## What is Benford's Law?\n\nBefore we go any further, let me tell you a bit more about Benford's\nlaw. What does Wikipedia\nsay?\n\n>\n>\n>Benford's law \\...\\] is an observation about the frequency distribution\n>of leading digits in many real-life sets of numerical data. The law\n>states that in many naturally occurring collections of numbers, the\n>leading digit is likely to be small. In sets that obey the law, the\n>number 1 appears as the leading significant digit about 30% of the time,\n>while 9 appears as the leading significant digit less than 5% of the\n>time. If the digits were distributed uniformly, they would each occur\n>about 11.1% of the time. Benford's law also makes predictions about the\n>distribution of second digits, third digits, digit combinations, and so\n>on.\n>\n>\n\nHere is the frequency distribution of the first digits that we can\nexpect for a data set that respects Benford's law:\n\nA little further down in Wikipedia's article, in the \"Applications\"\nsection, you can also read the following:\n\n>\n>\n>**Accounting fraud detection**\n>\n>In 1972, Hal Varian suggested that the law could be used to detect\n>possible fraud in lists of socio-economic data submitted in support of\n>public planning decisions. Based on the plausible assumption that people\n>who fabricate figures tend to distribute their digits fairly uniformly,\n>a simple comparison of first-digit frequency distribution from the data\n>with the expected distribution according to Benford's law ought to show\n>up any anomalous results.\n>\n>\n\nSimply, if your data set distribution is following Benford's law, then\nit's theoretically possible to detect fraudulent data if a particular\nsubset of the data doesn't follow the law.\n\nIn our situation, based on the observation of the first chart above, it\nlooks like the worldwide daily confirmed cases of COVID-19 are following\nBenford's law. 
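As a reference point, Benford's law predicts that the leading digit `d` appears with probability log10(1 + 1/d), which is where the theoretical percentages reused later in this post (30.1% for 1s, 17.6% for 2s, ..., 4.6% for 9s) come from. A few lines of JavaScript, runnable in mongosh or Node.js, are enough to recompute them:\n\n``` js\n// Expected Benford percentages for the leading digits 1 to 9, rounded to one decimal place\nconst expected = Array.from({ length: 9 }, (_, i) => {\n  const digit = i + 1;\n  return { digit: digit, percentage: Math.round(1000 * Math.log10(1 + 1 / digit)) / 10 };\n});\nconsole.log(expected);\n// digit 1 -> 30.1, digit 2 -> 17.6, digit 3 -> 12.5, ..., digit 9 -> 4.6\n```\n\n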
But is it true for each country?\n\nIf I want to answer this question (I don't), I will have to build a\nrelatively complex aggregation pipeline (I do \ud83d\ude04).\n\n## The Data Set\n\nI will only focus on a single collection in this blog post:\n`covid19.countries_summary`.\n\nAs its name suggests, it's a collection that I built (also using an\n[aggregation\npipeline)\nthat contains a daily document for each country in the data set.\n\nHere is an example:\n\n``` json\n{\n _id: ObjectId(\"608b24d4e7a11f5710a66b05\"),\n uids: 504 ],\n confirmed: 19645,\n deaths: 305,\n country: 'Morocco',\n date: 2020-07-25T00:00:00.000Z,\n country_iso2s: [ 'MA' ],\n country_iso3s: [ 'MAR' ],\n country_codes: [ 504 ],\n combined_names: [ 'Morocco' ],\n population: 36910558,\n recovered: 16282,\n confirmed_daily: 811,\n deaths_daily: 6,\n recovered_daily: 182\n}\n```\n\nAs you can see, for each day and country, I have daily counts of the\nCOVID-19 confirmed cases and deaths.\n\n## The Aggregation Pipeline\n\nLet's apply Benford's law on these two series of numbers.\n\n### The Final Documents\n\nBefore we start applying stages (transformations) to our documents,\nlet's define the shape of the final documents which will make it easy to\nplot in MongoDB Charts.\n\nIt's easy to do and defines clearly where to start (the document in the\nprevious section) and where we are going:\n\n``` json\n{\n country: 'US',\n confirmed_size: 435,\n deaths_size: 424,\n benford: [\n { digit: 1, confirmed: 22.3, deaths: 36.1 },\n { digit: 2, confirmed: 21.1, deaths: 14.4 },\n { digit: 3, confirmed: 11.5, deaths: 10.6 },\n { digit: 4, confirmed: 11.7, deaths: 8 },\n { digit: 5, confirmed: 11, deaths: 5 },\n { digit: 6, confirmed: 11.7, deaths: 4.7 },\n { digit: 7, confirmed: 6.7, deaths: 6.8 },\n { digit: 8, confirmed: 2.3, deaths: 6.4 },\n { digit: 9, confirmed: 1.6, deaths: 8 }\n ]\n}\n```\n\nSetting the final objective makes us focused on the target while doing\nour successive transformations.\n\n### The Pipeline in English\n\nNow that we have a starting and an ending point, let's try to write our\npipeline in English first:\n\n1. Regroup all the first digits of each count into an array for the\n confirmed cases and into another one for the deaths for each\n country.\n2. Clean the arrays (remove zeros and negative numbers\u2014see note below).\n3. Calculate the size of these arrays.\n4. Remove countries with empty arrays (countries without cases or\n deaths).\n5. Calculate the percentages of 1s, 2s, ..., 9s in each arrays.\n6. Add a fake country \"BenfordTheory\" with the theoretical values of\n 1s, 2s, etc. we are supposed to find.\n7. Final projection to get the document in the final shape I want.\n\n>\n>\n>Note: The daily fields that I provide in this collection\n>`covid19.countries_summary` are computed from the cumulative counts that\n>Johns Hopkins University (JHU) provides. Simply: Today's count, for each\n>country, is today's cumulative count minus yesterday's cumulative count.\n>In theory, I should have zeros (no deaths or no cases that day), but\n>never negative numbers. But sometimes, JHU applies corrections on the\n>counts without applying them retroactively in the past (as these counts\n>were official counts at some point in time, I guess). So, negative\n>values exist and I chose to ignore them in this pipeline.\n>\n>\n\nNow that we have a plan, let's execute it. 
Each of the points in the\nabove list is an aggregation pipeline stage, and now we \"just\" have to\ntranslate them.\n\n### Stage 1: Arrays of Leading Digits\n\nFirst, I need to be able to extract the first character of\n`$confirmed_daily`, which is an integer.\n\nMongoDB provides a\n[$substring\noperator which we can use if we transform this integer into a string.\nThis is easy to do with the\n$toString\noperator.\n\n``` json\n{ \"$substr\": { \"$toString\": \"$confirmed_daily\" }, 0, 1 ] }\n```\n\nThen, apply this transformation to each country and regroup\n([$group)\nthe result into an array using\n$push.\n\nHere is the first stage:\n\n``` json\n{\n \"$group\": {\n \"_id\": \"$country\",\n \"confirmed\": {\n \"$push\": {\n \"$substr\": \n {\n \"$toString\": \"$confirmed_daily\"\n },\n 0,\n 1\n ]\n }\n },\n \"deaths\": {\n \"$push\": {\n \"$substr\": [\n {\n \"$toString\": \"$deaths_daily\"\n },\n 0,\n 1\n ]\n }\n }\n }\n}\n```\n\nHere is the shape of my documents at this point if I apply this\ntransformation:\n\n``` json\n{\n _id: 'Japan',\n confirmed: [ '1', '3', '7', [...], '7', '5' ],\n deaths: [ '7', '6', '0', [...], '-' , '2' ]\n}\n```\n\n### Stage 2: Clean the Arrays\n\nAs mentioned above, my arrays might contains zeros and `-` which is the\nleading character of a negative number. I decided to ignore this for my\nlittle mathematical experimentation.\n\nIf I now translate *\"clean the arrays\"* into something more\n\"computer-friendly,\" what I actually want to do is *\"filter the\narrays.\"* We can leverage the\n[$filter\noperator and overwrite our existing arrays with their filtered versions\nwithout zeros and dashes by using the\n$addFields\nstage.\n\n``` js\n{\n \"$addFields\": {\n \"confirmed\": {\n \"$filter\": {\n \"input\": \"$confirmed\",\n \"as\": \"elem\",\n \"cond\": {\n \"$and\": \n {\n \"$ne\": [\n \"$$elem\",\n \"0\"\n ]\n },\n {\n \"$ne\": [\n \"$$elem\",\n \"-\"\n ]\n }\n ]\n }\n }\n },\n \"deaths\": { ... } // same as above with $deaths\n }\n}\n```\n\nAt this point, our documents in the pipeline have the same shape as\npreviously.\n\n### Stage 3: Array Sizes\n\nThe final goal here is to calculate the percentages of 1s, 2s, ..., 9s\nin these two arrays, respectively. To compute this, I will need the size\nof the arrays to apply the [rule of\nthree.\n\nThis stage is easy as\n$size\ndoes exactly that.\n\n``` json\n{\n \"$addFields\": {\n \"confirmed_size\": {\n \"$size\": \"$confirmed\"\n },\n \"deaths_size\": {\n \"$size\": \"$deaths\"\n }\n }\n}\n```\n\nTo be completely honest, I could compute this on the fly later, when I\nactually need it. But I'll need it multiple times later on, and this\nstage is inexpensive and eases my mind so... 
Let's\nKISS.\n\nHere is the shape of our documents at this point:\n\n``` json\n{\n _id: 'Japan',\n confirmed: '1', '3', '7', [...], '7', '5' ],\n deaths: [ '7', '6', '9', [...], '2' , '1' ],\n confirmed_size: 452,\n deaths_size: 398\n}\n```\n\nAs you can see for Japan, our arrays are relatively long, so we could\nexpect our percentages to be somewhat accurate.\n\nIt's far from being true for all the countries...\n\n``` json\n{\n _id: 'Solomon Islands',\n confirmed: [\n '4', '1', '1', '3',\n '1', '1', '1', '2',\n '1', '5'\n ],\n deaths: [],\n confirmed_size: 10,\n deaths_size: 0\n}\n```\n\n``` json\n{\n _id: 'Fiji',\n confirmed: [\n '1', '1', '1', '2', '2', '1', '6', '2',\n '2', '1', '2', '1', '5', '5', '3', '1',\n '4', '1', '1', '1', '2', '1', '1', '1',\n '1', '2', '4', '1', '1', '3', '1', '4',\n '3', '2', '1', '4', '1', '1', '1', '5',\n '1', '4', '8', '1', '1', '2'\n ],\n deaths: [ '1', '1' ],\n confirmed_size: 46,\n deaths_size: 2\n}\n```\n\n### Stage 4: Eliminate Countries with Empty Arrays\n\nI'm not good enough at math to decide which size is significant enough\nto be statistically accurate, but good enough to know that my rule of\nthree will need to divide by the size of the array.\n\nAs dividing by zero is bad for health, I need to remove empty arrays. A\nsound statistician would probably also remove the small arrays... but\nnot me \ud83d\ude05.\n\nThis stage is a trivial\n[$match:\n\n``` \n{\n \"$match\": {\n \"confirmed_size\": {\n \"$gt\": 0\n },\n \"deaths_size\": {\n \"$gt\": 0\n }\n }\n}\n```\n\n### Stage 5: Percentages of Digits\n\nWe are finally at the central stage of our pipeline. I need to apply a\nrule of three to calculate the percentage of 1s in an array:\n\n- Find how many 1s are in the array.\n- Multiply by 100.\n- Divide by the size of the array.\n- Round the final percentage to one decimal place. 
(I don't need more\n precision for my charts.)\n\nThen, I need to repeat this operation for each digit and each array.\n\nTo find how many times a digit appears in the array, I can reuse\ntechniques we learned earlier:\n\n``` \n{\n \"$size\": {\n \"$filter\": {\n \"input\": \"$confirmed\",\n \"as\": \"elem\",\n \"cond\": {\n \"$eq\": \n \"$$elem\",\n \"1\"\n ]\n }\n }\n }\n}\n```\n\nI'm creating a new array which contains only the 1s with `$filter` and I\ncalculate its size with `$size`.\n\nNow I can\n[$multiply\nthis value (let's name it X) by 100,\n$divide\nby the size of the `confirmed` array, and\n$round\nthe final result to one decimal.\n\n``` \n{\n \"$round\": \n {\n \"$divide\": [\n { \"$multiply\": [ 100, X ] },\n \"$confirmed_size\"\n ]\n },\n 1\n ]\n}\n```\n\nAs a reminder, here is the final document we want:\n\n``` json\n{\n country: 'US',\n confirmed_size: 435,\n deaths_size: 424,\n benford: [\n { digit: 1, confirmed: 22.3, deaths: 36.1 },\n { digit: 2, confirmed: 21.1, deaths: 14.4 },\n { digit: 3, confirmed: 11.5, deaths: 10.6 },\n { digit: 4, confirmed: 11.7, deaths: 8 },\n { digit: 5, confirmed: 11, deaths: 5 },\n { digit: 6, confirmed: 11.7, deaths: 4.7 },\n { digit: 7, confirmed: 6.7, deaths: 6.8 },\n { digit: 8, confirmed: 2.3, deaths: 6.4 },\n { digit: 9, confirmed: 1.6, deaths: 8 }\n ]\n}\n```\n\nThe value we just calculated above corresponds to the `22.3` that we\nhave in this document.\n\nAt this point, we just need to repeat this operation nine times for each\ndigit of the `confirmed` array and nine other times for the `deaths`\narray and assign the results accordingly in the new `benford` array of\ndocuments.\n\nHere is what it looks like in the end:\n\n``` json\n{\n \"$addFields\": {\n \"benford\": [\n {\n \"digit\": 1,\n \"confirmed\": {\n \"$round\": [\n {\n \"$divide\": [\n {\n \"$multiply\": [\n 100,\n {\n \"$size\": {\n \"$filter\": {\n \"input\": \"$confirmed\",\n \"as\": \"elem\",\n \"cond\": {\n \"$eq\": [\n \"$$elem\",\n \"1\"\n ]\n }\n }\n }\n }\n ]\n },\n \"$confirmed_size\"\n ]\n },\n 1\n ]\n },\n \"deaths\": {\n \"$round\": [\n {\n \"$divide\": [\n {\n \"$multiply\": [\n 100,\n {\n \"$size\": {\n \"$filter\": {\n \"input\": \"$deaths\",\n \"as\": \"elem\",\n \"cond\": {\n \"$eq\": [\n \"$$elem\",\n \"1\"\n ]\n }\n }\n }\n }\n ]\n },\n \"$deaths_size\"\n ]\n },\n 1\n ]\n }\n },\n {\"digit\": 2...},\n {\"digit\": 3...},\n {\"digit\": 4...},\n {\"digit\": 5...},\n {\"digit\": 6...},\n {\"digit\": 7...},\n {\"digit\": 8...},\n {\"digit\": 9...}\n ]\n }\n}\n```\n\nAt this point in our pipeline, our documents look like this:\n\n``` \n{\n _id: 'Luxembourg',\n confirmed: [\n '1', '5', '2', '1', '1', '4', '3', '1', '2', '5', '8', '4',\n '1', '4', '1', '1', '1', '2', '3', '1', '9', '5', '3', '2',\n '2', '2', '1', '7', '4', '1', '2', '5', '1', '2', '1', '8',\n '9', '6', '8', '1', '1', '3', '7', '8', '6', '6', '4', '2',\n '2', '1', '1', '1', '9', '5', '8', '2', '2', '6', '1', '6',\n '4', '8', '5', '4', '1', '2', '1', '3', '1', '4', '1', '1',\n '3', '3', '2', '1', '2', '2', '3', '2', '1', '1', '1', '3',\n '1', '7', '4', '5', '4', '1', '1', '1', '1', '1', '7', '9',\n '1', '4', '4', '8',\n ... 
242 more items\n ],\n deaths: [\n '1', '1', '8', '9', '2', '3', '4', '1', '3', '5', '5', '1',\n '3', '4', '2', '5', '2', '7', '1', '1', '5', '1', '2', '2',\n '2', '9', '6', '1', '1', '2', '5', '3', '5', '1', '3', '3',\n '1', '3', '3', '4', '1', '1', '2', '4', '1', '2', '2', '1',\n '4', '4', '1', '3', '6', '5', '8', '1', '3', '2', '7', '1',\n '6', '8', '6', '3', '1', '2', '6', '4', '6', '8', '1', '1',\n '2', '3', '7', '1', '8', '2', '1', '6', '3', '3', '6', '2',\n '2', '2', '3', '3', '3', '2', '6', '3', '1', '3', '2', '1',\n '1', '4', '1', '1',\n ... 86 more items\n ],\n confirmed_size: 342,\n deaths_size: 186,\n benford: [\n { digit: 1, confirmed: 36.3, deaths: 32.8 },\n { digit: 2, confirmed: 16.4, deaths: 19.9 },\n { digit: 3, confirmed: 9.1, deaths: 14.5 },\n { digit: 4, confirmed: 8.8, deaths: 7.5 },\n { digit: 5, confirmed: 6.4, deaths: 6.5 },\n { digit: 6, confirmed: 9.6, deaths: 8.6 },\n { digit: 7, confirmed: 5.8, deaths: 3.8 },\n { digit: 8, confirmed: 5, deaths: 4.8 },\n { digit: 9, confirmed: 2.6, deaths: 1.6 }\n ]\n}\n```\n\n>\n>\n>Note: At this point, we don't need the arrays anymore. The target\n>document is almost there.\n>\n>\n\n### Stage 6: Introduce Fake Country BenfordTheory\n\nIn my final charts, I wanted to be able to also display the Bendord's\ntheoretical values, alongside the actual values from the different\ncountries to be able to spot easily which one is **potentially**\nproducing fake data (modulo the statistic noise and many other reasons).\n\nJust to give you an idea, it looks like, globally, all the countries are\nproducing legit data but some arrays are small and produce \"statistical\naccidents.\"\n\n:charts[]{url=\"https://charts.mongodb.com/charts-open-data-covid-19-zddgb\" id=\"5030cc1a-8318-40e0-91b0-b1c118dc719b\"}\n\nTo be able to insert this \"perfect\" document, I need to introduce in my\npipeline a fake and perfect country that has the perfect percentages. I\ndecided to name it \"BenfordTheory.\"\n\nBut (because there is always one), as far as I know, there is no stage\nthat can just let me insert a new document like this in my pipeline.\n\nSo close...\n\nLucky for me, I found a workaround to this problem with the new (since\n4.4)\n[$unionWith\nstage. All I have to do is insert my made-up document into a collection\nand I can \"insert\" all the documents from this collection into my\npipeline at this stage.\n\nI inserted my fake document into the new collection randomly named\n`benford`. Note that I made this document look like the documents at\nthis current stage in my pipeline. I didn't care to insert the two\narrays because I'm about to discard them anyway.\n\n``` json\n{\n _id: 'BenfordTheory',\n benford: \n { digit: 1, confirmed: 30.1, deaths: 30.1 },\n { digit: 2, confirmed: 17.6, deaths: 17.6 },\n { digit: 3, confirmed: 12.5, deaths: 12.5 },\n { digit: 4, confirmed: 9.7, deaths: 9.7 },\n { digit: 5, confirmed: 7.9, deaths: 7.9 },\n { digit: 6, confirmed: 6.7, deaths: 6.7 },\n { digit: 7, confirmed: 5.8, deaths: 5.8 },\n { digit: 8, confirmed: 5.1, deaths: 5.1 },\n { digit: 9, confirmed: 4.6, deaths: 4.6 }\n ],\n confirmed_size: 999999,\n deaths_size: 999999\n}\n```\n\nWith this new collection in place, all I need to do is `$unionWith` it.\n\n``` json\n{\n \"$unionWith\": {\n \"coll\": \"benford\"\n }\n}\n```\n\n### Stage 7: Final Projection\n\nAt this point, our documents look almost like the initial target\ndocument that we have set at the beginning of this blog post. 
Two\ndifferences though:\n\n- The name of the countries is in the `_id` key, not the `country`\n one.\n- The two arrays are still here.\n\nWe can fix this with a simple\n[$project\nstage.\n\n``` json\n{\n \"$project\": {\n \"country\": \"$_id\",\n \"_id\": 0,\n \"benford\": 1,\n \"confirmed_size\": 1,\n \"deaths_size\": 1\n }\n}\n```\n\n>\n>\n>Note that I chose which field should be here or not in the final\n>document by inclusion here. `_id` is an exception and needs to be\n>explicitly excluded. As the two arrays aren't explicitly included, they\n>are excluded by default, like any other field that would be there. See\n>considerations.\n>\n>\n\nHere is our final result:\n\n``` json\n{\n confirmed_size: 409,\n deaths_size: 378,\n benford: \n { digit: 1, confirmed: 32.8, deaths: 33.6 },\n { digit: 2, confirmed: 20.5, deaths: 13.8 },\n { digit: 3, confirmed: 15.9, deaths: 11.9 },\n { digit: 4, confirmed: 10.8, deaths: 11.6 },\n { digit: 5, confirmed: 5.9, deaths: 6.9 },\n { digit: 6, confirmed: 2.9, deaths: 7.7 },\n { digit: 7, confirmed: 4.4, deaths: 4.8 },\n { digit: 8, confirmed: 3.2, deaths: 5.6 },\n { digit: 9, confirmed: 3.7, deaths: 4.2 }\n ],\n country: 'Bulgaria'\n}\n```\n\nAnd please remember that some documents still look like this in the\npipeline because I didn't bother to filter them:\n\n``` json\n{\n confirmed_size: 2,\n deaths_size: 1,\n benford: [\n { digit: 1, confirmed: 0, deaths: 0 },\n { digit: 2, confirmed: 50, deaths: 100 },\n { digit: 3, confirmed: 0, deaths: 0 },\n { digit: 4, confirmed: 0, deaths: 0 },\n { digit: 5, confirmed: 0, deaths: 0 },\n { digit: 6, confirmed: 0, deaths: 0 },\n { digit: 7, confirmed: 50, deaths: 0 },\n { digit: 8, confirmed: 0, deaths: 0 },\n { digit: 9, confirmed: 0, deaths: 0 }\n ],\n country: 'MS Zaandam'\n}\n```\n\n## The Final Pipeline\n\nMy final pipeline is pretty long due to the fact that I'm repeating the\nsame block for each digit and each array for a total of 9\\*2=18 times.\n\nI wrote a factorised version in JavaScript that can be executed in\n[mongosh:\n\n``` js\nuse covid19;\n\nlet groupBy = {\n \"$group\": {\n \"_id\": \"$country\",\n \"confirmed\": {\n \"$push\": {\n \"$substr\": {\n \"$toString\": \"$confirmed_daily\"\n }, 0, 1]\n }\n },\n \"deaths\": {\n \"$push\": {\n \"$substr\": [{\n \"$toString\": \"$deaths_daily\"\n }, 0, 1]\n }\n }\n }\n};\n\nlet createConfirmedAndDeathsArrays = {\n \"$addFields\": {\n \"confirmed\": {\n \"$filter\": {\n \"input\": \"$confirmed\",\n \"as\": \"elem\",\n \"cond\": {\n \"$and\": [{\n \"$ne\": [\"$$elem\", \"0\"]\n }, {\n \"$ne\": [\"$$elem\", \"-\"]\n }]\n }\n }\n },\n \"deaths\": {\n \"$filter\": {\n \"input\": \"$deaths\",\n \"as\": \"elem\",\n \"cond\": {\n \"$and\": [{\n \"$ne\": [\"$$elem\", \"0\"]\n }, {\n \"$ne\": [\"$$elem\", \"-\"]\n }]\n }\n }\n }\n }\n};\n\nlet addArraySizes = {\n \"$addFields\": {\n \"confirmed_size\": {\n \"$size\": \"$confirmed\"\n },\n \"deaths_size\": {\n \"$size\": \"$deaths\"\n }\n }\n};\n\nlet removeCountriesWithoutConfirmedCasesAndDeaths = {\n \"$match\": {\n \"confirmed_size\": {\n \"$gt\": 0\n },\n \"deaths_size\": {\n \"$gt\": 0\n }\n }\n};\n\nfunction calculatePercentage(inputArray, digit, sizeArray) {\n return {\n \"$round\": [{\n \"$divide\": [{\n \"$multiply\": [100, {\n \"$size\": {\n \"$filter\": {\n \"input\": inputArray,\n \"as\": \"elem\",\n \"cond\": {\n \"$eq\": [\"$$elem\", digit]\n }\n }\n }\n }]\n }, sizeArray]\n }, 1]\n }\n}\n\nfunction calculatePercentageConfirmed(digit) {\n return calculatePercentage(\"$confirmed\", digit, 
\"$confirmed_size\");\n}\n\nfunction calculatePercentageDeaths(digit) {\n return calculatePercentage(\"$deaths\", digit, \"$deaths_size\");\n}\n\nlet calculateBenfordPercentagesConfirmedAndDeaths = {\n \"$addFields\": {\n \"benford\": [{\n \"digit\": 1,\n \"confirmed\": calculatePercentageConfirmed(\"1\"),\n \"deaths\": calculatePercentageDeaths(\"1\")\n }, {\n \"digit\": 2,\n \"confirmed\": calculatePercentageConfirmed(\"2\"),\n \"deaths\": calculatePercentageDeaths(\"2\")\n }, {\n \"digit\": 3,\n \"confirmed\": calculatePercentageConfirmed(\"3\"),\n \"deaths\": calculatePercentageDeaths(\"3\")\n }, {\n \"digit\": 4,\n \"confirmed\": calculatePercentageConfirmed(\"4\"),\n \"deaths\": calculatePercentageDeaths(\"4\")\n }, {\n \"digit\": 5,\n \"confirmed\": calculatePercentageConfirmed(\"5\"),\n \"deaths\": calculatePercentageDeaths(\"5\")\n }, {\n \"digit\": 6,\n \"confirmed\": calculatePercentageConfirmed(\"6\"),\n \"deaths\": calculatePercentageDeaths(\"6\")\n }, {\n \"digit\": 7,\n \"confirmed\": calculatePercentageConfirmed(\"7\"),\n \"deaths\": calculatePercentageDeaths(\"7\")\n }, {\n \"digit\": 8,\n \"confirmed\": calculatePercentageConfirmed(\"8\"),\n \"deaths\": calculatePercentageDeaths(\"8\")\n }, {\n \"digit\": 9,\n \"confirmed\": calculatePercentageConfirmed(\"9\"),\n \"deaths\": calculatePercentageDeaths(\"9\")\n }]\n }\n};\n\nlet unionBenfordTheoreticalValues = {\n \"$unionWith\": {\n \"coll\": \"benford\"\n }\n};\n\nlet finalProjection = {\n \"$project\": {\n \"country\": \"$_id\",\n \"_id\": 0,\n \"benford\": 1,\n \"confirmed_size\": 1,\n \"deaths_size\": 1\n }\n};\n\nlet pipeline = [groupBy,\n createConfirmedAndDeathsArrays,\n addArraySizes,\n removeCountriesWithoutConfirmedCasesAndDeaths,\n calculateBenfordPercentagesConfirmedAndDeaths,\n unionBenfordTheoreticalValues,\n finalProjection];\n\nlet cursor = db.countries_summary.aggregate(pipeline);\n\nprintjson(cursor.next());\n```\n\nIf you want to read the entire pipeline, it's available in [this github\nrepository.\n\nIf you want to see more visually how this pipeline works step by step,\nimport it in MongoDB Compass\nonce you are connected to the cluster (see the URI in the\nIntroduction). Use the `New Pipeline From Text` option in the\n`covid19.countries_summary` collection to import it.\n\n## An Even Better Pipeline?\n\nDid you think that this pipeline I just presented was *perfect*?\n\nWell well... It's definitely getting the job done, but we can make it\n*better* in many ways. I already mentioned in this blog post that we\ncould remove Stage 3, for example, if we wanted to. It might not be as\noptimal, but it would be shorter.\n\nAlso, there is still Stage 5, in which I literally copy and paste the\nsame piece of code 18 times... and Stage 6, where I have to use a\nworkaround to insert a document in my pipeline.\n\nAnother solution could be to rewrite this pipeline with a\n$facet\nstage and execute two sub-pipelines in parallel to compute the results\nwe want for the confirmed array and the deaths array. But this solution\nis actually about two times slower.\n\nHowever, my colleague John Page came up\nwith this pipeline that is just better than mine\nbecause it's applying more or less the same algorithm, but it's not\nrepeating itself. 
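To give you an idea of how that repetition can go away, here is a rough sketch of my own (not John's actual pipeline) that reuses the `calculatePercentage()` helper defined above and builds the whole `benford` array with a single expression:\n\n``` js\n// Sketch only: one $map over the nine digits instead of 18 copy-pasted blocks in Stage 5\nlet calculateBenfordWithMap = {\n  \"$addFields\": {\n    \"benford\": {\n      \"$map\": {\n        \"input\": [\"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"],\n        \"as\": \"digit\",\n        \"in\": {\n          \"digit\": { \"$toInt\": \"$$digit\" },\n          \"confirmed\": calculatePercentage(\"$confirmed\", \"$$digit\", \"$confirmed_size\"),\n          \"deaths\": calculatePercentage(\"$deaths\", \"$$digit\", \"$deaths_size\")\n        }\n      }\n    }\n  }\n};\n```\n\nJohn's pipeline applies the same idea. 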
The code is a *lot* cleaner and I just love it, so I\nthought I would also share it with you.\n\nJohn is using very smartly a\n$map\nstage to iterate over the nine digits which makes the code a lot simpler\nto maintain.\n\n## Wrap-Up\n\nIn this blog post, I tried my best to share with you the process of\ncreating a relatively complex aggregation pipeline and a few tricks to\ntransform as efficiently as possible your documents.\n\nWe talked about and used in a real pipeline the following aggregation\npipeline stages and operators:\n\n- $addFields.\n- $toString.\n- $substr.\n- $group.\n- $push.\n- $filter.\n- $size.\n- $multiply.\n- $divide.\n- $round.\n- $match.\n- $unionWith.\n- $project.\n- $facet.\n- $map.\n\nIf you are a statistician and you can make sense of these results,\nplease post a message on the Community\nForum and ping me!\n\nAlso, let me know if you can find out if some countries are clearly\ngenerating fake data.\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Using the MongoDB Aggregation Pipeline to apply Benford's law on the COVID-19 date set from Johns Hopkins University.", "contentType": "Article"}, "title": "Aggregation Pipeline: Applying Benford's Law to COVID-19 Data", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-jetpackcompose-emoji-android", "action": "created", "body": "# Building an Android Emoji Garden on Jetpack Compose with Realm\n\nAs an Android developer, have you wanted to get acquainted with Jetpack\nCompose and mobile architecture? Or maybe you have wanted to build an\napp end to end, with a hosted database? If yes, then this post is for\nyou!\n\nWe'll be building an app that shows data from a central shared database:\nMongoDB Realm. The app will reflect changes in the database in real-time on all devices that use it.\n\nImagine you're at a conference and you'd like to engage with the other\nattendees in a creative way. How about with emojis? \ud83d\ude0b What if the\nconference had an app with a field of emojis where each emoji represents\nan attendee? Together, they create a beautiful garden. I call this app\n*Emoji Garden*. I'll be showing you how to build such an app in this\npost.\n\nThis article is Part 1 of a two-parter where we'll just be building the\ncore app structure and establishing the connection to Realm and sharing\nour emojis between the database and the app. Adding and changing emojis\nfrom the app will be in Part 2.\n\nHere we see the app at first run. We'll be creating two screens:\n\n1. A **Login Screen**.\n2. An **Emoji Garden Screen** updated with emojis directly from the\nserver. It displays all the attendees of the conference as emojis.\n\nLooks like a lot of asynchronous code, doesn't it? As we know,\nasynchronous code is the bane of Android development. However, you\ngenerally can't avoid it for database and network operations. In our\napp, we store emojis in the local Realm database. The local database\nseamlessly syncs with a MongoDB Realm Sync server instance in the\nbackground. Are we going to need other libraries like RxJava or\nCoroutines? Nope, we won't. **In this article, we'll see how to get**\nRealm to do this all for you!\n\nIf you prefer Kotlin Flows with Coroutines, then don't worry. The Realm\nSDK can generate them for you. 
I'll show you how to do that too. Let's\nbegin!\n\nLet me tempt you with the tech for Emoji Garden!\n\n* Using Jetpack Compose to put together the UI.\n* Using ViewModels and MVVM effectively with Compose.\n* Using Coroutines\nand Realm functions to keep your UI updated.\n* Using anonymous logins in Realm.\n* Setting up a globally accessible MongoDB Atlas instance to sync to\nyour app's Realm database.\n\n## Prerequisites\n\nRemember that all of the code for the final app is available in the\nGitHub repo. If\nyou'd like to build Emoji Garden\ud83c\udf32 with me, you'll need the following:\n\n1. Android Studio, version\n\"*Arctic Fox (2020.3.1)*\" or later.\n2. A basic understanding of\nbuilding Android apps, like knowing what an Activity is and having tried a bit of Java or Kotlin coding.\n\nEmoji Garden shouldn't be the first Android app you've ever tried to\nbuild. However, it is a great intro into Realm and Jetpack Compose.\n\n> \ud83d\udca1 There's one prerequisite you'd need for anything you're doing and\n> that's a growth mindset \ud83c\udf31. It means you believe you can learn anything. I believe in you!\n\nEstimated time to complete: 2.5-3 hours\n\n## Create a New Compose Project\n\nOnce you've got the Android Studio\nCanary, you can fire up\nthe **New Project** menu and select Empty Compose Activity. Name your\napp \"Emoji Garden\" if you want the same name as mine.\n\n## Project Imports\n\nWe will be adding imports into two files:\n\n1. Into the app level build.gradle.\n2. Into the project level build.gradle.\n\nAt times, I may refer to functions, classes, or variables by putting\ntheir names in italics, like *EmojiClass*, so you can tell what's a\nvariable/constant/class and what isn't.\n\n### App Level build.gradle Imports\n\nFirst, the app level build.gradle. To open the app's build.gradle file,\ndouble-tap Shift in Android Studio and type \"build.gradle.\" **Select the**\none with \"app\" at the end and hit enter. Check out how build.gradle\nlooks in the finished sample\napp.\nYours doesn't need to look exactly like this yet. I'll tell you what to\nadd.\n\nIn the app level build.gradle, we are going to add a few dependencies,\nshown below. 
They go into the *dependencies* block:\n\n``` kotlin\n// For the viewModel function that imports them into activities\nimplementation 'androidx.activity:activity-ktx:1.3.0'\n\n// For the ViewModelScope if using Coroutines in the ViewModel\nimplementation 'androidx.lifecycle:lifecycle-viewmodel-ktx:2.2.0'\nimplementation 'androidx.lifecycle:lifecycle-runtime-ktx:2.3.1'\n```\n\n**After** adding them, your dependencies block should look like this.\nYou could copy and replace the entire block in your app.\n\n``` kotlin\ndependencies {\n\n implementation 'androidx.core:core-ktx:1.3.2'\n implementation 'androidx.appcompat:appcompat:1.2.0'\n implementation 'com.google.android.material:material:1.2.1'\n\n // For Jetpack Compose\n implementation \"androidx.compose.ui:ui:$compose_version\"\n implementation \"androidx.compose.material:material:$compose_version\"\n implementation \"androidx.compose.ui:ui-tooling:$compose_version\"\n\n // For the viewModel function that imports them into activities\n implementation 'androidx.activity:activity-ktx:1.3.0'\n\n // For the ViewModelScope if using Coroutines in the ViewModel\n implementation 'androidx.lifecycle:lifecycle-viewmodel-ktx:2.2.0'\n implementation 'androidx.lifecycle:lifecycle-runtime-ktx:2.3.1'\n\n testImplementation 'junit:junit:4.+'\n androidTestImplementation 'androidx.test.ext:junit:1.1.2'\n androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'\n}\n```\n\nIn the same file under *android* in the app level build.gradle, you\nshould have the *composeOptions* already. **Make sure the**\nkotlinCompilerVersion is at least 1.5.10. Compose needs this to\nfunction correctly.\n\n``` kotlin\ncomposeOptions {\n kotlinCompilerExtensionVersion compose_version\n kotlinCompilerVersion kotlin_ext\n}\n```\n\n### Project Level build.gradle Imports\n\nOpen the **project level** build.gradle file. Double-tap Shift in\nAndroid Studio -> type \"build.gradle\" and **look for the one with a dot**\nat the end. This is how it looks in the sample app.\nFollow along for steps.\n\nMake sure the compose version under buildscript is 1.x.x or greater.\n\n``` kotlin\nbuildscript {\n ext {\n compose_version = '1.0.0'\n kotlin_ext = '1.5.10'\n }\n```\n\nGreat! We're all done with imports. Remember to hit \"Sync Now\" at the\ntop right.\n\n## Overview of the Emoji Garden App\n\n### Folder Structure\n\n*com.example.emojigarden* is the directory where all the code for the\nEmoji Garden app resides. This directory is auto-generated from the app\nname when you create a project. The image shown below is an overview of\nall the classes in the finished app. It's what we'll have when we're\ndone with this article.\n\n## Building the Android App\n\nThe Emoji Garden app is divided into two parts: the UI and the logic.\n\n1. The UI displays the emoji garden.\n2. The logic (classes and functions) will update the emoji garden from\nthe server. This will keep the app in sync for all attendees.\n\n### Creating a New Source File\n\nLet's create a file named *EmojiTile* inside a source folder. If you're\nnot sure where the source folder is, here's how to find it. Hit the\nproject tab (**\u2318+1** on mac or **Ctrl+1** on Windows/Linux).\n\nOpen the app folder -> java -> *com.example.emojigarden* or your package name. Right click on *com.example.emojigarden* to create new files for source code. For this project, we will create all source files here. 
To see other strategies to organize code, see package-by-feature.\n\nType in the name of the class you want to make\u2014 *EmojiTile*, for\ninstance. Then hit Enter.\n\n### Write the Emoji Tile Class\n\nSince the garden is full of emojis, we need a class to represent the\nemojis. Let's make the *EmojiTile* class for this. Paste this in.\n\n``` kotlin\nclass EmojiTile {\n var emoji : String = \"\"\n}\n```\n\n### Let's Start with the Garden Screen\n\nHere's what the screen will look like. When the UI is ready, the Garden\nScreen will display a grid of beautiful emojis. We still have some work\nto do in setting everything up.\n\n#### The Garden UI Code\n\nLet's get started making that screen. We're going to throw away nearly\neverything in *MainActivity.kt* and write this code in its place.\n\nReach *MainActivity.kt* by **double-tapping Shift** and typing\n\"mainactivity.\" Any of those three results in the image below will take\nyou there.\n\nHere's what the file looks like before we've made any changes.\n\nNow, leave only the code below in *MainActivity.kt* apart from the\nimports. Notice how we've removed everything inside the *setContent*\nfunction except the MainActivityUi function. We haven't created it yet,\nso I've left it commented out. It's the last of the three sectioned UI\nbelow. The extra annotation (*@ExperimentalFoundationApi*) will be\nexplained shortly.\n\n``` kotlin\n@ExperimentalFoundationApi\nclass MainActivity : AppCompatActivity() {\n\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContent {\n // MainActivityUi(emptyList())\n\n }\n }\n}\n```\n\nThe UI code for the garden will be built up in three functions. Each\nrepresents one \"view.\"\n\n> \ud83d\udca1 We'll be using a handful of functions for UI instead of defining it\n> in the Android XML file. Compose uses only regular functions marked\n> @Composeable to define how the UI should look. Compose also features interactive Previews without even deploying to an emulator or device. \"Functions as UI\" make UIs designed in Jetpack Compose incredibly modular.\n\nThe functions are:\n\n1. *EmojiHolder*\n2. *EmojiGrid*\n3. *MainActivityUi*\n\nI'll show how to do previews right after the first function EmojiHolder.\nEach of the three functions will be written at the end of the\n*MainActivity.kt* file. That will put the functions outside the\n*MainActivity* class. **Compose functions are independent of classes.**\nThey'll be composed together like this:\n\n> \ud83d\udca1 Composing just means using inside something else\u2014like calling one\n> Jetpack Compose function inside another Jetpack Compose function.\n\n### Single Emoji Holder\n\nLet's start from the smallest bit of UI, the holder for a single emoji.\n\n``` kotlin\n@Composable\nfun EmojiHolder(emoji: EmojiTile) {\n Text(emoji.emoji)\n}\n```\n\nThe *EmojiHolder* function draws the emoji in a text box. The text\nfunction is part of Jetpack Compose. It's the equivalent of a TextView\nin the XML way making UI. It just needs to have some text handed to it.\nIn this case, the text comes from the *EmojiTile* class.\n\n### Previewing Your Code\n\nA great thing about Compose functions is that they can be previewed\nright inside Android Studio. Drop this function into *MainActivity.kt*\nat the end.\n\n``` kotlin\n@Preview\n@Composable\nfun EmojiPreview() {\n EmojiHolder(EmojiTile().apply { emoji = \"\ud83d\ude3c\" })\n}\n```\n\nYou'll see the image below! 
If the preview is too small, click it and\nhit **Ctrl+** or **\u2318+** to increase the size. If it's not, choose the\n\"Split View\" (the larger arrow below). It splits the screen between code\nand previews. Previews are only generated once you've changed the code and hit the build icon. To rebuild the code, hit the refresh icon (the\nsmaller green arrow below).\n\n### The EmojiGrid\n\nTo make the garden, we'll be using the *LazyVerticalGrid*, which is like\nRecyclerView in Compose. It only renders items that are visible, as opposed to those\nthat scroll offscreen. *LazyVerticalGrid* is a new class in Jetpack\nCompose version alpha9. Since it's experimental, it requires the\n*@ExperimentalFoundationApi* annotation. It's fun to play with though!\nCopy this into your project.\n\n``` kotlin\n@ExperimentalFoundationApi\n@Composable\nfun EmojiGrid(emojiList: List) {\n\n LazyVerticalGrid(cells = GridCells.Adaptive(20.dp)) {\n items(emojiList) { emojiTile ->\n EmojiHolder(emojiTile)\n }\n }\n}\n```\n\n### Garden Screen Container: MainActivityUI\n\nFinally, the EmojiGrid is centered in a full-width *Box*. *Box* itself\nis a compose function.\n\n> \ud83d\udca1 Since my app was named \"Emoji Garden,\" the auto-generated theme for it is EmojiGardenTheme. The theme name may be different for you. Type it in, if so.\n\nSince the *MainActivityUi* is composed of *EmojiGrid*, which uses the\n*@ExperimentalFoundationApi* annotation, *MainActivityUi* now has to use the same annotation.\n\n``` kotlin\n@ExperimentalFoundationApi\n@Composable\nfun MainActivityUi(emojiList: List) {\n EmojiGardenTheme {\n Box(\n Modifier.fillMaxWidth().padding(16.dp),\n contentAlignment = Alignment.Center\n ) {\n EmojiGrid(emojiList)\n }\n }\n}\n```\n\n### Previews\n\nTry previews for any of these! Here's a preview function for\n*MainActivityUI*. 
Preview functions should be in the same file as the\nfunctions they're trying to preview.\n\n``` kotlin\n@ExperimentalFoundationApi\n@Preview(showBackground = true)\n@Composable\nfun DefaultPreview() {\n MainActivityUi(List(102){ i -> EmojiTile().apply { emoji = emojisi] }})\n}\n\nval emojis = listOf(\"\ud83d\udc24\",\"\ud83d\udc26\",\"\ud83d\udc14\",\"\ud83e\udda4\",\"\ud83d\udd4a\",\"\ufe0f\",\"\ud83e\udd86\",\"\ud83e\udd85\",\"\ud83e\udeb6\",\"\ud83e\udda9\",\"\ud83d\udc25\",\"-\",\"\ud83d\udc23\",\"\ud83e\udd89\",\"\ud83e\udd9c\",\"\ud83e\udd9a\",\"\ud83d\udc27\",\"\ud83d\udc13\",\"\ud83e\udda2\",\"\ud83e\udd83\",\"\ud83e\udda1\",\"\ud83e\udd87\",\"\ud83d\udc3b\",\"\ud83e\uddab\",\"\ud83e\uddac\",\"\ud83d\udc08\",\"\u200d\",\"\u2b1b\",\"\ud83d\udc17\",\"\ud83d\udc2a\",\"\ud83d\udc08\",\"\ud83d\udc31\",\"\ud83d\udc3f\",\"\ufe0f\",\"\ud83d\udc04\",\"\ud83d\udc2e\",\"\ud83e\udd8c\",\"\ud83d\udc15\",\"\ud83d\udc36\",\"\ud83d\udc18\",\"\ud83d\udc11\",\"\ud83e\udd8a\",\"\ud83e\udd92\",\"\ud83d\udc10\",\"\ud83e\udd8d\",\"\ud83e\uddae\",\"\ud83d\udc39\",\"\ud83e\udd94\",\"\ud83e\udd9b\",\"\ud83d\udc0e\",\"\ud83d\udc34\",\"\ud83e\udd98\",\"\ud83d\udc28\",\"\ud83d\udc06\",\"\ud83e\udd81\",\"\ud83e\udd99\",\"\ud83e\udda3\",\"\ud83d\udc12\",\"\ud83d\udc35\",\"\ud83d\udc01\",\"\ud83d\udc2d\",\"\ud83e\udda7\",\"\ud83e\udda6\",\"\ud83d\udc02\",\"\ud83d\udc3c\",\"\ud83d\udc3e\",\"\ud83d\udc16\",\"\ud83d\udc37\",\"\ud83d\udc3d\",\"\ud83d\udc3b\",\"\u200d\",\"\u2744\",\"\ufe0f\",\"\ud83d\udc29\",\"\ud83d\udc07\",\"\ud83d\udc30\",\"\ud83e\udd9d\",\"\ud83d\udc0f\",\"\ud83d\udc00\",\"\ud83e\udd8f\",\"\ud83d\udc15\",\"\u200d\",\"\ud83e\uddba\",\"\ud83e\udda8\",\"\ud83e\udda5\",\"\ud83d\udc05\",\"\ud83d\udc2f\",\"\ud83d\udc2b\",\"-\",\"\ud83e\udd84\",\"\ud83d\udc03\",\"\ud83d\udc3a\",\"\ud83e\udd93\",\"\ud83d\udc33\",\"\ud83d\udc21\",\"\ud83d\udc2c\",\"\ud83d\udc1f\",\"\ud83d\udc19\",\"\ud83e\uddad\",\"\ud83e\udd88\",\"\ud83d\udc1a\",\"\ud83d\udc33\",\"\ud83d\udc20\",\"\ud83d\udc0b\",\"\ud83c\udf31\",\"\ud83c\udf35\",\"\ud83c\udf33\",\"\ud83c\udf32\",\"\ud83c\udf42\",\"\ud83c\udf40\",\"\ud83c\udf3f\",\"\ud83c\udf43\",\"\ud83c\udf41\",\"\ud83c\udf34\",\"\ud83e\udeb4\",\"\ud83c\udf31\",\"\u2618\",\"\ufe0f\",\"\ud83c\udf3e\",\"\ud83d\udc0a\",\"\ud83d\udc0a\",\"\ud83d\udc09\",\"\ud83d\udc32\",\"\ud83e\udd8e\",\"\ud83e\udd95\",\"\ud83d\udc0d\",\"\ud83e\udd96\",\"-\",\"\ud83d\udc22\")\n```\n\nHere's a preview generated by the code above. Remember to hit the build arrows if it doesn't show up.\n\nYou might notice that some of the emojis aren't showing up. That's\nbecause we haven't begun to use [EmojiCompat yet. We'll get to that in the next article.\n\n### Login Screen\n\nYou can use a Realm database locally without logging in. Syncing data\nrequires a user account. Let's take a look at the UI for login since\nwe'll need it soon. If you're following along, drop this into the\n*MainActivity.kt*, at the end of the file. The login screen is going to\nbe all of one button. Notice that the actual login function is passed\ninto the View. Later, we'll make a *ViewModel* named *LoginVm*. It will\nprovide the login function.\n\n``` kotlin\n@Composable\nfun LoginView(login : () -> Unit) {\n Column(modifier = Modifier.fillMaxWidth().padding(16.dp),\n verticalArrangement = Arrangement.Center,\n horizontalAlignment = Alignment.CenterHorizontally){\n\n Button(login){\n Text(\"Login\")\n }\n }\n}\n```\n\n## Set Up Realm Sync\n\nWe've built as much of the app as we can without Realm. 
Now it's time to enable storing our emojis locally. Then we can begin syncing them to\nyour own managed Realm instance in the cloud.\n\nNow we need to:\n\n1. Create a free MongoDB Atlas\naccount\n * Follow the link above to host your data in the cloud. The emojis\n in the garden will be synced to this database so they can be\n sent to all connecting mobile devices. Configure your Atlas\n account with the following steps:\n * Add your connection\n IP,\n so only someone with your IP can access the database.\n * Create a database\n user,\n so you have an admin user to run commands with. Note down the\n username and password you create here.\n2. Create a Realm App on the cloud account\n * Hit the Realm tab\n * \n * You're building a Mobile app for Android from scratch. How cool!\n Hit Start a New realm App.\n * \n * You can name your application anything you want. Even the\n default \"Application 0\" is fine.\n3. Turn on Anonymous\nauthentication \\- We don't want to make people wait around to authenticate with a\nusername and password. So, we'll just hand them a login button\nthat will perform an anonymous authentication. Follow the link\nin the title to turn it on.\n4. Enable Realm Sync\n * This will allow real-time data synchronization between mobile\n clients.\n * Go to https://cloud.mongodb.com and hit the Realm tab.\n * Click your application. It might have a different name.\n * \n * As in the image below, hit Sync (on the left) in the Realm tab.\n Then \"Define Data Models\" on the page that opens.\n * \n * Choose the default cluster. For the partition key, type in\n \"event\" and select a type of \"string\" for it. Under \"Define a\n database name,\" type in \"gardens.\" Hit \"Turn Dev Mode On\" at the\n bottom.\n * \n\n> \ud83d\udca1 For this use case, the \"partition key\" should be named \"event\" and be\n> of type \"String.\" We'll see why when we add the partition key to our\n> EmojiTile later. The partition key is a way to separate data within the\n> collection by when it's going to be used.\n\nFill in those details and hit \"Turn Dev Mode On.\" Now click \"Review and\nDeploy.\"\n\n## Integrating Realm into the App\n\n### Install the SDK\n\nInstall the Realm Android\nSDK\n\nFollow the link above to install the SDK. This provides Realm\nauthentication and database methods within the app. When they talk about\nadding \"apply plugin:\" just replace that with \"id,\" like in the image\nbelow:\n\n### Add Internet Permissions\n\nOpen the AndroidManifest.xml file by **double-tapping Shift** in Android\nStudio and typing in \"manifest.\"\n\nAdd the Internet permission to your Android Manifest above the\napplication tag.\n\n``` xml\n\n```\n\nThe file should start off like this after adding it:\n\n``` xml\n\n \n\n \ud83d\udca1 \"event\" might seem a strange name for a field for an emoji. Here,\n> it's the partition key. Emojis for a single garden will be assigned the\n> same partition key. Each instance of Realm on mobile can only be\n> configured to retrieve objects with one partition key.\n\n### Separating Your Concerns\n\nWe're going to need objects from the Realm Mobile SDK that give access\nto login and data functions. These will be abstracted into their own\nclass, called RealmModule.\n\nLater, I'll create a custom application class *EmojiGardenApplication*\nto instantiate *RealmModule*. 
This will make it easy to pass into the\n*ViewModels*.\n\n#### RealmModule\n\nGrab a copy of the RealmModule from the sample\nrepo.\nThis will handle Realm App initialization and connecting to a synced\ninstance for you. It also contains a method to log in. Copy/paste it\ninto the source folder. You might end up with duplicate *package*\ndeclarations. Delete the extra one, if so. Let's take a look at what's\nin *RealmModule*. Skip to the next section if you want to get right to using it.\n\n##### The Constructor and Class Variables\n\nThe init{ } block is like a Kotlin constructor. It'll run as soon as an\ninstance of the class is created. Realm.init is required for local or\nremote Realms. Then, a configuration is created from your appId as part\nof initialization, as seen\nhere. To get\na synced realm, we need to log in.\n\nWe'll need to hold onto the Realm App object for logins later, so it's a\nclass variable.\n\n``` kotlin\nprivate var syncedRealm: Realm? = null\nprivate val app : App\nprivate val TAG = RealmModule::class.java.simpleName\n\ninit {\n Realm.init(application)\n app = App(AppConfiguration.Builder(appId).build())\n\n // Login anonymously because a logged in user is required to open a synced realm.\n loginAnonSyncedRealm(\n onSuccess = {Log.d(TAG, \"Login successful\") },\n onFailure = {Log.d(TAG, \"Login Unsuccessful, are you connected to the net?\")}\n )\n}\n```\n\n##### The Login Function\n\nBefore you can add data to a synced Realm, you need to be logged in. You\nonly need to be online the first time you log in. Your credentials are\npreserved and data can be inserted offline after that.\n\nNote the partition key. Only objects with the same value for the\npartition key as specified here will be synced by this Realm instance.\nTo sync objects with different keys, you would need to create another\ninstance of Realm. Once login succeeds, the logged-in user object is\nused to instantiate the synced Realm.\n\n``` kotlin\nfun loginAnonSyncedRealm(partitionKey : String = \"default\", onSuccess : () -> Unit, onFailure : () -> Unit ) {\n\n val credentials = Credentials.anonymous()\n\n app.loginAsync(credentials) { loginResult ->\n Log.d(\"RealmModule\", \"logged in: $loginResult, error? : ${loginResult.error}\")\n if (loginResult.isSuccess) {\n instantiateSyncedRealm(loginResult.get(), partitionKey)\n onSuccess()\n } else {\n onFailure()\n }\n }\n\n}\n\nprivate fun instantiateSyncedRealm(user: User?, partition : String) {\n val config: SyncConfiguration = SyncConfiguration.defaultConfig(user, partition)\n syncedRealm = Realm.getInstance(config)\n}\n```\n\n##### Initialize the Realm Schema\n\nPart of the setup of Realm is telling the server a little about the data\ntypes it can expect. This is only important for statically typed\nprogramming languages like Kotlin, which would refuse to sync objects\nthat it can't cast into expected types.\n\n> \ud83d\udca1 There are a few ways to do this:\n> \n> \n> 1. Manually code the schema as a JSON schema document.\n> 2. Let Realm generate the schema from what's stored in the database already.\n> 3. Let Realm figure out the schema from the documents at the time they're pushed into the db from the mobile app.\n> \n> \n> We'll be doing #3.\n\nIf you're wondering where the single soil emoji comes from when you log\nin, it's from this function. It will be called behind the scenes (in\n*LoginVm*) to set up the schema for the *EmojiTile* collection. 
Later,\nwhen we add emojis from the server, it'll have stronger guarantees about\nwhat types it contains.\n\n``` kotlin\nfun initializeCollectionIfEmpty() {\n syncedRealm?.executeTransactionAsync { realm ->\n if (realm.where(EmojiTile::class.java).count() == 0L) {\n realm.insert(EmojiTile().apply {\n emoji = \"\ud83d\udfeb\"\n })\n }\n }\n}\n```\n\n##### Minor Functions\n\n*getSyncedRealm* Required to work around the fact that *syncedRealm*\nmust be nullable internally. The internal nullability is used to figure\nout whether it's initialized. When it's retrieved externally, we'd\nalways expect it to be available and so we throw an exception if it\nisn't.\n\n``` kotlin\nfun isInitialized() = syncedRealm != null\n\nfun getSyncedRealm() : Realm = syncedRealm ?: throw IllegalStateException(\"loginAnonSyncedRealm has to return onSuccess first\")\n```\n\n### EmojiGarden Custom Application\n\nCreate a custom application class for the Emoji Garden app which will\ninstantiate the *RealmModule*.\n\nRemember to add your appId to the appId variable. You could name the new\nclass *EmojiGardenApplication*.\n\n``` kotlin\nclass EmojiGardenApplication : Application() {\n lateinit var realmModule : RealmModule\n\n override fun onCreate() {\n super.onCreate()\n\n // Get your appId from https://realm.mongodb.com/ for the database you created under there.\n val appId = \"your appId here\"\n realmModule = RealmModule(this, appId)\n }\n}\n```\n\n## ViewModels\n\nViewModels hold the logic and data for the UI. There will be one\nViewModel each for the Login and Garden UIs.\n\n### Login ViewModel\n\nWhat the *LoginVm* does:\n\n1. An anonymous login.\n2. Initializing the MongoDB Realm Schema.\n\nCopy *LoginVm*'s complete code from\nhere.\n\nHere's how the *LoginVm* works:\n\n1. Retrieve an instance of the RealmModule from the custom application.\n2. Once login succeeds, it adds initial data (like a \ud83d\udfeb emoji) to the\ndatabase to initialize the Realm schema.\n\n> \ud83d\udca1 Initializing the Realm schema is only required right now because the\n> app doesn't provide a way to choose and insert your emojis. At least one\n> inserted emoji is required for Realm Sync to figure out what kind of\n> data will be synced. When the app is written to handle inserts by\n> itself, this can be removed.\n\n*showGarden* will be used to \"switch\" between whether the Login screen\nor the Garden screen should be shown. This will be covered in more\ndetail later. It is marked\n\"*private set*\" so that it can't be modified from outside *LoginVm*.\n\n``` kotlin\nvar showGarden : Boolean by mutableStateOf(getApplication().realmModule.isInitialized())\n private set\n```\n\n*initializeData* will insert a sample emoji into Realm Sync. When it's\ndone, it will signal for the garden to be shown. We're going to call\nthis after *login*.\n\n``` kotlin\nprivate fun initializeData() {\n getApplication().realmModule.initializeCollectionIfEmpty()\n showGarden = true\n}\n```\n\n*login* calls the equivalent function in *RealmModule* as seen earlier.\nIf it succeeds, it initializes the data. Failures are only logged, but\nyou could do anything with them.\n\n``` kotlin\nfun login() = getApplication().realmModule.loginAnonSyncedRealm(\n onSuccess = ::initializeData,\n onFailure = { Log.d(TAG, \"Failed to login\") }\n )\n```\n\nYou can now modify *MainActivity.kt* to display and use Login. You might\nneed to *import* the *viewModel* function. 
Android Studio will give you\nthat option.\n\n``` kotlin\n@ExperimentalFoundationApi\nclass MainActivity : AppCompatActivity() {\n\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContent {\n val loginVm : LoginVm = viewModel()\n if(!loginVm.showGarden) {\n LoginView(loginVm::login)\n }\n }\n }\n}\n```\n\nOnce you've hit login, the button will disappear, leaving you a blank\nscreen. Let's understand what happened and get to work on the garden\nscreen, which should appear instead.\n\n> \ud83d\udca1 If you get an error like \"Caused by: java.lang.ClassCastException:\n> android.app.Application cannot be cast to EmojiGardenApplication at\n> com.example.emojigarden.LoginVm.\\(LoginVm.kt:20),\" then you might\n> have forgotten to add the EmojiGardenApplication to the name attribute in the manifest.\n\n### What Initialization Did\n\nHere's how you can verify what happened because of the initialization.\nBefore logging in and sending the first *EmojiTile*, you could go look\nup your data's schema by going to https://cloud.mongodb.com in the\nRealm tab. Click Schema on the options on the left and you'd see this:\n\nMongoDB Realm Sync has inferred the data types in EmojiTile when the\nfirst EmojiTile was pushed up. Here's what that section says now\ninstead:\n\nIf we had inserted data on the server side prior to this, it would've\ndefaulted the *index* field type to *Double* instead. The Realm SDK\nwould not have been able to coerce it on mobile, and sync would've\nfailed.\n\n### The Garden ViewModel\n\nThe UI code is only going to render data that is given to them by the\nViewModels, which is why if you run the app without previews, everything\nhas been blank so far.\n\nAs a refresher, we're using the MVVM architecture, and we'll be using Android ViewModels.\nThe ViewModels that we'll be using are custom classes that extend the\nViewModel class. They implement their own methods to retrieve and hold\nonto data that UI should render. In this case, that's the EmojiTile\nobjects that we'll be loading from the MongoDB Realm Sync server.\n\nI'm going to demonstrate two ways to do this:\n\n1. With Realm alone handling the asynchronous data retrieval via Realm\nSDK functions. In the class EmojiVmRealm.\n2. With Kotlin Coroutines Flow handling the data being updated\nasynchronously, but with Realm still providing the data. In the\nclass EmojiVmFlow.\n\nEither way is fine. You can pick whichever way suits you. You could even\nswap between them by changing a single line of code. If you would like\nto avoid any asynchronous handling of data by yourself, use\nEmojiVmRealm and let Realm do all the heavy lifting!\nIf you are already using Kotlin Flows, and would like to use that model\nof handling asynchronous operations, use EmojiVmFlow.\n\n###### Here's what's common to both ViewModels.\n\nTake a look at the code of EmojiVmRealm and EmojiVmFlow side by side.\n\nHere's how they work:\n\n1. The *emojiState* variable is observed by Compose since it's created via the mutableStateOf. It allows Jetpack Compose to observe and react to values when they change to redraw the UI. Both ViewModels will get data from the Realm database and update the emojiState variable with it. This separates the code for how the UI is rendered from how the data for it is retrieved.\n2. The ViewModel is set up as an AndroidViewModel to allow it to receive an Application object.\n3. Since Application is accessible from it, the RealmModule can be pulled in.\n4. 
RealmModule was instantiated in the custom application so that it could be passed to any ViewModel in the app.\n * We get the Realm database instance from the RealmModule via getSyncedRealm.\n * Searching for EmojiTile objects is as simple as calling where(EmojiTile::class.java).\n * Calling .sort on the results of where sorts them by their index in ascending order.\n * They're requested asynchronously with findAllAsync, so the entire operation runs in a background thread.\n\n### EmojiVmRealm\n\nEmojiVmRealm is a class that extends\nViewModel.\nTake a look at the complete code\nand copy it into your source folder. It provides logic operations and updates data to the Jetpack Compose UI. It uses standard Realm SDK functionality to asynchronously load up the emojis and order them for display.\n\nApart from what the two ViewModels have in common, here's how this class works:\n\n#### Realm Change Listeners\n\nA change listener watches for changes in the database. These changes might come from other people setting their emojis in their own apps.\n\n``` kotlin\nprivate val emojiTilesResults : RealmResults = getApplication().realmModule\n .getSyncedRealm()\n .where(EmojiTile::class.java)\n .sort(EmojiTile::index.name)\n .findAllAsync()\n .apply {\n addChangeListener(emojiTilesChangeListener)\n }\n```\n\n> \ud83d\udca1 The Realm change listener is at the heart of reactive programming with Realm.\n\n``` kotlin\nprivate val emojiTilesChangeListener =\n OrderedRealmCollectionChangeListener> { updatedResults, _ ->\n emojiState = updatedResults.freeze()\n }\n```\n\nThe change listener function defines what happens when a change is\ndetected in the database. Here, the listener operates on any collection\nof *EmojiTiles* as can be seen from its type parameter of\n*RealmResults\\*. In this case, when changes are detected, the\n*emojiState* variable is reassigned with \"frozen\" results.\n\nThe freeze function is part of the Realm SDK and makes the object\nimmutable. It's being used here to avoid issues when items are deleted\nfrom the server. A delete would invalidate the Realm object, and if that\nobject was providing data to the UI at the time, it could lead to\ncrashes if it wasn't frozen.\n\n#### MutableState: emojiState\n\n``` kotlin\nimport androidx.compose.runtime.getValue\nimport androidx.compose.runtime.neverEqualPolicy\nimport androidx.compose.runtime.setValue\n\n var emojiState : List by mutableStateOf(listOf(), neverEqualPolicy())\n private set\n```\n\n*emojiState* is a *mutableStateOf* which Compose can observe for\nchanges. It's been assigned a *private set*, which means that its value\ncan only be set from inside *EmojiVmRealm* for code separation. When a\nchange is detected, the *emojiState* variable is updated with results.\nThe changeset isn't required so it's marked \"\\_\".\n\n*neverEqualPolicy* needs to be specified since Mutable State's default\nstructural equality check doesn't see a difference between updated\n*RealmResults*. *neverEqualPolicy* is then required to make it update. I\nspecify the imports here because sometimes you'd get an error if you\ndidn't specifically import them.\n\n``` kotlin\nprivate val emojiTilesChangeListener =\n OrderedRealmCollectionChangeListener> { updatedResults, _ ->\n emojiState = updatedResults.freeze()\n}\n```\n\nChange listeners have to be released when the ViewModel is being\ndisposed. 
Any resources in a ViewModel that are meant to be released\nwhen it's being disposed should be in onCleared.\n\n``` kotlin\noverride fun onCleared() {\n super.onCleared()\n emojiTilesResults.removeAllChangeListeners()\n}\n```\n\n### EmojiVmFlow\n\n*EmojiVmFlow* offloads some asynchronous operations to Kotlin Flows while still retrieving data from Realm. Take a look at it in the sample repo here, and copy it to your app.\n\nApart from what the two ViewModels have in common, here's what this VM does:\n\nThe toFlow operator from the Realm SDK automatically retrieves the list of emojis when they're updated on the server.\n\n``` kotlin\nprivate val _emojiTiles : Flow> = getApplication().realmModule\n .getSyncedRealm()\n .where(EmojiTile::class.java)\n .sort(EmojiTile::index.name)\n .findAllAsync()\n .toFlow()\n```\n\nThe flow is launched in\nviewModelScope\nto tie it to the ViewModel lifecycle. Once collected, each emitted list\nis stored in the emojiState variable.\n\n``` kotlin\ninit {\n viewModelScope.launch {\n _emojiTiles.collect {\n emojiState = it\n }\n }\n}\n```\n\nSince *viewModelScope* is a built-in library scope that's cleared when\nthe ViewModel is shut down, we don't need to bother with disposing of\nit.\n\n## Switching UI Between Login and Gardens\n\nAs we put both the screens together in the view for the actual Activity,\nhere's what we're trying to do:\n\nFirst, connect the *LoginVm* to the view and check if the user is\nauthenticated. Then:\n\n* If authenticated, show the garden.\n* If not authenticated, show the login view.\n* This is done via *if(loginVm.showGarden)*.\n\nTake a look at the entire\nactivity\nin the repo. The only change we'll be making is in the *onCreate*\nfunction. In fact, only the *setContent* function is modified to\nselectively show either the Login or the Garden Screen\n(*MainActivityUi*). It also connects the ViewModels to the Garden Screen\nnow.\n\nThe *LoginVm* internally maintains whether to *showGarden* or not based\non whether the login succeeded. If this succeeds, the garden screen\n*MainActivityUI* is instantiated with its own ViewModel, supplying the\nemojis it gathers from Realm. If the login hasn't happened, it shows the\nlogin view.\n\n> \ud83d\udca1 The code below uses EmojiVmRealm. If you were using EmojiVmFlow, just type in EmojiVmFlow instead. Everything will just work.\n\n``` kotlin\n@ExperimentalFoundationApi\nclass MainActivity : AppCompatActivity() {\n\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContent {\n val loginVm : LoginVm = viewModel()\n\n if(loginVm.showGarden){\n val model : EmojiVmRealm = viewModel()\n MainActivityUi(model.emojiState)\n } else\n {\n LoginView(loginVm::login)\n }\n\n }\n }\n}\n```\n\n## Tending the Garden Remotely\n\nHere's what you'll have on your app once you're all logged in and the\ngarden screen is hooked up too: a lone \ud83d\udfeb emoji on a vast, empty screen.\n\nLet's move to the server to add some more emojis and let the server\nhandle sending them back to the app! Every user of the app will see the\nsame list of emojis. I'll show how to insert the emojis from the web\nconsole.\n\nOpen up https://cloud.mongodb.com again. Hit *collections*. *Insert*\ndocument will appear at the middle right. Then hit *insert document*.\n\nHit the curly braces so you can copy/paste\nthis\nhuge pile of emojis into it.\n\nYou'll have all these emojis we just added to the server pop up on the\ndevice. Enjoy your critters!\n\nFeel free to play around with the console. 
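If you'd rather add emojis one at a time instead of pasting the whole list, each document just needs to follow the shape the app has already synced. The field names below come from the EmojiTile fields referenced in this article (the emoji string, the index used for sorting, and the event partition key); the values are only illustrative:

``` json
{
  "emoji": "🌱",
  "index": 0,
  "event": "default"
}
```

The "event" value must match the partition your app logged in with ("default" in the code shown earlier), and "index" should be inserted as an integer type rather than a Double so it matches the schema inferred from the app, for the type-coercion reason described above. The console generates the "_id" for you.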
Change the emojis in\ncollections by double-clicking them.\n\n## Summary\n\nThis has been a walkthrough for how to build an Android app that\neffectively uses Compose and Realm together with the latest techniques\nto build reactive apps with very little code.\n\nIn this article, we've covered:\n\n* Using the MVVM architectural pattern with Jetpack Compose.\n* Setting up MongoDB Realm.\n* Using Realm in ViewModels.\n* Using Realm to Kotlin Flows in ViewModels.\n* Using anonymous authentication in Realm.\n* Building Conditional UIs with Jetpack Compose.\n\nThere's a lot here to add to any of your projects. Feel free to use any\nparts of this walkthrough or use the whole thing! I hope you've gotten\nto see what MongoDB Realm can do for your mobile apps!\n\n## What's Next?\n\nIn Part 2, I'll get to best practises for dealing with emojis using\nEmojiCompat.\nI'll also get into how to change the emojis from the device itself and\nadd some personalization that will enhance the app's functionality. In\naddition, we'll also have to add some \"rules\" to handle use cases\u2014for\nexample, users can only alter unclaimed \"soil\" tiles and handle conflict\nresolution when two users try to claim the same tile simultaneously.\nWhat happens when two people pick the same tiles at nearly the same\ntime? Who gets to keep it? How can we avoid pranksters changing our own\nemojis? These questions and more will be answered in Part 2.\n\n## References\n\nHere's some additional reading if you'd like to learn more about what we\ndid in this article.\n\n1. The official docs on Compose layout are incredible to see Compose's flexibility.\n2. The codelabs teach this method of handling state.\n3. All the code for this project.\n4. Also, thanks to Monica Dinculescu for coming up with the idea for the garden on the web. This is an adaptation of her ideas.\n\n> If you have questions, please head to our developer community\n> website where the MongoDB engineers and\n> the MongoDB community will help you build your next big idea with\n> MongoDB.", "format": "md", "metadata": {"tags": ["Realm", "Android", "Jetpack Compose", "Mobile"], "pageDescription": "Dive into: Compose architecture A globally synced Emoji garden Reactivity like you've never seen before!", "contentType": "Tutorial"}, "title": "Building an Android Emoji Garden on Jetpack Compose with Realm", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-database-cascading-deletes", "action": "created", "body": "# Realm SDKs 10.0: Cascading Deletes, ObjectIds, Decimal128, and more\n\nThe Realm SDK 10.0 is now Generally Available with new capabilities such as Cascading Deletes and new types like Decimal128.\n\n## Release Highlights\n\nWe're happy to announce that as of this month, the new Realm Mobile Database 10.0 is now Generally Available for our Java/Kotlin, Swift/Obj-C, and JavaScript SDKs.\n\nThis is the culmination of all our hard work for the last year and lays a new foundation for Realm. With Realm 10.0, we've increased the stability of the database and improved performance. We've responded to the Realm Community's feedback and built key new features, like cascading deletes, to make it simpler to implement changes and maintain data integrity. We've also added new data types.\n\nRealm .NET is now released as a feature-complete beta. And, we're promoting the Realm Web library to 1.0, replacing the MongoDB Stitch Browser SDK. 
Realm Studio is also getting released as 10.0 as a local Realm Database viewer for the 10.0 version of the SDKs.\n\nWith this release, the Realm SDKs also support all functionality unlocked by MongoDB Realm. You can call a serverless function from your mobile app, making it simple to build a feature like sending a notification via Twilio. Or, you could use triggers to call a Square API once an Order object has been synced to MongoDB Realm. Realm's Functions and Triggers speed up your development and reduce the code you need to write as well as having to stand up and maintain web servers to wait for these requests. And you now have full access to all of MongoDB Realm's built-in authentication providers, including the ability to call your own custom logic.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n\n## Cascading Deletes\n\nWe're excited to announce that one of our most requested features - cascading deletes - is now available. Previous versions of Realm put the burden of cascading deletes on you as the developer. Now, we're glad to be reducing the complexity and amount of code you need to write.\n\nIf you're a Realm user, you know that object relationships are a key part of the Realm Database. Realm doesn't impose restrictions on how you can link your objects, no matter how complex relationships become. Realm allows one-to-one, one-to-many, many-to-many, and backlinks. Realm stores relationships by reference, so even when you end up with a complicated object graph, Realm delivers incredibly fast lookups by traversing pointers.\n\nBut historically, some use cases prevented us from delivering cascading deletes. For instance, you might have wanted to delete a Customer object but still keep a record of all of the Order objects they placed over the years. The Realm SDKs wouldn't know if a parent-child object relationship had strict ownership to safely allow for cascading deletes.\n\nIn this release, we've made cascading deletes possible by introducing a new type of object that we're calling Embedded Objects. With Embedded Objects, you can convey ownership to whichever object creates a link to the embedded object. Using embedded object references gives you the ability to delete all objects that are linked to the parent upon deletion.\n\nImagine you have a BusRoute object that has a list of BusStop embedded objects, and a BusDriver object who is assigned to the route. You want to delete BusRoute and automatically delete only the BusStop objects, without deleting the BusDriver object, because he still works for the company and can drive other routes. Here's what it looks like: When you delete the BusRoute, the Realm SDK will automatically delete all BusStops. For the BusDriver objects you don't want deleted, you use a regular object reference. 
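In Kotlin, a minimal sketch of that bus model might look like the following. This is purely illustrative — the class and field names aren't from the release itself, but the pattern is the same one shown in the Contact/Address example below:

``` Kotlin
import io.realm.RealmList
import io.realm.RealmObject
import io.realm.annotations.PrimaryKey
import io.realm.annotations.RealmClass
import org.bson.types.ObjectId

// Embedded: owned by whatever links to it, so it's deleted along with its parent.
@RealmClass(embedded = true)
open class BusStop(
    var name: String = ""
) : RealmObject()

// A regular top-level object with its own primary key.
open class BusDriver(
    @PrimaryKey var _id: ObjectId = ObjectId(),
    var name: String = ""
) : RealmObject()

open class BusRoute(
    @PrimaryKey var _id: ObjectId = ObjectId(),
    // Embedded objects: deleting the route also deletes these stops.
    var stops: RealmList<BusStop> = RealmList(),
    // Plain object reference: deleting the route leaves the driver untouched.
    var driver: BusDriver? = null
) : RealmObject()
```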
Your BusDriver objects will not be automatically deleted and can drive other routes.\n\nThe Realm team is proud to say that we've heard you, and we hope that you give this feature a try to simplify your code and improve your development experience.\n\n::::tabs\n:::tab]{tabid=\"Swift\"}\n``` Swift\n// Define an object with one embedded object\n\nclass Contact: Object {\n @objc dynamic var _id = ObjectId.generate()\n @objc dynamic var name = \"\"\n\n // Embed a single object.\n // Embedded object properties must be marked optional. \n @objc dynamic var address: Address? = nil\n\n override static func primaryKey() -> String? {\n return \"_id\"\n }\n\n convenience init(name: String, address: Address) {\n self.init()\n self.name = name\n self.address = address\n } \n }\n\n// Define an embedded object\nclass Address: EmbeddedObject {\n @objc dynamic var street: String? = nil\n @objc dynamic var city: String? = nil\n @objc dynamic var country: String? = nil\n @objc dynamic var postalCode: String? = nil\n}\n\nlet sanFranciscoContact = realm.objects(Contact.self)\nguard let sanFranciscoContact = realm.objects(Contact.self)\n .filter(\"address.city = %@\", \"San Francisco\")\n .sorted(byKeyPath: \"address.street\")\n .first,\nlet sanFranciscoAddress = sanFranciscoContact.address else {\n print(\"Could not find San Francisco Contact!\")\n return\n}\n\n// prints City: San Francisco\nprint(\"City: \\(sanFranciscoAddress.city ?? \"nil\")\")\n\ntry! realm.write {\n // Delete the instance from the realm.\n realm.delete(sanFranciscoContact)\n}\n\n// now the embedded Address will be invalid.\n// prints Is Invalidated: true\nprint(\"Is invalidated: \\(sanFranciscoAddress.isInvalidated)\")\n```\n:::\n:::tab[]{tabid=\"Kotlin\"}\n``` Kotlin\n// Define an object containing one embedded object\nopen class Contact(\n @RealmField(\"_id\")\n @PrimaryKey\n var id: ObjectId = ObjectId(),\n var name: String = \"\",\n // Embed a single object.\n // Embedded object properties must be marked optional\n var address: Address? = null) : RealmObject() {}\n\n// Define an embedded object\n@RealmClass(embedded = true)\nopen class Address(\n var street: String? = null,\n var city: String? = null,\n var country: String? = null,\n var postalCode: String? 
= null\n): RealmObject() {}\n\n// insert some data\nrealm.executeTransaction {\n val contact = it.createObject()\n val address = it.createEmbeddedObject(contact, \"address\")\n address.city = \"San Francisco\"\n address.street = \"495 3rd St\"\n contact.name = \"Ian\"\n}\nval sanFranciscoContact = realm.where()\n .equalTo(\"address.city\", \"San Francisco\")\n .sort(\"address.street\").findFirst()\n\nLog.v(\"EXAMPLE\", \"City: ${sanFranciscoContact?.address?.city}\")\n// prints San Francisco\n\n// Get a contact to delete which satisfied the previous query\nval contact = realm.where()\n .equalTo(\"name\", \"Ian\").findFirst()\n\nLog.v(\"EXAMPLE\", \"IAN = : ${contact?.name}\")\n\nrealm.executeTransaction {\n // Delete the contact instance from its realm.\n contact?.deleteFromRealm()\n}\n// now lets print an address query\nLog.v(\"EXAMPLE\", \"Number of addresses: ${realm.where().count()}\") // == 0\nif (BuildConfig.DEBUG && sanFranciscoContact?.isValid != false) {\n error(\"Assertion failed\") \n} \nLog.v(\"EXAMPLE\", \"sanFranciscoContact is valid: ${sanFranciscoContact?.address?.isValid}\") // false\n```\n:::\n:::tab[]{tabid=\"Javascript\"}\n``` js\nconst ContactSchema = {\nname: \"Contact\",\nprimaryKey: \"_id\",\nproperties: {\n _id: \"objectId\",\n name: \"string\",\n address: \"Address\", // Embed a single object\n},\n};\n\nconst AddressSchema = {\nname: \"Address\",\nembedded: true, // default: false\nproperties: {\n street: \"string?\",\n city: \"string?\",\n country: \"string?\",\n postalCode: \"string?\",\n},\n};\n\nconst sanFranciscoContact = realm.objects(\"Contact\")\n .filtered(\"address.city = 'San Francisco'\")\n .sorted(\"address.street\");\n\nlet ianContact = sanFranciscoContacts[0];\nconsole.log(ianContact.address.city); // prints San Francisco\n\nrealm.write(() => {\n// Delete ian from the realm.\n\n realm.delete(ianContact);\n});\n\n//now lets see print the same query returns - \nconsole.log(ianContact.address.city);\n\n// address returns null\n```\n:::\n:::tab[]{tabid=\".NET\"}\n``` csharp\npublic class Contact : RealmObject\n{\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(\"name\")]\n public string Name { get; set; }\n\n // Embed a single object.\n [MapTo(\"address\")]\n public Address Address { get; set; }\n}\n\npublic class Address : EmbeddedObject\n{\n [MapTo(\"street\")]\n public string Street { get; set; }\n\n [MapTo(\"city\")]\n public string City { get; set; }\n\n [MapTo(\"country\")]\n public string Country { get; set; }\n\n [MapTo(\"postalCode\")]\n public string PostalCode { get; set; }\n}\n\nvar sanFranciscoContact = realm.All()\n .Filter(\"Contact.Address.City == 'San Francisco'\").\n .OrderBy(c => c.Address.Street)\n .First();\n\n// Prints Ian\nConsole.WriteLine(sanFranciscoContact.Name);\n\nvar iansAddress = sanFranciscoContact.Address;\n\n// Prints San Francisco\nConsole.WriteLine(iansAddress.City);\n\n// Delete an object with a transaction\nrealm.Write(() =>\n{\n realm.Remove(sanFranciscoContact);\n});\n\n// Prints false - since the parent object was deleted, the embedded address\n// was removed too.\nConsole.WriteLine(iansAddress.IsValid);\n\n// This will throw an exception because the object no longer belongs\n// to the Realm.\n// Console.WriteLine(iansAddress.City);\n```\n:::\n::::\n\nWant to try it out? 
Head over to our docs page for your respective SDK and take it for a spin!\n\n- [iOS SDK\n- Android SDK\n- React Native SDK\n- Node.js SDK\n- .NET SDK\n\n## ObjectIds\n\nObjectIds are a new type introduced to the Realm SDKs, used to provide uniqueness between objects. Previously, you would need to create your own unique identifier, using a function you wrote or imported. You'd then cast it to a string or some other Realm primitive type. Now, ObjectIds save space by being smaller, making it easier to work with your data.\n\nAn ObjectId is a 12-byte hexadecimal value that follows this order:\n\n- A 4-byte timestamp value, representing the ObjectId's creation, measured in seconds since the Unix epoch\n- A 5-byte random value\n- A 3-byte incrementing counter, initialized to a random value\n\nBecause of the way ObjectIds are generated - with a timestamp value in the first 4 bytes - you can sort them by time using the ObjectId field. You no longer need to create another timestamp field for ordering. ObjectIDs are also smaller than the string representation of UUID. A UUID string column will take 36 bytes, whereas an ObjectId is only 12.\n\nThe Realm SDKs contain a built-in method to automatically generate an ObjectId.\n\n::::tabs\n:::tab]{tabid=\"Swift\"}\n``` Swift\nclass Task: Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n @objc dynamic var _partition: ProjectId? = nil\n @objc dynamic var name = \"\"\n\noverride static func primaryKey() -> String? {\n return \"_id\"\n}\n\nconvenience init(partition: String, name: String) {\n self.init()\n self._partition = partition\n self.name = name\n }\n}\n```\n:::\n:::tab[]{tabid=\"Kotlin\"}\n``` Kotlin\nopen class Task(\n @PrimaryKey var _id: ObjectId = ObjectId(),\n var name: String = \"Task\", \n _partition: String = \"My Project\") : RealmObject() {}\n```\n:::\n:::tab[]{tabid=\"Javascript\"}\n``` js\nconst TaskSchema = {\n name: \"Task\",\n properties: {\n _id: \"objectId\",\n _partition: \"string?\",\n name: \"string\",\n },\n primaryKey: \"_id\",\n};\n```\n:::\n:::tab[]{tabid=\".NET\"}\n``` csharp\npublic class Task : RealmObject\n{\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(\"_partition\")]\n public string Partition { get; set; }\n\n [MapTo(\"name\")]\n public string Name { get; set; }\n\n}\n```\n:::\n::::\n\nTake a look at our documentation on Realm models by going here:\n\n- [iOS SDK\n- Android SDK\n- React Native SDK\n- Node.js SDK\n- .NET SDK\n\n## Decimal128\n\nWe're also introducing Decimal128 as a new type in the Realm SDKs. With Decimal128, you're able to store the exact value of a decimal type and avoid the potential for rounding errors in calculations.\n\nIn previous versions of Realm, you were limited to int64 and double, which only stored 64 bits of range. Decimal128 is a 16-byte decimal floating-point number format. It's intended for calculations on decimal numbers where high levels of precision are required, like financial (i.e. tax calculations, currency conversions) and scientific computations.\n\nDecimal128 has over 128 bits of range and precision. It supports 34 decimal digits of significance and an exponent range of \u22126143 to +6144. It's become the industry standard, and we're excited to see how the community leverages this new type in your mathematical calculations. 
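To see why that precision matters, here's a small, purely illustrative Kotlin comparison between a binary Double and a Decimal128 built from exact decimal values, using the org.bson Decimal128 type that the Kotlin/Java SDK works with:

``` Kotlin
import org.bson.types.Decimal128
import java.math.BigDecimal

fun main() {
    // Binary floating point cannot represent 0.1 or 0.2 exactly.
    val doubleSum = 0.1 + 0.2
    println(doubleSum) // prints 0.30000000000000004

    // Decimal128 preserves the exact decimal value.
    val decimalSum = Decimal128(BigDecimal("0.1").add(BigDecimal("0.2")))
    println(decimalSum) // prints 0.3
}
```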
Let us know if it unlocks new use cases for you.\n\n::::tabs\n:::tab]{tabid=\"Swift\"}\n``` Swift\nclass Task: Object {\n @objc dynamic var _id: ObjectId = ObjectId.generate()\n @objc dynamic var _partition: String = \"\"\n @objc dynamic var name: String = \"\"\n @objc dynamic var owner: String? = nil\n @objc dynamic var myDecimal: Decimal128? = nil\n override static func primaryKey() -> String? {\n return \"_id\"\n}\n```\n:::\n:::tab[]{tabid=\"Kotlin\"}\n``` Kotlin\nopen class Task(_name: String = \"Task\") : RealmObject() {\n @PrimaryKey var _id: ObjectId = ObjectId()\n var name: String = _name\n var owner: String? = null\n var myDecimal: Decimal128? = null\n}\n```\n:::\n:::tab[]{tabid=\"Javascript\"}\n``` js\nconst TaskSchema = {\n name: \"Task\",\n properties: {\n _id: \"objectId\",\n _partition: \"string?\",\n myDecimal: \"decimal128?\",\n name: \"string\",\n },\n primaryKey: \"_id\",\n};\n```\n:::\n:::tab[]{tabid=\".NET\"}\n``` csharp\npublic class Foo : RealmObject\n{\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(\"_partition\")]\n public string Partition { get; set; }\n\n public string Name { get; set; };\n\n public Decimal128 MyDecimal { get; set; }\n}\n```\n:::\n::::\n\nTake a look at our documentation on Realm models by going here -\n\n- [iOS SDK\n- Android SDK\n- React Native SDK\n- Node.js SDK\n- .NET SDK\n\n## Open Sourcing Realm Sync\n\nSince launching MongoDB Realm and Realm Sync in June, we've also made the decision to open source the code for Realm Sync.\n\nSince Realm's founding, we've committed to open source principles in our work. As we continue to invest in building the Realm SDKs and MongoDB Realm, we want to remain transparent in how we're developing our products.\n\nWe want you to see the algorithm we're using for Realm Sync's automatic conflict resolution, built upon Operational Transformation. Know that any app you build with Realm now has the source algorithm available. We hope that you'll give us feedback and show us the projects you're building with it.\n\nSee the repo to check out the code\n\n## About the New Versioning\n\nYou may have noticed that with this release, we've updated our versioning across all SDKs to Realm 10.0. Our hope is that by aligning all SDKs, we're making it easier to know how database versions align across languages. We can't promise that all versions will stay aligned in the future. But for now, we hope this helps you to notice major changes and avoid accidental upgrades.\n\n## Looking Ahead\n\nThe Realm SDKs continue to evolve as a part of MongoDB, and we truly believe that this new functionality gives you the best experience yet when using Realm. Looking ahead, we're continuing to invest in providing a best-in-class solution and are working to to support new platforms and use cases.\n\nStay tuned by following @realm on Twitter.\n\nWant to Ask a Question? Visit our Forums\n\nWant to make a feature request? Visit our Feedback Portal\n\nWant to be notified of upcoming Realm events such as our iOS Hackathon in November 2020? Visit our Global Community Page\n\nRunning into issues? Visit our Github to file an Issue.\n\n- RealmJS\n- RealmSwift\n- RealmJava\n- RealmDotNet\n\n>Safe Harbor\nThe development, release, and timing of any features or functionality described for our products remains at our sole discretion. 
This information is merely intended to outline our general product direction and it should not be relied on in making a purchasing decision nor is this a commitment, promise or legal obligation to deliver any material, code, or functionality.", "format": "md", "metadata": {"tags": ["Realm"], "pageDescription": "The Realm SDK 10.0 is now Generally Available with new capabilities such as Cascading Deletes and new types like Decimal128.", "contentType": "News & Announcements"}, "title": "Realm SDKs 10.0: Cascading Deletes, ObjectIds, Decimal128, and more", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/using-linq-query-mongodb-dotnet-core-application", "action": "created", "body": "# Using LINQ to Query MongoDB in a .NET Core Application\n\n# Using LINQ to Query MongoDB in a .NET Core Application\n\nIf you've been keeping up with my series of tutorials around .NET Core and MongoDB, you'll likely remember that we explored using the Find operator to query for documents as well as an aggregation pipeline. Neither of these previously explored subjects are too difficult, but depending on what you're trying to accomplish, they could be a little messy. Not to mention, they aren't necessarily \"the .NET way\" of doing business.\n\nThis is where LINQ comes into the mix of things!\n\nWith Language Integrated Queries (LINQ), we can use an established and well known C# syntax to work with our MongoDB documents and data.\n\nIn this tutorial, we're going to look at a few LINQ queries, some as a replacement to simple queries using the MongoDB Query API and others as a replacement to more complicated aggregation pipelines.\n\n## The requirements\n\nTo be successful with this tutorial, you should already have the following ready to go:\n\n- .NET Core 6+\n- MongoDB Atlas, the free tier or better\n\nWhen it comes to MongoDB Atlas, you'll need to have a cluster deployed and properly configured with user roles and network rules. If you need help with this, take a look at my previous tutorial on the subject. You will also need the sample datasets installed.\n\nWhile this tutorial is part of a series, you don't need to have read the others to be successful. However, you'd be doing yourself a favor by checking out the other ways you can do business with .NET Core and MongoDB.\n\n## Creating a new .NET Core console application with the CLI\n\nTo keep this tutorial simple and easy to understand, we're going to create a new console application and work from that.\n\nExecute the following from the CLI to create a new project that is ready to go with the MongoDB driver:\n\n```bash\ndotnet new console -o MongoExample\ncd MongoExample\ndotnet add package MongoDB.Driver\n```\n\nFor this tutorial, our MongoDB Atlas URI string will be stored as an environment variable on our computer. 
Depending on your operating system, you can do something like this:\n\n```bash\nexport ATLAS_URI=\"YOUR_ATLAS_URI_HERE\"\n```\n\nThe Atlas URI string can be found in your MongoDB Atlas Dashboard after clicking the \"Connect\" button and choosing your programming language.\n\nOpen the project's **Program.cs** file and add the following C# code:\n\n```csharp\nusing MongoDB.Bson;\nusing MongoDB.Driver;\nusing MongoDB.Driver.Linq;\n\nMongoClientSettings settings = MongoClientSettings.FromConnectionString(\n Environment.GetEnvironmentVariable(\"ATLAS_URI\")\n);\n\nsettings.LinqProvider = LinqProvider.V3;\n\nMongoClient client = new MongoClient(settings);\n```\n\nIn the above code, we are explicitly saying that we want to use LINQ Version 3 rather than Version 2, which is the default in MongoDB. While you can accomplish many LINQ-related tasks in MongoDB with Version 2, you'll get a much better experience with Version 3.\n\n## Writing MongoDB LINQ queries in your .NET Core project\n\nWe're going to take it slow and work our way up to bigger and more complicated queries with LINQ.\n\nIn case you've never seen the \"sample_mflix\" database that is part of the sample datasets that MongoDB offers, it's a movie database with several collections. We're going to focus strictly on the \"movies\" collection which has documents that look something like this:\n\n```json\n{\n \"_id\": ObjectId(\"573a1398f29313caabceb515\"),\n \"title\": \"Batman\",\n \"year\": 1989,\n \"rated\": \"PG-13\",\n \"runtime\": 126,\n \"plot\": \"The Dark Knight of Gotham City begins his war on crime with his first major enemy being the clownishly homicidal Joker.\",\n \"cast\": \"Michael Keaton\", \"Jack Nicholson\", \"Kim Basinger\" ]\n}\n```\n\nThere are quite a bit more fields to each of the documents in that collection, but the above fields are enough to get us going.\n\nTo use LINQ, we're going to need to create mapped classes for our collection. In other words, we won't want to be using `BsonDocument` when writing our queries. At the root of your project, create a **Movie.cs** file with the following C# code:\n\n```csharp\nusing MongoDB.Bson;\nusing MongoDB.Bson.Serialization.Attributes;\n\n[BsonIgnoreExtraElements]\npublic class Movie {\n\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public string Id { get; set; }\n\n [BsonElement(\"title\")]\n public string Title { get; set; } = null!;\n\n [BsonElement(\"year\")]\n public int Year { get; set; }\n\n [BsonElement(\"runtime\")]\n public int Runtime { get; set; }\n\n [BsonElement(\"plot\")]\n [BsonIgnoreIfNull]\n public string Plot { get; set; } = null!;\n\n [BsonElement(\"cast\")]\n [BsonIgnoreIfNull]\n public List Cast { get; set; } = null!;\n\n}\n```\n\nWe used a class like the above in our previous tutorials. We've just defined a few of our fields, mapped them to BSON fields in our database, and told our class to ignore any extra fields that may exist in our database that we chose not to define in our class.\n\nLet's say that we want to return movies that were released between 1980 and 1990. 
If we weren't using LINQ, we'd be doing something like the following in our **Program.cs** file:\n\n```csharp\nusing MongoDB.Bson;\nusing MongoDB.Driver;\n\nMongoClient client = new MongoClient(\n Environment.GetEnvironmentVariable(\"ATLAS_URI\")\n);\n\nIMongoCollection moviesCollection = client.GetDatabase(\"sample_mflix\").GetCollection(\"movies\");\n\nBsonDocument filter = new BsonDocument{\n { \n \"year\", new BsonDocument{\n { \"$gt\", 1980 },\n { \"$lt\", 1990 }\n } \n }\n};\n\nList movies = moviesCollection.Find(filter).ToList();\n\nforeach(Movie movie in movies) {\n Console.WriteLine($\"{movie.Title}: {movie.Plot}\");\n}\n```\n\nHowever, since we want to use LINQ, we can update our **Program.cs** file to look like the following:\n\n```csharp\nusing MongoDB.Driver;\nusing MongoDB.Driver.Linq;\n\nMongoClientSettings settings = MongoClientSettings.FromConnectionString(\n Environment.GetEnvironmentVariable(\"ATLAS_URI\")\n);\n\nsettings.LinqProvider = LinqProvider.V3;\n\nMongoClient client = new MongoClient(settings);\n\nIMongoCollection moviesCollection = client.GetDatabase(\"sample_mflix\").GetCollection(\"movies\");\n\nIMongoQueryable results =\n from movie in moviesCollection.AsQueryable()\n where movie.Year > 1980 && movie.Year < 1990\n select movie;\n\nforeach(Movie result in results) {\n Console.WriteLine(\"{0}: {1}\", result.Title, result.Plot);\n}\n```\n\nIn the above code, we are getting a reference to our collection and creating a LINQ query. To break down the LINQ query to see how it relates to MongoDB, we have the following:\n\n1. The \"WHERE\" operator is the equivalent to doing a \"$MATCH\" or a filter within MongoDB. The documents have to match the criteria in this step.\n2. The \"SELECT\" operator is the equivalent to doing a projection or using the \"$PROJECT\" operator. We're defining which fields should be returned from the query\u2014in this case, all fields that we've defined in our class.\n\nTo diversify our example a bit, we're going to change the match condition to match within an array, something non-flat.\n\nChange the LINQ query to look like the following:\n\n```csharp\nvar results =\n from movie in moviesCollection.AsQueryable()\n where movie.Cast.Contains(\"Michael Keaton\")\n select new { movie.Title, movie.Plot };\n```\n\nA few things changed in the above code along with the filter. First, you'll notice that we are matching on the `Cast` array as long as \"Michael Keaton\" exists in that array. Next, you'll notice that we're doing a projection to only return the movie title and the movie plot instead of all other fields that might exist in the data.\n\nWe're going to make things slightly more complex now in terms of our query. 
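Before we do, a quick aside: everything written so far in query syntax can also be written with LINQ method syntax. Assuming the same `moviesCollection` from above, the "Michael Keaton" query might look like this — both forms should be translated to the same MongoDB query by the driver, so use whichever reads better to you:

```csharp
// Method-syntax version of the "Michael Keaton" query from above.
// This drops into the same Program.cs and reuses the existing moviesCollection.
var results = moviesCollection.AsQueryable()
    .Where(movie => movie.Cast.Contains("Michael Keaton"))
    .Select(movie => new { movie.Title, movie.Plot });

foreach (var result in results) {
    Console.WriteLine("{0}: {1}", result.Title, result.Plot);
}
```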
This time we're going to do what would have been a MongoDB aggregation pipeline, but this time using LINQ.\n\nChange the C# code in the **Program.cs** file to look like the following:\n\n```csharp\nusing MongoDB.Driver;\nusing MongoDB.Driver.Linq;\n\nMongoClientSettings settings = MongoClientSettings.FromConnectionString(\n Environment.GetEnvironmentVariable(\"ATLAS_URI\")\n);\n\nsettings.LinqProvider = LinqProvider.V3;\n\nMongoClient client = new MongoClient(settings);\n\nIMongoCollection moviesCollection = client.GetDatabase(\"sample_mflix\").GetCollection(\"movies\");\n\nvar results = \n from movie in moviesCollection.AsQueryable()\n where movie.Cast.Contains(\"Ryan Reynolds\")\n from cast in movie.Cast\n where cast == \"Ryan Reynolds\"\n group movie by cast into g\n select new { Cast = g.Key, Sum = g.Sum(x => x.Runtime) };\n\nforeach(var result in results) {\n Console.WriteLine(\"{0} appeared on screen for {1} minutes!\", result.Cast, result.Sum);\n}\n```\n\nIn the above LINQ query, we're doing a series of steps, just like stages in an aggregation pipeline. These stages can be broken down like the following:\n\n1. Match all documents where \"Ryan Reynolds\" is in the cast.\n2. Unwind the array of cast members so the documents sit adjacent to each other. This will flatten the array for us.\n3. Do another match on the now smaller subset of documents, filtering out only results that have \"Ryan Reynolds\" in them.\n4. Group the remaining results by the cast, which will only be \"Ryan Reynolds\" in this example.\n5. Project only the group key, which is the cast member, and the sum of all the movie runtimes.\n\nIf you haven't figured it out yet, what we attempted to do was determine the total amount of screen time Ryan Reynolds has had. We isolated our result set to only documents with Ryan Reynolds, and then we summed the runtime of the documents that were matched.\n\nWhile the full scope of the MongoDB aggregation pipeline isn't supported with LINQ, you'll be able to accomplish quite a bit, resulting in a lot cleaner looking code. To get an idea of the supported operators, take a look at the [MongoDB LINQ documentation.\n\n## Conclusion\n\nYou just got a taste of LINQ with MongoDB in your .NET Core applications. While you don't have to use LINQ, as demonstrated in a few previous tutorials, it's common practice amongst C# developers.\n\nGot a question about this tutorial? Check out the MongoDB Community Forums for help!", "format": "md", "metadata": {"tags": ["C#", "MongoDB", ".NET"], "pageDescription": "Learn how to use LINQ to interact with MongoDB in a .NET Core application.", "contentType": "Tutorial"}, "title": "Using LINQ to Query MongoDB in a .NET Core Application", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/type-projections", "action": "created", "body": "# Realm-Swift Type Projections\n\n## Introduction\n\nRealm natively provides a broad set of data types, including `Bool`, `Int`, `Float`, `Double`, `String`, `Date`, `ObjectID`, `List`, `Mutable Set`, `enum`, `Map`, \u2026\n\nBut, there are other data types that many of your iOS apps are likely to use. As an example, if you're using Core Graphics, then it's hard to get away without using types such as `CGFloat`, `CGPoint`, etc. When working with SwiftUI, you use the `Color` struct when working with colors.\n\nA typical design pattern is to persist data using types natively supported by Realm, and then use a richer set of types in your app. 
When reading data, you add extra boilerplate code to convert them to your app's types. When persisting data, you add more boilerplate code to convert your data back into types supported by Realm.\n\nThat works fine and gives you complete control over the type conversions. The downside is that you can end up with dozens of places in your code where you need to make the conversion.\n\nType projections still give you total control over how to map a `CGPoint` into something that can be persisted in Realm. But, you write the conversion code just once and then forget about it. The Realm-Swift SDK will then ensure that types are converted back and forth as required in the rest of your app.\n\nThe Realm-Swift SDK enables this by adding two new protocols that you can use to extend any Swift type. You opt whether to implement `CustomPersistable` or the version that's allowed to fail (`FailableCustomPersistable`):\n\n```swift\nprotocol CustomPersistable {\n associatedtype PersistedType\n init(persisted: PersistedType)\n var persistableValue: PersistedType { get }\n}\nprotocol FailableCustomPersistable {\n associatedtype PersistedType\n init?(persisted: PersistedType)\n var persistableValue: PersistedType { get }\n}\n```\n\nIn this post, I'll show how the Realm-Drawing app uses type projections to interface between Realm and Core Graphics.\n\n## Prerequisites\n\n- iOS 15+\n- Xcode 13.2+\n- Realm-Swift 10.21.0+\n\n## The Realm-Drawing App\n\nRealm-Drawing is a simple, collaborative drawing app. If two people log into the app using the same username, they can work on a drawing together. All strokes making up the drawing are persisted to Realm and then synced to all other instances of the app where the same username is logged in.\n\nIt's currently iOS-only, but it would also sync with any Android drawing app that is connected to the same Realm back end.\n\n## Using Type Projections in the App\n\nThe Realm-Drawing iOS app uses three types that aren't natively supported by Realm:\n\n- `CGFloat`\n- `CGPoint`\n- `Color` (SwiftUI)\n\nIn this section, you'll see how simple it is to use type projections to convert them into types that can be persisted to Realm and synced.\n\n### Realm Schema (The Model)\n\nAn individual drawing is represented by a single `Drawing` object:\n\n```swift\nclass Drawing: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var name = UUID().uuidString\n @Persisted var lines = RealmSwift.List()\n}\n```\n\nA Drawing contains a `List` of `Line` objects:\n\n```swift\nclass Line: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var lineColor: Color\n @Persisted var lineWidth: CGFloat = 5.0\n @Persisted var linePoints = RealmSwift.List()\n}\n```\n\nIt's the `Line` class that uses the non-Realm-native types.\n\nLet's see how each type is handled.\n\n#### CGFloat\n\nI extend `CGFloat` to conform to Realm-Swift's `CustomPersistable` protocol. 
All I needed to provide was:\n\n- An initializer to convert what's persisted in Realm (a `Double`) into the `CGFloat` used by the model\n- A method to convert a `CGFloat` into a `Double`:\n\n```swift\nextension CGFloat: CustomPersistable {\n public typealias PersistedType = Double\n public init(persistedValue: Double) { self.init(persistedValue) }\n public var persistableValue: Double { Double(self) }\n}\n```\n\nThe `view` can then use `lineWidth` from the model object without worrying about how it's converted by the Realm SDK:\n\n```swift\ncontext.stroke(path, with: .color(line.lineColor),\n style: StrokeStyle(\n lineWidth: line.lineWidth,\n lineCap: .round, l\n ineJoin: .round\n )\n)\n```\n\n#### CGPoint\n\n`CGPoint` is a little trickier, as it can't just be cast into a Realm-native type. `CGPoint` contains the x and y coordinates for a point, and so, I create a Realm-friendly class (`PersistablePoint`) that stores just that\u2014`x` and `y` values as `Doubles`:\n\n```swift\npublic class PersistablePoint: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var x = 0.0\n @Persisted var y = 0.0\n\n convenience init(_ point: CGPoint) {\n self.init()\n self.x = point.x\n self.y = point.y\n }\n}\n```\n\nI implement the `CustomPersistable` protocol for `CGPoint` by mapping between a `CGPoint` and the `x` and `y` coordinates within a `PersistablePoint`:\n\n```swift\nextension CGPoint: CustomPersistable {\n public typealias PersistedType = PersistablePoint \n public init(persistedValue: PersistablePoint) { self.init(x: persistedValue.x, y: persistedValue.y) }\n public var persistableValue: PersistablePoint { PersistablePoint(self) }\n}\n```\n\n#### SwiftUI.Color\n\n`Color` is made up of the three RGB components plus the opacity. I use the `PersistableColor` class to persist a representation of `Color`:\n\n```swift\npublic class PersistableColor: EmbeddedObject {\n @Persisted var red: Double = 0\n @Persisted var green: Double = 0\n @Persisted var blue: Double = 0\n @Persisted var opacity: Double = 0\n\n convenience init(color: Color) {\n self.init()\n if let components = color.cgColor?.components {\n if components.count >= 3 {\n red = components0]\n green = components[1]\n blue = components[2]\n }\n if components.count >= 4 {\n opacity = components[3]\n }\n }\n }\n}\n```\n\nThe extension to implement `CustomPersistable` for `Color` provides methods to initialize `Color` from a `PersistableColor`, and to generate a `PersistableColor` from itself:\n\n```swift\nextension Color: CustomPersistable {\n public typealias PersistedType = PersistableColor\n\n public init(persistedValue: PersistableColor) { self.init(\n .sRGB,\n red: persistedValue.red,\n green: persistedValue.green,\n blue: persistedValue.blue,\n opacity: persistedValue.opacity) }\n\n public var persistableValue: PersistableColor {\n PersistableColor(color: self)\n }\n}\n```\n\nThe [view can then use `selectedColor` from the model object without worrying about how it's converted by the Realm SDK:\n\n```swift\ncontext.stroke(\n path,\n with: .color(line.lineColor),\n style: StrokeStyle(lineWidth:\n line.lineWidth,\n lineCap: .round,\n lineJoin: .round)\n)\n```\n\n## Conclusion\n\nType projections provide a simple, elegant way to convert any type to types that can be persisted and synced by Realm.\n\nIt's your responsibility to define how the mapping is implemented. 
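One variant the Realm-Drawing app doesn't need is `FailableCustomPersistable`, the protocol to reach for when the conversion itself can fail. As a sketch (not part of this app's code), mapping a `URL` to a persisted `String` might look like this:

```swift
import Foundation
import RealmSwift

// Sketch only: persist a URL as a String, and fail (return nil) if the stored
// string can't be parsed back into a URL when the object is read.
extension URL: FailableCustomPersistable {
    public typealias PersistedType = String
    public init?(persistedValue: String) { self.init(string: persistedValue) }
    public var persistableValue: String { absoluteString }
}
```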
After that, the Realm SDK takes care of everything else.\n\nPlease provide feedback and ask any questions in the Realm Community Forum.", "format": "md", "metadata": {"tags": ["Realm", "Swift"], "pageDescription": "Simply persist and sync Swift objects containing any type in Realm", "contentType": "Tutorial"}, "title": "Realm-Swift Type Projections", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/typescript/myleg", "action": "created", "body": "# myLeG\n\n## Creator\nJustus Alvermann, student in Germany, developed this project.\n\n## About the Project\nThe project shows the substitutions of my school in a more readable way and also sorted, so the users only see the entries that are relevant to them. \nIt can also send out push notifications for new or changed substitutions and has some information about the current COVID regulations\n\n## Inspiration\nI didn't like the current way substitutions are presented and also wanted a way to be notified about upcoming substitutions. In addition, I was tired of coming to school even though the first lessons were cancelled because I forgot to look at the substitution schedule. \n \n## Why MongoDB?\nSince not every piece of information (e.g. new room for cancelled lessons) on the substitution plan is available for all entries, a document-based solution was the only sensible database. \n\n## How It Works\n\nEvery 15 minutes, a NodeJS script crawls the substitution plan of my school and saves all new or changed entires into my MongoDB collection. This script also sends out push notifications via the web messaging api to the users who subscribed to them.\nI used Angular for the frontend and Vercel Serverless functions for the backend. \nThe serverless functions get the information from the database and can be queried via their Rest API. \nThe login credentials are stored in MongoDB too and logins are saved as JWTs in the users cookies. ", "format": "md", "metadata": {"tags": ["TypeScript", "Atlas", "JavaScript", "Vercel", "Serverless"], "pageDescription": "This project downloads the substitution plan of my school and converts it into a user-friendly page.", "contentType": "Code Example"}, "title": "myLeG", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-ios-database-access-using-realm-studio", "action": "created", "body": "# Accessing Realm Data on iOS Using Realm Studio\n\nThe Realm makes it much\nfaster to develop mobile applications. Realm\nStudio is a desktop app that\nlets you view, manipulate, and import data held within your mobile app's\nRealm database.\n\nThis article steps through how to track down the locations of your iOS\nRealm database files, open them in Realm Studio, view the data, and make\nchanges. If you're developing an Android app, then finding the Realm\ndatabase files will be a little different (we'll follow up with an\nAndroid version later), but if you can figure out how to locate the\nRealm file, then the instructions on using Realm Studio should work.\n\n## Prerequisites\n\nIf you want to build and run the app for yourself, this is what you'll\nneed:\n\n- A Mac.\n- Xcode \u2013 any reasonably recent version will work.\n- Realm Studio 10.1.0+ \u2013 earlier versions had an issue when working\n with Realms using Atlas Device\n Sync.\n\nI'll be using the data and schema from my existing\nRChat iOS app. 
You can use any\nRealm-based app, but if you want to understand more about RChat, check\nout Building a Mobile Chat App Using Realm \u2013 Data Architecture and\nBuilding a Mobile Chat App Using Realm \u2013 Integrating Realm into Your App.\n\n## Walkthrough\n\nThis walkthrough shows you how to:\n\n- Install & Run Realm Studio\n- Track Down Realm Data Files \u2013 Xcode Simulator\n- Track Down Realm Data Files \u2013 Real iOS Devices\n- View Realm Data\n- Add, Modify, and Delete Data\n\n### Install & Run Realm Studio\n\nI'm using Realm Studio\n10.1.1 to create this\narticle. If you have 10.1.0 installed, then that should work too.\n\nIf you don't already have Realm Studio 10.1.0+ installed, then download\nit here and install.\n\nThat's it.\n\nBut, when you open the app, you're greeted by this:\n\nThere's a single button letting me \"Open Realm file,\" and when I click\non it, I get a file explorer where I can browse my laptop's file system.\n\nWhere am I supposed to find my Realm file? I cover that in the next\nsection.\n\n### Track Down Realm Data Files \u2013 Xcode Simulator\n\nIf you're running your app in one of Xcode's simulators, then the Realm\nfiles are stored in your Mac's file system. They're typically somewhere\nalong the lines of\n`~/Library/Developer/CoreSimulator/Devices/???????/data/Containers/Data/Application/???????/Documents/mongodb-realm/???????/????????/???????.realm`.\n\nThe scientific way to find the file's location is to add some extra code\nto your app or to use a breakpoint.\n\nWhile my app's in development, I'll normally print the location of a\nRealm's file whenever I open it. Don't worry if you're not explicitly\nopening your Realm(s) in your code (e.g., if you're using the default\nrealm) as I'll cover the file search approach soon. This is the code to\nadd to your app once you've opened your realm \u2013 `realm`:\n\n``` swift\nprint(\"User Realm User file location: \\(realm.configuration.fileURL!.path)\")\n```\n\nIf you don't want to edit the code, then an Xcode breakpoint delivers\nthe same result:\n\nOnce you have the file location, open it in Realm Studio from the\nterminal:\n\n``` bash\nopen /Users/andrew.morgan/Library/Developer/CoreSimulator/Devices/E7526AFE-E886-490A-8085-349C8E8EDC5B/data/Containers/Data/Application/C3ADE2F2-ABF0-4BD0-9F47-F21894E850DB/Documents/mongodb-realm/rchat-saxgm/60099aefb33c57e9a9828d23/%22user%3D60099aefb33c57e9a9828d23%22.realm\n```\n\nLess scientific but simpler is to take advantage of the fact that the\ndata files will always be of type `realm` and located somewhere under\n`~/Library/Developer/CoreSimulator/Devices`. Open Finder in that folder:\n`open ~/Library/Developer/CoreSimulator/Devices` and then create a\n\"saved search\" so that you can always find all of your realm files.\nYou'll most often be looking for the most recent one.\n\nThe nice thing about this approach is that you only need to create the\nsearch once. Then click on \"Realms,\" find the file you need, and then\ndouble-click it to open it in Realm Studio.\n\n### Track Down Realm Data Files \u2013 Real iOS Devices\n\nUnfortunately, you can't use Realm Studio to interact with live Realm\nfiles on a real iOS device.\n\nWhat we can do is download a copy of your app's Realm files from your\niOS device to your laptop. You need to connect your iOS device to your\nMac, agree to trust the computer, etc.\n\nOnce connected, you can use Xcode to download a copy of the \"Container\"\nfor your app. 
Open the Xcode device manager\u2014\"Window/Devices and\nSimulators.\" Find your device and app, then download the container:\n\nNote that you can only access the containers for apps that you've built\nand installed through Xcode, not ones you've installed through the App\nStore.\n\nRight-click the downloaded file and \"Show Package Contents.\" You'll find\nyour Realm files under\n`AppData/Documents/mongodb-realm//?????`. Find the file for\nthe realm you're interested in and double-click it to open it in Realm\nStudio.\n\n### View Realm Data\n\nAfter opening a Realm file, Realm Studio will show a window with all of\nthe top-level Realm Object classes in use by your app. In this example,\nthe realm I've opened only contains instances of the `Chatster` class.\nThere's a row for each `Chatster` Object that I'd created through the\napp:\n\nIf there are a lot of objects, then you can filter them using a simple\nquery\nsyntax:\n\nIf the Realm Object class contains a `List` or an `EmbeddedObject`, then\nthey will show as blue links\u2014in this example, `conversations` and\n`userPreferences` are a list of `Conversation` objects and an embedded\n`UserPreferences` object respectively:\n\nClicking on one of the `UserPreferences` links brings up the contents of\nthe embedded object:\n\n### Add, Modify, and Delete Data\n\nThe ability to view your Realm data is invaluable for understanding\nwhat's going on inside your app.\n\nRealm Studio takes it a step further by letting you add, modify, and\ndelete data. This ability helps to debug and test your app.\n\nAs a first example, I click on \"Create ChatMessage\" to add a new message\nto a conversation:\n\nFill out the form and click \"Create\" to add the new `ChatMessage`\nobject:\n\nWe can then observe the effect of that change in our app:\n\nI could have tested that change using the app, but there are different\nthings that I can try using Realm Studio.\n\nI haven't yet included the ability to delete or edit existing messages,\nbut I can now at least test that this view can cope when the data\nchanges:\n\n## Summary\n\nIn this article, we've seen how to find and open your iOS Realm data\nfiles in Realm Studio. We've viewed the data and then made changes and\nobserved the iOS app reacting to those changes.\n\nRealm Studio has several other useful features that I haven't covered\nhere. As it's a GUI, it's fairly easy to figure out how to use them, and\nthe docs are available if you\nget stuck. These functions include:\n\n- Import data into Realm from a CSV file.\n- Export your Realm data as a JSON file.\n- Edit the schema.\n- Open the Realm file from an app and export the schema in a different\n language. We used this for the WildAid O-FISH\n project. 
I created the schema in the\n iOS app, and another developer exported a Kotlin version of the\n schema from Realm Studio to use in the Android app.\n\n## References\n\n- GitHub Repo for RChat App.\n- Read Building a Mobile Chat App Using Realm \u2013 Data Architecture to understand the data model and partitioning strategy behind the RChat app.\n- Read Building a Mobile Chat App Using Realm \u2013 Integrating Realm into Your App to learn how to create the RChat app.\n- If you're building your first SwiftUI/Realm app, then check out Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine.\n- GitHub Repo for Realm-Cocoa SDK.\n- Realm Cocoa SDK documentation.\n- MongoDB's Realm documentation.\n\n>\n>\n>If you have questions, please head to our developer community\n>website where the MongoDB engineers and\n>the MongoDB community will help you build your next big idea with\n>MongoDB.\n>\n>\n", "format": "md", "metadata": {"tags": ["Realm", "Swift", "iOS", "Postman API"], "pageDescription": "Discover how to access and manipulate your iOS App's Realm data using the Realm Studio GUI.", "contentType": "Tutorial"}, "title": "Accessing Realm Data on iOS Using Realm Studio", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/saving-data-in-unity3d-using-binary-reader-writer", "action": "created", "body": "# Saving Data in Unity3D Using BinaryReader and BinaryWriter\n\n(Part 3 of the Persistence Comparison Series)\n\nPersisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well.\n\nIn Part 1 of this series, we explored Unity's own solution: `PlayerPrefs`. This time, we look into one of the ways we can use the underlying .NET framework by saving files. Here is an overview of the complete series:\n\n- Part 1: PlayerPrefs\n- Part 2: Files\n- Part 3: BinaryReader and BinaryWriter *(this tutorial)*\n- Part 4: SQL *(coming soon)*\n- Part 5: Realm Unity SDK\n- Part 6: Comparison of all those options\n\nLike Part 1 and 2, this tutorial can also be found in the https://github.com/realm/unity-examples repository on the persistence-comparison branch.\n\nEach part is sorted into a folder. The three scripts we will be looking at are in the `BinaryReaderWriter` sub folder. But first, let's look at the example game itself and what we have to prepare in Unity before we can jump into the actual coding.\n\n## Example game\n\n*Note that if you have worked through any of the other tutorials in this series, you can skip this section since we are using the same example for all parts of the series so that it is easier to see the differences between the approaches.*\n\nThe goal of this tutorial series is to show you a quick and easy way to make some first steps in the various ways to persist data in your game.\n\nTherefore, the example we will be using will be as simple as possible in the editor itself so that we can fully focus on the actual code we need to write.\n\nA simple capsule in the scene will be used so that we can interact with a game object. 
We then register clicks on the capsule and persist the hit count.\n\nWhen you open up a clean 3D template, all you need to do is choose `GameObject` -> `3D Object` -> `Capsule`.\n\nYou can then add scripts to the capsule by activating it in the hierarchy and using `Add Component` in the inspector.\n\nThe scripts we will add to this capsule showcasing the different methods will all have the same basic structure that can be found in `HitCountExample.cs`.\n\n```cs\nusing UnityEngine;\n\n/// \n/// This script shows the basic structure of all other scripts.\n/// \npublic class HitCountExample : MonoBehaviour\n{\n // Keep count of the clicks.\n SerializeField] private int hitCount; // 1\n\n private void Start() // 2\n {\n // Read the persisted data and set the initial hit count.\n hitCount = 0; // 3\n }\n\n private void OnMouseDown() // 4\n {\n // Increment the hit count on each click and save the data.\n hitCount++; // 5\n }\n}\n```\n\nThe first thing we need to add is a counter for the clicks on the capsule (1). Add a `[SerilizeField]` here so that you can observe it while clicking on the capsule in the Unity editor.\n\nWhenever the game starts (2), we want to read the current hit count from the persistence and initialize `hitCount` accordingly (3). This is done in the `Start()` method that is called whenever a scene is loaded for each game object this script is attached to.\n\nThe second part to this is saving changes, which we want to do whenever we register a mouse click. The Unity message for this is `OnMouseDown()` (4). This method gets called every time the `GameObject` that this script is attached to is clicked (with a left mouse click). In this case, we increment the `hitCount` (5) which will eventually be saved by the various options shown in this tutorials series.\n\n## BinaryReader and BinaryWriter\n\n(See `BinaryReaderWriterExampleSimple.cs` in the repository for the finished version.)\n\nIn the previous tutorial, we looked at `Files`. This is not the only way to work with data in files locally. Another option that .NET is offering us is the [`BinaryWriter` and BinaryReader.\n\n> The BinaryWriter class provides methods that simplify writing primitive data types to a stream. For example, you can use the Write method to write a Boolean value to the stream as a one-byte value. The class includes write methods that support different data types.\n\nParts of this tutorial will look familiar if you have worked through the previous one. 
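To see that quote in action on its own, here is a minimal, standalone round trip. It is just a sketch outside of the Unity scripts in this tutorial: it writes a few primitives and reads them back in the same order.

```cs
using System;
using System.IO;

// Standalone sketch: a BinaryWriter/BinaryReader round trip outside of Unity.
// Values must be read back in the same order, and with the matching Read methods.
class BinaryRoundTrip
{
    static void Main()
    {
        using (FileStream stream = File.Open("example.bin", FileMode.Create))
        using (BinaryWriter writer = new(stream))
        {
            writer.Write(42);        // Int32
            writer.Write(true);      // Boolean, written as a single byte
            writer.Write("hello");   // length-prefixed string
        }

        using (FileStream stream = File.Open("example.bin", FileMode.Open))
        using (BinaryReader reader = new(stream))
        {
            Console.WriteLine(reader.ReadInt32());   // 42
            Console.WriteLine(reader.ReadBoolean()); // True
            Console.WriteLine(reader.ReadString());  // hello
        }
    }
}
```

Keeping the read calls in the same order as the writes is something we'll run into again later in this tutorial.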
We will use `File` again here to create and open file streams which can then be used by the `BinaryWriter` to save data into those files.\n\nLet's have a look at what we have to change in the example presented in the previous section to save the data using `BinaryWriter` and then read it again using it's opposite `BinaryReader`:\n\n```cs\nusing System;\nusing System.IO;\nusing UnityEngine;\n\npublic class BinaryReaderWriterExampleSimple : MonoBehaviour\n{\n // Resources:\n // https://docs.microsoft.com/en-us/dotnet/api/system.io.binarywriter?view=net-5.0\n // https://docs.microsoft.com/en-us/dotnet/api/system.io.binaryreader?view=net-5.0\n // https://docs.microsoft.com/en-us/dotnet/api/system.io.filestream?view=net-5.0\n // https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/using-statement\n // https://docs.microsoft.com/en-us/dotnet/api/system.io.stream?view=net-5.0\n\n SerializeField] private int hitCount = 0;\n\n private const string HitCountFile = \"BinaryReaderWriterExampleSimple\"; // 1\n\n private void Start() // 7\n {\n // Check if the file exists to avoid errors when opening a non-existing file.\n if (File.Exists(HitCountFile)) // 8\n {\n // Open a stream to the file that the `BinaryReader` can use to read data.\n // They need to be disposed at the end, so `using` is good practice\n // because it does this automatically.\n using FileStream fileStream = File.Open(HitCountFile, FileMode.Open); // 9\n using BinaryReader binaryReader = new(fileStream); // 10\n hitCount = binaryReader.ReadInt32(); // 11\n }\n }\n\n private void OnMouseDown() // 2\n {\n hitCount++; // 3\n\n // Open a stream to the file that the `BinaryReader` can use to read data.\n // They need to be disposed at the end, so `using` is good practice\n // because it does this automatically.\n using FileStream fileStream = File.Open(HitCountFile, FileMode.Create); // 4\n using BinaryWriter binaryWriter = new(fileStream); // 5\n binaryWriter.Write(hitCount); // 6\n }\n\n}\n```\n\nFirst we define a name for the file that will hold the data (1). If no additional path is provided, the file will just be saved in the project folder when running the game in the Unity editor or the game folder when running a build. This is fine for the example.\n\nWhenever we click on the capsule (2) and increment the hit count (3), we need to save that change. First, we open the file that is supposed to hold the data (4) by calling `File.Open`. It takes two parameters: the file name, which we defined already, and a `FileMode`. Since we want to create a new file, the `FileMode.Create` option is the right choice here.\n\nUsing this `FileStream`, we then create a new `BinaryWriter` that takes the stream as an argument (5). After that, we can simply write the current `hitCount` to the file using `Write()` (6).\n\nThe next time we start the game (7), we check if the file that we saved our data to already exists. If so, it means we have saved data before and can now read it. Once again, we create a new `Filestream` (9) first, this time using the `FileMode.Open` option. 
To read the data from the file, we need to use the `BinaryReader` (10), which also gets initialized with the `FileStream` identical to the `BinaryWriter`.\n\nFinally, using `ReadInt32()`, we can read the hit count from the file and assign it to `hitCount`.\n\nLet's look into extending this simple example in the next section.\n\n## Extended example\n\n(See `BinaryReaderWriterExampleExtended.cs` in the repository for the finished version.)\n\nThe previous section showed the most simple example, using just one variable that needs to be saved. What if we want to save more than that?\n\nDepending on what needs to be saved, there are several different approaches. You could use multiple files or you can write multiple variables inside the same file. The latter shall be shown in this section by extending the game to recognize modifier keys. We want to detect normal clicks, Shift+Click, and Control+Click.\n\nFirst, update the hit counts so that we can save three of them:\n\n```cs\n[SerializeField] private int hitCountUnmodified = 0;\n[SerializeField] private int hitCountShift = 0;\n[SerializeField] private int hitCountControl = 0;\n```\n\nWe also want to use a different file name so we can look at both versions next to each other:\n\n```cs\nprivate const string HitCountFile = \"BinaryReaderWriterExampleExtended\";\n```\n\nThe last field we need to define is the key that is pressed:\n\n```cs\nprivate KeyCode modifier = default;\n```\n\nThe first thing we need to do is check if a key was pressed and which key it was. Unity offers an easy way to achieve this using the [`Input` class's `GetKey()` function. It checks if the given key was pressed or not. You can pass in the string for the key or, to be a bit more safe, just use the `KeyCode` enum. We cannot use this in the `OnMouseClick()` when detecting the mouse click though:\n\n> Note: Input flags are not reset until Update. You should make all the Input calls in the Update Loop.\n\nAdd a new method called `Update()` (1) which is called in every frame. Here we need to check if the `Shift` or `Control` key was pressed (2) and if so, save the corresponding key in `modifier` (3). 
In case none of those keys was pressed (4), we consider it unmodified and reset `modifier` to its `default` (5).\n\n```cs\nprivate void Update() // 1\n{\n // Check if a key was pressed.\n if (Input.GetKey(KeyCode.LeftShift)) // 2\n {\n // Set the LeftShift key.\n modifier = KeyCode.LeftShift; // 3\n }\n else if (Input.GetKey(KeyCode.LeftControl)) // 2\n {\n // Set the LeftControl key.\n modifier = KeyCode.LeftControl; // 3\n }\n else // 4\n {\n // In any other case reset to default and consider it unmodified.\n modifier = default; // 5\n }\n}\n```\n\nNow to saving the data when a click happens:\n\n```cs\nprivate void OnMouseDown() // 6\n{\n // Check if a key was pressed.\n switch (modifier)\n {\n case KeyCode.LeftShift: // 7\n // Increment the Shift hit count.\n hitCountShift++; // 8\n break;\n case KeyCode.LeftControl: // 7\n // Increment the Control hit count.\n hitCountControl++; // 8\n break;\n default: // 9\n // If neither Shift nor Control was held, we increment the unmodified hit count.\n hitCountUnmodified++; // 10\n break;\n }\n\n // Open a stream to the file that the `BinaryReader` can use to read data.\n // They need to be disposed at the end, so `using` is good practice\n // because it does this automatically.\n using FileStream fileStream = File.Open(HitCountFile, FileMode.Create); // 11\n using BinaryWriter binaryWriter = new(fileStream, Encoding.UTF8); // 12\n binaryWriter.Write(hitCountUnmodified); // 13\n binaryWriter.Write(hitCountShift); // 13\n binaryWriter.Write(hitCountControl); // 13\n}\n```\n\nWhenever a mouse click is detected on the capsule (6), we can then perform a similar check to what happened in `Update()`, only we use `modifier` instead of `Input.GetKey()` here.\n\nCheck if `modifier` was set to `KeyCode.LeftShift` or `KeyCode.LeftControl` (7) and if so, increment the corresponding hit count (8). If no modifier was used (9), increment the `hitCountUnmodified` (10).\n\nSimilar to the simple version, we create a `FileStream` (11) and with it the `BinaryWriter` (12). Writing multiple variables into the file can simply be achieved by calling `Write()` multiple times (13), once for each hit count that we want to save.\n\nStart the game, and click the capsule using Shift and Control. You should see the three counters in the Inspector.\n\nAfter stopping the game and therefore saving the data, a new file `BinaryReaderWriterExampleExtended` should exist in your project folder. Have a look at it. It should look something like this:\n\nThe three hit counters can be seen in there and correspond to the values in the inspector:\n\n- `0f` == 15\n- `0c` == 12\n- `05` == 5\n\nLast but not least, let's look at how to load the file again when starting the game (14):\n\n```cs\nprivate void Start() // 14\n{\n // Check if the file exists to avoid errors when opening a non-existing file.\n if (File.Exists(HitCountFile)) // 15\n {\n // Open a stream to the file that the `BinaryReader` can use to read data.\n // They need to be disposed at the end, so `using` is good practice\n // because it does this automatically.\n using FileStream fileStream = File.Open(HitCountFile, FileMode.Open); // 16\n using BinaryReader binaryReader = new(fileStream); // 17\n hitCountUnmodified = binaryReader.ReadInt32(); // 18\n hitCountShift = binaryReader.ReadInt32(); // 18\n hitCountControl = binaryReader.ReadInt32(); // 18\n }\n}\n```\n\nFirst, we check if the file even exists (15). If we ever saved data before, this should be the case. 
If it exists, we read the data by creating a `FileStream` again (16) and opening a `BinaryReader` with it (17). Similar to writing with `Write()` (on the `BinaryWriter`), we use `ReadInt32()` (18) to read an `integer`. We do this three times since we saved them all individually.\n\nNote that knowing the structure of the file is necessary here. If we saved an `integer`, a `boolean`, and a `string`, we would have to use `ReadInt32()`, `ReadBoolean()`, and `ReadString()`.\n\nThe more complex the data gets, the more complicated it will be to make sure there are no mistakes in the structure when reading or writing it. Different types, adding and removing variables, or changing the structure all have to be accounted for by hand. The more data we want to add to this file, the more it makes sense to think about alternatives. For this tutorial, we will stick with the `BinaryReader` and `BinaryWriter` and see what we can do to decrease the complexity a bit when adding more data.\n\nOne of those options will be shown in the next section.\n\n## More complex data\n\n(See `BinaryReaderWriterExampleJson.cs` in the repository for the finished version.)\n\nJSON is a very common approach when saving structured data. It's easy to use and there are frameworks for almost every language. The .NET framework provides a `JsonSerializer`. Unity has its own version of it: `JsonUtility`.\n\nAs you can see in the documentation, the functionality boils down to these three methods:\n\n- *FromJson()*: Create an object from its JSON representation.\n- *FromJsonOverwrite()*: Overwrite data in an object by reading from its JSON representation.\n- *ToJson()*: Generate a JSON representation of the public fields of an object.\n\nThe `JsonUtility` transforms JSON into objects and back. Therefore, our first change to the previous section is to define such an object with public fields:\n\n```cs\nprivate class HitCount\n{\n public int Unmodified;\n public int Shift;\n public int Control;\n}\n```\n\nThe class itself can be `private` and just be added inside the `BinaryReaderWriterExampleJson` class, but its fields need to be public.\n\nAs before, we use a different file to save this data. 
Update the filename to:\n\n```cs\nprivate const string HitCountFile = \"BinaryReaderWriterExampleJson\";\n```\n\nWhen saving the data, we will use the same `Update()` method as before to detect which key was pressed.\n\nThe first part of `OnMouseDown()` (1) can stay the same as well, since this part only increments the hit count depending on the modifier used.\n\n```cs\nprivate void OnMouseDown() // 1\n{\n // Check if a key was pressed.\n switch (modifier)\n {\n case KeyCode.LeftShift:\n // Increment the Shift hit count.\n hitCountShift++;\n break;\n case KeyCode.LeftControl:\n // Increment the Control hit count.\n hitCountControl++;\n break;\n default:\n // If neither Shift nor Control was held, we increment the unmodified hit count.\n hitCountUnmodified++;\n break;\n }\n\n // 2\n // Create a new HitCount object to hold this data.\n var updatedCount = new HitCount\n {\n Unmodified = hitCountUnmodified,\n Shift = hitCountShift,\n Control = hitCountControl,\n };\n\n // 3\n // Create a JSON using the HitCount object.\n var jsonString = JsonUtility.ToJson(updatedCount, true);\n\n // Open a stream to the file that the `BinaryReader` can use to read data.\n // They need to be disposed at the end, so `using` is good practice\n // because it does this automatically.\n using FileStream fileStream = File.Open(HitCountFile, FileMode.Create); // 5\n using BinaryWriter binaryWriter = new(fileStream, Encoding.UTF8); // 6\n binaryWriter.Write(jsonString); // 7\n}\n```\n\nHowever, we need to update the second part. Instead of a string array, we create a new `HitCount` object and set the three public fields to the values of the hit counters (2).\n\nUsing `JsonUtility.ToJson()`, we can transform this object to a string (3). If you pass in `true` for the second, optional parameter, `prettyPrint`, the string will be formatted in a nicely readable way.\n\nFinally, as before, we create a `FileStream` (5) and `BinaryWriter` (6) and use `Write()` (7) to write the `jsonString` into the file.\n\nThen, when the game starts (8), we need to read the data back into the hit count fields:\n\n```cs\nprivate void Start() // 8\n{\n // Check if the file exists to avoid errors when opening a non-existing file.\n if (File.Exists(HitCountFile)) // 9\n {\n // Open a stream to the file that the `BinaryReader` can use to read data.\n // They need to be disposed at the end, so `using` is good practice\n // because it does this automatically.\n using FileStream fileStream = File.Open(HitCountFile, FileMode.Open); // 10\n using BinaryReader binaryReader = new(fileStream); // 11\n\n // 12\n var jsonString = binaryReader.ReadString();\n var hitCount = JsonUtility.FromJson(jsonString);\n\n // 13\n if (hitCount != null)\n {\n // 14\n hitCountUnmodified = hitCount.Unmodified;\n hitCountShift = hitCount.Shift;\n hitCountControl = hitCount.Control;\n }\n }\n}\n```\n\nWe check if the file exists first (9). In case it does, we saved data before and can proceed reading it.\n\nUsing a `FileStream` again (10) with `FileMode.Open`, we create a `BinaryReader` (11). Since we are reading a json string, we need to use `ReadString()` (12) this time and then transform it via `FromJson()` into a `HitCount` object.\n\nIf this worked out (13), we can then extract `hitCountUnmodified`, `hitCountShift`, and `hitCountControl` from it (14).\n\nNote that the data is saved in a binary format, which is, of course, not safe. Tools to read binary are available and easy to find. 
For example, this `BinaryReaderWriterExampleJson` file read with `bless` would result in this:\n\nYou can clearly identify the three values we saved. While the `BinaryReader` and `BinaryWriter` are a simple and easy way to save data, and they at least keep the data from being immediately readable, they are by no means safe.\n\nIn a future tutorial, we will look at encryption and how to improve the safety of your data along with other useful features like migrations and performance improvements.\n\n## Conclusion\n\nIn this tutorial, we learned how to utilize `BinaryReader` and `BinaryWriter` to save data. `JsonUtility` helps structure this data. They are simple and easy to use, and not much code is required.\n\nWhat are the downsides, though?\n\nFirst of all, we open, write to, and save the file every single time the capsule is clicked. While that's not a problem in this case and certainly fine for some games, it will not perform well once your game gets more complex and many save operations are made.\n\nAlso, the data is not protected in any way and, as shown above, can easily be read and edited by the player with freely available tools.\n\nThe more complex your data is, the more complex it will be to actually maintain this approach. What if the structure of the `HitCount` object changes? You have to account for that when loading an older version of the JSON. Migrations are necessary.\n\nIn the following tutorials, we will have a look at how databases can make this job a lot easier and take care of the problems we face here.\n\nPlease provide feedback and ask any questions in the Realm Community Forum.", "format": "md", "metadata": {"tags": ["Realm", "Unity", ".NET"], "pageDescription": "Persisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well. In this tutorial series, we will explore the options given to us by Unity and third-party libraries.", "contentType": "Tutorial"}, "title": "Saving Data in Unity3D Using BinaryReader and BinaryWriter", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/typescript/twitter-trend-analyser", "action": "created", "body": "# Trends analyser\n\n## Creators\nOsama Bin Junaid contributed this project.\n\n## About the Project\nThe project uses the Twitter API to fetch real-time trends data and save it into MongoDB for later analysis.\n \n ## Inspiration\n In today's world, it's very hard to keep up with everything that's happening around us. Twitter is one of the first places where things get reported, so my motive was to build an application through which one can see all trends in one place, and also why something is trending. (I'm still trying to solve this last part.)\n \n ## Why MongoDB?\n I used MongoDB because of its document nature: I can directly save my JSON objects without breaking them down into tables, and it's also easy to design schemas and their relationships using MongoDB.\n \n ## How It Works\n It works by repeatedly invoking eight serverless functions on IBM Cloud at 15-minute intervals. These functions call the Twitter APIs to get the data and do a little transformation before saving the data to MongoDB. 
\n\nThe backend then serves the data to the react frontend.\n\nGitHub repo frontend: https://github.com/ibnjunaid/trendsFunction\nGitHub repo backend: https://github.com/ibnjunaid/trendsBackend", "format": "md", "metadata": {"tags": ["TypeScript", "Atlas", "JavaScript"], "pageDescription": "Analyse how hashtags on twitter change over time. ", "contentType": "Code Example"}, "title": "Trends analyser", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/mongodb-geospatial-queries-csharp", "action": "created", "body": "# MongoDB Geospatial Queries in C#\n\n# MongoDB Geospatial Queries with C#\n\nIf you've ever glanced at a map to find the closest lunch spots to you, you've most likely used a geospatial query under the hood! Using GeoJSON objects to store geospatial data in MongoDB Atlas, you can create your own geospatial queries for your application. In this tutorial, we'll see how to work with geospatial queries in the MongoDB C# driver.\n\n## Quick Jump\n\n* What are Geospatial Queries?\n * GeoJSON\n* Prerequisities\n* Atlas Setup\n * Download and Import Sample Data\n * Create 2dsphere Indexes\n* Creating the Project\n* Geospatial Query Code Examples\n * $near\n * $geoWithin\n * $geoIntersects\n * $geoWithin and $center Combined\n * $geoWithin and $centerSphere Combined\n* Spherical Geometry Calculations with Radians\n## What are Geospatial Queries?\n\nGeospatial queries allow you to work with geospatial data. Whether that's on a 2d space (like a flat map) or 3d space (when viewing a spherical representation of the world), geospatial data allows you to find areas and places in reference to a point or set of points.\n\nThese might sound complicated, but you've probably encountered these use cases in everyday life: searching for points of interest in a new city you're exploring, discovering which coffee shops are closest to you, or finding every bakery within a three-mile radius of your current position (for science!).\n\nThese kinds of queries can easily be done with special geospatial query operators in MongoDB. And luckily for us, these operators are also implemented in most of MongoDB's drivers, including the C# driver we'll be using in this tutorial.\n\n### GeoJSON\n\nOne important aspect of working with geospatial data is something called the GeoJSON format. It's an open standard for representing simple geographical features and makes it easier to work with geospatial data. Here's what some of the GeoJSON object types look like:\n\n``` JSON\n// Point GeoJSON type\n{\n \"type\" : \"Point\",\n \"coordinates\" : -115.20146200000001, 36.114704000000003]\n}\n\n// Polygon GeoJSON type\n{\n \"type\": \"Polygon\",\n \"coordinates\": [\n [\n [100.0, 0.0], \n [101.0, 0.0], \n [101.0, 1.0],\n [100.0, 1.0], \n [100.0, 0.0]\n ]\n ]\n}\n```\n\nWhile MongoDB supports storing your geospatial data as [legacy coordinate pairs, it's preferred to work with the GeoJSON format as it makes complicated queries possible and much simpler.\n\n> \ud83d\udca1 Whether working with coordinates in the GeoJSON format or as legacy coordinate pairs, queries require the **longitude** to be passed first, followed by **latitude**. This might seem \"backwards\" compared to what you may be used to, but be assured that this format actually follows the `(X, Y)` order of math! 
Keep this in mind as MongoDB geospatial queries will also require coordinates to be passed in `longitude, latitude]` format where applicable.\n\nAlright, let's get started with the tutorial!\n\n## Prerequisites\n\n* [Visual Studio Community (2019 or higher)\n* MongoDB C#/.NET Driver (latest preferred, minimum 2.11)\n* MongoDB Atlas cluster\n* mongosh\n\n## Atlas Setup\n\nTo make this tutorial easier to follow along, we'll work with the `restaurants` and `neighborhoods` datasets, both publicly available in our documentation. They are both `JSON` files that contain a sizable amount of New York restaurant and neighborhood data already in GeoJSON format!\n\n### Download Sample Data and Import into Your Atlas Cluster\n\nFirst, download this `restaurants.json` file and this `neighborhoods.json` file.\n\n> \ud83d\udca1 These files differ from the the `sample_restaurants` dataset that can be loaded in Atlas! While the collection names are the same, the JSON files I'm asking you to download already have data in GeoJSON format, which will be required for this tutorial.\n\nThen, follow these instructions to import both datasets into your cluster.\n\n> \ud83d\udca1 When you reach Step 5 of importing your data into your cluster (*Run mongoimport*), be sure to keep track of the `database` and `collection` names you pass into the command. We'll need them later! If you want to use the same names as in this tutorial, my database is called `sample-geo` and my collections are called `restaurants` and `neighborhoods` .\n\n### Create 2dsphere Indexes\n\nLastly, to work with geospatial data, a 2dsphere index needs to be created for each collection. You can do this in the MongoDB Atlas portal.\n\nFirst navigate to your cluster and click on \"Browse Collections\":\n\nYou'll be brought to your list of collections. Find your restaurant data (if following along, it will be a collection called `restaurants` within the `sample-geo` database).\n\nWith the collection selected, click on the \"Indexes\" tab:\n\nClick on the \"CREATE INDEX\" button to open the index creation wizard. In the \"Fields\" section, you'll specify which *field* to create an index on, as well as what *type* of index. For our tutorial, clear the input, and copy and paste the following:\n\n``` JSON\n{ \"location\": \"2dsphere\" }\n```\n\nClick \"Review\". You'll be asked to confirm creating an index on `sample-geo.restaurants` on the field `{ \"location\": \"2dsphere\" }` (remember, if you aren't using the same database and collection names, confirm your index is being created on `yourDatabaseName.yourCollectionName`). Click \"Confirm.\"\n\nLikewise, find your neighborhood data (`sample-geo.neighborhoods` unless you used different names). Select your `neighborhoods` collection and do the same thing, this time creating this index:\n\n``` JSON\n{ \"geometry\": \"2dsphere\" }\n```\n\nAlmost instantly, the indexes will be created. You'll know the index has been successfully created once you see it listed under the Indexes tab for your selected collection.\n\nNow, you're ready to work with your restaurant and neighborhood data!\n\n## Creating the Project\n\nTo show these samples, we'll be working within the context of a simple console program. 
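One quick aside before we build it: if you prefer creating those 2dsphere indexes from code instead of the Atlas UI, the C# driver can do that too. The snippet below is only a sketch; it assumes your connection string is available in an `ATLAS_URI` environment variable and that you kept the database, collection, and field names from the setup above.

``` csharp
using System;
using MongoDB.Bson;
using MongoDB.Driver;

// Sketch: create the 2dsphere indexes programmatically instead of via the Atlas UI.
var database = new MongoClient(Environment.GetEnvironmentVariable("ATLAS_URI"))
    .GetDatabase("sample-geo");

database.GetCollection<BsonDocument>("restaurants").Indexes.CreateOne(
    new CreateIndexModel<BsonDocument>(
        Builders<BsonDocument>.IndexKeys.Geo2DSphere("location")));

database.GetCollection<BsonDocument>("neighborhoods").Indexes.CreateOne(
    new CreateIndexModel<BsonDocument>(
        Builders<BsonDocument>.IndexKeys.Geo2DSphere("geometry")));
```

If an index with the same definition already exists, the call simply succeeds without creating anything new.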
We'll implement each geospatial query operator as its own method and log the corresponding MongoDB Query it executes.\n\nAfter creating a new console project, add the MongoDB Driver to your project using the Package Manager or the .NET CLI:\n\n*Package Manager*\n\n```\nInstall-Package MongoDB.Driver\n```\n\n*.NET CLI*\n\n```\ndotnet add package MongoDB.Driver\n```\n\nNext, add the following dependencies to your `Program.cs` file:\n\n``` csharp\nusing MongoDB.Bson;\nusing MongoDB.Bson.IO;\nusing MongoDB.Bson.Serialization;\nusing MongoDB.Driver;\nusing MongoDB.Driver.GeoJsonObjectModel;\nusing System;\n```\n\nFor all our examples, we'll be using the following `Restaurant` and `Neighborhood` classes as our models:\n\n``` csharp\npublic class Restaurant\n{\n public ObjectId Id { get; set; }\n public GeoJsonPoint Location { get; set; }\n public string Name { get; set; }\n}\n```\n\n``` csharp\npublic class Neighborhood\n{\n public ObjectId Id { get; set; }\n public GeoJsonPoint Geometry { get; set; }\n public string Name { get; set; }\n}\n```\n\nAdd both to your application. For simplicity, I've added them as additional classes in my `Program.cs` file.\n\nNext, we need to connect to our cluster. Place the following code within the `Main` method of your program:\n\n``` csharp\n// Be sure to update yourUsername, yourPassword, yourClusterName, and yourProjectId to your own! \n// Similarly, also update \"sample-geo\", \"restaurants\", and \"neighborhoods\" to whatever you've named your database and collections.\nvar client = new MongoClient(\"mongodb+srv://yourUsername:yourPassword@yourClusterName.yourProjectId.mongodb.net/sample-geo?retryWrites=true&w=majority\");\nvar database = client.GetDatabase(\"sample-geo\");\nvar restaurantCollection = database.GetCollection(\"restaurants\");\nvar neighborhoodCollection = database.GetCollection(\"neighborhoods\");\n```\n\nFinally, we'll add a helper method called `Log()` within our `Program` class. This will take the geospatial queries we write in C# and log the corresponding MongoDB Query to the console. This gives us an easy way to copy it and use elsewhere.\n\n``` csharp\nprivate static void Log(string exampleName, FilterDefinition filter)\n{\n var serializerRegistry = BsonSerializer.SerializerRegistry;\n var documentSerializer = serializerRegistry.GetSerializer();\n var rendered = filter.Render(documentSerializer, serializerRegistry);\n Console.WriteLine($\"{exampleName} example:\");\n Console.WriteLine(rendered.ToJson(new JsonWriterSettings { Indent = true }));\n Console.WriteLine();\n}\n```\n\nWe now have our structure in place. Now we can create the geospatial query methods!\n\n## Geospatial Query Code Examples in C#\n\nSince MongoDB has dedicated operators for geospatial queries, we can take advantage of the C# driver's filter definition builder to build type-safe queries. Using the filter definition builder also provides both compile-time safety and refactoring support in Visual Studio, making it a great way to work with geospatial queries.\n\n### $near Example in C#\n\nThe `.Near` filter implements the $near geospatial query operator. Use this when you want to return geospatial objects that are in proximity to a center point, with results sorted from nearest to farthest.\n\nIn our program, let's create a `NearExample()` method that does that. 
Let's search for restaurants that are *at most* 10,000 meters away and *at least* 2,000 meters away from a Magnolia Bakery (on Bleecker Street) in New York:\n\n``` cs\nprivate static void NearExample(IMongoCollection collection)\n{\n // Instantiate builder\n var builder = Builders.Filter;\n\n // Set center point to Magnolia Bakery on Bleecker Street\n var point = GeoJson.Point(GeoJson.Position(-74.005, 40.7358879));\n\n // Create geospatial query that searches for restaurants at most 10,000 meters away,\n // and at least 2,000 meters away from Magnolia Bakery (AKA, our center point)\n var filter = builder.Near(x => x.Location, point, maxDistance: 10000, minDistance: 2000);\n\n // Log filter we've built to the console using our helper method\n Log(\"$near\", filter);\n}\n```\n\nThat's it! Whenever we call this method, a `$near` query will be generated that you can copy and paste from the console. Feel free to paste that query into the data explorer in Atlas to see which restaurants match the filter (don't forget to change `\"Location\"` to a lowercase `\"location\"` when working in Atlas). In a future post, we'll delve into how to visualize these results on a map!\n\nFor now, you can call this method (and all other following methods) from the `Main` method like so:\n\n```cs\nstatic void Main(string] args)\n{\n var client = new MongoClient(\"mongodb+srv://yourUsername:yourPassword@yourClusterName.yourProjectId.mongodb.net/sample-geo?retryWrites=true&w=majority\");\n var database = client.GetDatabase(\"sample-geo\");\n var restaurantCollection = database.GetCollection(\"restaurants\");\n var neighborhoodCollection = database.GetCollection(\"neighborhoods\");\n\n NearExample(restaurantCollection);\n // Add other methods here as you create them\n}\n```\n\n> \u26a1 Feel free to modify this code! Change your center point by changing the coordinates or let the method accept variables for the `point`, `maxDistance`, and `minDistance` parameters instead of hard-coding it.\n\nIn most use cases, `.Near` will do the trick. It measures distances against a flat, 2d plane ([Euclidean plane) that will be accurate for most applications. However, if you need queries to run against spherical, 3d geometry when measuring distances, use the `.NearSphere` filter (which implements the `$nearSphere` operator). It accepts the same parameters as `.Near`, but will calculate distances using spherical geometry.\n\n### $geoWithin Example in C#\n\nThe `.GeoWithin` filter implements the $geoWithin geospatial query operator. Use this when you want to return geospatial objects that exist entirely within a specified shape, either a GeoJSON `Polygon`, `MultiPolygon`, or shape defined by legacy coordinate pairs. 
As you'll see in a later example, that shape can be a circle and can be generated using the `$center` operator.\n\nTo implement this in our program, let's create a `GeoWithinExample()` method that searches for restaurants within an area\u2014specifically, this area:\n\nIn code, we describe this area as a polygon and work with it as a list of points:\n\n``` cs\nprivate static void GeoWithinExample(IMongoCollection collection)\n{\n var builder = Builders.Filter;\n\n // Build polygon area to search within.\n // This must always begin and end with the same coordinate \n // to \"close\" the polygon and fully surround the area.\n var coordinates = new GeoJson2DCoordinates]\n {\n GeoJson.Position(-74.0011869, 40.752482),\n GeoJson.Position(-74.007384, 40.743641),\n GeoJson.Position(-74.001856, 40.725631),\n GeoJson.Position(-73.978511, 40.726793),\n GeoJson.Position(-73.974408, 40.755243),\n GeoJson.Position(-73.981669, 40.766716),\n GeoJson.Position(-73.998423, 40.763535),\n GeoJson.Position(-74.0011869, 40.752482),\n };\n var polygon = GeoJson.Polygon(coordinates);\n\n // Create geospatial query that searches for restaurants that fully fall within the polygon.\n var filter = builder.GeoWithin(x => x.Location, polygon);\n\n // Log the filter we've built to the console using our helper method.\n Log(\"$geoWithin\", filter);\n}\n```\n\n### $geoIntersects Example in C#\n\nThe `.GeoIntersects` filter implements the [$geoIntersects geospatial query operator. Use this when you want to return geospatial objects that span the same area as a specified object, usually a point.\n\nFor our program, let's create a `GeoIntersectsExample()` method that checks if a specified point falls within one of the neighborhoods stored in our neighborhoods collection:\n\n``` cs\nprivate static void GeoIntersectsExample(IMongoCollection collection)\n{\n var builder = Builders.Filter;\n\n // Set specified point. For example, the location of a user (with granted permission)\n var point = GeoJson.Point(GeoJson.Position(-73.996284, 40.720083));\n\n // Create geospatial query that searches for neighborhoods that intersect with specified point.\n // In other words, return results where the intersection of a neighborhood and the specified point is non-empty.\n var filter = builder.GeoIntersects(x => x.Geometry, point);\n\n // Log the filter we've built to the console using our helper method.\n Log(\"$geoIntersects\", filter);\n}\n```\n\n> \ud83d\udca1 For this method, an overloaded `Log()` method that accepts a `FilterDefinition` of type `Neighborhood` needs to be created.\n\n### Combined $geoWithin and $center Example in C#\n\nAs we've seen, the `$geoWithin` operator returns geospatial objects that exist entirely within a specified shape. We can set this shape to be a circle using the `$center` operator.\n\nLet's create a `GeoWithinCenterExample()` method in our program. 
This method will search for all restaurants that exist within a circle that we have centered on the Brooklyn Bridge:\n\n``` cs\nprivate static void GeoWithinCenterExample(IMongoCollection collection)\n{\n var builder = Builders.Filter;\n\n // Set center point to Brooklyn Bridge\n var point = GeoJson.Point(GeoJson.Position(-73.99631, 40.705396));\n\n // Create geospatial query that searches for restaurants that fall within a radius of 20 (units used by the coordinate system)\n var filter = builder.GeoWithinCenter(x => x.Location, point.Coordinates.X, point.Coordinates.Y, 20);\n Log(\"$geoWithin.$center\", filter);\n}\n```\n\n### Combined $geoWithin and $centerSphere Example in C#\n\nAnother way to query for places is by combining the `$geoWithin` and `$centerSphere` geospatial query operators. This differs from the `$center` operator in a few ways:\n\n* `$centerSphere` uses spherical geometry while `$center` uses flat geometry for calculations.\n* `$centerSphere` works with both GeoJSON objects and legacy coordinate pairs while `$center` *only* works with and returns legacy coordinate pairs.\n* `$centerSphere` uses radians for distance, which requires additional calculations to produce an accurate query. `$center` uses the units used by the coordinate system and may be less accurate for some queries.\n\nWe'll get to our example method in a moment, but first, a little context on how to calculate radians for spherical geometry!\n\n#### Spherical Geometry Calculations with Radians\n\n> \ud83d\udca1 An important thing about working with `$centerSphere` (and any other geospatial operators that use spherical geometry), is that it uses *radians* for distance. This means the distance units used in queries (miles or kilometers) first need to be converted to radians. Using radians properly considers the spherical nature of the object we're measuring (usually Earth) and let's the `$centerSphere` operator calculate distances correctly. \n\nUse this handy chart to convert between distances and radians:\n\n| Conversion | Description | Example Calculation |\n| ---------- | ----------- | ------------------- |\n| *distance (miles) to radians* | Divide the distance by the radius of the sphere (e.g., the Earth) in miles. The equitorial radius of the Earth in miles is approximately `3,963.2`. | Search for objects with a radius of 100 miles: `100 / 3963.2` |\n| *distance (kilometers) to radians* | Divide the distance by the radius of the sphere (e.g., the Earth) in kilometers. The equitorial radius of the Earth in kilometers is approximately `6,378.1`. | Search for objects with a radius of 100 kilometers: `100 / 6378.1` |\n| *radians to distance(miles)* | Multiply the radian measure by the radius of the sphere (e.g., the Earth). The equitorial radius of the Earth in miles is approximately `3,963.2`. | Find the radian measurement of 50 in miles: `50 * 3963.2` |\n| *radians to distance(kilometers)* | Multiply the radian measure by the radius of the sphere (e.g., the Earth). The equitorial radius of the Earth in kilometers is approximately `6,378.1`. 
| Find the radian measurement of 50 in kilometers: `50 * 6378.1` |\n\n#### Let's Get Back to the Example!\n\nFor our program, let's create a `GeoWithinCenterSphereExample()` that searches for all restaurants within a three-mile radius of Apollo Theater in Harlem:\n\n``` cs\nprivate static void GeoWithinCenterSphereExample(IMongoCollection collection)\n{\n var builder = Builders.Filter;\n\n // Set center point to Apollo Theater in Harlem\n var point = GeoJson.Point(GeoJson.Position(-73.949995, 40.81009));\n\n // Create geospatial query that searches for restaurants that fall within a 3-mile radius of Apollo Theater.\n // Notice how we pass our 3-mile radius parameter as radians (3 / 3963.2). This ensures accurate calculations with the $centerSphere operator.\n var filter = builder.GeoWithinCenterSphere(x => x.Location, point.Coordinates.X, point.Coordinates.Y, 3 / 3963.2);\n\n // Log the filter we've built to the console using our helper method.\n Log(\"$geoWithin.$centerSphere\", filter);\n}\n```\n\n## Next Time on Geospatial Queries in C#\n\nAs we've seen, working with MongoDB geospatial queries in C# is possible through its support for the geospatial query operators. In another tutorial, we'll take a look at how to visualize our geospatial query results on a map!\n\nIf you have any questions or get stuck, don't hesitate to post on our MongoDB Community Forums! And if you found this tutorial helpful, don't forget to rate it and leave any feedback. This helps us improve our articles so that they are awesome for everyone!", "format": "md", "metadata": {"tags": ["C#"], "pageDescription": "If you've ever glanced at a map to find the closest lunch spots to you, you've most likely used a geospatial query under the hood! In this tutorial, we'll learn how to store geospatial data in MongoDB Atlas and how to work with geospatial queries in the MongoDB C# driver.", "contentType": "Tutorial"}, "title": "MongoDB Geospatial Queries in C#", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/javascript/ehealth-example-app", "action": "created", "body": "# EHRS-Peru\n\n## Creators\nJorge Fatama Vera and Katherine Ruiz\n from Pontificia Universidad Cat\u00f3lica del Per\u00fa (PUCP) contributed this project.\n\n## About the project\n\nThis is a theoretical Electronic Health Record system (EHR-S) in Peru, which uses a MongoDB cluster to store clinical information.\n\nNote: This is my (Jorge) dual-thesis project for the degree in Computer Engineering (with the role of Backend Development). The MongoDB + Spring service is hosted in the \"ehrs-format\" folder of the Gitlab repository, in the \"develop\" branch. \n\n## Inspiration\n\nWhen I started this project, I didn\u2019t know about MongoDB. I think in Peru, it\u2019s a myth that MongoDB is only used in Data Analytics or Big Data. Few people talk about using MongoDB as their primary database. Most of the time, we use MySQL. SQL Server or Oracle. In university, we only learn about relational databases. When I looked into my project thesis and other Electronic Health Record Systems, I discovered many applications use MongoDB. So I started to investigate more, and I learned that MongoDB has many advantages as my primary database.\n\n## Why MongoDB?\n \nWe chose MongoDB for its horizontal scaling, powerful query capacity, and document flexibility. 
We specifically used these features to support various clinical information formats regulated by local legal regulations.\n\nWhen we chose MongoDB as our system's clinical information database, I hadn't much previous experience in that. During system development, I was able to identify the benefits that MongoDB offers. This motivated me to learn more about system development with MongoDB, both in programming forums and MongoDB University courses. Then, I wondered how the technological landscape would be favored with integrating NoSQL databases in information systems with potential in data mining and/or high storage capability.\n\nIn the medium term, we'll see more systems developed using MongoDB as the primary database in Peru universities' projects for information systems, taking advantage of the growing spread of Big Data and Data Analytics in the Latin American region.\n\n## How it works\n\nFor this project, I\u2019m using information systems from relational databases and non-relational databases. Because I discovered that they are not necessarily separated, they can both be convenient to use. \n\nThis is a system with a microservice-oriented architecture. There is a summary of each project in the GitLab repository (each folder represents a microservice):\n\n* **ehrs-eureka**: Attention Service, which works as a server for the other microservices.\n* **ehrs-gateway**: Distribution Service, which works as a load balancer, which allows the use of a single port for the requests received by the system.\n* **ehrs-auth**: Authentication Service, which manages access to the system.\n* **ehrs-auditoria**: Audit Service, which performs the audit trails of the system.\n* **ehrs-formatos**: Formats Service, which records clinical information in the database of formats.\n* **ehrs-fhir under maintenance]**: FHIR Query Service, which consults the information under the HL7 FHIR standard.\n\n## Challenges and learnings\n\nWhen I presented this idea to my advisor M.Sc. Angel Lena, he didn\u2019t know about MongoDB as a support in this area. We had to make a plan to justify the use of MongoDB as the primary database. \n\nThe challenge, later on, was how we could store all the different formats in one collection. \n\nAt the moment, we\u2019ve been working with the free cluster. As the program will scale and go into the deployment phase, I probably need to increase my cluster. That will be a challenge for me because the investment can be a problem. Besides that, there are not many other projects built with MongoDB in my university, and it is sometimes difficult for me to get support. \n\nTo solve this problem, I\u2019ve been working on increasing my knowledge of MongoDB. I\u2019ve been taking classes at [MongoDB University. I\u2019ve completed the basics course and the cluster administration course. There are not many certified MongoDB professionals in my country; only two, I believe, and I would like to become the third one. \n\nWhen I started working on my thesis, I didn\u2019t imagine that I had the opportunity to share my project in this way, and I\u2019m very excited that I can. I hope that MongoDB will work on a Student ambassador program for universities in the future. 
Universities still need to learn a lot about MongoDB, and it\u2019s exciting that an ambassador program is in the works.\n\n", "format": "md", "metadata": {"tags": ["JavaScript", "Atlas"], "pageDescription": " EHRS PUCP, a theoretical national Electronic Health System in Peru", "contentType": "Code Example"}, "title": "EHRS-Peru", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/unit-test-atlas-serverless-functions", "action": "created", "body": "# How to Write Unit Tests for MongoDB Atlas Functions\n\nI recently built a web app for my team using Atlas Functions. I wanted to be able to iterate quickly and frequently deploy my changes. To do so, I needed to implement DevOps infrastructure that included a strong foundation of test automation. Unfortunately, I didn't know how to do any of that for apps built using Atlas Functions.\n\nIn this series, I'll walk you through what I discovered. I'll share how you can build a suite of automated tests and a\nCI/CD pipeline for web applications that are built on serverless functions.\n\nToday, I'll explain how you can write automated unit tests for Atlas Functions. Below is a summary of what we'll cover:\n\n- About the Social Stats App\n- App Architecture\n - Serverless Architecture and Atlas App Services\n - Social Stats Architecture\n- Unit Testing Atlas Functions\n - Modifying Functions to be Testable\n - Unit Testing Self-Contained Functions\n - Unit Testing Functions Using Mocks\n- Wrapping Up\n\n>\n>\n>Prefer to learn by video? Many of the concepts I cover in this series\n>are available in this video.\n>\n>\n\n## About the Social Stats App\n\nBefore I jump into how I tested my app, I want to give you a little background on what the app does and how it's built.\n\nMy teammates and I needed a way to track our Twitter statistics together.\n\nTwitter provides a way for their users to download Twitter statistics. The download is a comma-separated value (CSV) file that contains a row of statistics for each Tweet. If you want to try it out, navigate to and choose to export your data by Tweet.\n\nOnce my teammates and I downloaded our Tweet statistics, we needed a way to regularly combine our stats without duplicating data from previous CSV files. So I decided to build a web app.\n\nThe app is really light, and, to be completely honest, really ugly. The app currently consists of two pages.\n\nThe first page allows anyone on our team to upload their Twitter statistics CSV file.\n\nThe second page is a dashboard where we can slice and dice our data. Anyone on our team can access the dashboard to pull individual stats or grab combined stats. The dashboard is handy for both my teammates and our management chain.\n\n## App Architecture\n\nLet's take a look at how I architected this app, so we can understand how I tested it.\n\n### Serverless Architecture and Atlas\n\nThe app is built using a serverless architecture. The term \"serverless\" can be a bit misleading. Serverless doesn't mean the app uses no servers. Serverless means that developers don't have to manage the servers themselves. (That's a major win in my book!)\n\nWhen you use a serverless architecture, you write the code for a function. 
The cloud provider handles executing the function on its own servers whenever the function needs to be run.\n\nServerless architectures have big advantages over traditional, monolithic applications:\n\n- **Focus on what matters.** Developers don't have to worry about servers, containers, or infrastructure. Instead, we get to focus on the application code, which could lead to reduced development time and/or more innovation.\n- **Pay only for what you use.** In serverless architectures, you typically pay for the compute power you use and the data you're transferring. You don't typically pay for the servers when they are sitting idle. This can result in big cost savings.\n- **Scale easily.** The cloud provider handles scaling your functions. If your app goes viral, the development and operations teams don't need to stress.\n\nI've never been a fan of managing infrastructure, so I decided to build the Social Stats app using a serverless architecture.\n\nMongoDB Atlas offers several serverless cloud services \u2013 including Atlas Data API, Atlas GraphQL API, and Atlas Triggers \u2013 that make building serverless apps easy. \n\n### Social Stats Architecture\n\nLet's take a look at how the Social Stats app is architected. Below is a flow diagram of how the pieces of the app work together.\n\nWhen a user wants to upload their Twitter statistics CSV file, they navigate to `index.html` in their browser. `index.html` could be hosted anywhere. I chose to host `index.html` using Static Hosting. I like the simplicity\nof keeping my hosted files and serverless functions in one project that is hosted on one platform.\n\nWhen a user chooses to upload their Twitter statistics CSV file,`index.html` encodes the CSV file and passes it to the `processCSV` Atlas Function.\n\nThe `processCSV` function decodes the CSV file and passes the results to the `storeCsvInDb` Atlas Function.\n\nThe `storeCsvInDb` function calls the `removeBreakingCharacters` Atlas Function that removes any emoji or other breaking characters from the data. Then the `storeCsvInDb` function converts the cleaned data to JSON (JavaScript Object Notation) documents and stores those documents in a MongoDB database hosted by Atlas.\n\nThe results of storing the data in the database are passed up the function chain.\n\nThe dashboard that displays the charts with the Twitter statistics is hosted by MongoDB Charts. The\ngreat thing about this dashboard is that I didn't have to do any programming to create it. I granted Charts access to my database, and then I was able to use the Charts UI to create charts with customizable filters.\n\n(Sidenote: Linking to a full Charts dashboard worked fine for my app, but I know that isn't always ideal. Charts also allows you to embed individual charts in your app through an iframe or SDK.)\n\n## Unit Testing Atlas Functions\n\nNow that I've explained what I had to test, let's explore how I tested\nit. Today, we'll talk about the tests that form the base of the testing pyramid:unit tests.\n\nUnit tests are designed to test the small units of your application. In this case, the units we want to test are serverless functions. Unit tests should have a clear input and output. They should not test how the units interact with each other.\n\nUnit tests are valuable because they:\n\n1. Are typically faster to write than other automated tests.\n2. Can be executed quickly and independently as they do not rely on other integrations and systems.\n3. 
Reveal bugs early in the software development lifecycle when they are cheapest to fix.\n4. Give developers confidence we aren't introducing regressions as we update and refactor other parts of the code.\n\nMany JavaScript testing frameworks exist. I chose to use\nJest for building my unit tests as it's a popular\nchoice in the JavaScript community. The examples below use Jest, but you can apply the principles described in the examples below to any testing framework.\n\n### Modifying Atlas Functions to be Testable\n\nEvery Atlas Function assigns a function to the global\nvariable `exports`. Below is the code for a boilerplate Function that returns `\"Hello, world!\"`\n\n``` javascript\nexports = function() {\n return \"Hello, world!\";\n};\n```\n\nThis function format is problematic for unit testing: calling this function from another JavaScript file is impossible.\n\nTo workaround this problem, we can add the following three lines to the bottom of Function source files:\n\n``` javascript\nif (typeof module === 'object') {\n module.exports = exports;\n}\n```\n\nLet's break down what's happening here. If the type of the module is an`object`, the function is being executed outside of an Atlas environment, so we need to assign our function (stored in `exports`) to `module.exports`. If the type of the module is not an `object`, we can safely assume the function is being executed in a Atlas environment, so we don't need to do anything special.\n\nOnce we've added these three lines to our serverless functions, we are ready to start writing unit tests.\n\n### Unit Testing Self-Contained Functions\n\nUnit testing functions is easiest when the functions are self-contained, meaning that the functions don't call any other functions or utilize any services like a database. So let's start there.\n\nLet's begin by testing the `removeBreakingCharacters` function. This function removes emoji and other breaking characters from the Twitter statistics. Below is the source code for the `removeBreakingCharacters` function.\n\n``` javascript\nexports = function (csvTweets) {\n csvTweets = csvTweets.replace(/^a-zA-Z0-9\\, \"\\/\\\\\\n\\`~!@#$%^&*()\\-_\u2014+=[\\]{}|:;\\'\"<>,.?/']/g, '');\n return csvTweets;\n};\n\nif (typeof module === 'object') {\n module.exports = exports;\n}\n```\n\nTo test this function, I created a new test file named\n`removeBreakingCharacters.test.js`. I began by importing the`removeBreakingCharacters` function.\n\n``` javascript\nconst removeBreakingCharacters = require('../../../functions/removeBreakingCharacters/source.js');\n```\n\nNext I imported several constants from\n[constants.js. Each constant represents a row of data in a Twitter statistics CSV file.\n\n``` javascript\nconst { header, validTweetCsv, emojiTweetCsv, emojiTweetCsvClean, specialCharactersTweetCsv } = require('../../constants.js');\n```\n\nThen I was ready to begin testing. I began with the simplest case: a single valid Tweet.\n\n``` javascript\ntest('SingleValidTweet', () => {\n const csv = header + \"\\n\" + validTweetCsv;\n expect(removeBreakingCharacters(csv)).toBe(csv);\n})\n```\n\nThe `SingleValidTweet` test creates a constant named `csv`. `csv` is a combination of a valid header, a new line character, and a valid Tweet. Since the Tweet is valid, `removeBreakingCharacters` shouldn't remove any characters. 
The test checks that when `csv` is passed to the `removeBreakingCharacters` function, the function returns a String equal to `csv`.\n\nEmojis were a big problem that were breaking my app, so I decided to create a test just for them.\n\n``` javascript\ntest('EmojiTweet', () => {\n const csvBefore = header + \"\\n\" + emojiTweetCsv;\n const csvAfter = header + \"\\n\" + emojiTweetCsvClean;\n expect(removeBreakingCharacters(csvBefore)).toBe(csvAfter);\n})\n```\n\nThe `EmojiTweet` test creates two constants:\n\n- `csvBefore` stores a valid header, a new line character, and stats\n about a Tweet that contains three emoji.\n- `csvAfter` stores the same valid header, a new line character, and stats about the same Tweet except the three emojis have been removed.\n\nThe test then checks that when I pass the `csvBefore` constant to the `removeBreakingCharacters` function, the function returns a String equal to `csvAfter`.\n\nI created other unit tests for the `removeBreakingCharacters` function. You can find the complete set of unit tests in removeBreakingCharacters.test.js.\n\n### Unit Testing Functions Using Mocks\n\nUnfortunately, unit testing most serverless functions will not be as straightforward as the example above. Serverless functions tend to rely on other functions and services.\n\nThe goal of unit testing is to test individual units\u2014not how the units interact with each other.\n\nWhen a function relies on another function or service, we can simulate the function or service with a mock\nobject. Mock objects allow developers to \"mock\" what a function or service is doing. The mocks allows us to test individual units.\n\nLet's take a look at how I tested the `storeCsvInDb` function. Below is the source code for the function.\n\n``` javascript\nexports = async function (csvTweets) {\n const CSV = require(\"comma-separated-values\");\n\n csvTweets = context.functions.execute(\"removeBreakingCharacters\", csvTweets);\n\n // Convert the CSV Tweets to JSON Tweets\n jsonTweets = new CSV(csvTweets, { header: true }).parse();\n\n // Prepare the results object that we will return\n var results = {\n newTweets: ],\n updatedTweets: [],\n tweetsNotInsertedOrUpdated: []\n }\n\n // Clean each Tweet and store it in the DB\n jsonTweets.forEach(async (tweet) => {\n\n // The Tweet ID from the CSV is being rounded, so we'll manually pull it out of the Tweet link instead\n delete tweet[\"Tweet id\"];\n\n // Pull the author and Tweet id out of the Tweet permalink\n const link = tweet[\"Tweet permalink\"];\n const pattern = /https?:\\/\\/twitter.com\\/([^\\/]+)\\/status\\/(.*)/i;\n const regexResults = pattern.exec(link);\n tweet.author = regexResults[1];\n tweet._id = regexResults[2]\n\n // Generate a date from the time string\n tweet.date = new Date(tweet.time.substring(0, 10));\n\n // Upsert the Tweet, so we can update stats for existing Tweets\n const result = await context.services.get(\"mongodb-atlas\").db(\"TwitterStats\").collection(\"stats\").updateOne(\n { _id: tweet._id },\n { $set: tweet },\n { upsert: true });\n\n if (result.upsertedId) {\n results.newTweets.push(tweet._id);\n } else if (result.modifiedCount > 0) {\n results.updatedTweets.push(tweet._id);\n } else {\n results.tweetsNotInsertedOrUpdated.push(tweet._id);\n }\n });\n return results;\n};\n\nif (typeof module === 'object') {\n module.exports = exports;\n}\n```\n\nAt a high level, the `storeCsvInDb` function is doing the following:\n\n- Calling the `removeBreakingCharacters` function to remove breaking characters.\n- Converting the 
Tweets in the CSV to JSON documents.\n- Looping through the JSON documents to clean and store each one in the database.\n- Returning an object that contains a list of Tweets that were inserted, updated, or unable to be inserted or updated.\n\nTo unit test this function, I created a new file named\n`storeCsvInDB.test.js`. The top of the file is very similar to the top of `removeBreakingCharacters.test.js`: I imported the function I wanted to test and imported constants.\n\n``` javascript\nconst storeCsvInDb = require('../../../functions/storeCsvInDb/source.js');\n\nconst { header, validTweetCsv, validTweetJson, validTweetId, validTweet2Csv, validTweet2Id, validTweet2Json, validTweetKenId, validTweetKenCsv, validTweetKenJson } = require('../../constants.js');\n```\n\nThen I began creating mocks. The function interacts with the database, so I knew I needed to create mocks to support those interactions. The function also calls the `removeBreakingCharacters` function, so I created a mock for that as well.\n\nI added the following code to `storeCsvInDB.test.js`.\n\n``` javascript\nlet updateOne;\n\nbeforeEach(() => {\n // Mock functions to support context.services.get().db().collection().updateOne()\n updateOne = jest.fn(() => {\n return result = {\n upsertedId: validTweetId\n }\n });\n\n const collection = jest.fn().mockReturnValue({ updateOne });\n const db = jest.fn().mockReturnValue({ collection });\n const get = jest.fn().mockReturnValue({ db });\n\n collection.updateOne = updateOne;\n db.collection = collection;\n get.db = db;\n\n // Mock the removeBreakingCharacters function to return whatever is passed to it\n // Setup global.context.services\n global.context = {\n functions: {\n execute: jest.fn((functionName, csvTweets) => { return csvTweets; })\n },\n services: {\n get\n }\n }\n});\n```\n\nJest runs the [beforeEach function before each test in the given file. I chose to put the instantiation of the mocks inside of `beforeEach` so that I could add checks for how many times a particular mock is called in a given test case. Putting mocks inside of `beforeEach` can also be handy when we want to change what the mock returns the first time it is called versus the second.\n\nOnce I had created my mocks, I was ready to begin testing. I created a test for the simplest case: a single tweet.\n\n``` javascript\ntest('Single tweet', async () => {\n\n const csvTweets = header + \"\\n\" + validTweetCsv;\n\n expect(await storeCsvInDb(csvTweets)).toStrictEqual({\n newTweets: validTweetId],\n tweetsNotInsertedOrUpdated: [],\n updatedTweets: []\n });\n\n expect(context.functions.execute).toHaveBeenCalledWith(\"removeBreakingCharacters\", csvTweets);\n expect(context.services.get.db.collection.updateOne).toHaveBeenCalledWith(\n { _id: validTweetId },\n {\n $set: validTweetJson\n },\n { upsert: true });\n})\n```\n\nLet's walk through what this test is doing.\n\nJust as we saw in earlier tests in this post, I began by creating a constant to represent the CSV Tweets. `csvTweets` consists of a valid header, a newline character, and a valid Tweet.\n\nThe test then calls the `storeCsvInDb` function, passing the `csvTweets` constant. 
The test asserts that the function returns an object that shows that the Tweet we passed was successfully stored in the database.\n\nNext, the test checks that the mock of the `removeBreakingCharacters` function was called with our `csvTweets` constant.\n\nFinally, the test checks that the database's `updateOne` function was called with the arguments we expect.\n\nAfter I finished this unit test, I wrote an additional test that checks the `storeCsvInDb` function correctly handles multiple Tweets.\n\nYou can find the complete set of unit tests in\n[storeCsvInDB.test.js.\n\n## Wrapping Up\n\nUnit tests can be incredibly valuable. They are one of the best ways to find bugs early in the software development lifecycle. They also lay a strong foundation for CI/CD.\n\nKeep in mind the following two tips as you write unit tests for Atlas Functions:\n\n- Modify the module exports in the source file of each Function, so you will be able to call the Functions from your test files.\n- Use mocks to simulate interactions with other functions, databases, and other services.\n\nThe Social Stats application source code and associated test files are available in a GitHub repo:\n. The repo's readme has detailed instructions on how to execute the test files.\n\nBe on the lookout for the next post in this series where I'll walk you through how to write integration tests for serverless apps.\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- GitHub Repository: Social Stats\n- Video: DevOps + Atlas Functions = \ud83d\ude0d\n- Documentation: MongoDB Atlas App Services\n- MongoDB Atlas\n- MongoDB Charts\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Serverless"], "pageDescription": "Learn how to write unit tests for MongoDB Atlas Functions.", "contentType": "Tutorial"}, "title": "How to Write Unit Tests for MongoDB Atlas Functions", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/python-acid-transactions", "action": "created", "body": "# Introduction to Multi-Document ACID Transactions in Python\n\n## Introduction\n\n \n\nMulti-document transactions arrived in MongoDB 4.0 in June 2018. MongoDB has always been transactional around updates to a single document. Now, with multi-document ACID transactions we can wrap a set of database operations inside a start and commit transaction call. This ensures that even with inserts and/or updates happening across multiple collections and/or databases, the external view of the data meets ACID constraints.\n\nTo demonstrate transactions in the wild we use a trivial example app that emulates a flight booking for an online airline application. In this simplified booking we need to undertake three operations:\n\n- Allocate a seat in the `seat_collection`\n- Pay for the seat in the `payment_collection`\n- Update the count of allocated seats and sales in the `audit_collection`\n\nFor this application we will use three separate collections for these documents as detailed above. The code in `transactions_main.py` updates these collections in serial unless the `--usetxns argument` is used. We then wrap the complete set of operations inside an ACID transaction. 
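To make this concrete, here is a minimal sketch of what wrapping those three writes in a PyMongo transaction looks like (an illustrative example, not the actual code from `transaction_main.py`; the collection names match the demo output shown later):\n\n``` python\nimport pymongo\n\n# Transactions require a replica set; this URI matches the demo's txntest replica set.\nclient = pymongo.MongoClient(\"mongodb://localhost:27100,localhost:27101,localhost:27102/?replicaSet=txntest&retryWrites=true\")\n\nseats = client.SEATSDB.seats\npayments = client.PAYMENTSDB.payments\naudit = client.AUDITDB.audit\n\nwith client.start_session() as session:\n    # start_transaction() commits when the block exits cleanly and aborts on an exception.\n    with session.start_transaction():\n        seats.insert_one({\"flight_no\": \"EI178\", \"seat\": \"1A\"}, session=session)\n        payments.insert_one({\"flight_no\": \"EI178\", \"seat\": \"1A\", \"price\": 330}, session=session)\n        audit.update_one({\"audit\": \"seats\"}, {\"$inc\": {\"count\": 1}}, upsert=True, session=session)\n```\n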
The code in `transactions_main.py` is built directly using the MongoDB Python driver (Pymongo 3.7.1).\n\nThe goal of this code is to demonstrate to the Python developers just how easy it is to covert existing code to transactions if required or to port older SQL based systems.\n\n## Setting up your environment\n\nThe following files can be found in the associated github repo, pymongo-transactions.\n\n- `gitignore` : Standard Github .gitignore for Python.\n- `LICENSE` : Apache's 2.0 (standard Github) license.\n- `Makefile` : Makefile with targets for default operations.\n- `transaction_main.py` : Run a set of writes with and without transactions. Run python `transactions_main.py -h` for help.\n- `transactions_retry.py` : The file containing the transactions retry functions.\n- `watch_transactions.py` : Use a MongoDB change stream to watch collections as they change when transactions_main.py is running.\n- `kill_primary.py` : Starts a MongoDB replica set (on port 7100) and kills the primary on a regular basis. This is used to emulate an election happening in the middle of a transaction.\n- `featurecompatibility.py` : check and/or set feature compatibility for the database (it needs to be set to \"4.0\" for transactions).\n\nYou can clone this repo and work alongside us during this blog post (please file any problems on the Issues tab in Github).\n\nWe assume for all that follows that you have Python 3.6 or greater correctly installed and on your path.\n\nThe Makefile outlines the operations that are required to setup the test environment.\n\nAll the programs in this example use a port range starting at **27100** to ensure that this example does not clash with an existing MongoDB installation.\n\n## Preparation\n\nTo setup the environment you can run through the following steps manually. People that have `make` can speed up installation by using the `make install` command.\n\n### Set a python virtualenv\n\nCheck out the doc for virtualenv.\n\n``` bash\n$ cd pymongo-transactions\n$ virtualenv -p python3 venv\n$ source venv/bin/activate\n```\n\n### Install Python MongoDB Driver pymongo\n\nInstall the latest version of the PyMongo MongoDB Driver (3.7.1 at the time of writing).\n\n``` bash\npip install --upgrade pymongo\n```\n\n### Install mtools\n\nmtools is a collection of helper scripts to parse, filter, and visualize MongoDB log files (mongod, mongos). mtools also includes `mlaunch`, a utility to quickly set up complex MongoDB test environments on a local machine. For this demo we are only going to use the mlaunch program.\n\n``` bash\npip install mtools\n```\n\nThe `mlaunch` program also requires the psutil package.\n\n``` bash\npip install psutil\n```\n\nThe `mlaunch` program gives us a simple command to start a MongoDB replica set as transactions are only supported on a replica set.\n\nStart a replica set whose name is **txntest**. See the `make init_server` make target for details:\n\n``` bash\nmlaunch init --port 27100 --replicaset --name \"txntest\"\n```\n\n### Using the Makefile for configuration\n\nThere is a `Makefile` with targets for all these operations. For those of you on platforms without access to Make, it should be easy enough to cut and paste the commands out of the targets and run them on the command line.\n\nRunning the `Makefile`:\n\n``` bash\n$ cd pymongo-transactions\n$ make\n```\n\nYou will need to have MongoDB 4.0 on your path. 
There are other convenience targets for starting the demo programs:\n\n- `make notxns` : start the transactions client without using transactions.\n- `make usetxns` : start the transactions client with transactions enabled.\n- `make watch_seats` : watch the seats collection changing.\n- `make watch_payments` : watch the payment collection changing.\n\n## Running the transactions example\n\nThe transactions example consists of two python programs.\n\n- `transaction_main.py`,\n- `watch_transactions.py`.\n\n### Running transactions_main.py\n\n``` none\n$ python transaction_main.py -h\nusage: transaction_main.py -h] [--host HOST] [--usetxns] [--delay DELAY]\n [--iterations ITERATIONS]\n [--randdelay RANDDELAY RANDDELAY]\n\noptional arguments:\n -h, --help show this help message and exit\n --host HOST MongoDB URI [default: mongodb://localhost:27100,localh\n ost:27101,localhost:27102/?replicaSet=txntest&retryWri\n tes=true]\n --usetxns Use transactions [default: False]\n --delay DELAY Delay between two insertion events [default: 1.0]\n --iterations ITERATIONS\n Run N iterations. O means run forever\n --randdelay RANDDELAY RANDDELAY\n Create a delay set randomly between the two bounds\n [default: None]\n```\n\nYou can choose to use `--delay` or `--randdelay`. If you use both --delay takes precedence. The `--randdelay` parameter creates a random delay between a lower and an upper bound that will be added between each insertion event.\n\nThe `transactions_main.py` program knows to use the **txntest** replica set and the right default port range.\n\nTo run the program without transactions you can run it with no arguments:\n\n``` none\n$ python transaction_main.py\nusing collection: SEATSDB.seats\nusing collection: PAYMENTSDB.payments\nusing collection: AUDITDB.audit\nUsing a fixed delay of 1.0\n\n1. Booking seat: '1A'\n1. Sleeping: 1.000\n1. Paying 330 for seat '1A'\n2. Booking seat: '2A'\n2. Sleeping: 1.000\n2. Paying 450 for seat '2A'\n3. Booking seat: '3A'\n3. Sleeping: 1.000\n3. Paying 490 for seat '3A'\n4. Booking seat: '4A'\n4. Sleeping: 1.000\n```\n\nThe program runs a function called `book_seat()` which books a seat on a plane by adding documents to three collections. First it adds the seat allocation to the `seats_collection`, then it adds a payment to the `payments_collection`, finally it updates an audit count in the `audit_collection`. (This is a much simplified booking process used purely for illustration).\n\nThe default is to run the program **without** using transactions. To use transactions we have to add the command line flag `--usetxns`. Run this to test that you are running MongoDB 4.0 and that the correct [featureCompatibility is configured (it must be set to 4.0). 
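The repo's `featurecompatibility.py` script handles this check; as an illustrative sketch (not that script's actual code), you can query and set the value from Python with the `getParameter` and `setFeatureCompatibilityVersion` admin commands:\n\n``` python\nfrom pymongo import MongoClient\n\nclient = MongoClient(\"mongodb://localhost:27100/?replicaSet=txntest\")\n\n# Check the current featureCompatibilityVersion.\nfcv = client.admin.command({\"getParameter\": 1, \"featureCompatibilityVersion\": 1})\nprint(fcv[\"featureCompatibilityVersion\"])\n\n# Raise it to 4.0 if it is still set to 3.6.\nclient.admin.command({\"setFeatureCompatibilityVersion\": \"4.0\"})\n```\n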
If you install MongoDB 4.0 over an existing `/data` directory containing 3.6 databases then featureCompatibility will be set to 3.6 by default and transactions will not be available.\n\n>\n>\n>Note: If you get the following error running python `transaction_main.py --usetxns` that means you are picking up an older version of pymongo (older than 3.7.x) for which there is no multi-document transactions support.\n>\n>\n\n``` none\nTraceback (most recent call last):\n File \"transaction_main.py\", line 175, in\n total_delay = total_delay + run_transaction_with_retry( booking_functor, session)\n File \"/Users/jdrumgoole/GIT/pymongo-transactions/transaction_retry.py\", line 52, in run_transaction_with_retry\n with session.start_transaction():\nAttributeError: 'ClientSession' object has no attribute 'start_transaction'\n```\n\n## Watching Transactions\n\nTo actually see the effect of transactions we need to watch what is happening inside the collections `SEATSDB.seats` and `PAYMENTSDB.payments`.\n\nWe can do this with `watch_transactions.py`. This script uses MongoDB Change Streams to see what's happening inside a collection in real-time. We need to run two of these in parallel so it's best to line them up side by side.\n\nHere is the `watch_transactions.py` program:\n\n``` none\n$ python watch_transactions.py -h\nusage: watch_transactions.py -h] [--host HOST] [--collection COLLECTION]\n\noptional arguments:\n -h, --help show this help message and exit\n --host HOST mongodb URI for connecting to server [default:\n mongodb://localhost:27100/?replicaSet=txntest]\n --collection COLLECTION\n Watch [default:\n PYTHON_TXNS_EXAMPLE.seats_collection]\n```\n\nWe need to watch each collection so in two separate terminal windows start the watcher.\n\nWindow 1:\n\n``` none\n$ python watch_transactions.py --watch seats\nWatching: seats\n...\n```\n\nWindow 2:\n\n``` none\n$ python watch_transactions.py --watch payments\nWatching: payments\n...\n```\n\n## What happens when you run without transactions?\n\nLets run the code without transactions first. 
If you examine the `transaction_main.py` code you will see a function `book_seats`.\n\n``` python\ndef book_seat(seats, payments, audit, seat_no, delay_range, session=None):\n '''\n Run two inserts in sequence.\n If session is not None we are in a transaction\n\n :param seats: seats collection\n :param payments: payments collection\n :param seat_no: the number of the seat to be booked (defaults to row A)\n :param delay_range: A tuple indicating a random delay between two ranges or a single float fixed delay\n :param session: Session object required by a MongoDB transaction\n :return: the delay_period for this transaction\n '''\n price = random.randrange(200, 500, 10)\n if type(delay_range) == tuple:\n delay_period = random.uniform(delay_range[0], delay_range[1])\n else:\n delay_period = delay_range\n\n # Book Seat\n seat_str = \"{}A\".format(seat_no)\n print(count( i, \"Booking seat: '{}'\".format(seat_str)))\n seats.insert_one({\"flight_no\" : \"EI178\",\n \"seat\" : seat_str,\n \"date\" : datetime.datetime.utcnow()},\n session=session)\n print(count( seat_no, \"Sleeping: {:02.3f}\".format(delay_period)))\n #pay for seat\n time.sleep(delay_period)\n payments.insert_one({\"flight_no\" : \"EI178\",\n \"seat\" : seat_str,\n \"date\" : datetime.datetime.utcnow(),\n \"price\" : price},\n session=session)\n audit.update_one({ \"audit\" : \"seats\"}, { \"$inc\" : { \"count\" : 1}}, upsert=True)\n print(count(seat_no, \"Paying {} for seat '{}'\".format(price, seat_str)))\n\n return delay_period\n```\n\nThis program emulates a very simplified airline booking with a seat being allocated and then paid for. These are often separated by a reasonable time frame (e.g. seat allocation vs external credit card validation and anti-fraud check) and we emulate this by inserting a delay. The default is 1 second.\n\nNow with the two `watch_transactions.py` scripts running for `seats_collection` and `payments_collection` we can run `transactions_main.py` as follows:\n\n``` bash\n$ python transaction_main.py\n```\n\nThe first run is with no transactions enabled.\n\nThe bottom window shows `transactions_main.py` running. On the top left we are watching the inserts to the seats collection. On the top right we are watching inserts to the payments collection.\n\n![watching without transactions\n\nWe can see that the payments window lags the seats window as the watchers only update when the insert is complete. Thus seats sold cannot be easily reconciled with corresponding payments. If after the third seat has been booked we CTRL-C the program we can see that the program exits before writing the payment. This is reflected in the Change Stream for the payments collection which only shows payments for seat 1A and 2A versus seat allocations for 1A, 2A and 3A.\n\nIf we want payments and seats to be instantly reconcilable and consistent we must execute the inserts inside a transaction.\n\n## What happens when you run with Transactions?\n\nNow lets run the same system with `--usetxns` enabled.\n\n``` bash\n$ python transaction_main.py --usetxns\n```\n\nWe run with the exact same setup but now set `--usetxns`.\n\nNote now how the change streams are interlocked and are updated in parallel. This is because all the updates only become visible when the transaction is committed. Note how we aborted the third transaction by hitting CTRL-C. 
Now neither the seat nor the payment appear in the change streams unlike the first example where the seat went through.\n\nThis is where transactions shine in world where all or nothing is the watchword. We never want to keeps seats allocated unless they are paid for.\n\n## What happens during failure?\n\nIn a MongoDB replica set all writes are directed to the Primary node. If the primary node fails or becomes inaccessible (e.g. due to a network partition) writes in flight may fail. In a non-transactional scenario the driver will recover from a single failure and retry the write. In a multi-document transaction we must recover and retry in the event of these kinds of transient failures. This code is encapsulated in `transaction_retry.py`. We both retry the transaction and retry the commit to handle scenarios where the primary fails within the transaction and/or the commit operation.\n\n``` python\ndef commit_with_retry(session):\n while True:\n try:\n # Commit uses write concern set at transaction start.\n session.commit_transaction()\n print(\"Transaction committed.\")\n break\n except (pymongo.errors.ConnectionFailure, pymongo.errors.OperationFailure) as exc:\n # Can retry commit\n if exc.has_error_label(\"UnknownTransactionCommitResult\"):\n print(\"UnknownTransactionCommitResult, retrying \"\n \"commit operation ...\")\n continue\n else:\n print(\"Error during commit ...\")\n raise\n\ndef run_transaction_with_retry(functor, session):\n assert (isinstance(functor, Transaction_Functor))\n while True:\n try:\n with session.start_transaction():\n result=functor(session) # performs transaction\n commit_with_retry(session)\n break\n except (pymongo.errors.ConnectionFailure, pymongo.errors.OperationFailure) as exc:\n # If transient error, retry the whole transaction\n if exc.has_error_label(\"TransientTransactionError\"):\n print(\"TransientTransactionError, retrying \"\n \"transaction ...\")\n continue\n else:\n raise\n\n return result\n```\n\nIn order to observe what happens during elections we can use the script `kill_primary.py`. This script will start a replica-set and continuously kill the primary.\n\n``` none\n$ make kill_primary\n. venv/bin/activate && python kill_primary.py\nno nodes started.\nCurrent electionTimeoutMillis: 500\n1. (Re)starting replica-set\nno nodes started.\n1. Getting list of mongod processes\nProcess list written to mlaunch.procs\n1. Getting replica set status\n1. Killing primary node: 31029\n1. Sleeping: 1.0\n2. (Re)starting replica-set\nlaunching: \"/usr/local/mongodb/bin/mongod\" on port 27101\n2. Getting list of mongod processes\nProcess list written to mlaunch.procs\n2. Getting replica set status\n2. Killing primary node: 31045\n2. Sleeping: 1.0\n3. (Re)starting replica-set\nlaunching: \"/usr/local/mongodb/bin/mongod\" on port 27102\n3. Getting list of mongod processes\nProcess list written to mlaunch.procs\n3. Getting replica set status\n3. Killing primary node: 31137\n3. Sleeping: 1.0\n```\n\n`kill_primary.py` resets electionTimeOutMillis to 500ms from its default of 10000ms (10 seconds). This allows elections to resolve more quickly for the purposes of this test as we are running everything locally.\n\nOnce `kill_primary.py` is running we can start up `transactions_main.py` again using the `--usetxns` argument.\n\n``` none\n$ make usetxns\n. 
venv/bin/activate && python transaction_main.py --usetxns\nForcing collection creation (you can't create collections inside a txn)\nCollections created\nusing collection: PYTHON_TXNS_EXAMPLE.seats\nusing collection: PYTHON_TXNS_EXAMPLE.payments\nusing collection: PYTHON_TXNS_EXAMPLE.audit\nUsing a fixed delay of 1.0\nUsing transactions\n\n1. Booking seat: '1A'\n1. Sleeping: 1.000\n1. Paying 440 for seat '1A'\nTransaction committed.\n2. Booking seat: '2A'\n2. Sleeping: 1.000\n2. Paying 330 for seat '2A'\nTransaction committed.\n3. Booking seat: '3A'\n3. Sleeping: 1.000\nTransientTransactionError, retrying transaction ...\n3. Booking seat: '3A'\n3. Sleeping: 1.000\n3. Paying 240 for seat '3A'\nTransaction committed.\n4. Booking seat: '4A'\n4. Sleeping: 1.000\n4. Paying 410 for seat '4A'\nTransaction committed.\n5. Booking seat: '5A'\n5. Sleeping: 1.000\n5. Paying 260 for seat '5A'\nTransaction committed.\n6. Booking seat: '6A'\n6. Sleeping: 1.000\nTransientTransactionError, retrying transaction ...\n6. Booking seat: '6A'\n6. Sleeping: 1.000\n6. Paying 380 for seat '6A'\nTransaction committed.\n...\n```\n\nAs you can see during elections the transaction will be aborted and must be retried. If you look at the `transaction_rety.py` code you will see how this happens. If a write operation encounters an error it will throw one of the following exceptions:\n\n- pymongo.errors.ConnectionFailure\n- pymongo.errors.OperationFailure\n\nWithin these exceptions there will be a label called TransientTransactionError. This label can be detected using the `has_error_label(label)` function which is available in pymongo 3.7.x. Transient errors can be recovered from and the retry code in `transactions_retry.py` has code that retries for both writes and commits (see above).\n\n## Conclusion\n\nMulti-document transactions are the final piece of the jigsaw for SQL developers who have been shying away from trying MongoDB. ACID transactions make the programmer's job easier and give teams that are migrating from an existing SQL schema a much more consistent and convenient transition path.\n\nAs most migrations involving a move from highly normalised data structures to more natural and flexible nested JSON documents one would expect that the number of required multi-document transactions will be less in a properly constructed MongoDB application. But where multi-document transactions are required programmers can now include them using very similar syntax to SQL.\n\nWith ACID transactions in MongoDB 4.0 it can now be the first choice for an even broader range of application use cases.\n\n>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.\n\nTo try it locally download MongoDB 4.0.", "format": "md", "metadata": {"tags": ["Python", "MongoDB"], "pageDescription": "How to perform multi-document transactions with Python.", "contentType": "Quickstart"}, "title": "Introduction to Multi-Document ACID Transactions in Python", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/mongodb-podcast-doug-eck-google-brain", "action": "created", "body": "# At the Intersection of AI/ML and HCI with Douglas Eck of Google (MongoDB Podcast)\n\nDoug Eck is a principal scientist at Google and a research director on the Brain Team. He created the ongoing research project, Magenta, which focuses on the role of machine learning in the process of creating art and music. 
He is joining Anaiya Raisinghani, Michael Lynn, and Nic Raboy today to discuss all things artificial intelligence, machine learning, and to give us some insight into his role at Google. \n\nWe are going to be diving head first into HCI (Human Computer Interaction), Google\u2019s new GPT-3 language model, and discussing some of the hard issues with combining databases and deep learning. With all the hype surrounding AI, you may have some questions as to its past and potential future, so stay tuned to hear from one of Google\u2019s best. \n\n:youtube]{vid=Wge-1tcRQco}\n\n*Doug Eck* :[00:00:00] Hi everybody. My name is Doug Eck and welcome to the MongoDB podcast.\n\n*Michael Lynn* : [00:00:08] Welcome to the show. Today we're talking with [Doug Eck. He's a principal scientist at Google and a research director on the Brain Team. He also created and helps lead the Magenta team, an ongoing research project exploring the role of machine learning and the process of creating art and music. Today's episode was produced and the interview was led by Anaiya Raisinghani She's a summer intern here at MongoDB. She's doing a fantastic job. I hope you enjoy this episode.\n\nWe've got a couple of guests today and our first guest is a summer intern at MongoDB.\n\n*Anaiya Raisinghani* : 00:00:55] Hi everyone. My name is [Anaiya Raisinghani and I am the developer advocacy intern here at MongoDB.\n\n*Michael Lynn* : 00:01:01] Well, welcome to the show. It's great to have you on the podcast. Before we begin, why don't you tell the folks a little bit about yourself?\n\n*Anaiya Raisinghani* : [00:01:08] Yeah, of course. I'm from the Bay Area. I grew up here and I go to school in LA at the [University of Southern California. My undergrad degree is in Computational Linguistics, which is half CS, half linguistics. And I want to say my overall interest in artificial intelligence, really came from the cool classes I have the unique opportunity to take, like speech recognition, natural language processing, and just being able to use machine learning libraries like TensorFlow in some of my school projects. So I feel very lucky to have had an early exposure to AI than most.\n\n*Michael Lynn* : 00:01:42] Well, great. And I understand that you brought a guest with you today. Do you want to talk a little bit about who that is and what we're going to discuss today?\n\n*Anaiya Raisinghani* : [00:01:48] Yes, definitely. So today we have a very, very special guest Doug Eck, who is a principal scientist at Google, a research director on the Brain Team and the creator of Magenta, so today we're going to be chatting about machine learning, AI, and some other fun topics.\nThank you so much, Doug, for being here today.\n\n*Doug Eck* :[00:02:07] I'm very happy to be here, Anaiya.\n\n*Michael Lynn* : [00:02:08] Well, Doug, it's great to have you on the show. Thanks so much for taking the time to talk with us. And at this point, I kind of want to turn it over to Anaiya. She's got some prepared questions. This is kind of her field of study, and she's got some passion and interest around it.\nSo we're going to get into some really interesting topics in the machine learning space. And Anaiya, I'll turn it over to you.\n\n*Anaiya Raisinghani* : [00:02:30] Perfect. Thank you so much, Mike. Just to get us started, Doug, could you give us a little background about what you do at Google? \n\n*Doug Eck* :[00:02:36] Sure, thanks, Anaiya. Well, right now in my career, I go to a lot of meetings. 
By that, I mean I'm running a large team of researchers on the Google brain team, and I'm trying to help keep things going. Sometimes it feels like herding cats because we hire very talented and very self motivated researchers who are doing fundamental research in machine learning. Going back a bit, I've been doing something like this, God, it's terrifying to think about, but almost 30 years. In a previous life when I was young, like you Anaiya, I was playing a lot of music, playing guitar. I was an English major as an undergrad, doing a lot of writing and I just kept getting drawn into technology. And once I finished my undergrad, I worked as a database programmer.\n\nWell, well, well before MongoDB. And, uh, I did that for a few years and really enjoyed it. And then I decided that my passion was somewhere in the overlap between music and artificial intelligence. And at that point in my life, I'm not sure I could have provided a crisp definition of artificial intelligence, but I knew I wanted to do it.\n\nI wanted to see if we can make intelligent computers help us make music. And so I made my way back into grad school. Somehow I tricked a computer science department into letting an English major do a PhD in computer science with a lot of extra math. And, uh, I made my way into an area of AI called machine learning, where our goal is to build computer programs that learn to solve problems, rather than kind of trying to write down the recipe ourselves.\n\nAnd for the last 20 years, I've been active in machine learning as a post-doc doing a post-doctoral fellowship in Switzerland. And then I moved to Canada and became a professor there and worked with some great people at the University of Montreal, just like changing my career every, every few years.\n\nSo, uh, after seven years there, I switched and came to California and became a research scientist at Google. And I've been very happily working here at Google, uh, ever since for 11 years, I feel really lucky to have had a chance to be part of the growth and the, I guess, Renaissance of neural networks and machine learning across a number of really important disciplines and to have been part of spearheading a bit of interest in AI and creativity.\n\n*Anaiya Raisinghani* : [00:04:45] That's great. Thank you so much. So there's currently a lot of hype around just AI in general and machine learning, but for some of our listeners who may not know what it is, how would you describe it in the way that you understand it?\n\n*Doug Eck* :[00:04:56] I was afraid you were going to ask that because I said, you know, 30 years ago, I couldn't have given you a crisp definition of AI and I'm not sure I can now without resorting to Wikipedia and cheating, I would define artificial intelligence as the task of building software that behaves intelligently.\nAnd traditionally there have been two basic approaches to AI in the past, in the distant past, in the eighties and nineties, we called this neat versus scruffy. Where neat was the idea of writing down sets of rules, writing down a recipe that defined complex behavior like translate a translation maybe, or writing a book, and then having computer programs that can execute those rules.\nContrast that with scruffy scruffy, because it's a bit messier. Um, instead of thinking we know the rules, instead we build programs that can examine data can look at large data sets. Sometimes datasets that have labels, like this is a picture, this is a picture of an orangutan. 
This is a picture of a banana, et cetera, and learn the relationship between those labels and that data.\nAnd that's a kind of machine learning where our goal is to help the machine learn, to solve a problem, as opposed to building in the answer. And long-term at least the current evidence where we are right now in 2021, is that for many, many hard tasks, probably most of them it's better to teach the machine how to learn rather than to try to provide the solution to the problem.\nAnd so that's how I would define a machine learning is writing software that learns to solve problems by processing information like data sets, uh, what might come out of a camera, what might come out of a microphone. And then learn to leverage what it's learned from that data, uh, to solve specific sub problems like translation or, or labeling, or you pick it.\nThere are thousands of possible examples. \n\n*Anaiya Raisinghani* : [00:06:51] That's awesome. Thank you so much. So I also wanted to ask, because you said from 30 years ago, you wouldn't have known that definition. What has it been like to see how machine learning has improved over the years? Especially now from an inside perspective at Google. \n\n*Doug Eck* :[00:07:07] I think I've consistently underestimated how fast we can move.\nPerhaps that's human nature. I noticed a statistic that, this isn't about machine learning, but something less than 70 years, 60, 61 years passed between the first flight, the Wright brothers and landing on the moon. And like 60 years, isn't very long. That's pretty shocking how fast we moved. And so I guess it shouldn't be in retrospect, a surprise that we've, we've moved so fast.\nI did a retrospective where I'm looking at the quality of image generation. I'm sure all of you have seen these hyper-realistic faces that are not really faces, or maybe you've heard some very realistic sounding music, or you've seen a machine learning algorithm able to generate really realistic text, and this was all happening.\nYou know, in the last five years, really, I mean, the work has been there and the ideas have been there and the efforts have been there for at least two decades, but somehow I think the combination of scale, so having very large datasets and also processing power, having large or one large computer or many coupled computers, usually running a GPU is basically, or TPU is what you think of as a video card, giving us the processing power to scale much more information.\nAnd, uh, I don't know. It's been really fun. I mean, every year I'm surprised I get up in the morning on Monday morning and I don't dread going to work, which makes me feel extremely lucky. And, uh, I'm really proud of the work that we've done at Google, but I'm really proud of what what's happened in the entire research community.\n\n*Michael Lynn* : [00:08:40] So Doug, I want to ask you and you kind of alluded to it, but I'm curious about the advances that we've made. And I realize we are very much standing on the shoulders of giants and the exponential rate at which we increase in the advances. I'm curious from your perspective, whether you think that's software or hardware and maybe what, you know, what's your perspective on both of those avenues that we're advancing in.\n\n*Doug Eck* :[00:09:08] I think it's a trade off. It's a very clear trade off. When you have slow hardware or not enough hardware, then you need to be much, much more clever with your software. 
So arguably the, the models, the approaches that we were using in the late 1990s, if you like terminology, if your crowd likes buzzwords support, vector machines, random forests, boosting, these are all especially SVM support vector machines are all relatively complicated. There's a lot of machinery there. And for very small data sets and for limited processing power, they can outperform simpler approaches, a simpler approach, it may not sound simple because it's got a fancy name, a neural network, the underlying mechanism is actually quite simple and it's all about having a very simple rule to update a few numbers.\nWe call them parameters, or maybe we call them weights and neural networks don't work all that well for small datasets and for small neural networks compared to other solutions. So in the 1980s and 1990s, it looked like they weren't really very good. If you scale these up and you run a simple, very simple neural network on with a lot of weights, a lot of parameters that you can adjust, and you have a lot of data allowing the model to have some information, to really grab onto they work astonishingly well, and they seem to keep working better and better as you make the datasets larger and you add more processing power. And that could be because they're simple. There's an argument to be made there that there's something so simple that it scales to different data sets, sizes and different, different processing power. We can talk about calculus, if you want. We can dive into the chain rule.\nIt's only two applications on the chain rule to get to backprop. \n\n*Michael Lynn* : [00:10:51] I appreciate your perspective. I do want to ask one more question about, you know, we've all come from this conventional digital, you know, binary based computing background and fascinating things are happening in the quantum space. I'm curious, you know, is there anything happening at Google that you can talk about in that space?\n\n*Doug Eck* :[00:11:11] Well, absolutely. We have. So first caveat, I am not an expert in quantum. We have a top tier quantum group down in Santa Barbara and they have made a couple of. It had been making great progress all along a couple of breakthroughs last year, my understanding of the situation that there's a certain class of problems that are extraordinarily difficult to solve with the traditional computer, but which a quantum computer will solve relatively easily.\nAnd that in fact, some of these core problems can form the basis for solving a much broader class of problems if you kind of rewrite these other problems as one of these core problems, like factorizing prime numbers, et cetera. And I have to admit, I am just simply not a quantum expert. I'm as fascinated about it as you are, we're invested.\nI think the big question mark is whether the class of problems that matter to us is big enough to warrant the investment and basically I've underestimated every other technological revolution. Right. You know, like I didn't think we'd get to where we are now. So I guess, you know, my skepticism about quantum is just, this is my personality, but I'm super excited about what it could be.\nIt's also, you know, possible that we'll be in a situation where Quantum yield some breakthroughs that provides us with some challenges, especially with respect to security and cryptography. 
If we find new ways to solve massive problems that lead indirectly for us to be able to crack cryptographic puzzles.\nBut if there's any quantum folks in the audience and you're shrugging your shoulders and be like, this guy doesn't know what he's talking about. This guy admits he doesn't really know what he's talking about.\n\n*Michael Lynn* : [00:12:44] I appreciate that. So I kind of derailed the conversation Anaiya, you can pick back up if you like.\n\n*Anaiya Raisinghani* : [00:12:51] Perfect. Thank you. Um, I wanted to ask you a little bit about HCI which is human computer interaction and what you do in that space. So a lot of people may not have heard about human computer interaction and the listeners. I can get like a little bit of a background if you guys would like, so it's really just a field that focuses on the design of computer technology and the way that humans and computers interact.\nAnd I feel like when people think about artificial intelligence, the first thing that they think about are, you know, robots or big spaces. So I wanted to ask you with what you've been doing at Google. Do you believe that machine learning can really help advance human computer interaction and the way that human beings and machines interact ethically?\n\n*Doug Eck* :[00:13:36] Thank you for that. That's an amazingly important question. So first a bit of a preface. I think we've made a fairly serious error in how we talk about AI and machine learning. And specifically I'm really turned off by the personification of AI. Like the AI is going to come and get you, right?\nLike it's a conscious thing that has volition and wants to help you or hurt you. And this link with AI and robotics, and I'm very skeptical of this sort of techno-utopian folks who believe that we can solve all problems in the world by building a sentient AI. Like there are a lot of real problems in front of us to solve.\nAnd I think we can use technology to help help us solve them. But I'm much more interested in solving the problems that are right in front of us, on the planet, rather than thinking about super intelligence or AGI, which is artificial general intelligence, meaning something smarter than us. So what does this mean for HCI human computer interaction?\nI believe fundamentally. We use technology to help us solve problems. We always have, we have from the very beginning of humanity with things like arrowheads and fire, right. And I fundamentally don't see AI and machine learning as any different. I think what we're trying to do is use technology to solve problems like translation or, you know, maybe automatic identification of objects and images and things like that.\nIdeally many more interesting problems than that. And one of the big roadblocks comes from taking a basic neural network or some other model trained on some data and actually doing something useful with it. And often it's a vast, vast, vast distance between a model and a lab that can, whatever, take a photograph and identify whether there's an orangutan or a banana in it and build something really useful, like perhaps some sort of medical software that will help you identify skin cancer. Right. And that, that distance ends up being more and more about how to actually make the software work for people deal with the messy real-world constraints that exist in our real, you know, in our actual world.\nAnd, you know, this means that like I personally and our team in general, the brain team we've become much more interested in HCI. 
And I wouldn't say, I think the way you worded it was can machine learning help revolutionize HCI or help HCI or help move HCI along. It's the wrong direction we need there like we need HCI's help. So, so we've, we've been humbled, I think by our inability to take like our fancy algorithms and actually have them matter in people's lives. And I think partially it's because we haven't engaged enough in the past decade or so with the HCI community. And, you know, I personally and a number of people on my, in my world are trying really hard to address that.\nBy tackling problems with like joint viewpoints, that viewpoint of like the mathematically driven AI researcher, caring about what the data is. And then the HCI and the user interface folks were saying, wait, what problem are you trying to solve? And how are you going to actually take what this model can do and put it in the hands of users and how are you going to do it in a way that's ethical per your comment Anaiya?\nAnd I hope someone grabbed the analogy of going from an image recognition algorithm to identifying skincancers. This has been one topic, for example, this generated a lot of discussion because skin cancers and skin color correlates with race and the ability for these algorithms to work across a spectrum of skin colors may differ, um, and our ability to build trust with doctors so that they want to use the software and patients, they believe they can trust the software.\nLike these issues are like so, so complicated and it's so important for us to get them right. So you can tell I'm a passionate about this. I guess I should bring this to a close, which is to say I'm a convert. I guess I have the fervor of a convert who didn't think much about HCI, maybe five, six years ago.\nI just started to see as these models get more and more powerful that the limiting factor is really how we use them and how we deploy them and how we make them work for us human beings. We're the personified ones, not the software, not the AI. \n\n*Anaiya Raisinghani* : [00:17:37] That's awesome. Thank you so much for answering my question, that was great. And I appreciate all the points you brought up because I feel like those need to be talked about a lot more, especially in the AI community. \nI do want to like pivot a little bit and take part of what you said and talk about some of the issues that come with deep learning and AI, and kind of connect them with neural networks and databases, because I would love to hear about some of the things that have come up in the past when deep learning has been tried to be integrated into databases. And I know that there can be a lot of issues with deep learning and tabular databases, but what about document collection based databases? And if the documents are analogous to records or rows in a relational database, do you think that machine learning might work or do you believe that the same issues might come up? \n\n*Doug Eck* :[00:18:24] Another great question.\nSo, so first to put this all in content, arguably a machine learning researcher. Who's really writing code day to day, which I did in the past and now I'm doing more management work, but you're, you know, you're writing code day-to-day, you're trying to solve a hard problem. Maybe 70 or 80% of your time is spent dealing with data and how to manage data and how to make sure that you don't have data errors and how to move the data through your system.\nProbably like in, in other areas of computer science, you know, we tend to call it plumbing. 
You spend a lot of time working on plumbing. And this is a manageable task. When you have a dataset of the sort we might've worked with 15 years ago, 10,000, 28 by 28 pixel images or something like that. I hope I got the pixels right. Something called MNIST, a bunch of handwritten digits. \nIf we start looking at datasets that are all of the web basically represented in some way or another, all of the books in the Library of Congress as a, as a hypothetical massive, massive image, data sets, massive video data sets, right? The ability to just kind of fake it.\nRight, write a little bit of Python code that processes your data and throws it in a flat file of some sort becomes, you know, becomes basically intractable. And so I think we're at an inflection point right now, maybe we were even at that inflection point a year or two ago, where a lot of machine learning researchers are thinking about scalable ways to handle data.\nSo that's the first thing. The second thing is that we're also, specifically with respect to very large neural networks, wanting predictions to be factual. If we have a chat bot that chats with you and that chat bot is driven by a neural network and you ask it, what's the capital of Indiana, my home state.\nWe hope it says Indianapolis every time. Uh, we don't want this to be a roll of the dice. We don't want it to be a probabilistic model that rolls the dice and says Indianapolis, you know, 50 times, but then that 51st time instead says Springfield. So there's this very, very active and rich research area of bridging between databases and neural networks, which are probabilistic, and finding ways to land in the database and actually get the right answer.\nAnd it's the right answer because we verify that it's the right answer. We have a separate team working with that database and we understand how to relate that to some decision-making algorithm that might ask a question: should I go to Indianapolis? Maybe that's a probabilistic question. Maybe it's a roll of the dice.\nMaybe you all don't want to come to Indianapolis. It's up to you, but I'm trying to make the distinction between, between these two kinds of, of decisions. Two kinds of information. One of them is probabilistic. Every sentence is unique. We might describe the same scene with a million different sentences.\nBut we don't want to miss on facts, especially if we want to solve hard problems. And so there's an open challenge. I do not have an answer for it. There are many, many smarter people than me working on ways in which we can bridge the gap between products like MongoDB and machine learning. It doesn't take long to realize there are a lot of people thinking about this.\nIf you do a Google search and you limit to the site reddit.com and you put in MongoDB and machine learning, you see a lot of discussion about how we can back machine learning algorithms with, with databases. So, um, it's definitely an open topic. Finally, third, you mentioned something about rows and columns and the actual structure of a relational database.\nI think that's also very interesting because algorithms that are sensitive, I say algorithm, I mean a neural network or some other model program designed to solve a problem. You know, those algorithms might actually take advantage of that structure. 
Not just like cope with it, but actually understand in some ways how, in ways that it's learning how to leverage the structure of the database to make it easier to solve certain problems.\nAnd then there's evidence outside of, of databases for general machine learning to believe that's possible. So, for example, in work, for example, predicting the structure of proteins and other molecules, we have some what we might call structural prior information we have some idea about the geometry of what molecules should look like.\nAnd there are ways to leverage that geometry to kind of limit the space of predictions that the model would make. It's kind of given that structure as, as foundation for, for, for the productions, predictions is making such that it won't likely make predictions that violate that structure. For example, graph neural networks that actually work on a graph.\nYou can write down a database structure as a graph if you'd like, and, and take advantage of that graph for solving hard problems. Sorry, that was, it's like a 10 minute answer. I'll try to make them shorter next time, Anaiya, but that's my answer.\n\n*Anaiya Raisinghani* : [00:23:03] Yeah. Cause I, well, I was researching for this and then also when I got the job, a lot of the questions during the interview were, like how you would use machine learning, uh, during my internship and I saw articles like stretching all the way back the early two thousands talking about just how applying, sorry, artificial neural networks and ANN's to large modern databases seems like such a great idea in theory, because you know, like they, they offer potential fault tolerance, they're inherently parallel. Um, and the intersection between them just looks really super attractive. But I found this article about that and like, the date was 2000 and then I looked for other stuff and everything from there was the issues between connecting databases and deep learning.\nSo thank you so much for your answer. I really appreciate that. I feel like, I feel like, especially on this podcast, it was a great, great answer to a hard question. \n\n*Doug Eck* :[00:23:57] Can I throw, can I throw one more thing before you move on? There are also some like what I call low hanging fruit. Like a bunch of simpler problems that we can tackle.\nSo one of the big areas of machine learning that I've been working in is, is that of models of, of language of text. Right? And so think of translation, you type in a string in one language, and we translate it to another language or if, and if, if your listeners have paid attention to some, some new um, machine learning models that can, you can chat with them like chatbots, like Google's Lambda or some large language models that can write stories.\nWe're realizing we can use those for data augmentation and, and maybe indirectly for data verification. So we may be able to use neural networks to predict bad data entries. We may be able to, for example, let's say your database is trying to provide a thousand different ways to describe a scene. We may be able to help automate that.\nAnd then you'd have a human who's coming in. 
Like the humans always needs to be there I think to be responsible, you know, saying, okay, here's like, you know, 20 different ways to describe this scene at different levels of complexity, but we use the neural network to help make their work much, much faster.\nAnd so if we move beyond trying to solve the entire problem of like, what is a database and how do we generate it, or how do we do upkeep on it? Like, that's one thing that's like the holy grail, but we can be thinking about using neural networks in particularly language models to, to like basically super charge human data, data quality people in ways that I think are just gonna go to sweep through the field and help us do a much, much better job of, of that kind of validation. And even I remember from like a long time ago, when I did databases, data validation is a pain, right? Everybody hates bad data. It's garbage in, garbage out.\nSo if we can make cleaner, better data, then we all win.\n\n*Anaiya Raisinghani* : [00:25:39] Yeah. And on the subject of language models, I also wanted to talk about the GPT 3 and I saw an article from MIT recently about how they're thinking it can replace Google's page rank. And I would just love to hear your thoughts on what you think might happen in the future and if language models actually could replace indexing. \n\n*Doug Eck* :[00:25:58] So to be clear, we will still need to do indexing, right? We still need to index the documents and we have to have some idea of what they mean. Here's the best way to think about it. So we, we talked to IO this year about using some large language models to improve our search in our products.\nAnd we've talked about it in other blogs. I don't want to get myself in trouble by poorly stating what has already been stated. I'd refer you there because you know, nobody wants, nobody wants to have to talk to their boss after the podcast comes out and says, why did you say that? You know, but here's the thing.\nThis strikes me. And this is just my opinion. Google's page rank. For those of you who don't know what page rank is, the basic idea is instead of looking at a document and what the document contains. We decide the value of the document by other documents that link into that document and how much we trust the other documents.\nSo if a number of high profile websites link to a document that happens to be about automobiles, we'll trust that that document is about automobiles, right? Um, and so it's, it's a graph problem where we assign trust and propagate it from, from incoming links. Um, thank you, Larry and Sergei. Behind that is this like fundamental mistrust of being able to figure out what's in a document.\nRight, like the whole idea is to say, we don't really know what's in this document. So we're going to come up with a trick that allows us to value this document based upon what other documents think about it. Right. And one way you could think about this revolution and large language models, um, like GPT-3 which came from open AI and, um, which is based upon some core technology that came from our group called transformer. That's the T in GPT-3 with there's always friendly rivalries that the folks at Open AI are great. And I think our team is great too. We'll kind of ratcheting up who can, who can move faster, um, cheers to Open AI.\nNow we have some pretty good ways of taking a document full of words. And if you want to think about this abstractly, projecting it into another space of numbers. 
So maybe for that document, which may have like as many words as you need for the document, let's say it's between 500 and 2,000 words, right. We take a neural network and we run that sequence through the neural network.\nAnd we come out with this vector of numbers that vector, that sequence of numbers maybe it's a thousand numbers right, now, thanks to the neural network that thousand numbers actually does a really good job of describing what's in the document. We can't read it with our eyes, cause it's just a sequence of numbers.\nBut if we take that vector and compare it to other vectors, what we'll find is similar vectors actually contain documents that contain very similar information and they might be written completely differently. Right. But topically they're similar. And so what we get is the ability to understand massive, massive data sets of text vis-a-vis what it's about, what it means, who it's for. And so we have a much better job of what's in a document now, and we can use that information to augment what we know about how people use documents, how they link to them and how much they trust them. And so that just gives us a better way to surface relevant documents for people.\nAnd that's kind of the crux in my mind, or at least in my view of why a large language model might matter for a search company. It helps us understand language and fundamentally most of search is about language.\n\n*Anaiya Raisinghani* : [00:29:11] I also wanted to talk to you about, because language is one of the big things with AI, but then now there's been a lot of movement towards art and music.\nAnd I know that you're really big into that. So I wanted to ask you about for the listeners, if you could explain a little bit behind Magenta, and then I also wanted to talk to you about Yacht because I heard that they used Magenta for yeah. For their new album. And so like, what are your thoughts on utilizing AI to continue on legacies in art and music and just creation?\n\n*Doug Eck* :[00:29:45] Okay, cool. Well, this is a fun question for me. Uh, so first what's Magenta? Magenta is an open source project that I'm very proud to say I created initially about six years ago. And our goal with Magenta is to explore the role of machine learning as a tool in the creative process. If you want to find it, it's at g.co/magenta.\nWe've been out there for a long time. You could also just search for Google Magenta and you'll find us, um, everything we do goes in open source basically provide tools for musicians and artists, mostly musicians based upon the team. We are musicians at heart. That you can use to extend your musical, uh, your musical self.\nYou can generate new melodies, you can change how things sound you can understand more, uh, the technology. You can use us to learn JavaScript or Python, but everything we do is about extending people and their music making. So one of the first things I always say is I think it would be, it's kind of cool that we can generate realistic sounding melodies that, you know, maybe sound like Bach or sound like another composer, but that's just not the point. That's not fun. Like, I think music is about people communicating with people. 
And so we're really more in the, in the heritage of, you know, Les Paul who invented was one of the inventors of the electric guitar or the cool folks that invented guitar pedals or amplifiers, or pick your favorite technology that we use to make a new kind of music.\nOur real question is can we like build a new kind of musical instrument or a new kind of music making experience using machine learning. And we've spent a lot of time doing fundamental research in this space, published in conferences and journals of the sort that all computer scientists do. And then we've done a lot of open source work in JavaScript so that you can do stuff really fast in the browser.\nAlso plugins for popular software for musicians like Ableton and then sort of core hardcore machine learning in Python, and we've done some experimental work with some artists. So we've tried to understand better on the HCI side, how this all works for real artists. And one of the first groups we worked with is in fact, thank you for asking a group called Yacht.\nThey're phenomenal in my mind, a phenomenal pop band. I think some part LCD sound system. I don't know who else to even add. They're from LA their front person. We don't say front man, because it's Claire is Claire Evans. She's an amazing singer, an utterly astonishing presence on stage. She's also a tech person, a tech writer, and she has a great book out that everybody should read, especially every woman in tech, Anaiya, called BroadBand the story of, um, of women in the internet. I mean, I don't remember if I've got the subtitle, right. So anyway very interesting people and what they did was they came to us and they worked with a bunch of other AI folks, not just Google at all. Like we're one of like five or six collaborators and they just dove in headfirst and they just wrestled with the technology and they tried to do something interesting.\nAnd what they did was they took from us, they took a machine learning model. That's able to generate variations on a theme. So, and they use pop music. So, you know, you give it right. And then suddenly the model is generating lots of different variations and they can browse around the space and they can play around and find different things.\nAnd so they had this like a slight AI extension of themselves. Right. And what they did was utterly fascinating. I think it's important. Um, they, they first just dove in and technically dealt with the problems we had. Our HCI game was very low then like we're like quite, quite literally first type this pro type this command into, into, into a console.\nAnd then it'll generate some midi files and, you know, there are musicians like they're actually quite technically good, but another set of musicians of like what's a command line. Right. You know, like what's terminal. So, you know, you have these people that don't work with our tooling, so we didn't have anything like fancy for them.\nBut then they also set constraints. So, uh, Jona and Rob the other two folks in the band, they came up with kind of a rule book, which I think is really interesting. They said, for example, if we take a melody generated by the Magenta model, we won't edit it ever, ever, ever. Right. We might reject it. Right. We might listen to a bunch of them, but we won't edit it.\nAnd so in some sense, they force themselves to like, and I think if they didn't do that, it would just become this mush. Like they, they wouldn't know what the AI had actually done in the end. Right. 
So they did that and they did the same with another, uh, some other folks, uh, generating lyrics, same idea.\nThey generated lots and lots of lyrics. And then Claire curated them. So curation was important for them. And, uh, this curation process proved to be really valuable for them. I guess I would summarize it as curation, without editing. They also liked the mistakes. They liked when the networks didn't do the right thing.\nSo they liked breakage like this idea that, oh, this didn't do what it was supposed to. I like that. And so this combination of like curiosity work they said it was really hard work. Um, and in a sense of kind of building some rules, building a kind of what I would call it, grammar around what they're doing the same way that like filmmakers have a grammar for how you tell a story.\nThey told a really beautiful story, and I don't know. I'm I really love Chain Tripping. That's the album. If you listened to it, every baseline was written by a magenta model. The lyrics were written by, uh, an LSTM network by another group. The cover art is done by this brilliant, uh, artists in Australia, Tom white, you know, it's just a really cool album overall.\n\n*Anaiya Raisinghani* : [00:35:09] Yeah, I've listened to it. It's great. I feel like it just alludes to how far technology has come. \n\n*Doug Eck* :[00:35:16] I agree. Oh, by the way that the, the drum beats, the drum beats come from the same model. But we didn't actually have a drum model. So they just threw away the notes and kept the durations, you know, and the baselines come from a model that was trained on piano, where the both of, both of both Rob and Jona play bass, but Rob, the guy who usually plays bass in the band is like, it would generate these baselines that are really hard to play.\nSo you have this like, idea of like the AI is like sort of generating stuff that they're just physically not used to playing on stage. And so I love that idea too, that it's like pushing them, even in ways that like onstage they're having to do things slightly differently with their hands than they would have to do.\nUm, so it's kind of pushes them out. \n\n*Michael Lynn* : [00:35:54] So I'm curious about the authoring process with magenta and I mean, maybe even specifically with the way Yacht put this album together, what are the input files? What trains the system. \n\n*Doug Eck* :[00:36:07] So in this case, this was great. We gave them the software, they provided their own midi stems from their own work.\nSo, that they really controlled the process. You know, our software has put out and is licensed for, you know, it's an Apache license, but we make no claims on what's being created. They put in their own data, they own it all. And so that actually made the process much more interesting. They weren't like working with some like weird, like classical music, piano dataset, right.\nThey were like working with their own stems from their own, um, their own previous recordings. \n\n*Michael Lynn* : [00:36:36] Fantastic. \n\n*Anaiya Raisinghani* : [00:36:38] Great. For my last question to kind of round this out, I just wanted to ask, what do you see that's shocking and exciting about the future of machine learning. \n\n*Doug Eck* :[00:36:49] I'm so bad at crystal ball. Um, \n\n*Michael Lynn* : [00:36:53] I love the question though.\n\n*Doug Eck* :[00:36:56] Yeah. So, so here, I think, I think first, we should always be humble about what we've achieved. 
If you, if you look, you know, humans are really smart, like way smarter than machines. And if you look at the generated materials coming from deep learning, for example, faces, when they first come out, whatever new model first comes out, like, oh my God, I can't tell them from human faces.\nAnd then if you play with them for a while, you're like, oh yeah, they're not quite right. They're not quite right. And this has always been true. I remember reading about like when the phonograph first came out and they would, they would demo the phonograph on, on like a stage in a theater. And this is like a, with a wax cylinder, you know?\nPeople will leave saying it sounds exactly like an orchestra. I can't tell it apart. Right. They're just not used to it. Right. And so like first I think we should be a little bit humble about what we've achieved. I think, especially with like GPT-3, like models, large language models, we've achieved a kind of fluency that we've never achieved before.\nSo the model sounds like it's doing something, but like it's not really going anywhere. Right. And so I think, I think by and large, the real shocking new, new breakthroughs are going to come as we think about how to make these models controllable so can a user really shape the output of one of these models?\nCan a policymaker add layers to the model that allow it to be safer? Right. So can we really have like use this core neural network as, you know, as a learning device to learn the things that needs to define patterns in data, but to provide users with much, much more control about how, how those patterns are used in a product.\nAnd that's where I think we're going to see the real wins, um, an ability to actually harness this, to solve problems in the right way.\n\n*Anaiya Raisinghani* : [00:38:33] Perfect. Doug, thank you so much for coming on today. It was so great to hear from you. \n\n*Doug Eck* :[00:38:39] That was great. Thanks for all the great questions, Anaiya, was fantastic \n\n*Michael Lynn* : [00:38:44] I'll reiterate that. Thanks so much, Doug. It's been great chatting with you. \nThanks for listening. If you enjoyed this episode, please like, and subscribe, have a question or a suggestion for the show? Visit us in the MongoDB community forums at community.Mongodb.com.\n\nThank you so much for taking the time to listen to our episode today. If you would like to learn more about Doug\u2019s work at Google, you can find him through his [LinkedIn profile or his Google Research profile. If you have any questions or comments about the episode, please feel free to reach out to Anaiya Raisinghani, Michael Lynn, or Nic Raboy. \n\nYou can also find this, and all episodes of the MongoDB Podcast on your favorite podcast network.\n\n* Apple Podcasts \n* Google Podcasts\n* Spotify\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Douglas Eck is a Principal Scientist at Google Research and a research director on the Brain Team. His work lies at the intersection of machine learning and human-computer interaction (HCI). Doug created and helps lead Magenta (g.co/magenta), an ongoing research project exploring the role of machine learning in the process of creating art and music. 
This article is a transcript of the podcast episode where Anaiya Rasinghani leads an interview with Doug to learn more about the intersection between AI, ML, HCI, and Databases.", "contentType": "Podcast"}, "title": "At the Intersection of AI/ML and HCI with Douglas Eck of Google (MongoDB Podcast)", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/5-different-ways-deploy-free-database-mongodb-atlas", "action": "created", "body": "# 5 Different Ways to Deploy a Free Database with MongoDB Atlas\n\nYou might have already known that MongoDB offers a free tier through M0 clusters on MongoDB Atlas, but did you know that there are numerous ways to deploy depending on your infrastructure needs? To be clear, there's no wrong way to deploy a MongoDB Atlas cluster, but there could be an easier way to fit your operations needs.\n\nIn this article, we're going to have a quick look at the various ways you can deploy a MongoDB Atlas cluster using tools like Terraform, CloudFormation, CLIs, and simple point and click.\n\n## Using the Atlas Web UI to Deploy a Cluster\n\nIf you're a fan of point and click deployments like I am, the web UI for MongoDB Atlas will probably fit your needs. Let's take a quick look at how to deploy a new cluster with a database using the UI found within the MongoDB Cloud Dashboard.\n\nWithin the **Databases** tab for your account, if you don't have any databases or clusters, you'll be presented with the opportunity to build one using the \"Build a Database\" button.\n\nSince we're keeping things free for this article, let's choose the \"Shared\" option when presented on the next screen. If you think you'll need something else, don't let me stop you!\n\nAfter selecting \"Shared\" from the options, you'll be able to create a new cluster by first selecting your cloud service provider and region.\n\nYou can use the defaults, or select a provider or region that you would prefer to use. Your choice has no impact on how you will end up working with your cluster. However, choosing a provider and location that matches your other services could render performance improvements.\n\nAfter selecting the \"Create Cluster\" button, your cluster will deploy. This could take a few minutes depending on your cluster size.\n\nAt this point, you can continue exploring Atlas, create a database or two, and be on your way to creating great applications. A good next step after deploying your cluster would be adding entries to your access list. You can learn how to do that here.\n\nLet's say you prefer a more CLI-driven approach.\n\n## Using the MongoDB CLI to Deploy a Cluster\n\nThe MongoDB CLI can be useful if you want to do script-based deployments or if you prefer to do everything from the command line.\n\nTo install the MongoDB CLI, check out the installation documentation and follow the instructions. You'll also need to have a MongoDB Cloud account created.\n\nIf this is your first time using the MongoDB CLI, check out the configuration documentation to learn how to add your credentials and other information.\n\nFor this example, we're going to use the quick start functionality that the CLI offers. From the CLI, execute the following:\n\n```bash\nmongocli atlas quickstart\n```\n\nUsing the quick start approach, you'll be presented with a series of questions regarding how you want your Atlas cluster configured. 
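If you would rather script this step than answer the prompts interactively, the quick start command also accepts flags. The flag names below are an assumption based on recent versions of the MongoDB CLI rather than anything shown here, so confirm them with `mongocli atlas quickstart --help` before relying on them:\n\n```bash\n# Hypothetical non-interactive quick start; verify flag names with --help first.\nmongocli atlas quickstart \\\n  --clusterName MyCluster \\\n  --provider AWS \\\n  --region US_EAST_1 \\\n  --username myDbUser \\\n  --password 'aStrongPassword'\n```\n\nEither way, interactively or with flags, the same configuration details are collected.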
This includes the creation of users, network access rules, and other various pieces of information.\n\nTo see some of the other options for the CLI, check out the documentation.\n\n## Using the Atlas Admin API to Deploy a Cluster\n\nA similar option to using the CLI for creating MongoDB Atlas clusters is to use the Atlas Admin API. One difference here is that you don't need to download or install any particular CLI and you can instead use HTTP requests to get the job done using anything capable of making HTTP requests.\n\nTake the following HTTP request, for example, one that can still be executed from the command prompt:\n\n```\ncurl --location --request POST 'https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/clusters?pretty=true' \\\n--user \"{PUBLIC_KEY}:{PRIVATE_KEY}\" --digest \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"name\": \"MyCluster\",\n \"providerSettings\": {\n \"providerName\": \"AWS\",\n \"instanceSizeName\": \"M10\",\n \"regionName\": \"US_EAST_1\"\n }\n}'\n```\n\nThe above cURL request is a trimmed version, containing just the required parameters, taken from the Atlas Admin API documentation. You can try the above example after switching the `GROUP_ID`, `PUBLIC_KEY`, and `PRIVATE_KEY` placeholders with those found in your Atlas dashboard. The `GROUP_ID` is the project id representing where you'd like to create your cluster. The `PUBLIC_KEY` and `PRIVATE_KEY` are the keys for a particular project with proper permissions for creating clusters.\n\nThe same cURL components can be executed in a programming language or even a tool like Postman. The Atlas Admin API is not limited to just cURL using a command line.\n\nWhile you can use the Atlas Admin API to create users, apply access rules, and similar, it would take a few different HTTP requests in comparison to what we saw with the CLI because the CLI was designed to make these kinds of interactions a little easier.\n\nFor information on the other optional fields that can be used in the request, refer to the documentation.\n\n## Using HashiCorp Terraform to Deploy a Cluster\n\nThere's a chance that your organization is already using an infrastructure-as-code (IaC) solution such as Terraform. 
The great news is that we have a Terraform provider for MongoDB Atlas that allows you to create a free Atlas database easily.\n\nTake the following example Terraform configuration:\n\n```\nlocals {\nmongodb_atlas_api_pub_key = \"PUBLIC_KEY\"\nmongodb_atlas_api_pri_key = \"PRIVATE_KEY\"\nmongodb_atlas_org_id = \"ORG_ID\"\nmongodb_atlas_project_id = \"PROJECT_ID\"\n}\n\nterraform {\nrequired_providers {\n mongodbatlas = {\n source = \"mongodb/mongodbatlas\"\n version = \"1.1.1\"\n }\n}\n}\n\nprovider \"mongodbatlas\" {\npublic_key = local.mongodb_atlas_api_pub_key\nprivate_key = local.mongodb_atlas_api_pri_key\n}\n\nresource \"mongodbatlas_cluster\" \"my_cluster\" {\nproject_id = local.mongodb_atlas_project_id\nname = \"terraform\"\n\nprovider_name = \"TENANT\"\nbacking_provider_name = \"AWS\"\nprovider_region_name = \"US_EAST_1\"\nprovider_instance_size_name = \"M0\"\n}\n\noutput \"connection_strings\" {\nvalue = mongodbatlas_cluster.my_cluster.connection_strings.0.standard_srv\n}\n\n```\n\nIf you added the above configuration to a **main.tf** file and swapped out the information at the top of the file with your own, you could execute the following commands to deploy a cluster with Terraform:\n\n```\nterraform init\nterraform plan\nterraform apply\n```\n\nThe configuration used in this example was taken from the Terraform template accessible within the Visual Studio Code Extension for MongoDB. However, if you'd like to learn more about Terraform with MongoDB, check out the official provider information within the Terraform Registry.\n\n## Using AWS CloudFormation to Deploy a Cluster\n\nIf your applications are all hosted in AWS, then CloudFormation, another IaC solution, may be one you want to utilize.\n\nIf you're interested in a script-like configuration for CloudFormation, Cloud Product Manager Jason Mimick wrote a thorough tutorial titled Get Started with MongoDB Atlas and AWS CloudFormation. However, like I mentioned earlier, I'm a fan of a point and click solution.\n\nA point and click solution can be accomplished with AWS CloudFormation! Navigate to the MongoDB Atlas on AWS page and click \"How to Deploy.\"\n\nYou'll have a few options, but the simplest option is to launch the Quick Start for deploying without VPC peering.\n\nThe next steps involve following a four-part configuration and deployment wizard.\n\nThe first step consists of selecting a configuration template.\n\nUnless you know your way around CloudFormation, the defaults should work fine.\n\nThe second step of the configuration wizard is for defining the configuration information for MongoDB Atlas. This is what was seen in other parts of this article.\n\nReplace the fields with your own information, including the public key, private key, and organization id to be used with CloudFormation. Once more, these values can be found and configured within your MongoDB Atlas Dashboard.\n\nThe final stage of the configuration wizard is for defining permissions. For the sake of this article, everything in the final stage will be left with the default provided information, but feel free to use your own.\n\nOnce you review the CloudFormation configuration, you can proceed to the deployment, which could take a few minutes.\n\nAs I mentioned, if you'd prefer not to go through this wizard, you can also explore a more scripted approach using the CloudFormation and AWS CLI.\n\n## Conclusion\n\nYou just got an introduction to some of the ways that you can deploy MongoDB Atlas clusters. 
Like I mentioned earlier, there isn't a wrong way, but there could be a better way depending on how you're already managing your infrastructure.\n\nIf you get stuck with your MongoDB Atlas deployment, navigate to the MongoDB Community Forums for some help!", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to quickly and easily deploy a MongoDB Atlas cluster using a variety of methods such as CloudFormation, Terraform, the CLI, and more.", "contentType": "Quickstart"}, "title": "5 Different Ways to Deploy a Free Database with MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-massive-number-collections", "action": "created", "body": "# Massive Number of Collections\n\nIn the first post in this MongoDB Schema Design Anti-Patterns series, we discussed how we should avoid massive arrays when designing our schemas. But what about having a massive number of collections? Turns out, they're not great either. In this post, we'll examine why.\n\n>\n>\n>:youtube]{vid=8CZs-0it9r4 t=719}\n>\n>Are you more of a video person? This is for you.\n>\n>\n\n## Massive Number of Collections\n\nLet's begin by discussing why having a massive number of collections is an anti-pattern. If storage is relatively cheap, who cares how many collections you have?\n\nEvery collection in MongoDB [automatically has an index on the \\_id field. While the size of this index is pretty small for empty or small collections, thousands of empty or unused indexes can begin to drain resources. Collections will typically have a few more indexes to support efficient queries. All of these indexes add up.\n\nAdditionally, the WiredTiger storage engine (MongoDB's default storage engine) stores a file for each collection and a file for each index. WiredTiger will open all files upon startup, so performance will decrease when an excessive number of collections and indexes exist.\n\nIn general, we recommend limiting collections to 10,000 per replica set. When users begin exceeding 10,000 collections, they typically see decreases in performance.\n\nTo avoid this anti-pattern, examine your database and remove unnecessary collections. If you find that you have an increasing number of collections, consider remodeling your data so you have a consistent set of collections.\n\n## Example\n\nLet's take an example from the greatest tv show ever created: Parks and Recreation. Leslie is passionate about maintaining the parks she oversees, and, at one point, she takes it upon herself to remove the trash in the Pawnee River.\n\nLet's say she wants to keep a minute-by-minute record of the water level and temperature of the Pawnee River, the Eagleton River, and the Wamapoke River, so she can look for trends. She could send her coworker Jerry to put 30 sensors in each river and then begin storing the sensor data in a MongoDB database.\n\nOne way to store the data would be to create a new collection every day to store sensor data. 
Each collection would contain documents that store information about one reading for one sensor.\n\n``` javascript\n// 2020-05-01 collection\n{\n \"_id\": ObjectId(\"5eac643e64faf3ff31d70d35\"),\n \"river\": \"PawneeRiver\",\n \"sensor\": 1\n \"timestamp\": \"2020-05-01T00:00:00Z\",\n \"water-level\": 61.56,\n \"water-temperature\": 72.1\n},\n{\n \"_id\": ObjectId(\"5eac643e64faf3ff31d70d36\"),\n \"river\": \"PawneeRiver\",\n \"sensor\": 2\n \"timestamp\": \"2020-05-01T00:00:00Z\",\n \"water-level\": 61.55,\n \"water-temperature\": 72.1\n},\n...\n{\n \"_id\": ObjectId(\"5eac643e64faf3ff31d70dfc\"),\n \"river\": \"WamapokeRiver\",\n \"sensor\": 90\n \"timestamp\": \"2020-05-01T23:59:00Z\",\n \"water-level\": 72.03,\n \"water-temperature\": 64.1\n}\n\n// 2020-05-02 collection\n{\n \"_id\": ObjectId(\"5eac644c64faf3ff31d90775\"),\n \"river\": \"PawneeRiver\",\n \"sensor\": 1\n \"timestamp\": \"2020-05-02T00:00:00Z\",\n \"water-level\": 63.12,\n \"water-temperature\": 72.8\n},\n {\n \"_id\": ObjectId(\"5eac644c64faf3ff31d90776\"),\n \"river\": \"PawneeRiver\",\n \"sensor\": 2\n \"timestamp\": \"2020-05-02T00:00:00Z\",\n \"water-level\": 63.11,\n \"water-temperature\": 72.7\n},\n...\n{\n \"_id\": ObjectId(\"5eac644c64faf3ff31d9079c\"),\n \"river\": \"WamapokeRiver\",\n \"sensor\": 90\n \"timestamp\": \"2020-05-02T23:59:00Z\",\n \"water-level\": 71.58,\n \"water-temperature\": 66.2\n}\n```\n\nLet's say that Leslie wants to be able to easily query on the `river` and `sensor` fields, so she creates an index on each field.\n\nIf Leslie were to store hourly data throughout all of 2019 and create two indexes in each collection (in addition to the default index on `_id`), her database would have the following stats:\n\n- Database size: 5.2 GB\n- Index size: 1.07 GB\n- Total Collections: 365\n\nEach day she creates a new collection and two indexes. As Leslie continues to collect data and her number of collections exceeds 10,000, the performance of her database will decline.\n\nAlso, when Leslie wants to look for trends across weeks and months, she'll have a difficult time doing so since her data is spread across multiple collections.\n\n \n\nLet's say Leslie realizes this isn't a great schema, so she decides to restructure her data. This time, she decides to keep all of her data in a single collection. 
She'll bucket her information, so she stores one hour's worth of information from one sensor in each document.\n\n``` javascript\n// data collection\n{\n \"_id\": \"PawneeRiver-1-2019-05-01T00:00:00.000Z\",\n \"river\": \"PawneeRiver\",\n \"sensor\": 1,\n \"readings\": \n {\n \"timestamp\": \"2019-05-01T00:00:00.000+00:00\",\n \"water-level\": 61.56,\n \"water-temperature\": 72.1\n },\n {\n \"timestamp\": \"2019-05-01T00:01:00.000+00:00\",\n \"water-level\": 61.56,\n \"water-temperature\": 72.1\n },\n ...\n {\n \"timestamp\": \"2019-05-01T00:59:00.000+00:00\",\n \"water-level\": 61.55,\n \"water-temperature\": 72.0\n }\n ]\n},\n...\n{\n \"_id\": \"PawneeRiver-1-2019-05-02T00:00:00.000Z\",\n \"river\": \"PawneeRiver\",\n \"sensor\": 1,\n \"readings\": [\n {\n \"timestamp\": \"2019-05-02T00:00:00.000+00:00\",\n \"water-level\": 63.12,\n \"water-temperature\": 72.8\n },\n {\n \"timestamp\": \"2019-05-02T00:01:00.000+00:00\",\n \"water-level\": 63.11,\n \"water-temperature\": 72.8\n },\n ...\n {\n \"timestamp\": \"2019-05-02T00:59:00.000+00:00\",\n \"water-level\": 63.10,\n \"water-temperature\": 72.7\n }\n ]\n}\n...\n```\n\nLeslie wants to query on the `river` and `sensor` fields, so she creates two new indexes for this collection.\n\nIf Leslie were to store hourly data for all of 2019 using this updated schema, her database would have the following stats:\n\n- Database size: 3.07 GB\n- Index size: 27.45 MB\n- Total Collections: 1\n\nBy restructuring her data, she sees a massive reduction in her index size (1.07 GB initially to 27.45 MB!). She now has a single collection with three indexes.\n\nWith this new schema, she can more easily look for trends in her data because it's stored in a single collection. Also, she's using the default index on `_id` to her advantage by storing the hour the water level data was gathered in this field. If she wants to query by hour, she already has an index to allow her to efficiently do so.\n\n \n\nFor more information on modeling time-series data in MongoDB, see [Building with Patterns: The Bucket Pattern.\n\n## Removing Unnecessary Collections\n\nIn the example above, Leslie was able to remove unnecessary collections by changing how she stored her data.\n\nSometimes, you won't immediately know what collections are unnecessary, so you'll have to do some investigating yourself. If you find an empty collection, you can drop it. If you find a collection whose size is made up mostly of indexes, you can probably move that data into another collection and drop the original. You might be able to use $merge to move data from one collection to another.\n\nBelow are a few ways you can begin your investigation.\n\n \n\n### Using MongoDB Atlas\n\nIf your database is hosted in Atlas, navigate to the Atlas Data Explorer. The Data Explorer allows you to browse a list of your databases and collections. Additionally, you can get stats on your database including the database size, index size, and number of collections.\n\nIf you are using an M10 cluster or larger on Atlas, you can also use the Real-Time Performance Panel to check if your application is actively using a collection you're considering dropping.\n\n### Using MongoDB Compass\n\nRegardless of where your MongoDB database is hosted, you can use MongoDB Compass, MongoDB's desktop GUI. Similar to the Data Explorer, you can browse your databases and collections so you can check for unused collections. 
You can also get stats at the database and collection levels.\n\n### Using the Mongo Shell\n\nIf you prefer working in a terminal instead of a GUI, connect to your database using the mongo shell.\n\nTo see a list of collections, run `db.getCollectionNames()`. Output like the following will be displayed:\n\n``` javascript\n\n \"2019-01-01\",\n \"2019-01-02\",\n \"2019-01-03\",\n \"2019-01-04\",\n \"2019-01-05\",\n ...\n]\n```\n\nTo retrieve stats about your database, run `db.stats()`. Output like the following will be displayed:\n\n``` javascript\n{\n \"db\" : \"riverstats\",\n \"collections\" : 365,\n \"views\" : 0,\n \"objects\" : 47304000,\n \"avgObjSize\" : 118,\n \"dataSize\" : 5581872000,\n \"storageSize\" : 1249677312,\n \"numExtents\" : 0,\n \"indexes\" : 1095,\n \"indexSize\" : 1145790464,\n \"scaleFactor\" : 1,\n \"fsUsedSize\" : 5312217088,\n \"fsTotalSize\" : 10726932480,\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1588795184, 3),\n \"signature\" : {\n \"hash\" : BinData(0,\"orka3bVeAiwlIGdbVoP+Fj6N01s=\"),\n \"keyId\" : NumberLong(\"6821929184550453250\")\n }\n },\n \"operationTime\" : Timestamp(1588795184, 3)\n}\n```\n\nYou can also run `db.collection.stats()` to see information about a particular collection.\n\n## Summary\n\nBe mindful of creating a massive number of collections as each collection likely has a few indexes associated with it. An excessive number of collections and their associated indexes can drain resources and impact your database's performance. In general, try to limit your replica set to 10,000 collections.\n\nCome back soon for the next post in this anti-patterns series!\n\n>\n>\n>When you're ready to build a schema in MongoDB, check out [MongoDB Atlas, MongoDB's fully managed database-as-a-service. Atlas is the easiest way to get started with MongoDB. With a forever-free tier, you're on your way to realizing the full value of MongoDB.\n>\n>\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- MongoDB Docs: Reduce Number of Collections\n- MongoDB Docs: Data Modeling Introduction\n- MongoDB Docs: Use Buckets for Time-Series Data\n- MongoDB University M320: Data Modeling\n- Blog Series: Building with Patterns\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Don't fall into the trap of this MongoDB Schema Design Anti-Pattern: Massive Number of Collections", "contentType": "Article"}, "title": "Massive Number of Collections", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/cross-cluster-search", "action": "created", "body": "# Cross Cluster Search Using Atlas Search and Data Federation\n\nThe document model is the best way to work with data, and it\u2019s the main factor to drive MongoDB popularity. The document model is also helping MongoDB innovate it's own solutions to power the world's most sophisticated data requirements.\n\nData federation, allows you to form federated database instances that span multiple data sources like different Atlas clusters, AWS S3 buckets, and other HTTPs sources. Now, one application or service can work with its individual cluster with dedicated resources and data compliance while queries can run on a union of the datasets. 
This is great for analytics or those global view dashboards and many other use cases in distributed systems.\n\nAtlas Search is also an emerging product that allows applications to build relevance-based search powered by Lucene directly on their MongoDB collections. While both products are amazing on their own, they can work together to form a multi-cluster, robust text search to solve challenges that were hard to solve beforehand.\n\n## Example use case\n\nPlotting attributes on a map based on geo coordinates is a common need for many applications. Complex code needs to be added if we want to merge different search sources into one data set based on the relevance or other score factors within a single request.\n\nWith Atlas federated queries run against Atlas search indexes, this task becomes as easy as firing one query. \n\nIn my use case, I have two clusters: cluster-airbnb (Airbnb data) and cluster-whatscooking (restaurant data). For most parts of my applications, both data sets have nothing really in common and are therefore kept in different clusters for each application.\n\nHowever, if I am interested in plotting the locations of restaurants and Airbnbs (and maybe shops, later) around the user, I have to merge the datasets together with a search index built on top of the merged data. \n\n## With federated queries, everything becomes easier\n\nAs mentioned above, the two applications are running on two separated Atlas clusters due to their independent microservice nature. They can even be placed on different clouds and regions, like in this picture.\n\nThe restaurants data is stored in a collection named \u201crestaurants\u201d followed by a common modeling, such as grades/menu/location.\n\nThe Airbnb application stores a different data set model keeping Airbnb data, such as bookings/apartment details/location. \n\nThe power of the document model and federated queries is that those data sets can become one if we create a federated database instance and group them under a \u201cvirtual collection\u201d called \u201cpointsOfInterest.\u201d\n\nThe data sets can now be queried as if we have a collection named \u201cpointsOfInterest\u201d unioning the two.\n\n## Lets add Atlas Search to the mix\n\nSince the collections are located on Atlas, we can easily use Atlas search to individually index each. It\u2019s also most probable that we already did that as our underlying applications require search capabilities of restaurants and Airbnb facilities. \n\nHowever, if we make sure that the names of the indexes are identical\u2014for example, \u201cdefault\u201d\u2014and that key fields for special search\u2014like geo\u2014are the same (e.g., \u201clocation\u201d), we can run federated search queries on \u201cpointsOfInterest.\u201d We are able to do that since the federated queries are propagated to each individual data source that comprise the virtual collection. With Atlas Search, it's surprisingly powerful as we can get results with a correct merging of the search scores between all of our data sets. This means that if geo search points of interest are close to my location, we will get either Airbnb or restaurants correctly ordered by the distance. 
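To make that concrete, a federated geo query against the virtual collection might look something like the sketch below. The collection name (pointsOfInterest), the index name (\"default\"), and the \"location\" field follow the example described here, while the origin coordinates, the pivot value, and the projected \"name\" field are placeholders assumed for illustration:\n\n```javascript\n// Run against the federated database instance, not an individual cluster.\ndb.pointsOfInterest.aggregate([\n  {\n    $search: {\n      index: \"default\",\n      near: {\n        path: \"location\",\n        origin: { type: \"Point\", coordinates: [ -73.98, 40.75 ] }, // assumed user location\n        pivot: 1000 // distance, in meters, at which the relevance score drops to 0.5\n      }\n    }\n  },\n  { $limit: 20 },\n  { $project: { name: 1, location: 1, score: { $meta: \"searchScore\" } } }\n])\n```\n\nBecause both underlying collections share the same index name and location field, the scores merge into a single, consistently ordered result set.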
What\u2019s even cooler is that Atlas Data Federation intelligently \u201cpushes down\u201d as much of a query as possible, so the search operation will be done locally on the clusters and the union will be done in the federation layer, making this operation as efficient as possible.\n\n## Finally, let's chart it up\n\nWe can take the query we just ran in Compass and export it to MongoDB Charts, our native charting offering that can directly connect to a federated database instance, plotting the data on a map:\n\n:charts]{url=\"https://charts.mongodb.com/charts-search-demos-rtbgg\" id=\"62cea0c6-2fb0-4a7e-893f-f0e9a2d1ef39\"}\n\n## Wrap-up\n\nWith new products come new power and possibilities. Joining the forces of [Data Federation and Atlas Search allows creators to easily form applications like never before. Start innovating today with MongoDB Atlas.\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Atlas Data Federation opens a new world of data opportunities. Cross cluster search is available on MongoDB Atlas by combining the power of data federation on different Atlas Search indexes scattered cross different clusters, regions or even cloud providers.", "contentType": "Article"}, "title": "Cross Cluster Search Using Atlas Search and Data Federation", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/swift/build-host-docc-documentation-using-github-actions-netlify", "action": "created", "body": "# Continuously Building and Hosting our Swift DocC Documentation using Github Actions and Netlify\n\nIn a past post of this series, we showed how easy it was to generate documentation for our frameworks and libraries using DocC and the benefits of doing it. We also saw the different content we can add, like articles, how-tos, and references for our functions, classes, and structs.\n\nBut once generated, you end up with an archived DocC folder that is not _that_ easy to share. You can compress it, email it, put it somewhere in the cloud so it can be downloaded, but this is not what we want. We want:\n\n* Automatic (and continuous) generation of our DocC documentation bundle.\n* Automatic (and continuous) posting of that documentation to the web, so it can be read online. \n\n## What\u2019s in a DocC bundle?\n\nA `.doccarchive` archive is, like many other things in macOS, a folder. Clone the repository we created with our documentation and look inside `BinaryTree.doccarchive` from a terminal. 
\n\n```bash\ngit clone https://github.com/mongodb-developer/realm-binary-tree-docc \ncd BinaryTree.doccarchive\n```\n\nYou\u2019ll see:\n\n```\n..\n\u251c\u2500\u2500 css\n\u2502 \u251c\u2500\u2500 \u2026\n\u2502 \u2514\u2500\u2500 tutorials-overview.7d1da3df.css\n\u251c\u2500\u2500 data\n\u2502 \u251c\u2500\u2500 documentation\n\u2502 \u2502 \u251c\u2500\u2500 binarytree\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 \u2026\n\u2502 \u2502 \u2502 \u2514\u2500\u2500 treetraversable-implementations.json\n\u2502 \u2514\u2500\u2500 tutorials\n\u2502 \u251c\u2500\u2500 binarytree\n\u2502 \u2502 \u251c\u2500\u2500 \u2026\n\u2502 \u2502 \u2514\u2500\u2500 traversingtrees.json\n\u2502 \u2514\u2500\u2500 toc.json\n\u251c\u2500\u2500 downloads\n\u251c\u2500\u2500 favicon.ico\n\u251c\u2500\u2500 favicon.svg\n\u251c\u2500\u2500 images\n\u2502 \u251c\u2500\u2500 \u2026\n\u2502 \u2514\u2500\u2500 tree.png\n\u251c\u2500\u2500 img\n\u2502 \u251c\u2500\u2500 \u2026\n\u2502 \u2514\u2500\u2500 modified-icon.5d49bcfe.svg\n\u251c\u2500\u2500 index\n\u2502 \u251c\u2500\u2500 availability.index\n\u2502 \u251c\u2500\u2500 data.mdb\n\u2502 \u251c\u2500\u2500 lock.mdb\n\u2502 \u2514\u2500\u2500 navigator.index\n\u251c\u2500\u2500 index.html\n\u251c\u2500\u2500 js\n\u2502 \u251c\u2500\u2500 chunk-2d0d3105.459bf725.js\n\u2502 \u251c \u2026\n\u2502 \u2514\u2500\u2500 tutorials-overview.db178ab9.js\n\u251c\u2500\u2500 metadata.json\n\u251c\u2500\u2500 theme-settings.json\n\u2514\u2500\u2500 videos\n```\n\nThis is a single-page web application. Sadly, we can\u2019t just open `index.html` and expect it to render correctly. As Apple explains in the documentation, for this to work, it has to be served from a proper web server, with a few rewrite rules added: \n\n> To host a documentation archive on your website, do the following:\n> \n> 1. Copy the documentation archive to the directory that your web server uses to serve files. In this example, the documentation archive is SlothCreator.doccarchive.\n> 1. Add a rule on the server to rewrite incoming URLs that begin with /documentation or /tutorial to SlothCreator.doccarchive/index.html.\n> 1. Add another rule for incoming requests to support bundled resources in the documentation archive, such as CSS files and image assets.\n\nThey even add a sample configuration to use with the Apache `httpd` server. So, to recap:\n\n* We can manually generate our documentation and upload it to a web server.\n* We need to add the rewrite rules described in Apple\u2019s documentation for the DocC bundle to work properly.\n\nEach time we update our documentation, we need to generate it and upload it. Let\u2019s generate our docs automatically.\n\n## Automating generation of our DocC archive using GitHub Actions\n\nWe\u2019ll continue using our Binary Tree Package as an example to generate the documentation. We\u2019ll add a GitHub Action to generate docs on each new push to main. This way, we can automatically refresh our documentation with the latest changes introduced in our library.\n\nTo add the action, we\u2019ll click on the `Actions` button in our repo. In this case, a Swift Action is offered as a template to start. We\u2019ll choose that one:\n\nAfter clicking on `Configure`, we can start tweaking our action. A GitHub action is just a set of steps that GitHub runs in a container for us. There are predefined steps, or we can just write commands that will work in our local terminal. 
What we need to do is:\n\n* Get the latest version of our code.\n* Build our documentation archive.\n* Find where the `doccarchive` has been generated.\n* Copy that archive to a place where it can be served online.\n\nWe\u2019ll call our action `docc.yml`. GitHub Actions are YAML files, as the documentation tells us. After adding them to our repository, they are stored in `.github/workflows/`, so they\u2019re just text files we can edit locally and push to our repo.\n\n### Getting the latest version of our code\n\nThis is the easy part. Every time a GitHub Action starts, it creates a new, empty container and clones our repo. So, our code is there, ready to be compiled, pass all tests, and do everything we need to do with it.\n\nOur action starts with:\n\n```yaml\nname: Generate DocC\non:\n  push:\n    branches: [ main ]\n\njobs:\n  Build-Github-Actions:\n    runs-on: macos-latest\n\n    steps:\n      - name: Git Checkout\n        uses: actions/checkout@v2\n```\n\nSo, here:\n\n* We give the action the name \u201cGenerate DocC\u201d.\n* Then we select when it\u2019ll run, i.e., on any push to `main`.\n* We run this on a macOS container, as we need Xcode.\n* The first step is to clone our repo. We use a predefined action, `checkout`, that GitHub provides us with.\n\n### Building our documentation archive\n\nNow that our code is in place, we can use `xcodebuild` to build the DocC archive. We can build our projects from the command line, run our tests, or, in this case, build the documentation.\n\n```bash\nxcodebuild docbuild -scheme BinaryTree -derivedDataPath ./docbuild -destination 'platform=iOS Simulator,OS=latest,name=iPhone 13 mini'\n```\n\nHere we\u2019re building to generate DocC (the `docbuild` parameter), choosing the `BinaryTree` scheme in our project, putting all generated binaries in a folder at hand (`./docbuild`), and using an iPhone 13 mini as the Simulator. When we build our documentation, we need to compile our library too. That\u2019s why we need to choose the Simulator (or device) used for building.\n\n### Find where the `doccarchive` has been generated\n\nIf everything goes well, we\u2019ll have our documentation built inside `docbuild`. We\u2019ll have to search for it, as each build generates a different hash to store its results, and each run starts on a clean machine. To find the archive, we use:\n\n```bash\nfind ./docbuild -type d -iname \"BinaryTree.doccarchive\"\n```\n\n### Copy our documentation to a place where it can be served online\n\nNow that we know where our DocC archive is, it\u2019s time to put it in a different repository. The idea is that we\u2019ll have one repository for our code and one for our generated DocC bundle. Netlify will read from this second repository and host it online.\n\nSo, we clone the repository that will hold our documentation with:\n\n```bash\ngit clone https://github.com/mongodb-developer/realm-binary-tree-docc\n```\n\nNow we have two repositories: the one cloned at the start of the action, and this one, which holds only the documentation. We copy over the newly generated DocC archive:\n\n```bash\ncp -R \"$DOCC_DIR\" realm-binary-tree-docc\n```\n\nAnd we commit all changes:\n\n```bash\ncd realm-binary-tree-docc\ngit add .\ngit commit -m \"$DOC_COMMIT_MESSAGE\"\ngit status\n```\n\nHere, `$DOC_COMMIT_MESSAGE` is just a variable we populate with the last commit message from our code repo and the current date.
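\n\nFor reference, here is a minimal sketch of how those two variables might be populated in an earlier step of the same workflow; the variable names match the snippets above, but the exact commands in the published action may differ:\n\n```bash\n# Locate the generated archive; the derived-data path changes on every clean run\nDOCC_DIR=$(find ./docbuild -type d -iname \"BinaryTree.doccarchive\")\n\n# Build a commit message from the code repo's last commit plus the current date\nDOC_COMMIT_MESSAGE=\"$(git log -1 --pretty=%B) - $(date)\"\n```\n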
But it can be any message.\n\nAfter this, we need to push the changes to the documentation repository.\n\n```bash\ngit config --get remote.origin.url\ngit remote set-url origin https://${{ secrets.API_TOKEN_GITHUB}}@github.com/mongodb-developer/realm-binary-tree-docc\n\ngit push origin\n```\n\nHere we first print our `origin` (the repo where we\u2019ll be pushing our changes) with\n\n```bash\ngit config --get remote.origin.url\n```\n\nThis command will show the origin of a git repository. It will print the URL of our code repository. But this is not where we want to push. We want to push to the _documentation_ repository. So, we set the origin pointing to https://github.com/mongodb-developer/realm-binary-tree-docc. As we will need permission to push changes, we authenticate using a Personal Access Token. From Github Documentation on Personal Access Tokens:\n\n> You should create a personal access token to use in place of a password with the command line or with the API.\n\nLuckily, Github Actions has a way to store these secrets, so they\u2019re publicly accessible. Just go to your repository\u2019s Settings and expand Secrets. You\u2019ll see an \u201cActions\u201d option. There you can give your secret a name to be used later in your actions.\n\nFor reference, this is the complete action I\u2019ve used.\n\n## Hosting our DocC archives in Netlify\n\nAs shown in this excellent post by Joseph Duffy, we'll be hosting our documentation in Netlify. Creating a free account is super easy. In this case, I advise you to use your Github credentials to log in Netlify. This way, adding a new site that reads from a Github repo will be super easy. Just add a new site and select Import an existing project. You can then choose Github, and once authorized, you\u2019ll be able to select one of your repositories. \n\nNow I set it to deploy with \u201cAny pull request against your production branch / branch deploy branches.\u201d So, every time your repo changes, Netlify will pick up the change and host it online (if it\u2019s a web app, that is).\n\nBut we\u2019re missing just one detail. Remember I mentioned before that we need to add some rewrite rules to our hosted documentation? We\u2019ll add those in a file called `netlify.toml`. 
This file looks like:\n\n```toml\nbuild]\npublish = \"BinaryTree.doccarchive/\"\n\n[[redirects]]\nfrom = \"/documentation/*\"\nstatus = 200\nto = \"/index.html\"\n\n[[redirects]]\nfrom = \"/tutorials/*\"\nstatus = 200\nto = \"/index.html\"\n\n[[redirects]]\nfrom = \"/data/documentation.json\"\nstatus = 200\nto = \"/data/documentation/binarytree.json\"\n\n[[redirects]]\nforce = true\nfrom = \"/\"\nstatus = 302\nto = \"/documentation/\"\n\n[[redirects]]\nforce = true\nfrom = \"/documentation\"\nstatus = 302\nto = \"/documentation/\"\n\n[[redirects]]\nforce = true\nfrom = \"/tutorials\"\nstatus = 302\nto = \"/tutorials/\"\n```\n\nTo use it in your projects, just review the lines:\n\n```toml\npublish = \"BinaryTree.doccarchive/\"\n\u2026\nto = \"/data/documentation/binarytree.json\"\n```\n\nAnd change them accordingly.\n\n## Recap\n\nIn this post, we\u2019ve seen how to:\n\n* Add a Github Action to a [code repository that continuously builds a DocC documentation bundle every time we push a change to the code.\n* That action will in turn push that newly built documentation to a documentation repository for our library.\n* That documentation repository will be set up in Netlify and add some rewrite rules so we'll be able to host it online.\n\nDon\u2019t wait and add continuous generation of your library\u2019s documentation to your CI pipeline!\n", "format": "md", "metadata": {"tags": ["Swift", "Realm", "GitHub Actions"], "pageDescription": "In this post we'll see how to use Github Actions to continuously generate the DocC documentation for our Swift libraries and how to publish this documentation so that can be accessed online, using Netlify.", "contentType": "Tutorial"}, "title": "Continuously Building and Hosting our Swift DocC Documentation using Github Actions and Netlify", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/swift-ui-meetup", "action": "created", "body": "# SwiftUI Best Practices with Realm\n\nDidn't get a chance to attend the SwiftUI Best Practices with Realm Meetup? Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up.\n\n>SwiftUI Best Practices with Realm\n>\n>:youtube]{vid=mTv96vqTDhc}\n\nIn this event, Jason Flax, the engineering lead for the Realm iOS team, explains what SwiftUI is, why it's important, how it will change mobile app development, and demonstrates how Realm's integration with SwiftUI makes it easy for iOS developers to leverage this framework.\n\nIn this 50-minute recording, Jason covers: \n- SwiftUI Overview and Benefits\n- SwiftUI Key Concepts and Architecture\n- Realm Integration with SwiftUI\n- Realm Best Practices with SwiftUI\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. [Get started now by build: Deploy Sample for Free!\n\n## Transcript\n\n**Ian Ward**: All right, I think we are just about ready to kick it off. So to kick it off here, this is a new year and this is our kind of first inaugural meeting for the user group for the Realm user group. What we're going to do over the course of this next year is have and schedule these at least once a month if not multiple times a month. Where we kind of give a talk about a particular topic and then we can have a Q&A. 
And this is kind of just about exchanging new and interesting ideas. Some of them will be about iOS, obviously we have other SDKs. It could be about Android. Some of them will come from the Realm team. Others could come from the community. So if you have a particular talk or something you want to talk about, please reach out to us. We would love to hear from you. And potentially, you could do a talk here as well.\n\n**Ian Ward**: The format kind of today is we're going to hear from Jason Flax. And actually, I should introduce myself. I'm Ian Ward. I do product at MongoDB but I focus on the Realm SDK, so all of our free and opensource community SDKs as well as the synchronization product. I came over the with the Realm acquisition approximately almost two years ago now. And so I've been working on mobile application architecture for the last several years. Today, we're going to hear from Jason Flax who is our lead iOS engineer. The iOS team has been doing a ton of work on making our SwiftUI integration really great with Realm. So he's going to talk about. He's also going to kind of give a quick rundown of SwiftUI. I'll let him get into that.\n\n**Ian Ward**: But if you could please, just in terms of logistics, mute yourself during the presentation, he has a lot to go through. This will be recorded so we'll share this out later. And then at the end, we'll save some time for Q&A, you can ask questions. We can un-mute, we can answer any questions you might have. If you have questions during the presentation, just put them in the chat and we'll make sure to get to them at the end. So without further adieu, Jason, if you want to kick it off here and maybe introduce yourself.\n\n**Jason Flax**: Sure thing. Yeah, thanks for the intro there. So I'm Jason Flax. I am the lead of the Cocoa team or the iOS team. I did not come along with the Realm acquisition, I was at Mongo previously. But the product that I was working on Stitch largely overlapped with the work of Realm and it was a natural move for me to move over to the Cocoa team.\n\n**Jason Flax**: It's been a great, has it been two years? I actually don't even know. Time doesn't mean much right now. But yeah, it's been a lot of fun. We've been working really hard to try to get Realm compatible with SwiftUI and just working really well. And I think we've got things in a really nice place now and I'm pretty excited to show everyone in the presentation. I'll try to move through it quickly though because I do want to get time for questions and just kind of the mingling that you normally have at an actual user group. Cool.\n\n**Ian Ward**: Perfect. Thanks Jason. Yeah, normally we would have refreshments and pie at the end. But we'll have to settle for some swag. So send out a link in the chat if you want to fill it out to get some swag, it'd be great. Thank you for attending and thank you Jason.\n\n**Jason Flax**: Cool. I'll start sharing this. Let's see where it's at. Can people see the presentation?\n\n**Ian Ward**: I can. You're good.\n\n**Jason Flax**: Cool. All right, well, this is SwiftUI and Realm best practices. I am Jason Flax lead engineer for the Realm Cocoa team. All very self explanatory. Excited to be here, as I said. Let's get started. Next. Cool, so the agenda for today, why SwitftUI, SwiftUI basics which I'll try to move through quickly. I know you all are very excited to hear me talk about what VStack is. 
Realm models, how they actually function, what they do, how they function with SwiftUI and how live objects function with SwiftUI since it's a very state based system. SwiftUI architecture which expect that to be mildly controversial. Things have changed a lot since the old MVC days. A lot of the old architectures, the three letter, five letter acronyms, they don't really make as much sense anymore so I'm here to talk about those. And then the Q&A.\n\n**Jason Flax**: Why SwiftUI? I'm sure you all are familiar with this. That on the right there would actually be a fairly normal storyboard, as sad as that is. And below would be a bit of a merge conflict there between the underlying nib code which for anybody that's been doing iOS development for a while, it's a bit of a nightmare. I don't want to slag UI kit too hard, it actually is really cool. I think one of my favorite things about UI kit was being able to just drag and drop elements into code as IBOutlets or so on.\n\n**Jason Flax**: I've actually used that as means to teach people programming before because it's like, oh right, I have this thing on this view, I drag and drop it into the code. That code is the thing. That code is the thing. That's a really powerful learning tool and I think Apple did a great job there. But it's kind of old. It didn't necessarily scale well for larger teams. If you had a team of four or five people working on an app and you had all these merge conflicts, you'd spend a full day on them. Not to mention the relationships between views being so ridiculously complex. It's the stuff of nightmares but it's come a long way now. SwiftUI, it seems like something they've been building towards for a really long time.\n\n**Jason Flax**: Architectures, this is what I was just talking about. UI kit, though it was great, introduced a lot of complex problems that people needed to solve, right? You'd end up with all of this spaghetti code trying to update your views and separate the view object from the business logic which is something you totally need to do. But you ended up having to figure out clever ways to break these pieces up and connect all the wires. The community figured out various ways. Some working better for others. Some of them were use-case based. The problem was if you actually adhered to any of them, with the exception of maybe MVC and MVI which we'll talk about later, you'd end up with all of these neat little pieces, like Legos even but there's a lot of boilerplate. And I'd say you'd kind of have gone from spaghetti code to ravioli code which is a whole different set of problems. I'll talk later on about what the new better architectures might be for SwiftUI since you don't need a lot of these things anymore.\n\n**Jason Flax**: Let's go over the basics. This is an app. SwiftUI.app there is a class that is basically replacing app delegate, not a class, protocol, sorry. It's basically replacing the old app delegate. This isn't a massive improvement. It's removing, I don't know, 10 lines of code. It's a nice small thing, it's a visual adjustment. It sets the\ntone I guess of for the rest of SwiftUI. Here we are, @main struct app, this is my app, this is my scene, this is my view. There you go, that's it. If you still need some of the old functionality of SwiftUI, or sorry not SwiftUI, app delegate, there is a simple property wrapper that you just add on the view, it's called UI application development adaptor. 
That'll give a lot of the old features that you probably still will need in most iOS apps.\n\n**Jason Flax**: Moving on, content view. This is where the meat of the view code is. This is our first view. Content view as itself is not a concept. It happens to be when I'm in my view, it is very descriptive of what this is. It is the content. Basically the rest of the SwiftUI presentation is going to be me stepping through each of the individual views, explaining them, explaining what I'm doing and how they all connect together. On the right, what we're building here is a reminders' app. Anybody here that has an iPhone has a reminders' app. This is obviously, doesn't have all the bells and whistles that that does but it's a good way to show what SwiftUI can do.\n\n**Jason Flax**: The navigation view is going to be many people's top level view. It enables a whole bunch of functionality in SwiftUI, like edit buttons, like navigation links, like the ability to have main and detailed views and detailed views of detailed views. All the titles are in alignment. As you can see, that edit button, which I am highlighting with my mouse, that's actually built in. That is the edit button here. That will just sort of automagically enable a bunch of things if you have it in your view hierarchy. This is both a really cool thing and somewhat problematic with SwiftUI. There is a load of implicit behavior that you kind of just need to learn. That said, once you do learn it, it does mean a lot less code. And I believe the line goes, the best line of code is the one that's never been written. So less code the better as far as I'm concerned so let's dig in.\n\n**Ian Ward**: That's right.\n\n**Jason Flax**: It's one of my favorites. Cool, so let's talk about the VStack. Really straightforward, not gong to actually harp on this too long. It's a vertical stack views, it's exactly what it sounds like. I suppose the nice thing here is that this one actually is intuitive. You put a bunch of views together, those views will be stacked. So what have here, you have the search bar, the search view, the reminder list results view. Each one of these is a reminder list and they hook into the Realm results struct, which I'll dig into later. A spacer, which I'll also dig into a bit, and the footer which is just this add list button and it stacks them vertically.\n\n**Jason Flax**: The spacers a weird thing that has been added which, again, it's one of these non-intuitive things that once you learn it, it's incredibly useful. Anybody familiar with auto layout and view constraints will immediately sort of latch onto this because it is nice when you get it right. The tricky part is always getting it right. But it's pretty straightforward. It creates space between views. In this case, what it's literally doing is pushing back the reminder list results view in the footer, giving them the space needed so that if this... it knocks the footer to the bottom of the view which is exactly what we want. And if the list continues to grow, this inner, say, view, will be scrollable while the footer stays at the bottom.\n\n**Jason Flax**: Right, @State and the $ operator. This is brand new. It is all tied to property wrappers. It was introduced alongside SwiftUI though they are technically separate features. @STate is really cool. Under the hood what's happening is search filter isn't actually baked into the class that way. When this code compiles, there actually will be an underscore search filter on the view that is the string. 
The search filter you see here, the one that I'm referencing several times in the code, that is actually the property or the @State. And @State is a struct that contains reference based storage to this sting that I can modify it between views. And when the state property is updated, it will actually automatically update the view because state inherits from a thing call dynamic property which allows you to notify the view, right, this thing has changed, please update my view.\n\n**Jason Flax**: And as you can see on the right side here, when I type into the search bar, I type ORK, O-R-K, which slowly narrows down the reminder list that we have available. It does that all kind of magically. Basically, in the search view I type in, it passes that information back to state which isn't technically accurate but I'll explain more later. And then will automatically update the results view with that information.\n\n**Jason Flax**: The $ operator is something interesting out of property wrappers as well. All property wrappers have the ability to project a value. It's a variable on the property wrapper called projected value. That can be any type you want. In the case of @State, it is a binding type. Binding is going to encapsulate the property. It's going to have a getter and setter and it' going to do some magic under the hood that when I pass this down to the views, it's going to get stored in the view hierarchy. And when I modify it, because it holds the same let's say reference and memory as @State, @State is going to know, okay, I need to update the view now. We're going to see the $ operator a bunch here. I'll bring it up several times because Realm is now also taking advantage of this feature.\n\n**Jason Flax**: Let's dig into custom views. SwiftUI has a lot of really cool baked in functionality. And I know they're doing another release in June and there's a whole bunch of stuff coming down the pipeline for it. Not every view you'd think would exist exists. I really wanted a simple clean Appley looking search view in this app so I had to make my own custom class for it. You'll also notice that I pass in the search filter which is stored as a state variable on the content view. We'll get to that in a moment. This is the view. Search view, there's a search filter on top. This is the @binding I was talking about. We have a verticle stack of views here. Let's dig in.\n\n**Jason Flax**: The view stack actually just kind of sets up the view. This goes back into some of the knowledge that you just kind of have to gain about how spacers work and how all of these stacks work. It's aligned in the view in a certain way that it fills in the view in the way that I want it too. It's pretty specific to this view so I'm not going to harp on it too long. HStack is the opposite of a VStack, is a horizontal stack of views. The image here will trail the text field. If you have a right to left language, I'm fairly certain it also switches to which is pretty cool. Everything is done through leading and trailing, not left and right so that you can just broadly support anything language based. Cool.\n\n**Jason Flax**: This is the image class. I actually think this is a great addition. They've done a whole bunch of cool stuff under the hood with this. Really simple, it's the magnifying glass that is the system name for the icon. With SwiftUI, Apple came out with this thing called SF Symbols. SF stands for San Francisco there, it ties to their San Francisco font. 
They came out with this set of icons, 600 or so, I'm not sure the exact number, that perfectly align themselves with the bog standard Swift UI views and the bog standard Apple fonts and all that kind of thing. You can download the program that lets you look at each one you have and see which ones you have access to. It's certainly not a secret. And then you can just access them in your app very simply as so.\n\n**Jason Flax**: The cool thing here that I like is that I remember so many times going back and forth back when I was doing app development with our designers, with our product team of I need this thing to look like this. It's like, right, well, that's kind of non-standard, can you provide us with the icon in the 12 different sizes we needed. I need it in a very specific format, do you have Sketch? You have Sketch, right? Cool, okay, Sketch, great. There's no need for that anymore. Of course there's still going to be uses for it. It's still great for sketching views and creating designs. But the fact that Apple has been working towards standardizing everything is great. It still leaves room for creativity. It's not to say you have to use these things but it's a fantastic option to have.\n\n**Jason Flax**: This is your standard text field. You type in it. Pretty straightforward. The cool thing here ties back to the search filter. We're passing that binding back in, that $ operator. We're using it again. You're going to keep seeing it. When this is modified, it updates the search filter and it's, for a lack of better way to put it, passed back between the views. Yeah. This slide is basically, yeah, it's the app binding.\n\n**Jason Flax**: Let's dig into the models. We have our custom view, we have our search view. We now have our reminder view that we need to dig into to have any of this working. We need our data models, right? Our really basic sort of dummy structures that contain the data, right? This is a list of reminders. This is the name of that list, the icon of that list. This is a reminder. A reminder has a name, a priority, a date that it's due, whether or not it's complete, things like that, right? We really want to store this data in a really simple way and that's whee Realm comes in handy.\n\n**Jason Flax**: This is our reminder class. I have it as an embedded object. Embedded objects are a semi-new feature of Realm that differ from regular objects. The long story short is that they effectively enable cascade and deletes which means that if you have a reminder list and you delete it, you really wouldn't want your reminder still kind of hanging out in ether not attached to this high level list. So when you delete that list, EmbeddedObject enables it just be automatically nixed from the system. Oops, sorry, skipped a slide there.\n\n**Jason Flax**: RealmEnum is another little interesting thing here. It allows you to store enums in Realm. Unfortunately the support is only for basic enums right now but that is still really nice. In this case, the enum is of a priority. Reminders have priorities. There's a big difference between taking out the trash and, I don't know, going for a well checkup for the doctor kind of thing. Pretty standard stuff. Yeah.\n\n**Jason Flax**: ObjectKeyIdentifiable is something that we've also introduced recently. This one's a little more complex and it ties into combine. 
Combine is something I haven't actually said by name yet but it's the subscription based framework that Apple introduced that hooks into SwiftUI that enables all of the cool automatic updates of views. When you're updating a view, it's because new data is published and then you trigger that update. And that all happens through combine. Your data models are being subscribed to by the view effectively. What ObjectKeyIdentifiable does is that it provides unique identifier for your Realm object.\n\n**Jason Flax**: The key distinction there is that it's not providing an identifier for the class, it's providing an identifier for the object. Realm objects are live. If I have this reminder in memory here and then it's also say in a list somewhere, in a result set somewhere and it's a different address in memory, it's still the same object under the hood, it's still the same persisted object and we need to make sure that combine knows that. Otherwise, when notifying the view, it'll notify the view in a whole bunch of different places when in reality, it's all the same change. If I change the title, I only want it to notify the view once. And that's what ObjectKeyIdentifiable does. It tells combine, it tells SwiftUI this is the persisted reminder.\n\n**Jason Flax**: Reminder list, this is what I was talking about before. It is the reminder list that houses the reminders. One funny thing here, you have to do RealmSwift.List if you're defining them in the same file that you're importing SwiftUI. We picked the name List for lists because it was not array and that was many, many years ago but of course SwiftUI has come up with their own class called Lists. So just a little tidbit there. If you store your models in different classes, which for the sake of a non-demo project you would probably do, it probably won't be an issue for you. But just a funny little thing there. One thing I also wanted to point out is the icon field. This ties back to system name. I think it's really neat that you can effectively store UI based things in Realm. That's always been possible but it's just a lot neater now with all of the built in things that they supply you with.\n\n**Jason Flax**: Let's bring it way back to the content view. This again is the top level view that we have and let's go into the results list, ReminderListResultsView and see what's happening there. So as you can see, we're passing in again that $searchFilter which will filter this list which I'm circling. This is the ReminderListResultsView, it's a bit more code. There's a little bit more going on here but still 20 lines to be able to, sorry, add lists, delete lists, filter lists, all that kind of thing. So list and for each, as I was just referencing, have been introduced by SwiftUI. There seems to be a bit of confusion about the difference between the two especially with the last release of SwiftUI. At the beginning it was basically, right, lists are static data. They are data that is not going to change and ForEach is mutable data. That's still the general rule to go by. The awkward bit is that ForEach still needs to be nested in lists and lists can actually take mutable data now. I'd say in the June release, they'll probably clean this up a bit. But in the meantime, this is just kind of one of those, again, semi non-intuitive things that you have do with SwiftUI.\n\n**Jason Flax**: That said, if you think about the flip side and the nice part of this to spin it more positively, this is a table view. 
And I'm sure anybody that's worked with iOS remembers how large and cumbersome table views are. This is it. This is the whole table view. Yeah, it's a little non-intuitive but it's a few lines of code. I love it. It's a lot less thinking and mental real estate for table views to be taking up.\n\n**Jason Flax**: NavigationLink is also a bunch of magic sauce that ties back to the navigation view. This NavigationLink is going to just pass us to the DetailView when tapped which I'll display in a later slide. I'm going to tap on one of these reminder lists and it's going to take me to the detailed view that actually shows each reminder. Yeah.\n\n**Jason Flax**: Tying this all back with the binding, as you can see in the animation as we showed before, I type, it changes the view automatically and filters out the non-matching results. In this case, NSPredicate is the predicate that you have to create to actually filter the view. The search filter here is the binding that we passed in. That is automatically updated from the search view in the previous slides. When that changes, whenever this search filter is edited, it's going to force update the view. So basically, this is going to get called again. This filter is going to contain the necessary information to filter out the unnecessary data. And it's all going to tie into this StateRealmObject thing here.\n\n**Jason Flax**: What is StateRealmObject? This is our own homegrown property wrapper meant and designed in a way to mimic the way that state objects work in SwiftUI. It does all the same stuff. It has heap based storage that stores a reference to the underlying property. In this case, this is a fancy little bit of syntactic sugar to be able to store results on a view. Results is a Realm type that contains, depending on the query that you provide, either entire view of the table or object type or class type that you're looking up. Or that table then queried on to provide you with what you want which is what's happened here with the filter NSPredicate. This is going to tie into the onDelete method. Realm objects function in a really specific way that I'll get to later. But because everything is live always with Realm, we need to store State slightly differently than the way that the @State and @ObservedObject property wrappers naturally do.\n\n**Jason Flax**: OnDelete is, again, something really cool. It eliminates so much code as you can see from the animation here. Really simple, just swipe left, you hit delete or swipe right depending on the language and it just deletes it. Simple as. The strange non-intuitive thing that I'll talk about first is the fact that that view, that swipe left ability is enabled simply by adding this onDelete method to the view hierarchy. That's a lot of implicit behavior. I'm generally not keen on implicit behavior. In this case, again, enabling something really cool in a small amount of code that is just simply institutional knowledge that has to be learned.\n\n**Jason Flax**: When you delete it, and this ties into the StateRealmObject and the $ remove here, with, I suppose it'll be the next update, of Realm Swift, the release of StateRealmObject, we are projecting a binding similar to the way that State does. We've added methods to that binding to allow you to really simply remove, append and move objects within a list or results depending on your use case. What this is doing is wrapping things in a write transaction. 
So for those that are unfamiliar with Realm, whenever you modify a managed type or a managed object within Realm, that has to be done within a write transaction. It's not a lot of code but considering SwiftUI's very declarative structure, it would be a bit frustrating to have to do that all over your views. Always wrapping these bound values in a write transaction. So we provided a really simple way to do that automagically under the hood, $ property name.remove, .append, .whatever. That's going to properly remove it.\n\n**Jason Flax**: In this case because it's results, it's going to just remove the object from the table. It's going to notify the view, this results set, that, right, I've had something removed, you need to update. It's going to refresh the view. And as you can see, it will always show the live state of your database of the Realm which is a pretty neat thing to sort of unlock here is the two-way data binding.\n\n**Jason Flax**: ReminderListRowView, small shout out, it's just the actual rows here. But digging into it, we're going to be passing from the ForEach each one of these reminder lists. Lists is kind of an unfortunate name because lists is also a concept in Realm. It's a group of reminders, the list of reminders. We're going to pass that into the row view. That is going to hydrate this ObservedRealmObject which is the other property wrapper I mentioned. Similarly to ObservedObject which anyone that's worked with SwiftUI so far has probably encountered ObservedObject, this is the Realm version. This does some special things which I'm happy to talk about in the Q&A later. But basically what this is doing is again binding this to the view. You can use the $ operator. In this case, the name of the reminder list is passed into a TextField. When you edit that, it's going to automatically persist to the realm and update the view.\n\n**Jason Flax**: In my head I'm currently referring to this as a bound property because of the fact that it's a binding. Binding is a SwiftUI concept that we're kind of adopting with Realm which had made things work easy peasy. So stepping back and going into the actual ReminderListView which is the detailed view that the navigation link is going to send us to, destination is a very accurate parameter name here. Let's dig in. Bit more code here. This is your classic detailed view, right? There's a back button that sends you back to the reminders view. There's a couple other buttons here in the navigation view. Not super happy that edit's next to add but it was the best to do right now. This is going to be the title of the view, work items. These are things I have to do for work, right? Had to put together this SwiftUI talk, had to put together property wrappers. And all my chores were done as you saw on the other view so here we are. Let's take a look.\n\n**Jason Flax**: Really simple to move and delete things, right? So you hit the edit button, you move them around. Very, very similar to OnDelete. SwiftUI recognizes that this onMove function has been appended to the view hierarchy and it's going to add this ability which you can see over here on the right when the animation plays, these little hamburger bars. It enables those when you add that to the view. It's something throws people off a lot. There's a load of of stack overflow questions like how do I move things, how do I move things? I've put the edit button on, et cetera. You just add the onMove function. 
And again, tying back to the ObservedRealmObject that we spoke about in the ReminderListRowView, we added these operators for you from the $ operator, move and remove to be able to remove and delete objects without having to wrap things in a write transaction.\n\n**Jason Flax**: And just one last shout out to those sort of bound methods here. In this case, we hit add, there's a few bits of code here that I can dig into later or we can send around the deck or whatever if people are interested in what's going on because there is sort of a custom text field here that when I edit it's focused on. SwiftUI does not currently offer the ability to manually focus views so I had to do some weird stuff with UIViewRepresentable there. The $ append is doing exactly what I said. It's adding a brand new reminder to the reminder list without having to write things in a realm, wrap things in a write transaction, sorry.\n\n**Jason Flax**: What I've kind of shown there is how two-way data binding functions with SwiftUI, right? Think about what we did. We added to a persisted list, we removed from a persisted list. We moved objects around a persisted list, we changed fields in persisted objects. But didn't actually have to do anything else. We didn't have to have a V-model, we didn't have to abstract anything else out. I know, of course, this is a very, very simple application. It's a few hundred lines of code, not even, probably like 200. Of course as your application scales, if you have a chat app, which our developer relations team is currently working on using Realm in SwiftUI, you have to manage a whole bunch more State. You're probably going to have a\nlarge ObservedObject app state class that most SwiftUI apps still need to have. But otherwise, it doesn't seem to make much sense to create these middle layers when Realm offers the ability to just keep everything live and fresh at all times.\n\n**Jason Flax**: I'm going to kind of take a stance on architectures here and say that most of them kind of go away. I'd say many people here would be familiar with MVC, right? There's a view, there's a controller, there's a model. But even just briefly talking a look at these arrows and comparing them to the example that I just went through, these don't really apply anymore. You don't need the controller to send updates to the model because the models being updated by the view, right? And the views not even really updating the model. The models just being updated. And it certainty doesn't need to receive State from the view because it's stateful itself. It has it's state, it is live. It is ready to go. So this whole thing, there is no controller anymore. It eliminates the C in MVC. In which case I say dump it. There's no need for it anymore. I'm over it, I don't want to hear it MVC again. I'm also just kidding. Of course there still will be uses for it but I don't think it has much for us.\n\n**Jason Flax**: MVVM, same thing. It's a bit odd when you have Realm live objects to want to notify the models of updates when nine times out of 10 you probably want the model to just immediately receive those updates. There are still a couple of use cases where say you have a form, you want something not persisted to the Realm. Maybe you want to save some kind of draft state, maybe you don't want everything persisted immediately. Or maybe your really complex objects that you don't want to automatically use these bound write transactions on because that can be pretty expensive, right? 
There are still use cases where you want to abstract this out but I cannot see a reason why you should use V-models as a hard and fast rule for your code base. In which case I would say, throw it away, done with it. Again MVVM, not really very useful anymore.\n\n**Jason Flax**: Viper is another very popular one. I think this one's actually a bit newer because not really anybody was talking about it back when I was doing iOS around a few years ago for actual UI applications. It's view interactor presenter entity router, it certainly doesn't roll off the tongue nicely though I suppose Viper's meant to sound bad ass or something. But I actually think this one worked out pretty well for the most part for UI. It created these really clear-cut relationships and offered a bit of granularity that was certainly needed that maybe MVVM or MVC just couldn't supply to somebody building an app. But I don't really think it fits with UI. Again, you'll end up with a bunch of ravioli code, all these neat little parts.\n\n**Jason Flax**: The neat little parts of SwiftUI should be all the view components that your building and the models that you have. But creating things like routers and presenters doesn't really make sense when it's all just kind of baked in. And it's a concept that I've had to get used to, oh right, all this functionality is just baked in. It's just there, we just have to use it. So yeah, doesn't remotely sound correct anymore for our use case. I don't generally think you should use it with SwiftUI. You absolutely can but I know that we actually played with it in-house to see, right, does this makes sense? And you just end up with a ton of boilerplate.\n\n**Jason Flax**: So this is MVI, Model View Intent. If I'm not mistaken, this slide was actually presented and WWDC. This is what I'm proposing as the sort of main architecture to use for SwiftUI apps mainly because of how loosely it can be interpreted. So even here the model is actually state. So your data models aren't even mentioned on this graphic, they're kind of considered just part of the state of the application. Everything is state based and because the view and the state have this two-way relationship, everything is just driven by user action, right? The user action is what mutates the models which mutate the view which show the latest and greatest, right? So personally, I think this the way to go. It keeps things simple. Keeping it simple is kind of the mantra of SwiftUI. It's trying to abstract out two decades, three decades of UI code to make things easier for us. So why makes things more difficult, right?\n\n**Jason Flax**: Thank you very much everyone. Thanks for hearing me out, rambled about architectures. That's all. Just wanted to give a quick shout out to some presentations coming down the line. Nichola on the left there is going to give a presentation on Xamarin Guidance with Realm using the .Net SDK. And Andrew Morgan on the right is going to show us how to use Realm Sync in a real live chat application. The example that I've shown there is currently, unfortunately on a branch. It will eventually move to the main branch. But for now it's there while it's still in review. And yeah, thanks for your time everyone.\n\n**Ian Ward**: Great. Well Jason, thank you so much. That was very enlightening. I think we do have a couple questions here so I think we'll transition into Q&A. I'll do the questions off the chat first and then we can open it up for other questions if they come to you. 
First one there is the link to the code somewhere? So you just saw that. Is it in the example section of the Realm Cocoa repo or on your branch or what did you-\n\n**Jason Flax**: It is currently in a directory called SwiftUI TestToast. It will move to the examples repo and be available there. I will update the code after this user group.\n\n**Ian Ward**: Awesome. The next question here is around the documentation for all the SwiftUI constructs like State, Realm, Object and some of the property wrappers we have there. I don't know if you caught it yet but I guess this hasn't been released yet. This is the pre new release. You guys are getting the preview right now of the new hotness coming out. Is that right?\n\n**Jason Flax**: That is correct. Don't worry, there will be a ton of documentation when it is released. And that's not just this thing does this thing, it will also be best practices with it. There's some implicit reasons why you might want to use StateRealmObject verses ObservedRealmObject. But it's all Opensource, it's all available. You'll be able to look at it. And of course we're always available on GitHub to chat about it if the documentation isn't clear.\n\n**Ian Ward**: Yeah, and then maybe you could talk a little bit about some of the work that the Cocoa team has done to kind of expose this stuff. You mentioned property wrappers, a lot of that has to do with not having to explicitly call Realm.write in the view. But also didn't we do stuff for sync specific objects? We had the user and the app state and you made that as part of ObservableObject, is that right?\n\n**Jason Flax**: Correct, yeah. I didn't have time to get to sync here unfortunately. But yes, if you are using MongoDB Realm, which contains the sync component of Realm, we have enabled it so that the app class and the user class will also automatically update the view state similar to what I presented earlier.\n\n**Ian Ward**: Awesome. And then, I think this came up during some of your architecture discussions I believe around MVC. Question is from Simon, what if you have a lot of writes. What if you have a ton of writes? I guess the implication here is that you can lag the UI, right, if you're writing a lot. So is there any best practices around that? Should we be dispatching to the background? How do you think about that?\n\n**Jason Flax**: Yeah, I would. That would be the first thing if you are doing a ton of writes, move them off to a background queue. The way that I presented to use Realm and SwiftUI is the lowest common denominator, simplest way bog standard way for really simple things, right? If you do have a ton of writes, you're not locked into any of this functionality. All of the old Realm API is still there, it's not old, it's the current\nAPI, right? As opposed to doing $ list.append or whatever, if you have 1,000 populated objects ready to hop in that list, all of those SwiftUI closures that I was kind of supplying a method to, you can just do the Realm.write in there. You can do it as you would normally do it. And as your app grows in complexity, you'll have to end up doing that. As far as the way that you want to organize your application around that, one thing to keep in mind here, SwiftUI is really new. I don't know how many people are using it in production yet. Best practices with some of this stuff is going to come in time as more people use it, as more ideas come about. 
So for now, yeah, I would do things the old way when it comes to things like extensive writes.\n\n**Ian Ward**: Yeah, that's fair. Simon, sorry I think you had a followup question here. Do you want to just unmute yourself and maybe discuss a little bit about what you're talking about with the write transaction? I can ask to unmute, how do I do that?\n\n**Jason Flax**: It seems the question is about lag, it's about the cutoffs, I don't need a real time sync. Okay, yeah, I can just answer the question then, that's no bother.\n\n**Ian Ward**: I think he's referring to permitting a transaction for a character stroke. I don't know if we would really look to, that would probably not be our best practice or how would you think about that for each character.\n\n**Jason Flax**: It would depend. Write transactions aren't expensive for something simple like string on a view. Now it seems like if, local usage, okay. If you're syncing that up to the server, yes, I would not recommend committing a write transaction on each keystroke but it isn't that expensive to do. If you do want to batch those, again, that is available for you to do. You can still mess with our API and play around to the point where, right, maybe you only want to batch certain ones together.\n\n**Jason Flax**: What I would do in that case if you are genuinely worried about performance, I would not use a string associated with your property. I would pass in a plain old string and observe that string. And whenever you want to actually commit that string, depending on let's say you want every fifth keystroke, I wouldn't personally use that because there's not really a rhyme or reason for that. But if you wanted that, then you monitor it. You wait for the fifth one and then you write it to the Realm. Again, you don't have to follow the rules of writing on every keystroke but it is available to people that want it.\n\n**Ian Ward**: Got it. Yeah, that's important to note here. Some of the questions are when are we going to get the release? I think \\crosstalk 00:42:56\\] chomping on the bit here. And then what version are we thinking this will be released?\n\n**Jason Flax**: I don't think it would be a major bump as this isn't going to break the existing API. So it'll probably be 10.6. I still have to consult with the team on that. But my guess would be 10.6 based on the current versioning from... As far as when. I will not vaguely say soon, as much as I want to. But my guess would be considering that this is already in review, it'll be in the next week or two. So hold on tight, it's almost there.\n\n**Ian Ward**: And then I think there's a question here around freezing. And I guess we haven't released a thaw API but all of that is getting, the freezing and thawing is getting wrapped in these property wrappers. Is that what we're doing, right?\n\n**Jason Flax**: Correct, yeah. Basically because SwiftUI stores so much State, you actually need to freeze Realm objects before you pass them into the views? Why is that? If you have a list of things, SwiftUI keeps a State of that list so that it can diff it against changes. The problem is RealmObjects and RealmLists, they're all live. SwiftUI actually cannot diff the changes in a list because it's just going to see it as the same exact list. It also presented itself in a weird way where if you deleted something from the list, because it could cache the old version of it, it would crash the app because it was trying to render an index of the list that no longer exists. 
So what we're doing under the hood, because previously you had to freeze your list, you had to thaw the objects that come out of the list and then you could finally operate on them, introduced a whole bunch of complexity that we've now abstracted out with these property wrappers.\n\n**Ian Ward**: And we have some questions around our build system integration, Swift Package Manager, CocoaPods, Carthage. Maybe you want to talk a little bit about some of the work that we've done over the last few months. I know it was kind of a bear getting into SPM but I feel like we should have full Swift Package Managed Support. Is that right?\n\n**Jason Flax**: We do, yeah. Full SPM support. So the reason that that's changed for us is because previously our sync client was closed source. It's been open sourced. I probably should not look at the chat at the same time as talking. Sorry. It's become open source now. Everything is all available to be viewed, as open source projects are. That change enabled us to be able to use SPM properly. So basically under the hood SPM is downloading the core dependency and then supplying our source files and users can just use it really simply. Thanks for the comments about the hair.\n\n**Jason Flax**: The nice thing is, so we're promoting SPM as the main way we want people to consume Realm. I know that that's much easier said than done because so many applications are still reliant on CocoaPods and Carthage. Obviously we're going to continue to support them for as long as they're being used. It's not even a question of whether or not we drop support but I would definitely recommend that if you are having trouble for some reason with CocoaPods or Carthage, to start moving over to SPM because it's just so much simpler. It's so much easier to manage dependencies with and doesn't come with the weird cost of XE work spaces and stale dependencies and CocoaPod downloads which can take a while, so yeah.\n\n**Ian Ward**: I think unfortunately, part of it was that we were kind of hamstrung a little bit by the CocoaPods team, right? They had to add a particular source code for us and then people would open issues on our GitHub and we'd have to send them back. It's good that now we have a blessed installable version of Swift Package Manger so I think hopefully will direct people towards that. Of course, we'd love to continue to support CocoaPods but sometimes we get hamstrung by what that team supports. So next question here is regarding the dependencies. So personally, I like keeping my dependencies in check. I usually keep Realm in a separate target to make my app not aware of what persistence I use. So this is kind of about abstracting away. What you described in the presentation it seems like you suggest to integrate Realm deeply in the UI part of the app. I was thinking more about using publishers with Realm models, erase the protocol types instead of the integrating Realm objects with the RealmStateObject inside of my UI. Do you have any thoughts on that Jason?\n\n**Jason Flax**: I was thinking about using publishers with Realm models, erase the protocol types, interesting. I'm not entirely sure what you mean Andre about erasing them to protocol types and then using the base object type and just listening to changes for those. Because it sounds like if that's what you're doing, when RealmStateObject, ObservedRealmObject are release, it seems like it would obviate the need for that. But I could also be misunderstanding what you're trying to do here. 
Yeah, I don't know if you have a mic on or if you want to followup but it does seem like the feature being released here would obviate the need for that as all of the things that would need to listen to are going to be updating the view. I suppose there could be a case where if you want to ignore certain properties, if there are updates to them, then maybe you'd want some customization around that. And maybe there's something that we can release feature-wise there to support that but that's the only reason I could think why you'd want to abstract out the listening part of the publishers.\n\n**Ian Ward**: Okay, great. Any other questions? It looks like a couple questions have been answered via the chat so thank you very much. Any other questions? Anyone else have anything? If not, we can conclude. Okay, great. Well, thank you so much Jason. This has been great. If you have any additional questions, please come to our forums, forums.realm.io, you can ask them there. Myself and Jason and the Cocoa team are on there answering questions so please reach out to us. You can reach out on our Twitter @Realm and yeah, of course on our GitHub as well Realm-cocoa. Thank you so much and have a great rest of your week.\n\n**Jason Flax**: Thanks everyone. Thanks for tuning in.\n\nThroughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our [Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.\n\nTo learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.", "format": "md", "metadata": {"tags": ["Realm", "Swift"], "pageDescription": "Missed the first of our new Realm Meetups on SwiftUI Best Practices with Realm? Don't worry, you can catch up here.", "contentType": "Article"}, "title": "SwiftUI Best Practices with Realm", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/serverless-development-lambda-atlas", "action": "created", "body": "# Write A Serverless Function with AWS Lambda and MongoDB\n\nThe way we write code, deploy applications, and manage scale is constantly changing and evolving to meet the growing demands of our stakeholders. In the past, companies commonly deployed and maintained their own infrastructure. In recent times, everyone is moving to the cloud. The cloud is pretty nebulous (heh) though and means different\nthings to different people. Maybe one day in the future, developers will be able to just write code and not worry about how or where it's deployed and managed.\n\nThat future is here and it's called **serverless computing**. Serverless computing allows developers to focus on writing code, not managing servers. Serverless functions further allow developers to break up their application into individual pieces of functionality that can be independently developed, deployed, and scaled. This modern practice of software development allows teams to build faster, reduce costs, and limit downtime.\n\nIn this blog post, we'll get a taste for how serverless computing can allow us to quickly develop and deploy applications. 
We'll use AWS Lambda as our serverless platform and MongoDB Atlas as our database provider.\n\nLet's get to it.\n\nTo follow along with this tutorial, you'll need the following:\n\n- MongoDB Atlas Account (Sign up for Free)\n- AWS Account\n- Node.js 12\n\n>MongoDB Atlas can be used for FREE with a M0 sized cluster. Deploy MongoDB in minutes within the MongoDB Cloud. Learn more about the Atlas Free Tier cluster here.\n\n## My First AWS Lambda Serverless Function\n\nAWS Lambda is Amazon's serverless computing platform and is one of the leaders in the space. To get started with AWS Lambda, you'll need an Amazon Web Services account, which you can sign up for free if you don't already have one.\n\nOnce you are signed up and logged into the AWS Management Console, to find the AWS Lambda service, navigate to the **Services** top-level menu and in the search field type \"Lambda\", then select \"Lambda\" from the dropdown menu.\n\nYou will be taken to the AWS Lambda dashboard. If you have a brand new account, you won't have any functions and your dashboard should look something like this:\n\nWe are ready to create our first serverless function with AWS Lambda. Let's click on the orange **Create function** button to get started.\n\nThere are many different options to choose from when creating a new serverless function with AWS Lambda. We can choose to start from scratch or use a blueprint, which will have sample code already implemented for us. We can choose what programming language we want our serverless function to be written in. There are permissions to consider. All this can get overwhelming quickly, so let's keep it simple.\n\nWe'll keep all the defaults as they are, and we'll name our function **myFirstFunction**. Your selections should look like this:\n\n- Function Type: **Author from scratch**\n- Function Name: **myFirstFunction**\n- Runtime: **Node.js 12.x**\n- Permissions: **Create a new role with basic Lambda permissions**.\n\nWith these settings configured, hit the orange **Create function** button to create your first AWS Lambda serverless function. This process will take a couple of seconds, but once your function is created you will be greeted with a new screen that looks like this:\n\nLet's test out our function to make sure that it runs. If we scroll down to the **Function code** section and take a look at the current code it should look like this:\n\n``` javascript\nexports.handler = async (event) => {\n // TODO implement\n const response = {\n statusCode: 200,\n body: JSON.stringify('Hello from Lambda!'),\n };\n return response;\n};\n```\n\nLet's hit the **Test** button to execute the code and make sure it runs. Hitting the **Test** button the first time will ask us to configure a test event. We can keep all the defaults here, but we will need to name our event. Let's name it **RunFunction** and then hit the **Create** button to create the test event. Now click the **Test** button again and the code editor will display the function's execution results.\n\nWe got a successful response with a message saying **\"Hello from Lambda!\"** Let's make an edit to our function. Let's change the message to \"My First Serverless Function!!!\". Once you've made this edit, hit the **Save** button and the serverless function will be re-deployed. The next time you hit the **Test** button you'll get the updated message.\n\nThis is pretty great. We are writing Node.js code in the cloud and having it update as soon as we hit the save button. 
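\n\nIf it helps to see the change spelled out, the edited handler is only a one-line difference from the template above (a quick sketch, with the new message swapped in):\n\n``` javascript\nexports.handler = async (event) => {\n  // Same template handler as before, with only the response message changed.\n  const response = {\n    statusCode: 200,\n    body: JSON.stringify('My First Serverless Function!!!'),\n  };\n  return response;\n};\n```\n\n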
Although our function doesn't do a whole lot right now, our AWS Lambda function is not exposed to the Internet. This means that the functionality we have created cannot be consumed by anyone. Let's fix that next.\n\nWe'll use AWS API Gateway to expose our AWS Lambda function to the Internet. To do this, scroll up to the top of the page and hit the **Add Trigger** button in the **Designer** section of the page.\n\nIn the trigger configuration dropdown menu we'll select **API Gateway** (It'll likely be the first option). From here, we'll select **Create an API** and for the type, choose **HTTP API**. To learn about the differences between HTTP APIs and REST APIs, check out this AWS docs page. For security, we'll select **Open** as securing the API endpoint is out of the scope of this article. We can leave all other options alone and just hit the **Add** button to create our API Gateway.\n\nWithin a couple of seconds, we should see our Designer panel updated to include the API Gateway we created. Clicking on the API Gateway and opening up details will give us additional information including the URL where we can now call our serverless function from our browser.\n\nIn my case, the URL is\n. Navigating to this URL displays the response you'd expect:\n\n**Note:** If you click the above live URL, you'll likely get a different result, as it'll reflect a change made later in this tutorial.\n\nWe're making great progress. We've created, deployed, and exposed a AWS Lambda serverless function to the Internet. Our function doesn't do much though. Let's work on that next. Let's add some real functionality to our serverless function.\n\nUnfortunately, the online editor at present time does not allow you to manage dependencies or run scripts, so we'll have to shift our development to our local machine. To keep things concise, we'll do our development from now on locally. Once we're happy with the code, we'll zip it up and upload it to AWS Lambda.\n\nThis is just one way of deploying our code and while not necessarily the most practical for a real world use case, it'll make our tutorial easier to follow as we won't have to manage the extra steps of setting up the AWS CLI or deploying our code to GitHub and using GitHub Actions to deploy our AWS Lambda functions. These options are things you should explore when deciding to build actual applications with serverless frameworks as they'll make it much easier to scale your apps in the long run.\n\nTo set up our local environment let's create a new folder that we'll use to store our code. Create a folder and call it `myFirstFunction`. In this folder create two files: `index.js` and `package.json`. For the `package.json` file, for now let's just add the following:\n\n``` javascript\n{\n \"name\": \"myFirstFunction\",\n \"version\": \"1.0.0\",\n \"dependencies\": {\n \"faker\" : \"latest\"\n }\n}\n```\n\nThe `package.json` file is going to allow us to list dependencies for our applications. This is something that we cannot do at the moment in the online editor. The Node.js ecosystem has a plethora of packages that will allow us to easily bring all sorts of functionality to our apps. The current package we defined is called `faker` and is going to allow us to generate fake data. You can learn more about faker on the project's GitHub Page. To install the faker dependency in your `myFirstFunction` folder, run `npm install`. 
This will download the faker dependency and store it in a `node_modules` folder.\n\nWe're going to make our AWS Lambda serverless function serve a list of movies. However, since we don't have access to real movie data, this is where faker comes in. We'll use faker to generate data for our function. Open up your `index.js` file and add the following code:\n\n``` javascript\nconst faker = require(\"faker\");\n\nexports.handler = async (event) => {\n // TODO implement\n const movie = {\n title: faker.lorem.words(),\n plot: faker.lorem.paragraph(),\n director: `${faker.name.firstName()} ${faker.name.lastName()}`,\n image: faker.image.abstract(),\n };\n const response = {\n statusCode: 200,\n body: JSON.stringify(movie),\n };\n return response;\n};\n```\n\nWith our implementation complete, we're ready to upload this new code to our AWS Lambda serverless function. To do this, we'll first need to zip up the contents within the `myFirstFunction` folder. The way you do this will depend on the operating system you are running. For Mac, you can simply highlight all the items in the `myFirstFunction` folder, right click and select **Compress** from the menu. On Windows, you'll highlight the contents, right click and select **Send to**, and then select **Compressed Folder** to generate a single .zip file. On Linux, you can open a shell in `myFirstFunction` folder and run `zip aws.zip *`.\n\n**NOTE: It's very important that you zip up the contents of the folder, not the folder itself. Otherwise, you'll get an error when you upload the file.**\n\nOnce we have our folder zipped up, it's time to upload it. Navigate to the **Function code** section of your AWS Lambda serverless function and this time, rather than make code changes directly in the editor, click on the **Actions** button in the top right section and select **Upload a .zip file**.\n\nSelect the compressed file you created and upload it. This may take a few seconds. Once your function is uploaded, you'll likely see a message that says *The deployment package of your Lambda function \"myFirstFunction\" is too large to enable inline code editing. However, you can still invoke your function.* This is ok. The faker package is large, and we won't be using it for much longer.\n\nLet's test it. We'll test it in within the AWS Lambda dashboard by hitting the **Test** button at the top.\n\nWe are getting a successful response! The text is a bunch of lorem ipsum but that's what we programmed the function to generate. Every time you hit the test button, you'll get a different set of data.\n\n## Getting Up and Running with MongoDB Atlas\n\nGenerating fake data is fine, but let's step our game up and serve real movie data. For this, we'll need access to a database that has real data we can use. MongoDB Atlas has multiple free datasets that we can utilize and one of them just happens to be a movie dataset.\n\nLet's start by setting up our MongoDB Atlas account. If you don't already have one, sign up for one here.\n\n>MongoDB Atlas can be used for FREE with a M0 sized cluster. Deploy MongoDB in minutes within the MongoDB Cloud.\n\nWhen you are signed up and logged into the MongoDB Atlas dashboard, the\nfirst thing we'll do is set up a new cluster. Click the **Build a Cluster** button to get started.\n\nFrom here, select the **Shared Clusters** option, which will have the free tier we want to use.\n\nFinally, for the last selection, you can leave all the defaults as is and just hit the green **Create Cluster** button at the bottom. 
Depending on your location, you may want to choose a different region, but I'll leave everything as is for the tutorial. The cluster build out will take about a minute to deploy.\n\nWhile we wait for the cluster to be deployed, let's navigate to the **Database Access** tab in the menu and create a new database user. We'll need a database user to be able to connect to our MongoDB database. In the **Database Access** page, click on the **Add New Database User** button and give your user a unique username and password. Be sure to write these down as you'll need them soon enough. Ensure that this database user can read and write to any database by checking the **Database User Privileges** dropdown. It should be selected by default, but if it's not, ensure that it's set to **Read and write to any database**.\n\nNext, we'll also want to configure network access by navigating to the **Network Access** tab in the dashboard. For the sake of this tutorial, we'll enable access to our database from any IP as long as the connection has the correct username and password. In a real world scenario, you'll want to limit database access to specific IPs that your\napplication lives on, but configuring that is out of scope for this tutorial.\n\nClick on the green **Add IP Address** button, then in the modal that pops up click on **Allow Access From Anywhere**. Click the green **Confirm** button to save the change.\n\nBy now our cluster should be deployed. Let's hit the **Clusters** selection in the menu and we should see our new cluster created and ready to go. It will look like this:\n\nOne final thing we'll need to do is add our sample datasets. To do this, click on the **...** button in your cluster and select the **Load Sample Dataset** option. Confirm in the modal that you want to load the data and the sample dataset will be loaded.\n\nAfter the sample dataset is loaded, let's click the **Collections** button in our cluster to see the data. Once the **Collections** tab is loaded, from the databases section, select the **sample_mflix** database, and the **movies** collection within it. You'll see the collection information at the top and the first twenty movies displayed on the right. We have our dataset!\n\nNext, let's connect our MongoDB databases that's deployed on MongoDB Atlas to our Serverless AWS Lambda function.\n\n## Connecting MongoDB Atlas to AWS Lambda\n\nWe have our database deployed and ready to go. All that's left to do is connect the two. On our local machine, let's open up the `package.json` file and add `mongodb` as a dependency. We'll remove `faker` as we'll no longer use it for our movies.\n\n``` javascript\n{\n \"name\": \"myFirstFunction\",\n \"version\": \"1.0.0\",\n \"dependencies\": {\n \"mongodb\": \"latest\"\n }\n}\n```\n\nThen, let's run `npm install` to install the MongoDB Node.js Driver in our `node_modules` folder.\n\nNext, let's open up `index.js` and update our AWS Lambda serverless function. Our code will look like this:\n\n``` javascript\n// Import the MongoDB driver\nconst MongoClient = require(\"mongodb\").MongoClient;\n\n// Define our connection string. Info on where to get this will be described below. 
In a real world application you'd want to get this string from a key vault like AWS Key Management, but for brevity, we'll hardcode it in our serverless function here.\nconst MONGODB_URI =\n \"mongodb+srv://:@cluster0.cvaeo.mongodb.net/test?retryWrites=true&w=majority\";\n\n// Once we connect to the database once, we'll store that connection and reuse it so that we don't have to connect to the database on every request.\nlet cachedDb = null;\n\nasync function connectToDatabase() {\n if (cachedDb) {\n return cachedDb;\n }\n\n // Connect to our MongoDB database hosted on MongoDB Atlas\n const client = await MongoClient.connect(MONGODB_URI);\n\n // Specify which database we want to use\n const db = await client.db(\"sample_mflix\");\n\n cachedDb = db;\n return db;\n}\n\nexports.handler = async (event, context) => {\n\n /* By default, the callback waits until the runtime event loop is empty before freezing the process and returning the results to the caller. Setting this property to false requests that AWS Lambda freeze the process soon after the callback is invoked, even if there are events in the event loop. AWS Lambda will freeze the process, any state data, and the events in the event loop. Any remaining events in the event loop are processed when the Lambda function is next invoked, if AWS Lambda chooses to use the frozen process. */\n context.callbackWaitsForEmptyEventLoop = false;\n\n // Get an instance of our database\n const db = await connectToDatabase();\n\n // Make a MongoDB MQL Query to go into the movies collection and return the first 20 movies.\n const movies = await db.collection(\"movies\").find({}).limit(20).toArray();\n\n const response = {\n statusCode: 200,\n body: JSON.stringify(movies),\n };\n\n return response;\n};\n```\n\nThe `MONGODB_URI` is your MongoDB Atlas connection string. To get this value, head over to your MongoDB Atlas dashboard. On the Clusters overview page, click on the **Connect** button.\n\nFrom here, select the **Connect your application** option and you'll be taken to a screen that has your connection string. **Note:** Your username will be pre-populated, but you'll have to update the **password** and **dbname** values.\n\nOnce you've made the above updates to your `index.js` file, save it, and zip up the contents of your `myFirstFunction` folder again. We'll redeploy this code, by going back to our AWS Lambda function and uploading the new zip file. Once it's uploaded, let's test it by hitting the **Test** button at the top right of the page.\n\nIt works! We get a list of twenty movies from our `sample_mflix` MongoDB database that is deployed on MongoDB Atlas.\n\nWe can also call our function directly by going to the API Gateway URL from earlier and seeing the results in the browser as well. Navigate to the API Gateway URL you were provided and you should see the same set of results. If you need a refresher on where to find it, navigate to the **Designer** section of your AWS Lambda function, click on **API Gateway**, click the **Details** button to expand all the information, and you'll see an **API Endpoint** URL which is where you can publicly access this serverless function.\n\nThe query that we have written returns a list of twenty movies from our `sample_mflix.movies` collection. You can modify this query to return different types of data easily. Since this file is much smaller, we're able to directly modify it within the browser using the AWS Lambda online code editor. 
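\n\nAs a quick illustration of that flexibility (this is just an aside, not the change we are about to make next), narrowing the results to movies released since the year 2000 only requires a filter argument, since the sample dataset includes a `year` field:\n\n``` javascript\n// Illustrative variation only: filter on the year field before limiting the results.\nconst movies = await db.collection(\"movies\").find({ year: { $gte: 2000 } }).limit(20).toArray();\n```\n\n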
Let's change our query around so that we get a list of twenty of the highest rated movies and instead of getting back all the data on each movie, we'll just get back the movie title, plot, rating, and cast. Replace the existing query which looks like:\n\n``` javascript\nconst movies = await db.collection(\"movies\").find({}).limit(20).toArray();\n```\n\nTo:\n\n``` javascript\nconst movies = await db.collection(\"movies\").find({},{projection: {title: 1, plot: 1, metacritic: 1, cast:1}}).sort({metacritic: -1}).limit(20).toArray()\n```\n\nOur results will look slightly different now. The first result we get now is **The Wizard of Oz** which has a Metacritic rating of 100.\n\n## One More Thing...\n\nWe created our first AWS Lambda serverless function and we made quite a few modifications to it. With each iteration we changed the functionality of what the function is meant to do, but generally we settled on this function retrieving data from our MongoDB database.\n\nTo close out this article, let's quickly create another serverless function, this one to add data to our movies collection. Since we've already become pros in the earlier section, this should go much faster.\n\n### Creating a Second AWS Lambda Function\n\nWe'll start by navigating to our AWS Lambda functions homepage. Once here, we'll see our existing function accounted for. Let's hit the orange **Create function** button to create a second AWS Lambda serverless function.\n\nWe'll leave all the defaults as is, but this time we'll give the function name a more descriptive name. We'll call it **AddMovie**.\n\nOnce this function is created, to speed things up, we'll actually upload the .zip file from our first function. So hit the **Actions** menu in the **Function Code** section, select **Upload Zip File** and choose the file in your **myFirstFunction** folder.\n\nTo make sure everything is working ok, let's create a test event and run it. We should get a list of twenty movies. If you get an error, make sure you have the correct username and password in your `MONGODB_URI` connection string. You may notice that the results here will not have **The Wizard of Oz** as the first item. That is to be expected as we made those edits within our `myFirstFunction` online editor. So far, so good.\n\nNext, we'll want to capture what data to insert into our MongoDB database. To do this, let's edit our test case. Instead of the default values provided, which we do not use, let's instead create a JSON object that can represent a movie.\n\nNow, let's update our serverless function to use this data and store it in our MongoDB Atlas database in the `movies` collection of the `sample_mflix` database. 
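\n\nIf you want a concrete starting point for that test event, something along these lines works; the values below are made up, but note that the verification step later in this article assumes a `metacritic` value of 101:\n\n``` json\n{\n  \"title\": \"Avengers: An Illustrative Test Movie\",\n  \"plot\": \"A placeholder plot used only to exercise the AddMovie function.\",\n  \"metacritic\": 101,\n  \"cast\": [\"Test Actor One\", \"Test Actor Two\"],\n  \"year\": 2021\n}\n```\n\n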
We are going to change our MongoDB `find()` query:\n\n``` javascript\nconst movies = await db.collection(\"movies\").find({}).limit(20).toArray();\n```\n\nTo an `insertOne()`:\n\n``` javascript\nconst result = await db.collection(\"movies\").insertOne(event);\n```\n\nThe complete code implementation is as follows:\n\n``` javascript\nconst MongoClient = require(\"mongodb\").MongoClient;\nconst MONGODB_URI =\n \"mongodb+srv://:@cluster0.cvaeo.mongodb.net/test?retryWrites=true&w=majority\";\n\nlet cachedDb = null;\n\nasync function connectToDatabase() {\n\n if (cachedDb) {\n return cachedDb;\n }\n\n const client = await MongoClient.connect(MONGODB_URI);\n const db = await client.db('sample_mflix');\n cachedDb = db;\n return db\n}\n\nexports.handler = async (event, context) => {\n context.callbackWaitsForEmptyEventLoop = false;\n\n const db = await connectToDatabase();\n\n // Insert the event object, which is the test data we pass in\n const result = await db.collection(\"movies\").insertOne(event);\n const response = {\n statusCode: 200,\n body: JSON.stringify(result),\n };\n\n return response;\n};\n```\n\nTo verify that this works, let's test our function. Hitting the test button, we'll get a response that looks like the following image:\n\nThis tells us that the insert was successful. In a real world application, you probably wouldn't want to send this message to the user, but for our illustrative purposes here, it's ok. We can also confirm that the insert was successful by going into our original function and running it. Since in our test data, we set the metacritic rating to 101, this result should be the first one returned. Let's check.\n\nAnd we're good. Our Avengers movie that we added with our second serverless function is now returned as the first result because it has the highest metacritic rating.\n\n## Putting It All Together\n\nWe did it! We created our first, and second AWS Lambda serverless functions. We learned how to expose our AWS Lambda serverless functions to the world using AWS API Gateway, and finally we learned how to integrate MongoDB Atlas in our serverless functions. This is just scratching the surface. I made a few call outs throughout the article saying that the reason we're doing things a certain way is for brevity, but if you are building real world applications I want to leave you with a couple of resources and additional reading.\n\n- MongoDB Node.js Driver Documentation\n- MongoDB Best Practices Connecting from AWS Lambda\n- Setting Up Network Peering\n- Using AWS Lambda with the AWS CLI\n- MongoDB University\n\nIf you have any questions or feedback, join us on the MongoDB Community forums and let's keep the conversation going!\n\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "Learn how to write serverless functions with AWS Lambda and MongoDB", "contentType": "Tutorial"}, "title": "Write A Serverless Function with AWS Lambda and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/movie-score-prediction-bigquery-vertex-ai-atlas", "action": "created", "body": "# Movie Score Prediction with BigQuery, Vertex AI, and MongoDB Atlas\n\nHey there! It\u2019s been a minute since we last wrote about Google Cloud and MongoDB Atlas together. We had an idea for this new genre of experiment that involves BigQuery, BQML, Vertex AI, Cloud Functions, MongoDB Atlas, and Cloud Run and we thought of putting it together in this blog. 
You will get to learn how we brought these services together in delivering a full stack application and other independent functions and services the application uses. Have you read our last blog about Serverless MEAN stack applications with Cloud Run and MongoDB Atlas? If not, this would be a good time to take a look at that, because some topics we cover in this discussion are designed to reference some steps from that blog. In this experiment, we are going to bring BigQuery, Vertex AI, and MongoDB Atlas to predict a categorical variable using a Supervised Machine Learning Model created with AutoML.\n\n## The experiment\n\nWe all love movies, right? Well, most of us do. Irrespective of language, geography, or culture, we enjoy not only watching movies but also talking about the nuances and qualities that go into making a movie successful. I have often wondered, \u201cIf only I could alter a few aspects and create an impactful difference in the outcome in terms of the movie\u2019s rating or success factor.\u201d That would involve predicting the success score of the movie so I can play around with the variables, dialing values up and down to impact the result. That is exactly what we have done in this experiment.\n\n## Summary of architecture\n\nToday we'll predict a Movie Score using Vertex AI AutoML and have transactionally stored it in MongoDB Atlas. The model is trained with data stored in BigQuery and registered in Vertex AI. The list of services can be composed into three sections:\n\n**1. ML Model Creation\n2. User Interface / Client Application\n3. Trigger to predict using the ML API**\n\n### ML Model Creation\n\n1. Data sourced from CSV to BigQuery\n - MongoDB Atlas for storing transactional data and powering the client application\n - Angular client application interacting with MongoDB Atlas\n - Client container deployed in Cloud Run\n2. BigQuery data integrated into Vertex AI for AutoML model creation\n - MongoDB Atlas for storing transactional data and powering the client application\n - Angular client application interacting with MongoDB Atlas\n - Client container deployed in Cloud Run\n3. Model deployed in Vertex AI Model Registry for generating endpoint API\n - Java Cloud Functions to trigger invocation of the deployed AutoML model\u2019s endpoint that takes in movie details as request from the UI, returns the predicted movie SCORE, and writes the response back to MongoDB\n\n## Preparing training data\n\nYou can use any publicly available dataset, create your own, or use the dataset from CSV in GitHub. I have done basic processing steps for this experiment in the dataset in the link. Feel free to do an elaborate cleansing and preprocessing for your implementation. Below are the independent variables in the dataset:\n\n* Name (String)\n* Rating (String)\n* Genre (String, Categorical)\n* Year (Number)\n* Released (Date)\n* Director (String)\n* Writer (String)\n* Star (String)\n* Country (String, Categorical)\n* Budget (Number)\n* Company (String)\n* Runtime (Number)\n\n## BigQuery dataset using Cloud Shell\n\nBigQuery is a serverless, multi-cloud data warehouse that can scale from bytes to petabytes with zero operational overhead. This makes it a great choice for storing ML training data. But there\u2019s more \u2014 the built-in machine learning (ML) and analytics capabilities allow you to create no-code predictions using just SQL queries. And you can access data from external sources with federated queries, eliminating the need for complicated ETL pipelines. 
You can read more about everything BigQuery has to offer in the BigQuery product page.\n\nBigQuery allows you to focus on analyzing data to find meaningful insights. In this blog, you'll use the **bq** command-line tool to load a local CSV file into a new BigQuery table. Follow the below steps to enable BigQuery:\n\n### Activate Cloud Shell and create your project\n\nYou will use Cloud Shell, a command-line environment running in Google Cloud. Cloud Shell comes pre-loaded with **bq**.\n\n1. In the Google Cloud Console, on the project selector page, select or create a Google Cloud project.\n2. Make sure that billing is enabled for your Cloud project. Learn how to check if billing is enabled on a project.\n3. Enable the BigQuery API and open the BigQuery web UI.\n4. From the Cloud Console, click Activate Cloud Shell. Make sure you navigate to the project and that it\u2019s authenticated. Refer to gcloud config commands.\n\n## Creating and loading the dataset\n\nA BigQuery dataset is a collection of tables. All tables in a dataset are stored in the same data location. You can also attach custom access controls to limit access to a dataset and its tables.\n\n1. In Cloud Shell, use the `bq mk` command to create a dataset called \"movies.\"\n ```\n bq mk \u2013location=<> movies\n ```\n \n > Use \u2013location=LOCATION to set the location to a region you can remember to set as the region for the VERTEX AI step as well (both instances should be on the same region).\n\n2. Make sure you have the data file (.csv) ready. The file can be downloaded from GitHub. Execute the following commands in Cloud Shell to clone the repository and navigate to the project:\n ```\n git clone https://github.com/AbiramiSukumaran/movie-score.git\n cd movie-score\n ```\n \n *You may also use a public dataset of your choice. To open and query the public dataset, follow the documentation.*\n\n3. Use the `bq load` command to load your CSV file into a BigQuery table (please note that you can also directly upload from the BigQuery UI):\n\n ```\n bq load --source_format=CSV --skip_leading_rows=1 movies.movies_score \\\n ./movies_bq_src.csv \\ \n Id:numeric,name:string,rating:string,genre:string,year:numeric,released:string,score:string,director:string,writer:string,star:string,country:string,budget:numeric,company:string,runtime:numeric,data_cat:string\n ```\n \n - `--source_format=CSV` \u2014 uses CSV data format when parsing data file.\n - `--skip_leading_rows=1` \u2014 skips the first line in the CSV file because it is a header row.\n - `movies.movies_score` \u2014 defines the table the data should be loaded into.\n - `./movies_bq_src.csv` \u2014 defines the file to load. The `bq load` command can load files from Cloud Storage with gs://my_bucket/path/to/file URIs.\n\n A schema, which can be defined in a JSON schema file or as a comma-separated list. (I\u2019ve used a comma-separated list.)\n \n Hurray! Our CSV data is now loaded in the table `movies.movies`. Remember, you can create a view to keep only essential columns that contribute to the model training and ignore the rest.\n \n4. Let\u2019s query it, quick!\n\n We can interact with BigQuery in three ways:\n \n 1. BigQuery web UI\n 2. The bq command\n 3. API\n\n Your queries can also join your data against any dataset (or datasets, so long as they're in the same location) that you have permission to read. 
Find a snippet of the sample data below:\n \n ```sql\n SELECT name, rating, genre, runtime FROM movies.movies_score limit 3;\n ```\n \n I have used the BigQuery Web SQL Workspace to run queries. The SQL Workspace looks like this:\n \n \n \n \n \n## Predicting movie success score (user score on a scale of 1-10)\n\nIn this experiment, I am predicting the success score (user score/rating) for the movie as a multi-class classification model on the movie dataset.\n\n**A quick note about the choice of model**\n\nThis is an experimental choice of model chosen here, based on the evaluation of results I ran across a few models initially and finally went ahead with LOGISTIC REG to keep it simple and to get results closer to the actual movie rating from several databases. Please note that this should be considered just as a sample for implementing the model and is definitely not the recommended model for this use case. One other way of implementing this is to predict the outcome of the movie as GOOD/BAD using the Logistic Regression model instead of predicting the score. \n\n## Using BigQuery data in Vertex AI AutoML integration\n\nUse your data from BigQuery to directly create an AutoML model with Vertex AI. Remember, we can also perform AutoML from BigQuery itself and register the model with VertexAI and expose the endpoint. Refer to the documentation for BigQuery AutoML. In this example, however, we will use Vertex AI AutoML to create our model. \n\n### Creating a Vertex AI data set\n\nGo to Vertex AI from Google Cloud Console, enable Vertex AI API if not already done, expand data and select Datasets, click on Create data set, select TABULAR data type and the \u201cRegression / classification\u201d option, and click Create:\n\n### Select data source\n\nOn the next page, select a data source:\n\nChoose the \u201cSelect a table or view from BigQuery\u201d option and select the table from BigQuery in the BigQuery path BROWSE field. Click Continue.\n\n**A Note to remember**\n\nThe BigQuery instance and Vertex AI data sets should have the same region in order for the BigQuery table to show up in Vertex AI.\n\nWhen you are selecting your source table/view, from the browse list, remember to click on the radio button to continue with the below steps. If you accidentally click on the name of the table/view, you will be taken to Dataplex. You just need to browse back to Vertex AI if this happens to you.\n\n### Train your model \n\nOnce the dataset is created, you should see the Analyze page with the option to train a new model. Click that:\n\n### Configure training steps \n\nGo through the steps in the Training Process.\n\nLeave Objective as **Classification**.\n\nSelect AutoML option in first page and click continue:\n\nGive your model a name.\n\nSelect Target Column name as \u201cScore\u201d from the dropdown that shows and click Continue.\n\nAlso note that you can check the \u201cExport test dataset to BigQuery\u201d option, which makes it easy to see the test set with results in the database efficiently without an extra integration layer or having to move data between services.\n\nOn the next pages, you have the option to select any advanced training options you need and the hours you want to set the model to train. 
Please note that you might want to be mindful of the pricing before you increase the number of node hours you want to use for training.\n\nClick **Start Training** to begin training your new model.\n\n### Evaluate, deploy, and test your model \n\nOnce the training is completed, you should be able to click Training (under the Model Development heading in the left-side menu) and see your training listed in the Training Pipelines section. Click that to land on the Model Registry page. You should be able to: \n\n1. View and evaluate the training results.\n\n \n\n1. Deploy and test the model with your API endpoint.\n\n Once you deploy your model, an API endpoint gets created which can be used in your application to send requests and get model prediction results in the response.\n \n\n1. Batch predict movie scores.\n\n You can integrate batch predictions with BigQuery database objects as well. Read from the BigQuery object (in this case, I have created a view to batch predict movies score) and write into a new BigQuery table. Provide the respective BigQuery paths as shown in the image and click CREATE:\n \n \n Once it is complete, you should be able to query your database for the batch prediction results. But before you move on from this section, make sure you take a note of the deployed model\u2019s Endpoint id, location, and other details on your Vertex AI endpoint section.\n \n We have created a custom ML model for the same use case using BigQuery ML with no code but only SQL, and it\u2019s already detailed in another blog.\n \n## Serverless web application with MongoDB Atlas and Angular\n\nThe user interface for this experiment is using Angular and MongoDB Atlas and is deployed on Cloud Run. Check out the blog post describing how to set up a MongoDB serverless instance to use in a web app and deploy that on Cloud Run.\n\nIn the application, we\u2019re also utilizing Atlas Search, a full-text search capability, integrated into MongoDB Atlas. Atlas Search enables autocomplete when entering information about our movies. For the data, we imported the same dataset we used earlier into Atlas.\n\nYou can find the source code of the application in the dedicated Github repository. \n\n## MongoDB Atlas for transactional data\n\nIn this experiment, MongoDB Atlas is used to record transactions in the form of: \n\n1. Real time user requests. \n1. Prediction result response.\n1. Historical data to facilitate UI fields autocompletion. \n\nIf instead, you want to configure a pipeline for streaming data from MongoDB to BigQuery and vice-versa, check out the dedicated Dataflow templates.\n\nOnce you provision your cluster and set up your database, make sure to note the below in preparation of our next step, creating the trigger:\n\n1. Connection String\n1. Database Name\n1. Collection Name\n\nPlease note that this client application uses the Cloud Function Endpoint (which is explained in the below section) that uses user input and predicts movie score and inserts in MongoDB.\n\n## Java Cloud Function to trigger ML invocation from the UI\n\nCloud Functions is a lightweight, serverless compute solution for developers to create single-purpose, stand-alone functions that respond to Cloud events without needing to manage a server or runtime environment. In this section, we will prepare the Java Cloud Functions code and dependencies and authorize for it to be executed on triggers\n\nRemember how we have the endpoint and other details from the ML deployment step? 
We are going to use that here, and since we are using Java Cloud Functions, we will use pom.xml for handling dependencies. We use the google-cloud-aiplatform library to consume the Vertex AI AutoML endpoint API:\n\n```xml\n<dependency>\n    <groupId>com.google.cloud</groupId>\n    <artifactId>google-cloud-aiplatform</artifactId>\n    <version>3.1.0</version>\n</dependency>\n```\n\n1. Search for Cloud Functions in Google Cloud console and click \u201cCreate Function.\u201d\n\n2. Enter the configuration details, like Environment, Function name, Region, Trigger (in this case, HTTPS), Authentication of your choice, enable \u201cRequire HTTPS,\u201d and click next/save.\n\n \n\n3. On the next page, select Runtime (Java 11), Source Code (Inline or upload), and start editing.\n\n \n \n4. You can clone the Java source code and pom.xml from the GitHub repository.\n\n > If you are using Gen2 (recommended), you can use the class name and package as-is. If you use Gen1 Cloud Functions, please change the package name and class name to \u201cExample.\u201d\n\n5. In the .java file, you will notice the part where we connect to the MongoDB instance to write data (use your credentials):\n\n ```java\n MongoClient client = MongoClients.create(YOUR_CONNECTION_STRING);\n MongoDatabase database = client.getDatabase(\"movies\");\n MongoCollection<Document> collection = database.getCollection(\"movies\");\n ```\n \n6. You should also notice the ML model invocation part in the Java code (use your endpoint):\n\n ```java\n PredictionServiceSettings predictionServiceSettings = PredictionServiceSettings.newBuilder().setEndpoint(\"<>-aiplatform.googleapis.com:443\")\n .build();\n int cls = 0;\n \u2026\n EndpointName endpointName = EndpointName.of(project, location, endpointId);\n ```\n \n7. Go ahead and deploy the function once all changes are completed. You should see the endpoint URL that will be used in the client application to send requests to this Cloud Function.\n\nThat\u2019s it! Nothing else to do in this section. The endpoint is used in the client application: the user interface sends user parameters to the Cloud Function as a request and receives the predicted movie score as a response. The Cloud Function also writes the request and response to the MongoDB collection.\n\n## What\u2019s next?\n\nThank you for following us on this journey! As a reward for your patience, you can check out the predicted score for your favorite movie. \n\n1. Analyze and compare the accuracy and other evaluation parameters between the model built manually in BigQuery ML with SQL and the Vertex AI AutoML model.\n1. Play around with the independent variables and try to increase the accuracy of the prediction result.\n1. Take it one step further and try the same problem as a Linear Regression model by predicting the score as a float/decimal point value instead of a rounded integer.\n\nTo learn more about some of the key concepts in this post you can dive in here:\n\nLinear Regression Tutorial\n\nAutoML Model Types\n\nCodelabs\n", "format": "md", "metadata": {"tags": ["Atlas", "Google Cloud", "AI"], "pageDescription": "We're using BigQuery, Vertex AI, and MongoDB Atlas to predict a categorical variable using a Supervised Machine Learning Model created with AutoML.", "contentType": "Tutorial"}, "title": "Movie Score Prediction with BigQuery, Vertex AI, and MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-massive-arrays", "action": "created", "body": "# Massive Arrays\n\nDesign patterns are a fundamental part of software engineering. 
They provide developers with best practices and a common language as they architect applications.\n\nAt MongoDB, we have schema design patterns to help developers be successful as they plan and iterate on their schema designs. Daniel Coupal and Ken Alger co-wrote a fantastic blog series that highlights each of the schema design patterns. If you really want to dive into the details (and I recommend you do!), check out MongoDB University's free course on Data Modeling.\n\nSometimes, developers jump right into designing their schemas and building their apps without thinking about best practices. As their apps begin to scale, they realize that things are bad.\n\nWe've identified several common mistakes developers make with MongoDB. We call these mistakes \"schema design anti-patterns.\"\n\nThroughout this blog series, I'll introduce you to six common anti-patterns. Let's start today with the Massive Arrays anti-pattern.\n\n>\n>\n>:youtube]{vid=8CZs-0it9r4 start=236}\n>\n>Prefer to learn by video? I've got you covered.\n>\n>\n\n## Massive Arrays\n\nOne of the rules of thumb when modeling data in MongoDB is *data that is accessed together should be stored together*. If you'll be retrieving or updating data together frequently, you should probably store it together. Data is commonly stored together by embedding related information in subdocuments or arrays.\n\nThe problem is that sometimes developers take this too far and embed massive amounts of information in a single document.\n\nConsider an example where we store information about employees who work in various government buildings. If we were to embed the employees in the building document, we might store our data in a buildings collection like the following:\n\n``` javascript\n// buildings collection\n{\n \"_id\": \"city_hall\",\n \"name\": \"City Hall\",\n \"city\": \"Pawnee\",\n \"state\": \"IN\",\n \"employees\": [\n {\n \"_id\": 123456789,\n \"first\": \"Leslie\",\n \"last\": \"Yepp\",\n \"cell\": \"8125552344\",\n \"start-year\": \"2004\"\n },\n {\n \"_id\": 234567890,\n \"first\": \"Ron\",\n \"last\": \"Swandaughter\",\n \"cell\": \"8125559347\",\n \"start-year\": \"2002\"\n }\n ]\n}\n```\n\nIn this example, the employees array is unbounded. As we begin storing information about all of the employees who work in City Hall, the employees array will become massive\u2014potentially sending us over the [16 mb document maximum. Additionally, reading and building indexes on arrays gradually becomes less performant as array size increases.\n\nThe example above is an example of the massive arrays anti-pattern.\n\nSo how can we fix this?\n\nInstead of embedding the employees in the buildings documents, we could flip the model and instead embed the buildings in the employees documents:\n\n``` javascript\n// employees collection\n{\n \"_id\": 123456789,\n \"first\": \"Leslie\",\n \"last\": \"Yepp\",\n \"cell\": \"8125552344\",\n \"start-year\": \"2004\",\n \"building\": {\n \"_id\": \"city_hall\",\n \"name\": \"City Hall\",\n \"city\": \"Pawnee\",\n \"state\": \"IN\"\n }\n},\n{\n \"_id\": 234567890,\n \"first\": \"Ron\",\n \"last\": \"Swandaughter\",\n \"cell\": \"8125559347\",\n \"start-year\": \"2002\",\n \"building\": {\n \"_id\": \"city_hall\",\n \"name\": \"City Hall\",\n \"city\": \"Pawnee\",\n \"state\": \"IN\"\n }\n}\n```\n\nIn the example above, we are repeating the information about City Hall in the document for each City Hall employee. 
If we are frequently displaying information about an employee and their building in our application together, this model probably makes sense.\n\nThe disadvantage with this approach is we have a lot of data duplication. Storage is cheap, so data duplication isn't necessarily a problem from a storage cost perspective. However, every time we need to update information about City Hall, we'll need to update the document for every employee who works there. If we take a look at the information we're currently storing about the buildings, updates will likely be very infrequent, so this approach may be a good one.\n\nIf our use case does not call for information about employees and their building to be displayed or updated together, we may want to instead separate the information into two collections and use references to link them:\n\n``` javascript\n// buildings collection\n{\n \"_id\": \"city_hall\",\n \"name\": \"City Hall\",\n \"city\": \"Pawnee\",\n \"state\": \"IN\"\n}\n\n// employees collection\n{\n \"_id\": 123456789,\n \"first\": \"Leslie\",\n \"last\": \"Yepp\",\n \"cell\": \"8125552344\",\n \"start-year\": \"2004\",\n \"building_id\": \"city_hall\"\n},\n{\n \"_id\": 234567890,\n \"first\": \"Ron\",\n \"last\": \"Swandaughter\",\n \"cell\": \"8125559347\",\n \"start-year\": \"2002\",\n \"building_id\": \"city_hall\"\n}\n```\n\nHere we have completely separated our data. We have eliminated massive arrays, and we have no data duplication.\n\nThe drawback is that if we need to retrieve information about an employee and their building together, we'll need to use $lookup to join the data together. $lookup operations can be expensive, so it's important to consider how often you'll need to perform $lookup if you choose this option.\n\nIf we find ourselves frequently using $lookup, another option is to use the extended reference pattern. The extended reference pattern is a mixture of the previous two approaches where we duplicate some\u2014but not all\u2014of the data in the two collections. We only duplicate the data that is frequently accessed together.\n\nFor example, if our application has a user profile page that displays information about the user as well as the name of the building and the state where they work, we may want to embed the building name and state fields in the employee document:\n\n``` javascript\n// buildings collection\n{\n \"_id\": \"city_hall\",\n \"name\": \"City Hall\",\n \"city\": \"Pawnee\",\n \"state\": \"IN\"\n}\n\n// employees collection\n{\n \"_id\": 123456789,\n \"first\": \"Leslie\",\n \"last\": \"Yepp\",\n \"cell\": \"8125552344\",\n \"start-year\": \"2004\",\n \"building\": {\n \"name\": \"City Hall\",\n \"state\": \"IN\"\n }\n},\n{\n \"_id\": 234567890,\n \"first\": \"Ron\",\n \"last\": \"Swandaughter\",\n \"cell\": \"8125559347\",\n \"start-year\": \"2002\",\n \"building\": {\n \"name\": \"City Hall\",\n \"state\": \"IN\"\n }\n}\n```\n\nAs we saw when we duplicated data previously, we should be mindful of duplicating data that will frequently be updated. In this particular case, the name of the building and the state the building is in are very unlikely to change, so this solution works.\n\n## Summary\n\nStoring related information that you'll be frequently querying together is generally good. 
However, storing information in massive arrays that will continue to grow over time is generally bad.\n\nAs is true with all MongoDB schema design patterns and anti-patterns, carefully consider your use case\u2014the data you will store and how you will query it\u2014in order to determine what schema design is best for you.\n\nBe on the lookout for more posts in this anti-patterns series in the coming weeks.\n\n>\n>\n>When you're ready to build a schema in MongoDB, check out MongoDB Atlas, MongoDB's fully managed database-as-a-service. Atlas is the easiest way to get started with MongoDB. With a forever-free tier, you're on your way to realizing the full value of MongoDB.\n>\n>\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- MongoDB Docs: Unbounded Arrays Anti-Pattern\n- MongoDB Docs: Data Modeling Introduction\n- MongoDB Docs: Model One-to-One Relationships with Embedded Documents\n- MongoDB Docs: Model One-to-Many Relationships with Embedded Documents\n- MongoDB Docs: Model One-to-Many Relationships with Document References\n- MongoDB University M320: Data Modeling\n- Blog Series: Building with Patterns\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Don't fall into the trap of this MongoDB Schema Design Anti-Pattern: Massive Arrays", "contentType": "Article"}, "title": "Massive Arrays", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-unnecessary-indexes", "action": "created", "body": "# Unnecessary Indexes\n\nSo far in this MongoDB Schema Design Anti-Patterns series, we've discussed avoiding massive arrays as well as a massive number of collections.\n\nToday, let's talk about indexes. Indexes are great (seriously!), but it's easy to get carried away and make indexes that you'll never actually use. Let's examine why an index may be unnecessary and what the consequences of keeping it around are.\n\n>\n>\n>:youtube]{vid=mHeP5IbozDU start=32}\n>\n>Would you rather watch than read? The video above is just for you.\n>\n>\n\n## Unnecessary Indexes\n\nBefore we go any further, we want to emphasize that [indexes are good. Indexes allow MongoDB to efficiently query data. If a query does not have an index to support it, MongoDB performs a collection scan, meaning that it scans *every* document in a collection. Collection scans can be very slow. If you frequently execute a query, make sure you have an index to support it.\n\nNow that we have an understanding that indexes are good, you might be wondering, \"Why are unnecessary indexes an anti-pattern? Why not create an index on every field just in case I'll need it in the future?\"\n\nWe've discovered three big reasons why you should remove unnecessary indexes:\n\n1. **Indexes take up space**. Each index is at least 8 kB and grows with the number of documents associated with it. Thousands of indexes can begin to drain resources.\n2. **Indexes can impact the storage engine's performance**. As we discussed in the previous post in this series about the Massive Number of Collections Anti-Pattern, the WiredTiger storage engine (MongoDB's default storage engine) stores a file for each collection and for each index. WiredTiger will open all files upon startup, so performance will decrease when an excessive number of collections and indexes exist.\n3. **Indexes can impact write performance**. 
Whenever a document is created, updated, or deleted, any index associated with that document must also be updated. These index updates negatively impact write performance.\n\nIn general, we recommend limiting your collection to a maximum of 50 indexes.\n\nTo avoid the anti-pattern of unnecessary indexes, examine your database and identify which indexes are truly necessary. Unnecessary indexes typically fall into one of two categories:\n\n1. The index is rarely used or not at all.\n2. The index is redundant because another compound index covers it.\n\n## Example\n\nConsider Leslie from the incredible TV show Parks and Recreation. Leslie often looks to other powerful women for inspiration.\n\nLet's say Leslie wants to inspire others, so she creates a website about her favorite inspirational women. The website allows users to search by full name, last name, or hobby.\n\nLeslie chooses to use MongoDB Atlas to create her database. She creates a collection named `InspirationalWomen`. Inside of that collection, she creates a document for each inspirational woman. Below is a document she created for Sally Ride.\n\n``` javascript\n// InspirationalWomen collection\n\n{\n \"_id\": {\n \"$oid\": \"5ec81cc5b3443e0e72314946\"\n },\n \"first_name\": \"Sally\",\n \"last_name\": \"Ride\",\n \"birthday\": 1951-05-26T00:00:00.000Z,\n \"occupation\": \"Astronaut\",\n \"quote\": \"I would like to be remembered as someone who was not afraid to do what \n she wanted to do, and as someone who took risks along the way in order \n to achieve her goals.\",\n \"hobbies\": \n \"Tennis\",\n \"Writing children's books\"\n ]\n}\n```\n\nLeslie eats several sugar-filled Nutriyum bars, and, riding her sugar high, creates an index for every field in her collection.\n\n \n\nShe also creates a compound index on the last_name and first_name fields, so that users can search by full name. Leslie now has one collection with eight indexes:\n\n1. `_id` is indexed by default (see the [MongoDB Docs for more details)\n2. `{ first_name: 1 }`\n3. `{ last_name: 1 }`\n4. `{ birthday: 1 }`\n5. `{ occupation: 1 }`\n6. `{ quote: 1 }`\n7. `{ hobbies: 1 }`\n8. `{ last_name: 1, first_name: 1}`\n\nLeslie launches her website and is excited to be helping others find inspiration. Users are discovering new role models as they search by full name, last name, and hobby.\n\n### Removing Unnecessary Indexes\n\nLeslie decides to fine-tune her database and wonders if all of those indexes she created are really necessary.\n\nShe opens the Atlas Data Explorer and navigates to the Indexes pane. She can see that the only two indexes that are being used are the compound index named `last_name_1_first_name_1` and the `hobbies_1` index. She realizes that this makes sense.\n\nHer queries for inspirational women by full name are covered by the `last_name_1_first_name_1` index. Additionally, her query for inspirational women by last name is covered by the same `last_name_1_first_name_1` compound index since the index has a `last_name` prefix. Her queries for inspirational women by hobby are covered by the `hobbies_1` index. Since those are the only ways that users can query her data, the other indexes are unnecessary.\n\nIn the Data Explorer, Leslie has the option of dropping all of the other unnecessary indexes. Since MongoDB requires an index on the `_id` field, she cannot drop this index.\n\nIn addition to using the Data Explorer, Leslie also has the option of using MongoDB Compass to check for unnecessary indexes. 
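\n\nFor anyone who prefers the shell to a GUI, roughly the same check can be scripted with the `$indexStats` aggregation stage; here is a small sketch (the collection and index names come from the example above, and `$indexStats` reports usage since the last server restart):\n\n``` javascript\n// Run in mongosh: list each index along with how many operations have used it.\ndb.InspirationalWomen.aggregate([{ $indexStats: {} }]).forEach((idx) => {\n  print(`${idx.name}: ${idx.accesses.ops} ops since ${idx.accesses.since}`);\n});\n\n// Once an index is confirmed to be unused (and not required, like _id),\n// it can be dropped by name, e.g., the default name for the birthday index:\ndb.InspirationalWomen.dropIndex(\"birthday_1\");\n```\n\n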
When she navigates to the Indexes pane for her collection, she can once again see that the `last_name_1_first_name_1` and the `hobbies_1` indexes are the only indexes being used regularly. Just as she could in the Atlas Data Explorer, Leslie has the option of dropping each of the indexes except for `_id`.\n\nLeslie decides to drop all of the unnecessary indexes. After doing so, her collection now has the following indexes:\n\n1. `_id` is indexed by default\n2. `{ hobbies: 1 }`\n3. `{ last_name: 1, first_name: 1}`\n\n## Summary\n\nCreating indexes that support your queries is good. Creating unnecessary indexes is generally bad.\n\nUnnecessary indexes reduce performance and take up space. An index is considered to be unnecessary if (1) it is not frequently used by a query or (2) it is redundant because another compound index covers it.\n\nYou can use the Atlas Data Explorer or MongoDB Compass to help you discover how frequently your indexes are being used. When you discover an index is unnecessary, remove it.\n\nBe on the lookout for the next post in this anti-patterns series!\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- MongoDB Docs: Remove Unnecessary Indexes\n- MongoDB Docs: Indexes\n- MongoDB Docs: Compound Indexes \u2014 Prefixes\n- MongoDB Docs: Indexing Strategies\n- MongoDB Docs: Data Modeling Introduction\n- MongoDB University M320: Data Modeling\n- MongoDB University M201: MongoDB Performance\n- Blog Series: Building with Patterns\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Don't fall into the trap of this MongoDB Schema Design Anti-Pattern: Unnecessary Indexes", "contentType": "Article"}, "title": "Unnecessary Indexes", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/javascript/restapi-mongodb-code-example", "action": "created", "body": "# Final Space API\n\n## Creator\nAshutosh Kumar Singh contributed this project. \n\n## About the Project\n\nThe Final Space API is based on the television show Final Space by Olan Rogers from TBS. From talking cats to evil aliens, the animated show tells the intergalactic adventures of Gary Goodspeed and his alien friend Mooncake as they unravel the mystery of \"Final Space\". The show can be viewed, amongst other places, on TBS, AdultSwim, and Netflix.\n\nAll data of this API, such as character info, is obtained from the Final Space wiki. More data such as season and episode information is planned for future release. This data can be used for your own projects such as fan pages or any way you see fit.\n\nAll this information is available through a RESTful API implemented in NodeJS. This API returns data in a friendly json format.\n\nThe Final Space API is maintained as an open source project on GitHub. More information about contributing can be found in the readme.\n \n ## Inspiration\n\nDuring Hacktoberfest 2020, I want to create and maintain a project and not just contribute during the hacktoberfest.\nFinal Space is one of my favorite animated television shows. I took inspiration from Rick & Morty API and tried to build the MVP of the API. \n\nThe project saw huge cntributions from developers all around the world and finished the version 1 of the API by the end of October.\n \n ## Why MongoDB?\n \nI wanted that data should be accessed quickly and can be easily maintained. MongoDB was my obvious choice, the free cluster is more than enough for all my needs. 
I believe I can increase the data hundred times and still find that Free Cluster is meeting my needs.\n \n ## How It Works\n \n You can fetch the data by making a POST request to any of the endpoints.\n \nThere are four available resources:\n\nCharacter: used to get all the characters.\nhttps://finalspaceapi.com/api/v0/character\n\nEpisode: used to get all the episodes.\nhttps://finalspaceapi.com/api/v0/episode\n\nLocation: used to get all the locations.\nhttps://finalspaceapi.com/api/v0/location\n\nQuote: used to get quotes from Final Space.\nhttps://finalspaceapi.com/api/v0/quote", "format": "md", "metadata": {"tags": ["JavaScript", "Atlas"], "pageDescription": "Final Space API is a public RESTful API based on the animated television show Final Space.", "contentType": "Code Example"}, "title": "Final Space API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-flexible-sync", "action": "created", "body": "# Introducing Flexible Sync (Preview) \u2013 The Next Iteration of Realm Sync\n\nToday, we are excited to announce the public preview of our next version of Realm Sync: Flexible Sync. This new method of syncing puts the power into the hands of the developer. Now, developers can get more granular control over the data synced to user applications with intuitive language-native queries and hierarchical permissions.\n\n:youtube]{vid=aJ6TI1mc7Bs}\n\n## Introduction \n\nPrior to launching the general availability of Realm Sync in February 2021, the Realm team spent countless hours with developers learning how they build best-in-class mobile applications. A common theme emerged\u2014building real-time, offline-first mobile apps require an overwhelming amount of complex, non-differentiating work. \n\nOur [first version of Realm Sync addressed this pain by abstracting away offline-first, real-time syncing functionality using declarative APIs. It expedited the time-to-market for many developers and worked well for apps where data is static and compartmentalized, or where permissions rarely need to change. But for dynamic apps and complex use cases, developers still had to spend time creating workarounds instead of developing new features. With that in mind, we built the next iteration of Realm Sync: Flexible Sync. Flexible Sync is designed to help developers: \n\n- Get to market faster: Use intuitive, language-native queries to define the data synced to user applications instead of proprietary concepts.\n- Optimize real-time collaboration between users: Utilize object-level conflict-resolution logic.\n- Simplify permissions: Apply role-based logic to applications with an expressive permissions system that groups users into roles on a pe-class or collection basis.\n\nFlexible Sync requires MongoDB 5.0+.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n\n## Language-Native Querying\n\nFlexible Sync\u2019s query-based sync logic is distinctly different from how Realm Sync operates today. The new structure is designed to more closely mirror how developers are used to building sync today\u2014typically using GET requests with query parameters. 
\n\nOne of the primary benefits of Flexible Sync is that it eliminates all the time developers spend determining what query parameters to pass to an endpoint. Instead, the Realm APIs directly integrate with the native querying system on the developer\u2019s choice of platform\u2014for example, a predicate-based query language for iOS, a Fluent query for Android, a string-based query for Javascript, and a LINQ query for .NET. \n\nUnder the hood, the Realm Sync thread sends the query to MongoDB Realm (Realm\u2019s cloud offering). MongoDB Realm translates the query to MongoDB\u2019s query language and executes the query against MongoDB Atlas. Atlas then returns the resulting documents. Those documents are then translated into Realm objects, sent down to the Realm client, and stored on disk. The Realm Sync thread keeps a queue of any changes made locally to synced objects\u2014even when offline. As soon as connectivity is reestablished, any changes made to the server-side or client-side are synced down using built-in granular conflict resolution logic. All of this occurs behind the scenes while the developer is interacting with the data. This is the part we\u2019ve heard our users describe as \u201cmagic.\u201d \n\nFlexible Sync also enables much more dynamic queries, based on user inputs. Picture a home listing app that allows users to search available properties in a certain area. As users define inputs\u2014only show houses in Dallas, TX that cost less than $300k and have at least three bedrooms\u2014the query parameters can be combined with logical ANDs and ORs to produce increasingly complex queries, and narrow down the search result even further. All query results are combined into a single realm file on the client\u2019s device, which significantly simplifies code required on the client-side and ensures changes to data are synced efficiently and in real time. \n\n::::tabs\n:::tab]{tabid=\"Swift\"}\n```swift\n// Set your Schema\nclass Listing: Object {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var location: String\n @Persisted var price: Int\n @Persisted var bedrooms: Int\n}\n\n// Configure your App and login\nlet app = App(id: \"XXXX\")\nlet user = try! await app.login(credentials:\n .emailPassword(email: \"email\", password: \"password\"))\n\n// Set the new Flexible Sync Config and open the Realm\nlet config = user.flexibleSyncConfiguration()\nlet realm = try! await Realm(configuration: config, downloadBeforeOpen: .always)\n\n// Create a Query and Add it to your Subscriptions\nlet subscriptions = realm.subscriptions\n\ntry! await subscriptions.write {\n subscriptions.append(QuerySubscription(name: \"home-search\") {\n $0.location == \"dallas\" && $0.price < 300000 && $0.bedrooms >= 3\n })\n}\n\n// Now query the local realm and get your home listings - output is 100 listings\n// in the results\nprint(realm.objects(Listing.self).count)\n\n// Remove the subscription - the data is removed from the local device but stays\n// on the server\ntry! 
await subscriptions.write {\n subscriptions.remove(named: \"home-search\")\n}\n\n// Output is 0 - listings have been removed locally\nprint(realm.objects(Listing.self).count)\n```\n:::\n:::tab[]{tabid=\"Kotlin\"}\n```kotlin\n// Set your Schema\nopen class Listing: ObjectRealm() {\n @PrimaryKey\n @RealmField(\"_id\")\n var id: ObjectId\n var location: String = \"\"\n var price: Int = 0\n var bedrooms: Int = 0\n}\n\n// Configure your App and login\nval app = App(\"\")\nval user = app.login(Credentials.emailPassword(\"email\", \"password\"))\n\n// Set the new Flexible Sync Config and open the Realm\nlet config = SyncConfiguration.defaultConfig(user)\nlet realm = Realm.getInstance(config)\n\n// Create a Query and Add it to your Subscriptions\nval subscriptions = realm.subscriptions\nsubscriptions.update { mutableSubscriptions ->\n val sub = Subscription.create(\n \"home-search\", \n realm.where()\n .equalTo(\"location\", \"dallas\")\n .lessThan(\"price\", 300_000)\n .greaterThanOrEqual(\"bedrooms\", 3)\n )\n mutableSubscriptions.add(subscription)\n}\n\n// Wait for server to accept the new subscription and download data\nsubscriptions.waitForSynchronization()\nrealm.refresh()\n\n// Now query the local realm and get your home listings - output is 100 listings \n// in the results\nval homes = realm.where().count()\n\n// Remove the subscription - the data is removed from the local device but stays \n// on the server\nsubscriptions.update { mutableSubscriptions ->\n mutableSubscriptions.remove(\"home-search\")\n}\nsubscriptions.waitForSynchronization()\nrealm.refresh()\n\n// Output is 0 - listings have been removed locally\nval homes = realm.where().count()\n```\n:::\n:::tab[]{tabid=\".NET\"}\n```csharp\n// Set your Schema\nclass Listing: RealmObject\n{\n [PrimaryKey, MapTo(\"_id\")]\n public ObjectId Id { get; set; }\n public string Location { get; set; }\n public int Price { get; set; }\n public int Bedrooms { get; set; }\n}\n\n// Configure your App and login\nvar app = App.Create(YOUR_APP_ID_HERE);\nvar user = await app.LogInAsync(Credentials.EmailPassword(\"email\", \"password\"));\n\n// Set the new Flexible Sync Config and open the Realm\nvar config = new FlexibleSyncConfiguration(user);\nvar realm = await Realm.GetInstanceAsync(config);\n\n// Create a Query and Add it to your Subscriptions\nvar dallasQuery = realm.All().Where(l => l.Location == \"dallas\" && l.Price < 300_000 && l.Bedrooms >= 3);\nrealm.Subscriptions.Update(() =>\n{\n realm.Subscriptions.Add(dallasQuery);\n});\n\nawait realm.Subscriptions.WaitForSynchronizationAsync();\n\n// Now query the local realm and get your home listings - output is 100 listings\n// in the results\nvar numberOfListings = realm.All().Count();\n\n// Remove the subscription - the data is removed from the local device but stays\n// on the server\n\nrealm.Subscriptions.Update(() =>\n{\n realm.Subscriptions.Remove(dallasQuery);\n});\n\nawait realm.Subscriptions.WaitForSynchronizationAsync();\n\n// Output is 0 - listings have been removed locally\nnumberOfListings = realm.All().Count();\n```\n:::\n:::tab[]{tabid=\"JavaScript\"}\n```js\nimport Realm from \"realm\";\n\n// Set your Schema\nconst ListingSchema = {\n name: \"Listing\",\n primaryKey: \"_id\",\n properties: {\n _id: \"objectId\",\n location: \"string\",\n price: \"int\",\n bedrooms: \"int\",\n },\n};\n\n// Configure your App and login\nconst app = new Realm.App({ id: YOUR_APP_ID_HERE });\nconst credentials = Realm.Credentials.emailPassword(\"email\", \"password\");\nconst user = await 
app.logIn(credentials);\n\n// Set the new Flexible Sync Config and open the Realm\nconst realm = await Realm.open({\n schema: [ListingSchema],\n sync: { user, flexible: true },\n});\n\n// Create a Query and Add it to your Subscriptions\nawait realm.subscriptions.update((mutableSubscriptions) => {\n mutableSubscriptions.add(\n realm\n .objects(ListingSchema.name)\n .filtered(\"location = 'dallas' && price < 300000 && bedrooms = 3\", {\n name: \"home-search\",\n })\n );\n});\n\n// Now query the local realm and get your home listings - output is 100 listings\n// in the results\nlet homes = realm.objects(ListingSchema.name).length;\n\n// Remove the subscription - the data is removed from the local device but stays\n// on the server\nawait realm.subscriptions.update((mutableSubscriptions) => {\n mutableSubscriptions.removeByName(\"home-search\");\n});\n\n// Output is 0 - listings have been removed locally\nhomes = realm.objects(ListingSchema.name).length;\n```\n:::\n::::\n## Optimizing for Real-Time Collaboration\n\nFlexible Sync also enhances query performance and optimizes for real-time user collaboration by treating a single object or document as the smallest entity for synchronization. Flexible Sync allows for Sync Realms to more efficiently share data and for conflict resolution to incorporate changes faster and with less data transfer.\n\nFor example, you and a fellow employee are analyzing the remaining tasks for a week. Your coworker wants to see all of the time-intensive tasks remaining (`workunits > 5`), and you want to see all the tasks you have left for the week (`owner == ianward`). Your queries will overlap where `workunits > 5` and `owner == ianward`. If your coworker notices one of your tasks is marked incorrectly as `7 workunits` and changes the value to `6`, you will see the change reflected on your device in real time. Under the hood, the merge algorithm will only sync the changed document instead of the entire set of query results increasing query performance. \n\n![Venn diagram showing that 2 different queries can share some of the same documents\n\n## Permissions\n\nWhether it\u2019s a company\u2019s internal application or an app on the App Store, permissions are required in almost every application. That\u2019s why we are excited by how seamless Flexible Sync makes applying a document-level permission model when syncing data\u2014meaning synced documents can be limited based on a user\u2019s role.\n\nConsider how a sales organization uses a CRM application. An individual sales representative should only be able to access her own sales pipeline while her manager needs to be able to see the entire region\u2019s sales pipeline. In Flexible Sync, a user\u2019s role will be combined with the client-side query to determine the appropriate result set. For example, when the sales representative above wants to view her deals, she would send a query where `opportunities.owner == \"EmmaLullo\"` but when her manager wants to see all the opportunities for their entire team, they would query with opportunities.team == \"West\u201d. 
If a user sends a much more expansive query, such as querying for all opportunities, then the permissions system would only allow data to be synced for which the user had explicit access.\n\n```json\n{\n \"Opportunities\": {\n \"roles\": \n {\n name: \"manager\", \n applyWhen: { \"%%user.custom_data.isSalesManager\": true},\n read: {\"team\": \"%%user.custom_data.teamManager\"}\n write: {\"team\": \"%%user.custom_data.teamManager\"}\n },\n {\n name: \"salesperson\",\n applyWhen: {},\n read: {\"owner\": \"%%user.id\"}\n write: {\"owner\": \"%%user.id\"}\n }\n ]\n },\n{\n \"Bookings\": {\n \"roles\": [\n {\n name: \"accounting\", \n applyWhen: { \"%%user.custom_data.isAccounting\": true},\n read: true,\n write: true\n },\n {\n name: \"sales\",\n applyWhen: {},\n read: {\"%%user.custom_data.isSales\": true},\n write: false\n }\n ]\n }\n```\n\n## Looking Ahead\n\nUltimately, our goal with Flexible Sync is to deliver a sync service that can fit any use case or schema design pattern imaginable without custom code or workarounds. And while we are excited that Flexible Sync is now in preview, we\u2019re nowhere near done. \n\nThe Realm Sync team is planning to bring you more query operators and permissions integrations over the course of 2022. Up next we are looking to expose array operators and enable querying on embedded documents, but really, we look to you, our users, to help us drive the roadmap. Submit your ideas and feature requests to our [feedback portal and ask questions in our Community forum. Happy building!", "format": "md", "metadata": {"tags": ["Realm"], "pageDescription": "Realm Flexible Sync (now in preview) gives developers new options for syncing data to your apps", "contentType": "News & Announcements"}, "title": "Introducing Flexible Sync (Preview) \u2013 The Next Iteration of Realm Sync", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/kotlin/realm-google-authentication-android", "action": "created", "body": "# Start Implementing Google Auth With MongoDB Realm in Your Android App\n\nHello, everyone. I am Henna. I started with Mobile Application back in 2017 when I was a lucky recipient of the Udacity Scholarship. I had always used SQLite when it came to using databases in my mobile apps. Using SQLite was definitely a lot of boilerplate code, but using it with Room library did make it easier.\n\nI had heard about Realm before but I got so comfortable using Room with SQLite that I never thought of exploring the option.\n\nAt the time, I was not aware that Realm had multiple offerings, from being used as a local database on mobile, to offering Sync features to be able to sync your app data to multiple devices.\n\nI will pen down my experiments with MongoDB Realm as a series of articles. This is the first article in the series and it is divided into two parts.\n\n**Part A** will explain how to create a MongoDB Realm back end for your mobile app.\n\n**Part B** will explain how to implement Google Authentication in the\napp.\n\n>Pre-Requisites: You have created at least one app using Android Studio.\n\nPhoto by Emily Finch on Unsplash\n\nLet's get some coffee and get the ball rolling. :)\n\n## Part A: \n\n### Step 1. How to Create an Account on MongoDB Cloud\n\nMongoDB Realm is a back end as a service. 
When you want to use MongoDB Realm Sync functionality, you need to create a MongoDB Realm account and it is free :D :D\n\n> MongoDB\u2019s Atlas offering of the database as a service is what makes this database so amazing. For mobile applications, we use Realm DB locally on the mobile device, and the local data gets synced to MongoDB Atlas on the cloud.\n\nAn account on MongoDB Cloud can be easily created by visiting\n.\n\nOnce you sign-in to your account, you will be asked to create an Organization\n\nOnce you click on the Create button, you will be asked to enter organization name and select MongoDB Atlas as a Cloud Service as shown below and click Next.\n\nAdd members and permissions as desired and click on Create Organization. Since I am working on my own I added only myself as Project Owner.\n\nNext you will be asked to create a project, name it and add members and permissions. Each permission is described on the right side. Be cautious of whom you give read/write access of your database.\n\nOnce you create a project, you will be asked to deploy your database as shown below\n\nDepending on your use-case, you can select from given options. For this article, I will choose shared and Free service :)\n\nNext select advance configuration options and you will be asked to select a Cloud Provider and Region\n\nA cluster is a group of MongoDB Servers that store your data on the cloud. Depending on your app requirement, you choose one. I opted for a free cluster option for this app.\n\n> Be mindful of the Cloud Provider and the Location you choose. Realm App is currently available only with AWS and it is recommended to have Realm App region closer to the cluster region and same Cloud Provider. So I choose the settings as shown.\n> \nGive a name to your cluster. Please note this cannot be changed later.\n\nAnd with this, you are all set with Step 1.\n\n### Step 2. Security Quickstart\nOnce you have created your cluster, you will be asked to create a user to access data stored in Atlas. This used to be a manual step earlier but now you get the option to add details as and when you create a cluster.\n\nThese credentials can be used to connect to your cluster via MongoDB Compass or Mongo Shell but we will come to that later.\n\nYou can click on \u201cAdd My Current IP Address\u201d to whitelist your IP address and follow as instructed.\n\nIf you need to change your settings at a later time, use Datasbase Access and Network Access from Security section that will appear on left panel.\n\nWith this Step 2 is done.\n### Step 3. How to Create a Realm App on the Cloud\nWe have set up our cluster, so the next step is to create a Realm app and link it to it. Click on the Realm tab as shown.\n\nYou will be shown a template window that you can choose from. For this article, I will select \u201cBuild your own App\u201d\n\nNext you will be asked to fill details as shown. Your Data Source is the Atlas Cluster you created in Step1. If you have multiple clusters, select the one you want to link your app with.\n\nPlease note, Realm app names should have fewer than 64 characters.\n\nFor better performance, It is recommended to have local deployment and region the same or closer to your cluster region.\n\nCheck the Global Deployment section in MongoDB's official documentation for more details.\n\nYou will be shown a section of guides once you click on \u201cCreate a Realm Application\u201d. 
You can choose to follow the guides if you know what you are doing, but for brevity of this article, I will close guides and this will bring you to your Realm Dashboard as shown\n\nPlease keep a note of the \u201cApp Id\u201d. This will be needed when you create the Android Studio project.\n\nThere are plethora of cloud services that comes with MongoDB Realm. You can use functions, triggers, and other features depending on your app use cases. For this article, you will be using Authentication.\n\nWith this, you are finished with Part A. Yayyy!! :D\n\n## Part B: \n\n### Step 1. Creating an Android Studio Project\n\nI presume you all have experience creating mobile applications using Android Studio. In this step, you would \"Start a new Android Project.\" You can enter any name of your choice and select Kotlin as the language and min API 21.\n\nOnce you create the project, you need to add dependencies for the Realm Database and Google Authentication.\n\n**For Realm**, add this line of code in the project-level `build.gradle` file. This is the latest version at the time of writing this article.\n\n**Edit 01:** Plugin is updated to current (may change again in future).\n\n``` java\nclasspath \"io.realm:realm-gradle-plugin:10.9.0\"\n```\n\nAfter adding this, the dependencies block would look like this.\n\n``` java\ndependencies {\n classpath \"com.android.tools.build:gradle:4.0.0\"\n classpath \"org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version\"\n \n classpath \"io.realm:realm-gradle-plugin:10.9.0\"\n\n // NOTE: Do not place your application dependencies here; they belong\n // in the individual module build.gradle files\n}\n```\n\nNow we add Realm plugin and Google Authentication in the app-level `build.gradle` file. Add this code at the top of the file but below the `kotlin-kapt` extension. If you are using Java, then this would come after the Android plugin.\n\n``` java\napply plugin: 'kotlin-kapt'\napply plugin: 'realm-android'\n```\n\nIn the same file, we would also add the below code to enable the Realm sync in the application. You can add it anywhere inside the Android block.\n\n``` java\nandroid {\n...\n...\nrealm {\n syncEnabled = true\n }\n...\n}\n```\n\nFor Google Auth, add the following dependency in the app-level gradle file. Please note, the versions may change since the time of this article.\n\n**Edit 02:** gms version is updated to current.\n\n``` java\ndependencies{\n...\n...\n//Google OAuth\nimplementation 'com.google.android.gms:play-services-auth:20.0.1'\n...\n}\n```\n\nWith this, we are finished with Step 1. Let's move onto the next step to implement Google Authentication in the project.\n\n### Step 2. Adding Google Authentication to the Application\n\nNow, I will not get into too much detail on implementing Google Authentication to the app since that will deviate from our main topic. I have listed below the set of steps I took and links I followed to implement Google Authentication in my app.\n\n1. Configure a Google API Console project. (Create credentials for Android Application and Web Application). Your credential screen should have 2 oAuth Client IDs.\n\n2. Configure Google Sign-in and the GoogleSignInClient object (in the Activity's onCreate method).\n3. Add the Google Sign-in button to the layout file.\n4. Implement Sign-in flow.\n\nThis is what the activity will look like at the end of the four steps.\n\n>**Please note**: This is only a guideline. Your variable names and views can be different. 
The String server_client_id here is the web client-id you created in Google Console when you created Google Auth credentials in the Google Console Project.\n\n``` java\nclass MainActivity : AppCompatActivity() {\n\n private lateinit var client: GoogleSignInClient\n\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n\n val googleSignInOptions = GoogleSignInOptions.Builder(GoogleSignInOptions.DEFAULT_SIGN_IN)\n .requestEmail()\n .requestServerAuthCode(getString(R.string.server_client_id))\n .build()\n\n client = GoogleSignIn.getClient(this, googleSignInOptions)\n\n findViewById(R.id.sign_in_button).setOnClickListener{\n signIn()\n }\n }\n\n override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {\n super.onActivityResult(requestCode, resultCode, data)\n if(requestCode == 100){\n val task = GoogleSignIn.getSignedInAccountFromIntent(data)\n val account = task.getResult(ApiException::class.java)\n handleSignInResult(account)\n }\n }\n\n private fun handleSignInResult(account: GoogleSignInAccount?) {\n try{\n Log.d(\"MainActivity\", \"${account?.serverAuthCode}\")\n //1\n val idToken = account?.serverAuthCode\n\n //signed in successfully, forward credentials to MongoDB realm\n //2\n val googleCredentials = Credentials.google(idToken)\n //3\n app.loginAsync(googleCredentials){\n if(it.isSuccess){\n Log.d(\"MainActivity\", \"Successfully authenticated using Google OAuth\")\n //4\n startActivity(Intent(this, SampleResult::class.java))\n } else {\n Log.d(\"MainActivity\", \"Failed to Log in to MongoDB Realm: ${it.error.errorMessage}\")\n }\n }\n } catch(exception: ApiException){\n Log.d(\"MainActivity\", exception.printStackTrace().toString())\n }\n }\n\n private fun signIn() {\n val signIntent = client.signInIntent\n startActivityForResult(signIntent, 100)\n }\n}\n```\n\nWhen you run your app, your app should ask you to sign in with your Google account, and when successful, it should open SampleResult Activity. I displayed a random text to show that it works. :D\n\nNow, we will move onto the next step and configure the Google Auth provider on the MongoDB Realm cloud account.\n\n### Step 3. Configure Google Auth Provider on MongoRealm UI\n\nReturn to the MongoDB Realm account where you created your Realm app. On the left panel, click on the Authentication tab and you will see the list of auth providers that MongoDB Realm supports.\n\nClick on the *edit* icon corresponding to Google Authentication provider and you will be led to a page as shown below.\n\n**Edit 03:** Updated Screenshot as there is now a new option OpenID connect.\n\nToggle the **Provider Enabled** switch to **On** and enter the **Web-Client ID** and **Web Client Secret** from the Google Console Project you created above.\n\nYou can choose the Metadata Fields as per your app use case and click Save.\n\n> Keeping old UI here as OpenID Connect is not used.\n> \n\nWith this, we are finished with Step 3. \n\n### Step 4. Implementing Google Auth Sync Credentials to the Project\n\nThis is the last step of Part 2. We will use the Google Auth token received upon signing in with our Google Account in the previous step to authenticate to our MongoDB Realm account.\n\nWe already added dependencies for Realm in Step 3 and we created a Realm app on the back end in Step 2. Now, we initialize Realm and use the appId (Remember I asked you to make a note of the app Id? Check Step 2. 
;)) to connect back end with our mobile app.\n\nCreate a new Kotlin class that extends the application class and write the following code onto it.\n\n``` java\nval appId =\"realmsignin-abyof\" // Enter your own App Id here\nlateinit var app: App\n\nclass RealmApp: Application() {\n\n override fun onCreate() {\n super.onCreate()\n Realm.init(this)\n\n app = App(AppConfiguration.Builder(appId).build())\n\n }\n}\n```\n\nAn \"App\" is the main client-side entry point for interacting with the MongoDB Realm app and all its features, so we configure it in the application subclass for getting global access to the variable.\n\nThis is the simplest way to configure it. After configuring the \"App\", you can add authentication, manage users, open synchronized realms, and all other functionalities that MongoDB Realm offers.\n\nTo add more details when configuring, check the MongoDB Realm Java doc.\n\nDon't forget to add the RealmApp class or whatever name you chose to the manifest file.\n\n``` java\n\n ....\n ....\n \n ...\n ...\n \n\n```\n\nNow come back to the `handleSignInResult()` method call in the MainActivity, and add the following code to that method.\n\n``` java\nprivate fun handleSignInResult(account: GoogleSignInAccount?) {\n try{\n Log.d(\"MainActivity\", \"${account?.serverAuthCode}\")\n\n // Here, you get the serverAuthCode after signing in with your Google account.\n val idToken = account?.serverAuthCode\n\n // signed in successfully, forward credentials to MongoDB realm\n // In this statement, you pass the token received to ``Credentials.google()`` method to pass it to MongoDB Realm.\n val googleCredentials = Credentials.google(idToken)\n\n // Here, you login asynchronously by passing Google credentials to the method.\n app.loginAsync(googleCredentials){\n if(it.isSuccess){\n Log.d(\"MainActivity\", \"Successfully authenticated using Google OAuth\")\n\n // If successful, you navigate to another activity. This may give a red mark because you have not created SampleResult activity. Create an empty activity and name it SampleResult.\n startActivity(Intent(this, SampleResult::class.java))\n } else {\n Log.d(\"MainActivity\", \"Failed to Log in to MongoDB Realm: ${it.error.errorMessage}\")\n }\n }\n } catch(exception: ApiException){\n Log.d(\"MainActivity\", exception.printStackTrace().toString())\n }\n}\n```\n\nAdd a TextView with a Successful Login message to the SampleResult layout file.\n\nNow, when you run your app, log in with your Google account and your SampleResult Activity with Successful Login message should be shown.\n\nWhen you check the App Users section in your MongoDB Realm account, you should notice one user created.\n\n## Wrapping Up\n\nYou can get the code for this tutorial from this GitHub repo.\n\nWell done, everyone. 
We are finished with implementing Google Auth with MongoDB Realm, and I would love to know if you have any feedback for me.\u2764\n\nYou can post questions on MongoDB Community Forums or if you are struggling with any topic, please feel free to reach out.\n\nIn the next article, I talk about how to implement Realm Sync in your Android Application.\n", "format": "md", "metadata": {"tags": ["Kotlin", "Realm", "Google Cloud", "Android", "Mobile"], "pageDescription": "Getting Started with MongoDB Realm and Implementing Google Authentication in Your Android App", "contentType": "Tutorial"}, "title": "Start Implementing Google Auth With MongoDB Realm in Your Android App", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-cocoa-swiftui-combine", "action": "created", "body": "# Realm Cocoa 5.0 - Multithreading Support with Integration for SwiftUI & Combine\n\nAfter three years of work, we're proud to announce the public release of Realm Cocoa 5.0, with a ground-up rearchitecting of the core database.\n\nIn the time since we first released the Realm Mobile Database to the world in 2014, we've done our best to adapt to how people have wanted to use Realm and help our users build better apps, faster. Some of the difficulties developers ran into came down to some consequences of design decisions we made very early on, so in 2017 we began a project to rethink our core architecture. In the process, we came up with a new design that simplified our code base, improves performance, and lets us be more flexible around multi-threaded usage.\n\nIn case you missed a similar writeup for Realm Java with code examples you can find it here.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free! \n\n## Frozen Objects\n\nOne of the big new features this enables is Frozen Objects.\n\nOne of the core ideas of Realm is our concept of live, thread-confined objects that reduce the code mobile developers need to write. Objects are the data, so when the local database is updated for a particular thread, all objects are automatically updated too. This design ensures you have a consistent view of your data and makes it extremely easy to hook the local database up to the UI. But it came at a cost for developers using reactive frameworks.\n\nSometimes Live Objects don't work well with Functional Reactive Programming (FRP) where you typically want a stream of immutable objects. This means that Realm objects have to be confined to a single thread. Frozen Objects solve both of these problems by letting you obtain an immutable snapshot of an object or collection which is fully thread-safe, *without* copying it out of the realm. 
This is especially important with Apple's release of Combine and SwiftUI, which are built around many of the ideas of Reactive programming.\n\nFor example, suppose we have a nice simple list of Dogs in SwiftUI:\n\n``` Swift\nclass Dog: Object, ObjectKeyIdentifable {\n @objc dynamic var name: String = \"\"\n @objc dynamic var age: Int = 0\n}\n\nstruct DogList: View {\n @ObservedObject var dogs: RealmSwift.List\n\n var body: some View {\n List {\n ForEach(dogs) { dog in\n Text(dog.name)\n }\n }\n }\n}\n```\n\nIf you've ever tried to use Realm with SwiftUI, you can probably see a problem here: SwiftUI holds onto references to the objects passed to `ForEach()`, and if you delete an object from the list of dogs it'll crash with an index out of range error. Solving this used to involve complicated workarounds, but with Realm Cocoa 5.0 is as simple as freezing the list passed to `ForEach()`:\n\n``` swift\nstruct DogList: View {\n @ObservedObject var dogs: RealmSwift.List\n\n var body: some View {\n List {\n ForEach(dogs.freeze()) { dog in\n Text(dog.name)\n }\n }\n }\n}\n```\n\nNow let's suppose we want to make this a little more complicated, and group the dogs by their age. In addition, we want to do the grouping on a background thread to minimize the amount of work done on the main thread. Fortunately, Realm Cocoa 5.0 makes this easy:\n\n``` swift\nstruct DogGroup {\n let label: String\n let dogs: Dog]\n}\n\nfinal class DogSource: ObservableObject {\n @Published var groups: [DogGroup] = []\n\n private var cancellable: AnyCancellable?\n init() {\n cancellable = try! Realm().objects(Dog.self)\n .publisher\n .subscribe(on: DispatchQueue(label: \"background queue\"))\n .freeze()\n .map { dogs in\n Dictionary(grouping: dogs, by: { $0.age }).map { DogGroup(label: \"\\($0)\", dogs: $1) }\n }\n .receive(on: DispatchQueue.main)\n .assertNoFailure()\n .assign(to: \\.groups, on: self)\n }\n deinit {\n cancellable?.cancel()\n }\n}\n\nstruct DogList: View {\n @EnvironmentObject var dogs: DogSource\n\n var body: some View {\n List {\n ForEach(dogs.groups, id: \\.label) { group in\n Section(header: Text(group.label)) {\n ForEach(group.dogs) { dog in\n Text(dog.name)\n }\n }\n }\n }\n }\n}\n```\n\nBecause frozen objects aren't thread-confined, we can subscribe to change notifications on a background thread, transform the data to a different form, and then pass it back to the main thread without any issues.\n\n## Combine Support\n\nYou may also have noticed the `.publisher` in the code sample above. [Realm Cocoa 5.0 comes with basic built-in support for using Realm objects and collections with Combine. Collections (List, Results, LinkingObjects, and AnyRealmCollection) come with a `.publisher` property which emits the collection each time it changes, along with a `.changesetPublisher` property that emits a `RealmCollectionChange` each time the collection changes. For Realm objects, there are similar `publisher()` and `changesetPublisher()` free functions which produce the equivalent for objects.\n\nFor people who want to use live objects with Combine, we've added a `.threadSafeReference()` extension to `Publisher` which will let you safely use `receive(on:)` with thread-confined types. 
This lets you write things like the following code block to easily pass thread-confined objects or collections between threads.\n\n``` swift\npublisher(object)\n .subscribe(on: backgroundQueue)\n .map(myTransform)\n .threadSafeReference()\n .receive(on: .main)\n .sink {print(\"\\($0)\")}\n```\n\n## Queue-confined Realms\n\nAnother threading improvement coming in Realm Cocoa 5.0 is the ability to confine a realm to a serial dispatch queue rather than a thread. A common pattern in Swift is to use a dispatch queue as a lock which guards access to a variable. Historically, this has been difficult with Realm, where queues can run on any thread.\n\nFor example, suppose you're using URLSession and want to access a Realm each time you get a progress update. In previous versions of Realm you would have to open the realm each time the callback is invoked as it won't happen on the same thread each time. With Realm Cocoa 5.0 you can open a realm which is confined to that queue and can be reused:\n\n``` swift\nclass ProgressTrackingDelegate: NSObject, URLSessionDownloadDelegate {\n public let queue = DispatchQueue(label: \"background queue\")\n private var realm: Realm!\n\n override init() {\n super.init()\n queue.sync { realm = try! Realm(queue: queue) }\n }\n\n public var operationQueue: OperationQueue {\n let operationQueue = OperationQueue()\n operationQueue.underlyingQueue = queue\n return operationQueue\n }\n\n func urlSession(_ session: URLSession,\n downloadTask: URLSessionDownloadTask,\n didWriteData bytesWritten: Int64,\n totalBytesWritten: Int64,\n totalBytesExpectedToWrite: Int64) {\n guard let url = downloadTask.originalRequest?.url?.absoluteString else { return }\n try! realm.write {\n let progress = realm.object(ofType: DownloadProgress.self, forPrimaryKey: url)\n if let progress = progress {\n progress.bytesWritten = totalBytesWritten\n } else {\n realm.create(DownloadProgress.self, value: \n \"url\": url,\n \"bytesWritten\": bytesWritten\n ])\n }\n }\n }\n}\nlet delegate = ProgressTrackingDelegate()\nlet session = URLSession(configuration: URLSessionConfiguration.default,\n delegate: delegate,\n delegateQueue: delegate.operationQueue)\n```\n\nYou can also have notifications delivered to a dispatch queue rather than the current thread, including queues other than the active one. This is done by passing the queue to the observe function: `let token = object.observe(on: myQueue) { ... }`.\n\n## Performance\n\nWith [Realm Cocoa 5.0, we've greatly improved performance in a few important areas. Sorting Results is roughly twice as fast, and deleting objects from a Realm is as much as twenty times faster than in 4.x. Object insertions are 10-25% faster, with bigger gains being seen for types with primary keys.\n\nMost other operations should be similar in speed to previous versions.\n\nRealm Cocoa 5.0 should also typically produce smaller Realm files than previous versions. We've adjusted how we store large binary blobs so that they no longer result in files with a large amount of empty space, and we've reduced the size of the transaction log that's written to the file.\n\n## Compatibility\n\nRealm Cocoa 5.0 comes with a new version of the Realm file format. Any existing files that you open will be automatically upgraded to the new format, with the exception of read-only files (such as those bundled with your app). Those will need to be manually upgraded, which can be done by opening them in Realm Studio or recreating them through whatever means you originally created the file. 
The upgrade process is one-way, and realms cannot be converted back to the old file format.\n\nOnly minor API changes have been made, and we expect most applications which did not use any deprecated functions will compile and work with no changes. You may notice some changes to undocumented behavior, such as that deleting objects no longer changes the order of objects in an unsorted `Results`.\n\nPre-1.0 Realms containing `Date` or `Any` properties can no longer be opened.\n\nWant to try it out for yourself? Check out our working demo app using Frozen Objects, SwiftUI, and Combine.\n\n- Simply clone the realm-cocoa repo and open `RealmExamples.xworkspace` then select the `ListSwiftUI` app in Xcode and Build.\n\n## Wrap Up\n\nWe're very excited to finally get these features out to you and to see what new things you'll be able to build with them. Stay tuned for more exciting new features to come; the investment in the Realm Database continues.\n\n## Links\n\nWant to learn more? Review the documentation..\n\nReady to get started? Get Realm Core 6.0 and the SDKs.\n\nWant to ask a question? Head over to our MongoDB Realm Developer Community Forums.", "format": "md", "metadata": {"tags": ["Realm", "Swift", "iOS"], "pageDescription": "Public release of Realm Cocoa 5.0, with a ground-up rearchitecting of the core database", "contentType": "News & Announcements"}, "title": "Realm Cocoa 5.0 - Multithreading Support with Integration for SwiftUI & Combine", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/build-animated-timeline-chart-embedding-sdk", "action": "created", "body": "# How to Build an Animated Timeline Chart with the MongoDB Charts Embedding SDK\n\nThe Charts Embedding SDK allows you to embed data visualizations in your\napplication effortlessly, giving users and developers control over\nembedded charts. It can be a powerful tool, especially when bound to\nuser actions. My goal today is to show you the Embedding SDK in action.\n\nThis is just scratching the surface of what you can build with the SDK,\nand I hope this helps spark ideas as to its use within your\napplications. If you want to read more about the SDK, make sure to check the npm package page.\n\nReading this blog post will give you a practical example of how to build\na timeline chart in your application using the Embedding SDK.\n\n## What is a timeline chart?\n\nA timeline chart is an effective way to visualize a process or events in\nchronological order. A good example might be showing population growth\nover time, or temperature readings per second from an IOT device.\n\nAt the moment of writing this, we support 23 chart types in MongoDB\nCharts, and a timeline chart\nis not one of them. Thanks to the Charts Embedding SDK and a bit of\ncode, we can build similar behaviour on our own, and I think that's a\ngreat example of how flexible the SDK is. It allows us to\nprogrammatically change an embedded chart using filters and setting\ndifferent configurations.\n\nWe will build a timeline chart in three steps:\n\n1. Create the static chart in MongoDB Charts\n2. Embed the chart in your application\n3. Programmatically manage the chart's behaviour with the Embedding SDK\n to show the data changes over time\n\nI've done these three steps for a small example application that is\npresenting a timeline of the Olympic Games, and it shows the Olympic\nmedals per country during the whole history of the Olympics (data\nsourced from Kaggle). 
I'm using two charts \u2014 a\ngeospatial and a bar chart. They give different perspectives of how the\ndata changes over time, to see where the medals are distributed, and the\nmagnitude of wins. The slider allows the user to move through time.\n\nWatching the time lapse, you can see some insights about the data that\nyou wouldn't have noticed if that was a static chart. Here are some\nobservations:\n\n- Greece got most of the medals in the first Olympics (Athens, 1896)\n and France did the same in the second Olympics (Paris, 1900), so it\n looks like being a host boosts your performance.\n- 1924 was a very good year for most Nordic countries - we have Sweden\n at 3rd place, Norway(6th), Denmark(7th) and Finland(8th). If you\n watch Sweden closely, you will see that it was in top 5 most of the\n time.\n- Russia (which includes the former USSR in this dataset) got in top 8\n for the first time hardly in 1960 but caught up quickly and is 3rd\n in the overall statistics.\n- Australia reached top 8 in 2008 and have kept that position since.\n- The US was a leader almost the entire time of the timeline.\n\nHere is how I built it in more details:\n\n## Step 1: Create the chart in MongoDB Charts\n\nYou have to create the chart you intend to be part of the timeline you\nare building. The easiest way to do that is to use MongoDB\nAtlas with a free tier cluster.\nOnce your data is loaded into your cluster, you can activate Charts in\nyour project and start charting. If you haven't used Charts before, you\ncan check the steps to create a chart in this blog post\nhere, or\nyou can also follow the\ntutorials in our\ncomprehensive documentation.\n\nHere are the two charts I've created on my dashboard, that I will embed\nin my example application:\n\nWe have a bar chart that shows the first 8 countries ordered by the\naccumulated sum of medals they won in the history of the Olympics.\n\n:charts]{url=https://charts.mongodb.com/charts-data-science-project-aygif id=ff518bbb-923c-4c2c-91f5-4a2b3137f312 theme=light}\n\nAnd there is also a geospatial chart that shows the same data but on the\nmap.\n\n:charts[]{url=https://charts.mongodb.com/charts-data-science-project-aygif id=b1983061-ee44-40ad-9c45-4bb1d4e74884 theme=light}\n\nSo we have these two charts, and they provide a good view of the overall\ndata without any filters. It will be more impressive to see how these\nnumbers progressed for the timeline of the Olympics. For this purpose,\nI've embedded these two charts in my application, where thanks to the\nEmbedding SDK, I will programmatically control their behaviour using a\n[filter\non the data.\n\n## Step 2: Embedding the charts\n\nYou also have to allow embedding for the data and the charts. To do that\nat once, open the menu (...) on the chart and select \"Embed Chart\":\n\nSince this data is not sensitive, I've enabled unauthenticated embedding\nfor each of my two charts with this toggle shown in the image below. For\nmore sensitive data you should choose the Authenticated option to\nrestrict who can view the embedded charts.\n\nNext, you have to explicitly allow the fields that will be used in the\nfilters. You do that in the same embedding dialog that was shown above.\nFiltering an embedded chart is only allowed on fields you specify and\nthese have to be set up in advance. Even if you use unauthenticated\nembedding, you still control the security over your data, so you can\ndecide what can be filtered. 
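(For reference, the embedding itself takes only a few lines of SDK code in the app. The sketch below is a minimal, hypothetical setup using the `@mongodb-js/charts-embed-dom` package, the base URL and geospatial chart ID from the dashboard above, and a placeholder `<div>` element id.)

``` javascript
import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";

// Point the SDK at the Charts base URL shown in the embed dialog
const sdk = new ChartsEmbedSDK({
  baseUrl: "https://charts.mongodb.com/charts-data-science-project-aygif",
});

// Create a handle to the embedded geospatial chart and render it into the page
const geoChart = sdk.createChart({
  chartId: "b1983061-ee44-40ad-9c45-4bb1d4e74884",
  height: 500,
});
await geoChart.render(document.getElementById("geoChart")); // placeholder element id

// Filters may only reference fields you allowed in the embed dialog ("Year" here)
await geoChart.setFilter({ Year: { $lte: 2016 } });
```

The bar chart is created the same way with its own `chartId`, and these two chart handles are what the `setFilter` calls in Step 3 operate on.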
In my case, this is just one field - the\n\"year\" field because I'm setting filters on the different Olympic years\nand that's all I need for my demo.\n\n## Step 3: Programmatically control the charts in your app\n\nThis is the step that includes the few lines of code I mentioned above.\n\nThe example application is a small React application that has the two\nembedded charts that you saw earlier positioned side-by-side.\n\nThere is a slider on the top of the charts. This slider moves through\nthe timeline and shows the sum of medals the countries have won by the\nrelevant year. In the application, you can navigate through the years\nyourself by using the slider, however there is also a play button at the\ntop right, which presents everything in a timelapse manner. How the\nslider works is that every time it changes position, I set a filter to\nthe embedded charts using the SDK method `setFilter`. For example, if\nthe slider is at year 2016, it means there is a filter that gets all\ndata for the years starting from the beginning up until 2016.\n\n``` javascript\n// This function is creating the filter that will be executed on the data.\nconst getDataFromAllPreviousYears = (endYear) => {\n let filter = {\n $and: \n { Year: { $gte: firstOlympicsYear } },\n { Year: { $lte: endYear } },\n ],\n };\n\n return Promise.all([\n geoChart.setFilter(filter),\n barChart.setFilter(filter),\n ]);\n};\n```\n\nFor the play functionality, I'm doing the same thing - changing the\nfilter every 2 seconds using the Javascript function setInterval to\nschedule a function call that changes the filter every 2 seconds.\n\n``` javascript\n// this function schedules a filter call with the specified time interval\nconst setTimelineInterval = () => {\n if (playing) {\n play();\n timerIdRef.current = setInterval(play, timelineInterval);\n } else {\n clearInterval(timerIdRef.current);\n }\n};\n```\n\nIn the geospatial map, you can zoom to an area of interest. Europe would\nbe an excellent example as it has a lot of countries and that makes the\ngeospatial chart look more dynamic. You can also pause the\nauto-forwarding at any moment and resume or even click forwards or\nbackwards to a specific point of interest.\n\n## Conclusion\n\nThe idea of making this application was to show how the Charts Embedding\nSDK can allow you to add interactivity to your charts. Doing timeline\ncharts is not a feature of the Embedding SDK, but it perfectly\ndemonstrates that with a little bit of code, you can do different things\nwith your charts. I hope you liked the example and got an idea of how\npowerful the SDK is.\n\nThe whole code example can be seen in [this\nrepo.\nAll you need to do to run it is to clone the repo, run `npm install` and\n`npm start`. Doing this will open the browser with the timeline using my\nembedded charts so you will see a working example straight away. If you\nwish to try this using your data and charts, I've put some highlights in\nthe example code of what has to be changed.\n\nYou can jump-start your ideas by signing up for MongoDB\nCloud, deploying a free Atlas cluster, and\nactivating MongoDB Charts. Feel free to check our\ndocumentation and explore more\nembedding example\napps,\nincluding authenticated examples if you wish to control who can see your\nembedded charts.\n\nWe would also love to see how you are using the Embedding SDK. If you\nhave suggestions on how to improve anything in Charts, use the MongoDB\nFeedback Engine. 
We\nuse this feedback to help improve Charts and figure out what features to\nbuild next.\n\nHappy Charting!\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "Learn how to build an animated timeline chart with the MongoDB Charts Embedding SDK", "contentType": "Tutorial"}, "title": "How to Build an Animated Timeline Chart with the MongoDB Charts Embedding SDK", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/connectors/tuning-mongodb-kafka-connector", "action": "created", "body": "# Tuning the MongoDB Connector for Apache Kafka\n\nMongoDB Connector for Apache Kafka (MongoDB Connector) is an open-source Java application that works with Apache Kafka Connect enabling seamless data integration of MongoDB with the Apache Kafka ecosystem. When working with the MongoDB Connector, the default values cover a great variety of scenarios, but there are some scenarios that require more fine-grained tuning. In this article, we will walk through important configuration properties that affect the MongoDB Kafka Source and Sink Connectors performance, and share general recommendations.\n\n## Tuning the source connector\n\nLet\u2019s first take a look at the connector when it is configured to read data from MongoDB and write it into a Kafka topic. When you configure the connector this way, it is known as a \u201csource connector.\u201d \n\nWhen the connector is configured as a source, a change stream is opened within the MongoDB cluster based upon any configuration you specified, such as pipeline. These change stream events get read into the connector and then written out to the Kafka topic, and they resemble the following:\n\n```\n{\n _id : { },\n \"operationType\" : \"\",\n \"fullDocument\" : { },\n \"ns\" : {\n \"db\" : \"\",\n \"coll\" : \"\"\n },\n \"to\" : {\n \"db\" : \"\",\n \"coll\" : \"\"\n },\n \"documentKey\" : { \"_id\" : },\n \"updateDescription\" : {\n \"updatedFields\" : { },\n \"removedFields\" : \"\", ... ],\n \"truncatedArrays\" : [\n { \"field\" : , \"newSize\" : },\n ...\n ]\n },\n \"clusterTime\" : ,\n \"txnNumber\" : ,\n \"lsid\" : {\n \"id\" : ,\n \"uid\" : \n }\n}\n```\nThe connector configuration properties help define what data is written out to Kafka. 
For example, consider the scenario where we insert into MongoDB the following:\n\n```\nUse Stocks\ndb.StockData.insertOne({'symbol':'MDB','price':441.67,'tx_time':Date.now()})\n```\nWhen **publish.full.document.only** is set to false (the default setting), the connector writes the entire event as shown below:\n\n```\n{\"_id\": \n{\"_data\": \"826205217F000000022B022C0100296E5A1004AA1707081AA1414BB9F647FD49855EE846645F696400646205217FC26C3DE022E9488E0004\"},\n\"operationType\": \"insert\",\n\"clusterTime\":\n {\"$timestamp\": \n {\"t\": 1644503423, \"i\": 2}},\n\"fullDocument\":\n {\"_id\":\n {\"$oid\": \"6205217fc26c3de022e9488e\"},\n \"symbol\": \"MDB\",\n \"price\": 441.67,\n \"tx_time\": 1.644503423267E12},\n \"ns\":\n {\"db\": \"Stocks\", \"coll\": \"StockData\"},\n \"documentKey\":\n {\"_id\": {\"$oid\": \"6205217fc26c3de022e9488e\"}}}}\n}\n```\nWhen **publish.full.document.only** is set to true and we issue a similar statement, it looks like the following:\n\n```\nuse Stocks\ndb.StockData.insertOne({'symbol':'TSLA','price':920.00,'tx_time':Date.now()})\n```\nWe can see that the data written to the Kafka topic is just the changed document itself, which in this example, is an inserted document.\n\n```\n{\"_id\": {\"$oid\": \"620524b89d2c7fb2a606aa16\"}, \"symbol\": \"TSLA\",\n \"price\": 920,\n \"tx_time\": 1.644504248732E12}\"}\n```\n### Resume tokens\n\nAnother import concept to understand with source connectors is resume tokens. Resume tokens make it possible for the connector to fail, get restarted, and resume where it left off reading the MongoDB change stream. Resume tokens by default are stored in a Kafka topic defined by the **offset.storage.topic** parameter (configurable at the Kafka Connect Worker level for distributed environments) or in the file system in a file defined by the **offset.storage.file.filename** parameter (configurable at the Kafka Connect Worker level for standalone environments). In the event that the connector has been offline and the underlying MongoDB oplog has rolled over, you may get an error when the connector restarts. Read the [Invalid Resume Token section of the online documentation to learn more about this condition.\n\n### Configuration properties\n\nThe full set of properties for the Kafka Source Connector can be found in the documentation. The properties that should be considered with respect to performance tuning are as follows:\n\n* **batch.size**: the cursor batch size that defines how many change stream documents are retrieved on each **getMore** operation. Defaults to 1,000.\n* **poll.await.time.ms**: the amount of time to wait in milliseconds before checking for new results on the change stream. Defaults to 5,000.\n* **poll.max.batch.size**: maximum number of source records to send to Kafka at once. This setting can be used to limit the amount of data buffered internally in the Connector. Defaults to 1,000.\n* **pipeline**: an array of aggregation pipeline stages to run in your change stream. Defaults to an empty pipeline that provides no filtering.\n* **copy.existing.max.threads**: the number of threads to use when performing the data copy. Defaults to the number of processors.\n* **copy.existing.queue.size**: the max size of the queue to use when copying data. This is buffered internally by the Connector. 
Defaults to 16,000.\n\n### Recommendations\nThe following are some general recommendations and considerations when configuring the source connector:\n\n#### Scaling the source\nOne of the most common questions is how to scale the source connector. For scenarios where you have a large amount of data to be copied via **copy.existing**, keep in mind that using the source connector this way may not be the best way to move this large amount of data. Consider the process for copy.existing:\n\n* Store the latest change stream resume token.\n* Spin up a thread (up to **copy.existing.max.threads**) for each namespace that is being copied.\n* When all threads finish, the resume tokens are read, written, and caught up to current time.\n\nWhile technically, the data will eventually be copied, this process is relatively slow. And if your data size is large and your incoming data is faster than the copy process, the connector may never get into a state where new data changes are handled by the connector.\n\nFor high throughput datasets trying to be copied with copy.existing, a typical situation is overwriting the resume token stored in (1) due to high write activity. This breaks the copy.existing functionality, and it will need to be restarted, on top of dealing with the messages that were already processed to the Kafka topic. When this happens, the alternatives are:\n\n* Increase the oplog size to make sure the copy.existing phase can finish.\n* Throttle write activity in the source cluster until the copy.existing phase finishes. \n\nAnother option for handling high throughput of change data is to configure multiple source connectors. Each source connector should use a **pipeline** and capture changes from a subset of the total data. Keep in mind that each time you create a source connector pointed to the same MongoDB cluster, it creates a separate change stream. Each change stream requires resources from the MongoDB cluster, and continually adding them will decrease server performance. That said, this degradation may not become noticeable until the amount of connectors reaches the 100+ range, so breaking your collections into five to 10 connector pipelines is the best way to increase source performance. In addition, using several different source connectors on the same namespace changes the total ordering of the data on the sink versus the original order of data in the source cluster. \n\n#### Tune the change stream pipeline\nWhen building your Kafka Source Connector configuration, ensure you appropriately tune the \u201cpipeline\u201d so that only wanted events are flowing from MongoDB to Kafka Connect, which helps reduce network traffic and processing times. For a detailed pipeline example, check out the Customize a Pipeline to Filter Change Events section of the online documentation.\n\n#### Adjust to the source cluster throughput\nYour Kafka Source Connector can be watching a set of collections with a low volume of events, or the opposite, a set of collections with a very high volume of events.\n\nIn addition, you may want to tune your Kafka Source Connector to react faster to changes, reduce round trips to MongoDB or Kafka, and similar changes.\n\nWith this in mind, consider adjusting the following properties for the Kafka Source Connector:\n\n* Adjust the value of **batch.size**:\n * Higher values mean longer processing times on the source cluster but fewer round trips to it. 
It can also increase the chances of finding relevant change events when the volume of events being watched is small.\n * Lower values mean shorter processing times on the source cluster but more round trips to it. It can reduce the chances of finding relevant change events when the volume of events being watched is small.\n* Adjust the value of **poll.max.batch.size**:\n * Higher values require more memory to buffer the source records with fewer round trips to Kafka. This comes at the expense of the memory requirements and increased latency from the moment a change takes place in MongoDB to the point the Kafka message associated with that change reaches the destination topic.\n * Lower values require less memory to buffer the source records with more round trips to Kafka. It can also help reduce the latency from the moment a change takes place in MongoDB to the point the Kafka message associated with that change reaches the destination topic.\n* Adjust the value of **poll.await.time.ms**:\n * Higher values can allow source clusters with a low volume of events to have any information to be sent to Kafka at the expense of increased latency from the moment a change takes place in MongoDB to the point the Kafka message associated with that change reaches the destination topic.\n * Lower values reduce latency from the moment a change takes place in MongoDB to the point the Kafka message associated with that change reaches the destination topic. But for source clusters with a low volume of events, it can prevent them from having any information to be sent to Kafka.\n\nThis information is an overview of what to expect when changing these values, but keep in mind that they are deeply interconnected, with the volume of change events on the source cluster having an important impact too:\n\n1. The Kafka Source Connector issues getMore commands to the source cluster using **batch.size**.\n2. The Kafka Source Connector receives the results from step 1 and waits until either **poll.max.batch.size** or **poll.await.time.ms** is reached. While this doesn\u2019t happen, the Kafka Source Connector keeps \u201cfeeding\u201d itself with more getMore results.\n3. When either **poll.max.batch.size** or **poll.await.time.ms** is reached, the source records are sent to Kafka.\n\n#### \u201cCopy existing\u201d feature \n\nWhen running with the **copy.existing** property set to **true**, consider these additional properties:\n\n* **copy.existing.queue.size**: the amount of records the Kafka Source Connector buffers internally. This queue and its size include all the namespaces to be copied by the \u201cCopy Existing\u201d feature. If this queue is full, the Kafka Source Connector blocks until space becomes available.\n* **copy.existing.max.threads**: the amount of concurrent threads used for copying the different namespaces. There is a one namespace to one thread mapping, so it is common to increase this up to the maximum number of namespaces being copied. If the number exceeds the number of cores available in the system, then the performance gains can be reduced.\n* **copy.existing.allow.disk.use**: allows the copy existing aggregation to use temporary disk storage if required. 
The default is set to true but should be set to false if the user doesn't have the permissions for disk access.\n\n#### Memory implications \n\nIf you experience JVM \u201cout of memory\u201d issues on the Kafka Connect Worker process, you can try reducing the following two properties that control the amount of data buffered internally:\n\n* **poll.max.batch.size**\n* **copy.existing.queue.size**: applicable if the \u201ccopy.existing\u201d property is set to true.\n\nIt is important to note that lowering these values can result in unwanted impact. Adjusting the JVM Heap Size to your environment needs is recommended as long as you have available resources and the memory needs are not the result of memory leaks.\n\n## Tuning the sink connector\nWhen the MongoDB Connector is configured as a sink, it reads from a Kafka topic and writes to a MongoDB collection.\n\nAs with the source, there exists a mechanism to ensure offsets are stored in the event of a sink failure. Kafka connect manages this, and the information is stored in the __consumer_offsets topic. The MongoDB Connector has configuration properties that affect performance. They are as follows:\n\n* **max.batch.size**: the maximum number of sink records to batch together for processing. A higher number will result in more documents being sent as part of a single bulk command. Default value is 0.\n* **rate.limiting.every.n**: number of processed batches that trigger the rate limit. A value of 0 means no rate limiting. Default value is 0. In practice, this setting is rarely used.\n* **rate.limiting.timeout**: how long (in milliseconds) to wait before continuing to process data once the rate limit is reached. Default value is 0. This setting is rarely used.\n* **tasks.max**: the maximum number of tasks. Default value is 1.\n\n### Recommendations \n#### Add indexes to your collections for consistent performance \nWrites performed by the sink connector take additional time to complete as the size of the underlying MongoDB collection grows. To prevent performance deterioration, use an index to support these write queries.\n\n#### Achieve as much parallelism as possible \nThe Kafka Sink Connector (KSC) can take advantage of parallel execution thanks to the **tasks.max** property. The specified number of tasks will only be created if the source topic has the same number of partitions. 
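For illustration, a sink connector reading from a topic with four partitions might be registered with a configuration along these lines (a sketch with a placeholder connection URI and hypothetical topic and namespace names, not a drop-in configuration):\n\n```\n{\n  \"name\": \"mongo-sink-stockdata\",\n  \"config\": {\n    \"connector.class\": \"com.mongodb.kafka.connect.MongoSinkConnector\",\n    \"connection.uri\": \"<your MongoDB connection string>\",\n    \"topics\": \"stockdata\",\n    \"database\": \"Stocks\",\n    \"collection\": \"StockData\",\n    \"tasks.max\": \"4\",\n    \"max.batch.size\": \"500\"\n  }\n}\n```\nWith **tasks.max** set to match the partition count, each partition gets its own task, which yields the most parallelism without creating idle tasks. 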
Note: A partition should be considered as a logic group of ordered records, and the producer of the data determines what each partition contains.\nHere is the breakdown of the different combinations of number of partitions in the source topic and tasks.max values:\n\n**If working with more than one partition but one task:**\n\n* The task processes partitions one by one: Once a batch from a partition is processed, it moves on to another one so the order within each partition is still guaranteed.\n* Order among all the partitions is not guaranteed.\n\n**If working with more than one partition and an equal number of tasks:**\n\n* Each task is assigned one partition and the order is guaranteed within each partition.\n* Order among all the partitions is not guaranteed.\n\n**If working with more than one partition and a smaller number of tasks:**\n\n* The tasks that are assigned more than one partition process partitions one by one: Once a batch from a partition is processed, it moves on to another one so the order within each partition is still guaranteed.\n* Order among all the partitions is not guaranteed.\n\n**If working with more than one partition and a higher number of tasks:**\n\n* Each task is assigned one partition and the order is guaranteed within each partition.\n* KSC will not generate an excess number of tasks.\n* Order among all the partitions is not guaranteed.\n\nProcessing of partitions may not be in order, meaning that Partition B may be processed before Partition A. All messages within the partition conserve strict order.\n\nNote: When using MongoDB to write CDC data, the order of data is important since, for example, you do not want to process a delete before an update on the same data. If you specify more than one partition for CDC data, you run the risk of data being out of order on the sink collection.\n\n#### Tune the bulk operations \nThe Kafka Sink Connector (KSC) works by issuing bulk write operations. All the bulk operations that the KSC executes are, by default, ordered and as such, the order of the messages is guaranteed within a partition. See Ordered vs Unordered Operations for more information. Note: As of 1.7, **bulk.write.ordered**, if set to false, will process the bulk out of order, enabling more documents within the batch to be written in the case of a failure of a portion of the batch.\n\nThe amount of operations that are sent in a single bulk command can have a direct impact on performance. You can modify this by adjusting **max.batch.size**:\n\n* A higher number will result in more operations being sent as part of a single bulk command. This helps improve throughput at the expense of some added latency. However, a very big number might result in cache pressure on the destination cluster.\n* A small number will ease the potential cache pressure issues which might be useful for destination clusters with fewer resources. However, throughput decreases, and you might experience consumer lag on the source topics as the producer might publish messages in the topic faster than the KSC processes them.\n* This value affects processing within each of the tasks of the KSC.\n\n#### Throttle the Kafka sink connector \nIn the event that the destination MongoDB cluster is not able to handle consistent throughput, you can configure a throttling mechanism. You can do this with two properties:\n\n* **rate.limiting.every.n**: number of processed batches that should trigger the rate limit. 
A value of 0 means no rate limiting.\n* **rate.limiting.timeout**: how long (in milliseconds) to wait before continuing to process data once the rate limit is reached.\n\nThe end result is that whenever the KSC writes **rate.limiting.every.n** number of batches, it waits **rate.limiting.timeout milliseconds** before writing the next batch. This allows a destination MongoDB cluster that cannot handle consistent throughput to recover before receiving new load from the KSC.", "format": "md", "metadata": {"tags": ["Connectors", "Kafka"], "pageDescription": "When building a MongoDB and Apache Kafka solution, the default configuration values satisfy many scenarios, but there are some tweaks to increase performance. In this article, we walk through important configuration properties as well as general best practice recommendations. ", "contentType": "Tutorial"}, "title": "Tuning the MongoDB Connector for Apache Kafka", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/client-side-field-level-encryption-csfle-mongodb-node", "action": "created", "body": "# How to use MongoDB Client-Side Field Level Encryption (CSFLE) with Node.js\n\nHave you ever had to develop an application that stored sensitive data,\nlike credit card numbers or social security numbers? This is a super\ncommon use case for databases, and it can be a pain to save this data is\nsecure way. Luckily for us there are some incredible security features\nthat come packaged with MongoDB. For example, you should know that with\nMongoDB, you can take advantage of:\n\n- Network and user-based\n rules, which\n allows administrators to grant and restrict collection-level\n permissions for users.\n- Encryption of your data at\n rest,\n which encrypts the database files on disk.\n- Transport Encryption using\n TLS/SSL\n which encrypts data over the network.\n- And now, you can even have client-side encryption, known as\n client-side field level encryption\n (CSFLE).\n\nThe following diagram is a list of MongoDB security features offered and\nthe potential security vulnerabilities that they address:\n\nClient-side Field Level Encryption allows the engineers to specify the\nfields of a document that should be kept encrypted. Sensitive data is\ntransparently encrypted/decrypted by the client and only communicated to\nand from the server in encrypted form. This mechanism keeps the\nspecified data fields secure in encrypted form on both the server and\nthe network. While all clients have access to the non-sensitive data\nfields, only appropriately-configured CSFLE clients are able to read and\nwrite the sensitive data fields.\n\nIn this post, we will design a Node.js client that could be used to\nsafely store select fields as part of a medical application.\n\n## The Requirements\n\nThere are a few requirements that must be met prior to attempting to use\nClient-Side Field Level Encryption (CSFLE) with the Node.js driver.\n\n- MongoDB Atlas 4.2+ or MongoDB Server 4.2\n Enterprise\n- MongoDB Node driver 3.6.2+\n- The libmongocrypt\n library installed (macOS installation instructions below)\n- The\n mongocryptd\n binary installed (macOS installation instructions below)\n\n>\n>\n>This tutorial will focus on automatic encryption. While this tutorial\n>will use MongoDB Atlas, you're\n>going to need to be using version 4.2 or newer for MongoDB Atlas or\n>MongoDB Enterprise Edition. 
You will not be able to use automatic field\n>level encryption with MongoDB Community Edition.\n>\n>\n\nThe assumption is that you're familiar with developing Node.js\napplications that use MongoDB. If you want a refresher, take a look at\nthe quick start\nseries\nthat we published on the topic.\n\n## Installing the Libmongocrypt and Mongocryptd Binaries and Libraries\n\nBecause of the **libmongocrypt** and **mongocryptd** requirements, it's\nworth reviewing how to install and configure them. We'll be exploring\ninstallation on macOS, but refer to the documentation for\nlibmongocrypt and\nmongocryptd\nfor your particular operating system.\n\n### libmongocrypt\n\n**libmongocrypt** is required for automatic field level\nencryption,\nas it is the component that is responsible for performing the encryption\nor decryption of the data on the client with the MongoDB 4.2-compatible\nNode drivers. Now, there are currently a few solutions for installing\nthe **libmongocrypt** library on macOS. However, the easiest is with\nHomebrew. If you've got Homebrew installed, you can\ninstall **libmongocrypt** with the following command:\n\n``` bash\nbrew install mongodb/brew/libmongocrypt\n```\n\n>\n>\n>I ran into an issue with libmongocrypt when I tried to run my code,\n>because libmongocrypt was trying to statically link against\n>libmongocrypt instead of dynamically linking. I have submitted an issue\n>to the team to fix this issue, but to fix it, I had to run:\n>\n>\n\n``` bash\nexport BUILD_TYPE=dynamic\n```\n\n### mongocryptd\n\n**mongocryptd** is required for automatic field level\nencryption\nand is included as a component in the MongoDB Enterprise\nServer\npackage. **mongocryptd** is only responsible for supporting automatic\nclient-side field level encryption and does *not* perform encryption or\ndecryption.\n\nYou'll want to consult the\ndocumentation\non how to obtain the **mongocryptd** binary as each operating system has\ndifferent steps.\n\nFor macOS, you'll want to download MongoDB Enterprise Edition from the\nMongoDB Download\nCenter.\nYou can refer to the Enterprise Edition installation\ninstructions\nfor macOS to install, but the gist of the installation involves\nextracting the TAR file and moving the files to the appropriate\ndirectory.\n\nBy this point, all the appropriate components for client-side field\nlevel encryption should be installed or available. Make sure that you\nare running MongoDB enterprise on your client while using CSFLE, even if\nyou are saving your data to Atlas.\n\n## Project Setup\n\nLet's start by setting up all the files and dependencies we will need.\nIn a new directory, create the following files, running the following\ncommand:\n\n``` bash\ntouch clients.js helpers.js make-data-key.js\n```\n\nBe sure to initialize a new NPM project, since we will be using several\nNPM dependencies.\n\n``` bash\nnpm init --yes\n```\n\nAnd let's just go ahead and install all the packages that we will be\nusing now.\n\n``` bash\nnpm install -S mongodb mongodb-client-encryption node-gyp\n```\n\n>\n>\n>Note: The complete codebase for this project can be found here:\n>\n>\n>\n\n## Create a Data Key in MongoDB for Encrypting and Decrypting Document Fields\n\nMongoDB Client-Side Field Level Encryption (CSFLE) uses an encryption\nstrategy called envelope encryption in which keys used to\nencrypt/decrypt data (called data encryption keys) are encrypted with\nanother key (called the master key). 
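If you want to create that local master key yourself before running the scripts below, here is a minimal sketch using Node's built-in crypto module (the generate-master-key.js filename is just a suggestion; the script writes a random 96-byte key to the master-key.txt file that the readMasterKey helper below expects):\n\n``` javascript\n// generate-master-key.js\n// Writes a random 96-byte, locally-managed master key to master-key.txt.\n// Suitable for local development only; use a KMS for production.\nconst crypto = require(\"crypto\")\nconst fs = require(\"fs\")\n\nfs.writeFileSync(\"./master-key.txt\", crypto.randomBytes(96))\nconsole.log(\"Wrote 96-byte master key to ./master-key.txt\")\n```\n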
The following diagram shows how the\n**master key** is created and stored:\n\n>\n>\n>Warning\n>\n>The Local Key Provider is not suitable for production.\n>\n>The Local Key Provider is an insecure method of storage and is therefore\n>**not recommended** if you plan to use CSFLE in production. Instead, you\n>should configure a master key in a Key Management\n>System\n>(KMS) which stores and decrypts your data encryption keys remotely.\n>\n>To learn how to use a KMS in your CSFLE implementation, read the\n>Client-Side Field Level Encryption: Use a KMS to Store the Master\n>Key\n>guide.\n>\n>\n\n``` javascript\n// clients.js\n\nconst fs = require(\"fs\")\nconst mongodb = require(\"mongodb\")\nconst { ClientEncryption } = require(\"mongodb-client-encryption\")\nconst { MongoClient, Binary } = mongodb\n\nmodule.exports = {\nreadMasterKey: function (path = \"./master-key.txt\") {\n return fs.readFileSync(path)\n},\nCsfleHelper: class {\n constructor({\n kmsProviders = null,\n keyAltNames = \"demo-data-key\",\n keyDB = \"encryption\",\n keyColl = \"__keyVault\",\n schema = null,\n connectionString = \"mongodb://localhost:27017\",\n mongocryptdBypassSpawn = false,\n mongocryptdSpawnPath = \"mongocryptd\"\n } = {}) {\n if (kmsProviders === null) {\n throw new Error(\"kmsProviders is required\")\n }\n this.kmsProviders = kmsProviders\n this.keyAltNames = keyAltNames\n this.keyDB = keyDB\n this.keyColl = keyColl\n this.keyVaultNamespace = `${keyDB}.${keyColl}`\n this.schema = schema\n this.connectionString = connectionString\n this.mongocryptdBypassSpawn = mongocryptdBypassSpawn\n this.mongocryptdSpawnPath = mongocryptdSpawnPath\n this.regularClient = null\n this.csfleClient = null\n }\n\n /**\n * In the guide, https://docs.mongodb.com/ecosystem/use-cases/client-side-field-level-encryption-guide/,\n * we create the data key and then show that it is created by\n * retreiving it using a findOne query. Here, in implementation, we only\n * create the key if it doesn't already exist, ensuring we only have one\n * local data key.\n *\n * @param {MongoClient} client\n */\n async findOrCreateDataKey(client) {\n const encryption = new ClientEncryption(client, {\n keyVaultNamespace: this.keyVaultNamespace,\n kmsProviders: this.kmsProviders\n })\n\n await this.ensureUniqueIndexOnKeyVault(client)\n\n let dataKey = await client\n .db(this.keyDB)\n .collection(this.keyColl)\n .findOne({ keyAltNames: { $in: this.keyAltNames] } })\n\n if (dataKey === null) {\n dataKey = await encryption.createDataKey(\"local\", {\n keyAltNames: [this.keyAltNames]\n })\n return dataKey.toString(\"base64\")\n }\n\n return dataKey[\"_id\"].toString(\"base64\")\n }\n}\n```\n\nThe following script generates a 96-byte, locally-managed master key and\nsaves it to a file called master-key.txt in the directory from which the\nscript is executed, as well as saving it to our impromptu key management\nsystem in Atlas.\n\n``` javascript\n// make-data-key.js\n\nconst { readMasterKey, CsfleHelper } = require(\"./helpers\");\nconst { connectionString } = require(\"./config\");\n\nasync function main() {\nconst localMasterKey = readMasterKey()\n\nconst csfleHelper = new CsfleHelper({\n kmsProviders: {\n local: {\n key: localMasterKey\n }\n },\n connectionString: \"PASTE YOUR MONGODB ATLAS URI HERE\"\n})\n\nconst client = await csfleHelper.getRegularClient()\n\nconst dataKey = await csfleHelper.findOrCreateDataKey(client)\nconsole.log(\"Base64 data key. 
Copy and paste this into clients.js\\t\", dataKey)\n\nclient.close()\n}\n\nmain().catch(console.dir)\n```\n\nAfter saving this code, run the following to generate and save our keys.\n\n``` bash\nnode make-data-key.js\n```\n\nAnd you should get this output in the terminal. Be sure to save this\nkey, as we will be using it in our next step.\n\n![\n\nIt's also a good idea to check in to make sure that this data has been\nsaved correctly. Go to your clusters in Atlas, and navigate to your\ncollections. You should see a new key saved in the\n**encryption.\\_\\_keyVault** collection.\n\nYour key should be shaped like this:\n\n``` json\n{\n \"_id\": \"UUID('27a51d69-809f-4cb9-ae15-d63f7eab1585')\",\n \"keyAltNames\": \"demo-data-key\"],\n \"keyMaterial\": \"Binary('oJ6lEzjIEskH...', 0)\",\n \"creationDate\": \"2020-11-05T23:32:26.466+00:00\",\n \"updateDate\": \"2020-11-05T23:32:26.466+00:00\",\n \"status\": \"0\",\n \"masterKey\": {\n \"provider\": \"local\"\n }\n}\n```\n\n## Defining an Extended JSON Schema Map for Fields to be Encrypted\n\nWith the data key created, we're at a point in time where we need to\nfigure out what fields should be encrypted in a document and what fields\nshould be left as plain text. The easiest way to do this is with a\nschema map.\n\nA schema map for encryption is extended JSON and can be added directly\nto the Go source code or loaded from an external file. From a\nmaintenance perspective, loading from an external file is easier to\nmaintain.\n\nThe following table illustrates the data model of the Medical Care\nManagement System.\n\n| **Field type** | **Encryption Algorithm** | **BSON Type** |\n|--------------------------|--------------------------|-------------------------------------------|\n| Name | Non-Encrypted | String |\n| SSN | Deterministic | Int |\n| Blood Type | Random | String |\n| Medical Records | Random | Array |\n| Insurance: Policy Number | Deterministic | Int (embedded inside insurance object) |\n| Insurance: Provider | Non-Encrypted | String (embedded inside insurance object) |\n\nLet's add a function to our **csfleHelper** method in helper.js file so\nour application knows which fields need to be encrypted and decrypted.\n\n``` javascript\nif (dataKey === null) {\n throw new Error(\n \"dataKey is a required argument. Ensure you've defined it in clients.js\"\n )\n}\nreturn {\n \"medicalRecords.patients\": {\n bsonType: \"object\",\n // specify the encryptMetadata key at the root level of the JSON Schema.\n // As a result, all encrypted fields defined in the properties field of the\n // schema will inherit this encryption key unless specifically overwritten.\n encryptMetadata: {\n keyId: [new Binary(Buffer.from(dataKey, \"base64\"), 4)]\n },\n properties: {\n insurance: {\n bsonType: \"object\",\n properties: {\n // The insurance.policyNumber field is embedded inside the insurance\n // field and represents the patient's policy number.\n // This policy number is a distinct and sensitive field. \n policyNumber: {\n encrypt: {\n bsonType: \"int\",\n algorithm: \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\"\n }\n }\n }\n },\n // The medicalRecords field is an array that contains a set of medical record documents. 
\n // Each medical record document represents a separate visit and specifies information\n // about the patient at that that time, such as their blood pressure, weight, and heart rate.\n // This field is sensitive and should be encrypted.\n medicalRecords: {\n encrypt: {\n bsonType: \"array\",\n algorithm: \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\"\n }\n },\n // The bloodType field represents the patient's blood type.\n // This field is sensitive and should be encrypted. \n bloodType: {\n encrypt: {\n bsonType: \"string\",\n algorithm: \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\"\n }\n },\n // The ssn field represents the patient's \n // social security number. This field is \n // sensitive and should be encrypted.\n ssn: {\n encrypt: {\n bsonType: \"int\",\n algorithm: \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\"\n }\n }\n }\n}\n```\n\n## Create the MongoDB Client\n\nAlright, so now we have the JSON Schema and encryption keys necessary to\ncreate a CSFLE-enabled MongoDB client. Let's recap how our client will\nwork. Our CSFLE-enabled MongoDB client will query our encrypted data,\nand the **mongocryptd** process will be automatically started by\ndefault. **mongocryptd** handles the following responsibilities:\n\n- Validates the encryption instructions defined in the JSON Schema and\n flags the referenced fields for encryption in read and write\n operations.\n- Prevents unsupported operations from being executed on encrypted\n fields.\n\nTo create the CSFLE-enabled client, we need to instantiate a standard\nMongoDB client object with the additional automatic encryption settings\nwith the following **code snippet**:\n\n``` javascript\nasync getCsfleEnabledClient(schemaMap = null) {\n if (schemaMap === null) { \n throw new Error(\n \"schemaMap is a required argument. Build it using the CsfleHelper.createJsonSchemaMap method\"\n )\n }\n const client = new MongoClient(this.connectionString, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n monitorCommands: true,\n autoEncryption: {\n // The key vault collection contains the data key that the client uses to encrypt and decrypt fields.\n keyVaultNamespace: this.keyVaultNamespace,\n // The client expects a key management system to store and provide the application's master encryption key.\n // For now, we will use a local master key, so they use the local KMS provider.\n kmsProviders: this.kmsProviders,\n // The JSON Schema that we have defined doesn't explicitly specify the collection to which it applies.\n // To assign the schema, they map it to the medicalRecords.patients collection namespace\n schemaMap\n }\n })\n return await client.connect()\n}\n```\n\nIf the connection was successful, the client is returned.\n\n## Perform Encrypted Read/Write Operations\n\nWe now have a CSFLE-enabled client and we can test that the client can\nperform queries that meet our security requirements.\n\n### Insert a Document with Encrypted Fields\n\nThe following diagram shows the steps taken by the client application\nand driver to perform a write of field-level encrypted data:\n\n![Diagram that shows the data flow for a write of field-level encrypted\ndata\n\nWe need to write a function in our clients.js to create a new patient\nrecord with the following **code snippet**:\n\nNote: Clients that do not have CSFLE configured will insert unencrypted\ndata. 
We recommend using server-side schema\nvalidation to\nenforce encrypted writes for fields that should be encrypted.\n\n``` javascript\nconst { readMasterKey, CsfleHelper } = require(\"./helpers\");\nconst { connectionString, dataKey } = require(\"./config\");\n\nconst localMasterKey = readMasterKey()\n\nconst csfleHelper = new CsfleHelper({\n // The client expects a key management system to store and provide the application's master encryption key. For now, we will use a local master key, so they use the local KMS provider.\n kmsProviders: {\n local: {\n key: localMasterKey\n }\n },\n connectionString,\n})\n\nasync function main() {\nlet regularClient = await csfleHelper.getRegularClient()\nlet schemeMap = csfleHelper.createJsonSchemaMap(dataKey)\nlet csfleClient = await csfleHelper.getCsfleEnabledClient(schemeMap)\n\nlet exampleDocument = {\n name: \"Jon Doe\",\n ssn: 241014209,\n bloodType: \"AB+\",\n medicalRecords: \n {\n weight: 180,\n bloodPressure: \"120/80\"\n }\n ],\n insurance: {\n provider: \"MaestCare\",\n policyNumber: 123142\n }\n}\n\nconst regularClientPatientsColl = regularClient\n .db(\"medicalRecords\")\n .collection(\"patients\")\nconst csfleClientPatientsColl = csfleClient\n .db(\"medicalRecords\")\n .collection(\"patients\")\n\n// Performs the insert operation with the csfle-enabled client\n// We're using an update with an upsert so that subsequent runs of this script\n// don't insert new documents\nawait csfleClientPatientsColl.updateOne(\n { ssn: exampleDocument[\"ssn\"] },\n { $set: exampleDocument },\n { upsert: true }\n)\n\n// Performs a read using the encrypted client, querying on an encrypted field\nconst csfleFindResult = await csfleClientPatientsColl.findOne({\n ssn: exampleDocument[\"ssn\"]\n})\nconsole.log(\n \"Document retreived with csfle enabled client:\\n\",\n csfleFindResult\n)\n\n// Performs a read using the regular client. We must query on a field that is\n// not encrypted.\n// Try - query on the ssn field. What is returned?\nconst regularFindResult = await regularClientPatientsColl.findOne({\n name: \"Jon Doe\"\n})\nconsole.log(\"Document retreived with regular client:\\n\", regularFindResult)\n\nawait regularClient.close()\nawait csfleClient.close()\n}\n\nmain().catch(console.dir)\n```\n\n### Query for Documents on a Deterministically Encrypted Field\n\nThe following diagram shows the steps taken by the client application\nand driver to query and decrypt field-level encrypted data:\n\n![\n\nWe can run queries on documents with encrypted fields using standard\nMongoDB driver methods. When a doctor performs a query in the Medical\nCare Management System to search for a patient by their SSN, the driver\ndecrypts the patient's data before returning it:\n\n``` json\n{\n \"_id\": \"5d6ecdce70401f03b27448fc\",\n \"name\": \"Jon Doe\",\n \"ssn\": 241014209,\n \"bloodType\": \"AB+\",\n \"medicalRecords\": \n {\n \"weight\": 180,\n \"bloodPressure\": \"120/80\"\n }\n ],\n \"insurance\": {\n \"provider\": \"MaestCare\",\n \"policyNumber\": 123142\n }\n}\n```\n\nIf you attempt to query your data with a MongoDB that isn't configured\nwith the correct key, this is what you will see:\n\n![\n\nAnd you should see your data written to your MongoDB Atlas database:\n\n## Running in Docker\n\nIf you run into any issues running your code locally, I have developed a\nDocker image that you can use to help you get setup quickly or to\ntroubleshoot local configuration issues. 
You can download the code\nhere.\nMake sure you have docker configured locally before you run the code.\nYou can download Docker\nhere.\n\n1. Change directories to the Docker directory.\n\n ``` bash\n cd docker\n ```\n\n2. Build Docker image with a tag name. Within this directory, execute:\n\n ``` bash\n docker build . -t mdb-csfle-example\n ```\n\n This will build a Docker image with a tag name *mdb-csfle-example*.\n\n3. Run the Docker image by executing:\n\n ``` bash\n docker run -tih csfle mdb-csfle-example\n ```\n\n The command above will run a Docker image with tag *mdb-csfle-example* and provide it with *csfle* as its hostname.\n\n4. Once you're inside the Docker container, you can follow the below\n steps to run the NodeJS code example.\n\n ``` bash\n $ export MONGODB_URL=\"mongodb+srv://USER:PWD@EXAMPLE.mongodb.net/dbname?retryWrites=true&w=majority\"\n\n $ node ./example.js\n ```\n\n Note: If you're connecting to MongoDB Atlas, please make sure to Configure Allowlist Entries.\n\n## Summary\n\nWe wanted to develop a system that securely stores sensitive medical\nrecords for patients. We also wanted strong data access and security\nguarantees that do not rely on individual users. After researching the\navailable options, we determined that MongoDB Client-Side Field Level\nEncryption satisfies their requirements and decided to implement it in\ntheir application. To implement CSFLE, we did the following:\n\n**1. Created a Locally-Managed Master Encryption Key**\n\nA locally-managed master key allowed us to rapidly develop the client\napplication without external dependencies and avoid accidentally leaking\nsensitive production credentials.\n\n**2. Generated an Encrypted Data Key with the Master Key**\n\nCSFLE uses envelope encryption, so we generated a data key that encrypts\nand decrypts each field and then encrypted the data key using a master\nkey. This allows us to store the encrypted data key in MongoDB so that\nit is shared with all clients while preventing access to clients that\ndon't have access to the master key.\n\n**3. Created a JSON Schema**\n\nCSFLE can automatically encrypt and decrypt fields based on a provided\nJSON Schema that specifies which fields to encrypt and how to encrypt\nthem.\n\n**4. Tested and Validated Queries with the CSFLE Client**\n\nWe tested their CSFLE implementation by inserting and querying documents\nwith encrypted fields. We then validated that clients without CSFLE\nenabled could not read the encrypted data.\n\n## Move to Production\n\nIn this guide, we stored the master key in your local file system. Since\nyour data encryption keys would be readable by anyone that gains direct\naccess to your master key, we **strongly recommend** that you use a more\nsecure storage location such as a Key Management System (KMS).\n\n## Further Reading\n\nFor more information on client-side field level encryption in MongoDB,\ncheck out the reference docs in the server manual:\n\n- Client-Side Field Level\n Encryption\n- Automatic Encryption JSON Schema\n Syntax\n- Manage Client-Side Encryption Data\n Keys\n- Comparison of Security\n Features\n- For additional information on the MongoDB CSFLE API, see the\n official Node.js driver\n documentation\n- Questions? Comments? We'd love to connect with you. 
Join the\n conversation on the MongoDB Community\n Forums.\n\n", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB", "Node.js"], "pageDescription": "Learn how to encrypt document fields client-side in Node.js with MongoDB client-side field level encryption (CSFLE).", "contentType": "Tutorial"}, "title": "How to use MongoDB Client-Side Field Level Encryption (CSFLE) with Node.js", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/sizing-mongodb-with-jay-runkel", "action": "created", "body": "# Hardware Sizing for MongoDB with Jay Runkel\n\nThe process of determining the right amount of server resources for your application database is a bit like algebra. The variables are many and varied. Here are just a few:\n\n- The total amount of data stored\n - Number of collections\n - Number of documents in each collection\n - Size of each document\n- Activity against the database\n - Number and frequency of reads\n - Number and frequency of writes, updates, deletes\n- Data schema and indexes\n - Number of index entries, size of documents indexed\n- Users\n - Proximity to your database servers\n - Total number of users, the pattern of usage (see reads/writes)\n\nThese are just a few and it's a bit tricky because the answer to one of these questions may depend entirely on the answer to another, whose answer depends on yet another. Herein lies the difficulty with performing a sizing exercise.\n\nOne of the best at this part-science, part-art exercise is Jay Runkel. Jay joined us on the podcast to discuss the process and possibilities. This article contains the transcript of that episode.\n\nIf you prefer to listen, here's a link to the episode on YouTube.\n\n:youtube[]{vid=OgGLl5KZJQM}\n\n## Podcast Transcript\n\nMichael Lynn (00:00): Welcome to the podcast. On this episode, we're talking about sizing. It's a difficult task sometimes to figure out how much server you need in order to support your application, and it can cost you if you get it wrong. So we've got the experts helping us today. We're bringing in Jay Runkel. Jay Runkel is an executive solutions architect here at MongoDB. Super smart guy. He's been doing this for quite some time. He's helped hundreds of customers size their instances, maybe even thousands. So a great conversation with Jay Runkel on sizing your MongoDB instances. I hope you enjoy the episode.\n\nMichael Lynn (00:55): Jay, how are you? It's great to see you again. It's been quite a while for us. Why don't you tell the audience who you are and what you do?\n\nJay Runkel (01:02): So I am an executive solution architect at MongoDB. So MongoDB sales teams are broken up into two classes of individuals. There are the sales reps who handle the customer relationship, a lot of the business aspects of the sales. And there are solution architects who play the role of presales, and we handle a lot of the technical aspects of the sales. 
So I spend a lot of time working with customers, understanding their technical challenges and helping them understand how MongoDB can help them solve those technical challenges.\n\nMichael Lynn (01:34): That's an awesome role. I spent some time as a solution architect over the last couple of years, and even here at MongoDB, and it's just such a fantastic role. You get to help customers through their journey, to using MongoDB and solve some of their technical issues. So today we're going to focus on sizing, what it's like to size a MongoDB cluster, whether it be on prem in your own data center or in MongoDB Atlas, the database as a service. But before we get there, I'd like to learn a little bit more about what got you to this point, Jay. Where were you before MongoDB? What were you doing? And how is it that you're able to bridge the gap between something that requires the skills of a developer, but also sort of getting into that sales role?\n\nJay Runkel (02:23): Yeah, so my training and my early career experience was as a developer and I did that for about five, six years and realized that I did not want to sit in front of a desk every day. So what I did was I started looking for other roles where I could spend a lot more time with customers. And I happened to give a presentation in front of a sales VP one time about 25 years ago. And after the meeting, he said, \"Hey, I really need you to help support the sales team.\" And that kind of started my career in presales. And I've worked for a lot of different companies over the years, most recently related to MongoDB. Before MongoDB, I worked for MarkLogic where MarkLogic is another big, no SQL database. And I got most of my experience around document databases at MarkLogic, since they have an XML based document database.\n\nMichael Lynn (03:18): So obviously working with customers and helping them understand how to use MongoDB and the document model, that's pretty technical. But the sales aspect of it is almost on the opposite end of the personality spectrum. How do you find that? Do you find that challenging going between those two types of roles?\n\nJay Runkel (03:40): For me, it kind of almost all blurs together. I think in terms of this role, it's technical but sales kind of all merged together. You're either, we can do very non-technical things where you're just trying to understand a customer's business pain and helping them understand how if they went from MongoDB solution, it would address those business pain. But also you can get down into technology as well and work with the developer and understand some technical challenges they have and how MongoDB can solve that pain as well. So to me, it seems really seamless and most interactions with customers start at that high level where we're really understanding the business situation and the pain and where they want to be in the future. And generally the conversation evolves to, \"All right, now that we have this business pain, what are the technical requirements that are needed to achieve that solution to remove the pain and how MongoDB can deliver on those requirements?\"\n\nNic Raboy (04:41): So I imagine that you experience a pretty diverse set of customer requests. Like every customer is probably doing something really amazing and they need MongoDB for a very specific use case. Do you ever feel like stressed out that maybe you won't know how to help a particular customer because it's just so exotic?\n\nJay Runkel (05:03): Yes, but that's like the great thing about the job. 
The great thing about being at MongoDB is that often customers look at MongoDB because they failed with something else, either because they built an app like an Oracle or Postgres or something like that and it's not performing, or they can't roll out new functionality fast enough, or they've just looked at the requirements for this new application they want to build and realize they can't build it on traditional data platforms. So yeah, often you can get in with a customer and start talking about a use case or problem they have, and in the beginning, you can be, \"Geez, I don't know how we're ever going to solve this.\" But as you get into the conversation, you typically work and collaborate with the customer. They know their business, they know their technical infrastructure. You know MongoDB. And by combining those two sources of information, very often, not always, you can come up with a solution to solve the problem. But that's the challenge, that's what makes it fun.\n\nNic Raboy (06:07): So would I be absolutely incorrect if I said something like you are more in the role of filling the gap of what the customer is looking for, rather than trying to help them figure out what they need for their problem? It sounds like they came from maybe say an another solution that failed for them, you said. And so they maybe have a rough idea of what they want to accomplish with the database, but you need to get them to that next step versus, \"Hey, I've got this idea. How do I execute this idea?\" kind of thing.\n\nJay Runkel (06:36): Yeah, I would say some customers, it's pretty simple, pretty straightforward. Let's say we want to build the shopping cart application. There's probably hundreds or thousands of shopping cart applications built on MongoDB. It's pretty cookie cutter. That's not a long conversation. But then there are other customers that want to be able to process let's say 500,000 digital payments per second and have all of these requirements around a hundred percent availability, be able to have the application continue running without a hiccup if a whole data center goes down where you have to really dig in and understand their use case and all the requirements to a fine grain detail to figure out a solution that will work for them. In that case the DevOps role is often who we're talking to.\n\nNic Raboy (07:20): Awesome.\n\nMichael Lynn (07:21): Yeah. So before we get into the technical details of exactly how you do what you do in terms of recommending the sizing for a deployment, let's talk a little bit about the possibilities around MongoDB deployments. Some folks may be listening and thinking, \"Well, I've got this idea for an app and it's on my laptop now and I know I have to go to production at some point.\" What are the options they have for deploying MongoDB?\n\n>You can run MongoDB on your laptop, move it to a mainframe, move it to the cloud in [MongoDB Atlas, move it from one cloud provider to another within Atlas, and no modifications to your code besides the connection string.\n\nJay Runkel (07:46): So MongoDB supports just about every major platform you can consider. MongoDB realm has a database for a mobile device. MongoDB itself runs on Microsoft and MAC operating systems. It runs on IBM mainframes. It runs on a variety of flavors of Linux. You can also run MongoDB in the cloud either yourself, you can spin up a AWS instance or an Azure instance and install MongoDB and run it. 
Or we also have our cloud solution called Atlas where we will deploy and manage your MongoDB cluster for you on the cloud provider of your choice. So you essentially have that whole range and you can pick the platform and you can essentially pick who's going to manage the cluster for you.\n\nMichael Lynn (08:34): Fantastic. I mean, the options are limitless and the great thing is, the thing that you really did mention there, but it's a consistent API across all of those platforms. So you can develop and build your application, which leverages MongoDB in whatever language you're working in and not have to touch that regardless of the\ndeployment target you use. So right on your laptop, run it locally, download MongoDB server and run it on your laptop. Run it in a docker instance and then deploy to literally anywhere and not have to touch your code. Is that the case?\n\nJay Runkel (09:07): That's absolutely the case. You can run it on your laptop, move it to a mainframe, move it to the cloud in Atlas, move it from one cloud provider to another within Atlas, and no modifications to your code besides the connection string.\n\nMichael Lynn (09:20): Fantastic.\n\nNic Raboy (09:21): But when you're talking to customers, we have all of these options. How do you determine whether or not somebody should be on prem or somebody should be in Atlas or et cetera?\n\nJay Runkel (09:32): That's a great question. Now, I think from a kind of holistic perspective, everybody should be on Atlas because who wants to spend energy resources managing a database when that is something that MongoDB has streamlined, automated, ensured that it's deployed with best practices, with the highest level of security possible? So that's kind of the ideal case. I think that's where most of our customers are going towards. Now, there are certain industries and certain customers that have certain security requirements or policies that prevent them from running in a cloud provider, and those customers are the ones that still do self managed on-prem.\n\nNic Raboy (10:15): But when it comes to things that require, say the self managed on-prem, those requirements, what would they be? Like HIPAA and FERPA and all of those other security reasons? I believe Atlas supports that, right?\n\nJay Runkel (10:28): Yes. But I would say even if the regulations that will explicitly allow organizations to be in the cloud, many times they have internal policies that are additionally cautious and don't even want to take the risks, so they will just stay on prem. Other options are, if you're a company that has historically been deployed within your own data centers, if you have the new application that you're building, if it's the only thing in the cloud and all your app servers are still within your own data centers, sometimes that doesn't make a lot of sense as well.\n\nMichael Lynn (11:03): So I want to clear something up. You did mention, and your question was around compliance. And I want to just make sure it's clear. There's no reason why someone who requires compliance can't deploy in an Atlas apart from something internally, some internal compliance. I mean, we're able to manage applications that require HIPAA and FERPA and all of those compliance constraints, right?\n\nJay Runkel (11:27): Absolutely. We have financial services organizations, healthcare companies that are running their business, their core applications, within Atlas today, managing all sorts of sensitive data, PII, HIPAA data. 
So, yeah, that has been done and can be done given all of the security infrastructure provided by Atlas.\n\nNic Raboy (11:48): Awesome.\n\nMichael Lynn (11:49): Just wanted to clear that up. Go ahead, Nic.\n\nNic Raboy (11:51): I wanted to just point out as a plug here, for anyone who's listening to this particular podcast episode, we recorded a previous episode with Ken White, right Mike?\n\nMichael Lynn (12:01): Right.\n\nNic Raboy (12:01): ... on the different security practices of MongoDB, in case you want to learn more.\n\nMichael Lynn (12:06): Yeah. Great. Okay. So we're a couple of minutes in already and I'm chomping at the bit to get into the heart of the matter around sizing. But before we jump into the technical details, let's talk about what is big, what is small and kind of set the stage for the possibilities.\n\nJay Runkel (12:24): Okay. So big and small is somewhat relative, but MongoDB has customers that have a simple replica set with a few gigabytes of data to customers that manage upwards of petabytes of data in MongoDB clusters. And the number of servers there can range from three instances in a replica set that maybe have one gigabyte of RAM each to a cluster that has several hundred servers and is maybe 50 or a hundred shards, something like that.\n\nMichael Lynn (12:59): Wow. Okay. So a pretty big range. And just to clarify the glossary here, Jay's using terms like replica set. For those that are new to MongoDB, MongoDB has built in high availability and you can deploy multiple instances of MongoDB that work in unison to replicate the changes to the database and we call that a cluster or a\nreplica set. Great. So let's talk about the approach to sizing. What do you do when you're approaching a new customer or a new deployment and what do you need to think about when you start to think about how to size and implementation?\n\nJay Runkel (13:38): Okay. So before we go there, let's even kind of talk about what sizing is and what sizing means. So typically when we talk about sizing in MongoDB, we're really talking about how big of a cluster do we need to solve a customer's problem? Essentially, how much hardware do we need to devote to MongoDB so that the application will perform well? And the challenge around that is that often it's not obvious. If you're building an application, you're going to know roughly how much\ndata and roughly how the users are going to interact with the application. And somebody wants to know how many servers do you need and how much RAM do they have on them? How many cores? How big should the disks be? So it's a non-obvious, it's a pretty big gap from what you know, to what the answers you need. So what I hope I can do today is kind of walk you through how you get there.\n\nMichael Lynn (14:32): Awesome. Please do.\n\nJay Runkel (14:33): Okay. So let's talk about that. So there's a couple things that we want to get to, like we said. First of all, we want to figure out, is it a sharded cluster? Not like you already kind of defined what sharding is, essentially. It's a way of partitioning the data so that you can distribute the data across a set of servers, so that you can have more servers either managing the data or processing queries. So that's one thing. We want to figure out how many partitions, how many shards of the data we need. And then we also need to figure out what do the specifications of those servers look like? How much RAM should they have? How much CPU? How much disk? 
That type of thing.\n\nJay Runkel (15:12): So the easiest way I find to deal with this is to break this process up into two steps. The first step is just figure out the total amount of RAM we need, the total number of cores, essentially, the total amount of disk space, that type of thing. Once we have the totals, we can then figure out how many servers we need to deliver on those totals. So for example, if we do some math, which I'll explain in a little bit, and we figure out that we need 500 gigabytes of RAM, then we can figure out that we need five shards if all of our servers have a hundred gigabytes of RAM. That's pretty much kind of the steps we're going to go through. Just figure out how much RAM, how much disk, how much IO. And then figure out how many servers we need to deliver on those totals.\n\nMichael Lynn (15:55): Okay. So some basic algebra, and one of the variables is the current servers that we have. What if we don't have servers available and that's kind of an open and undefined variable?\n\nJay Runkel (16:05): Yes, so in Atlas, you have a lot of options. There's not just one. Often if we're deploying in some customer's data center, they have a standard pizza box that goes in a rack, so we know what that looks like, and we can design to that. In something like Atlas, it becomes a price optimization problem. So if we figure out that we need 500 gigabytes of RAM, like I said, we can figure out is it better to do 10 shards where each shard has 50 gigabytes of RAM? Is it cheaper basically? Or should we do five shards where each shard has a hundred gigabytes of RAM? So in Atlas it's like, you really just kind of experiment and find the price point that is the most effective.\n\nMichael Lynn (16:50): Gotcha, okay.\n\nNic Raboy (16:52): But are we only looking at a price point that is effective? I mean, maybe I missed it, but what are we gaining or losing by going with the 50 gigabyte shards versus the hundred gigabytes shards?\n\nJay Runkel (17:04): So there are some other considerations. One is backup and restore time. If you partition the data, if you shard the data more, each partition has less data. So if you think about like recovering from a disaster, it will be faster because you're going to restore a larger number of smaller servers. That tends to be faster than restoring a single stream, restoring a fewer larger servers. The other thing is, if you think about many of our customers grow over time, so they're adding shards. If you use shards of smaller machines, then every incremental step is smaller. So it's easier to right size the cluster because you can, in smaller chunks, you can add additional shards to add more capacity. Where if you have fewer larger shards, every additional shard is a much bigger step in terms of capacity, but also cost.\n\nMichael Lynn (18:04): Okay. So you mentioned sharding and we briefly touched on what that is. It's partitioning of the data. Do you always shard?\n\nJay Runkel (18:12): I would say most of our customers do not shard. I mean, a single replica set, which is one shard can typically, this is again going to depend on the workload and the server side and all that. But generally we see somewhere around one to two terabytes of data on a single replica set as kind of the upper bounds. And most of our applications, I don't know the exact percentages, but somewhere 80 - 90% of MongoDB applications are below the one terabyte range. 
So most applications, you don't even have to worry about sharding.\n\nMichael Lynn (18:47): I love it because I love rules of thumb, things that we can think about that kind of simplify the process. And what I got there was look, if you've got one terabyte of data or more under management for your cluster, you're typically going to want to start to think about sharding.\n\nJay Runkel (19:02): Think about it. And it might not be necessary, but you might want to start thinking about it. Yes.\n\nMichael Lynn (19:06): Okay, great. Now we mentioned algebra and one of the variables was the server size and the resources available. Tell me about the individual elements on the server that we look at, and then we'll transition to what the application is doing and how we overlay that.\n\nJay Runkel (19:25): Okay. So when you look at a server, there's a lot of specifications that you could potentially consider. It turns out that with MongoDB, again let's say 95% of the time, the only things you really need to worry about are how much disk space, how much RAM, and then how fast of an IO system you have, really how many IOPS you need. It turns out other things like CPU and network, while theoretically they could be bottlenecks, most of the time, they're not. Normally it's disk space, RAM, and IO. And I would say for somewhere between 98 and 99% of MongoDB applications, if you size them just looking at RAM, IOPS, and disk space, you're going to do a pretty good estimate of sizing and you'll have way more CPU, way more network than you need.\n\nMichael Lynn (20:10): All right. I'm loving it because we're progressing. So super simple rule of thumb, look at the amount of database storage required. If you've got one terabyte or more, you might want to do some more math. And then the next step would be, look at the disk space, the RAM, and the speed of the disks, or the IOPS, I/Os per second, required.\n\nJay Runkel (20:29): Yeah. So IOPS is a metric that all IO device manufacturers provide, and it's really a measurement of how fast the IO system can randomly access blocks of data. So if you think about what a database does, MongoDB or any database, when somebody issues a query, it's really going around on disk and grabbing the random blocks of data that satisfy that query. So IOPS is a really good metric for sizing IO systems for databases.\n\nMichael Lynn (21:01): Okay. Now I've heard the term working set, and this is crucial when you're talking about sizing servers, sizing the deployment for a specific application. Tell me about the working set, what it is and how you determine what it is.\n\nJay Runkel (21:14): Okay. So we said that we had to size three things: RAM, the IOPS, and the disk space. So the working set really helps us determine how much RAM we need. So the definition of working set is really the size of the indexes plus the set of frequently accessed documents used by the application. So let me kind of drill into that a little bit. If you're thinking about any database, MongoDB included, if you want good performance, you want the stuff that is frequently accessed by the database to be in memory, to be in cache. And if it's not in cache, what that means is the server has to go to the disk, which is really slow, at least in comparison to RAM. So the more of that working set, the indexes and the frequently accessed documents, fits into memory, the better performance is going to be.
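\n\n> A practical aside: once representative data is loaded, the index portion of the working set can be measured directly rather than guessed. A minimal check in the MongoDB shell, where the collection name is illustrative:\n\n``` javascript\n// Total size of all indexes on the collection, in bytes\ndb.users.totalIndexSize()\n\n// Per-index breakdown, handy for spotting unexpectedly large indexes\ndb.users.stats().indexSizes\n```\n\n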
The reason why you want the indexes in memory is that just about every query, whether it is a fine query or an update, is going to have to use the indexes to find the documents that are going to be affected. And therefore, since every query needs to use the indexes, you want them to be in cache, so that performance is good.\n\nMichael Lynn (22:30): Yeah. That makes sense. But let's double click on this a little bit. How do I go about determining what the frequently accessed documents are?\n\nJay Runkel (22:39): Oh, that's a great question. That's unfortunately, that's why there's a little bit of art to sizing, as opposed to us just shipping out a spreadsheet and saying, \"Fill it out and you get the answer.\" So the frequently accessed documents, it's really going to depend upon your knowledge of the application and how you would expect it to be used or how users are using it if it's already an application that's in production. So it's really the set of data that is accessed all the time. So I can give you some examples and maybe that'll make it clear.\n\nMichael Lynn (23:10): Yeah, perfect.\n\nJay Runkel (23:10): Let's say it's an application where customers are looking up their bills. Maybe it's a telephone company or cable company or something like that or Hulu, Netflix, what have you. Most of the time, people only care about the bills that they got this month, last month, maybe two months ago, three months ago. If you're somebody like me that used to travel a lot before COVID, maybe you get really far behind on your expense reports and you look back four or five months, but rarely ever passed that. So in that type of application, the frequently accessed documents are probably going to be the current month's bills. Those are the ones that people are looking at all the time, and the rest of the stuff doesn't need to be in cache because it's not accessed that often.\n\nNic Raboy (23:53): So what I mean, so as far as the frequently accessed, let's use the example of the most recent bills. What if your application or your demand is so high? Are you trying to accommodate all most recent bills in this frequently accessed or are you further narrowing down the subset?\n\nJay Runkel (24:13): I think the way I would look at it for that application specific, it's probably if you think about this application, let's say you've got a million customers, but maybe only a thousand are ever online at the same time, you really are just going to need the indexes plus the data for the thousand active users. If I log into the application and it takes a second or whatever to bring up that first bill, but everything else is really fast after that as I drill into the different rows in my bill or whatever, I'm happy. So that's typically what you're looking at is just for the people that are currently engaged in the system, you want their data to be in RAM.\n\nMichael Lynn (24:57): So I published an article maybe two or three years ago, and the title of the article was \"Knowing the Unknowable.\" And that's a little bit of what we're talking about here, because you're mentioning things like indexes and you're mentioning things like frequently accessed documents. So this is obviously going to require that you understand how your data is laid out. And we refer to that as a schema. You're also going to have to have a good understanding of how you're indexing, what indexes you're creating. 
So tell me Jay, to what degree does sizing inform the schema or vice versa?\n\nJay Runkel (25:32): So, one of the things that we do as part of the kind of whole MongoDB design process is make sizing as part of the design processes as you're suggesting. Because what can happen is, you can come up with a really great schema and figure out what index is you use, and then you can look at that particular design and say, \"Wow, that's going to mean I'm going to need 12 shards.\" You can think about it a little bit further, come up with a different schema and say, \"Oh, that one's only going to require two shards.\" So if you think about, now you've got to go to your boss and ask for hardware. If you need two shards, you're probably asking for six servers. If you have 12 shards, you're asking for 36 servers. I guarantee your boss is going to be much happier paying for six versus 36. So obviously it is definitely a trade off that you want to make certain. Schemas will perform better, they may be easier to develop, and they also will have different implications on the infrastructure you need.\n\nMichael Lynn (26:35): Okay. And so obviously the criticality of sizing is increased when you're talking about an on-prem deployment, because obviously to get a server into place, it's a purchase. You're waiting for it to come. You have to do networking. Now when we move to the cloud, it's somewhat reduced. And I want to talk a little bit about the flexibility that comes with a deployment in MongoDB Atlas, because we know that MongoDB Atlas starts at zero. We have a free forever instance, that's called an M0 tier and it goes all the way up to M700 with a whole lot of RAM and a whole lot of CPU. What's to stop me from saying, \"Okay, I'm not really going to concentrate on sizing and maybe I'll just deploy in an M0 and see how it goes.\"\n\n>MongoDB Atlas offers different tiers of clusters with varying amounts of RAM, CPU, and disk. These tiers are labeled starting with M0 - Free, and continuing up to M700 with massive amounts of RAM and CPU. Each tier also offers differing sizes and speeds of disks.\n\nJay Runkel (27:22): So you could, actually. That's the really fabulous thing about Atlas is you could deploy, I wouldn't start with M0, but you might start with an M10 and you could enable, there's kind of two features in Atlas. One will automatically scale up the disk size for you. So as you load more data, it will, I think as the disk gets about 90% full, it will automatically scale it up. So you could start out real small and just rely on Atlas to scale it up. And then similarly for the instance size itself, there's another feature where it will automatically scale up the instance as the workload. So as you start using more RAM and CPU, it will automatically scale the instance. So that it would be one way. And you could say, \"Geez, I can just drop from this podcast right now and just use that feature and that's great.\" But often what people want is some understanding of the budget. What should they expect to spend in Atlas? And that's where the sizing comes in useful because it gives you an idea of, \"What is my Atlas budget going to be?\"\n\nNic Raboy (28:26): I wanted to do another shameless plug here for a previous podcast episode. If you want to learn more about the auto-scaling functionality of Atlas, we actually did an episode. It's part of a series with Rez Con from MongoDB. 
So if this is something you're interested in learning more about, definitely check out that previous episode.\n\nMichael Lynn (28:44): Yeah, so auto-scaling, an incredible feature. So what I heard Jay, is that you could under deploy and you could manually ratchet up as you review the shards and look at the monitoring. Or you could implement a relatively small instance size and rely on MongoDB to auto-scale you into place.\n\nJay Runkel (29:07): Absolutely, and then if your boss comes to you and says, \"How much are we going to be spending in November on Atlas?\" You might want to go through some of this analysis we've been talking about to figure out, \"Well, what size instance do we actually need or where do I expect that list to scale us up to so that I can have some idea of what to tell my boss.\"\n\nMichael Lynn (29:27): Absolutely. That's the one end of the equation. The other end of the equation is the performance. So if you're under scaling and waiting for the auto-scale to kick in, you're most likely going to experience some pain on the user front, right?\n\nJay Runkel (29:42): So it depends. If you have a workload that is going to take big steps up. I mean, there's no way for Atlas to know that right now, you're doing 10 queries a second and on Monday you're doing a major marketing initiative and you expect your user base to grow and starting Monday afternoon instead of 10 queries a second, you're going to have a thousand queries per second. There's no way for Atlas to predict that. So if that's the case, you should manually scale up the cluster in advance of that so you don't have problems. Alternatively, though, if you just, every day you're adding a few users and over time, they're loading more and more data, so the utilization is growing at a nice, steady, linear pace, then Atlas should be able to predict, \"Hey, that trend is going to continue,\" and scale you up, and you should probably have a pretty seamless auto scale and good customer experience.\n\nMichael Lynn (30:40): So it sounds like a great safety net. You could do your, do your homework, do your sizing, make sure you're informing your decisions about the schema and vice versa, and then make a bet, but also rely on auto-scaling to select the minimum and also specify a maximum that you want to scale into.\n\nJay Runkel (30:57): Absolutely.\n\nMichael Lynn (30:58): Wow. So we've covered a lot of ground.\n\nNic Raboy (30:59): So I have some questions since you actually do interface with customers. When you're working with them to try to find a scaling solution or a sizing solution for them, do you ever come to the scenario where, you know what, the customer assumed that they're going to need all of this, but in reality, they need far less or the other way around?\n\nJay Runkel (31:19): So I think both scenarios are true. I think there are customers that are used to using relational databases and doing sizings for those. And those customers are usually positively happy when they see how much hardware they need for MongoDB. Generally, given the fact that MongoDB is a document model and uses way far fewer joints that the server requirements to satisfy the same workload for MongoDB are significantly less than a relational database. I think we also run into\ncustomers though that have really high volume workloads and maybe have unrealistic budgetary expectations as well. Maybe it's their first time ever having to deal with the problem of the scale that they're currently facing. 
So sometimes that requires some education and working with that customer.\n\nMichael Lynn (32:14): Are there tools available that customers can use to help them in this process?\n\n>...typically the index size is 10% of the data size. But if you want to get more accurate, what you can do is there are tools out there, one's called Faker...\n\nJay Runkel (32:18): So there's a couple of things. We talked about trying to figure out what our index sizes are and things like that. What if you don't, let's say you're just starting to design the application. You don't have any data. You don't know what the indexes are. It's pretty hard to kind of make these kinds of estimates. So there's a couple of things you can do. One is you can use some rule of thumbs, like typically the index size is 10% of the data size. But if you want to get more accurate, what you can do is there are tools out there, one's called Faker for Python. There's a website called Mockaroo where it enables you to just generate a dataset. You essentially provide one document and these tools or sites will generate many documents and you can load those into MongoDB. You can build your indexes. And then you can just measure how big everything is. So that's kind of some tools that give you the ability to figure out what at least the index size of the working set is going to be just by creating a dataset.\n\nMichael Lynn (33:16): Yeah. Love those tools. So to mention those again, I've used those extensively in sizing exercises. Mockaroo is a great online. It's just in a webpage and you specify the shape of the document that you want and the number of documents you want created. There's a free tier and then there's a paid tier. And then Faker is a JavaScript library I've used a whole lot to generate fake documents.\n\nJay Runkel (33:37): Yeah. I think it's also available in Python, too.\n\nMichael Lynn (33:40): Oh, great. Yeah. Terrific.\n\nNic Raboy (33:41): Yeah, this is awesome. If people have more questions regarding sizing their potential MongoDB clusters, are you active in the MongoDB community forums by chance?\n\nJay Runkel (33:56): Yes, I definitely am. Feel free to reach out to me and I'd be happy to answer any of your questions.\n\nNic Raboy (34:03): Yeah, so that's community.MongoDB.com for anyone who's never been to our forums before.\n\nMichael Lynn (34:09): Fantastic. Jay, we've covered a lot of ground in a short amount of time. I hope this was really helpful for developers. Obviously it's a topic we could talk about for a long time. We like to keep the episodes around 30 to 40 minutes. And I think we're right about at that time. Is there anything else that you'd like to share with folks listening in that want to learn about sizing?\n\nJay Runkel (34:28): So I gave a presentation on sizing in MongoDB World 2017, and that video is still available. So if you just go to MongoDB's website and search for Runkel and sizing, you'll find it. And if you want to get an even more detailed view of sizing in MongoDB, you can kind of take a look at that presentation.\n\nNic Raboy (34:52): So 2017 is quite some time ago in tech years. Is it still a valid piece of content?\n\nJay Runkel (35:00): I don't believe I mentioned the word Atlas in that presentation, but the concepts are all still valid.\n\nMichael Lynn (35:06): So we'll include a link to that presentation in the show notes. Be sure to look for that. Where can people find you on social? 
Are you active in the social space?\n\nJay Runkel (35:16): You can reach me at Twitter at @jayrunkel. I do have a Facebook account and stuff like that, but I don't really pay too much attention to it.\n\nMichael Lynn (35:25): Okay, great. Well, Jay, it's been a great conversation. Thanks so much for sharing your knowledge around sizing MongoDB. Nic, anything else before we go?\n\nNic Raboy (35:33): No, that's it. This was fantastic, Jay.\n\nJay Runkel (35:36): I really appreciate you guys having me on.\n\nMichael Lynn (35:38): Likewise. Have a great day.\n\nJay Runkel (35:40): All right. Thanks a lot.\n\nSpeaker 2 (35:43): Thanks for listening. If you enjoyed this episode, please like and subscribe. Have a question or a suggestion for the show? Visit us in the MongoDB Community Forums at https://www.mongodb.com/community/forums/.\n\n### Summary\n\nDetermining the correct amount of server resource for your databases involves an understanding of the types, amount, and read/write patterns of the data. There's no magic formula that works in every case. Thanks to Jay for helping us explore the process. Jay put together a presentation from MongoDB World 2017 that is still very applicable.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Hardware Sizing MongoDB with Jay Runkel", "contentType": "Podcast"}, "title": "Hardware Sizing for MongoDB with Jay Runkel", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/gatsby-modern-blog", "action": "created", "body": "# Build a Modern Blog with Gatsby and MongoDB\n\nThe web, like many other industries, works in a very cyclical way. Trends are constantly born and reborn. One of my favorite trends that's making a huge come back is static websites and focus on website performance. GatsbyJS presents a new way of building websites that mergers the static with the dynamic that in my opinion provides a worthwhile framework for your consideration.\n\nIn today's tutorial, we're going to take a look at how we can leverage GatsbyJS and MongoDB to build a modern blog that can be served anywhere. We'll dive into how GraphQL makes it easy to visualize and work with our content regardless of where it's coming from. Get the code from this GitHub repo to follow along.\n\n## Prerequisites\n\nFor this tutorial you'll need:\n\n- Node.js\n- npm\n- MongoDB\n\nYou can download Node.js here, and it will come with the latest version of npm. For MongoDB, you can use an existing install or MongoDB Atlas for free. The dataset we'll be working with comes from Hakan \u00d6zler, and can be found in this GitHub repo. All other required items will be covered in the article.\n\n## What We're Building: A Modern Book Review Blog\n\nThe app that we are building today is called Books Plus. It is a blog that reviews technical books.\n\n## Getting Started with GatsbyJS\n\nGatsbyJS is a React based framework for building highly performant websites and applications. The framework allows developers to utilize the modern JavaScript landscape to quickly build static websites. What makes GatsbyJS really stand out is the ecosystem built around it. Plugins for all sorts of features and functionality easily interoperate to provide a powerful toolkit for anything you want your website to do.\n\nThe second key feature of GatsbyJS is it's approach to data sources. 
While most static website generators simply process Markdown files into HTML, GatsbyJS provides a flexible mechanism for working with data from any source. In our article today, we'll utilize this functionality to show how we can have data both in Markdown files as well as in a MongoDB database, and GatsbyJS will handle it all the same.\n\n## Setting Up Our Application\n\nTo create a GatsbyJS site, we'll need to install the Gatsby CLI. In your Terminal window run `npm install -g gatsby-cli`.\n\nTo confirm that the CLI is properly installed run `gatsby -help` in your Terminal. You'll see a list of available commands such as **gatsby build** and **gatsby new**. If you see information similar to the screenshot above, you are good to go.\n\nThe next step will be to create a new GatsbyJS website. There's a couple of different ways we can do this. We can start with a barebones GatsbyJS app or a starter app that has various plugins already installed. To keep things simple we'll opt for the former. To create a new barebones GatsbyJS website run the following command:\n\n``` bash\ngatsby new booksplus\n```\n\nExecuting this command in your Terminal will create a new barebones GatsbyJS application in a directory called `booksplus`. Once the installation is complete, navigate to this new directory by running `cd booksplus` and once in this directory let's start up our local GatsbyJS development server. To do this we'll run the following command in our Terminal window.\n\n``` bash\ngatsby develop\n```\n\nThis command will take a couple of seconds to execute, but once it has, you'll be able to navigate to `localhost:8080` to see the default GatsbyJS starter page.\n\nThe default page is not very impressive, but seeing it tells us that we are on the right path. You can also click the **Go to page 2** hyperlink to see how Gatsby handles navigation.\n\n## GatsbyJS Secret Sauce: GraphQL\n\nIf you were paying attention to your Terminal window while GatsbyJS was building and starting up the development server you may have also noticed a message saying that you can navigate to `localhost:8000/___graphql` to explore your sites data and schema. Good eye! If you haven't, that's ok, let's navigate to this page as well and make sure that it loads and works correctly.\n\nGraphiQL is a powerful user interface for working with GraphQL schemas, which is what GatsbyJS generates for us when we run `gatsby develop`. All of our websites content, including pages, images, components, and so on become queryable. This API is automatically generated by Gatsby's build system, we just have to learn how to use it to our advantage.\n\nIf we look at the **Explorer** tab in the GraphiQL interface, we'll see the main queries for our API. Let's run a simple query to see what our current content looks like. The query we'll run is:\n\n``` javascript\nquery MyQuery {\n allSitePage {\n totalCount\n }\n}\n```\n\nRunning this query will return the total number of pages our website currently has which is 5.\n\nWe can add on to this query to return the path of all the pages. This query will look like the following:\n\n``` javascript\nquery MyQuery {\n allSitePage {\n totalCount\n nodes {\n path\n }\n }\n}\n```\n\nAnd the result:\n\nThe great thing about GraphQL and GraphiQL is that it's really easy to build powerful queries. You can use the explorer to see what fields you can get back. 
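\n\nFor example, we could narrow the same query down with a filter. This is just a sketch of the pattern; the `/page-2/` path comes from the default starter pages, so the exact value will depend on your site:\n\n``` javascript\nquery MyQuery {\n allSitePage(filter: { path: { eq: \"/page-2/\" } }) {\n nodes {\n path\n }\n }\n}\n```\n\n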
Covering all the ins and outs of GraphQL is out of the scope of this article, but if you are interested in learning more about GraphQL check out this crash course that will get you writing pro queries in no time.\n\nNow that we have our app set up, let's get to building our application.\n\n## Adding Content To Our Blog\n\nA blog isn't very useful without content. Our blog reviews books. So the first thing we'll do is get some books to review. New books are constantly being released, so I don't think it would be wise to try and keep track of our books within our GatsbyJS site. A database like MongoDB on the other hand makes sense. Hakan \u00d6zler has a curated list of datasets for MongoDB and one of them just happens to be a list of 400+ books. Let's use this dataset.\n\nI will import the dataset into my database that resides on MongoDB Atlas. If you don't already have MongoDB installed, you can get a free account on MongoDB Atlas.\n\nIn my MongoDB Atlas cluster, I will create a new database and call it `gatsby`. In this new database, I will create a collection called `books`. There are many different ways to import data into your MongoDB database, but since I'm using MongoDB Atlas, I'll just import it directly via the web user interface.\n\nOur sample dataset contains 431 books, so after the import we should see 431 documents in the books collection.\n\n## Connecting MongoDB and GatsbyJS\n\nNow that we have our data, let's use it in our GatsbyJS application. To use MongoDB as a data source for our app, we'll need to install the `gatsby-source-mongodb` plug in. Do so by running\n\n``` bash\nnpm install --save gatsby-source-mongodb\n```\n\nin your Terminal window. With the plugin installed, the next step will be to configure it. Open up the `gatsby-config.js` file. This file contains our site metadata as well as plugin configuration options. In the `plugins` array, let's add the `gatsby-source-mongodb` plugin. It will look something like this:\n\n``` javascript\n{\n // The name of the plugin\n resolve: 'gatsby-source-mongodb',\n options: {\n // Name of the database and collection where are books reside\n dbName: 'gatsby',\n collection: 'books',\n server: {\n address: 'main-shard-00-01-zxsxp.mongodb.net',\n port: 27017\n },\n auth: {\n user: 'ado',\n password: 'password'\n },\n extraParams: {\n replicaSet: 'Main-shard-0',\n ssl: true,\n authSource: 'admin',\n retryWrites: true\n }\n }\n},\n```\n\nSave the file. If your `dbName` and `collection` are different from the above, take note of them as the naming here is very important and will determine how you interact with the GraphQL API.\n\nIf your GatsbyJS website is still running, stop it, run `gatsby clean` and then `gatsby develop` to restart the server. The `gatsby clean` command will clear the cache and delete the previous version. In my experience, it is recommended to run this as otherwise you may run into issues with the server restarting correctly.\n\nWhen the `gatsby develop` command has successfully been re-run, navigate to the GraphiQL UI and you should see two new queries available: `mongodbGatsbyBooks` and `allMongodbGatsbyBooks`. Please note that if you named your database and collection something different, then these query names will be different. The convention they will follow though will be `mongodb` and `allMongodb`.\n\nLet's play with one of these queries and see what data we have access to. 
Execute the following query in GraphiQL:\n\n``` javascript\nquery MyQuery {\n allMongodbGatsbyBooks {\n edges {\n node {\n title\n }\n }\n }\n}\n```\n\nYour result will look something like this:\n\nExcellent. Our plugin was configured successfully and we see our collection data in our GatsbyJS website. We can add on to this query by requesting additional parameters like the authors, categories, description and so on, but rather than do that here, why don't we render it in our website.\n\n## Displaying Book Data On the Homepage\n\nWe want to display the book catalog on our homepage. Let's open up the `index.js` page located in the `src/pages` directory. This React component represent our homepage. Let's clean it up a bit before we start adding additional styles. Our new barebones component will look like this:\n\n``` javascript\nimport React from \"react\"\nimport { Link } from \"gatsby\"\n\nimport Layout from \"../components/layout\"\n\nconst IndexPage = () => (\n \n \n\n \n \n)\n\nexport default IndexPage\n```\n\nNext let's add a GraphQL query to get our books data into this page. The updated code will look like this:\n\n``` javascript\nimport React from \"react\"\nimport { Link } from \"gatsby\"\nimport { graphql } from \"gatsby\"\n\nimport Layout from \"../components/layout\"\n\nconst IndexPage = () => (\n \n \n\n \n \n)\n\nexport default IndexPage\n\nexport const pageQuery = graphql`\n query {\n allMongodbGatsbyBooks {\n edges {\n node {\n id\n title\n shortDescription\n thumbnailUrl\n }\n }\n }\n }\n`\n```\n\nWe are making a call to the `allMongodbGatsbyBooks` query and asking for all the books in the collection. For each book we want to get its id, title, shortDescription and thumbnailUrl. Finally, to get this data into our component, we'll pass it through props:\n\n``` javascript\nimport React from \"react\"\nimport { Link } from \"gatsby\"\nimport { graphql } from \"gatsby\"\n\nimport Layout from \"../components/layout\"\n\nconst IndexPage = (props) => {\n const books = props.data.allMongodbGatsbyBooks.edges;\n\n return (\n \n \n\n \n\n \n )\n}\n\nexport default IndexPage\n\nexport const pageQuery = graphql`\n query {\n allMongodbGatsbyBooks {\n edges {\n node {\n id\n title\n shortDescription\n thumbnailUrl\n }\n }\n }\n }\n`\n```\n\nNow we can render our books to the page. We'll do so by iterating over the books array and displaying all of the information we requested. The code will look like this:\n\n``` javascript\nreturn (\n \n \n {books.map(book =>\n \n \n \n \n\n{BOOK.NODE.TITLE}\n\n \n\n{book.node.shortDescription}\n\n \n \n )}\n \n \n)\n```\n\nLet's go to `localhost:8000` and see what our website looks like now. It should look something like:\n\nIf you start scrolling you'll notice that all 400+ books were rendered on the page. All this data was cached so it will load very quickly. But if we click on any of the links, we will get a 404. That's not good, but there is a good reason for it. We haven't created an individual view for the books. We'll do that shortly. The other issue you might have noticed is that we added the classes `book-container` and `book` but they don't seem to have applied any sort of styling. 
Let's fix that issue first.\n\nOpen up the `layout.css` file located in the `src/components` directory and add the following styles to the bottom of the page:\n\n``` javascript\n.book-container {\n display: flex;\n flex-direction: row;\n flex-wrap: wrap\n}\n\n.book {\n width: 25%;\n flex-grow: 1;\n text-align: center;\n}\n\n.book img {\n width: 50%;\n}\n```\n\nNext, let's simplify our UI by just displaying the cover of the book. If a user wants to learn more about it, they can click into it. Update the `index.js` return to the following:\n\n``` javascript\nconst IndexPage = (props) => {\n const books = props.data.allMongodbGatsbyBooks.edges;\n\n return (\n \n\n \n {books.map(book =>\n \n {book.node.thumbnailUrl &&\n \n \n \n }\n \n )}\n \n\n \n )\n}\n```\n\nWhile we're at it, let's change the name of our site in the header to Books Plus by editing the `gatsby-config.js` file. Update the `siteMetadata.title` property to **Books Plus**.\n\nOur updated UI will look like this:\n\n## Creating the Books Info Page\n\nAs mentioned earlier, if you click on any of the book covers you will be taken to a 404 page. GatsbyJS gives us multiple ways to tackle how we want to create this page. We can get this content dynamically, but I think pre-rendering all of this pages at build time will give our users a much better experience, so we'll do that.\n\nThe first thing we'll need to do is create the UI for what our single book view page is going to look like. Create a new file in the components directory and call it `book.js`. The code for this file will look like this:\n\n``` javascript\nimport React from \"react\"\nimport { graphql } from \"gatsby\"\nimport Layout from \"./layout\"\n\nclass Item extends React.Component {\n render() {\n const book = this.props.data.mongodbGatsbyBooks\n\n return (\n \n \n\n \n \n\n{BOOK.TITLE}\n\n \n\nBy {book.authors.map(author => ( {author}, ))}\n\n \n\n{book.longDescription}\n\n \n\nPublished: {book.publishedDate} | ISBN: {book.isbn}\n\n {book.categories.map(category => category)}\n \n\n \n )\n }\n}\n\nexport default Item\n\nexport const pageQuery = graphql`\n query($id: String!) {\n mongodbGatsbyBooks(id: { eq: $id }) {\n id\n title\n longDescription\n thumbnailUrl\n isbn\n pageCount\n publishedDate(formatString: \"MMMM DD, YYYY\")\n authors\n categories\n }\n }\n`\n```\n\nTo break down what is going on in this component, we are making use of the `mongodbGatsbyBooks` query which returns information requested on a single book based on the `id` provided. That'll do it for our component implementation. Now let's get to the fun part.\n\nEssentially what we want to happen when we start up our Gatsby server is to go and get all the book information from our MongoDB database and create a local page for each document. To do this, let's open up the `gatsby-node.js` file. Add the following code and I'll explain it below:\n\n``` javascript\nconst path = require('path')\n\nexports.createPages = async ({ graphql, actions }) => {\n const { createPage } = actions\n\n const { data } = await graphql(`\n {\n books: allMongodbGatsbyBooks {\n edges {\n node {\n id\n }\n }\n }\n }\n `)\n\n const pageTemplate = path.resolve('./src/components/book.js')\n\n for (const { node } of data.books.edges) {\n createPage({\n path: '/book/${node.id}/',\n component: pageTemplate,\n context: {\n id: node.id,\n },\n })\n }\n}\n```\n\nThe above code will do the heavy lifting of going through our list of 400+ books and creating a static page for each one. 
It does this by utilizing the Gatsby `createPages` API. We supply the pages we want, alongside the React component to use, as well as the path and context for each, and GatsbyJS does the rest. Let's save this file, run `gatsby clean` and `gatsby develop`, and navigate to `localhost:8000`.\n\nNow when the page loads, you should be able to click on any of the books and instead of seeing a 404, you'll see the details of the book rendered at the `/book/{id}` url.\n\nSo far so good!\n\n## Writing Book Reviews with Markdown\n\nWe've shown how we can use MongoDB as a data source for our books. The next step will be to allow us to write reviews on these books and to accomplish that we'll use a different data source: trusty old Markdown.\n\nIn the `src` directory, create a new directory and call it `content`. In this directory, let's create our first post called `welcome.md`. Open up the new `welcome.md` file and paste the following markdown:\n\n``` \n---\ntitle: Welcome to Books Plus\nauthor: Ado Kukic\nslug: welcome\n---\n\nWelcome to BooksPlus, your trusted source of tech book reviews!\n```\n\nSave this file. To use Markdown files as our source of content, we'll have to add another plugin. This plugin will be used to transform our `.md` files into digestible content for our GraphQL API as well as ultimately our frontend. This plugin is called `gatsby-transformer-remark` and you can install it by running `npm install --save gatsby-transformer-remark`.\n\nWe'll have to configure this plugin in our `gatsby-config.js` file. Open it up and make the following changes:\n\n``` javascript\n{\n resolve: 'gatsby-source-filesystem',\n options: {\n name: 'content',\n path: `${__dirname}/src/content/`,\n },\n},\n'gatsby-transformer-remark',\n```\n\nThe `gatsby-source-filesystem` plugin is already installed, and we'll overwrite it to just focus on our markdown files. Below it we'll add our new plugin to transform our Markdown into a format our GraphQL API can work with. While we're at it we can also remove the `image.js` and `seo.js` starter components as we will not be using them in our application.\n\nLet's restart our Gatsby server and navigate to the GraphiQL UI. We'll see two new queries added: `allMarkdownRemark` and `markdownRemark`. These queries will allow us to query our markdown content. Let's execute the following query:\n\n``` javascript\nquery MyQuery {\n allMarkdownRemark {\n edges {\n node {\n frontmatter {\n title\n author\n }\n html\n }\n }\n }\n}\n```\n\nOur result should look something like the screenshot below, and will\nlook exactly like the markdown file we created earlier.\n\n## Rendering Our Blog Content\n\nNow that we can query our markdown content, we can just as pre-generate\nthe markdown pages for our blog. Let's do that next. The first thing\nwe'll need is a template for our blog. To create it, create a new file\ncalled `blog.js` located in the `src/components` directory. My code will\nlook like this:\n\n``` javascript\nimport React from \"react\"\nimport { graphql } from \"gatsby\"\nimport Layout from \"./layout\"\n\nclass Blog extends React.Component {\n render() {\n const post = this.props.data.markdownRemark\n\n return (\n \n \n\n \n\n{POST.FRONTMATTER.TITLE}\n\n \n\nBY {POST.FRONTMATTER.AUTHOR}\n\n \n\n \n\n \n\n )\n }\n}\n\nexport default Blog\n\nexport const pageQuery = graphql`\n query($id: String!) 
{\n markdownRemark(frontmatter : {slug: { eq: $id }}) {\n frontmatter { \n title\n author\n }\n html\n }\n }\n`\n```\n\nNext we'll need to tell Gatsby to build our markdown pages at build\ntime. We'll open the `gatsby-node.js` file and make the following\nchanges:\n\n``` javascript\nconst path = require('path')\n\nexports.createPages = async ({ graphql, actions }) => {\n const { createPage } = actions\n\n const { data } = await graphql(`\n {\n books: allMongodbGatsbyBooks {\n edges {\n node {\n id\n }\n }\n }\n posts: allMarkdownRemark {\n edges {\n node {\n frontmatter {\n slug\n }\n }\n }\n }\n }\n `)\n\n const blogTemplate = path.resolve('./src/components/blog.js')\n const pageTemplate = path.resolve('./src/components/book.js')\n\n for (const { node } of data.posts.edges) {\n\n createPage({\n path: `/blog/${node.frontmatter.slug}/`,\n component: blogTemplate,\n context: {\n id: node.frontmatter.slug\n },\n })\n }\n\n for (const { node } of data.books.edges) {\n createPage({\n path: `/book/${node.id}/`,\n component: pageTemplate,\n context: {\n id: node.id,\n },\n })\n }\n}\n```\n\nThe changes we made above will not only generate a different page for\neach book, but will now generate a unique page for every markdown file.\nInstead of using a randomly generate id for the content page, we'll use\nthe user-defined slug in the frontmatter.\n\nLet's restart our Gatsby server and navigate to\n`localhost:8000/blog/welcome` to see our changes in action.\n\n## Displaying Posts on the Homepage\n\nWe want our users to be able to read our content and reviews. Currently\nyou can navigate to `/blog/welcome` to see the post, but it'd be nice to\ndisplay our latest blog posts on the homepage as well. To do this we'll,\nmake a couple of updates on our `index.js` file. We'll make the\nfollowing changes:\n\n``` javascript\nimport React from \"react\"\nimport { Link } from \"gatsby\"\nimport { graphql } from \"gatsby\"\n\nimport Layout from \"../components/layout\"\n\nconst IndexPage = (props) => {\n const books = props.data.books.edges;\n const posts = props.data.posts.edges;\n\n return (\n \n \n {posts.map(post =>\n \n \n\n{POST.NODE.FRONTMATTER.TITLE}\n\n \n\nBy {post.node.frontmatter.author}\n\n )}\n \n \n {books.map(book =>\n \n {book.node.thumbnailUrl &&\n \n \n \n }\n \n )}\n \n\n \n )\n}\n\nexport default IndexPage\n\nexport const pageQuery = graphql`\n query {\n posts: allMarkdownRemark {\n edges {\n node {\n frontmatter {\n title\n slug\n author\n }\n }\n }\n }\n books: allMongodbGatsbyBooks {\n edges {\n node {\n id\n title\n shortDescription\n thumbnailUrl\n }\n }\n }\n }\n`\n```\n\nWe've updated our GraphQL query to get us not only the list of books,\nbut also all of our posts. We named these queries `books` and `posts`\naccordingly so that it's easier to work with them in our template.\nFinally we updated the template to render the new UI. If you navigate to\n`localhost:8000` now you should see your latest post at the top like\nthis:\n\nAnd of course, you can click it to view the single blog post.\n\n## Combining Mongo and Markdown Data Sources\n\nThe final thing I would like to do in our blog today is the ability to\nreference a book from MongoDB in our review. This way when a user reads\na review, they can easily click through and see the book information.\n\nTo get started with this, we'll need to update our `gatsby-node.js` file\nto allow us to query a specific book provided in the frontmatter of a\npost. 
We'll update the `allMarkdownRemark` so that in addition to\ngetting the slug, we'll get the book parameter. The query will look like\nthis:\n\n``` javascript\nallMarkdownRemark {\n edges {\n node {\n frontmatter {\n slug\n book\n }\n }\n }\n }\n ...\n}\n```\n\nAdditionally, we'll need to update our `createPage()` method when\ngenerating the blog pages, to pass along the book information in the\ncontext.\n\n``` javascript\ncreatePage({\n path: `/blog/${node.frontmatter.slug}/`,\n component: blogTemplate,\n context: {\n id: node.frontmatter.slug,\n book: node.frontmatter.book\n },\n })\n }\n```\n\nWe'll be able to use anything passed in this `context` property in our\nGraphQL queries in our blog component.\n\nNext, we'll update our blog component to account for the new query. This\nquery will be the MongoDB based book query. It will look like so:\n\n``` javascript\nexport const pageQuery = graphql`\n query($id: String!, $book: String) {\n post: markdownRemark(frontmatter : {slug: { eq: $id }}) {\n id\n frontmatter {\n title\n author\n }\n html\n }\n book: mongodbGatsbyBooks(id: { eq: $book }) {\n id\n thumbnailUrl\n }\n }\n`\n```\n\nNotice that the `$book` parameter is optional. This means that a post\ncould be associated with a specific book, but it doesn't have to be.\nWe'll update our UI to display the book information if a book is\nprovided.\n\n``` javascript\nclass Blog extends React.Component {\n render() {\n const post = this.props.data.post\n const book = this.props.data.book\n\n return (\n \n \n\n \n\n{POST.FRONTMATTER.TITLE}\n\n \n\nBY {POST.FRONTMATTER.AUTHOR}\n\n \n\n {book && \n \n \n \n }\n \n\n \n\n )\n }\n}\n```\n\nIf we look at our original post, it doesn't have a book associated with\nit, so that specific post shouldn't look any different. But let's write\na new piece of content, that does contain a review of a specific book.\nCreate a new markdown file called `mongodb-in-action-review.md`. We'll\nadd the following review:\n\n``` javascript\n---\ntitle: MongoDB In Action Book Review\nauthor: Ado Kukic\nslug: mdb-in-action-review\nbook: 30e4050a-da76-5c08-a52c-725b4410e69b\n---\n\nMongoDB in Action is an essential read for anybody wishing to learn the ins and outs of MongoDB. Although the book has been out for quite some time, it still has a lot of valuable information and is a great start to learning MongoDB.\n```\n\nRestart your Gatsby server so that the new content can be generated. On\nyour homepage, you'll now see two blog posts, the original **Welcome**\npost as well as the new **MongoDB In Action Review** post.\n\nClicking the **MongoDB In Action Review** link will take you to a blog\npage that contains the review we wrote a few seconds ago. But now,\nyou'll also see the thumbnail of the book. Clicking this thumbnail will\nlead you to the books page where you can learn more about the book.\n\n## Putting It All Together\n\nIn this tutorial, I showed you how to build a modern blog with GatsbyJS.\nWe used multiple data sources, including a remote MongoDB\nAtlas database and local markdown\nfiles, to generate a static blog. We took a brief tour of GraphQL and\nhow it enhances our development experience by consolidating all of our\ndata sources into a single API that we can query both at build and run\ntime. I hope you learned something new, if you have any questions feel\nfree to ask in our MongoDB community\nforums.\n\n>If you want to get the code for this tutorial, you can clone it from this GitHub repo. The sample books dataset can also be found here. 
Try MongoDB Atlas to make it easy to manage and scale your MongoDB database.\n\nHappy coding!", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB"], "pageDescription": "Learn how to build a modern blog with GatsbyJS, MongoDB, and Markdown.", "contentType": "Tutorial"}, "title": "Build a Modern Blog with Gatsby and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/introduction-to-modern-databases-mongodb-academia", "action": "created", "body": "# MongoDB Academia - Introduction to Modern Databases\n\n## Introduction\n\nAs part of the MongoDB for Academia program, we are pleased to announce the publication of a new course, Introduction to Modern Databases, and related teaching materials.\n\nThe course materials are designed for the use of educators teaching MongoDB in universities, colleges, technical bootcamps, or other learning programs.\n\nIn this article, we describe why we've created this course, its structure and content, and how educators can use this material to support hands-on learning with the MongoDB Web Shell.\n\n## Table of Contents\n\n- Course Format\n- Why Create This Course?\n- Course Outline\n- Course Lessons\n- What is in a Lesson\n- Using the MongoDB Web Shell\n- What is MongoDB for Academia?\n- Course Materials and Getting Involved in the MongoDB for Academia Program\n\n## Course Format\n\nIntroduction to Modern Databases has been designed to cover the A-Z of MongoDB for educators.\n\nThe course consists of 22 lessons in slide format. Educators are welcome to teach the entire course or select individual lessons and/or slides as needed.\n\nQuiz questions with explained answers and instructions for hands-on exercises are included on slides interspersed throughout.\n\nThe hands-on activities use the browser-based MongoDB Web Shell, an environment that runs on servers hosted by MongoDB.\n\nThis means the only technical requirement for these activities is Internet access and a web browser.\n\nThe materials are freely available for non-commercial use and are licensed under Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.\n\n## Why Create This Course?\n\nWe created this course in response to requests from the educational community for support bridging the gap in teaching materials for MongoDB.\n\nWe received many requests from the academic community for teaching materials on document databases, MongoDB's features, and how to model schemas using the document model.\n\nWe hope this material will be a valuable resource for educators teaching with MongoDB in a variety of contexts, and their learners.\n\n## Course Outline\n\nThe course compares and contrasts relational and non-relational databases, outlines the architecture of MongoDB, and details how to model data in MongoDB. 
The included quizzes and hands-on exercises support active learning and retention of key concepts and skills.\n\nThis material can support a wide variety of instructional objectives, including learning best practices for querying data and structuring data models in MongoDB, and using features like transactions and aggregations.\n\n## Course Lessons\n\nThe course consists of 22 lessons across a wide range of MongoDB topics.\n\nThe lessons can be taught individually or as part of a wider selection of lessons from the course.\n\nThe lessons are as follows:\n\n- What is a modern general purpose database?\n- SQL and MQL\n- Non-relational databases\n- Querying in SQL and in MQL\n- When to use SQL and when to use MQL\n- Documents and MongoDB\n- MongoDB is a data platform\n- MongoDB architecture\n- MongoDB Atlas\n- The MongoDB Query Language (MQL)\n- Querying complex data with MQL\n- Querying data with operators and compound conditions\n- Inserting and updating data in MongoDB\n- Deleting data in MongoDB\n- The MongoDB aggregation framework\n- Querying data in MongoDB with the aggregation framework\n- Data modeling and schema design patterns\n- Sharding in MongoDB\n- Indexing in MongoDB\n- Transactions in MongoDB\n- Change streams in MongoDB\n- Drivers, connectors, and the wider ecosystem\n\n## What is in a Lesson\n\nEach lesson covers the specified topic and includes a number of quizzes designed to assess the material presented.\n\nSeveral lessons provide hands-on examples suitable for students to follow themselves or for the educator to present in a live-coding fashion to the class.\n\nThis provides a command line interface similar to the Mongo Shell but which you interact with through a standard web browser.\n\n## Using the MongoDB Web Shell\n\nThe MongoDB Web Shell is ideal for use in the hands-on exercise portions of Introduction to Modern Databases or anytime a web browser-accessible MongoDB environment is needed.\n\nThe MongoDB Web Shell provides a command line interface similar to the Mongo Shell but which you interact with through a standard web browser.\n\nLet us walk through a small exercise using the MongoDB Web Shell:\n\n- First, open another tab in your web browser and navigate to the MongoDB Web Shell.\n- Now for our exercise, let's create a collection for cow documents and insert 10 new cow documents into the collection. 
We will include a name field and a field with a varying value of 'milk'.\n\n``` javascript\nfor(c=0;c<10;c++) {\n db.cows.insertOne( { name: \"daisy\", milk: c } )\n}\n```\n\n- Let's now use the follow query in the same tab with the MongoDB Web Shell to find all the cow documents where the value for milk is greater than eight.\n\n``` javascript\ndb.cows.find( { milk: { $gt: 8 } } )\n```\n\n- The output in the MongoDB Web Shell will be similar to the following but with a different ObjectId.\n\n``` json\n{ \"_id\": ObjectId(5f2aefa8fde88235b959f0b1e), \"name\" : \"daisy\", \"milk\" : 9 }\n```\n\n- Then let's show that we can perform another CRUD operation, update, and let's change the name of the cow to 'rose' and change the value of milk to 10 for that cow.\n\n``` javascript\ndb.cows.updateOne( { milk: 9 }, { $set: { name: \"rose\" }, $inc: { milk: 1 } } )\n```\n\n- We can query on the name of the cow to see the results of the update operation.\n\n``` javascript\ndb.cows.find( { name: \"rose\" } )\n```\n\nThis example gives only a small taste of what you can do with the MongoDB Web Shell.\n\n## What is MongoDB for Academia?\n\nMongoDB for Academia is our program to support educators and students.\n\nThe program offers educational content, resources, and community for teaching and learning MongoDB, whether in colleges and universities, technical bootcamps, online learning courses, high schools, or other educational programs.\n\nFor more information on MongoDB for Academia's free resources and support for educators and students, visit the MongoDB for Academia website.\n\n## Course Materials and Getting Involved in the MongoDB for Academia Program\n\nAll of the materials for Introduction to Modern Databases can be downloaded here.\n\nIf you also want to get involved and learn more about the MongoDB Academia program, you can join the email list at and join our community forums.\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Introduction to Modern Databases, a new free course with materials and resources for educators.", "contentType": "News & Announcements"}, "title": "MongoDB Academia - Introduction to Modern Databases", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/top-4-reasons-to-use-mongodb", "action": "created", "body": "# The Top 4 Reasons Why You Should Use MongoDB\n\nWelcome (or welcome back!) to the SQL to MongoDB series. In the first post in this series, I mapped terms and concepts from SQL to MongoDB.\n\nI also introduced you to Ron. Let's take a moment and return to Ron. Ron is pretty set in his ways. For example, he loves his typewriter. It doesn't matter that computers are a bajillion times more powerful than typewriters. Until someone convinces him otherwise, he's sticking with his typewriter.\n\n \n\nMaybe you don't have a love for typewriters. But perhaps you have a love for SQL databases. You've been using them for years, you've learned how to make them work well enough for you, and you know that learning MongoDB will require you to change your mindset. Is it really worth the effort?\n\nYes!\n\nIn this post, we'll examine the top four reasons why you should use MongoDB:\n\n* Scale Cheaper\n* Query Faster\n* Pivot Easier\n* Program Faster\n\n> This article is based on a presentation I gave at MongoDB World and MongoDB.local Houston entitled \"From SQL to NoSQL: Changing Your Mindset.\"\n> \n> If you prefer videos over articles, check out the recording. 
Slides are available here.\n\n## Scale Cheaper\n\nYou can scale cheaper with MongoDB. Why?\n\nLet's begin by talking about scaling SQL databases. Typically, SQL databases scale vertically-when a database becomes too big for its server, it is migrated to a larger server.\n\nVertical scaling by migrating to larger servers\n\nA few key problems arise with vertical scaling:\n\n* Large servers tend to be more expensive than two smaller servers with the same total capacity.\n* Large servers may not be available due to cost limitations, cloud provider limitations, or technology limitations (a server the size you need may not exist).\n* Migrating to a larger server may require application downtime.\n\nWhen you use MongoDB, you have the flexibility to scale horizontally through sharding. Sharding is a method for distributing data across multiple servers. When your database exceeds the capacity of its current server, you can begin sharding and split it over two servers. As your database continues to grow, you can continue to add more servers. The advantage is that these new servers don't need to be big, expensive machines-they can be cheaper, commodity hardware. Plus, no downtime is required.\n\nHorizonal scaling by adding more commodity servers\n\n## Query Faster\n\nYour queries will typically be faster with MongoDB. Let's examine why.\n\nEven in our simple example in the previous post where we modeled Leslie's data in SQL, we saw that her information was spread across three tables. Whenever we want to query for Leslie's information, we'll need to join three tables together.\n\nIn these three small tables, the join will be very fast. However, as the tables grow and our queries become more complex, joining tables together becomes very expensive.\n\nRecall our rule of thumb when modeling data in MongoDB: *data that is accessed together should be stored together*. When you follow this rule of thumb, most queries will not require you to join any data together.\n\nContinuing with our earlier example, if we want to retrieve Leslie's information from MongoDB, we can simply query for a single document in the `Users` collection. As a result, our query will be very fast.\n\nAs our documents and collections grow larger, we don't have to worry about our queries slowing down as long as we are using indexes and continue following our rule of thumb: *data that is accessed together should be stored together*.\n\n## Pivot Easier\n\nRequirements change. Sometimes the changes are simple and require only a\nfew tweaks to the user interface. But sometimes changes go all the way\ndown to the database.\n\nIn the previous post in this series, we discovered\u2014after implementing\nour application\u2014that we needed to store information about Lauren's school.\nLet's take a look at this example a little more closely.\n\nTo add a new `school` column in our SQL database, we're going to have to\nalter the `Users` table. Executing the `Alter Table` command could take\na couple of hours depending on how much data is in the table. The\nperformance of our application could be decreased while the table is\nbeing altered, and we may need to schedule downtime for our application.\n\nNow let's examine how we can do something similar in MongoDB. When our\nrequirements change and we need to begin storing the name of a user's\nschool in a `User` document, we can simply begin doing so. 
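\n\nFor instance, here is a minimal sketch in the MongoDB shell, where the new document and the school value are purely illustrative. Any document written from now on can simply carry the extra field, with no `ALTER TABLE` statement and no downtime:\n\n``` javascript\n// New documents just include the field going forward\ndb.Users.insertOne({ _id: 2, first_name: \"Ann\", school: \"Pawnee High School\" })\n```\n\n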
We can choose\nif and when to update existing documents in the collection.\n\nIf we had implemented schema validation, we would have the option of\napplying the validation to all inserts and updates or only to inserts\nand updates to documents that already meet the schema requirements. We\nwould also have the choice of throwing an error or a warning if a\nvalidation rule is violated.\n\nWith MongoDB, you can easily change the shape of your data as your app\nevolves.\n\n## Program Faster\n\nTo be honest with you, this advantage is one of the biggest surprises to\nme. I figured that it didn't matter what you used as your backend\ndatabase\u2014the code that interacts with it would be basically the same. I\nwas wrong.\n\nMFW I realized how much easier it is to code with MongoDB.\n\nMongoDB documents map to data structures in most popular programming languages. This sounds like such a simple thing, but it makes a *humongous* difference when you're writing code.\n\nA friend encouraged me to test this out, so I did. I implemented the code to retrieve and update user profile information. My code has some simplifications in it to enable me to focus on the interactions with the database rather than the user interface. I also limited the user profile information to just contact information and hobbies.\n\nBelow is a comparison of my implementation using MySQL and MongoDB.\n\nI wrote the code in Python, but, don't worry if you're not familiar with Python, I'll walk you through it step by step. The concepts will be applicable no matter what your programming language of choice is.\n\n### Connect to the Databases\n\nLet's begin with the typical top-of-the-file stuff. We'll import what we need, connect to the database, and declare our variables. I'm going to simplify things by hardcoding the User ID of the user whose profile we will be retrieving rather than pulling it dynamically from the frontend code.\n\nMySQL\n\n``` python\nimport mysql.connector\n\n# CONNECT TO THE DB\nmydb = mysql.connector.connect(\n host=\"localhost\",\n user=\"root\",\n passwd=\"rootroot\",\n database=\"CityHall\"\n)\nmycursor = mydb.cursor(dictionary=True)\n\n# THE ID OF THE USER WHOSE PROFILE WE WILL BE RETRIEVING AND UPDATING\nuserId = 1\n```\n\nWe'll pass the dictionary=True option when we create the cursor so that each row will be returned as a dictionary.\n\nMongoDB\n\n``` python\nimport pymongo\nfrom pymongo import MongoClient\n\n# CONNECT TO THE DB\nclient = MongoClient()\nclient = pymongo.MongoClient(\"mongodb+srv://root:rootroot@mycluster.mongodb.net/test?retryWrites=true&w=majority\")\ndb = client.CityHall\n\n# THE ID OF THE USER WHOSE PROFILE WE WILL BE RETRIEVING AND UPDATING\nuserId = 1\n```\n\nSo far, the code is pretty much the same.\n\n### Get the User's Profile Information\n\nNow that we have our database connections ready, let's use them to retrieve our user profile information. We'll store the profile information in a Python Dictionary. Dictionaries are a common data structure in Python and provide an easy way to work with your data.\n\nLet's begin by implementing the code for MySQL.\n\nSince the user profile information is spread across the `Users` table and the `Hobbies` table, we'll need to join them in our query. 
We can use prepared statements to ensure our data stays safe.\n\nMySQL\n\n``` python\nsql = \"SELECT * FROM Users LEFT JOIN Hobbies ON Users.ID = Hobbies.user_id WHERE Users.id=%s\"\nvalues = (userId,)\nmycursor.execute(sql, values)\nuser = mycursor.fetchone()\n```\n\nWhen we execute the query, a result is returned for every user/hobby combination. When we call `fetchone()`, we get a dictionary like the following:\n\n``` python\n{u'city': u'Pawnee', u'first_name': u'Leslie', u'last_name': u'Yepp', u'user_id': 1, u'school': None, u'longitude': -86.5366, u'cell': u'8125552344', u'latitude': 39.1703, u'hobby': u'scrapbooking', u'ID': 10}\n```\n\nBecause we joined the `Users` and the `Hobbies` tables, we have a result for each hobby this user has. To retrieve all of the hobbies, we need to iterate over the cursor. We'll append each hobby to a new `hobbies` array and then add the `hobbies` array to our `user` dictionary.\n\nMySQL\n\n``` python\nhobbies = []\nif (user[\"hobby\"]):\n hobbies.append(user[\"hobby\"])\ndel user[\"hobby\"]\ndel user[\"ID\"]\nfor result in mycursor:\n hobbies.append(result[\"hobby\"])\nuser[\"hobbies\"] = hobbies\n```\n\nNow let's implement that same functionality for MongoDB.\n\nSince we stored all of the user profile information in the `User` document, we don't need to do any joins. We can simply retrieve a single document in our collection.\n\nHere is where the big advantage that *MongoDB documents map to data structures in most popular programming languages* comes in. I don't have to do any work to get my data into an easy-to-work-with Python Dictionary. MongoDB gives me all of the results in a Python Dictionary automatically.\n\nMongoDB\n\n``` python\nuser = db['Users'].find_one({\"_id\": userId})\n```\n\nAnd that's it\u2014we're done. What took us 12 lines for MySQL, we were able to implement in 1 line for MongoDB.\n\nOur `user` dictionaries are now pretty similar in both pieces of code.\n\nMySQL\n\n``` json\n{\n 'city': 'Pawnee', \n 'first_name': 'Leslie', \n 'last_name': 'Yepp', \n 'school': None, \n 'cell': '8125552344', \n 'latitude': 39.1703,\n 'longitude': -86.5366,\n 'hobbies': ['scrapbooking', 'eating waffles', 'working'],\n 'user_id': 1\n}\n```\n\nMongoDB\n\n``` json\n{\n 'city': 'Pawnee', \n 'first_name': 'Leslie', \n 'last_name': 'Yepp', \n 'cell': '8125552344', \n 'location': [-86.536632, 39.170344], \n 'hobbies': ['scrapbooking', 'eating waffles', 'working'],\n '_id': 1\n}\n```\n\nNow that we have retrieved the user's profile information, we'd likely send that information up the stack to the frontend UI code.\n\n### Update the User's Profile Information\n\nWhen Leslie views her profile information in our application, she may discover she needs to update her profile information. 
The frontend UI code would send that updated information in a Python dictionary to the Python files we've been writing.\n\nTo simulate Leslie updating her profile information, we'll manually update the Python dictionary ourselves for both MySQL and MongoDB.\n\nMySQL\n\n``` python\nuser.update( {\n \"city\": \"Washington, DC\",\n \"latitude\": 38.897760,\n \"longitude\": -77.036809,\n \"hobbies\": [\"scrapbooking\", \"eating waffles\", \"signing bills\"]\n } )\n```\n\nMongoDB\n\n``` python\nuser.update( {\n \"city\": \"Washington, DC\",\n \"location\": [-77.036809, 38.897760],\n \"hobbies\": [\"scrapbooking\", \"eating waffles\", \"signing bills\"]\n } )\n```\n\nNow that our `user` dictionary is updated, let's push the updated information to our databases.\n\nLet's begin with MySQL. First, we need to update the information that is stored in the `Users` table.\n\nMySQL\n\n``` python\nsql = \"UPDATE Users SET first_name=%s, last_name=%s, cell=%s, city=%s, latitude=%s, longitude=%s, school=%s WHERE (ID=%s)\"\nvalues = (user[\"first_name\"], user[\"last_name\"], user[\"cell\"], user[\"city\"], user[\"latitude\"], user[\"longitude\"], user[\"school\"], userId)\nmycursor.execute(sql, values)\nmydb.commit()\n```\n\nSecond, we need to update our hobbies. For simplicity, we'll delete any existing hobbies in the `Hobbies` table for this user and then we'll insert the new hobbies into the `Hobbies` table.\n\nMySQL\n\n``` python\nsql = \"DELETE FROM Hobbies WHERE user_id=%s\"\nvalues = (userId,)\nmycursor.execute(sql, values)\nmydb.commit()\n\nif(len(user[\"hobbies\"]) > 0):\n sql = \"INSERT INTO Hobbies (user_id, hobby) VALUES (%s, %s)\"\n values = []\n for hobby in user[\"hobbies\"]:\n values.append((userId, hobby))\n mycursor.executemany(sql,values)\n mydb.commit()\n```\n\nNow let's update the user profile information in MongoDB. Since the user's profile information is stored in a single document, we only have to do a single update. Once again we will benefit from MongoDB documents mapping to data structures in most popular programming languages. We can send our `user` Python dictionary when we call `update_one()`, which significantly simplifies our code.\n\nMongoDB\n\n``` python\nresult = db['Users'].update_one({\"_id\": userId}, {\"$set\": user})\n```\n\nWhat took us 15 lines for MySQL, we were able to implement in 1 line for\nMongoDB.\n\n### Summary of Programming Faster\n\nIn this example, we wrote 27 lines of code to interact with our data in\nMySQL and 2 lines of code to interact with our data in MongoDB. While\nfewer lines of code is not always indicative of better code, in this\ncase, we can probably agree that fewer lines of code will likely lead to\neasier maintenance and fewer bugs.\n\nThe examples above were relatively simple with small queries. Imagine\nhow much bigger the difference would be for larger, more complex\nqueries.\n\nMongoDB documents mapping to data structures in most popular programming\nlanguages can be a huge advantage in terms of time to write, debug, and\nmaintain code.\n\nThe code above was written in Python and leveraged the Python MongoDB\nDriver. 
For a complete list of all of the programming languages that\nhave MongoDB drivers, visit the [MongoDB Manual.\n\nIf you'd like to grab a copy of the code in the examples above, visit my\nGitHub repo.\n\n## Wrap Up\n\nIn this post, we discussed the top four reasons why you should use\nMongoDB:\n\n* Scale Cheaper\n* Query Faster\n* Pivot Easier\n* Program Faster\n\nBe on the lookout for the final post in this series where I'll discuss\nthe top three things you need to know as you move from SQL to MongoDB.\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Discover the top 4 reasons you should use MongoDB", "contentType": "Article"}, "title": "The Top 4 Reasons Why You Should Use MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/javascript/internet-of-toilets", "action": "created", "body": "# An Introduction to IoT (Internet of Toilets)\n\nMy favorite things in life are cats, computers, and crappy ideas, so I decided to combine all three and explore what was possible with JavaScript by creating a brand new Internet of Things (IoT) device for my feline friend at home. If you're reading this, you have probably heard about how hot internet-connected devices are, and you are probably interested in learning how to get into IoT development as a JavaScript developer. In this post, we will explore why you should consider JavaScript for your next IoT project, talk about IoT data best practices, and we will explore my latest creation, the IoT Kitty Litter Box.\n\n## IoT And JS(?!?!)\n\nOkay, so why on earth should you use JavaScript on an IoT project? You might have thought JavaScript was just for webpages. Well, it turns out that JavaScript is famously eating the world and it is now, in fact, running on lots of new and exciting devices, including most internet-enabled IoT chips! Did you know that 58% of developers that identified as IoT developers use Node.js?\n\n### The Internet *Already* Speaks JavaScript\n\nThat's a lot of IoT developers already using Node.js. Many of these developers use Node because the internet *already* speaks JavaScript. It's natural to continue building internet-connected devices using the de facto standard of the internet. Why reinvent the wheel?\n\n### Easy to Update\n\nAnother reason IoT developers use Node is it's ease in updating your code base. With other programming languages commonly used for IoT projects (C or C++), if you want to update the code, you need to physically connect to the device, and reflash the device with the most up-to-date code. However, with an IoT device running Node, all you have to do is remotely run `git pull` and `npm install`. Now that's much easier.\n\n### Node is Event-Based\n\nOne of the major innovations of Node is the event loop. The event loop enables servers running Node to handle events from the outside world (i.e. requests from clients) very quickly. Node is able to handle these events extremely efficiently and at scale.\n\nNow, consider how an IoT device in the wild is built to run. In this thought experiment, let's imagine that we are designing an IoT device for a farm that will be collecting moisture sensor data from a cornfield. Our device will be equipped with a moisture sensor that will send a signal once the moisture level in the soil has dropped below a certain level. This means that our IoT device will be responding to a moisture *event* (sounds a lot like an *event loop* ;P). 
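To make that concrete, here is a minimal sketch using Node's built-in `EventEmitter`; the `soil-dry` event name and the reading value are made up for illustration and are not from a real sensor library:\n\n``` javascript\nconst EventEmitter = require('events');\n\n// Stand-in for the moisture sensor hardware (illustrative only)\nconst moistureSensor = new EventEmitter();\n\n// Our device code simply reacts whenever the sensor fires an event\nmoistureSensor.on('soil-dry', (reading) => {\n  console.log(`Moisture dropped to ${reading}% - time to irrigate`);\n});\n\n// Simulate the hardware detecting low moisture\nmoistureSensor.emit('soil-dry', 12);\n```\n\n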
Nearly all IoT use cases are built around events like this. The fact that Node's event-based architecture nearly identically matches the event-based nature of IoT devices is a perfect fit. Having an event-based IoT architecture means that your device can save precious power when it does not need to respond to an event from the outside world.\n\n### Mature IoT Community\n\nLastly, it's important to note that there is a mature community of IoT developers actively working on IoT libraries for Node.js. My favorites are Johnny-Five and CylonJS. Let's take a look at the \"Hello World\" on IoT devices: making an LED bulb blink. Here's what it looks like when I first got my IoT \"Hello World\" code working.\n\nJust be careful that your cat doesn't try to eat your project while you are getting your Hello World app up and running.\n\n## IoT (AKA: Internet of Toilets) Kitty Litter Box\n\nThis leads me to my personal IoT project, the IoT Kitty Litter Box. For this project, I opted to use Johnny-Five. So, what the heck is this IoT Kitty Litter Box, and why would anyone want to build such a thing? Well, the goal of building an internet-connected litter box was to:\n\n- Help track my feline friend's health by passively measuring my cat's weight every time he sets foot in the litter tray.\n- Monitor my cat's bathroom patterns over time. It will make it easy to track any changes in bathroom behavior.\n- Explore IoT projects and have some fun!\n\nAlso, personally, I like the thought of building something that teeters right on the border of being completely ridiculous and kinda genius. Frankly, I'm shocked that no one has really made a consumer product like this! Here it is in all of its completed glory.\n\n### Materials and Tools\n\n- 1 x Raspberry Pi - I used a Raspberry Pi 3 Model B for this demo, but any model will do.\n- 1 x Breadboard\n- 2 x Female to male wires\n- 1 x 3D printer \\Optional\\] - The 3D printer was used for printing the case where the electronics are enclosed.\n- 1 x PLA filament \\[Optional\\] - Any color will work.\n- 1 x Solder iron and solder wire\n- 8 x M2x6 mm bolts\n- 1 x HX711 module - This module is required as a load cell amplifier and it converts the analog load cell signal to a digital signal so the Raspberry Pi can read the incoming data.\n- 4 x 50 kg load cell (x4) - They are used to measure the weight. In this project, four load cells are used and can measure a maximum weight of 200kg.\n- 1 x Magnetic door sensor - Used to detect that the litter box is opened.\n- 1 x Micro USB cable\n- 1 x Cat litter box\n\n### How Does the IoT Kitty Litter Box Work?\n\nSo how does this IoT Kitty Litter Box work? Let's take a look at the events that I needed to handle:\n\n- When the lid of the box is removed, the box enters \"maintenance mode.\" When in maintenance mode, I can remove waste or refresh the litter.\n- When the lid of the box is put back on, it leaves maintenance mode, waits one minute for the litter to settle, then it recalibrates a new base weight after being cleaned.\n- The box then waits for a cat-sized object to be added to the weight of the box. 
When this event occurs, we wait 15 seconds for the cat to settle and the box records the weight of the cat and records it in a MongoDB database.\n- When the cat leaves the box, we reset the base weight of the box, and the box waits for another bathroom or maintenance event to occur.\n\nYou can also check out this handy animation that walks through the various events that we must handle.\n\n![Animation of how the box\nworks\n\n### How to Write Code That Interacts With the Outside World\n\nFor this project, I opted to work with a Raspberry Pi 3 Model B+ since it runs a full Linux distro and it's easy to get Node running on it. The Raspberry Pi is larger than other internet-enabled chips on the market, but its ease of use makes it ideal for first-timers looking to dip into IoT projects. The other reason I picked the Raspberry Pi is the large array of GPIO pins. These pins allow you to do three things.\n\n1. Power an external sensor or chip.\n2. Read input from a sensor (i.e. read data from a light or moisture sensor).\n3. Send data from the Raspberry Pi to the outside world (i.e. turning a light on and off).\n\nI wired up the IoT Kitty Litter Box using the schema below. I want to note that I am not an electrical engineer and creating this involved lots of Googling, failing, and at least two blown circuit boards. It's okay to make mistakes, especially when you are first starting out.\n\n### Schematics\n\nWe will be using these GPIO pins in order to communicate with our sensors out in the \"real world.\"\n\n## Let's Dig Into the Code\n\nI want to start with the most simple component on the box, the magnetic switch that is triggered when the lid is removed, so we know when the box goes into \"maintenance mode.\" If you want to follow along, you can check out the complete source code here.\n\n### Magnetic Switch\n\n``` javascript\nconst { RaspiIO } = require('raspi-io');\nconst five = require('johnny-five');\n\n// Initialize a new Raspberry Pi Board\nconst board = new five.Board({\n io: new RaspiIO(),\n});\n\n// Wait for the board to initialize then start reading in input from sensors\nboard.on('ready', () => {\n // Initialize a new switch on the 16th GPIO Input pin\n const spdt = new five.Switch('GPIO16');\n\n // Wait for the open event to get triggered by the sensor\n spdt.on('open', () => {\n enterMaintenceMode();\n });\n\n // Recalibrate the box once the sensor has closed\n // Once the box has been cleaned, the box prepares for a new event\n spdt.on('close', () => {\n console.log('close');\n // When the box has been closed again\n // wait 1 min for the box to settle\n // and recalibrate a new base weight\n setTimeout(() => {\n scale.calibrate();\n }, 60000);\n });\n});\n\nboard.on('fail', error => {\n handleError(error);\n});\n```\n\nYou can see the event and asynchronous nature of IoT plays really nicely with Node's callback structure. Here's a demo of the magnetic switch component in action.\n\n### Load Cells\n\nOkay, now let's talk about my favorite component, the load cells. The load cells work basically like any bathroom scale you may have at home. The load cells are responsible for converting the pressure placed on them into a digital weight measurement I can read on the Raspberry Pi. I start by taking the base weight of the litter box. Then, I wait for the weight of something that is approximately cat-sized to be added to the base weight of the box and take the cat's weight. Once the cat leaves the box, I then recalibrate the base weight of the box. 
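Sketched out, that detection cycle might look something like the following; the threshold value and helper names are illustrative and not taken from the project's source:\n\n``` javascript\n// A rough sketch of the weight-detection cycle described above\nconst CAT_WEIGHT_THRESHOLD = 2.0; // kg above the base weight (illustrative value)\n\nlet baseWeight = 0;\nlet catInBox = false;\n\nfunction calibrate(currentWeight) {\n  baseWeight = currentWeight;\n}\n\nfunction onWeightReading(currentWeight) {\n  const delta = currentWeight - baseWeight;\n\n  if (!catInBox && delta >= CAT_WEIGHT_THRESHOLD) {\n    catInBox = true;\n    // In the real project this waits ~15 seconds for the cat to settle first\n    console.log(`Cat weight recorded: ${delta} kg`);\n  } else if (catInBox && delta < CAT_WEIGHT_THRESHOLD) {\n    catInBox = false;\n    calibrate(currentWeight); // reset the base weight once the cat leaves\n  }\n}\n```\n\n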
I also recalibrate the base weight after every time the lid is taken off in order to account for events like the box being cleaned or having more litter added to the box.\n\nIn regards to the code for reading data from the load cells, things were kind of tricky. This is because the load cells are not directly compatible with Johnny-Five. I was, however, able to find a Python library that can interact with the HX711 load cells.\n\n``` python\n#! /usr/bin/python2\n\nimport time\nimport sys\nimport RPi.GPIO as GPIO\nfrom hx711 import HX711\n\n# Note: this is a snippet from the full script - it assumes `hx` has already\n# been created and calibrated as an HX711 instance, and that `cleanAndExit()`\n# cleans up the GPIO pins before exiting.\n\n# Infinitely run a loop that checks the weight every 1/10 of a second\nwhile True:\n try:\n # Print the weight and send it to the parent Node process over stdout\n val = hx.get_weight()\n print(val)\n\n # Read the weight every 1/10 of a second\n time.sleep(0.1)\n\n except (KeyboardInterrupt, SystemExit):\n cleanAndExit()\n```\n\nIn order to use this code, I had to make use of Node's Spawn Child Process API. The child process API is responsible for spinning up the Python process on a separate thread. Here's what that looks like.\n\n``` javascript\nconst spawn = require('child_process').spawn;\n\nclass Scale {\n constructor(client) {\n // Spin up the child process when the Scale is initialized\n this.process = spawn('python', ['./hx711py/scale.py'], {\n detached: true,\n });\n }\n\n getWeight() {\n // Listen for the weight readings the Python child script writes to stdout\n this.process.stdout.on('data', data => {\n // The data is returned from the Python process as a string\n // We need to parse it to a float\n this.currWeight = parseFloat(data);\n\n // If a cat is present - do something\n if (this.isCatPresent()) {\n this.handleCatInBoxEvent();\n }\n });\n\n this.process.stderr.on('data', err => {\n handleError(String(err));\n });\n\n this.process.on('close', (code, signal) => {\n console.log(\n `child process exited with code ${code} and signal ${signal}`\n );\n });\n }\n [...]\n}\n\nmodule.exports = Scale;\n```\n\nThis was the first time I had played around with the Spawn Child Process API from Node. Personally, I was really impressed by how easy it was to use and troubleshoot. It's not the most elegant solution, but it totally works for my project and it uses some cool features of Node. Let's take a look at what the load cells look like in action. In the video below, you can see how pressure placed on the load cells is registered as a weight measurement from the Raspberry Pi.\n\nLoad Cell Demo\n\n## How to Handle IoT Data\n\nOkay, so as a software engineer at MongoDB, I would be remiss if I didn't talk about what to do with all of the data from this IoT device. For my IoT Litter Box, I am saving all of the data in a fully managed database service on MongoDB Atlas. 
Here's how I connected the litter box to the MongoDB Atlas database.\n\n``` javascript\nconst MongoClient = require('mongodb').MongoClient;\nconst uri = 'YOUR MONGODB URI HERE';\nconst client = new MongoClient(uri, { useNewUrlParser: true });\nclient.connect(err => {\n const collection = client.db('IoT').collection('toilets');\n // perform actions on the collection object\n client.close();\n});\n```\n\n### IoT Data Best Practices\n\nThere are a lot of places to store your IoT data these days, so I want to talk about what you should look for when you are evaluating data platforms.\n\n#### High Database Throughput\n\nFirst, when selecting a database for your IoT project, you need to ensure that your database is able to handle a massive amount of concurrent writes. Most IoT architectures are write-heavy, meaning that you are writing more data to your database than reading from it. Let's say that I decide to start mass manufacturing and selling my IoT Kitty Litter Boxes. Once I deploy a couple of thousand boxes in the wild, my database could potentially have a massive amount of concurrent writes if all of the cats go to the bathroom at the same time! That's going to be a lot of incoming data my database will need to handle!\n\n#### Flexible Data Schema\n\nYou should also consider a database that is able to handle a flexible schema. This is because it is common to either add or upgrade sensors on an IoT device. For example, on my litter box, I was able to easily update my schema to add the switch data when I decided to start tracking how often the box gets cleaned.\n\n#### Your Database Should Easily Handle Time Series Data\n\nLastly, you will want to select a database that natively handles time series data. Consider how your data will be used. For most IoT projects, the data will be collected, analyzed, and visualized on a graph or chart over time. For my IoT Litter Box, my database schema looks like the following.\n\n``` json\n{\n \"_id\": { \"$oid\": \"dskfjlk2j92121293901233\" },\n \"timestamp_day\": { \"$date\": { \"$numberLong\": \"1573854434214\" } },\n \"type\": \"cat_in_box\",\n \"cat\": { \"name\": \"BMO\", \"weight\": \"200\" },\n \"owner\": \"Joe Karlsson\",\n \"events\": [\n {\n \"timestamp_event\": { \"$date\": { \"$numberLong\": \"1573854435016\" } },\n \"weight\": { \"$numberDouble\": \"15.593333333\" }\n },\n {\n \"timestamp_event\": { \"$date\": { \"$numberLong\": \"1573854435824\" } },\n \"weight\": { \"$numberDouble\": \"15.132222222\" }\n },\n {\n \"timestamp_event\": { \"$date\": { \"$numberLong\": \"1573854436632\"} },\n \"type\": \"maintenance\"\n }\n ]\n}\n```\n\n## Summary\n\nAlright, let's wrap this party up. In this post, we talked about why you should consider using Node for your next IoT project: It's easy to update over a network, the internet already speaks JavaScript, there are tons of existing libraries/plugins/APIs (including CylonJS and Johnny-Five), and JavaScript is great at handling event-driven apps. We looked at a real-life Node-based IoT project, my IoT Kitty Litter Box. Then, we dug into the code base for the IoT Kitty Litter Box. We also discussed what to look for when selecting a database for IoT projects: It should be able to concurrently write data quickly, have a flexible schema, and be able to handle time series data.\n\nWhat's next? Well, if I have inspired you to get started on your own IoT project, I say, \"Go for it!\" Pick out a project, even if it's \"crappy,\" and build it. Google as you go, and make mistakes. 
I think it's the best way to learn. I hereby give you permission to make stupid stuff just for you, something to help you learn and grow as a human being and a software engineer.\n\n>When you're ready to build your own IoT device, check out MongoDB Atlas, MongoDB's fully managed database-as-a-service. Atlas is the easiest way to get started with MongoDB and has a generous, forever-free tier.\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- Bringing JavaScript to the IoT Edge - Joe Karlsson \\| Node + JS Interactive 2019.\n- IoT Kitty Litter Box Source Code.\n- Want to learn more about MongoDB? Be sure to take a class on the MongoDB University.\n- Have a question, feedback on this post, or stuck on something be sure to check out and/or open a new post on the MongoDB Community Forums.\n- Quick Start: Node.js.\n- Want to check out more cool articles about MongoDB? Be sure to check out more posts like this on the MongoDB Developer Hub.", "format": "md", "metadata": {"tags": ["JavaScript", "RaspberryPi"], "pageDescription": "Learn all about developing IoT projects using JS and MongoDB by building an smart toilet for your cat! Click here for more!", "contentType": "Code Example"}, "title": "An Introduction to IoT (Internet of Toilets)", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/push-notifications-atlas-app-services-realm-sdk", "action": "created", "body": "# Push Notifications Using Atlas App Services & iOS Realm SDK\n\nIn a serverless application, one of the important features that we must implement for the success of our application is push notifications.\n\nRealm allows us to have a complete push notification system directly in our Services App. To do this, we\u2019ll make use of several components that we\u2019ll explain here and develop in this tutorial. But first, let\u2019s describe what our application does.\n\n## Context\n\nThe application consists of a public list of books that are stored locally on the device thanks to Atlas Device Sync, so we can add/delete or update the books directly in our Atlas collection and the changes will be reflected in our application.\n\nWe can also add as favorites any book from the list, although to do so, it\u2019ll be necessary to register beforehand using an email and password. We will integrate email/password authentication in our application through the Atlas App Services authentication providers.\n\nThe books, as they belong to a collection synchronized with Atlas Device Sync, will not only be stored persistently on our device but will also be synchronized with our user. This means that we can retrieve the list of favorites on any device where we register with our credentials. Changes made to our favorites using other devices are automatically synchronized.\n\n### Firebase\n\nThe management of push notifications will be done through Firebase Cloud Messaging. In this way, we benefit from a single service to send notifications to iOS, web, and Android.\n\nThe configuration is similar to the one we would follow for any other application. The difference is that we will install the firebase-admin SDK in our Atlas App Services application.\n\n### Triggers\n\nThe logic of this application for sending push notifications will be done through triggers. To do this, we will define two use cases:\n\n1. 
A book has been added or deleted: For this, we will make use of the topics in Firebase, so when a user registers to receive this type of notification, they will receive a message every time a book is added/deleted from the general list of books.\n2. A book added to my favorites list has been modified: We will make use of the Firebase tokens for each device. We relate the token received to the user so that when a book is modified, only the user(s) that have it in their favorites list will receive the notification.\n\n### Functions\n\nEach Atlas Trigger will have a linked function that applies the logic for sending push notifications to the end devices. We will make use of the Firebase Admin SDK that we will install in our App Services App as a dependency.\n\n## Overall application logic\n\nThis application will show how we can integrate push notifications with an iOS application developed in Swift. We will discuss how we have created each part with code, diagrams, and example usage.\n\nAt the end of this tutorial, you\u2019ll find a link to a GitHub repository where you\u2019ll find both the code of the iOS application as well as the code of the App Services application.\n\nWhen we start the application for the first time, we log in using anonymous authentication, since to view the list of books, it\u2019s not necessary to register using email/password. However, an anonymous user will still be created and saved in a Users collection in Atlas.\n\nWhen we first access our application and enable push notifications, in our code, we register with Firebase. This will generate a registration token, also known as the FCMToken, that we will use later to send custom push notifications.\n\n```swift\nMessaging.messaging().token { token, error in\n if let error = error {\n print(\"Error fetching FCM registration token: \\(error)\")\n } else if let token = token {\n print(\"FCM registration token: \\(token)\")\n // Save token in user collection\n user.functions.updateFCMUserToken([AnyBSON(token), AnyBSON(\"add\")], self.onCustomDataUpdated(result:realmError:))\n }\n}\n```\n\nOnce we obtain the FCM token, we will call a function through the Realm SDK to save this token in the user document corresponding to the logged-in user. Within the document, we have defined a token field that will be composed of an array of FCM tokens.\n\nTo do this, we will make use of the Firebase SDK and the `Messaging` method, so that we are notified every time the token changes or a new token is generated. 
In our Swift code, we will use this function to insert a new FCM token for our user.\n\n```swift\nMessaging.messaging().token { token, error in\n if let error = error {\n print(\"Error fetching FCM registration token: \\(error)\")\n } else if let token = token {\n print(\"FCM registration token: \\(token)\")\n // Save token in user collection\n user.functions.updateFCMUserToken([AnyBSON(token), AnyBSON(\"add\")], self.onCustomDataUpdated(result:realmError:))\n }\n}\n```\n\nIn our App Services app, we must implement the logic of the `updateFCMUserToken` function that will store the token in our user document.\n\n#### Function code in Atlas\n\n```javascript\nexports = function(FCMToken, operation) {\n\n const db = context.services.get(\"mongodb-atlas\").db(\"product\");\n const userData = db.collection(\"customUserData\");\n \n if (operation === \"add\") {\n console.log(\"add\");\n return userData.updateOne({\"userId\": context.user.id},\n { \"$addToSet\": {\n \"FCMToken\": FCMToken\n } \n }).then((doc) => {\n return {success: `User token updated`};\n }).catch(err => {\n return {error: `User ${context.user.id} not found`};\n });\n } else if (operation === \"remove\") {\n console.log(\"remove\"); \n } \n};\n```\n\nWe have decided to save an array of tokens to be able to send a notification to each device that the same user has used to access the application.\n\nThe following is an example of a User document in the collection:\n\n```JSON\n{\n \"_id\": {\n \"$oid\": \"626c213ece7819d62ebbfb99\"\n },\n \"color\": \"#1AA7ECFF\",\n \"fullImage\": false,\n \"userId\": \"6268246e3e0b17265d085866\",\n \"bookNotification\": true,\n \"FCMToken\": [\n \"as3SiBN0kBol1ITGdBqGS:APA91bERtZt-O-jEg6jMMCjPCfYdo1wmP9RbeATAXIQKQ3rfOqj1HFmETvdqm2MJHOhx2ZXqGJydtMWjHkaAD20A8OtqYWU3oiSg17vX_gh-19b85lP9S8bvd2TRsV3DqHnJP8k-t2WV\",\n \"e_Os41X5ikUMk9Kdg3-GGc:APA91bFzFnmAgAhbtbqXOwD6NLnDzfyOzYbG2E-d6mYOQKZ8qVOCxd7cmYo8X3JAFTuXZ0QUXKJ1bzSzDo3E0D00z3B4WFKD7Yqq9YaGGzf_XSUcCexDTM46bm4Ave6SWzbh62L4pCbS\"\n ]\n}\n```\n\n## Send notification to a topic\n\nFirebase allows us to subscribe to a topic so that we can send a notification to all devices that have ever subscribed to it without the need to send the notification to specific device tokens.\n\nIn our application, once we have registered using an email and password, we can subscribe to receive notifications every time a new book is added or deleted.\n\nSetting view in the iOS app\n\nWhen we activate this option, what happens is that we use the Firebase SDK to register in the topic books.\n\n```swift\nstatic let booksTopic = \"books\"\n\n@IBAction func setBookPushNotification(_ sender: Any) {\n if booksNotificationBtn.isOn {\n Messaging.messaging().subscribe(toTopic: SettingsViewController.booksTopic)\n print(\"Subscribed to \\(SettingsViewController.booksTopic)\")\n } else {\n Messaging.messaging().unsubscribe(fromTopic: SettingsViewController.booksTopic)\n print(\"Unsubscribed from \\(SettingsViewController.booksTopic)\")\n }\n}\n```\n\n### How does it work?\n\nThe logic we follow is as follows:\n\nIn our Atlas App Services App, we will have a database trigger that will monitor the Books collection for any new inserts or deletes.\n\nUpon the occurrence of either of these two operations, the linked function will be executed and will send a push notification to the \u201cbooks\u201d topic.\n\nTo configure this trigger, we\u2019ll make use of two very important options:\n\n* **Full Document**: This will allow us to receive the document created or modified in our change event. 
\n* **Document Pre-Image**: For delete operations, we will receive the document that was modified or deleted before your change event.\n\nWith these options, we can determine which changes have occurred and send a message using the title of the book to inform about the change. \n\nThe configuration of the trigger in the App Services UI will be as follows:\n\nThe function linked to the trigger will determine whether the operation occurred as an `insert` or `delete` and send the push notification to the topic **books** with the title information.\n\nFunction logic:\n\n```javascript\n const admin = require('firebase-admin');\n admin.initializeApp({\n credential: admin.credential.cert({\n projectId: context.values.get('projectId'),\n clientEmail: context.values.get('clientEmail'),\n privateKey: context.values.get('fcm_private_key_value').replace(/\\\\n/g, '\\n'),\n }),\n });\n const topic = 'books';\n const message = {topic};\n if (changeEvent.operationType === 'insert') {\n const name = changeEvent.fullDocument.volumeInfo.title;\n const image = changeEvent.fullDocument.volumeInfo.imageLinks.smallThumbnail;\n message.notification = {\n body: `${name} has been added to the list`,\n title: 'New book added'\n };\n if (image !== undefined) {\n message.apns = {\n payload: {\n aps: {\n 'mutable-content': 1\n }\n },\n fcm_options: {\n image\n }\n };\n }\n } else if (changeEvent.operationType === 'delete') {\n console.log(JSON.stringify(changeEvent));\n const name = changeEvent.fullDocumentBeforeChange.volumeInfo.title;\n message.notification = {\n body: `${name} has been deleted from the list`,\n title: 'Book deleted'\n };\n }\n admin.messaging().send(message)\n .then((response) => {\n // Response is a message ID string.\n console.log('Successfully sent message:', response);\n return true;\n })\n .catch((error) => {\n console.log('Error sending message:', error);\n return false;\n });\n```\n\nWhen someone adds a new book, everyone who opted-in for push notifications will receive the following:\n\n## Send notification to a specific device\n\nTo send a notification to a specific device, the logic will be somewhat different.\n\nFor this use case, every time a book is updated, we will search if it belongs to the favourites list of any user. For those users who have such a book, we will send a notification to all registered tokens.\n\nThis will ensure that only users who have added the updated book to their favorites will receive a notification alerting them that there has been a change.\n\n### How does it work?\n\nFor this part, we will need a database trigger that will monitor for updates operations on the books collection.\n\nThe configuration of this trigger is much simpler, as we only need to monitor the `updates` that occur in the book collection. \n\nThe configuration of the trigger in our UI will be as follows:\n\nWhen such an operation occurs, we\u2019ll check if there is any user who has added that book to their favorites list. If there is, we will create a new document in the ***pushNotifications*** collection.\n\nThis auxiliary collection is used to optimize the sending of push notifications and handle exceptions. It will even allow us to set up a monitoring system as well as retries.\n\nEvery time we send a notification, we\u2019ll insert a document with the following:\n\n1. The changes that occurred in the original document.\n2. The FCM tokens of the recipient devices.\n3. The date when the notification was registered.\n4. 
A processed property to know if the notification has been sent.\n\nHere\u2019s an example of a push notification document:\n\n```JSON\n{\n \"_id\": {\n \"$oid\": \"62a0da5d860040b7938eab87\"\n },\n \"token\": \n\"e_OpA2X6ikUMk9Kdg3-GGc:APA91bFzFnmAgAhbtbqXOwD6NLnDzfyOzYbG2E-d6mYOQKZ8qVOCxd7cmYo8X3JAFTuXZ0QUXKJ1bzSzDo3E0D00z3B4WFKD7Yqq9YaGGzf_XSUcCexDTM46bm4Ave6SWzbh62L4pCbS\",\n \"fQvffGBN2kBol1ITGdBqGS:APA91bERtZt-O-jEg6jMMCjPCfYdo1wmP9RbeATAXIQKQ3rfOqj1HFmETvdqm2MJHOhx2ZXqGJydtMWjHkaAD20A8OtqYWU3oiSg17vX_gh-19b85lP9S8bvd2TRsV3DqHnJP8k-t2WV\"\n ],\n \"date\": {\n \"$date\": {\n \"$numberLong\": \"1654708829678\"\n }\n },\n \"processed\": true,\n \"changes\": {\n \"volumeInfo\": {\n \"title\": \"Pacific on Linguistics\",\n \"publishedDate\": \"1964\",\n \"industryIdentifiers\": [\n {\n \"type\": \"OTHER\",\n \"identifier\": \"UOM:39015069017963\"\n }\n ],\n \"readingModes\": {\n \"text\": false,\n \"image\": false\n },\n \"categories\": [\n \"Linguistics\"\n ],\n \"imageLinks\": {\n \"smallThumbnail\": \"http://books.google.com/books/content?id=aCVZAAAAMAAJ&printsec=frontcover&img=1&zoom=5&source=gbs_api\",\n \"thumbnail\": \"http://books.google.com/books/content?id=aCVZAAAAMAAJ&printsec=frontcover&img=1&zoom=1&source=gbs_api\"\n },\n \"language\": \"en\"\n }\n }\n}\n```\n\nTo process the notifications, we\u2019ll have a database trigger that will monitor the ***pushNotifications*** collection, and each new document will send a notification to the tokens of the client devices.\n\n#### Function logic\n\n```javascript\nexports = async function(changeEvent) {\n\n const admin = require('firebase-admin');\n const db = context.services.get('mongodb-atlas').db('product');\n\n const id = changeEvent.documentKey._id;\n\n const bookCollection = db.collection('book');\n const pushNotification = db.collection('pushNotification');\n\n admin.initializeApp({\n credential: admin.credential.cert({\n projectId: context.values.get('projectId'),\n clientEmail: context.values.get('clientEmail'),\n privateKey: context.values.get('fcm_private_key_value').replace(/\\\\n/g, '\\n'),\n }),\n });\n\n const registrationToken = changeEvent.fullDocument.token;\n console.log(JSON.stringify(registrationToken));\n const title = changeEvent.fullDocument.changes.volumeInfo.title;\n const image = changeEvent.fullDocument.changes.volumeInfo.imageLinks.smallThumbnail;\n\n const message = {\n notification:{\n body: 'One of your favorites changed',\n title: `${title} changed`\n },\n tokens: registrationToken\n };\n\n if (image !== undefined) {\n message.apns = {\n payload: {\n aps: {\n 'mutable-content': 1\n }\n },\n fcm_options: {\n image\n }\n };\n }\n\n // Send a message to the device corresponding to the provided\n // registration token.\n admin.messaging().sendMulticast(message)\n .then((response) => {\n // Response is a message ID string.\n console.log('Successfully sent message:', response);\n pushNotification.updateOne({'_id': BSON.ObjectId(`${id}`)},{\n \"$set\" : {\n processed: true\n }\n });\n })\n .catch((error) => {\n console.log('Error sending message:', error);\n });\n};\n```\n\nExample of a push notification to a user:\n\n![\n\n## Repository\n\nThe complete code for both the App Services App as well as for the iOS application can be found in a dedicated GitHub repository.\n\nIf you have found this tutorial useful, let me know so I can continue to add more information as well as step-by-step videos on how to do this.\n\nAnd if you\u2019re as excited about Atlas App Services as I am, create your first free App 
today!", "format": "md", "metadata": {"tags": ["Atlas", "Swift", "JavaScript", "Google Cloud", "iOS"], "pageDescription": "Use our Atlas App Services application to create a complete push notification system that fits our business logic.", "contentType": "Tutorial"}, "title": "Push Notifications Using Atlas App Services & iOS Realm SDK", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-database-new-data-types", "action": "created", "body": "# New Realm Data Types: Dictionaries/Maps, Sets, Mixed, and UUIDs\n\n## TL;DR\n\nStarting with Realm Javascript 10.5, Realm Cocoa 10.8, Realm .NET 10.2,\nand Realm Java 10.6, developers will be able persist and query new\nlanguage specific data types in the Realm Database. These include\nDictionaries/Maps, Sets, a Mixed type, and UUIDs.\n\n## Introduction\n\nWe're excited to announce that the Realm SDK team has shipped four new\ndata types for the Realm Mobile Database. This work \u2013 prioritized in\nresponse to community requests \u2013 continues to make using the Realm SDKs\nan intuitive, idiomatic experience for developers. It eliminates even\nmore boilerplate code from your codebase, and brings the data layer\ncloser to your app's source code.\n\nThese new types make it simple to model flexible data in\nRealm, and easier to work across Realm and\nMongoDB Atlas. Mobile developers\nwho are building with Realm and MongoDB Realm\nSync can leverage the\nflexibility of MongoDB's data structure in their offline-first mobile\napplications.\n\nRead on to learn more about each of the four new data types we've\nreleased, and see examples of when and how to use them in your data\nmodeling:\n\n- Dictionaries/Maps\n- Mixed\n- Sets\n- UUIDs\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n\n## Dictionaries/Maps\n\nDictionaries/maps allow developers to store data in arbitrary key-value\npairs. They're used when a developer wants to add flexibility to data\nmodels that may evolve over time, or handle unstructured data from a\nremote endpoint. They also enable a mobile developer to store unique\nkey-value pairs without needing to check the data for uniqueness before\ninsertion.\n\nBoth Dictionaries and Maps can be useful when working with REST APIs,\nwhere extra data may be returned that's not defined in a mobile app's\nschema. Mobile developers who need to future-proof their schema against\nrapidly changing feature requirements and future product iterations will\nalso find it useful to work with these new data types in Realm.\n\nConsider a gaming app that has multiple games within it, and a single\nPlayer class. The developer building the app knows that future releases\nwill need to enable new functionality, like a view of player statistics\nand a leaderboard. But the Player can serve in different roles for each\navailable game. This makes defining a strict structure for player\nstatistics difficult.\n\nWith Dictionary/Map data types, the developer can place a gameplayStats\nfield on the Player class as a dictionary. 
Using this dictionary, it's\nsimple to display a screen that shows the player's most common roles,\nthe games they've competed in, and any relevant statistics that the\ndeveloper wants to include on the leaderboard. After the leaderboard has\nbeen released and iterated on, the developer can look to migrate their\nDictionary to a more formally structured class as part of a formal\nfeature.\n\n::::tabs\n:::tab]{tabid=\"Kotlin\"}\n``` kotlin\nimport android.util.Log\nimport io.realm.Realm\nimport io.realm.RealmDictionary\nimport io.realm.RealmObject\nimport io.realm.kotlin.where\nimport kotlinx.coroutines.flow.Flow\nimport kotlinx.coroutines.flow.flow\nimport java.util.AbstractMap\n\nopen class Player : RealmObject() {\n var name: String? = null\n var email: String? = null\n var playerHandle: String? = null\n var gameplayStats: RealmDictionary = RealmDictionary()\n var competitionStats: RealmDictionary = RealmDictionary()\n}\n\nrealm.executeTransactionAsync { r: Realm ->\n val player = Player()\n player.playerHandle = \"iDubs\"\n // get the RealmDictionary field from the object we just created and add stats\n player.gameplayStats = RealmDictionary(mapOf())\n .apply {\n \"mostCommonRole\" to \"Medic\"\n \"clan\" to \"Realmers\"\n \"favoriteMap\" to \"Scorpian Bay\"\n \"tagLine\" to \"Always be Healin\"\n \"nemesisHandle\" to \"snakeCase4Life\"\n }\n player.competitionStats = RealmDictionary(mapOf()).apply {\n \"EastCoastInvitational\" to \"2nd Place\"\n \"TransAtlanticOpen\" to \"4th Place\"\n }\n r.insert(player)\n}\n\n// Developer implements a Competitions View -\n// emit all entries in the dictionary for view by the user\nval player = realm.where().equalTo(\"name\", \"iDubs\").findFirst()\nplayer?.let {\n player.competitionStats.addChangeListener { map, changes ->\n val insertions = changes.insertions\n for (insertion in insertions) {\n Log.v(\"EXAMPLE\", \"Player placed at a new competition $insertion\")\n }\n }\n}\n\nfun competitionFlow(): Flow = flow {\n for ((competition, place) in player!!.competitionStats) {\n emit(\"$competition - $place\")\n }\n}\n\n// Build a RealmQuery that searches the Dictionary type\nval query = realm.where().equalTo(\"name\", \"iDubs\")\nval entry = AbstractMap.SimpleEntry(\"nemesisHandle\", \"snakeCase4Life\")\nval playerQuery = query.containsEntry(\"gameplayStats\", entry).findFirst()\n\n// remove player nemesis - they are friends now!\nrealm.executeTransaction { r: Realm ->\n playerQuery?.gameplayStats?.remove(\"nemesisHandle\")\n}\n```\n:::\n:::tab[]{tabid=\"Swift\"}\n``` swift\nimport Foundation\nimport RealmSwift\n\nclass Player: Object {\n @objc dynamic var name: String?\n @objc dynamic var email: String?\n @objc dynamic var playerHandle: String?\n let gameplayStats = Map()\n let competitionStats = Map()\n}\n\nlet realm = try! Realm()\ntry! 
realm.write {\n let player = Player()\n player.name = \"iDubs\"\n\n // get the Map field from the object we just created and add stats\n let statsDictionary = player.gameplayStats\n statsDictionary[\"mostCommonRole\"] = \"Medic\"\n statsDictionary[\"clan\"] = \"Realmers\"\n statsDictionary[\"favoriteMap\"] = \"Scorpian bay\"\n statsDictionary[\"tagLine\"] = \"Always Be Healin\"\n statsDictionary[\"nemesisHandle\"] = \"snakeCase4Life\"\n\n let competitionStats = player.competitionStats\n\n competitionStats[\"EastCoastInvitational\"] = \"2nd Place\"\n competitionStats[\"TransAtlanticOpen\"] = \"4th Place\"\n\n realm.add(player)\n\n // Developer implements a Competitions View -\n // emit all entries in the dictionary for view by the user\n\n // query for all Player objects\n let players = realm.objects(Player.self)\n\n // run the `.filter()` method on all the returned Players to find the competition rankings\n let playerQuery = players.filter(\"name == 'iDubs'\")\n\n guard let competitionDictionary = playerQuery.first?.competitionStats else {\n return\n }\n\n for entry in competitionDictionary {\n print(\"Competition: \\(entry.key)\")\n print(\"Place: \\(entry.value)\")\n }\n\n // Set up the listener to watch for new competition rankings\n var notificationToken = competitionDictionary.observe(on: nil) { changes in\n switch changes {\n case .update(_, _, let insertions, _):\n for insertion in insertions {\n let insertedCompetition = competitionDictionary[insertion]\n print(\"Player placed at a new competition \\(insertedCompetition ?? \"\")\")\n }\n default:\n print(\"Only handling updates\")\n }\n }\n}\n```\n:::\n:::tab[]{tabid=\"JavaScript\"}\n``` javascript\nconst PlayerSchema = {\n name: \"Player\",\n properties: {\n name: \"string?\",\n email: \"string?\",\n playerHandle: \"string?\",\n gameplayStats: \"string{}\",\n competitionStats: \"string{}\",\n },\n};\n\nlet player;\n\nrealm.write(() => {\n player = realm.create(\"Player\", {\n name: \"iDubs\",\n gameplayStats: {\n mostCommonRole: \"Medic\",\n clan: \"Realmers\",\n favoriteMap: \"Scorpian Bay\",\n tagLine: \"Always Be Healin\",\n nemesisHandle: \"snakeCase4Life\",\n },\n competitionStats: {\n EastCoastInvitational: \"2nd Place\",\n TransAtlanticOpen: \"4th Place\",\n }\n });\n\n// query for all Player objects\nconst players = realm.objects(\"Player\");\n// run the `.filtered()` method on all the returned Players to find the competition rankings\nconst playerQuery = players.filtered(\"name == 'iDubs'\");\n\n// Developer implements a Competitions View -\n// emit all entries in the dictionary for the user to view\n\nconst competitionDictionary = playerQuery.competitionStats;\n\nif(competitionDictionary != null){\n Object.keys(competitionDictionary).forEach(key => {\n console.log(`\"Competition: \" ${key}`);\n console.log(`\"Place: \" ${p[key]}`);\n }\n}\n\n// Set up the listener to watch for new competition rankings\n playerQuery.addListener((changedCompetition, changes) => {\n changes.insertions.forEach((index) => {\n const insertedCompetition = changedCompetition[index];\n console.log(`\"Player placed at a new competition \" ${changedCompetition.@key}!`);\n });\n\n// Build a RealmQuery that searches the Dictionary type\nconst playerNemesis = playerQuery.filtered(\n `competitionStats.@keys = \"playerNemesis\" `\n);\n\n// remove player nemesis - they are friends now!\nif(playerNemesis != null){\n realm.write(() => {\n playerNemesis.remove([\"playerNemesis\"]);\n });\n}\n```\n:::\n:::tab[]{tabid=\".NET\"}\n``` csharp\npublic class 
Player : RealmObject\n{\n public string Name { get; set; }\n public string Email { get; set; }\n public string PlayerHandle { get; set; }\n\n [Required]\n public IDictionary GamePlayStats { get; }\n\n [Required]\n public IDictionary CompetitionStats { get; }\n}\n\nrealm.Write(() =>\n{\n var player = realm.Add(new Player\n {\n PlayerHandle = \"iDubs\"\n });\n\n // get the RealmDictionary field from the object we just created and add stats\n var statsDictionary = player.GamePlayStats;\n\n statsDictionary[\"mostCommonRole\"] = \"Medic\";\n statsDictionary[\"clan\"] = \"Realmers\";\n statsDictionary[\"favoriteMap\"] = \"Scorpian Bay\";\n statsDictionary[\"tagLine\"] = \"Always be Healin\";\n statsDictionary[\"nemesisHandle\"] = \"snakeCase4Life\";\n\n var competitionStats = player.CompetitionStats;\n\n competitionStats[\"EastCoastInvitational\"] = \"2nd Place\";\n competitionStats[\"TransAtlanticOpen\"] = \"4th Place\";\n});\n\n// Developer implements a Competitions View -\n// emit all entries in the dictionary for view by the user\n\nvar player = realm.All().Single(t => t.Name == \"iDubs\");\n\n// Loop through one by one\nforeach (var competition in player.CompetitionStats)\n{\n Debug.WriteLine(\"Competition: \" + $\"{competition.Key}\");\n Debug.WriteLine(\"Place: \" + $\"{competition.Value}\");\n}\n\n// Set up the listener to emit a new competitions\nvar token = competitionStats.\n SubscribeForKeyNotifications((dict, changes, error) =>\n{\n if (changes == null)\n {\n return;\n }\n\n foreach (var key in changes.InsertedKeys)\n {\n Debug.WriteLine($\"Player placed at a new competition: {key}: {dict[key]}\");\n }\n});\n\n// Build a RealmQuery that searches the Dictionary type\nvar snakeCase4LifeEnemies = realm.All.Filter(\"GamePlayStats['playerNemesis'] == 'snakeCase4Life'\");\n\n// snakeCase4Life has changed their attitude and are no longer\n// at odds with anyone\nrealm.Write(() =>\n{\n foreach (var player in snakeCase4LifeEnemies)\n {\n player.GamePlayStats.Remove(\"playerNemesis\");\n }\n});\n```\n:::\n::::\n\n## Mixed\n\nRealm's Mixed type allows any Realm primitive type to be stored in the\ndatabase, helping developers when strict type-safety isn't appropriate.\nDevelopers may find this useful when dealing with data they don't have\ntotal control over \u2013 like receiving data and values from a third-party\nAPI. Mixed data types are also useful when dealing with legacy states\nthat were stored with the incorrect types. Converting the type could\nbreak other APIs and create considerable work. With Mixed types,\ndevelopers can avoid this difficulty and save hours of time.\n\nWe believe Mixed data types will be especially valuable for users who\nwant to sync data between Realm and MongoDB Atlas. MongoDB's\ndocument-based data model allows a single field to support many types\nacross documents. For users syncing data between Realm and Atlas, the\nnew Mixed type allows developers to persist data of any valid Realm\nprimitive type, or any Realm Object class reference. 
Developers don't\nrisk crashing their app because a field value violated type-safety rules\nin Realm.\n\n::::tabs\n:::tab[]{tabid=\"Kotlin\"}\n``` kotlin\nimport android.util.Log\nimport io.realm.*\nimport io.realm.kotlin.where\n\nopen class Distributor : RealmObject() {\n var name: String = \"\"\n var transitPolicy: String = \"\"\n}\n\nopen class Business : RealmObject() {\n var name: String = \"\"\n var deliveryMethod: String = \"\"\n}\n\nopen class Individual : RealmObject() {\n var name: String = \"\"\n var salesTerritory: String = \"\"\n}\n\nopen class Palette(var owner: RealmAny = RealmAny.nullValue()) : RealmObject() {\n var scanId: String? = null\n open fun ownerToString(): String {\n return when (owner.type) {\n RealmAny.Type.NULL -> {\n \"no owner\"\n }\n RealmAny.Type.STRING -> {\n owner.asString()\n }\n RealmAny.Type.OBJECT -> {\n when (owner.valueClass) {\n is Business -> {\n val business = owner.asRealmModel(Business::class.java)\n business.name\n }\n is Distributor -> {\n val distributor = owner.asRealmModel(Distributor::class.java)\n distributor.name\n }\n is Individual -> {\n val individual = owner.asRealmModel(Individual::class.java)\n individual.name\n }\n else -> \"unknown type\"\n }\n }\n else -> {\n \"unknown type\"\n }\n }\n }\n}\n\nrealm.executeTransaction { r: Realm ->\n val newDistributor = r.copyToRealm(Distributor().apply {\n name = \"Warehouse R US\"\n transitPolicy = \"Onsite Truck Pickup\"\n })\n val paletteOne = r.copyToRealm(Palette().apply {\n scanId = \"A1\"\n })\n // Add the owner of the palette as an object reference to another Realm class\n paletteOne.owner = RealmAny.valueOf(newDistributor)\n val newBusiness = r.copyToRealm(Business().apply {\n name = \"Mom and Pop\"\n deliveryMethod = \"Cheapest Private Courier\"\n })\n val paletteTwo = r.copyToRealm(Palette().apply {\n scanId = \"B2\"\n owner = RealmAny.valueOf(newBusiness)\n })\n val newIndividual = r.copyToRealm(Individual().apply {\n name = \"Traveling Salesperson\"\n salesTerritory = \"DC Corridor\"\n })\n val paletteThree = r.copyToRealm(Palette().apply {\n scanId = \"C3\"\n owner = RealmAny.valueOf(newIndividual)\n })\n}\n\n// Get a reference to palette one\nval paletteOne = realm.where()\n .equalTo(\"scanId\", \"A1\")\n .findFirst()!!\n\n// Extract underlying Realm Object from RealmAny by casting it RealmAny.Type.OBJECT\nval ownerPaletteOne: Palette = paletteOne.owner.asRealmModel(Palette::class.java)\nLog.v(\"EXAMPLE\", \"Owner of Palette One: \" + ownerPaletteOne.ownerToString())\n\n// Get a reference to the palette owned by Traveling Salesperson\n// so that you can remove ownership - they're broke!\nval salespersonPalette = realm.where()\n .equalTo(\"owner.name\", \"Traveling Salesperson\")\n .findFirst()!!\n\nval salesperson = realm.where()\n .equalTo(\"name\", \"Traveling Salesperson\")\n .findFirst()\n\nrealm.executeTransaction { r: Realm ->\n salespersonPalette.owner = RealmAny.nullValue()\n}\n\nval paletteTwo = realm.where()\n .equalTo(\"scanId\", \"B2\")\n .findFirst()!!\n\n// Set up a listener to see when Ownership changes for relabeling of palettes\nval listener = RealmObjectChangeListener { changedPalette: Palette, changeSet: ObjectChangeSet? 
->\n if (changeSet != null && changeSet.changedFields.contains(\"owner\")) {\n Log.i(\"EXAMPLE\",\n \"Palette $'paletteTwo.scanId' has changed ownership.\")\n }\n}\n\n// Observe object notifications.\npaletteTwo.addChangeListener(listener)\n```\n:::\n:::tab[]{tabid=\"Swift\"}\n``` swift\nimport Foundation\nimport RealmSwift\n\nclass Distributor: Object {\n @objc dynamic var name: String?\n @objc dynamic var transitPolicy: String?\n}\n\nclass Business: Object {\n @objc dynamic var name: String?\n @objc dynamic var deliveryMethod: String?\n}\n\nclass Individual: Object {\n @objc dynamic var name: String?\n @objc dynamic var salesTerritory: String?\n}\n\nclass Palette: Object {\n @objc dynamic var scanId: String?\n let owner = RealmProperty()\n var ownerName: String? {\n switch owner.value {\n case .none:\n return \"no owner\"\n case .string(let value):\n return value\n case .object(let object):\n switch object {\n case let obj as Business: return obj.name\n case let obj as Distributor: return obj.name\n case let obj as Individual: return obj.name\n default: return \"unknown type\"\n }\n default: return \"unknown type\"\n }\n }\n}\n\nlet realm = try! Realm()\ntry! realm.write {\n let newDistributor = Distributor()\n newDistributor.name = \"Warehouse R Us\"\n newDistributor.transitPolicy = \"Onsite Truck Pickup\"\n\n let paletteOne = Palette()\n paletteOne.scanId = \"A1\"\n paletteOne.owner.value = .object(newDistributor)\n\n let newBusiness = Business()\n newBusiness.name = \"Mom and Pop\"\n newBusiness.deliveryMethod = \"Cheapest Private Courier\"\n\n let paletteTwo = Palette()\n paletteTwo.scanId = \"B2\"\n paletteTwo.owner.value = .object(newBusiness)\n\n let newIndividual = Individual()\n newIndividual.name = \"Traveling Salesperson\"\n newIndividual.salesTerritory = \"DC Corridor\"\n\n let paletteThree = Palette()\n paletteTwo.scanId = \"C3\"\n paletteTwo.owner.value = .object(newIndividual)\n}\n\n// Get a Reference to PaletteOne\nlet paletteOne = realm.objects(Palette.self)\n .filter(\"name == 'A1'\").first\n\n// Extract underlying Realm Object from AnyRealmValue field\nlet ownerPaletteOneName = paletteOne?.ownerName\nprint(\"Owner of Palette One: \\(ownerPaletteOneName ?? \"not found\")\");\n\n// Get a reference to the palette owned by Traveling Salesperson\n// so that you can remove ownership - they're broke!\n\nlet salespersonPalette = realm.objects(Palette.self)\n .filter(\"owner.name == 'Traveling Salesperson\").first\n\nlet salesperson = realm.objects(Individual.self)\n .filter(\"name == 'Traveling Salesperson'\").first\n\ntry! 
realm.write {\n salespersonPalette?.owner.value = .none\n}\n\nlet paletteTwo = realm.objects(Palette.self)\n .filter(\"name == 'B2'\")\n```\n:::\n:::tab[]{tabid=\"JavaScript\"}\n``` javascript\nconst DistributorSchema = {\n name: \"Distributor\",\n properties: {\n name: \"string\",\n transitPolicy: \"string\",\n },\n};\n\nconst BusinessSchema = {\n name: \"Business\",\n properties: {\n name: \"string\",\n deliveryMethod: \"string\",\n },\n};\n\nconst IndividualSchema = {\n name: \"Individual\",\n properties: {\n name: \"string\",\n salesTerritory: \"string\",\n },\n};\n\nconst PaletteSchema = {\n name: \"Palette\",\n properties: {\n scanId: \"string\",\n owner: \"mixed\",\n },\n};\n\nrealm.write(() => {\n\n const newDistributor;\n newDistributor = realm.create(\"Distributor\", {\n name: \"Warehouse R Us\",\n transitPolicy: \"Onsite Truck Pickup\"\n });\n\n const paletteOne;\n paletteOne = realm.create(\"Palette\", {\n scanId: \"A1\",\n owner: newDistributor\n });\n\n const newBusiness;\n newBusiness = realm.create(\"Business\", {\n name: \"Mom and Pop\",\n deliveryMethod: \"Cheapest Private Courier\"\n });\n\n const paletteTwo;\n paletteTwo = realm.create(\"Palette\", {\n scanId: \"B2\",\n owner: newBusiness\n });\n\n const newIndividual;\n newIndividual = realm.create(\"Business\", {\n name: \"Traveling Salesperson\",\n salesTerritory: \"DC Corridor\"\n });\n\n const paletteThree;\n paletteThree = realm.create(\"Palette\", {\n scanId: \"C3\",\n owner: newIndividual\n });\n});\n\n//Get a Reference to PaletteOne\nconst paletteOne = realm.objects(\"Palette\")\n .filtered(`scanId == 'A1'`);\n\n//Extract underlying Realm Object from mixed field\nconst ownerPaletteOne = paletteOne.owner;\nconsole.log(`Owner of PaletteOne: \" ${ownerPaletteOne.name}!`);\n\n//Get a reference to the palette owned by Traveling Salesperson\n// so that you can remove ownership - they're broke!\n\nconst salespersonPalette = realm.objects(\"Palette\")\n .filtered(`owner.name == 'Traveling Salesperson'`);\n\nlet salesperson = realm.objects(\"Individual\")\n .filtered(`name == 'Traveling Salesperson'`)\n\nrealm.write(() => {\n salespersonPalette.owner = null\n});\n\n// Observe the palette to know when the owner has changed for relabeling\n\nlet paletteTwo = realm.objects(\"Palette\")\n .filtered(`scanId == 'B2'`)\n\nfunction onOwnerChange(palette, changes) {\n changes.changedProperties.forEach((prop) => {\n if(prop == owner){\n console.log(`Palette \"${palette.scanId}\" has changed ownership to \"${palette[prop]}\"`);\n }\n });\n}\n\npaletteTwo.addListener(onOwnerChange);\n```\n:::\n:::tab[]{tabid=\".NET\"}\n``` csharp\npublic class Distributor : RealmObject\n{\n public string Name { get; set; }\n public string TransitPolicy { get; set; }\n}\n\npublic class Business : RealmObject\n{\n public string Name { get; set; }\n public string DeliveryMethod { get; set; }\n}\n\npublic class Individual : RealmObject\n{\n public string Name { get; set; }\n public string SalesTerritory { get; set; }\n}\n\npublic class Palette : RealmObject\n{\n public string ScanId { get; set; }\n public RealmValue Owner { get; set; }\n\n public string OwnerName\n {\n get\n {\n if (Owner.Type != RealmValueType.Object)\n {\n return null;\n }\n\n var ownerObject = Owner.AsRealmObject();\n if (ownerObject.ObjectSchema.TryFindProperty(\"Name\", out _))\n {\n return ownerObject.DynamicApi.Get(\"Name\");\n }\n\n return \"Owner has no name\";\n }\n }\n}\n\nrealm.Write(() =>\n{\n var newDistributor = realm.Add(new Distributor\n {\n Name = \"Warehouse R 
Us\",\n TransitPolicy = \"Onsite Truck Pickup\"\n });\n\n realm.Add(new Palette\n {\n ScanId = \"A1\",\n Owner = newDistributor\n });\n\n var newBusiness =realm.Add(new Business\n {\n Name = \"Mom and Pop\",\n DeliveryPolicy = \"Cheapest Private Courier\"\n });\n\n realm.Add(new Palette\n {\n ScanId = \"B2\",\n Owner = newBusiness\n });\n\n var newIndividual = realm.Add(new Individual\n {\n Name = \"Traveling Salesperson\",\n SalesTerritory = \"DC Corridor\"\n });\n\n realm.Add(new Palette\n {\n ScanId = \"C3\",\n Owner = newIndividual\n });\n});\n\n// Get a Reference to PaletteOne\nvar paletteOne = realm.All()\n .Single(t => t.ScanID == \"A1\");\n\n// Extract underlying Realm Object from mixed field\nvar ownerPaletteOne = paletteOne.Owner.AsRealmObject();\nDebug.WriteLine($\"Owner of Palette One is {ownerPaletteOne.OwnerName}\");\n\n// Get a reference to the palette owned by Traveling Salesperson\n// so that you can remove ownership - they're broke!\n\nvar salespersonPalette = realm.All()\n .Filter(\"Owner.Name == 'Traveling Salesperson'\")\n .Single();\n\nrealm.Write(() =>\n{\n salespersonPalette.Owner = RealmValue.Null;\n});\n\n// Set up a listener to observe changes in ownership so you can relabel the palette\n\nvar paletteTwo = realm.All()\n .Single(p => p.ScanID == \"B2\");\n\npaletteTwo.PropertyChanged += (sender, args) =>\n{\n if (args.PropertyName == nameof(Pallette.Owner))\n {\n Debug.WriteLine($\"Palette {paletteTwo.ScanId} has changed ownership {paletteTwo.OwnerName}\");\n }\n};\n```\n:::\n::::\n\n## Sets\n\nSets allow developers to store an unordered array of unique values. This\nnew data type in Realm opens up powerful querying and mutation\ncapabilities with only a few lines of code.\n\nWith sets, you can compare data and quickly find matches. Sets in Realm\nhave built-in methods for filtering and writing to a set that are unique\nto the type. Unique methods on the Set type include, isSubset(),\ncontains(), intersects(), formIntersection, and formUnion(). Aggregation\nfunctions like min(), max(), avg(), and sum() can be used to find\naverages, sums, and similar.\n\nSets in Realm have the potential to eliminate hundreds of lines of\ngluecode. Consider an app that suggests expert speakers from different\nareas of study, who can address a variety of specific topics. The\ndeveloper creates two classes for this use case: Expert and Topic. Each\nof these classes has a Set field of strings which defines the\ndisciplines the user is an expert in, and the fields that the topic\ncovers.\n\nSets will make the predicted queries easy for the developer to\nimplement. An app user who is planning a Speaker Panel could see all\nexperts who have knowledge of both \"Autonomous Vehicles\" and \"City\nPlanning.\" The application could also run a query that looks for experts\nin one or more of these disciples by using the built-in intersect\nmethod, and the user can use results to assemble a speaker panel.\n\nDevelopers who are using [MongoDB Realm\nSync to keep data up-to-date\nbetween Realm and MongoDB Atlas are able to keep the semantics of a Set\nin place even when synchronizing data.\n\nYou can depend on the enforced uniqueness among the values of a Set.\nThere's no need to check the array for a value match before performing\nan insertion, which is a common implementation pattern that any user of\nSQLite will be familiar with. 
The operations performed on Realm Set data\ntypes will be synced and translated to documents using the\n$addToSet\ngroup of operations on MongoDB, preserving uniqueness in arrays.\n\n::::tabs\n:::tab]{tabid=\"Kotlin\"}\n``` kotlin\nimport android.util.Log\nimport io.realm.*\nimport io.realm.kotlin.where\n\nopen class Expert : RealmObject() {\n var name: String = \"\"\n var email: String = \"\"\n var disciplines: RealmSet = RealmSet()\n}\n\nopen class Topic : RealmObject() {\n var name: String = \"\"\n var location: String = \"\"\n var discussionThemes: RealmSet = RealmSet()\n var panelists: RealmList = RealmList()\n}\n\nrealm.executeTransaction { r: Realm ->\n val newExpert = r.copyToRealm(Expert())\n newExpert.name = \"Techno King\"\n // get the RealmSet field from the object we just created\n val disciplineSet = newExpert.disciplines\n // add value to the RealmSet\n disciplineSet.add(\"Trance\")\n disciplineSet.add(\"Meme Coins\")\n val topic = realm.copyToRealm(Topic())\n topic.name = \"Bitcoin Mining and Climate Change\"\n val discussionThemes = topic.discussionThemes\n // Add a list of themes\n discussionThemes.addAll(listOf(\"Memes\", \"Blockchain\", \"Cloud Computing\",\n \"SNL\", \"Weather Disasters from Climate Change\"))\n}\n\n// find experts for a discussion topic and add them to the panelists list\nval experts: RealmResults = realm.where().findAll()\nval topic = realm.where()\n .equalTo(\"name\", \"Bitcoin Mining and Climate Change\")\n .findFirst()!!\ntopic.discussionThemes.forEach { theme ->\n experts.forEach { expert ->\n if (expert.disciplines.contains(theme)) {\n topic.panelists.add(expert)\n }\n }\n}\n\n//observe the discussion themes set for any changes in the set\nval discussionTopic = realm.where()\n .equalTo(\"name\", \"Bitcoin Mining and Climate Change\")\n .findFirst()\nval anotherDiscussionThemes = discussionTopic?.discussionThemes\nval changeListener = SetChangeListener { collection: RealmSet,\n changeSet: SetChangeSet ->\n Log.v(\n \"EXAMPLE\",\n \"New discussion themes has been added: ${changeSet.numberOfInsertions}\"\n )\n}\n\n// Observe set notifications.\nanotherDiscussionThemes?.addChangeListener(changeListener)\n\n// Techno King is no longer into Meme Coins - remove the discipline\nrealm.executeTransaction {\n it.where()\n .equalTo(\"name\", \"Techno King\")\n .findFirst()?.let { expert ->\n expert.disciplines.remove(\"Meme Coins\")\n }\n}\n```\n:::\n:::tab[]{tabid=\"Swift\"}\n``` swift\nimport Foundation\nimport RealmSwift\n\nclass Expert: Object {\n @objc dynamic var name: String?\n @objc dynamic var email: String?\n let disciplines = MutableSet()\n}\n\nclass Topic: Object {\n @objc dynamic var name: String?\n @objc dynamic var location: String?\n let discussionThemes = MutableSet()\n let panelists = List()\n}\n\nlet realm = try! Realm()\ntry! 
realm.write {\n let newExpert = Expert()\n newExpert.name = \"Techno King\"\n newExpert.disciplines.insert(\"Trace\")\n newExpert.disciplines.insert(\"Meme Coins\")\n realm.add(newExpert)\n\n let topic = Topic()\n topic.name = \"Bitcoin Mining and Climate Change\"\n topic.discussionThemes.insert(\"Memes\")\n topic.discussionThemes.insert(\"Blockchain\")\n topic.discussionThemes.insert(\"Cloud Computing\")\n topic.discussionThemes.insert(\"SNL\")\n topic.discussionThemes.insert(\"Weather Disasters from Climate Change\")\n realm.add(topic)\n}\n\n// find experts for a discussion topic and add them to the panelists list\n\nlet experts = realm.objects(Expert.self)\nlet topic = realm.objects(Topic.self)\n .filter(\"name == 'Bitcoin Mining and Climate Change'\").first\nguard let topic = topic else { return }\nlet discussionThemes = topic.discussionThemes\n\nfor expert in experts where expert.disciplines.intersects(discussionThemes) {\n try! realm.write {\n topic.panelists.append(expert)\n }\n}\n\n// Observe the discussion themes set for new entries\nlet notificationToken = discussionThemes.observe { changes in\n switch changes {\n case .update(_, _, let insertions, _):\n for insertion in insertions {\n let insertedTheme = discussionThemes[insertion]\n print(\"A new discussion theme has been added: \\(insertedTheme)\")\n }\n default:\n print(\"Only handling updates\")\n }\n}\n\n// Techno King is no longer into Meme Coins - remove the discipline\ntry! realm.write {\n newExpert.disciplines.remove(\"Meme Coins\")\n}\n```\n:::\n:::tab[]{tabid=\"JavaScript\"}\n``` javascript\nconst ExpertSchema = {\n name: \"Expert\",\n properties: {\n name: \"string?\",\n email: \"string?\",\n disciplines: \"string<>\"\n },\n};\n\nconst TopicSchema = {\n name: \"Topic\",\n properties: {\n name: \"string?\",\n locaton: \"string?\",\n discussionThemes: \"string<>\", //<> indicate a Set datatype\n panelists: \"Expert[]\"\n },\n};\n\nrealm.write(() => {\n let newExpert;\n newExpert = realm.create(\"Expert\", {\n name: \"Techno King\",\n disciplines: [\"Trance\", \"Meme Coins\"],\n });\n\n let topic;\n topic = realm.create(\"Topic\", {\n name: \"Bitcoin Mining and Climate Change\",\n discussionThemes: [\"Memes\", \"Blockchain\", \"Cloud Computing\",\n \"SNL\", \"Weather Disasters from Climate Change\"],\n });\n});\n\n// find experts for a discussion topic and add them to the panelists list\nconst experts = realm.objects(\"Expert\");\n\nconst topic = realm.objects(\"Topic\").filtered(`name ==\n 'Bitcoin Mining and Climate Change'`);\nconst discussionThemes = topic.discussionThemes;\n\nfor (int i = 0; i < discussionThemes.size; i++) {\n for (expert in experts){\n if(expert.disciplines.has(dicussionThemes[i]){\n realm.write(() => {\n realm.topic.panelists.add(expert)\n });\n }\n }\n}\n\n// Set up the listener to watch for new discussion themes added to the topic\n discussionThemes.addListener((changedDiscussionThemes, changes) => {\n changes.insertions.forEach((index) => {\n const insertedDiscussion = changedDiscussionThemes[index];\n console.log(`\"A new discussion theme has been added: \" ${insertedDiscussion}!`);\n });\n\n// Techno King is no longer into Meme Coins - remove the discipline\nnewExpert.disciplines.delete(\"Meme Coins\")\n```\n:::\n:::tab[]{tabid=\".NET\"}\n``` csharp\npublic class Expert : RealmObject\n{\n public string Name { get; set; }\n public string Email { get; set; }\n\n [Required]\n public ISet Disciplines { get; }\n}\n\npublic class Topic : RealmObject\n{\n public string Name { get; set; }\n public 
string Location { get; set; }\n\n [Required]\n public ISet DiscussionThemes { get; }\n\n public IList Panelists { get; }\n}\n\nrealm.Write(() =>\n{\n var newExpert = realm.Add(new Expert\n {\n Name = \"Techno King\"\n });\n\n newExpert.Disciplines.Add(\"Trance\");\n newExpert.Disciplines.Add(\"Meme Coins\");\n\n var topic = realm.Add(new Topic\n {\n Name = \"Bitcoin Mining and Climate Change\"\n });\n\n topic.DiscussionThemes.Add(\"Memes\");\n topic.DiscussionThemes.Add(\"Blockchain\");\n topic.DiscussionThemes.Add(\"Cloud Computing\");\n topic.DiscussionThemes.Add(\"SNL\");\n topic.DiscussionThemes.Add(\"Weather Disasters from Climate Change\");\n});\n\n// find experts for a discussion topic and add them to the panelists list\nvar experts = realm.All();\nvar topic = realm.All()\n .Where(t => t.Name == \"Bitcoin Mining and Climate Change\");\n\nforeach (expert in experts)\n{\n if (expert.Disciplines.Overlaps(topic.DiscussionThemes))\n {\n realm.Write(() =>\n {\n topic.Panelists.Add(expert);\n });\n }\n}\n\n// Set up the listener to watch for new dicussion themes added to the topic\nvar token = topic.DiscussionThemes\n .SubscribeForNotifications((collection, changes, error) =>\n{\n foreach (var i in changes.InsertedIndices)\n {\n var insertedDiscussion = collection[i];\n Debug.WriteLine($\"A new discussion theme has been added to the topic {insertedDiscussion}\");\n }\n});\n\n// Techno King is no longer into Meme Coins - remove the discipline\nnewExpert.Disciplines.Remove(\"Meme Coins\")\n```\n:::\n::::\n\n## UUIDs\n\nThe Realm SDKs also now support the ability to generate and persist\nUniversally Unique Identifiers (UUIDs) natively. UUIDs are ubiquitous in\napp development as the most common type used for primary keys. As a\n128-bit value, they have become the default for distributed storage of\ndata in mobile to cloud application architectures - making collisions\nunheard of.\n\nPreviously, Realm developers would generate a UUID and then cast it as a\nstring to store in Realm. But we saw an opportunity to eliminate\nrepetitive code, and with the release of UUID data types, Realm comes\none step closer to boilerplate-free code.\n\nLike with the other new data types, the release of UUIDs also brings\nRealm's data types to parity with MongoDB. Now mobile application\ndevelopers will be able to set UUIDs on both ends of their distributed\ndatastore, and can rely on Realm Sync to perform the replication.\n\n::::tabs\n:::tab[]{tabid=\"Kotlin\"}\n``` kotlin\nimport io.realm.Realm\nimport io.realm.RealmObject\nimport io.realm.annotations.PrimaryKey\nimport io.realm.annotations.RealmField\nimport java.util.UUID;\nimport io.realm.kotlin.where\n\nopen class Task: RealmObject() {\n @PrimaryKey\n @RealmField(\"_id\")\n var id: UUID = UUID.randomUUID()\n var name: String = \"\"\n var owner: String= \"\"\n}\n\nrealm.executeTransaction { r: Realm ->\n // UUID field is generated automatically in the class constructor\n val newTask = r.copyToRealm(Task())\n newTask.name = \"Update to use new Data Types\"\n newTask.owner = \"Realm Developer\"\n}\n\nval taskUUID: Task? = realm.where()\n .equalTo(\"_id\", \"38400000-8cf0-11bd-b23e-10b96e4ef00d\")\n .findFirst()\n```\n:::\n:::tab[]{tabid=\"Swift\"}\n``` swift\nimport Foundation\nimport RealmSwift\n\nclass Task: Object {\n @objc dynamic var _id = UUID()\n @objc dynamic var name: String?\n @objc dynamic var owner: String?\n override static func primaryKey() -> String? 
{\n return \"_id\"\n }\n\n convenience init(name: String, owner: String) {\n self.init()\n self.name = name\n self.owner = owner\n }\n}\n\nlet realm = try! Realm()\ntry! realm.write {\n // UUID field is generated automatically in the class constructor\n let newTask =\n Task(name: \"Update to use new Data Types\", owner: \"Realm Developers\")\n}\n\nlet uuid = UUID(uuidString: \"38400000-8cf0-11bd-b23e-10b96e4ef00d\")\n\n// Set up the query to retrieve the object with the UUID\nlet predicate = NSPredicate(format: \"_id = %@\", uuid! as CVarArg)\n\nlet taskUUID = realm.objects(Task.self).filter(predicate).first\n```\n:::\n:::tab[]{tabid=\"JavaScript\"}\n``` javascript\nconst { UUID } = Realm.BSON;\n\nconst TaskSchema = {\n name: \"Task\",\n primaryKey: \"_id\",\n properties: {\n _id: \"uuid\",\n name: \"string?\",\n owner: \"string?\"\n },\n};\n\nlet task;\n\nrealm.write(() => {\n task = realm.create(\"Task\", {\n _id: new UUID(),\n name: \"Update to use new Data Type\",\n owner: \"Realm Developers\"\n });\n\nlet searchUUID = UUID(\"38400000-8cf0-11bd-b23e-10b96e4ef00d\");\n\nconst taskUUID = realm.objects(\"Task\")\n .filtered(`_id == $0`, searchUUID);\n```\n:::\n:::tab[]{tabid=\".NET\"}\n``` csharp\npublic class Task : RealmObject\n{\n [PrimaryKey]\n [MapTo(\"_id\")]\n public Guid Id { get; private set; } = Guid.NewGuid();\n public string Name { get; set; }\n public string Owner { get; set; }\n}\n\nrealm.Write(() =>\n{\n realm.Add(new Task\n {\n PlayerHandle = \"Update to use new Data Type\",\n Owner = \"Realm Developers\"\n });\n});\n\nvar searchGUID = Guid.Parse(\"38400000-8cf0-11bd-b23e-10b96e4ef00d\");\n\nvar taskGUID = realm.Find(searchGUID);\n```\n:::\n::::\n\n## Conclusion\n\nFrom the beginning, Realm's engineering team has believed that the best\nline of code is the one a developer doesn't need to write. With the\nrelease of these unique types for mobile developers, we're eliminating\nthe workarounds \u2013 the boilerplate code and negative impact on CPU and\nmemory \u2013 that are commonly required with certain data structures. And\nwe're doing it in a way that's idiomatic to the platform you're building\non.\n\nBy making it simple to query, store, and sync your data, all in the\nformat you need, we hope we've made it easier for you to focus on\nbuilding your next great app.\n\nStay tuned by following [@realm on Twitter.\n\nWant to Ask a Question? Visit our\nForums.\n\nWant to be notified about upcoming Realm events, like talks on SwiftUI\nBest Practices or our new Kotlin Multiplatform SDK? Visit our Global\nCommunity Page.\n\n", "format": "md", "metadata": {"tags": ["Realm"], "pageDescription": "Four new data types in the Realm Mobile Database - Dictionaries, Mixed, Sets, and UUIDs - make it simple to model flexible data in Realm.", "contentType": "News & Announcements"}, "title": "New Realm Data Types: Dictionaries/Maps, Sets, Mixed, and UUIDs", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/stream-data-mongodb-bigquery-subscription", "action": "created", "body": "# Create a Data Pipeline for MongoDB Change Stream Using Pub/Sub BigQuery Subscription\n\nOn 1st October 2022, MongoDB and Google announced a set of open source Dataflow templates for moving data between MongoDB and BigQuery to run analyses on BigQuery using BQML and to bring back inferences to MongoDB. Three templates were introduced as part of this release, including the MongoDB to BigQuery CDC (change data capture) template. 
\n\nThis template requires users to run the change stream on MongoDB, which will monitor inserts and updates on the collection. These changes will be captured and pushed to a Pub/Sub topic. The CDC template will create a job to read the data from the topic and get the changes, apply the transformation, and write the changes to BigQuery. The transformations will vary based on the user input while running the Dataflow job.\n\nAlternatively, you can use a native Pub/Sub capability to set up a data pipeline between your MongoDB cluster and BigQuery. The Pub/Sub BigQuery subscription writes messages to an existing BigQuery table as they are received. Without the BigQuery subscription type, you need a pull or push subscription and a subscriber (such as Dataflow) that reads messages and writes them to BigQuery. \n\nThis article explains how to set up the BigQuery subscription to process data read from a MongoDB change stream. As a prerequisite, you\u2019ll need a MongoDB Atlas cluster.\n\n> To set up a free tier cluster, you can register to MongoDB either from Google Cloud Marketplace or from the registration page. Follow the steps in the MongoDB documentation to configure the database user and network settings for your cluster. \n\nOn Google Cloud, we will create a Pub/Sub topic, a BigQuery dataset, and a table before creating the BigQuery subscription.\n\n## Create a BigQuery dataset\n\nWe\u2019ll start by creating a new dataset for BigQuery in Google Cloud console.\n\nThen, add a new table in your dataset. Define it with a name of your choice and the following schema:\n\n| Field name | Type |\n| --- | --- |\n| id | STRING |\n| source_data | STRING |\n| Timestamp | STRING |\n\n## Configure Google Cloud Pub/Sub\n\nNext, we\u2019ll configure a Pub/Sub schema and topic to ingest the messages from our MongoDB change stream. Then, we\u2019ll create a subscription to write the received messages to the BigQuery table we just created.\n\nFor this section, we\u2019ll use the Google Cloud Pub/Sub API. Before proceeding, make sure you have enabled the API for your project.\n\n### Define a Pub/Sub schema \n\nFrom the Cloud Pub/Sub UI, Navigate to _Create Schema_.\n\nProvide an appropriate identifier, such as \u201cmdb-to-bq-schema,\u201d to your schema. Then, select \u201cAvro\u201d for the type. Finally, add the following definition to match the fields from your BigQuery table:\n\n```json\n{\n \"type\" : \"record\",\n \"name\" : \"Avro\",\n \"fields\" : \n {\n \"name\" : \"id\",\n \"type\" : \"string\"\n },\n {\n \"name\" : \"source_data\",\n \"type\" : \"string\"\n },\n {\n \"name\" : \"Timestamp\",\n \"type\" : \"string\"\n }\n ]\n}\n```\n\n![Create a Cloud Pub/Sub schema\n\n### Create a Pub/Sub topic\n\nFrom the sidebar, navigate to \u201cTopics\u201d and click on Create a topic. \n\nGive your topic an identifier, such as \u201cMongoDBCDC.\u201d Enable the Use a schema field and select the schema that you just created. Leave the rest of the parameters to default and click on _Create Topic_.\n\n### Subscribe to topic and write to BigQuery\n\nFrom inside the topic, click on _Create new subscription_. 
Configure your subscription in the following way:\n\n- Provide a subscription ID \u2014 for example, \u201cmdb-cdc.\u201d\n- Define the Delivery type to _Write to BigQuery_.\n- Select your BigQuery dataset from the dropdown.\n- Provide the name of the table you created in the BigQuery dataset.\n- Enable _Use topic schema_.\n\nYou need to have a `bigquery.dataEditor` role on your service account to create a Pub/Sub BigQuery subscription. To grant access using the `bq` command line tool, run the following command:\n\n```sh\nbq add-iam-policy-binding \\\n --member=\"serviceAccount:service@gcp-sa-pubsub.iam.gserviceaccount.com\" \\\n --role=roles/bigquery.dataEditor \\\n -t \".\n\n\"\n```\n\nKeep the other fields as default and click on _Create subscription_.\n\n## Set up a change stream on a MongoDB cluster\n\nFinally, we\u2019ll set up a change stream that listens for new documents inserted in our MongoDB cluster. \n\nWe\u2019ll use Node.js but you can adapt the code to a programming language of your choice. Check out the Google Cloud documentation for more Pub/Sub examples using a variety of languages. You can find the source code of this example in the dedicated GitHub repository.\n\nFirst, set up a new Node.js project and install the following dependencies.\n\n```sh\nnpm install mongodb @google-cloud/pubsub avro-js\n```\n\nThen, add an Avro schema, matching the one we created in Google Cloud Pub/Sub:\n\n**./document-message.avsc**\n```json\n{\n \"type\": \"record\",\n \"name\": \"DocumentMessage\",\n \"fields\": \n {\n \"name\": \"id\",\n \"type\": \"string\"\n },\n {\n \"name\": \"source_data\",\n \"type\": \"string\"\n },\n {\n \"name\": \"Timestamp\",\n \"type\": \"string\"\n }\n ]\n}\n```\n\nThen create a new JavaScript module \u2014 `index.mjs`. Start by importing the required libraries and setting up your MongoDB connection string and your Pub/Sub topic name. If you don\u2019t already have a MongoDB cluster, you can create one for free in [MongoDB Atlas.\n\n**./index.mjs**\n```js\nimport { MongoClient } from 'mongodb';\nimport { PubSub } from '@google-cloud/pubsub';\nimport avro from 'avro-js';\nimport fs from 'fs';\n \nconst MONGODB_URI = '';\nconst PUB_SUB_TOPIC = 'projects//topics/';\n```\n\nAfter this, we can connect to our MongoDB instance and set up a change stream event listener. Using an aggregation pipeline, we\u2019ll watch only for \u201cinsert\u201d events on the specified collection. 
We\u2019ll also define a 60-second timeout before closing the change stream.\n\n**./index.mjs**\n```js\nlet mongodbClient;\ntry {\n mongodbClient = new MongoClient(MONGODB_URI);\n await monitorCollectionForInserts(mongodbClient, 'my-database', 'my-collection');\n} finally {\n mongodbClient.close();\n}\n \nasync function monitorCollectionForInserts(client, databaseName, collectionName, timeInMs) {\n const collection = client.db(databaseName).collection(collectionName);\n // An aggregation pipeline that matches on new documents in the collection.\n const pipeline = { $match: { operationType: 'insert' } } ];\n const changeStream = collection.watch(pipeline);\n \n changeStream.on('change', event => {\n const document = event.fullDocument;\n publishDocumentAsMessage(document, PUB_SUB_TOPIC);\n });\n \n await closeChangeStream(timeInMs, changeStream);\n}\n \nfunction closeChangeStream(timeInMs = 60000, changeStream) {\n return new Promise((resolve) => {\n setTimeout(() => {\n console.log('Closing the change stream');\n changeStream.close();\n resolve();\n }, timeInMs)\n })\n};\n```\n\nFinally, we\u2019ll define the `publishDocumentAsMessage()` function that will:\n\n1. Transform every MongoDB document received through the change stream.\n1. Convert it to the data buffer following the Avro schema.\n1. Publish it to the Pub/Sub topic in Google Cloud.\n\n```js\nasync function publishDocumentAsMessage(document, topicName) {\n const pubSubClient = new PubSub();\n const topic = pubSubClient.topic(topicName);\n \n const definition = fs.readFileSync('./document-message.avsc').toString();\n const type = avro.parse(definition);\n \n const message = {\n id: document?._id?.toString(),\n source_data: JSON.stringify(document),\n Timestamp: new Date().toISOString(),\n };\n \n const dataBuffer = Buffer.from(type.toString(message));\n try {\n const messageId = await topic.publishMessage({ data: dataBuffer });\n console.log(`Avro record ${messageId} published.`);\n } catch(error) {\n console.error(error);\n }\n}\n```\n\nRun the file to start the change stream listener:\n\n```sh\nnode ./index.mjs\n```\n\nInsert a new document in your MongoDB collection to watch it go through the data pipeline and appear in your BigQuery table!\n\n## Summary\n\nThere are multiple ways to load the change stream data from MongoDB to BigQuery and we have shown how to use the BigQuery subscription on Pub/Sub. The change streams from MongoDB are monitored, captured, and later written to a Pub/Sub topic using Java libraries.\n\nThe data is then written to BigQuery using BigQuery subscription. The datatype for the BigQuery table is set using Pub/Sub schema. Thus, the change stream data can be captured and written to BigQuery using the BigQuery subscription capability of Pub/Sub.\n\n## Further reading\n\n1. A data pipeline for [MongoDB Atlas and BigQuery using Dataflow.\n1. Setup your first MongoDB cluster using Google Marketplace.\n1. Run analytics using BigQuery using BigQuery ML.\n1. 
How to publish a message to a topic with schema.", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript", "Google Cloud", "Node.js", "AI"], "pageDescription": "Learn how to set up a data pipeline from your MongoDB database to BigQuery using change streams and Google Cloud Pub/Sub.", "contentType": "Tutorial"}, "title": "Create a Data Pipeline for MongoDB Change Stream Using Pub/Sub BigQuery Subscription", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/introduction-to-gdelt-data", "action": "created", "body": "# An Introduction to GDELT Data\n\n## An Introduction to GDELT Data\n### (and How to Work with It and MongoDB)\n\nHey there!\n\nThere's a good chance that if you're reading this, it's because you're planning to enter the MongoDB \"Data as News\" Hackathon! If not, well, go ahead and sign up here!\n\nNow that that's over with, let's get to the first question you probably have:\n\n### What is GDELT?\nGDELT is an acronym, standing for \"Global Database of Events, Language and Tone\". It's a database of geopolitical event data, automatically derived and translated in real time from hundreds of news sources in 65 languages. It's around two terabytes of data, so it's really quite big!\n\nEach event contains the following data:\n\nDetails of the one or more actors - usually countries or political entities.\nThe type of event that has occurred, such as \"appeal for judicial cooperation\"\nThe positive or negative sentiment perceived towards the event, on a scale of -10 (very negative) to +10 (very positive)\nAn \"impact score\" on the Goldstein Scale, indicating the theoretical potential impact that type of event will have on the stability of a country.\n\n### But what does it look like?\nThe raw data GDELT provides is hosted as CSV files, zipped and uploaded for every 15 minutes since February 2015. 
A row in the CSV files contains data that looks a bit like this:\n\n| Field Name | Value |\n|-----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|\n| _id | 1037207900 |\n| Day | 20210401 |\n| MonthYear | 202104 |\n| Year | 2021 |\n| FractionDate | 2021.2493 |\n| Actor1Code | USA |\n| Actor1Name | NORTH CAROLINA |\n| Actor1CountryCode | USA |\n| IsRootEvent | 1 |\n| EventCode | 43 |\n| EventBaseCode | 43 |\n| EventRootCode | 4 |\n| QuadClass | 1 |\n| GoldsteinScale | 2.8 |\n| NumMentions | 10 |\n| NumSources | 1 |\n| NumArticles | 10 |\n| AvgTone | 1.548672566 |\n| Actor1Geo_Type | 3 |\n| Actor1Geo_Fullname | Albemarle, North Carolina, United States |\n| Actor1Geo_CountryCode | US |\n| Actor1Geo_ADM1Code | USNC |\n| Actor1Geo_ADM2Code | NC021 |\n| Actor1Geo_Lat | 35.6115 |\n| Actor1Geo_Long | -82.5426 |\n| Actor1Geo_FeatureID | 1017529 |\n| Actor2Geo_Type | 0 |\n| ActionGeo_Type | 3 |\n| ActionGeo_Fullname | Albemarle, North Carolina, United States |\n| ActionGeo_CountryCode | US |\n| ActionGeo_ADM1Code | USNC |\n| ActionGeo_ADM2Code | NC021 |\n| ActionGeo_Lat | 35.6115 |\n| ActionGeo_Long | -82.5426 |\n| ActionGeo_FeatureID | 1017529 |\n| DateAdded | 2022-04-01T15:15:00Z |\n| SourceURL | https://www.dailyadvance.com/news/local/museum-to-host-exhibit-exploring-change-in-rural-us/article_42fd837e-c5cf-5478-aec3-aa6bd53566d8.html |\n| downloadId | 20220401151500 |\n\nThis event encodes Actor1 (North Carolina) hosting a visit (Cameo Code 043) \u2026 and in this case the details of the visit aren't included - it's an \"exhibit exploring change in the Rural US.\" You can click through the SourceURL link to read further details.\n\nEvery event looks like this. One or two actors, possibly some \"action\" detail, and then a verb, encoded using the CAMEO verb encoding. CAMEO is short for \"Conflict and Mediation Event Observations\", and you can find the full verb listing in this PDF. If you need a more \"computer readable\" version of the CAMEO verbs, one is hosted here.\n\n### What's So Interesting About an Enormous Table of Geopolitical Data?\nWe think that there are a bunch of different ways to think about the data encoded in the GDELT dataset.\n\nFirstly, it's a longitudinal dataset, going back through time. Data in GDELT v2 goes from the present day back to 2015, providing a huge amount of event data for the past 7 years. But the GDELT v1 dataset, which is less rich, goes back until 1979! This gives an unparalleled opportunity to study the patterns and trends of geopolitics for the past 43 years.\n\nMore than just a historical dataset, however, GDELT is a living dataset, updated every 15 minutes. This means it can also be considered an event system for understanding the world right now. How you use this ability is up to you, but it shouldn't be ignored!\n\nGDELT is also a geographical dataset. Each event encodes one or more points of its actors and actions, so the data can be analysed from a GIS standpoint. But more than all of this, GDELT models human interactions at a large scale. The Goldstein (impact) score (GoldsteinScale), and the sentiment score (AvgTone) provide the human impact of the events being encoded. 
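To make these axes concrete, here is a minimal sketch of what a query over this data can look like from the MongoDB Node.js driver. It assumes the events live in the `eventsCSV` collection of a `GDELT` database (as in the read-only hosted cluster described below) and that `GoldsteinScale` and `AvgTone` are stored as numbers; adjust the names and filters to match however you load the data.

```javascript
import { MongoClient } from "mongodb";

// Read-only connection string for the hosted GDELT2 cluster (see below),
// or swap in the URI of your own cluster after loading a GDELT subset.
const uri =
  "mongodb+srv://readonly:readonly@gdelt2.rgl39.mongodb.net/GDELT?retryWrites=true&w=majority";
const client = new MongoClient(uri);

try {
  await client.connect();
  const events = client.db("GDELT").collection("eventsCSV");

  // Recent, theoretically destabilizing events in the US, most negative tone first.
  const results = await events
    .find({
      ActionGeo_CountryCode: "US",
      GoldsteinScale: { $lte: -5 }, // strongly negative impact on the Goldstein Scale
      AvgTone: { $lt: 0 }, // negative perceived sentiment
    })
    .sort({ AvgTone: 1 })
    .limit(10)
    .toArray();

  for (const event of results) {
    console.log(event.Day, event.Actor1Name, event.EventCode, event.SourceURL);
  }
} finally {
  await client.close();
}
```

The same fields support the other angles mentioned above: filter on `Day` or `DateAdded` for the longitudinal view, or on the `ActionGeo_*` fields for the geographic one.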
\n\nWhether you choose to explore one of the axes above, using ML, or visualisation; whether you choose to use GDELT data on its own, or combine it with another data source; whether you choose to home in on specific events in the recent past; we're sure that you'll discover new understandings of the world around you by analysing the news data it contains.\n\n### How To Work with GDELT?\n\nOver the next few weeks we're going to be publishing blog posts, hosting live streams and AMA (ask me anything) sessions to help you with your GDELT and MongoDB journey. In the meantime, you have a couple of options: You can work with our existing GDELT data cluster (containing the entirety of last year's GDELT data), or you can load a subset of the GDELT data into your own cluster.\n\n#### Work With Our Hosted GDELT Cluster\nWe currently host the past year's GDELT data in a cluster called GDELT2. You can access it read-only using Compass, or any of the MongoDB drivers, with the following connection string:\n\n```\nmongodb+srv://readonly:readonly@gdelt2.rgl39.mongodb.net/GDELT?retryWrites=true&w=majority\n```\n\nThe raw data is contained in a collection called \"eventsCSV\", and a slightly massaged copy of the data (with Actors and Actions broken down into subdocuments) is contained in a collection called \"recentEvents\".\n\nWe're still making changes to this cluster, and plan to load more data in as time goes on (as well as keeping up-to-date with the 15-minute updates to GDELT!), so keep an eye out for updates to this blog post!\n\n#### How to Get GDELT into Your Own MongoDB Cluster\nThere's a high likelihood that you can't work with the data in its raw form. For one reason or another you need the data in a different format, or filtered in some way to work with it efficiently. In that case, I highly recommend you follow Adrienne's advice in her GDELT Primer README.\n\nIn the next few days we'll be publishing a tool to efficiently load the data you want into a MongoDB cluster. In the meantime, read up on GDELT, have a look at the sample data, and find some teammates to build with!\n\n### Further Reading\nThe following documents contain most of the official documentation you'll need for working with GDELT. We've summarized much of it here, but it's always good to check the source, and you'll need the CAMEO encoding listing!\n\nGDELT data documentation\n\nGDELT Master file\n\nCAMEO code guide\n\n### What next? \nWe hope the above gives you some insight into this fascinating dataset. We\u2019ve chosen it as the theme, \"Data as News\", for this year's MongoDB World Hackathon due to it\u2019s size, longevity, currency and global relevance. If you fancy exploring the GDELT dataset more, as well as learning MongoDB, and competing for some one-of-a-kind prizes, well, go ahead and sign up here to the Hackathon! We\u2019d be glad to have you! \n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "What is the GDELT dataset and how to work with it and MongoDB and participate in the MongoDB World Hackathon '22", "contentType": "Quickstart"}, "title": "An Introduction to GDELT Data", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/ruby/mongodb-jruby", "action": "created", "body": "# A Plan for MongoDB and JRuby\n\n## TLDR\nMongoDB will continue to support JRuby.\n\n## Background\nIn April 2021, our Ruby team began to discuss the possibility of removing MongoDBs official support for JRuby. 
At the time, we decided to shelve these discussions and revisit in a year. In March 2022, right on schedule, we began examining metrics and reviewing user feedback around JRuby, as well as evaluating our backlog of items around this runtime.\n\nJRuby itself is still actively maintained and used by many Ruby developers, but our own user base tends toward MRI/CRuby or \u2018vanilla Ruby\u2019. We primarily looked at telemetry from active MongoDB Atlas clusters, commercial support cases, and a number of other sources, like Stack Overflow question volume, etc.\n\nWe decided, based on the data available, that it would be safe to drop support for JRuby from our automated tests, and stop accepting pull requests related to this runtime.\n\nWe did not expect this decision to be controversial.\n\n## User Feedback\nAs a company that manages numerous open source projects, we work in a public space. Our JIRA and GitHub issues are available to peruse. And so it was not very long before a user commented on this work and asked us *not to do this please.*\n\nOne of the core JRuby maintainers, Charles Nutter, also reached out on the Ruby ticket to discuss this change.\n\nWhen we opened a pull request to action this decision, the resulting community feedback encouraged us to reconsider. As the goal of any open source project is to bolster adoption and engagement, we ultimately chose to reverse course for the time being, especially seeing as JRuby had subsequently tweeted that its upcoming 9.4 release would be compatible with both Rails 7 and Ruby 3.1.\n\nFollowing the JRuby announcement, TruffleRuby 22.1 was released, so it seems the JVM-based Ruby ecosystem is more active than we anticipated.\n\nYou can see the back and forth on RUBY-2781 and RUBY-2960.\n\n## Decision\nWe decided to reverse our decision around JRuby, quite simply, because the community asked us to. Our decisions should be informed by the open source community - not just the developers who work at MongoDB - and if we are too hasty, or wrong, we would like to be able to hear that without flinching and respond appropriately.\n\nSo. Though we weren\u2019t at RailsConf 22 this year, know that if your next application is built using JRuby, you should be able to count on MongoDB Atlas being ready to host your application\u2019s data.\n \n", "format": "md", "metadata": {"tags": ["Ruby"], "pageDescription": "MongoDB supports JRuby", "contentType": "News & Announcements"}, "title": "A Plan for MongoDB and JRuby", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/java/spring-java-mongodb-example-app2", "action": "created", "body": "# Build a MongoDB Spring Boot Java Book Tracker for Beginners\n\n## Introduction\nBuild your first application with Java and Spring! This simple application demonstrates basic CRUD operations via a book app - you can add a book, edit a book, and delete a book. It stores the data in a MongoDB database. 
\n\n## Technology\n\n* Java\n* Spring Boot\n* MongoDB\n\n", "format": "md", "metadata": {"tags": ["Java", "Spring"], "pageDescription": "Build an application to track the books you've read with Spring Boot, Java, and MongoDB", "contentType": "Code Example"}, "title": "Build a MongoDB Spring Boot Java Book Tracker for Beginners", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/active-active-application-architectures", "action": "created", "body": "# Active-Active Application Architectures with MongoDB\n\n## Introduction\n\nDetermining the best database for a modern application to be deployed across multiple data centers requires careful evaluation to accommodate a variety of complex application requirements. The database will be responsible for processing reads and writes in multiple geographies, replicating changes among them, and providing the highest possible availability, consistency, and durability guarantees. But not all technology choices are equal. For example, one database technology might provide a higher guarantee of availability while providing lower data consistency and durability guarantees than another technology. The tradeoffs made by an individual database technology will affect the behavior of the application upon which it is built.\n\nUnfortunately, there is limited understanding among many application architects as to the specific tradeoffs made by various modern databases. The popular belief appears to be that if an application must accept writes concurrently in multiple data centers, then it needs to use a multi-master database -where multiple masters are responsible for a single copy or partition of the data. This is a misconception and it is compounded by a limited understanding of the (potentially negative) implications this choice has on application behavior.\n\nTo provide some clarity on this topic, this post will begin by describing the database capabilities required by modern multi-data center applications. Next, it describes the categories of database architectures used to realize these requirements and summarize the pros and cons of each. Finally, it will look at MongoDB specifically and describe how it fits into these categories. It will list some of the specific capabilities and design choices offered by MongoDB that make it suited for global application deployments.\n\n## Active-Active Requirements\n\nWhen organizations consider deploying applications across multiple data centers (or cloud regions) they typically want to use an active-active architecture. At a high-level, this means deploying an application across multiple data centers where application servers in all data centers are simultaneously processing requests (Figure 1). This architecture aims to achieve a number of objectives:\n\n- Serve a globally distributed audience by providing local processing (low latencies)\n- Maintain always-on availability, even in the face of complete regional outages\n- Provide the best utilization of platform resources by allowing server resources in multiple data centers to be used in parallel to process application requests.\n\nAn alternative to an active-active architecture is an active-disaster recovery (also known as active-passive) architecture consisting of a primary data center (region) and one or more disaster recovery (DR) regions (Figure 2). Under normal operating conditions, the primary data center processes requests and the DR center is idle. 
The DR site only starts processing requests (becomes active) if the primary data center fails. (Under normal situations, data is replicated from primary to DR sites, so that the DR sites can take over if the primary data center fails.)\n\nThe definition of an active-active architecture is not universally agreed upon. Often, it is also used to describe application architectures that are similar to the active-DR architecture described above, with the distinction being that the failover from primary to DR site is fast (typically a few seconds) and automatic (no human intervention required). In this interpretation, an active-active architecture implies that application downtime is minimal (near zero).\n\nA common misconception is that an active-active application architecture requires a multi-master database. This is not only false, but using a multi-master database means relaxing requirements that most data owners hold dear: consistency and data durability. Consistency ensures that reads reflect the results of previous writes. Data durability ensures that committed writes will persist permanently: no data is lost due to the resolution of conflicting writes or node failures. Both of these database requirements are essential for building applications that behave in the predictable and deterministic way users expect.\n\nTo address the multi-master misconception, let's start by looking at the various database architectures that could be used to achieve an active-active application, and the pros and cons of each. Once we have done this, we will drill into MongoDB's architecture and look at how it can be used to deploy an Active-Active application architecture.\n\n## Database Requirements for Active-Active Applications\n\nWhen designing an active-active application architecture, the database tier must meet four architectural requirements (in addition to standard database functionality: powerful query language with rich secondary indexes, low latency access to data, native drivers, comprehensive operational tooling, etc.):\n\n1. **Performance** - low latency reads and writes. This typically means processing reads and writes on nodes in a data center local to the application.\n2. **Data durability** - Implemented by replicating writes to multiple nodes so that data persists when system failures occur.\n3. **Consistency** - Ensuring that readers see the results of previous writes, readers to various nodes in different regions get the same results, etc.\n4. **Availability** - The database must continue to operate when nodes, data centers, or network connections fail. In addition, the recovery from these failures should be as short as possible. A typical requirement is a few seconds.\n\nDue to the laws of physics, e.g., the speed of light, it is not possible for any database to completely satisfy all these requirements at the same time, so the important consideration for any engineering team building an application is to understand the tradeoffs made by each database and select the one that provides for the application's most critical requirements.\n\nLet's look at each of these requirements in more detail.\n\n## Performance\n\nFor performance reasons, it is necessary for application servers in a data center to be able to perform reads and writes to database nodes in the same data center, as most applications require millisecond (a few to tens) response times from databases. Communication among nodes across multiple data centers can make it difficult to achieve performance SLAs. 
If local reads and writes are not possible, then the latency associated with sending queries to remote servers significantly impacts application response time. For example, customers in Australia would not expect to have a far worse user experience than customers in the eastern US, where the e-commerce vendor's primary data center is located. In addition, the lack of network bandwidth between data centers can also be a limiting factor.\n\n## Data Durability\n\nReplication is a critical feature in a distributed database. The database must ensure that writes made to one node are replicated to the other nodes that maintain replicas of the same record, even if these nodes are in different physical locations. The replication speed and data durability guarantees provided will vary among databases, and are influenced by:\n\n- The set of nodes that accept writes for a given record\n- The situations when data loss can occur\n- Whether conflicting writes (two different writes occurring to the same record in different data centers at about the same time) are allowed, and how they are resolved when they occur\n\n## Consistency\n\nThe consistency guarantees of a distributed database vary significantly. This variance depends upon a number of factors, including whether indexes are updated atomically with data, the replication mechanisms used, how much information individual nodes have about the status of corresponding records on other nodes, etc.\n\nThe weakest level of consistency offered by most distributed databases is eventual consistency. It simply guarantees that, if all writes are stopped, the value for a record across all nodes in the database will eventually coalesce to the same value. It provides few guarantees about whether an individual application process will read the results of its write, or if the value read is the latest value for a record.\n\nThe strongest consistency guarantee that can be provided by distributed databases without severe impact on performance is causal consistency. As described by\nWikipedia, causal consistency provides the following guarantees:\n\n- **Read Your Writes**: this means that preceding write operations are indicated and reflected by the following read operations.\n- **Monotonic Reads**: this implies that an up-to-date increasing set of write operations is guaranteed to be indicated by later read operations.\n- **Writes Follow Reads**: this provides an assurance that write operations follow and come after reads by which they are influenced.\n- **Monotonic Writes**: this guarantees that write operations must go after other writes that reasonably should precede them.\n\nMost distributed databases will provide consistency guarantees between eventual and causal consistency. The closer to causal consistency, the more an application will behave as users expect, e.g., queries will return the values of previous writes, data won't appear to be lost, and data values will not change in non-deterministic ways.\n\n## Availability\n\nThe availability of a database describes how well the database survives the loss of a node, a data center, or network communication. The degree to which the database continues to process reads and writes in the event of different types of failures and the amount of time required to recover from failures will determine its availability. Some architectures will allow reads and writes to nodes isolated from the rest of the database cluster by a network partition, and thus provide a high level of availability. 
Also, different databases will vary in the amount of time it takes to detect and recover from failures, with some requiring manual operator intervention to restore a healthy database cluster.\n\n## Distributed Database Architectures\n\nThere are three broad categories of database architectures deployed to meet these requirements:\n\n1. Distributed transactions using two-phase commit\n2. Multi-Master, sometimes also called \"masterless\"\n3. Partitioned (sharded) database with multiple primaries each responsible for a unique partition of the data\n\nLet's look at each of these options in more detail, as well as the pros and cons of each.\n\n## Distributed Transactions with Two-Phase Commit\n\nA distributed transaction approach updates all nodes containing a record as part of a single transaction, instead of having writes being made to one node and then (asynchronously) replicated to other nodes. The transaction guarantees that all nodes will receive the update or the transaction will fail and all nodes will revert back to the previous state if there is any type of failure.\n\nA common protocol for implementing this functionality is called a two-phase\ncommit. The two-phase commit protocol ensures durability and multi-node consistency, but it sacrifices performance. The two-phase commit protocol requires two-phases of communication among all the nodes involved in the transaction with requests and acknowledgments sent at each phase of the operation to ensure every node commits the same write at the same time. When database nodes are distributed across multiple data centers this often pushes query latency from the millisecond range to the multi-second range. Most applications, especially those where the clients are users (mobile devices, web browsers, client applications, etc.) find this level of response time unacceptable.\n\n## Multi-Master\n\nA multi-master database is a distributed database that allows a record to be updated in one of many possible clustered nodes. (Writes are usually replicated so records exist on multiple nodes and in multiple data centers.) On the surface, a multi-master database seems like the ideal platform to realize an active-active architecture. It enables each application server to read and write to a local copy of the data with no restrictions. It has serious limitations, however, when it comes to data consistency.\n\nThe challenge is that two (or more) copies of the same record may be updated simultaneously by different sessions in different locations. This leads to two different versions of the same record and the database, or sometimes the application itself, must perform conflict resolution to resolve this inconsistency. Most often, a conflict resolution strategy, such as most recent update wins or the record with the larger number of modifications wins, is used since performance would be significantly impacted if some other more sophisticated resolution strategy was applied. This also means that readers in different data centers may see a different and conflicting value for the same record for the time between the writes being applied and the completion of the conflict resolution mechanism.\n\nFor example, let's assume we are using a multi-master database as the persistence store for a shopping cart application and this application is deployed in two data centers: East and West. 
At roughly the same time, a user in San Francisco adds an item to his shopping cart (a flashlight) while an inventory management process in the East data center invalidates a different shopping cart item (game console) for that same user in response to a supplier notification that the release date had been delayed (See times 0 to 1 in Figure 3).\n\nAt time 1, the shopping cart records in the two data centers are different. The database will use its replication and conflict resolution mechanisms to resolve this inconsistency and eventually one of the two versions of the shopping cart (See time 2 in Figure 3) will be selected. Using the conflict resolution heuristics most often applied by multi-master databases (last update wins or most updated wins), it is impossible for the user or application to predict which version will be selected. In either case, data is lost and unexpected behavior occurs. If the East version is selected, then the user's selection of a flashlight is lost, and if the West version is selected, the game console is still in the cart. Either way, information is lost. Finally, any other process inspecting the shopping cart between times 1 and 2 is going to see non-deterministic behavior as well. For example, a background process that selects the fulfillment warehouse and updates the cart shipping costs would produce results that conflict with the eventual contents of the cart. If the process is running in the West and alternative 1 becomes reality, it would compute the shipping costs for all three items, even though the cart may soon have just one item, the book.\n\nThe set of use cases for multi-master databases is limited to the capture of non-mission-critical data, like log data, where the occasional lost record is acceptable. Most use cases cannot tolerate the combination of data loss resulting from throwing away one version of a record during conflict resolution, and inconsistent reads that occur during this process.\n\n## Partitioned (Sharded) Database\n\nA partitioned database divides the database into partitions, called shards. Each shard is implemented by a set of servers, each of which contains a complete copy of the partition's data. What is key here is that each shard maintains exclusive control of its partition of the data. At any given time, for each shard, one server acts as the primary and the other servers act as secondary replicas. Reads and writes are issued to the primary copy of the data. If the primary server fails for any reason (e.g., hardware failure, network partition), one of the secondary servers is automatically elected to primary.\n\nEach record in the database belongs to a specific partition, and is managed by exactly one shard, ensuring that it can only be written to the shard's primary. The mapping of records to shards and the existence of exactly one primary per shard ensures consistency. Since the cluster contains multiple shards, and hence multiple primaries (multiple masters), these primaries may be distributed among the data centers to ensure that writes can occur locally in each datacenter (Figure 4).\n\nA sharded database can be used to implement an active-active application architecture by deploying at least as many shards as data centers and placing the primaries for the shards so that each data center has at least one primary (Figure 5). In addition, the shards are configured so that each shard has at least one replica (copy of the data) in each of the datacenters. 
For example, the diagram in Figure 5 depicts a database architecture distributed across three datacenters: New York (NYC), London (LON), and Sydney (SYD). The cluster has three shards where each shard has three replicas.\n\n- The NYC shard has a primary in New York and secondaries in London and Sydney\n- The LON shard has a primary in London and secondaries in New York and Sydney\n- The SYD shard has a primary in Sydney and secondaries in New York and London\n\nIn this way, each data center has secondaries from all the shards so the local app servers can read the entire data set and a primary for one shard so that writes can be made locally as well.\n\nThe sharded database meets most of the consistency and performance requirements for a majority of use cases. Performance is great because reads and writes happen to local servers. When reading from the primaries, consistency is assured since each record is assigned to exactly one primary. This option requires architecting the application so that users/queries are routed to the data center that manages the data (contains the primary) for the query. Often this is done via geography. For example, if we have two data centers in the United States (New Jersey and Oregon), we might shard the data set by geography (East and West) and route traffic for East Coast users to the New Jersey data center, which contains the primary for the Eastern shard, and route traffic for West Coast users to the Oregon data center, which contains the primary for the Western shard.\n\nLet's revisit the shopping cart example using a sharded database. Again, let's assume two data centers: East and West. For this implementation, we would shard (partition) the shopping carts by their shopping card ID plus a data center field identifying the data center in which the shopping cart was created. The partitioning (Figure 6) would ensure that all shopping carts with a DataCenter field value of \"East\" would be managed by the shard with the primary in the East data center. The other shard would manage carts with the value of \"West\". In addition, we would need two instances of the inventory management service, one deployed in each data center, with responsibility for updating the carts owned by the local data center.\n\nThis design assumes that there is some external process routing traffic to the correct data center. When a new cart is created, the user's session will be routed to the geographically closest data center and then assigned a DataCenter value for that data center. For an existing cart, the router can use the cart's DataCenter field to identify the correct data center.\n\nFrom this example, we can see that the sharded database gives us all the benefits of a multi-master database without the complexities that come from data inconsistency. Applications servers can read and write from their local primary, but because each cart is owned by a single primary, no inconsistencies can occur. In contrast, multi-master solutions have the potential for data loss and inconsistent reads.\n\n## Database Architecture Comparison\n\nThe pros and cons of how well each database architecture meets active-active application requirements is provided in Figure 7. In choosing between multi-master and sharded databases, the decision comes down to whether or not the application can tolerate potentially inconsistent reads and data loss. If the answer is yes, then a multi-master database might be slightly easier to deploy. If the answer is no, then a sharded database is the best option. 
Since inconsistency and data loss are not acceptable for most applications, a sharded database is usually the best option.\n\n## MongoDB Active-Active Applications\n\nMongoDB is an example of a sharded database architecture. In MongoDB, the construct of a primary server and set of secondary servers is called a replica set. Replica sets provide high availability for each shard and a mechanism, called Zone Sharding, is used to configure the set of data managed by each shard. Zone sharding makes it possible to implement the geographical partitioning described in the previous section. The details of how to accomplish this are described in the MongoDB Multi-Data Center Deployments white paper and Zone Sharding documentation, but MongoDB operates as described in the \"Partitioned (Sharded) Database\" section.\n\nNumerous organizations use MongoDB to implement active-active application architectures. For example:\n\n- Ebay has codified the use of zone sharding to enable local reads and writes as one of its standard architecture patterns.\n- YouGov deploys MongoDB for their flagship survey system, called Gryphon, in a \"write local, read global\" pattern that facilitates active-active multi data center deployments spanning data centers in North America and Europe.\n- Ogilvy and Maher uses MongoDB as the persistence store for its core auditing application. Their sharded cluster spans three data centers in North America and Europe with active data centers in North American and mainland Europe and a DR data center in London. This architecture minimizes write latency and also supports local reads for centralized analytics and reporting against the entire data set.\n\nIn addition to the standard sharded database functionality, MongoDB provides fine grain controls for write durability and read consistency that make it ideal for multi-data center deployments. For writes, a write concern can be specified to control write durability. The write concern enables the application to specify the number of replica set members that must apply the write before MongoDB acknowledges the write to the application. By providing a write concern, an application can be sure that when MongoDB acknowledges the write, the servers in one or more remote data centers have also applied the write. This ensures that database changes will not be lost in the event of node or a data center failure.\n\nIn addition, MongoDB addresses one of the potential downsides of a sharded database: less than 100% write availability. Since there is only one primary for each record, if that primary fails, then there is a period of time when writes to the partition cannot occur. MongoDB combines extremely fast failover times with retryable writes. With retryable writes, MongoDB provides automated support for retrying writes that have failed due to transient system errors such as network failures or primary elections, therefore significantly simplifying application code.\n\nThe speed of MongoDB's automated failover is another distinguishing feature that makes MongoDB ideally suited for multi-data center deployments. MongoDB is able to failover in 2-5 seconds (depending upon configuration and network reliability), when a node or data center fails or network split occurs. (Note, secondary reads can continue during the failover period.) After a failure occurs, the remaining replica set members will elect a new primary and MongoDB's driver, upon which most applications are built, will automatically identify this new primary. 
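\n\nFrom the application's point of view, opting into retryable, majority-acknowledged writes is mostly a matter of connection options. Here is a minimal Node.js sketch (the URI, database, collection, and field names are placeholders):\n\n```js\nconst { MongoClient } = require(\"mongodb\");\n\n// retryWrites=true: the driver transparently retries a write that fails due to a\n// transient error such as a network blip or a primary election.\n// w=majority: the write is acknowledged only after a majority of replica set members\n// (which can span data centers) have applied it.\nconst uri = \"mongodb+srv://user:pass@cluster0.example.net/?retryWrites=true&w=majority\"; // placeholder\nconst client = new MongoClient(uri);\n\nasync function addToCart(cartId, item) {\n  await client.connect();\n  await client.db(\"shop\").collection(\"carts\")\n    .updateOne({ _id: cartId }, { $push: { items: item } });\n}\n```\n\n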
The recovery process is automatic and writes continue after the failover process completes.\n\nFor reads, MongoDB provides two capabilities for specifying the desired level of consistency. First, when reading from secondaries, an application can specify a maximum staleness value (maxStalenessSeconds). This ensures that the secondary's replication lag from the primary cannot be greater than the specified duration, and thus, guarantees the currentness of the data being returned by the secondary. In addition, a read can also be associated with a ReadConcern to control the consistency of the data returned by the query. For example, a ReadConcern of majority tells MongoDB to only return data that has been replicated to a majority of nodes in the replica set. This ensures that the query is only reading data that will not be lost due to a node or data center failure, and gives the application a consistent view of the data over time.\n\nMongoDB 3.6 also introduced causal consistency - guaranteeing that every read operation within a client session will always see the previous write operation, regardless of which replica is serving the request. By enforcing strict, causal ordering of operations within a session, causal consistency ensures every read is always logically consistent, enabling monotonic reads from a distributed system - guarantees that cannot be met by most multi-node databases. Causal consistency allows developers to maintain the benefits of strict data consistency enforced by legacy single node relational databases, while modernizing their infrastructure to take advantage of the scalability and availability benefits of modern distributed data platforms.\n\n## Conclusion\n\nIn this post we have shown that sharded databases provide the best support for the replication, performance, consistency, and local-write, local-read requirements of active-active applications. The performance of distributed transaction databases is too slow and multi-master databases do not provide the required consistency guarantees. In addition, MongoDB is especially suited for multi-data center deployments due to its distributed architecture, fast failover and ability for applications to specify desired consistency and durability guarantees through Read and Write Concerns.\n\nView the MongoDB Architect Hub\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "This post will begin by describing the database capabilities required by modern multi-data center applications.", "contentType": "Article"}, "title": "Active-Active Application Architectures with MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-search-exact-match", "action": "created", "body": "# Exact Matches in Atlas Search: Beginners Guide\n\n## Contributors\nMuch of this article was contributed by a MongoDB Intern, Humayara Karim. Thanks for spending your summer with us!\n\n## Introduction\n\nSearch engines are powerful tools that users rely on when they're looking for information. They oftentimes rely on them to handle the misspelling of words through a feature called fuzzy matching. Fuzzy matching identifies text, string, and even queries that are very similar but not the same. This is very useful.\n\nBut a lot of the time, the search that is most useful is an exact match. I'm looking for a word, `foobar`, and I want `foobar`, not `foobarr` and not `greenfoobart`. 
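To see why that distinction matters, here is roughly what a fuzzy Atlas Search query looks like (a sketch against a hypothetical `products` collection with a `name` field):\n\n```\ndb.products.aggregate([\n  {\n    \"$search\": {\n      \"text\": {\n        \"query\": \"foobar\",\n        \"path\": \"name\",\n        \"fuzzy\": { \"maxEdits\": 1 }\n      }\n    }\n  }\n])\n```\n\nWith `maxEdits` set to 1, this happily returns near-misses like `foobarr`, which is great for catching typos but unhelpful when you need the exact term.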
\n\nLuckily, Atlas Search has solutions for both fuzzy searches as well as exact matches. This tutorial will focus on the different ways users can achieve exact matches as well as the pros and cons of each. In fact, there are quite a few ways to achieve exact matches with Atlas Search. \n\n## (Let us count the) Ways to Exact Match in MongoDB\nJust like the NYC subway system, there are many ways to get to the same destination, and not all of them are good. So let's talk about the various methods of doing exact match searches, and the pros and cons. \n\n## Atlas Search Index Analyzers\nThese are policies that allow users to define filters for the text matches they are looking for. For example, if you wanted to find an exact match for a string of text, the best analyzer to use would be the **Keyword Analyzer** as this analyzer indexes text fields as single terms by accepting a string or array of strings as a parameter. \n\nIf you wanted to return exact matches that contain a specific word, the **Standard Analyzer** would be your go-to as it divides texts based on word-boundaries. It's crucial to first identify and understand the appropriate analyzer you will need based on your use case. This is where MongoDB makes our life easier because you can find all the built-in analyzers Atlas Search supports and their purposes all in one place, as shown below: \n\n**Pros**: Users can also make custom and multi analyzers to cater to specific application needs. There are examples on the MongoDB Developer Community Forums demonstrating folks doing this in the wild. \n\nHere's some code for case insensitive search using a custom analyzer and with the keyword tokenizer and a lowercase token filter:\n\n```\"analyzers\": \n {\n \"charFilters\": [],\n \"name\": \"search_keyword_lowercaser\",\n \"tokenFilters\": [\n {\n \"type\": \"lowercase\"\n }\n ],\n \"tokenizer\": {\n \"type\": \"keyword\"\n }\n }\n ]\n```\n\nOr, a lucene.keyword analyzer for single-word exact match queries and phrase query for multi-word exact match queries [here:\n```\n{\n $search: {\n \"index\": \"movies_search_index\"\n \"phrase\": {\n \"query\": \"Red Robin\",\n \"path\": \"title\"\n }\n }\n}\n```\n\n**Cons**: Dealing with case insensitivity search isn\u2019t super straightforward. It's not impossible, of course, but it requires a few extra steps where you would have to define a custom analyzer and run a diacritic-insensitive query.\n\nThere's a step by step guide on how to do this here. \n\n## The Phrase Operator\nAKA a \"multi-word exact match thing.\" The Phrase Operator can get exact match queries on multiple words (tokens) in a field. But why use a phrase operator instead of only relying on an analyzer? It\u2019s because the phrase operator searches for an *ordered sequence* of terms with the help of an analyzer defined in the index configuration. 
Take a look at this example, where we want to search the phrases \u201cthe man\u201d and \u201cthe moon\u201d in a movie titles collection:\n\n```\ndb.movies.aggregate(\n{\n \"$search\": {\n \"phrase\": {\n \"path\": \"title\",\n \"query\": [\"the man\", \"the moon\"]\n }\n }\n},\n{ $limit: 10 },\n{\n $project: {\n \"_id\": 0,\n \"title\": 1,\n score: { $meta: \"searchScore\" }\n }\n}\n])\n```\n \n As you can see, the query returns all the results the contain ordered sequence terms \u201cthe man\u201d and \u201cthe moon.\u201d\n```\n{ \"title\" : \"The Man in the Moon\", \"score\" : 4.500046730041504 }\n{ \"title\" : \"Shoot the Moon\", \"score\" : 3.278003215789795 }\n{ \"title\" : \"Kick the Moon\", \"score\" : 3.278003215789795 }\n{ \"title\" : \"The Man\", \"score\" : 2.8860299587249756 }\n{ \"title\" : \"The Moon and Sixpence\", \"score\" : 2.8754563331604004 }\n{ \"title\" : \"The Moon Is Blue\", \"score\" : 2.8754563331604004 }\n{ \"title\" : \"Racing with the Moon\", \"score\" : 2.8754563331604004 }\n{ \"title\" : \"Mountains of the Moon\", \"score\" : 2.8754563331604004 }\n{ \"title\" : \"Man on the Moon\", \"score\" : 2.8754563331604004 }\n{ \"title\" : \"Castaway on the Moon\", \"score\" : 2.8754563331604004 }\n```\n\n**Pros:** There are quite a few [field type options you can use with phrase that gives users the flexibility to customize the exact phrases they want to return. \n\n**Cons:** The phrase operator isn\u2019t compatible with synonym search. What this means is that even if you have synonyms enabled, there can be a chance where your search results are whole phrases instead of an individual word. However, you can use the compound operator with two should clauses, one with the text query that uses synonyms and another that doesn't, to help go about this issue. Here is a sample code snippet of how to achieve this:\n\n```\n compound: {\n should: \n {\n text: {\n query: \"radio tower\",\n path: {\n \"wildcard\": \"*\"\n },\n synonyms: \"synonymCollection\"\n }\n },\n {\n text: {\n query: \"radio tower\",\n path: {\n \"wildcard\": \"*\"\n }\n }\n }\n ]\n }\n}\n```\n\n## Autocomplete Operator\nThere are few things in life that thrill me as much as the [autocomplete. Remember the sheer, wild joy of using that for the first time with Google search? It was just brilliant. It was one of things that made me want to work in technology in the first place. You type, and the machines *know what you're thinking!*\n\nAnd oh yea, it helps me from getting \"no search results\" repeatedly by guiding me to the correct terminology.\n\nTutorial on how to implement this for yourself is here. \n\n**Pros:** Autocomplete is awesome. Faster and more responsive search!\n**Cons:** There are some limitations with auto-complete. You essentially have to weigh the tradeoffs between *faster* results vs *more relevant* results. There are potential workarounds, of course. You can get your exact match score higher by making your autocompleted fields indexed as a string, querying using compound operators, etc... but yea, those tradeoffs are real. I still think it's preferable over plain search, though. 
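\n\nFor reference, a typical autocomplete query is short and sweet. This sketch assumes the `title` field has been mapped with the `autocomplete` type in the search index:\n\n```\n// Assumes an autocomplete-type mapping on the title field\ndb.movies.aggregate([\n  {\n    \"$search\": {\n      \"autocomplete\": {\n        \"query\": \"red rob\",\n        \"path\": \"title\"\n      }\n    }\n  },\n  { $limit: 5 },\n  {\n    $project: {\n      \"_id\": 0,\n      \"title\": 1\n    }\n  }\n])\n```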
\n\n## Text Operator\nAs the name suggests, this operator allows users to search text.\nHere is how the syntax for the text operator looks:\n```\n{\n $search: {\n \"index\": , // optional, defaults to \"default\"\n \"text\": {\n \"query\": \"\",\n \"path\": \"\",\n \"fuzzy\": ,\n \"score\": ,\n \"synonyms\": \"\"\n }\n }\n}\n```\n \nIf you're searching for a *single term* and want to use full text search to do it, this is the operator for you. Simple, effective, no frills. It's simplicity means it's hard to mess up, and you can use it in complex use cases without worrying. You can also layer the text operator with other items.\n \n The `text` operator also supports synonyms and score matching as shown here:\n \n```\ndb.movies.aggregate(\n {\n $search: {\n \"text\": {\n \"path\": \"title\",\n \"query\": \"automobile\",\n \"synonyms\": \"transportSynonyms\"\n }\n }\n },\n {\n $limit: 10\n },\n {\n $project: {\n \"_id\": 0,\n \"title\": 1,\n \"score\": { $meta: \"searchScore\" }\n }\n }\n])\n```\n \n```\ndb.movies.aggregate([\n {\n $search: {\n \"text\": {\n \"query\": \"Helsinki\",\n \"path\": \"plot\"\n }\n }\n },\n {\n $project: {\n plot: 1,\n title: 1,\n score: { $meta: \"searchScore\" }\n }\n }\n])\n```\n\n**Pros:** Straightforward, easy to use. \n**Cons:** The terms in your query are considered individually, so if you want to return a result that contains more than a single word, you have to nest your operators. Not a huge deal, but as a downside, you'll probably have to conduct a little research on the [other operators that fit with your use case. \n\n## Highlighting\nAlthough this feature doesn\u2019t necessarily return exact matches like the other features, it's worth *highlighting. (See what I did there?!)*\n\nI love this feature. It's super useful. Highlight allows users to visually see exact matches. This option also allows users to visually return search terms in their original context. In your application UI, the highlight feature looks like so:\n\nIf you\u2019re interested in learning how to build an application like this, here is a step by step tutorial visually showing Atlas Search highlights with JavaScript and HTML.\n\n**Pros**: Aesthetically, this feature enhances user search experience because users can easily see what they are searching for in a given text.\n\n**Cons**: It can be costly if passages are long because a lot more RAM will be needed to hold the data. In addition, this feature does not work with autocomplete. \n\n## Conclusion\n\nUltimately, there are many ways to achieve exact matches with Atlas Search. Your best approach is to skim through a few of the tutorials in the documentation and take a look at the Atlas search section here in the DevCenter and then tinker with it.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "This tutorial will focus on the different ways users can achieve exact matches as well as the pros and cons of each.", "contentType": "Article"}, "title": "Exact Matches in Atlas Search: Beginners Guide", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/csharp/saving-data-in-unity3d-using-sqlite", "action": "created", "body": "# Saving Data in Unity3D Using SQLite\n\n(Part 4 of the Persistence Comparison Series)\n\nOur journey of exploring options given to use when it comes persistence in Unity will in this part lead to databases. More specificaclly: SQLite.\n\nSQLite is a C-based database that is used in many areas. 
It has been around for a long time and also found its way into the Unity world. During this tutorial series, we have seen options like `PlayerPrefs` in Unity, and on the other side, `File` and `BinaryWriter`/`BinaryReader` provided by the underlying .NET framework.\n\nHere is an overview of the complete series:\n\n- Part 1: PlayerPrefs\n- Part 2: Files\n- Part 3: BinaryReader and BinaryWriter\n- Part 4: SQL *(this tutorial)*\n- Part 5: Realm Unity SDK *(coming soon)*\n- Part 6: Comparison of all these options\n\nSimilar to the previous parts, this tutorial can also be found in our Unity examples repository on the persistence-comparison branch.\n\nEach part is sorted into a folder. The three scripts we will be looking at in this tutorial are in the `SQLite` sub folder. But first, let's look at the example game itself and what we have to prepare in Unity before we can jump into the actual coding.\n\n## Example game\n\n*Note that if you have worked through any of the other tutorials in this series, you can skip this section since we're using the same example for all parts of the series, so that it's easier to see the differences between the approaches.*\n\nThe goal of this tutorial series is to show you a quick and easy way to make some first steps in the various ways to persist data in your game.\n\nTherefore, the example we'll be using will be as simple as possible in the editor itself so that we can fully focus on the actual code we need to write.\n\nA simple capsule in the scene will be used so that we can interact with a game object. We then register clicks on the capsule and persist the hit count.\n\nWhen you open up a clean 3D template, all you need to do is choose `GameObject` -> `3D Object` -> `Capsule`.\n\nYou can then add scripts to the capsule by activating it in the hierarchy and using `Add Component` in the inspector.\n\nThe scripts we will add to this capsule showcasing the different methods will all have the same basic structure that can be found in `HitCountExample.cs`.\n\n```cs\nusing UnityEngine;\n\n/// \n/// This script shows the basic structure of all other scripts.\n/// \npublic class HitCountExample : MonoBehaviour\n{\n // Keep count of the clicks.\n SerializeField] private int hitCount; // 1\n\n private void Start() // 2\n {\n // Read the persisted data and set the initial hit count.\n hitCount = 0; // 3\n }\n\n private void OnMouseDown() // 4\n {\n // Increment the hit count on each click and save the data.\n hitCount++; // 5\n }\n}\n```\n\nThe first thing we need to add is a counter for the clicks on the capsule (1). Add a `[SerilizeField]` here so that you can observe it while clicking on the capsule in the Unity editor.\n\nWhenever the game starts (2), we want to read the current hit count from the persistence and initialize `hitCount` accordingly (3). This is done in the `Start()` method that is called whenever a scene is loaded for each game object this script is attached to.\n\nThe second part to this is saving changes, which we want to do whenever we register a mouse click. The Unity message for this is `OnMouseDown()` (4). This method gets called every time the `GameObject` that this script is attached to is clicked (with a left mouse click). 
In this case, we increment the `hitCount` (5) which will eventually be saved by the various options shown in this tutorials series.\n\n## SQLite\n\n(See `SqliteExampleSimple.cs` in the repository for the finished version.)\n\nNow let's make sure our hit count gets persisted so we can continue playing the next time we start the game.\n\nSQLite is not included per default in a new Unity project and is also not available directly via the Unity package manager. We have to install two components to start using it.\n\nFirst, head over to [https://sqlite.org/download.html and choose the `Precompiled Binaries` for your operating system. Unzip it and add the two files\u2014`sqlite3.def` and `sqlite3.dll`\u2014to the `Plugin` folder in your Unity project.\n\nThen, open a file explorer in your Unity Hub installation directory, and head to the following sub directory:\n\n```\nUnity/Hub/Editor/2021.2.11f1/Editor/Data/MonoBleedingEdge/lib/mono/unity\n```\n\nIn there, you will find the file `Mono.Data.Sqlite.dll` which also needs to be moved to the `Plugins` folder in your Unity project. The result when going back to the Editor should look like this:\n\nNow that the preparations are finished, we want to add our first script to the capsule. Similar to the `HitCountExample.cs`, create a new `C# script` and name it `SqliteExampleSimple`.\n\nWhen opening it, the first thing we want to do is import SQLite by adding `using Mono.Data.Sqlite;` and `using System.Data;` at the top of the file (1).\n\nNext we will look at how to save whenever the hit count is changed, which happens during `OnMouseDown()`. First we need to open a connection to the database. This is offered by the SQLite library via the `IDbConnection` class (2) which represents an open connection to the database. Since we will need a connection for loading the data later on again, we will extract opening a database connection into another function and call it `private IDbConnection CreateAndOpenDatabase()` (3).\n\nIn there, we first define a name for our database file. I'll just call it `MyDatabase` for now. Accordingly, the URI should be `\"URI=file:MyDatabase.sqlite\"` (4). 
Then we can create a connection to this database using `new SqliteConnection(dbUri)` (5) and open it with `dbConnection.Open()` (6).\n\n```cs\nusing Mono.Data.Sqlite; // 1\nusing System.Data; // 1\nusing UnityEngine;\n\npublic class SqliteExampleSimple : MonoBehaviour\n{\n // Resources:\n // https://www.mono-project.com/docs/database-access/providers/sqlite/\n\n SerializeField] private int hitCount = 0;\n\n void Start() // 13\n {\n // Read all values from the table.\n IDbConnection dbConnection = CreateAndOpenDatabase(); // 14\n IDbCommand dbCommandReadValues = dbConnection.CreateCommand(); // 15\n dbCommandReadValues.CommandText = \"SELECT * FROM HitCountTableSimple\"; // 16\n IDataReader dataReader = dbCommandReadValues.ExecuteReader(); // 17\n\n while (dataReader.Read()) // 18\n {\n // The `id` has index 0, our `hits` have the index 1.\n hitCount = dataReader.GetInt32(1); // 19\n }\n\n // Remember to always close the connection at the end.\n dbConnection.Close(); // 20\n }\n\n private void OnMouseDown()\n {\n hitCount++;\n\n // Insert hits into the table.\n IDbConnection dbConnection = CreateAndOpenDatabase(); // 2\n IDbCommand dbCommandInsertValue = dbConnection.CreateCommand(); // 9\n dbCommandInsertValue.CommandText = \"INSERT OR REPLACE INTO HitCountTableSimple (id, hits) VALUES (0, \" + hitCount + \")\"; // 10\n dbCommandInsertValue.ExecuteNonQuery(); // 11\n\n // Remember to always close the connection at the end.\n dbConnection.Close(); // 12\n }\n\n private IDbConnection CreateAndOpenDatabase() // 3\n {\n // Open a connection to the database.\n string dbUri = \"URI=file:MyDatabase.sqlite\"; // 4\n IDbConnection dbConnection = new SqliteConnection(dbUri); // 5\n dbConnection.Open(); // 6\n\n // Create a table for the hit count in the database if it does not exist yet.\n IDbCommand dbCommandCreateTable = dbConnection.CreateCommand(); // 6\n dbCommandCreateTable.CommandText = \"CREATE TABLE IF NOT EXISTS HitCountTableSimple (id INTEGER PRIMARY KEY, hits INTEGER )\"; // 7\n dbCommandCreateTable.ExecuteReader(); // 8\n\n return dbConnection;\n }\n}\n```\n\nNow we can work with this SQLite database. Before we can actually add data to it, though, we need to set up a structure. This means creating and defining tables, which is the way most databases are organized. The following screenshot shows the final state we will create in this example.\n\n![\n\nWhen accessing or modifying the database, we use `IDbCommand` (6), which represents an SQL statement that can be executed on a database.\n\nLet's create a new table and define some columns using the following command (7):\n\n```sql\n\"CREATE TABLE IF NOT EXISTS HitCountTableSimple (id INTEGER PRIMARY KEY, hits INTEGER )\"\n```\n\nSo, what does this statement mean? First, we need to state what we want to do, which is `CREATE TABLE IF NOT EXISTS`. Then, we need to name this table, which will just be the same as the script we are working on right now: `HitCountTableSimple`.\n\nLast but not least, we need to define how this new table is supposed to look. This is done by naming all columns as a tuple: `(id INTEGER PRIMARY KEY, hits INTEGER )`. The first one defines a column `id` of type `INTEGER` which is our `PRIMARY KEY`. The second one defines a column `hits` of type `INTEGER`.\n\nAfter assigning this statement as the `CommandText`, we need to call `ExecuteReader()` (8) on `dbCommandCreateTable` to run it.\n\nNow back to `OnMouseClicked()`. 
With the `dbConnection` created, we can now go ahead and define another `IDbCommand` (9) to modify the new table we just created and add some data. This time, the `CommandText` (10) will be:\n\n```sql\n\"INSERT OR REPLACE INTO HitCountTableSimple (id, hits) VALUES (0, \" + hitCount + \")\"\n```\n\nLet's decipher this one too: `INSERT OR REPLACE INTO` adds a new variable to a table or updates it, if it already exists. Next is the table name that we want to insert into, `HitCountTableSimple`. This is followed by a tuple of columns that we would like to change, `(id, hits)`. The statement `VALUES (0, \" + hitCount + \")` then defines values that should be inserted, also as a tuple. In this case, we just choose `0` for the key and use whatever the current `hitCount` is as the value.\n\nOpposed to creating the table, we execute this command calling `ExecuteNonQuery()` (11) on it.\n\nThe difference can be defined as follows:\n\n> ExecuteReader is used for any result set with multiple rows/columns (e.g., SELECT col1, col2 from sometable). ExecuteNonQuery is typically used for SQL statements without results (e.g., UPDATE, INSERT, etc.).\n\nAll that's left to do is to properly `Close()` (12) the database.\n\nHow can we actually verify that this worked out before we continue on to reading the values from the database again? Well, the easiest way would be to just look into the database. There are many tools out there to achieve this. One of the open source options would be https://sqlitebrowser.org/.\n\nAfter downloading and installing it, all you need to do is `File -> Open Database`, and then browse to your Unity project and select the `MyDatabase.sqlite` file. If you then choose the `Table` `HitCountTableSimple`, the result should look something like this:\n\nGo ahead and run your game. Click a couple times on the capsule and check the Inspector for the change. When you then go back to the DB browser and click refresh, the same number should appear in the `value` column of the table.\n\nThe next time we start the game, we want to load this hit count from the database again. We use the `Start()` function (13) since it only needs to be done when the scene loads. As before, we need to get a hold of the database with an `IDbConnection` (14) and create a new `IDbCommand` (15) to read the data. Since there is only one table and one value, it's quite simple for now. We can just read `all data` by using: \n\n```sql\n\"SELECT * FROM HitCountTableSimple\"\n```\n\nIn this case, `SELECT` stands for `read the following values`, followed by a `*` which indicates to read all the data. The keyword `FROM` then specifies the table that should be read from, which is again `HitCountTableSimple`. Finally, we execute this command using `ExecuteReader()` (17) since we expect data back. This data is saved in an `IDataReader`, from the documentation:\n\n> Provides a means of reading one or more forward-only streams of result sets obtained by executing a command at a data source, and is implemented by .NET data providers that access relational databases.\n\n`IDataReader` addresses its content in an index fashion, where the ordering matches one of the columns in the SQL table. So in our case, `id` has index 0, and `hitCount` has index 1. The way this data is read is row by row. Each time we call `dataReader.Read()` (18), we read another row from the table. Since we know there is only one row in the table, we can just assign the `value` of that row to the `hitCount` using its index 1. 
The `value` is of type `INTEGER` so we need to use `GetInt32(1)` to read it and specify the index of the field we want to read as a parameter, `id` being `0` and `value` being `1`.\n\nAs before, in the end, we want to properly `Close()` the database (20).\n\nWhen you restart the game again, you should now see an initial value for `hitCount` that is read from the database.\n\n## Extended example\n\n(See `SqliteExampleExtended.cs` in the repository for the finished version.)\n\nIn the previous section, we looked at the most simple version of a database example you can think of. One table, one row, and only one value we're interested in. Even though a database like SQLite can deal with any kind of complexity, we want to be able to compare it to the previous parts of this tutorial series and will therefore look at the same `Extended example`, using three hit counts instead of one and using modifier keys to identify them: `Shift` and `Control`.\n\nLet's start by creating a new script `SqliteExampleExtended.cs` and attach it to the capsule. Copy over the code from `SqliteExampleSimple` and apply the following changes to it. First, defie the three hit counts:\n\n```cs\nSerializeField] private int hitCountUnmodified = 0;\n[SerializeField] private int hitCountShift = 0;\n[SerializeField] private int hitCountControl = 0;\n```\n\nDetecting which key is pressed (in addition to the mouse click) can be done using the [`Input` class that is part of the Unity SDK. Calling `Input.GetKey()`, we can check if a certain key was pressed. This has to be done during `Update()` which is the Unity function that is called each frame. The reason for this is stated in the documentation:\n\n> Note: Input flags are not reset until Update. You should make all the Input calls in the Update Loop.\n\nThe key that was pressed needs to be remembered when recieving the `OnMouseDown()` event. Hence, we need to add a private field to save it like so: \n\n```cs\nprivate KeyCode modifier = default;\n```\n\nNow the `Update()` function can look like this:\n\n```cs\nprivate void Update()\n{\n // Check if a key was pressed.\n if (Input.GetKey(KeyCode.LeftShift)) // 1\n {\n // Set the LeftShift key.\n modifier = KeyCode.LeftShift; // 2\n }\n else if (Input.GetKey(KeyCode.LeftControl)) // 1\n {\n // Set the LeftControl key.\n modifier = KeyCode.LeftControl; // 2\n }\n else // 3\n {\n // In any other case reset to default and consider it unmodified.\n modifier = default; // 4\n }\n}\n```\n\nFirst, we check if the `LeftShift` or `LeftControl` key was pressed (1) and if so, save the corresponding `KeyCode` in `modifier`. Note that you can use the `string` name of the key that you are looking for or the more type-safe `KeyCode` enum.\n\nIn case neither of those two keys were pressed (3), we define this as the `unmodified` state and just set `modifier` back to its `default` (4).\n\nBefore we continue on to `OnMouseClicked()`, you might ask what changes we need to make in the database structure that is created by `private IDbConnection CreateAndOpenDatabase()`. It turns out we actually don't need to change anything at all. 
We will just use the `id` introduced in the previous section and save the `KeyCode` (which is an integer) in it.\n\nTo be able to compare both versions later on, we will change the table name though and call it `HitCountTableExtended`:\n\n```cs\ndbCommandCreateTable.CommandText = \"CREATE TABLE IF NOT EXISTS HitCountTableExtended (id INTEGER PRIMARY KEY, hits INTEGER)\";\n```\n\nNow, let's look at how detecting mouse clicks needs to be modified to account for those keys:\n\n```cs\nprivate void OnMouseDown()\n{\n var hitCount = 0;\n switch (modifier) // 1\n {\n case KeyCode.LeftShift:\n // Increment the hit count and set it to PlayerPrefs.\n hitCount = ++hitCountShift; // 2\n break;\n case KeyCode.LeftControl:\n // Increment the hit count and set it to PlayerPrefs.\n hitCount = ++hitCountControl; // 2\n break;\n default:\n // Increment the hit count and set it to PlayerPrefs.\n hitCount = ++hitCountUnmodified; // 2\n break;\n }\n\n // Insert a value into the table.\n IDbConnection dbConnection = CreateAndOpenDatabase();\n IDbCommand dbCommandInsertValue = dbConnection.CreateCommand();\n dbCommandInsertValue.CommandText = \"INSERT OR REPLACE INTO HitCountTableExtended (id, hits) VALUES (\" + (int)modifier + \", \" + hitCount + \")\";\n dbCommandInsertValue.ExecuteNonQuery();\n\n // Remember to always close the connection at the end.\n dbConnection.Close();\n}\n```\n\nFirst, we need to check which modifier was used in the last frame (1). Depending on this, we increment the corresponding hit count and assign it to the local variable `hitCount` (2). As before, we count any other key than `LeftShift` and `LeftControl` as `unmodified`.\n\nNow, all we need to change in the second part of this function is the `id` that we set statically to `0` before and instead use the `KeyCode`. The updated SQL statement should look like this:\n\n```sql\n\"INSERT OR REPLACE INTO HitCountTableExtended (id, hits) VALUES (\" + (int)modifier + \", \" + hitCount + \")\"\n```\n\nThe `VALUES` tuple now needs to set `(int)modifier` (note that the `enum` needs to be casted to `int`) and `hitCount` as its two values.\n\nAs before, we can start the game and look at the saving part in action first. Click a couple times until the Inspector shows some numbers for all three hit counts:\n\nNow, let's open the DB browser again and this time choose the `HitCountTableExtended` from the drop-down:\n\nAs you can see, there are three rows, with the `value` being equal to the hit counts you see in the Inspector. 
In the `id` column, we see the three entries for `KeyCode.None` (0), `KeyCode.LeftShift` (304), and `KeyCode.LeftControl` (306).\n\nFinally, let's read those values from the database when restarting the game.\n\n```cs\nvoid Start()\n{\n // Read all values from the table.\n IDbConnection dbConnection = CreateAndOpenDatabase(); // 1\n IDbCommand dbCommandReadValues = dbConnection.CreateCommand(); // 2\n dbCommandReadValues.CommandText = \"SELECT * FROM HitCountTableExtended\"; // 3\n IDataReader dataReader = dbCommandReadValues.ExecuteReader(); // 4\n\n while (dataReader.Read()) // 5\n {\n // The `id` has index 0, our `value` has the index 1.\n var id = dataReader.GetInt32(0); // 6\n var hits = dataReader.GetInt32(1); // 7\n if (id == (int)KeyCode.LeftShift) // 8\n {\n hitCountShift = hits; // 9\n }\n else if (id == (int)KeyCode.LeftControl) // 8\n {\n hitCountControl = hits; // 9\n }\n else\n {\n hitCountUnmodified = hits; // 9\n }\n }\n\n // Remember to always close the connection at the end.\n dbConnection.Close();\n}\n```\n\nThe first part works basically unchanged by creating a `IDbConnection` (1) and a `IDbCommand` (2) and then reading all rows again with `SELECT *` (3) but this time from `HitCountTableExtended`, finished by actually executing the command with `ExecuteReader()` (4).\n\nFor the next part, we now need to read each row (5) and then check which `KeyCode` it belongs to. We grab the `id` from index `0` (6) and the `hits` from index `1` (7) as before. Then, we check the `id` against the `KeyCode` (8) and assign it to the corresponding `hitCount` (9).\n\nNow restart the game and try it out!\n\n## Conclusion\n\nSQLite is one of the options when it comes to persistence. If you've read the previous tutorials, you've noticed that using it might at first seem a bit more complicated than the simple `PlayerPrefs`. You have to learn an additional \"language\" to be able to communicate with your database. And due to the nature of SQL not being the easiest format to read, it might seem a bit intimidating at first. But the world of databases offers a lot more than can be shown in a short tutorial like this!\n\nOne of the downsides of plain files or `PlayerPrefs` that we've seen was having data in a structured way\u2014especially when it gets more complicated or relationships between objects should be drawn. We looked at JSON as a way to improve that situation but as soon as we need to change the format and migrate our structure, it gets quite complicated. Encryption is another topic that might be important for you\u2014`PlayerPrefs` and `File` are not safe and can easily be read. Those are just some of the areas a database like SQLite might help you achieve the requirements you have for persisting your data.\n\nIn the next tutorial, we will look at another database, the Realm Unity SDK, which offers similar advantages to SQLite, while being very easy to use at the same time.\n\nPlease provide feedback and ask any questions in the Realm Community Forum.", "format": "md", "metadata": {"tags": ["C#", "MongoDB", "Unity", "SQL"], "pageDescription": "Persisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well. 
In this tutorial series, we will explore the options given to us by Unity and third-party libraries.", "contentType": "Code Example"}, "title": "Saving Data in Unity3D Using SQLite", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/seed-database-with-fake-data", "action": "created", "body": "# How to Seed a MongoDB Database with Fake Data\n\nHave you ever worked on a MongoDB project and needed to seed your\ndatabase with fake data in order to provide initial values for lookups,\ndemo purposes, proof of concepts, etc.? I'm biased, but I've had to seed\na MongoDB database countless times.\n\nFirst of all, what is database seeding? Database seeding is the initial\nseeding of a database with data. Seeding a database is a process in\nwhich an initial set of data is provided to a database when it is being\ninstalled.\n\nIn this post, you will learn how to get a working seed script setup for\nMongoDB databases using Node.js and\nfaker.js.\n\n## The Code\n\nThis example code uses a single collection of fake IoT data (that I used\nto model for my IoT Kitty Litter Box\nproject).\nHowever, you can change the shape of your template document to fit the\nneeds of your application. I am using\nfaker.js to create the fake data.\nPlease refer to the\ndocumentation\nif you want to make any changes. You can also adapt this script to seed\ndata into multiple collections or databases, if needed.\n\nI am saving my data into a MongoDB\nAtlas database. It's the easiest\nway to get a MongoDB database up and running. You'll need to get your\nMongoDB connection\nURI before you can\nrun this script. For information on how to connect your application to\nMongoDB, check out the\ndocs.\n\nAlright, now that we have got the setup out of the way, let's jump into\nthe code!\n\n``` js\n/* mySeedScript.js */\n\n// require the necessary libraries\nconst faker = require(\"faker\");\nconst MongoClient = require(\"mongodb\").MongoClient;\n\nfunction randomIntFromInterval(min, max) { // min and max included \n return Math.floor(Math.random() * (max - min + 1) + min);\n}\n\nasync function seedDB() {\n // Connection URL\n const uri = \"YOUR MONGODB ATLAS URI\";\n\n const client = new MongoClient(uri, {\n useNewUrlParser: true,\n // useUnifiedTopology: true,\n });\n\n try {\n await client.connect();\n console.log(\"Connected correctly to server\");\n\n const collection = client.db(\"iot\").collection(\"kitty-litter-time-series\");\n\n // The drop() command destroys all data from a collection.\n // Make sure you run it against proper database and collection.\n collection.drop();\n\n // make a bunch of time series data\n let timeSeriesData = ];\n\n for (let i = 0; i < 5000; i++) {\n const firstName = faker.name.firstName();\n const lastName = faker.name.lastName();\n let newDay = {\n timestamp_day: faker.date.past(),\n cat: faker.random.word(),\n owner: {\n email: faker.internet.email(firstName, lastName),\n firstName,\n lastName,\n },\n events: [],\n };\n\n for (let j = 0; j < randomIntFromInterval(1, 6); j++) {\n let newEvent = {\n timestamp_event: faker.date.past(),\n weight: randomIntFromInterval(14,16),\n }\n newDay.events.push(newEvent);\n }\n timeSeriesData.push(newDay);\n }\n collection.insertMany(timeSeriesData);\n\n console.log(\"Database seeded! 
:)\");\n client.close();\n } catch (err) {\n console.log(err.stack);\n }\n}\n\nseedDB();\n```\n\nAfter running the script above, be sure to check out your database to\nensure that your data has been properly seeded. This is what my database\nlooks after running the script above.\n\n![Screenshot showing the seeded data in a MongoDB Atlas cluster.\n\nOnce your fake seed data is in the MongoDB database, you're done!\nCongratulations!\n\n## Wrapping Up\n\nThere are lots of reasons you might want to seed your MongoDB database,\nand populating a MongoDB database can be easy and fun without requiring\nany fancy tools or frameworks. We have been able to automate this task\nby using MongoDB, faker.js, and Node.js. Give it a try and let me know\nhow it works for you! Having issues with seeding your database? We'd\nlove to connect with you. Join the conversation on the MongoDB\nCommunity Forums.\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to seed a MongoDB database with fake data.", "contentType": "Tutorial"}, "title": "How to Seed a MongoDB Database with Fake Data", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/javascript/react-query-rest-api-realm", "action": "created", "body": "\nYou need to enable JavaScript to run this app.\n\n", "format": "md", "metadata": {"tags": ["JavaScript", "React"], "pageDescription": "Learn how to query a REST API built with MongoDB Atlas App Services, React and Axios", "contentType": "Code Example"}, "title": "Build a Simple Website with React, Axios, and a REST API Built with MongoDB Atlas App Services", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/email-password-authentication-app-services", "action": "created", "body": "# Configure Email/Password Authentication in MongoDB Atlas App Services\n\n> **Note:** GraphQL is deprecated. Learn more.\n\nOne of the things I like the most is building full-stack apps using Node.js, React, and MongoDB. Every time I get a billion-dollar idea, I immediately start building it using this tech stack. No matter what app I\u2019m working on, there are a few features that are common:\n\n- Authentication and authorization: login, signup, and access controls.\n- Basic CRUD (Create, Read, Update, and Delete) operations.\n- Data analytics.\n- Web application deployment.\n\nAnd without a doubt, all of them play an essential role in any full-stack application. But still, they take a lot of time and energy to build and are mostly repetitive in nature. Therefore, we are left with significantly less time to build the features that our customers are waiting for.\nIn an ideal scenario, your time as a developer should be spent on implementing features and not reinventing the wheel. With MongoDB Atlas App Services, you don\u2019t have to worry about that. All you have to do is connect your client app to the service you need and you\u2019re ready to rock!\nThroughout this series, you will learn how to build a full stack web application with MongoDB Atlas App Services, GraphQL, and React. 
We will be building an expense manager application called Expengo.\n\n## Authentication\n\nImplementing authentication in your app usually requires you to create and deploy a server while making sure that emails are unique, passwords are encrypted, and sessions/tokens are managed securely.\nIn this blog, we\u2019ll configure email/password authentication on Atlas App Services. In the subsequent part of this series, we\u2019ll integrate this with our React app.\n\n## MongoDB Atlas App Services authentication providers\nMongoDB Atlas is a developer data platform integrating a multi-cloud database service with a set of data services. Atlas App Services provide secure serverless backend services and APIs to save you hours of coding.\nFor authentication, you can choose from many different providers such as email/password, API key, Google, Apple, and Facebook. For this tutorial, we\u2019ll use the email/password authentication provider.\n\n## Deploy your free tier Atlas cluster\nIf you haven\u2019t already, deploy a free tier MongoDB Atlas cluster. This will allow us to store and retrieve data from our database deployment. You will be asked to add your IP to the IP access list and create a username/password to access your database. Once a cluster is created, you can create an App Service and link to it.\n\n## Set up your App Service\nNow, click on the \u201cApp Services\u201d tab as highlighted in the image below:\n\nThere are a variety of templates one can choose from. For this tutorial, we will continue with the \u201cBuild your own App\u201d template and click \u201cNext.\u201d\n\nAdd application information in the next pop-up and click on \u201cCreate App Service.\u201d\n\nClick on \u201cClose Guides\u201d in the next pop-up screen.\n\nNow click on \u201cAuthentication\u201d in the side-bar. Then, click on the \u201cEdit\u201d button on the right side of Email/Password in the list of Authentication Providers.\n\nMake sure the Provider Enabled toggle is set to On.\n\nOn this page, we may also configure the user confirmation settings and the password reset settings for our application. For the sake of simplicity of this tutorial, we will choose:\n\n1. User confirmation method: \u201cAutomatically confirm users.\u201d\n2. Password reset method: \u201cSend a password reset email.\u201d\n3. Placeholder password reset URL: http://localhost:3000/resetPassword.\n > We're not going to implement a password reset functionality in our client application. With that said, the URL you enter here doesn't really matter. If you want to learn how to reset passwords with App Services, check out the dedicated documentation.\n4. Click \u201cSave Draft.\u201d\n\nOnce your Draft has been saved, you will see a blue pop-up at the top, with a \u201cReview Draft & Deploy\u201d button. Click on it and wait for a few moments.\n\nYou will see a pop-up displaying all the changes you made in this draft. Click on \u201cDeploy\u201d to deploy these changes:\n\nYou will see a \u201cDeployment was successful\u201d message in green at the top if everything goes fine. Yay!\n\n## Conclusion\n\nPlease note that all the screenshots were last updated in August 2022. 
Some UX details may have changed in more recent releases.\nIn the next article of the series, we will learn how we can utilize this email/password authentication provider in our React app.\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "In less than 6 steps, learn how to set up authentication and allow your users to log in and sign up to your app without writing a single line of server code.", "contentType": "Tutorial"}, "title": "Configure Email/Password Authentication in MongoDB Atlas App Services", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/keep-mongodb-serverless-costs-low", "action": "created", "body": "# Keeping Your Costs Down with MongoDB Atlas Serverless Instances\n\nThe new MongoDB Atlas serverless instance types are pretty great, especially if you\\'re running intermittent workloads, such as in production scenarios where the load is periodic and spiky (I used to work for a big sports website that was quiet unless a game was starting) or on test and integration infrastructure where you\\'re not running your application all the time.\n\nThe pricing for serverless instances is actually pretty straightforward: You pay for what you use. Unlike traditional MongoDB Atlas Clusters where you provision a set of servers on a tier that specifies the performance of the cluster, and pay for that tier unless your instance is scaled up or down, with Atlas serverless instances, you pay for the exact queries that you run, and the instance will automatically be scaled up or down as your usage scales up or down.\n\nBeing able to efficiently query your data is important for scaling your website and keeping your costs low in *any* situation. It's just more visible when you are billed per query. Learning these skills will both save you money *and* take your MongoDB skills to the next level.\n\n## Index your data to keep costs down\n\nI'm not going to go into any detail here on what an RPU is, or exactly how billing is calculated, because my colleague Vishal has already written MongoDB Serverless: Billing 101. I recommend checking that out *first*, just to see how Vishal demonstrates the significant impact having the right index can have on the cost of your queries!\n\nIf you want more information on how to appropriately index your data, there are a bunch of good resources to check out. MongoDB University has a free course, M201: MongoDB Performance. It'll teach you the ins and outs of analyzing your queries and how they make use of indexes, and things to think about when indexing your data.\n\nThe MongoDB Manual also contains excellent documentation on MongoDB Indexes. You'll want to read it and keep it bookmarked for future reference. It's also worth reading up on how to analyze your queries and try to reduce index scans and collection scans as much as possible.\n\nIf you index your data correctly, you'll dramatically reduce your serverless costs by reducing the number of documents that need to be scanned to find the data you're accessing and updating.\n\n## Modeling your data\n\nOnce you've ensured that you know how to efficiently index your data, the next step is to make sure that your schema is designed to be as efficient as possible.\n\nFor example, if you've migrated your schema directly from a relational database, you might have lots of collections containing shallow documents, and you may be using joins to re-combine this data when you're accessing the data. 
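In MongoDB terms, that usually means chaining `$lookup` stages, something like this (the collection and field names are hypothetical):\n\n```js\n// Re-assembling an order from several shallow, normalized collections at read time\ndb.orders.aggregate([\n  { $lookup: { from: \"order_items\", localField: \"_id\", foreignField: \"orderId\", as: \"items\" } },\n  { $lookup: { from: \"customers\", localField: \"customerId\", foreignField: \"_id\", as: \"customer\" } }\n])\n```\n\n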
This isn't an efficient way to use MongoDB. For one thing, if you're doing this, you'll want to internalize our mantra, \"data that is accessed together should be stored together.\"\n\nMake use of MongoDB's rich document model to ensure that data can be accessed in a single read operation where possible. In most situations where reads are higher than writes, duplicating data across multiple documents will be much more performant and thus cheaper than storing the data normalized in a separate collection and using the $lookup aggregation stage to query it.\n\nThe MongoDB blog has a series of posts describing MongoDB Design Patterns, and many of them will help you to model your data in a more efficient manner. I recommend these posts in almost every blog post and talk that I do, so it's definitely worth your time getting to know them.\n\nOnce again, the MongoDB Manual contains information about data modeling, and we also have a MongoDB University course, M320: Data Modeling. If you really want to store your data efficiently in MongoDB, you should check them out.\n\n## Use the Atlas performance tools\n\nMongoDB Atlas also offers built-in tools that monitor your usage of queries and indexes in production. From time to time, it's a good idea to log into the MongoDB Atlas web interface, hit \"Browse Collections,\" and then click the \"Performance Advisor\" tab to check if we've identified indexes you could create (or drop).\n\n## Monitor your serverless usage\n\nIt's worth keeping an eye on your serverless instance usage in case a new deployment dramatically spikes your usage of RPUs and WPUs. You can set these up in your Atlas Project Alerts screen.\n\n## Conclusion\n\nIf there's an overall message in this post, it's that efficient modeling and indexing of your data should be your primary focus if you're looking to use MongoDB Atlas serverless instances to keep your costs low. The great thing is that these are skills you probably already have! Or at least, if you need to learn it, then the skills are transferable to any MongoDB database you might work on in the future.", "format": "md", "metadata": {"tags": ["Atlas", "Serverless"], "pageDescription": "A guide to the things you need to think about when using the new MongoDB Atlas serverless instances to keep your usage costs down.", "contentType": "Article"}, "title": "Keeping Your Costs Down with MongoDB Atlas Serverless Instances", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/subscribing-changes-browser-websockets", "action": "created", "body": "", "format": "md", "metadata": {"tags": ["MongoDB", "Python"], "pageDescription": "Subscribe to MongoDB Change Streams via WebSockets using Python and Tornado.", "contentType": "Tutorial"}, "title": "Subscribe to MongoDB Change Streams Via WebSockets", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-zero-to-mobile-dev", "action": "created", "body": "# From Zero to Mobile Developer in 40 Minutes\n\nAre you an experienced non-mobile developer interested in writing your first iOS app? 
Or maybe a mobile developer wondering how to enhance your app or simplify your code using Realm and SwiftUI?\n\nI've written a number of tutorials aimed at these cohorts, and I've published a video that attempts to cover everything in just 40 minutes.\n\nI start with a brief tour of the anatomy of a mobile app\u2014covering both the backend and frontend components.\n\nThe bulk of the tutorial is devoted to a hands-on demonstration of building a simple chat app. The app lets users open a chatroom of their choice. Once in a chat room, users can share messages with other members.\n\nWhile the app is simple, it solves some complex distributed data issues, with real-time syncing of data between your backend database and your mobile apps. Realm is also available for other platforms, such as Android, so the same back end and data can be shared with all versions of your app.\n\n:youtubeVideo tutorial showing how to build your first mobile iOS/iPhone app using Realm and SwiftUI]{vid=lSp95xkvo1U}\n\nYou can find all of the code from the tutorial in the [repo.\n\nIf this tutorial whet your appetite and you'd like to see more (and maybe try it for yourself), then I'm running a more leisurely(?) two-hour workshop at MongoDB World 2022\u2014From 0 to Mobile Developer in 2 Hours with Realm and SwiftUI, where I build an all-new app. There's still time to register for MongoDB World 2002. **Use code AndrewMorgan25 for a 25% discount**.", "format": "md", "metadata": {"tags": ["Realm", "Swift", "iOS"], "pageDescription": "Video showing how to build your first iOS app using SwiftUI and Realm", "contentType": "Quickstart"}, "title": "From Zero to Mobile Developer in 40 Minutes", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/php/php113-release", "action": "created", "body": "# MongoDB PHP Extension 1.13.0 Released\n\nThe PHP team is happy to announce that version 1.13.0 of the mongodb PHP extension is now available on PECL. Thanks also to our intern for 1.13.0, Tanil Su, who added functionality for server discovery and monitoring!\n\n## Release Highlights\n\n`MongoDB\\Driver\\Manager::\\_\\_construct() `supports two new URI options: ` srvMaxHosts` and `srvServiceName`.\n\n* `srvMaxHosts` may be used with sharded clusters to limit the number of hosts that will be added to a seed list following the initial SRV lookup.\n* `srvServiceName` may be used with self-managed deployments to customize the default service name (i.e. \u201cmongodb\u201d).\n\nThis release introduces support for SDAM Monitoring, which applications can use to monitor internal driver behavior for server discovery and monitoring. Similar to the existing command monitoring API, applications can implement the `MongoDB\\Driver\\Monitoring\\SDAMSubscriber ` interface and registering the subscriber globally or for a single Manager using `MongoDB\\Driver\\Monitoring\\addSubscriber() ` or `MongoDB\\Driver\\Manager::addSubscriber`, respectively. In addition to many new event classes, this feature introduces the ServerDescription and TopologyDescription classes.\n\nThis release also upgrades our libbson and libmongoc dependencies to 1.21.1. 
The libmongocrypt dependency has been upgraded to 1.3.2.\n\nNote that support for MongoDB 3.4 and earlier has been *removed.*\n\nA complete list of resolved issues in this release may be found at: https://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=12484&version=32494\n\n## Documentation\n\nDocumentation is available on PHP.net:\nhttp://php.net/set.mongodb\n\n## Installation\n\nYou can either download and install the source manually, or you can install the extension with:\n\n`pecl install mongodb-1.13.0`\nor update with:\n\n`pecl upgrade mongodb-1.13.0`\n\nWindows binaries are available on PECL:\nhttp://pecl.php.net/package/mongodb", "format": "md", "metadata": {"tags": ["PHP"], "pageDescription": "Announcing our latest release of the PHP Extension 1.13.0!", "contentType": "News & Announcements"}, "title": "MongoDB PHP Extension 1.13.0 Released", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/next-gen-webapps-remix-atlas-data-api", "action": "created", "body": "# Next Gen Web Apps with Remix and MongoDB Atlas Data API\n\n> Learn more about the GA version here.\n\nJavascript-based application stacks have proven themselves to be the dominating architecture for web applications we all use. From *MEAN* to *MERN* and *MEVN*, the idea is to have a JavaScript-based web client and server, communicating through a REST or GraphQL API powered by the document model of MongoDB as its flexible data store.\n\nRemix is a new JS framework that comes to disrupt the perception of static websites and tiering the view and the controller. This framework aims to simplify the web component nesting by turning our web components into small microservices that can load, manipulate, and present data based on the specific use case and application state.\n\nThe idea of combining the view logic with the server business logic and data load, leaving the state management and binding to the framework, makes the development fast and agile. Now, adding a data access layer such as MongoDB Atlas and its new Data API makes building data-driven web applications super simple. No driver is needed and everything happens in a loader function via some https calls.\n\nTo showcase how easy it is, we have built a demo movie search application based on MongoDB Atlas sample database sample_mflix. In this article, we will cover the main features of this application and learn how to use Atlas Data API and Atlas Search features.\n\n> Make sure to check out the live Remix and MongoDB demo application! You can find its source code in this dedicated GitHub repository.\n\n## Setting Up an Atlas Cluster and Data API\n\nFirst we need to prepare our data tier that we will work with our Remix application. Follow these steps:\n\n* Get started with Atlas and prepare a cluster to work with.\n\n* Enable the Data API and save the API key.\n\n* Load a sample data set into the cluster. (This application is using sample\\_mflix for its demo.)\n\n## Setting Up Remix Application\n\nAs other Node frameworks, the easiest way to bootstrap an app is by deploying a template application as a base:\n\n``` shell\nnpx create-remix@latest\n```\n\nThe command will prompt for several settings. You can use the default ones with the default self hosting option.\n\nLet\u2019s also add a few node packages that we\u2019ll be using in our application. 
Navigate to your newly created project and execute the following command:\n\n``` shell\nnpm install axios dotenv tiny-invariant\n```\n\nThe application consists of two main files which host the entry point to the demo application with main page html components: `app/root.jsx` and `app/routes/index.jsx`. In the real world, it will probably be the routing to a login or main page.\n\n```\n- app\n - routes\n - index.jsx\n - root.jsx\n```\n\nIn `app/root.jsx`, we have the main building blocks of creating our main page and menu to route us to the different demos.\n\n``` html\n\n \n \n \n * \n Home\n \n \n \n * \n Movies Search Demo\n \n \n \n * \n Facet Search Demo\n \n \n \n * \n GitHub\n \n \n \n\n```\n\n> If you choose to use TypeScript while creating the application, add the navigation menu to `app/routes/index.tsx` instead. Don't forget to import `Link` from `remix`.\n\nMain areas are exported in the `app/routes/index.jsx` under the \u201croutes\u201d directory which we will introduce in the following section.\n\nThis file uses the same logic of a UI representation returned as JSX while loading of data is happening in the loader function. In this case, the loader only provides some static data from the \u201cdata\u201d variable.\n\nNow, here is where Remix introduces the clever routing in the form of routes directories named after our URL path conventions. For the main demo called \u201cmovies,\u201d we created a \u201cmovies\u201d route:\n\n```\n- routes\n - movies\n - $title.jsx\n - index.jsx\n```\n\nThe idea is that whenever our application is redirecting to `/movies`, the index.jsx under `routes/movies` is called. Each jsx file produces a React component and loads its data via a loader function (operating as the server backend data provider).\n\nBefore we can create our main movies page and fetch the movies from the Atlas Data API, let\u2019s create a `.env` file in the main directory to provide the needed Atlas information for our application:\n\n```\nDATA_API_KEY=\nDATA_API_BASE_URL=\nCLUSTER_NAME=\n```\n\nPlace the relevant information from your Atlas project locating the API key, the Data API base URL, and the cluster name. Those will be shortly used in our Data API calls.\n\n> \u26a0\ufe0f**Important**: `.env` file is good for development purposes. 
However, for production environments, consider the appropriate secret repository to store this information for your deployment.\n\nLet\u2019s load this .env file when the application starts by adjusting the \u201cdev\u201d npm scripts in the `package.json` file:\n``` json\n\"dev\": \"node -r dotenv/config node_modules/.bin/remix dev\"\n```\n\n## `movies/index.jsx` File\n\nLet's start to create our movies list by rendering it from our data loader and the `sample_mflix.movies` collection structure.\n\nNavigate to the \u2018app/routes\u2019 directory and execute the following commands to create new routes for our movies list and movie details pages.\n\n```shell\ncd app/routes\nmkdir movies\ntouch movies/index.jsx movies/\\$title.jsx\n```\n\nThen, open the `movies/index.jsx` file in your favorite code editor and add the following:\n\n``` javascript\nimport { Form, Link, useLoaderData , useSearchParams, useSubmit } from \"remix\";\nconst axios = require(\"axios\");\n \nexport default function Movies() {\n let searchParams, setSearchParams] = useSearchParams();\n let submit = useSubmit();\n let movies = useLoaderData();\n let totalFound = movies.totalCount;\n let totalShow = movies.showCount;\n \n return (\n \n\n \n\nMOVIES\n\n \n submit(e.currentTarget.form)}\n id=\"searchBar\" name=\"searchTerm\" placeholder=\"Search movies...\" />\n \n\nShowing {totalShow} of total {totalFound} movies found\n\n \n \n\n {movies.documents.map(movie => (\n \n\n {movie.title}\n \n ))}\n \n\n \n\n );\n}\n```\n\nAs you can see in the return clause, we have a title named \u201cMovies,\u201d an input inside a \u201cget\u201d form to post a search input if requested. We will shortly explain how forms are convenient when working with Remix. Additionally, there is a link list of the retrieved movies documents. Using the `` component from Remix allows us to create links to each individual movie name. This will allow us to pass the title as a path parameter and trigger the `$title.jsx` component, which we will build shortly.\n\nThe data is retrieved using `useLoaderData()` which is a helper function provided by the framework to retrieve data from the server-side \u201cloader\u201d function.\n\n### The Loader Function\n\nThe interesting part is the `loader()` function. 
Let's create one to first retrieve the first 100 movie documents and leave the search for later.\n\nAdd the following code to the `movies/index.jsx` file.\n\n```javascript\nexport let loader = async ({ request }) => {\n let pipeline = [{ $limit: 100 }];\n\n let data = JSON.stringify({\n collection: \"movies\",\n database: \"sample_mflix\",\n dataSource: process.env.CLUSTER_NAME,\n pipeline\n });\n\n let config = {\n method: 'post',\n url: `${process.env.DATA_API_BASE_URL}/action/aggregate`,\n headers: {\n 'Content-Type': 'application/json',\n 'Access-Control-Request-Headers': '*',\n 'apiKey': process.env.DATA_API_KEY\n },\n data\n };\n\n let movies = await axios(config);\n let totalFound = await getCountMovies();\n\n return {\n showCount: movies?.data?.documents?.length,\n totalCount: totalFound,\n documents: movies?.data?.documents\n };\n};\n\nconst getCountMovies = async (countFilter) => {\n let pipeline = countFilter ?\n [{ $match: countFilter }, { $count: 'count' }] :\n [{ $count: 'count' }];\n\n let data = JSON.stringify({\n collection: \"movies\",\n database: \"sample_mflix\",\n dataSource: process.env.CLUSTER_NAME,\n pipeline\n });\n\n let config = {\n method: 'post',\n url: `${process.env.DATA_API_BASE_URL}/action/aggregate`,\n headers: {\n 'Content-Type': 'application/json',\n 'Access-Control-Request-Headers': '*',\n 'apiKey': process.env.DATA_API_KEY\n },\n data\n };\n\n let result = await axios(config);\n\n return result?.data?.documents[0]?.count;\n}\n```\n\nHere we start with an [aggregation pipeline to just limit the first 100 documents for our initial view `pipeline = {$limit : 100}]; `. This pipeline will be passed to our REST API call to the Data API endpoint:\n\n``` javascript \nlet data = JSON.stringify({\n collection: \"movies\",\n database: \"sample_mflix\",\n dataSource: process.env.CLUSTER_NAME,\n pipeline\n});\n\nlet config = {\n method: 'post',\n url: `${process.env.DATA_API_BASE_URL}/action/aggregate`,\n headers: {\n 'Content-Type': 'application/json',\n 'Access-Control-Request-Headers': '*',\n 'apiKey': process.env.DATA_API_KEY\n },\n data\n};\n\nlet result = await axios(config);\n```\n\nWe place the API key and the URL from the secrets file we created earlier as environment variables. The results array will be returned to the UI function:\n\n``` javascript \nreturn result?.data?.documents[0]?.count;\n```\n\nTo run the application, we can go into the main folder and execute the following command:\n\n``` shell\nnpm run dev\n```\n\nThe application should start on `http://localhost:3000` URL.\n\n### Adding a Search Via Atlas Text Search\nFor the full text search capabilities of this demo, you need to create a dynamic [Atlas Search index on database `sample_mflix` collection `movies` (use default dynamic mappings). Require version 4.4.11+ (free tier included) or 5.0.4+ of the Atlas cluster for the search metadata and facet searches we will discuss later.\n\nSince we have a `\n` Remix component submitting the form input, data typed into the input box will trigger a data reload. The `\n` reloads the loader function without refreshing the entire page. 
This will naturally resubmit the URL as `/movies?searchTerm=` and here is why it's easy to use the same loader function, extract to URL parameter, and add a search logic by just amending the base pipeline:\n\n``` javascript\nlet url = new URL(request.url);\nlet searchTerm = url.searchParams.get(\"searchTerm\");\n \nconst pipeline = searchTerm ?\n \n {\n $search: {\n index: 'default',\n text: {\n query: searchTerm,\n path: {\n 'wildcard': '*'\n }\n }\n }\n }, { $limit: 100 }, { \"$addFields\": { meta: \"$$SEARCH_META\" } }\n ] :\n [{ $limit: 100 }];\n ```\n \n In this case, the submission of a form will call the loader function again. If there was a `searchTerm`submitted in the URL, it will be extracted under the `searchTerm` variable and create a `$search` pipeline to interact with the Atlas Search text index.\n\n``` javascript\ntext: {\n query: searchTerm,\n path: {\n 'wildcard': '*'\n }\n}\n```\n\nAdditionally, there is a very neat feature that allow us to get the metadata for our search\u2014for example, how many matches were for this specific keyword (as we don\u2019t want to show more than 100 results). \n\n``` javascript \n{ \"$addFields\" : {meta : \"$$SEARCH_META\"}}\n```\n\nWhen wiring everything together, we get a working searching functionality, including metadata information on our searches. \n\nNow, if you noticed, each movie title is actually a link redirecting to `./movies/` url. But why is this good, you ask? Remix allows us to build parameterized routes based on our URL path parameters. \n\n## `movies/$title.jsx` File\n\nThe `movies/$title.jsx` file will show each movie's details when loaded. The magic is that the loader function will get the name of the movie from the URL. So, in case we clicked on \u201cHome Alone,\u201d the path will be `http:/localhost:3000/movies/Home+Alone`.\n\nThis will allow us to fetch the specific information for that title.\n\nOpen the `movies/$title.jsx` file we created earlier, and add the following:\n\n```javascript\nimport { Link, useLoaderData } from \"remix\";\nimport invariant from \"tiny-invariant\";\n \nconst axios = require('axios');\n \nexport let loader = async ({ params }) => {\n invariant(params.title, \"expected params.title\");\n \n let data = JSON.stringify({\n collection: \"movies\",\n database: \"sample_mflix\",\n dataSource: process.env.CLUSTER_NAME,\n filter: { title: params.title }\n });\n \n let config = {\n method: 'post',\n url: process.env.DATA_API_BASE_URL + '/action/findOne',\n headers: {\n 'Content-Type': 'application/json',\n 'Access-Control-Request-Headers': '*',\n 'apiKey': process.env.DATA_API_KEY\n },\n data\n };\n \n let result = await axios(config);\n let movie = result?.data?.document || {};\n \n return {\n title: params.title,\n plot: movie.fullplot,\n genres: movie.genres,\n directors: movie.directors,\n year: movie.year,\n image: movie.poster\n };\n};\n```\n\nThe `findOne` query will filter the results by title. The title is extracted from the URL params provided as an argument to the loader function. \n\nThe data is returned as a document with the needed information to be presented like \u201cfull plot,\u201d \u201cposter,\u201d \u201cgenres,\u201d etc.\n\nLet\u2019s show the data with a simple html layout: \n\n``` javascript\nexport default function MovieDetails() {\n let movie = useLoaderData();\n \n return (\n
<div>\n  <img src={movie.image} alt={movie.title} />\n  <h1>{movie.title}</h1>\n  <p>\n    {movie.plot}\n  </p>\n  <ul>\n    <li>\n      Year\n    </li>\n    <li>\n      {movie.year}\n    </li>\n    <li>\n      Genres\n    </li>\n    <li>\n      {movie.genres.map(genre => { return genre + \" | \" })}\n    </li>\n    <li>\n      Directors\n    </li>\n    <li>\n      {movie.directors.map(director => { return director + \" | \" })}\n    </li>\n  </ul>\n</div>
    \n );\n}\n```\n\n## `facets/index.jsx` File\n\nMongoDB Atlas Search introduced a new feature complementing a very common use case in the text search world: categorising and allowing a [faceted search. Facet search is a technique to present users with possible search criteria and allow them to specify multiple search dimensions. In a simpler example, it's the search criteria panels you see in many commercial or booking websites to help you narrow your search based on different available categories.\n\nAdditionally, to the different criteria you can have in a facet search, it adds better and much faster counting of different categories. To showcase this ability, we have created a new route called `facets` and added an additional page to show counts per genre under `routes/facets/index.jsx`. Let\u2019s look at its loader function:\n\n``` javascript\nexport let loader = async ({ request }) => {\n let pipeline = \n {\n $searchMeta: {\n facet: {\n operator: {\n range: {\n path: \"year\",\n gte: 1900\n }\n },\n facets: {\n genresFacet: {\n type: \"string\",\n path: \"genres\"\n }\n }\n }\n }\n }\n ];\n \nlet data = JSON.stringify({\n collection: \"movies\",\n database: \"sample_mflix\",\n dataSource: process.env.CLUSTER_NAME,\n pipeline\n });\n \n let config = {\n method: \"post\",\n url: process.env.DATA_API_BASE_URL + \"/action/aggregate\",\n headers: {\n \"Content-Type\": \"application/json\",\n \"Access-Control-Request-Headers\": \"*\",\n \"apiKey\": process.env.DATA_API_KEY\n },\n data\n };\n \n let movies = await axios(config);\n \n return movies?.data?.documents[0];\n};\n```\n\nIt uses a new stage called $searchMeta and two facet stages: one to make sure that movies start from a date (1900) and that we aggregate counts based on genres field:\n\n``` javascript\nfacet: {\n operator: {\n range: {\n path: \"year\",\n gte: 1900\n }\n },\n facets: {\n genresFacet: {\n type: \"string\",\n path: \"genres\"\n }\n }\n}\n```\n\nTo use the facet search, we need to amend the index and add both fields to types for facet. Editing the index is easy through the Atlas visual editor. Just click `[...]` > \u201cEdit with visual editor.\u201d\n![Facet Mappings\n\nAn output document of the search query will look like this:\n\n``` json\n{\"count\":{\"lowerBound\":23494},\n\"facet\":{\"genresFacet\":{\"buckets\":{\"_id\":\"Drama\",\"count\":13771},\n {\"_id\":\"Comedy\",\"count\":7017},\n {\"_id\":\"Romance\",\"count\":3663},\n {\"_id\":\"Crime\",\"count\":2676},\n {\"_id\":\"Thriller\",\"count\":2655},\n {\"_id\":\"Action\",\"count\":2532},\n {\"_id\":\"Documentary\",\"count\":2117},\n {\"_id\":\"Adventure\",\"count\":2038},\n {\"_id\":\"Horror\",\"count\":1703},\n {\"_id\":\"Biography\",\"count\":1401}]\n }}}\n```\n\nOnce we route the UI page under facets demo, the table of genres in the UI will look as:\n![Facet Search UI\n\n### Adding Clickable Filters Using Routes\nTo make the application even more interactive, we have decided to allow clicking on any of the genres on the facet page and redirect to the movies search page with `movies?filter={genres : }`:\n\n```html\n
<Link to={`/movies?filter={\"genres\" : \"${bucket._id}\"}`}>\n  {bucket._id}\n</Link>\n<span>\n  Press to filter by \"{bucket._id}\" genre\n</span>
    \n```\n\nNow, every genre clicked on the facet UI will be redirected back to `/movies?filter={generes: }`\u2014for example, `/movies?filter={genres : \"Drama\"}`.\n\nThis will trigger the `movies/index.jsx` loader function, where we will add the following condition:\n\n```javascript\nlet filter = JSON.parse(url.searchParams.get(\"filter\"));\n...\n \n else if (filter) {\n pipeline = \n {\n \"$match\": filter\n },{$limit : 100}\n ]\n }\n```\n\nLook how easy it is with the aggregation pipelines to switch between a regular match and a full text search.\n\nWith the same approach, we can add any of the presented fields as a search criteria\u2014for example, clicking directors on a specific movie details page passing `/movies?filter={directors: [ ]}`.\n\n|Click a filtered field (eg. \"Directors\") |Redirect to filtered movies list |\n| --- | --- |\n| ![Sefty Last movie details ||\n\n## Wrap Up\n\nRemix has some clever and renewed concepts for building React-based web applications. Having server and client code coupled together inside moduled and parameterized by URL JS files makes developing fun and productive. \n\nThe MongoDB Atlas Data API comes as a great fit to easily access, search, and dice your data with simple REST-like API syntax. Overall, the presented stack reduces the amount of code and files to maintain while delivering best of class UI capabilities.\n\nCheck out the full code at the following GitHub repo and get started with your new application using MongoDB Atlas today!\n\n>\n>\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n>\n>\n", "format": "md", "metadata": {"tags": ["JavaScript", "Atlas"], "pageDescription": "Remix is a new and exciting javascript web framework. Together with the MongoDB Atlas Data API and Atlas Search it can form powerful web applications. A guided tour will show you how to leverage both technologies together. ", "contentType": "Tutorial"}, "title": "Next Gen Web Apps with Remix and MongoDB Atlas Data API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/rust/rust-mongodb-frameworks", "action": "created", "body": "# Using Rust Web Development Frameworks with MongoDB\n\n## Introduction\nSo, you've decided to write a Rust application with MongoDB, and you're wondering which of the top web development frameworks to use. Below, we give some suggestions and resources for how to:\n\n1. Use MongoDB with Actix and Rust.\n2. Use MongoDB with Rocket.rs and Rust.\n\nThe TLDR is that any of the popular Rust frameworks can be used with MongoDB, and we have code examples, tutorials, and other resources to guide you.\n\n### Building MongoDB Rust apps with Actix\n\nActix is a powerful and performant web framework for building Rust applications, with a long list of supported features. \n\nYou can find a working example of using MongoDB with Actix in the `databases` directory under Actix's github, but otherwise, if you're looking to build a REST API with Rust and MongoDB, using Actix along the way, this tutorial is one of the better ones we've seen.\n\n### Building MongoDB Rust apps with Rocket.rs\n\nPrefer Rocket? Rocket is a fast, secure, and type safe framework that is low on boilerplate. It's easy to use MongoDB with Rocket to build Rust applications. There's a tutorial on Medium we particularly like on building a REST API with Rust, MongoDB, and Rocket. 
\n\nIf all you want is to see a code example on github, we recommend this one.\n\n", "format": "md", "metadata": {"tags": ["Rust", "MongoDB"], "pageDescription": "Which Rust frameworks work best with MongoDB?", "contentType": "Article"}, "title": "Using Rust Web Development Frameworks with MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/connectors/leverage-mongodb-data-kafka-tutorials", "action": "created", "body": "# Learn How to Leverage MongoDB Data within Kafka with New Tutorials!\n\nThe MongoDB Connector for Apache Kafka documentation now includes new tutorials! These tutorials introduce you to key concepts behind the connector and by the end, you\u2019ll have an understanding of how to move data between MongoDB and Apache Kafka. The tutorials are as follows:\n\n* Explore Change Streams\n\nChange streams is a MongoDB server feature that provides change data capture (CDC) capabilities for MongoDB collections. The source connector relies on change streams to move data from MongoDB to a Kafka topic. In this tutorial, you will explore creating a change stream and reading change stream events all through a Python application.\n\n* Getting Started with the MongoDB Kafka Source Connector\n\nIn this tutorial, you will configure a source connector to read data from a MongoDB collection into an Apache Kafka topic and examine the content of the event messages.\n\n* Getting Started with the MongoDB Kafka Sink Connector\n\nIn this tutorial, you will configure a sink connector to copy data from a Kafka topic into a MongoDB cluster and then write a Python application to write data into the topic.\n\n* Replicate Data with a Change Data Capture Handler\n\nConfigure both a MongoDB source and sink connector to replicate data between two collections using the MongoDB CDC handler.\n\n* Migrate an Existing Collection to a Time Series Collection\n\nTime series collections efficiently store sequences of measurements over a period of time, dramatically increasing the performance of time-based data. In this tutorial, you will configure both a source and sink connector to replicate the data from a collection into a time series collection.\n\nThese tutorials run locally within a Docker Compose environment that includes Apache Kafka, Kafka Connect, and MongoDB. Before starting them, follow and complete the Tutorial Setup. You will work through the steps using a tutorial shell and containers available on Docker Hub. The tutorial shell includes tools such as the new Mongo shell, KafkaCat, and helper scripts that make it easy to configure Kafka Connect from the command line.\n\nIf you have any questions or feedback on the tutorials, please post them on the MongoDB Community Forums. ", "format": "md", "metadata": {"tags": ["Connectors", "Kafka"], "pageDescription": "MongoDB Documentation has released a series of new tutorials based upon a self-hosted Docker compose environment that includes all the components needed to learn.", "contentType": "Article"}, "title": "Learn How to Leverage MongoDB Data within Kafka with New Tutorials!", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/javascript/code-example-nextjs-mongodb", "action": "created", "body": "# Blogue\n\n## Creator\nSujan Chhetri contributed this project. \n\n## About the Project: \n\nBlogue is a writing platform where all the writers or non-writers are welcome. 
We believe in sharing knowlege in the form of words. The page has a 'newshub' for articles, a projecthub where you can share projects among other things. Posts can categorized by health, technology, business, science and more. It's also possible to select the source (CNN / Wired) etc. \n\n ## Inspiration\n \nI created this, so that I could share stuff (blogs / articles) on my own platform. I am a self taught programmer. I want other students to know that you can make stuffs happen if you make plans and start learning. \n \n ## How it Works\n It's backend is written with Node.js and frontend with next.js and react.js. Mongodb Atlas is used as the storage. MongoDB is smooth and fast. The GitHub repo shared above consists of the backend for a blogging platform. Most of the features that are in a blog are available.\n \n Some listed here: \n\n* User Signup / Signin\n* JWT based Authentication System\n* Role Based Authorization System-user/admin\n* Blogs Search\n* Related Blogs\n* Categories\n* Tags\n* User Profile\n* Blog Author Private Contact Form\n* Multiple User Authorization System\n* Social Login with Google\n* Admin / User Dashboard privilage\n* Image Uploads\n* Load More Blogs\n\n", "format": "md", "metadata": {"tags": ["JavaScript", "Atlas", "Node.js", "Next.js"], "pageDescription": "A reading and writing platform. ", "contentType": "Code Example"}, "title": "Blogue", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/rust/rust-mongodb-blog-project", "action": "created", "body": "# Beginner Coding Project: Build a Blog Engine with Rust and MongoDB\n\n## Description of Application\nA quick and easy example application project that creates a demo blog post engine. Very simple UI, ideal for beginners to Rust programming langauge.\n\n## Technology Stack\nThis code example utilizes the following technologies:\n\n* MongoDB Atlas\n* Rust\n* Rocket.rs framework\n\n", "format": "md", "metadata": {"tags": ["Rust"], "pageDescription": "A beginner level project using MongoDB with Rust", "contentType": "Code Example"}, "title": "Beginner Coding Project: Build a Blog Engine with Rust and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/build-serverless-applications-sst-mongodb-atlas", "action": "created", "body": "# How to Build Serverless Applications with SST and MongoDB Atlas\n\nServerless computing is now becoming the standard for how applications are developed in the cloud. Serverless at its core lets developers create small packages of code that are executed by a cloud provider to respond to events. These events can range from HTTP requests to cron jobs, or even file upload notifications. These packages of code are called functions or Lambda functions, named after AWS Lambda, the AWS service that powers them. This model allows serverless services to scale with ease and be incredibly cost effective, as you only pay for the exact number of milliseconds it takes to execute them.\n\nHowever, working with Lambda functions locally can be tricky. You\u2019ll need to either emulate the events locally or redeploy them to the cloud every time you make a change. On the other hand, due to the event-based execution of these functions, you\u2019ll need to use services that support a similar model as well. For instance, a traditional database expects you to hold on to a connection and reuse it to make queries. 
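To make that concrete, here is a rough sketch (a hypothetical Express app using the Node.js driver, not taken from this post) of that traditional model: a single client is created at startup and its connection pool is reused for every request over the lifetime of the process:\n\n```javascript\n// Hypothetical long-running server: one MongoClient is created when the process\n// starts, and its connection pool is reused for every incoming request.\nconst express = require(\"express\");\nconst { MongoClient } = require(\"mongodb\");\n\nconst client = new MongoClient(process.env.MONGODB_URI);\nconst app = express();\n\napp.get(\"/movies\", async (req, res) => {\n  // Every request reuses the same long-lived connection pool.\n  const movies = await client.db(\"sample_mflix\").collection(\"movies\").find().limit(10).toArray();\n  res.json(movies);\n});\n\nclient.connect().then(() => app.listen(3000));\n```\n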
This doesn\u2019t work well in the serverless model, since Lambda functions are effectively stateless. Every invocation of a Lambda function creates a new environment. The state from all previous invocations is lost unless committed to persistent storage.\n\nOver the years, there has been a steady stream of improvements from the community to address these challenges. The developer experience is now at a point where it\u2019s incredibly easy to build full-stack applications with serverless. In this post, we\u2019ll look at the new way for building serverless apps. This includes using:\n\n* Serverless Stack \\(SST\\), a framework for building serverless apps with a great local developer experience.\n* A serverless database in MongoDB Atlas, the most advanced cloud database service on the market.\n\n> \u201cWith MongoDB Atlas and SST, it\u2019s now easier than ever to build full-stack serverless applications.\u201d \u2014 Andrew Davidson, Vice President, Product Management, MongoDB \n\nLet\u2019s look at how using these tools together can make it easy to build serverless applications.\n\n## Developing serverless apps locally\n\nLambda functions are packages of code that are executed in response to cloud events. This makes it a little tricky to work with them locally, since you want them to respond to events that happen in the cloud. You can work around this by emulating these events locally or by redeploying your functions to test them. Both these approaches don\u2019t work well in practice.\n\n### Live Lambda development with SST\n\nSST is a framework for building serverless applications that allows developers to work on their Lambda functions locally. It features something called Live Lambda Development that proxies the requests from the cloud to your local machine, executes them locally, and sends the results back. This allows you to work on your functions locally without having to redeploy them or emulate the events.\n\nLive Lambda development thus allows you to work on your serverless applications, just as you would with a traditional server-based application.\n\n## Serverless databases\n\nTraditional databases operate under the assumption that there is a consistent connection between the application and the database. It also assumes that you\u2019ll be responsible for scaling the database capacity, just as you would with your server-based application.\n\nHowever, in a serverless context, you are simply writing the code and the cloud provider is responsible for scaling and operating the application. You expect your database to behave similarly as well. You also expect it to handle new connections automatically.\n\n### On-demand serverless databases with MongoDB Atlas\n\nTo address this issue, MongoDB Atlas launched serverless instances, currently available in preview. This allows developers to use MongoDB\u2019s world class developer experience without any setup, maintenance, or tuning. You simply pick the region and you\u2019ll receive an on-demand database endpoint for your application.\n\nYou can then make queries to your database, just as you normally would, and everything else is taken care of by Atlas. Serverless instances automatically scale up or down depending on your usage, so you never have to worry about provisioning more than you need and you only pay for what you use.\n\n## Get started\n\nIn this post, we saw a quick overview of the current serverless landscape, and how serverless computing abstracts and automates away many of the lower level infrastructure decisions. 
So, you can focus on building features that matter to your business!\n\nFor help building your first serverless application with SST and MongoDB Atlas, check out our tutorial: How to use MongoDB Atlas in your serverless app.\n\n\u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\n>\"MongoDB Atlas\u2019 serverless instances and SST allow developers to leverage MongoDB\u2019s unparalleled developer experience to build full-stack serverless apps.\" \u2014 Jay V, CEO, SST \n\nAlso make sure to check out the quick start and join the community forums for more insights on building serverless apps with MongoDB Atlas.\n", "format": "md", "metadata": {"tags": ["Atlas", "Serverless", "AWS"], "pageDescription": "The developer experience is at a point where it\u2019s easy to build full-stack applications with serverless. In this post, we\u2019ll look at the new way of building serverless apps.", "contentType": "Article"}, "title": "How to Build Serverless Applications with SST and MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/python/groupus-app", "action": "created", "body": "# GroupUs\n\n## Creator\nAnjay Goel contributed this project.\n\n## About the Project\nA web app that automates group formation for projects/presentations/assignments etc. It saves you the inconvenience of asking tens of people simply to form a group. Also letting an algorithm do the matching ensures that the groups formed are more optimal and fair.\n \n ## Inspiration\n Inspired by the difficulty and the unnecessary hassle in forming several different groups for different classes especially during the virtual classes.\n\n## Why MongoDB?\nUsed MongoDB because the project required a database able to store and query JSON like documents.\n\n## How It Works\nThe user creates a new request and adds participant names,email-ids, group size, deadline etc. Then the app will send a form to all participants asking them to fill out their preferences. Once all participants have filled their choices (or deadline reached), it will form groups using a algorithm and send emails informing everyone of their respective groups.", "format": "md", "metadata": {"tags": ["Python", "MongoDB", "JavaScript"], "pageDescription": "A web-app that automates group formation for projects/assignments etc..", "contentType": "Code Example"}, "title": "GroupUs", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/practical-mongodb-aggregations-book", "action": "created", "body": "# Introducing a New MongoDB Aggregations Book\n\nI'm pleased to announce the publication of my new book, **\"Practical\nMongoDB Aggregations.\"**\n\nThe book is available electronically for free for anyone to use at:\n.\n\nThis book is intended for developers, architects, data analysts, data\nengineers, and data scientists. It aims to improve your productivity and\neffectiveness when building aggregation pipelines and help you\nunderstand how to optimise their pipelines.\n\nThe book is split into two key parts:\n\n1. A set of tips and principles to help you get the most out of\n aggregations.\n2. 
A bunch of example aggregation pipelines for solving common data\n manipulation challenges, which you can easily copy and try for\n yourself.\n\n>\n>\n>If you have questions, please head to our developer community\n>website where the MongoDB engineers and\n>the MongoDB community will help you build your next big idea with\n>MongoDB.\n>\n>\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn more about our newest book, Practical MongoDB Aggregations, by Paul Done.", "contentType": "News & Announcements"}, "title": "Introducing a New MongoDB Aggregations Book", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/developing-web-application-netlify-serverless-functions-mongodb", "action": "created", "body": "\n \n\nMONGODB WITH NETLIFY FUNCTIONS\n\n \n \n ", "format": "md", "metadata": {"tags": ["JavaScript", "Atlas", "Node.js"], "pageDescription": "Learn how to build and deploy a web application that leverages MongoDB and Netlify Functions for a serverless experience.", "contentType": "Tutorial"}, "title": "Developing a Web Application with Netlify Serverless Functions and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/go/golang-mongodb-code-example", "action": "created", "body": "# Cinema: Example Go Microservices Application\n\nLooking for a code example that has microservices in Go with Docker, Kubernetes and MongoDB? Look no further!\n\nCinema is an example project which demonstrates the use of microservices for a fictional movie theater. The Cinema backend is powered by 4 microservices, all of which happen to be written in Go, using MongoDB for manage the database and Docker to isolate and deploy the ecosystem.\n\nMovie Service: Provides information like movie ratings, title, etc.\nShow Times Service: Provides show times information.\nBooking Service: Provides booking information.\nUsers Service: Provides movie suggestions for users by communicating with other services.\n\nThis project is available to clone or fork on github from Manuel Morej\u00f3n. \n", "format": "md", "metadata": {"tags": ["Go", "MongoDB", "Kubernetes", "Docker"], "pageDescription": " An easy project using Go, Docker, Kubernetes and MongoDB", "contentType": "Code Example"}, "title": "Cinema: Example Go Microservices Application", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/java-survey-2022", "action": "created", "body": "# The 2022 MongoDB Java Developer Survey\n\nAccording to the 2022 Stack Overflow Developer Survey, Java is the sixth most popular programming, scripting, or markup language. 17K of the whopping 53K+ respondents indicated that they use Java - that\u2019s a huge footprint!\n\nToday we have more than 132,000 clusters running on Atlas using Java. \n\nWe\u2019re running our first-ever developer survey specifically for Java developers. We\u2019ll use your survey responses to make changes that matter to you. The survey will take approximately 5-10 minutes to complete. 
As a way of saying thank you, we\u2019ll be raffling off gift cards (five randomly chosen winners will receive $150).\n\nYou can access the survey here.", "format": "md", "metadata": {"tags": ["Java", "Spring"], "pageDescription": "MongoDB is conducted a survey for Java developers", "contentType": "News & Announcements"}, "title": "The 2022 MongoDB Java Developer Survey", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/flexible-querying-with-atlas-search", "action": "created", "body": "# Flexible Querying with Atlas Search\n\n## Introduction\nIn this walkthrough, I will show how the flexibility of Atlas Search's inverted indexes are a powerful option versus traditional b-tree indexes when it comes to supporting ad-hoc queries. \n\n## What is flexible querying?\nFlexible query engines provide the ability to execute a performant query that spans multiple indexes in your data store. This means you can write ad-hoc, dynamically generated queries, where you don't need to know the query, fields, or ordering of fields in advance. \n\nBe sure to check out the MongoDB documentation on this subject!\n\nIt's very rare that MongoDB\u2019s query planner selects a plan that involves multiple indexes. In this tutorial, we\u2019ll walk through a scenario in which this becomes a requirement. \n\n### Your application is in a constant state of evolution\n\nLet\u2019s say you have a movie application with documents like:\n\n```\n{\n \"title\": \"Fight Club\",\n \"year\": 1999,\n \"imdb\": {\n \"rating\": 8.9,\n \"votes\": 1191784,\n \"id\": 137523\n },\n \"cast\": \n \"Edward Norton\",\n \"Brad Pitt\"\n ]\n}\n```\n\n### Initial product requirements\n\nNow for the version 1.0 application, you need to query on title and year, so you first create a compound index via:\n\n`db.movies.createIndex( { \"title\": 1, \"year\": 1 } )`\n\nThen issue the query:\n\n`db.movies.find({\"title\":\"Fight Club\", \"year\":1999})`\n\nWhen you run an explain plan, you have a perfect query with a 1:1 documents-examined to documents-returned ratio:\n\n```\n{\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 1,\n \"executionTimeMillis\": 0,\n \"totalKeysExamined\": 1,\n \"totalDocsExamined\": 1\n }\n}\n```\n\n### Our query then needs to evolve\n\nNow our application requirements have evolved and you need to query on cast and imdb. First you create the index:\n\n`db.movies.createIndex( { \"cast\": 1, \"imdb.rating\": 1 } )`\n\nThen issue the query:\n\n`db.movies.find({\"cast\":\"Edward Norton\", \"imdb.rating\":{ $gte:9 } })`\n\nNot the greatest documents-examined to documents-returned ratio, but still not terrible:\n\n```\n{\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 7,\n \"executionTimeMillis\": 0,\n \"totalKeysExamined\": 17,\n \"totalDocsExamined\": 17\n }\n}\n```\n\n### Now our query evolves again\n\nNow, our application requires you issue a new query, which becomes a subset of the original:\n\n`db.movies.find({\"imdb.rating\" : { $gte:9 } })`\n\nThe query above results in the dreaded **collection scan** despite the previous compound index (cast_imdb.rating) comprising the above query\u2019s key. 
This is because the \"imdb.rating\" field is not the index-prefix, and the query contains no filter conditions on the \"cast\" field.\"\n\n*Note: Collection scans should be avoided because not only do they instruct the cursor to look at every document in the collection which is slow, but it also forces documents out of memory resulting in increased I/O pressure.*\n\nOur query plan results as follows:\n\n```\n{\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 31,\n \"executionTimeMillis\": 26,\n \"totalKeysExamined\": 0,\n \"totalDocsExamined\": 23532\n }\n}\n```\n\nNow you certainly could create a new index composed of just imdb.rating, which would return an index scan for the above query, but that\u2019s three different indexes that the query planner would have to navigate in order to select the most performant response.\n## Alternatively: Atlas Search\nBecause Lucene uses a different index data structure ([inverted indexes vs B-tree indexes), it\u2019s purpose-built to run queries that overlap into multiple indexes.\n\nUnlike compound indexes, the order of fields in the Atlas Search index definition is not important. Fields can be defined in any order. Therefore, it's not subject to the limitation above where a query that is only on a non-prefix field of a compound index cannot use the index.\n\nIf you create a single index that maps all of our four fields above (title, year, cast, imdb):\n\n```\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"title\": {\n \"type\": \"string\",\n \"dynamic\": false\n },\n \"year\": {\n \"type\": \"number\",\n \"dynamic\": false\n },\n \"cast\": {\n \"type\": \"string\",\n \"dynamic\": false\n },\n \"imdb.rating\": {\n \"type\": \"number\",\n \"dynamic\": false\n } \n }\n }\n}\n```\n\nThen you issue a query that first spans title and year via a must (AND) clause, which is the equivalent of `db.collection.find({\"title\":\"Fight Club\", \"year\":1999})`:\n\n```\n{\n \"$search\": {\n \"compound\": {\n \"must\": [{\n \"text\": {\n \"query\": \"Fight Club\",\n \"path\": \"title\"\n }\n },\n {\n \"range\": {\n \"path\": \"year\",\n \"gte\": 1999,\n \"lte\": 1999\n }\n }\n ]\n }\n }\n}]\n```\n\nThe corresponding query planner results:\n\n```\n{\n '$_internalSearchIdLookup': {},\n 'executionTimeMillisEstimate': 6,\n 'nReturned': 0\n}\n```\n\nThen when you add `imdb` and `cast` to the query, you can still get performant results:\n\n```\n[{\n \"$search\": {\n \"compound\": {\n \"must\": [{\n \"text\": {\n \"query\": \"Fight\",\n \"path\": \"title\"\n },\n {\n \"range\": {\n \"path\": \"year\",\n \"gte\": 1999,\n \"lte\": 1999\n },\n {\n \"text\": {\n \"query\": \"Edward Norton\",\n \"path\": \"cast\"\n }\n },\n {\n \"range\": {\n \"path\": \"year\",\n \"gte\": 1999,\n \"lte\": 1999\n }\n }\n ]\n }\n }\n }]\n```\n\nThe corresponding query planner results:\n\n {\n '$_internalSearchIdLookup': {},\n 'executionTimeMillisEstimate': 6,\n 'nReturned': 0\n }\n\n## This isn\u2019t a peculiar scenario\n\nApplications evolve as our users\u2019 expectations and requirements do. 
In order to support your applications' evolving requirements, Standard B-tree indexes simply cannot evolve at the rate that an inverted index can.\n\n### Use cases\n\nHere are several examples where Atlas Search's inverted index data structures can come in handy, with links to reference material:\n\n- [GraphQL: If your database's entry point is GraphQL, where the queries are defined by the client, then you're a perfect candidate for inverted indexes\n- Advanced Search: You need to expand the filtering criteria for your searchbar beyond several fields.\n- Wildcard Search: Searching across fields that match combinations of characters and wildcards.\n- Ad-Hoc Querying: The need to dynamically generate queries on-demand by our clients.\n\n# Resources\n\n- Full code walkthrough via a Jupyter Notebook\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "GraphQL"], "pageDescription": "Atlas Search provides the ability to execute a performant query that spans multiple indexes in your data store. It's very rare, however, that MongoDB\u2019s query planner selects a plan that involves multiple indexes. We\u2019ll walk through a scenario in which this becomes a requirement. ", "contentType": "Tutorial"}, "title": "Flexible Querying with Atlas Search", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/real-time-data-javascript", "action": "created", "body": "\n\u00a0\u00a0\u00a0\n\nCONNECTED AS USER $\n\n\u00a0\u00a0\u00a0\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\n\nLatest events:\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0OperationDocument KeyFull Document\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\n\n\u00a0\u00a0\u00a0\n\n\u00a0", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript", "React"], "pageDescription": "In many applications nowadays, you want data to be displayed in real-time. Whether an IoT sensor reporting a value, a stock value that you want to track, or a chat application, you will want the data to automatically update your UI. 
This is possible using MongoDB Change Streams with the Realm Web SDK.", "contentType": "Tutorial"}, "title": "Real Time Data in a React JavaScript Front-End with Change Streams", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/javascript/code-example-js-mongodb-magazinemanagement", "action": "created", "body": "# Magazine Management\n\n## Creator\nTrinh Van Thuan from Vietnam National University contributed this project.\n\n## About the Project\n\nThe system manages students' posts in universities.\nThe system allows students and clients to read posts in diverse categories, such as: math, science, social, etc.\n \n ## Inspiration\n \n Creating an environment for students to communicate and gain knowledge\n\n## Why MongoDB?\nSince MongoDB provides more flexible way to use functions than MySQL or some other query languages\n \n ## How It Works\n There are 5 roles: admin, manager, coordinator, student, clients.\n\n* Admins are in charge of managing accounts.\n* Managers manage coordinators.\n* Coordinators manage students' posts.\n* Students can read their faculty's posts that have been approved.\n* Clients can read all posts that have been approved.", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB"], "pageDescription": "A system manage student's post for school or an organization", "contentType": "Code Example"}, "title": "Magazine Management", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/time-series-cpp", "action": "created", "body": "# MongoDB Time Series with C++\n\nTime series data is a set of data points collected at regular intervals. It\u2019s a common use case in many industries such as finance, IoT, and telecommunications. MongoDB provides powerful features for handling time series data, and in this tutorial, we will show you how to build a C++ console application that uses MongoDB to store time series data, related to the Air Quality Index (AQI) for a given location. We will also take a look at MongoDB Charts to visualize the data saved in the time series.\n\nLibraries used in this tutorial:\n1. MongoDB C Driver version: 1.23.0\n2. MongoDB C++ Driver version: 3.7.0\n3. cpr library\n4. vcpkg\n5. Language standard: C++17\n\nThis tutorial uses Microsoft Windows 11 and Microsoft Visual Studio 2022 but the code used in this tutorial should work on any operating system and IDE, with minor changes.\n\n## Prerequisites\n1. MongoDB Atlas account with a cluster created.\n2. Microsoft Visual Studio setup with MongoDB C and C++ Driver installed. Follow the instructions in Getting Started with MongoDB and C++ to install MongoDB C/C++ drivers and set up the dev environment in Visual Studio.\n3. Your machine\u2019s IP address is whitelisted. Note: You can add 0.0.0.0/0 as the IP address, which should allow access from any machine. This setting is not recommended for production use.\n4. API token is generated using Air Quality Open Data Platform API Token Request Form to fetch AQI for a given location.\n\n## Installation: Libraries\nLaunch powershell/terminal as an administrator and execute commands shared below.\n\nStep 1: Install vcpkg.\n\n```\ngit clone https://github.com/Microsoft/vcpkg.git\ncd vcpkg\n./bootstrap-vcpkg.sh\n./vcpkg integrate install\n```\n\nStep 2: Install libcpr/cpr.\n\n```\n./vcpkg install cpr:x64-windows\n```\n\nThis tutorial assumes we are working with x64 architecture. 
If you are targeting x86, please use this command:\n\n```\n./vcpkg install cpr\n```\n\nNote: Below warning (if encountered) can be ignored.\n\n```\n# this is heuristically generated, and may not be correct\nfind_package(cpr CONFIG REQUIRED)\ntarget_link_libraries(main PRIVATE cpr::cpr)\n```\n\n## Building the application\n\n> Source code available here\n\nIn this tutorial, we will build an Air Quality Index (AQI) monitor that will save the AQI of a given location to a time series collection.\n \nThe AQI is a measure of the quality of the air in a particular area, with higher numbers indicating worse air quality. The AQI is based on a scale of 0 to 500 and is calculated based on the levels of several pollutants in the air. \n\nWe are going to build a console application from scratch. Follow the steps on how to set up the development environment in Visual Studio from our previous article Getting Started with MongoDB and C++, under the section \u201cVisual Studio: Setting up the dev environment.\u201d\n\n### Helper functions\nOnce we have set up a Visual Studio solution, let\u2019s start with adding the necessary headers and writing the helper functions.\n* Make sure to include `` to access methods provided by the *cpr* library. \n\nNote: Since we installed the cpr library with vcpkg, it automatically adds the needed include paths and dependencies to Visual Studio.\n\n* Get the connection string (URI) to the cluster and create a new environment variable with key as `\u201cMONGODB_URI\u201d` and value as the connection string (URI). It\u2019s a good practice to keep the connection string decoupled from the code.\nSimilarly, save the API token obtained in the Prerequisites section with the key as `\u201cAQICN_TOKEN\u201d`.\n\nNavigate to the Solution Explorer panel, right-click on the solution name, and click \u201cProperties.\u201d Go to Configuration Properties > Debugging > Environment to add these environment variables as shown below.\n\n* `\u201cgetAQI\u201d` function makes use of the *cpr* library to make a call to the REST API, fetching the AQI data. The response to the request is then parsed to get the AQI figure.\n* `\u201csaveToCollection\u201d` function saves the given AQI figure to the time series collection. Please note that adding the `\u201ctimestamp\u201d` key-value pair is mandatory. A missing timestamp will lead to an exception being thrown. Check out different `\u201ctimeseries\u201d` Object Fields in Create and Query a Time Series Collection \u2014 MongoDB Manual. \n\n```\n#pragma once\n\n#include \n#include \n#include \n#include \n#include \n#include \n\n#include \n#include \n\nusing namespace std;\n\nstd::string getEnvironmentVariable(std::string environmentVarKey)\n{\nchar* pBuffer = nullptr;\nsize_t size = 0;\nauto key = environmentVarKey.c_str();\n\n// Use the secure version of getenv, ie. _dupenv_s to fetch environment variable. 
\nif (_dupenv_s(&pBuffer, &size, key) == 0 && pBuffer != nullptr)\n{\nstd::string environmentVarValue(pBuffer);\nfree(pBuffer);\nreturn environmentVarValue;\n}\nelse\n{\nreturn \"\";\n}\n}\n\nint getAQI(std::string city, std::string apiToken)\n{\n// Call the API to get the air quality index.\nstd::string aqiUrl = \"https://api.waqi.info/feed/\" + city + \"/?token=\" + apiToken;\nauto aqicnResponse = cpr::Get(cpr::Url{ aqiUrl });\n\n// Get the AQI from the response\nif(aqicnResponse.text.empty())\n{\ncout << \"Error: Response is empty.\" << endl;\nreturn -1;\n}\nbsoncxx::document::value aqicnResponseBson = bsoncxx::from_json(aqicnResponse.text);\nauto aqi = aqicnResponseBson.view()\"data\"][\"aqi\"].get_int32().value;\nreturn aqi;\n}\n\nvoid saveToCollection(mongocxx::collection& collection, int aqi)\n{\nauto timeStamp = bsoncxx::types::b_date(std::chrono::system_clock::now());\n\nbsoncxx::builder::stream::document aqiDoc = bsoncxx::builder::stream::document{};\naqiDoc << \"timestamp\" << timeStamp << \"aqi\" << aqi;\ncollection.insert_one(aqiDoc.view());\n\n// Log to the console window.\ncout << \" TimeStamp: \" << timeStamp << \" AQI: \" << aqi << endl;\n}\n```\n\n### The main() function\nWith all the helper functions in place, let\u2019s write the main function that will drive this application.\n* The main function creates/gets the time series collection by specifying the `\u201ccollection_options\u201d` to the `\u201ccreate_collection\u201d` method. \nNote: MongoDB creates collections implicitly when you first reference the collection in a command, however a time series collection needs to be created explicitly with \u201c[create_collection\u201d. \n* Every 30 minutes, the program gets the AQI figure and updates it into the time series collection. Feel free to modify the time interval as per your liking by changing the value passed to `\u201csleep_for\u201d`.\n\n```\nint main()\n{\n// Get the required parameters from environment variable.\nauto mongoURIStr = getEnvironmentVariable(\"MONGODB_URI\");\nauto apiToken = getEnvironmentVariable(\"AQICN_TOKEN\");\nstd::string city = \"Delhi\";\nstatic const mongocxx::uri mongoURI = mongocxx::uri{ mongoURIStr };\n\nif (mongoURI.to_string().empty() || apiToken.empty())\n{\ncout << \"Invalid URI or API token. Please check the environment variables.\" << endl;\nreturn 0;\n}\n\n// Create an instance.\nmongocxx::instance inst{};\nmongocxx::options::client client_options;\nauto api = mongocxx::options::server_api{ mongocxx::options::server_api::version::k_version_1 };\nclient_options.server_api_opts(api);\nmongocxx::client conn{ mongoURI, client_options };\n\n// Setup Database and Collection.\nconst string dbName = \"AQIMonitor\";\nconst string timeSeriesCollectionName = \"AQIMonitorCollection\";\n\n// Setup Time Series collection options.\nbsoncxx::builder::document timeSeriesCollectionOptions =\n{\n \"timeseries\",\n{\n \"timeField\", \"timestamp\",\n \"granularity\", \"minutes\"\n}\n};\n\nauto aqiMonitorDB = conndbName];\nauto aqiMonitorCollection = aqiMonitorDB.has_collection(timeSeriesCollectionName)\n? 
aqiMonitorDB[timeSeriesCollectionName]\n: aqiMonitorDB.create_collection(timeSeriesCollectionName, timeSeriesCollectionOptions.view().get_document().value);\n\n// Fetch and update AQI every 30 minutes.\nwhile (true)\n{ \nauto aqi = getAQI(city, apiToken);\nsaveToCollection(aqiMonitorCollection, aqi);\nstd::this_thread::sleep_for(std::chrono::minutes(30));\n}\n\nreturn 0;\n}\n```\n\nWhen this application is executed, you can see the below activity in the console window.\n\n![AQI Monitor application in C++ with MongoDB time series\n\nYou can also see the time series collection in Atlas reflecting any change made via the console application.\n\n## Visualizing the data with MongoDB Charts\nWe can make use of MongoDB Charts to visualize the AQI data and run aggregation on top of it.\n\nStep 1: Go to MongoDB Charts and click on \u201cAdd Dashboard\u201d to create a new dashboard \u2014 name it \u201cAQI Monitor\u201d.\n\nStep 2: Click on \u201cAdd Chart\u201d.\n\nStep 3: In the \u201cSelect Data Source\u201d dialog, go to the \u201cProject\u201d tab and navigate to the time series collection created by our code.\n\nStep 4: Change the chart type to \u201cContinuous Line\u201d. We will use this chart to display the AQI trends over time.\n\nStep 5: Drag and drop the \u201ctimestamp\u201d and \u201caqi\u201d fields into the X axis and Y axis respectively. You can customize the look and feel (like labels, color, and data format) in the \u201cCustomize\u201d tab. Click \u201cSave and close\u201d to save the chart.\n\nStep 6: Let\u2019s add another chart to display the maximum AQI \u2014 click on \u201cAdd Chart\u201d and select the same data source as before.\n\nStep 7: Change the chart type to \u201cNumber\u201d.\n\nStep 8: Drag and drop the \u201caqi\u201d field into \u201cAggregation\u201d and change Aggregate to \u201cMAX\u201d.\n\nStep 9: We can customize the chart text to change color based on the AQI values. Let\u2019s make the text green if AQI is less than or equal to 100, and red otherwise. We can perform this action with the conditional formatting option under Customize tab.\n\nStep 10: Similarly, we can add charts for minimum and average AQI. The dashboard should finally look something like this:\n\nTip: Change the dashboard\u2019s auto refresh settings from \u201cRefresh settings\u201d button to choose a refresh time interval of your choice for the charts.\n\n## Conclusion\nWith this article, we covered creating an application in C++ that writes data to a MongoDB time series collection, and used it further to create a MongoDB Charts dashboard to visualize the data in a meaningful way. The application can be further expanded to save other parameters like PM2.5 and temperature. \n\nNow that you've learned how to create an application using the MongoDB C++ driver and MongoDB time series, put your new skills to the test by building your own unique application. Share your creation with the community and let us know how it turned out!\n", "format": "md", "metadata": {"tags": ["MongoDB", "C++"], "pageDescription": "In this tutorial, we will show you how to build a C++ console application that uses MongoDB to store time series data, related to the Air Quality Index (AQI) for a given location. 
", "contentType": "Tutorial"}, "title": "MongoDB Time Series with C++", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-search-multi-language-data-modeling", "action": "created", "body": "# Atlas Search Multi-Language Data Modeling\n\nWe live in an increasingly globalized economy. By extension, users have expectations that our applications will understand the context of their culture and by extension: language.\n\nLuckily, most search engines\u2014including, Atlas Search\u2014support multiple languages. This article will walk through three options of query patterns, data models, and index definitions to support your various multilingual application needs.\n\nTo illustrate the options, we will create a fictitious scenario. We manage a recipe search application that supports three cultures, and by extension, languages: English, Japanese (Kuromoji), and German. Our users are located around the globe and need to search for recipes in their native language.\n\n## 1. Single field\n\nWe have one document for each language in the same collection, and thus each field is indexed separately as its own language. This simplifies the query patterns and UX at the expense of bloated index storage.\n\n**Document:**\n\n```\n[\n {\"name\":\"\u3059\u3057\"},\n {\"name\":\"Fish and Chips\"},\n {\"name\":\"K\u00e4sesp\u00e4tzle\"}\n]\n```\n\n**Index:**\n\n```\n{\n \"name\":\"recipes\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"string\",\n \"analyzer\": \"lucene.kuromoji\"\n },\n \"name\": {\n \"type\": \"string\",\n \"analyzer\": \"lucene.english\"\n },\n \"name\": {\n \"type\": \"string\",\n \"analyzer\": \"lucene.german\"\n }\n }\n }\n}\n```\n\n**Query:**\n\n```\n{\n \"$search\": {\n \"index\": \"recipes\",\n \"text\": {\n \"query\": \"Fish and Chips\",\n \"path\": \"name\"\n }\n }\n}\n```\n\n**Pros:**\n\n* One single index definition.\n* Don\u2019t need to specify index name or path based on user\u2019s language.\n* Can support multiple languages in a single query.\n\n**Cons:**\n\n* As more fields get added, the index definition needs to change.\n* Index definition payload is potentially quite large (static field mapping per language).\n* Indexing fields as irrelevant languages causing larger index size than necessary.\n\n## 2. Multiple collections\n\nWe have one collection and index per language, which allows us to isolate the different recipe languages. 
This could be useful if we have more recipes in some languages than others at the expense of lots of collections and indexes.\n\n**Documents:**\n\n```\nrecipes_jp:\n[{\"name\":\"\u3059\u3057\"}]\n\nrecipes_en:\n[{\"name\":\"Fish and Chips\"}]\n\nrecipes_de:\n[{\"name\":\"K\u00e4sesp\u00e4tzle\"}]\n```\n\n**Index:**\n\n```\n{\n \"name\":\"recipes_jp\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"string\",\n \"analyzer\": \"lucene.kuromoji\"\n }\n }\n }\n}\n\n{\n \"name\":\"recipes_en\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"string\",\n \"analyzer\": \"lucene.english\"\n }\n }\n }\n}\n\n{\n \"name\":\"recipes_de\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"name\": {\n \"type\": \"string\",\n \"analyzer\": \"lucene.german\"\n }\n }\n }\n}\n```\n\n**Query:**\n\n```\n{\n \"$search\": {\n \"index\": \"recipes_jp\"\n \"text\": {\n \"query\": \"\u3059\u3057\",\n \"path\": \"name\"\n }\n }\n}\n```\n\n**Pros:**\n\n* Can copy the same index definition for each collection (replacing the language).\n* Isolate different language documents.\n\n**Cons:**\n\n* Developers have to provide the language name in the index path in advance.\n* Need to potentially copy documents between collections on update.\n* Each index is a change stream cursor, so could be expensive to maintain.\n\n## 3. Multiple fields\n\nBy embedding each language in a parent field, we can co-locate the translations of each recipe in each document.\n\n**Document:**\n\n```\n{\n \"name\": {\n \"en\":\"Fish and Chips\",\n \"jp\":\"\u3059\u3057\",\n \"de\":\"K\u00e4sesp\u00e4tzle\"\n }\n}\n```\n\n**Index:**\n\n```\n{\n \"name\":\"multi_language_names\",\n\u00a0 \"mappings\": {\n\u00a0\u00a0\u00a0 \"dynamic\": false,\n\u00a0\u00a0\u00a0 \"fields\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0 \"name\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"fields\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"de\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"analyzer\": \"lucene.german\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"type\": \"string\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 },\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"en\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"analyzer\": \"lucene.english\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"type\": \"string\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 },\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"jp\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"analyzer\": \"lucene.kuromoji\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"type\": \"string\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 }\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 },\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"type\": \"document\"\n\u00a0\u00a0\u00a0\u00a0\u00a0 }\n\u00a0\u00a0\u00a0 }\n\u00a0 }\n}\n```\n\n**Query:**\n\n```\n{\n \"$search\": {\n \"index\": \"multi_language_names\"\n \"text\": {\n \"query\": \"Fish and Chips\",\n \"path\": \"name.en\"\n }\n }\n}\n```\n\n**Pros:**\n\n* Easier to manage documents.\n* Index definition is sparse.\n\n**Cons:**\n\n* Index definition payload is potentially quite large (static field mapping per language).\n* More complex query and UX.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "This article will walk through three 
options of query patterns, data models, and index definitions to support your various multilingual application needs. ", "contentType": "Tutorial"}, "title": "Atlas Search Multi-Language Data Modeling", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/search-engine-using-atlas-full-text-search", "action": "created", "body": "# Tutorial: Build a Movie Search Engine Using Atlas Full-Text Search in 10 Minutes\n\n>This article is out of date. Check out this new post for the most up-to-date way to MongoDB Atlas Search to find your favorite movies. \ud83d\udcfd \ud83c\udf9e\n>\n>\n\nGiving your users the ability to find exactly what they are looking for in your application is critical for a fantastic user experience. With the new MongoDB Atlas Full-Text Search service, we have made it easier than ever to integrate simple yet sophisticated search capabilities into your MongoDB applications. To demonstrate just how easy it is, let's build a movie search engine - in only 10 minutes.\n\nBuilt on Apache Lucene, Full-Text Search adds document data to a full-text search index to make that data searchable in a highly performant, scalable manner. This tutorial will guide you through how to build a web application to search for movies based on a topic using Atlas' sample movie data collection on a free tier cluster. We will create a Full-Text Search index on that sample data. Then we will query on this index to filter, rank and sort through those movies to quickly surface movies by topic.\n\n \n\nArmed with a basic knowledge of HTML and Javascript, here are the tasks we will accomplish:\n\n* \u2b1c Spin up an Atlas cluster and load sample movie data\n* \u2b1c Create a Full-Text Search index in movie data collection\n* \u2b1c Write an aggregation pipeline with $searchBeta operator\n* \u2b1c Create a RESTful API to access data\n* \u2b1c Call from the front end\n\nNow break out the popcorn, and get ready to find that movie that has been sitting on the tip of your tongue for weeks.\n\n \n\nTo **Get Started**, we will need:\n\n1. A free tier (M0) cluster on MongoDB Atlas. Click here to sign up for an account and deploy your free cluster on your preferred cloud provider and region.\n\n2. The Atlas sample dataset loaded into your cluster. You can load the sample dataset by clicking the ellipse button and **Load Sample Dataset**.\n\n \n \n \n\n > For more detailed information on how to spin up a cluster, configure your IP address, create a user, and load sample data, check out Getting Started with MongoDB Atlas from our documentation. \n\n3. (Optional) MongoDB Compass. This is the free GUI for MongoDB that allows you to make smarter decisions about document structure, querying, indexing, document validation, and more. The latest version can be found here .\n\nOnce your sample dataset is loaded into your database, let's have a closer look to see what we are working within the Atlas Data Explorer. In your Atlas UI, click on **Collections** to examine the `movies` collection in the new `sample_mflix` database. 
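If you want a quick look at the shape of these documents before building the index, one way (assuming you are connected to your cluster with the sample data loaded, for example via the mongo shell or the embedded shell in Compass) is to pull back a single document and project just the fields this tutorial uses:\n\n``` javascript\n// Peek at one movie document, projecting only the fields we care about.\ndb.movies.findOne({}, { title: 1, year: 1, fullplot: 1, cast: 1 })\n```\n\n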
This collection has over 23k movie documents with information such as title, plot, and cast.\n\n \n\n* \u2705 Spin up an Atlas cluster and load sample movie data\n* \u2b1c Create a Full-Text Search index in movie data collection\n* \u2b1c Write an aggregation pipeline with $searchBeta operator\n* \u2b1c Create a RESTful API to access data\n* \u2b1c Call from the front end\n\n## Create a Full-Text Search Index\n\nOur movie search engine is going to look for movies based on a topic. We will use Full-Text Search to query for specific words and phrases in the 'fullplot' field.\n\nThe first thing we need is a Full-Text Search index. Click on the tab titled SearchBETA under **Collections**. Clicking on the green **Create a Search Index** button will open a dialog that looks like this:\n\n \n\nBy default, we dynamically map all the text fields in your collection. This suits MongoDB's flexible data model perfectly. As you add new data to your collection and your schema evolves, dynamic mapping accommodates those changes in your schema and adds that new data to the Full-Text Search index automatically.\n\nLet's accept the default settings and click **Create Index**. *And that's all you need to do to start taking advantage of Lucene in your MongoDB Atlas data!*\n\n* \u2705 Spin up an Atlas cluster and load sample movie data\n* \u2705 Create a Full-Text Search index in movie data collection\n* \u2b1c Write an aggregation pipeline with $searchBeta operator\n* \u2b1c Create a RESTful API to access data\n* \u2b1c Call from the front end\n\n## Write Aggregation Pipeline With $searchbeta Operator\n\nFull-Text Search queries take the form of an aggregation pipeline stage. The **$searchBeta** stage performs a search query on the specified field(s) covered by the Full-Text Search index and must be used as the first stage in the aggregation pipeline.\n\nLet's use MongoDB Compass to see an aggregation pipeline that makes use of this Full-Text Search index. For instructions on how to connect your Atlas cluster to MongoDB Compass, click here.\n\n*You do not have to use Compass for this stage, but I really love the easy-to-use UI Compass has to offer. Plus the ability to preview the results by stage makes troubleshooting a snap! For more on Compass' Aggregation Pipeline Builder, check out this* blog*.*\n\nNavigate to the Aggregations tab in the `sample_mflix.movies` collection:\n\n \n\n### Stage 1. $searchBeta\n\n \n\nFor the first stage, select the **$searchBeta** aggregation operator to search for the terms 'werewolves and vampires' in the `fullplot` field.\n\nUsing the **highlight** option will return the highlights by adding fields to the result that display search terms in their original context, along with the adjacent text content. (More on this later.)\n\n \n\n>Note the returned movie documents in the preview panel on the right. If no documents are in the panel, double-check the formatting in your aggregation code.\n\n### Stage 2: $project\n\n \n\nWe use `$project` to get back only the fields we will use in our movie search application. We also use the `$meta` operator to surface each document's **searchScore** and **searchHighlights** in the result set.\n\nLet's break down the individual pieces in this stage further:\n\n**SCORE:** The `\"$meta\": \"searchScore\"` contains the assigned score for the document based on relevance. 
This signifies how well this movie's `fullplot` field matches the query terms 'werewolves and vampires' above.\n\nNote that by scrolling in the right preview panel, the movie documents are returned with the score in descending order so that the best matches are provided first.\n\n**HIGHLIGHT:** The **\"$meta\": \"searchHighlights\"** contains the highlighted results.\n\n*Because* **searchHighlights** *and* **searchScore** *are not part of the original document, it is necessary to use a $project pipeline stage to add them to the query output.*\n\nNow open a document's **highlight** array to show the data objects with text **values** and **types**.\n\n``` bash\ntitle:\"The Mortal Instruments: City of Bones\"\nfullplot:\"Set in contemporary New York City, a seemingly ordinary teenager, Clar...\"\nyear:2013\nscore:6.849891185760498\nhighlight:Array\n 0:Object\n path:\"fullplot\"\n texts:Array\n 0:Object\n value:\"After the disappearance of her mother, Clary must join forces with a g...\"\n type:\"text\"\n 1:Object\n value:\"vampires\"\n type:\"hit\"\n 2:Object\n 3:Object\n 4:Object\n 5:Object\n 6:Object\n score:3.556248188018799\n```\n\n**highlight.texts.value** - text from the `fullplot` field, which returned a match. **highlight.texts.type** - either a hit or a text. A hit is a match for the query, whereas a **text** is text content adjacent to the matching string. We will use these later in our application code.\n\n### Stage 3: $limit\n\n \n\nRemember the results are returned with the scores in descending order, so $limit: 10 will bring the 10 most relevant movie documents to your search query.\n\nFinally, if you see results in the right preview panel, your aggregation pipeline is working properly! Let's grab that aggregation code with Compass' Export Pipeline to Language feature by clicking the button in the top toolbar.\n\n \n\nYour final aggregation code will be this:\n\n``` bash\n\n { $searchBeta: {\nsearch: {\n query: 'werewolves and vampires',\n path: 'fullplot' },\n highlight: { path: 'fullplot' }\n }},\n { $project: {\n title: 1,\n _id: 0,\n year: 1,\n fullplot: 1,\n score: { $meta: 'searchScore' },\n highlight: { $meta: 'searchHighlights' }\n }},\n { $limit: 10 }\n]\n```\n\n* \u2705 Spin up an Atlas cluster and load sample movie data\n* \u2705 Create a Full-Text Search index in movie data collection\n* \u2705 Write an aggregation pipeline with $searchBeta operator\n* \u2b1c Create a RESTful API to access data\n* \u2b1c Call from the front end\n\n## Create a REST API\n\nNow that we have the heart of our movie search engine in the form of an aggregation pipeline, how will we use it in an application? There are lots of ways to do this, but I found the easiest was to simply create a RESTful API to expose this data - and for that, I used MongoDB Stitch's HTTP Service.\n\n[Stitch is MongoDB's serverless platform where functions written in Javascript automatically scale to meet current demand. To create a Stitch application, return to your Atlas UI and click **Stitch** under SERVICES on the left menu, then click the green **Create New Application** button.\n\nName your Stitch application FTSDemo and make sure to link to your M0 cluster. All other default settings are fine:\n\n \n\nNow click the **3rd Party Services** menu on the left and then **Add a Service**. 
Select the HTTP service and name it **movies**:\n\n \n\nClick the green **Add a Service** button, and you'll be directed to add an incoming webhook.\n\nOnce in the **Settings** tab, enable **Respond with Result**, set the HTTP Method to **GET**, and to make things simple, let's just run the webhook as the System and skip validation.\n\n \n\nIn this service function editor, replace the example code with the following:\n\n``` javascript\nexports = function(payload) {\n const collection = context.services.get(\"mongodb-atlas\").db(\"sample_mflix\").collection(\"movies\");\n let arg = payload.query.arg;\n return collection.aggregate(\n { $searchBeta: {\n search: {\n query: arg,\n path:'fullplot',\n },\n highlight: { path: 'fullplot' }\n }},\n { $project: {\n title: 1,\n _id:0,\n year:1,\n fullplot:1,\n score: { $meta: 'searchScore'},\n highlight: {$meta: 'searchHighlights'}\n }},\n { $limit: 10}\n ]).toArray();\n};\n```\n\nLet's break down some of these components. MongoDB Stitch interacts with your Atlas movies collection through the global **context** variable. In the service function, we use that context variable to access the sample_mflix.movies collection in your Atlas cluster:\n\n``` javascript\nconst collection =\ncontext.services.get(\"mongodb-atlas\").db(\"sample_mflix\").collection(\"movies\");\n```\n\nWe capture the query argument from the payload:\n\n``` javascript\nlet arg = payload.query.arg;\n```\n\nReturn the aggregation code executed on the collection by pasting your aggregation into the code below:\n\n``` javascript\nreturn collection.aggregate(<>).toArray();\n```\n\nFinally, after pasting the aggregation code, we changed the terms 'werewolves and vampires' to the generic arg to match the function's payload query argument - otherwise our movie search engine capabilities will be extremely limited.\n\n \n\nNow you can test in the Console below the editor by changing the argument from **arg1: \"hello\"** to **arg: \"werewolves and vampires\"**.\n\n>Note: Please be sure to change BOTH the field name **arg1** to **arg**, as well as the string value **\"hello\"** to **\"werewolves and vampires\"** - or it won't work.\n\n \n\n \n\nClick **Run** to verify the result:\n\n \n\nIf this is working, congrats! We are almost done! Make sure to **SAVE** and deploy the service by clicking **REVIEW & DEPLOY CHANGES** at the top of the screen.\n\n### Use the API\n\nThe beauty of a REST API is that it can be called from just about anywhere. Let's execute it in our browser. However, if you have tools like Postman installed, feel free to try that as well.\n\nSwitch to the **Settings** tab of the **movies** service in Stitch and you'll notice a Webhook URL has been generated.\n\n \n\nClick the **COPY** button and paste the URL into your browser. Then append the following to the end of your URL: **?arg='werewolves and vampires'**\n\n \n\nIf you receive an output like what we have above, congratulations! You have successfully created a movie search API!\n\n \n\n* \u2705 Spin up an Atlas cluster and load sample movie data\n* \u2705 Create a Full-Text Search index in movie data collection\n* \u2705 Write an aggregation pipeline with $searchBeta operator\n* \u2705 Create a RESTful API to access data\n* \u2b1c Call from the front end\n\n \n\n## Finally! - The Front End\n\nFrom the front end application, it takes a single call from the Fetch API to retrieve this data. Download the following [index.html file and open it in your browser. 
You will see a simple search bar:\n\n \n\nEntering data in the search bar will bring you movie search results because the application is currently pointing to an existing API.\n\nNow open the HTML file with your favorite text editor and familiarize yourself with the contents. You'll note this contains a very simple container and two javascript functions:\n\n- Line 82 - **userAction()** will execute when the user enters a search. If there is valid input in the search box and no errors, we will call the **buildMovieList()** function.\n- Line 125 - **buildMovieList()** is a helper function for **userAction()** which will build out the list of movies, along with their scores and highlights from the 'fullplot' field. Notice in line 146 that if the highlight.texts.type === \"hit\" we highlight the highlight.texts.value with the tag.\n\n### Modify the Front End Code to Use Your API\n\nIn the **userAction()** function, we take the input from the search form field in line 79 and set it equal to **searchString**. Notice on line 82 that the **webhook_url** is already set to a RESTful API I created in my own FTSDemo application. In this application, we append that **searchString** input to the **webhook_url** before calling it in the fetch API in line 111. To make this application fully your own, simply replace the existing **webhook_url** value on line 82 with your own API from the **movies** Stitch HTTP Service you just created. \ud83e\udd1e\n\nNow save these changes, and open the **index.html** file once more in your browser, et voil\u00e0! You have just built your movie search engine using Full-Text search indexes. \ud83d\ude4c What kind of movie do you want to watch?!\n\n \n\n## That's a Wrap!\n\nNow that you have just seen how easy it is to build a simple, powerful search into an application with MongoDB Atlas Full-Text Search, go ahead and experiment with other more advanced features, such as type-ahead or fuzzy matching, for your fine-grained searches. Check out our $searchBeta documentation for other possibilities.\n\n \n\nHarnessing the power of Apache Lucene for efficient search algorithms, static and dynamic field mapping for flexible, scalable indexing, all while using the same MongoDB Query Language (MQL) you already know and love, spoken in our very best Liam Neeson impression MongoDB now has a very particular set of skills. Skills we have acquired over a very long career. Skills that make MongoDB a DREAM for developers like you.\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Build a Movie Search Engine Using Full Text Search Indexes on MongoDB Atlas.", "contentType": "Tutorial"}, "title": "Tutorial: Build a Movie Search Engine Using Atlas Full-Text Search in 10 Minutes", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/javascript/satellite-code-example-mongodb", "action": "created", "body": "# EnSat\n\n## Creators\nAshish Adhikari, Awan Shrestha, Sabil Shrestha and Sansrit Paudel from Kathmandu University in Nepal contributed this project.\n\n## About the project\n\nEnSat (Originally, the project was started with the name \"PicoSat,\" and changed later to \"EnSat\") is a miniature version of an Environmental Satellite which helps to record and analyze the environmental parameters such as altitude, pressure, temperature, humidity, and pollution level.\nAccess this project in Githubhere.\n\n## Inspiration\n\nI was always very interested in how things work. 
We had a television when I was young, and there were all these wires connected to the television. I was fascinated by how the tv worked and how this could show moving images. I always wondered about things like these as a child. I studied this, and now I\u2019m in college learning more about it. For this project, I wanted to do something that included data transfer at a very low level.\n\nMy country is not so advanced technologically. But last year, Nepal\u2019s first satellite was launched into space. That inspired me. I might not be able to do that same thing now, but I wanted to try something smaller, and I built a miniature satellite. And that\u2019s how this project came to be. I was working on the software, and my friends were working on the hardware, and that\u2019s how we collaborated.\n\n:youtube]{vid=1tP2LEQJyNU}\n\n## Why MongoDB?\n\nWe had our Professor Dr. Gajendra Sharma supervising the project, but we were free to choose whatever we wanted. For the first time in this project, I used MongoDB; before that, I was not familiar with MongoDB. I was also not used to the GUI react part; while I was learning React, the course also included MongoDB. Before this project, I was using MySQL, I was planning on using MySQL again, but after following this course, I decided to switch to MongoDB. And this was good; transferring the data and storing the data is so much more comfortable with MongoDB. With MongoDB, we only have to fetch the data from the database and send it. The project is quite complicated, but MongoDB made it so much easier on the software level, so that\u2019s why we chose MongoDB for the project.\n\n## How it works\n\nA satellite with a microcontroller and sensors transmits the environmental data to Ground Station through radio frequency using ISM band 2.4 GHz. The Ground Station has a microcontroller and receiver connected to a computer where the data is stored in the MongoDB database. The API then fetches data from the database providing live data and history data of the environmental parameters. Using the data from API, the information is shown in the GUI built in React. This was our group semester project where the Serialport package for data communication, MongoDB for database, and React was used for the GUI. Our report in the GitHub repository can also tell you more in detail how everything works.\n\nIt is a unique and different project, and it is our small effort to tackle the global issue of climate change and environmental pollution. The project includes both hardware and software parts. EnSat consists of multidisciplinary domains. Creating it was a huge learning opportunity for us as we made our own design and architecture for the project's hardware and software parts. This project can inspire many students to try MongoDB with skills from different domains and try something good for our world.\n\n![\n\n## Challenges and learnings\n\nThere was one challenging part, and I was stuck for three days. It made me build my own serial data port to be able to get data in the server. That was a difficult time. With MongoDB, there was not any difficulty. It made the job so much easier.\n\nIt\u2019s also nice to share that we participated in three competitions and that we won three awards. One contest was where the satellite is actually dropped from the drone from the height, and we have to capture the environmental data at different heights as it comes down. It was the first competition of that kind in my country, and we won that one. 
We won another one for the best product and another for the best product under the Advancing Towards Smart Cities, Sustainable Development Goals category.\n\nI learned so many things while working on this project. Not only React and MongoDB, but I also learned everything around the hardware: Arduino programming, programming C for Arduino, the hardware level of programming. And the most important thing I learned is never to give up. At times it was so frustrating and difficult to get everything running. If you want to do something, keep on trying, and sometimes it clicks in your mind, and you just do it, and it happens.\n\nI\u2019m glad that MongoDB is starting programs for Students. These are the kind of things that motivate us. Coming from a not so developed country, we sometimes feel a bit separated. It\u2019s so amazing that we can actually take part in this kind of program. It\u2019s the most motivating factor in doing engineering and studying engineering. Working on these complex projects and then being recognized by MongoDB is a great motivation for all of us.", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB", "C++", "Node.js", "React"], "pageDescription": "An environmental satellite to get information about your environment.", "contentType": "Code Example"}, "title": "EnSat", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/how-to-connect-mongodb-atlas-to-vercel-using-the-new-integration", "action": "created", "body": "# How to Connect MongoDB Atlas to Vercel Using the New Integration\n\nGetting a MongoDB Atlas database created and linked to your Vercel project has never been easier. In this tutorial, we\u2019ll get everything set up\u2014including a starter Next.js application\u2014in just minutes.\n\n## Prerequisites\n\nFor this tutorial, you\u2019ll need:\n\n* MongoDB Atlas (sign up for free).\n* Vercel account (sign up for free).\n* Node.js 14+.\n\n> This tutorial will work with any frontend framework if Next.js isn\u2019t your preference.\n\n## What is Vercel?\n\nVercel is a cloud platform for hosting websites and web applications. Deployment is seamless and scaling is automatic.\n\nMany web frameworks will work on Vercel, but the one most notable is Vercel\u2019s own Next.js. Next.js is a React-based framework and has many cool features, like built-in routing, image optimization, and serverless and Edge Functions. \n\n## Create a starter project\n\nFor our example, we are going to use Next.js. However, you can use any web framework you would like. \n\nWe\u2019ll use the `with-mongodb` example app to get us started. This will give us a Next.js app with MongoDB Atlas integration already set up for us.\n\n```bash\nnpx create-next-app --example with-mongodb vercel-demo -y\n```\n\nWe are using the standard `npx create-next-app` command along with the `--example with-mongodb` parameter which will give us our fully integrated MongoDB application. Then, `vercel-demo` is the name of our application. You can name yours whatever you would like.\n\nAfter that completes, navigate to your application directory:\n\n```bash\ncd vercel-demo\n````\n\nAt this point, we need to configure our MongoDB Atlas database connection. Instead of manually setting up our database and connection within the MongoDB Atlas dashboard, we are going to do it all through Vercel!\n\nBefore we move over to Vercel, let\u2019s publish this project to GitHub. 
Using the built-in Version Control within VS Code, if you are logged into your GitHub account, it\u2019s as easy as pressing one button in the Source Control tab.\n\nI\u2019m going to press *Publish Branch* and name my new repo `vercel-integration`.\n\n## Create a Vercel project and integrate MongoDB\n\nFrom your Vercel dashboard, create a new project and then choose to import a GitHub repository by clicking Continue with GitHub.\n\nChoose the repo that you just created, and then click Deploy. This deployment will actually fail because we have not set up our MongoDB integration yet.\n\nGo back to the main Vercel dashboard and select the Integrations tab. From here, you can browse the marketplace and select the MongoDB Atlas integration. \n\nClick Add Integration, select your account from the dropdown, and then click continue.\n\nNext, you can either add this integration to all projects or select a specific project. I\u2019m going to select the project I just created, and then click Add Integration.\n\nIf you do not already have a MongoDB Atlas account, you can sign up for one at this step. If you already have one, click \u201cLog in now.\u201d\n\nThe next step will allow you to select which Atlas Organization you would like to connect to Vercel. Either create a new one or select an existing one. Click Continue, and then I Acknowledge.\n\nThe final step allows you to select an Atlas Project and Cluster to connect to. Again, you can either create new ones or select existing ones.\n\nAfter you have completed those steps, you should end up back in Vercel and see that the MongoDB integration has been completed.\n\nIf you go to your project in Vercel, then select the Environment Variables section of the Settings page, you\u2019ll see that there is a new variable called `MONGODB_URI`. This can now be used in our Next.js application. \n\nFor more information on how to connect MongoDB Atlas with Vercel, see our documentation.\n\n## Sync Vercel settings to local environment\n\nAll we have to do now is sync our environment variables to our local environment.\n\nYou can either manually copy/paste your `MONGODB_URI` into your local `.env` file, or you can use the Vercel CLI to automate that.\n\nLet\u2019s add the Vercel CLI to our project by running the following command:\n\n```bash\nnpm i vercel\n```\n\nIn order to link our local project with our Vercel project, run the following command:\n\n```bash\nvercel\n```\n\nChoose a login method and use the browser pop-up to authenticate.\n\nAnswer *yes* to set up and deploy.\n\nSelect the appropriate scope for your project.\n\nWhen asked to link to an existing project, type *Y* and press *enter*.\n\nNow, type the name of your Vercel project. This will link the local project and run another build. Notice that this build works. That is because the environment variable with our MongoDB connection string is already in production.\n\nBut if you run the project locally, you will get an error message.\n\n```bash\nnpm run dev\n```\n\nWe need to pull the environment variables into our project. To do that, run the following:\n\n```bash\nvercel env pull\n```\n\nNow, every time you update your repo, Vercel will automatically redeploy your changes to production!\n\n## Conclusion\n\nIn this tutorial, we set up a Vercel project, linked it to a MongoDB Atlas project and cluster, and linked our local environment to these. 
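If you want to verify the connection from application code, the variable can be read the same way the `with-mongodb` starter does it, via `process.env.MONGODB_URI`. Here is a minimal sketch (an illustration rather than the starter's own code; it assumes the `mongodb` driver that ships with the example app):\n\n```javascript\nimport { MongoClient } from \"mongodb\";\n\n// MONGODB_URI is injected by the Vercel integration in production\n// and pulled into .env locally with `vercel env pull`.\nconst client = new MongoClient(process.env.MONGODB_URI);\n\nexport async function pingDatabase() {\n  // A simple connectivity check: list the database names on the cluster.\n  await client.connect();\n  const { databases } = await client.db().admin().listDatabases();\n  return databases.map((db) => db.name);\n}\n```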
\n\nThese same steps will work with any framework and will provide you with the local and production environment variables you need to connect to your MongoDB Atlas database.\n\nFor an in-depth tutorial on Next.js and MongoDB, check out How to Integrate MongoDB Into Your Next.js App.\n\nIf you have any questions or feedback, check out our MongoDB Community forums and let us know what you think.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Vercel", "Node.js"], "pageDescription": "Getting a MongoDB Atlas database created and linked to your Vercel project has never been easier. In this tutorial, we\u2019ll get everything set up\u2014including a starter Next.js application\u2014in just minutes.", "contentType": "Quickstart"}, "title": "How to Connect MongoDB Atlas to Vercel Using the New Integration", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-javascript-nan-to-n-api", "action": "created", "body": "# How We Migrated Realm JavaScript From NAN to N-API\n\nRecently, the Realm JavaScript team has reimplemented the Realm JS\nNode.js SDK from the ground up to use\nN-API. In this post, we\ndescribe the need to migrate to N-API because of breaking changes in the\nJavaScript Virtual Machine and how we approached it in an iterative way.\n\n## HISTORY\n\nNode.js and\nElectron are supported platforms for the\nRealm JS SDK. Our\nembedded library consists of a JavaScript library and a native code\nNode.js addon that interacts with the Realm Database native code. This\nprovides the database functionality to the JS world. It interacts with\nthe V8 engine, which is the JavaScript virtual machine used in Node.js\nthat executes the JavaScript user code.\n\nThere are different ways to write a Node.js addon. One way is to use the\nV8 APIs directly. Another is to use an abstraction layer that hides the\nV8 specifics and provides a stable API across versions of Node.js.\n\nThe JavaScript V8 virtual machine is a moving target. Its APIs are\nconstantly changing between versions. Some are deprecated, and new APIs\nare introduced all the time. Previous versions of Realm JS used\nNAN to interact with the V8 virtual\nmachine because we wanted to have a more stable layer of APIs to\nintegrate with.\n\nWhile useful, this had its drawbacks since NAN also needed to handle\ndeprecated V8 APIs across versions. And since NAN integrates tightly\nwith the V8 APIs, it did not shield us from the virtual machine changes\nunderneath it. In order to work across the different Node.js versions,\nwe needed to create a native binary for every major Node.js version.\nThis sometimes required major effort from the team, resulting in delayed\nreleases of Realm JS for a new Node.js version.\n\nThe changing VM API functionality meant handling the deprecated V8\nfeatures ourselves, resulting in various version checks across the code\nbase and bugs, when not handled in all places.\n\nThere were many other native addons that have experienced the same\nproblem. Thus, the Node.js team decided to create a stable API layer\nbuild within Node.js itself, which guarantees API stability across major\nNode.js versions regardless of the virtual machine API changes\nunderneath. This API layer is called\nN-API. It not only\nprovides API stability but also guarantees ABI stability. This means\nbinaries compiled for one major version are able to run on later major\nversions of Node.js.\n\nN-API is a C API. 
To support C++ for writing Node.js addons, there is a\nmodule called\nnode-addon-api. This module\nis a more efficient way to write code that calls N-API. It provides a\nlayer on top of N-API. Developers use this to create and manipulate\nJavaScript values with integrated exception handling that allows\nhandling JavaScript exceptions as native C++ exceptions and vice versa.\n\n## N-API Challenges\n\nWhen we started our move to N-API, the Realm JavaScript team decided\nearly on that we would build an N-API native module using the\nnode-addon-api library. This is because Realm JS is written in C++ and\nthere is no reason not to choose the C++ layer over the pure N-API C\nlayer.\n\nThe motivation of needing to defend against breaking changes in the JS\nVM became one of the goals when doing a complete rewrite of the library.\nWe needed to provide exactly the same behavior that currently exists.\nThankfully, the Realm JS library has an extensive suite of tests which\ncover all of the supported features. The tests are written in the form\nof integration tests which test the specific user API, its invocation,\nand the expected result.\n\nThus, we didn't need to handle and rewrite fine-grained unit tests which\ntest specific details of how the implementation is done. We chose this\ntack because we could iteratively convert our codebase to N-API, slowly\nconverting sections of code while running regression tests which\nconfirmed correct behavior, while still running NAN and N-API at the\nsame time. This allowed us to not tackle a full rewrite all at once.\n\nOne of the early challenges we faced is how we were going to approach\nsuch a big rewrite of the library. Rewriting a library with a new API\nwhile at the same time having the ability to test as early as possible\nis ideal to make sure that code is running correctly. We wanted the\nability to perform the N-API migration partially, reimplementing\ndifferent parts step by step, while others still remained on the old NAN\nAPI. This would allow us to build and test the whole project with some\nparts in NAN and others in N-API. Some of the tests would invoke the new\nreimplemented functionality and some tests would be using the old one.\n\nUnfortunately, NAN and N-API diverged too much starting from the initial\nsetup of the native addon. Most of the NAN code used the `v8::Isolate`\nand the N-API code had the opaque structure `Napi::Env` as a substitute\nto it. Our initialization code with NAN was using the v8::Isolate to\ninitialize the Realm constructor in the init function\n\n``` clike\nstatic void init(v8::Local exports, \n v8::Local module, v8::Local context) {\n v8::Isolate* isolate = context->GetIsolate();\n v8::Local realm_constructor = \n js::RealmClass::create_constructor(isolate);\n\n Nan::Set(exports, realm_constructor->GetName(), realm_constructor);\n }\nNODE_MODULE_CONTEXT_AWARE(Realm, realm::node::init);\n```\n\nand our N-API equivalent for this code was going to be\n\n``` clike\nstatic Napi::Object NAPI_Init(Napi::Env env, Napi::Object exports) {\n return exports;\n}\nNODE_API_MODULE(realm, NAPI_Init)\n```\n\nWhen we look at the code, we can see that we can't call `v8::isolate`,\nwhich we used in our old implementation, from the exposed N-API. The\nproblem becomes clear: We don't have any access to the `v8::Isolate`,\nwhich we need if we want to invoke our old initialization logic.\n\nFortunately, it turned out we could just use a hack in our initial\nimplementation. 
This enabled us to convert certain parts of our Realm JS\nimplementation while we continued to build and ship new versions of\nRealm JS with parts using NAN. Since `Napi::Env` is just an equivalent\nsubstitute for `v8::Isolate`, we can check if it has a `v8::Isolate`\nhiding in it. As it turns out, this is a way to do this - but it's a\nprivate member. We can grab it from memory with\n\n``` clike\nnapi_env e = env;\nv8::Isolate* isolate = (v8::Isolate*)e + 3;\n```\n\nand our NAPI_init method becomes\n\n``` clike\nstatic Napi::Object NAPI_Init(Napi::Env env, Napi::Object exports) {\n//NAPI: FIXME: remove when NAPI complete\n napi_env e = env;\n v8::Isolate* isolate = (v8::Isolate*)e + 3;\n //the following two will fail if isolate is not found at the expected location\n auto currentIsolate = isolate->GetCurrent();\n auto context = currentIsolate->GetCurrentContext();\n //\n\n realm::node::napi_init(env, currentIsolate, exports);\n return exports;\n}\n```\n\nHere, we invoke two functions \u2014 `isolate->GetCurrent()` and\n`isolate->GetCurrentContext()` \u2014 to verify early on that the pointer to\nthe `v8::Isolate` is correct and there are no crashes.\n\nThis allowed us to extract a simple function which can return a\n`v8::Isolate` from the `Napi::Env` structure any time we needed it. We\ncontinued to switch all our function signatures to use the new\n`Napi::Env` structure, but the implementation of these functions could\nbe left unchanged by getting the `v8::Isolate` from `Napi::Env` where\nneeded. Not every NAN function of Realm JS could be reimplemented this\nway but still, this hack allowed for an easy process by converting the\nfunction to NAPI, building and testing. It then gave us the freedom to\nship a fully NAPI version without the hack once we had time to convert\nthe underlying API to the stable version.\n\n## What We Learned\n\nHaving the ability to build the entire project early on and then even\nrun it in hybrid mode with NAN and N-API allowed us to both refactor and\ncontinue to ship net new features. We were able to run specific tests\nwith the new functionality while the other parts of the library remained\nuntouched. Being able to build the project is more valuable than\nspending months reimplementing with the new API, only then to discover\nsomething is not right. As the saying goes, \"Test early, fail fast.\"\n\nOur experience while working with N-API and node-addon-api was positive.\nThe API is easy to use and reason. The integrated error handling is of a\ngreat benefit. It catches JS exceptions from JS callbacks and rethrows\nthem as C++ exceptions and vice versa. There were some quirks along the\nway with how node-addon-api handled allocated memory when exceptions\nwere raised, but we were easily able to overcome them. We have submitted\nPRs for some of these fixes to the node-addon-api library.\n\nRecently, we flipped the switch to one of the major features we gained\nfrom N-API - the build system release of the Realm JS native binary.\nNow, we build and release a single binary for every Node.js major\nversion.\n\nWhen we finished, the Realm JS with N-API implementation resulted in\nmuch cleaner code than we had before and our test suite was green. 
The\nN-API migration fixed some of the major issues we had with the previous\nimplementation and ensures our future support for every new major\nNode.js version.\n\nFor our community, it means a peace of mind that Realm JS will continue\nto work regardless of which Node.js or Electron version they are working\nwith - this is the reason why the Realm JS team chose to replatform on\nN-API.\n\nTo learn more, ask questions, leave feedback, or simply connect with\nother MongoDB developers, visit our community\nforums. Come to learn.\nStay to connect.\n\n>\n>\n>To get started with RealmJS, visit our GitHub\n>Repo. Getting started with Atlas is\n>also easy. Sign up for a free MongoDB\n>Atlas account to start working with\n>all the exciting new features of MongoDB, including Realm and Charts,\n>today!\n>\n>\n\n", "format": "md", "metadata": {"tags": ["Realm", "JavaScript", "Node.js"], "pageDescription": "The Realm JavaScript team has reimplemented the Realm JS Node.js SDK from the ground up to use N-API. Here we describe how and why.", "contentType": "News & Announcements"}, "title": "How We Migrated Realm JavaScript From NAN to N-API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/python/merge-url", "action": "created", "body": "# MergeURL - Python Example App\n\n## Creators\nMehant Kammakomati and Sai Vittal B contributed this project.\n\n## About the project\n\nMergeURL is an instant URL shortening and merging service that lets you merge multiple URLs into a single short URL. You can merge up to 5 URLs within no time and share one single shortened URL. MergeURL lifts off the barriers of user registration and authentication, making it instant to use. It also provides two separate URLs to view the URLs' list and open all the browser URLs.\n\nMergeURL is ranked #2 product of the day on ProductHunt. It is used by people across the world, with large numbers coming from the United States and India.\n\n# Inspiration\n\nWe had this problem of sharing multiple URLs in a message or an email or via Twitter. We wanted to create a trustworthy service that can merge all those URLs in a single short one. We tried finding out if there were already solutions to this problem, and most of the solutions we found, required an account creation or to put my credentials. We wanted to have something secure, trustworthy, but that doesn\u2019t require user authentication. Sai Vittal worked mostly on the front end of the application, and I (Mehant) worked on the back end and the MongoDB database. It was a small problem that we encountered that led us to build MergeURL. \n\nWe added our product to ProductHunt last August, and we became number #2 for a while; this gave us the kickstart to reach a wider audience. We currently have around 181.000 users and around 252.000 page views. The number of users motivates us to work a lot on updates and add more security layers to it. \n\n## Why MongoDB?\n \nFor MergeURL, MongoDB plays a crucial role in our URL shortening and merging algorithm, contributing to higher security and reducing data redundancy. MongoDB Atlas lifts off the burden to host and maintain databases that made our initial development of MergeURL 10X faster, and further maintaining and monitoring has become relatively easy. \n\nFirstly we discussed whether to go for a SQL or NoSQL database. According to the algorithms, our primary approach is that going with a NoSQL database would be the better option. 
MongoDB is at the top of the chart; it is the one that comes to mind when you think about NoSQL databases. Client libraries like PyMongo make it so much easier to connect and use MongoDB. We use MongoDB Atlas itself because it\u2019s already hosted. It made it much easier for us to work with it. We\u2019ve been using the credits that we received from the GitHub Student Developer Pack offer. \n\n## How it works\n\nThe frontend is written using React, and it\u2019s compiled into the optimal static assets. As we know, the material is a relatively simple service; we don\u2019t need a lot of complicated stuff in the back end. Therefore we used a microservice; we used Flask to write the backend server. And we use MongoDB. We have specific algorithms that work on the URLs, and MongoDB played a vital role in implementing those algorithms and taking control of the redundancy. \n\nIt works relatively smoothly. You go to our website; you fill out the URLs you want to shorten, and it will give you a short URL that includes all the URLs. \n\n## Challenges and lessons learned\n\nOne of the challenges lies in our experience. We both didn\u2019t have any experience launching a product and getting it to users. Launching MergeURL was the first time we did this, and it went very well.\n\nMongoDB specific, we didn\u2019t have any problems. Specifically (Mehant), I struggled a lot with SQL databases in my freshman and sophomore years. I\u2019m pleased that I found MongoDB; it saves a lot of stress and frustration. Everything is relatively easy. Besides that, the documents are quite flexible; it\u2019s not restricted as with SQL. We can create many more challenges with MongoDB. \n\nI\u2019ve learned a lot about the process. Converting ideas into actual implementation was the most important thing. One can have many ideas, but turning them into life is essential. \n\nAt the moment, the project merges the URLs. We are thinking of maybe adding a premium plan where people can get user-specific extensions. We use a counter variable to give those IDs to the shortened URL, but we would like to implement adding user specific extensions. \n\nAnd we would like to add analytics. How many users are clicking on your shortened URL? Where is the traffic coming from? \n\nWe are thrilled with the product as it is, but there are plenty of future ideas. \n\n", "format": "md", "metadata": {"tags": ["Python", "Atlas", "Flask"], "pageDescription": "Shorten multiple URLs instantly while overcoming the barriers of user registration.", "contentType": "Code Example"}, "title": "MergeURL - Python Example App", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/use-rongo-store-roblox-game-data-in-atlas", "action": "created", "body": "# Storing Roblox Game Data in MongoDB Atlas Using Rongo\n\n## Introduction \n\nThis article will walk you through setting up and using Rongo in your Roblox games, and storing your data in MongoDB Atlas. Rongo is a custom SDK that uses the MongoDB Atlas Data API.\n\nWe\u2019ll walk through the process of inserting Rongo into your game, setting Rongo up with MongoDB, and finally, using Rongo to store a player's data in MongoDB Atlas as an alternative to Roblox\u2019s DataStores. 
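Since Rongo is simply a wrapper around the Atlas Data API, it can help to see what one of its calls boils down to. As a rough illustration only (not Rongo\u2019s actual source), a `FindOne` call maps to an HTTPS request along the lines of the following JavaScript sketch, where the app ID, API key, and filter are placeholders:\n\n```javascript\n// Illustrative only: the raw Data API request that a Rongo FindOne roughly corresponds to.\nasync function findOnePlayer(userId) {\n  const response = await fetch(\n    \"https://data.mongodb-api.com/app/YOUR_DATA_API_APP_ID/endpoint/data/v1/action/findOne\",\n    {\n      method: \"POST\",\n      headers: {\n        \"Content-Type\": \"application/json\",\n        \"api-key\": \"YOUR_DATA_API_KEY\", // placeholder\n      },\n      body: JSON.stringify({\n        dataSource: \"Cluster0\",\n        database: \"ExampleDatabase\",\n        collection: \"ExampleCollection\",\n        filter: { userId: userId },\n      }),\n    }\n  );\n  const { document } = await response.json();\n  return document;\n}\n```\n\n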
Note that this library is open source and not supported by MongoDB.\n\n## Prerequisites\n\nBefore you can start using Rongo, there are a few things you\u2019ll need to do.\n\nIf you have not already, you\u2019ll need to install Roblox Studio and create a new experience.\n\nNext, you\u2019ll need to set up a MongoDB Atlas account, which you can learn how to do using MongoDB\u2019s Getting Started with Atlas article.\n\nOnce you have set up your MongoDB Atlas cluster, follow the article on getting started with the MongoDB Data API.\n\nAfter you\u2019ve done all of the above steps, you can get started with installing Rongo into your game!\n\n## Installing Rongo into your game\n\nThe installation of Rongo is fairly simple. However, you have a couple of ways to do it!\n\n### Using Roblox\u2019s Toolbox (recommended)\n\nFirst, head to the Rongo library page and press the **Get** button. After that, you can press the \u201cTry in studio\u201d button, which will open Roblox Studio and insert the module into Studio.\n\nIf you wish to insert the module into a specific experience, then open Roblox Studio, load your experience, and navigate to the **View** tab. Click on **Toolbox**, navigate to the **Inventory** tab of the Toolbox, and locate Rongo. Or, search for it in the toolbox and then drag it into the viewport.\n\n### Downloading the Rongo model\n\nYou can download Rongo from our GitHub page by visiting our releases page and downloading the **Rongo.rbxm** file or the **Rongo.lua** file. After you have downloaded either of the files, open Roblox Studio and load your experience. Next, navigate to the **View** tab and open the **Explorer** window. You can then right-click on **ServerScriptService** and press the **Insert from file** button. Once you\u2019ve pressed the **Insert from file** button, locate the Rongo file and insert it into Roblox Studio.\n\n## Setting up Rongo in your game\n\nFirst of all, you\u2019ll need to ensure that the Rongo module is placed in **ServerScriptService**.\n\nNext, you must enable the **Allow HTTP Requests** setting in your game\u2019s settings (Security tab).\n\nAfter you have done the above two steps, create a script in **ServerScriptService** and paste in the example code below.\n\n```lua\nlocal Rongo = require(game:GetService(\"ServerScriptService\"):WaitForChild(\"Rongo\"))\n\nlocal Client = Rongo.new(YOUR_API_ID, YOUR_API_KEY)\nlocal Cluster = Client:GetCluster(\"ExampleCluster\")\nlocal Database = Cluster:GetDatabase(\"ExampleDatabase\")\nlocal Collection = Database:GetCollection(\"ExampleCollection\")\n```\n\nThe above code will allow you to modify your collection by adding data, removing data, and fetching data.\n\nYou\u2019ll need to replace the arguments with the correct data to ensure it works correctly!\n\nRefer to our documentation for more information on the functions.\n\nTo fetch data, you can use this example code:\n\n```lua\nlocal Result = Collection:FindOne({[\"Name\"] = \"Value\"})\n```\n\nYou can then print the result to the console using `print(Result)`.\n\nOnce you\u2019ve got the script set up, you\u2019re all good to go and you can move on to the next section of this article!\n\n## Using Rongo to save player data\n\nThis section will teach you how to save a player\u2019s data when they join and leave your game!\n\nWe\u2019ll be using the script we created in the previous section as a base for the new script.\n\nFirst of all, we\u2019re going to create a function which will be fired whenever the player joins the game. 
This function will load the player\u2019s data if they\u2019ve had their data saved before!\n\n```lua\nPlayers.PlayerAdded:Connect(function(Player: Player)\n--// Setup player leaderstats\nlocal Leaderstats = Instance.new(\"Folder\")\nLeaderstats.Parent = Player\nLeaderstats.Name = \"leaderstats\"\n\nlocal Gold = Instance.new(\"IntValue\")\nGold.Parent = Leaderstats\nGold.Name = \"Gold\"\nGold.Value = 0\n\n--// Fetch data from MongoDB\nlocal success, data = pcall(function()\nreturn Collection:FindOne({[\"userId\"] = Player.UserId})\nend)\n\n--// If an error occurs, warn in console\nif not success then\nwarn(\"Failed to fetch player data from MongoDB\")\nreturn\nend\n\n--// Check if data is valid\nif data and data[\"playerGold\"] then\n--// Set player gold leaderstat value\nGold.Value = data[\"playerGold\"]\nend\n\n--// Give player +5 gold each time they join\nGold.Value += 5\nend)\n```\n\nThe script above will first create a leaderstats folder and a gold value when the player joins, which will appear in the player list. Next, it will fetch the player data and set the value of the player\u2019s gold to the saved value in the collection. And finally, it will give the player an additional five gold each time they join.\n\nNext, we\u2019ll make a function to save the player\u2019s data whenever they leave the game.\n\n```lua\nPlayers.PlayerRemoving:Connect(function(Player: Player)\n--// Get player gold\nlocal Leaderstats = Player:WaitForChild(\"leaderstats\")\nlocal Gold = Leaderstats:WaitForChild(\"Gold\")\n\n--// Update player gold in database\nlocal success, data = pcall(function()\nreturn Collection:UpdateOne({[\"userId\"] = Player.UserId},\n{\n[\"userId\"] = Player.UserId,\n[\"playerGold\"] = Gold.Value\n}\n, true)\nend)\nend)\n```\n\nThis function will first fetch the player\u2019s gold in-game and then update it in MongoDB with the upsert value set to true, so it will insert a new document in case the player has not had their data saved before.\n\nYou can now test it in-game and see the data updated in MongoDB once you leave!\n\nIf you\u2019d like a more vanilla Roblox DataStore experience, you can also use MongoStore, which is built on top of Rongo and has identical functions to Roblox\u2019s DataStoreService.\n\nHere is the full script used for this article:\n\n```lua\nlocal Rongo = require(game:GetService(\"ServerScriptService\"):WaitForChild(\"Rongo\"))\nlocal Client = Rongo.new(\"MY_API_ID\", \"MY_API_KEY\")\nlocal Cluster = Client:GetCluster(\"Cluster0\")\nlocal Database = Cluster:GetDatabase(\"ExampleDatabase\")\nlocal Collection = Database:GetCollection(\"ExampleCollection\")\n\nlocal Players = game:GetService(\"Players\")\n\nPlayers.PlayerAdded:Connect(function(Player: Player)\n--// Setup player leaderstats\nlocal Leaderstats = Instance.new(\"Folder\")\nLeaderstats.Parent = Player\nLeaderstats.Name = \"leaderstats\"\n\nlocal Gold = Instance.new(\"IntValue\")\nGold.Parent = Leaderstats\nGold.Name = \"Gold\"\nGold.Value = 0\n\n--// Fetch data from MongoDB\nlocal success, data = pcall(function()\nreturn Collection:FindOne({[\"userId\"] = Player.UserId})\nend)\n\n--// If an error occurs, warn in console\nif not success then\nwarn(\"Failed to fetch player data from MongoDB\")\nreturn\nend\n\n--// Check if data is valid\nif data and data[\"playerGold\"] then\n--// Set player gold leaderstat value\nGold.Value = data[\"playerGold\"]\nend\n\n--// Give player +5 gold each time they join\nGold.Value += 5\nend)\n\nPlayers.PlayerRemoving:Connect(function(Player: Player)\n--// Get player 
gold\nlocal Leaderstats = Player:WaitForChild(\"leaderstats\")\nlocal Gold = Leaderstats:WaitForChild(\"Gold\")\n\n--// Update player gold in database\nlocal success, data = pcall(function()\nreturn Collection:UpdateOne({[\"userId\"] = Player.UserId},\n{\n[\"userId\"] = Player.UserId,\n[\"playerGold\"] = Gold.Value\n}\n, true)\nend)\nend)\n```\n\n## Conclusion\n\nIn summary, Rongo makes it seamless to store your Roblox game data in MongoDB Atlas. You can use it for whatever need be, from player data stores to fetching data from your website's database.\n\nLearn more about Rongo on the [Roblox Developer Forum thread. View Rongo\u2019s source code on our Github page.\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "This article will walk you through the process of using Rongo, a custom SDK built with the Atlas Data API to store data from Roblox Games to MongoDB Atlas. In this article, you\u2019ll learn how to create a script to save data.", "contentType": "Article"}, "title": "Storing Roblox Game Data in MongoDB Atlas Using Rongo", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/javascript/chember-example-app", "action": "created", "body": "# Chember\n\n## Creators\nAlper K\u0131z\u0131lo\u011flu, Aytu\u011f Turanl\u0131o\u011flu, Batu El, Beg\u00fcm Ortao\u011flu, Bora Demiral, Efecan Bah\u00e7\u0131vano\u011flu, Ege \u00c7avu\u015fo\u011flu and \u00d6mer Ekin contributed this amazing project.\n\n## About the project\n\nWith Chember, you can find streetball communities near you. Create your streetball profile, discover the Chember map, see live court densities, and find or create games. We designed Chember for basketball lovers who want to connect with streetball communities around them.\n\n## Inspiration\n\nI (Ege) started this project with a few friends from high school. I was studying abroad in Italy last semester, and Italy was one of the first countries that had to, you know, quarantine and all that when Covid-19 hit. In Italy, I had some friends from high school from my high school basketball team, and they suggested going out and playing streetball with local folks. I liked to do that, but on the other hand, I also wanted to build complex software projects. I experimented with frameworks and MongoDB for school projects and wanted to learn more about how to turn this into a project. I told my friends about the idea of building a streetball app. The advantage was that we didn\u2019t have to talk to our users cause we were the users. I took on the technical responsibility for the project, and that\u2019s how it started. Now we\u2019re a student formed startup that gained 10k users within three weeks after our launch, and we\u2019re continuing to grow.\n\n:youtube]{vid=UBEPpdAaKd4}\n\n## Why MongoDB?\n \nI was already familiar with MongoDB. With the help of MongoDB, We were able to manage various types of data (like geolocation) and carry it out to our users performantly and without worrying about the infrastructure thanks to MongoDB. We work very hard to bond more communities around the world through streetball, and we are sure that we will never have to worry about the storage and accessibility of our data.\n\nTo give you an example. In our app, you can create streetball games, you can create teams, and these teams will be playing each other. 
And usually, we had games only for individual players, but now we would like to introduce a new feature called teams, which will also allow us to have tournament structures in the app. And the best part of MongoDB is that I can change the schema. We don't worry about schemas; we add the fields we want to have, and then boom, we can have the team vs. team feature with the extra fields needed.\n\n## How it works\n\nI first started to build our backend and to integrate it with MongoDB. This is my first experience doing this complex project, and I had no other help than Google and tutorials. There are two significant projects that I should mention: Expo, a cross-platform mobile app development framework, and the other is MongoDB because it helped us start prototyping and building very quickly with the backend. After four or five months, our team started to grow. More people from my high school team came onboard, so I began to teach the group two front end development and one backend development. By the end of the summer, we were able to launch our app.\n\nMy biggest concern was how the backend data was going to be handled when it launched because when we founded our Instagram profile, we were getting a lot of hits. A lot of people seemed to be excited about our app. All the apps I had built before were school projects, so I never really had to worry about load balancing. And now I had to! We got around 10.000 users in the first weeks after the launch, and we only had a tiny marketing budget. It\u2019s pocket money from university students. We\u2019ve been using our credits from the [GitHub Student Developer Pack to maintain the MongoDB Atlas cluster. \n\nFor our startup, and for most companies, data is the most important thing. We have a lot of data, user data, and street data from courts worldwide. We have like 2500 courts right now. In Turkey, we have like 2300 of them. So we have quite a lot of useful data, we have very detailed information about each court. So this data is vital for our company, and using Atlas, it was so easy to analyze the data, get backups, and integrate with the back-end. MongoDB helped us a lot with building this project.\n\n## Challenges and learnings\n\nCOVID-19 was a challenge for us. Many people were reluctant about our app. Our app is more about bringing people together for streetball. To be prepared for COVID-19, we added a new approach to the code, allowing people to practice social distancing while still being active. When you go to a park or a streetball court, you can check the app, and you can notify the number of people playing at that moment. With this data, we can run schedulers every week to identify the court density, like the court's human density. Before going to that court, you already know how often many people are going to be in that court.\n\nI also want to share about our upcoming plans. Our main goal is to grow the community and help people find more peers to play streetball. Creating communities is essential for us because, especially in Turkey and the United States where I've played street ball, there's a stereotype. If you're not a 20-year-old male over six feet or so, you don't fit into the streetball category. Because of this, a lot of people are hesitating to go out and play streetball. We want to break this stereotype cause many folks from other age groups and genders also play streetball! So we want to allow users to match their skills and physical attributes and implement unique features like women-only games. 
What we want to do in the first place is to break the stereotype and to build inclusive communities.\n\nWe will be launching our tournament mode this summer, we're out like almost at the testing stage, but we're not sure when to throw it due to COVID-19 and the vaccinations are coming. So we'll see how it goes. Because launching a tournament mode during COVID might not be the best idea.\n\nTo keep people active during winter, we are planning to add private courts to our map. So our map is one of our most vital assets, you can find all the streetball courts around you know in the world, and get detailed information on the map. We're hoping to extend our data for private courts and allow people to book these courts and keep active during winter.\n\nI want to share the story. I'm sure many people want to build great things, and these tools are here for all of us to use. So I think this story could also inspire them to turn their ideas into reality.\n\n", "format": "md", "metadata": {"tags": ["JavaScript"], "pageDescription": "Instantly find the street ball game you are looking for anytime and anywhere with Chember!", "contentType": "Code Example"}, "title": "Chember", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/python/python-crud-mongodb", "action": "created", "body": "# Simple CRUD operations with Python and MongoDB\n\nFor the absolute beginner, there's nothing more simple than a CRUD tutorial. Create, Read, Update, and Delete documents using this mongodb tutorial for Python. \n\n## Introduction\nTo get started, first you'll need to understand that we use pymongo, our python driver, to connect your application to MongoDB. Once you've installed the driver, we'll build a simple CRUD (Create, Read, Update, Delete) application using FastAPI and MongoDB Atlas. The application will be able to create, read, update, and delete documents in a MongoDB database, exposing the functionality through a REST API. \n\nYou can find the finished application on Github here.\n\n## About the App You'll Build\nThis is a very basic example code for managing books using a REST API. The REST API has five endpoints:\n\n`GET /book`: to list all books\n`GET /book/`: to get a book by its ID\n`POST /book`: to create a new book\n`PUT /book/`: to update a book by its ID\n`DELETE /book/`: to delete a book by its ID\n\nTo build the API, we'll use the FastAPI framework. It's a lightweight, modern, and easy-to-use framework for building APIs. It also generates Swagger API documentation that we'll put to use when testing the application.\n\nWe'll be storing the books in a MongoDB Atlas cluster. MongoDB Atlas is MongoDB's database-as-a-service platform. It's cloud-based and you can create a free account and cluster in minutes, without installing anything on your machine. 
We'll use PyMongo to connect to the cluster and query data.\n\nThis application uses Python 3.6.\n\n", "format": "md", "metadata": {"tags": ["Python", "FastApi"], "pageDescription": "Get started with Python and MongoDB easily with example code in Github", "contentType": "Code Example"}, "title": "Simple CRUD operations with Python and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/how-prisma-introspects-a-schema-from-a-mongodb-database", "action": "created", "body": "# How Prisma Introspects a Schema from a MongoDB Database\n\nPrisma ORM (object relational-mapper) recently released support for MongoDB. This represents the first time Prisma has supported a database outside of the SQL world. Prisma has been known for supporting many relational databases, but how did it end up being able to support the quite different MongoDB?\n\n> \ud83d\udd74\ufe0f I work as the Engineering Manager of Prisma\u2019s Schema team. We are responsible for the schema management parts of Prisma, which most prominently include our migrations and introspection tools, as well as the Prisma schema language and file (and our awesome Prisma VS Code extension!).\n>\n> The other big team working on Prisma object relational-mapper (ORM) is the Client team that builds Prisma Client and the Query Engine. These let users interact with their database to read and manipulate data.\n>\n> In this blog post, I summarize how our team got Prisma\u2019s schema introspection feature to work for the new MongoDB connector and the interesting challenges we solved along the way.\n\n## Prisma\n\nPrisma is a Node.js database ORM built around the Prisma schema language and the Prisma schema file containing an abstract representation of a user\u2019s database. When you have tables with columns of certain data types in your database, those are represented as models with fields of a type in your Prisma schema.\n\nPrisma uses that information to generate a fully type-safe TypeScript/JavaScript Prisma Client that makes it easy to interact with the data in your database (meaning it will complain if you try to write a `String` into a `Datetime` field, and make sure you, for example, include information for all the non-nullable columns without a default and similar).\n\nPrisma Migrate uses your changes to the Prisma schema to automatically generate the SQL required to migrate your database to reflect that new schema. You don\u2019t need to think about the changes necessary. You just write what you want to achieve, and Prisma then intelligently generates the SQL DDL (Data Definition Language) for that.\n\nFor users who want to start using Prisma with their existing database, Prisma has a feature called Introspection. You call the CLI command `prisma db pull` to \u201cpull in\u201d the existing database schema, and Prisma then can create the Prisma schema for you automatically, so your existing database can be used with Prisma in seconds.\n\nThis works the same for PostgreSQL, MySQL, MariaDB, SQL Server, CockroachDB, and even SQLite and relies on _relational databases_ being pretty similar, having tables and columns, understanding some dialect of SQL, having foreign keys, and concepts like referential integrity.\n\n## Prisma + MongoDB\n\nOne of our most requested features was support for Prisma with MongoDB. 
The feature request issue on GitHub for MongoDB support from January 2020 was for a long time by far the most popular one, having gained more than a total of 800 reactions.\n\nMongoDB is known for its flexible schema and the document model, where you can store JSON-like documents. MongoDB takes a different paradigm from relational databases when modeling data \u2013 there are no tables, no columns, schemas, or foreign keys to represent relations between tables. Data is often stored grouped in the same document with related data or \u201cdenormalized,\u201d which is different from what you would see in a relational database.\n\nSo, how could these very different worlds be brought together?\n\n## Prisma and MongoDB: Schema team edition\n\nFor our team, this meant figuring out: \n\n1. How to represent a MongoDB structure and its documents in a Prisma schema.\n2. How to migrate said data structures.\n3. How to let people introspect their existing MongoDB database to easily be able to start using Prisma.\n\nFortunately solving 1 and 2 was relatively simple:\n\n1. Where relational databases have tables, columns, and foreign keys that are mapped to Prisma\u2019s models, with their fields and relations, MongoDB has equivalent collections, fields, and references that could be mapped the same way. Prisma Client can use that information to provide the same type safety and functionality on the Client side.\n\n \n \nRelational database\n \n Prisma\n \n MongoDB\n \n \n \n Table \u2192\n \n Model\n \n \u2190 Collection\n \n \n \n Column \u2192\n \n Field\n \n \u2190 Field\n \n \n \n Foreign Key \u2192\n \n Relation\n \n \u2190 Reference\n \n \n\nWith no database-side schema to migrate, creating and updating indexes and constraints was all that was needed for evolving a MongoDB database schema. As there is no SQL to modify the database structure (which is not written down or defined anywhere), Prisma also did not have to create migration files with Data Definition Language (DDL) statements and could just scope it down to allowing `prisma db push` to directly bring the database to the desired end state.\n\nA bigger challenge turned out to be the Introspection feature.\n\n## Introspecting a schema with MongoDB\n\nWith relational databases with a schema, there is always a way to inquire for the schema. In PostgreSQL, for example, you can query multiple views in a `information_schema` schema to figure out all the details about the structure of the database\u2014to, for example, generate the DDL SQL required to recreate a database, or abstract it into a Prisma schema.\n\nBecause MongoDB has a flexible schema (unless schemas are enforced through the schema validation feature), no such information store exists that could be easily queried. That, of course, poses a problem for how to implement introspection for MongoDB in Prisma.\n\n## Research\n\nAs any good engineering team would, we started by ... Googling a bit. No need to reinvent the wheel, if someone else solved the problem in the past before. 
Searches for \u201cMongoDB introspection,\u201d \u201cMongoDB schema reverse engineering,\u201d and (as we learned the native term) \u201cMongoDB infer schema\u201d fortunately brought some interesting and worthwhile results.\n\n### MongoDB Compass\n\nMongoDB\u2019s own database GUI Compass has a \u201cSchema\u201d tab in a collection that can analyze a collection to \u201cprovide an overview of the data type and shape of the fields in a particular collection.\u201d\n\nIt works by sampling 1000 documents from a collection that has at least 1000 documents in it, analyzing the individual fields and then presenting them to the user.\n\n### `mongodb-schema`\n\nAnother resource we found was Lucas Hrabovsky\u2019s `mongodb-infer` repository from 2014. Later that year, it seemed to have merged/been replaced by `mongodb-schema`, which is updated to this day.\n\nIt\u2019s a CLI and library version of the same idea\u2014and indeed, when checking the source code of MongoDB Compass, you see a dependency for `mongodb-schema` that is used under the hood.\n\n## Implementing introspection for MongoDB in Prisma\n\nUsually, finding an open source library with an Apache 2.0 license means you just saved the engineering team a lot of time, and the team can just become a user of the library. But in this case, we wanted to implement our introspection in the same introspection engine we also use for the SQL databases\u2014and that is written in Rust. As there is no `mongodb-schema` for Rust yet, we had to implement this ourselves. Knowing how `mongodb-schema` works, this turned out to be straightforward:\n\nWe start by simply getting all collections in a database. The MongoDB Rust driver provides a handy `db.list_collection_names()` that we can call to get all collections\u2014and each collection is turned into a model for Prisma schema. \ud83e\udd42\n\nTo fill in the fields with their type, we get a sample of up to 1000 random records from each collection and loop through them. For each entry, we note which fields exist, and what data type they have. We map the BSON type to our Prisma scalar types (and native type, if necessary). Optimally, all entries have the same fields with the same data type, which is easily and cleanly mappable\u2014and we are done!\n\nOften, not all entries in a collection are that uniform. Missing fields, for example, are expected and equivalent to `NULL` values in a relational database.\n\n### How to present fields with different types\n\nBut different types (for example, `String` and `Datetime`) pose a problem: Which type should we put into the Prisma schema?\n\n> \ud83c\udf93 **Learning 1: Just choosing the most common data type is not a good idea.**\n>\n> In an early iteration of MongoDB introspection, we defaulted to the most common type, and left a comment with the percentage of the occurrences in the Prisma schema. The idea was that this should work most of the time and give the developer the best development experience\u2014the better the types in your Prisma schema, the more Prisma can help you.\n>\n> But we quickly figured out when testing this that there was a slight (but logical) problem: Any time the Prisma Client encounters a type that does _not_ match what it has been told via the Prisma schema, it has to throw an error and abort the query. Otherwise, it would return data that does not adhere to its own generated types for that data.\n>\n> While we were aware this would happen, it was not intuitive to us _how often_ that would cause the Prisma Client to fail. 
We quickly learned about this when using such a Prisma schema with conflicting types in the underlying database with Prisma Studio, the built-in database GUI that comes with Prisma CLI (just run `npx prisma studio`). By default, it loads 100 entries of a model you view\u2014and when there were ~5% entries with a different type in a database of 1000 entries, it was very common to hit that on the first page already. Prisma Studio (and also an app using these schemas) was essentially unusable for these data sets this way.\n\nFortunately, _everything_ in MongoDB is a `Document`, which maps to a `Json` type field in Prisma. So, when a field has different data types, we use `Json` instead, output a warning in Prisma CLI, and put a comment above the field in the Prisma schema that we render, which includes information about the data types we found and how common they were.\n\nOutput of Prisma CLI on conflicting data types\n\nResulting Prisma schema with statistics on conflicting data types\n\n### How to iterate on the data to get to a cleaner schema\n\nUsing `Json` instead of a specific data type, of course, substantially lowers the benefit you get from Prisma and effectively enables you to write any JSON into the field (making the data even less uniform and harder to handle over time!). But at least you can read all existing data in Prisma Studio or in your app and interact with it.\n\nThe preferred way to fix conflicting data types is to read and update them manually with a script, and then run `prisma db pull` again. The new Prisma schema should then show only the one type still present in the collection.\n\n> \ud83c\udf93 **Learning 2: Output Prisma types in Prisma schema, not MongoDB types.**\n>\n> Originally, we outputted the raw type information we got from the MongoDB Rust driver, the BSON types, into our CLI warnings and Prisma schema comments to help our users iterate on their data and fix the type. It turned out that while this was technically correct and told the user what type the data was in, using the BSON type names was confusing in a Prisma context. We switched to output the Prisma type names instead and this now feels much more natural to users.\n\nWhile Prisma recommends everyone to clean up their data and minimize the amount of conflicting types that fall back to `Json`, that is, of course, also a valid choice.\n\n### How to enrich the introspected Prisma schema with relations\n\nBy adding relation information to your introspected Prisma schema, you can tell Prisma to handle a specific column like a foreign key and create a relation with the data in it. `user User @relation(fields: userId], references: [id])` creates a relation to the `User` model via the local `userId` field. So, if you are using MongoDB references to model relations, add `@relation` to them for Prisma to be able to access those in Prisma Client, emulate referential actions, and help with referential integrity to keep data clean.\n\nRight now, Prisma does not offer a way to detect or confirm the potential relations between different collections. We want to learn first how MongoDB users actually use relations, and then help them the optimal way.\n\n## Summary\n\nImplementing a good introspection story for MongoDB was a fun challenge for our team. In the beginning, it felt like two very different worlds were clashing together, but in the end, it was straightforward to find the correct tradeoffs and solutions to get the optimal outcome for our users. 
We are confident we found a great combination that combines the best of MongoDB with what people want from Prisma.\n\nTry out [Prisma and MongoDB with an existing MongoDB database, or start from scratch and create one along the way.", "format": "md", "metadata": {"tags": ["MongoDB", "Rust"], "pageDescription": "In this blog, you\u2019ll learn about Prisma and how we interact with MongoDB, plus the next steps after having a schema.", "contentType": "Article"}, "title": "How Prisma Introspects a Schema from a MongoDB Database", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/integrate-mongodb-vercel-functions-serverless-experience", "action": "created", "body": "# Integrate MongoDB into Vercel Functions for the Serverless Experience\n\nWorking with Functions as a Service (FaaS), often referred to as serverless, but you're stuck when it comes to trying to get your database working? Given the nature of these serverless functions, interacting with a database is a slightly different experience than if you were to create your own fully hosted back end.\n\nWhy is it a different experience, though?\n\nDatabases in general, not just MongoDB, can have a finite amount of concurrent connections. When you host your own web application, that web application is typically connecting to your database one time and for as long as that application is running, so is that same connection to the database. Functions offer a different experience, though. Instead of an always-available application, you are now working with an application that may or may not be available at request time to save resources. If you try to connect to your database in your function logic, you'll risk too many connections. If the function shuts down or hibernates or similar, you risk your database connection no longer being available.\n\nIn this tutorial, we're going to see how to use the MongoDB Node.js driver with Vercel functions, something quite common when developing Next.js applications.\n\n## The requirements\n\nThere are a few requirements that should be met prior to getting started with this tutorial, depending on how far you want to take it.\n\n- You must have a MongoDB Atlas cluster deployed, free tier (M0) or better.\n- You should have a Vercel account if you want to deploy to production.\n- A recent version of Node.js and NPM must be available.\n\nIn this tutorial, we're not going to deploy to production. Everything we plan to do can be tested locally, but if you want to deploy, you'll need a Vercel account and either the CLI installed and configured or your Git host. Both are out of the scope of this tutorial.\n\nWhile we'll get into the finer details of MongoDB Atlas later in this tutorial, you should already have a MongoDB Atlas account and a cluster deployed. If you need help with either, consider checking out this tutorial.\n\nThe big thing you'll need is Node.js. We'll be using it for developing our Next.js application and testing it.\n\n## Creating a new Next.js application with the CLI\n\nCreating a new Next.js project is easy when working with the CLI. From a command line, assuming Node.js is installed, execute the following command:\n\n```bash\nnpx create-next-app@latest\n```\n\nYou'll be prompted for some information which will result in your project being created. At any point in this tutorial, you can execute `npm run dev` to build and serve your application locally. 
You'll be able to test your Vercel functions too!\n\nBefore we move forward, let\u2019s add the MongoDB Node.js driver dependency:\n\n```bash \nyarn add mongodb\n```\n\nWe won't explore it in this tutorial, but Vercel offers a starter template with the MongoDB Atlas integration already configured. If you'd like to learn more, check out the tutorial by Jesse Hall: How to Connect MongoDB Atlas to Vercel Using the New Integration. Instead, we'll look at doing things manually to get an idea of what's happening at each stage of the development cycle.\n\n## Configuring a database cluster in MongoDB Atlas\n\nAt this point, you should already have a MongoDB Atlas account with a project and cluster created. The free tier is fine for this tutorial.\n\nRather than using our imagination to come up with a new set of data for this example, we're going to make use of one of the sample databases available to MongoDB users.\n\nFrom the MongoDB Atlas dashboard, click the ellipsis menu for one of your clusters and then choose to load the sample datasets. It may take a few minutes, so give it some time.\n\nFor this tutorial, we're going to make use of the **sample_restaurants** database, but in reality, it doesn't really matter as the focus of this tutorial is around setup and configuration rather than the actual data.\n\nWith the sample dataset loaded, go ahead and create a new user in the \"Database Access\" tab of the dashboard followed by adding your IP address to the \"Network Access\" rules. You'll need to do this in order to connect to MongoDB Atlas from your Next.js application. If you choose to deploy your application, you'll need to add a `0.0.0.0` rule as per the Vercel documentation.\n\n## Connect to MongoDB and cache connections for a performance optimized experience\n\nNext.js is one of those technologies where there are a few ways to solve the problem. We could interact with MongoDB at build time, creating a 100% static generated website, but there are plenty of reasons why we might want to keep things adhoc in a serverless function. We could also use the Atlas Data API in the function, but you'll get a richer experience using the Node.js driver.\n\nWithin your Next.js project, create a **.env.local** file with the following variables:\n\n```\nNEXT_ATLAS_URI=YOUR_ATLAS_URI_HERE\nNEXT_ATLAS_DATABASE=sample_restaurants\nNEXT_ATLAS_COLLECTION=restaurants\n```\n\nRemember, we're using the **sample_restaurants** database in this example, but you can be adventurous and use whatever you'd like. Don't forget to swap the connection information in the **.env.local** file with your own.\n\nNext, create a **lib/mongodb.js** file within your project. This is where we'll handle the actual connection steps. 
Populate the file with the following code:\n\n```javascript\nimport { MongoClient } from \"mongodb\";\n\nconst uri = process.env.NEXT_ATLAS_URI;\nconst options = {\n useUnifiedTopology: true,\n useNewUrlParser: true,\n};\n\nlet mongoClient = null;\nlet database = null;\n\nif (!process.env.NEXT_ATLAS_URI) {\n throw new Error('Please add your Mongo URI to .env.local')\n}\n\nexport async function connectToDatabase() {\n try {\n if (mongoClient && database) {\n return { mongoClient, database };\n }\n if (process.env.NODE_ENV === \"development\") {\n if (!global._mongoClient) {\n mongoClient = await (new MongoClient(uri, options)).connect();\n global._mongoClient = mongoClient;\n } else {\n mongoClient = global._mongoClient;\n }\n } else {\n mongoClient = await (new MongoClient(uri, options)).connect();\n }\n database = await mongoClient.db(process.env.NEXT_ATLAS_DATABASE);\n return { mongoClient, database };\n } catch (e) {\n console.error(e);\n }\n}\n```\n\nIt might not look like much, but quite a bit of important things are happening in the above file, specific to Next.js and serverless functions. Specifically, take a look at the `connectToDatabase` function:\n\n```javascript\nexport async function connectToDatabase() {\n try {\n if (mongoClient && database) {\n return { mongoClient, database };\n }\n if (process.env.NODE_ENV === \"development\") {\n if (!global._mongoClient) {\n mongoClient = await (new MongoClient(uri, options)).connect();\n global._mongoClient = mongoClient;\n } else {\n mongoClient = global._mongoClient;\n }\n } else {\n mongoClient = await (new MongoClient(uri, options)).connect();\n }\n database = await mongoClient.db(process.env.NEXT_ATLAS_DATABASE);\n return { mongoClient, database };\n } catch (e) {\n console.error(e);\n }\n}\n```\n\nThe goal of the above function is to give us a client connection to work with as well as a database. However, the finer details suggest that we need to only establish a new connection if one doesn't exist and to not spam our database with connections if we're in development mode for Next.js. The local development server behaves differently than what you'd get in production, hence the need to check.\n\nRemember, connection quantities are finite, and we should only connect if we aren't already connected.\n\nSo what we're doing in the function is we're first checking to see if that connection exists. If it does, return it and let whatever is calling the function use that connection. If the connection doesn't exist and we're in development mode, we check to see if we have a cached session and use that if we do. Otherwise, we need to create a connection and either cache it for development mode or production.\n\nIf you understand anything from the above code, understand that we're just creating connections if connections don't already exist.\n\n## Querying MongoDB from a Vercel function in the Next.js application\n\nWe've done the difficult part already. We have a connection management system in place for MongoDB to be used throughout our Vercel application. 
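A quick way to convince yourself the cache is doing its job is to call `connectToDatabase` twice from the same server-side entry point and compare the results — both calls should resolve to the same `MongoClient` instance instead of opening a second connection. The sketch below is purely illustrative (the `pages/api/ping.js` route is a made-up name, not part of the tutorial's final app):

```javascript
// pages/api/ping.js -- hypothetical route used only to verify connection reuse
import { connectToDatabase } from "../../lib/mongodb";

export default async function handler(request, response) {
  // Both calls go through the same module-level cache in lib/mongodb.js,
  // so they resolve to the same MongoClient instance.
  const first = await connectToDatabase();
  const second = await connectToDatabase();

  response.status(200).json({
    sameClient: first.mongoClient === second.mongoClient, // expected: true
    database: first.database.databaseName,
  });
}
```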
The next part involves creating API endpoints, in a near identical way to Express Framework, and consuming them from within the Next.js front end.\n\nSo what does this look like exactly?\n\nWithin your project, create a **pages/api/list.js** file with the following JavaScript code:\n\n```javascript\nimport { connectToDatabase } from \"../../lib/mongodb\";\n\nexport default async function handler(request, response) {\n \n const { database } = await connectToDatabase();\n const collection = database.collection(process.env.NEXT_ATLAS_COLLECTION);\n\n const results = await collection.find({})\n .project({\n \"grades\": 0,\n \"borough\": 0,\n \"restaurant_id\": 0\n })\n .limit(10).toArray();\n\n response.status(200).json(results);\n\n}\n```\n\nVercel functions exist within the **pages/api** directory. In this case, we're building a function with the goal of listing out data. Specifically, we're going to list out restaurant data.\n\nIn our code above, we are leveraging the `connectToDatabase` function from our connection management code. When we execute the function, we're getting a connection without worrying whether we need to create one or reuse one. The underlying function code handles that for us.\n\nWith a connection, we can find all documents within a collection. Not all the fields are important to us, so we're using a projection to exclude what we don't want. Rather than returning all documents from this large collection, we're limiting the results to just a few.\n\nThe results get returned to whatever code or external client is requesting it.\n\nIf we wanted to consume the endpoint from within the Next.js application, we might do something like the following in the **pages/index.js** file:\n\n```react\nimport { useEffect, useState } from \"react\";\nimport Head from 'next/head'\nimport Image from 'next/image'\nimport styles from '../styles/Home.module.css'\n\nexport default function Home() {\n\n const restaurants, setRestaurants] = useState([]);\n\n useEffect(() => {\n (async () => {\n const results = await fetch(\"/api/list\").then(response => response.json());\n setRestaurants(results);\n })();\n }, []);\n\n return (\n \n \n Create Next App\n \n \n \n\n \n \n MongoDB with Next.js! Example\n \n \n \n {restaurants.map(restaurant => (\n \n \n\n{RESTAURANT.NAME}\n\n \n\n{restaurant.address.street}\n\n \n ))}\n \n \n \n )\n}\n```\n\nIgnoring the boilerplate Next.js code, we added a `useState` and `useEffect` like the following:\n\n```javascript\nconst [restaurants, setRestaurants] = useState([]);\n\nuseEffect(() => {\n (async () => {\n const results = await fetch(\"/api/list\").then(response => response.json());\n setRestaurants(results);\n })();\n}, []);\n```\n\nThe above code will consume the API when the component loads. We can then render it in the following section:\n\n```react\n\n {restaurants.map(restaurant => (\n \n \n\n{RESTAURANT.NAME}\n\n \n\n{restaurant.address.street}\n\n \n ))}\n\n```\n\nThere isn't anything out of the ordinary happening in the process of consuming or rendering. The heavy lifting that was important was in the function itself as well as our connection management file.\n\n## Conclusion\n\nYou just saw how to use MongoDB Atlas with Vercel functions, which is a serverless solution that requires a different kind of approach. Remember, when dealing with serverless, the availability of your functions is up in the air. You don't want to spawn too many connections and you don't want to attempt to use connections that don't exist. 
We resolved this by caching our connections and using the cached connection if available. Otherwise, spin up a new connection.\n\nGot a question or think you can improve upon this solution? Share it in the [MongoDB Community Forums!", "format": "md", "metadata": {"tags": ["JavaScript", "Next.js", "Node.js"], "pageDescription": "Learn how to build a Next.js application that leverages Vercel functions and MongoDB Atlas to create a serverless development experience.", "contentType": "Tutorial"}, "title": "Integrate MongoDB into Vercel Functions for the Serverless Experience", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/swift/swift-change-streams", "action": "created", "body": "# Working with Change Streams from Your Swift Application\n\nMy day job is to work with our biggest customers to help them get the best out of MongoDB when creating new applications or migrating existing ones. Their use cases often need side effects to be run whenever data changes \u2014 one of the most common requirements is to maintain an audit trail.\n\nWhen customers are using MongoDB Atlas, it's a no-brainer to recommend Atlas Triggers. With triggers, you provide the code, and Atlas makes sure that it runs whenever the data you care about changes. There's no need to stand up an app server, and triggers are very efficient as they run alongside your data.\n\nUnfortunately, there are still some workloads that customers aren't ready to move to the public cloud. For these applications, I recommend change streams. Change streams are the underlying mechanism used by Triggers and many other MongoDB technologies \u2014 Kafka Connector, Charts, Spark Connector, Atlas Search, anything that needs to react to data changes.\n\nUsing change streams is surprisingly easy. Ask the MongoDB Driver to open a change stream and it returns a database cursor. Listen to that cursor, and your application receives an event for every change in your collection.\n\nThis post shows you how to use change streams from a Swift application. The principles are exactly the same for other languages. You can find a lot more on change streams at Developer Center.\n\n## Running the example code\n\nI recently started using the MongoDB Swift Driver for the first time. I decided to build a super-simple Mac desktop app that lets you browse your collections (which MongoDB Compass does a **much** better job of) and displays change stream events in real time (which Compass doesn't currently do).\n\nYou can download the code from the Swift-Change-Streams repo. Just build and run from Xcode.\n\nProvide your connection-string and then browse your collections. 
Select the \"Enable change streams\" option to display change events in real time.\n\n### The code\n\nYou can find this code in CollectionView.swift.\n\nWe need a variable to store the change stream (a database cursor)\n\n```swift\n@State private var changeStream: ChangeStream>?\n```\n\nas well as one to store the latest change event received from the change stream (this will be used to update the UI):\n\n```swift\n@State private var latestChangeEvent: ChangeStreamEvent?\n```\n\nThe `registerChangeStream` function is called whenever the user checks or unchecks the change stream option:\n\n```swift\nfunc registerChangeStream() async {\n // If the view already has an active change stream, close it down\n if let changeStream = changeStream {\n _ = changeStream.kill()\n self.changeStream = nil\n }\n if enabledChangeStreams {\n do {\n let changeStreamOptions = ChangeStreamOptions(fullDocument: .updateLookup)\n changeStream = try await collection?.watch(options: changeStreamOptions)\n _ = changeStream?.forEach({ changeEvent in\n withAnimation {\n latestChangeEvent = changeEvent\n showingChangeEvent = true\n Task {\n await loadDocs()\n }\n }\n })\n } catch {\n errorMessage = \"Failed to register change stream: \\(error.localizedDescription)\"\n }\n }\n}\n```\n\nThe function specifies what data it wants to see by creating a `ChangeStreamOptions` structure \u2014 you can see the available options in the Swift driver docs. In this app, I specify that I want to receive the complete new document (in addition to the deltas) when a document is updated. Note that the full document is always included for insert and replace operations.\n\nThe code then calls `watch` on the current collection. Note that you can also provide an aggregation pipeline as a parameter named `pipeline` when calling `watch`. That pipeline can filter and reshape the events your application receives.\n\nOnce the asynchronous watch function completes, the `forEach` loop processes each change event as it's received.\n\nWhen the loop updates our `latestChangeEvent` variable, the change is automatically propagated to the `ChangeEventView`:\n\n```swift\n ChangeEventView(event: latestChangeEvent)\n```\n\nYou can see all of the code to display the change event in `ChangeEventView.swift`. 
I'll show some highlights here.\n\nThe view receives the change event from the enclosing view (`CollectionView`):\n\n```swift\nlet event: ChangeStreamEvent\n```\n\nThe code looks at the `operationType` value in the event to determine what color to use for the window:\n\n```swift\nvar color: Color {\n switch event.operationType { \n case .insert:\n return .green\n case .update:\n return .yellow\n case .replace:\n return .orange\n case .delete:\n return .red\n default:\n return .teal\n }\n}\n```\n\n`documentKey` contains the `_id` value for the document that was changed in the MongoDB collection:\n\n```swift\nif let documentKey = event.documentKey {\n ...\n Text(documentKey.toPrettyJSONString())\n ...\n }\n}\n```\n\nIf the database operation was an update, then the delta is stored in `updateDescription`:\n\n```swift\nif let updateDescription = event.updateDescription {\n ...\n Text(updateDescription.updatedFields.toPrettyJSONString())\n ...\n }\n}\n```\n\nThe complete document after the change was applied in MongoDB is stored in `fullDocument`:\n\n```swift\nif let fullDocument = event.fullDocument {\n ...\n Text(fullDocument.toPrettyJSONString())\n ...\n }\n}\n```\n\nIf the processing of the change events is a critical process, then you need to handle events such as your process crashing. \n\nThe `_id.resumeToken` in the `ChangeStreamEvent` is a token that can be used when starting the process to continue from where you left off. Simply provide this token to the `resumeAfter` or `startAfter` options when opening the change stream. Note that this assumes that the events you've missed haven't rotated out of the Oplog.\n\n### Conclusion\n\nUse Change Streams (or Atlas triggers, if you're able) to simplify your code base by decoupling the handling of side-effects from each place in your code that changes data.\n\nAfter reading this post, you've hopefully realized just how simple it is to create applications that react to data changes using MongoDB Change Streams. Questions? Comments? Head over to our Developer Community to continue the conversation!", "format": "md", "metadata": {"tags": ["Swift", "MongoDB"], "pageDescription": "Change streams let you run your own logic when data changes in your MongoDB collections. This post shows how to consume MongoDB change stream events from your Swift app.", "contentType": "Quickstart"}, "title": "Working with Change Streams from Your Swift Application", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-javascript-v11-react-native", "action": "created", "body": "# Realm JavaScript v11: A Step Forward for React Native \u2014 Hermes Support, Realm React, Flipper, and Much More\n\nAfter over a year of effort by the Realm JavaScript team, today, we are pleased to announce the release of Realm JavaScript version 11 \u2014 a complete re-imagining of the SDK and its APIs to be more idiomatic for React Native and JavaScript developers everywhere. With this release, we have built underlying support for the new Hermes JS engine, now becoming the standard for React Native applications everywhere. We have also introduced a\u00a0new library for React developers, making integration with components, hooks, and context a breeze. We have built a Flipper plugin that makes inspecting, querying, and modifying a Realm within a React Native app incredibly fast. 
And finally, we have transitioned to a class-based schema definition to make creating your data model as intuitive as defining classes.\n\nRealm is a simple, fast, object-oriented database for mobile applications that does not require an ORM layer or any glue code to work with your data layer and is built from the ground up to work cross-platform, making React Native a natural fit. With Realm, working with your data is as simple as interacting with objects from your data model. Any updates to the underlying data store will automatically update your objects as soon as the state on disk has changed, enabling you to automatically refresh the view via React components, hooks, and context.\n\nFinally, Realm JavaScript comes with\u00a0built-in synchronization\u00a0to MongoDB Atlas \u2014 a cloud-managed database-as-a-service for MongoDB. The developer does not need to write any networking or conflict resolution code. All data transfer is done under the hood, abstracting thousands of lines of code away from the developer, and enabling them to build reactive mobile apps that can trigger UI updates automatically from server-side state changes. This delivers a performant and offline-tolerant mobile app because it always renders the state from disk.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n\n## Introduction\u00a0\nReact Native has been a windfall for JavaScript developers everywhere by enabling them to write one code-base and deploy to multiple mobile targets \u2014 mainly iOS and Android \u2014 saving time and cost associated with maintaining multiple code bases. The React Native project has moved aggressively in the past years to solve mobile centric problems such as introducing a new JavaScript engine, Hermes, to solve the cold start problem and increase performance across the board. It has also introduced Fabric and TurboModules, which are projects designed to aid embedded libraries, such as Realm, which at its core is written in C++, to link into the JavaScript context. We believe these new developments from React Native are a great step forward for mobile developers and we have worked closely with the team to align our library to these new developments.\u00a0\n\n## What is Realm?\nThe Realm JavaScript SDK is built on three core concepts:\n* An object database that infers the schema from the developers\u2019 class structure, making working with objects as easy as interacting with objects in code. No conversion code or ORM necessary.\n* Live objects, where the object reference updates as soon as the state changes and the UI refreshes \u2014 all built on top of Realm\u2019s React library \u2014 enabling easy-to-use context, hooks, and components.\n* A columnar store where query results return immediately and integrate with an idiomatic query language that developers are familiar with.\n\nRealm is a fast, easy-to-use alternative to SQLite, that comes with a real-time edge to cloud sync solution out of the box. Written from the ground up in C++, it's not a wrapper around SQLite or any other relational data store and is designed with the mobile environment in mind. It's lightweight and optimizes for constraints like compute, memory, bandwidth, and battery that do not exist on the server side. 
Realm uses lazy loading and memory mapping. with each object reference pointing directly to the location on disk where the state is stored. This exponentially increases lookup and query speed as it eliminates the loading of state pages from disk into memory to perform calculations. It also reduces the amount of memory pressure on the device while working with the data layer. Realm makes it easy to store, query, and sync your mobile data across a plethora of devices and the back end.\n\n## Realm for Javascript developers\nWhen Realm JavaScript was first implemented back in 2016, the only JavaScript engine available in React Native was JavaScript Core, which did not expose a way for embedded libraries such as Realm to integrate with. Since then, React Native has expanded their API to give developers the tools they need to work with third-party libraries directly in their mobile application code \u2014 most notably, the new Hermes JavaScript engine for React Native apps. After almost a year of effort, Realm JavaScript now runs through the JavaScript Interface (JSI), allowing us to support JavaScriptCore and, most importantly, Hermes \u2014 facilitating an exponentially faster app boot time and an intuitive debugging experience with Flipper.\n\nThe Realm React library eliminates an incredible amount of boilerplate code that a developer would normally write in order to funnel data from the state store to the UI. With this library, Realm is directly integrated with React and comes with built-in hooks for accessing the query, write, and sync APIs. Previously, React Native developers would need to write this boilerplate themselves. By leveraging the new APIs from our React library, developers can save time and reduce bugs by leaving the Realm centric code to us. We have also added the ability for Realm\u2019s objects to be React component aware, ensuring that re-renders are as efficient as possible in the component tree and freeing the developer from needing to write their own notification code. Lastly, we have harmonized Realm query results and lists with React Native ListView components, ensuring that individual items in lists re-render when they change \u2014 enabling a slick user experience.\n\nAt its core, Realm has always endeavored to make working with the data layer as easy as working with language native objects, which is why your local database schema is inferred from your object definitions. In Realm JavaScript v11, we have now extended our existing functionality to fully support class based objects in JavaScript, aligning with users\u2019 expectations of being able to call a constructor of a class-based model when wanting to create or add a new object. 
On top of this, we have done this not only in JavaScript but also with Typescript models, allowing developers to declare their types directly in the class definition, cutting out a massive amount of boilerplate code that a developer would need to write and removing a source of bugs while delivering type safety.\u00a0\n\n```\n///////////////////////////////////////////////////\n// File: task.ts\n///////////////////////////////////////////////////\n// Properties:\n// - _id: primary key, create a new value (objectId) when constructing a `Task` object\n// - description: a non-optional string\n// - isComplete: boolean, default value is false; the properties is indexed in the database to speed-up queries\nexport class Task extends Realm.Object {\n _id = new Realm.BSON.ObjectId();\n description!: string;\n @index\n isComplete = false;\n createdAt!: Date = () => new Date();\n userId!: string;\n \n static primaryKey = \"_id\";\n \n constructor(realm, description: string, userId: string) {\n super(realm, { description, userId });\n }\n}\n\nexport const TaskRealmContext = createRealmContext({\n schema: [Task],\n});\n\n///////////////////////////////////////////////////\n// File: index.ts\n///////////////////////////////////////////////////\nconst App = () => \" />;\n\nAppRegistry.registerComponent(appName, () => App);\n\n///////////////////////////////////////////////////\n// File: appwrapper.tsx\n///////////////////////////////////////////////////\nexport const AppWrapper: React.FC<{\n appId: string;\n}> = ({appId}) => {\n const {RealmProvider} = TaskRealmContext;\n\n return (\n \n \n \n \n \n \n \n \n \n );\n};\n\n///////////////////////////////////////////////////\n// File: app.tsx\n///////////////////////////////////////////////////\nfunction TaskApp() {\n const app = useApp();\n const realm = useRealm();\n const [newDescription, setNewDescription] = useState(\"\")\n\n const results = useQuery(Task);\n const tasks = useMemo(() => result.sorted('createdAt'), [result]);\n\n useEffect(() => {\n realm.subscriptions.update(mutableSubs => {\n mutableSubs.add(realm.objects(Task));\n });\n }, [realm, result]);\n\n return (\n \n \n \n {\n realm.write(() => {\n new Task(realm, newDescription, app.currentUser.id);\n });\n setNewDescription(\"\")\n }}>\u2795\n \n item._id.toHexString()} renderItem={({ item }) => {\n return (\n \n \n realm.write(() => {\n item.isComplete = !item.isComplete\n })\n }>{item.isComplete ? \"\u2705\" : \"\u2611\ufe0f\"}\n {item.description}\n {\n realm.write(() => {\n realm.delete(item)\n })\n }} >{\"\ud83d\uddd1\ufe0f\"}\n \n );\n }} >\n \n );\n}\n```\n\n## Looking ahead\n\nThe Realm JavaScript SDK is free, open source, and available for you to try out today. It can be used as an open-source local-only database for mobile apps or can be used to synchronize data to MongoDB Atlas with a generous free tier. The Realm JavaScript team is not done. As we look to the coming year, we will continue to refine our APIs to eliminate more boilerplate and do the heavy lifting for our users especially as it pertains to React 18, hook easily into developer tools like Expo, and explore expanding into other platforms such as web or Electron. \n\nGive it a try today and let us know what you think! 
Try out our tutorial, read our docs, and follow our repo.", "format": "md", "metadata": {"tags": ["Realm", "JavaScript", "React Native"], "pageDescription": "Today, we are pleased to announce the release of Realm JavaScript version 11\u2014 a complete re-imagining of the SDK and its APIs to be more idiomatic for React Native and JavaScript developers everywhere.", "contentType": "Article"}, "title": "Realm JavaScript v11: A Step Forward for React Native \u2014 Hermes Support, Realm React, Flipper, and Much More", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-multi-cloud-global-clusters", "action": "created", "body": "# Atlas Multi-Cloud Global Cluster: Always Available, Even in the Apocalypse!\n\n## Introduction\n\nIn recent years, \"high availability\" has been a buzzword in IT. Using this phrase usually means having your application and services resilient to any disruptions as much as possible.\n\nAs vendors, we have to guarantee certain levels of uptime via SLA contracts, as maintaining high availability is crucial to our customers. These days, downtime, even for a short period of time, is widely unacceptable.\n\nMongoDB Atlas, our data as a service platform, has just the right solution for you!\n\n>\n>\n>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.\n>\n>\n\n## Let's Go Global\n\nMongoDB Atlas offers a very neat and flexible way to deploy a global database in the form of a Global Sharded Cluster. Essentially, you can create Zones across the world where each one will have a shard, essentially a replica set. This allows you to read and write data belonging to each region from its local shard/s.\n\nTo improve our network stability and overhead, Atlas provides a \"Local reads in all Zones\" button. It directs Atlas to automatically associate at least one secondary from each shard to one of the other regions. With an appropriate read preference, our application will now be able to get data from all regions without the need to query it cross-region. See our Atlas Replica Set Tags to better understand how to target local nodes or specific cloud nodes.\n\nMongoDB 4.4 introduced another interesting feature around read preferences for sharded clusters, called Hedged Reads. A hedged read query is run across two secondaries for each shard and returns the fastest response. This can allow us to get a fast response even if it is served from a different cloud member. Since this feature is allowed for `non-Primary` read preferences (like `nearest`), it should be considered to be eventually consistent. This should be taken into account with your consistency considerations.\n\n## Let's Go Multi-Cloud\n\nOne of the latest breakthroughs the Atlas service presented is being able to run a deployment across cloud vendors (AWS, Azure, and GCP). This feature is now available also in Global Clusters configurations.\n\nWe are now able to have shards spanning multiple clouds and regions, in one cluster, with one unified connection string. 
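To make the read preference discussion above concrete, here is a minimal mongosh-style sketch. It assumes a MongoDB 4.4+ deployment like the one described in this post, and the collection name and filter are placeholders, not part of the original article:

``` javascript
// Read from the lowest-latency member, which is often a local secondary
// once "Local reads in all Zones" has been enabled:
db.orders.find({ customerRegion: "EU" }).readPref("nearest");

// Hedged reads (MongoDB 4.4+): the query is dispatched to two members per
// queried shard and the fastest response wins:
db.orders.find({ customerRegion: "EU" }).readPref("nearest", null, { enabled: true });
```

As noted above, such non-primary reads are eventually consistent, so they are best reserved for read paths that can tolerate slightly stale data.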
Due to the smart tagging of the replica set and hosts, we can have services working isolated within a single cloud, or benefit from being cloud agnostic.\n\nTo learn more about Multi-Cloud clusters, I suggest you read a great blog post, Create a Multi-Cloud Cluster with MongoDB Atlas, written by my colleague, Adrienne Tacke.\n\n## What is the Insurance Policy We've Got?\n\nWhen you set up a Global Cluster, how it is configured will change the availability features. As you configure your cluster, you can immediately see how your configuration covers your resiliency, HA, and Performance requirements. It's an awesome feature! Let's dive into the full set:\n\n##### Zone Configuration Check-list\n\n| Ability | Description | Feature that covers it |\n| --- | --- | --- |\n| Low latency read and writes in \\ | Having a Primary in each region allows us to query/write data within the region. | Defining a zone in a region covers this ability. |\n| Local reads in all zones | If we want to query a local node for another zone data (e.g., in America, query for Europe documents), we need to allow each other zone to place at least one secondary in the local region (e.g., Europe shard will have one secondary in America region). This requires our reads to use a latency based `readPreference` such as `nearest` or `hedged`. If we do not have a local node we will need to fetch the data remotely. | Pressing the \"Allow local reads in all zones\" will place one secondary in each other zone. |\n| Available during partial region outage | In case there is a cloud \"availability zone\" outage within a specific region, regions with more than one availability zone will allow the region to still function as normal. | Having the preferred region of the zone with a number of electable nodes span across two or more availability zones of the cloud provider to withstand an availability zone outage. Those regions will be marked with a star in the UI. For example: two nodes in AWS N. Virginia where each one is, by design, deployed over three potential availability zones. |\n| Available during full region outage | In case there is a full cloud region outage, we need to have a majority of nodes outside this region to maintain a primary within the | Having a majority of \"Electable\" nodes outside of the zone region. For example: two nodes in N. Virginia, two nodes in N. California, and one node in Ireland |\n| Available during full cloud provider outage | If a whole cloud provider is unavailable, the zones still have a majority of electable nodes on other cloud providers, and so the zones are not dependent on one cloud provider. | Having multi-cloud nodes in all three clouds will allow you to withstand one full cloud provider failure. For example: two nodes on AWS N.Virginia, two nodes on GCP Frankfurt, and one node on Azure London. |\n\n## Could the Apocalypse Not Scare Our Application?\n\nAfter we have deployed our cluster, we now have a fully global cross-region, cross-cloud, fault-tolerant cluster with low read and write latencies across the globe. 
All this is accessed via a simple unified SRV connection string:\n\n``` javascript\n\"mongodb+srv://user:myRealPassword@cluster0.mongodb.net/test?w=majority\"\n```\n\nThis cluster comes with a full backup and point-in-time restore option, in case something **really** horrible happens (like human mistakes...).\n\nI don't think that our application has anything to fear, other than its own bugs.\n\nTo show how easy it is to manipulate this complex deployment, I YouTubed it:\n\n>\n>\n>:youtube[]{vid=pbhWjNVKMfg}\n>\n>To learn more about how to deploy a cross-region global cluster to cover all of our fault tolerance best practices, check out the video.\n>\n>\n\n## Wrap-Up\n\nCovering our global application demand and scale has never been easier, while keeping the highest possible availability and resiliency. Global multi-cloud clusters allow IT to sleep well at night knowing that their data is always available, even in the apocalypse!\n\n>\n>\n>If you have questions, please head to our developer community website, where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n>\n>\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to build Atlas Multi-Cloud Global Cluster: Always available, even in the apocalypse!", "contentType": "Article"}, "title": "Atlas Multi-Cloud Global Cluster: Always Available, Even in the Apocalypse!", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/announcing-realm-flutter-sdk", "action": "created", "body": "# Announcing the GA of the Realm Flutter SDK\n\nOver a year after our first official release, we are excited to announce the general availability of our Realm Flutter SDK. The team has made dozens of releases, merged hundreds of PRs, and squashed thousands of bugs \u2014 many of them raised by you, our community, who have guided us through this preview period. We could not have done it without your feedback and testing. The team also worked in close partnership with the Dart and Google Cloud teams to make sure our Flutter SDK followed best practices. Their guidance was essential to stabilizing our low-level API for 1.0. You can read more about our collaboration with the Dart team on their blog here.\n\nRealm is a simple and fast object-oriented database for mobile applications that does not require an ORM layer or any glue code to work with your data layer. With Realm, working with your data is as simple as interacting with objects from your data model. Any updates to the underlying data store will automatically update your objects as soon as the state on disk has changed, enabling you to refresh the view automatically via StatefulWidgets and Streams.\n\nWith this 1.0 release, we have solidified the foundation of our Realm Flutter SDK and stabilized the API, in addition to adding features around schema definitions such as support for migrations and new types like lists of primitives, embedded objects, sets, and a RealmValue type, which can contain a mix of any valid type. We\u2019ve also enhanced the API to support asynchronous writes and frozen objects, as well as introducing a writeCopy API for converting realm files in code, bringing it up to par with our other SDKs. \n\nFinally, the Realm Flutter SDK comes with built-in data synchronization to MongoDB Atlas \u2014 a cloud-managed database-as-a-service for MongoDB. The developer does not need to write any networking or conflict resolution code. 
All data transfer is done under the hood, abstracting away thousands of lines of code for handling offline state and network availability, and enabling developers to build reactive mobile apps that can trigger UI updates automatically from server-side state changes. This delivers a performant and offline-tolerant mobile app because it always renders the state from disk.\n\n> **Live-code with us**\n> \n> Join us live to build a Flutter mobile app from scratch! Senior Software Engineer Kasper Nielsen walks us through setting up a new Flutter app with local storage via Realm and cloud-syncing via Atlas Device Sync. Register here.\n\n## Why Realm?\nAll of Realm\u2019s SDKs are built on three core concepts:\n* An object database that infers the schema from the developers\u2019 class structure \u2014 making working with objects as easy as interacting with their data layer. No conversion code necessary.\n* Live objects so the developer has a simple way to update their UI \u2014 integrated with StatefulWidgets and Streams.\n* A columnar store so that query results return in lightning speed and directly integrate with an idiomatic query language the developer prefers.\n\nRealm is a database designed for mobile applications as a replacement for SQLite. It was written from the ground up in C++, so it is not a wrapper around SQLite or any other relational datastore. Designed with the mobile environment in mind, it is lightweight and optimizes for constraints like compute, memory, bandwidth, and battery that do not exist on the server side. Realm uses lazy loading and memory mapping with each object reference pointing directly to the location on disk where the state is stored. This exponentially increases lookup and query speed as it eliminates the loading of pages of data into memory to perform calculations. It also reduces the amount of memory pressure on the device while working with the data layer. \n\n***Build better mobile apps with Atlas Device Sync:***\n*Atlas Device Sync is a fully-managed mobile backend as a service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!*\n\n## Enhancements to the Realm Flutter SDK\nThe greatest enhancements for the GA of the SDK surround the modeling of the schema \u2014 giving developers enhanced expressiveness and flexibility when it comes to building your data classes for your application. First, the SDK includes the ability to have embedded objects \u2014 this allows you to declare an object as owned by a parent object and attach its lifecycle to the parent. This enables a cascading delete when deleting a parent object because it will also delete any embedded objects. It also frees the developer from writing cleanup code to perform this operation manually. A Set has also been added, which enables a developer to have a collection of any type of elements where uniqueness is automatically enforced along with a variety of methods that can operate on a set. Finally, the flexibility is further enhanced with the addition of RealmValue, which is a mixed type that allows a developer to insert any valid type into the collection or field. 
This is useful when the developer may not know the type of a value that they are receiving from an API but needs to store it for later manipulation.\n\nThe new SDK also contains ergonomic improvements to the API to make manipulating the data and integrating into the Flutter ecosystem seamless. The writeCopy API allows you to make a copy of a Realm database file to bundle with your application install and enables you to convert from a non-sync to sync with Realm and vice versa. Frozen objects give developers the option to make a snapshot of the state at a certain point in time, making it simple to integrate into a unidirectional data flow pattern or library such as BLoC. Lastly, the writeAsync API introduces a convenient method to offload work to the background and preserve execution of the main thread.\n\n```\n// Define your schema in your object model - here a 1 to 1 relationship\n@RealmModel()\nclass _Car {\n late String make;\n String? model;\n int? kilometers = 500;\n _Person? owner;\n}\n\n// Another object in the schema. A person can have a Set of cars and data can be of any type\n@RealmModel()\nclass _Person {\n late String name;\n int age = 1;\n\n late Set<_Car> cars;\n late RealmValue data;\n}\n\nvoid main(List arguments) async {\n final config = Configuration.local(Car.schema, Person.schema]);\n final realm = Realm(config);\n\n// Create some cars and add them to your person object\n final person = Person(\"myself\", age: 18);\n person.cars.add(Car(\"Tesla\", model: \"Model Y\", kilometers: 818));\n person.cars.add(Car(\"Audi\", model: \"A4\", kilometers: 12));\n person.data = RealmValue.bool(true);\n \n // Dispatch the write to the background to not block the UI \n await realm.writeAsync(() {\n realm.add(person);\n });\n\n// Listen for any changes to the underlying data - useful for updating the UI\n person.cars.changes.listen((e) {\n print(\"set of cars changed\");\n });\n\n// Add some more any type value to the data field\n realm.write(() {\n person.data = RealmValue.string(\"Realm is awesome\");\n });\n\n realm.close();\n }\n```\n\n## Looking ahead\nThe Realm Flutter SDK is free, [open source, and available for you today. We believe that with the GA of the SDK, Flutter developers can easily build an application that seamlessly integrates into Dart\u2019s language primitives and have the confidence to launch their app to production. Thousands of developers have already tried the SDK and many have already shipped their app to the public. Two such companies are Aupair Valley and Dot On.\n\nAupair Valley is a mobile social media platform that helps connect families and au pairs worldwide. The app\u2019s advanced search algorithm facilitates the matching between families and au pairs. They can find and connect with each other and view information about their background. The app also enables chat functionality to set up a meeting. Aupair Valley selected Flutter so that they could easily iterate on both an Android and iOS app in the same codebase, while Realm and Device Sync played an essential role in the development of Aupair Valley by abstracting away the data layer and networking that any two-sided market app requires. Built on Google Cloud and MongoDB Atlas, Aupair Valley drastically reduced development time and costs by leveraging built-in functionality on these platforms.\n\nDot On is a pioneering SaaS Composable Commerce Platform for small and midsize enterprises, spanning Product Management and Order Workflow Automation applications. 
Native connectors with Brightpearl by Sage and Shopify Plus further enrich capabilities around global data syndication and process automation through purpose-built, deep integrations. With Dot On\u2019s visionary platform, brands are empowered with digital freedom to deliver exceptional and unique customer experiences that exceed expectations in this accelerating digital world.\n\nDot On chose Realm and MongoDB Atlas for their exceptional and innovative technology fused with customer success that is central to Dot On\u2019s core values. To meet this high bar, it was essential to select a vendor whose native-application database solution was tried and tested, highly scalable, and housed great flexibility around data architecture all while maintaining very high standards over security, privacy and compliance. \n\n\u201cRealm and Device Sync is a perfect fit and has accelerated our development. Dot On\u2019s future is incredibly exciting and we look forward to our continued relationship with MongoDB who have been highly supportive from day one.\u201d -Jon Petrie, CEO, Dot On.\n\nThe future is bright for Flutter at Realm and MongoDB. Our roadmap will continue to evolve by adding new types such as Decimal128 and Maps, along with additional MongoDB data access APIs, and deepening our integration into the Flutter framework with guidance and convenience APIs for even simpler integrations into state management and streams. Stay tuned!\n\nGive it a try today and let us know what you think! Check out our samples, read our docs, and follow our repo.\n\n> **Live-code with us**\n>\n>Join us live to build a Flutter mobile app from scratch! Senior Software Engineer Kasper Nielsen walks us through setting up a new Flutter app with local storage via Realm and cloud-syncing via Atlas Device Sync. Register here.", "format": "md", "metadata": {"tags": ["Realm", "Flutter"], "pageDescription": "After over a year since our first official release, we are excited to announce the general availability of our Realm Flutter SDK.", "contentType": "Article"}, "title": "Announcing the GA of the Realm Flutter SDK", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/easy-deployment-mean-stack", "action": "created", "body": "# Easy Deployment of MEAN Stack with MongoDB Atlas, Cloud Run, and HashiCorp Terraform\n\n*This article was originally written by Aja Hammerly and Abirami Sukumaran, developer advocates from Google.*\n\nServerless computing promises the ability to spend less time on infrastructure and more time on developing your application. But historically, if you used serverless offerings from different vendors you didn't see this benefit. Instead, you often spent a significant amount of time configuring the different products and ensuring they can communicate with each other. We want to make this easier for everyone. We've started by using HashiCorp Terraform to make it easier to provision resources to run the MEAN stack on Cloud Run with MongoDB Atlas as your database. If you want to try it out, our GitHub repository is here:\u00a0https://github.com/GoogleCloudPlatform/terraform-mean-cloudrun-mongodb\n\n## MEAN Stack Basics\n\nIf you aren't familiar, the\u00a0MEAN stack\u00a0is a technology stack for building web applications. 
The MEAN stack is composed of four main components\u2014MongoDB, Express, Angular, and Node.js.\n\n* MongoDB is responsible for data storage\n* Express.js is a Node.js web application framework for building APIs\n* Angular is a client-side JavaScript platform\n* Node.js is a server-side JavaScript runtime environment. The server uses the MongoDB Node.js driver to connect to the database and retrieve and store data\n\nOur project runs the MEAN stack on Cloud Run (Express, Node) and MongoDB Atlas (MongoDB).\n\nThe repository uses a sample application to make it easy to understand all the pieces. In the sample used in this experiment, we have a client and a server application, each packaged in its own container, that use the MongoDB Node.js driver to connect to the MongoDB Atlas database.\n\nBelow, we'll talk about how we used Terraform to make deploying and configuring this stack easier for developers and how you can try it yourself.\n\n## Required One-Time Setup\n\nTo use these scripts, you'll need to have both MongoDB Atlas and Google Cloud accounts.\n\n### MongoDB Atlas Setup\n\n1. Log in with your MongoDB Atlas account.\n2. Once you're logged in, click on \"Access Manager\" at the top and select \"Organization Access\"\n\n3. Select the \"API Keys\" tab and click the \"Create API Key\" button\n4. Give your new key a short description and select the \"Organization Owner\" permission\n5. Click \"Next\" and then make a note of your public and private keys\n6. Next, you'll need your Organization ID. In the left navigation menu, click \u201cSettings\u201d.\n\n7. Locate your Organization ID and copy it.\n\nThat's everything for Atlas. Now you're ready to move on to setting up Google Cloud!\n\n### Google Cloud Tooling and Setup\nYou'll need a billing account set up on your Google Cloud account, and to make a note of your Billing Account ID. You can find your Billing Account ID on the billing page.\n\nYou'll also need to pick a region for your infrastructure. Note that Google Cloud and Atlas use different names for the same region. You can find a mapping between Atlas regions and Google Cloud regions here. You'll need a region that supports the M0 cluster tier. Choose a region close to you and make a note of both the Google Cloud and Atlas region names.\n\nFinally, you'll need a terminal with the Google Cloud CLI (gcloud) and Terraform installed. You can use your workstation or try Cloud Shell, which has these tools already installed. To get started in Cloud Shell with the repo cloned and ready to configure, click here.\n\n### Configuring the Demo\n\nIf you haven't already, clone this repo. Run `terraform init` to make sure Terraform is working correctly and download the provider plugins. Then, create a file in the root of the repository called `terraform.tfvars` with the following contents, replacing the placeholder values as necessary:\n\natlas_pub_key          = \"<your Atlas public key>\"\n\natlas_priv_key         = \"<your Atlas private key>\"\n\natlas_org_id           = \"<your Atlas organization ID>\"\n\ngoogle_billing_account = \"<your billing account ID>\"\n\nIf you selected the *us-central1/US_CENTRAL* region, then you're ready to go. If you selected a different region, add the following to your `terraform.tfvars` file:\n\natlas_cluster_region = \"<your Atlas region>\"\n\ngoogle_cloud_region  = \"<your Google Cloud region>\"\n\nRun `terraform init` again to make sure there are no new errors. 
If you get an error, check your terraform.tfvars file.\n\n### Deploy the Demo\n\nYou're ready to deploy! You have two options: you can run\u00a0`terraform plan`\u00a0to see a full listing of everything that Terraform wants to do without any risk of accidentally creating those resources. If everything looks good, you can then run\u00a0`terraform apply`\u00a0to execute the plan.\n\nAlternately, you can just run terraform apply on its own and it will create a plan and display it before prompting you to continue. You can learn more about the plan and apply commands in\u00a0this tutorial. For this demo, we're going to just run\u00a0`terraform apply`:\n\nIf everything looks good to you, type yes and press enter. This will take a few minutes. When it's done, Terraform will display the URL of your application:\n\nOpen that URL in your browser and you'll see the sample app running.\n\n### Cleaning Up\nWhen you're done, run terraform destroy to clean everything up:\n\nIf you're sure you want to tear everything down, type yes and press enter. This will take a few minutes. When Terraform is done everything it created will have been destroyed and you will not be billed for any further usage.\n\n## Next Steps\n\nYou can use the code in this repository to deploy your own applications. Out of the box, it will work with any application that runs in a single container and reads the MongoDB connection string from an environment variable called ATLAS\\_URI, but the Terraform code can easily be modified if you have different needs or to support more complex applications.\n\nFor more information please refer to the\u00a0Next Steps\u00a0section of the readme.", "format": "md", "metadata": {"tags": ["Atlas", "Node.js", "Google Cloud", "Terraform"], "pageDescription": "Learn about using HashiCorp Terraform to make it easier to provision resources to run the MEAN stack on Cloud Run with MongoDB Atlas as your database. ", "contentType": "Article"}, "title": "Easy Deployment of MEAN Stack with MongoDB Atlas, Cloud Run, and HashiCorp Terraform", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/kafka-mongodb-atlas-tutorial", "action": "created", "body": "# Kafka to MongoDB Atlas End to End Tutorial\n\nData and event-driven applications are in high demand in a large variety of industries. With this demand, there is a growing challenge with how to sync the data across different data sources. \n\nA widely adopted solution for communicating real-time data transfer across multiple components in organization systems is implemented via clustered queues. One of the popular and proven solutions is Apache Kafka.\n\nThe Kafka cluster is designed for streams of data that sequentially write events into commit logs, allowing real-time data movement between your services. Data is grouped into topics inside a Kafka cluster.\n\nMongoDB provides a Kafka connector certified by Confluent, one of the largest Kafka providers. With the Kafka connector and Confluent software, you can publish data from a MongoDB cluster into Kafka topics using a source connector. Additionally, with a sink connector, you can consume data from a Kafka topic to persist directly and consistently into a MongoDB collection inside your MongoDB cluster.\n\nIn this article, we will provide a simple step-by-step guide on how to connect a remote Kafka cluster\u2014in this case, a Confluent Cloud service\u2014with a MongoDB Atlas cluster. 
For simplicity purposes, the installation is minimal and designed for a small development environment. However, through the article, we will provide guidance and links for production-related considerations.\n\n> **Pre-requisite**: To avoid JDK known certificate issues please update your JDK to one of the following patch versions or newer:\n> - JDK 11.0.7+\n> - JDK 13.0.3+\n> - JDK 14.0.2+\n\n## Table of Contents\n\n1. Create a Basic Confluent Cloud Cluster\n1. Create an Atlas Project and Cluster \n1. Install Local Confluent Community Binaries to Run a Kafka Connect Instance\n1. Configure the MongoDB Connector with Kafka Connect Locally\n1. Start and Test Sink and Source MongoDB Kafka Connectors\n1. Summary\n\n## Create a Basic Confluent Cloud Cluster\n\nWe will start by creating a basic Kafka cluster in the Confluent Cloud. \n\nOnce ready, create a topic to be used in the Kafka cluster. I created one named \u201corders.\u201d\n\nThis \u201corders\u201d topic will be used by Kafka Sink connector. Any data in this topic will be persisted automatically in the Atlas database.\n\nYou will also need another topic called \"outsource.kafka.receipts\". This topic will be used by the MongoDB Source connector, streaming reciepts from Atlas database.\n\nGenerate an `api-key` and `api-secret` to interact with this Kafka cluster. For the simplicity of this tutorial, I have selected the \u201cGlobal Access\u201d api-key. For production, it is recommended to give as minimum permissions as possible for the api-key used. Get a hold of the generated keys for future use.\n\nObtain the Kafka cluster connection string via `Cluster Overview > Cluster Settings > Identification > Bootstrap server` for future use. Basic clusters are open to the internet and in production, you will need to amend the access list for your specific hosts to connect to your cluster via advanced cluster ACLs.\n\n## Create a MongoDB Atlas Project and Cluster\n\nCreate a project and cluster or use an existing Atlas cluster in your project. \n\nPrepare your Atlas cluster for a kafka-connect connection. Inside your project\u2019s access list, enable user and relevant IP addresses of your local host, the one used for Kafka Connect binaries. Finally, get a hold of the Atlas connection string for future use.\n\n## Install a Kafka Connect Worker\n\nKafka Connect is one of the mechanisms to reliably stream data between different data systems and a Kafka cluster. For production use, we recommend using a distributed deployment for high availability, fault tolerance, and scalability. There is also a cloud version to install the connector on the Confluent Cloud.\n\nFor this simple tutorial, we will use a standalone local Kafka Connect installation.\n\nTo have the binaries to install kafka-connect and all of its dependencies, let\u2019s download the files:\n```shell \ncurl -O http://packages.confluent.io/archive/7.0/confluent-community-7.0.1.tar.gz\ntar -xvf confluent-community-7.0.1.tar.gz\n```\n\n## Configure Kafka Connect\n\nConfigure the plugins directory where we will host the MongoDB Kafka Connector plugin:\n```shell\nmkdir -p /usr/local/share/kafka/plugins\n```\n\nEdit the `/etc/schema-registry/connect-avro-standalone.properties` using the content provided below. Ensure that you replace the `:` with information taken from Confluent Cloud bootstrap server earlier. \n\nAdditionally, replace the generated `` and `` taken from Confluent Cloud in every section.\n```\nbootstrap.servers=:\n\nConnect data. 
Every Connect user will\n# need to configure these based on the format they want their data in when loaded from or stored into Kafka\nkey.converter=org.apache.kafka.connect.json.JsonConverter\nvalue.converter=org.apache.kafka.connect.json.JsonConverter\n# Converter-specific settings can be passed in by prefixing the Converter's setting with the converter you want to apply\n# it to\nkey.converter.schemas.enable=false\nvalue.converter.schemas.enable=false\n\n# The internal converter used for offsets and config data is configurable and must be specified, but most users will\n# always want to use the built-in default. Offset and config data is never visible outside of Kafka Connect in this format.\ninternal.key.converter=org.apache.kafka.connect.json.JsonConverter\ninternal.value.converter=org.apache.kafka.connect.json.JsonConverter\ninternal.key.converter.schemas.enable=false\ninternal.value.converter.schemas.enable=false\n\n# Store offsets on local filesystem\noffset.storage.file.filename=/tmp/connect.offsets\n# Flush much faster than normal, which is useful for testing/debugging\noffset.flush.interval.ms=10000\n\nssl.endpoint.identification.algorithm=https\n\nsasl.mechanism=PLAIN\nrequest.timeout.ms=20000\nretry.backoff.ms=500\nsasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \\\nusername=\"\" password=\"\";\nsecurity.protocol=SASL_SSL\n\nconsumer.ssl.endpoint.identification.algorithm=https\nconsumer.sasl.mechanism=PLAIN\nconsumer.request.timeout.ms=20000\nconsumer.retry.backoff.ms=500\nconsumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \\\nusername=\"\" password=\"\";\nconsumer.security.protocol=SASL_SSL\n\nproducer.ssl.endpoint.identification.algorithm=https\nproducer.sasl.mechanism=PLAIN\nproducer.request.timeout.ms=20000\nproducer.retry.backoff.ms=500\nproducer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \\\nusername=\"\" password=\"\";\nproducer.security.protocol=SASL_SSL\n\nplugin.path=/usr/local/share/kafka/plugins\n```\n\n**Important**: Place the `plugin.path` to point to our plugin directory with permissions to the user running the kafka-connect process.\n\n### Install the MongoDB connector JAR: \nDownload the \u201call\u201d jar and place it inside the plugin directory.\n\n```shell\ncp ~/Downloads/mongo-kafka-connect-1.6.1-all.jar /usr/local/share/kafka/plugins/\n```\n### Configure a MongoDB Sink Connector\n\nThe MongoDB Sink connector will allow us to read data off a specific Kafka topic and write to a MongoDB collection inside our cluster. Create a MongoDB sink connector properties file in the main working dir: `mongo-sink.properties` with your Atlas cluster details replacing `:@/` from your Atlas connect tab. The working directory can be any directory that the `connect-standalone` binary has access to and its path can be provided to the `kafka-connect` command shown in \"Start Kafka Connect and Connectors\" section.\n\n```\nname=mongo-sink\ntopics=orders\nconnector.class=com.mongodb.kafka.connect.MongoSinkConnector\ntasks.max=1\nconnection.uri=mongodb+srv://:@/?retryWrites=true&w=majority\ndatabase=kafka\ncollection=orders\nmax.num.retries=1\nretries.defer.timeout=5000\n```\n\nWith the above configuration, we will listen to the topic called \u201corders\u201d and publish the input documents into database `kafka` and collection name `orders`. 
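To make the sink flow concrete, here is a small, hypothetical illustration (the document fields are made up for this example and are not part of the original tutorial). If a JSON message like `{"orderId": 1, "item": "coffee", "quantity": 2}` is produced to the "orders" topic, the sink connector should persist it into Atlas, and you can verify that from mongosh:

``` javascript
// Connected to the Atlas cluster referenced by connection.uri:
use kafka
// Each Kafka record lands as a document; by default the sink connector
// generates a new ObjectId for _id:
db.orders.find().sort({ _id: -1 }).limit(1)
// e.g. { "_id": ObjectId("..."), "orderId": 1, "item": "coffee", "quantity": 2 }
```

We will publish exactly this kind of test document through the Confluent UI in the "Publish Documents to the Kafka Queue" section below.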
\n\n### Configure Mongo Source Connector\n\nThe MongoDB Source connector will allow us to read data off a specific MongoDB collection and write it to a Kafka topic. When data arrives in a collection called `receipts`, we can use a source connector to transfer it to a predefined Kafka topic named \u201coutsource.kafka.receipts\u201d (the configured prefix followed by the `<database>.<collection>` namespace as the topic name\u2014it's possible to use advanced mapping to change that). \n\nLet\u2019s create a file named `mongo-source.properties` in the main working directory:\n```\nname=mongo-source\nconnector.class=com.mongodb.kafka.connect.MongoSourceConnector\ntasks.max=1\n\n# Connection and source configuration\nconnection.uri=mongodb+srv://<username>:<password>@<atlas-cluster-host>/?retryWrites=true&w=majority\ndatabase=kafka\ncollection=receipts\n\ntopic.prefix=outsource\ntopic.suffix=\npoll.max.batch.size=1000\npoll.await.time.ms=5000\n\n# Change stream options\npipeline=[]\nbatch.size=0\nchange.stream.full.document=updateLookup\npublish.full.document.only=true\ncollation=\n```\n\nThe main properties here are the database, collection, and aggregation pipeline used to listen for incoming changes, as well as the connection string. The `topic.prefix` adds a prefix to the `<database>.<collection>` namespace as the Kafka topic on the Confluent side. In this case, the topic name that will receive new MongoDB records is \u201coutsource.kafka.receipts\u201d and was predefined earlier in this tutorial.\n\nI have also added `publish.full.document.only=true` as I only need the actual document changed or inserted without the change stream event wrapping information.\n\n### Start Kafka Connect and Connectors\n\nFor simplicity, I am running the standalone Kafka Connect in the foreground.\n\n```\n ./confluent-7.0.1/bin/connect-standalone ./confluent-7.0.1/etc/schema-registry/connect-avro-standalone.properties mongo-sink.properties mongo-source.properties\n```\n\n> **Important**: Run with the latest Java version to avoid JDK SSL bugs.\n\nNow, every document published to the \u201corders\u201d topic will be inserted into the `orders` collection by the sink connector. The source connector we configured will transmit every receipt document from the `receipts` collection back to another topic called \"outsource.kafka.receipts\", showcasing how MongoDB data can be consumed back into a Kafka topic.\n\n## Publish Documents to the Kafka Queue\n\nThrough the Confluent UI, I have submitted a test document to my \u201corders\u201d topic.\n![Produce data into \"orders\" topic\n\n### Atlas Cluster is Being Automatically Populated with the Data\n\nLooking into my Atlas cluster, I can see a new collection named `orders` in the `kafka` database.\n\nNow, let's assume that our application received the order document from the `orders` collection and produced a receipt. 
We can replicate this by inserting a document in the `kafka.reciepts` collection:\n\nThis operation will cause the source connector to produce a message into \u201coutsource.kafka.reciepts\u201d topic.\n### Kafka \"outsource.kafka.reciepts\" Topic\n\nLog lines on kafka-connect will show that the process received and published the document: \n\n```\n2021-12-14 15:31:18,376] INFO [mongo-source|task-0] [Producer clientId=connector-producer-mongo-source-0] Cluster ID: lkc-65rmj (org.apache.kafka.clients.Metadata:287)\n[2021-12-14 15:31:18,675] INFO [mongo-source|task-0] Opened connection [connectionId{localValue:21, serverValue:99712}] to dev-shard-00-02.uvwhr.mongodb.net:27017 (org.mongodb.driver.connection:71)\n[2021-12-14 15:31:18,773] INFO [mongo-source|task-0] Started MongoDB source task (com.mongodb.kafka.connect.source.MongoSourceTask:203)\n[2021-12-14 15:31:18,773] INFO [mongo-source|task-0] WorkerSourceTask{id=mongo-source-0} Source task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:233)\n[2021-12-14 15:31:27,671] INFO [mongo-source|task-0|offsets] WorkerSourceTask{id=mongo-source-0} flushing 0 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:505\n[2021-12-14 15:31:37,673] INFO [mongo-source|task-0|offsets] WorkerSourceTask{id=mongo-source-0} flushing 1 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:505)\n```\n\n## Summary\n\nIn this how-to article, I have covered the fundamentals of building a simple yet powerful integration of MongoDB Atlas to Kafka clusters using MongoDB Kafka Connector and Kafka Connect.\n\nThis should be a good starting point to get you going with your next event-driven application stack and a successful integration between MongoDB and Kafka.\n\nTry out [MongoDB Atlas and Kafka connector today!\n", "format": "md", "metadata": {"tags": ["MongoDB", "Java", "Kafka"], "pageDescription": "A simple step-by-step tutorial on how to use MongoDB Atlas with a Kafka Connector and connect it to any Remote Kafka Cluster.", "contentType": "Tutorial"}, "title": "Kafka to MongoDB Atlas End to End Tutorial", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/oauth-and-realm-serverless", "action": "created", "body": "# OAuth & MongoDB Realm Serverless Functions\n\nI recently had the opportunity to work with Lauren Schaefer and Maxime Beugnet on a stats tracker for some YouTube statistics that we were tracking manually at the time.\n\nI knew that to access the YouTube API, we would need to authenticate using OAuth 2. I also knew that because we were building the app on MongoDB Realm Serverless functions, someone would probably need to write the implementation from scratch.\n\nI've dealt with OAuth before, and I've even built client implementations before, so I thought I'd volunteer to take on the task of implementing this workflow. It turned out to be easier than I thought, and because it's such a common requirement, I'm documenting the process here, in case you need to do the same thing.\n\nThis post assumes that you've worked with MongoDB Realm Functions in the past, and that you're comfortable with the concepts around calling REST-ish APIs.\n\nBut first...\n\n## What the Heck is OAuth 2?\n\nOAuth 2 is an authorization protocol which allows unrelated servers to allow authenticated access to their services, without sharing user credentials, such as your login password. 
What this means in this case is that YouTube will allow my Realm application to operate *as if it was logged in as a MongoDB user*.\n\nThere are some extra features for added control and security, like the ability to only allow access to certain functionality. In our case, the application will only need read-only access to the YouTube data, so there's no need to give it permission to delete MongoDB's YouTube videos!\n\n### What Does it Look Like?\n\nBecause OAuth 2 doesn't transmit the user's login credentials, there is some added complexity to make this work.\n\nFrom the user's perspective, it looks like this:\n\n1. The user clicks on a button (or in my minimal implementation, they type in a specific URL), which redirects the browser to the authorizing service\u2014in this case YouTube.\n2. The authorizing service asks the user to log in, if necessary.\n3. The authorizing service asks the user to approve the request to allow the Realm app to make requests to the YouTube API on their behalf.\n4. If the user approves, then the browser redirects back to the Realm application, but with an extra parameter added to the URL containing a code which can be used to obtain access tokens.\n\nBehind the scenes, there's a Step 5, where the Realm service makes an extra HTTPS request to the YouTube API, using the code provided in Step 4, requesting an access token and a refresh token.\n\nAccess tokens are only valid for an hour. When they expire, a new access token can be requested from YouTube, using the refresh token, which only expires if it hasn't been used for six months!\n\nIf this sounds complicated, that's because it is! If you look more closely at the diagram above, though, you can see that there are only actually two requests being made by the browser to the Realm app, and only one request being made by the Realm app directly to Google. As long as you implement those three things, you'll have implemented the OAuth's full authorization flow.\n\nOnce the authorization flow has been completed by the appropriate user (a user who has permission to log in as the MongoDB organization), as long as the access token is refreshed using the refresh token, API calls can be made to the YouTube API indefinitely.\n\n## Setting Up the Necessary Accounts\n\nYou'll need to create a Realm app and an associated Google project, and link the two together. There are quite a few steps, so make sure you don't miss any!\n\n### Create a Realm App\n\nGo to and log in if necessary. I'm going to assume that you have already created a MongoDB Atlas cluster, and an associated Realm App. If not, follow the steps described in the MongoDB documentation.\n\n### Create a Google API Project\n\nThis flow is loosely applicable to any OAuth service, but I'll be working with Google's YouTube API. The first thing to do is to create a project in the Google API Console that is analogous to your Realm app.\n\nGo to . Click the projects list (at the top-left of the screen), then click the \"Create Project\" button, and enter a name. I entered \"DREAM\" because that's the funky acronym we came up with for the analytics monitor project my team was working on. Select the project, then click the radio button that says \"External\" to make the app available to anyone with a Google account, and click \"Create\" to finish creating your project.\n\nIgnore the form that you're presented with for now. 
On the left-hand side of the screen, click \"Library\" and in the search box, enter \"YouTube\" to filter Google's enormous API list.\n\nSelect each of the APIs you wish to use\u2014I selected the YouTube Data API and the YouTube Analytics API\u2014and click the \"Enable\" button to allow your app to make calls to these APIs.\n\nNow, select \"OAuth consent screen\" from the left-hand side of the window. Next to the name of your app, click \"Edit App.\"\n\nYou'll be taken to a form that will allow you to specify how your OAuth consent screens will look. Enter a sensible app name, your email address, and if you want to, upload a logo for your project. You can ignore the \"App domain\" fields for now. You'll need to enter an Authorized domain by clicking \"Add Domain\" and enter \"mongodb-realm.com\" (without the quotes!). Enter your email address under \"Developer contact information\" and click \"Save and Continue.\"\n\nIn the table of scopes, check the boxes next to the scopes that end with \"youtube.readonly\" and \"yt-analytics.readonly.\" Then click \"Update.\" On the next screen, click \"Save and Continue\" to go to the \"Test users\" page. Because your app will be in \"testing\" mode while you're developing it, you'll need to add the email addresses of each account that will be allowed to authenticate with it, so I added my email address along with those of my team.\n\nClick \"Save and Continue\" for a final time and you're done configuring the OAuth consent screen!\n\nA final step is to generate some credentials your Realm app can use to prove to the Google API that the requests come from where they say they do. Click on \"Credentials\" on the left-hand side of the screen, click \"Create Credentials\" at the top, and select \"OAuth Client ID.\"\n\nThe \"Application Type\" is \"Web application.\" Enter a \"Name\" of \"Realm App\" (or another useful identifier, if you prefer), and then click \"Create.\" You'll be shown your client ID and secret values. Leave them up on the screen, and *in a different tab*, go to your Realm app and select \"Values\" from the left side. Click the \"Create New Value\" button, give it a name of \"GOOGLE_CLIENT_ID,\" select \"Value,\" and paste the client ID into the content text box.\n\nRepeat with the client secret, but select \"Secret,\" and give it the name \"GOOGLE_CLIENT_SECRET.\" You'll then be able to access these values with code like context.values.get(\"GOOGLE_CLIENT_ID\") in your Realm function.\n\nOnce you've got the values safely stored in your Realm App, you've now got everything you need to authorize a user with the YouTube Analytics API.\n\n## Let's Write Some Code!\n\nTo create an HTTP endpoint, you'll need to create an HTTP service in your Realm App. Go to your Realm App, select \"3rd Party Services\" on the left side, and then click the \"Add a Service\" button. Select HTTP and give it a \"Service Name.\" I chose \"google_oauth.\"\n\nA webhook function is automatically created for you, and you'll be taken to its settings page.\n\nGive the webhook a name, like \"authorizor,\" and set the \"HTTP Method\" to \"GET.\" While you're here, you should copy the \"Webhook URL.\" Go back to your Google API project, \"Credentials,\" and then click on the Edit (pencil) button next to your Realm app OAuth client ID.\n\nUnder \"Authorized redirect URIs,\" click \"Add URI,\" paste the URI into the text box, and click \"Save.\"\n\nGo back to your Realm Webhook settings, and click \"Save\" at the bottom of the page. 
You'll be taken to the function editor, and you'll see that some sample code has been inserted for you. Replace it with the following skeleton:\n\n``` javascript\nexports = async function (payload, response) {\n const querystring = require('querystring');\n};\n```\n\nBecause the function will be making outgoing HTTP calls that will need to be awaited, I've made it an async function. Inside the function, I've required the querystring library because the function will also need to generate query strings for redirecting to Google.\n\nAfter the require line, paste in the following constants, which will be required for authorizing users with Google:\n\n``` javascript\n// https://developers.google.com/youtube/v3/guides/auth/server-side-web-apps#httprest\nconst GOOGLE_OAUTH_ENDPOINT = \"https://accounts.google.com/o/oauth2/v2/auth\";\nconst GOOGLE_TOKEN_ENDPOINT = \"https://oauth2.googleapis.com/token\";\nconst SCOPES = [\n \"https://www.googleapis.com/auth/yt-analytics.readonly\",\n \"https://www.googleapis.com/auth/youtube.readonly\",\n];\n```\n\nAdd the following lines, which will obtain values for the Google credentials client ID and secret, and also obtain the URL for the current webhook call:\n\n``` javascript\n// Following obtained from: https://console.developers.google.com/apis/credentials\n\nconst CLIENT_ID = context.values.get(\"GOOGLE_CLIENT_ID\");\nconst CLIENT_SECRET = context.values.get(\"GOOGLE_CLIENT_SECRET\");\nconst OAUTH2_CALLBACK = context.request.webhookUrl;\n```\n\nOnce this is done, the code should check to see if it's being called via a Google redirect due to an error. This is the case if it's called with an `error` parameter. If that's the case, a good option is to log the error and display it to the user. Add the following code, which does this:\n\n``` javascript\nconst error = payload.query.error;\nif (typeof error !== 'undefined') {\n // Google says there's a problem:\n console.error(\"Error code returned from Google:\", error);\n\n response.setHeader('Content-Type', 'text/plain');\n response.setBody(error);\n return response;\n}\n```\n\nNow to implement Step 1 of the authorization flow illustrated at the start of this post! When the user requests this webhook URL, they won't provide any parameters, whereas when Google redirects to it, the URL will include a `code` parameter. So, by checking if the `code` parameter is absent, we can tell that this is the Step 1 call. Add the following code:\n\n``` javascript\nconst oauthCode = payload.query.code;\n\nif (typeof oauthCode === 'undefined') {\n // No code provided, so let's request one from Google:\n const oauthURL = new URL(GOOGLE_OAUTH_ENDPOINT);\n oauthURL.search = querystring.stringify({\n 'client_id': CLIENT_ID,\n 'redirect_uri': OAUTH2_CALLBACK,\n 'response_type': 'code',\n 'scope': SCOPES.join(' '),\n 'access_type': \"offline\",\n });\n\n response.setStatusCode(302);\n response.setHeader('Location', oauthURL.href);\n} else {\n // This empty else block will be filled in below.\n}\n```\n\nThe code above adds the appropriate parameters to the Google OAuth endpoint described in their OAuth flow documentation, and then redirects the browser to this endpoint, which will display a consent page to the user. 
When Steps 2 and 3 are complete, the browser will be redirected to this webhook (because that's the URL contained in `OAUTH2_CALLBACK`) with an added `code` parameter.\n\nAdd the following code inside the empty `else` block you added above, to handle the case where a `code` parameter is provided:\n\n``` javascript\n// We have a code, so we've redirected successfully from Google's consent page.\n// Let's post to Google, requesting an access:\nlet res = await context.http.post({\n url: GOOGLE_TOKEN_ENDPOINT,\n body: {\n client_id: CLIENT_ID,\n client_secret: CLIENT_SECRET,\n code: oauthCode,\n grant_type: 'authorization_code',\n redirect_uri: OAUTH2_CALLBACK,\n },\n encodeBodyAsJSON: true,\n});\n\nlet tokens = JSON.parse(res.body.text());\nif (typeof tokens.expires_in === \"undefined\") {\n throw new Error(\"Error response from Google: \" + JSON.stringify(tokens))\n}\nif (typeof tokens.refresh_token === \"undefined\") {\n return {\n \"message\": `You appear to have already linked to Google. You may need to revoke your OAuth token (${tokens.access_token}) and delete your auth token document. https://developers.google.com/identity/protocols/oauth2/web-server#tokenrevoke`\n };\n}\n\ntokens._id = \"youtube\";\ntokens.updated = new Date();\ntokens.expires_at = new Date();\ntokens.expires_at.setTime(Date.now() + (tokens.expires_in \\* 1000));\n\nconst tokens_collection = context.services.get(\"mongodb-atlas\").db(\"auth\").collection(\"auth_tokens\");\n\nif (await tokens_collection.findOne({ \\_id: \"youtube\" })) {\n await tokens_collection.updateOne(\n { \\_id: \"youtube\" },\n { '$set': tokens }\n );\n} else {\n await tokens_collection.insertOne(tokens);\n}\nreturn {\"message\": \"ok\"};\n```\n\nThere's quite a lot of code here to implement Step 5, but it's not too complicated. It makes a request to the Google token endpoint, providing the code from the URL, to obtain both an access token and a refresh token for when the access token expires (which it does after an hour). It then checks for errors, modifies the JavaScript object a little to make it suitable for storing in MongoDB, and then it saves it to the `tokens_collection`. You can find all the code for this webhook function on GitHub.\n\n## Authorizing the Realm App\n\nGo to the webhook's \"Settings\" tab, copy the webhook's URL, and paste it into a new browser tab. You should see the following scary warning page! This is because the app has not been checked out by Google, which would be the case if it was fully published. You can ignore it for now\u2014it's safe because it's *your* app. Click \"Continue\" to progress to the consent page.\n\nThe consent page should look something like the screenshot below. 
Click \"Allow\" and you should be presented with a very plain page that says `{\"status\": \"okay\" }`, which means that you've completed all of the authorization steps!\n\nIf you load up the `auth_tokens` collection in MongoDB Atlas, you should see that it contains a single document containing the access and refresh tokens provided by Google.\n\n## Using the Tokens to Make a Call\n\nTo make a test call, create a new HTTP service webhook, and paste in the following code:\n\n``` javascript\nexports = async function(payload, response) {\nconst querystring = require('querystring');\n\n// START OF TEMPORARY BLOCK -----------------------------\n// Get the current token:\nconst tokens_collection =\ncontext.services.get(\"mongodb-atlas\").db(\"auth\").collection(\"auth_tokens\");\nconst tokens = await tokens_collection.findOne({_id: \"youtube\"});\n// If this code is executed one hour after authorization, the token will be invalid:\nconst accessToken = tokens.access_token;\n// END OF TEMPORARY BLOCK -------------------------------\n\n// Get the channels owned by this user:\nconst url = new URL(\"https://www.googleapis.com/youtube/v3/playlists\");\nurl.search = querystring.stringify({\n \"mine\": \"true\",\n \"part\": \"snippet,id\",\n});\n\n// Make an authenticated call:\nconst result = await context.http.get({\n url: url.href,\n headers: {\n 'Authorization': `Bearer ${accessToken}`],\n 'Accept': ['application/json'],\n },\n});\n\nresponse.setHeader('Content-Type', 'text/plain');\nresponse.setBody(result.body.text());\n};\n```\n\nThe summary of this code is that it looks up an access token in the `auth_tokens` collection, and then makes an authenticated request to YouTube's `playlists` endpoint. Authentication is proven by providing the access token as a [bearer token in the 'Authorization' header.\n\nTest out this function by calling the webhook in a browser tab. It should display some JSON, listing details about your YouTube playlists. The problem with this code is that if you run it over an hour after authorizing with YouTube, then the access token will have expired, and you'll get an error message! To account for this, I created a function called `get_token`, which will refresh the access token if it's expired.\n\n## Token Refreshing\n\nThe `get_token` function is a standard MongoDB Realm serverless function, *not* a webhook. Click \"Functions\" on the left side of the page in MongoDB Realm, click \"Create New Function,\" and name your function \"get_token.\" In the function editor, paste in the following code:\n\n``` javascript\nexports = async function(){\n\n const GOOGLE_TOKEN_ENDPOINT = \"https://oauth2.googleapis.com/token\";\n const CLIENT_ID = context.values.get(\"GOOGLE_CLIENT_ID\");\n const CLIENT_SECRET = context.values.get(\"GOOGLE_CLIENT_SECRET\");\n\n const tokens_collection = context.services.get(\"mongodb-atlas\").db(\"auth\").collection(\"auth_tokens\");\n\n // Look up tokens:\n let tokens = await tokens_collection.findOne({_id: \"youtube\"});\n\n if (new Date() >= tokens.expires_at) {\n // access_token has expired. 
Get a new one.\n let res = await context.http.post({\n url: GOOGLE_TOKEN_ENDPOINT,\n body: {\n client_id: CLIENT_ID,\n client_secret: CLIENT_SECRET,\n grant_type: 'refresh_token',\n refresh_token: tokens.refresh_token,\n },\n encodeBodyAsJSON: true,\n });\n\n tokens = JSON.parse(res.body.text());\n tokens.updated = new Date();\n tokens.expires_at = new Date();\n tokens.expires_at.setTime(Date.now() + (tokens.expires_in \\* 1000));\n\n await tokens_collection.updateOne(\n {\n \\_id: \"youtube\"\n },\n {\n $set: {\n access_token: tokens.access_token,\n expires_at: tokens.expires_at,\n expires_in: tokens.expires_in,\n updated: tokens.updated,\n },\n },\n );\n }\n return tokens.access_token\n};\n```\n\nThe start of this function does the same thing as the temporary block in the webhook\u2014it looks up the currently stored access token in MongoDB Atlas. It then checks to see if the token has expired, and if it has, it makes a call to Google with the `refresh_token`, requesting a new access token, which it then uses to update the MongoDB document.\n\nSave this function and then return to your test webhook. You can replace the code between the TEMPORARY BLOCK comments with the following line of code:\n\n``` javascript\n// Get a token (it'll be refreshed if necessary):\nconst accessToken = await context.functions.execute(\"get_token\");\n```\n\nFrom now on, this should be all you need to do to make an authorized request against the Google API\u2014obtain the access token with `get_token` and add it to your HTTP request as a bearer token in the `Authorization` header.\n\n## Conclusion\n\nI hope you found this useful! The OAuth 2 protocol can seem a little overwhelming, and the incompatibility of various client libraries, such as Google's, with MongoDB Realm can make life a bit more difficult, but this post should demonstrate how, with a webhook and a utility function, much of OAuth's complexity can be hidden away in a well designed MongoDB app.\n\n>\n>\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n>\n>\n", "format": "md", "metadata": {"tags": ["Realm", "JavaScript", "Serverless"], "pageDescription": "Authenticate with OAuth2 and MongoDB Realm Functions", "contentType": "Tutorial"}, "title": "OAuth & MongoDB Realm Serverless Functions", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/use-azure-key-vault-mongodb-client-side-field-level-encryption", "action": "created", "body": "# Integrate Azure Key Vault with MongoDB Client-Side Field Level Encryption\n\nWhen implementing MongoDB\u2019s client-side field level encryption\u00a0(CSFLE), you\u2019ll find yourself making an important decision: Where do I store my customer master key? In\u00a0another tutorial, I guided readers through the basics of CSFLE by using a locally-generated and stored master key. While this works for educational and local development purposes, it isn\u2019t suitable for production! 
In this tutorial, we\u2019ll see how to use Azure Key Vault to generate and securely store our master key.\n\n## Prerequisites\n\n* A\u00a0MongoDB Atlas cluster\u00a0running MongoDB 4.2 (or later) OR\u00a0MongoDB 4.2 Enterprise Server\u00a0(or later)\u2014required for automatic encryption\n* MongoDB .NET Driver 2.13.0\u00a0(or later)\n* Mongocryptd\n* An\u00a0Azure Account\u00a0with an active subscription and the same permissions as those found in any of these Azure AD roles (only one is needed):\n * Application administrator\n * Application developer\n * Cloud application administrator\n* An\u00a0Azure AD tenant\u00a0(you can use an existing one, assuming you have appropriate permissions)\n* Azure CLI\n* Cloned sample application\n\n## Quick Jump\n\n**Prepare Azure Infrastructure**\n\n* Register App in Azure Active Directory\n* Create a Client Secret\n* Create an Azure Key Vault\n* Create and Add a Key to your Key Vault\n* Grant Application Permissions to Key Vault\n\n**Configure your Client Application to use Azure Key Vault and CSFLE**\n\n* Integrate Azure Key Vault into Your Client Application\n* The Results - What You Get After Integrating Azure Key Vault with MongoDB CSFLE\n\n- - -\n\n## Register App in Azure Active Directory\n\nIn order to establish a trust relationship between our application and the Microsoft identity platform, we first need to register it.\u00a0\n\n1. Sign in to the\u00a0Azure portal.\n2. If you have access to multiple tenants, in the top menu, use the \u201cDirectory + subscription filter\u201d to select the tenant in which you want to register an application.\n3. In the main search bar, search for and select \u201cAzure Active Directory.\u201d\n4. On the left-hand navigation menu, find the Manage section and select \u201cApp registrations,\u201d then \u201c+ New registration.\u201d\n5. Enter a display name for your application. You can change the display name at any time and multiple app registrations can share the same name. The app registration's automatically generated Application (client) ID, not its display name, uniquely identifies your app within the identity platform.\n6. Specify who can use the application, sometimes called its sign-in audience. For this tutorial, I\u2019ve selected \u201cAccounts in this organizational directory only (Default Directory only - Single tenant).\u201d This only allows users that are in my current tenant access to my application.\n7. Click \u201cRegister.\u201d Once the initial app registration is complete, copy the\u00a0**Directory (tenant) ID**\u00a0and\u00a0**Application (client) ID**\u00a0as we\u2019ll need them later on.\n8. Find the linked application under \u201cManaged application in local directory\u201d and click on it. \n9. Once brought to the \u201cProperties\u201d page, also copy the \u201c**Object ID**\u201d as we\u2019ll need this too.\n\n## Create a Client Secret\n\nOnce your application is registered, we\u2019ll need to create a\u00a0client secret\u00a0for it. This will be required when authenticating to the Key Vault we\u2019ll be creating soon.\n\n1. On the overview of your newly registered application, click on \u201cAdd a certificate or secret\u201d:\n2. Under \u201cClient secrets,\u201d click \u201c+ New client secret.\u201d\n3. Enter a short description for this client secret and leave the default \u201cExpires\u201d setting of 6 months.\n4. Click \u201cAd.\" Once the client secret is created, be sure to copy the secret\u2019s \u201c**Value**\u201d as we\u2019ll need it later. 
It\u2019s also worth mentioning that once you leave this page, the secret value is never displayed again, so be sure to record it at least once!\n\n## Create an Azure Key Vault\n\nNext up, an\u00a0Azure\u00a0Key Vault! We\u2019ll create one so we can securely store our customer master key. We\u2019ll be completing these steps via the Azure CLI, so open up your favorite terminal and follow along:\n\n1. Sign in to the Azure CLI using the\u00a0`az login`\u00a0command. Finish the authentication steps by following the steps displayed in your terminal.\n2. Create a resource group:\u00a0\n\n ``` bash\n az group create --name \"YOUR-RESOURCE-GROUP-NAME\" --location \n ```\n\n3. Create a key vault:\u00a0\n\n ``` bash\n az keyvault create --name \"YOUR-KEYVAULT-NAME\" --resource-group \"YOUR-RESOURCE-GROUP-NAME\" --location \n ```\n\n## Create and Add a Key to Your Key Vault\n\nWith a key vault, we can now create our customer master key! This will be stored, managed, and secured by Azure Key Vault.\n\nCreate a key and add to our key vault:\n\n``` bash\naz keyvault key create --vault-name \"YOUR-KEYVAULT-NAME\" --name \"YOUR-KEY-NAME\" --protection software\n```\n\nThe `--protection` parameter designates the key protection type. For now, we'll use the `software` type. Once the command completes, take note of your key\u2019s \"**name**\" as we\u2018ll need it later!\n\n## Grant Application Permissions to Key Vault\n\nTo enable our client application access to our key vault, some permissions need to be granted:\n\n1. Give your application the\u00a0wrapKey\u00a0and\u00a0unwrapKey\u00a0permissions to the keyvault. (For the `--object-id` parameter, paste in the Object ID of the application we registered earlier. This is the Object ID\u00a0we copied in the last \"Register App in Azure Active Directory\" step.)\n\n ``` bash\n az keyvault set-policy --name \"YOUR-KEYVAULT-NAME\" --key-permissions wrapKey unwrapKey --object-id \n ```\n\n2. Upon success, you\u2019ll receive a JSON object. Find and copy the value for the \u201c**vaultUri**\u201d key. For example, mine is `https://csfle-mdb-demo-vault.vault.azure.net`.\n\n## Integrate Azure Key Vault into Your Client Application\n\nNow that our cloud infrastructure is configured, we can start integrating it into our application. We\u2019ll be referencing the\u00a0sample repo\u00a0from our prerequisites for these steps, but feel free to use the portions you need in an existing application.\n\n1. If you haven\u2019t cloned the repo yet, do so now!\u00a0\n\n ``` shell\n git clone\u00a0https://github.com/adriennetacke/mongodb-csfle-csharp-demo-azure.git\n ```\n\n2. Navigate to the root directory `mongodb-csfle-csharp-demo-azure` and open the `EnvoyMedSys` sample application in Visual Studio.\n3. In the Solution Explorer, find and open the `launchSettings.json` file (`Properties` > `launchSettings.json`).\n4. Here, you\u2019ll see some scaffolding for some variables. Let\u2019s quickly go over what those are:\n * `MDB_ATLAS_URI`: The connection string to your MongoDB Atlas cluster. This enables us to store our data encryption key, encrypted by Azure Key Vault.\n * `AZURE_TENANT_ID`: Identifies the organization of the Azure account.\n * `AZURE_CLIENT_ID`: Identifies the `clientId` to authenticate your registered application.\n * `AZURE_CLIENT_SECRET`: Used to authenticate your registered application.\n * `AZURE_KEY_NAME`: Name of the Customer Master Key stored in Azure Key Vault.\n * `AZURE_KEYVAULT_ENDPOINT`: URL of the Key Vault. 
E.g., `yourVaultName.vault.azure.net`.\n5. Replace all of the placeholders in the `launchSettings.json` file with **your own information**. Each variable corresponds to a value you were asked to copy and keep track of:\n * `MDB_ATLAS_URI`: Your\u00a0**Atlas URI****.**\n * `AZURE_TENANT_ID`:\u00a0**Directory (tenant) ID**.\n * `AZURE_CLIENT_ID`:\u00a0**Application (client) ID.**\n * `AZURE_CLIENT_SECRET`: Secret\u00a0**Value**\u00a0from our client secret.\n * `AZURE_KEY_NAME`: Key\u00a0**Name**.\n * `AZURE_KEYVAULT_ENDPOINT`: Our Key Vault\u2019s\u00a0**vaultUri**.\n6. Save all your files!\n\nBefore we run the application, let\u2019s go over what\u2019s happening: When we run our main program, we set the connection to our Atlas cluster and our key vault\u2019s collection namespace. We then instantiate two helper classes: a `KmsKeyHelper` and an `AutoEncryptHelper`. The `KmsKeyHelper`\u2019s `CreateKeyWithAzureKmsProvider()` method is called to generate our encrypted data encryption key. This is then passed to the `AutoEncryptHelper`\u2019s `EncryptedWriteAndReadAsync()` method to insert a sample document with encrypted fields and properly decrypt it when we need to fetch it. This is all in our `Program.cs` file:\n\n`Program.cs`\n\n``` cs\nusing System;\nusing MongoDB.Driver;\n\nnamespace EnvoyMedSys\n{\n public enum KmsKeyLocation\n {\n Azure,\n }\n\n class Program\n {\n public static void Main(string] args)\n {\n var connectionString = Environment.GetEnvironmentVariable(\"MDB_ATLAS_URI\");\n var keyVaultNamespace = CollectionNamespace.FromFullName(\"encryption.__keyVaultTemp\");\n\n var kmsKeyHelper = new KmsKeyHelper(\n connectionString: connectionString,\n keyVaultNamespace: keyVaultNamespace);\n var autoEncryptHelper = new AutoEncryptHelper(\n connectionString: connectionString,\n keyVaultNamespace: keyVaultNamespace);\n\n var kmsKeyIdBase64 = kmsKeyHelper.CreateKeyWithAzureKmsProvider().GetAwaiter().GetResult();\n\n autoEncryptHelper.EncryptedWriteAndReadAsync(kmsKeyIdBase64, KmsKeyLocation.Azure).GetAwaiter().GetResult();\n\n Console.ReadKey();\n }\n }\n}\n```\n\nTaking a look at the `KmsKeyHelper` class, there are a few important methods: the `CreateKeyWithAzureKmsProvider()` and `GetClientEncryption()` methods. 
I\u2019ve opted to include comments in the code to make it easier to follow along:\n\n`KmsKeyHelper.cs` /\u00a0`CreateKeyWithAzureKmsProvider()`\n\n``` cs\npublic async Task CreateKeyWithAzureKmsProvider()\n{\n var kmsProviders = new Dictionary>();\n\n // Pull Azure Key Vault settings from environment variables\n var azureTenantId = Environment.GetEnvironmentVariable(\"AZURE_TENANT_ID\");\n var azureClientId = Environment.GetEnvironmentVariable(\"AZURE_CLIENT_ID\");\n var azureClientSecret = Environment.GetEnvironmentVariable(\"AZURE_CLIENT_SECRET\");\n var azureIdentityPlatformEndpoint = Environment.GetEnvironmentVariable(\"AZURE_IDENTIFY_PLATFORM_ENPDOINT\"); // Optional, only needed if user is using a non-commercial Azure instance\n\n // Configure our registered application settings\n var azureKmsOptions = new Dictionary\n {\n { \"tenantId\", azureTenantId },\n { \"clientId\", azureClientId },\n { \"clientSecret\", azureClientSecret },\n };\n\n if (azureIdentityPlatformEndpoint != null)\n {\n azureKmsOptions.Add(\"identityPlatformEndpoint\", azureIdentityPlatformEndpoint);\n }\n\n // Specify remote key location; in this case, Azure\n kmsProviders.Add(\"azure\", azureKmsOptions);\n\n // Constructs our client encryption settings which\n // specify which key vault client, key vault namespace,\n // and KMS providers to use. \n var clientEncryption = GetClientEncryption(kmsProviders);\n\n // Set KMS Provider Settings\n // Client uses these settings to discover the master key\n var azureKeyName = Environment.GetEnvironmentVariable(\"AZURE_KEY_NAME\");\n var azureKeyVaultEndpoint = Environment.GetEnvironmentVariable(\"AZURE_KEYVAULT_ENDPOINT\"); // typically .vault.azure.net\n var azureKeyVersion = Environment.GetEnvironmentVariable(\"AZURE_KEY_VERSION\"); // Optional\n var dataKeyOptions = new DataKeyOptions(\n masterKey: new BsonDocument\n {\n { \"keyName\", azureKeyName },\n { \"keyVaultEndpoint\", azureKeyVaultEndpoint },\n { \"keyVersion\", () => azureKeyVersion, azureKeyVersion != null }\n });\n\n // Create Data Encryption Key\n var dataKeyId = clientEncryption.CreateDataKey(\"azure\", dataKeyOptions, CancellationToken.None);\n Console.WriteLine($\"Azure DataKeyId [UUID]: {dataKeyId}\");\n\n var dataKeyIdBase64 = Convert.ToBase64String(GuidConverter.ToBytes(dataKeyId, GuidRepresentation.Standard));\n Console.WriteLine($\"Azure DataKeyId [base64]: {dataKeyIdBase64}\");\n\n // Optional validation; checks that key was created successfully\n await ValidateKeyAsync(dataKeyId);\n\n return dataKeyIdBase64;\n}\n```\n\n`KmsKeyHelper.cs` / `GetClientEncryption()`\n\n``` cs\nprivate ClientEncryption GetClientEncryption(\nDictionary> kmsProviders)\n{\n // Construct a MongoClient using our Atlas connection string\n var keyVaultClient = new MongoClient(_mdbConnectionString);\n\n // Set MongoClient, key vault namespace, and Azure as KMS provider\n var clientEncryptionOptions = new ClientEncryptionOptions(\n keyVaultClient: keyVaultClient,\n keyVaultNamespace: _keyVaultNamespace,\n kmsProviders: kmsProviders);\n\n return new ClientEncryption(clientEncryptionOptions);\n}\n```\n\nWith our Azure Key Vault connected and data encryption key encrypted, we\u2019re ready to insert some data into our Atlas cluster! This is where the `AutoEncryptHelper` class comes in. 
The important method to note here is the `EncryptedReadAndWrite()` method:\n\n`AutoEncryptHelper.cs` / `EncryptedReadAndWrite()`\n\n``` cs\npublic async Task EncryptedWriteAndReadAsync(string keyIdBase64, KmsKeyLocation kmsKeyLocation)\n{\n // Construct a JSON Schema\n var schema = JsonSchemaCreator.CreateJsonSchema(keyIdBase64);\n\n // Construct an auto-encrypting client\n var autoEncryptingClient = CreateAutoEncryptingClient(\n kmsKeyLocation,\n _keyVaultNamespace,\n schema);\n\n // Set our working database and collection to medicalRecords.patientData\n var collection = autoEncryptingClient\n .GetDatabase(_medicalRecordsNamespace.DatabaseNamespace.DatabaseName)\n .GetCollection(_medicalRecordsNamespace.CollectionName);\n\n var ssnQuery = Builders.Filter.Eq(\"ssn\", __sampleSsnValue);\n\n // Upsert (update if found, otherwise create it) a document into the collection\n var medicalRecordUpdateResult = await collection\n .UpdateOneAsync(ssnQuery, new BsonDocument(\"$set\", __sampleDocFields), new UpdateOptions() { IsUpsert = true });\n\n if (!medicalRecordUpdateResult.UpsertedId.IsBsonNull)\n {\n Console.WriteLine(\"Successfully upserted the sample document!\");\n }\n\n // Query by SSN field with auto-encrypting client\n var result = await collection.Find(ssnQuery).SingleAsync();\n\n // Proper result in console should show decrypted, human-readable document\n Console.WriteLine($\"Encrypted client query by the SSN (deterministically-encrypted) field:\\n {result}\\n\");\n}\n```\n\nNow that we know what\u2019s going on, run your application!\n\n## The Results: What You Get After Integrating Azure Key Vault with MongoDB CSFLE\n\nIf all goes well, your console will print out two `DataKeyIds` (UUID and base64) and a document that resembles the following:\u00a0\n\n`Sample Result Document (using my information)`\n\n``` bash\n{\n _id:UUID('ab382f3e-bc79-4086-8418-836a877efff3'),\nkeyMaterial:Binary('tvehP03XhUsztKr69lxlaGjiPhsNPjy6xLhNOLTpe4pYMeGjMIwvvZkzrwLRCHdaB3vqi9KKe6/P5xvjwlVHacQ1z9oFIwFbp9nk...', 0),\n creationDate:2021-08-24T05:01:34.369+00:00,\n updateDate:2021-08-24T05:01:34.369+00:00,\n status:0,\n masterKey:Object,\n provider:\"azure\",\n keyVaultEndpoint:\"csfle-mdb-demo-vault.vault.azure.net\",\n keyName:\"MainKey\"\n}\n```\n\nHere\u2019s what my console output looks like, for reference:\n\n![Screenshot of console output showing two Azure DatakeyIds and a non-formatted document\n\nSeeing this is great news! A lot of things have just happened, and all of them are good:\n\n* Our application properly authenticated to our Azure Key Vault.\n* A properly generated data encryption key was created by our client application.\n* The data encryption key was properly encrypted by our customer master key that\u2019s securely stored in Azure Key Vault.\n* The encrypted data encryption key was returned to our application and stored in our MongoDB Atlas cluster.\n\nHere\u2019s the same process in a workflow:\n\nAfter a few more moments, and upon success, you\u2019ll see a \u201cSuccessfully upserted the sample document!\u201d message, followed by the properly decrypted results of a test query. Again, here\u2019s my console output for reference:\n\nThis means our sample document was properly encrypted, inserted into our `patientData` collection, queried with our auto-encrypting client by SSN, and had all relevant fields correctly decrypted before returning them to our console. How neat!\u00a0\n\nAnd just because I\u2019m a little paranoid, we can double-check that our data has actually been encrypted. 
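One way to do that without leaving your editor is to query the same collection with a plain `MongoClient` that has no auto-encryption options configured at all. Here's a minimal sketch of that idea (this helper isn't part of the sample repo, and the `medicalRecords.patientData` namespace is my assumption based on the collection name used above, so adjust it to whatever your `AutoEncryptHelper` actually uses):

``` cs
// Sketch only; not part of the sample repo.
// Queries with a *plain* MongoClient (no AutoEncryptionOptions), so nothing gets decrypted.
using System;
using MongoDB.Bson;
using MongoDB.Driver;

public static class EncryptionSanityCheck
{
    public static void PrintRawDocument()
    {
        var plainClient = new MongoClient(Environment.GetEnvironmentVariable("MDB_ATLAS_URI"));

        // Assumed namespace; change it to match your application's settings.
        var collection = plainClient
            .GetDatabase("medicalRecords")
            .GetCollection<BsonDocument>("patientData");

        // Encrypted fields come back as opaque Binary values rather than plaintext.
        var rawDocument = collection.Find(FilterDefinition<BsonDocument>.Empty).FirstOrDefault();
        Console.WriteLine(rawDocument);
    }
}
```

Because this client never receives the data encryption key, it can only ever see ciphertext.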
If you log into your Atlas cluster and navigate to the `patientData` collection, you\u2019ll see that our documents\u2019 sensitive fields are all illegible:\n\n## Let's Summarize\n\nThat wasn\u2019t so bad, right? Let's see what we've accomplished! This tutorial walked you through:\n\n* Registering an App in Azure Active Directory.\n* Creating a Client Secret.\n* Creating an Azure Key Vault.\n* Creating and Adding a Key to your Key Vault.\n* Granting Application Permissions to Key Vault.\n* Integrating Azure Key Vault into Your Client Application.\n* The Results: What You Get After Integrating Azure Key Vault with MongoDB CSFLE.\n\nBy using a remote key management system like Azure Key Vault, you gain access to many benefits over using a local filesystem. The most important of these is the secure storage of the key, reduced risk of access permission issues, and easier portability!\n\nFor more information, check out this helpful list of resources I used while preparing this tutorial:\n\n* az keyvault command list\n* Registering an application with the Microsoft Identity Platform\n* MongoDB CSFLE and Azure Key Vault\n\nAnd if you have any questions or need some additional help, be sure to check us out on the MongoDB Community Forums\u00a0and start a topic!\n\nA whole community of MongoDB engineers (including the DevRel team) and fellow developers are sure to help!", "format": "md", "metadata": {"tags": ["C#", "MongoDB", "Azure"], "pageDescription": "Learn how to use Azure Key Vault as your remote key management system with MongoDB's client-side field level encryption, step-by-step.", "contentType": "Tutorial"}, "title": "Integrate Azure Key Vault with MongoDB Client-Side Field Level Encryption", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/csharp/client-side-field-level-encryption-mongodb-csharp", "action": "created", "body": "# How to Use MongoDB Client-Side Field Level Encryption (CSFLE) with C#\n\nClient-side field level encryption (CSFLE) provides an additional layer of security to your most sensitive data. Using a supported MongoDB driver, CSFLE encrypts certain fields that you specify, ensuring they are never transmitted unencrypted, nor seen unencrypted by the MongoDB server.\n\nThis may be the only time I use a Transformers GIF. Encryption GIFs are hard to find!\n\nThis also means that it's nearly impossible to obtain sensitive information from the database server. Without access to a specific key, data cannot be decrypted and exposed, rendering the intercepting data from the client fruitless. Reading data directly from disk, even with DBA or root credentials, will also be impossible as the data is stored in an encrypted state.\n\nKey applications that showcase the power of client-side field level encryption are those in the medical field. If you quickly think back to the last time you visited a clinic, you already have an effective use case for an application that requires a mix of encrypted and non-encrypted fields. When you check into a clinic, the person may need to search for you by name or insurance provider. These are common data fields that are usually left non-encrypted. Then, there are more obvious pieces of information that require encryption: things like a Social Security number, medical records, or your insurance policy number. 
For these data fields, encryption is necessary.\n\nThis tutorial will walk you through setting up a similar medical system that uses automatic client-side field level encryption in the MongoDB .NET Driver (for explicit, meaning manual, client-side field level encryption, check out these docs).\n\nIn it, you'll:\n\n**Prepare a .NET Core console application**\n\n* Create a .NET Core Console Application\n* Install CSFLE Dependencies\n\n**Generate secure, random keys needed for CSFLE**\n\n* Create a Local Master Key\n* Create a Data Encryption Key\n\n**Configure CSFLE on the MongoClient**\n\n* Specify Encrypted Fields Using a JSON Schema\n* Create the CSFLE-Enabled MongoDB Client\n\n**See CSFLE in action**\n\n* Perform Encrypted Read/Write Operations\n* Bonus: What's the Difference with a Non-Encrypted Client?\n\n> \ud83d\udca1\ufe0f This can be an intimidating tutorial, so don't hesitate to take as many breaks as you need; in fact, complete the steps over a few days! I've tried my best to ensure each step completed acts as a natural save point for the duration of the entire tutorial. :)\n\nLet's do this step by step!\n\n## Prerequisites\n\n* A MongoDB Atlas cluster running MongoDB 4.2 (or later) OR MongoDB 4.2 Enterprise Server (or later)\u2014required for automatic encryption\n* MongoDB .NET Driver 2.12.0-beta (or later)\n* Mongocryptd\n* File system permissions (to start the mongocryptd process, if running locally)\n\n> \ud83d\udcbb The code for this tutorial is available in this repo.\n\n## Create a .NET Core Console Application\n\nLet's start by scaffolding our console application. Open Visual Studio (I'm using Visual Studio 2019 Community Edition) and create a new project. When selecting a template, choose the \"Console App (.NET Core)\" option and follow the prompts to name your project.\n\n \n Visual Studio 2019 create a new project prompt; Console App (.NET Core) option is highlighted.\n\n## Install CSFLE Dependencies\n\nOnce the project template loads, we'll need to install one of our dependencies. In your Package Manager Console, use the following command to install the MongoDB Driver:\n\n```bash\nInstall-Package MongoDB.Driver -Version 2.12.0-beta1\n```\n\n> \ud83d\udca1\ufe0f If your Package Manager Console is not visible in your IDE, you can get to it via *View > Other Windows > Package Manager Console* in the File Menu.\n\nThe next dependency you'll need to install is mongocryptd, which is an application that is provided as part of MongoDB Enterprise and is needed for automatic field level encryption. Follow the instructions to install mongocryptd on your machine. In a production environment, it's recommended to run mongocryptd as a service at startup on your VM or container.\n\nNow that our base project and dependencies are set, we can move onto creating and configuring our different encryption keys.\n\nMongoDB client-side field level encryption uses an encryption strategy called envelope encryption. This strategy uses two different kinds of keys.\n\nThe first key is called a **data encryption key**, which is used to encrypt/decrypt the data you'll be storing in MongoDB. The other key is called a **master key** and is used to encrypt the data encryption key. This is the top-level plaintext key that will always be required and is the key we are going to generate in the next step.\n\n> \ud83d\udea8\ufe0f Before we proceed, it's important to note that this tutorial will\n> demonstrate the generation of a master key file stored as plaintext in\n> the root of our application. 
This is okay for **development** and\n> educational purposes, such as this tutorial. However, this should\n> **NOT** be done in a **production** environment!\n> \n> Why? In this scenario, anyone that obtains a copy of the disk or a VM\n> snapshot of the app server hosting our application would also have\n> access to this key file, making it possible to access the application's\n> data.\n> \n> Instead, you should configure a master key in a Key Management\n> System\n> such as Azure Key Vault or AWS KMS for production.\n> \n> Keep this in mind and watch for another post that shows how to implement\n> CSFLE with Azure Key Vault!\n\n## Create a Local Master Key\n\nIn this step, we generate a 96-byte, locally-managed master key. Then, we save it to a local file called `master-key.txt`. We'll be doing a few more things with keys, so create a separate class called `KmsKeyHelper.cs`. Then, add the following code to it:\n\n``` csp\n// KmsKeyHelper.cs\n\nusing System;\nusing System.IO;\n\nnamespace EnvoyMedSys\n{\n public class KmsKeyHelper\n {\n private readonly static string __localMasterKeyPath = \"../../../master-key.txt\";\n\n public void GenerateLocalMasterKey()\n {\n using (var randomNumberGenerator = System.Security.Cryptography.RandomNumberGenerator.Create())\n {\n var bytes = new byte96];\n randomNumberGenerator.GetBytes(bytes);\n var localMasterKeyBase64 = Convert.ToBase64String(bytes);\n Console.WriteLine(localMasterKeyBase64);\n File.WriteAllText(__localMasterKeyPath, localMasterKeyBase64);\n }\n }\n }\n}\n```\n\nSo, what's happening here? Let's break it down, line by line:\n\nFirst, we declare and set a private variable called `__localMasterKeyPath`. This holds the path to where we save our master key.\n\nNext, we create a `GenerateLocalMasterKey()` method. In this method, we use .NET's [Cryptography services to create an instance of a `RandomNumberGenerator`. Using this `RandomNumberGenerator`, we generate a cryptographically strong, 96-byte key. After converting it to a Base64 representation, we save the key to the `master-key.txt` file.\n\nGreat! We now have a way to generate a local master key. Let's modify the main program to use it. In the `Program.cs` file, add the following code:\n\n``` csp\n// Program.cs\n\nusing System;\nusing System.IO;\n\nnamespace EnvoyMedSys\n{\n class Program\n {\n public static void Main()\n {\n var kmsKeyHelper = new KmsKeyHelper();\n\n // Ensure GenerateLocalMasterKey() only runs once!\n if (!File.Exists(\"../../../master-key.txt\"))\n {\n kmsKeyHelper.GenerateLocalMasterKey();\n }\n\n Console.ReadKey();\n }\n }\n}\n```\n\nIn the `Main` method, we create an instance of our `KmsKeyHelper`, then call our `GenerateLocalMasterKey()` method. Pretty straightforward!\n\nSave all files, then run your program. If all is successful, you'll see a console pop up and the Base64 representation of your newly generated master key printed in the console. You'll also see a new `master-key.txt` file appear in your solution explorer.\n\nNow that we have a master key, we can move onto creating a data encryption key.\n\n## Create a Data Encryption Key\n\nThe next key we need to generate is a data encryption key. This is the key the MongoDB driver stores in a key vault collection, and it's used for automatic encryption and decryption.\n\nAutomatic encryption requires MongoDB Enterprise 4.2 or a MongoDB 4.2 Atlas cluster. However, automatic *decryption* is supported for all users. 
See how to configure automatic decryption without automatic encryption.\n\nLet's add a few more lines of code to the `Program.cs` file:\n\n``` csp\nusing System;\nusing System.IO;\nusing MongoDB.Driver;\n\nnamespace EnvoyMedSys\n{\n class Program\n {\n public static void Main()\n {\n var connectionString = Environment.GetEnvironmentVariable(\"MDB_URI\");\n var keyVaultNamespace = CollectionNamespace.FromFullName(\"encryption.__keyVault\");\n\n var kmsKeyHelper = new KmsKeyHelper(\n connectionString: connectionString,\n keyVaultNamespace: keyVaultNamespace);\n\n string kmsKeyIdBase64;\n\n // Ensure GenerateLocalMasterKey() only runs once!\n if (!File.Exists(\"../../../master-key.txt\"))\n {\n kmsKeyHelper.GenerateLocalMasterKey();\n }\n\n kmsKeyIdBase64 = kmsKeyHelper.CreateKeyWithLocalKmsProvider();\n\n Console.ReadKey();\n }\n }\n}\n```\n\nSo, what's changed? First, we added an additional import (`MongoDB.Driver`). Next, we declared a `connectionString` and a `keyVaultNamespace` variable.\n\nFor the key vault namespace, MongoDB will automatically create the database `encryption` and collection `__keyVault` if it does not currently exist. Both the database and collection names were purely my preference. You can choose to name them something else if you'd like!\n\nNext, we modified the `KmsKeyHelper` instantiation to accept two parameters: the connection string and key vault namespace we previously declared. Don't worry, we'll be changing our `KmsKeyHelper.cs` file to match this soon.\n\nFinally, we declare a `kmsKeyIdBase64` variable and set it to a new method we'll create soon: `CreateKeyWithLocalKmsProvider();`. This will hold our data encryption key.\n\n### Securely Setting the MongoDB connection\n\nIn our code, we set our MongoDB URI by pulling from environment variables. This is far safer than pasting a connection string directly into our code and is scalable in a variety of automated deployment scenarios.\n\nFor our purposes, we'll create a `launchSettings.json` file.\n\n> \ud83d\udca1\ufe0f Don't commit the `launchSettings.json` file to a public repo! In\n> fact, add it to your `.gitignore` file now, if you have one or plan to\n> share this application. Otherwise, you'll expose your MongoDB URI to the\n> world!\n\nRight-click on your project and select \"Properties\" in the context menu.\n\nThe project properties will open to the \"Debug\" section. In the \"Environment variables:\" area, add a variable called `MDB_URI`, followed by the connection URI:\n\nAdding an environment variable to the project settings in Visual Studio 2019.\n\nWhat value do you set to your `MDB_URI` environment variable?\n\n* MongoDB Atlas: If using a MongoDB Atlas cluster, paste in your Atlas URI.\n* Local: If running a local MongoDB instance and haven't changed any default settings, you can use the default connection string: `mongodb://localhost:27017`.\n\nOnce your `MDB_URI` is added, save the project properties. You'll see that a `launchSettings.json` file will be automatically generated for you! Now, any `Environment.GetEnvironmentVariable()` calls will pull from this file.\n\nWith these changes, we now have to modify and add a few more methods to the `KmsKeyHelper` class. 
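(One quick aside before we get back to `KmsKeyHelper`: if you're curious what that auto-generated file contains, a typical `launchSettings.json` for this project looks something like the sketch below. The profile name comes from your project, and the connection string value is just a placeholder.)

``` json
{
  "profiles": {
    "EnvoyMedSys": {
      "commandName": "Project",
      "environmentVariables": {
        "MDB_URI": "mongodb+srv://<username>:<password>@<your-cluster>.mongodb.net"
      }
    }
  }
}
```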
Let's do that now.\n\nFirst, add these additional imports:\n\n``` csp\n// KmsKeyHelper.cs\n\nusing System.Collections.Generic;\nusing System.Threading;\nusing MongoDB.Bson;\nusing MongoDB.Driver;\nusing MongoDB.Driver.Encryption;\n```\n\nNext, declare two private variables and create a constructor that accepts both a connection string and key vault namespace. We'll need this information to create our data encryption key; this also makes it easier to extend and integrate with a remote KMS later on.\n\n``` csp\n// KmsKeyhelper.cs\n\nprivate readonly string _mdbConnectionString;\nprivate readonly CollectionNamespace _keyVaultNamespace;\n\npublic KmsKeyHelper(\n string connectionString,\n CollectionNamespace keyVaultNamespace)\n{\n _mdbConnectionString = connectionString;\n _keyVaultNamespace = keyVaultNamespace;\n}\n```\n\nAfter the GenerateLocalMasterKey() method, add the following new methods. Don't worry, we'll go over each one:\n\n``` csp\n// KmsKeyHelper.cs\n\npublic string CreateKeyWithLocalKmsProvider()\n{\n // Read Master Key from file & convert\n string localMasterKeyBase64 = File.ReadAllText(__localMasterKeyPath);\n var localMasterKeyBytes = Convert.FromBase64String(localMasterKeyBase64);\n\n // Set KMS Provider Settings\n // Client uses these settings to discover the master key\n var kmsProviders = new Dictionary>();\n var localOptions = new Dictionary\n {\n { \"key\", localMasterKeyBytes }\n };\n kmsProviders.Add(\"local\", localOptions);\n\n // Create Data Encryption Key\n var clientEncryption = GetClientEncryption(kmsProviders);\n var dataKeyid = clientEncryption.CreateDataKey(\"local\", new DataKeyOptions(), CancellationToken.None);\n clientEncryption.Dispose();\n Console.WriteLine($\"Local DataKeyId UUID]: {dataKeyid}\");\n\n var dataKeyIdBase64 = Convert.ToBase64String(GuidConverter.ToBytes(dataKeyid, GuidRepresentation.Standard));\n Console.WriteLine($\"Local DataKeyId [base64]: {dataKeyIdBase64}\");\n\n // Optional validation; checks that key was created successfully\n ValidateKey(dataKeyid);\n return dataKeyIdBase64;\n}\n```\n\nThis method is the one we call from the main program. It's here that we generate our data encryption key. Lines 6-7 read the local master key from our `master-key.txt` file and convert it to a byte array.\n\nLines 11-16 set the KMS provider settings the client needs in order to discover the master key. As you can see, we add the local provider and the matching local master key we've just retrieved.\n\nWith these KMS provider settings, we construct additional client encryption settings. We do this in a separate method called `GetClientEncryption()`. Once created, we finally generate an encrypted key.\n\nAs an extra measure, we call a third new method `ValidateKey()`, just to make sure the data encryption key was created. 
After these steps, and if successful, the `CreateKeyWithLocalKmsProvider()` method returns our data key id encoded in Base64 format.\n\nAfter the CreateKeyWithLocalKmsProvider() method, add the following method:\n\n``` csp\n// KmsKeyHelper.cs\n\nprivate ClientEncryption GetClientEncryption(\n Dictionary> kmsProviders)\n{\n var keyVaultClient = new MongoClient(_mdbConnectionString);\n var clientEncryptionOptions = new ClientEncryptionOptions(\n keyVaultClient: keyVaultClient,\n keyVaultNamespace: _keyVaultNamespace,\n kmsProviders: kmsProviders);\n\n return new ClientEncryption(clientEncryptionOptions);\n}\n```\n\nWithin the `CreateKeyWithLocalKmsProvider()` method, we call `GetClientEncryption()` (the method we just added) to construct our client encryption settings. These include which key vault client, key vault namespace, and KMS providers to use.\n\nIn this method, we construct a MongoClient using the connection string, then set it as a key vault client. We also use the key vault namespace that was passed in and the local KMS providers we previously constructed. These client encryption options are then returned.\n\nLast but not least, after GetClientEncryption(), add the final method:\n\n``` csp\n// KmsKeyHelper.cs\n\nprivate void ValidateKey(Guid dataKeyId)\n{\n var client = new MongoClient(_mdbConnectionString);\n var collection = client\n .GetDatabase(_keyVaultNamespace.DatabaseNamespace.DatabaseName)\n #pragma warning disable CS0618 // Type or member is obsolete\n .GetCollection(_keyVaultNamespace.CollectionName, new MongoCollectionSettings { GuidRepresentation = GuidRepresentation.Standard });\n #pragma warning restore CS0618 // Type or member is obsolete\n\n var query = Builders.Filter.Eq(\"_id\", new BsonBinaryData(dataKeyId, GuidRepresentation.Standard));\n var keyDocument = collection\n .Find(query)\n .Single();\n\n Console.WriteLine(keyDocument);\n}\n```\n\nThough optional, this method conveniently checks that the data encryption key was created correctly. It does this by constructing a MongoClient using the specified connection string, then queries the database for the data encryption key. If it was successfully created, the data encryption key would have been inserted as a document into your replica set and will be retrieved in the query.\n\nWith these changes, we're ready to generate our data encryption key. Make sure to save all files, then run your program. If all goes well, your console will print out two DataKeyIds (UUID and base64) as well as a document that resembles the following:\n\n``` json\n{\n \"_id\" : CSUUID(\"aae4f3b4-91b6-4cef-8867-3113a6dfb27b\"),\n \"keyMaterial\" : Binary(0, \"rcfTQLRxF1mg98/Jr7iFwXWshvAVIQY6JCswrW+4bSqvLwa8bQrc65w7+3P3k+TqFS+1Ce6FW4Epf5o/eqDyT//I73IRc+yPUoZew7TB1pyIKmxL6ABPXJDkUhvGMiwwkRABzZcU9NNpFfH+HhIXjs324FuLzylIhAmJA/gvXcuz6QSD2vFpSVTRBpNu1sq0C9eZBSBaOxxotMZAcRuqMA==\"),\n \"creationDate\" : ISODate(\"2020-11-08T17:58:36.372Z\"),\n \"updateDate\" : ISODate(\"2020-11-08T17:58:36.372Z\"),\n \"status\" : 0,\n \"masterKey\" : {\n \"provider\" : \"local\"\n }\n}\n```\n\nFor reference, here's what my console output looks like:\n\nConsole output showing two data key ids and a data object; these are successful signs of a properly generated data encryption key.\n\nIf you want to be extra sure, you can also check your cluster to see that your data encryption key is stored as a document in the newly created encryption database and \\_\\_keyVault collection. 
Since I'm connecting with my Atlas cluster, here's what it looks like there:\n\nSaved data encryption key in MongoDB Atlas\n\nSweet! Now that we have generated a data encryption key, which has been encrypted itself with our local master key, the next step is to specify which fields in our application should be encrypted.\n\n## Specify Encrypted Fields Using a JSON Schema\n\nIn order for automatic client-side encryption and decryption to work, a JSON schema needs to be defined that specifies which fields to encrypt, which encryption algorithms to use, and the BSON Type of each field.\n\nUsing our medical application as an example, let's plan on encrypting the following fields:\n\n##### Fields to encrypt\n\n| Field name | Encryption algorithms | BSON Type |\n| ---------- | --------------------- | --------- |\n| SSN (Social Security Number) | [Deterministic | `Int` |\n| Blood Type | Random | `String` |\n| Medical Records | Random | `Array` |\n| Insurance: Policy Number | Deterministic | `Int` (embedded inside insurance object) |\n\nTo make this a bit easier, and to separate this functionality from the rest of the application, create another class named `JsonSchemaCreator.cs`. In it, add the following code:\n\n``` csp\n// JsonSchemaCreator.cs\n\nusing MongoDB.Bson;\nusing System;\n\nnamespace EnvoyMedSys\n{\n public static class JsonSchemaCreator\n {\n private static readonly string DETERMINISTIC_ENCRYPTION_TYPE = \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\";\n private static readonly string RANDOM_ENCRYPTION_TYPE = \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\";\n\n private static BsonDocument CreateEncryptMetadata(string keyIdBase64)\n {\n var keyId = new BsonBinaryData(Convert.FromBase64String(keyIdBase64), BsonBinarySubType.UuidStandard);\n return new BsonDocument(\"keyId\", new BsonArray(new] { keyId }));\n }\n\n private static BsonDocument CreateEncryptedField(string bsonType, bool isDeterministic)\n {\n return new BsonDocument\n {\n {\n \"encrypt\",\n new BsonDocument\n {\n { \"bsonType\", bsonType },\n { \"algorithm\", isDeterministic ? DETERMINISTIC_ENCRYPTION_TYPE : RANDOM_ENCRYPTION_TYPE}\n }\n }\n };\n }\n\n public static BsonDocument CreateJsonSchema(string keyId)\n {\n return new BsonDocument\n {\n { \"bsonType\", \"object\" },\n { \"encryptMetadata\", CreateEncryptMetadata(keyId) },\n {\n \"properties\",\n new BsonDocument\n {\n { \"ssn\", CreateEncryptedField(\"int\", true) },\n { \"bloodType\", CreateEncryptedField(\"string\", false) },\n { \"medicalRecords\", CreateEncryptedField(\"array\", false) },\n {\n \"insurance\",\n new BsonDocument\n {\n { \"bsonType\", \"object\" },\n {\n \"properties\",\n new BsonDocument\n {\n { \"policyNumber\", CreateEncryptedField(\"int\", true) }\n }\n }\n }\n }\n }\n }\n };\n }\n }\n}\n```\n\nAs before, let's step through each line:\n\nFirst, we create two static variables to hold our encryption types. We use `Deterministic` encryption for fields that are queryable and have high cardinality. We use `Random` encryption for fields we don't plan to query, have low cardinality, or are array fields.\n\nNext, we create a `CreateEncryptMetadata()` helper method. This will return a `BsonDocument` that contains our converted data key. We'll use this key in the `CreateJsonSchema()` method.\n\nLines 19-32 make up another helper method called `CreateEncryptedField()`. This generates the proper `BsonDocument` needed to define our encrypted fields. 
It will output a `BsonDocument` that resembles the following:\n\n``` json\n\"ssn\": {\n \"encrypt\": {\n \"bsonType\": \"int\",\n \"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\"\n }\n}\n```\n\nFinally, the `CreateJsonSchema()` method. Here, we generate the full schema our application will use to know which fields to encrypt and decrypt. This method also returns a `BsonDocument`.\n\nA few things to note about this schema:\n\nPlacing the `encryptMetadata` key at the root of our schema allows us to encrypt all fields with a single data key. It's here you see the call to our `CreateEncryptMetadata()` helper method.\n\nWithin the `properties` key go all the fields we wish to encrypt. So, for our `ssn`, `bloodType`, `medicalRecords`, and `insurance.policyNumber` fields, we generate the respective `BsonDocument` specifications they need using our `CreateEncryptedField()` helper method.\n\nWith our encrypted fields defined and the necessary encryption keys generated, we can now move onto enabling client-side field level encryption in our MongoDB client!\n\n> \u2615\ufe0f Don't forget to take a break! This is a lot of information to take\n> in, so don't rush. Be sure to save all your files, then grab a coffee,\n> stretch, and step away from the computer. This tutorial will be here\n> waiting when you're ready. :)\n\n## Create the CSFLE-Enabled MongoDB Client\n\nA CSFLE-enabled `MongoClient` is not that much different from a standard client. To create an auto-encrypting client, we instantiate it with some additional auto-encryption options.\n\nAs before, let's create a separate class to hold this functionality. Create a file called `AutoEncryptHelper.cs` and add the following code (note that since this is a bit longer than the other code snippets, I've opted to add inline comments to explain what's happening rather than waiting until after the code block):\n\n``` csp\n// AutoEncryptHelper.cs\n\nusing System;\nusing System.Collections.Generic;\nusing System.IO;\nusing MongoDB.Bson;\nusing MongoDB.Driver;\nusing MongoDB.Driver.Encryption;\n\nnamespace EnvoyMedSys\n{\n public class AutoEncryptHelper\n {\n private static readonly string __localMasterKeyPath = \"../../../master-key.txt\";\n\n // Most of what follows are sample fields and a sample medical record we'll be using soon.\n private static readonly string __sampleNameValue = \"Takeshi Kovacs\";\n private static readonly int __sampleSsnValue = 213238414;\n\n private static readonly BsonDocument __sampleDocFields =\n new BsonDocument\n {\n { \"name\", __sampleNameValue },\n { \"ssn\", __sampleSsnValue },\n { \"bloodType\", \"AB-\" },\n {\n \"medicalRecords\",\n new BsonArray(new []\n {\n new BsonDocument(\"weight\", 180),\n new BsonDocument(\"bloodPressure\", \"120/80\")\n })\n },\n {\n \"insurance\",\n new BsonDocument\n {\n { \"policyNumber\", 211241 },\n { \"provider\", \"EnvoyHealth\" }\n }\n }\n };\n\n // Scaffolding of some private variables we'll need.\n private readonly string _connectionString;\n private readonly CollectionNamespace _keyVaultNamespace;\n private readonly CollectionNamespace _medicalRecordsNamespace;\n\n // Constructor that will allow us to specify our auto-encrypting\n // client settings. 
This also makes it a bit easier to extend and\n // use with a remote KMS provider later on.\n public AutoEncryptHelper(string connectionString, CollectionNamespace keyVaultNamespace)\n {\n _connectionString = connectionString;\n _keyVaultNamespace = keyVaultNamespace;\n _medicalRecordsNamespace = CollectionNamespace.FromFullName(\"medicalRecords.patients\");\n }\n\n // The star of the show. Accepts a key location,\n // a key vault namespace, and a schema; all needed\n // to construct our CSFLE-enabled MongoClient.\n private IMongoClient CreateAutoEncryptingClient(\n KmsKeyLocation kmsKeyLocation,\n CollectionNamespace keyVaultNamespace,\n BsonDocument schema)\n {\n var kmsProviders = new Dictionary>();\n\n // Specify the local master encryption key\n if (kmsKeyLocation == KmsKeyLocation.Local)\n {\n var localMasterKeyBase64 = File.ReadAllText(__localMasterKeyPath);\n var localMasterKeyBytes = Convert.FromBase64String(localMasterKeyBase64);\n var localOptions = new Dictionary\n {\n { \"key\", localMasterKeyBytes }\n };\n kmsProviders.Add(\"local\", localOptions);\n }\n\n // Because we didn't explicitly specify the collection our\n // JSON schema applies to, we assign it here. This will map it\n // to a database called medicalRecords and a collection called\n // patients.\n var schemaMap = new Dictionary();\n schemaMap.Add(_medicalRecordsNamespace.ToString(), schema);\n\n // Specify location of mongocryptd binary, if necessary.\n // Not required if path to the mongocryptd.exe executable\n // has been added to your PATH variables\n var extraOptions = new Dictionary()\n {\n // Optionally uncomment the following line if you are running mongocryptd manually\n // { \"mongocryptdBypassSpawn\", true }\n };\n\n // Create CSFLE-enabled MongoClient\n // The addition of the automatic encryption settings are what \n // transform this from a standard MongoClient to a CSFLE-enabled\n // one\n var clientSettings = MongoClientSettings.FromConnectionString(_connectionString);\n var autoEncryptionOptions = new AutoEncryptionOptions(\n keyVaultNamespace: keyVaultNamespace,\n kmsProviders: kmsProviders,\n schemaMap: schemaMap,\n extraOptions: extraOptions);\n clientSettings.AutoEncryptionOptions = autoEncryptionOptions;\n return new MongoClient(clientSettings);\n }\n }\n}\n```\n\nAlright, we're almost done. Don't forget to save what you have so far! In our next (and final) step, we can finally try out client-side field level encryption with some queries!\n\n> \ud83c\udf1f Know what show this patient is from? Let me know your nerd cred (and\n> let's be friends, fellow fan!) in a\n> [tweet!\n\n## Perform Encrypted Read/Write Operations\n\nRemember the sample data we've prepared? Let's put that to good use! To test out an encrypted write and read of this data, let's add another method to the `AutoEncryptHelper` class. 
Right after the constructor, add the following method:\n\n``` csp\n// AutoEncryptHelper.cs\n\npublic async void EncryptedWriteAndReadAsync(string keyIdBase64, KmsKeyLocation kmsKeyLocation)\n{\n // Construct a JSON Schema\n var schema = JsonSchemaCreator.CreateJsonSchema(keyIdBase64);\n\n // Construct an auto-encrypting client\n var autoEncryptingClient = CreateAutoEncryptingClient(\n kmsKeyLocation,\n _keyVaultNamespace,\n schema);\n\n var collection = autoEncryptingClient\n .GetDatabase(_medicalRecordsNamespace.DatabaseNamespace.DatabaseName)\n .GetCollection(_medicalRecordsNamespace.CollectionName);\n\n var ssnQuery = Builders.Filter.Eq(\"ssn\", __sampleSsnValue);\n\n // Upsert (update document if found, otherwise create it) a document into the collection\n var medicalRecordUpdateResult = await collection\n .UpdateOneAsync(ssnQuery, new BsonDocument(\"$set\", __sampleDocFields), new UpdateOptions() { IsUpsert = true });\n\n if (!medicalRecordUpdateResult.UpsertedId.IsBsonNull) \n {\n Console.WriteLine(\"Successfully upserted the sample document!\");\n }\n\n // Query by SSN field with auto-encrypting client\n var result = collection.Find(ssnQuery).Single();\n\n Console.WriteLine($\"Encrypted client query by the SSN (deterministically-encrypted) field:\\n {result}\\n\");\n}\n```\n\nWhat's happening here? First, we use the `JsonSchemaCreator` class to construct our schema. Then, we create an auto-encrypting client using the `CreateAutoEncryptingClient()` method. Next, lines 14-16 set the working database and collection we'll be interacting with. Finally, we upsert a medical record using our sample data, then retrieve it with the auto-encrypting client.\n\nPrior to inserting this new patient record, the CSFLE-enabled client automatically encrypts the appropriate fields as established in our JSON schema.\n\nIf you like diagrams, here's what's happening:\n\nWhen retrieving the patient's data, it is decrypted by the client. The nicest part about enabling CSFLE in your application is that the queries don't change, meaning the driver methods you're already familiar with can still be used.\n\nFor the diagram people:\n\nTo see this in action, we just have to modify the main program slightly so that we can call the `EncryptedWriteAndReadAsync()` method.\n\nBack in the `Program.cs` file, add the following code:\n\n``` csp\n// Program.cs\n\nusing System;\nusing System.IO;\nusing MongoDB.Driver;\n\nnamespace EnvoyMedSys\n{\n public enum KmsKeyLocation\n {\n Local,\n }\n\n class Program\n {\n public static void Main()\n {\n var connectionString = \"PASTE YOUR MONGODB CONNECTION STRING/ATLAS URI HERE\";\n var keyVaultNamespace = CollectionNamespace.FromFullName(\"encryption.__keyVault\");\n\n var kmsKeyHelper = new KmsKeyHelper(\n connectionString: connectionString,\n keyVaultNamespace: keyVaultNamespace);\n var autoEncryptHelper = new AutoEncryptHelper(\n connectionString: connectionString,\n keyVaultNamespace: keyVaultNamespace);\n\n string kmsKeyIdBase64;\n\n // Ensure GenerateLocalMasterKey() only runs once!\n if (!File.Exists(\"../../../master-key.txt\"))\n {\n kmsKeyHelper.GenerateLocalMasterKey();\n }\n\n kmsKeyIdBase64 = kmsKeyHelper.CreateKeyWithLocalKmsProvider();\n autoEncryptHelper.EncryptedWriteAndReadAsync(kmsKeyIdBase64, KmsKeyLocation.Local);\n\n Console.ReadKey();\n }\n }\n}\n```\n\nAlright, this is it! Save your files and then run your program. After a short wait, you should see the following console output:\n\nConsole output of an encrypted write and read\n\nIt works! 
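For reference, the decrypted record printed by that query should resemble something like the following (illustrative only; your `_id` value and field order will differ):

``` json
{
  "_id" : ObjectId("..."),
  "name" : "Takeshi Kovacs",
  "ssn" : 213238414,
  "bloodType" : "AB-",
  "medicalRecords" : [
    { "weight" : 180 },
    { "bloodPressure" : "120/80" }
  ],
  "insurance" : {
    "policyNumber" : 211241,
    "provider" : "EnvoyHealth"
  }
}
```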
The console output you see has been decrypted correctly by our CSFLE-enabled MongoClient. We can also verify that this patient record has been properly saved to our database. Logging into my Atlas cluster, I see Takeshi's patient record stored securely, with the specified fields encrypted:\n\nEncrypted patient record stored in MongoDB Atlas\n\n## Bonus: What's the Difference with a Non-Encrypted Client?\n\nTo see how these queries perform when using a non-encrypting client, let's add one more method to the `AutoEncryptHelper` class. Right after the `EncryptedWriteAndReadAsync()` method, add the following:\n\n``` csp\n// AutoEncryptHelper.cs\n\npublic void QueryWithNonEncryptedClient()\n{\n var nonAutoEncryptingClient = new MongoClient(_connectionString);\n var collection = nonAutoEncryptingClient\n .GetDatabase(_medicalRecordsNamespace.DatabaseNamespace.DatabaseName)\n .GetCollection(_medicalRecordsNamespace.CollectionName);\n var ssnQuery = Builders.Filter.Eq(\"ssn\", __sampleSsnValue);\n\n var result = collection.Find(ssnQuery).FirstOrDefault();\n if (result != null)\n {\n throw new Exception(\"Expected no document to be found but one was found.\");\n }\n\n // Query by name field with a normal non-auto-encrypting client\n var nameQuery = Builders.Filter.Eq(\"name\", __sampleNameValue);\n result = collection.Find(nameQuery).FirstOrDefault();\n if (result == null)\n {\n throw new Exception(\"Expected the document to be found but none was found.\");\n }\n\n Console.WriteLine($\"Query by name (non-encrypted field) using non-auto-encrypting client returned:\\n {result}\\n\");\n}\n```\n\nHere, we instantiate a standard MongoClient with no auto-encryption settings. Notice that we query by the non-encrypted `name` field; this is because we can't query on encrypted fields using a MongoClient without CSFLE enabled.\n\nFinally, add a call to this new method in the `Program.cs` file:\n\n``` csp\n// Program.cs\n\n// Comparison query on non-encrypting client\nautoEncryptHelper.QueryWithNonEncryptedClient();\n```\n\nSave all your files, then run your program again. You'll see your last query returns an encrypted patient record, as expected. Since we are using a non CSFLE-enabled MongoClient, no decryption happens, leaving only the non-encrypted fields legible to us:\n\nQuery output using a non CSFLE-enabled MongoClient. Since no decryption happens, the data is properly returned in an encrypted state.\n\n## Let's Recap\n\nCheers! You've made it this far!\n\nReally, pat yourself on the back. This was a serious tutorial!\n\nThis tutorial walked you through:\n\n* Creating a .NET Core console application.\n* Installing dependencies needed to enable client-side field level encryption for your .NET core app.\n* Creating a local master key.\n* Creating a data encryption key.\n* Constructing a JSON Schema to establish which fields to encrypt.\n* Configuring a CSFLE-enabled MongoClient.\n* Performing an encrypted read and write of a sample patient record.\n* Performing a read using a non-CSFLE-enabled MongoClient to see the difference in the retrieved data.\n\nWith this knowledge of client-side field level encryption, you should be able to better secure applications and understand how it works!\n\n> I hope this tutorial made client-side field level encryption simpler to\n> integrate into your .NET application! If you have any further questions\n> or are stuck on something, head over to the MongoDB Community\n> Forums and start a\n> topic. 
A whole community of MongoDB engineers (including the DevRel\n> team) and fellow developers are sure to help!\n\nIn case you want to learn a bit more, here are the resources that were\ncrucial to helping me write this tutorial:\n\n* Client-Side Field Level Encryption - .NET Driver\n* CSFLE Examples - .NET Driver\n* Client-Side Field Level Encryption - Security Docs\n* Automatic Encryption Rules\n", "format": "md", "metadata": {"tags": ["C#", "MongoDB"], "pageDescription": "Learn how to use MongoDB client-side field level encryption (CSFLE) with a C# application.", "contentType": "Code Example"}, "title": "How to Use MongoDB Client-Side Field Level Encryption (CSFLE) with C#", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/how-netlify-backfilled-2-million-documents", "action": "created", "body": "# How We Backfilled 2 Million Database Documents\n\nWe recently needed to backfill nearly two million documents in our MongoDB database with a new attribute and wanted to share our process. First, some context on why we were doing this: This backfill was to support Netlify's Growth team, which builds prototypes into Netlify's core product and then evaluates how those prototypes impact user conversion and retention rates. \n\nIf we find that a prototype positively impacts growth, we use that finding to shape deeper investments in a particular area of the product. In this case, to measure the impact of a prototype, we needed to add an attribute that didn't previously exist to one of our database models.\n\nWith that out of the way, let's dive into how we did it!\n\nBackend engineer Eric Betts and I started with a script from a smaller version of this task: backfilling 130,000 documents. The smaller backfill had taken about 11 hours, including time to tweak the script and restart it a few times when it died. At a backfill rate of 175-200 documents per minute, we were looking at a best-case scenario of eight to nine days straight of backfilling for over two million total documents, and that's assuming everything went off without a hitch. With a much bigger backfill ahead of us, we needed to see if we could optimize.\n\nThe starting script took two arguments\u2014a `batch_size` and `thread_pool_size` size\u2014and it worked like this:\n\n1. Create a new queue.\n2. Create a variable to store the number of documents we've processed.\n3. Query the database, limiting returned results to the `batch_size` we passed in.\n4. Push each returned document into the queue.\n5. Create the number of worker threads we passed in with the `thread_pool_size` argument.\n6. Each thread makes an API call to a third-party API, then writes our new attribute to our database with the result from the third-party API.\n7. Update our count of documents processed.\n8. When there are no more documents in the queue to process, clean up the threads.\n\nThe script runs on a Kubernetes pod with memory and CPU constraints. 
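To make those eight steps concrete, here's roughly the shape of that original script. This is a simplified sketch: the query filter, the third-party client, and the attribute name are illustrative stand-ins, not our exact production code.

```rb
def self.run(batch_size, thread_pool_size)
  jobs = Queue.new
  processed = 0

  # Steps 1-4: query the database, limit to batch_size, and queue each document
  Obj.where(new_attribute: nil).limit(batch_size).each { |obj| jobs.push obj }

  # Step 5: create the worker threads
  workers = thread_pool_size.times.map do
    Thread.new do
      begin
        # Steps 6-7: call the third-party API, write the new attribute, bump the count
        while obj = jobs.pop(true)
          result = ThirdPartyClient.lookup(obj) # illustrative third-party call
          obj.update(new_attribute: result)
          processed += 1
        end
      rescue ThreadError
        # Step 8: the queue is empty, so let this thread wind down
      end
    end
  end

  workers.map(&:join)
end
```

That's the whole loop: query, queue, fan out to threads, write back.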
It reads from our production MongoDB database and writes to a secondary.\n\n## More repos, more problems\n\nWhen scaling up the original script to process 20 times the number of documents, we quickly hit some limitations:\n\n**Pod memory constraints.** Running the script with `batch_size` of two million documents and `thread_pool_size` of five was promptly killed by the Kubernetes pod:\n```rb\nBackfill.run(2000000, 5)\n```\n\n**Too much manual intervention.** Running with `batch_size` of 100 and `thread_pool` of five worked much better:\n```rb\nBackfill.run(100, 5)\n```\n\nIt ran super fast \ud83d\ude80 there were no errors \u2728... but we would have had to manually run it 20,000 times.\n\n**Third-party API rate limits.** Even with a reliable `batch_size`, we couldn't crank the `thread_pool_size` too high or we'd hit rate limits at the third-party API. Our script would finish running, but many of our documents wouldn't actually be backfilled, and we'd have to iterate over them again.\n\n## Brainstorming solutions\n\nEric and I needed something that met the following criteria:\n\n* Doesn't use so much memory that it kills the Kubernetes pod.\n* Doesn't use so much memory that it noticeably increases database read/write latency.\n* Iterates through a complete batch of objects at a time; the job shouldn't die before at least attempting to process a full batch.\n* Requires minimal babysitting. Some manual intervention is okay, but we need a job to run for several hours by itself.\n* Lets us pick up where we left off. If the job dies, we don't want to waste time re-processing documents we've already processed once.\n\nWith this list of criteria, we started brainstorming solutions. We could:\n\n1. Dig into why the script was timing out before processing the full batch.\n2. Store references to documents that failed to be updated, and loop back over them later.\n3. Find a way to order the results returned by the database.\n4. Automatically add more jobs to the queue once the initial batch was processed.\n\n## Optimizations\n### You're in time out\n\n#1 was an obvious necessity. We started logging the thread index to see if it would tell us anything:\n```rb\ndef self.run(batch_size, thread_pool_size)\n jobs = Queue.new\n \n # get all the documents that meet these criteria\n objs = Obj.where(...)\n # limit the returned objects to the batch_size\n objs = objs.limit(batch_size)\n # push each document into the jobs queue to be processed\n objs.each { |o| jobs.push o }\n \n # create a thread pool\n workers = (thread_pool_size).times.map do |i|\n Thread.new do\n begin\n while j = jobs.pop(true)\n # log the thread index and object ID\n Rails.logger.with_fields(thread: i, obj: obj.id)\n begin\n # process objects\n end\n...\n```\nThis new log line let us see threads die off as the script ran. We'd go from running with five threads:\n```\nthread=\"4\" obj=\"939bpca...\"\nthread=\"1\" obj=\"939apca...\"\nthread=\"5\" obj=\"939cpca...\"\nthread=\"2\" obj=\"939dpca...\"\nthread=\"3\" obj=\"939fpca...\"\nthread=\"4\" obj=\"969bpca...\"\nthread=\"1\" obj=\"969apca...\"\nthread=\"5\" obj=\"969cpca...\"\nthread=\"2\" obj=\"969dpca...\"\nthread=\"3\" obj=\"969fpca...\"\n```\nto running with a few:\n```\nthread=\"4\" obj=\"989bpca...\"\nthread=\"1\" obj=\"989apca...\"\nthread=\"4\" obj=\"979bpca...\"\nthread=\"1\" obj=\"979apca...\"\n```\nto running with none. 
\n\nWe realized that when a thread would hit an error in an API request or a write to our database, we were rescuing and printing the error, but not continuing with the loop. This was a simple fix: When we `rescue`, continue to the `next` iteration of the loop.\n```rb\n begin\n # process documents\n rescue\n next\n end\n```\n\n### Order, order\n\nIn a new run of the script, we needed a way to pick up where we left off. Idea #2\u2014keeping track of failures across iterations of the script\u2014was technically possible, but it wasn't going to be pretty. We expected idea #3\u2014ordering the query results\u2014to solve the same problem, but in a better way, so we went with that instead. Eric came up with the idea to order our query results by `created_at` date. This way, we could pass a `not_before` date argument when running the script to ensure that we weren't processing already-processed objects. We could also print each document's `created_at` date as it was processed, so that if the script died, we could grab that date and pass it into the next run. Here's what it looked like:\n\n```rb\ndef self.run(batch_size, thread_pool_size, not_before)\n jobs = Queue.new\n \n # order the query results in ascending order\n objs = Obj.where(...).order(created_at: -1)\n # get documents created after the not_before date\n objs = objs.where(:created_at.gte => not_before)\n # limit the returned documents to the batch_size\n objs = objs.limit(batch_size)\n # push each document into the jobs queue to be processed\n objs.each { |o| jobs.push o }\n\n workers = (thread_pool_size).times.map do |i|\n Thread.new do\n begin\n while j = jobs.pop(true)\n # log each document's created_at date as it's processed\n Rails.logger.with_fields(thread: i, obj: obj.id, created_at: obj.created_at)\n begin\n # process documents\n rescue\n next\n end\n...\n```\n\nSo a log line might look like:\n`thread=\"6\" obj=\"979apca...\" created_at=\"Wed, 11 Nov 2020 02:04:11.891000000 UTC +00:00\"`\n\nAnd if the script died after that line, we could grab that date and pass it back in:\n`Backfill.run(50000, 10, \"Wed, 11 Nov 2020 02:04:11.891000000 UTC +00:00\")`\n\nNice!\n\nUnfortunately, when we added the ordering, we found that we unintentionally introduced a new memory limitation: the query results were sorted in memory, so we couldn't pass in too large of a batch size or we'd run out of memory on the Kubernetes pod. This lowered our batch size substantially, but we accepted the tradeoff since it eliminated the possibility of redoing work that had already been done.\n\n### The job is never done\nThe last critical task was to make our queue add to itself once the original batch of documents was processed. \n\nOur first approach was to check the queue size, add more objects to the queue when queue size reached some threshold, and re-run the original query, but skip all the returned query results that we'd already processed. We stored the number we'd already processed in a variable called `skip_value`. Each time we added to the queue, we would increase `skip_value` and skip an increasingly large number of results. \n\nYou can tell where this is going. 
At some point, we would try to skip too large of a value, run out of memory, fail to refill the queue, and the job would die.\n\n```rb\n skip_value = batch_size\n step = batch_size\n \n loop do\n if jobs.size < 1000\n objs = Obj.where(...).order(created_at: -1)\n objs = objs.where(:created_at.gte => created_at)\n objs = objs.skip(skip_value).limit(step) # <--- job dies when this skip_value gets too big \u274c\n objs.each { |r| jobs.push r }\n \n skip_value += step # <--- this keeps increasing as we process more objects \u274c\n \n if objs.count == 0\n break\n end\n end\n end\n```\n\nWe ultimately tossed out the increasing `skip_value`, opting instead to store the `created_at` date of the last object processed. This way, we could skip a constant, relatively low number of documents instead of slowing down and eventually killing our query by skipping an increasing number:\n\n```rb\n refill_at = 1000\n step = batch_size\n \n loop do\n if jobs.size < refill_at\n objs = Obj.where(...).order(created_at: -1)\n objs = objs.where(:created_at.gte => last_created_at) # <--- grab last_created_at constant from earlier in the script \u2705\n objs = objs.skip(refill_at).limit(step) # <--- skip a constant amount \u2705\n objs.each { |r| jobs.push r }\n \n if objs.count == 0\n break\n end\n end\n end\n```\n\nSo, with our existing loop to create and kick off the threads, we have something like this:\n```rb\ndef self.run(batch_size, thread_pool_size, not_before)\n jobs = Queue.new\n \n objs = Obj.where(...).order(created_at: -1)\n objs = objs.where(:created_at.gte => not_before)\n objs = objs.limit(step)\n objs.each { |o| jobs.push o }\n\n updated = 0\n last_created_at = \"\" # <--- we update this variable... \n \n workers = (thread_pool_size).times.map do |i|\n Thread.new do\n begin\n while j = jobs.pop(true)\n Rails.logger.with_fields(thread: i, obj: obj.id, created_at: obj.created_at)\n begin\n # process documents\n updated += 1\n last_created_at = obj.created_at # <--- ...with each document processed\n rescue\n next\n end\n end\n end\n end\n end\n \n loop do\n skip_value = batch_size\n step = 10000\n \n if jobs.size < 1000\n objs = Obj.where(...).order(created: -1)\n objs = objs.where(:created_at.gte => not_before)\n objs = objs.skip(skip_value).limit(step)\n \n objs.each { |r| jobs.push r }\n skip_value += step\n \n if objs.count == 0\n break\n end\n end\n end \n workers.map(&:join)\nend\n```\n\nWith this, we were finally getting the queue to add to itself when it was done. But the first time we ran this, we saw something surprising. The initial batch of 50,000 documents was processed quickly, and then the next batch that was added by our self-adding queue was processed very slowly. We ran `top -H` to check CPU and memory usage of our script on the Kubernetes pod and saw that it was using 90% of the system's CPU:\n\nAdding a few `sleep` statements between loop iterations helped us get CPU usage down to a very reasonable 6% for the main process.\n\nWith these optimizations ironed out, Eric and I were able to complete our backfill at a processing rate of 800+ documents/minute with no manual intervention. 
Woohoo!", "format": "md", "metadata": {"tags": ["MongoDB", "Kubernetes"], "pageDescription": "Learn how the Netlify growth team reduced the time it took to backfill nearly two million documents in our MongoDB database with a new attribute.", "contentType": "Tutorial"}, "title": "How We Backfilled 2 Million Database Documents", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/rust/rust-field-level-encryption", "action": "created", "body": "# MongoDB Field Level Encryption is now Available for Rust applications\n\nWe have some exciting news to announce for Rust developers. Our 2.4.1 release of the MongoDB Rust driver brings a raft of new, innovative features for developers building Rust applications. \n\n## Field Level Encryption for Rust Applications\nThis one has been a long time coming. The 2.4.1 version of the MongoDB Rust driver contains field level encryption capabilities - both client side field level encryption and queryable encryption. Starting with MongoDB 4.2, client-side field level encryption allows an application to encrypt specific data fields in addition to pre-existing MongoDB encryption features such as Encryption at Rest and TLS/SSL (Transport Encryption).\n\nWith field level encryption, applications can encrypt fields in documents prior to transmitting data over the wire to the server. Client-side field level encryption supports workloads where applications must guarantee that unauthorized parties, including server administrators, cannot read the encrypted data.\n\nFor more information, see the Encryption section of the Rust driver documentation.\n\n## GridFS Rust Support\nThe 2.4.1 release of the MongoDB Rust driver also (finally!) added support for GridFS, allowing storage and retrieval of files that exceed the BSON document size limit. \n\n## Tracing Support \nThis release had one other noteworthy item in it - the driver now emits tracing events at points of interest. Note that this API is considered unstable as the tracing crate has not reached 1.0 yet; future minor versions of the driver may upgrade the tracing dependency to a new version which is not backwards-compatible with Subscribers that depend on older versions of tracing. You can read more about tracing from the crates.io documentation here.\n\n## Install the MongoDB Rust Driver\nTo check out these new features, you'll need to install the MongoDB Rust driver, which is available on crates.io. To use the driver in your application, simply add it to your project's Cargo.toml.\n\n```\n[dependencies]\nmongodb = \"2.4.0-beta\"\n```", "format": "md", "metadata": {"tags": ["Rust"], "pageDescription": "MongoDB now support field level encryption for Rust applications", "contentType": "Article"}, "title": "MongoDB Field Level Encryption is now Available for Rust applications", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/queryable-encryption-james-bond", "action": "created", "body": "# How Queryable Encryption Can Keep James Bond Safe\n\nCompanies of all sizes are continuing to embrace the power of data. 
With that power, however, comes great responsibility \u2014 namely, the responsibility to protect that data and customers, comply with data privacy regulations, and to control and limit access to confidential and regulated data.\n\nThough existing encryption solutions, both in-transit and\u00a0at-rest, do cover many of the use cases above, none of them protect sensitive data while it\u2019s in use. However, in-use encryption is often a requirement for high-sensitivity workloads, particularly for customers in financial services, healthcare, and critical infrastructure organizations.\n\nQueryable Encryption, a new feature from MongoDB currently in **preview**, offers customers a way to encrypt sensitive data and keep it encrypted throughout its entire lifecycle, whether it\u2019s in memory, logs, in-transit, at-rest, or in backups.\n\nYou can now encrypt sensitive data on the client side, store it as fully randomized encrypted data on the server side, and run expressive queries on that encrypted data. Data is never in cleartext in the database, but MongoDB can still process queries and execute operations on the server side.\n\nFind more details on\u00a0Queryable Encryption.\n\n## Setting up Queryable Encryption with Java\n\nThere are two ways to set up Queryable Encryption. You can either go the automatic encryption route, which allows you to perform encrypted reads and writes without needing to write code specifying how the fields should be encrypted, or you could go the manual route, which means you\u2019ll need to specify the logic for encryption.\n\nTo use Queryable Encryption with Java, you\u2019ll need 4.7.0-beta0 (or later) of the\u00a0Java driver, and version 1.5.0-rc2 (or later) of MongoCrypt. You\u2019ll also need either MongoDB Atlas or MongoDB Enterprise if you want to use automatic encryption. If you don\u2019t have Atlas or Enterprise, no worries! You can get a\u00a0free forever cluster\u00a0on Atlas by\u00a0registering.\n\nOnce you\u2019ve completed those prerequisites, you can set up Queryable Encryption and specify which fields you\u2019d like to encrypt. Check out\u00a0the quick start\u00a0to learn more.\n\n## Okay, but what does this have to do with James Bond?\n\nLet\u2019s explore the following use case. Assume, for a moment, that you work for a top-secret company and you\u2019re tasked with keeping the identities of your employees shrouded in secrecy.\n\nThe below code snippet represents a new employee, James Bond, who works at your company:\n```\nDocument employee = new Document()\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.append(\"firstName\", \"James\")\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.append(\"lastName\", \"Bond\")\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.append(\"employeeId\", 1006)\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.append(\"address\", \"30 Wellington Sq\");\n```\n\nThe document containing James Bond\u2019s information is added to an \u201cemployees\u201d collection that has two encrypted fields, **employeeId** and **address**. 
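To make the example above concrete, here is a rough sketch of the `encryptedFields` map such a collection could be created with. This is an illustration, not the article's exact configuration: the field paths and BSON types are taken from the employee document above, while the `keyId` values are placeholders for data encryption key UUIDs you would generate in your own key vault.\n\n```json\n{\n  \"fields\": [\n    {\n      \"path\": \"employeeId\",\n      \"bsonType\": \"int\",\n      \"keyId\": \"<UUID of a data encryption key>\",\n      \"queries\": { \"queryType\": \"equality\" }\n    },\n    {\n      \"path\": \"address\",\n      \"bsonType\": \"string\",\n      \"keyId\": \"<UUID of a data encryption key>\"\n    }\n  ]\n}\n```\n\nWith a map like this supplied to the encrypted client, `employeeId` stays queryable with equality matches, while `address` is encrypted but not queryable.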
Learn more about\u00a0encrypted fields.\n\nAssuming someone, maybe Auric Goldfinger, wanted to find James Bond\u2019s address but didn\u2019t have access to an encrypted client, they\u2019d only be able to see the following:\n```\n\u201cfirstName\u201d : \u201cJames\u201d,\n\n\u201clastName\u201d : \u201cBond\u201d,\n\n\"employeeId\": {\"$binary\": {\"base64\": \"B5XwlQMzFkOmmW0VTcE1QhoQ/ZYHhyMqItvaD+J9AfsAFf1koD/TaYpJG/sCOugnDlE7b4K+mciP63k+RdxMw4OVhYUhsCkFPrhvMtk0l8bekyYWhd8Leky+mcNTy547dJF7c3WdaIumcKIwGKJ7vN0Zs78pcA+86SKOA3LCnojK4Zdewv4BCwQwsqxgEAWyDaT9oHbXiUJDae7s+EWj+ZnfZWHyYJNR/oZtaShrooj2CnlRPK0RRInV3fGFzKXtiOJfxXznYXJ//D0zO4Bobc7/ur4UpA==\", \"subType\": \"06\"}},\n\n\"address\": {\"$binary\": {\"base64\": \"Biue77PFDUA9mrfVh6jmw6ACi4xP/AO3xvBcQRCp7LPjh0V1zFPU1GntlyWqTFeHfBARaEOuXHRs5iRtD6Ha5v5EjRWZ9nufHgg6JeMczNXmYo7sOaDJ\", \"subType\": \"06\"}}\n```\nOf the four fields in my document, the last two remained encrypted (**employeeId** and **address**). Because Auric\u2019s client was unencrypted, he wasn\u2019t able to access James Bond\u2019s address.\n\nHowever, if Auric were using an encrypted client, he\u2019d be able to see the following:\n\n```\n\"firstName\": \"James\",\u00a0\n\n\"lastName\": \"Bond\",\u00a0\n\n\"employeeId\": 1006,\u00a0\n\n\"address\": \"30 Wellington Sq\"\n```\n\n\u2026and be able to track down James Bond.\n\n### Summary\n\nOf course, my example with James Bond is fictional, but I hope that it illustrates one of the many ways that Queryable Encryption can be helpful. For more details, check out our\u00a0docs\u00a0or the following helpful links:\n\n* Supported Operations for Queryable Encryption\n* Driver Compatibility Table\n* Automatic Encryption Shared Library\n\nIf you run into any issues using Queryable Encryption, please let us know in\u00a0Community Forums\u00a0or by filing\u00a0tickets\u00a0on the JAVA project. Happy encrypting!", "format": "md", "metadata": {"tags": ["MongoDB", "Java"], "pageDescription": "Learn more about Queryable Encryption and how it could keep one of literature's legendary heroes safe.", "contentType": "Article"}, "title": "How Queryable Encryption Can Keep James Bond Safe", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/strapi-headless-cms-with-atlas", "action": "created", "body": "# Use MongoDB as the Data Store for your Strapi Headless CMS\n\nThe modern web is evolving quickly and one of the best innovations in\nrecent years is the advent of Headless CMS frameworks. I believe that Headless CMS systems will do for content what RESTful APIs did for SaaS. The idea is simple: You decouple content creation and management from the presentation layer. You then expose the content through either RESTful or GraphQL APIs to be consumed by the front end.\n\nHeadless CMS frameworks work especially well with static site generators which have traditionally relied on simple markdown files for content management. This works great for a small personal blog, for example, but quickly becomes a management mess when you have multiple authors, many different types of content, and ever-changing requirements. A Headless CMS system takes care of content organization and creation while giving you flexibility on how you want to present the content.\n\nToday, we are going to look at an open-source Headless CMS called Strapi. Strapi comes from the word \"bootstrap,\" and helps bootSTRAP your API. 
In this post, we'll look at some of the features of Strapi and how it can help us manage our content as well as how we can combine it with MongoDB to have a modern content management platform.\n\n## Prerequisites\n\nFor this tutorial, you'll need:\n\n- Node.js\n- npm\n- MongoDB\n\nYou can download Node.js here, and it will come with the latest version of npm and npx. For MongoDB, use MongoDB Atlas for free.\n\n## What is Strapi?\n\nStrapi is an open-source Headless CMS framework. It is essentially a back-end or admin panel for content creation. It allows developers to easily define a custom content structure and customize it fully for their use case. The framework has a really powerful plug-in system for making content creation and management painless regardless of your use-case.\n\nIn this tutorial, we'll set up and configure Strapi. We'll do it in two ways. First, we'll do a default install to quickly get started and show off the functionality of Strapi, and then we'll also create a second instance that uses MongoDB as the database to store our content.\n\n## Bootstrapping Strapi\n\nTo get started with Strapi, we'll execute a command in our terminal using\nnpx. If you have a recent version of Node and npm installed, npx will already be installed as well so simply execute the following command in a directory where you want your Strapi app to live:\n\n``` bash\nnpx create-strapi-app my-project --quickstart\n```\n\nFeel free to change the `my-project` name to a more suitable option. The `--quickstart` argument will use a series of default configuration options to get you up and running as quickly as possible.\n\nThe npx command will take some time to run and download all the packages it needs, and once it's done, it will automatically start up your Strapi app. If it does not, navigate to the `my-project` directory and run:\n\n``` bash\nnpm run develop\n```\n\nThis will start the Strapi server. When it is up and running, navigate to `localhost:1337` in your browser and you'll be greeted with the following welcome screen:\n\nFill out the information with either real or fake data and you'll be taken to your new dashboard.\n\nIf you see the dashboard pictured above, you are all set! When we passed the `--quickstart` argument in our npx command, Strapi created a SQLite database to use and store our data. You can find this database file if you navigate to your `my-project` directory and look in the `.tmp` directory.\n\nFeel free to mess around in the admin dashboard to familiarize yourself with Strapi. Next, we're going to rerun our creation script, but this time, we won't pass the `--quickstart` argument. We'll have to set a couple of different configuration items, primarily our database config. When you're ready proceed to the next section.\n\n## Bootstrapping Strapi with MongoDB\n\nBefore we get into working with Strapi, we'll re-run the installation script and change our database provider from the default SQLite to MongoDB. There are many reasons why you'd want to use MongoDB for your Strapi app, but one of the most compelling ones to me is that many virtual machines are ephemeral, so if you're installing Strapi on a VM to test it out, every time you restart the app, that SQLite DB will be gone and you'll have to start over.\n\nNow then, let's go ahead and stop our Strapi app from running and delete the `my-project` folder. We'll start clean. 
After you've done this, run the following command:\n\n``` bash\nnpx create-strapi-app my-project\n```\n\nAfter a few seconds you'll be prompted to choose an installation type. You can choose between **Quickstart** and **Custom**, and you'll want to select **Custom**. Next, for your database client select **MongoDB**, in the CLI it may say **mongo**. For the database name, you can choose whatever name makes sense to you, I'll go with **strapi**. You do not already have to have a database created in your MongoDB Atlas instance, Strapi will do this for you.\n\nNext, you'll be prompted for the Host URL. If you're running your MongoDB database on Atlas, the host will be unique to your cluster. To find it, go to your MongoDB Atlas dashboard, navigate to your **Clusters** tab, and hit the **Connect** button. Choose any of the options and your connection string will be displayed. It will be the part highlighted in the image below.\n\nAdd your connection string, and the next option you'll be asked for will be **+srv connection** and for this, you'll say **true**. After that, you'll be asked for a Port, but you can ignore this since we are using a `srv` connection. Finally, you will be asked to provide your username and password for the specific cluster. Add those in and continue. You'll be asked for an Authentication database, and you can leave this blank and just hit enter to continue. And at the end of it all, you'll get your final question asking to **Enable SSL connection** and for this one pass in **y** or **true**.\n\nYour terminal window will look something like this when it's all said and done:\n\n``` none\nCreating a new Strapi application at C:\\Users\\kukic\\desktop\\strapi\\my-project.\n\n? Choose your installation type Custom (manual settings)\n? Choose your default database client mongo\n? Database name: strapi\n? Host: {YOUR-MONGODB-ATLAS-HOST}\n? +srv connection: true\n? Port (It will be ignored if you enable +srv): 27017\n? Username: ado\n? Password: ******\n? Authentication database (Maybe \"admin\" or blank):\n? Enable SSL connection: (y/N) Y \n```\n\nOnce you pass the **Y** argument to the final question, npx will take care of the rest and create your Strapi app, this time using MongoDB for its data store. To make sure everything works correctly, once the install is done, navigate to your project directory and run:\n\n``` bash\nnpm run develop\n```\n\nYour application will once again run on `localhost:1337` and you'll be greeted with the familiar welcome screen.\n\nTo see the database schema in MongoDB Atlas, navigate to your dashboard, go into the cluster you've chosen to install the Strapi database, and view its collections. By default it will look like this:\n\n## Better Content Management with Strapi\n\nNow that we have Strapi set up to use MongoDB as our database, let's go into the Strapi dashboard at `localhost:1337/admin` and learn to use some of the features this Headless CMS provides. We'll start by creating a new content type. Navigate to the **Content-Types Builder** section of the dashboard and click on the **Create New Collection Type** button.\n\nA collection type is, as the name implies, a type of content for your application. It can be a blog post, a promo, a quick-tip, or really any sort of content you need for your application. We'll create a blog post. The first thing we'll need to do is give it a name. I'll give my blog posts collection the very creative name of **Posts**.\n\nOnce we have the name defined, next we'll add a series of fields for our collection. 
This is where Strapi really shines. The default installation gives us many different data types to work with such as text for a title or rich text for the body of a blog post, but Strapi also allows us to create custom components and even customize these default types to suit our needs.\n\nMy blog post will have a **Title** of type **Text**, a **Content** element for the content of the post of type **Rich Text**, and a **Published** value of type **Date** for when the post is to go live. Feel free to copy my layout, or create your own. Once you're satisfied hit the save button and the Strapi server will restart and you'll see your new collection type in the main navigation.\n\nLet's go ahead and create a few posts for our blog. Now that we have some posts created, we can view the content both in the Strapi dashboard, as well as in our MongoDB Atlas collections view. Notice in MongoDB Atlas that a new collection called **posts** was created and that it now holds the blog posts we've written.\n\nWe are only scratching the surface of what's available with Strapi. Let me show you one more powerful feature of Strapi.\n\n- Create a new Content Type, call it **Tags**, and give it only one field called **name**.\n- Open up your existing Posts collection type and hit the **Add another field** button.\n- From here, select the field type of **Relation**.\n- On the left-hand side you'll see Posts, and on the right hand click the dropdown arrow and find your new **Tags** collection and select it.\n- Finally, select the last visual so that it says **Post has many Tags** and hit **Finish**.\n\nNotice that some of the options are traditional 1\\:1, 1\\:M, M\\:M relationships that you might remember from the traditional RDBMS world. Note that even though we're using MongoDB, these relationships will be correctly represented so you don't have to worry about the underlying data model.\n\nGo ahead and create a few entries in your new Tags collection, and then go into an existing post you have created. You'll see the option to add `tags` to your post now and you'll have a dropdown menu to choose from. No more guessing what the tag should be... is it NodeJS, or Node.Js, maybe just Node?\n\n## Accessing Strapi Content\n\nSo far we have created our Strapi app, created various content types, and created some content, but how do we make use of this content in the applications that are meant to consume it? We have two options. We can expose the data via RESTful endpoints, or via GraphQL. I'll show you both.\n\nLet's first look at the RESTful approach. When we create a new content type Strapi automatically creates an accompanying RESTFul endpoint for us. So we could access our posts at `localhost:1337/posts` and our tags at `localhost:1337/tags`. But not so fast, if we try to navigate to either of these endpoints we'll be treated with a `403 Forbidden` message. We haven't made these endpoints publically available.\n\nTo do this, go into the **Roles & Permissions** section of the Strapi dashboard, select the **Public** role and you'll see a list of permissions by feature and content type. By default, they're all disabled. For our demo, let's enable the **count**, **find**, and **findOne** permissions for the **Posts** and **Tags** collections.\n\nNow if you navigate to `localhost:1337/posts` or `localhost:1337:tags` you'll see your content delivered in JSON format.\n\nTo access our content via GraphQL, we'll need to enable the GraphQL plugin. 
Navigate to the **Marketplace** tab in the Strapi dashboard and download the GraphQL plugin. It will take a couple of minutes to download and install the plugin. Once it is installed, you can access all of your content by navigating to `localhost:1337/graphql`. You'll have to ensure that the Roles & Permissions for the different collections are available, but if you've done the RESTful example above they will be.\n\nWe get everything we'd expect with the GraphQL plugin. We can view our entire schema and docs, run queries and mutations and it all just works. Now we can easily consume this data with any front-end. Say we're building an app with Gatsby or Next.js, we can call our endpoint, get all the data and generate all the pages ahead of time, giving us best-in-class performance as well as content management.\n\n## Putting It All Together\n\nIn this tutorial, I introduced you to Strapi, one of the best open-source Headless CMS frameworks around. I covered how you can use Strapi with MongoDB to have a permanent data store, and I covered various features of the Strapi framework. Finally, I showed you how to access your Strapi content with both RESTful APIs as well as GraphQL.\n\nIf you would like to see an article on how we can consume our Strapi content in a static website generator like Gatsby or Hugo, or how you can extend Strapi for your use case let me know in the MongoDB Community forums, and I'll be happy to do a write-up!\n\n>If you want to safely store your Strapi content in MongoDB, sign up for MongoDB Atlas for free.\n\nHappy content creation!\n", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript", "Node.js"], "pageDescription": "Learn how to use MongoDB Atlas as a data store for your Strapi Headless CMS.", "contentType": "Tutorial"}, "title": "Use MongoDB as the Data Store for your Strapi Headless CMS", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/integrate-atlas-application-services-logs-datadog-aws", "action": "created", "body": "# Integrate Atlas Application Services Logs into Datadog on AWS\n\nDatadog\u00a0is a well-known monitoring and security platform for cloud applications. Datadog\u2019s software-as-a-service (SaaS) platform integrates and automates infrastructure monitoring, application performance monitoring, and\u00a0log management\u00a0to provide unified, real-time observability of a customer\u2019s entire technology stack.\n\nMongoDB Atlas on Amazon Web Services (AWS) already supports\u00a0easy integration with Datadog\u00a0for alerts and events right within the Atlas UI (select\u00a0the three vertical dots \u2192 Integration \u2192 Datadog). With the Log Forwarding feature, it's now possible to send Atlas Application Services logs to Datadog. This blog outlines the configuration steps necessary as well as strategies for customizing the view to suit the need.\n\n**Atlas Application Services**\u00a0(formerly MongoDB Realm) is a set of enhanced services that complement the Atlas database to simplify the development of backend applications. 
App Services-based apps can react to changes in your MongoDB Atlas data, connect that data to other systems, and scale to meet demand without the need to manage the associated server infrastructure.\n\nApp Services provides user authentication and management, schema validation and data access rules, event-driven serverless functions, secure client-side queries with HTTPS Endpoints, and best of all, synchronization of data across devices with the Realm Mobile SDK.\n\nWith App Services and Datadog, you can simplify the end-to-end development and monitoring of your application. **Atlas App Services** specifically enables the forwarding of logs to Datadog via a serverless function that can also give more fine-grained control over how these logs appear in Datadog, via customizing the associated tags.\n\n## Atlas setup\n\nWe assume that you already have an Atlas account. If not, you can sign up for a\u00a0free account\u00a0on MongoDB or the\u00a0AWS Marketplace. Once you have an Atlas account, if you haven't had a chance to try App Services with Atlas, you can follow one of our\u00a0tutorials\u00a0to get a running application working quickly.\n\nTo initiate custom log forwarding, follow the instructions for App Services to configure\u00a0log forwarding. Specifically, choose the \u201cTo Function\u201d option:\n\nWithin Atlas App Services, we can create a custom function that provides the mapping and ingesting logs into Datadog. Please note the intake endpoint URL from Datadog first, which is\u00a0documented by Datadog.\n\nHere\u2019s a sample function that provides that basic capability:\n\n```\nexports = async function(logs) {\n // `logs` is an array of 1-100 log objects\n // Use an API or library to send the logs to another service.\n await context.http.post({\n url: \"https://http-intake.logs.datadoghq.com/api/v2/logs\",\n headers: {\n \"DD-API-KEY\": \"XXXXXX\"],\n \"DD-APPLICATION-KEY\": [\"XXXXX\"], \n \"Content-Type\": [\"application/json\"]\n },\n body: logs.map(x => {return {\n \"ddsource\": \"mongodb.atlas.app.services\",\n \"ddtags\": \"env:test,user:igor\",\n \"hostname\": \"RealmApp04\",\n \"service\": \"MyRealmService04\",\n \"message\" : JSON.stringify(x)\n }}),\n encodeBodyAsJSON: true\n });\n}\n```\n\nOne of the capabilities of the snippet above is that it allows you to modify the function to supply your Datadog\u00a0[API and application keys. This provides the capability to customize the experience and provide the appropriate context for better observability. You can change\u00a0ddtags, the hostname, and service parameters to reflect your organization, team, environment, or application structure. These parameters will appear as facets helping with filtering the logs.\n\nNote: Datadog supports log ingestion pipelines that allow it to better parse logs. In order for the MongoDB log pipeline to work, your *ddsource* must be set to\u00a0`mongodb.atlas.app.services`.\n\n## Viewing the logs in Datadog\n\nOnce log forwarding is configured, your Atlas App Services logs will appear in the Datadog Logs module.\n\nYou can click on an individual log entry to see the detailed view:\n\n## Conclusion\n\nIn this blog, we showed how to configure log forwarding for Atlas App Services logs. 
If you would like to try configuring log forwarding yourself, sign up for a\u00a014-day free trial of Datadog\u00a0if you don\u2019t already have an account.\u00a0To try Atlas App Services in AWS Marketplace, sign up for a\u00a0free account.", "format": "md", "metadata": {"tags": ["Atlas", "AWS"], "pageDescription": "With the Log Forwarding feature, it's now possible to send Atlas Application Services logs to Datadog. This blog outlines the configuration steps necessary as well as strategies for customizing the view to suit the need.", "contentType": "Tutorial"}, "title": "Integrate Atlas Application Services Logs into Datadog on AWS", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/connectors/mastering-ops-manager", "action": "created", "body": "# Mastering MongoDB Ops Manager on Kubernetes\n\nThis article is part of a three-parts series on deploying MongoDB across multiple Kubernetes clusters using the operators.\n\n- Deploying the MongoDB Enterprise Kubernetes Operator on Google Cloud\n\n- Mastering MongoDB Ops Manager\n\n- Deploying MongoDB Across Multiple Kubernetes Clusters With MongoDBMulti\n\nManaging MongoDB deployments can be a rigorous task, particularly when working with large numbers of databases and servers. Without the right tools and processes in place, it can be time-consuming to ensure that these deployments are running smoothly and efficiently. One significant issue in managing MongoDB clusters at scale is the lack of automation, which can lead to time-consuming and error-prone tasks such as backups, recovery, and upgrades. These tasks are crucial for maintaining the availability and performance of your clusters.\n\nAdditionally, monitoring and alerting can be a challenge, as it may be difficult to identify and resolve issues with your deployments. To address these problems, it's essential to use software that offers monitoring and alerting capabilities. Optimizing the performance of your deployments also requires guidance and support from the right sources.\n\nFinally, it's critical for your deployments to be secure and compliant with industry standards. To achieve this, you need features that can help you determine if your deployments meet these standards.\n\nMongoDB Ops Manager is a web-based application designed to assist with the management and monitoring of MongoDB deployments. It offers a range of features that make it easier to deploy, manage, and monitor MongoDB databases, such as:\n\n- Automated backups and recovery: Ops Manager can take automated backups of your MongoDB deployments and provide options for recovery in case of failure.\n\n- Monitoring and alerting: Ops Manager provides monitoring and alerting capabilities to help identify and resolve issues with your MongoDB deployments.\n\n- Performance optimization: Ops Manager offers tools and recommendations to optimize the performance of your MongoDB deployments.\n\n- Upgrade management: Ops Manager can help you manage and plan upgrades to your MongoDB deployments, including rolling upgrades and backups to ensure data availability during the upgrade process.\n\n- Security and compliance: Ops Manager provides features to help you secure your MongoDB deployments and meet compliance requirements.\n\nHowever, managing Ops Manager can be a challenging task that requires a thorough understanding of its inner workings and how it interacts with the internal MongoDB databases. 
It is necessary to have the knowledge and expertise to perform upgrades, monitor it, audit it, and ensure its security. As Ops Manager is a crucial part of managing the operation of your MongoDB databases, its proper management is essential.\n\nFortunately, the MongoDB Enterprise Kubernetes Operator enables us to run Ops Manager on Kubernetes clusters, using native Kubernetes capabilities to manage Ops Manager for us, which makes it more convenient and efficient.\n\n## Kubernetes: MongoDBOpsManager custom resource\n\nThe MongoDB Enterprise Kubernetes Operator is software that can be used to deploy Ops Manager and MongoDB resources to a Kubernetes cluster, and it's responsible for managing the lifecycle of each of these deployments. It has been developed based on years of experience and expertise, and it's equipped with the necessary knowledge to properly install, upgrade, monitor, manage, and secure MongoDB objects on Kubernetes.\n\nThe Kubernetes Operator uses the MongoDBOpsManager custom resource to manage Ops Manager objects. It constantly monitors the specification of the custom resource for any changes and, when changes are detected, the operator validates them and makes the necessary updates to the resources in the Kubernetes cluster.\n\nMongoDBOpsManager custom resources specification defines the following Ops Manager components:\n\n- The Application Database\n\n- The Ops Manager application\n\n- The Backup Daemon\n\nWhen you use the Kubernetes Operator to create an instance of Ops Manager, the Ops Manager MongoDB Application Database will be deployed as a replica set. It's not possible to configure the Application Database as a standalone database or a sharded cluster.\n\nThe Kubernetes Operator automatically sets up Ops Manager to monitor the Application Database that powers the Ops Manager Application. It creates a project named\u00a0 `-db` to allow you to monitor the Application Database deployment. While Ops Manager monitors the Application Database deployment, it does not manage it.\n\nWhen you deploy Ops Manager, you need to configure it. This typically involves using the configuration wizard. However, you can bypass the configuration wizard if you set certain essential settings in your object specification before deployment. I will demonstrate that in this post.\n\nThe Operator automatically enables backup. It deploys a StatefulSet, which consists of a single pod, to host the Backup Daemon Service and creates a Persistent Volume Claim and Persistent Volume for the Backup Daemon's head database. The operator uses the Ops Manager API to enable the Backup Daemon and configure the head database.\n\n## Getting started\n\nAlright, let's get started using the operator and build something! For this tutorial, we will need the following tools:\u00a0\n\n- gcloud\u00a0\n\n- gke-cloud-auth-plugin\n\n- Helm\n\n- kubectl\n\n- kubectx\n\n- git\n\nTo get started, we should first create a Kubernetes cluster and then install the MongoDB Kubernetes Operator on the cluster. Part 1 of this series provides instructions on how to do so.\n\n> **Note**\n> For the sake of simplicity, we are deploying Ops Manager in the same namespace as our MongoDB Operator. 
In a production environment, you should deploy Ops Manager in its own namespace.\n\n### Environment pre-checks\u00a0\n\nUpon successful creation of a cluster and installation of the operator (described in Part 1), it's essential to validate their readiness for use.\n\n```bash\ngcloud container clusters list\n\nNAME\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 LOCATION \u00a0 \u00a0 \u00a0 MASTER_VERSION\u00a0 \u00a0 NUM_NODES\u00a0 STATUS\\\nmaster-operator \u00a0 \u00a0 us-south1-a\u00a0 \u00a0 1.23.14-gke.1800\u00a0 \u00a0 \u00a0 4\u00a0 \u00a0 \u00a0 RUNNING\n```\n\nDisplay our new Kubernetes full cluster name using `kubectx`.\n\n```bash\nkubectx\n```\n\nYou should see your cluster listed here. Make sure your context is set to master cluster.\n\n```bash\nkubectx $(kubectx | grep \"master-operator\" | awk '{print $1}')\n```\n\nIn order to continue this tutorial, make sure that the operator is in the `running`state.\n\n```bash\nkubectl get po -n \"${NAMESPACE}\"\n\nNAME\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 READY \u00a0 STATUS \u00a0 RESTARTS \u00a0 AGE\\\nmongodb-enterprise-operator-649bbdddf5 \u00a0 1/1\u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 7m9s\n```\n\n## Using the MongoDBOpsManager CRD\n\nCreate a secret containing the username and password on the master Kubernetes cluster for accessing the Ops Manager user interface after installation.\n\n```bash\nkubectl -n \"${NAMESPACE}\" create secret generic om-admin-secret \\\n --from-literal=Username=\"opsmanager@example.com\" \\\n --from-literal=Password=\"p@ssword123\" \\\n --from-literal=FirstName=\"Ops\" \\\n --from-literal=LastName=\"Manager\"\n```\n\n### Deploying Ops Manager\u00a0\n\nThen, we can deploy Ops Manger on the master Kubernetes cluster with the help of `opsmanagers` Custom Resource, creating `MongoDBOpsManager` object, using the following manifest:\n\n```bash\nOM_VERSION=6.0.5\nAPPDB_VERSION=5.0.5-ent\nkubectl apply -f - <\u00a0 \u00a0 \u00a0 27017/TCP\nops-manager-svc \u00a0 \u00a0 ClusterIP\u00a0 \u00a0 None\u00a0 \u00a0 \u00a0 \u00a0 ", "format": "md", "metadata": {"tags": ["Connectors", "Kubernetes"], "pageDescription": "Learn how to deploy the MongoDB Ops Manager in a Kubernetes cluster with the MongoDB Kubernetes Operators.", "contentType": "Tutorial"}, "title": "Mastering MongoDB Ops Manager on Kubernetes", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/analyzing-analyzers-build-search-index-app", "action": "created", "body": "# Analyzing Analyzers to Build the Right Search Index for Your App\n\n**\u201cWhy am I not getting the right search results?\u201d**\n\nSo, you\u2019ve created your first search query. You are familiar with various Atlas Search operators. You may have even played around with score modifiers to sort your search results. Yet, typing into that big, beautiful search bar still isn\u2019t bringing you the results you expect from your data. Well, It just might be your search index definition. Or more specifically, your analyzer.\n\nYou may know Lucene analyzers are important\u2014but why? How do they work? How do you choose the right one? If this is you, don\u2019t worry. In this tutorial, we will analyze analyzers\u2014more specifically, Atlas Search indexes and the Lucene analyzers used to build them. 
We\u2019ll define what they are exactly and how they work together to bring you the best results for your search queries.\n\nExpect to explore the following questions:\n\n* What is a search index and how is it different from a traditional MongoDB index?\n* What is an analyzer? What kinds of analyzers are built into Atlas and how do they compare to affect your search results?\n* How can you create an Atlas Search index using different search analyzers?\n\nWe will even offer you a\u00a0nifty web tool\u00a0as a resource to demonstrate a variety of different use cases with analyzers and allow you to test your own sample.\n\nBy the end, cured of your search analysis paralysis, you\u2019ll brim with the confidence and knowledge to choose the right analyzers to create the best Atlas Search index for your application.\n\n## What is an index?\n\nSo, what\u2019s an index? Generally, indexes are special data structures that enable ultra-fast querying and retrieval of documents based on certain identifiers.\u00a0\n\nEvery Atlas Search query requires a search index. Actually, it\u2019s the very first line of every Atlas Search query.\n\nIf you don\u2019t see one written explicitly, the query will use the default search index. Whereas a typical MongoDB index is a\u00a0b-tree index, Atlas Search uses inverted indexes, which are much faster, flexible, and more powerful for text.\n\nLet\u2019s explore the differences by walking through an example. Say we have a set of MongoDB documents that look like this:\n\nEach document has an \u201c\\_id\u201d field as a unique identifier for every MongoDB document and the \u201cs\u201d field of text. MongoDB uses the \\_id field to create the collection\u2019s unique default index. Developers may also create other\u00a0MongoDB indexes\u00a0specific to their application\u2019s querying needs.\n\nIf we were to search through these documents\u2019 sentence fields for the text:\n\n**\u201cIt was the best of times, it was the worst of times.\u201d**\n-A Tale of Two Cities, Charles Dickens\n\nAtlas Search would break down this text data into these seven individual terms for our inverted index :\n\n**it - was - the - best - of - times - worst**\u00a0\n\nNext, Atlas Search would map these terms back to the original MongoDB documents\u2019 \\_id fields as seen below. The word \u201cit\u201d can be found in document with \\_id 4.\u00a0 Find \u201cthe\u201d\u00a0 in documents 2, 3, 4, etc.\n\nSo essentially, an inverted index is a mapping between terms and which documents contain those terms. The inverted index contains the term and the \\_id of the document, along with other relevant metadata, such as the position of the term in the document.\n\nYou can think about the inverted index as analogous to the index you might find in the back of the book. Remember how book indexes contain words or expressions and list the pages in the book where they are found? \u00a0\ud83d\udcd6\ud83d\udcda\n\nWell, these inverted indexes use these terms to point to the specific documents in your database.\n\nImagine if you are looking for Lady MacBeth\u2019s utterance of \u201cOut, damned spot\u201d in Shakespeare\u2019s MacBeth. You wouldn\u2019t start at page one and read through the entire play, would you? I would go straight to the index to pinpoint it in Act 5, Scene 1, and even the exact page.\n\nInverted indexes make text searches much faster than a traditional search because you are not searching through every single document at query time. 
You are instead querying the search index which was mapped upon index creation. Then, following the roadmap with the \\_id to the exact data document(s) is fast and easy.\n\n## What are analyzers?\n\nHow does our metaphorical book decide which words or expressions to list in the back? Or for Atlas Search specifically, how do we know what terms to put in our Search indexes? Well, this is where *analyzers* come into play.\n\nTo make our corpus of data searchable, we transform it into terms or \u201ctokens\u201d through a process called \u201canalysis\u201d done by analyzers.\n\nIn our Charles Dickens example, we broke apart, \u201cIt was the best of times, it was the worst of times,\u201d by removing the punctuation, lowercasing the words, and breaking the text apart at the non-letter characters to obtain our terms.\n\nThese rules are applied by the lucene.standard analyzer, which is Atlas Search\u2019s default analyzer.\n\nAtlas Search offers other analyzers built-in, too.\n\nA whitespace analyzer will keep your casing and punctuation but will split the text into tokens at only the whitespaces.\n\nThe English analyzer takes a bit of a heavier hand when tokenizing.\n\nIt removes common STOP words for English. STOP words are common words like \u201cthe,\u201d\u00a0 \u201ca,\u201d\u00a0 \u201cof,\u201d and\u00a0 \u201cand\u201d that you find often but may make the results of your searches less meaningful. In our Dickens example, we remove the \u201cit,\u201d \u201cwas,\u201d and \u201cthe.\u201d Also, it understands plurals and \u201cstemming\u201d words to their most reduced form. Applying the English analyzer leaves us with only the following three tokens:\n\n**\\- best \\- worst \\- time**\n\nWhich maps as follows:\n\nNotice you can\u2019t find \u201cthe\u201d or \u201cof\u201d with the English analyzer because those stop words were removed in the analysis process.\n## The Analyzer Analyzer\nInteresting, huh? \ud83e\udd14\u00a0\n\nWant a\u00a0 deeper analyzer analysis? Check out\u00a0AtlasSearchIndexes.com. Here you\u2019ll find a basic tool to compare some of the various analyzers built into Atlas:\n\n| | |\n| --- | --- |\n| **Analyzer** | **Text Processing Description** |\n| Standard | Lowercase, removes punctuation, keeps accents |\n| English | Lowercase, removes punctuation and stop words, stems to root, pluralization, and possessive |\n| Simple | Lowercase, removes punctuation, separates at non-letters |\n| Whitespace | Keeps case and punctuation, separates at whitespace |\n| Keyword | Keeps everything exactly intact |\n| French | Similar to English, but in French =-) |\n\nBy toggling across all the different types of analyzers listed in the top bar, you will see what I call the basic golden rules of each one. We\u2019ve discussed standard, whitespace, and English. The simple analyzer removes punctuation and lowercases and separates at non-letters. \u201cKeyword\u201d is the easiest for me to remember because everything needs to match exactly and returns a single token. Case, punctuation, everything. This is really helpful for when you expect a specific set of options\u2014checkboxes in the application UI, for example.\u00a0\n\nWith our golden rules in mind, select one the sample texts offered and see how they are transformed differently with each analyzer. We have a basic string, an email address, some html, and a French sentence.\n\nTry searching for particular terms across these text samples by using the input box. 
Do they produce a match?\n\nTrying our first sample text:\n\n**\u201cAs I was walking to work, I listened to two of Mike Lynn\u2019s podcasts, and I dropped my keys.\u201d**\n\nNotice by the yellow highlighting how the English analyzer allows you to recognize the stems \u201cwalk\u201d and \u201clisten,\u201d the singular \u201cpodcast\u201d and \u201ckey.\u201d\u00a0\n\nHowever, none of those terms will match with any other analyzer:\n\nParlez-vous fran\u00e7ais? Comment dit-on \u201cstop word\u201d en fran\u00e7ais?\n\nEmail addresses can be a challenge. But now that you understand the rules for analyzers, try looking for\u00a0 \u201cmongodb\u201d email addresses (or Gmail, Yahoo, \u201cfill-in-the-corporate-blank.com\u201d). I can match \u201cmongodb\u201d with the simple analyzer, but no other ones.\n\n## Test your token knowledge on your own data\n\nNow that you have acquired some token knowledge of analyzers, test it on your own data on the\u00a0Tokens\u00a0page of atlassearchindexes.com.\n\nWith our Analyzer Analyzer in place to help guide you, you can input your own sample text data in the input bar and hit submit \u2705. Once that is done, input your search term and choose an analyzer to see if there is a result returned.\n\nMaybe you have some logging strings or UUIDs to try?\n\nAnalyzers matter. If you aren\u2019t getting the search results you expect, check the\u00a0 analyzer used in your index definition.\n\n## Create an Atlas Search index\n\nArmed with our deeper understanding of analyzers, we can take the next step in our search journey and create a search index in Atlas using different analyzers.\n\nI have a\u00a0movie search engine application\u00a0that uses the sample\\_mflix.movies collection in Atlas, so let\u2019s go to that collection in my Atlas UI, and then to the Search Indexes tab.\n\n> **Tip! You can\u00a0download this sample data, as well as other sample datasets on all Atlas clusters, including the free tier.**\n\nWe can create the search index using the Visual Editor. When creating the Atlas Search index, we can specify which analyzer to use. By default, Atlas Search uses the lucene.standard analyzer and maps every field dynamically.\n\nMapping dynamically will automatically index all the fields of supported type.\n\nThis is great if your schema evolves often or if you are experimenting with Atlas Search\u2014but this takes up space. Some index configuration options\u2014like autocomplete, synonyms, multi analyzers, and embedded documents\u2014can lead to search indexes taking up a significant portion of your disk space, even more than the dataset itself. Although this is expected behavior, you might feel it with performance, especially with larger collections. If you are only searching across a few fields, I suggest you define your index to map only for those fields.\u00a0\n\n> Pro tip! To improve search query performance and save disk space, refine your index to:\n> * Map only the fields your application needs.\n> * Set the\u00a0store\u00a0option to\u00a0false\u00a0when specifying a\u00a0string\u00a0type in an index definition.\n\nYou can also choose different analyzers for different fields\u2014and you can even apply more than one analyzer to the same field.\n\nPro tip! You can also use your own custom analyzer\u2014but we\u2019ll save custom analyzers for a different day.\n\nClick **Refine** to customize our index definition.\n\nI\u2019ll turn off dynamic mapping and Add Field to map the title to standard analyzer. 
Then, add the fullplot field to map with the\u00a0**english analyzer**. CREATE!\n\nAnd now, after just a few clicks, I\u00a0 have a search index named \u2018default\u2019 which has stored in it the tokenized results of the standard analysis on the title field and the tokenized results of the lucene.english analyzer on the full plot field.\n\nIt\u2019s just that simple.\n\nAnd just like that, now I can use this index that took a minute to create to search these fields in my movies collection!\u00a0\ud83c\udfa5\ud83c\udf7f\n## Takeaways\n\nSo, when configuring your search index:\n\n* Think about your data first. Knowing your data, how will you be querying it? What do you want your tokens to be?\n* Then, choose your analyzer accordingly.\n* Specify the best analyzer for your use case in your Atlas Search index definition.\n* Specify that index when writing your search query.\n\nYou can create many different search indexes for your use case, but remember that you can only use one search index per search query.\n\nSo, now that we have analyzed the analyzers, you know why picking the right analyzer matters. You can create the most efficient Atlas Search index for accurate results and optimal results. So go forth, search-warrior! Type in your application\u2019s search box with confidence, not crossed fingers.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "This is an in-depth explanation of various Atlas Search analyzers and indexes to help you build the best full-text search experience for your MongoDB application.", "contentType": "Tutorial"}, "title": "Analyzing Analyzers to Build the Right Search Index for Your App", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/rule-based-access-atlas-data-api", "action": "created", "body": "# Rule-Based Access to Atlas Data API\n\nMongoDB Atlas App Services\u00a0have extensive serverless backend capabilities, such as\u00a0Atlas Data API, that simply provide an endpoint for the read and write access in a specific cluster that you can customize access to later. You can enable authentication by using one of the\u00a0authentication providers\u00a0that are available in Atlas App Services. 
And, customize the data access on the collections, based on the rules that you can define with the\u00a0App Services Rules.\n\nIn this blog post, I\u2019ll walk you through how you can expose the data of a collection through Atlas Data API to three users that are in three different groups with different permissions.\n\n## Scenario\n\n* Dataset: We have a simple dataset that includes movies, and we will expose the movie data through Atlas Data API.\n* We have three users that are in three different groups.\n * Group 01 has access to all the fields on the **movie** collection in the **sample_mflix** database and all the movies available in the collection.\n * Group 02 has access only to the fields **title, fullplot, plot,** and **year** on the **movie** collection in the **sample_mflix** database and to all the movies available in the collection.\n * Group 03 has access only to the fields **title, fullplot, plot,** and **year** on the **movie** collection in the **sample_mflix** database and to the movies where the **year** field is greater than **2000**.\n\nThree users given in the scenario above will have the same HTTPS request, but they will receive a different result set based on the rules that are defined in App Services Rules.\n## Prerequisites\n* Provision an Atlas cluster (even the tier M0 should be enough for the feature to be tested).\n* After you\u2019ve provisioned the cluster, load the sample data set by following the steps.\n\n## Steps to set up\nHere's how you can get started!\n### Step 1: Create an App Services Application\nAfter you\u2019ve created a cluster and loaded the sample dataset, you can create an application in App Services. Follow the steps to create a new App Services Application if you haven\u2019t done so already.\n\nI used the name \u201cAPITestApplication\u201d and chose the cluster \u201cAPITestCluster\u201d that I\u2019ve already loaded the sample dataset into. \n\n### Step 2: Enable Atlas Data API\nAfter you\u2019ve created the App Services application, navigate to the **HTTPS Endpoints** on the left side menu and click the **Data API** tab, as shown below.\n\nHit the button **Enable the Data API**.\n\nAfter that, you will see that Data API has been enabled. Scroll down on the page and find the **User Settings**. Enable **Create User Upon Authentication**. **Save** it and then **Deploy** it. \n\nNow, your API endpoint is ready and accessible. But if you test it, you will get the following authentication error, since no authentication provider has been enabled.\n\n```bash\ncurl --location --request POST 'https://ap-south-1.aws.data.mongodb-api.com/app/apitestapplication-ckecj/endpoint/data/v1/action/find' \\\n> --header 'Content-Type: application/json' \\\n> --data-raw '{\n> \"dataSource\": \"mongodb-atlas\",\n> \"database\": \"sample_mflix\",\n> \"collection\": \"movies\",\n> \"limit\": 5\n> }'\n{\"error\":\"no authentication methods were specified\",\"error_code\":\"InvalidParameter\",\"link\":\"https://realm.mongodb.com/groups/5ca48430014b76f34448bbcf/apps/63a8bb695e56d7c41ab77da6/logs?co_id=63a8be8c0b3a0268511a7525\"}\n```\n\n### Step 3.1: Enable JWT-based authentication\nNavigate to the homepage of the App Services application. Click **Authentication** on the left-hand side menu and click the **EDIT** button of the row where the provider is **Custom JWT Authentication**.\n\nJWT (JSON Web Token) provides a token-based authentication where a token is generated by the client based on an agreed secret and cryptography algorithm. 
After the client transmits the token, the server validates the token with the agreed secret and cryptography algorithm and then processes client requests if the token is valid. \n\nIn the configuration options of the Custom JWT Authentication, fill out the options with the following:\n\n* Enable the Authentication Provider (**Provider Enabled** must be turned on).\n* Keep the verification method as is (**Manually specify signing keys**).\n* Keep the signing algorithm as is (**HS256**).\n* Add a new signing key.\n * Provide the signing key name.\n * For example, **APITestJWTSigningKEY**.\n * Provide the secure key content (between 32 and 512 characters) and note it somewhere secure.\n * For example, **FipTEgYJ6WfUEhCJq3e@pm8-TkE9*UZN**.\n* Add two fields in the metadata fields.\n * The path should be **metadata.group** and the corresponding field should be **group**.\n * The path should be **metadata.name** and the corresponding field should be **name**.\n* Keep the audience field as is (empty).\n\nBelow, you can find how the JWT Authentication Provider form has been filled accordingly.\n\n**Save** it and then **Deploy** it.\n\nAfter it\u2019s deployed, you can see the secret that has been created in the App Services Values, that can be accessible on the left side menu by clicking **Values**.\n\n### Step 3.2: Test JWT authentication\nNow, we need an encoded JWT to pass it to App Services Data API to authenticate and consequently access the underlying data. \n\nYou can have a separate external authentication service that can provide a signed JWT that you can use in App Services Authentication. However, for the sake of simplicity, we\u2019ll generate our own fake JWTs through jwt.io. \n\nThese are the steps to generate an encoded JWT:\n\n* Visit jwt.io.\n* On the right-hand side in the section **Decoded**, we can fill out the values. On the left-hand side, the corresponding **Encoded** JWT will be generated.\n* In the **Decoded** section:\n * Keep the **Header** section same. \n * In the **Payload** section, set the following fields:\n * Sub. \n * Represents owner of the token.\n * Provide value unique to the user.\n * Metadata. \n * Represents metadata information regarding this token and can be used for further processing in App Services.\n * We have two sub fields here.\n * Name. \n * Represents the username of the client that will initiate the API request.\n * This information will be used as the username in App Services.\n * Group.\n * Represents the group information of the client that we\u2019ll use later for rule-based access.\n * Exp.\n * Represents when the token is going to expire.\n * Provide a future time to keep expiration impossible during our tests.\n * Aud.\n * Represents the name of the App Services Application that you can get from the homepage of your application in App Services.\n * In the **Verify Signature** section:\n * Provide the same secret that you\u2019ve already provided while enabling Custom JWT Authentication in the Step 3.1.\n\nBelow, you can find how the values have been filled out in the **Decoded** section and the corresponding **Encoded** JWT that has been generated. 
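\n\nFor reference, a **Payload** filled out with the fields above (using the sample user **user01** in **group01** and the application name used throughout this walkthrough, so these are the same values you will see again in Step 4.1) would look something like the following sketch:\n\n```json\n{\n  \"sub\": \"001\",\n  \"metadata\": {\n    \"name\": \"user01\",\n    \"group\": \"group01\"\n  },\n  \"exp\": 1896239022,\n  \"aud\": \"apitestapplication-ckecj\"\n}\n```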
\n\nCopy the generated **JWT** from the **Encoded** section and pass it to the header section of the HTTP request, as shown below.\n\n```bash\ncurl --location --request POST 'https://ap-south-1.aws.data.mongodb-api.com/app/apitestapplication-ckecj/endpoint/data/v1/action/find' --header 'jwtTokenString: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIwMDEiLCJtZXRhZGF0YSI6eyJuYW1lIjoidXNlcjAxIiwiZ3JvdXAiOiJncm91cDAxIn0sImV4cCI6MTg5NjIzOTAyMiwiYXVkIjoiYXBpdGVzdGFwcGxpY2F0aW9uLWNrZWNqIn0.cq5Dr5fJ-BD1mBJia697oWVg_yWPua_NT5roUlxihYE' --header 'Content-Type: application/json' --data-raw '{\n \"dataSource\": \"mongodb-atlas\",\n \"database\": \"sample_mflix\",\n \"collection\": \"movies\",\n \"limit\": 5\n}'\n\"Failed to find documents: FunctionError: no rule exists for namespace 'sample_mflix.movies\"\n```\n\nWe get the following error: \u201c**no rule exists for namespace**.\u201d Basically, we were able to authenticate to the application. However, since there were no App Services Rules defined, we were not able to access any data. \n\nEven though the request is not successful due to the no rule definition, you can check out the App Users page to list authenticated users as shown below. **user01** was the name of the user that was provided in the **metadata.name** field of the JWT.\n\n### Step 4.1: Create a Role in App Services Rules\nSo far, we have enabled Atlas Data API and Custom JWT Authentication, and we were able to authenticate with the username **user01** who is in the group **group01**. These two metadata information (user and group) were filled in the **metadata** field of the JWT. Remember the payload of the JWT:\n\n```json\n{\n \"sub\": \"001\",\n \"metadata\": {\n \"name\": \"user01\",\n \"group\": \"group01\"\n },\n \"exp\": 1896239022,\n \"aud\" : \"apitestapplication-ckecj\"\n}\n```\nNow, based on the **metadata.group** field value, we will show filtered or unfiltered movie data. \n\nLet\u2019s remember the rules that we described in the Scenario:\n\n* We have three users that are in three different groups.\n * Group 01 has access to all the fields on the **movie** collection in the **sample_mflix** database and all the movies available in the collection.\n * Group 02 has access only to the fields **title**, **fullplot**, **plot**, and **year** on the **movie** collection in the **sample_mflix** database and to all the movies available in the collection.\n * Group 03 has access only to the fields **title**, **fullplot**, **plot**, and **year** on the **movie** collection in the **sample_mflix** database and to the movies where the **year** field is greater than **2000**.\n\nLet\u2019s create a role that will have access to all of the fields. This role will be for the users that are in Group 01. \n\n* Navigate the **Rules** section on the left-hand side of the menu in App Services.\n* Choose the collection **sample_mflix.movies** on the left side of the menu.\n* Click **Skip** (**Start from Scratch**) on the right side of the menu, as shown below.\n\n**Role name**: Give it a proper role name. We will use **fullReadAccess** as the name for this role.\n**Apply when**: Evaluation criteria of this role. In other words, it represents when this role is evaluated. Provide the condition accordingly. **%%user.data.group** matches the **metadata.group** information that is represented in JWT. 
We\u2019ve configured this mapping in Step 3.1.\n**Document Permissions**: Allowed activities for this role.\n**Field Permissions**: Allowed fields to be read/write for this role.\n\nYou can see below how it was filled out accordingly. \n\nAfter you\u2019ve saved and deployed it, we can test the curl command again, as shown below:\n\n```\ncurl --location --request POST 'https://ap-south-1.aws.data.mongodb-api.com/app/apitestapplication-ckecj/endpoint/data/v1/action/find' \\\n> --header 'jwtTokenString: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIwMDEiLCJtZXRhZGF0YSI6eyJuYW1lIjoidXNlcjAxIiwiZ3JvdXAiOiJncm91cDAxIn0sImV4cCI6MTg5NjIzOTAyMiwiYXVkIjoiYXBpdGVzdGFwcGxpY2F0aW9uLWNrZWNqIn0.cq5Dr5fJ-BD1mBJia697oWVg_yWPua_NT5roUlxihYE' \\\n> --header 'Content-Type: application/json' \\\n> --data-raw '{\n> \"dataSource\": \"mongodb-atlas\",\n> \"database\": \"sample_mflix\",\n> \"collection\": \"movies\",\n> \"limit\": 5\n> }' | python -m json.tool\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 6192 0 6087 100 105 8072 139 --:--:-- --:--:-- --:--:-- 8245\n{\n \"documents\": \n {\n \"_id\": \"573a1390f29313caabcd4135\",\n \"plot\": \"Three men hammer on an anvil and pass a bottle of beer around.\",\n \"genres\": [\n \"Short\"\n ],\n \"runtime\": 1,\n \"cast\": [\n \"Charles Kayser\",\n \"John Ott\"\n ],\n \"num_mflix_comments\": 0,\n```\n\nNow the execution of the HTTPS request is successful. It returns five records with all the available fields in the documents.\n\n### Step 4.2: Create another role in App Services Rules\n\nNow we\u2019ll add another role that only has access to four fields (**title**, **fullplot**, **plot**, and **year**) on the collection **sample_mflix.movies**.\n\nIt is similar to what we\u2019ve created in [Step 4.1, but now we\u2019ve defined which fields are accessible to this role, as shown below.\n\n**Save** it and **Deploy** it.\n\nCreate another JWT for the user **user02** that is in **group02**, as shown below.\n\nPass the generated Encoded JWT to the curl command:\n\n```\ncurl --location --request POST 'https://ap-south-1.aws.data.mongodb-api.com/app/apitestapplication-ckecj/endpoint/data/v1/action/find' \\\n> --header 'jwtTokenString: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIwMDIiLCJtZXRhZGF0YSI6eyJuYW1lIjoidXNlcjAyIiwiZ3JvdXAiOiJncm91cDAyIn0sImV4cCI6MTg5NjIzOTAyMiwiYXVkIjoiYXBpdGVzdGFwcGxpY2F0aW9uLWNrZWNqIn0.llfSR9rLSoSTb3LGwENcgYvKeIu3XZugYbHIbqI29nk' \\\n> --header 'Content-Type: application/json' \\\n> --data-raw '{\n> \"dataSource\": \"mongodb-atlas\",\n> \"database\": \"sample_mflix\",\n> \"collection\": \"movies\",\n> \"limit\": 5\n> }' | python -m json.tool\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 3022 0 2917 100 105 3363 121 --:--:-- --:--:-- --:--:-- 3501\n{\n \"documents\": \n {\n \"_id\": \"573a1390f29313caabcd4135\",\n \"plot\": \"Three men hammer on an anvil and pass a bottle of beer around.\",\n \"title\": \"Blacksmith Scene\",\n \"fullplot\": \"A stationary camera looks at a large anvil with a blacksmith behind it and one on either side. The smith in the middle draws a heated metal rod from the fire, places it on the anvil, and all three begin a rhythmic hammering. After several blows, the metal goes back in the fire. One smith pulls out a bottle of beer, and they each take a swig. 
Then, out comes the glowing metal and the hammering resumes.\",\n \"year\": 1893\n },\n {\n \"_id\": \"573a1390f29313caabcd42e8\",\n \"plot\": \"A group of bandits stage a brazen train hold-up, only to find a determined posse hot on their heels.\",\n \"title\": \"The Great Train Robbery\",\n \"fullplot\": \"Among the earliest existing films in American cinema - notable as the first film that presented a narrative story to tell - it depicts a group of cowboy outlaws who hold up a train and rob the passengers. They are then pursued by a Sheriff's posse. Several scenes have color included - all hand tinted.\",\n \"year\": 1903\n },\n\u2026\n```\n\nNow the user in **group02** has access to only the four fields (**title**, **plot**, **fullplot**, and **year**), in addition to the **_id** field, as we configured in the role definition of a rule in App Services Rules.\n\n### Step 4.3: Updating a role and a creating a filter in App Services Rules\nNow we\u2019ll update the existing role that we\u2019ve created in [Step 4.2 by including **group03** to be evaluated, and we will add a filter that restricts access to only the movies where the **year** field is greater than 2000. \n\nUpdate the role (include **group03** in addition to **group02**) that you created in Step 4.2 as shown below.\n\nNow, users that are in **group03** can authenticate and project only the four fields rather than all the available fields. But how can we put a restriction on the filtering based on the value of the **year** field? We need to add a filter.\n\nNavigate to the **Filters** tab in the **Rules** page of the App Services after you choose the **sample_mflix.movies** collection. \n\nProvide the following inputs for the **Filter**:\n\nAfter you\u2019ve saved and deployed it, create a new JWT for the user **user03** that is in the group **group03**, as shown below:\n\nCopy the encoded JWT and pass it to the curl command, as shown below:\n\n```\ncurl --location --request POST 'https://ap-south-1.aws.data.mongodb-api.com/app/apitestapplication-ckecj/endpoint/data/v1/action/find' \\\n> --header 'jwtTokenString: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIwMDMiLCJtZXRhZGF0YSI6eyJuYW1lIjoidXNlcjAzIiwiZ3JvdXAiOiJncm91cDAzIn0sImV4cCI6MTg5NjIzOTAyMiwiYXVkIjoiYXBpdGVzdGFwcGxpY2F0aW9uLWNrZWNqIn0._H5rScXP9xymF7mCDj6m9So1-3qylArHTH_dxqlndwU' \\\n> --header 'Content-Type: application/json' \\\n> --data-raw '{\n> \"dataSource\": \"mongodb-atlas\",\n> \"database\": \"sample_mflix\",\n> \"collection\": \"movies\",\n> \"limit\": 5\n> }' | python -m json.tool\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 4008 0 3903 100 105 6282 169 --:--:-- --:--:-- --:--:-- 6485\n{\n \"documents\": \n {\n \"_id\": \"573a1393f29313caabcdcb42\",\n \"plot\": \"Kate and her actor brother live in N.Y. in the 21st Century. Her ex-boyfriend, Stuart, lives above her apartment. Stuart finds a space near the Brooklyn Bridge where there is a gap in time....\",\n \"title\": \"Kate & Leopold\",\n \"fullplot\": \"Kate and her actor brother live in N.Y. in the 21st Century. Her ex-boyfriend, Stuart, lives above her apartment. Stuart finds a space near the Brooklyn Bridge where there is a gap in time. He goes back to the 19th Century and takes pictures of the place. Leopold -- a man living in the 1870s -- is puzzled by Stuart's tiny camera, follows him back through the gap, and they both ended up in the present day. Leopold is clueless about his new surroundings. 
He gets help and insight from Charlie who thinks that Leopold is an actor who is always in character. Leopold is a highly intelligent man and tries his best to learn and even improve the modern conveniences that he encounters.\",\n \"year\": 2001\n },\n {\n \"_id\": \"573a1398f29313caabceb1fe\",\n \"plot\": \"A modern day adaptation of Dostoyevsky's classic novel about a young student who is forever haunted by the murder he has committed.\",\n \"title\": \"Crime and Punishment\",\n \"fullplot\": \"A modern day adaptation of Dostoyevsky's classic novel about a young student who is forever haunted by the murder he has committed.\",\n \"year\": 2002\n },\n\n\u2026\n```\nNow, **group03** members will receive the movies where the **year** information is greater than 2000, along with only the four fields (**title**, **plot**, **fullplot**, and **year**), in addition to the **_id** field. \n\n## Summary\nMongoDB Atlas App Services provides extensive functionalities to build your back end in a serverless manner. In this blog post, we\u2019ve discussed:\n\n* How we can enable Custom JWT Authentication. \n * How we can map custom content of a JWT to the data that can be consumed in App Services \u2014 for example, managing usernames and the groups of users.\n* How we can restrict data access for the users who have different permissions.\n * We\u2019ve created the following in App Services Rules:\n * Two roles to specify read access on all the fields and only the four fields.\n * One filter to exclude the movies where the year field is not greater than 2000.\n\nCan you add a call-to-action? Maybe directing people to our developer forums?\n\nGive it a free try! Provision an M0 Atlas instance and create a new App Services Application. If you are stuck, let us help you in the [developer forums.\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Atlas Data API provides a serverless API layer on top of your data in Atlas. You can natively configure rule-based access for a set of users with different permissions.", "contentType": "Tutorial"}, "title": "Rule-Based Access to Atlas Data API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/elt-mongodb-data-airbyte", "action": "created", "body": "# ELT MongoDB Data Using Airbyte\n\nAirbyte is an open source data integration platform that provides an easy and quick way to ELT (Extract, Load, and Transform) your data between a plethora of data sources. AirByte can be used as part of a workflow orchestration solution like Apache Airflow to address data movement. In this post, we will install Airbyte and replicate the sample database, \u201csample\\_restaurants,\u201d found in MongoDB Atlas out to a CSV file.\n\n## Getting started\n\nAirbyte is available as a cloud service or can be installed self-hosted using Docker containers. In this post, we will deploy Airbyte locally using Docker.\n\n```\ngit clone https://github.com/airbytehq/airbyte.git\ncd airbyte\ndocker-compose up\n```\n\nWhen the containers are ready, you will see the logo printed in the compose logs as follows:\n\nNavigate to http://localhost:8000 to launch the Airbyte portal. Note that the default username is \u201cadmin\u201d and the password is \u201cpassword.\u201d\n\n## Creating a connection\n\nTo create a source connector, click on the Sources menu item on the left side of the portal and then the \u201cConnect to your first source\u201d button. 
This will launch the New Source page as follows:\n\nType \u201cmongodb\u201d and select \u201cMongoDb.\u201d\n\nThe MongoDB Connector can be used with both self-hosted and MongoDB Atlas clusters.\n\nSelect the appropriate MongoDB instance type and fill out the rest of the configuration information. In this post, we will be using MongoDB Atlas and have set our configuration as follows:\n\n| | |\n| --- | --- |\n| MongoDB Instance Type | MongoDB Atlas |\n| Cluster URL | demo.ikyil.mongodb.net |\n| Database Name | sample_restaurants |\n| Username | ab_user |\n| Password | ********** |\n| Authentication Source | admin |\n\nNote: If you\u2019re using MongoDB Atlas, be sure to create the user and allow network access. By default, MongoDB Atlas does not access remote connections.\n\nClick \u201cSetup source\u201d and Airbyte will test the connection. If it\u2019s successful, you\u2019ll be sent to the Add destination page. Click the \u201cAdd destination\u201d button and select \u201cLocal CSV\u201d from the drop-down.\n\nNext, provide a destination name, \u201crestaurant-samples,\u201d and destination path, \u201c/local.\u201d The Airbyte portal provides a setup guide for the Local CSV connector on the right side of the page. This is useful for a quick reference on connector configuration. \n\nClick \u201cSet up destination\u201d and Airbyte will test the connection with the destination. Upon success, you\u2019ll be redirected to a page where you can define the details of the stream you\u2019d like to sync.\n\nAirbyte provides a variety of sync options, including full refresh and incremental.\n\nSelect \u201cFull Refresh | Overwrite\u201d and then click \u201cSet up sync.\u201d\n\nAirbyte will kick off the sync process and if successful, you\u2019ll see the Sync Succeeded message.\n\n## Exploring the data\n\nLet\u2019s take a look at the CSV files created. The CSV connector writes to the /local docker mount on the airbyte server. By default, this mount is defined as /tmp/airbyte_local and can be changed by defining the LOCAL_ROOT docker environment variable.\n\nTo view the CSV files, launch bash from the docker exec command as follows:\n\n**docker exec -it airbyte-server bash**\n\nOnce connected, navigate to the /local folder and view the CSV files:\n\nbash-4.2# **cd /tmp/airbyte_local/**\nbash-4.2# **ls**\n_airbyte_raw_neighborhoods.csv _airbyte_raw_restaurants.csv\n\n## Summary\nIn today\u2019s data-rich world, building data pipelines to collect and transform heterogeneous data is an essential part of many business processes. Whether the goal is deriving business insights through analytics or creating a single view of the customer, Airbyte makes it easy to move data between MongoDB and many other data sources. \n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to extract load and transform MongoDB data using Airbyte.", "contentType": "Tutorial"}, "title": "ELT MongoDB Data Using Airbyte", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/build-first-dotnet-core-application-mongodb-atlas", "action": "created", "body": "# Build Your First .NET Core Application with MongoDB Atlas\n\nSo you're a .NET Core developer or you're trying to become one and you'd like to get a database included into the mix. 
MongoDB is a great choice and is quite easy to get started with for your .NET Core projects.\n\nIn this tutorial, we're going to explore simple CRUD operations in a .NET Core application, something that will make you feel comfortable in no time!\n\n## The Requirements\n\nTo be successful with this tutorial, you'll need to have a few things ready to go.\n\n- .NET Core installed and configured.\n- MongoDB Atlas cluster, M0 or better, deployed and configured.\n\nBoth are out of the scope of this particular tutorial, but you can refer to this tutorial for more specific instructions around MongoDB Atlas deployments. You can validate that .NET Core is ready to go by executing the following command:\n\n```bash\ndotnet new console --output MongoExample\n```\n\nWe're going to be building a console application, but we'll explore API development in a later tutorial. The \"MongoExample\" project is what we'll use for the remainder of this tutorial.\n\n## Installing and Configuring the MongoDB Driver for .NET Core Development\n\nWhen building C# applications, the common package manager to use is NuGet, something that is readily available in Visual Studio. If you're using Visual Studio, you can add the following:\n\n```bash\nInstall-Package MongoDB.Driver -Version 2.14.1\n```\n\nHowever, I'm on a Mac, use a variety of programming languages, and have chosen Visual Studio Code to be the IDE for me. There is no official NuGet extension for Visual Studio Code, but that doesn't mean we're stuck.\n\nExecute the following from a CLI while within your project directory:\n\n```bash\ndotnet add package MongoDB.Driver\n```\n\nThe above command will add an entry to your project's \"MongoExample.csproj\" file and download the dependencies that we need. This is valuable whether you're using Visual Studio Code or not.\n\nIf you generated the .NET Core project with the CLI like I did, you'll have a \"Program.cs\" file to work with. Open it and add the following code:\n\n```csharp\nusing MongoDB.Driver;\nusing MongoDB.Bson;\n\nMongoClient client = new MongoClient(\"ATLAS_URI_HERE\");\n\nList databases = client.ListDatabaseNames().ToList();\n\nforeach(string database in databases) {\n Console.WriteLine(database);\n}\n```\n\nThe above code will connect to a MongoDB Atlas cluster and then print out the names of the databases that the particular user has access to. The printing of databases is optional, but it could be a good way to make sure everything is working correctly.\n\nIf you're wondering where to get your `ATLAS_URI_HERE` string, you can find it in your MongoDB Atlas dashboard and by clicking the connect button on your cluster.\n\nThe above image should help when looking for the Atlas URI.\n\n## Building a POCO Class for the MongoDB Document Model\n\nWhen using .NET Core to work with MongoDB documents, you can make use of the `BsonDocument` class, but depending on what you're trying to do, it could complicate your .NET Core application. Instead, I like to work with classes that are directly mapped to document fields. 
This allows me to use the class naturally in C#, but know that everything will work out on its own for MongoDB documents.\n\nCreate a \"playlist.cs\" file within your project and include the following C# code:\n\n```csharp\nusing MongoDB.Bson;\n\npublic class Playlist {\n\n public ObjectId _id { get; set; }\n public string username { get; set; } = null!;\n public List items { get; set; } = null!;\n\n public Playlist(string username, List movieIds) {\n this.username = username;\n this.items = movieIds;\n }\n\n}\n```\n\nIn the above `Playlist` class, we have three fields. If you want each of those fields to map perfectly to a field in a MongoDB document, you don't have to do anything further. To be clear, the above class would map to a document that looks like the following:\n\n```json\n{\n \"_id\": ObjectId(\"61d8bb5e2d5fe0c2b8a1007d\"),\n \"username\": \"nraboy\",\n \"items\": \"1234\", \"5678\" ]\n}\n```\n\nHowever, if you wanted your C# class field to be different than the field it should map to in a MongoDB document, you'd have to make a slight change. The `Playlist` class would look something like this:\n\n```csharp\nusing MongoDB.Bson;\nusing MongoDB.Bson.Serialization.Attributes;\n\npublic class Playlist {\n\n public ObjectId _id { get; set; }\n\n [BsonElement(\"username\")]\n public string user { get; set; } = null!;\n\n public List items { get; set; } = null!;\n\n public Playlist(string username, List movieIds) {\n this.user = username;\n this.items = movieIds;\n }\n\n}\n```\n\nNotice the new import and the use of `BsonElement` to map a remote document field to a local .NET Core class field.\n\nThere are a lot of other things you can do in terms of document mapping, but they are out of the scope of this particular tutorial. If you're curious about other mapping techniques, check out the [documentation on the subject.\n\n## Implementing Basic CRUD in .NET Core with MongoDB\n\nSince we're able to connect to Atlas from our .NET Core application and we have some understanding of what our data model will look like for the rest of the example, we can now work towards creating, reading, updating, and deleting (CRUD) documents.\n\nWe'll start by creating some data. Within the project's \"Program.cs\" file, make it look like the following:\n\n```csharp\nusing MongoDB.Driver;\n\nMongoClient client = new MongoClient(\"ATLAS_URI_HERE\");\n\nvar playlistCollection = client.GetDatabase(\"sample_mflix\").GetCollection(\"playlist\");\n\nList movieList = new List();\nmovieList.Add(\"1234\");\n\nplaylistCollection.InsertOne(new Playlist(\"nraboy\", movieList));\n```\n\nIn the above example, we're connecting to MongoDB Atlas, getting a reference to our \"playlist\" collection while noting that it is related to our `Playlist` class, and then making use of the `InsertOne` function on the collection.\n\nIf you ran the above code, you should see a new document in your collection with matching information.\n\nSo let's read from that collection using our C# code:\n\n```csharp\n// Previous code here ...\n\nFilterDefinition filter = Builders.Filter.Eq(\"username\", \"nraboy\");\n\nList results = playlistCollection.Find(filter).ToList();\n\nforeach(Playlist result in results) {\n Console.WriteLine(string.Join(\", \", result.items));\n}\n```\n\nIn the above code, we are creating a new `FilterDefinition` filter to determine which data we want returned from our `Find` operation. 
In particular, our filter will give us all documents that have \"nraboy\" as the `username` field, which may be more than one because we never specified if the field should be unique.\n\nUsing the filter, we can do a `Find` on the collection and convert it to a `List` of our `Playlist` class. If you don't want to use a `List`, you can work with your data using a cursor. You can learn more about cursors in the documentation.\n\nWith a `Find` out of the way, let's move onto updating our documents within MongoDB.\n\nWe're going to add to our \"Program.cs\" file with the following code:\n\n```csharp\n// Previous code here ...\n\nFilterDefinition filter = Builders.Filter.Eq(\"username\", \"nraboy\");\n\n// Previous code here ...\n\nUpdateDefinition update = Builders.Update.AddToSet(\"items\", \"5678\");\n\nplaylistCollection.UpdateOne(filter, update);\n\nresults = playlistCollection.Find(filter).ToList();\n\nforeach(Playlist result in results) {\n Console.WriteLine(string.Join(\", \", result.items));\n}\n```\n\nIn the above code, we are creating two definitions, one being the `FilterDefinition` that we had created in the previous step. We're going to keep the same filter, but we're adding a definition of what should be updated when there was a match based on the filter.\n\nTo clear things up, we're going to match on all documents where \"nraboy\" is the `username` field. When matched, we want to add \"5678\" to the `items` array within our document. Using both definitions, we can use the `UpdateOne` method to make it happen.\n\nThere are more update operations than just the `AddToSet` function. It is worth checking out the documentation to see what you can accomplish.\n\nThis brings us to our final basic CRUD operation. We're going to delete the document that we've been working with.\n\nWithin the \"Program.cs\" file, add the following C# code:\n\n```csharp\n// Previous code here ...\n\nFilterDefinition filter = Builders.Filter.Eq(\"username\", \"nraboy\");\n\n// Previous code here ...\n\nplaylistCollection.DeleteOne(filter);\n```\n\nWe're going to make use of the same filter we've been using, but this time in the `DeleteOne` function. While we could have more than one document returned from our filter, the `DeleteOne` function will only delete the first one. You can make use of the `DeleteMany` function if you want to delete all of them.\n\nNeed to see it all together? Check this out:\n\n```csharp\nusing MongoDB.Driver;\n\nMongoClient client = new MongoClient(\"ATLAS_URI_HERE\");\n\nvar playlistCollection = client.GetDatabase(\"sample_mflix\").GetCollection(\"playlist\");\n\nList movieList = new List();\nmovieList.Add(\"1234\");\n\nplaylistCollection.InsertOne(new Playlist(\"nraboy\", movieList));\n\nFilterDefinition filter = Builders.Filter.Eq(\"username\", \"nraboy\");\n\nList results = playlistCollection.Find(filter).ToList();\n\nforeach(Playlist result in results) {\n Console.WriteLine(string.Join(\", \", result.items));\n}\n\nUpdateDefinition update = Builders.Update.AddToSet(\"items\", \"5678\");\n\nplaylistCollection.UpdateOne(filter, update);\n\nresults = playlistCollection.Find(filter).ToList();\n\nforeach(Playlist result in results) {\n Console.WriteLine(string.Join(\", \", result.items));\n}\n\nplaylistCollection.DeleteOne(filter);\n```\n\nThe above code is everything that we did. 
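\n\nAs a quick aside, the earlier section mentioned that you can work with a cursor instead of converting results into a `List`. A minimal sketch of that approach, reusing the `filter` and `playlistCollection` variables from the listing above (illustrative rather than a required addition), might look like this:\n\n```csharp\n// Iterate the matching documents with a cursor rather than buffering them all into a List\nusing (IAsyncCursor<Playlist> cursor = playlistCollection.Find(filter).ToCursor())\n{\n    while (cursor.MoveNext())\n    {\n        // cursor.Current holds the current batch of documents\n        foreach (Playlist result in cursor.Current)\n        {\n            Console.WriteLine(string.Join(\", \", result.items));\n        }\n    }\n}\n```\n\nBack to the complete walkthrough: 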
If you swapped out the Atlas URI string with your own, it would create a document, read from it, update it, and then finally delete it.\n\n## Conclusion\n\nYou just saw how to quickly get up and running with MongoDB in your .NET Core application! While we only brushed upon the surface of what is possible in terms of MongoDB, it should put you on a better path for accomplishing your project needs.\n\nIf you're looking for more help, check out the MongoDB Community Forums and get involved.", "format": "md", "metadata": {"tags": ["C#", ".NET"], "pageDescription": "Learn how to quickly and easily start building .NET Core applications that interact with MongoDB Atlas for create, read, update, and delete (CRUD) operations.", "contentType": "Quickstart"}, "title": "Build Your First .NET Core Application with MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/real-time-tracking-change-streams-socketio", "action": "created", "body": "# Real-Time Location Tracking with Change Streams and Socket.io\n\nIn this article, you will learn how to use MongoDB Change Streams and Socket.io to build a real-time location tracking application. To demonstrate this, we will build a local package delivery service.\n\nChange streams are used to detect document updates, such as location and shipment status, and Socket.io is used to broadcast these updates to the connected clients. An Express.js server will run in the background to create and maintain the websockets.\n\nThis article will highlight the important pieces of this demo project, but you can find the full code, along with setup instructions, on Github.\n\n## Connect Express to MongoDB Atlas\n\nConnecting Express.js to MongoDB requires the use of the MongoDB driver, which can be installed as an npm package. For this project I have used MongoDB Atlas and utilized the free tier to create a cluster. You can create your own free cluster and generate the connection string from the Atlas dashboard.\n\nI have implemented a singleton pattern for connecting with MongoDB to maintain a single connection across the application.\n\nThe code defines a singleton `db` variable that stores the MongoClient instance after the first successful connection to the MongoDB database.The `dbConnect()` is an asynchronous function that returns the MongoClient instance. It first checks if the db variable has already been initialized and returns it if it has. Otherwise, it will create a new MongoClient instance and return it. 
`dbConnect` function is exported as the default export, allowing other modules to use it.\n\n```typescript\n// dbClient.ts\nimport { MongoClient } from 'mongodb';\nconst uri = process.env.MONGODB_CONNECTION_STRING;\nlet db: MongoClient;\nconst dbConnect = async (): Promise => {\n\u00a0\u00a0\u00a0\u00a0try {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0if (db) {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0return db;\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0console.log('Connecting to MongoDB...');\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0const client = new MongoClient(uri);\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0await client.connect();\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0db = client;\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0console.log('Connected to db');\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0return db;\n\u00a0\u00a0\u00a0\u00a0} catch (error) {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0console.error('Error connecting to MongoDB', error);\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0throw error;\n\u00a0\u00a0\u00a0\u00a0}\n};\nexport default dbConnect;\n```\n\nNow we can call the dbConnect function in the `server.ts` file or any other file that serves as the entry point for your application.\n\n```typescript\n// server.ts\nimport dbClient from './dbClient';\nserver.listen(5000, async () => {\n\u00a0\u00a0\u00a0\u00a0try {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0await dbClient();\n\u00a0\u00a0\u00a0\u00a0} catch (error) {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0console.error(error);\n\u00a0\u00a0\u00a0\u00a0}\n});\n```\n\nWe now have our Express server connected to MongoDB. With the basic setup in place, we can proceed to incorporating change streams and socket.io into our application.\n\n## Change Streams\n\nMongoDB Change Streams is a powerful feature that allows you to listen for changes in your MongoDB collections in real-time. Change streams provide a change notification-like mechanism that allows you to be notified of any changes to your data as they happen.\n\nTo use change streams, you need to use the `watch()` function from the MongoDB driver. Here is a simple example of how you would use change streams on a collection.\n\n```typescript\nconst changeStream = collection.watch()\nchangeStream.on('change', (event) => {\n// your logic\n})\n```\n\nThe callback function will run every time a document gets added, deleted, or updated in the watched collection.\n\n## Socket.IO and Socket.IO rooms\n\nSocket.IO is a popular JavaScript library. It enables real-time communication between the server and client, making it ideal for applications that require live updates and data streaming. In our application, it is used to broadcast location and shipment status updates to the connected clients in real-time.\n\nOne of the key features of Socket.IO is the ability to create \"rooms.\" Rooms are a way to segment connections and allow you to broadcast messages to specific groups of clients. In our application, rooms are used to ensure that location and shipment status updates are only broadcasted to the clients that are tracking that specific package or driver.\n\nThe code to include Socket.IO and its handlers can be found inside the files `src/server.ts` and `src/socketHandler.ts`\n\nWe are defining all the Socket.IO events inside the `socketHandler.ts` file so the socket-related code is separated from the rest of the application. 
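\n\nTo make the room mechanics described above more concrete, here is a generic Socket.IO sketch (an illustrative helper, not the project's exact handler): a client asks to follow a specific driver, the server places that socket in a room named after the driver id, and later updates are emitted only to that room. The `SUBSCRIBE_TO_DA` and `DA_LOCATION_CHANGED` event names match the ones used later in this article; everything else is a placeholder.\n\n```typescript\nimport { Server, Socket } from 'socket.io';\n\n// Illustrative only: use the driver id as the room name so location updates\n// reach just the clients tracking that driver.\nconst roomsSketch = (io: Server) => {\n  io.on('connection', (socket: Socket) => {\n    socket.on('SUBSCRIBE_TO_DA', (data: { deliveryAssociateId: string }) => {\n      socket.join(data.deliveryAssociateId);\n    });\n  });\n};\n\n// Elsewhere (for example, inside a change stream callback):\n// io.to(driverId).emit('DA_LOCATION_CHANGED', updatedDriverDoc);\n\nexport default roomsSketch;\n```\n\nWith that picture in mind, let's start with the basics. 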
Below is an example to implement the basic connect and disconnect Socket.IO events in Node.js.\n\n```typescript\n// socketHandler.ts\nconst socketHandler = (io: Server) => {\n\u00a0\u00a0io.on('connection', (socket: any) => {\n\u00a0\u00a0\u00a0\u00a0console.log('A user connected');\n\u00a0\u00a0\u00a0\u00a0socket.on('disconnect', () => {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0console.log('A user disconnected');\n\u00a0\u00a0\u00a0\u00a0});\n\u00a0\u00a0});\n};\nexport default socketHandler;\n```\n\nWe can now integrate the socketHandler function into our `server.ts file` (the starting point of our application) by importing it and calling it once the server begins listening.\n\n```typescript\n// server.ts\nimport app from './app'; // Express app\nimport http from 'http';\nimport { Server } from 'socket.io';\nconst server = http.createServer(app);\nconst io = new Server(server);\nserver.listen(5000, async () => {\n\u00a0\u00a0try {\n\u00a0\u00a0\u00a0\u00a0socketHandler(io);\n\u00a0\u00a0} catch (error) {\n\u00a0\u00a0\u00a0\u00a0console.error(error);\n\u00a0\u00a0}\n});\n```\n\nWe now have the Socket.IO setup with our Express app. In the next section, we will see how location data gets stored and updated.\n\n## Storing location data\n\nMongoDB has built-in support for storing location data as GeoJSON, which allows for efficient querying and indexing of spatial data. In our application, the driver's location is stored in MongoDB as a GeoJSON point.\n\nTo simulate the driver movement, in the front end, there's an option to log in as driver and move the driver marker across the map, simulating the driver's location. (More on that covered in the front end section.)\n\nWhen the driver moves, a socket event is triggered which sends the updated location to the server, which is then updated in the database.\n\n```typescript\nsocket.on(\"UPDATE_DA_LOCATION\", async (data) => {\n\u00a0\u00a0const { email, location } = data;\n\u00a0\u00a0await collection.findOneAndUpdate({ email }, { $set: { currentLocation: location } });\n});\n```\n\nThe code above handles the \"UPDATE_DA_LOCATION\" socket event. It takes in the email and location data from the socket message and updates the corresponding driver's current location in the MongoDB database.\n\nSo far, we've covered how to set up an Express server and connect it to MongoDB. We also saw how to set up Socket.IO and listen for updates. In the next section, we will cover how to use change streams and emit a socket event from server to front end.\n\n## Using change streams to read updates\n\nThis is the center point of discussion in this article. When a new delivery is requested from the UI, a shipment entry is created in DB. The shipment will be in pending state until a driver accepts the shipment.\n\nOnce the driver accepts the shipment, a socket room is created with the driver id as the room name, and the user who created the shipment is subscribed to that room.\n\nHere's a simple diagram to help you better visualize the flow.\n\nWith the user subscribed to the socket room, all we need to do is to listen to the changes in the driver's location. This is where the change stream comes into picture.\n\nWe have a change stream in place, which is listening to the Delivery Associate (Driver) collection. Whenever there is an update in the collection, this will be triggered. We will use this callback function to execute our business logic.\n\nNote we are passing an option to the change stream watch function `{ fullDocument: 'updateLookup' }`. 
It specifies that the complete updated document should be included in the change event, rather than just the delta or the changes made to the document.\n\n```typescript\nconst watcher = async (io: Server) => {\n  const collection = await DeliveryAssociateCollection();\n  const changeStream = collection.watch([], { fullDocument: 'updateLookup' });\n  changeStream.on('change', (event) => {\n    if (event.operationType === 'update') {\n      const fullDocument = event.fullDocument;\n      io.to(String(fullDocument._id)).emit(\"DA_LOCATION_CHANGED\", fullDocument);\n    }\n  });\n};\n```\n\nIn the above code, we are listening to all CRUD operations in the Delivery Associate (Driver) collection and we emit socket events only for update operations. Since the room names are just driver ids, we can get the driver id from the updated document.\n\nThis way, we are able to listen to changes in the driver's location using change streams and send them to the user.\n\nIn the codebase, all the change stream code for the application will be inside the folder `src/watchers/`. You can specify the watchers wherever you desire but to keep the code clean, I'm following this approach. The below code shows how the watcher function is executed in the entry point of the application --- i.e., the server.ts file.\n\n```typescript\n// server.ts\nimport deliveryAssociateWatchers from './watchers/deliveryAssociates';\nserver.listen(5000, async () => {\n  try {\n    await dbClient();\n    socketHandler(io);\n    await deliveryAssociateWatchers(io);\n  } catch (error) {\n    console.error(error);\n  }\n});\n```\n\nIn this section, we saw how change streams are used to monitor updates in the Delivery Associate (Driver) collection. We also saw how the `fullDocument` option in the watcher function was used to retrieve the complete updated document, which then allowed us to send the updated location data to the subscribed user through sockets. The next section focuses on exploring the front-end codebase and how the emitted data is used to update the map in real time.\n\n## Front end\n\nI won't go into much detail on the front end but just to give you an overview, it's built on React and uses Leaflet.js for the map.\n\nI have included the entire front end as a sub app in the GitHub repo under the folder `/frontend`. The Readme contains the steps on how to install and start the app.\n\nStarting the front end gives two options:\n\n1. Log in as a user.\n2. Log in as a driver.\n\nUse the \"log in as driver\" option to simulate the driver's location. This can be done by simply dragging the marker across the map.\n\n### Driver simulator\n\nLogging in as a driver will let you simulate the driver's location. The code snippet provided demonstrates the use of `useState` and `useEffect` hooks to simulate a driver's location updates. Two Leaflet components are used in that snippet. 
One is the actual map we see on the UI and other is, as the name suggests, a marker which is movable using our mouse.\n\n```jsx\n// Driver Simulator\nconst position, setPosition] = useState(initProps.position);\nconst gpsUpdate = (position) => {\n\u00a0\u00a0const data = {\n\u00a0\u00a0\u00a0\u00a0email,\n\u00a0\u00a0\u00a0\u00a0location: { type: 'Point', coordinates: [position.lng, position.lat] },\n\u00a0\u00a0};\n\u00a0\u00a0socket.emit(\"UPDATE_DA_LOCATION\", data);\n};\nuseEffect(() => {\ngpsUpdate(position);\n}, [position]);\nreturn (\n\n)\n```\n\nThe **position** state is initialized with the initial props. When the draggable marker is moved, the position gets updated. This triggers the gpsUpdate function inside its useEffect hook, which sends a socket event to update the driver's location.\n\n### User app\n\nOn the user app side, when a new shipment is created and a delivery associate is assigned, the `SHIPMENT_UPDATED` socket event is triggered. In response, the user app emits the `SUBSCRIBE_TO_DA` event to subscribe to the driver's socket room. (DA is short for Delivery Associate.)\n\n```js\nsocket.on('SHIPMENT_UPDATED', (data) => {\n\u00a0\u00a0if (data.deliveryAssociateId) {\n\u00a0\u00a0\u00a0\u00a0const deliveryAssociateId = data.deliveryAssociateId;\n\u00a0\u00a0\u00a0\u00a0socket.emit('SUBSCRIBE_TO_DA', { deliveryAssociateId });\n\u00a0\u00a0}\n});\n```\n\nOnce subscribed, any changes to the driver's location will trigger the DA_LOCATION_CHANGED socket event. The `driverPosition` state represents the delivery driver's current position. This gets updated every time new data is received from the socket event.\n\n```jsx\nconst [driverPosition, setDriverPosition] = useState(initProps.position);\nsocket.on('DA_LOCATION_CHANGED', (data) => {\n\u00a0\u00a0const location = data.location;\n\u00a0\u00a0setDriverPosition(location);\n});\nreturn (\n\n)\n```\n\nThe code demonstrates how the user app updates the driver's marker position on the map in real time using socket events. The state driverPosition is passed to the component and updated with the latest location data from the DA_LOCATION_CHANGED socket event.\n\n## Summary\n\nIn this article, we saw how MongoDB Change Streams and Socket.IO can be used in a Node.js Express application to develop a real-time system.\n\nWe learned about how to monitor a MongoDB collection using the change stream watcher method. We also learned how Socket.IO rooms can be used to segment socket connections for broadcasting updates. We also saw a little front-end code on how props are manipulated with socket events.\n\nIf you wish to learn more about Change Streams, check out our tutorial on [Change Streams and triggers with Node.js, or the video version of it. For a more in-depth tutorial on how to use Change Streams directly in your React application, you can also check out this tutorial on real-time data in a React JavaScript front end.", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB", "Node.js"], "pageDescription": "In this article, you will learn how to use MongoDB Change Streams and Socket.io to build a real-time location tracking application. 
To demonstrate this, we will build a local package delivery service.\n", "contentType": "Tutorial"}, "title": "Real-Time Location Tracking with Change Streams and Socket.io", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/transactions-csharp-dotnet", "action": "created", "body": "# Working with MongoDB Transactions with C# and the .NET Framework\n\n>Update 10/2019: This article's code example has been updated to include\nthe required handing of the session handle to database methods.\n\nC# applications connected to a MongoDB database use the MongoDB .NET driver. To add the .NET driver to your Visual Studio Application, in the NuGet Package Manager, search for \"MongoDB\".\n\nMake sure you choose the latest version (>=2.7) of the driver, and press *Install*.\n\nPrior to MongoDB version 4.0, MongoDB was transactionally consistent at the document level. These existing atomic single-document operations provide the transaction semantics to meet the data integrity needs of the majority of applications. This is because the flexibility of the document model allows developers to easily embed related data for an entity as arrays and sub-documents within a single, rich document. That said, there are some cases where splitting the content into two or more collections would be appropriate, and for these cases, multi-document ACID transactions makes it easier than ever for developers to address the full spectrum of use cases with MongoDB. For a deeper discussion on MongoDB document model design, including how to represent one-to-many and many-to-many relationships, check out this article on data model design.\n\nIn the following code we will create a Product object and perform a MongoDB transaction that will insert some sample data into MongoDB then update the prices for all products by 10%.\n\n``` csp\nusing MongoDB.Bson;\nusing MongoDB.Bson.Serialization.Attributes;\nusing MongoDB.Driver;\nusing System;\nusing System.Threading.Tasks;\n\nnamespace MongoDBTransaction\n{\n public static class Program\n {\n public class Product\n {\n BsonId]\n public ObjectId Id { get; set; }\n [BsonElement(\"SKU\")]\n public int SKU { get; set; }\n [BsonElement(\"Description\")]\n public string Description { get; set; }\n [BsonElement(\"Price\")]\n public Double Price { get; set; }\n }\n\n // replace with your connection string if it is different\n const string MongoDBConnectionString = \"mongodb://localhost\"; \n\n public static async Task Main(string[] args)\n {\n if (!await UpdateProductsAsync()) { Environment.Exit(1); }\n Console.WriteLine(\"Finished updating the product collection\");\n Console.ReadKey();\n }\n\n private static async Task UpdateProductsAsync()\n {\n // Create client connection to our MongoDB database\n var client = new MongoClient(MongoDBConnectionString);\n\n // Create the collection object that represents the \"products\" collection\n var database = client.GetDatabase(\"MongoDBStore\");\n var products = database.GetCollection(\"products\");\n\n // Clean up the collection if there is data in there\n await database.DropCollectionAsync(\"products\");\n\n // collections can't be created inside a transaction so create it first\n await database.CreateCollectionAsync(\"products\"); \n\n // Create a session object that is used when leveraging transactions\n using (var session = await client.StartSessionAsync())\n {\n // Begin transaction\n session.StartTransaction();\n\n try\n {\n // Create some sample data\n var tv = new 
Product { Description = \"Television\", \n SKU = 4001, \n Price = 2000 };\n var book = new Product { Description = \"A funny book\", \n SKU = 43221, \n Price = 19.99 };\n var dogBowl = new Product { Description = \"Bowl for Fido\", \n SKU = 123, \n Price = 40.00 };\n\n // Insert the sample data \n await products.InsertOneAsync(session, tv);\n await products.InsertOneAsync(session, book);\n await products.InsertOneAsync(session, dogBowl);\n\n var resultsBeforeUpdates = await products\n .Find(session, Builders.Filter.Empty)\n .ToListAsync();\n Console.WriteLine(\"Original Prices:\\n\");\n foreach (Product d in resultsBeforeUpdates)\n {\n Console.WriteLine(\n String.Format(\"Product Name: {0}\\tPrice: {1:0.00}\", \n d.Description, d.Price)\n );\n }\n\n // Increase all the prices by 10% for all products\n var update = new UpdateDefinitionBuilder()\n .Mul(r => r.Price, 1.1);\n await products.UpdateManyAsync(session, \n Builders.Filter.Empty, \n update); //,options);\n\n // Made it here without error? Let's commit the transaction\n await session.CommitTransactionAsync();\n }\n catch (Exception e)\n {\n Console.WriteLine(\"Error writing to MongoDB: \" + e.Message);\n await session.AbortTransactionAsync();\n return false;\n }\n\n // Let's print the new results to the console\n Console.WriteLine(\"\\n\\nNew Prices (10% increase):\\n\");\n var resultsAfterCommit = await products\n .Find(session, Builders.Filter.Empty)\n .ToListAsync();\n foreach (Product d in resultsAfterCommit)\n {\n Console.WriteLine(\n String.Format(\"Product Name: {0}\\tPrice: {1:0.00}\", \n d.Description, d.Price)\n );\n }\n\n return true;\n }\n }\n }\n}\n```\n\nSource Code available on [Gist. Successful execution yields the following:\n\n## Key points:\n\n- You don't have to match class properties to JSON objects - just define a class object and insert it directly into the database. There is no need for an Object Relational Mapper (ORM) layer.\n- MongoDB transactions use snapshot isolation meaning only the client involved in the transactional session sees any changes until such time as the transaction is committed.\n- The MongoDB .NET Driver makes it easy to leverage transactions and leverage LINQ based syntax for queries.\n\nAdditional information about using C# and the .NET driver can be found in the C# and .NET MongoDB Driver documentation.", "format": "md", "metadata": {"tags": ["C#", "MongoDB", ".NET"], "pageDescription": "Walk through an example of how to use transactions in C#.", "contentType": "Tutorial"}, "title": "Working with MongoDB Transactions with C# and the .NET Framework", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/connectors/mongodb-connectors-translators-interview", "action": "created", "body": "# MongoDB Podcast Interview with Connectors and Translators Team\n\nThe BI Connector and\nmongomirror are\njust two examples of powerful but less popular MongoDB products. These\nproducts are maintained by a team in MongoDB known as the Connectors and\nTranslators Engineering team. In this podcast episode transcript, we\nchat with Tim Fogarty, Varsha Subrahmanyam, and Evgeni Dobranov. The\nteam gives us a better understanding of these tools, focusing\nspecifically on the BI Connector and mongomirror.\n\nThis episode of the MongoDB Podcast is available on YouTube if you\nprefer to listen.\n\n:youtube]{vid=SFezkmAbwos}\n\nMichael Lynn (01:58): All right, welcome back. 
Today, we're talking\nabout connectors and translators and you might be thinking, \"Wait a\nminute. What is a connector and what is a translator?\" We're going to\nget to that. But first, I want to introduce the folks that are joining\nus on the podcast today. Varsha, would you introduce yourself?\n\nVarsha Subrahmanyam (02:19): Yes. Hi, my name is Varsha Subrahmanyam.\nI'm a software engineer on the translators and connectors team. I\ngraduated from the University of Illinois at Urbana-Champagne in 2019\nand was an intern at MongoDB just before graduation. And I returned as a\nfull-timer the following summer. So I've been here for one and a half\nyears. \\[inaudible 00:02:43\\]\n\nMichael Lynn (02:43): Evgeni?\n\nEvgeni Dobranov (02:44): Yeah. Hello. My name is Evgeni Dobranov. I'm\nmore or less right alongside Varsha. We interned together in 2018. We\nboth did our rotations just about a year ago and ended up on connector\nand translators together. I went to Tufts University and graduated in\n2019.\n\nMichael Lynn (03:02): And Tim, welcome.\n\nTim Fogarty (03:04): Hey, Mike. So I'm Tim Fogarty. I'm also a software\nengineer on the connectors and translators team. I actually worked for\nmLab, the MongoDB hosting service, which was acquired by MongoDB about\ntwo years ago. So I was working there before MongoDB and now I'm working\non the connectors and translators team.\n\nMichael Lynn (03:25): Fantastic. And Nic, who are you??\n\nNic Raboy (03:27): I am Nic and I am Mike's co-host for this fabulous\npodcast and the developer relations team at MongoDB.\n\nMichael Lynn (03:33): Connectors and translators. It's a fascinating\ntopic. We were talking before we started recording and I made the\nincorrect assumption that connectors and translators are somewhat\noverlooked and might not even appear on the front page, but that's not\nthe case. So Tim, I wonder if I could ask you to explain what connectors\nand translators are? What kind of software are we talking about?\n\nTim Fogarty (03:55): Yeah, so our team works on essentially three\ndifferent software groups. We have the BI Connector or the business\nintelligence connector, which is used to essentially translate SQL\ncommands into MongoDB commands so that you can use it with tools like\nTableau or PowerBI, those kinds of business intelligence tools.\n\nTim Fogarty (04:20): Then we also have the database tools, which are\nused for importing and exporting data, creating backups on the command\nline, and then also mongomirror, which is used internally for the Atlas\nLive Migrates function. So you're able to migrate a MongoDB database\ninto a MongoDB apps cloud service.\n\nTim Fogarty (04:39): The connectors and translators, it's a bit of a\nconfusing name. And we also have other products which are called\nconnectors. So we have the Kafka connector and Spark connector, and we\nactually don't work on those. So it's a bit of an awkward name, but\nessentially we're dealing with backups restores, migrations, and\ntranslating SQL.\n\nMichael Lynn (04:58): So you mentioned the BI Connector and Tableau and\nbeing able to use SQL with MongoDB. Can we maybe take a step back and\ntalk about why somebody might even want to use a connector, whether that\nthe BI one or something else with MongoDB?\n\nVarsha Subrahmanyam (05:16): Yeah. So I can speak about that a little\nbit. The reason why we might want to use the BI Connector is for people\nwho use business intelligence tools, they're mostly based on SQL. 
And so\nwe would like people to use the MongoDB query language. So we basically\nhad this translation engine that connects business intelligence tools to\nthe MongoDB back end. So the BI Connector received SQL queries. And then\nthe BI Connector translates those into SQL, into the MongoDB aggregation\nlanguage. And then queries MongoDB and then returns the result. So it's\nvery easy to store your data at MongoDB without actually knowing how to\nquery the database with MQL.\n\nMichael Lynn (06:03): Is this in real time? Is there a delay or a lag?\n\nVarsha Subrahmanyam (06:06): Maybe Evgeni can speak a bit to this? I\nbelieve most of this happens in memory. So it's very, very quick and we\nare able to process, I believe at this point 100% of all SQL queries, if\nnot very close to that. But it is very, very quick.\n\nMichael Lynn (06:22): Maybe I've got an infrastructure in place where\nI'm leveraging a BI tool and I want to make use of the data or an\napplication that leverages MongoDB on the back end. That sounds like a\npopular used case. I'm curious about how it does that. Is it just a\nstraight translation from the SQL commands and the operators that come\nto us from SQL?\n\n>\n>\n>\"So if you've heard of transpilers, they translate code from one higher\n>level language to another. Regular compilers will translate high level\n>code to lower level code, something like assembly, but the BI Connector\n>acts like a transpilers where it's translating from SQL to the MongoDB\n>query language.\"\" -- Varsha Subrahmanyam on the BI Connector\n>\n>\n\nVarsha Subrahmanyam (06:47): So if you've heard of transpilers, they\ntranslate code from one higher level language to another. Regular\ncompilers will translate high level code to lower level code, something\nlike assembly, but the BI Connector acts like a transpilers where it's\ntranslating from SQL to the MongoDB query language. And there are\nmultiple steps to a traditional compiler. There's the front end that\nbasically verifies the SQL query from both a semantic and syntactic\nperspective.\n\nVarsha Subrahmanyam (07:19): So kind of like does this query make sense\ngiven the context of the language itself and the more granularly the\ndatabase in question. And then there are two more steps. There's the\nmiddle end and the back end. They basically just after verifying the\nquery is acceptable, will then actually step into the translation\nprocess.\n\nVarsha Subrahmanyam (07:40): We basically from the syntactic parsing\nsegment of the compiler, we produce this parse tree which basically\ntakes all the tokens, constructs the tree out of them using the grammar\nof SQL and then based off of that, we will then start the translation\nprocess. And there's something called push-down. Evgeni, if you want to\ntalk about that.\n\nEvgeni Dobranov (08:03): Yeah, I actually have not done or worked with\nany code that does push-down specifically, unfortunately.\n\nVarsha Subrahmanyam (08:09): I can talk about that.\n\nEvgeni Dobranov (08:13): Yeah. It might be better for you.\n\nVarsha Subrahmanyam (08:13): Yeah. In push-down basically, we basically\nhad this parse tree and then from that we construct something called a\n[query plan, which\nbasically creates stages for every single part of the SQL query. And\nstages are our internal representation of what those tokens mean. 
So\nthen we construct like a linear plan, and this gets us into something\ncalled push-down.\n\nVarsha Subrahmanyam (08:42): So basically let's say you have, I suppose\nlike a normal SELECT query. The SELECT will then be a stage in our\nintermediate representation of the query. And that slowly will just\ntranslate each single token into the equivalent thing in MQL. And we'll do\nthat in more of a linear fashion, and that slowly will just generate the\nMQL representation of the query.\n\nMichael Lynn (09:05): Now, there are differences in the way that data is\nrepresented between a relational or tabular database and the way that\nMongoDB represents it in documents. I guess, through the push-down and\nthrough the tokenization, you're able to determine, when a SQL statement\ncomes in that is referencing what would be columns, that there's a\ntranslation that makes that reference a field.\n\nVarsha Subrahmanyam (09:31): Right, right. So we have similar kinds of\nways of translating things from the relational model to the document\nmodel.\n\nTim Fogarty (09:39): So we have to either sample or set a specific\nschema for the core collection so that it looks like it's a table with\ncolumns. Mike, maybe you can talk a little bit more about that.\n\nMichael Lynn (09:55): Yeah. So is there a requirement to use the BI\nConnector around normalizing your data or providing some kind of hint\nabout how you're representing the data?\n\nVarsha Subrahmanyam (10:06): That I'm not too familiar with.\n\nNic Raboy (10:10): How do you even develop such a connector? What kind\nof technologies are you using? Are you using any of the MongoDB drivers\nin the process as well?\n\nVarsha Subrahmanyam (10:18): I know for the BI Connector, a lot of the\ncode was borrowed from existing parsing logic. And then it's all written\nin Go. Everything on our team is written in Go. It's been a while since I\nhave been on this code, so I am not too sure about specific\ntechnologies that are used. I don't know if you recall, Evgeni.\n\nEvgeni Dobranov (10:40): Well, I think the biggest thing is the Mongo\nAST, the abstract syntax tree, which is also written in Go and that sort\nof like, I think what Varsha alluded to earlier was like the big\nintermediate stage that helps translate SQL queries to Mongo queries by\nrepresenting things like you would in a programming language class in\nuniversity. It sort of represents things as nodes in a tree and sort of\nrelates how different nouns relate to verbs and things like that in\na more grammatical sense.\n\nMichael Lynn (11:11): Is the BI Connector open source? Can people take a\nlook at the source code to see how it works?\n\nEvgeni Dobranov (11:16): It is not, as far as I know, no.\n\nMichael Lynn (11:19): That's the BI Connector. I'm sure there are other\nconnectors that you work on. Let's talk a little bit about the other\nconnectors that you guys work on.\n\nNic Raboy (11:26): Yeah. Maybe what's the most interesting one. What's\nyour personal favorite? I mean, you're probably all working on one\nseparately, but is there one that's like commonly cool and commonly\nbeneficial to the MongoDB customers?\n\nEvgeni Dobranov (11:39): Well, the one I've worked on the most recently\npersonally at least has been mongomirror and I've actually come to like\nit quite a bit just because I think it has a lot of really cool\ncomponents. So just as a refresher, mongomirror is the tool that we use\nor the primary tool that Atlas uses to help customers with live\nmigration. 
So what this helps them essentially do is they could just be\nrunning a database, taking in writes and reads and things like that. And\nthen without essentially shutting down the database, they can migrate\nover to a newer version of Mongo. Maybe just like bigger clusters,\nthings like that, all using mongomirror.\n\nEvgeni Dobranov (12:16): And mongomirror has a couple of stages that it\ndoes in order to help with the migration. It does like an initial sync\nor just copies the existing data as much as it can. And then it also\nrecords. It also records operations coming in as well and puts them in\nthe oplog, which is essentially another collection of all the operations\nthat are being done on the database while the initial sync is happening.\nAnd then replays this data on top of your destination, the thing that\nyou're migrating to.\n\nEvgeni Dobranov (12:46): So there's a lot of juggling basically with\noperations and data copying, things like that. I think it's a very\nrobust system that seems to work well most of the time actually. I think\nit's a very nicely engineered piece of software.\n\nNic Raboy (13:02): I wanted to comment on this too. So this is a plug to\nthe event that we actually had recently called MongoDB Live for one of\nour local events though for North America. I actually sat in on a few\nsessions and there were customer migration stories where they actually\nused mongomirror to migrate from on-premise solutions to MongoDB Atlas.\nIt seems like it's the number one tool for getting that job done. Is\nthis a common scenario that you have run into as well? Are people using\nit for other types of migrations as well? Like maybe Atlas, maybe AWS to\nGCP even though that we have multi-cloud now, or is it mostly on prem to\nAtlas kind of migrations?\n\nEvgeni Dobranov (13:43): We work more on maintaining the software\nitself, having taken the request from the features from the Atlas team.\nThe people that would know exactly these details, I think would be the\nTSEs, the technical services engineers, who are the ones working with\nthe actual customers, and they receive more information about exactly\nwhat type of migration is happening, whether it's from private database\nor Mongo Atlas or private to private, things like that. But I do know\nfor a fact that you have all combinations of migrations. Mongomirror is\nnot limited to a single type. Tim can expand more on this for sure.\n\nTim Fogarty (14:18): Yeah. I'd say definitely migrating from on-prem to\nAtlas is the number one use case we see that's actually the only\ntechnically officially supported use case. So there are customers who\nare doing other things like they're migrating on-prem to on-prem or one\ncloud to another cloud. So it definitely does happen. But by far, the\nlargest use case is migrating to Atlas. And that is the only use case\nthat we officially test for and support.\n\nNic Raboy (14:49): I actually want to dig deeper into mongomirror as\nwell. I mean, how much data can you move with it at a certain time? Do\nyou typically like use a cluster of these mongomirrors in parallel to\nmove your however many terabytes you might have in your cluster? Or\nmaybe go into the finer details on how it works?\n\nTim Fogarty (15:09): Yeah, that would be cool, but that would be much\nmore difficult. So we generally only spin up one mongomirror machine. 
So\nif we have a source cluster that's on-prem, and then we have our\ndestination cluster, which is MongoDB Atlas, we spin up a machine that's\nhosted by us or you can run mongomirror on-prem yourself, if you want to, if\nthere are, let's say, firewall concerns, which sometimes makes it a little\nbit easier.\n\nTim Fogarty (15:35): But it's a single process, and the process itself is\nparallelized. So it will, during the initial sync stage Evgeni mentioned,\nit will copy over all of the data for each collection in parallel, and\nthen it will start building indexes in parallel as well. You can\nmigrate over terabytes of data, but it can take a very long time. It can\nbe a long running process. We've definitely seen customers where if\nthey've got very large data sets, it can take weeks to migrate. And\nparticularly the index build phase takes a long time because that's just\nvery compute intensive, like hundreds of thousands of indexes on a very\nlarge data set.\n\n>\n>\n>\"But then once the initial sync is over, then we're just in the business\n>of replicating any changes that happen to the source database to the\n>destination cluster.\" -- Tim Fogarty on the mongomirror process of\n>migrating data from one cluster to another.\n>\n>\n\nTim Fogarty (16:18): But then once the initial sync is over, then we're\njust in the business of replicating any changes that happen to the\nsource database to the destination cluster.\n\nNic Raboy (16:28): So when you say changes that happened to the source\ndatabase, are you talking about changes that might have occurred while\nthat migration was happening?\n\nTim Fogarty (16:35): Exactly.\n\nNic Raboy (16:36): Or something else?\n\nTim Fogarty (16:38): While the initial sync happens, we buffer all of\nthe changes that happened to the source database to a file. So we\nessentially just save them on disc, ready to replay them once we're\nfinished with the initial sync. So then once the initial sync has\nfinished, we replay everything that happened during the initial sync and\nthen everything new that comes in, we also start to replay that once\nthat's done. So we keep the two clusters in sync until the user is ready\nto cut over the application from their source database over to their\nnew destination cluster.\n\nNic Raboy (17:12): When it copies over the data, is it using the same\nobject IDs from the source database or is it creating new documents on\nthe destination database?\n\nTim Fogarty (17:23): Yeah. The object IDs are the same, I believe. And\nthis is a kind of requirement because in the oplog, it will say like,\n\"Oh, this document with this object ID, we need to update it or change\nit in this way.\" So when we need to reapply those changes to the\ndestination cluster, then we need to make sure that obviously\nthe object ID matches and that we're changing the right document when we\nneed to reapply those changes.\n\nMichael Lynn (17:50): Okay. So there are two sources of data used in a\nmongomirror execution. There's the database, the source database itself,\nand it sounds like mongomirror is doing, I don't know, a standard find\ngetting all of the documents from there, transmitting those to the new,\nthe target system and leveraging an explicit ID reference so that the\ndocuments that are inserted have the same object ID. And then during\nthat time, that's going to take a while, this is physics, folks. 
It's\ngoing to take a while to move those all over, depending on the size of\nthe database.\n\nMichael Lynn (18:26): I'm assuming there's a marker placed in the oplog, or\nat least the timestamp of the time that the mongomirror execution\nbegan. And then everything between that time and the completion of the\ninitial sync is captured in the oplog, and those transactions in the oplog\nare used to recreate those transactions in the target\ndatabase.\n\nTim Fogarty (18:48): Yeah, essentially correct. The one thing is the\ninitial sync phase can take a long time. So it's possible that your\noplog, because the oplog is a capped collection, which means it can only be\na certain finite size. So eventually the older entries just start\ngetting deleted when they're not used. As soon as we start the initial\nsync, we start listening to the oplog and saving it to the disc so that we\nhave the information saved. So if entries start getting deleted off the back\nof the oplog, we don't lose anything.\n\nMichael Lynn (19:19): Great. So I guess a word of caution would be\nensure that you have enough disc space available to you in order to\nexecute.\n\nTim Fogarty (19:26): Yes, exactly.\n\nMichael Lynn (19:29): That's mongomirror. That's great. And I wanted to\nclarify, mongomirror, it sounds like it's available from the MongoDB\nAtlas console, right? Because we're going to execute that from the\nconsole, but it also sounds like you said it might be available for\non-prem. Is it a downloadable? Is it an executable command line?\n\nTim Fogarty (19:47): Yeah. So in general, if you want to migrate into\nAtlas, then you should use the Atlas Live Migrate service. So that's\navailable on the Atlas console. It's like click and set it up and that's\nthe easiest way to use it. There are some cases where for some reason\nyou might need to run mongomirror locally, in which case you can\ndownload the binaries and run it locally. Those are kind of rare cases.\nI think that's probably something you should talk to support about if\nyou're concerned that you might need to run it locally.\n\nNic Raboy (20:21): So in regards to the connectors like mongomirror, is\nthere anything that you've done recently towards the product or anything\nthat's coming soon on the roadmap?\n\nEvgeni Dobranov (20:29): So Varsha and I just finished a big epic on\nJira, which improves status reporting. And basically this was like a\nhuge collection of tickets that customers have come to us over time,\nbasically just saying, \"We wish there was a better status here. We wish\nthere was better logging, or I wish the logs gave us a better idea of\nwhat was going on in mongomirror internally.\" So we basically spent about\na month or so, and Varsha spent quite a bit of time on a ticket recently\nthat she can talk about. We just spent a lot of time improving error\nmessages and revealing information that previously wasn't revealed to\nhelp users get a better idea of what's going on in the internals of\nmongomirror.\n\nVarsha Subrahmanyam (21:12): Yeah. The ticket I just finished, but was\nworking on for quite some time, was to provide better logging during the\nindex building process, which happens during initial sync and then again\nduring oplog sync. 
Now, users will be able to get logs at a\ncollection level telling them what percentage of indexes have been built\non a particular collection as well as on each host in their replica set.\nAnd then also if they wanted to poll that information from the HTTP\nserver, then they can also do that.\n\nVarsha Subrahmanyam (21:48): So that's an exciting addition, I think.\nAnd now I'm also enabling those logs in the oplog sync portion of\nmongomirror, which is pretty similar, but we'll probably have a\nlittle bit less information just because we're figuring out which\nindexes need to be built on a rolling basis because we're just tailing\nthe oplog and seeing what comes up. So by the nature of that, there's a\nlittle less information on how many indexes you can expect to be built.\nYou don't exactly know from the get-go, but yeah, I think that'll be\nhopefully a great help to people who are unsure if their indexes are\nstalled or are just taking a long time to build.\n\nMichael Lynn (22:30): Well, some fantastic updates. I want to thank you\nall for stopping by. I know we've got an entire set of content that I\nwanted to cover around the tools that you work on: mongoimport,\nmongoexport, mongorestore, mongodump. But I think I'd like to give that\nthe time that it deserves. That could be a really healthy discussion. So\nI think what I'd like to do is get you guys to come back. That sound\ngood?\n\nVarsha Subrahmanyam (22:55): Yeah.\n\nTim Fogarty (22:56): Yeah.\n\nVarsha Subrahmanyam (22:56): Sounds good.\n\nEvgeni Dobranov (22:56): Yeah. Sounds great.\n\nMichael Lynn (22:57): Well, again, I want to thank you very much. Is\nthere anything else you want the audience to know before we go? How can\nthey reach out to you? Are you on social media, LinkedIn, Twitter? This\nis a time to plug yourself.\n\nVarsha Subrahmanyam (23:09): You can find me on LinkedIn.\n\nTim Fogarty (23:12): I'm trying to stay away from social media recently.\n\nNic Raboy (23:15): No problem.\n\nTim Fogarty (23:16): No, please don't contact me.\n\nMichael Lynn (23:19): I get that. I get it.\n\nTim Fogarty (23:21): You can contact me, I'll tell you where, on the\ncommunity forums.\n\nMichael Lynn (23:25): There you go. Perfect.\n\nTim Fogarty (23:27): If you have questions-\n\nMichael Lynn (23:28): Great.\n\nTim Fogarty (23:29): If you have questions about the database tools,\nthen you can ask questions there and I'll probably see it.\n\nMichael Lynn (23:34): All right. So\ncommunity.mongodb.com. We'll all be\nthere. If you have questions, you can swing by and ask them in that\nforum. Well, thanks once again, everybody. Tim Fogarty, Varsha\nSubrahmanyam, and Evgeni Dobranov.\n\nEvgeni Dobranov (23:47): Yes, you got it.\n\nMichael Lynn (23:48): All right. So thanks so much for stopping by. 
Have\na great day.\n\nVarsha Subrahmanyam (23:52): Thank you.\n\n## Summary\n\nI hope you enjoyed this episode of the MongoDB\nPodcast and learned a bit more about\nthe MongoDB Connectors and Translators including the Connector for\nBusiness Intelligence\nand mongomirror.\nIf you enjoyed this episode, please consider giving a review on your\nfavorite podcast networks including\nApple,\nGoogle,\nand Spotify.\n\nFor more information on the BI Connector, visit our\ndocs or\nproduct pages.\n\nFor more information on mongomirror, visit the\ndocs.\n", "format": "md", "metadata": {"tags": ["Connectors", "Kafka", "Spark"], "pageDescription": "MongoDB Podcast Interview with Connectors and Translators Team", "contentType": "Podcast"}, "title": "MongoDB Podcast Interview with Connectors and Translators Team", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/triggers-tricks-data-driven-schedule", "action": "created", "body": "# Realm Triggers Treats and Tricks - Document-Based Trigger Scheduling\n\nIn this blog series, we are trying to inspire you with some reactive Realm trigger use cases. We hope these will help you bring your application pipelines to the next level.\n\nEssentially, triggers are components in our Atlas projects/Realm apps that allow a user to define a custom function to be invoked on a specific event.\n\n- **Database triggers:** We have triggers that can be scheduled based on database events\u2014like `deletes`, `inserts`, `updates`, and `replaces`\u2014called database triggers.\n- **Scheduled triggers:** We can schedule a trigger based on a `cron` expression via scheduled triggers.\n- **Authentication triggers:** These triggers are only relevant for Realm authentication. They are triggered by one of the Realm auth providers' authentication events and can be configured only via a Realm application.\n\nFor this blog post, I would like to focus on trigger scheduling patterns.\n\nLet me present a use case and we will see how the discussed mechanics might help us in this scenario. Consider a meeting management application that schedules meetings and as part of its functionality needs to notify a user 10 minutes before the meeting.\n\nHow would we create a trigger that will be fired 10 minutes before a timestamp which is only known by the \"meeting\" document?\n\nFirst, let's have a look at the meetings collection documents example:\n\n``` javascript\n{\n _id : ObjectId(\"5ca4bbcea2dd94ee58162aa7\"),\n event : \"Mooz Meeting\",\n eventDate : ISODate(\"2021-03-20T14:00:00Z\"),\n meetingUrl : \"https://mooz.meeting.com/5ca4bbcea2dd94ee58162aa7\",\n invites : [\"jon.doe@myemail.com\", \"doe.jonas@myemail.com\"]\n }\n```\n\nI wanted to share an interesting solution based on triggers, and throughout this article, we will use a meeting notification example to explain the discussed approach.\n\n## Prerequisites\n\nFirst, verify that you have an Atlas project with owner privileges to create triggers.\n\n- A MongoDB Atlas account and an Atlas cluster\n- A MongoDB Realm application or access to MongoDB Atlas triggers.\n\n> If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. 
You have all the instructions in this blog post.\n\n## The Idea Behind the Main Mechanism\n\nI will use the event example as a source document initiating the flow\nwith an insert to a `meetings` collection:\n\n``` javascript\n{\n _id : ObjectId(\"5ca4bbcea2dd94ee58162aa7\"),\n event : \"Mooz Meeting\",\n eventDate : ISODate(\"2021-03-20T11:00:00Z\"),\n meetingUrl : \"https://mooz.meeting.com/5ca4bbcea2dd94ee58162aa7\",\n invites : [\"jon.doe@example.com\"],\n phone : \"+123456789\"\n}\n```\n\nOnce we insert this document into the `meetings` collection, it will create the following record in a helper collection called `notifications` using an insert trigger:\n\n``` javascript\n{\n _id : ObjectId(\"5ca4bbcea2dd94ee58162aa7\"),\n triggerDate : ISODate(\"2021-03-20T10:50:00Z\")\n}\n```\n\nThe time and `_id` are calculated from the source document, and the aim is to fire once `2021-03-20T10:50:00Z` arrives, via a `fireScheduleTasks` trigger. This trigger is based on a delete operation caused by a TTL index on the `triggerDate` field of the `notifications` collection.\n\nThis is when the user gets the reminder!\n\nOn a high level, here is the flow described above.\n\nA meeting document is tracked by a trigger, creating a notification document. This document at the specified time will cause a delete event. The delete will fire a notification trigger to notify the user.\n\nThere are three main components that allow our system to trigger based on our document data.\n\n## 1. Define a Notifications Helper Collection\n\nFirst, we need to prepare our `notifications` collection. This collection will be created implicitly by the following index creation command.\n\nNow we will create a TTL index. This index will cause the schedule document to expire once the time in its `triggerDate` field is reached (an expiry lifetime of 0 seconds after its value).\n\n``` javascript\ndb.notifications.createIndex( { \"triggerDate\": 1 }, { expireAfterSeconds: 0 } )\n```\n\n## 2. Building a Trigger to Populate the Schedule Collection\n\nWhen setting up your `scheduleTasks` trigger, make sure you provide the following:\n\n1. Linked Atlas service and verify its name.\n2. The database and collection name we are basing the scheduling on, e.g., `meetings`.\n3. The relevant trigger operation that we want to schedule upon, e.g., when an event is inserted.\n4. Link it to a function that will perform the schedule collection population.\n\nMy trigger UI configuration to populate the scheduling collection.\n\nTo populate the `notifications` collection with relevant triggering dates, we need to monitor our documents in the source collection. In our case, the user's upcoming meeting data is stored in the \"meetings\" collection with the userId field. Our trigger will monitor inserts to populate a Scheduled document.\n\n``` javascript\nexports = function(changeEvent) {\n // Get the notifications collection\n const coll = context.services.get(\"\").db(\"\").collection(\"notifications\");\n\n // Calculate the \"triggerDate\" and populate the trigger collection and duplicate the _id\n const calcTriggerDate = new Date(changeEvent.fullDocument.eventDate - 10 * 60000); \n return coll.insertOne({_id: changeEvent.fullDocument._id, triggerDate: calcTriggerDate });\n};\n```\n\n>Important: Please replace the empty service and database name strings with your linked service and database names.\n\n## 3. 
Building the Trigger to Perform the Action on the \"Trigger Date\"\n\nTo react to the TTL \"delete\" event happening exactly when we want our scheduled task to be executed, we need to use an \"on delete\" database trigger I call `fireScheduleTasks`.\n\nWhen setting up your `fireScheduleTasks` trigger, make sure you provide the following:\n\n1. Linked Atlas service and verify its name.\n2. The database and collection for the notifications collection, e.g., `notifications`.\n3. The relevant trigger operation that we want to schedule upon, which is \"DELETE.\"\n4. Link it to a function that will perform the fired task.\n\nNow that we have populated the `notifications` collection with the `triggerDate`, we know the TTL index will fire a \"delete\" event with the relevant deleted `_id` so we can act upon our task.\n\nIn my case, 10 minutes before the user's event starts, my document will reach its lifetime and I will send a text using Twilio service to the attendee's phone.\n\nA prerequisite for this stage will be to set up a Twilio service using your Twilio cloud credentials.\n\n1. Make sure you have a Twilio cloud account with its SID and your Auth token.\n2. Set up the SID and Auth token into the Realm Twilio service configuration.\n3. Configure your Twilio Messaging service and phone number.\n\nOnce we have it in place, we can use it to send SMS notifications to our invites.\n\n``` javascript\nexports = async function(changeEvent) {\n // Get meetings collection\n const coll = context.services.get(\"\").db(\"\").collection(\"meetings\");\n\n // Read specific meeting document\n const doc = await coll.findOne({ _id: changeEvent.documentKey._id});\n\n // Send notification via Twilio SMS\n const twilio = context.services.get(\"\");\n twilio.send({\n to: doc.phone,\n from: \"+123456789\",\n body: `Reminder : Event ${doc.event} is about to start in 10min at ${doc.scheduledOn}`\n });\n};\n```\n\n>Important: Replace \\ and \\ with your linked service and database names.\n\nThat's how the event was fired at the appropriate time.\n\n## Wrap Up\n\nWith the presented technique, we can leverage existing triggering patterns to build new ones. This may open your mind to other ideas to design your next flows on MongoDB Realm.\n\nIn the following article in this series, we will learn how we can implement auto-increment with triggers.\n\n> If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "In this article, we will explore a trick that lets us invoke a trigger task based on a date document field in our collections.", "contentType": "Article"}, "title": "Realm Triggers Treats and Tricks - Document-Based Trigger Scheduling", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/sending-requesting-data-mongodb-unity-game", "action": "created", "body": "# Sending and Requesting Data from MongoDB in a Unity Game\n\nAre you working on a game in Unity and finding yourself needing to make use of a database in the cloud? Storing your data locally works for a lot of games, but there are many gaming scenarios where you'd need to leverage an external database. Maybe you need to submit your high score for a leaderboard, or maybe you need to save your player stats and inventory so you can play on numerous devices. 
There are too many reasons to list as to why a remote database might make sense for your game.\n\nIf you've been keeping up with the content publishing on the MongoDB Developer Hub and our Twitch channel, you'll know that I'm working on a game development series with Adrienne Tacke. This series is centered around creating a 2D multiplayer game with Unity that uses MongoDB as part of the online component. Up until now, we haven't actually had the game communicate with MongoDB.\n\nIn this tutorial, we're going to see how to make HTTP requests from a Unity game to a back end that communicates with MongoDB. The back end was already developed in a tutorial titled, Creating a User Profile Store for a Game With Node.js and MongoDB. We're now going to leverage it in our game.\n\nTo get an idea where we're at in the tutorial series, take a look at the animated image below:\n\nTo take this to the next level, it makes sense to send data to MongoDB when the player crosses the finish line. For example, we can send how many steps were taken by the player in order to reach the finish line, or how many times the player collided with something, or even what place the player ranked in upon completion. The data being sent doesn't truly matter as of now.\n\nThe assumption is that you've been following along with the tutorial series and are jumping in where we left off. If not, some of the steps that refer to our project may not make sense, but the concepts can be applied in your own game. The tutorials in this series so far are:\n\n- Designing a Strategy to Develop a Game with Unity and MongoDB\n- Creating a User Profile Store for a Game with Node.js and MongoDB\n- Getting Started with Unity for Creating a 2D Game\n- Designing and Developing 2D Game Levels with Unity and C#\n\nIf you'd like to view the source code to the project, it can be found on GitHub.\n\n## Creating a C# Class in Unity to Represent the Data Model Within MongoDB\n\nBecause Unity, as of now, doesn't have an official MongoDB driver, sending and receiving MongoDB data from Unity isn't handled for you. We're going to have to worry about marshalling and unmarshalling our data as well as making the request. In other words, we're going to need to manipulate our data manually to and from JSON and C# classes.\n\nTo make this possible, we're going to need to start with a class that represents our data model in MongoDB.\n\nWithin your project's **Assets/Scripts** directory, create a **PlayerData.cs** file with the following code:\n\n``` csharp\nusing UnityEngine;\n\npublic class PlayerData\n{\n public string plummie_tag;\n public int collisions;\n public int steps;\n}\n```\n\nNotice that this class does not extend the `MonoBehavior` class. This is because we do not plan to attach this script as a component on a game object. The `public`-defined properties in the `PlayerData` class represent each of our database fields. In the above example, we only have a select few, but you could add everything from our user profile store if you wanted to.\n\nIt is important to use the `public` identifier for anything that will have relevance to the database.\n\nWe need to make a few more changes to the `PlayerData` class. 
Add the following functions to the class:\n\n``` csharp\nusing UnityEngine;\n\npublic class PlayerData\n{\n    // 'public' variables here ...\n\n    public string Stringify() \n    {\n        return JsonUtility.ToJson(this);\n    }\n\n    public static PlayerData Parse(string json)\n    {\n        return JsonUtility.FromJson<PlayerData>(json);\n    }\n}\n```\n\nNotice the function names are kind of like what you'd find in JavaScript if you are a JavaScript developer. Unity expects us to send string data in our requests rather than objects. The good news is that Unity also provides a helper `JsonUtility` class that will convert objects to strings and strings to objects.\n\nThe `Stringify` function will take all `public` variables in the class and convert them to a JSON string. The fields in the JSON object will match the names of the variables in the class. The `Parse` function will take a JSON string and convert it back into an object that can be used within C#.\n\n## Sending Data with POST and Retrieving Data with GET in a Unity Game\n\nWith a class available to represent our data model, we can now send data to MongoDB as well as retrieve it. Unity provides a UnityWebRequest class for making HTTP requests within a game. This will be used to communicate with either a back end designed with a particular programming language or a MongoDB Realm webhook. If you'd like to learn about creating a back end to be used with a game, check out my previous tutorial on the topic.\n\nWe're going to spend the rest of our time in the project's **Assets/Scripts/Player.cs** file. This script is attached to our player as a component and was created in the tutorial titled, Getting Started with Unity for Creating a 2D Game. In your own game, it doesn't really matter which game object script you use.\n\nOpen the **Assets/Scripts/Player.cs** file and make sure it looks similar to the following:\n\n``` csharp\nusing UnityEngine;\nusing System.Text;\nusing UnityEngine.Networking;\nusing System.Collections;\n\npublic class Player : MonoBehaviour\n{\n    public float speed = 1.5f;\n\n    private Rigidbody2D _rigidBody2D;\n    private Vector2 _movement;\n\n    void Start()\n    {\n        _rigidBody2D = GetComponent<Rigidbody2D>();\n    }\n\n    void Update()\n    {\n        // Mouse and keyboard input logic here ...\n    }\n\n    void FixedUpdate() {\n        // Physics related updates here ...\n    }\n}\n```\n\nI've stripped out a bunch of code from the previous tutorial as it doesn't affect anything we're planning on doing. The previous code was very heavily related to moving the player around on the screen and should be left in for the real game, but is overlooked in this example, at least for now.\n\nTwo things to notice that are important are the imports:\n\n``` csharp\nusing System.Text;\nusing UnityEngine.Networking;\n```\n\nThe above two imports are important for the networking features of Unity. Without them, we wouldn't be able to properly make GET and POST requests.\n\nBefore we make a request, let's get our `PlayerData` class included. 
Make the following changes to the **Assets/Scripts/Player.cs** code:\n\n``` csharp\nusing UnityEngine;\nusing System.Text;\nusing UnityEngine.Networking;\nusing System.Collections;\n\npublic class Player : MonoBehaviour\n{\n    public float speed = 1.5f;\n\n    private Rigidbody2D _rigidBody2D;\n    private Vector2 _movement;\n    private PlayerData _playerData;\n\n    void Start()\n    {\n        _rigidBody2D = GetComponent<Rigidbody2D>();\n        _playerData = new PlayerData();\n        _playerData.plummie_tag = \"nraboy\";\n    }\n\n    void Update() { }\n\n    void FixedUpdate() { }\n\n    void OnCollisionEnter2D(Collision2D collision) \n    {\n        _playerData.collisions++;\n    }\n}\n```\n\nIn the above code, notice that we are creating a new `PlayerData` object and assigning the `plummie_tag` field a value. We're also making use of an `OnCollisionEnter2D` function to see if our game object collides with anything. Since our function is very vanilla, collisions can be with walls, objects, etc., and nothing in particular. The collisions will increase the `collisions` counter.\n\nSo, we have data to work with, data that we need to send to MongoDB. To do this, we need to create some `IEnumerator` functions and make use of coroutine calls within Unity. This will allow us to do asynchronous activities such as make web requests.\n\nWithin the **Assets/Scripts/Player.cs** file, add the following `IEnumerator` function:\n\n``` csharp\nIEnumerator Download(string id, System.Action<PlayerData> callback = null)\n{\n    using (UnityWebRequest request = UnityWebRequest.Get(\"http://localhost:3000/plummies/\" + id))\n    {\n        yield return request.SendWebRequest();\n\n        if (request.isNetworkError || request.isHttpError)\n        {\n            Debug.Log(request.error);\n            if (callback != null)\n            {\n                callback.Invoke(null);\n            }\n        }\n        else\n        {\n            if (callback != null)\n            {\n                callback.Invoke(PlayerData.Parse(request.downloadHandler.text));\n            }\n        }\n    }\n}\n```\n\nThe `Download` function will be responsible for retrieving data from our database to be brought into the Unity game. It is expecting an `id`, which we'll use the `plummie_tag` for, and a `callback` so we can work with the response outside of the function. The response should be `PlayerData`, which is the data model we just made.\n\nAfter sending the request, we check to see if there were errors or if it succeeded. If the request succeeded, we can convert the JSON string into an object and invoke the callback so that the parent can work with the result.\n\nSending data with a payload, like that in a POST request, is a bit different. Take the following function:\n\n``` csharp\nIEnumerator Upload(string profile, System.Action<bool> callback = null)\n{\n    using (UnityWebRequest request = new UnityWebRequest(\"http://localhost:3000/plummies\", \"POST\"))\n    {\n        request.SetRequestHeader(\"Content-Type\", \"application/json\");\n        byte[] bodyRaw = Encoding.UTF8.GetBytes(profile);\n        request.uploadHandler = new UploadHandlerRaw(bodyRaw);\n        request.downloadHandler = new DownloadHandlerBuffer();\n        yield return request.SendWebRequest();\n\n        if (request.isNetworkError || request.isHttpError)\n        {\n            Debug.Log(request.error);\n            if (callback != null) \n            {\n                callback.Invoke(false);\n            }\n        }\n        else\n        {\n            if (callback != null) \n            {\n                callback.Invoke(request.downloadHandler.text != \"{}\");\n            }\n        }\n    }\n}\n```\n\nIn the `Upload` function, we are expecting a JSON string of our profile. This profile was defined in the `PlayerData` class and it is the same data we received in the `Download` function.\n\nThe difference between these two functions is that the POST is sending a payload. 
For this to work, the JSON string needs to be converted to `byte[]` and the upload and download handlers need to be defined. Once this is done, it is business as usual.\n\nIt is up to you what you want to return back to the parent. Because we are creating data, I thought it'd be fine to just return `true` if successful and `false` if not. To demonstrate this, if there are no errors, the response is compared against an empty object string. If an empty object comes back, then false. Otherwise, true. This probably isn't the best way to respond after a creation, but that is up to the creator (you, the developer) to decide.\n\nThe functions are created. Now, we need to use them.\n\nLet's make a change to the `Start` function:\n\n``` csharp\nvoid Start()\n{\n _rigidBody2D = GetComponent();\n _playerData = new PlayerData();\n _playerData.plummie_tag = \"nraboy\";\n StartCoroutine(Download(_playerData.plummie_tag, result => {\n Debug.Log(result);\n }));\n}\n```\n\nWhen the script runs\u2014or in our, example when the game runs\u2014and the player enters the scene, the `StartCoroutine` method is executed. We are providing the `plummie_tag` as our lookup value and we are printing out the results that come back.\n\nWe might want the `Upload` function to behave a little differently. Instead of making the request immediately, maybe we want to make the request when the player crosses the finish line. For this, maybe we add some logic to the `FixedUpdate` method instead:\n\n``` csharp\nvoid FixedUpdate() \n{\n // Movement logic here ...\n\n if(_rigidBody2D.position.x > 24.0f) {\n StartCoroutine(Upload(_playerData.Stringify(), result => {\n Debug.Log(result);\n }));\n }\n}\n```\n\nIn the above code, we check to see if the player position is beyond a certain value in the x-axis. If this is true, we execute the `Upload` function and print the results.\n\nThe above example isn't without issues though. As of now, if we cross the finish line, we're going to experience many requests as our code will continuously execute. We can correct this by adding a boolean variable into the mix.\n\nAt the top of your **Assets/Scripts/Player.cs** file with the rest of your variable declarations, add the following:\n\n``` csharp\nprivate bool _isGameOver;\n```\n\nThe idea is that when the `_isGameOver` variable is true, we shouldn't be executing certain logic such as the web requests. We are going to initialize the variable as false in the `Start` method like so:\n\n``` csharp\nvoid Start()\n{\n // Previous code here ...\n _isGameOver = false;\n}\n```\n\nWith the variable initialized, we can make use of it prior to sending an HTTP request after crossing the finish line. To do this, we'd make a slight adjustment to the code like so:\n\n``` csharp\nvoid FixedUpdate() \n{\n // Movement logic here ...\n\n if(_rigidBody2D.position.x > 24.0f && _isGameOver == false) {\n StartCoroutine(Upload(_playerData.Stringify(), result => {\n Debug.Log(result);\n }));\n _isGameOver = true;\n }\n}\n```\n\nAfter the player crosses the finish line, the HTTP code is executed and the game is marked as game over for the player, preventing further requests.\n\n## Conclusion\n\nYou just saw how to use the `UnityWebRequest` class in Unity to make HTTP requests from a game to a remote web server that communicates with MongoDB. 
This is valuable for any game that needs to either store game information remotely or retrieve it.\n\nThere are plenty of other ways to make use of the `UnityWebRequest` class, even in our own player script, but the examples we used should be a great starting point.\n\nThis tutorial series is part of a series streamed on Twitch. To see these streams live as they happen, follow the [Twitch channel and tune in.", "format": "md", "metadata": {"tags": ["C#", "Unity"], "pageDescription": "Learn how to interact with MongoDB from a Unity game with C# and the UnityWebRequest class.", "contentType": "Tutorial"}, "title": "Sending and Requesting Data from MongoDB in a Unity Game", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/building-e-commerce-content-catalog-atlas-search", "action": "created", "body": "# Building an E-commerce Content Catalog with Atlas Search\n\nSearch is now a fundamental part of applications across all industries\u2014but especially so in the world of retail and e-commerce. If your customers can\u2019t find what they\u2019re looking for, they\u2019ll go to another website and buy it there instead. The best way to provide your customers with a great shopping experience is to provide a great search experience. As far as searching goes, Atlas Search, part of MongoDB Atlas, is the easiest way to build rich, fast, and relevance-based search directly into your applications. In this tutorial, we\u2019ll make a website that has a simple text search and use Atlas Search to integrate full-text search capabilities, add autocomplete to our search box, and even promote some of our products on sale. \n\n## Pre-requisites\n\nYou can find the complete source code for this application on Github. The application is built using the MERN stack. It has a Node.js back end running the express framework, a MongoDB Atlas database, and a React front end.\n\n## Getting started\n\nFirst, start by cloning the repository that contains the starting source code.\n\n```bash\ngit clone https://github.com/mongodb-developer/content-catalog\ncd content-catalog\n```\n\nIn this repository, you will see three sub-folders:\n\n* `mdbstore`: contains the front end\n* `backend`: has the Node.js back end\n* `data`: includes a dataset that you can use with this e-commerce application\n\n### Create a database and import the dataset\n\nFirst, start by creating a free MongoDB Atlas cluster by following the instructions from the docs. Once you have a cluster up and running, find your connection string. You will use this connection string with `mongorestore` to import the provided dataset into your cluster.\n\n>You can find the installation instructions and usage information for `mongorestore` from the MongoDB documentation. \n\nUse your connection string without the database name at the end. It should look like `mongodb+srv://user:password@cluster0.xxxxx.mongodb.net`\n\n```bash\ncd data\nmongorestore \n```\n\nThis tool will automatically locate the BSON file from the dump folder and import these documents into the `items` collection inside the `grocery` database.\n\nYou now have a dataset of about 20,000 items to use and explore.\n\n### Start the Node.js backend API\n\nThe Node.js back end will act as an API that your front end can use. It will be connecting to your database by using a connection string provided in a `.env` file. 
Start by creating that file.\n\n```bash\ncd backend\ntouch .env\n```\n\nOpen your favourite code editor, and enter the following in the `.env` file. Change to your current connection string from MongoDB Atlas.\n\n```\nPORT=5050\nMONGODB_URI=\n```\n\nNow, start your server. You can use the `node` executable to start your server, but it\u2019s easier to use `nodemon` while in development. This tool will automatically reload your server when it detects a change to the source code. You can find out more about installing the tool from the official website.\n\n```bash\nnodemon .\n```\n\nThis command will start the server. You should see a message in your console confirming that the server is running and the database is connected.\n\n### Start the React frontend application\n\nIt\u2019s now time to start the front end of your application. In a new terminal window, go to the `mdbstore` folder, install all the dependencies for this project, and start the project using `npm`.\n\n```bash\ncd ../mdbstore\nnpm install\nnpm start\n```\n\nOnce this is completed, a browser tab will open, and you will see your fully functioning store. The front end is a React application. Everything in the front end is already connected to the backend API, so we won\u2019t be making any changes here. Feel free to explore the source code to learn more about using React with a Node.js back end.\n\n### Explore the application\n\nYour storefront is now up and running. A single page lets you search for and list all products. Try searching for `chicken`. Well, you probably don\u2019t have a lot of results. As a matter of fact, you won't find any result. Now try `Boneless Chicken Thighs`. There\u2019s a match! But that\u2019s not very convenient. Your users don\u2019t know the exact name of your products. Never mind possible typos or mistakes. This e-commerce offers a very poor experience to its customers and risks losing some business. In this tutorial, you will see how to leverage Atlas Search to provide a seamless experience to your users.\n\n## Add full-text search capabilities\n\nThe first thing we\u2019ll do for our users is to add full-text search capabilities to this e-commerce application. By adding a search index, we will have the ability to search through all the text fields from our documents. So, instead of searching only for a product name, we can search through the name, category, tags, and so on.\n\nStart by creating a search index on your collection. Find your collection in the MongoDB Atlas UI and click on Search in the top navigation bar. This will bring you to the Atlas Search Index creation screen. Click on Create Index.\n\nFrom this screen, click Next to use the visual editor. Then, choose the newly imported data\u2014\u2018grocery/items\u2019, on the database and collection screen. Accept all the defaults and create that index.\n\nWhile you\u2019re there, you can also create the index that will be used later for autocomplete. Click Create Index again, and click Next to use the visual editor. Give this new index the name `autocomplete`, select \u2018grocery/items\u2019 again, and then click Next.\n\nOn the following screen, click the Refine Index button to add the autocomplete capabilities to the index. Click on the Add Field button to add a new field that will support autocomplete searches. Choose the `name` field in the dropdown. Then toggle off the `Enable Dynamic Mapping` option. Finally, click Add data type, and from the dropdown, pick autocomplete. 
You can save these settings and click on the Create Search Index button. You can find the detailed instructions to set up the index in this tutorial.\n\nOnce your index is created, you will be able to use the $search stage in an aggregation pipeline. The $search stage enables you to perform a full-text search in your collections. You can experiment by going to the Aggregations tab once you\u2019ve selected your collection or using Compass, the MongoDB GUI.\n\nThe first aggregation pipeline we will create is for the search results. Rather than returning only results that have an exact match, we will use Atlas Search to return results that are similar or close to the user\u2019s search intent. \n\nIn the Aggregation Builder screen, create a new pipeline by adding a first $search stage.\n\nUse the following JSON for the first stage of your pipeline.\n\n```javascript\n{\n  index: 'default',\n  text: {\n    query: \"chicken\",\n    path: [\"name\"]\n  }\n}\n```\n\nAnd voil\u00e0! You already have much better search results. You could also add other stages here to limit the number of results or sort them in a specific order. For this application, this is all we need for now. Let\u2019s try to import this into the API used for this project.\n\nIn the file _backend/index.js_, look for the route that listens for GET requests on `/search/:query`. Here, replace the code between the comments with the code you used for your aggregation pipeline. This time, rather than using the hard-coded value, use `req.params.query` to use the query string sent to the server.\n\n```javascript\n  /** TODO: Update this to use Atlas Search */\n  results = await itemCollection.aggregate([\n    { $search: {\n        index: 'default',\n        text: {\n          query: req.params.query,\n          path: [\"name\"]\n        }\n      }\n    }\n  ]).toArray();\n  /** End */\n```\n\nThe old code used the `find()` method to find an exact match. This new code uses the newly created Search index to return any records that would contain, in part or in full, the search term that we\u2019ve passed to it.\n\nIf you try the application again with the word \u201cChicken,\u201d you will get many more results this time. In addition to that, you might also notice that your searches are case insensitive. But we can do even better. Sometimes, your users might be searching for more generic terms, such as one of the tags that describe the products or the brand name. Let\u2019s add more fields to this search to return more relevant records. \n\nIn the `$search` stage that you added in the previous code snippet, change the value of the path field to contain all the fields you want to search.\n\n```javascript\n  /** TODO: Update this to use Atlas Search */\n  results = await itemCollection.aggregate([\n    { $search: {\n        index: 'default',\n        text: {\n          query: req.params.query,\n          path: [\"name\", \"brand\", \"category\", \"tags\"]\n        }\n      }\n    }\n  ]).toArray();\n  /** End */\n```\n\nExperiment with your new application again. Try out some brand names that you know to see if you can find the product you are looking for. \n\nYour search capabilities are now much better, and the user experience of your website is already improved, but let\u2019s see if we can make this even better.\n\n## Add autocomplete to your search box\n\nA common feature of most modern search engines is an autocomplete dropdown that shows suggestions as you type. In fact, this is expected behaviour from users. They don\u2019t want to scroll through an infinite list of possible matches; they\u2019d rather find the right one quickly. 
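\n\nBefore wiring this up in the app, it may help to see roughly what the `autocomplete` index created earlier looks like as a JSON definition. The sketch below is an assumption based on the visual-editor steps above (dynamic mapping turned off and the `name` field mapped with the `autocomplete` type); the defaults Atlas generates for options such as tokenization and gram sizes may differ.\n\n```json\n{\n  \"mappings\": {\n    \"dynamic\": false,\n    \"fields\": {\n      \"name\": [\n        {\n          \"type\": \"autocomplete\"\n        }\n      ]\n    }\n  }\n}\n```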
\n\nIn this section, you will use the Atlas Search autocomplete capabilities to enable this in your search box. The UI already has this feature implemented, and you already created the required indexes, but it doesn\u2019t show up because the API is sending back no results. \n\nOpen up the aggregation builder again to build a new pipeline. Start with a $search stage again, and use the following. Note how this $search stage uses the `autocomplete` operator and the `autocomplete` index that was created earlier.\n\n```javascript\n{\n  'index': 'autocomplete', \n  'autocomplete': {\n    'query': \"chic\", \n    'path': 'name'\n  }, \n  'highlight': {\n    'path': [\n      'name'\n    ]\n  }\n}\n```\n\nIn the preview panel, you should see some results containing the string \u201cchic\u201d in their name. That\u2019s a lot of potential matches. For our application, we won\u2019t want to return all possible matches. Instead, we\u2019ll only take the first five. To do so, a $limit stage is used to limit the results to five. Click on Add Stage, select $limit from the dropdown, and replace `number` with the value `5`.\n\n*The autocomplete aggregation pipeline in Compass.*\n\nExcellent! Now we only have five results. Since this request will be executed on each keypress, we want it to be as fast as possible and limit the required bandwidth as much as possible. A $project stage can be added to help with this\u2014we will return only the \u2018name\u2019 field instead of the full documents. Click Add Stage again, select $project from the dropdown, and use the following JSON.\n\n```javascript\n{\n  'name': 1, \n  'highlights': {\n    '$meta': 'searchHighlights'\n  }\n}\n```\n\nNote that we also added a new field named `highlights`. This field returns the metadata provided to us by Atlas Search. You can find a lot of information in this metadata, such as each item's score. This can be useful to sort the data, for example.\n\nNow that you have a working aggregation pipeline, you can use it in your application.\n\nIn the file _backend/index.js_, look for the route that listens for GET requests on `/autocomplete/:query`. After the `TODO` comment, add the following code to execute your aggregation pipeline. Don\u2019t forget to replace the hard-coded query with `req.params.query`. You can export the pipeline directly from Compass or use the following code snippet.\n\n```javascript\n  // TODO: Insert the autocomplete functionality here\n  results = await itemCollection.aggregate([\n    {\n      '$search': {\n        'index': 'autocomplete', \n        'autocomplete': {\n          'query': req.params.query, \n          'path': 'name'\n        }, \n        'highlight': {\n          'path': [\n            'name'\n          ]\n        }\n      }\n    }, {\n      '$limit': 5\n    }, {\n      '$project': {\n        'name': 1, \n        'highlights': {\n          '$meta': 'searchHighlights'\n        }\n      }\n    }\n  ]).toArray();\n  /** End */\n```\n\nGo back to your application, and test it out to see the new autocomplete functionality. \n\n*The final application in action.*\n\nAnd look at that! Your site now offers a much better experience to your users with very little additional code. \n\n## Add custom scoring to adjust search results\n\nWhen delivering results to your users, you might want to push some products forward. Atlas Search can help you promote specific results by giving you the power to change and tweak the relevance score of the results. A typical example is to put the items currently on sale at the top of the search results. 
Let\u2019s do that right away.\n\nIn the _backend/index.js_ file, replace the database query for the `/search/:query` route again to use the following aggregation pipeline.\n\n```javascript\n /** TODO: Update this to use Atlas Search */\n results = await itemCollection.aggregate(\n { $search: {\n index: 'default',\n compound: {\n must: [\n {text: {\n query: req.params.query,\n path: [\"name\", \"brand\", \"category\", \"tags\"]\n }},\n {exists: {\n path: \"price_special\",\n score: {\n boost: {\n value: 3\n }\n }\n }}\n ]\n }\n }\n }\n ]).toArray();\n /** End */\n```\n\nThis might seem like a lot; let\u2019s look at it in more detail. \n\n```javascript\n { $search: {\n index: 'default',\n compound: {\n must: [\n {...},\n {...}\n ]\n }\n }\n }\n```\n\nFirst, we added a `compound` object to the `$search` operator. This lets us use two or more operators to search on. Then we use the `must` operator, which is the equivalent of a logical `AND` operator. In this new array, we added two search operations. The first one is the same `text` as we had before. Let\u2019s focus on that second one.\n\n```javascript\n{\nexists: {\n path: \"price_special\",\n score: {\n boost: {\n value: 3\n }\n }\n}\n```\n\nHere, we tell Atlas Search to boost the current relevance score by three if the field `price_special` exists in the document. By doing so, any document that is on sale will have a much higher relevance score and be at the top of the search results. If you try your application again, you should notice that all the first results have a sale price.\n\n## Add fuzzy matching\n\nAnother common feature in product catalog search nowadays is fuzzy matching. Implementing a fuzzy matching feature can be somewhat complex, but Atlas Search makes it simpler. In a `text` search, you can add the `fuzzy` field to specify that you want to add this capability to your search results. You can tweak this functionality using [multiple options, but we\u2019ll stick to the defaults for this application.\n\nOnce again, in the _backend/index.js_ file, change the `search/:query` route to the following.\n\n```javascript\n /** TODO: Update this to use Atlas Search */\n results = await itemCollection.aggregate(\n { $search: {\n index: 'default',\n compound: {\n must: [\n {text: {\n query: req.params.query,\n path: [\"name\", \"brand\", \"category\", \"tags\"],\n fuzzy: {}\n }},\n {exists: {\n path: \"price_special\",\n score: {\n boost: {\n value: 3\n }\n }\n }}\n ]\n }\n }\n }\n ]).toArray();\n /** End */\n```\n\nYou\u2019ll notice that the difference is very subtle. A single line was added.\n\n```javascript\nfuzzy: {}\n```\n\nThis enables fuzzy matching for this `$search` operation. This means that the search engine will be looking for matching keywords, as well as matches that could differ slightly. Try out your application again, and this time, try searching for `chickn`. You should still be able to see some results.\n\nA fuzzy search is a process that locates web pages that are likely to be relevant to a search argument even when the argument does not exactly correspond to the desired information.\n\n## Summary\n\nTo ensure that your website is successful, you need to make it easy for your users to find what they are looking for. In addition to that, there might be some products that you want to push forward. Atlas Search offers all the necessary tooling to enable you to quickly add those features to your application, all by using the same MongoDB Query API you are already familiar with. 
In addition to that, there\u2019s no need to maintain a second server and synchronize with a search engine. \n\nAll of these features are available right now on [MongoDB Atlas. If you haven\u2019t already, why not give it a try right now on our free-to-use clusters?", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "In this tutorial, we\u2019ll make a website that has a simple text search and use Atlas Search to promote some of our products on sale.", "contentType": "Tutorial"}, "title": "Building an E-commerce Content Catalog with Atlas Search", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/getting-started-with-mongodb-atlas-and-azure-functions-using-net", "action": "created", "body": "# Getting Started with MongoDB Atlas and Azure Functions using .NET and C#\n\nSo you need to build an application with minimal operating costs that can also scale to meet the growing demand of your business. This is a perfect scenario for a serverless function, like those built with Azure Functions. With serverless functions you can focus more on the application and less on the infrastructure and operations side of things. However, what happens when you need to include a database in the mix?\n\nIn this tutorial we'll explore how to create a serverless function with Azure Functions and the .NET runtime to interact with MongoDB Atlas. If you're not familiar with MongoDB, it offers a flexible data model that can be used for a variety of use cases while being integrated into most application development stacks with ease. Scaling your MongoDB database and Azure Functions to meet demand is easy, making them a perfect match.\n\n## Prerequisites\n\nThere are a few requirements that must be met prior to starting this tutorial:\n\n- The Azure CLI installed and configured to use your Azure account.\n- The Azure Functions Core Tools installed and configured.\n- .NET or .NET Core 6.0+\n- A MongoDB Atlas deployed and configured with appropriate user rules and network rules.\n\nWe'll be using the Azure CLI to configure Azure and we'll be using the Azure Functions Core Tools to create and publish serverless functions to Azure.\n\nConfiguring MongoDB Atlas is out of the scope of this tutorial so the assumption is that you've got a database available, a user that can access that database, and proper network access rules so Azure can access your database. If you need help configuring these items, check out the MongoDB Atlas tutorial to set everything up.\n\n## Create an Azure Function with MongoDB Support on Your Local Computer\n\nWe're going to start by creating an Azure Function locally on our computer. We'll be able to test that everything is working prior to uploading it to Azure.\n\nWithin a command prompt, execute the following command:\n\n```bash\nfunc init MongoExample\n```\n\nThe above command will start the wizard for creating a new Azure Functions project. When prompted, choose **.NET** as the runtime since our focus will be C#. It shouldn\u2019t matter if you choose the isolated process or not, but we won\u2019t be using the isolated process for this example.\n\nWith your command prompt, navigate into the freshly created project and execute the following command:\n\n```bash\nfunc new --name GetMovies --template \"HTTP trigger\"\n```\n\nThe above command will create a new \"GetMovies\" Function within the project using the \"HTTP trigger\" template which is quite basic. 
In the \"GetMovies\" Function, we plan to retrieve one or more movies from our database.\n\nWhile it wasn't a requirement to use the MongoDB sample database **sample_mflix** and sample collection **movies** in this project, it will be referenced throughout. Nothing we do can't be replicated using a custom database or collection.\n\nAt this point we can start writing some code!\n\nSince MongoDB will be one of the highlights of this tutorial, we need to install it as a dependency. Within the project, execute the following from the command prompt:\n\n```bash\ndotnet add package MongoDB.Driver\n```\n\nIf you're using NuGet there are similar commands you can use, but for the sake of this example we'll stick with the .NET CLI.\n\nBecause we created a new Function, we should have a **GetMovies.cs** file at the root of the project. Open it and replace the existing code with the following C# code:\n\n```csharp\nusing System;\nusing System.IO;\nusing System.Threading.Tasks;\nusing Microsoft.AspNetCore.Mvc;\nusing Microsoft.Azure.WebJobs;\nusing Microsoft.Azure.WebJobs.Extensions.Http;\nusing Microsoft.AspNetCore.Http;\nusing Microsoft.Extensions.Logging;\nusing Newtonsoft.Json;\nusing MongoDB.Driver;\nusing System.Collections.Generic;\nusing MongoDB.Bson.Serialization.Attributes;\nusing MongoDB.Bson;\nusing System.Text.Json.Serialization;\n\nnamespace MongoExample\n{\n\n BsonIgnoreExtraElements]\n public class Movie\n {\n\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public string? Id { get; set; }\n\n [BsonElement(\"title\")]\n [JsonPropertyName(\"title\")]\n public string Title { get; set; } = null!;\n\n [BsonElement(\"plot\")]\n [JsonPropertyName(\"plot\")]\n public string Plot { get; set; } = null!;\n\n }\n\n public static class GetMovies\n {\n\n public static Lazy lazyClient = new Lazy(InitializeMongoClient);\n public static MongoClient client = lazyClient.Value;\n\n public static MongoClient InitializeMongoClient()\n {\n return new MongoClient(Environment.GetEnvironmentVariable(\"MONGODB_ATLAS_URI\"));\n }\n\n [FunctionName(\"GetMovies\")]\n public static async Task Run(\n [HttpTrigger(AuthorizationLevel.Function, \"get\", \"post\", Route = null)] HttpRequest req,\n ILogger log)\n {\n\n string limit = req.Query[\"limit\"];\n IMongoCollection moviesCollection = client.GetDatabase(\"sample_mflix\").GetCollection(\"movies\");\n\n BsonDocument filter = new BsonDocument{\n {\n \"year\", new BsonDocument{\n { \"$gt\", 2005 },\n { \"$lt\", 2010 }\n }\n }\n };\n\n var moviesToFind = moviesCollection.Find(filter);\n\n if(limit != null && Int32.Parse(limit) > 0) {\n moviesToFind.Limit(Int32.Parse(limit));\n }\n\n List movies = moviesToFind.ToList();\n\n return new OkObjectResult(movies);\n\n }\n\n }\n\n}\n```\n\nThere's a lot happening in the above code, but we're going to break it down so it makes sense.\n\nWithin the namespace, you'll notice we have a *Movie* class:\n\n```csharp\n[BsonIgnoreExtraElements]\npublic class Movie\n{\n\n [BsonId]\n [BsonRepresentation(BsonType.ObjectId)]\n public string? Id { get; set; }\n\n [BsonElement(\"title\")]\n [JsonPropertyName(\"title\")]\n public string Title { get; set; } = null!;\n\n [BsonElement(\"plot\")]\n [JsonPropertyName(\"plot\")]\n public string Plot { get; set; } = null!;\n\n}\n```\n\nThe above class is meant to map our local C# objects to fields within our documents. If you're using the **sample_mflix** database and **movies** collection, these are fields from that collection. 
The class doesn't represent all the fields, but because the *[BsonIgnoreExtraElements]* is included, it doesn't matter. In this case only the present class fields will be used.\n\nNext you'll notice some initialization logic for our database:\n\n```csharp\npublic static Lazy lazyClient = new Lazy(InitializeMongoClient);\npublic static MongoClient client = lazyClient.Value;\n\npublic static MongoClient InitializeMongoClient()\n{\n\n return new MongoClient(Environment.GetEnvironmentVariable(\"MONGODB_ATLAS_URI\"));\n\n}\n```\n\nWe're using the *Lazy* class for lazy initialization of our database connection. This is done outside the runnable function of our class because it is not efficient to establish connections on every execution of our Azure Function. Concurrent connections to MongoDB and pretty much every database out there are finite, so if you have a large scale Azure Function, things can go poorly real quick if you're establishing a connection every time. Instead, we establish connections as needed.\n\nTake note of the *MONGODB_ATLAS_URI* environment variable. We'll obtain that value soon and we'll make sure it gets exported to Azure.\n\nThis brings us to the actual logic of our Azure Function:\n\n```csharp\nstring limit = req.Query[\"limit\"];\n\nIMongoCollection moviesCollection = client.GetDatabase(\"sample_mflix\").GetCollection(\"movies\");\n\nBsonDocument filter = new BsonDocument{\n {\n \"year\", new BsonDocument{\n { \"$gt\", 2005 },\n { \"$lt\", 2010 }\n }\n }\n};\n\nvar moviesToFind = moviesCollection.Find(filter);\n\nif(limit != null && Int32.Parse(limit) > 0) {\n moviesToFind.Limit(Int32.Parse(limit));\n}\n\nList movies = moviesToFind.ToList();\n\nreturn new OkObjectResult(movies);\n```\n\nIn the above code we are accepting a l*imit* variable from the client who executes the Function. It is not a requirement and doesn't need to be called *limit*, but it will make sense for us.\n\nAfter getting a reference to the database and collection we wish to use, we define the filter for the query we wish to run. In this example we are attempting to return only documents for movies that were released between the year 2005 and 2010. We then use that filter in the *Find* operation.\n\nSince we want to be able to limit our results, we check to see if *limit* exists and we make sure it has a value that we can work with. If it does, we use that value as our limit.\n\nFinally we convert our result set to a *List* and return it. Azure hands the rest for us!\n\nWant to test this Function locally before we deploy it? First make sure you have your Atlas URI string and set it as an environment variable on your local computer. This can be obtained through [the MongoDB Atlas Dashboard.\n\nThe best place to add your environment variable for the project is within the **local.settings.json** file like so:\n\n```json\n{\n \"IsEncrypted\": false,\n \"Values\": {\n // OTHER VALUES ...\n \"MONGODB_ATLAS_URI\": \"mongodb+srv://:@.170lwj0.mongodb.net/?retryWrites=true&w=majority\"\n },\n \"ConnectionStrings\": {}\n}\n```\n\nThe **local.settings.json** file doesn't get sent to Azure, but we'll handle that later.\n\nWith the environment variable set, execute the following command:\n\n```bash\nfunc start\n```\n\nIf it ran successfully, you'll receive a URL to test with. 
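For reference, a request against the locally running Function might look something like this. The port and route below are the Azure Functions Core Tools defaults, so use whatever URL `func start` actually prints for you.

```bash
# Call the locally running Function (no host key is needed for local runs)
curl "http://localhost:7071/api/GetMovies"
```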
Try adding a limit and see the results it returns.\n\nAt this point we can prepare the project to be deployed to Azure.\n\n## Configure a Function Project in the Cloud with the Azure CLI\n\nAs mentioned previously in the tutorial, you should have the Azure CLI. We're going to use it to do various configurations within Azure.\n\nFrom a command prompt, execute the following:\n\n```bash\naz group create --name --location \n```\n\nThe above command will create a group. Make sure to give it a name that makes sense to you as well as a region. The name you choose for the group will be used for the next steps.\n\nWith the group created, execute the following command to create a storage account:\n\n```bash\naz storage account create --name --location --resource-group --sku Standard_LRS\n```\n\nWhen creating the storage account, use the same group as previous and provide new information such as a name for the storage as well as a region. The storage account will be used when we attempt to deploy the Function to the Azure cloud.\n\nThe final thing we need to create is the Function within Azure. Execute the following:\n\n```bash\naz functionapp create --resource-group --consumption-plan-location --runtime dotnet --functions-version 4 --name --storage-account \n```\n\nUse the regions, groups, and storage accounts from the previous commands when creating your function. In the above command we're defining the .NET runtime, one of many possible runtimes that Azure offers. In fact, if you want to see how to work with MongoDB using Node.js, check out this tutorial on the topic.\n\nMost of the Azure cloud is now configured. We'll see the final configuration towards the end of this tutorial when it comes to our environment variable, but for now we're done. However, now we need to link the local project and cloud project in preparation for deployment.\n\nNavigate into your project with a command prompt and execute the following command:\n\n```bash\nfunc azure functionapp fetch-app-settings \n```\n\nThe above command will download settings information from Azure into your local project. Just make sure you've chosen the correct Function name from the previous steps.\n\nWe also need to download the storage information.\n\nFrom the command prompt, execute the following command:\n\n```bash\nfunc azure storage fetch-connection-string \n```\n\nAfter running the above command you'll have the storage information you need from the Azure cloud.\n\n## Deploy the Local .NET Project as a Function with Microsoft Azure\n\nWe have a project and that project is linked to Azure. Now we can focus on the final steps for deployment.\n\nThe first thing we need to do is handle our environment variable. We can do this through the CLI or the web interface, but for the sake of quickness, let's use the CLI.\n\nFrom the command prompt, execute the following:\n\n```bash\naz functionapp config appsettings set --name --resource-group --settings MONGODB_ATLAS_URI=\n```\n\nThe environment variable we're sending is the *MONGODB_ATLAS_URI* like we saw earlier. Maybe sure you add the correct value as well as the other related information in the above command. You'd have to do this for every environment variable that you create, but luckily this project only had the one.\n\nFinally we can do the following:\n\n```bash\nfunc azure functionapp publish \n```\n\nThe above command will publish our Azure Function. 
When it's done it will provide a link that you can access it from.\n\nDon't forget to obtain a \"host key\" from Azure before you try to access your Function from cURL, the web browser or similar otherwise you'll likely receive an unauthorized error response.\n\n```bash\ncurl https://.azurewebsites.net/api/GetMovies?code=\n```\n\nThe above cURL is an example of what you can run, just swap the values to match your own.\n\n## Conclusion\n\nYou just saw how to create an Azure Function that communicates with MongoDB Atlas using the .NET runtime. This tutorial explored several topics which included various CLI tools, efficient database connections, and the querying of MongoDB data. This tutorial could easily be extended to do more complex tasks within MongoDB such as using aggregation pipelines as well as other basic CRUD operations.\n\nIf you're looking for something similar using the Node.js runtime, check out this other tutorial on the subject.\n\nWith MongoDB Atlas on Microsoft Azure, developers receive access to the most comprehensive, secure, scalable, and cloud\u2013based developer data platform in the market. Now, with the availability of Atlas on the Azure Marketplace, it\u2019s never been easier for users to start building with Atlas while streamlining procurement and billing processes. Get started today through the Atlas on Azure Marketplace listing.", "format": "md", "metadata": {"tags": ["C#", ".NET", "Azure", "Serverless"], "pageDescription": "Learn how to build scalable serverless functions on Azure that communicate with MongoDB Atlas using C# and .NET.", "contentType": "Tutorial"}, "title": "Getting Started with MongoDB Atlas and Azure Functions using .NET and C#", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/rust/rust-quickstart-aggregation", "action": "created", "body": "# Getting Started with Aggregation Pipelines in Rust\n\n \n\nMongoDB's aggregation pipelines are one of its most powerful features. They allow you to write expressions, broken down into a series of stages, which perform operations including aggregation, transformations, and joins on the data in your MongoDB databases. This allows you to do calculations and analytics across documents and collections within your MongoDB database.\n\n## Prerequisites\n\nThis quick start is the second in a series of Rust posts. I *highly* recommend you start with my first post, Basic MongoDB Operations in Rust, which will show you how to get set up correctly with a free MongoDB Atlas database cluster containing the sample data you'll be working with here. Go read it and come back. I'll wait. Without it, you won't have the database set up correctly to run the code in this quick start guide.\n\nIn summary, you'll need:\n\n- An up-to-date version of Rust. I used 1.49, but any recent version should work well.\n- A code editor of your choice. I recommend VS Code with the Rust Analyzer extension.\n- A MongoDB cluster containing the `sample_mflix` dataset. You can find instructions to set that up in the first blog post in this series.\n\n## Getting Started\n\nMongoDB's aggregation pipelines are very powerful and so they can seem a little overwhelming at first. For this reason, I'll start off slowly. First, I'll show you how to build up a pipeline that duplicates behaviour that you can already achieve with MongoDB's `find()` method, but instead using an aggregation pipeline with `$match`, `$sort`, and `$limit` stages. 
Then, I'll show how to make queries that go beyond what can be done with `find`, demonstrating using `$lookup` to include related documents from another collection. Finally, I'll put the \"aggregation\" into \"aggregation pipeline\" by showing you how to use `$group` to group together documents to form new document summaries.\n\n>All of the sample code for this quick start series can be found on GitHub. I recommend you check it out if you get stuck, but otherwise, it's worth following the tutorial and writing the code yourself!\n\nAll of the pipelines in this post will be executed against the sample_mflix database's `movies` collection. It contains documents that look like this (I'm showing you what they look like in Python, because it's a little more readable than the equivalent Rust struct):\n\n``` python\n{\n '_id': ObjectId('573a1392f29313caabcdb497'),\n 'awards': {'nominations': 7,\n 'text': 'Won 1 Oscar. Another 2 wins & 7 nominations.',\n 'wins': 3},\n 'cast': 'Janet Gaynor', 'Fredric March', 'Adolphe Menjou', 'May Robson'],\n 'countries': ['USA'],\n 'directors': ['William A. Wellman', 'Jack Conway'],\n 'fullplot': 'Esther Blodgett is just another starry-eyed farm kid trying to '\n 'break into the movies. Waitressing at a Hollywood party, she '\n 'catches the eye of alcoholic star Norman Maine, is given a test, '\n 'and is caught up in the Hollywood glamor machine (ruthlessly '\n 'satirized). She and her idol Norman marry; but his career '\n 'abruptly dwindles to nothing',\n 'genres': ['Drama'],\n 'imdb': {'id': 29606, 'rating': 7.7, 'votes': 5005},\n 'languages': ['English'],\n 'lastupdated': '2015-09-01 00:55:54.333000000',\n 'plot': 'A young woman comes to Hollywood with dreams of stardom, but '\n 'achieves them only with the help of an alcoholic leading man whose '\n 'best days are behind him.',\n 'poster': 'https://m.media-amazon.com/images/M/MV5BMmE5ODI0NzMtYjc5Yy00MzMzLTk5OTQtN2Q3MzgwOTllMTY3XkEyXkFqcGdeQXVyNjc0MzMzNjA@._V1_SY1000_SX677_AL_.jpg',\n 'rated': 'NOT RATED',\n 'released': datetime.datetime(1937, 4, 27, 0, 0),\n 'runtime': 111,\n 'title': 'A Star Is Born',\n 'tomatoes': {'critic': {'meter': 100, 'numReviews': 11, 'rating': 7.4},\n 'dvd': datetime.datetime(2004, 11, 16, 0, 0),\n 'fresh': 11,\n 'lastUpdated': datetime.datetime(2015, 8, 26, 18, 58, 34),\n 'production': 'Image Entertainment Inc.',\n 'rotten': 0,\n 'viewer': {'meter': 79, 'numReviews': 2526, 'rating': 3.6},\n 'website': 'http://www.vcientertainment.com/Film-Categories?product_id=73'},\n 'type': 'movie',\n 'writers': ['Dorothy Parker (screen play)',\n 'Alan Campbell (screen play)',\n 'Robert Carson (screen play)',\n 'William A. Wellman (from a story by)',\n 'Robert Carson (from a story by)'],\n 'year': 1937}\n```\n\nThere's a lot of data there, but I'll be focusing mainly on the `_id`, `title`, `year`, and `cast` fields.\n\n## Your First Aggregation Pipeline\n\nAggregation pipelines are executed by the mongodb module using a Collection's [aggregate() method.\n\nThe first argument to `aggregate()` is a sequence of pipeline stages to be executed. Much like a query, each stage of an aggregation pipeline is a BSON document. You'll often create these using the `doc!` macro that was introduced in the previous post.\n\nAn aggregation pipeline operates on *all* of the data in a collection. Each stage in the pipeline is applied to the documents passing through, and whatever documents are emitted from one stage are passed as input to the next stage, until there are no more stages left. 
At this point, the documents emitted from the last stage in the pipeline are returned to the client program, as a cursor, in a similar way to a call to `find()`.\n\nIndividual stages, such as `$match`, can act as a filter, to only pass through documents matching certain criteria. Other stage types, such as `$project`, `$addFields`, and `$lookup`, will modify the content of individual documents as they pass through the pipeline. Finally, certain stage types, such as `$group`, will create an entirely new set of documents based on the documents passed into it taken as a whole. None of these stages change the data that is stored in MongoDB itself. They just change the data before returning it to your program! There *are* stages, like $out, which can save the results of a pipeline back into MongoDB, but I won't be covering it in this quick start.\n\nI'm going to assume that you're working in the same environment that you used for the last post, so you should already have the mongodb crate configured as a dependency in your `Cargo.toml` file, and you should have a `.env` file containing your `MONGODB_URI` environment variable.\n\n### Finding and Sorting\n\nFirst, paste the following into your Rust code:\n\n``` rust\n// Load the MongoDB connection string from an environment variable:\nlet client_uri =\n env::var(\"MONGODB_URI\").expect(\"You must set the MONGODB_URI environment var!\");\n\n// An extra line of code to work around a DNS issue on Windows:\nlet options =\n ClientOptions::parse_with_resolver_config(&client_uri, ResolverConfig::cloudflare())\n .await?;\nlet client = mongodb::Client::with_options(options)?;\n\n// Get the 'movies' collection from the 'sample_mflix' database:\nlet movies = client.database(\"sample_mflix\").collection(\"movies\");\n```\n\nThe above code will provide a `Collection` instance called `movie_collection`, which points to the `movies` collection in your database.\n\nHere is some code which creates a pipeline, executes it with `aggregate`, and then loops through and prints the detail of each movie in the results. Paste it into your program.\n\n``` rust\n// Usually implemented outside your main function:\n#derive(Deserialize)]\nstruct MovieSummary {\n title: String,\n cast: Vec,\n year: i32,\n}\n\nimpl fmt::Display for MovieSummary {\n fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n write!(\n f,\n \"{}, {}, {}\",\n self.title,\n self.cast.get(0).unwrap_or(&\"- no cast -\".to_owned()),\n self.year\n )\n }\n}\n\n// Inside main():\nlet pipeline = vec![\n doc! {\n // filter on movie title:\n \"$match\": {\n \"title\": \"A Star Is Born\"\n }\n },\n doc! {\n // sort by year, ascending:\n \"$sort\": {\n \"year\": 1\n }\n },\n];\n\n// Look up \"A Star is Born\" in ascending year order:\nlet mut results = movies.aggregate(pipeline, None).await?;\n// Loop through the results, convert each of them to a MovieSummary, and then print out.\nwhile let Some(result) = results.next().await {\n // Use serde to deserialize into the MovieSummary struct:\n let doc: MovieSummary = bson::from_document(result?)?;\n println!(\"* {}\", doc);\n}\n```\n\nThis pipeline has two stages. The first is a [$match stage, which is similar to querying a collection with `find()`. It filters the documents passing through the stage based on a read operation query. Because it's the first stage in the pipeline, its input is all of the documents in the `movie` collection. 
The query for the `$match` stage filters on the `title` field of the input documents, so the only documents that will be output from this stage will have a title of \"A Star Is Born.\"\n\nThe second stage is a $sort stage. Only the documents for the movie \"A Star Is Born\" are passed to this stage, so the result will be all of the movies called \"A Star Is Born,\" now sorted by their year field, with the oldest movie first.\n\nCalls to aggregate() return a cursor pointing to the resulting documents. Cursor implements the Stream trait. The cursor can be looped through like any other stream, as long as you've imported StreamExt, which provides the `next()` method. The code above loops through all of the returned documents and prints a short summary, consisting of the title, the first actor in the `cast` array, and the year the movie was produced.\n\nExecuting the code above results in:\n\n``` none\n* A Star Is Born, Janet Gaynor, 1937\n* A Star Is Born, Judy Garland, 1954\n* A Star Is Born, Barbra Streisand, 1976\n```\n\n### Refactoring the Code\n\nIt is possible to build up whole aggregation pipelines as a single data structure, as in the example above, but it's not necessarily a good idea. Pipelines can get long and complex. For this reason, I recommend you build up each stage of your pipeline as a separate variable, and then combine the stages into a pipeline at the end, like this:\n\n``` rust\n// Match title = \"A Star Is Born\":\nlet stage_match_title = doc! {\n \"$match\": {\n \"title\": \"A Star Is Born\"\n }\n};\n\n// Sort by year, ascending:\nlet stage_sort_year_ascending = doc! {\n \"$sort\": { \"year\": 1 }\n};\n\n// Now the pipeline is easier to read:\nlet pipeline = vec stage.\n\nThe **modified and new** code looks like this:\n\n``` rust\n// Sort by year, descending:\nlet stage_sort_year_descending = doc! {\n \"$sort\": {\n \"year\": -1\n }\n};\n\n// Limit to 1 document:\nlet stage_limit_1 = doc! { \"$limit\": 1 };\n\nlet pipeline = vec stage.\n\nI'll show you how to obtain related documents from another collection, and embed them in the documents from your primary collection.\n\nFirst, modify the definition of the `MovieSummary` struct so that it has a `comments` field, loaded from a `related_comments` BSON field. Define a `Comment` struct that contains a subset of the data contained in a `comments` document.\n\n``` rust\n#derive(Deserialize)]\nstruct MovieSummary {\n title: String,\n cast: Vec,\n year: i32,\n #[serde(default, rename = \"related_comments\")]\n comments: Vec,\n}\n\n#[derive(Debug, Deserialize)]\nstruct Comment {\n email: String,\n name: String,\n text: String,\n}\n```\n\nNext, create a new pipeline from scratch, and start with the following:\n\n``` rust\n// Look up related documents in the 'comments' collection:\nlet stage_lookup_comments = doc! {\n \"$lookup\": {\n \"from\": \"comments\",\n \"localField\": \"_id\",\n \"foreignField\": \"movie_id\",\n \"as\": \"related_comments\",\n }\n};\n\n// Limit to the first 5 documents:\nlet stage_limit_5 = doc! { \"$limit\": 5 };\n\nlet pipeline = vec![\n stage_lookup_comments,\n stage_limit_5,\n];\n\nlet mut results = movies.aggregate(pipeline, None).await?;\n// Loop through the results and print a summary and the comments:\nwhile let Some(result) = results.next().await {\n let doc: MovieSummary = bson::from_document(result?)?;\n println!(\"* {}, comments={:?}\", doc, doc.comments);\n}\n```\n\nThe stage I've called `stage_lookup_comments` is a `$lookup` stage. 
This `$lookup` stage will look up documents from the `comments` collection that have the same movie id. The matching comments will be listed as an array in a BSON field named `related_comments`, with an array value containing all of the comments that have this movie's `_id` value as `movie_id`.\n\nI've added a `$limit` stage just to ensure that there's a reasonable amount of output without being overwhelming.\n\nNow, execute the code.\n\n>\n>\n>You may notice that the pipeline above runs pretty slowly! There are two reasons for this:\n>\n>- There are 23.5k movie documents and 50k comments.\n>- There's a missing index on the `comments` collection. It's missing on purpose, to teach you about indexes!\n>\n>I'm not going to show you how to fix the index problem right now. I'll write about that in a later post in this series, focusing on indexes. Instead, I'll show you a trick for working with slow aggregation pipelines while you're developing.\n>\n>Working with slow pipelines is a pain while you're writing and testing the pipeline. *But*, if you put a temporary `$limit` stage at the *start* of your pipeline, it will make the query faster (although the results may be different because you're not running on the whole dataset).\n>\n>When I was writing this pipeline, I had a first stage of `{ \"$limit\": 1000 }`.\n>\n>When you have finished crafting the pipeline, you can comment out the first stage so that the pipeline will now run on the whole collection. **Don't forget to remove the first stage, or you're going to get the wrong results!**\n>\n>\n\nThe aggregation pipeline above will print out summaries of five movie documents. I expect that most or all of your movie summaries will end with this: `comments=[]`.\n\n### Matching on Array Length\n\nIf you're *lucky*, you may have some documents in the array, but it's unlikely, as most of the movies have no comments. Now, I'll show you how to add some stages to match only movies which have more than two comments.\n\nIdeally, you'd be able to add a single `$match` stage which obtained the length of the `related_comments` field and matched it against the expression `{ \"$gt\": 2 }`. In this case, it's actually two steps:\n\n- Add a field (I'll call it `comment_count`) containing the length of the `related_comments` field.\n- Match where the value of `comment_count` is greater than two.\n\nHere is the code for the two stages:\n\n``` rust\n// Calculate the number of comments for each movie:\nlet stage_add_comment_count = doc! {\n \"$addFields\": {\n \"comment_count\": {\n \"$size\": \"$related_comments\"\n }\n }\n};\n\n// Match movie documents with more than 2 comments:\nlet stage_match_with_comments = doc! 
{\n \"$match\": {\n \"comment_count\": {\n \"$gt\": 2\n }\n }\n};\n```\n\nThe two stages go after the `$lookup` stage, and before the `$limit` 5 stage:\n\n``` rust\nlet pipeline = vec![\n stage_lookup_comments,\n stage_add_comment_count,\n stage_match_with_comments,\n limit_5,\n]\n```\n\nWhile I'm here, I'm going to clean up the output of this code to format the comments slightly better:\n\n``` rust\nlet mut results = movies.aggregate(pipeline, None).await?;\n// Loop through the results and print a summary and the comments:\nwhile let Some(result) = results.next().await {\n let doc: MovieSummary = bson::from_document(result?)?;\n println!(\"* {}\", doc);\n if doc.comments.len() > 0 {\n // Print a max of 5 comments per movie:\n for comment in doc.comments.iter().take(5) {\n println!(\n \" - {} <{}>: {}\",\n comment.name,\n comment.email,\n comment.text.chars().take(60).collect::(),\n );\n }\n } else {\n println!(\" - No comments\");\n }\n}\n```\n\n*Now* when you run this code, you should see something more like this:\n\n``` none\n* Midnight, Claudette Colbert, 1939\n - Sansa Stark : Error ex culpa dignissimos assumenda voluptates vel. Qui inventore \n - Theon Greyjoy : Animi dolor minima culpa sequi voluptate. Possimus necessitatibu\n - Donna Smith : Et esse nulla ducimus tempore aliquid. Suscipit iste dignissimos v\n```\n\nIt's good to see Sansa Stark from Game of Thrones really knows her Latin, isn't it?\n\nNow I've shown you how to work with lookups in your pipelines, I'll show you how to use the `$group` stage to do actual *aggregation*.\n\n## Grouping Documents with `$group`\n\nI'll start with a new pipeline again.\n\nThe `$group` stage is one of the more difficult stages to understand, so I'll break this down slowly.\n\nStart with the following code:\n\n``` rust\n// Define a struct to hold grouped data by year:\n#[derive(Debug, Deserialize)]\nstruct YearSummary {\n _id: i32,\n #[serde(default)]\n movie_count: i64,\n #[serde(default)]\n movie_titles: Vec,\n}\n\n// Some movies have \"year\" values ending with '\u00e8'.\n// This stage will filter them out:\nlet stage_filter_valid_years = doc! {\n \"$match\": {\n \"year\": {\n \"$type\": \"number\",\n }\n }\n};\n\n/*\n* Group movies by year, producing 'year-summary' documents that look like:\n* {\n* '_id': 1917,\n* }\n*/\nlet stage_group_year = doc! {\n \"$group\": {\n \"_id\": \"$year\",\n }\n};\n\nlet pipeline = vec![stage_filter_valid_years, stage_group_year];\n\n// Loop through the 'year-summary' documents:\nlet mut results = movies.aggregate(pipeline, None).await?;\n// Loop through the yearly summaries and print their debug representation:\nwhile let Some(result) = results.next().await {\n let doc: YearSummary = bson::from_document(result?)?;\n println!(\"* {:?}\", doc);\n}\n```\n\nIn the `movies` collection, some of the years contain the \"\u00e8\" character. This database has some messy values in it. 
In this case, there's only a small handful of documents, and I think we should just remove them, so I've added a `$match` stage that filters out any documents with a `year` that's not numeric.\n\nExecute this code, and you should see something like this:\n\n``` none\n* YearSummary { _id: 1959, movie_count: 0, movie_titles: [] }\n* YearSummary { _id: 1980, movie_count: 0, movie_titles: [] }\n* YearSummary { _id: 1977, movie_count: 0, movie_titles: [] }\n* YearSummary { _id: 1933, movie_count: 0, movie_titles: [] }\n* YearSummary { _id: 1998, movie_count: 0, movie_titles: [] }\n* YearSummary { _id: 1922, movie_count: 0, movie_titles: [] }\n* YearSummary { _id: 1948, movie_count: 0, movie_titles: [] }\n* YearSummary { _id: 1965, movie_count: 0, movie_titles: [] }\n* YearSummary { _id: 1950, movie_count: 0, movie_titles: [] }\n* YearSummary { _id: 1968, movie_count: 0, movie_titles: [] }\n...\n```\n\nEach line is a document emitted from the aggregation pipeline. But you're not looking at *movie* documents anymore. The `$group` stage groups input documents by the specified `_id` expression and outputs one document for each unique `_id` value. In this case, the expression is `$year`, which means one document will be emitted for each unique value of the `year` field. Each document emitted can (and usually will) also contain values generated from aggregating data from the grouped documents. Currently, the YearSummary documents are using the default values for `movie_count` and `movie_titles`. Let's fix that.\n\nChange the stage definition to the following:\n\n``` rust\nlet stage_group_year = doc! {\n \"$group\": {\n \"_id\": \"$year\",\n // Count the number of movies in the group:\n \"movie_count\": { \"$sum\": 1 },\n }\n};\n```\n\nThis will add a `movie_count` field, containing the result of adding `1` for every document in the group. In other words, it counts the number of movie documents in the group. If you execute the code now, you should see something like the following:\n\n``` none\n* YearSummary { _id: 2005, movie_count: 758, movie_titles: [] }\n* YearSummary { _id: 1999, movie_count: 542, movie_titles: [] }\n* YearSummary { _id: 1943, movie_count: 36, movie_titles: [] }\n* YearSummary { _id: 1926, movie_count: 9, movie_titles: [] }\n* YearSummary { _id: 1935, movie_count: 40, movie_titles: [] }\n* YearSummary { _id: 1966, movie_count: 116, movie_titles: [] }\n* YearSummary { _id: 1971, movie_count: 116, movie_titles: [] }\n* YearSummary { _id: 1952, movie_count: 58, movie_titles: [] }\n* YearSummary { _id: 2013, movie_count: 1221, movie_titles: [] }\n* YearSummary { _id: 1912, movie_count: 2, movie_titles: [] }\n...\n```\n\nThere are a number of [accumulator operators, like `$sum`, that allow you to summarize data from the group. If you wanted to build an array of all the movie titles in the emitted document, you could add `\"movie_titles\": { \"$push\": \"$title\" },` to the `$group` stage. In that case, you would get `YearSummary` instances that look like this:\n\n``` none\n* YearSummary { _id: 1986, movie_count: 206, movie_titles: \"Defense of the Realm\", \"F/X\", \"Mala Noche\", \"Witch from Nepal\", ... ]}\n```\n\nAdd the following stage to sort the results:\n\n``` rust\nlet stage_sort_year_ascending = doc! {\n \"$sort\": {\"_id\": 1}\n};\n\nlet pipeline = vec! 
[\n stage_filter_valid_years, // Match numeric years\n stage_group_year,\n stage_sort_year_ascending, // Sort by year (which is the unique _id field)\n]\n```\n\nNote that the `$match` stage is added to the start of the pipeline, and the `$sort` is added to the end. A general rule is that you should filter documents out early in your pipeline, so that later stages have fewer documents to deal with. It also ensures that the pipeline is more likely to be able to take advantages of any appropriate indexes assigned to the collection.\n\n>Remember, all of the sample code for this quick start series can be found [on GitHub.\n\nAggregations using `$group` are a great way to discover interesting things about your data. In this example, I'm illustrating the number of movies made each year, but it would also be interesting to see information about movies for each country, or even look at the movies made by different actors.\n\n## What Have You Learned?\n\nYou've learned how to construct aggregation pipelines to filter, group, and join documents with other collections. You've hopefully learned that putting a `$limit` stage at the start of your pipeline can be useful to speed up development (but should be removed before going to production). You've also learned some basic optimization tips, like putting filtering expressions towards the start of your pipeline instead of towards the end.\n\nAs you've gone through, you'll probably have noticed that there's a *ton* of different stage types, operators, and accumulator operators. Learning how to use the different components of aggregation pipelines is a big part of learning to use MongoDB effectively as a developer.\n\nI love working with aggregation pipelines, and I'm always surprised at what you can do with them!\n\n## Next Steps\n\nAggregation pipelines are super powerful, and because of this, they're a big topic to cover. Check out the full documentation to get a better idea of their full scope.\n\nMongoDB University also offers a *free* online course on The MongoDB Aggregation Framework.\n\nNote that aggregation pipelines can also be used to generate new data and write it back into a collection, with the $out stage.\n\nMongoDB provides a *free* GUI tool called Compass. It allows you to connect to your MongoDB cluster, so you can browse through databases and analyze the structure and contents of your collections. It includes an aggregation pipeline builder which makes it easier to build aggregation pipelines. I highly recommend you install it, or if you're using MongoDB Atlas, use its similar aggregation pipeline builder in your browser. I often use them to build aggregation pipelines, and they include export buttons which will export your pipeline as Python code (which isn't too hard to transform into Rust).\n\nI don't know about you, but when I was looking at some of the results above, I thought to myself, \"It would be fun to visualise this with a chart.\" MongoDB provides a hosted service called Charts which just *happens* to take aggregation pipelines as input. 
So, now's a good time to give it a try!", "format": "md", "metadata": {"tags": ["Rust", "MongoDB"], "pageDescription": "Query, group, and join data in MongoDB using aggregation pipelines with Rust.", "contentType": "Quickstart"}, "title": "Getting Started with Aggregation Pipelines in Rust", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/email-password-authentication-react", "action": "created", "body": "# Implement Email/Password Authentication in React\n\n> **Note:** GraphQL is deprecated. Learn more.\n\nWelcome back to our journey building a full stack web application with MongoDB Atlas App Services, GraphQL, and React!\n\nIn the first part of the series, we configured the email/password authentication provider in our backend App Service. In this second article, we will integrate the authentication into a web application built with React. We will write only a single line of server-side code and let the App Service handle the rest!\n\nWe will also build the front end of our expense management application, Expengo, using React. By the end of today\u2019s tutorial, we will have the following web application:\n\n## Set up your React web application\n\nMake sure you have Node.js and npm installed on your machine. You can check if they\u2019re correctly set up by running the following commands in your terminal emulator:\n\n```sh\nnode -v\nnpm -v\n```\n\n### Create the React app\nLet\u2019s create a brand new React app. Launch your terminal and execute the following command, where \u201cexpengo\u201d will be the name of our app:\n\n```sh\nnpx create-react-app expengo -y\n```\n\nThe process may take up to a minute to complete. After it\u2019s finished, navigate to your new project:\n\n```sh\ncd expengo\n```\n\n### Add required dependencies\nNext, we\u2019ll install the Realm Web SDK. The SDK enables browser-based applications to access data stored in MongoDB Atlas and interact with Atlas App Services like Functions, authentication, and GraphQL.\n\n```\nnpm install realm-web\n```\n\nWe\u2019ll also install a few other npm packages to make our lives easier:\n\n1. React-router-dom to manage navigation in our app:\n\n ```\n npm install react-router-dom\n ```\n \n1. Material UI to help us build beautiful components without writing a lot of CSS:\n\n ```\n npm install @mui/material @emotion/styled @emotion/react\n ```\n\n### Scaffold the application structure\nFinally, let\u2019s create three new directories with a few files in them. To do that, we\u2019ll use the shell. Feel free to use a GUI or your code editor if you prefer.\n\n```sh\n(cd src/ && mkdir pages/ contexts/ realm/)\n(cd src/pages && touch Home.page.js PrivateRoute.page.js Login.page.js Signup.page.js)\n(cd src/contexts && touch user.context.js)\n(cd src/realm && touch constants.js)\n```\n\nOpen the expengo directory in your code editor. 
The project directory should have the following structure:\n\n```\n\u251c\u2500\u2500 README.md\n\u2514\u2500\u2500node_modules/\n\u251c\u2500\u2500 \u2026\n\u251c\u2500\u2500 package-lock.json\n\u251c\u2500\u2500 package.json\n\u2514\u2500\u2500 public/\n \u251c\u2500\u2500 \u2026\n\u2514\u2500\u2500src/\n \u2514\u2500\u2500contexts/\n \u251c\u2500\u2500user.context.js\n \u2514\u2500\u2500pages/\n \u251c\u2500\u2500Home.page.js\n \u251c\u2500\u2500PrivateRoute.page.js\n \u251c\u2500\u2500Login.page.js\n \u251c\u2500\u2500Signup.page.js\n \u2514\u2500\u2500 realm/\n \u251c\u2500\u2500constants.js\n \u251c\u2500\u2500 App.css\n \u251c\u2500\u2500 App.js\n \u251c\u2500\u2500 App.test.js\n \u251c\u2500\u2500 index.css\n \u251c\u2500\u2500 index.js\n \u251c\u2500\u2500 logo.svg\n \u251c\u2500\u2500 reportWebVitals.js\n \u2514\u2500\u2500 setupTests.js\n```\n\n## Connect your React app with App Services and handle user management\n\nIn this section, we will be creating functions and React components in our app to give our users the ability to log in, sign up, and log out.\n\n* Start by copying your App Services App ID:\n\nNow open this file: `./src/realm/constants.js`\n\nPaste the following code and replace the placeholder with your app Id:\n\n```js\nexport const APP_ID = \"<-- Your App ID -->\";\n```\n\n### Create a React Context for user management\nNow we will add a new React Context on top of all our routes to get access to our user\u2019s details, such as their profile and access tokens. Whenever we need to call a function on a user\u2019s behalf, we can easily do that by consuming this React Context through child components.\nThe following code also implements functions that will do all the interactions with our Realm Server to perform authentication. 
Please take a look at the comments for a function-specific description.\n\n**./src/contexts/user.context.js**\n\n```js\nimport { createContext, useState } from \"react\";\nimport { App, Credentials } from \"realm-web\";\nimport { APP_ID } from \"../realm/constants\";\n \n// Creating a Realm App Instance\nconst app = new App(APP_ID);\n \n// Creating a user context to manage and access all the user related functions\n// across different components and pages.\nexport const UserContext = createContext();\n \nexport const UserProvider = ({ children }) => {\n const user, setUser] = useState(null);\n \n // Function to log in user into our App Service app using their email & password\n const emailPasswordLogin = async (email, password) => {\n const credentials = Credentials.emailPassword(email, password);\n const authenticatedUser = await app.logIn(credentials);\n setUser(authenticatedUser);\n return authenticatedUser;\n };\n \n // Function to sign up user into our App Service app using their email & password\n const emailPasswordSignup = async (email, password) => {\n try {\n await app.emailPasswordAuth.registerUser(email, password);\n // Since we are automatically confirming our users, we are going to log in\n // the user using the same credentials once the signup is complete.\n return emailPasswordLogin(email, password);\n } catch (error) {\n throw error;\n }\n };\n \n // Function to fetch the user (if the user is already logged in) from local storage\n const fetchUser = async () => {\n if (!app.currentUser) return false;\n try {\n await app.currentUser.refreshCustomData();\n // Now, if we have a user, we are setting it to our user context\n // so that we can use it in our app across different components.\n setUser(app.currentUser);\n return app.currentUser;\n } catch (error) {\n throw error;\n }\n }\n \n // Function to logout user from our App Services app\n const logOutUser = async () => {\n if (!app.currentUser) return false;\n try {\n await app.currentUser.logOut();\n // Setting the user to null once loggedOut.\n setUser(null);\n return true;\n } catch (error) {\n throw error\n }\n }\n \n return \n {children}\n ;\n}\n```\n\n## Create a PrivateRoute page\nThis is a wrapper page that will only allow authenticated users to access our app\u2019s private pages. We will see it in action in our ./src/App.js file.\n\n**./src/pages/PrivateRoute.page.js**\n\n```js\nimport { useContext } from \"react\";\nimport { Navigate, Outlet, useLocation } from \"react-router-dom\";\nimport { UserContext } from \"../contexts/user.context\";\n \nconst PrivateRoute = () => {\n \n // Fetching the user from the user context.\n const { user } = useContext(UserContext);\n const location = useLocation();\n const redirectLoginUrl = `/login?redirectTo=${encodeURI(location.pathname)}`;\n \n // If the user is not logged in we are redirecting them\n // to the login page. Otherwise we are letting them to\n // continue to the page as per the URL using .\n return !user ? 
: ;\n}\n \nexport default PrivateRoute;\n```\n\n## Create a login page\nNext, let\u2019s add a login page.\n\n**./src/pages/Login.page.js**\n\n```js\nimport { Button, TextField } from \"@mui/material\";\nimport { useContext, useEffect, useState } from \"react\";\nimport { Link, useLocation, useNavigate } from \"react-router-dom\";\nimport { UserContext } from \"../contexts/user.context\";\n \nconst Login = () => {\n const navigate = useNavigate();\n const location = useLocation();\n \n // We are consuming our user-management context to\n // get & set the user details here\n const { user, fetchUser, emailPasswordLogin } = useContext(UserContext);\n \n // We are using React's \"useState\" hook to keep track\n // of the form values.\n const [form, setForm] = useState({\n email: \"\",\n password: \"\"\n });\n \n // This function will be called whenever the user edits the form.\n const onFormInputChange = (event) => {\n const { name, value } = event.target;\n setForm({ ...form, [name]: value });\n };\n \n // This function will redirect the user to the\n // appropriate page once the authentication is done.\n const redirectNow = () => {\n const redirectTo = location.search.replace(\"?redirectTo=\", \"\");\n navigate(redirectTo ? redirectTo : \"/\");\n }\n \n // Once a user logs in to our app, we don\u2019t want to ask them for their\n // credentials again every time the user refreshes or revisits our app, \n // so we are checking if the user is already logged in and\n // if so we are redirecting the user to the home page.\n // Otherwise we will do nothing and let the user to login.\n const loadUser = async () => {\n if (!user) {\n const fetchedUser = await fetchUser();\n if (fetchedUser) {\n // Redirecting them once fetched.\n redirectNow();\n }\n }\n }\n \n // This useEffect will run only once when the component is mounted.\n // Hence this is helping us in verifying whether the user is already logged in\n // or not.\n useEffect(() => {\n loadUser(); // eslint-disable-next-line react-hooks/exhaustive-deps\n }, []);\n \n // This function gets fired when the user clicks on the \"Login\" button.\n const onSubmit = async (event) => {\n try {\n // Here we are passing user details to our emailPasswordLogin\n // function that we imported from our realm/authentication.js\n // to validate the user credentials and log in the user into our App.\n const user = await emailPasswordLogin(form.email, form.password);\n if (user) {\n redirectNow();\n }\n } catch (error) {\n if (error.statusCode === 401) {\n alert(\"Invalid username/password. Try again!\");\n } else {\n alert(error);\n }\n \n }\n };\n \n return \n \n\nLOGIN\n\n \n \n \n Login\n \n \n\nDon't have an account? Signup\n\n \n}\n \nexport default Login;\n```\n\n## Create a signup page\nNow our users can log into the application, but how do they sign up? 
Time to add a signup page!\n\n**./src/pages/Signup.page.js**\n\n```js\nimport { Button, TextField } from \"@mui/material\";\nimport { useContext, useState } from \"react\";\nimport { Link, useLocation, useNavigate } from \"react-router-dom\";\nimport { UserContext } from \"../contexts/user.context\";\n \nconst Signup = () => {\n const navigate = useNavigate();\n const location = useLocation();\n \n // As explained in the Login page.\n const { emailPasswordSignup } = useContext(UserContext);\n const [form, setForm] = useState({\n email: \"\",\n password: \"\"\n });\n \n // As explained in the Login page.\n const onFormInputChange = (event) => {\n const { name, value } = event.target;\n setForm({ ...form, [name]: value });\n };\n \n \n // As explained in the Login page.\n const redirectNow = () => {\n const redirectTo = location.search.replace(\"?redirectTo=\", \"\");\n navigate(redirectTo ? redirectTo : \"/\");\n }\n \n // As explained in the Login page.\n const onSubmit = async () => {\n try {\n const user = await emailPasswordSignup(form.email, form.password);\n if (user) {\n redirectNow();\n }\n } catch (error) {\n alert(error);\n }\n };\n \n return \n \n\nSIGNUP\n\n \n \n \n Signup\n \n \n\nHave an account already? Login\n\n \n}\n \nexport default Signup;\n```\n\n## Create a homepage\nOur homepage will be a basic page with a title and logout button.\n\n**./src/pages/Home.page.js:**\n\n```js\nimport { Button } from '@mui/material'\nimport { useContext } from 'react';\nimport { UserContext } from '../contexts/user.context';\n \nexport default function Home() {\n const { logOutUser } = useContext(UserContext);\n \n // This function is called when the user clicks the \"Logout\" button.\n const logOut = async () => {\n try {\n // Calling the logOutUser function from the user context.\n const loggedOut = await logOutUser();\n // Now we will refresh the page, and the user will be logged out and\n // redirected to the login page because of the component.\n if (loggedOut) {\n window.location.reload(true);\n }\n } catch (error) {\n alert(error)\n }\n }\n \n return (\n <>\n \n\nWELCOME TO EXPENGO\n\n Logout\n \n )\n}\n```\n\n## Putting it all together in App.js\nLet\u2019s connect all of our pages in the root React component\u2014App.\n\n**./src/App.js**\n\n```js\nimport { BrowserRouter, Route, Routes } from \"react-router-dom\";\nimport { UserProvider } from \"./contexts/user.context\";\nimport Home from \"./pages/Home.page\";\nimport Login from \"./pages/Login.page\";\nimport PrivateRoute from \"./pages/PrivateRoute.page\";\nimport Signup from \"./pages/Signup.page\";\n \nfunction App() {\n return (\n \n {/* We are wrapping our whole app with UserProvider so that */}\n {/* our user is accessible through out the app from any page*/}\n \n \n } />\n } />\n {/* We are protecting our Home Page from unauthenticated */}\n {/* users by wrapping it with PrivateRoute here. */}\n }>\n } />\n \n \n \n \n );\n}\n \nexport default App;\n```\n\n## Launch your React app\nAll have to do now is run the following command from your project directory:\n\n```\nnpm start\n```\n\nOnce the compilation is complete, you will be able to access your app from your browser at http://localhost:3000/. You should be able to sign up and log into your app now.\n\n## Conclusion\nWoah! We have made a tremendous amount of progress. Authentication is a very crucial part of any app and once that\u2019s done, we can focus on features that will make our users\u2019 lives easier. 
In the next part of this blog series, we\u2019ll be leveraging App Services GraphQL to perform CRUD operations. I\u2019m excited about that because the basic setup is already over.\n\nIf you have any doubts or concerns, please feel free to reach out to us on the MongoDB Community Forums. I have created a [dedicated forum topic for this blog where we can discuss anything related to this blog series.\n\nAnd before you ask, here\u2019s the GitHub repository, as well!\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "React"], "pageDescription": "Configuring signup and login authentication is a common step for nearly every web application. Learn how to set up email/password authentication in React using MongoDB Atlas App Services.", "contentType": "Tutorial"}, "title": "Implement Email/Password Authentication in React", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/flask-app-ufo-tracking", "action": "created", "body": "\n You must submit {{message}}.\n ", "format": "md", "metadata": {"tags": ["Python", "Flask"], "pageDescription": "Learn step-by-step how to build a full-stack web application to track reports of unidentified flying objects (UFOs) in your area.", "contentType": "Tutorial"}, "title": "Build an App With Python, Flask, and MongoDB to Track UFOs", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/slowly-changing-dimensions-application-mongodb", "action": "created", "body": "# Slowly Changing Dimensions and Their Application in MongoDB\n\nThe concept of \u201cslowly changing dimensions\u201d (usually abbreviated as SCD) has been around for a long time and is a staple in SQL-based data warehousing. The fundamental idea is to track all changes to data in the data warehouse over time. The \u201cslowly changing\u201d part of the name refers to the assumption that the data that is covered by this data model changes with a low frequency, but without any apparent pattern in time. This data model is used when the requirements for the data warehouse cover functionality to track and reproduce outputs based on historical states of data.\n\nOne common case of this is for reporting purposes, where the data warehouse must explain the difference of a report produced last month, and why the aggregated values are different in the current version of the report. Requirements such as these are often encountered in financial reporting systems.\n\nThere are many ways to implement slowly changing dimensions in SQL, referred to as the \u201ctypes.\u201d Types 0 and 1 are the most basic ones that only keep track of the current state of data (in Type 1) or in the original state (Type 0). The most commonly applied one is Type 2. 
SCD Type 2 implements three new fields, \u201cvalidFrom,\u201d \u201cvalidTo,\u201d and an optional flag on the latest set of data, which is usually called \u201cisValid\u201d or \u201cisEffective.\u201d\n\n**Table of SCD types:**\n\n| | |\n| --- | --- |\n| **SCD Type** | **Description** |\n| SCD Type 0 | Only keep original state, data can not be changed |\n| SCD Type 1 | Only keep updated state, history can not be stored |\n| SCD Type 2 | Keep history in new row/document |\n| SCD Type 3 | Keep history in new fields in same row/document |\n| SCD Type 4 | Keep history in separate collection |\n| SCD Types >4 | Combinations of previous types \u2014 e.g., Type 5 is Type 1 plus Type 4 |\n\nIn this simplest implementation of SCD, every record contains the information on the validity period for this set of data and all different validities are kept in the same collection or table. \n\nIn applying this same concept to MongoDB\u2019s document data model, the approach is exactly the same as in a relational database. In the comparison of data models, the normalization that is the staple of relational databases is not the recommended approach in the document model, but the details of this have been covered in many blog posts \u2014 for example, the 6 Rules of Thumb for MongoDB Schema Design. The concept of slowly changing dimensions applies on a per document basis in the chosen and optimized data model for the specific use case. The best way to illustrate this is in a small example.\n\nConsider the following use case: Your MongoDB stores the prices of a set of items, and you need to keep track of the changes of the price of an item over time, in order to be able to process returns of an item, as the money refunded needs to be the price of the item at the time of purchase. You have a simple collection called \u201cprices\u201d and each document has an itemID and a price.\n\n```\ndb.prices.insertMany(\n { 'item': 'shorts', 'price': 10 },\n { 'item': 't-shirt', 'price': 2 },\n { 'item': 'pants', 'price': 5 }\n]);\n\n```\nNow, the price of \u201cpants\u201d changes from 5 to 7. This can be done and tracked by assuming default values for the necessary data fields for SCD Type 2. The default value for \u201cvalidFrom\u201d is 01.01.1900, \u201cvalidTo\u201d is 01.01.9999, and isValid is \u201ctrue.\u201d\n\nThe change to the price of the \u201cpants\u201d item is then executed as an insert of the new document, and an update to the previously valid one.\n\n```\nlet now = new Date();\ndb.prices.updateOne(\n { 'item': 'pants', \"$or\":[{\"isValid\":false},{\"isValid\":null}]},\n {\"$set\":{\"validFrom\":new Date(\"1900-01-01\"), \"validTo\":now,\"isValid\":false}}\n);\ndb.prices.insertOne(\n { 'item': 'pants', 'price': 7 ,\"validFrom\":now, \"validTo\":new Date(\"9999-01-01\"),\"isValid\":true}\n);\n\n```\nAs it is essential that the chain of validity is unbroken, the two database operations should happen with the same timestamp. Depending on the requirements of the application, it might make sense to wrap these two commands into a transaction to ensure both changes are always applied together. There are also ways to push this process to the background, but as per the initial assumption in the slowly changing dimensions, changes like this are infrequent and data consistency is the highest priority. 
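A minimal sketch of what that could look like in the MongoDB shell is shown below. It assumes a replica set or Atlas cluster (multi-document transactions require one) and that the currently valid document already carries the `isValid: true` flag; error handling is kept to the bare minimum.

```javascript
const session = db.getMongo().startSession();
const prices = session.getDatabase(db.getName()).getCollection("prices");
const now = new Date();

try {
  session.startTransaction();
  // Close the validity period of the currently valid document...
  prices.updateOne(
    { item: "pants", isValid: true },
    { $set: { validTo: now, isValid: false } }
  );
  // ...and insert the new version with the exact same timestamp,
  // so the chain of validity has no gaps.
  prices.insertOne({
    item: "pants",
    price: 7,
    validFrom: now,
    validTo: new Date("9999-01-01"),
    isValid: true
  });
  session.commitTransaction();
} catch (error) {
  session.abortTransaction();
  throw error;
} finally {
  session.endSession();
}
```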
Therefore, the performance impact of a transaction is acceptable for this use case.\n\nIf you then want to query the latest price for an item, it\u2019s as simple as specifying:\n\n```\ndb.prices.find({ 'item': 'pants','isValid':true});\n```\nAnd if you want to query for the state at a specific point in time:\n```\nlet time = new Date(\"2022-11-16T13:00:00\")\ndb.prices.find({ 'item': 'pants','validFrom':{'$lte':time}, 'validTo':{'$gt':time}});\n```\nThis example shows that the flexibility of the document model allows us to take a relational concept and directly apply it to data inside MongoDB. But it also opens up other methods that are not possible in relational databases. Consider the following: What if you only need to track changes to very few fields in a document? Then you could simply embed the history of a field as an array in the first document. This implements SCD Type 3, storing the history in new fields, but without the limitation and overhead of creating new columns in a relational database. SCD Type 3 in RDMBS is usually limited to storing only the last few changes, as adding new columns on the fly is not possible. \n\nThe following aggregation pipeline does exactly that. It changes the price to 7, and stores the previous value of the price with a timestamp of when the old price became invalid in an array called \u201cpriceHistory\u201d:\n\n```\ndb.prices.aggregate([\n { $match: {'item': 'pants'}},\n { $addFields: { price: 7 ,\n priceHistory: { $concatArrays:\n [{$ifNull: ['$priceHistory', []]},\n [{price: \"$price\",time: now}]]}\n }\n },\n { $merge: {\n into: \"prices\",\n on: \"_id\",\n whenMatched: \"merge\",\n whenNotMatched: \"fail\"\n }}])\n\n```\nThere are some caveats to that solution which cover large array sizes, but there are known solutions to deal with these kinds of data modeling challenges. In order to avoid large arrays, you could apply the \u201cOutlier\u201d or \u201cBucketing\u201d patterns of the many possibilities in [MongoDB schema design and many useful explanations on what to avoid. \n\nIn this way, you could store the most recent history of data changes in the documents themselves, and if any analysis gets deeper into past changes, it would have to load the older change history from a separate collection. This approach might sound similar to the stated issue of adding new fields in a relational database, but there are two differences: Firstly, MongoDB does not encounter this problem until more than 100 changes are done on a single document. And secondly, MongoDB has tools to dynamically deal with large arrays, whereas in relational DBs, the solution would be to choose a different approach, as even pre-allocating more than 10 columns for changes is not a good idea in SQL. \n\nBut in both worlds, dealing with many changes in SCD Type 3 requires an extension to a different SCD type, as having a separate collection for the history is SCD Type 4.\n\n## Outlook Data Lake/Data Federation\nThe shown example focuses on a strict and accurate representation of changes. Sometimes, there are less strict requirements on the necessity to show historical changes in data. It might be that 95% of the time, the applications using the MongoDB database are only interested in the current state of the data, but some (analytical) queries still need to be run on the full history of data. \n\nIn this case, it might be more efficient to store the current version of the data in one collection, and the historical changes in another. 
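A minimal sketch of that split (essentially SCD Type 4) could look like the following. Here, `prices` holds only the latest version per item, and `prices_history` is a placeholder name for the collection that receives every superseded version.

```javascript
const now = new Date();

// Read the current version of the document before changing it.
const current = db.prices.findOne({ item: "pants" });

// Archive the old version, stamped with the moment it stopped being valid.
db.prices_history.insertOne({ ...current, validTo: now });

// Overwrite the current version in place, so "prices" only ever
// contains one document per item.
db.prices.updateOne(
  { item: "pants" },
  { $set: { price: 7, validFrom: now } }
);
```

As with the Type 2 example above, the archive insert and the update belong in the same transaction if consistency matters.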
The historical collection could then even be removed from the active MongoDB cluster by using MongoDB Atlas Federated Database functionalities, and in the fully managed version using Atlas Online Archive.\n\nIf the requirement for tracking the changes is different in a way that not every single change needs to be tracked, but rather a series of checkpoints is required to show the state of data at specific times, then Atlas Data Lake might be the correct solution. With Atlas Data Lake, you are able to extract a snapshot of the data at specific points in time, giving you a similar level of traceability, albeit at fixed time intervals. Initially the concept of SCD was developed to avoid data duplication in such a case, as it does not store an additional document if nothing changes. In today's world where cold storage has become much more affordable, Data Lake offers the possibility to analyze data from your productive system, using regular snapshots, without doing any changes to the system or even increasing the load on the core database.\n\nAll in all, the concept of slowly changing dimensions enables you to cover part of the core requirements for a data warehouse by giving you the necessary tools to keep track of all changes.\n\n## Applying SCD methods outside of data warehousing\nWhile the fundamental concept of slowly changing dimensions was developed with data warehouses in mind, another area where derivatives of the techniques developed there can be useful is in event-driven applications. Given the case that you have infrequent events, in different types of categories, it\u2019s oftentimes an expensive database action to find the latest event per category. The process for that might require grouping and/or sorting your data in order to find the current state.\n\nIn this case, it might make sense to amend the data model by a flag similar to the \u201cisValid'' flag of the SCD Type 2 example above, or even go one step further and not only store the event time per document, but adding the time of the next event in a similar fashion to the SCD Type 2 implementation. The flag enables very fast queries for the latest set of data per event type, and the date ensures that if you execute a search for a specific point in time, it\u2019s easy and efficient to get the respective event that you are looking for. \n\nIn such a case, it might make sense to separate the \u201cevents\u201d and their processed versions that include the isValid flag and the validity end date into separate collections, utilizing more of the methodologies of the different types of SCD implementations. 
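\n\nThe following is a minimal sketch of that flag-based approach. It assumes a hypothetical \u201cevents\u201d collection and illustrative field names (\u201ccategory,\u201d \u201ceventTime,\u201d \u201cisValid,\u201d and \u201cvalidTo\u201d) that are not part of the original example, so adjust them to your own schema.\n\n```\n// Without SCD-style fields: find the latest event per category by grouping/sorting (expensive on large collections).\ndb.events.aggregate([\n { $sort: { category: 1, eventTime: -1 } },\n { $group: { _id: '$category', latestEvent: { $first: '$$ROOT' } } }\n]);\n\n// With an isValid flag (and a validTo date) maintained on every insert,\n// the same lookups become simple, index-friendly finds:\ndb.events.createIndex({ category: 1, isValid: 1 });\ndb.events.find({ 'category': 'temperature', 'isValid': true });\n\n// Point-in-time lookup, analogous to the SCD Type 2 price query above:\nlet time = new Date(\"2022-11-16T13:00:00\");\ndb.events.find({ 'category': 'temperature', 'eventTime': {'$lte': time}, 'validTo': {'$gt': time}});\n```\nAs in the price example, the trade-off is that inserting a new event also has to update the previous event\u2019s isValid and validTo fields, ideally within the same transaction.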
\n\nSo, the next time you encounter a data model that requires keeping track of changes, think, \u201cSCD could be useful and can easily be applied in the document model.\u201d If you want to implement slowly changing dimensions in your MongoDB use case, consider getting support from the MongoDB Professional Services team.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "This article describes how to implement the concept of \u201cslowly changing dimensions\u201d (SCD) in the MongoDB document model and how to efficiently query them.", "contentType": "Article"}, "title": "Slowly Changing Dimensions and Their Application in MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/johns-hopkins-university-covid-19-data-atlas", "action": "created", "body": "# How to work with Johns Hopkins University COVID-19 Data in MongoDB Atlas\n\n## TL;DR\n\nOur MongoDB cluster is running on version 7.0.3.\n\nYou can connect to it using MongoDB Compass, the Mongo Shell, SQL, or any MongoDB driver supporting at least MongoDB 7.0\nwith the following URI:\n\n``` none\nmongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19\n```\n\n> `readonly` is both the username and the password; they are not meant to be replaced.\n\n## News\n\n### November 15th, 2023\n\n- Johns Hopkins University (JHU) has stopped collecting data as of March 10th, 2023.\n- Here is JHU's GitHub repository.\n- First data entry is 2020-01-22, last one is 2023-03-09.\n- Cluster now running on 7.0.3.\n- Removed the database `covid19jhu` with the raw data. Use the much better database `covid19`.\n- BI Tools access is now disabled.\n\n### December 10th, 2020\n\n- Upgraded the cluster to 4.4.\n- Improved the Python data import script to calculate the daily values using the existing cumulative values with\n an Aggregation Pipeline.\n - confirmed_daily.\n - deaths_daily.\n - recovered_daily.\n\n### May 13th, 2020\n\n- Renamed the field \"city\" to \"county\" and \"cities\" to \"counties\" where appropriate. They contain the data from the\n column \"Admin2\" in JHU CSVs.\n\n### May 6th, 2020\n\n- The `covid19` database now has 5 collections. 
More details in\n our README.md.\n- The `covid19.statistics` collection is renamed `covid19.global_and_us` for more clarity.\n- Maxime's Charts are now\n using the `covid19.global_and_us` collection.\n- The dataset is updated hourly so any commit done by JHU will be reflected at most one hour later in our cluster.\n\n## Table of Contents\n\n- Introduction\n- The MongoDB Dataset\n- Get Started\n - Explore the Dataset with MongoDB Charts\n - Explore the Dataset with MongoDB Compass\n - Explore the Dataset with the MongoDB Shell\n - Accessing the Data with Java\n - Accessing the Data with Node.js\n - Accessing the Data with Python\n - Accessing the Data with Golang\n - Accessing the Data with Google Colaboratory\n - Accessing the Data with Business Intelligence Tools\n - Accessing the Data with any SQL tool\n - Take a copy of the data\n- Wrap up\n- Sources\n\n## Introduction\n\nAs the COVID-19 pandemic has swept the globe, the work of JHU (Johns Hopkins University) and\nits COVID-19 dashboard has become vitally important in keeping people informed\nabout the progress of the virus in their communities, in their countries, and in the world.\n\nJHU not only publishes their dashboard,\nbut they make the data powering it freely available for anyone to use.\nHowever, their data is delivered as flat CSV files which you need to download each time to then query. We've set out to\nmake that up-to-date data more accessible so people could build other analyses and applications directly on top of the\ndata set.\n\nWe are now hosting a service with a frequently updated copy of the JHU data in MongoDB Atlas, our database in the cloud.\nThis data is free for anyone to query using the MongoDB Query language and/or SQL. We also support\na variety of BI tools directly, so you can query the data with Tableau,\nQlik and Excel.\n\nWith the MongoDB COVID-19 dataset there will be no more manual downloads and no more frequent format changes. With this\ndata set, this service will deliver a consistent JSON and SQL view every day with no\ndownstream ETL required.\n\nNone of the actual data is modified. It is simply structured to make it easier to query by placing it within\na MongoDB Atlas cluster and by creating some convenient APIs.\n\n## The MongoDB Dataset\n\nAll the data we use to create the MongoDB COVID-19 dataset comes from the JHU dataset. 
In their\nturn, here are the sources they are using:\n\n- the World Health Organization,\n- the National Health Commission of the People's Republic of China,\n- the United States Centre for Disease Control,\n- the Australia Government Department of Health,\n- the European Centre for Disease Prevention and Control,\n- and many others.\n\nYou can read the full list on their GitHub repository.\n\nUsing the CSV files they provide, we are producing two different databases in our cluster.\n\n- `covid19jhu` contains the raw CSV files imported with\n the mongoimport tool,\n- `covid19` contains the same dataset but with a clean MongoDB schema design with all the good practices we are\n recommending.\n\nHere is an example of a document in the `covid19` database:\n\n``` javascript\n{\n \"_id\" : ObjectId(\"5e957bfcbd78b2f11ba349bf\"),\n \"uid\" : 312,\n \"country_iso2\" : \"GP\",\n \"country_iso3\" : \"GLP\",\n \"country_code\" : 312,\n \"state\" : \"Guadeloupe\",\n \"country\" : \"France\",\n \"combined_name\" : \"Guadeloupe, France\",\n \"population\" : 400127,\n \"loc\" : {\n \"type\" : \"Point\",\n \"coordinates\" : -61.551, 16.265 ]\n },\n \"date\" : ISODate(\"2020-04-13T00:00:00Z\"),\n \"confirmed\" : 143,\n \"deaths\" : 8,\n \"recovered\" : 67\n}\n```\n\nThe document above was obtained by joining together the file `UID_ISO_FIPS_LookUp_Table.csv` and the CSV files time\nseries you can find\nin [this folder.\n\nSome fields might not exist in all the documents because they are not relevant or are just not provided\nby JHU. If you want more details, run a schema analysis\nwith MongoDB Compass on the different collections available.\n\nIf you prefer to host the data yourself, the scripts required to download and transform the JHU data are\nopen-source. You\ncan view them and instructions for how to use them on our GitHub repository.\n\nIn the `covid19` database, you will find 5 collections which are detailed in\nour GitHub repository README.md file.\n\n- metadata\n- global (the data from the time series global files)\n- us_only (the data from the time series US files)\n- global_and_us (the most complete one)\n- countries_summary (same as global but countries are grouped in a single doc for each date)\n\n## Get Started\n\nYou can begin exploring the data right away without any MongoDB or programming experience\nusing MongoDB Charts\nor MongoDB Compass.\n\nIn the following sections, we will also show you how to consume this dataset using the Java, Node.js and Python drivers.\n\nWe will show you how to perform the following queries in each language:\n\n- Retrieve the last 5 days of data for a given place,\n- Retrieve all the data for the last day,\n- Make a geospatial query to retrieve data within a certain distance of a given place.\n\n### Explore the Dataset with MongoDB Charts\n\nWith Charts, you can create visualisations of the data using any of the\npre-built graphs and charts. You can\nthen arrange this into a unique dashboard,\nor embed the charts in your pages or blogs.\n\n:charts]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-4266-8264-d37ce88ff9fa\ntheme=light autorefresh=3600}\n\n> If you want to create your own MongoDB Charts dashboard, you will need to set up your\n> own [Free MongoDB Atlas cluster and import the dataset in your cluster using\n> the import scripts or\n> use `mongoexport & mongoimport` or `mongodump & mongorestore`. 
See this section for more\n> details: Take a copy of the data.\n\n### Explore the Dataset with MongoDB Compass\n\nCompass allows you to dig deeper into the data using\nthe MongoDB Query Language or via\nthe Aggregation Pipeline visual editor. Perform a range of\noperations on the\ndata, including mathematical, comparison and groupings.\nCreate documents that provide unique insights and interpretations. You can use the output from your pipelines\nas data-sources for your Charts.\n\nFor MongoDB Compass or your driver, you can use this connection string.\n\n``` \nmongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19\n```\n\n### Explore the Dataset with the MongoDB Shell\n\nBecause we store the data in MongoDB, you can also access it via\nthe MongoDB Shell or\nusing any of our drivers. We've limited access to these collections to 'read-only'.\nYou can find the connection strings for the shell and Compass below, as well as driver examples\nfor Java, Node.js,\nand Python to get you started.\n\n``` shell\nmongo \"mongodb+srv://covid-19.hip2i.mongodb.net/covid19\" --username readonly --password readonly\n```\n\n### Accessing the Data with Java\n\nOur Java examples are available in\nour Github Repository Java folder.\n\nYou need the three POJOs from\nthe Java Github folder\nto make this work.\n\n### Accessing the Data with Node.js\n\nOur Node.js examples are available in\nour Github Repository Node.js folder.\n\n### Accessing the Data with Python\n\nOur Python examples are available in\nour Github Repository Python folder.\n\n### Accessing the Data with Golang\n\nOur Golang examples are available in\nour Github Repository Golang folder.\n\n### Accessing the Data with Google Colaboratory\n\nIf you have a Google account, a great way to get started is with\nour Google Colab Notebook.\n\nThe sample code shows how to install pymongo and use it to connect to the MongoDB COVID-19 dataset. There are some\nexample queries which show how to query the data and display it in the notebook, and the last example demonstrates how\nto display a chart using Pandas & Matplotlib!\n\nIf you want to modify the notebook, you can take a copy by selecting \"Save a copy in Drive ...\" from the \"File\" menu,\nand then you'll be free to edit the copy.\n\n### Accessing the Data with Business Intelligence Tools\n\nYou can get lots of value from the dataset without any programming at all. We've enabled\nthe Atlas BI Connector (not anymore, see News section), which exposes\nan SQL interface to MongoDB's document structure. 
This means you can use data analysis and dashboarding tools\nlike Tableau, Qlik Sense,\nand even MySQL Workbench to analyze, visualise and extract understanding\nfrom the data.\n\nHere's an example of a visualisation produced in a few clicks with Tableau:\n\nTableau is a powerful data visualisation and dashboard tool, and can be connected to our COVID-19 data in a few steps.\nWe've written a short tutorial\nto get you up and running.\n\n### Accessing the Data with any SQL tool\n\nAs mentioned above, the Atlas BI Connector is activated (not anymore, see News section), so you can\nconnect any SQL tool to this cluster using the following connection information:\n\n- Server: covid-19-biconnector.hip2i.mongodb.net,\n- Port: 27015,\n- Database: covid19,\n- Username: readonly or readonly?source=admin,\n- Password: readonly.\n\n### Take a copy of the data\n\nAccessing *our* copy of this data in a read-only database is useful, but it won't be enough if you want to integrate it\nwith other data within a single MongoDB cluster. You can obtain a copy of the database, either to use offline using a\ndifferent tool outside of MongoDB, or to load into your own MongoDB instance. `mongoexport` is a command-line tool that\nproduces a JSONL or CSV export of data stored in a MongoDB instance. First, follow\nthese instructions to install the MongoDB Database Tools.\n\nNow you can run the following in your console to download the metadata and global_and_us collections as jsonl files in\nyour current directory:\n\n``` bash\nmongoexport --collection='global_and_us' --out='global_and_us.jsonl' --uri=\"mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19\"\nmongoexport --collection='metadata' --out='metadata.jsonl' --uri=\"mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19\"\n```\n\n> Use the `--jsonArray` option if you prefer to work with a JSON array rather than a JSONL file.\n\nDocumentation for all the features of `mongoexport` is available on\nthe MongoDB website and with the command `mongoexport --help`.\n\nOnce you have the data on your computer, you can use it directly with local tools, or load it into your own MongoDB\ninstance using mongoimport.\n\n``` bash\nmongoimport --collection='global_and_us' --uri=\"mongodb+srv://:@.mongodb.net/covid19\" global_and_us.jsonl\nmongoimport --collection='metadata' --uri=\"mongodb+srv://:@.mongodb.net/covid19\" metadata.jsonl\n```\n\n> Note that you cannot run these commands against our cluster because the user we gave you (`readonly:readonly`) doesn't\n> have write permission on this cluster.\n> Read our Getting Your Free MongoDB Atlas Cluster blog post if you want to know more.\n\nAnother smart way to duplicate the dataset in your own cluster would be to use `mongodump` and `mongorestore`. Apart\nfrom being more efficient, it will also grab the indexes definition along with the data.\n\n``` bash\nmongodump --uri=\"mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19\"\nmongorestore --drop --uri=\"\"\n```\n\n## Wrap up\n\nWe see the value and importance of making this data as readily available to everyone as possible, so we're not stopping\nhere. 
Over the coming days, we'll be adding a GraphQL and REST API, as well as making the data available within Excel\nand Google Sheets.\n\nWe've also launched an Atlas credits program for\nanyone working on detecting, understanding, and stopping the spread of COVID-19.\n\nIf you are having any problems accessing the data or have other data sets you would like to host please contact us\non the MongoDB community. We would also love to showcase any services you build on top\nof this data set. Finally please send in PRs for any code changes you would like to make to the examples.\n\nYou can also reach out to the authors\ndirectly (Aaron Bassett, Joe Karlsson, Mark Smith,\nand Maxime Beugnet) on Twitter.\n\n## Sources\n\n- MongoDB Open Data COVID-19 GitHub repository\n- JHU Dataset on GitHub repository\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Making the Johns Hopkins University COVID-19 Data open and accessible to all with MongoDB", "contentType": "Article"}, "title": "How to work with Johns Hopkins University COVID-19 Data in MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/mongodb-orms-odms-libraries", "action": "created", "body": "# MongoDB ORMs, ODMs, and Libraries\n\nThough developers have always been capable of manually writing complex queries to interact with a database, this approach can be tedious and error-prone.\u00a0Object-Relational Mappers\u00a0(or ORMs) improve the developer experience, as they accomplish multiple meaningful tasks:\n\n* Facilitating interactions between the database and an application by abstracting away the need to write raw SQL or database query language.\n* Managing serialization/deserialization of data to objects.\n* Enforcement of schema.\n\nSo, while it\u2019s true that MongoDB offers\u00a0Drivers\u00a0with idiomatic APIs and helpers for most\u00a0 programming languages, sometimes a higher level abstraction is desirable. Developers are used to interacting with data in a more declarative fashion (LINQ for C#, ActiveRecord for Ruby, etc.) and an ORM facilitates code maintainability and reuse by allowing developers to interact with data as objects.\n\nMongoDB provides a number of ORM-like libraries, and our\u00a0community\u00a0and partners have as well! These are sometimes referred to as ODMs (Object Document Mappers), as MongoDB is not a relational database management system. However, they exist to solve the same problem as ORMs do and the terminology can be used interchangeably.\n\nThe following are some examples of the best MongoDB ORM or ODM libraries for a number of programming languages, including Ruby, Python, Java, Node.js, and PHP.\n\n## Beanie\n\nBeanie is an Asynchronous Python object-document mapper (ODM) for MongoDB, based on\u00a0Motor\u00a0(an asynchronous MongoDB driver) and\u00a0Pydantic.\n\nWhen using Beanie, each database collection has a corresponding document that is used to interact with that collection. In addition to retrieving data, Beanie allows you to add, update, and delete documents from the collection. Beanie saves you time by removing boilerplate code, and it helps you focus on the parts of your app that actually matter.\n\nSee the\u00a0Beanie documentation\u00a0for more information.\n\n## Doctrine\n\nDoctrine is a PHP MongoDB ORM, even though it\u2019s referred to as an ODM. 
This library provides PHP object mapping functionality and transparent persistence for PHP objects to MongoDB, as well as a mechanism to map embedded or referenced documents. It can also create references between PHP documents in different databases and work with GridFS buckets.\n\nSee the\u00a0Doctrine MongoDB ODM documentation\u00a0for more information.\n\n## Mongoid\n\nMost Ruby-based applications are built using the\u00a0Ruby on Rails\u00a0framework. As a result, Rails\u2019\u00a0Active Record\u00a0implementation, conventions, CRUD API, and callback mechanisms are second nature to Ruby developers. So, as far as a MongoDB ORM for Ruby, the Mongoid ODM provides API parity wherever possible to ensure developers working with a Rails application and using MongoDB can do so using methods and mechanics they\u2019re already familiar with.\n\nSee the\u00a0Mongoid documentation\u00a0for more information.\n\n## Mongoose\n\nIf you\u2019re seeking an ORM for NodeJS and MongoDB, look no further than Mongoose. This Node.js-based Object Data Modeling (ODM) library for MongoDB is akin to an Object Relational Mapper (ORM) such as\u00a0SQLAlchemy. The problem that Mongoose aims to solve is allowing developers to enforce a specific schema at the application layer. In addition to enforcing a schema, Mongoose also offers a variety of hooks, model validation, and other features aimed at making it easier to work with MongoDB.\n\nSee the\u00a0Mongoose documentation\u00a0or\u00a0MongoDB & Mongoose: Compatibility and Comparison\u00a0for more information.\n\n## MongoEngine\n\nMongoEngine is a Python ORM for MongoDB. Branded as a Document-Object Mapper, it uses a simple declarative API, similar to the Django ORM.\n\nIt was first released in 2015 as an open-source project, and the current version is built on top of\u00a0PyMongo, the official Python Driver by MongoDB.\n\nSee the\u00a0MongoEngine documentation\u00a0for more information.\n\n## Prisma\n\nPrisma is a\u00a0new kind of ORM\u00a0for Node.js and Typescript that fundamentally differs from traditional ORMs. With Prisma, you define your models in the declarative\u00a0Prisma schema, which serves as the single source of truth for your database schema and the models in your programming language. The Prisma Client will read and write data to your database in a type-safe manner, without the overhead of managing complex model instances. This makes the process of querying data a lot more natural as well as more predictable since Prisma Client always returns plain JavaScript objects.\n\nSupport for MongoDB was one of the most requested features since the initial release of the Prisma ORM, and was added in version 3.12.\n\nSee\u00a0Prisma & MongoDB\u00a0for more information.\n\n## Spring Data MongoDB\n\nIf you\u2019re seeking a Java ORM for MongoDB, Spring Data for MongoDB is the most popular choice for Java developers. 
The\u00a0Spring Data\u00a0project provides a familiar and consistent Spring-based programming model for new datastores while retaining store-specific features and capabilities.\n\nKey functional areas of Spring Data MongoDB that Java developers will benefit from are a POJO centric model for interacting with a MongoDB DBCollection and easily writing a repository-style data access layer.\n\nSee the\u00a0Spring Data MongoDB documentation\u00a0or the\u00a0Spring Boot Integration with MongoDB Tutorial\u00a0for more information.\n\n## Go Build Something Awesome!\n\nThough not an exhaustive list of the available MongoDB ORM and ODM libraries available right now, the entries above should allow you to get started using MongoDB in your language of choice more naturally and efficiently.\n\nIf you\u2019re looking for assistance or have any feedback don\u2019t hesitate to engage on our\u00a0Community Forums.", "format": "md", "metadata": {"tags": ["MongoDB", "Ruby", "Python", "Java"], "pageDescription": "MongoDB has a number of ORMs, ODMs, and Libraries that simplify the interaction between your application and your MongoDB cluster. Build faster with the best database for Ruby, Python, Java, Node.js, and PHP using these libraries, ORMs, and ODMs.", "contentType": "Article"}, "title": "MongoDB ORMs, ODMs, and Libraries", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/source-generated-classes-nullability-realm", "action": "created", "body": "# Source Generated Classes and Nullability in Realm .NET\n\nThe latest releases of Realm .NET have included some interesting updates that we would love to share \u2014 in particular, source generated classes and support for nullability annotation. \n\n## Source generated classes\n\nRealm 10.18.0 introduced `Realm.SourceGenerator`, a source generator that can generate Realm model classes. This is part of our ongoing effort to modernize the Realm library, and will allow us to introduce certain language level features more easily in the future.\n\nThe migration to the new source generated classes is quite straightforward. All you need to do is:\n\n* Declare the Realm classes as `partial`, including all the eventual containing classes.\n* Swap out the base Realm classes (`RealmObject`, `EmbeddedObject`, `AsymmetricObject`) for the equivalent interfaces (`IRealmObject`, `IEmbeddedObject`, `IAsymmetricObject`).\n* Declare `OnManaged` and `OnPropertyChanged` methods as `partial` instead of overriding them, if they are used.\n\nThe property definition remains the same, and the source generator will take care of adding the full implementation of the interfaces. 
\n\nTo give an example, if your model definition looks like this:\n\n```csharp\npublic class Person: RealmObject\n{\n public string Name { get; set; } \n\n public PhoneNumber Phone { get; set; }\n\n protected override void OnManaged()\n {\n //...\n }\n\n protected override void OnPropertyChanged(string propertyName)\n {\n //...\n } \n}\n\npublic class PhoneNumber: EmbeddedObject\n{\n public string Number { get; set; }\n\n public string Prefix { get; set; } \n}\n```\nThis is how it should look like after you migrated it: \n\n```csharp\npublic partial class Person: IRealmObject\n{\n public string Name { get; set; } \n\n public PhoneNumber Phone { get; set; }\n\n partial void OnManaged()\n {\n //...\n }\n\n partial void OnPropertyChanged(string propertyName)\n {\n //...\n } \n}\n\npublic partial class PhoneNumber: IEmbeddedObject\n{\n public string Number { get; set; }\n\n public string Prefix { get; set; } \n}\n```\nThe classic Realm model definition is still supported, but it will not receive some of the new updates, such as the support for nullability annotations, and will be phased out in the future. \n\n## Nullability annotations\n\nRealm 10.20.0 introduced full support for nullability annotations in the model definition for source generated classes. This allows you to use Realm models as usual when nullable context is active, and removes the need to use the `Required` attribute to indicate required properties, as this information will be inferred directly from the nullability status.\n\nTo sum up the expected nullability annotations:\n* Value type properties, such as `int`, can be declared as before, either nullable or not.\n* `string` and `byte]` properties now cannot be decorated anymore with the `Required` attribute, as this information will be inferred from the nullability. If the property is not nullable, then it is considered equivalent as declaring it with the `Required` attribute.\n* Collections (list, sets, dictionaries, and backlinks) cannot be declared nullable, but their parameters may be.\n* Properties that link to a single Realm object are inherently nullable, and thus the type must be defined as nullable.\n* Lists, sets, and backlinks of objects cannot contain null values, and thus the type parameter must be non-nullable.\n* Dictionaries of object values can contain null, and thus the type parameter must be nullable.\n\nDefining the properties with a different nullability annotation than what has been outlined will raise a diagnostic error. For instance:\n ```cs\npublic partial class Person: IRealmObject\n{\n //string (same for byte[])\n public string Name { get; set; } //Correct, required property\n\n public string? Name { get; set; } //Correct, non-required property\n\n //Collections\n public IList IntList { get; } //Correct\n\n public IList IntList { get; } //Correct\n\n public IList? IntList { get; } //Error\n\n //Object \n public Dog? 
MyDog { get; set; } //Correct\n\n public Dog MyDog { get; set; } //Error\n\n //List of objects\n public IList MyDogs { get; } //Correct\n\n public IList MyDogs { get; } //Error\n\n //Set of objects\n public ISet MyDogs { get; } //Correct\n\n public ISet MyDogs { get; } //Error\n\n //Dictionary of objects\n public IDictionary MyDogs { get; } //Correct\n\n public IDictionary MyDogs { get; } //Error\n\n //Backlink\n [Realms.Backlink(\"...\")]\n public IQueryable MyDogs { get; } //Correct\n\n [Realms.Backlink(\"...\")]\n public IQueryable MyDogs { get; } //Error\n}\n ```\n\nWe realize that some developers would prefer to have more freedom in the nullability annotation of object properties, and it is possible to do so by setting `realm.ignore_objects_nullability = true` in a global configuration file (more information about this can be found in the [.NET documentation). If this option is enabled, all the object properties (including collections) will be considered valid, and the nullability annotations will be ignored.\n\nFinally, please note that this will only work with source generated classes, and not with the classic Realm model definition. If you want more information, you can take a look at the Realm .NET repository and at our documentation.\n\nWant to continue the conversation? Head over to our community forums!", "format": "md", "metadata": {"tags": ["Realm", ".NET"], "pageDescription": "The latest releases of Realm .NET have included some interesting updates that we would love to share \u2014 in particular, source generated classes and support for nullability annotation.", "contentType": "Article"}, "title": "Source Generated Classes and Nullability in Realm .NET", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/swift/swift-single-collection-pattern", "action": "created", "body": "# Working with the MongoDB Single-Collection Pattern in Swift\n\nIt's a MongoDB axiom that you get the best performance and scalability by storing together the data that's most commonly accessed together.\n\nThe simplest and most obvious approach to achieve this is to embed all related data into a single document. This works great in many cases, but there are a couple of scenarios where it can become inefficient:\n\n* (Very) many to many relationships. This can lead to duplicated data. This duplication is often acceptable \u2014 storage is comparatively cheap, after all. It gets more painful when the duplicated data is frequently modified. You then have the cost of updating every document which embeds that data.\n* Reading small parts of large documents. Even if your query is only interested in a small fraction of fields in a document, the whole document is brought into cache \u2014 taking up memory that could be used more effectively.\n* Large, mutable documents. Whenever your application makes a change to a document, the entire document must be written back to disk at some point (could be combined with other changes to the same document). WiredTiger writes data to disk in 4 KB blocks after compression \u2014 that typically maps to a 16-20 KB uncompressed document. If you're making lots of small edits to a 20+ KB document, then you may be wasting disk IO.\n\nIf embedding all of the data in a single document isn't the right pattern for your application, then consider the single-collection design. 
The single-collection pattern can deliver comparable read performance to embedded documents, while also optimizing for updates.\n\nThere are variants on the single-collection pattern, but for this post, I focus on the key aspects:\n\n* Related data that's queried together is stored in the same collection.\n* The documents can have different structures.\n* Indexes are added so that all of the data for your frequent queries can be fetched with a single index lookup.\n\nAt this point, your developer brain may be raising questions about how your application code can cope with this. It's common to read the data from a particular collection, and then have the MongoDB driver convert that document into an object of a specific class. How does that work if the driver is fetching documents with different shapes from the same collection? This is the primary thing I want to demonstrate in this post. \n\nI'll be using Swift, but the same principles apply to other languages. To see how to do this with Java/Spring Data, take a look at Single-Collection Designs in MongoDB with Spring Data.\n\n## Running the example code\n\nI recently started using the MongoDB Swift Driver for the first time. I decided to build a super-simple Mac desktop app that lets you browse your collections (which MongoDB Compass does a **much** better job of) and displays Change Stream events in real time (which Compass doesn't currently do).\n\nYou can download the code from the Swift-Change-Streams repo. Just build and run from Xcode.\n\nProvide your connection-string and then browse your collections. Select the \"Enable change streams\" option to display change events in real time.\n\nThe app will display data from most collections as generic JSON documents, with no knowledge of the schema. There's a special case for a collection named \"Collection\" in a database named \"Single\" \u2014 we'll look at that next.\n\n### Sample data\n\nThe Simple.Collection collection needs to contain these (or similar) documents:\n\n```json\n{ _id: 'basket1', docType: 'basket', customer: 'cust101' }\n{ _id: 'basket1-item1', docType: 'item', name: 'Fish', quantity: 5 }\n{ _id: 'basket1-item2', docType: 'item', name: 'Chips', quantity: 3 }\n```\n\nThis data represents a shopping basket with an `_id` of \"basket1\". There are two items associated with `basket1 \u2014 basket1-item1` and `basket1-item2`. A single query will fetch all three documents for the basket (find all documents where `_id` starts with \"basket1\"). There is always an index on the `_id` attribute, and so that index will be used.\n\nNote that all of the data for a basket in this dataset is extremely small \u2014 **well** below the 16-20K threshold \u2014 and so in a real life example, I'd actually advise embedding everything in a single document instead. The single-collection pattern would make more sense if there were a large number of line items, and each was large (e.g., if they embedded multiple thumbnail images).\n\nEach document also has a `docType` attribute to identify whether the document refers to the basket itself, or one of the associated items. 
If your application included a common query to fetch just the basket or just the items associated with the basket, then you could add a composite index: `{ _id: 1, docType: 1}`.\n\nOther uses of the `docType` field include:\n\n* A prompt to help humans understand what they're looking at in the collection.\n* Filtering the data returned from a query to just certain types of documents from the collection.\n* Filtering which types of documents are included when using MongoDB Compass to examine a collection's schema.\n* Allowing an application to identify what type of document its received. The application code can then get the MongoDB driver to unmarshal the document into an object of the correct class. This is what we'll look at next.\n\n### Handling different document types from the same collection\n\nWe'll use the same desktop app to see how your code can discriminate between different types of documents from the same collection.\n\nThe app has hardcoded knowledge of what a basket and item documents looks like. This allows it to render the document data in specific formats, rather than as a JSON document:\n\nThe code to determine the document `docType` and convert the document to an object of the appropriate class can be found in CollectionView.swift. \n\nCollectionView fetches all of the matching documents from MongoDB and stores them in an array of `BSONDocument`s:\n\n```swift\n@State private var docs = BSONDocument\n```\n\nThe application can then loop over each document in `docs`, checks the `docType` attribute, and then decides what to do based on that value:\n\n```swift\nList(docs, id: \\.hashValue) { doc in\n if path.dbName == \"Single\" && path.collectionName == \"Collection\" { \n if let docType = doc\"docType\"] {\n switch docType {\n case \"basket\":\n if let basket = basket(doc: doc) {\n BasketView(basket: basket)\n }\n case \"item\":\n if let item = item(doc: doc) {\n ItemView(item: item)\n }\n default:\n Text(\"Unknown doc type\")\n }\n }\n } else {\n JSONView(doc: doc)\n }\n}\n```\n\nIf `docType == \"basket\"`, then the code converts the generic doc into a `Basket` object and passes it to `BasketView` for rendering.\n\nThis is the `Basket` class, including initializer to create a `Basket` from a `BSONDocument`: \n\n```swift\nstruct Basket: Codable {\n let _id: String\n let docType: String\n let customer: String\n \n init(doc: BSONDocument) {\n do {\n self = try BSONDecoder().decode(Basket.self, from: doc)\n } catch {\n _id = \"n/a\"\n docType = \"basket\"\n customer = \"n/a\"\n print(\"Failed to convert BSON to a Basket: \\(error.localizedDescription)\")\n }\n }\n}\n```\n\nSimilarly for `Item`s:\n\n```swift\nstruct Item: Codable {\n let _id: String\n let docType: String\n let name: String\n let quantity: Int\n \n init(doc: BSONDocument) {\n do {\n self = try BSONDecoder().decode(Item.self, from: doc)\n } catch {\n _id = \"n/a\"\n docType = \"item\"\n name = \"n/a\"\n quantity = 0\n print(\"Failed to convert BSON to a Item: \\(error.localizedDescription)\")\n }\n }\n}\n```\n\nThe sub-views can then use the attributes from the properly-typed object to render the data appropriately:\n\n```swift\nstruct BasketView: View {\n let basket: Basket\n \n var body: some View {\n VStack {\n Text(\"Basket\")\n .font(.title)\n Text(\"Order number: \\(basket._id)\")\n Text(\"Customer: \\(basket.customer)\")\n }\n .padding()\n .background(.secondary)\n .clipShape(RoundedRectangle(cornerRadius: 15.0))\n }\n}\n```\n\n```swift\nstruct ItemView: View {\n let item: Item\n \n var body: some View 
{\n VStack {\n Text(\"Item\")\n .font(.title)\n Text(\"Item name: \\(item.name)\")\n Text(\"Quantity: \\(item.quantity)\")\n }\n .padding()\n .background(.secondary)\n .clipShape(RoundedRectangle(cornerRadius: 15.0))\n }\n}\n```\n\n### Conclusion\n\nThe single-collection pattern is a way to deliver read and write performance when embedding or [other design patterns aren't a good fit.\n\nThis pattern breaks the 1-1 mapping between application classes and MongoDB collections that many developers might assume. This post shows how to work around that:\n\n* Extract a single docType field from the BSON document returned by the MongoDB driver.\n* Check the value of docType and get the MongoDB driver to map the BSON document into an object of the appropriate class.\n\nQuestions? Comments? Head over to our Developer Community to continue the conversation!", "format": "md", "metadata": {"tags": ["Swift", "MongoDB"], "pageDescription": "You can improve application performance by storing together data that\u2019s accessed together. This can be done through embedding sub-documents, or by storing related documents in the same collection \u2014 even when they have different shapes. This post explains how to work with these polymorphic MongoDB collections from your Swift app.", "contentType": "Quickstart"}, "title": "Working with the MongoDB Single-Collection Pattern in Swift", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-search-soccer", "action": "created", "body": "# Atlas Search is a Game Changer!\n\nEvery four years, for the sake of blending in, I pretend to know soccer (football, for my non-American friends). I smile. I cheer. I pretend to understand what \"offsides\" means. But what do I know about soccer, anyway? My soccer knowledge is solely defined by my status as a former soccer mom with an addiction to Ted Lasso.\n\nWhen the massive soccer tournaments are on, I\u2019m overwhelmed by the exhilarated masses. Painted faces to match their colorful soccer jerseys. Jerseys with unfamiliar names from far away places. I recognize Messi and Ronaldo, but the others?\u00a0Mkhitaryan,\u00a0Szcz\u0119sny, Gro\u00dfkreutz? How can I look up their stats to feign familiarity when I have no idea how to spell their names?\n\nWell, now there\u2019s an app for that. And it\u2019s built with Atlas Search: www.atlassearchsoccer.com. Check out the video tutorial:\n\n:youtube]{vid=1uTmDNTdgaw&t}\n\n**Build your own dream team!**\u00a0\n\nWith Atlas Search Soccer,\u00a0 you can scour across 22,000 players to build your own dream team of players across national and club teams. This instance of Atlas Search lets you search on a variety of different parameters and data types. Equipped with only a search box, sliders, and checkboxes, find the world's best players with the most impossible-to-spell names to build out your own dream team. Autocomplete, wildcard, and filters to find Ibrahimovi\u0107, B\u0142aszczykowski, and Szcz\u0119sny? No problem!\n\nWhen you pick a footballer for your team, he is written to local storage on your device. That way, your team stays warmed up and on the pitch even after you close your browser. You can then compare your dream team with your friends.\n\n**Impress your soccerphile friends!**\n\nAtlas Search Soccer grants you *instant*\u00a0credibility in sports bars. Who is the best current French player? Who plays goalie for Arsenal? 
Is Ronaldo from Portugal or Brazil?\u00a0 You can say with confidence because you have the\u00a0*DATA!*\u00a0Atlas Search lets you find it fast!\n\n**Learn all the $search Skills and Drills!**\n\nAs you interact with the application, you'll see the $search operator in a MongoDB aggregation pipeline live in-action! Click on the Advanced Scouting image for more options using the compound operator. Learn all the ways and plays to build complex, fine-grained, full-text searches across text, date, and numerics.\n\n* search operators\n * text\n * wildcard\n * autocomplete\n * range\n * moreLikeThis\n* fuzzy matching\n* indexes and analyzers\n* compound operator\n* relevance based scoring\n* custom score modifiers\n* filters, facets and counts\n\nOver the next season, we will launch a series of tutorials, breaking down how to implement all of these features. We can even cover GraphQL and the Data API if we head into extra time. And of course, we will provide tips and tricks for optimal performance.\n\n**Gain a home-field advantage by playing in your own stadium!**\n\n[Here is the repo so you can build Atlas Search Soccer on your own free-forever cluster.\n\nSo give it a shot. You'll be an Atlas Search pro in no time!\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Atlas Search is truly a game changer to quickly build fine-grained search functionality into your applications. See how with this Atlas Search Soccer demo app.", "contentType": "Article"}, "title": "Atlas Search is a Game Changer!", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/interact-aws-lambda-function-csharp", "action": "created", "body": "# Interact with MongoDB Atlas in an AWS Lambda Function Using C#\n\n# Interact with MongoDB Atlas in an AWS Lambda Function Using C#\n\nAWS Lambda is an excellent choice for C# developers looking for a solid serverless solution with many integration options with the rest of the AWS ecosystem. When a database is required, reading and writing to MongoDB Atlas at lightning speed is effortless because Atlas databases can be instantiated in the same data center as your AWS Lambda function.\n\nIn this tutorial, we will learn how to create a C# serverless function that efficiently manages the number of MongoDB Atlas connections to make your Lambda function as scalable as possible.\n\n## The prerequisites\n\n* Knowledge of the C# programming language.\n* A MongoDB Atlas cluster with sample data, network access (firewall), and user roles already configured.\n* An Amazon Web Services (AWS) account with a basic understanding of AWS Lambda.\n* Visual Studio with the AWS Toolkit and the Lamda Templates installed (official tutorial).\n\n## Create and configure your Atlas database\n\nThis step-by-step MongoDB Atlas tutorial will guide you through creating an Atlas database (free tier available) and loading the sample data.\n\nAlready have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\nWe will open the network access to any incoming IP to keep this tutorial simple and make it work with the free Atlas cluster tier. Here's how to add an IP to your Atlas project. 
Adding 0.0.0.0 means that any external IP can access your cluster.\n\nIn a production environment, you should restrict access and follow best MongoDB security practices, including using network peering between AWS Lambda and MongoDB Atlas. The free cluster tier does not support peering.\n\n## Build an AWS Lambda function with C#\n\nIn Visual Studio, create a basic AWS lambda project using the \"AWS Lambda Project (.NET Core - C#)\" project template with the \"Empty Function\" blueprint. We'll use that as the basis of this tutorial. Here's the official AWS tutorial to create such a project, but essentially:\n\n1. Open Visual Studio, and on the File menu, choose New, Project.\n2. Create a new \"AWS Lambda Project (.NET Core - C#)\" project.\n3. We'll name the project \"AWSLambda1.\"\n\nFollow the official AWS tutorial above to make sure that you can upload the project to Lambda and that it runs. If it does, we're ready to make changes to connect to MongoDB from AWS Lambda!\n\nIn our project, the main class is called `Function`. It will be instantiated every time the Lambda function is triggered. Inside, we have a method called `FunctionHandler`, (`Function:: FunctionHandler`), which we will designate to Lambda as the entry point.\n\n## Connecting to MongoDB Atlas from a C# AWS Lambda function\n\nConnecting to MongoDB requires adding the MongoDB.Driver (by MongoDB Inc) package in your project's packages.\n\nNext, add the following namespaces at the top of your source file:\n\n```\nusing MongoDB.Bson;\nusing MongoDB.Driver;\n```\n\nIn the Function class, we will declare a static MongoClient member. Having it as a `static` member is crucial because we want to share it across multiple instances that AWS Lambda could spawn.\n\nAlthough we don't have complete control over, or visibility into, the Lambda serverless environment, this is the best practice to keep the number of connections back to the Atlas cluster to a minimum.\n\nIf we did not declare MongoClient as `static`, each class instance would create its own set of resources. Instead, the static MongoClient is shared among multiple class instances after a first instance was created (warm start). You can read more technical details about managing MongoDB Atlas connections with AWS Lambda.\n\nWe will also add a `CreateMongoClient()` method that initializes the MongoDB client when the class is instantiated. Now, things should look like this:\n\n```\npublic class Function\n{\n\u00a0\u00a0\u00a0\u00a0private static MongoClient? Client;\n\u00a0\u00a0\u00a0\u00a0private static MongoClient CreateMongoClient()\n\u00a0\u00a0\u00a0\u00a0{\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0var mongoClientSettings = MongoClientSettings.FromConnectionString(Environment.GetEnvironmentVariable(\"MONGODB_URI\"));\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0return new MongoClient(mongoClientSettings);\n\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0static Function()\n\u00a0\u00a0\u00a0\u00a0{\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Client = CreateMongoClient();\n\u00a0\u00a0\u00a0\u00a0}\n...\n}\n```\n\nTo keep your MongoDB credentials safe, your connection string can be stored in an AWS Lambda environment variable. 
The connection string looks like this below, and here's how to get it in Atlas.\n\n`mongodb+srv://USER:PASSWORD@INSTANCENAME.owdak.mongodb.net/?retryWrites=true&w=majority`\n\n**Note**: Visual Studio might store the connection string with your credentials into a aws-lambda-tools-defaults.json file at some point, so don't include that in a code repository.\n\nIf you want to use environment variables in the Mock Lambda Test Tool, you must create a specific \"Mock Lambda Test Tool\" profile with its own set of environment variables in `aws-lambda-tools-defaults.json` (here's an example).\n\nYou can learn more about AWS Lambda environment variables. However, be aware that such variables can be set from within your Visual Studio when publishing to AWS Lambda or directly in the AWS management console on your AWS Lambda function page.\n\nFor testing purposes, and if you don't want to bother, some people hard-code the connection string as so:\n\n```\nvar mongoClientSettings = FromConnectionString(\"mongodb+srv://USER:PASSWORD@instancename.owdak.mongodb.net/?retryWrites=true&w=majority\");\n```\n\nFinally, we can modify the FunctionHandler() function to read the first document from the sample\\_airbnb.listingsAndReviews database and collection we preloaded in the prerequisites.\n\nThe try/catch statements are not mandatory, but they can help detect small issues such as the firewall not being set up, or other configuration errors.\n\n```\npublic string FunctionHandler(string input, ILambdaContext context)\n{\n if (Client != null)\n {\n try\n {\n var database = Client.GetDatabase(\"sample_airbnb\");\n var collection = database.GetCollection(\"listingsAndReviews\");\n var result = collection.Find(FilterDefinition.Empty).First();\n return result.ToString();\n }\n catch\n {\n return \"Handling failed\";\n }\n } else\n {\n return \"DB not initialized\";\n }\n}\n```\n\nUsing the \"listingsAndReviews\" collection (a \"table\" in SQL jargon) in the \"sample\\_airbnb\" database, the code fetches the first document of the collection.\n\n`collection.Find()` normally takes a MongoDB Query built as a BsonDocument, but in this case, we only need an empty query.\n\n## Publish to AWS and test\n\nIt's time to upload it to AWS Lambda. In the Solution Explorer, right-click on the project and select \"Publish to AWS Lambda.\" Earlier, you might have done this while setting up the project using the official AWS Lambda C# tutorial.\n\nIf this is the first time you're publishing this function, take the time to give it a name (we use \"mongdb-csharp-function-001\"). It will be utilized during the initial Lambda function creation.\n\nIn the screenshot below, the AWS Lambda function Handler (\"Handler\") information is essential as it tells Lambda which method to call when an event is triggered. The general format is Assembly::Namespace.ClassName::MethodName\n\nIn our case, the handler is `AWSLambda1::AWSLambda1.Function::FunctionHandler`.\n\nIf the option is checked, this dialog will save these options in the `aws-lambda-tools-defaults.json` file.\n\nClick \"Next\" to see the second upload screen. The most important aspect of it is the environment variables, such as the connection string.\n\nWhen ready, click on \"Upload.\" Visual Studio will create/update your Lambda function to AWS and launch a test window where you can set your sample input and execute the method to see its response.\n\nOur Lambda function expects an input string, so we'll use the \"hello\" string in our Sample Input, then click the \"Invoke\" button. 
The execution's response will be sent to the \"Response\" field to the right. As expected, the first database record is converted into a string, as shown below.\n\n## Conclusion\n\nWe just learned how to build a C# AWS Lambda serverless function efficiently by creating and sharing a MongoDB client and connecting multiple class instances. If you're considering building with a serverless architecture and AWS Lambda, MongoDB Atlas is an excellent option.\n\nThe flexibility of our document model makes it easy to get started quickly and evolve your data structure over time. Create a free Atlas cluster now to try it.\n\nIf you want to learn more about our MongoDB C# driver, refer to the continuously updated documentation. You can do much more with MongoDB Atlas, and our C# Quick Start is a great first step on your MongoDB journey.", "format": "md", "metadata": {"tags": ["C#", "MongoDB", ".NET", "AWS"], "pageDescription": "In this tutorial, we'll see how to create a serverless function using the C# programming language and that function will connect to and query MongoDB Atlas in an efficient manner.", "contentType": "Tutorial"}, "title": "Interact with MongoDB Atlas in an AWS Lambda Function Using C#", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-separating-data", "action": "created", "body": "# Separating Data That is Accessed Together\n\nWe're breezing through the MongoDB schema design anti-patterns. So far in this series, we've discussed four of the six anti-patterns:\n\n- Massive arrays\n- Massive number of collections\n- Unnecessary indexes\n- Bloated documents\n\nNormalizing data and splitting it into different pieces to optimize for space and reduce data duplication can feel like second nature to those with a relational database background. However, separating data that is frequently accessed together is actually an anti-pattern in MongoDB. In this post, we'll find out why and discuss what you should do instead.\n\n>:youtube]{vid=dAN76_47WtA t=15}\n>\n>If you prefer to learn by video (or you just like hearing me repeat, \"Data that is accessed together should be stored together\"), watch the video above.\n\n## Separating Data That is Accessed Together\n\nMuch like you would use a `join` to combine information from different tables in a relational database, MongoDB has a [$lookup operation that allows you to join information from more than one collection. `$lookup` is great for infrequent, rarely used operations or analytical queries that can run overnight without a time limit. However, `$lookup` is not so great when you're frequently using it in your applications. Why?\n\n`$lookup` operations are slow and resource-intensive compared to operations that don't need to combine data from more than one collection.\n\nThe rule of thumb when modeling your data in MongoDB is:\n\n>Data that is accessed together should be stored together.\n\nInstead of separating data that is frequently used together between multiple collections, leverage embedding and arrays to keep the data together in a single collection.\n\nFor example, when modeling a one-to-one relationship, you can embed a document from one collection as a subdocument in a document from another. 
When modeling a one-to-many relationship, you can embed information from multiple documents in one collection as an array of documents in another.\n\nKeep in mind the other anti-patterns we've already discussed as you begin combining data from different collections together. Massive, unbounded arrays and bloated documents can both be problematic.\n\nIf combining data from separate collections into a single collection will result in massive, unbounded arrays or bloated documents, you may want to keep the collections separate and duplicate some of the data that is used frequently together in both collections. You could use the Subset Pattern to duplicate a subset of the documents from one collection in another. You could also use the Extended Reference Pattern to duplicate a portion of the data in each document from one collection in another. In both patterns, you have the option of creating references between the documents in both collections. Keep in mind that whenever you need to combine information from both collections, you'll likely need to use `$lookup`. Also, whenever you duplicate data, you are responsible for ensuring the duplicated data stays in sync.\n\nAs we have said throughout this series, each use case is different. As you model your schema, carefully consider how you will be querying the data and what the data you will be storing will realistically look like.\n\n## Example\n\nWhat would an Anti-Pattern post be without an example from Parks and Recreation? I don't even want to think about it. So let's return to Leslie.\n\nLeslie decides to organize a Model United Nations for local high school students and recruits some of her coworkers to participate as well. Each participant will act as a delegate for a country during the event. She assigns Andy and Donna to be delegates for Finland.\n\nLeslie decides to store information related to the Model United Nations in a MongoDB database. She wants to store the following information in her database:\n\n- Basic stats about each country\n- A list of resources that each country has available to trade\n- A list of delegates for each country\n- Policy statements for each country\n- Information about each Model United Nations event she runs\n\nWith this information, she wants to be able to quickly generate the following reports:\n\n- A country report that contains basic stats, resources currently available to trade, a list of delegates, the names and dates of the last five policy documents, and a list of all of the Model United Nations events in which this country has participated\n- An event report that contains information about the event and the names of the countries who participated\n\nThe Model United Nations event begins, and Andy is excited to participate. He decides he doesn't want any of his country's \"boring\" resources, so he begins trading with other countries in order to acquire all of the world's lions.\n\n \n\nLeslie decides to create collections for each of the categories of information she needs to store in her database. 
After Andy is done trading, Leslie has documents like the following.\n\n``` javascript\n// Countries collection\n\n{\n \"_id\": \"finland\",\n \"official_name\": \"Republic of Finland\",\n \"capital\": \"Helsinki\",\n \"languages\": \n \"Finnish\",\n \"Swedish\",\n \"S\u00e1mi\"\n ],\n \"population\": 5528737\n}\n```\n\n``` javascript\n// Resources collection\n\n{\n \"_id\": ObjectId(\"5ef0feeb0d9314ac117d2034\"),\n \"country_id\": \"finland\",\n \"lions\": 32563,\n \"military_personnel\": 0,\n \"pulp\": 0,\n \"paper\": 0\n}\n```\n\n``` javascript\n// Delegates collection\n\n{\n \"_id\": ObjectId(\"5ef0ff480d9314ac117d2035\"),\n \"country_id\": \"finland\",\n \"first_name\": \"Andy\",\n \"last_name\": \"Fryer\"\n},\n{\n \"_id\": ObjectId(\"5ef0ff710d9314ac117d2036\"),\n \"country_id\": \"finland\",\n \"first_name\": \"Donna\",\n \"last_name\": \"Beagle\"\n}\n```\n\n``` javascript\n// Policies collection\n\n{\n \"_id\": ObjectId(\"5ef34ec43e5f7febbd3ed7fb\"),\n \"date-created\": ISODate(\"2011-11-09T04:00:00.000+00:00\"),\n \"status\": \"draft\",\n \"title\": \"Country Defense Policy\",\n \"country_id\": \"finland\",\n \"policy\": \"Finland has formally decided to use lions in lieu of military for all self defense...\"\n}\n```\n\n``` javascript\n// Events collection\n\n{\n \"_id\": ObjectId(\"5ef34faa3e5f7febbd3ed7fc\"),\n \"event-date\": ISODate(\"2011-11-10T05:00:00.000+00:00\"),\n \"location\": \"Pawnee High School\",\n \"countries\": [\n \"Finland\",\n \"Denmark\",\n \"Peru\",\n \"The Moon\"\n ],\n \"topic\": \"Global Food Crisis\",\n \"award-recipients\": [\n \"Allison Clifford\",\n \"Bob Jones\"\n ]\n}\n```\n\nWhen Leslie wants to generate a report about Finland, she has to use `$lookup` to combine information from all five collections. She wants to optimize her database performance, so she decides to leverage embedding to combine information from her five collections into a single collection.\n\nLeslie begins working on improving her schema incrementally. As she looks at her schema, she realizes that she has a one-to-one relationship between documents in her `Countries` collection and her `Resources` collection. She decides to embed the information from the `Resources` collection as sub-documents in the documents in her `Countries` collection.\n\nNow the document for Finland looks like the following.\n\n``` javascript\n// Countries collection\n\n{\n \"_id\": \"finland\",\n \"official_name\": \"Republic of Finland\",\n \"capital\": \"Helsinki\",\n \"languages\": [\n \"Finnish\",\n \"Swedish\",\n \"S\u00e1mi\"\n ],\n \"population\": 5528737,\n \"resources\": {\n \"lions\": 32563,\n \"military_personnel\": 0,\n \"pulp\": 0,\n \"paper\": 0\n }\n}\n```\n\nAs you can see above, she has kept the information about resources together as a sub-document in her document for Finland. This is an easy way to keep data organized.\n\nShe has no need for her `Resources` collection anymore, so she deletes it.\n\nAt this point, she can retrieve information about a country and its resources without having to use `$lookup`.\n\nLeslie continues analyzing her schema. She realizes she has a one-to-many relationship between countries and delegates, so she decides to create an array named `delegates` in her `Countries` documents. Each `delegates` array will store objects with delegate information. 
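One way she could backfill that array (a hypothetical one-time migration run in `mongosh`, assuming the collections are named `countries` and `delegates`) is a single aggregation that joins the delegates in and merges the result back into the `countries` collection:

``` javascript
// Hypothetical one-time migration: embed each country's delegates as an array
db.countries.aggregate([
  { $lookup: {
      from: "delegates",
      localField: "_id",          // e.g., "finland"
      foreignField: "country_id",
      as: "delegates"
  } },
  { $set: {
      delegates: {
        $map: {
          input: "$delegates",
          as: "d",
          in: { first_name: "$$d.first_name", last_name: "$$d.last_name" }
        }
      }
  } },
  // Write the enriched documents back onto the countries collection
  { $merge: { into: "countries", on: "_id", whenMatched: "merge" } }
]);
```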
Now her document for Finland looks like the following:\n\n``` javascript\n// Countries collection\n\n{\n \"_id\": \"finland\",\n \"official_name\": \"Republic of Finland\",\n \"capital\": \"Helsinki\",\n \"languages\": [\n \"Finnish\",\n \"Swedish\",\n \"S\u00e1mi\"\n ],\n \"population\": 5528737,\n \"resources\": {\n \"lions\": 32563,\n \"military_personnel\": 0,\n \"pulp\": 0,\n \"paper\": 0\n },\n \"delegates\": [\n {\n \"first_name\": \"Andy\",\n \"last_name\": \"Fryer\"\n },\n {\n \"first_name\": \"Donna\",\n \"last_name\": \"Beagle\"\n }\n ]\n}\n```\n\nLeslie feels confident about storing the delegate information in her country documents since each country will have only a handful of delegates (meaning her array won't grow infinitely), and she won't be frequently accessing information about the delegates separately from their associated countries.\n\nLeslie no longer needs her `Delegates` collection, so she deletes it.\n\nLeslie continues optimizing her schema and begins looking at her `Policies` collection. She has a one-to-many relationship between countries and policies. She needs to include the titles and dates of each country's five most recent policy documents in her report. She considers embedding the policy documents in her country documents, but the documents could quickly become quite large based on the length of the policies. She doesn't want to fall into the trap of the [Bloated Documents Anti-Pattern, but she also wants to avoid using `$lookup` every time she runs a report.\n\nLeslie decides to leverage the Subset Pattern. She stores the titles and dates of the five most recent policy documents in her country document. She also creates a reference to the policy document, so she can easily gather all of the information for each policy when needed. She leaves her `Policies` collection as-is. She knows she'll have to maintain some duplicate information between the documents in the `Countries` collection and the `Policies` collection, but she decides duplicating a little bit of information is a good tradeoff to ensure fast queries.\n\nHer document for Finland now looks like the following:\n\n``` javascript\n// Countries collection\n\n{\n \"_id\": \"finland\",\n \"official_name\": \"Republic of Finland\",\n \"capital\": \"Helsinki\",\n \"languages\": \n \"Finnish\",\n \"Swedish\",\n \"S\u00e1mi\"\n ],\n \"population\": 5528737,\n \"resources\": {\n \"lions\": 32563,\n \"military_personnel\": 0,\n \"pulp\": 0,\n \"paper\": 0\n },\n \"delegates\": [\n {\n \"first_name\": \"Andy\",\n \"last_name\": \"Fryer\"\n },\n {\n \"first_name\": \"Donna\",\n \"last_name\": \"Beagle\"\n }\n ],\n \"recent-policies\": [\n {\n \"_id\": ObjectId(\"5ef34ec43e5f7febbd3ed7fb\"),\n \"date-created\": ISODate(\"2011-11-09T04:00:00.000+00:00\"),\n \"title\": \"Country Defense Policy\"\n },\n {\n \"_id\": ObjectId(\"5ef357bb3e5f7febbd3ed7fd\"),\n \"date-created\": ISODate(\"2011-11-10T04:00:00.000+00:00\"),\n \"title\": \"Humanitarian Food Policy\"\n }\n ]\n}\n```\n\nLeslie continues examining her query for her report on each country. The last `$lookup` she has combines information from the `Countries` collection and the `Events` collection. She has a many-to-many relationship between countries and events. She needs to be able to quickly generate reports on each event as a whole, so she wants to keep the `Events` collection separate. She decides to use the [Extended Reference Pattern to solve her dilemma. 
She includes the information she needs about each event in her country documents and maintains a reference to the complete event document, so she can get more information when she needs to. She will duplicate the event date and event topic in both the `Countries` and `Events` collections, but she is comfortable with this as that data is very unlikely to change.\n\nAfter all of her updates, her document for Finland now looks like the following:\n\n``` javascript\n// Countries collection\n\n{\n \"_id\": \"finland\",\n \"official_name\": \"Republic of Finland\",\n \"capital\": \"Helsinki\",\n \"languages\": \n \"Finnish\",\n \"Swedish\",\n \"S\u00e1mi\"\n ],\n \"population\": 5528737,\n \"resources\": {\n \"lions\": 32563,\n \"military_personnel\": 0,\n \"pulp\": 0,\n \"paper\": 0\n },\n \"delegates\": [\n {\n \"first_name\": \"Andy\",\n \"last_name\": \"Fryer\"\n },\n {\n \"first_name\": \"Donna\",\n \"last_name\": \"Beagle\"\n }\n ],\n \"recent-policies\": [\n {\n \"policy-id\": ObjectId(\"5ef34ec43e5f7febbd3ed7fb\"),\n \"date-created\": ISODate(\"2011-11-09T04:00:00.000+00:00\"),\n \"title\": \"Country Defense Policy\"\n },\n {\n \"policy-id\": ObjectId(\"5ef357bb3e5f7febbd3ed7fd\"),\n \"date-created\": ISODate(\"2011-11-10T04:00:00.000+00:00\"),\n \"title\": \"Humanitarian Food Policy\"\n }\n ],\n \"events\": [\n {\n \"event-id\": ObjectId(\"5ef34faa3e5f7febbd3ed7fc\"),\n \"event-date\": ISODate(\"2011-11-10T05:00:00.000+00:00\"),\n \"topic\": \"Global Food Crisis\"\n },\n {\n \"event-id\": ObjectId(\"5ef35ac93e5f7febbd3ed7fe\"),\n \"event-date\": ISODate(\"2012-02-18T05:00:00.000+00:00\"),\n \"topic\": \"Pandemic\"\n }\n ]\n}\n```\n\n## Summary\n\nData that is accessed together should be stored together. If you'll be frequently reading or updating information together, consider storing the information together using nested documents or arrays. Carefully consider your use case and weigh the benefits and drawbacks of data duplication as you bring data together.\n\nBe on the lookout for a post on the final MongoDB schema design anti-pattern!\n\n>When you're ready to build a schema in MongoDB, check out [MongoDB Atlas, MongoDB's fully managed database-as-a-service. Atlas is the easiest way to get started with MongoDB and has a generous, forever-free tier.\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- MongoDB Docs: Reduce $lookup Operations\n- MongoDB Docs: Data Model Design\n- MongoDB Docs: Model One-to-One Relationships with Embedded Documents\n- MongoDB Docs: Model One-to-Many Relationships with Embedded Documents\n- MongoDB University M320: Data Modeling\n- Blog Post: The Subset Pattern\n- Blog Post: The Extended Reference Pattern\n- Blog Series: Building with Patterns", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Don't fall into the trap of this MongoDB Schema Design Anti-Pattern: Separating Data That is Accessed Together", "contentType": "Article"}, "title": "Separating Data That is Accessed Together", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-data-api-aws-lambda", "action": "created", "body": "# Creating an API with the AWS API Lambda and the Atlas Data API\n\n## Introduction\n\nThis article will walk through creating an API using the Amazon API Gateway in front of the MongoDB Atlas Data API. 
When integrating with the Amazon API Gateway, it is possible but undesirable to use a driver, as drivers are designed to be long-lived and maintain connection pooling. Using serverless functions with a driver can result in either a performance hit \u2013 if the driver is instantiated on each call and must authenticate \u2013 or excessive connection numbers if the underlying mechanism persists between calls, as you have no control over when code containers are reused or created.\n\nTheMongoDB Atlas Data API is an HTTPS-based API that allows us to read and write data in Atlas where a MongoDB driver library is either not available or not desirable. For example, when creating serverless microservices with MongoDB.\n\nAWS (Amazon Web Services) describe their API Gateway as:\n\n> \"A fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the \"front door\" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.\n> API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.\"\n\n## Prerequisites.\n\nA core requirement for this walkthrough is to have an Amazon Web Services account, the API Gateway is available as part of the AWS free tier, allowing up to 1 million API calls per month, at no charge, in your first 12 months with AWS.\n\nWe will also need an Atlas Cluster for which we have enabled the Data API \u2013 and our endpoint URL and API Key. You can learn how to get these in this Article or this Video if you do not have them already.\n\n\u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\nA common use of Atlas with the Amazon API Gateway might be to provide a managed API to a restricted subset of data in our cluster, which is a common need for a microservice architecture. To demonstrate this, we first need to have some data available in MongoDB Atlas. This can be added by selecting the three dots next to our cluster name and choosing \"Load Sample Dataset\", or following instructions here. \n\n## Creating an API with the Amazon API Gateway and the Atlas Data API\n\nThe instructions here are an extended variation from Amazon's own \"Getting Started with the API Gateway\" tutorial. I do not presume to teach you how best to use Amazon's API Gateway as Amazon itself has many fine resources for this, what we will do here is use it to get a basic Public API enabled that uses the Data API.\n\n> The Data API itself is currently in an early preview with a flat security model allowing all users who have an API key to query or update any database or collection. Future versions will have more granular security. 
We would not want to simply expose the current data API as a 'Public' API but we can use it on the back-end to create more restricted and specific access to our data. \n> \nWe are going to create an API which allows users to GET the ten films for any given year which received the most awards - a notional \"Best Films of the Year\". We will restrict this API to performing only that operation and supply the year as part of the URL\n\nWe will first create the API, then analyze the code we used for it.\n\n## Create a AWS Lambda Function to retrieve data with the Data API\n\n1. Sign in to the Lambda console at https://console.aws.amazon.com/lambda.\n2. Choose **Create function**.\n3. For **Function name**, enter top-movies-for-year.\n4. Choose **Create function**.\n\nWhen you see the Javascript editor that looks like this\n\nReplace the code with the following, changing the API-KEY and APP-ID to the values for your Atlas cluster. Save and click **Deploy** (In a production application you might look to store these in AWS Secrets manager , I have simplified by putting them in the code here).\n\n```\nconst https = require('https');\n \nconst atlasEndpoint = \"/app/APP-ID/endpoint/data/beta/action/find\";\nconst atlasAPIKey = \"API-KEY\";\n \n \nexports.handler = async(event) => {\n \n if (!event.queryStringParameters || !event.queryStringParameters.year) {\n return { statusCode: 400, body: 'Year not specified' };\n }\n \n //Year is a number but the argument is a string so we need to convert as MongoDB is typed\n \n \n let year = parseInt(event.queryStringParameters.year, 10);\n console.log(`Year = ${year}`)\n if (Number.isNaN(year)) { return { statusCode: 400, body: 'Year incorrectly specified' }; }\n \n \n const payload = JSON.stringify({\n dataSource: \"Cluster0\",\n database: \"sample_mflix\",\n collection: \"movies\",\n filter: { year },\n projection: { _id: 0, title: 1, awards: \"$awards.wins\" },\n sort: { \"awards.wins\": -1 },\n limit: 10\n });\n \n \n const options = {\n hostname: 'data.mongodb-api.com',\n port: 443,\n path: atlasEndpoint,\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n 'Content-Length': payload.length,\n 'api-key': atlasAPIKey\n }\n };\n \n let results = '';\n \n const response = await new Promise((resolve, reject) => {\n const req = https.request(options, res => {\n res.on('data', d => {\n results += d;\n });\n res.on('end', () => {\n console.log(`end() status code = ${res.statusCode}`);\n if (res.statusCode == 200) {\n let resultsObj = JSON.parse(results)\n resolve({ statusCode: 200, body: JSON.stringify(resultsObj.documents, null, 4) });\n }\n else {\n reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Backend Problem like 404 or wrong API key\n }\n });\n });\n //Do not give the user clues about backend issues for security reasons\n req.on('error', error => {\n reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Issue like host unavailable\n });\n \n req.write(payload);\n req.end();\n });\n return response;\n \n};\n\n```\n\nAlternatively, if you are familiar with working with packages and Lambda, you could upload an HTTP package like Axios to Lambda as a zipfile, allowing you to use the following simplified code.\n\n```\n\nconst axios = require('axios');\n\nconst atlasEndpoint = \"https://data.mongodb-api.com/app/APP-ID/endpoint/data/beta/action/find\";\nconst atlasAPIKey = \"API-KEY\";\n\nexports.handler = async(event) => {\n\n if (!event.queryStringParameters || 
!event.queryStringParameters.year) {\n return { statusCode: 400, body: 'Year not specified' };\n }\n\n //Year is a number but the argument is a string so we need to convert as MongoDB is typed\n\n let year = parseInt(event.queryStringParameters.year, 10);\n console.log(`Year = ${year}`)\n if (Number.isNaN(year)) { return { statusCode: 400, body: 'Year incorrectly specified' }; }\n\n const payload = {\n dataSource: \"Cluster0\",\n database: \"sample_mflix\",\n collection: \"movies\",\n filter: { year },\n projection: { _id: 0, title: 1, awards: \"$awards.wins\" },\n sort: { \"awards.wins\": -1 },\n limit: 10\n };\n\n try {\n const response = await axios.post(atlasEndpoint, payload, { headers: { 'api-key': atlasAPIKey } });\n return response.data.documents;\n }\n catch (e) {\n return { statusCode: 500, body: 'Unable to service request' }\n }\n};\n```\n\n## Create an HTTP endpoint for our custom API function\n\nWe now need to route an HTTP endpoint to our Lambda function using the HTTP API. \n\nThe HTTP API provides an HTTP endpoint for your Lambda function. API Gateway routes requests to your Lambda function, and then returns the function's response to clients.\n\n1. Go to the API Gateway console at https://console.aws.amazon.com/apigateway.\n2. Do one of the following:\n To create your first API, for HTTP API, choose **Build**.\n If you've created an API before, choose **Create API**, and then choose **Build** for HTTP API.\n3. For Integrations, choose **Add integration**.\n4. Choose **Lambda**.\n5. For **Lambda function**, enter top-movies-for-year.\n6. For **API name**, enter movie-api.\n\n8. Choose **Next**.\n\n8. Review the route that API Gateway creates for you, and then choose **Next**.\n\n9. Review the stage that API Gateway creates for you, and then choose **Next**.\n\n10. Choose **Create**.\n\nNow you've created an HTTP API with a Lambda integration and the Atlas Data API that's ready to receive requests from clients.\n\n## Test your API\n\nYou should now be looking at API Gateway details that look like this, if not you can get to it by going tohttps://console.aws.amazon.com/apigatewayand clicking on **movie-api**\n\nTake a note of the **Invoke URL**, this is the base URL for your API\n\nNow, in a new browser tab, browse to `/top-movies-for-year?year=2001` . Changing ` `to the Invoke URL shown in AWS. You should see the results of your API call - JSON listing the top 10 \"Best\" films of 2001.\n\n## Reviewing our Function.\n\nWe start by importing the Standard node.js https library - the Data API needs no special libraries to call it. We also define our API Key and the path to our find endpoint, You get both of these from the Data API tab in Atlas.\n\n```\nconst https = require('https');\n \nconst atlasEndpoint = \"/app/data-amzuu/endpoint/data/beta/action/find\";\nconst atlasAPIKey = \"YOUR-API-KEY\";\n```\n\nNow we check that the API call included a parameter for year and that it's a number - we need to convert it to a number as in MongoDB, \"2001\" and 2001 are different values, and searching for one will not find the other. 
The collection uses a number for the movie release year.\n\n```\nexports.handler = async (event) => {\n \n if (!event.queryStringParameters || !event.queryStringParameters.year) {\n return { statusCode: 400, body: 'Year not specified' };\n }\n //Year is a number but the argument is a string so we need to convert as MongoDB is typed\n let year = parseInt(event.queryStringParameters.year, 10);\n console.log(`Year = ${year}`)\n if (Number.isNaN(year)) { return { statusCode: 400, body: 'Year incorrectly specified' }; }\n \n \n const payload = JSON.stringify({\n dataSource: \"Cluster0\", database: \"sample_mflix\", collection: \"movies\",\n filter: { year }, projection: { _id: 0, title: 1, awards: \"$awards.wins\" }, sort: { \"awards.wins\": -1 }, limit: 10\n });\n\n```\n\nThen we construct our payload - the parameters for the Atlas API Call, we are querying for year = year, projecting just the title and the number of awards, sorting by the numbers of awards descending and limiting to 10.\n \n```\n const payload = JSON.stringify({\n dataSource: \"Cluster0\", database: \"sample_mflix\", collection: \"movies\",\n filter: { year }, projection: { _id: 0, title: 1, awards: \"$awards.wins\" }, \n sort: { \"awards.wins\": -1 }, limit: 10\n });\n\n```\n\nWe then construct the options for the HTTPS POST request to the Data API - here we pass the Data API API-KEY as a header.\n\n```\n const options = {\n hostname: 'data.mongodb-api.com',\n port: 443,\n path: atlasEndpoint,\n method: 'POST',\n headers: {\n 'Content-Type': 'application/json',\n 'Content-Length': payload.length,\n 'api-key': atlasAPIKey\n }\n };\n\n```\n\nFinally we use some fairly standard code to call the API and handle errors. We can get Request errors - such as being unable to contact the server - or Response errors where we get any Response code other than 200 OK - In both cases we return a 500 Internal error from our simplified API to not leak any details of the internals to a potential hacker.\n \n```\n let results = '';\n \n const response = await new Promise((resolve, reject) => {\n const req = https.request(options, res => {\n res.on('data', d => {\n results += d;\n });\n res.on('end', () => {\n console.log(`end() status code = ${res.statusCode}`);\n if (res.statusCode == 200) {\n let resultsObj = JSON.parse(results)\n resolve({ statusCode: 200, body: JSON.stringify(resultsObj.documents, null, 4) });\n } else {\n reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Backend Problem like 404 or wrong API key\n }\n });\n });\n //Do not give the user clues about backend issues for security reasons\n req.on('error', error => {\n reject({ statusCode: 500, body: 'Your request could not be completed, Sorry' }); //Issue like host unavailable\n });\n \n req.write(payload);\n req.end();\n });\n return response;\n \n};\n```\n\nOur Axios verison is just the same functionality as above but simplified by the use of a library.\n## Conclusion\n\nAs we can see, calling the Atlas Data API from AWS Lambda function is incredibly simple, especially if making use of a library like Axios. The Data API is also stateless, so there are no concerns about connection setup times or maintaining long lived connections as there would be using a Driver. ", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "AWS"], "pageDescription": "In this article we look at how the Atlas Data API is a great choice for accessing MongoDB Atlas from AWS Lambda Functions by creating a custom API with the AWS API Gateway. 
", "contentType": "Tutorial"}, "title": "Creating an API with the AWS API Lambda and the Atlas Data API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/end-to-end-test-realm-serverless-apps", "action": "created", "body": "# How to Write End-to-End Tests for MongoDB Realm Serverless Apps\n\nAs of June 2022, the functionality previously known as MongoDB Realm is now named Atlas App Services. Atlas App Services refers to the cloud services that simplify building applications with Atlas \u2013 Atlas Data API, Atlas GraphQL API, Atlas Triggers, and Atlas Device Sync. Realm will continue to be used to refer to the client-side database and SDKs. Some of the naming or references in this article may be outdated.\n\nEnd-to-end tests are the cherry on top of a delicious ice cream sundae\nof automated tests. Just like many people find cherries to be disgusting\n(rightly so\u2014cherries are gross!), many developers are not thrilled to\nwrite end-to-end tests. These tests can be time consuming to write and\ndifficult to maintain. However, these tests can provide development\nteams with confidence that the entire application is functioning as\nexpected.\n\nAutomated tests are like a delicious ice cream sundae.\n\nToday I'll discuss how to write end-to-end tests for apps built using\nMongoDB Realm.\n\nThis is the third post in the *DevOps + MongoDB Realm Serverless\nFunctions = \ud83d\ude0d* blog series. I began the series by introducing the\nSocial Stats app, a serverless app I built using MongoDB Realm. I've\nexplained\nhow I wrote unit tests\nand integration tests\nfor the app. If you haven't read\nthe first post where I explained what the app does and how I architected it,\nI recommend you start there and then return to this post.\n\n>\n>\n>Prefer to learn by video? Many of the concepts I cover in this series\n>are available in this video.\n>\n>\n\n## Writing End-to-End Tests for MongoDB Realm Serverless Apps\n\nToday I'll focus on the top layer of the testing\npyramid:\nend-to-end tests. End-to-end tests work through a complete scenario a\nuser would take while using the app. These tests typically interact with\nthe user interface (UI), clicking buttons and inputting text just as a\nuser would. End-to-end tests ultimately check that the various\ncomponents and systems that make up the app are configured and working\ntogether correctly.\n\nBecause end-to-end tests interact with the UI, they tend to be very\nbrittle; they break easily as the UI changes. These tests can also be\nchallenging to write. As a result, developers typically write very few\nof these tests.\n\nDespite their brittle nature, having end-to-end tests is still\nimportant. These tests give development teams confidence that the app is\nfunctioning as expected.\n\n### Sidenote\n\nI want to pause and acknowledge something before the Internet trolls\nstart sending me snarky DMs.\n\nThis section is titled *writing end-to-end tests for MongoDB Realm\nserverless apps*. To be clear, none of the approaches I'm sharing in\nthis post about writing end-to-end tests are specific to MongoDB Realm\nserverless apps. When you write end-to-end tests that interact with the\nUI, the underlying architecture is irrelevant. I know this. Please keep\nyour angry Tweets to yourself.\n\nI decided to write this post, because writing about only two-thirds of\nthe testing pyramid just seemed wrong. 
Now let's continue.\n\n### Example End-to-End Test\n\nLet's walk through how I wrote an end-to-test for the Social Stats app.\nI began with the simplest flow:\n\n1. A user navigates to the page where they can upload their Twitter\n statistics.\n2. The user uploads a Twitter statistics spreadsheet that has stats for\n a single Tweet.\n3. The user navigates to the dashboard so they can see their\n statistics.\n\nI decided to build my end-to-end tests using Jest\nand Selenium. Using Jest was a\nstraightforward decision as I had already built my unit and integration\ntests using it. Selenium has been a popular choice for automating\nbrowser interactions for many years. I've used it successfully in the\npast, so using it again was an easy choice.\n\nI created a new file named `uploadTweetStats.test.js`. Then I started\nwriting the typical top-of-the-file code.\n\nI began by importing several constants. I imported the MongoClient so\nthat I would be able to interact directly with my database, I imported\nseveral constants I would need in order to use Selenium, and I imported\nthe names of the database and collection I would be testing later.\n\n``` javascript\nconst { MongoClient } = require('mongodb');\n\nconst { Builder, By, until, Capabilities } = require('selenium-webdriver');\n\nconst { TwitterStatsDb, statsCollection } = require('../constants.js');\n```\n\nThen I declared some variables.\n\n``` javascript\nlet collection;\nlet mongoClient;\nlet driver;\n```\n\nNext, I created constants for common XPaths I would need to reference\nthroughout my tests.\nXPath\nis a query language you can use to select nodes in HTML documents.\nSelenium provides a variety of\nways\u2014including\nXPaths\u2014for you to select elements in your web app. The constants below\nare the XPaths for the nodes with the text \"Total Engagements\" and\n\"Total Impressions.\"\n\n``` javascript\nconst totalEngagementsXpath = \"//*text()='Total Engagements']\";\nconst totalImpressionsXpath = \"//*[text()='Total Impressions']\";\n```\n\nNow that I had all of my top-of-the-file code written, I was ready to\nstart setting up my testing structure. I began by implementing the\n[beforeAll()\nfunction, which Jest runs once before any of the tests in the file are\nrun.\n\nBrowser-based tests can run a bit slower than other automated tests, so\nI increased the timeout for each test to 30 seconds.\n\nThen,\njust as I did with the integration tests, I\nconnected directly to the test database.\n\n``` javascript\nbeforeAll(async () => {\n jest.setTimeout(30000);\n\n // Connect directly to the database\n const uri = `mongodb+srv://${process.env.DB_USERNAME}:${process.env.DB_PASSWORD}@${process.env.CLUSTER_URI}/test?retryWrites=true&w=majority`;\n mongoClient = new MongoClient(uri);\n await mongoClient.connect();\n collection = mongoClient.db(TwitterStatsDb).collection(statsCollection);\n});\n```\n\nNext, I implemented the\nbeforeEach()\nfunction, which Jest runs before each test in the file.\n\nI wanted to ensure that the collection the tests will be interacting\nwith is empty before each test, so I added a call to delete everything\nin the collection.\n\nNext, I configured the browser the tests will use. I chose to use\nheadless Chrome, meaning that a browser UI will not actually be\ndisplayed. Headless browsers provide many\nbenefits\nincluding increased performance. 
Selenium supports a variety of\nbrowsers,\nso you can choose to use whatever browser combinations you'd like.\n\nI used the configurations for Chrome when I created a new\nWebDriver\nstored in `driver`. The `driver` is what will control the browser\nsession.\n\n``` javascript\nbeforeEach(async () => {\n // Clear the database\n const result = await collection.deleteMany({});\n\n // Create a new driver using headless Chrome\n let chromeCapabilities = Capabilities.chrome();\n var chromeOptions = {\n 'args': '--headless', 'window-size=1920,1080']\n };\n chromeCapabilities.set('chromeOptions', chromeOptions);\n driver = new Builder()\n .forBrowser('chrome')\n .usingServer('http://localhost:4444/wd/hub')\n .withCapabilities(chromeCapabilities)\n .build();\n});\n```\n\nI wanted to ensure the browser session was closed after each test, so I\nadded a call to do so in\n[afterEach().\n\n``` javascript\nafterEach(async () => {\n driver.close();\n})\n```\n\nLastly, I wanted to ensure that the database connection was closed after\nall of the tests finished running, so I added a call to do so in\nafterAll().\n\n``` javascript\nafterAll(async () => {\n await mongoClient.close();\n})\n```\n\nNow that I had all of my test structure code written, I was ready to\nbegin writing the code to interact with elements in my browser. I\nquickly discovered that I would need to repeat a few actions in multiple\ntests, so I wrote functions specifically for those.\n\n- refreshChartsDashboard():\n This function clicks the appropriate buttons to manually refresh the\n data in the dashboard.\n- moveToCanvasOfElement(elementXpath):\n This function moves the mouse to the chart canvas associated with\n the node identified by `elementXpath`. This function will come in\n handy for verifying elements in charts.\n- verifyChartText(elementXpath,\n chartText):\n This function verifies that when you move the mouse to the chart\n canvas associated with the node identified by `elementXpath`, the\n `chartText` is displayed in the tooltip.\n\nFinally, I was ready to write my first test case that tests uploading a\nCSV file with Twitter statistics for a single Tweet.\n\n``` javascript\ntest('Single tweet', async () => {\n await driver.get(`${process.env.URL}`);\n const button = await driver.findElement(By.id('csvUpload'));\n await button.sendKeys(process.cwd() + \"/tests/ui/files/singletweet.csv\");\n\n const results = await driver.findElement(By.id('results'));\n await driver.wait(until.elementTextIs(results, `Fabulous! 1 new Tweet(s) was/were saved.`), 10000);\n\n const dashboardLink = await driver.findElement(By.id('dashboard-link'));\n dashboardLink.click();\n\n await refreshChartsDashboard();\n\n await verifyChartText(totalEngagementsXpath, \"4\");\n await verifyChartText(totalImpressionsXpath, \"260\");\n})\n```\n\nLet's walk through what this test is doing.\n\nScreen recording of the Single tweet test when run in Chrome\n\nThe test begins by navigating to the URL for the application I'm using\nfor testing.\n\nThen the test clicks the button that allows users to browse for a file\nto upload. The test selects a file and chooses to upload it.\n\nThe test asserts that the page displays a message indicating that the\nupload was successful.\n\nThen the test clicks the link to open the dashboard. 
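(For reference, the dashboard-refresh helper used in the next step is little more than a couple of Selenium clicks. Here is a rough sketch, not the post's actual implementation, and the element id is hypothetical:)

``` javascript
// Hypothetical sketch of the dashboard-refresh helper
async function refreshChartsDashboard() {
  // The element id below is an assumption for illustration only
  const refreshButton = await driver.findElement(By.id('refresh-charts-button'));
  await refreshButton.click();

  // Give the charts a moment to re-render before the test continues
  await driver.sleep(1000);
}
```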
In case the charts\nin the dashboard have stale data, the test clicks the buttons to\nmanually force the data to be refreshed.\n\nFinally, the test verifies that the correct number of engagements and\nimpressions are displayed in the charts.\n\nAfter I finished this test, I wrote another end-to-end test. This test\nverifies that uploading CSV files that update the statistics on existing\nTweets as well as uploading CSV files for multiple authors all work as\nexpected.\n\nYou can find the full test file with both end-to-end tests in\nstoreCsvInDB.test.js.\n\n## Wrapping Up\n\nYou now know the basics of how to write automated tests for Realm\nserverless apps.\n\nThe Social Stats application source code and associated test files are\navailable in a GitHub repo:\n. The repo's readme\nhas detailed instructions on how to execute the test files.\n\nWhile writing and maintaining end-to-end tests can sometimes be painful,\nthey are an important piece of the testing pyramid. Combined with the\nother automated tests, end-to-end tests give the development team\nconfidence that the app is ready to be deployed.\n\nNow that you have a strong foundation of automated tests, you're ready\nto dive into automated deployments. Be on the lookout for the next post\nin this series where I'll explain how to craft a CI/CD pipeline for\nRealm serverless apps.\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- GitHub Repository: Social\n Stats\n- Video: DevOps + MongoDB Realm Serverless Functions =\n \ud83d\ude0d\n- Documentation: MongoDB Realm\n- MongoDB Atlas\n- MongoDB Charts\n\n", "format": "md", "metadata": {"tags": ["Realm", "Serverless"], "pageDescription": "Learn how to write end-to-end tests for MongoDB Realm Serverless Apps.", "contentType": "Tutorial"}, "title": "How to Write End-to-End Tests for MongoDB Realm Serverless Apps", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/performance-tuning-tips", "action": "created", "body": "# MongoDB Performance Tuning Questions\n\nMost of the challenges related to keeping a MongoDB cluster running at\ntop speed can be addressed by asking a small number of fundamental\nquestions and then using a few crucial metrics to answer them.\n\nBy keeping an eye on the metrics related to query performance, database\nperformance, throughput, resource utilization, resource saturation, and\nother critical \"assertion\" errors it's possible to find problems that\nmay be lurking in your cluster. 
Early detection allows you to stay ahead\nof the game, resolving issues before they affect performance.\n\nThese fundamental questions apply no matter how MongoDB is used, whether\nthrough MongoDB Atlas, the\nmanaged service available on all major cloud providers, or through\nMongoDB Community or Enterprise editions, which are run in a\nself-managed manner on-premise or in the cloud.\n\nEach type of MongoDB deployment can be used to support databases at\nscale with immense transaction volumes and that means performance tuning\nshould be a constant activity.\n\nBut the good news is that the same metrics are used in the tuning\nprocess no matter how MongoDB is used.\n\nHowever, as we'll see, the tuning process is much easier in the cloud\nusing MongoDB Atlas where\neverything is more automatic and prefabricated.\n\nHere are the key questions you should always be asking about MongoDB\nperformance tuning and the metrics that can answer them.\n\n## Are all queries running at top speed?\n\nQuery problems are perhaps the lowest hanging fruit when it comes to\ndebugging MongoDB performance issues. Finding problems and fixing them\nis generally straightforward. This section covers the metrics that can\nreveal query performance problems and what to do if you find slow\nqueries.\n\n**Slow Query Log.** The time elapsed and the method used to execute each\nquery is captured in MongoDB log files, which can be searched for slow\nqueries. In addition, queries over a certain threshold can be logged\nexplicitly by the MongoDB Database\nProfiler.\n\n- When a query is slow, first look to see if it was a collection scan\n rather than an index\n scan.\n - Collection scans means all documents in a collection must be\n read.\n - Index scans limit the number of documents that must be\n inspected.\n- Consider adding an index when you see a lot of collection\n scans.\n- But remember: indexes have a cost when it comes to writes and\n updates. Too many indexes that are underutilized can slow down the\n modification or insertion of new documents. Depending on the nature\n of your workloads, this may or may not be a problem.\n\n**Scanned vs Returned** is a metric that can be found in Cloud\nManager\nand in MongoDB Atlas that\nindicates how many documents had to be scanned in order to return the\ndocuments meeting the query.\n\n- In the absence of indexes, a rarely met ideal for this ratio is 1/1,\n meaning all documents scanned were returned \u2014 no wasted scans. Most\n of the time however, when scanning is done, documents are scanned\n that are not returned meaning the ratio is greater than 1.\n- When indexes are used, this ratio can be less than 1 or even 0,\n meaning you have a covered\n query.\n When no documents needed to be scanned, producing a ratio of 0, that\n means all the data needed was in the index.\n- Scanning huge amounts of documents is inefficient and could indicate\n problems regarding missing indexes or indicate a need for query\n optimization.\n\n**Scan and Order** is an index related metric that can be found in Cloud\nManager and MongoDB Atlas.\n\n- A high Scan and Order number, say 20 or more, indicates that the\n server is having to sort query results to get them in the right\n order. 
This takes time and increases the memory load on the server.\n- Fix this by making sure indexes are sorted in the order in which the\n queries need the documents, or by adding missing indexes.\n\n**WiredTiger Ticket Number** is a key indicator of the performance of\nthe WiredTiger\nstorage engine, which, since release 3.2, has been the storage engine\nfor MongoDB.\n\n- WiredTiger has a concept of read or write tickets that are created\n when the database is accessed. The WiredTiger ticket number should\n always be at 128.\n- If the value goes below 128 and stays below that number, that means\n the server is waiting on something and it's an indication of a\n problem.\n- The remedy is then to find the operations that are going too slowly\n and start a debugging process.\n- Deployments of MongoDB using releases older than 3.2 will certainly\n get a performance boost from migrating to a later version that uses\n WiredTiger.\n\n**Document Structure Antipatterns** aren't revealed by a metric but can\nbe something to look for when debugging slow queries. Here are two of\nthe most notorious bad practices that hurt performance.\n\n**Unbounded arrays:** In a MongoDB document, if an array can grow\nwithout a size limit, it could cause a performance problem because every\ntime you update the array, MongoDB has to rewrite the array into the\ndocument. If the array is huge, this can cause a performance problem.\nLearn more at Avoid Unbounded\nArrays\nand Performance Best Practices: Query Patterns and\nProfiling.\n\n**Subdocuments without bounds:** The same thing can happen with respect\nto subdocuments. MongoDB supports inserting documents within documents,\nwith up to 128 levels of nesting. Each MongoDB document, including\nsubdocuments, also has a size limit of 16MB. If the number of\nsubdocuments becomes excessive, performance problems may result.\n\nOne common fix to this problem is to move some or all of the\nsubdocuments to a separate collection and then refer to them from the\noriginal document. You can learn more about this topic in\nthis blog post.\n\n## Is the database performing at top speed?\n\nMongoDB, like most advanced database systems, has thousands of metrics\nthat track all aspects of database performance which includes reading,\nwriting, and querying the database, as well as making sure background\nmaintenance tasks like backups don't gum up the works.\n\nThe metrics described in this section all indicate larger problems that\ncan have a variety of causes. Like a warning light on a dashboard, these\nmetrics are invaluable high-level indicators that help you start looking\nfor the causes before the database has a catastrophic failure.\n\n>\n>\n>Note: Various ways to get access to all of these metrics are covered below in the Getting Access to Metrics and Setting Up Monitoring section.\n>\n>\n\n**Replication lag** occurs when a secondary member of a replica set\nfalls behind the primary. 
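A quick way to check lag from `mongosh` is the built-in replica set helper, or a few lines against `rs.status()` (a rough sketch):

``` javascript
// Prints how far each secondary is behind the primary
rs.printSecondaryReplicationInfo();

// Or compute the lag yourself from rs.status()
const status = rs.status();
const primary = status.members.find(m => m.stateStr === "PRIMARY");
status.members
  .filter(m => m.stateStr === "SECONDARY")
  .forEach(m => print(`${m.name}: ${(primary.optimeDate - m.optimeDate) / 1000}s behind`));
```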
A detailed examination of the OpLog related\nmetrics can help get to the bottom of the problems but the causes are\noften:\n\n- A networking issue between the primary and secondary, making nodes\n unreachable\n- A secondary node applying data slower than the primary node\n- Insufficient write capacity in which case you should add more shards\n- Slow operations on the primary node, blocking replication\n\n**Locking performance** problems are indicated when the number of\navailable read or write tickets remaining reaches zero, which means new\nread or write requests will be queued until a new read or write ticket\nis available.\n\n- MongoDB's internal locking system is used to support simultaneous\n queries while avoiding write conflicts and inconsistent reads.\n- Locking performance problems can indicate a variety of problems\n including suboptimal indexes and poor schema design patterns, both\n of which can lead to locks being held longer than necessary.\n\n**Number of open cursors rising** without a corresponding growth of\ntraffic is often symptomatic of poorly indexed queries or the result of\nlong running queries due to large result sets.\n\n- This metric can be another indicator that the kind of query\n optimization techniques mentioned in the first section are in order.\n\n## Is the cluster overloaded?\n\nA large part of performance tuning is recognizing when your total\ntraffic, the throughput of transactions through the system, is rising\nbeyond the planned capacity of your cluster. By keeping track of growth\nin throughput, it's possible to expand the capacity in an orderly\nmanner. Here are the metrics to keep track of.\n\n**Read and Write Operations** is the fundamental metric that indicates\nhow much work is done by the cluster. The ratio of reads to writes is\nhighly dependent on the nature of the workloads running on the cluster.\n\n- Monitoring read and write operations over time allows normal ranges\n and thresholds to be established.\n- As trends in read and write operations show growth in throughput,\n capacity should be gradually increased.\n\n**Document Metrics** and **Query Executor** are good indications of\nwhether the cluster is actually too busy. These metrics can be found in\nCloud Manager and in MongoDB\nAtlas. As with read and write\noperations, there is no right or wrong number for these metrics, but\nhaving a good idea of what's normal helps you discern whether poor\nperformance is coming from large workload size or attributable to other\nreasons.\n\n- Document metrics are updated anytime you return a document or insert\n a document. The more documents being returned, inserted, updated or\n deleted, the busier your cluster is.\n - Poor performance in a cluster that has plenty of capacity\n usually points to query problems.\n- The query executor tells how many queries are being processed\n through two data points:\n - Scanned - The average rate per second over the selected sample\n period of index items scanned during queries and query-plan\n evaluation.\n - Scanned objects - The average rate per second over the selected\n sample period of documents scanned during queries and query-plan\n evaluation.\n\n**Hardware and Network metrics** can be important indications that\nthroughput is rising and will exceed the capacity of computing\ninfrastructure. These metrics are gathered from the operating system and\nnetworking infrastructure. 
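If you are self-managed and want a quick snapshot of several of the database-level counters mentioned above without extra tooling, `db.serverStatus()` exposes connection, operation, and network counters; a minimal `mongosh` sketch:

``` javascript
// Sample a few load indicators directly from mongosh
const s = db.serverStatus();
printjson({
  currentConnections: s.connections.current,
  availableConnections: s.connections.available,
  opcounters: s.opcounters,            // inserts, queries, updates, deletes since startup
  networkBytesIn: s.network.bytesIn,
  networkBytesOut: s.network.bytesOut
});
```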
To make these metrics useful for diagnostic\npurposes, you must have a sense of what is normal.\n\n- In MongoDB Atlas, or when\n using Cloud Manager, these metrics are easily displayed. If you are\n running on-premise, it depends on your operating system.\n- There's a lot to track but at a minimum have a baseline range for\n metrics like:\n - Disk latency\n - Disk IOPS\n - Number of Connections\n\n## Is the cluster running out of key resources?\n\nA MongoDB cluster makes use of a variety of resources that are provided\nby the underlying computing and networking infrastructure. These can be\nmonitored from within MongoDB as well as from outside of MongoDB at the\nlevel of computing infrastructure as described in the previous section.\nHere are the crucial resources that can be easily tracked from within\nMongo, especially through Cloud Manager and MongoDB\nAtlas.\n\n**Current number of client connections** is usually an effective metric\nto indicate total load on a system. Keeping track of normal ranges at\nvarious times of the day or week can help quickly identify spikes in\ntraffic.\n\n- A related metric, percentage of connections used, can indicate when\n MongoDB is getting close to running out of available connections.\n\n**Storage metrics** track how MongoDB is using persistent storage. In\nthe WiredTiger storage engine, each collection is a file and so is each\nindex. When a document in a collection is updated, the entire document\nis re-written.\n\n- If memory space metrics (dataSize, indexSize, or storageSize) or the\n number of objects show a significant unexpected change while the\n database traffic stays within ordinary ranges, it can indicate a\n problem.\n- A sudden drop in dataSize may indicate a large amount of data\n deletion, which should be quickly investigated if it was not\n expected.\n\n**Memory metrics** show how MongoDB is using the virtual memory of the\ncomputing infrastructure that is hosting the cluster.\n\n- An increasing number of page faults or a growing amount of dirty\n data \u2014 data changed but not yet written to disk \u2014 can indicate\n problems related to the amount of memory available to the cluster.\n- Cache metrics can help determine if the working set is outgrowing\n the available cache.\n\n## Are critical errors on the rise?\n\nMongoDB\nasserts\nare documents created, almost always because of an error, that are\ncaptured as part of the MongoDB logging process.\n\n- Monitoring the number of asserts created at various levels of\n severity can provide a first level indication of unexpected\n problems. Asserts can be message asserts, the most serious kind, or\n warning assets, regular asserts, and user asserts.\n- Examining the asserts can provide clues that may lead to the\n discovery of problems.\n\n## Getting Access to Metrics and Setting Up Monitoring\n\nMaking use of metrics is far easier if you know the data well: where it\ncomes from, how to get at it, and what it means.\n\nAs the MongoDB platform has evolved, it has become far easier to monitor\nclusters and resolve common problems. In addition, the performance\ntuning monitoring and analysis has become increasingly automated. 
For\nexample, MongoDB Atlas through\nPerformance Advisor will now suggest adding indexes if it detects a\nquery performance problem.\n\nBut it's best to know the whole story of the data, not just the pretty\ngraphs produced at the end.\n\n## Data Sources for MongoDB Metrics\n\nThe sources for metrics used to monitor MongoDB are the logs created\nwhen MongoDB is running and the commands that can be run inside of the\nMongoDB system. These commands produce the detailed statistics that\ndescribe the state of the system.\n\nMonitoring MongoDB performance metrics\n(WiredTiger)\ncontains an excellent categorization of the metrics available for\ndifferent purposes and the commands that can be used to get them. These\ncommands provide a huge amount of detailed information in raw form that\nlooks something like the following screenshot:\n\nThis information is of high quality but difficult to use.\n\n## Monitoring Environments for MongoDB Metrics\n\nAs MongoDB has matured as a platform, specialized interfaces have been\ncreated to bring together the most useful metrics.\n\n- Ops Manager is a\n management platform for on-premise and private cloud deployments of\n MongoDB that includes extensive monitoring and alerting\n capabilities.\n- Cloud Manager is a\n management platform for self-managed cloud deployments of MongoDB\n that also includes extensive monitoring and alerting capabilities.\n (Remember this screenshot reflects the user interface at the time of\n writing.)\n\n- Real Time Performance\n Panel,\n part of MongoDB Atlas or\n MongoDB Ops Manager (requires MongoDB Enterprise Advanced\n subscription), provides graph or table views of dozens of metrics\n and is a great way to keep track of many aspects of performance,\n including most of the metrics discussed earlier.\n- Commercial products like New Relic, Sumo\n Logic, and\n DataDog all provide interfaces\n designed for monitoring and alerting on MongoDB clusters. A variety\n of open source platforms such as\n mtools can be used as well.\n\n## Performance Management Tools for MongoDB Atlas\n\nMongoDB Atlas has taken advantage\nof the standardized APIs and massive amounts of data available on cloud\nplatforms to break new ground in automating performance tuning. Also, in\naddition to the Real Time Performance\nPanel\nmentioned above, the Performance\nAdvisor for\nMongoDB Atlas analyzes queries\nthat you are actually making on your data, determines what's slow and\nwhat's not, and makes recommendations for when to add indexes that take\ninto account the indexes already in use.\n\n## The Professional Services Option\n\nIn a sense, the questions covered in this article represent a playbook\nfor running a performance tuning process. If you're already running such\na process, perhaps some new ideas have occurred to you based on the\nanalysis.\n\nResources like this article can help you achieve or refine your goals if\nyou know the questions to ask and some methods to get there. But if you\ndon't know the questions to ask or the best steps to take, it's wise to\navoid trial and error and ask someone with experience. 
With broad\nexpertise in tuning large MongoDB deployments, professional\nservices can help identify\nthe most effective steps to take to improve performance right away.\n\nOnce any immediate issues are resolved, professional services can guide\nyou in creating an ongoing streamlined performance tuning process to\nkeep an eye on and action the metrics important to your deployment.\n\n## Wrap Up\n\nWe hope this article has made it clear that with a modest amount of\neffort, it's possible to keep your MongoDB cluster in top shape. No\nmatter what types of workloads are running or where the deployment is\nlocated, use the ideas and tools mentioned above to know what's\nhappening in your cluster and address performance problems before they\nbecome noticeable or cause major outages.\n\n>\n>\n>See the difference with MongoDB\n>Atlas.\n>\n>Ready for Professional\n>Services?\n>\n>\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Early detection of problems allows you to stay ahead of the game, resolving issues before they affect performance.", "contentType": "Article"}, "title": "MongoDB Performance Tuning Questions", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/getting-started-with-mongodb-and-mongoose", "action": "created", "body": "# Getting Started with MongoDB & Mongoose\n\nIn this article, we\u2019ll learn how Mongoose, a third-party library for MongoDB, can help you to structure and access your data with ease.\n\n## What is Mongoose?\n\nMany who learn MongoDB get introduced to it through the very popular library, Mongoose. Mongoose is described as \u201celegant MongoDB object modeling for Node.js.\u201d\n\nMongoose is an ODM (Object Data Modeling) library for MongoDB. While you don\u2019t need to use an Object Data Modeling (ODM) or Object Relational Mapping (ORM) tool to have a great experience with MongoDB, some developers prefer them. Many Node.js developers choose to work with Mongoose to help with data modeling, schema enforcement, model validation, and general data manipulation. And Mongoose makes these tasks effortless.\n\n> If you want to hear from the maintainer of Mongoose, Val Karpov, give this episode of the MongoDB Podcast a listen!\n\n## Why Mongoose?\n\nBy default, MongoDB has a flexible data model. This makes MongoDB databases very easy to alter and update in the future. But a lot of developers are accustomed to having rigid schemas.\n\nMongoose forces a semi-rigid schema from the beginning. With Mongoose, developers must define a Schema and Model.\n\n## What is a schema?\n\nA schema defines the structure of your collection documents. A Mongoose schema maps directly to a MongoDB collection.\n\n``` js\nconst blog = new Schema({\n title: String,\n slug: String,\n published: Boolean,\n author: String,\n content: String,\n tags: String],\n createdAt: Date,\n updatedAt: Date,\n comments: [{\n user: String,\n content: String,\n votes: Number\n }]\n});\n```\n\nWith schemas, we define each field and its data type. Permitted types are:\n\n* String\n* Number\n* Date\n* Buffer\n* Boolean\n* Mixed\n* ObjectId\n* Array\n* Decimal128\n* Map\n\n## What is a model?\n\nModels take your schema and apply it to each document in its collection.\n\nModels are responsible for all document interactions like creating, reading, updating, and deleting (CRUD).\n\n> An important note: the first argument passed to the model should be the singular form of your collection name. 
Mongoose automatically changes this to the plural form, transforms it to lowercase, and uses that for the database collection name.\n\n``` js\nconst Blog = mongoose.model('Blog', blog);\n```\n\nIn this example, `Blog` translates to the `blogs` collection.\n\n## Environment setup\n\nLet\u2019s set up our environment. I\u2019m going to assume you have [Node.js installed already.\n\nWe\u2019ll run the following commands from the terminal to get going:\n\n```\nmkdir mongodb-mongoose\ncd mongodb-mongoose\nnpm init -y\nnpm i mongoose\nnpm i -D nodemon\ncode .\n```\n\nThis will create the project directory, initialize, install the packages we need, and open the project in VS Code.\n\nLet\u2019s add a script to our `package.json` file to run our project. We will also use ES Modules instead of Common JS, so we\u2019ll add the module `type` as well. This will also allow us to use top-level `await`.\n\n``` js\n...\n \"scripts\": {\n \"dev\": \"nodemon index.js\"\n },\n \"type\": \"module\",\n...\n```\n\n## Connecting to MongoDB\n\nNow we\u2019ll create the `index.js` file and use Mongoose to connect to MongoDB.\n\n``` js\nimport mongoose from 'mongoose'\n\nmongoose.connect(\"mongodb+srv://:@cluster0.eyhty.mongodb.net/myFirstDatabase?retryWrites=true&w=majority\")\n```\n\nYou could connect to a local MongoDB instance, but for this article we are going to use a free MongoDB Atlas cluster. If you don\u2019t already have an account, it's easy to sign up for a free MongoDB Atlas cluster here.\n\nAnd if you don\u2019t already have a cluster set up, follow our guide to get your cluster created.\n\nAfter creating your cluster, you should replace the connection string above with your connection string including your username and password.\n\n> The connection string that you copy from the MongoDB Atlas dashboard will reference the `myFirstDatabase` database. Change that to whatever you would like to call your database.\n\n## Creating a schema and model\n\nBefore we do anything with our connection, we\u2019ll need to create a schema and model.\n\nIdeally, you would create a schema/model file for each schema that is needed. So we\u2019ll create a new folder/file structure: `model/Blog.js`.\n\n``` js\nimport mongoose from 'mongoose';\nconst { Schema, model } = mongoose;\n\nconst blogSchema = new Schema({\n title: String,\n slug: String,\n published: Boolean,\n author: String,\n content: String,\n tags: String],\n createdAt: Date,\n updatedAt: Date,\n comments: [{\n user: String,\n content: String,\n votes: Number\n }]\n});\n\nconst Blog = model('Blog', blogSchema);\nexport default Blog;\n```\n\n## Inserting data // method 1\n\nNow that we have our first model and schema set up, we can start inserting data into our database.\n\nBack in the `index.js` file, let\u2019s insert a new blog article.\n\n``` js\nimport mongoose from 'mongoose';\nimport Blog from './model/Blog';\n\nmongoose.connect(\"mongodb+srv://mongo:mongo@cluster0.eyhty.mongodb.net/myFirstDatabase?retryWrites=true&w=majority\")\n\n// Create a new blog post object\nconst article = new Blog({\n title: 'Awesome Post!',\n slug: 'awesome-post',\n published: true,\n content: 'This is the best post ever',\n tags: ['featured', 'announcement'],\n});\n\n// Insert the article in our MongoDB database\nawait article.save();\n```\n\nWe first need to import the `Blog` model that we created. 
Next, we create a new blog object and then use the `save()` method to insert it into our MongoDB database.\n\nLet\u2019s add a bit more after that to log what is currently in the database. We\u2019ll use the `findOne()` method for this.\n\n``` js\n// Find a single blog post\nconst firstArticle = await Blog.findOne({});\nconsole.log(firstArticle);\n```\n\nLet\u2019s run the code!\n\n```\nnpm run dev\n```\n\nYou should see the document inserted logged in your terminal.\n\n> Because we are using `nodemon` in this project, every time you save a file, the code will run again. If you want to insert a bunch of articles, just keep saving. \ud83d\ude04\n\n## Inserting data // method 2\n\nIn the previous example, we used the `save()` Mongoose method to insert the document into our database. This requires two actions: instantiating the object, and then saving it.\n\nAlternatively, we can do this in one action using the Mongoose `create()` method.\n\n``` js\n// Create a new blog post and insert into database\nconst article = await Blog.create({\n title: 'Awesome Post!',\n slug: 'awesome-post',\n published: true,\n content: 'This is the best post ever',\n tags: ['featured', 'announcement'],\n});\n\nconsole.log(article);\n```\n\nThis method is much better! Not only can we insert our document, but we also get returned the document along with its `_id` when we console log it.\n\n## Update data\n\nMongoose makes updating data very convenient too. Expanding on the previous example, let\u2019s change the `title` of our article.\n\n``` js\narticle.title = \"The Most Awesomest Post!!\";\nawait article.save();\nconsole.log(article);\n```\n\nWe can directly edit the local object, and then use the `save()` method to write the update back to the database. I don\u2019t think it can get much easier than that!\n\n## Finding data\n\nLet\u2019s make sure we are updating the correct document. We\u2019ll use a special Mongoose method, `findById()`, to get our document by its ObjectId.\n\n``` js\nconst article = await Blog.findById(\"62472b6ce09e8b77266d6b1b\").exec();\nconsole.log(article);\n```\n\n> Notice that we use the `exec()` Mongoose function. This is technically optional and returns a promise. In my experience, it\u2019s better to use this function since it will prevent some head-scratching issues. If you want to read more about it, check out this note in the Mongoose docs about [promises.\n\nThere are many query options in Mongoose. View the full list of queries.\n\n## Projecting document fields\n\nJust like with the standard MongoDB Node.js driver, we can project only the fields that we need. Let\u2019s only get the `title`, `slug`, and `content` fields.\n\n``` js\nconst article = await Blog.findById(\"62472b6ce09e8b77266d6b1b\", \"title slug content\").exec();\nconsole.log(article);\n```\n\nThe second parameter can be of type `Object|String|Array` to specify which fields we would like to project. In this case, we used a `String`.\n\n## Deleting data\n\nJust like in the standard MongoDB Node.js driver, we have the `deleteOne()` and `deleteMany()` methods.\n\n``` js\nconst blog = await Blog.deleteOne({ author: \"Jesse Hall\" })\nconsole.log(blog)\n\nconst blog = await Blog.deleteMany({ author: \"Jesse Hall\" })\nconsole.log(blog)\n```\n\n## Validation\n\nNotice that the documents we have inserted so far have not contained an `author`, dates, or `comments`. So far, we have defined what the structure of our document should look like, but we have not defined which fields are actually required. 
At this point any field can be omitted.\n\nLet\u2019s set some required fields in our `Blog.js` schema.\n\n``` js\nconst blogSchema = new Schema({\n title: {\n type: String,\n required: true,\n },\n slug: {\n type: String,\n required: true,\n lowercase: true,\n },\n published: {\n type: Boolean,\n default: false,\n },\n author: {\n type: String,\n required: true,\n },\n content: String,\n tags: String],\n createdAt: {\n type: Date,\n default: () => Date.now(),\n immutable: true,\n },\n updatedAt: Date,\n comments: [{\n user: String,\n content: String,\n votes: Number\n }]\n});\n```\n\nWhen including validation on a field, we pass an object as its value.\n\n> `value: String` is the same as `value: {type: String}`.\n\nThere are several validation methods that can be used.\n\nWe can set `required` to true on any fields we would like to be required.\n\nFor the `slug`, we want the string to always be in lowercase. For this, we can set `lowercase` to true. This will take the slug input and convert it to lowercase before saving the document to the database.\n\nFor our `created` date, we can set the default buy using an arrow function. We also want this date to be impossible to change later. We can do that by setting `immutable` to true.\n\n> Validators only run on the create or save methods.\n\n## Other useful methods\n\nMongoose uses many standard MongoDB methods plus introduces many extra helper methods that are abstracted from regular MongoDB methods. Next, we\u2019ll go over just a few of them.\n\n### `exists()`\n\nThe `exists()` method returns either `null` or the ObjectId of a document that matches the provided query.\n\n``` js\nconst blog = await Blog.exists({ author: \"Jesse Hall\" })\nconsole.log(blog)\n```\n\n### `where()`\n\nMongoose also has its own style of querying data. The `where()` method allows us to chain and build queries.\n\n``` js\n// Instead of using a standard find method\nconst blogFind = await Blog.findOne({ author: \"Jesse Hall\" });\n\n// Use the equivalent where() method\nconst blogWhere = await Blog.where(\"author\").equals(\"Jesse Hall\");\nconsole.log(blogWhere)\n```\n\nEither of these methods work. Use whichever seems more natural to you.\n\nYou can also chain multiple `where()` methods to include even the most complicated query.\n\n### `select()`\n\nTo include projection when using the `where()` method, chain the `select()` method after your query.\n\n``` js\nconst blog = await Blog.where(\"author\").equals(\"Jesse Hall\").select(\"title author\")\nconsole.log(blog)\n```\n\n## Multiple schemas\n\nIt's important to understand your options when modeling data.\n\nIf you\u2019re coming from a relational database background, you\u2019ll be used to having separate tables for all of your related data.\n\nGenerally, in MongoDB, data that is accessed together should be stored together.\n\nYou should plan this out ahead of time if possible. 
Nest data within the same schema when it makes sense.\n\nIf you have the need for separate schemas, Mongoose makes it a breeze.\n\nLet\u2019s create another schema so that we can see how multiple schemas can be used together.\n\nWe\u2019ll create a new file, `User.js`, in the model folder.\n\n``` js\nimport mongoose from 'mongoose';\nconst {Schema, model} = mongoose;\n\nconst userSchema = new Schema({\n name: {\n type: String,\n required: true,\n },\n email: {\n type: String,\n minLength: 10,\n required: true,\n lowercase: true\n },\n});\n\nconst User = model('User', userSchema);\nexport default User;\n```\n\nFor the `email`, we are using a new property, `minLength`, to require a minimum character length for this string.\n\nNow we\u2019ll reference this new user model in our blog schema for the `author` and `comments.user`.\n\n``` js\nimport mongoose from 'mongoose';\nconst { Schema, SchemaTypes, model } = mongoose;\n\nconst blogSchema = new Schema({\n ...,\n author: {\n type: SchemaTypes.ObjectId,\n ref: 'User',\n required: true,\n },\n ...,\n comments: [{\n user: {\n type: SchemaTypes.ObjectId,\n ref: 'User',\n required: true,\n },\n content: String,\n votes: Number\n }];\n});\n...\n```\n\nHere, we set the `author` and `comments.user` to `SchemaTypes.ObjectId` and added a `ref`, or reference, to the user model.\n\nThis will allow us to \u201cjoin\u201d our data a bit later.\n\nAnd don\u2019t forget to destructure `SchemaTypes` from `mongoose` at the top of the file.\n\nLastly, let\u2019s update the `index.js` file. We\u2019ll need to import our new user model, create a new user, and create a new article with the new user\u2019s `_id`.\n\n``` js\n...\nimport User from './model/User.js';\n\n...\n\nconst user = await User.create({\n name: 'Jesse Hall',\n email: 'jesse@email.com',\n});\n\nconst article = await Blog.create({\n title: 'Awesome Post!',\n slug: 'Awesome-Post',\n author: user._id,\n content: 'This is the best post ever',\n tags: ['featured', 'announcement'],\n});\n\nconsole.log(article);\n```\n\nNotice now that there is a `users` collection along with the `blogs` collection in the MongoDB database.\n\nYou\u2019ll now see only the user `_id` in the author field. So, how do we get all of the info for the author along with the article?\n\nWe can use the `populate()` Mongoose method.\n\n``` js\nconst article = await Blog.findOne({ title: \"Awesome Post!\" }).populate(\"author\");\nconsole.log(article);\n```\n\nNow the data for the `author` is populated, or \u201cjoined,\u201d into the `article` data. Mongoose actually uses the MongoDB `$lookup` method behind the scenes.\n\n## Middleware\n\nIn Mongoose, middleware are functions that run before and/or during the execution of asynchronous functions at the schema level.\n\nHere\u2019s an example. Let\u2019s update the `updated` date every time an article is saved or updated. We\u2019ll add this to our `Blog.js` model.\n\n``` js\nblogSchema.pre('save', function(next) {\n this.updated = Date.now(); // update the date every time a blog post is saved\n next();\n});\n```\n\nThen in the `index.js` file, we\u2019ll find an article, update the title, and then save it.\n\n``` js\nconst article = await Blog.findById(\"6247589060c9b6abfa1ef530\").exec();\narticle.title = \"Updated Title\";\nawait article.save();\nconsole.log(article);\n```\n\nNotice that we now have an `updated` date!\n\nBesides `pre()`, there is also a `post()` mongoose middleware function.\n\n## Next steps\n\nI think our example here could use another schema for the `comments`. 
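If you'd like a head start, here is one possible shape such a schema could take. This is only a rough sketch, and the names used here (`Comment`, `blog`, `user`) are illustrative rather than part of the original example:

``` js
import mongoose from 'mongoose';
const { Schema, SchemaTypes, model } = mongoose;

// Hypothetical Comment schema referencing both the blog post and the user
const commentSchema = new Schema({
    blog: {
        type: SchemaTypes.ObjectId,
        ref: 'Blog',
        required: true,
    },
    user: {
        type: SchemaTypes.ObjectId,
        ref: 'User',
        required: true,
    },
    content: {
        type: String,
        required: true,
    },
    votes: {
        type: Number,
        default: 0,
    },
    createdAt: {
        type: Date,
        default: () => Date.now(),
        immutable: true, // creation date should never change
    },
});

const Comment = model('Comment', commentSchema);
export default Comment;
```
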
Try creating that schema and testing it by adding a few users and comments.\n\nThere are many other great Mongoose helper methods that are not covered here. Be sure to check out the [official documentation for references and more examples.\n\n## Conclusion\n\nI think it\u2019s great that developers have many options for connecting and manipulating data in MongoDB. Whether you prefer Mongoose or the standard MongoDB drivers, in the end, it\u2019s all about the data and what\u2019s best for your application and use case.\n\nI can see why Mongoose appeals to many developers and I think I\u2019ll use it more in the future.", "format": "md", "metadata": {"tags": ["JavaScript", "MongoDB"], "pageDescription": "In this article, we\u2019ll learn how Mongoose, a library for MongoDB, can help you to structure and access your data with ease. Many who learn MongoDB get introduced to it through the very popular library, Mongoose. Mongoose is described as \u201celegant MongoDB object modeling for Node.js.\"", "contentType": "Quickstart"}, "title": "Getting Started with MongoDB & Mongoose", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/php/php-error-handling", "action": "created", "body": "# Handling MongoDB PHP Errors\n\nWelcome to this article about MongoDB error handling in PHP. Code samples and tutorials abound on the web , but for clarity's sake, they often don't show what to do with potential errors. Our goal here is to show you common mechanisms to deal with potential issues like connection loss, temporary inability to read/write, initialization failures, and more.\n\nThis article was written using PHP 8.1 and MongoDB 6.1.1 (serverless) with the PHP Extension and Library 1.15. As things may change in the future, you can refer to our official MongoDB PHP documentation.\n\n## Prerequisites\n\nTo execute the code sample\u00a0created for this article, you will need:\n\n* A MongoDB Atlas cluster with sample data loaded. We have MongoDB Atlas free tier clusters\u00a0available to all.\n* A web server with PHP and the MongoDB PHP driver installed. Ideally, follow our \"Getting Set Up to Run PHP with MongoDB\" guide.\n * Alternatively, you can consider using PHP's built-in webserver,\u00a0which can be simpler to set up and might avoid other web server environment variances.\n* A functioning Composer\u00a0to set up the MongoDB PHP Library.\n* A code editor, like Visual Studio Code.\n\nWe will refer to the MongoDB PHP Driver,\u00a0which has two distinct components. First, there's the MongoDB PHP Extension, which is the system-level interface to MongoDB. \n\nSecondly, there's the MongoDB PHP Library, a PHP library that is the application's interface to MongoDB. You can learn about the people behind our PHP driver in this excellent podcast episode.\n\n:youtube]{vid=qOuGM6dNDm8}\n\n## Initializing our code sample\n\nClone it from the [Github repository\u00a0to a local folder in the public section of your web server and website.\u00a0You can use the command\n\n```\ngit clone https://github.com/mongodb-developer/php-error-handling-sample\n```\n\nGo to the project's directory with the command\n\n```\ncd php-error-handling-sample\n```\n\nand run the command\n\n```\ncomposer install\n```\n\nComposer will download external libraries to the \"vendor\" directory (see the screenshot below). 
Note that Composer will check if the MongoDB PHP extension is installed, and will report an error if it is not.\n\nCreate an .env file containing your database user credentials in the same folder as index.php. Our previous tutorial describes\u00a0how to do this in the \"Securing Usernames and Passwords\" section. The .env file is a simple text file formatted as follows:\n\n***MDB\\_USER=user name]\nMDB\\_PASS=[password]***\n\nIn your web browser, navigate to `website-url/php-error-handling-sample/`, and `index.php` \u00a0will be executed.\n\nUpon execution, our code sample outputs a page like this, and there are various ways to induce errors to see how the various checks work by commenting/uncommenting lines in the source code.\n\n![\n\n## System-level error handling\n\nInitially, developers run into system-level issues related to the PHP configuration and whether or not the MongoDB PHP driver is properly installed. That's especially true when your code is deployed on servers you don't control. Here are two common system-level runtime errors and how to check for them:\n\n1. Is the MongoDB extension installed and loaded?\n2. Is the MongoDB PHP Library available to your code?\n\nThere are many ways to check if the MongoDB PHP extension is installed and loaded and here are two in the article, while the others are in the the code file.\n\n1. You can call PHP's `extension_loaded()`\u00a0function with `mongodb`\u00a0as the argument. It will return true or false.\n2. You can call `class_exists()` to check for the existence\u00a0of the `MongoDB\\Driver\\Manager` class defined in the MongoDB PHP extension.\n3. Call `phpversion('mongodb')`, which should return the MongoDB PHP extension version number on success and false on failure.\n4. The MongoDB PHP Library also contains a detect-extension.php\u00a0file which shows another way of detecting if the extension was loaded. This file is not part of the distribution but it is documented.\n\n```\n// MongoDB Extension check, Method #1\nif ( extension_loaded('mongodb') ) {\n echo(MSG_EXTENSION_LOADED_SUCCESS);\n} else {\n echo(MSG_EXTENSION_LOADED_FAIL);\n}\n\n// MongoDB Extension check, Method #2\nif ( !class_exists('MongoDB\\Driver\\Manager') ) {\n echo(MSG_EXTENSION_LOADED2_FAIL); \n exit();\n} \nelse {\n echo(MSG_EXTENSION_LOADED2_SUCCESS);\n}\n```\n\nFailure for either means the MongoDB PHP extension has not been loaded properly and you should check your php.ini configuration and error logs, as this is a system configuration issue. Our Getting Set Up to Run PHP with MongoDB\u00a0article provides debugging steps and tips which may help you.\n\nOnce the MongoDB PHP extension is up and running, the next thing to do is to check if the MongoDB PHP Library is available to your code. You are not obligated to use the library, but we highly recommend you do. It keeps things more abstract, so you focus on your app instead of the inner-workings of MongoDB.\n\nLook for the `MongoDB\\Client` class. If it's there, the library has been added to your project and is available at runtime.\n\n```\n// MongoDB PHP Library check\nif ( !class_exists('MongoDB\\Client') ) {\n echo(MSG_LIBRARY_MISSING); \n exit();\n} \nelse {\n echo(MSG_LIBRARY_PRESENT);\n}\n```\n\n## Database instance initialization\n\nYou can now instantiate a client with your connection string . (Here's how to find the Atlas connection string.) \n\nThe instantiation will fail if something is wrong with the connection string parsing or the driver \u00a0cannot resolve the connection's SRV\u00a0(DNS) record. 
Possible causes for SRV resolution failures include the IP address being rejected by the MongoDB cluster or network connection issues while checking the SRV.\n\n```\n// Fail if the MongoDB Extension is not configuired and loaded\n// Fail if the connection URL is wrong\ntry {\n // IMPORTANT: replace with YOUR server DNS name\n $mdbserver = 'serverlessinstance0.owdak.mongodb.net';\n\n $client = new MongoDB\\Client('mongodb+srv://'.$_ENV'MDB_USER'].':'.$_ENV['MDB_PASS'].'@'.$mdbserver.'/?retryWrites=true&w=majority');\n echo(MSG_CLIENT_SUCCESS);\n // succeeds even if user/password is invalid\n}\ncatch (\\Exception $e) {\n // Fails if the URL is malformed\n // Fails without a SVR check\n // fails if the IP is blocked by an ACL or firewall\n echo(MSG_CLIENT_FAIL);\n exit();\n}\n```\n\nUp to this point, the library has just constructed an internal driver manager, and no I/O to the cluster has been performed. This behavior is described in this [PHP library documentation page\u00a0\u2014 see the \"Behavior\" section.\n\nIt's important to know that even though the client was successfully instantiated, it does not mean your user/password pair is valid , and it doesn't automatically \u00a0grant you access to anything . Your code has yet to try accessing any information, so your authentication has not been verified.\n\nWhen you first create a MongoDB Atlas cluster, there's a \"Connect\" button in the GUI to retrieve the instance's URL. If no user database exists, you will be prompted to add one, and add an IP address to the access list.\n\nIn the MongoDB Atlas GUI sidebar, there's a \"Security\" section with links to the \"Database Access\" and \"Network Access\" configuration pages. \"Database Access\" is where you create database users\u00a0and their privileges. \"Network Access\" lets you add IP addresses to the IP access list.\n\nNext, you can do a first operation that requires an I/O connection and an authentication, such as listing the databases \u00a0with `listDatabaseNames()`, as shown in the code block below. If it succeeds, your user/password pair is valid. If it fails, it could be that the pair is invalid or the user does not have the proper privileges.\n\n```\ntry { \n // if listDatabaseNames() works, your authorization is valid\n $databases_list_iterator = $client->listDatabaseNames(); // asks for a list of database names on the cluster\n\n $databases_list = iterator_to_array( $databases_list_iterator );\n echo( MSG_CLIENT_AUTH_SUCCESS );\n }\n catch (\\Exception $e) {\n // Fail if incorrect user/password, or not authorized\n // Could be another issue, check content of $e->getMessage()\n echo( MSG_EXCEPTION. $e->getMessage() );\n exit();\n }\n```\n\nThere are other reasons why any MongoDB command could fail (connectivity loss, etc.), and the exception message will reveal that. These first initialization steps are common points of friction as cluster URLs vary from project to project, IPs change, and passwords are reset.\n\n## CRUD error handling\n\nIf you haven't performed CRUD operation with MongoDB before, we have a great tutorial entitled \"Creating, Reading, Updating, and Deleting MongoDB Documents with PHP.\" Here, we'll look at the error handling mechanisms.\n\nWe will access one of the sample databases called \"sample\\_analytics ,\" and read/write into the \"customers\" collection. 
If you're unfamiliar with MongoDB's terminology, here's a quick overview of the MongoDB database and collections.\n\nSometimes, ensuring the connected cluster contains the expected database(s) and collection(s) might be a good idea. In our code sample, we can check as follows:\n\n```\n// check if our desired database is present in the cluster by looking up its name\n $workingdbname = 'sample_analytics';\n if ( in_array( $workingdbname, $databases_list ) ) {\n echo( MSG_DATABASE_FOUND.\" '$workingdbname'\n\" );\n }\n else {\n echo( MSG_DATABASE_NOT_FOUND.\" '$workingdbname'\n\" );\n exit();\n }\n\n // check if your desired collection is present in the database\n $workingCollectionname = 'customers';\n $collections_list_itrerator = $client->$workingdbname->listCollections();\n $foundCollection = false;\n \n $collections_list_itrerator->rewind();\n while( $collections_list_itrerator->valid() ) {\n if ( $collections_list_itrerator->current()->getName() == $workingCollectionname ) {\n $foundCollection = true;\n echo( MSG_COLLECTION_FOUND.\" '$workingCollectionname'\n\" );\n break; \n }\n $collections_list_itrerator->next();\n }\n\n if ( !$foundCollection ) {\n echo( MSG_COLLECTION_NOT_FOUND.\" '$workingCollectionname'\n\" );\n exit();\n }\n```\n\nMongoDB CRUD operations\u00a0have a multitude of legitimate reasons to encounter an exception. The general way of handling these errors is to put your operation in a try/catch block to avoid a fatal error.\n\nIf no exception is encountered, most operations return a document containing information about how the operation went. \n\nFor example, write operations return a document that contains a write concern\u00a0\"isAcknowledged\" boolean and a WriteResult\u00a0object. It has important feedback data, such as the number of matched and modified documents (among other things). Your app can check to ensure the operation performed as expected.\n\nIf an exception does happen, you can add further checks to see exactly what type of exception. For reference, look at the MongoDB exception class tree\u00a0and keep in mind that you can get more information from the exception than just the message. The driver's ServerException class\u00a0can also provide the exception error code, the source code line and the trace, and more!\n\nFor example, a common exception occurs when the application tries to insert a new document with an existing unique ID. This could happen for many reasons, including in high concurrency situations where multiple threads or clients might attempt to create identical records.\n\nMongoDB maintains an array of tests for its PHP Library (see DocumentationExamplesTest.php on Github). It contains great code examples of various queries, with error handling. I highly recommend looking at it and using it as a reference since it will stay up to date with the latest driver and APIs.\n\n## Conclusion\n\nThis article was intended to introduce MongoDB error handling in PHP by highlighting common pitfalls and frequently asked questions we answer. 
Understanding the various MongoDB error-handling mechanisms will make your application rock-solid, simplify your development workflow, and ultimately make you and your team more productive.\n\nTo learn more about using MongoDB in PHP, learn from our PHP Library tutorial,\u00a0and I invite you to connect via the PHP section of our developer community forums.\n\n## References\n\n* MongoDB PHP Quickstart Source Code Repository\n* MongoDB PHP Driver Documentation\u00a0provides thorough documentation describing how to use PHP with your MongoDB cluster.\n* MongoDB Query Document\u00a0documentation details the full power available for querying MongoDB collections.", "format": "md", "metadata": {"tags": ["PHP", "MongoDB"], "pageDescription": "This article shows you common mechanisms to deal with potential PHP Errors and Exceptions triggered by connection loss, temporary inability to read/write, initialization failures, and more.\n", "contentType": "Article"}, "title": "Handling MongoDB PHP Errors", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/secure-data-access-views", "action": "created", "body": "# How to Secure MongoDB Data Access with Views\n\n## Introduction\n\nSometimes, MongoDB collections contain sensitive information that require access control. Using the Role-Based Access Control (RBAC) provided by MongoDB, it's easy to restrict access to this collection.\nBut what if you want to share your collection to a wider audience without exposing sensitive data?\n\nFor example, it could be interesting to share your collections with the marketing team for analytics purposes without sharing personal identifiable information (PII) or data you prefer to keep private, like employee salaries.\n\nIt's possible to achieve this result with MongoDB views combined with the MongoDB RBAC, and this is what we are going to explore in this blog post.\n\n## Prerequisites\n\nYou'll need either:\n- A MongoDB cluster with authentication activated (which is somewhat recommended in production!).\n- A MongoDB Atlas cluster.\n\nI'll assume you already have an admin user on your cluster with full authorizations or at least a user that can create views, custom roles. and users. 
If you are in Atlas, you can create this user in the `Database Access` tab or use the MongoDB Shell, like this:\n\n```bash\nmongosh \"mongodb://localhost/admin\" --quiet --eval \"db.createUser({'user': 'root', 'pwd': 'root', 'roles': 'root']});\"\n```\n\nThen you can [connect with the command line provided in Atlas or like this, if you are not in Atlas:\n\n```js\nmongosh \"mongodb://localhost\" --quiet -u root -p root\n```\n\n## Creating a MongoDB collection with sensitive data\n\nIn this example, I'll pretend to have an `employees` collection with sensitive data:\n\n```js\ndb.employees.insertMany(\n \n {\n _id: 1,\n firstname: 'Scott',\n lastname: 'Snyder',\n age: 21,\n ssn: '351-40-7153',\n salary: 100000\n },\n {\n _id: 2,\n firstname: 'Patricia',\n lastname: 'Hanna',\n age: 57,\n ssn: '426-57-8180',\n salary: 95000\n },\n {\n _id: 3,\n firstname: 'Michelle',\n lastname: 'Blair',\n age: 61,\n ssn: '399-04-0314',\n salary: 71000\n },\n {\n _id: 4,\n firstname: 'Benjamin',\n lastname: 'Roberts',\n age: 46,\n ssn: '712-13-9307',\n salary: 60000\n },\n {\n _id: 5,\n firstname: 'Nicholas',\n lastname: 'Parker',\n age: 69,\n ssn: '320-25-5610',\n salary: 81000\n }\n ]\n)\n```\n\n## How to create a view in MongoDB to hide sensitive fields\n\nNow I want to share this collection to a wider audience, but I don\u2019t want to share the social security numbers and salaries.\n\nTo solve this issue, I can create a [view with a `$project` stage that only allows a set of selected fields.\n\n```js\ndb.createView('employees_view', 'employees', {$project: {firstname: 1, lastname: 1, age: 1}}])\n```\n\n> Note that I'm not doing `{$project: {ssn: 0, salary: 0}}` because every field except these two would appear in the view.\nIt works today, but maybe tomorrow, I'll add a `credit_card` field in some documents. It would then appear instantly in the view.\n\nLet's confirm that the view works:\n\n```js\ndb.employees_view.find()\n```\nResults:\n\n```js\n[\n { _id: 1, firstname: 'Scott', lastname: 'Snyder', age: 21 },\n { _id: 2, firstname: 'Patricia', lastname: 'Hanna', age: 57 },\n { _id: 3, firstname: 'Michelle', lastname: 'Blair', age: 61 },\n { _id: 4, firstname: 'Benjamin', lastname: 'Roberts', age: 46 },\n { _id: 5, firstname: 'Nicholas', lastname: 'Parker', age: 69 }\n]\n```\n\nDepending on your schema design and how you want to filter the fields, it could be easier to use [$unset instead of $project. You can learn more in the Practical MongoDB Aggregations Book. But again, `$unset` will just remove the specified fields without filtering new fields that could be added in the future.\n\n## Managing data access with MongoDB roles and users\n\nNow that we have our view, we can share this with restricted access rights. In MongoDB, we need to create a custom role to achieve this.\n\nHere are the command lines if you are not in Atlas.\n\n```js\nuse admin\ndb.createRole(\n {\n role: \"view_access\",\n privileges: \n {resource: {db: \"test\", collection: \"employees_view\"}, actions: [\"find\"]}\n ],\n roles: []\n }\n)\n```\n\nThen we can create the user:\n\n```js\nuse admin\ndb.createUser({user: 'view_user', pwd: '123', roles: [\"view_access\"]})\n```\n\nIf you are in Atlas, database access is managed directly in the Atlas website in the `Database Access` tab. 
You can also use the Atlas CLI if you feel like it.\n\n![Database access tab in Atlas\n\nThen you need to create a custom role.\n\n> Note: In Step 2, I only selected the _Collection Actions > Query and Write Actions > find_ option.\n\nNow that your role is created, head back to the `Database Users` tab and create a user with this custom role.\n\n## Testing data access control with restricted user account\n\nNow that our user is created, we can confirm that this new restricted user doesn't have access to the underlying collection but has access to the view.\n\n```js\n$ mongosh \"mongodb+srv://hidingfields.as3qc0s.mongodb.net/test\" --apiVersion 1 --username view_user --quiet\nEnter password: ***\nAtlas atlas-odym8f-shard-0 primary] test> db.employees.find()\nMongoServerError: user is not allowed to do action [find] on [test.employees]\nAtlas atlas-odym8f-shard-0 [primary] test> db.employees_view.find()\n[\n { _id: 1, firstname: 'Scott', lastname: 'Snyder', age: 21 },\n { _id: 2, firstname: 'Patricia', lastname: 'Hanna', age: 57 },\n { _id: 3, firstname: 'Michelle', lastname: 'Blair', age: 61 },\n { _id: 4, firstname: 'Benjamin', lastname: 'Roberts', age: 46 },\n { _id: 5, firstname: 'Nicholas', lastname: 'Parker', age: 69 }\n]\n```\n\n## Wrap-up\n\nIn this blog post, you learned how to share your MongoDB collections to a wider audience \u2014 even the most critical ones \u2014 without exposing sensitive data.\n\nNote that views can use the indexes from the source collection so your restricted user can leverage those for more advanced queries.\n\nYou could also choose to add an extra `$match` stage before your $project stage to filter entire documents from ever appearing in the view. You can see an example in the [Practical MongoDB Aggregations Book. And don't forget to support the `$match` with an index!\n\nQuestions? Comments? Let's continue the conversation over at the MongoDB Developer Community.\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "In this blog post, you will learn how to share a MongoDB collection to a wider audience without exposing sensitive fields in your documents.", "contentType": "Article"}, "title": "How to Secure MongoDB Data Access with Views", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/build-totally-serverless-rest-api-mongodb-atlas", "action": "created", "body": "# Build a Totally Serverless REST API with MongoDB Atlas\n\nSo you want to build a REST API, but you don't want to worry about the management burden when it comes to scaling it to meet the demand of your users. Or maybe you know your API will experience more burst usage than constant demand and you'd like to reduce your infrastructure costs.\n\nThese are two great scenarios where a serverless architecture could benefit your API development. However, did you know that the serverless architecture doesn't stop at just the API level? You could make use of a serverless database in addition to the application layer and reap the benefits of going totally serverless.\n\nIn this tutorial, we'll see how to go totally serverless in our application and data development using a MongoDB Atlas serverless instance as well as Atlas HTTPS endpoints for our application. 
\n\n## Prerequisites\n\nYou won't need much to be successful with this tutorial:\n\n- A MongoDB Atlas account.\n- A basic understanding of Node.js and JavaScript.\n\nWe'll see how to get started with MongoDB Atlas in this tutorial, but you'll need a basic understanding of JavaScript because we'll be using it to create our serverless API endpoints.\n\n## Deploy and configure a MongoDB Atlas serverless instance\n\nWe're going to start this serverless journey with a serverless database deployment. Serverless instances provide an on-demand database endpoint for your application that will automatically scale up and down to zero with application demand and only charge you based on your usage. Due to the limited strain we put on our database in this tutorial, you'll have to use your imagination when it comes to scaling.\n\nIt\u2019s worth noting that the serverless API that we create with the Atlas HTTPS endpoints can use a pre-provisioned database instance and is not limited to just serverless database instances. We\u2019re using a serverless instance to maintain 100% serverless scalability from database to application.\n\nFrom the MongoDB Atlas Dashboard, click the \"Create\" button.\n\nYou'll want to choose \"Serverless\" as the instance type followed by the cloud in which you'd like it to live. For this example, the cloud vendor isn't important, but if you have other applications that exist on one of the listed clouds, for latency reasons it would make sense to keep things consistent. You\u2019ll notice that the configuration process is very minimal and you never need to think about provisioning any specified resources for your database. \n\nWhen you click the \"Create Instance\" button, your instance is ready to go!\n\n## Developing a REST API with MongoDB Atlas HTTPS endpoints\n\nTo create the endpoints for our API, we are going to leverage Atlas HTTPS endpoints.Think of these as a combination of Functions as a Service (FaaS) and an API gateway that routes URLs to a function. This service can be found in the \"App Services\" tab area within MongoDB Atlas.\n\nClick on the \"App Services\" tab within MongoDB Atlas.\n\nYou'll need to create an application for this particular project. Choose the \"Create a New App\" button and select the serverless instance as the cluster that you wish to use.\n\nThere's a lot you can do with Atlas App Services beyond API creation in case you wanted to explore items out of the scope of this tutorial.\n\nFrom the App Services dashboard, choose \"HTTPS Endpoints.\"\n\nWe're going to create our first endpoint and it will be responsible for creating a new document.\n\nWhen creating the new endpoint, use the following information:\n\n- Route: /person\n- Enabled: true\n- HTTP Method: POST\n- Respond with Result: true\n- Return Type: JSON\n- Function: New Function\n\nThe remaining fields can be left as their default values.\n\nGive the new function a name. The name is not important, but it would make sense to call it something like \"createPerson\" for your own sanity.\n\nThe JavaScript for the function should look like the following:\n\n```javascript\nexports = function({ query, headers, body}, response) {\n const result = context.services\n .get(\"mongodb-atlas\")\n .db(\"examples\")\n .collection(\"people\")\n .insertOne(JSON.parse(body.text()));\n\n return result;\n};\n```\n\nRemember, our goal is to create a document.\n\nIn the above function, we are using the \"examples\" database and the \"people\" collection within our serverless instance. 
Neither need to exist prior to creating the function or executing our code. They will be created at runtime.\n\nFor this example, we are not doing any data validation. Whatever the client sends through a request body will be saved into MongoDB. Your actual function logic will likely vary to accommodate more of your business logic.\n\nWe're not in the clear yet. We need to change our authentication rules for the function. Click on the \"Functions\" navigation item and then choose the \"Settings\" tab. More complex authentication mechanisms are out of the scope of this particular tutorial, so we're going to give the function \"System\" level authentication. Consult the documentation to see what authentication mechanisms make the most sense for you.\n\nWe're going to create one more endpoint for this tutorial. We want to be able to retrieve any document from within our collection.\n\nCreate a new HTTPS endpoint. Use the following information:\n\n- Route: /people\n- Enabled: true\n- HTTP Method: GET\n- Respond with Result: true\n- Return Type: JSON\n- Function: New Function\n\nOnce again, the other fields can be left as the default. Choose a name like \"retrievePeople\" for your function, or whatever makes the most sense to you.\n\nThe function itself can be as simple as the following:\n\n```javascript\nexports = function({ query, headers, body}, response) {\n\n const docs = context.services\n .get(\"mongodb-atlas\")\n .db(\"examples\")\n .collection(\"people\")\n .find({})\n .toArray();\n\n return docs;\n};\n```\n\nIn the above example, we're using an empty filter to find and return all documents in our collection.\n\nTo make this work, don't forget to change the authentication on the \"retrievePeople\" function like you did the \"createPerson\" function. The \"System\" level works for this example, but once again, pick what makes the most sense for your production scenario.\n\n## MongoDB Atlas App Services authentication, authorization, and general security\n\nWe brushed over it throughout the tutorial, but it\u2019s worth further clarifying the levels of security available to you when developing a serverless REST API with MongoDB Atlas.\n\nWe can use all or some of the following to improve the security of our API:\n\n- Authentication\n- Authorization\n- Network security with IP access lists\n\nWith a network rule, you can allow everyone on the internet to be able to reach your API or specific IP addresses. This can be useful if you are building a public API or something internal for your organization.\n\nThe network rules for your application should be your first line of defense.\n\nThroughout this tutorial, we used \u201cSystem\u201d level authentication for our endpoints. This essentially allows anyone who can reach our API from a network level access to our API without question. If you want to improve the security beyond a network rule, you can change the authentication mechanism to something like \u201cApplication\u201d or \u201cUser\u201d instead.\n\nMongoDB offers a variety of ways to authenticate users. For example, you could enable email and password authentication, OAuth, or something custom. This would require the user to authenticate and establish a token or session prior to interacting with your API.\n\nFinally, you can take advantage of authorization rules within Atlas App Services. This can be valuable if you want to restrict users in what they can do with your API. 
These rules are created using special JSON expressions.\n\nIf you\u2019re interested in learning the implementation specifics behind network level security, authentication, or authorization, take a look at the documentation.\n\n## Conclusion\n\nYou just saw how to get started developing a truly serverless application with MongoDB Atlas. Not only was the API serverless through use of Atlas HTTPS endpoints, but it also made use of a serverless database instance.\n\nWhen using this approach, your application will scale to meet demand without any developer intervention. You'll also be billed for usage rather than uptime, which could provide many advantages.\n\nIf you want to learn more, consider checking out the MongoDB Community Forums to see how other developers are integrating serverless.\n\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Serverless"], "pageDescription": "Learn how to go totally serverless in both the database and application by using MongoDB Atlas serverless instances and the MongoDB Atlas App Services.", "contentType": "Tutorial"}, "title": "Build a Totally Serverless REST API with MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-cluster-automation-using-scheduled-triggers", "action": "created", "body": "# Atlas Cluster Automation Using Scheduled Triggers\n\n# Atlas Cluster Automation Using Scheduled Triggers\n\nEvery action you can take in the Atlas user interface is backed by a corresponding Administration API, which allows you to easily bring automation to your Atlas deployments. Some of the more common forms of Atlas automation occur on a schedule, such as pausing a cluster that\u2019s only used for testing in the evening and resuming the cluster again in the morning.\n\nHaving an API to automate Atlas actions is great, but you\u2019re still on the hook for writing the script that calls the API, finding a place to host the script, and setting up the job to call the script on your desired schedule. This is where Atlas Scheduled Triggers come to the rescue.\n\nIn this Atlas Cluster Automation Using Scheduled Triggers article I will show you how a Scheduled Trigger can be used to easily incorporate automation into your environment. In addition to pausing and unpausing a cluster, I\u2019ll similarly show how cluster scale up and down events could also be placed on a schedule. Both of these activities allow you to save on costs for when you either don\u2019t need the cluster (paused), or don\u2019t need it to support peak workloads (scale down).\n\n# Architecture\n\nThree example scheduled triggers are provided in this solution. Each trigger has an associated trigger function. The bulk of the work is handled by the **modifyCluster** function, which as the name implies is a generic function for making modifications to a cluster. It's a wrapper around the Atlas Update Configuration of One Cluster Admin API.\n\n# Preparation\n\n## Generate an API Key\n\nIn order to call the Atlas Administrative APIs, you'll first need an API Key with the Organization Owner role. API Keys are created in the Access Manager. At the Organization level (not the Project level), select **Access Manager** from the menu on the left: \n\nThen select the **API Keys** tab.\n\nCreate a new key, giving it a good description. Assign the key **Organization Owner** permissions, which will allow it to manage any of the projects in the organization. 
\n\nClick **Next** and make a note of your Private Key:\n\nLet's limit who can use our API key by adding an access list. In our case, the API key is going to be used by a Trigger which is a component of Atlas App Services. You will find the list of IP addresses used by App Services in the documentation under Firewall Configuration. Note, each IP address must be added individually. Here's an idea you can vote for to get this addressed: Ability to provide IP addresses as a list for Network Access\n\nClick **Done.**\n\n# Deployment\n\n## Create a Project for Automation\n\nSince this solution works across your entire Atlas organization, I like to host it in its own dedicated Atlas Project.\n\n## Create an Application\n\nWe will host our trigger in an Atlas App Services Application. To begin, just click the App Services tab: \n\nYou'll see that App Services offers a bunch of templates to get you started. For this use case, just select the first option to **Build your own App**:\n\nYou'll then be presented with options to link a data source, name your application and choose a deployment model. The current iteration of this utility doesn't use a data source, so you can ignore that step (a free cluster for you regardless). You can also leave the deployment model at its default (Global), unless you want to limit the application to a specific region. \n\nI've named the application **Automation App**:\n\nClick **Create App Service**. If you're presented with a set of guides, click **Close Guides** as today I am your guide.\n\nFrom here, you have the option to simply import the App Services application and adjust any of the functions to fit your needs. If you prefer to build the application from scratch, skip to the next section.\n\n# Import Option\n## Step 1: Store the API Secret Key\nThe extract has a dependency on the API Secret Key, thus the import will fail if it is not configured beforehand.\n\nUse the **Values** menu on the left to Create a Secret named **AtlasPrivateKeySecret** containing your private key (the secret is not in quotes): \n\n## Step 2: Install the App Services CLI\n\nThe App Services CLI is available on npm. To install the App Services CLI on your system, ensure that you have Node.js installed and then run the following command in your shell:\n\n```zsh\n\u2717 npm install -g atlas-app-services-cli\n```\n\n## Step 3: Extract the Application Archive\n\nDownload and extract the **AutomationApp.zip**.\n\n## Step 4: Log into Atlas\n\nTo configure your app with App Services CLI, you must log in to Atlas using your API keys:\n\n```zsh\n\u2717 appservices login --api-key=\"\" --private-api-key=\"\"\n\nSuccessfully logged in\n```\n\n## Step 5: Get the Application ID\n\nSelect the **App Settings** menu and copy your Application ID:\n\n## Step 6: Import the Application\n\nRun the following appservices push command from the directory where you extracted the export:\n```zsh\nappservices push --remote=\"\"\n\n...\nA summary of changes\n...\n\n? Please confirm the changes shown above Yes\n\nCreating draft\nPushing changes\nDeploying draft\nDeployment complete\nSuccessfully pushed app up:\n```\n\nAfter the import, replace the `AtlasPublicKey` with your API public key value.\n\n## Review the Imported Application\n\nThe imported application includes 3 self-explanatory sample scheduled triggers: \n\nThe 3 triggers have 3 associated Functions. 
The **pauseClustersTrigger** and **resumeClustersTrigger** function supply a set of projects and clusters to pause, so these need to be adjusted to fit your needs:\n\n```JavaScript\n // Supply projectIDs and clusterNames...\n const projectIDs =\n {\n id: '5c5db514c56c983b7e4a8701',\n names: [\n 'Demo',\n 'Demo2'\n ]\n },\n {\n id: '62d05595f08bd53924fa3634',\n names: [\n 'ShardedMultiRegion'\n ]\n }\n];\n```\n\nAll 3 trigger functions call the **modifyCluster** function, where the bulk of the work is done.\n\nIn addition, you'll find two utility functions, **getProjectClusters** and **getProjects**. These functions are not utilized in this solution, but are provided for reference if you wanted to further automate these processes (that is, removing the hard coded project IDs and cluster names in the trigger functions):\n\n![Functions\n\nNow that you have reviewed the draft, as a final step go ahead and deploy the App Services application.\n\n# Build it Yourself Option\n\nTo understand what's included in the application, here are the steps to build it yourself from scratch.\n\n## Step 1: Store the API Keys\n\nThe functions we need to create will call the Atlas Administration APIs, so we need to store our API Public and Private Keys, which we will do using Values & Secrets. The sample code I provide references these values as AtlasPublicKey and AtlasPrivateKey, so use those same names unless you want to change the code where they\u2019re referenced.\n\nYou'll find **Values** under the BUILD menu:\n\n## \n\nFirst, create a Value for your public key (_note, the key is in quotes_):\n\nCreate a Secret containing your private key (the secret is not in quotes):\n\nThe Secret cannot be accessed directly, so create a second Value that links to the secret:\n\n## Step 2: Note the Project ID(s)\n\nWe need to note the IDs of the projects that have clusters we want to automate. Click the 3 dots in the upper left corner of the UI to open the Project Settings:\n\nUnder which you\u2019ll find your Project ID:\n\n## Step 3: Create the Functions\n\nI will create two functions, a generic function to modify a cluster and a trigger function to iterate over the clusters to be paused. \n\nYou'll find Functions under the BUILD menu: \n\n## modifyCluster\n\nI\u2019m only demonstrating a couple of things you can do with cluster automation, but the sky is really limitless. The following modifyCluster function is a generic wrapper around the Modify One Multi-Cloud Cluster from One Project API for calling the API from App Services (or Node.js for that matter). \n\nCreate a New Function named **modifyCluster**. Set the function to Private as it will only be called by our trigger. The other default settings are fine:\n\n \n\nSwitch to the Function Editor tab and paste the following code:\n\n```JavaScript\n/*\n * Modifies the cluster as defined by the body parameter. 
\n * See https://www.mongodb.com/docs/atlas/reference/api-resources-spec/v2/#tag/Clusters/operation/updateCluster\n *\n */\nexports = async function(username, password, projectID, clusterName, body) {\n \n // Easy testing from the console\n if (username == \"Hello world!\") { \n username = await context.values.get(\"AtlasPublicKey\");\n password = await context.values.get(\"AtlasPrivateKey\");\n projectID = \"5c5db514c56c983b7e4a8701\";\n clusterName = \"Demo\";\n body = {paused: false}\n }\n\n const arg = { \n scheme: 'https', \n host: 'cloud.mongodb.com', \n path: 'api/atlas/v2/groups/' + projectID + '/clusters/' + clusterName, \n username: username, \n password: password,\n headers: {'Accept': 'application/vnd.atlas.2023-11-15+json'], 'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']}, \n digestAuth:true,\n body: JSON.stringify(body)\n };\n \n // The response body is a BSON.Binary object. Parse it and return.\n response = await context.http.patch(arg);\n\n return EJSON.parse(response.body.text()); \n};\n```\n\nTo test this function, you need to supply an API key, an API secret, a project Id, an associated cluster name to modify, and a payload containing the modifications you'd like to make. In our case it's simply setting the paused property.\n\n> **Note**: By default, the Console supplies 'Hello world!' when test running a function, so my function code tests for that input and provides some default values for easy testing. \n\n![Console\n\n```JavaScript\n // Easy testing from the console\n if (username == \"Hello world!\") { \n username = await context.values.get(\"AtlasPublicKey\");\n password = await context.values.get(\"AtlasPrivateKey\");\n projectID = \"5c5db514c56c983b7e4a8701\";\n clusterName = \"Demo\";\n body = {paused: false}\n }\n```\n\nPress the **Run** button to see the results, which will appear in the Result window:\n\nAnd you should find you cluster being resumed (or paused):\n\n## pauseClustersTrigger\n\nThis function will be called by a trigger. As it's not possible to pass parameters to a scheduled trigger, it uses a hard-coded list of project Ids and associated cluster names to pause. Ideally these values would be stored in a collection with a nice UI to manage all of this, but that's a job for another day :-).\n\n_In the appendix of this article, I provide functions that will get all projects and clusters in the organization. That would create a truly dynamic operation that would pause all clusters. 
You could then alternatively refactor the code to use an exclude list instead of an allow list._\n\n```JavaScript\n/*\n * Iterates over the provided projects and clusters, pausing those clusters\n */\nexports = async function() {\n \n // Supply projectIDs and clusterNames...\n const projectIDs = {id:'5c5db514c56c983b7e4a8701', names:['Demo', 'Demo2']}, {id:'62d05595f08bd53924fa3634', names:['ShardedMultiRegion']}];\n\n // Get stored credentials...\n const username = context.values.get(\"AtlasPublicKey\");\n const password = context.values.get(\"AtlasPrivateKey\");\n\n // Set desired state...\n const body = {paused: true};\n\n var result = \"\";\n \n projectIDs.forEach(async function (project) {\n \n project.names.forEach(async function (cluster) {\n result = await context.functions.execute('modifyCluster', username, password, project.id, cluster, body);\n console.log(\"Cluster \" + cluster + \": \" + EJSON.stringify(result));\n });\n });\n \n return \"Clusters Paused\";\n};\n```\n\n## Step 4: Create Trigger - pauseClusters\n\nThe ability to pause and resume a cluster is supported by the [Modify One Cluster from One Project API. To begin, select Triggers from the menu on the left: \n\nAnd add a Trigger.\n\nSet the Trigger Type to **Scheduled** and the name to **pauseClusters**: \n\nAs for the schedule, you have the full power of CRON Expressions at your fingertips. For this exercise, let\u2019s assume we want to pause the cluster every evening at 6pm. Select **Advanced** and set the CRON schedule to `0 22 * * *`. \n\n> **Note**, the time is in GMT, so adjust accordingly for your timezone. As this cluster is running in US East, I\u2019m going to add 4 hours:\n\nCheck the Next Events window to validate the job will run when you desire. \n\nThe final step is to select the function for the trigger to execute. Select the **pauseClustersTrigger** function.\n\nAnd **Save** the trigger.\n\nThe final step is to **REVIEW DRAFT & DEPLOY**. \n\n# Resume the Cluster\n\nYou could opt to manually resume the cluster(s) as it\u2019s needed. But for completeness, let\u2019s assume we want the cluster(s) to automatically resume at 8am US East every weekday morning. \n\nDuplicate the pauseClustersTrigger function to a new function named **resumeClustersTriggger**\n\nAt a minimum, edit the function code setting **paused** to **false**. You could also adjust the projectIDs and clusterNames to a subset of projects to resume:\n\n```JavaScript\n/*\n * Iterates over the provided projects and clusters, resuming those clusters\n */\nexports = async function() {\n \n // Supply projectIDs and clusterNames...\n const projectIDs = {id:'5c5db514c56c983b7e4a8701', names:['Demo', 'Demo2']}, {id:'62d05595f08bd53924fa3634', names:['ShardedMultiRegion']}];\n\n // Get stored credentials...\n const username = context.values.get(\"AtlasPublicKey\");\n const password = context.values.get(\"AtlasPrivateKey\");\n\n // Set desired state...\n const body = {paused: false};\n\n var result = \"\";\n \n projectIDs.forEach(async function (project) {\n \n project.names.forEach(async function (cluster) {\n result = await context.functions.execute('modifyCluster', username, password, project.id, cluster, body);\n console.log(\"Cluster \" + cluster + \": \" + EJSON.stringify(result));\n });\n });\n \n return \"Clusters Paused\";\n};\n```\n\nThen add a new scheduled trigger named **resumeClusters**. Set the CRON schedule to: `0 12 * * 1-5`. 
The Next Events validates for us this is exactly what we want: \n\n![Schedule Type Resume\n\n# Create Trigger: Scaling Up and Down\n\nIt\u2019s not uncommon to have workloads that are more demanding during certain hours of the day or days of the week. Rather than running your cluster to support peak capacity, you can use this same approach to schedule your cluster to scale up and down as your workload requires it. \n\n> **_NOTE:_** Atlas Clusters already support Auto-Scaling, which may very well suit your needs. The approach described here will let you definitively control when your cluster scales up and down.\n\nLet\u2019s say we want to scale up our cluster every day at 9am before our store opens for business.\n\nAdd a new function named **scaleClusterUpTrigger**. Here\u2019s the function code. It\u2019s very similar to before, except the body\u2019s been changed to alter the provider settings:\n\n> **_NOTE:_** This example represents a single-region topology. If you have multiple regions and/or asymetric clusters using read-only and/or analytics nodes, just check the Modify One Cluster from One Project API documenation for the payload details. \n\n```JavaScript\nexports = async function() {\n \n // Supply projectID and clusterNames...\n const projectID = '';\n const clusterName = '';\n\n // Get stored credentials...\n const username = context.values.get(\"AtlasPublicKey\");\n const password = context.values.get(\"AtlasPrivateKey\");\n\n // Set the desired instance size...\n const body = {\n \"replicationSpecs\": [\n {\n \"regionConfigs\": [\n {\n \"electableSpecs\": {\n \"instanceSize\": \"M10\",\n \"nodeCount\":3\n },\n \"priority\":7,\n \"providerName\": \"AZURE\",\n \"regionName\": \"US_EAST_2\",\n },\n ]\n }\n ]\n };\n \n result = await context.functions.execute('modifyCluster', username, password, projectID, clusterName, body);\n console.log(EJSON.stringify(result));\n \n if (result.error) {\n return result;\n }\n\n return clusterName + \" scaled up\"; \n};\n```\n\nThen add a scheduled trigger named **scaleClusterUp**. Set the CRON schedule to: `0 13 * * *`. \n\nScaling a cluster back down would simply be another trigger, scheduled to run when you want, using the same code above, setting the **instanceSizeName** to whatever you desire.\n\nAnd that\u2019s it. I hope you find this beneficial. You should be able to use the techniques described here to easily call any MongoDB Atlas Admin API endpoint from Atlas App Services.\n\n# Appendix\n## getProjects\n\nThis standalone function can be test run from the App Services console to see the list of all the projects in your organization. 
You could also call it from other functions to get a list of projects:\n\n```JavaScript\n/*\n * Returns an array of the projects in the organization\n * See https://docs.atlas.mongodb.com/reference/api/project-get-all/\n *\n * Returns an array of objects, e.g.\n *\n * {\n * \"clusterCount\": {\n * \"$numberInt\": \"1\"\n * },\n * \"created\": \"2021-05-11T18:24:48Z\",\n * \"id\": \"609acbef1b76b53fcd37c8e1\",\n * \"links\": [\n * {\n * \"href\": \"https://cloud.mongodb.com/api/atlas/v1.0/groups/609acbef1b76b53fcd37c8e1\",\n * \"rel\": \"self\"\n * }\n * ],\n * \"name\": \"mg-training-sample\",\n * \"orgId\": \"5b4e2d803b34b965050f1835\"\n * }\n *\n */\nexports = async function() {\n \n // Get stored credentials...\n const username = await context.values.get(\"AtlasPublicKey\");\n const password = await context.values.get(\"AtlasPrivateKey\");\n \n const arg = { \n scheme: 'https', \n host: 'cloud.mongodb.com', \n path: 'api/atlas/v1.0/groups', \n username: username, \n password: password,\n headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']}, \n digestAuth:true,\n };\n \n // The response body is a BSON.Binary object. Parse it and return.\n response = await context.http.get(arg);\n\n return EJSON.parse(response.body.text()).results; \n};\n```\n\n## getProjectClusters\n\nAnother example function that will return the cluster details for a provided project.\n\n> Note, to test this function, you need to supply a projectId. By default, the Console supplies \u2018Hello world!\u2019, so I test for that input and provide some default values for easy testing.\n\n```JavaScript\n/*\n * Returns an array of the clusters for the supplied project ID.\n * See https://docs.atlas.mongodb.com/reference/api/clusters-get-all/\n *\n * Returns an array of objects. See the API documentation for details.\n * \n */\nexports = async function(project_id) {\n \n if (project_id == \"Hello world!\") { // Easy testing from the console\n project_id = \"5e8f8268d896f55ac04969a1\"\n }\n \n // Get stored credentials...\n const username = await context.values.get(\"AtlasPublicKey\");\n const password = await context.values.get(\"AtlasPrivateKey\");\n \n const arg = { \n scheme: 'https', \n host: 'cloud.mongodb.com', \n path: `api/atlas/v1.0/groups/${project_id}/clusters`, \n username: username, \n password: password,\n headers: {'Content-Type': ['application/json'], 'Accept-Encoding': ['bzip, deflate']}, \n digestAuth:true,\n };\n \n // The response body is a BSON.Binary object. Parse it and return.\n response = await context.http.get(arg);\n\n return EJSON.parse(response.body.text()).results; \n};\n```", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "In this article I will show you how a Scheduled Trigger can be used to easily incorporate automation into your environment. In addition to pausing and unpausing a cluster, I\u2019ll similarly show how cluster scale up and down events could also be placed on a schedule. 
Both of these activities allow you to save on costs for when you either don\u2019t need the cluster (paused), or don\u2019t need it to support peak workloads (scale down).", "contentType": "Tutorial"}, "title": "Atlas Cluster Automation Using Scheduled Triggers", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/sharding-optimization-defragmentation", "action": "created", "body": "# Optimizing Sharded Collections in MongoDB with Defragmentation\n\n## Table of Contents\n\n* Introduction\n* Background\n* What is sharded collection fragmentation?\n* What is sharded collection defragmentation?\n* When should I defragment my sharded collection?\n* Defragmentation process overview\n* How do I defragment my sharded collection?\n* How to monitor the defragmentation process\n* How to stop defragmentation\n* Collection defragmentation example\n* FAQs\n\n## Introduction\nSo, what do you do if you have a large number of chunks in your sharded cluster and want to reduce the impact of chunk migrations on CRUD latency? You can use collection defragmentation!\n\nIn this post, we\u2019ll cover when you should consider defragmenting a collection, the benefits of defragmentation for your sharded cluster, and cover all of the commands needed to execute, monitor, and stop defragmentation. If you are new to sharding or want a refresher on how MongoDB delivers horizontal scalability, please check out the MongoDB manual.\n\n## Background\nA sharded collection is stored as \u201cchunks,\u201d and a balancer moves data around to maintain an equal distribution of data between shards. In MongoDB 6.0, when the difference in the amount of data between two shards is two times the configured chunk size, the MongoDB balancer automatically migrates chunks between shards. For collections with a chunk size of 128MB, we will migrate data between shards if the difference in data size exceeds 256MB.\n \nEvery time it migrates a chunk, MongoDB needs to update the new location of this chunk in its routing table. The routing table stores the location of all the chunks contained in your collection. The more chunks in your collection, the more \"locations\" in the routing table, and the larger the routing table will be. The larger the routing table, the longer it takes to update it after each migration. When updating the routing table, MongoDB blocks writes to your collection. As a result, it\u2019s important to keep the number of chunks for your collection to a minimum.\n\nBy merging as many chunks as possible via defragmentation, you reduce the size of the routing table by reducing the number of chunks in your collection. The smaller the routing table, the shorter the duration of write blocking on your collection for chunk migrations, merges, and splits.\n\n## What is sharded collection fragmentation?\nA collection with an excessive number of chunks is considered fragmented. \n\nIn this example, a customer\u2019s collection has ~615K chunks on each shard.\n\n## What is sharded collection defragmentation?\nDefragmentation is the concept of merging contiguous chunks in order to reduce the number of chunks in your collection. \n\nIn our same example, after defragmentation on December 5th, the number of chunks has gone down to 650 chunks on each shard. 
The customer has managed to reduce the number of chunks in their cluster by a factor of 1000.\n\n## When should I defragment my sharded collection?\nDefragmentation of a collection should be considered in the following cases:\n* A sharded collection contains more than 20,000 chunks.\n* Once chunk migrations are complete after adding and removing shards.\n\n## The defragmentation process overview\nThe process is composed of three distinct phases that all help reduce the number of chunks in your chosen collection. The first phase automatically merges mergeable chunks on the same shard. The second phase migrates smaller chunks to other shards so they can be merged. The third phase scans the cluster one final time and merges any remaining mergeable chunks that reside on the same shard.\n\nThe defragment operation will respect your balancing window and any configured zones.\n\n**Note**: Do not modify the chunkSize value while defragmentation is executing as this may lead to improper behavior. \n\n### Phase one: merge and measure\nIn phase one of the defragmentation process, MongoDB scans every shard in the cluster and merges any mergeable chunks that reside on the same shard. The data size of the resulting chunks is stored for the next phase of the defragmentation process. \n\n### Phase two: move and merge\nAfter phase one is completed, there might be some small chunks leftover. Chunks that are less than 25% of the max chunk size set are identified as small chunks. For example, with MongoDB\u2019s default chunk size of 128MB, all chunks of 32MB or less would be considered small. The balancer then attempts to find other chunks across every shard to determine if they can be merged. If two chunks can be merged, the smaller of the two is moved to be merged with the second chunk. This also means that the larger your configured chunk size, the more \u201csmall\u201d chunks you can move around, and the more you can defragment.\n\n### Phase three: final merge \nIn this phase, the balancer scans the entire cluster to find any other mergeable chunks that reside on the same shard and merges them. The defragmentation process is now complete.\n\n## How do I defragment my sharded collection?\nIf you have a highly fragmented collection, you can defragment it by issuing a command to initiate defragmentation via configureCollectionBalancing options. \n\n```\ndb.adminCommand(\n {\n configureCollectionBalancing: \".\",\n defragmentCollection: true\n }\n)\n```\n\n## How to monitor the defragmentation process\nThroughout the process, you can monitor the status of defragmentation by executing balancerCollectionStatus. Please refer to our balancerCollectionStatus manual for a detailed example on the output of the balancerCollectionStatus command during defragmentation.\n\n## How to stop defragmentation\nDefragmenting a collection can be safely stopped at any time during any phase by issuing a command to stop defragmentation via configureCollectionBalancing options.\n\n```\ndb.adminCommand(\n {\n configureCollectionBalancing: \".\",\n defragmentCollection: false\n }\n)\n```\n\n## Collection defragmentation example\nLet\u2019s defragment a collection called `\"airplanes\"` in the `\"vehicles\"` database, with the current default chunk size of 128MB.\n\n```\ndb.adminCommand(\n {\n configureCollectionBalancing: \"vehicles.airplanes\",\n defragmentCollection: true\n})\n```\n\nThis will start the defragmentation process. You can monitor the process by using the balancerCollectionStatus command. 
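For example, to check on the `vehicles.airplanes` namespace we just started defragmenting, you could run something like this from `mongosh`:\n\n```\ndb.adminCommand(\n {\n balancerCollectionStatus: \"vehicles.airplanes\"\n }\n)\n```\n\n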
Here\u2019s an example of the output in each phase of the process. \n\n### Phase one: merge and measure\n```\n{\n \"balancerCompliant\": false,\n \"firstComplianceViolation\": \"defragmentingChunks\",\n \"details\": {\n \"currentPhase\": \"mergeAndMeasureChunks\",\n \"progress\": { \"remainingChunksToProcess\": 1 }\n }\n}\n```\n\nSince this phase of the defragmentation process contains multiple operations such as `mergeChunks` and `dataSize`, the value of the `remainingChunksToProcess` field will not change when the `mergeChunk` operation has been completed on a chunk but the dataSize operation is not complete for the same chunk. \n\n### Phase two: move and merge\n```\n{\n \"balancerCompliant\": false,\n \"firstComplianceViolation\": \"defragmentingChunks\",\n \"details\": {\n \"currentPhase\": \"moveAndMergeChunks\",\n \"progress\": { \"remainingChunksToProcess\": 1 }\n }\n}\n```\n\nSince this phase of the defragmentation process contains multiple operations, the value of the `remainingChunksToProcess` field will not change when a migration is complete but the `mergeChunk` operation is not complete for the same chunk.\n\n### Phase three: final merge \n```\n{\n \"balancerCompliant\": false,\n \"firstComplianceViolation\": \"defragmentingChunks\",\n \"details\": {\n \"currentPhase\": \"mergeChunks\",\n \"progress\": { \"remainingChunksToProcess\": 1 }\n }\n}\n```\n\nWhen the process is complete, for a balanced collection the document returns the following information.\n\n```\n{\n \"balancerCompliant\" : true,\n \"ok\" : 1,\n \"operationTime\" : Timestamp(1583193238, 1),\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1583193238, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }\n}\n```\n\n**Note**: There is a possibility that your collection is not balanced at the end of defragmentation. The balancer will then kick in and start migrating data as it does regularly.\n\n## FAQs\n* **How long does defragmentation take?**\n * The duration for defragmentation will vary depending on the size and the \u201cfragmentation state\u201d of a collection, with larger and more fragmented collections taking longer.\n * The first phase of defragmentation merges chunks on the same shard delivering immediate benefits to your cluster. Here are some worst-case estimates for the time to complete phase one of defragmentation:\n * Collection with 100,000 chunks - < 18 hrs\n * Collection with 1,000,000 chunks - < 6 days \n * The complete defragmentation process involves the movement of chunks between shards where speeds can vary based on the resources available and the cluster\u2019s configured chunk size. It is difficult to estimate how long it will take for your cluster to complete defragmentation.\n* **Can I use defragmentation to just change my chunk size?**\n * Yes, just run the command with `\"defragmentCollection: false\"`.\n* **How do I stop an ongoing defragmentation?**\n * Run the following command:\n\n```\ndb.adminCommand(\n {\n configureCollectionBalancing: \".\",\n defragmentCollection: false\n }\n)\n```\n\n* **Can I change my chunk size during defragmentation?**\n * Yes, but this will result in a less than optimal defragmentation since the new chunk size will only be applied to any future phases of the operation. \n * Alternatively, you can stop an ongoing defragmentation by running the command again with `\"defragmentCollection: false\"`. 
Then just run the command with the new chunk size and `\"defragmentCollection: true\"`.\n* **What happens if I run defragmentation with a different chunk size on a collection where defragmentation is already in progress?**\n * Do not run defragmentation with a different chunk size on a collection that is being defragmented as this causes the defragmentation process to utilize the new value in the next phase of the defragmentation process, resulting in a less than optimal defragmentation.\n* **Can I run defragmentation on multiple collections simultaneously?**\n * Yes. However, a shard can only participate in one migration at a time \u2014 meaning during the second phase of defragmentation, a shard can only donate or receive one chunk at a time. \n* **Can I defragment collections to different chunk sizes?**\n * Yes, chunk size is specific to a collection. So different collections can be configured to have different chunk sizes, if desired.\n* **Why do I see a 1TB chunk on my shards even though I set chunkSize to 256MB?**\n * In MongoDB 6.0, the cluster will no longer partition data unless it\u2019s necessary to facilitate a migration. So, chunks may exceed the configured `chunkSize`. This behavior reduces the number of chunks on a shard which in turn reduces the impact of migrations on a cluster.\n* **Is the value \u201ctrue\u201d for the key defragmentCollection of configureCollectionBalancing persistent once set?**\n * The `defragmentCollection` key will only have a value of `\"true\"` while the defragmentation process is occurring. Once the defragmentation process ends, the value for defragmentCollection field will be unset from true. \n* **How do I know if defragmentation is running currently, stopped, or started successfully?**\n * Use the balancerCollectionStatus command to determine the current state of defragmentation on a given collection. \n * In the document returned by the `balancerCollectionStatus` command, the firstComplianceViolation field will display `\u201cdefragmentingChunks\u201d` when a collection is actively being defragmented. \n * When a collection is not being defragmented, the balancer status returns a different value for \u201cfirstComplianceViolation\u201d. \n * If the collection is unbalanced, the command will return `\u201cbalancerCompliant: false\u201d` and `\u201cfirstComplianceViolation`: `\u201cchunksImbalance\u201d\u201d`.\n * If the collection is balanced, the command will return `\u201cbalancerCompliant: true\u201d`. See balancerCollectionStatus for more information on the other possible values. \n* **How does defragmentation impact my workload?**\n * The impact of defragmentation on a cluster is similar to a migration. Writes will be blocked to the collection being defragmented while the metadata refreshes occur in response to the underlying merge and move defragmentation operations. The duration of the write blockage can be estimated by reviewing the mongod logs of a previous donor shard. \n * Secondary reads will be affected during defragmentation operations as the changes on the primary node need to be replicated to the secondaries. \n * Additionally, normal balancing operations will not occur for a collection being defragmented. \n* **What if I have a balancing window?**\n * The defragmentation process respects balancing windows and will not execute any defragmentation operations outside of the configured balancing window. 
\n* **Is defragmentation resilient to crashes or stepdowns?**\n * Yes, the defragmentation process can withstand a crash or a primary step down. Defragmentation will automatically restart after the completion of the step up of the new primary.\n* **Is there a way to just do Phase One of defragmentation?**\n * You can\u2019t currently, but we may be adding this capability in the near future.\n* **What if I\u2019m still not happy with the number of chunks in my cluster?**\n * Consider setting your chunk size to 1GB (1024MB) for defragmentation in order to move more mergeable chunks.\n\n```\ndb.adminCommand(\n {\n configureCollectionBalancing: \".\",\n chunkSize: 1024,\n defragmentCollection: true \n }\n)\n```\n\n* **How do I find my cluster\u2019s configured chunk size?**\n * You can check it in the `\u201cconfig\u201d` database.\n\n```\nuse config \ndb.settings.find()\n```\n\n**Note**: If the command above returns Null, that means the cluster\u2019s default chunk size has not be overridden and the default chunk size of 128MB is currently in use.\n\n* **How do I find a specific collection\u2019s chunk size?**\n\n```\nuse \ndb.adminCommand(\n {\n balancerCollectionStatus: \".\"\n }\n)\n```\n\n* **How do I find a specific collection\u2019s number of chunks?**\n\n```\nuse \ndb.collection_name.getShardDistribution()\n```", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to optimize your MongoDB sharded cluster with defragmentation.", "contentType": "Article"}, "title": "Optimizing Sharded Collections in MongoDB with Defragmentation", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/kotlin/spring-boot3-kotlin-mongodb", "action": "created", "body": "# Getting Started with Backend Development in Kotlin Using Spring Boot 3 & MongoDB\n\n> This is an introduction article on how to build a RESTful application in Kotlin using Spring Boot 3 and MongoDB Atlas.\n\n## Introduction\n\nToday, we are going to build a basic RESTful application that does a little more than a CRUD operation, and for that, we will use:\n\n* `Spring Boot 3`, which is one of the popular frameworks based on Spring, allowing developers to build production grades quickly.\n* `MongoDB`, which is a document oriented database, allowing developers to focus on building apps rather than on database schema.\n\n## Prerequisites\n\nThis is a getting-started article, so nothing much is needed as a prerequisite. But familiarity with Kotlin as a programming language, plus a basic understanding of Rest API and HTTP methods, would be helpful.\n\nTo help with development activities, we will be using Jetbrains IntelliJ IDEA (Community Edition).\n\n## HelloWorld app!\n\nBuilding a HelloWorld app in any programming language/technology, I believe, is the quickest and easiest way to get familiar with it. This helps you cover the basic concepts, like how to build, run, debug, deploy, etc.\n\nSince we are using the community version of IDEA, we cannot create the `HelloWorld` project directly from IDE itself using the New Project. 
But we can use the Spring initializer app instead, which allows us to create a Spring\nproject out of the box.\n\nOnce you are on the website, you can update the default selected parameters for the project, like the name of the project, language, version of `Spring Boot`, etc., to something similar as shown below.\n\nAnd since we want to create REST API with MongoDB as a database, let's add the dependency using the Add Dependency button on the right.\n\nAfter all the updates, our project settings will look like this.\n\nNow we can download the project folder using the generate button and open it using the IDE. If we scan the project folder, we will only find one class \u2014 i.e., `HelloBackendWorldApplication.kt`, which has the `main` function, as well.\n\nThe next step is to print HelloWorld on the screen. Since we are building a restful\napplication, we will create a `GET` request API. So, let's add a function to act as a `GET` API call.\n\n```kotlin\n@GetMapping(\"/hello\")\nfun hello(@RequestParam(value = \"name\", defaultValue = \"World\") name: String?): String {\n return String.format(\"Hello %s!\", name)\n}\n```\n\nWe also need to add an annotation of `@RestController` to our `class` to make it a `Restful` client.\n\n```kotlin\n@SpringBootApplication\n@RestController\nclass HelloBackendWorldApplication {\n @GetMapping(\"/hello\")\n fun hello(): String {\n return \"Hello World!\"\n }\n}\n\nfun main(args: Array) {\n runApplication(*args)\n}\n```\n\nNow, let's run our project using the run icon from the toolbar.\n\nNow load https://localhost:8080/hello on the browser once the build is complete, and that will print Hello World on your screen.\n\nAnd on cross-validating this from Postman, we can clearly understand that our `Get` API is working perfectly. \n\nIt's time to understand the basics of `Spring Boot` that made it so easy to create our first API call.\n\n## What is Spring Boot ?\n\n> As per official docs, Spring Boot makes it easy to create stand-alone, production-grade, Spring-based applications that you can \"just run.\"\n\nThis implies that it's a tool built on top of the Spring framework, allowing us to build web applications quickly.\n\n`Spring Boot` uses annotations, which do the heavy lifting in the background. A few of them, we have used already, like:\n\n1. `@SpringBootApplication`: This annotation is marked at class level, and declares to the code reader (developer) and Spring that it's a Spring Boot project. It allows an enabling feature, which can also be done using `@EnableAutoConfiguration`,`@ComponentScan`, and `@Configuration`.\n\n2. `@RequestMapping` and `@RestController`: This annotation provides the routing information. Routing is nothing but a mapping of a `HTTP` request path (text after `host/`) to classes that have the implementation of these across various `HTTP` methods.\n\nThese annotations are sufficient for building a basic application. Using Spring Boot, we will create a RESTful web service with all business logic, but we don't have a data container that can store or provide data to run these operations.\n\n## Introduction to MongoDB\n\nFor our app, we will be using MongoDB as the database. MongoDB is an open-source, cross-platform, and distributed document database, which allows building apps with flexible schema. 
This is great as we can focus on building the app rather than defining the schema.\n\nWe can get started with MongoDB really quickly using MongoDB Atlas, which is a database as a service in the cloud and has a free forever tier.\n\nI recommend that you explore the MongoDB Jumpstart series to get familiar with MongoDB and its various services in under 10 minutes.\n\n## Connecting with the Spring Boot app and MongoDB\n\nWith the basics of MongoDB covered, now let's connect our Spring Boot project to it. Connecting with MongoDB is really simple, thanks to the Spring Data MongoDB plugin.\n\nTo connect with MongoDB Atlas, we just need a database URL that can be added\nas a `spring.data.mongodb.uri` property in the `application.properties` file. The connection string can be found as shown below.\n\nThe format for the connection string is:\n\n```shell\nspring.data.mongodb.uri=mongodb+srv://<username>:<password>@<cluster-name>.mongodb.net/<database-name>\n```\n\n## Creating a CRUD RESTful app\n\nWith all the basics covered, now let's build a more complex application than HelloWorld! In this app, we will be covering all CRUD operations and tweaking them along the way to make it a more realistic app. So, let's create a new project similar to the HelloWorld app we created earlier. And for this app, we will use one of the sample datasets provided by MongoDB \u2014 one of my favourite features that enables quick learning.\n\nYou can load a sample dataset on Atlas as shown below:\n\nWe will be using the `sample_restaurants` collection for our CRUD application. Before we start with the actual CRUD operations, let's create the restaurant model class equivalent to the documents in the collection.\n\n```kotlin\n\n@Document(\"restaurants\")\ndata class Restaurant(\n @Id\n val id: ObjectId = ObjectId(),\n val address: Address = Address(),\n val borough: String = \"\",\n val cuisine: String = \"\",\n val grades: List<Grade> = emptyList(),\n val name: String = \"\",\n @Field(\"restaurant_id\")\n val restaurantId: String = \"\"\n)\n\ndata class Address(\n val building: String = \"\",\n val street: String = \"\",\n val zipcode: String = \"\",\n @Field(\"coord\")\n val coordinate: List<Double> = emptyList()\n)\n\ndata class Grade(\n val date: Date = Date(),\n @Field(\"grade\")\n val rating: String = \"\",\n val score: Int = 0\n)\n```\n\nYou will notice there is nothing fancy about these classes except for the annotations. These annotations help us connect or correlate the classes with the database:\n\n* `@Document`: This declares that this data class represents a document in Atlas.\n* `@Field`: This is used to define an alias name for a property in the document, like `coord` for coordinate in the `Address` model.\n\nNow let's create a repository class where we can define all methods through which we can access data. `Spring Boot` has the interface `MongoRepository`, which helps us with this.\n\n```kotlin\ninterface Repo : MongoRepository<Restaurant, ObjectId> {\n\n fun findByRestaurantId(id: String): Restaurant?\n}\n```\n\nAfter that, we create a controller through which we can call these queries. Since this is a bigger project, unlike the HelloWorld app, we will create a separate controller where the `MongoRepository` instance is passed using `@Autowired`, which provides annotations-driven dependency injection.
\n\n```kotlin\n@RestController\n@RequestMapping(\"/restaurants\")\nclass Controller(@Autowired val repo: Repo) {\n\n}\n``` \n\n### Read operation\n\nNow our project is ready to do some action, so let's count the number of restaurants in the collection using `@GetMapping`.\n\n```kotlin\n@RestController\n@RequestMapping(\"/restaurants\")\nclass Controller(@Autowired val repo: Repo) {\n\n @GetMapping\n fun getCount(): Int {\n return repo.findAll().count()\n }\n}\n```\n\nTaking it a step further, let's read a restaurant based on its `restaurantId`. We will have to add a method to our repo, as `restaurantId` is not marked `@Id` in the restaurant class.\n\n```kotlin\ninterface Repo : MongoRepository<Restaurant, ObjectId> {\n fun findByRestaurantId(restaurantId: String): Restaurant?\n}\n``` \n\n```kotlin\n@GetMapping(\"/{id}\")\nfun getRestaurantById(@PathVariable(\"id\") id: String): Restaurant? {\n return repo.findByRestaurantId(id)\n}\n```\n\nAnd again, we will be using Postman to validate the output against a random `restaurantId` from the sample dataset.\n\nLet's also validate this against a non-existing `restaurantId`. \n\nAs expected, we haven't gotten any results, but the API response code is still 200, which is incorrect! So, let's fix this.\n\nIn order to have the correct response code, we will have to check the result before sending it back with the correct response code.\n\n```kotlin\n @GetMapping(\"/{id}\")\nfun getRestaurantById(@PathVariable(\"id\") id: String): ResponseEntity<Restaurant> {\n val restaurant = repo.findByRestaurantId(id)\n return if (restaurant != null) ResponseEntity.ok(restaurant) else ResponseEntity\n .notFound().build()\n}\n```\n\n### Write operation\n\nTo add a new object to the collection, we can add a `write` function in the `repo` we created earlier, or we can use the inbuilt method `insert` provided by `MongoRepository`. Since we will be adding a new object to the collection, we'll be using `@PostMapping` for this.\n\n```kotlin\n @PostMapping\nfun postRestaurant(): Restaurant {\n val restaurant = Restaurant().copy(name = \"sample\", restaurantId = \"33332\")\n return repo.insert(restaurant)\n}\n```\n\n### Update operation\n\nSpring doesn't have a specific in-built update operation similar to the other CRUD operations, so we will use the read and write operations in combination to perform the update.\n\n```kotlin\n @PatchMapping(\"/{id}\")\nfun updateRestaurant(@PathVariable(\"id\") id: String): Restaurant? {\n return repo.findByRestaurantId(restaurantId = id)?.let {\n repo.save(it.copy(name = \"Update\"))\n }\n}\n```\n\nThis is not an ideal way of updating items in the collection as it requires two operations and can be improved further if we use the MongoDB native driver, which allows us to perform complicated operations with the minimum number of steps.\n\n### Delete operation\n\nDeleting a restaurant is also similar. We can use the `MongoRepository` `delete` function to remove the item from the collection; it takes the item to delete as input.\n\n```kotlin\n @DeleteMapping(\"/{id}\")\nfun deleteRestaurant(@PathVariable(\"id\") id: String) {\n repo.findByRestaurantId(id)?.let {\n repo.delete(it)\n }\n}\n```\n\n## Summary\n\nThank you for reading, and hopefully you find this article informative!
The complete source code of the app can be found on GitHub.\n\nIf you have any queries or comments, you can share them on the MongoDB forum or tweet me @codeWithMohit.", "format": "md", "metadata": {"tags": ["Kotlin", "MongoDB", "Spring"], "pageDescription": "This is an introductory article on how to build a RESTful application in Kotlin using Spring Boot 3 and MongoDB Atlas.", "contentType": "Tutorial"}, "title": "Getting Started with Backend Development in Kotlin Using Spring Boot 3 & MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-migrate-from-core-data-swiftui", "action": "created", "body": "# Migrating a SwiftUI iOS App from Core Data to Realm\n\nPorting an app that's using Core Data to Realm is very simple. If you\nhave an app that already uses Core Data, and have been considering the\nmove to Realm, this step-by-step guide is for you! The way that your\ncode interacts with Core Data and Realm is very different depending on\nwhether your app is based on SwiftUI or UIKit\u2014this guide assumes SwiftUI\n(a UIKit version will come soon.)\n\nYou're far from the first developer to port your app from Core Data to\nRealm, and we've been told many times that it can be done in a matter of\nhours. Both databases handle your data as objects, so migration is\nusually very straightforward: Simply take your existing Core Data code\nand refactor it to use the Realm\nSDK.\n\nAfter migrating, you should be thrilled with the ease of use, speed, and\nstability that Realm can bring to your apps. Add in MongoDB Realm\nSync and you can share the same\ndata between iOS, Android, desktop, and web apps.\n\n>\n>\n>This article was updated in July 2021 to replace `objc` and `dynamic`\n>with the `@Persisted` annotation that was introduced in Realm-Cocoa\n>10.10.0.\n>\n>\n\n## Prerequisites\n\nThis guide assumes that your app is written in Swift and built on\nSwiftUI rather than UIKit.\n\n## Steps to Migrate Your Code\n\n### 1. Add the Realm Swift SDK to Your Project\n\nTo use Realm, you need to include Realm's Swift SDK\n(Realm-Cocoa) in your Xcode\nproject. The simplest method is to use the Swift Package Manager.\n\nIn Xcode, select \"File/Swift Packages/Add Package Dependency...\". The\npackage URL is :\n\nYou can keep the default options and then select both the \"Realm\" and\n\"RealmSwift\" packages.\n\n### 2a. The Brutalist Approach\u2014Remove the Core Data Framework\n\nFirst things first. If your app is currently using Core Data, you'll\nneed to work out which parts of your codebase include Core Data code.\nThese will need to be refactored. Fortunately, there's a handy way to do\nthis. While you could manually perform searches on the codebase looking\nfor the relevant code, a much easier solution is to simply delete the\nCore Data import statements at the top of your source files:\n\n``` swift\nimport CoreData\n```\n\nOnce this is done, every line of code implementing Core Data will throw\na compiler error, and then it's simply a matter of addressing each\ncompiler error, one at a time.\n\n### 2b. The Incremental Approach\u2014Leave the Core Data Framework Until Port Complete\n\nNot everyone (including me) likes the idea of not being able to build a\nproject until every part of the port has been completed. 
If that's you,\nI'd suggest this approach:\n\n- Leave the old code there for now.\n- Add a new model, adding `Realm` to the end of each class.\n- Work through your views to move them over to your new model.\n- Check and fix build breaks.\n- Remove the `Realm` from your model names using the Xcode refactoring\n feature.\n- Check and fix build breaks.\n- Find any files that still `import CoreData` and either remove that\n line or the entire file if it's now obsolete.\n- Check and fix build breaks.\n- Migrate existing user data from Core Data to Realm if needed.\n- Remove the original model code.\n\n### 3. Remove Core Data Setup Code\n\nIn Core Data, changes to model objects are made against a managed object\ncontext object. Managed object context objects are created against a\npersistent store coordinator object, which themselves are created\nagainst a managed object model object.\n\nSuffice to say, before you can even begin to think about writing or\nreading data with Core Data, you usually need to have code somewhere in\nyour app to set up these dependency objects and to expose Core Data's\nfunctionality to your app's own logic. There will be a sizable chunk of\n\"setup\" Core Data code lurking somewhere.\n\nWhen you're switching to Realm, all of that code can go.\n\nIn Realm, all of the setting up is done on your behalf when you access a\nRealm object for the first time, and while there are options to\nconfigure it\u2014such as where to place your Realm data file on disk\u2014it's\nall completely optional.\n\n### 4. Migrate Your Model Files\n\nYour Realm schema will be defined in code by defining your Realm Object\nclasses. There is no need for `.xcdatamodel` files when working with\nRealm and so you can remove those Core Data files from your project.\n\nIn Core Data, the bread-and-butter class that causes subclassed model\nobjects to be persisted is `NSManagedObject`. The classes for these\nkinds of objects are pretty much standard:\n\n``` swift\nimport CoreData\n\n@objc(ReminderList)\npublic class ReminderList: NSManagedObject {\n @NSManaged public var title: String\n @NSManaged public var reminders: Array\n}\n\n@objc(Reminder)\npublic class Reminder: NSManagedObject {\n @NSManaged var title: String\n @NSManaged var isCompleted: Bool\n @NSManaged var notes: String?\n @NSManaged var dueDate: Date?\n @NSManaged var priority: Int16\n @NSManaged var list: ReminderList\n}\n```\n\nConverting these managed object subclasses to Realm is really simple:\n\n``` swift\nimport RealmSwift\n\nclass ReminderList: Object, ObjectKeyIdentifiable {\n @Persisted var title: String\n @Persisted var reminders: List\n}\n\nclass Reminder: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var title: String\n @Persisted var isCompleted: Bool\n @Persisted var notes: String?\n @Persisted var dueDate: Date?\n @Persisted var priority: Int16\n}\n```\n\nNote that top-level objects inherit from `Object`, but objects that only\nexist within higher-level objects inherit from `EmbeddedObject`.\n\n### 5. Migrate Your Write Operations\n\nCreating a new object in Core Data and then later modifying it is\nrelatively trivial, only taking a few lines of code.\n\nAdding an object to Core Data must be done using a\n`NSManagedObjectContext`. 
This context is available inside a SwiftUI\nview through the environment:\n\n``` swift\n@Environment(\\.managedObjectContext) var viewContext: NSManagedObjectContext\n```\n\nThat context can then be used to save the object to Core Data:\n\n``` swift\nlet reminder = Reminder(context: viewContext)\nreminder.title = title\nreminder.notes = notes\nreminder.dueDate = date\nreminder.priority = priority\n\ndo {\n try viewContext.save()\n} catch {\n let nserror = error as NSError\n fatalError(\"Unresolved error \\(nserror), \\(nserror.userInfo)\")\n}\n```\n\nRealm requires that writes are made within a transaction, but the Realm\nSwift SDK hides most of that complexity when you develop with SwiftUI.\nThe current Realm is made available through the SwiftUI environment and\nthe view can access objects in it using the `@ObserveredResults`\nproperty wrapper:\n\n``` swift\n@ObservedResults(Reminder.self) var reminders\n```\n\nA new object can then be stored in the Realm:\n\n``` swift\nlet reminder = Reminder()\nreminder.title = title\nreminder.notes = notes\nreminder.dueDate = date\nreminder.priority = priority\n$reminders.append(reminder)\n```\n\nThe Realm Swift SDK also hides the transactional complexity behind\nmaking updates to objects already stored in Realm. The\n`@ObservedRealmObject` property wrapper is used in the same way as\n`@ObservedObject`\u2014but for Realm managed objects:\n\n``` swift\n@ObservedRealmObject var reminder: Reminder\nTextField(\"Notes\", text: $reminder.notes)\n```\n\nTo benefit from the transparent transaction functionality, make sure\nthat you use the `@ObservedRealmObject` property wrapper as you pass\nRealm objects down the view hierarchy.\n\nIf you find that you need to directly update an attribute within a Realm\nobject within a view, then you can use this syntax to avoid having to\nexplicitly work with Realm transactions (where `reminder` is an\n`@ObservedRealmObject`):\n\n``` swift\n$reminder.isCompleted.wrappedValue.toggle()\n```\n\n### 6. Migrate Your Queries\n\nIn its most basic implementation, Core Data uses the concept of fetch\nrequests in order to retrieve data from disk. A fetch can filter and\nsort the objects:\n\n``` swift\nvar reminders = FetchRequest(\n entity: Reminder.entity(),\n sortDescriptors: NSSortDescriptor(key: \"title\", ascending: true),\n predicate: NSPredicate(format: \"%K == %@\", \"list.title\", title)).wrappedValue\n```\n\nThe equivalent code for such a query using Realm is very similar, but it\nuses the `@ObservedResults` property wrapper rather than `FetchRequest`:\n\n``` swift\n@ObservedResults(\n Reminder.self,\n filter: NSPredicate(format: \"%K == %@\", \"list.title\", title),\n sortDescriptor: SortDescriptor(keyPath: \"title\", ascending: true)) var reminders\n```\n\n### 7. Migrate Your Users' Production Data\n\nOnce all of your code has been migrated to Realm, there's one more\noutstanding issue: How do you migrate any production data that users may\nalready have on their devices out of Core Data and into Realm?\n\nThis can be a very complex issue. Depending on your app's functionality,\nas well as your users' circumstances, how you go about handling this can\nend up being very different each time.\n\nWe've seen two major approaches:\n\n- Once you've migrated your code to Realm, you can re-link the Core\n Data framework back into your app, use raw NSManagedObject objects\n to fetch your users' data from Core Data, and then manually pass it\n over to Realm. 
You can leave this migration code in your app\n permanently, or simply remove it after a sufficient period of time\n has passed.\n- If the user's data is replaceable\u2014for example, if it is simply\n cached information that could be regenerated by other user data on\n disk\u2014then it may be easier to simply blow all of the Core Data save\n files away, and start from scratch when the user next opens the app.\n This needs to be done with very careful consideration, or else it\n could end up being a bad user experience for a lot of people.\n\n## SwiftUI Previews\n\nAs with Core Data, your SwiftUI previews can add some data to Realm so\nthat it's rendered in the preview. However, with Realm it's a lot easier\nas you don't need to mess with contexts and view contexts:\n\n``` swift\nfunc bootstrapReminder() {\n do {\n let realm = try Realm()\n try realm.write {\n realm.deleteAll()\n let reminder = Reminder()\n reminder.title = \"Do something\"\n reminder.notes = \"Anything will do\"\n reminder.dueDate = Date()\n reminder.priority = 1\n realm.add(list)\n }\n } catch {\n print(\"Failed to bootstrap the default realm\")\n }\n}\n\nstruct ReminderListView_Previews: PreviewProvider {\n static var previews: some View {\n bootstrapReminder()\n return ReminderListView()\n }\n}\n```\n\n## Syncing Realm Data\n\nNow that your application data is stored in Realm, you have the option\nto sync that data to other devices (including Android) using MongoDB\nRealm Sync. That same data is\nthen stored in Atlas where it can be queried by web applications via\nGraphQL or Realm's web\nSDK.\n\nThis enhanced functionality is beyond the scope of this guide, but you\ncan see how it can be added by reading the\nBuilding a Mobile Chat App Using Realm \u2013 Integrating Realm into Your App series.\n\n## Conclusion\n\nThanks to their similarities in exposing data through model objects,\nconverting an app from using Core Data to Realm is very quick and\nsimple.\n\nIn this guide, we've focussed on the code that needs to be changed to\nwork with Realm, but you'll be pleasantly surprised at just how much\nCore Data boilerplate code you're able to simply delete!\n\nIf you've been having trouble getting Core Data working in your app, or\nyou're looking for a way to sync data between platforms, we strongly\nrecommend giving Realm a try, to see if it works for you. And if it\ndoes, please be sure to let us know!\n\nIf you've any questions or comments, then please let us know on our\ncommunity\nforum.\n\n>\n>\n>If you have questions, please head to our developer community\n>website where the MongoDB engineers and\n>the MongoDB community will help you build your next big idea with\n>MongoDB.\n>\n>\n\n", "format": "md", "metadata": {"tags": ["Realm", "Swift", "iOS"], "pageDescription": "A guide to porting a SwiftUI iOS app from Core Data to MongoDB.", "contentType": "Tutorial"}, "title": "Migrating a SwiftUI iOS App from Core Data to Realm", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/advanced-data-api-with-atlas-cli", "action": "created", "body": "# Mastering the Advanced Features of the Data API with Atlas CLI\n\nThe MongoDB Atlas Data API allows you to easily access and manipulate your data stored in Atlas using standard HTTPS requests. To utilize the Data API, all you need is an HTTPS client (like curl or Postman) and a valid API key. 
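For example, a basic `findOne` call is just an HTTPS POST with your API key in a header (the App ID, key, cluster name, and namespace below are placeholders; substitute your own values):\n\n```bash\ncurl --request POST 'https://data.mongodb-api.com/app/<your-app-id>/endpoint/data/v1/action/findOne' --header 'Content-Type: application/json' --header 'api-key: <your-data-api-key>' --data-raw '{\n \"dataSource\": \"<your-cluster-name>\",\n \"database\": \"sample_training\",\n \"collection\": \"routes\",\n \"filter\": {}\n}'\n```\n\n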
In addition to the standard functionality, the Data API now also offers advanced security and permission options, such as:\n\n- Support for various authentication methods, including JWT and email/password.\n\n- Role-based access control, which allows you to configure rules for user roles to restrict read and write access through the API.\n\n- IP Access List, which allows you to specify which IP addresses are permitted to make requests to the API.\n\nThe Atlas Data API also offers a great deal of flexibility and customization options. One of these features is the ability to create custom endpoints, which enable you to define additional routes for the API, giving you more control over the request method, URL, and logic. In this article, we will delve deeper into these capabilities and explore how they can be utilized. All this will be done using the new Atlas CLI, a command-line utility that makes it easier to automate Atlas cluster management.\n\nIf you want to learn more about how to get started with the Atlas Data API, I recommend reading Accessing Atlas Data in Postman with the Data API.\n\n## Installing and configuring an Atlas project and cluster\n\nFor this tutorial, we will need the following tools:\n\n- atlas cli\n\n- realm cli\n\n- curl\n\n- awk (or gawk for Windows)\n\n- jq\n\nI have already set up an organization called `MongoDB Blog` in the Atlas cloud, and I am currently using the Atlas command line interface (CLI) to display the name of the organization.\n\n```bash\natlas login\n```\n\n```bash\natlas organizations list\nID \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 NAME\n62d2d54c6b03350a26a8963b \u00a0 MongoDB Blog\n```\n\nI set a variable `ORG_ID ` with the Atlas organization id.\n\n```bash\nORG_ID=$(atlas organizations list|grep Blog|awk '{print $1}')\n```\n\nI also created a project within the `MongoDB Blog` organization. To create a project, you can use `atlas project create`, and provide it with the name of the project and the organization in which it should live. The project will be named `data-api-blog `.\n\n```bash\nPROJECT_NAME=data-api\nPROJECT_ID=$(atlas project create \"${PROJECT_NAME}\" --orgId \"${ORG_ID}\" | awk '{print $2}' | tr -d \"'\")\n```\n\nI will also deploy a MongoDB cluster within the project `data-api-blog ` on Google Cloud (free M0 trier). The cluster will be named `data-api`.\n\n```bash\nCLUSTER_NAME=data-api-blog\natlas cluster create \"${CLUSTER_NAME}\" --projectId \"${PROJECT_ID}\" --provider GCP --region CENTRAL_US --tier M0\n```\n\nAfter a few minutes, the cluster is ready. You can view existing clusters with the `atlas clusters list` command.\n\n```bash\natlas clusters list --projectId \"${PROJECT_ID}\"\n```\n\n```bash\\\nID \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 NAME \u00a0 \u00a0 \u00a0 MDB VER \u00a0 STATE\\\n63b877a293bb5618ab7c373b \u00a0 data-api \u00a0 5.0.14\u00a0 \u00a0 IDLE\n\n```\nThe next step is to load a sample data set. Wait a few minutes while the dataset is being loaded. I need this dataset to work on the query examples.\n```bash\natlas clusters loadSampleData \"${CLUSTER_NAME}\" --projectId \"${PROJECT_ID}\"\n```\n\nGood practice is also to add the IP address to the Atlas project access list\n\n```bash\natlas accessLists create\u00a0 --currentIp --projectId \"${PROJECT_ID}\"\n```\n\n## Atlas App Services (version 3.0)\n\nThe App Services API allows for programmatic execution of administrative tasks outside of the App Services UI. 
This includes actions such as modifying authentication providers, creating rules, and defining functions. In this scenario, I will be using the App Services API to programmatically create and set up the Atlas Data API.\n\nUsing the `atlas organizations apiKeys` with the Atlas CLI, you can create and manage your organization keys. To begin with, I will\u00a0 create an API key that will belong to the organization `MongoDB Blog`.\n\n```bash\nAPI_KEY_OUTPUT=$(atlas organizations apiKeys create --desc \"Data API\" --role ORG_OWNER --orgId \"${ORG_ID}\")\n```\n\nEach request made to the App Services Admin API must include a valid and current authorization token from the MongoDB Cloud API, presented as a bearer token in the Authorization header. In order to get one, I need\u00a0 the `PublicKey`and `PrivateKey` returned by the previous command.\n\n```bash\nPUBLIC_KEY=$(echo $API_KEY_OUTPUT | awk -F'Public API Key ' '{print $2}' | awk '{print $1}' | tr -d '\\n')\nPRIVATE_KEY=$(echo $API_KEY_OUTPUT | awk -F'Private API Key ' '{print $2}' | tr -d '\\n')\n```\n\n> NOTE \\\n> If you are using a Windows machine, you might have to manually create those two environment variables. Get the API key output by running the following command.\n> `echo $API_KEY_OUTPUT`\n> Then create the API key variables with the values from the output.\n> `PUBLIC_KEY=`\n> `PRIVATE_KEY=`\n\nUsing those keys, I can obtain an access token.\n\n```bash\ncurl --request POST\u00a0 --header 'Content-Type: application/json'--header 'Accept: application/json'--data \"{\\\"username\\\": \\\"$PUBLIC_KEY\\\", \\\"apiKey\\\": \\\"$PRIVATE_KEY\\\"}\" https://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/login | jq -r '.access_token'>\u00a0 token.txt\n```\n\nThen, using the access token, I create a new application type `data-api` in the Atlas Application Service. My application will be named `data-api-blog`.\n\n```bash\nACCESS_TOKEN=$(cat token.txt)\u00a0\nDATA_API_NAME=data-api-blog\nBASE_URL=\"https://realm.mongodb.com/api/admin/v3.0\"\ncurl --request POST\n\u00a0\u00a0--header \"Authorization: Bearer $ACCESS_TOKEN\"\n\"${BASE_URL}\"/groups/\"${PROJECT_ID}\"/apps?product=data-api\n\u00a0\u00a0--data '{\n\u00a0\u00a0\u00a0\u00a0\"name\": \"'\"$DATA_API_NAME\"'\",\n\u00a0\u00a0\u00a0\u00a0\"deployment_model\": \"GLOBAL\",\n\u00a0\u00a0\u00a0\u00a0\"environment\": \"development\",\n\u00a0\u00a0\u00a0\u00a0\"data_source\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"name\": \"'\"$DATA_API_NAME\"'\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"type\": \"mongodb-atlas\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"config\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"clusterName\": \"'\"$CLUSTER_NAME\"'\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0}'\n```\n\nThe application is visible now through Atlas UI, in the App Services tab.\n\nI can also display our new application using the `realm cli`tool. The `realm-cli` command line utility is used to manage the App Services applications. In order to start using the `realm cli` tool, I have to log into the Atlas Application Services.\n\n```bash\nrealm-cli login --api-key \"$PUBLIC_KEY\" --private-api-key \"$PRIVATE_KEY\"\n```\n\nNow, I can list my application with `realm-cli apps list`, assign the id to the variable, and use it later. In this example, the Data API application has a unique id: `data-api-blog-rzuzf`. 
(The id of your app will be different.)\n\n```bash\nAPP_ID=$(realm-cli apps list | awk '{print $1}'|grep data)\n```\n\n## Configure and enable the Atlas Data API\n\nBy default, the Atlas Data API is disabled, I will now have to enable the Data API. It can be done through the Atlas UI, however, I want to show you how to do it using the command line.\n\n### Export an existing app\n\nLet's enhance the application by incorporating some unique settings and ensuring that it can be accessed from within the Atlas cluster. I will pull my data-api application on my local device.\n\n```bash\nrealm-cli pull --remote=\"${APP_ID}\"\n```\n\nEach component of an Atlas App Services app is fully defined and configured using organized JSON configuration files and JavaScript source code files. To get more information about app configuration, head to the docs. Below, I display the comprehensive directories tree.\n\n|\n\n```bash\ndata-api-blog\n\u251c\u2500\u2500 auth\n\u2502 \u00a0 \u251c\u2500\u2500 custom_user_data.json\n\u2502 \u00a0 \u2514\u2500\u2500 providers.json\n\u251c\u2500\u2500 data_sources\n\u2502 \u00a0 \u2514\u2500\u2500 data-api-blog\n\u2502 \u00a0 \u00a0 \u00a0 \u2514\u2500\u2500 config.json\n\u251c\u2500\u2500 environments\n\u2502 \u00a0 \u251c\u2500\u2500 development.json\n\u2502 \u00a0 \u251c\u2500\u2500 no-environment.json\n\u2502 \u00a0 \u251c\u2500\u2500 production.json\n\u2502 \u00a0 \u251c\u2500\u2500 qa.json\n\u2502 \u00a0 \u2514\u2500\u2500 testing.json\n\u251c\u2500\u2500 functions\n\u2502 \u00a0 \u2514\u2500\u2500 config.json\n\u251c\u2500\u2500 graphql\n\u2502 \u00a0 \u251c\u2500\u2500 config.json\n\u2502 \u00a0 \u2514\u2500\u2500 custom_resolvers\n\u251c\u2500\u2500 http_endpoints\n\u2502 \u00a0 \u2514\u2500\u2500 config.json\n\u251c\u2500\u2500 log_forwarders\n\u251c\u2500\u2500 realm_config.json\n\u251c\u2500\u2500 sync\n\u2502 \u00a0 \u2514\u2500\u2500 config.json\n\u2514\u2500\u2500 values\n```\n\nI will modify the `data_api_config.json`file located in the `http_endpoints` directory. This file is responsible for enabling the Atlas Data API.\n\nI paste the document below into the `data_api_config.json`file. Note that to activate the Atlas Data API, I will set the `disabled`option to `false`. I also set `create_user_on_auth` to `true` If your linked function is using application authentication and custom JWT authentication, the endpoint will create a new user with the passed-in JWT if that user has not been created yet.\n\n_data-api-blog/http_endpoints/data_api_config.json_\n```bash\n{\n\u00a0\u00a0\u00a0\u00a0\"versions\": \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"v1\"\n\u00a0\u00a0\u00a0\u00a0],\n\u00a0\u00a0\u00a0\u00a0\"disabled\": false,\n\u00a0\u00a0\u00a0\u00a0\"validation_method\": \"NO_VALIDATION\",\n\u00a0\u00a0\u00a0\u00a0\"secret_name\": \"\",\n\u00a0\u00a0\u00a0\u00a0\"create_user_on_auth\": true,\n\u00a0\u00a0\u00a0\u00a0\"return_type\": \"JSON\"\n}\n```\n\n### Authentication providers\n\nThe Data API now supports new layers of configurable data permissioning and security, including new authentication methods, such as [JWT authentication or email/password, and role-based access control, which allows for the configuration of rules for user roles that control read and write access through the API. Let's start by activating authentication using JWT tokens.\n\n#### JWT tokens\n\nJWT (JSON Web Token) is a compact, URL-safe means of representing claims to be transferred between two parties. 
It is often used for authentication and authorization purposes.\n\n- They are self-contained, meaning they contain all the necessary information about the user, reducing the need for additional requests to the server.\n\n- They can be easily passed in HTTP headers, which makes them suitable for API authentication and authorization.\n\n- They are signed, which ensures that the contents have not been tampered with.\n\n- They are lightweight and can be easily encoded/decoded, making them efficient to transmit over the network.\n\nA JWT key is a secret value used to sign and verify the authenticity of a JWT token. The key is typically a long string of characters or a file that is securely stored on the server. I will pick a random key and create a secret in my project using Realm CLI.\n\n```bash\nKEY=thisisalongsecretkeywith32pluscharacters\nSECRET_NAME=data-secret\nrealm-cli secrets create -a \"${APP_ID}\" -n \"${SECRET_NAME}\" -v \"${KEY}\"\n```\n\nI list my secret.\n\n```bash\nrealm-cli secrets list -a \"${APP_ID}\"\n```\n\n```bash\nFound 1 secrets\n\u00a0\u00a0ID Name\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\n\u00a0\u00a0------------------------\u00a0 -----------\n\u00a0\u00a063d58aa2b10e93a1e3a45db1\u00a0 data-secret\n```\n\nNext, I enable the use of two Data API authentication providers: traditional API key and JWT tokens. JWT token auth needs a secret created in the step above. I declare the name of the newly created secret in the configuration file `providers.json` located in the `auth` directory.\n\nI paste this content into `providers.json` file. Note that I set the `disabled` option in both providers `api-key` and `custom-token` to `false`.\n\n_auth/providers.json_\n```bash\n{\n\u00a0\u00a0\u00a0\u00a0\"api-key\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"name\": \"api-key\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"type\": \"api-key\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"disabled\": false\n\u00a0\u00a0\u00a0\u00a0},\n\u00a0\u00a0\u00a0\u00a0\"custom-token\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"name\": \"custom-token\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"type\": \"custom-token\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"config\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"audience\": ],\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"requireAnyAudience\": false,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"signingAlgorithm\": \"HS256\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"secret_config\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"signingKeys\": [\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"data-secret\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0]\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"disabled\": false\n\u00a0\u00a0\u00a0\u00a0}\n}\n```\n\n### Role-based access to the Data API\n\nFor each cluster, we can set high-level access permissions (Read-Only, Read & Write, No Access) and also set [custom role-based access-control (App Service Rules) to further control access to data (cluster, collection, document, or field level).\n\nBy default, all collections have no access, but I will create a custom role and allow read-only access to one of them. 
In this example, I will allow read-only access to the `routes` collection in the `sample_training` database.\n\nIn the `data_sources` directory, I create directories with the name of the database and collection, along with a `rules.json` file, which will contain the rule definition.\n\n_data_sources/data-api-blog/sample_training/routes/rules.json_\n```bash\n{\n\u00a0\u00a0\u00a0\u00a0\"collection\": \"routes\",\n\u00a0\u00a0\u00a0\u00a0\"database\": \"sample_training\",\n\u00a0\u00a0\u00a0\u00a0\"roles\": \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"name\": \"readAll\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"apply_when\": {},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"read\": true,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"write\": false,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"insert\": false,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"delete\": false,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"search\": true\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0]\n}\n```\n\nIt's time to deploy our settings and test them in the Atlas Data API. To deploy changes, I must push Data API configuration files to the Atlas server.\n\n```bash\ncd data-api-blog/\nrealm-cli push --remote \"${APP_ID}\"\n```\n![The URL endpoint is ready to use, and the custom rule is configured\nUpon logging into the Data API UI, we see that the interface is activated, the URL endpoint is ready to use, and the custom rule is configured.\n\nGoing back to the `App Services` tab, we can see that two authentication providers are now enabled.\n\n### Access the Atlas Data API with JWT token\n\nI will send a query to the MongoDB database now using the Data API interface. As the authentication method, I will choose the JWT token. I need to first generate an access token. I will do this using the website . The audience (`aud`) for this token will need to be the name of the Data API application. I can remind myself of the unique name of my Data API by printing the `APP_ID`environment variable. I will need this name when creating the token.\n\n```bash\necho ${APP_ID}\n```\n\nIn the `PAYLOAD`field, I will place the following data. Note that I placed the name of my Data API in the `aud` field. It is the audience of the token. By default, App Services expects this value to be the unique app name of your app.\n\n```bash\n{\n\u00a0\u00a0\"sub\": \"1\",\n\u00a0\u00a0\"name\": \"The Atlas Data API access token\",\n\u00a0\u00a0\"iat\": 1516239022,\n\u00a0\u00a0\"aud\":\"\",\n\u00a0\u00a0\"exp\": 1900000000\n}\n```\n\nThe signature portion of the JWT token is the secret key generated in one of the previous steps. In this example, the key is `thisisalongsecretkeywith32pluscharacters` . I will place this key in the `VERIFY SIGNATURE` field.\n\nIt will look like the screenshot below. The token has been generated and is visible in the top left corner.\n\nI copy the token and place it in the `JWT`environment variable, and also create another variable called `ENDPOINT`\u00a0 with the Data API query endpoint. Now, finally, we can start making requests to the Atlas Data API. 
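As an aside, if you would rather script this step than use a website, the same token can be generated programmatically. Below is a minimal Node.js sketch; it assumes the widely used `jsonwebtoken` npm package is installed and that `APP_ID` is exported in your shell, and it signs the same payload with the same secret created earlier.\n\n```javascript\nconst jwt = require('jsonwebtoken');\n\n// Same claims as in the payload above; the audience must be the unique app name.\nconst token = jwt.sign(\n  {\n    sub: '1',\n    name: 'The Atlas Data API access token',\n    iat: 1516239022,\n    aud: process.env.APP_ID,\n    exp: 1900000000,\n  },\n  'thisisalongsecretkeywith32pluscharacters', // the secret stored as data-secret\n  { algorithm: 'HS256' }\n);\n\nconsole.log(token);\n```\n\n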
Since the access role was created for only one collection, my request will be related to it.\n\n```bash\nJWT=\nDB=sample_training\nCOLL=routes\nENDPOINT=https://data.mongodb-api.com/app/\"${APP_ID}\"/endpoint/data/v1\ncurl --location --request POST $ENDPOINT'/action/findOne'\n--header 'Access-Control-Request-Headers: *'\n--header 'jwtTokenString: '$JWT\n--header 'Content-Type: application/json'\n--data-raw '{\n\"dataSource\": \"'\"$DATA_API_NAME\"'\",\n\"database\": \"'\"$DB\"'\",\n\"collection\": \"'\"$COLL\"'\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\"filter\": {}\n\u00a0}'\n```\n\n```bash\n{\"document\":{\"_id\":\"56e9b39b732b6122f877fa31\",\"airline\":{\"id\":410,\"name\":\"Aerocondor\",\"alias\":\"2B\",\"iata\":\"ARD\"},\"src_airport\":\"CEK\",\"dst_airport\":\"KZN\",\"codeshare\":\"\",\"stops\":0,\"airplane\":\"CR2\"}}\n```\n\n>WARNING \\\n>If you are getting an error message along the lines of the following:\n> `{\"error\":\"invalid session: error finding user for endpoint\",\"error_code\":\"InvalidSession\",\"link\":\"...\"}`\n>Make sure that your JSON Web Token is valid. Verify that the audience (aud) matches your application id, that the expiry timestamp (exp) is in the future, and that the secret key used in the signature is the correct one.\n\nYou can see that this retrieved a single document from the routes collection.\n\n### Configure IP access list\n\nLimiting access to your API endpoint to only authorized servers is a simple yet effective way to secure your API. You can modify the list of allowed IP addresses by going to `App Settings` in the left navigation menu and selecting the `IP Access list` tab in the settings area. By default, all IP addresses have access to your API endpoint (represented by 0.0.0.0). To enhance the security of your API, remove this entry and add entries for specific authorized servers. There's also a handy button to quickly add your current IP address for ease when developing using your API endpoint. You can also add your custom IP address with the help of `realm cli`. I'll show you how!\n\nI am displaying the current list of authorized IP addresses by running the `realm cli` command.\n\n```bash\nrealm-cli accessList list\n```\n\n```bash\nFound 1 allowed IP address(es) and/or CIDR block(s)\n\u00a0\u00a0IP Address Comment\n\u00a0\u00a0----------\u00a0 -------\n0.0.0.0/0 \u00a0\u00a0\u00a0\n```\n\nI want to restrict access to the Atlas Data API to only my IP address. Therefore, I am displaying my actual address and assigning the address into a variable `MY_IP`.\n\n```bash\n\u00a0MY_IP=$(curl ifconfig.me)\n```\n\nNext, I add this address to the IP access list, which belongs to my application, and delete `0.0.0.0/0` entry.\n\n```bash\nrealm-cli accessList create -a \"${APP_ID}\" --ip \"${MY_IP}\"\n--comment \"My current IP address\"\nrealm-cli accessList delete -a \"${APP_ID}\" --ip \"0.0.0.0/0\"\n```\n\nThe updated IP access list is visible in the Data API, App Services UI.\n\n### Custom HTTPS endpoints\n\nThe Data API offers fundamental endpoint options for creating, reading, updating, and deleting, as well as for aggregating information.\n\nCustom HTTPS endpoints can be created to establish specific API routes or webhooks that connect with outside services. These endpoints utilize a serverless function, written by you, to manage requests received at a specific URL and HTTP method. Communication with these endpoints is done via secure HTTPS requests, eliminating the need for installing databases or specific libraries. 
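The same applies to the built-in Data API routes used earlier: for example, the `findOne` request made above with curl can be reproduced from Node.js 18+ using the built-in `fetch`. The sketch below is illustrative only and assumes `APP_ID` and `JWT` are exported in the environment, exactly as in the curl example.\n\n```javascript\n// Call the Data API findOne action; data-api-blog is the linked data source name.\nconst url = `https://data.mongodb-api.com/app/${process.env.APP_ID}/endpoint/data/v1/action/findOne`;\n\nfetch(url, {\n  method: 'POST',\n  headers: { 'Content-Type': 'application/json', jwtTokenString: process.env.JWT },\n  body: JSON.stringify({\n    dataSource: 'data-api-blog',\n    database: 'sample_training',\n    collection: 'routes',\n    filter: {},\n  }),\n})\n  .then((res) => res.json())\n  .then((result) => console.log(result));\n```\n\n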
Requests can be made from any HTTP client.\n\nI can configure the Data API custom HTTP endpoint for my app from the App Services UI or by deploying configuration files with Realm CLI. I will demonstrate a second method. My custom HTTP endpoint will aggregate, count, and sort all source airports from the collection `routes` from `sample_training` database and return the top three results. I need to change the `config.json` file from the `http_endpoint` directory, but before I do that, I need to pull the latest version of my app.\n\n```bash\nrealm-cli pull --remote=\"${APP_ID}\"\n```\n\nI name my custom HTTP endpoint `sumTopAirports` . Therefore, I have to assign this name to the `route` key and `function_name` key's in the `config.json` file.\n\n_data-api-blog/http_endpoints/config.json_\n```bash\n\n\u00a0\u00a0\u00a0\u00a0{\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"route\": \"/sumTopAirports\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"http_method\": \"GET\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"function_name\": \"sumTopAirports\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"validation_method\": \"NO_VALIDATION\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"respond_result\": true,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"fetch_custom_user_data\": false,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"create_user_on_auth\": true,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"disabled\": false,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"return_type\": \"EJSON\"\n\u00a0\u00a0\u00a0\u00a0}\n]\n```\n\nI need to also write a custom function. [Atlas Functions run standard ES6+ JavaScript functions that you export from individual files. I create a `.js` file with the same name as the function in the functions directory or one of its subdirectories.\n\nI then place this code in a newly created file. This code exports a function that aggregates data from the Atlas cluster `data-api-blog`, `sample_training` database, and collection `routes`. 
It groups, sorts, and limits the data to show the top three results, which are returned as an array.\n\n_data-api-blog/functions/sumTopAirports.js_\n```bash\nexports = function({ query, headers, body }, response) {\n\u00a0\u00a0\u00a0\u00a0const result = context.services\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.get(\"data-api-blog\")\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.db(\"sample_training\")\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.collection(\"routes\")\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.aggregate(\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{ $group: { _id: \"$src_airport\", count: { $sum: 1 } } },\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{ $sort: { count: -1 } },\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{ $limit: 3 }\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0])\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.toArray();\n\u00a0\u00a0\u00a0\u00a0return result;\n};\n```\n\nNext, I push my changes to the Atlas.\n\n```bash\nrealm-cli push --remote \"${APP_ID}\"\n```\n\nMy custom HTTPS endpoint is now visible in the Atlas UI.\n![custom HTTPS endpoint is now visible in the Atlas UI\n\nI can now query the `sumTopAirports` custom HTTPS endpoint.\n\n```bash\nURL=https://data.mongodb-api.com/app/\"${APP_ID}\"/endpoint/sumTopAirports\ncurl --location --request GET $URL\n--header 'Access-Control-Request-Headers: *'\n--header 'jwtTokenString: '$JWT\n--header 'Content-Type: application/json'\u00a0\n```\n\n```bash\n{\"_id\":\"ATL\",\"count\":{\"$numberLong\":\"909\"}},{\"_id\":\"ORD\",\"count\":{\"$numberLong\":\"558\"}},{\"_id\":\"PEK\",\"count\":{\"$numberLong\":\"535\"}}]\n```\n\nSecurity is important when working with data because it ensures that confidential and sensitive information is kept safe and secure. Data breaches can have devastating consequences, from financial loss to reputational damage. Using the Atlas command line interface, you can easily extend the Atlas Data API with additional security features like JWT tokens, IP Access List, and custom role-based access-control. Additionally, you can use custom HTTPS functions to provide a secure, user-friendly, and powerful way for managing and accessing data. The Atlas platform provides a flexible and robust solution for data-driven applications, allowing users to easily access and manage data in a secure manner.\n\n## Summary\n\nMongoDB Atlas Data API allows users to access their MongoDB Atlas data from any platform and programmatically interact with it. With the API, developers can easily build applications that access data stored in MongoDB Atlas databases. The API provides a simple and secure way to perform common database operations, such as retrieving and updating data, without having to write custom code. This makes it easier for developers to get started with MongoDB Atlas and provides a convenient way to integrate MongoDB data into existing applications.\n\nIf you want to learn more about all the capabilities of the Data API, [check out our course over at MongoDB University. 
There are also multiple resources available on the Atlas CLI.\n\nIf you don't know how to start or you want to learn more, visit the MongoDB Developer Community forums!", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "This article delves into the advanced features of the Data API, such as authentication and custom endpoints.", "contentType": "Tutorial"}, "title": "Mastering the Advanced Features of the Data API with Atlas CLI", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/java-single-collection-springpart1", "action": "created", "body": "# Single-Collection Designs in MongoDB with Spring Data (Part 1)\n\nModern document-based NoSQL databases such as MongoDB offer advantages over traditional relational databases for many types of applications. One of the key benefits is data models that avoid the need for normalized data spread across multiple tables requiring join operations that are both computationally expensive and difficult to scale horizontally.\n\nIn the first part of this series, we will discuss single-collection designs \u2014 one of the design patterns used to realize these advantages in MongoDB. In Part 2, we will provide examples of how the single-collection pattern can be utilized in Java applications using\u00a0Spring Data MongoDB.\n\n## The ADSB air-traffic control application\nIn this blog post, we discuss a database design for collecting and analyzing Automatic Dependent Surveillance-Broadcast (ADSB) data transmitted by aircraft. ADSB is a component of a major worldwide modernization of air-traffic control systems that moves away from dependency on radar (which is expensive to maintain and has limited range) for tracking aircraft movement and instead has the aircraft themselves transmit their location, speed, altitude, and direction of travel, all based on approved Global Navigation Satellite Systems such as GPS, GLONASS, Galileo, and BeiDou.\u00a0Find more information about ADSB.\n\nA number of consumer-grade devices are available for receiving ADSB transmissions from nearby aircraft. These are used by pilots of light aircraft to feed data to tablet and smart-phone based navigation applications such as\u00a0Foreflight. This provides a level of situational awareness and safety regarding the location of nearby flight traffic that previously was simply not available even to commercial airline pilots. Additionally, web-based aircraft tracking initiatives, such as\u00a0the Opensky Network, depend on community-sourced ADSB data to build their databases used for numerous research projects.\n\nWhilst most ADSB receivers retail in the high hundreds-of-dollars price range, the rather excellent\u00a0Stratux open-source project\u00a0allows a complete receiver system to be built using a Raspberry Pi and cheap USB Software Defined Radios (SDRs). A complete system can be built from parts totalling around $200 (1).\n\nThe Stratux receiver transmits data to listening applications either over a raw TCP/IP connection with messages adhering to the\u00a0GDL90\u00a0specification designed and maintained by Garmin, or as JSON messages sent to subscribers to a websocket connection. In this exercise, we will\u00a0simulate receiving messages from a Stratux receiver \u2014 **a working receiver is not a prerequisite for completing the exercises**. 
The database we will be building will track observed aircraft, the airlines they belong to, and the individual ADSB position reports picked up by our receiver.\n\nIn a traditional RDBMS-based system, we might settle on a normalized data model that looks like this:\n\nEach record in the airline table can be joined to zero or more aircraft records, and each aircraft record can be joined to zero or more ADSB position reports. Whilst this model offers a degree of flexibility in terms of querying, queries that join across tables are computationally intensive and difficult to scale horizontally. In particular, consider that over 3000 commercial flights are handled per day by airports in the New York City area and that each of those flights are transmitting a new ADSB position report every second. With ADSB transmissions for a flight being picked up by the receiver for an average of 15 minutes until the aircraft moves out of range, an ADSB receiver in New York alone could be feeding over 2.5 million position reports per day into the system. With a network of ADSB receivers positioned at major hubs throughout the USA, the possibility of needing to be able to scale out could grow quickly.\n\nMongoDB has been designed from the outset to be easy to scale horizontally. However, to do that, the correct design principles and patterns must be employed, one of which is to avoid unnecessary joins. In our case, we will be utilizing the *document data model*, *polymorphic collections*, and the *single-collection design pattern*. And whilst it\u2019s common practice in relational database design to start by normalizing the data before considering access patterns, with document-centric databases such as MongoDB, you should always start by considering the access patterns for your data and work from there, using the guiding principle that *data that is accessed together should be stored together*.\u00a0\n\nIn MongoDB, data is stored in JSON (2) like documents, organized into collections. In relational database terms, a document is analogous to a record whilst a collection is analogous to a table. However, there are some key differences to be aware of.\n\nA document in MongoDB can be hierarchical, in that the value of any given attribute (column in relational terms) in a document may itself be a document or an array of values or documents. This allows for data to be stored in a single document within a collection in ways that tabular relational database designs can\u2019t support and that would require data to be stored across multiple tables and accessed using joins. Consider our airline to aircraft one-to-many and aircraft to ADSB position report one-to-many relationships. In our relational model, this requires three tables joined using primary-foreign key relationships. In MongoDB, this could be represented by airline documents, with their associated aircraft embedded in the same document and the ADSB position reports for each aircraft further embedded in turn, all stored in a single collection. 
Such documents might look like this:\n\n```\n{\n\u00a0 \"_id\": {\n\u00a0 \u00a0 \"$oid\": \"62abdd534e973de2fcbdc10d\"\n\u00a0 },\n\u00a0 \"airlineName\": \"Delta Air Lines\",\n\u00a0 \"airlineIcao\": \"DAL\",\n\u00a0 ...\n\n\u00a0\u00a0\"aircraft\": \n\u00a0 \u00a0 {\n\u00a0 \u00a0 \u00a0 \"icaoNumber\": \"a36f7e\",\n\u00a0 \"tailNumber\": \"N320NB\",\n\u00a0 \u00a0 \u00a0 ...\n \"positionReports\": [\n\u00a0 \u00a0 \u00a0 \u00a0 {\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"msgNum\": \"1\",\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"altitude\": 38825,\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 ...\n\u00a0 \u00a0 \u00a0 \"geoPoint\": {\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"type\": \"Point\",\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"coordinates\": [\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0-4.776722,\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 55.991776\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0]\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 },\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0},\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{\n \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"msgNum\": \"2\",\n \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 ...\n\u00a0 \u00a0 \u00a0 \u00a0 },\n\u00a0 \u00a0 \u00a0 \u00a0 {\n \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"msgNum\": \"3\",\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0... \n\u00a0 \u00a0 \u00a0 \u00a0 }\n\u00a0 \u00a0 \u00a0 ]\n\u00a0 \u00a0 },\n\n\u00a0\u00a0\u00a0\u00a0{\n\u00a0 \u00a0 \u00a0 \"icaoNumber\": \"a93d7c\",\n\u00a0 ...\n\u00a0 \u00a0 },\n\u00a0 \u00a0 {\n\u00a0 \"icaoNumber\": \"ab8379\",\n\u00a0 ...\n\u00a0 \u00a0 },\n\u00a0 ]\n}\n```\n\nBy embedding the aircraft information for each airline within its own document, all stored within a single collection, we are able retrieve information for an airline and all its aircraft using a single query and no joins:\n\n```javascript\ndb.airlines.find({\"airlineName\": \"Delta Air Lines\"}\n```\n\nEmbedded, hierarchical documents provide a great deal of flexibility in our data design and are consistent with our guiding principle that *data that is accessed together should be stored together*. However, there are some things to be aware of:\n\n* For some airlines, the number of embedded aircraft documents could become large. This would be compounded by the number of embedded ADSB position reports within each associated aircraft document. In general, large, unbounded arrays are considered an anti-pattern within MongoDB as they can lead to excessively sized documents with a corresponding impact on update operations and data retrieval.\n* There may be a need to access an individual airline or aircraft\u2019s data independently of either the corresponding aircraft data or information related to other aircraft within the airline\u2019s fleet. Whilst the MongoDB query aggregation framework allows for such shaping and projecting of the data returned by a query to do this, it would add extra processing overhead when carrying out such queries. Alternatively, the required data could be filtered out of the query returns within our application, but that might lead to unnecessary large data transmissions.\n* Some aircraft may be operated privately, and not be associated with an airline.\n\nOne approach to tackling these problems would be to separate the airline, aircraft, and ADSB position report data into separate documents stored in three different collections with appropriate cross references (primary/foreign keys). 
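To make that trade-off concrete, retrieving an airline together with its fleet in such a three-collection layout would require a join. The following is a hypothetical sketch only, assuming separate `airlines` and `aircraft` collections cross-referenced by the airline's ICAO code:\n\n```javascript\ndb.airlines.aggregate([\n  { $match: { airlineIcao: \"DAL\" } },\n  {\n    $lookup: {\n      from: \"aircraft\",            // hypothetical separate aircraft collection\n      localField: \"airlineIcao\",   // the airline's ICAO code\n      foreignField: \"airlineIcao\", // cross reference stored on each aircraft document\n      as: \"aircraft\"\n    }\n  }\n])\n```\n\n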
In some cases, this might be the right approach (for example, if synchronizing data from mobile devices using\u00a0[Realm). However, it comes at the cost of maintaining additional collections and indexes, and might necessitate the use of joins ($lookup stages in a MongoDB aggregation pipeline) when retrieving data. For some of our access patterns, this design would be violating our guiding principle that *data that is accessed together should be stored together*. Also, as the amount of data in an application grows and the need for scaling through sharding of data starts to become a consideration, having related data separated across multiple collections can complicate the maintenance of data across shards.\n\nAnother option would be to consider using\u00a0*the Subset Pattern*\u00a0which limits the number of embedded documents we maintain according to an algorithm (usually most recently received/accessed, or most frequently accessed), with the remaining documents stored in separate collections. This allows us to control the size of our hierarchical documents and in many workloads, cover our data retrieval and access patterns with a single query against a single collection. However, for our airline data use case, we may find that the frequency with which we are requesting all aircraft for a given airline, or all position reports for an aircraft (of which there could be many thousands), the subset pattern may still lead to many queries requiring joins.\n\nOne further solution, and the approach we\u2019ll take in this article, is to utilize another feature of MongoDB: polymorphic collections. Polymorphic collections refer to the ability of collections to store documents of varying types. Unlike relational tables, where the columns of each table are pre-defined, a collection in MongoDB can contain documents of any design, with the only requirement being that every document must contain an \u201c\\_id\u201d field containing a unique identifier. This ability has led some observers to describe MongoDB as being schemaless. However, it\u2019s more correct to describe MongoDB as \u201cschema-optional.\u201d You *can* define restrictions on the design of documents that are accepted by a collection using\u00a0JSON Schema, but this is optional and at the discretion of the application developers. By default, no restrictions are imposed. It\u2019s considered best practice to only store documents that are in some way related and/or will be retrieved in a single operation within the same collection, but again, this is at the developers\u2019 discretion.\u00a0\n\nUtilizing polymorphic collection in our aerodata example, we separate our Airline, Aircraft, and ADSB position report data into separate documents, but store them all within a *single collection.* Taking this approach, the documents in our collection may end up looking like this:\n```JSON\n{\n \"_id\": \"DAL\",\n \"airlineName\": \"Delta Air Lines\",\n ...\n \"recordType\": 1\n},\n{\n \"_id\": \"DAL_a93d7c\",\n \"tailNumber\": \"N695CA\",\n \"manufacturer\": \"Bombardier Inc\",\n \"model\": \"CL-600-2D24\",\n \"recordType\": 2\n},\n{\n \"_id\": \"DAL_ab8379\",\n \"tailNumber\": \"N8409N\",\n \"manufacturer\": \"Bombardier Inc\",\n \"model\": \"CL-600-2B19\",\n \"recordType\": 2\n},\n{\n \"_id\": \"DAL_a36f7e\",\n \"tailNumber\": \"N8409N\",\n \"manufacturer\": \"Airbus Industrie\",\n \"model\": \"A319-114\",\n \"recordType\": 2\n},\n{\n \"_id\": \"DAL_a36f7e_1\",\n \"altitude\": 38825,\n . . 
.\n \"geoPoint\": {\n \"type\": \"Point\",\n \"coordinates\": \n -4.776722,\n 55.991776\n ]\n },\n \"recordType\": 3\n},\n{\n \"_id\": \"DAL_a36f7e_2\",\n \"altitude\": 38875,\n ... \n \"geoPoint\": {\n \"type\": \"Point\",\n \"coordinates\": [\n -4.781466,\n 55.994843\n ]\n },\n \"recordType\": 3\n},\n{\n \"_id\": \"DAL_a36f7e_3\",\n \"altitude\": 38892,\n ... \n \"geoPoint\": {\n \"type\": \"Point\",\n \"coordinates\": [\n -4.783344,\n 55.99606\n ]\n },\n \"recordType\": 3\n}\n```\nThere are a couple of things to note here. Firstly, with the airline, aircraft, and ADSB position reports separated into individual documents rather than embedded within each other, we can query for and return the different document types individually or in combination as needed.\n\nSecondly, we have utilized a custom format for the \u201c\\_id\u201d field in each document. Whilst the \u201c\\_id\u201d field is always required in MongodB, the format of the value stored in the field can be anything as long as it\u2019s unique within that collection. By default, if no value is provided, MongoDB will assign an objectID value to the field. However, there is nothing to prevent us using any value we wish, as long as care is taken to ensure each value used is unique. Considering that MongoDB will always maintain an index on the \u201c\\_id\u201d field, it makes sense that we should use a value in the field that has some value to our application. In our case, the values are used to represent the hierarchy within our data. Airline document \u201c\\_id\u201d fields contain the airline\u2019s unique ICAO (International Civil Aviation Organization) code. Aircraft document \u201c\\_id\u201d fields start with the owning airline\u2019s ICAO code, followed by an underscore, followed by the aircraft\u2019s own unique ICAO code. Finally, ADSB position report document \u201c\\_id\u201d fields start with the airline ICAO code, an underscore, then the aircraft ICAO code, then a second underscore, and finally an incrementing message number.\u00a0\n\nWhilst we could have stored the airline and aircraft ICAO codes and ADSB message numbers in their own fields to support our queries, and in some ways doing so would be a simpler approach, we would have to create and maintain additional indexes on our collection against each field. Overloading the values in the \u201c\\_id\u201d field in the way that we have avoids the need for those additional indexes.\n\nLastly, we have added a helper field called recordType to each document to aid filtering of searches. Airline documents have a recordType value of 1, aircraft documents have a recordType value of 2, and ADSB position report documents have a recordType value of 3. 
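Because the queries below routinely filter on this discriminator, the field should be backed by an index; a minimal mongo shell sketch, assuming the documents live in a collection named `aerodata` as in those queries:\n\n```javascript\n// Index the discriminator field used to filter documents by type.\ndb.aerodata.createIndex({ recordType: 1 })\n```\n\n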
To maintain query performance, the positionType field should be indexed.\n\nWith these changes in place, and assuming we have placed all our documents in a collection named \u201caerodata\u201d, we can now carry out the following range of queries:\n\nRetrieve all documents related to Delta Air Lines:\n\n```javascript\ndb.aerodata.find({\"_id\": /^DAL/}) \n```\n\nRetrieve Delta Air Lines\u2019 airline document on its own:\n\n```javascript\ndb.aerodata.find({\"_id\": \"DAL\"})\n```\n\nRetrieve all aircraft documents for aircraft in Delta Air Lines\u2019 fleet:\n\n```javascript\ndb.aerodata.find({\"_id\": /^DAL_/, \"recordType\": 2})\n```\n\nRetrieve the aircraft document for Airbus A319 with ICAO code \"a36f7e\" on its own:\n\n```javascript\ndb.aerodata.find({\"_id\": \"DAL_a36f7e\", \"recordType\": 2})\n```\n\nRetrieve all ADSB position report documents for Airbus A319 with ICAO code \"a36f7e\":\n\n```javascript\ndb.aerodata.find({\"_id\": /^DAL_a36f7e/, \"recordType\": 3}) \n```\n\nIn each case, we are able to retrieve the data we need with a single query operation (requiring a single round trip to the database) against a single collection (and thus, no joins) \u2014 even in cases where we are returning multiple documents of different types. Note the use of regular expressions in some of the queries. In each case, our search pattern is anchored to the start of the field value being searched using the \u201c^\u201d hat symbol. This is important when performing a regular expression search as MongoDB can only utilize an index on the field being searched if the search pattern is anchored to the start of the field.\n\nThe following search will utilize the index on the \u201c\\_id\u201d field:\n\n```javascript\ndb.aerodata.find({\"_id\": /^DAL/}) \n```\n\nThe following search will **not** be able to utilize the index on the \u201c\\_id\u201d field and will instead perform a full collection scan:\n\n```javascript\ndb.aerodata.find({\"_id\": /DAL/})\n```\n\nIn this first part of our two-part post, we have seen how polymorphic single-collection designs in MongoDB can provide all the query flexibility of normalized relational designs, whilst simultaneously avoiding anti-patterns, such as unbounded arrays and unnecessary joins. This makes the resulting collections highly performant from a search standpoint and amenable to horizontal scaling. In Part 2, we will show how we can work with these designs using Spring Data MongoDB in Java applications.\n\nThe example source code used in this series is\u00a0[available on Github.\n\n(1) As of October 2022, pandemic era supply chain issues have impacted Raspberry Pi availability and cost. However for anyone interested in building their own Stratux receiver, the following parts list will allow a basic system to be put together:\n* USB SDR Radios\n* Raspberry Pi Starter Kit\n* SD Card\n* GPS Receiver (optional)\n\n(2) MongoDB stores data using BSON - a binary form of JSON with support for additional data types not supported by JSON. 
Get more information about the BSON specification.", "format": "md", "metadata": {"tags": ["Java"], "pageDescription": "Learn how to avoid joins in MongoDB by using Single Collection design patterns, and access those patterns using Spring Data in Java applications.", "contentType": "Tutorial"}, "title": "Single-Collection Designs in MongoDB with Spring Data (Part 1)", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/customer-success-ruby-tablecheck", "action": "created", "body": "# TableCheck: Empowering Restaurants with Best-in-Class Booking Tools Powered by MongoDB\n\nTableCheck is the world\u2019s premiere booking and guest platform. Headquartered in Tokyo, they empower restaurants with tools to elevate their guest experience and create guests for life with features like booking forms, surveys, marketing automation tools and an ecosystem of powerful solutions for restaurants to take their business forward.\n\n## Architectural overview of TableCheck\n\nLaunched in 2013, TableCheck began life as a Ruby on Rails monolith. Over time, the solution has been expanded to include satellite microservices. However, one constant that has remained throughout this journey was MongoDB.\n\nOriginally, TableCheck managed their own MongoDB Enterprise clusters. However, once MongoDB Atlas became available, they migrated their data to a managed replica set running in AWS.\n\nAccording to CTO Johnny Shields, MongoDB was selected initially as the database of choice for TableCheck as it was _\"love at first sight\"_. Though MongoDB was a much different solution in 2013, even in the database product\u2019s infancy, it fit perfectly with their development workflow and allowed them to work with their data easily and quickly while building out their APIs and application.\n\n## Ruby on Rails + MongoDB\n\nAny developer familiar with Ruby on Rails knows that the ORM layer (via Active Record) was designed to support relational databases. MongoDB\u2019s Mongoid ODM acts as a veritable \"drop-in\" replacement for existing Active Record adapters so that MongoDB can be used seamlessly with Rails. The CRUD API is familiar to Ruby on Rails developers and makes working with MongoDB extremely easy.\n\nWhen asked if MongoDB and Ruby were a good fit, Johnny Shields replied:\n> _\"Yes, I\u2019d add the combo of MongoDB + Ruby + Rails + Mongoid is a match made in heaven. Particularly with the Mongoid ORM library, it is easy to get MongoDB data represented in native Ruby data structures, e.g. as nested arrays and objects\"._\n\nThis has allowed TableCheck to ensure MongoDB remains the \"golden-source\" of data for the entire platform. They currently replicate a subset of data to Elasticsearch for deep multi-field search functionality. However, given the rising popularity and utility of Atlas Search, this part of the stack may be further simplified.\n \nAs MongoDB data changes within the TableCheck platform, these changes are broadcast over Apache Kafka via the MongoDB Kafka Connector to enable downstream services to consume it. Several of their microservices are built in Elixir, including a data analytics application. PostgreSQL is being used for these data analytics use cases as the only MongoDB Drivers for Elixir and managed by the community (such as `elixir-mongo/mongodb` or `zookzook/elixir-mongodb-driver`). 
However, should an official Driver surface, this decision may change.\n\n## Benefits of the Mongoid ODM for Ruby on Rails development\n\nThe \"killer feature\" for new users discovering Ruby on Rails is Active Record Migrations. This feature of Active Record provides a DSL that enables developers to manage their relational database\u2019s schema without having to write a single line of SQL. Because MongoDB is a NoSQL database, migrations and schema management are unnecessary!\n\nJohnny Shields shares the following based on his experience working with MongoDB and Ruby on Rails:\n> _\"You can add or remove data fields without any need to migrate your database. This alone is a \"killer-feature\" reason to choose MongoDB. You do still need to consider database indexes however, but MongoDB Atlas has a profiler which will monitor for slow queries and auto-suggest if any index is needed.\"_\n\nAs the Mongoid ODM supports large portions of the Active Record API, another powerful productivity feature TableCheck was able to leverage is the use of Associations. Cross-collection referenced associations are available. However, unlike relational databases, embedded associations can be used to simplify the data model.\n\n## Open source and community strong\n\nBoth `mongodb/mongoid` and `mongodb/mongo-ruby-driver` are licensed under OSI approved licenses and MongoDB encourages the community to contribute feedback, issues, and pull requests!\n\nSince 2013, the TableCheck team has contributed nearly 150 PRs to both projects. The majority tend to be quality-of-life improvements and bug fixes related to edge-case combinations of various methods/options. They\u2019ve also helped improve the accuracy of documentation in many places, and have even helped the MongoDB Ruby team setup Github Actions so that it would be easier for outsiders to contribute. \n\nWith so many contributions under their team\u2019s belt, and clearly able to extend the Driver and ODM to fit any use case the MongoDB team may not have envisioned, when asked if there were any use-cases MongoDB couldn\u2019t satisfy within a Ruby on Rails application, the feedback was:\n> _\"I have not encountered any use case where I\u2019ve felt SQL would be a fundamentally better solution than MongoDB. On the contrary, we have several microservices which we\u2019ve started in SQL and are moving to MongoDB now wherever we can.\"_\n\nThe TableCheck team are vocal advocates for things like better changelogs and more discipline in following semantic versioning best practices. These have benefited the community greatly, and Johnny and team continue to advocate for things like adopting static code analysis (ex: via Rubocop) to improve overall code quality and consistency.\n\n## Overall thoughts on working with MongoDB and Ruby on Rails\n\nTableCheck has been a long-time user of MongoDB via the Ruby driver and Mongoid ODM, and as a result has experienced some growing pains as the data platform matured. When asked about any challenges his team faced working with MongoDB over the years, Johnny replied: \n> _\"The biggest challenge was that in earlier MongoDB versions (3.x) there were a few random deadlock-type bugs in the server that bit us. These seemed to have gone away in newer versions (4.0+). MongoDB has clearly made an investment in core stability which we have benefitted from first-hand. 
Early on we were maintaining our own cluster, and from a few years ago we moved to Atlas and MongoDB now does much of the maintenance for us\"._\n\nWe at MongoDB continue to be impressed by the scope and scale of the solutions our users and customers like TableCheck continue to build. Ruby on Rails continues to be a viable framework for enterprise and best-in-class applications, and our team will continue to grow the product to meet the needs of the next generation of Ruby application developers.\n\nJohnny presented at MongoDB Day Singapore on November 23, 2022 (view presentation). His talk covered a number of topics, including his experiences working with MongoDB and Ruby.", "format": "md", "metadata": {"tags": ["MongoDB", "Ruby"], "pageDescription": "TableCheck's CTO Johnny Shields discusses their development experience working with the MongoDB Ruby ODM (mongoid) and how they accelerated and streamlined their development processes with these tools.", "contentType": "Article"}, "title": "TableCheck: Empowering Restaurants with Best-in-Class Booking Tools Powered by MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/saving-data-in-unity3d-using-playerprefs", "action": "created", "body": "# Saving Data in Unity3D Using PlayerPrefs\n\n*(Part 1 of the Persistence Comparison Series)*\n\nPersisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well.\n\nIn this tutorial series, we will explore the options given to us by Unity and third-party libraries. Each part will take a deeper look into one of them with the final part being a comparison:\n\n- Part 1: PlayerPrefs *(this tutorial)*\n- Part 2: Files\n- Part 3: BinaryReader and BinaryWriter *(coming soon)*\n- Part 4: SQL\n- Part 5: Realm Unity SDK\n- Part 6: Comparison of all these options\n\nTo make it easier to follow along, we have prepared an example repository for you. All those examples can be found within the same Unity project since they all use the same example game, so you can see the differences between those persistence approaches better.\n\nThe repository can be found at https://github.com/realm/unity-examples, with this tutorial being on the persistence-comparison branch next to other tutorials we have prepared for you.\n\n## Example game\n\n*Note that if you have worked through any of the other tutorials in this series, you can skip this section since we are using the same example for all parts of the series so that it is easier to see the differences between the approaches.*\n\nThe goal of this tutorial series is to show you a quick and easy way to take some first steps in the various ways to persist data in your game.\n\nTherefore, the example we will be using will be as simple as possible in the editor itself so that we can fully focus on the actual code we need to write.\n\nA simple capsule in the scene will be used so that we can interact with a game object. 
We then register clicks on the capsule and persist the hit count.\n\nWhen you open up a clean 3D template, all you need to do is choose `GameObject` -> `3D Object` -> `Capsule`.\n\nYou can then add scripts to the capsule by activating it in the hierarchy and using `Add Component` in the inspector.\n\nThe scripts we will add to this capsule showcasing the different methods will all have the same basic structure that can be found in `HitCountExample.cs`.\n\n```cs\nusing UnityEngine;\n\n/// \n/// This script shows the basic structure of all other scripts.\n/// \npublic class HitCountExample : MonoBehaviour\n{\n // Keep count of the clicks.\n SerializeField] private int hitCount; // 1\n\n private void Start() // 2\n {\n // Read the persisted data and set the initial hit count.\n hitCount = 0; // 3\n }\n\n private void OnMouseDown() // 4\n {\n // Increment the hit count on each click and save the data.\n hitCount++; // 5\n }\n}\n```\n\nThe first thing we need to add is a counter for the clicks on the capsule (1). Add a `[SerializeField]` here so that you can observe it while clicking on the capsule in the Unity editor.\n\nWhenever the game starts (2), we want to read the current hit count from the persistence and initialize `hitCount` accordingly (3). This is done in the `Start()` method that is called whenever a scene is loaded for each game object this script is attached to.\n\nThe second part to this is saving changes, which we want to do whenever we register a mouse click. The Unity message for this is `OnMouseDown()` (4). This method gets called every time the `GameObject` that this script is attached to is clicked (with a left mouse click). In this case, we increment the `hitCount` (5) which will eventually be saved by the various options shown in this tutorials series.\n\n## PlayerPrefs\n\n(See `PlayerPrefsExampleSimple.cs` in the repository for the finished version.)\n\nThe easiest and probably most straightforward way to save data in Unity is using the built-in [`PlayerPrefs`. The downside, however, is the limited usability since only three data types are supported:\n\n- string\n- float\n- integer\n\nAnother important fact about them is that they save data in plain text, which means a player can easily change their content. `PlayerPrefs` should therefore only be used for things like graphic settings, user names, and other data that could be changed in game anyway and therefore does not need to be safe.\n\nDepending on the operating system the game is running on, the `PlayerPrefs` get saved in different locations. They are all listed in the documentation. Windows, for example, uses the registry to save the data under `HKCU\\Software\\ExampleCompanyName\\ExampleProductName`.\n\nThe usage of `PlayerPrefs` is basically the same as a dictionary. They get accessed as `key`/`value` pairs where the `key` is of type `string`. Each supported data type has its own function:\n\n- SetString(key, value)\n- GetString(key)\n- SetFloat(key, value)\n- GetFloat(key)\n- SetInt(key, value)\n- GetInt(key)\n\n```cs\nusing UnityEngine;\n\npublic class PlayerPrefsExampleSimple : MonoBehaviour\n{\n // Resources:\n // https://docs.unity3d.com/ScriptReference/PlayerPrefs.html\n\n SerializeField] private int hitCount = 0;\n\n private const string HitCountKey = \"HitCountKey\"; // 1\n\n private void Start()\n {\n // Check if the key exists. 
If not, we never saved the hit count before.\n if (PlayerPrefs.HasKey(HitCountKey)) // 2\n {\n // Read the hit count from the PlayerPrefs.\n hitCount = PlayerPrefs.GetInt(HitCountKey); // 3\n }\n }\n\n private void OnMouseDown()\n {\n hitCount++;\n\n // Set and save the hit count before ending the game.\n PlayerPrefs.SetInt(HitCountKey, hitCount); // 4\n PlayerPrefs.Save(); // 5\n }\n\n}\n```\n\nFor the `PlayerPrefs` example, we create a script named `PlayerPrefsExampleSimple` based on the `HitCountExample` shown earlier.\n\nIn addition to the basic structure, we also need to define a key (1) that will be used to save the `hitCount` in the `PlayerPrefs`. Let's call it `\"HitCountKey\"`.\n\nWhen the game starts, we first want to check if there was already a hit count saved. The `PlayerPrefs` have a built-in function `HasKey(hitCountKey)` (2) that let's us achieve exactly this. If the key exists, we read it using `GetInt(hitCountKey)` (3) and save it in the counter.\n\nThe second part is saving data whenever it changes. On each click after we incremented the `hitCount`, we have to call `SetInt(key, value)` on `PlayerPrefs` (4) to set the new data. Note that this does not save the data to disk. This only happens during `OnApplicationQuit()` implicitly. We can explicitly write the data to disk at any time to avoid losing data in case the game crashes and `OnApplicationQuit()` never gets called.\nTo write the data to disk, we call `Save()` (5).\n\n## Extended example\n\n(See `PlayerPrefsExampleExtended.cs` in the repository for the finished version.)\n\nIn the second part of this tutorial, we will extend this very simple version to look at ways to save more complex data within `PlayerPrefs`.\n\nInstead of just detecting a mouse click, the extended script will detect `Shift+Click` and `Ctrl+Click` as well.\n\nAgain, to visualize this in the editor, we will add some more `[SerializeFields]` (1). Substitute the current one (`hitCount`) with the following:\n\n```cs\n// 1\n[SerializeField] private int hitCountUnmodified = 0;\n[SerializeField] private int hitCountShift = 0;\n[SerializeField] private int hitCountControl = 0;\n```\n\nEach type of click will be shown in its own `Inspector` element.\n\nThe same has to be done for the `PlayerPrefs` keys. Remove the `HitCountKey` and add three new elements (2).\n\n```cs\n// 2\nprivate const string HitCountKeyUnmodified = \"HitCountKeyUnmodified\";\nprivate const string HitCountKeyShift = \"HitCountKeyShift\";\nprivate const string HitCountKeyControl = \"HitCountKeyControl\";\n```\n\nThere are many different ways to save more complex data. Here we will be using three different entries in `PlayerPrefs` as a first step. Later, we will also look at how we can save structured data that belongs together in a different way.\n\nOne more field we need to save is the `KeyCode` for the key that was pressed:\n\n```cs\n// 3\nprivate KeyCode modifier = default;\n```\n\nWhen starting the scene, loading the data looks similar to the previous example, just extended by two more calls:\n\n```cs\nprivate void Start()\n{\n // Check if the key exists. 
If not, we never saved the hit count before.\n if (PlayerPrefs.HasKey(HitCountKeyUnmodified)) // 4\n {\n // Read the hit count from the PlayerPrefs.\n hitCountUnmodified = PlayerPrefs.GetInt(HitCountKeyUnmodified); // 5\n }\n if (PlayerPrefs.HasKey(HitCountKeyShift)) // 4\n {\n // Read the hit count from the PlayerPrefs.\n hitCountShift = PlayerPrefs.GetInt(HitCountKeyShift); // 5\n }\n if (PlayerPrefs.HasKey(HitCountKeyControl)) // 4\n {\n // Read the hit count from the PlayerPrefs.\n hitCountControl = PlayerPrefs.GetInt(HitCountKeyControl); // 5\n }\n}\n```\n\nAs before, we first check if the key exists in the `PlayerPrefs` (4) and if so, we set the corresponding counter (5) to its value. This is fine for a simple example but here, you can already see that saving more complex data will bring `PlayerPrefs` very soon to its limits if you do not want to write a lot of boilerplate code.\n\nUnity offers a detection for keyboard clicks and other input like a controller or the mouse via a class called [`Input`. Using `GetKey`, we can check if a specific key was held down the moment we register a mouse click.\n\nThe documentation tells us about one important fact though:\n\n> Note: Input flags are not reset until Update. You should make all the Input calls in the Update Loop.\n\nTherefore, we also need to implement the `Update()` function (6) where we check for the key and save it in the previously defined `modifier`.\n\nThe keys can be addressed via their name as string but the type safe way to do this is to use the class `KeyCode`, which defines every key necessary. For our case, this would be `KeyCode.LeftShift` and `KeyCode.LeftControl`.\n\nThose checks use `Input.GetKey()` (7) and if one of the two was found, it will be saved as the `modifier` (8). If neither of them was pressed (9), we just reset `modifier` to the `default` (10) which we will use as a marker for an unmodified mouse click.\n\n```cs\nprivate void Update() // 6\n{\n // Check if a key was pressed.\n if (Input.GetKey(KeyCode.LeftShift)) // 7\n {\n // Set the LeftShift key.\n modifier = KeyCode.LeftShift; // 8\n }\n else if (Input.GetKey(KeyCode.LeftControl)) // 7\n {\n // Set the LeftControl key.\n modifier = KeyCode.LeftControl; // 8\n }\n else // 9\n {\n // In any other case reset to default and consider it unmodified.\n modifier = default; // 10\n }\n}\n```\n\nThe same triplet can then also be found in the click detection:\n\n```cs\nprivate void OnMouseDown()\n{\n // Check if a key was pressed.\n switch (modifier)\n {\n case KeyCode.LeftShift: // 11\n // Increment the hit count and set it to PlayerPrefs.\n hitCountShift++; // 12\n PlayerPrefs.SetInt(HitCountKeyShift, hitCountShift); // 15\n break;\n case KeyCode.LeftControl: // 11\n // Increment the hit count and set it to PlayerPrefs.\n hitCountControl++; // \n PlayerPrefs.SetInt(HitCountKeyControl, hitCountControl); // 15\n break;\n default: // 13\n // Increment the hit count and set it to PlayerPrefs.\n hitCountUnmodified++; // 14\n PlayerPrefs.SetInt(HitCountKeyUnmodified, hitCountUnmodified); // 15\n break;\n }\n\n // Persist the data to disk.\n PlayerPrefs.Save(); // 16\n}\n```\n\nFirst we check if one of those two was held down while the click happened (11) and if so, increment the corresponding hit counter (12). 
If not (13), the `unmodfied` counter has to be incremented (14).\n\nFinally, we need to set each of those three counters individually (15) via `PlayerPrefs.SetInt()` using the three keys we defined earlier.\n\nLike in the previous example, we also call `Save()` (16) at the end to make sure data does not get lost if the game does not end normally.\n\nWhen switching back to the Unity editor, the script on the capsule should now look like this:\n\n## More complex data\n\n(See `PlayerPrefsExampleJson.cs` in the repository for the finished version.)\n\nIn the previous two sections, we saw how to handle two simple examples of persisting data in `PlayerPrefs`. What if they get more complex than that? What if you want to structure and group data together?\n\nOne possible approach would be to use the fact that `PlayerPrefs` can hold a `string` and save a `JSON` in there.\n\nFirst we need to figure out how to actually transform our data into JSON. The .NET framework as well as the `UnityEngine` framework offer a JSON serializer and deserializer to do this job for us. Both behave very similarly, but we will use Unity's own `JsonUtility`, which performs better in Unity than other similar JSON solutions.\n\nTo transform data to JSON, we first need to create a container object. This has some restriction:\n\n> Internally, this method uses the Unity serializer. Therefore, the object you pass in must be supported by the serializer. It must be a MonoBehaviour, ScriptableObject, or plain class/struct with the Serializable attribute applied. The types of fields that you want to be included must be supported by the serializer; unsupported fields will be ignored, as will private fields, static fields, and fields with the NonSerialized attribute applied.\n\nIn our case, since we are only saving simple data types (int) for now, that's fine. We can define a new class (1) and call it `HitCount`:\n\n```cs\n// 1\nprivate class HitCount\n{\n public int Unmodified;\n public int Shift;\n public int Control;\n}\n```\n\nWe will keep the Unity editor outlets the same (2):\n\n```cs\n// 2\nSerializeField] private int hitCountUnmodified = 0;\n[SerializeField] private int hitCountShift = 0;\n[SerializeField] private int hitCountControl = 0;\n```\n\nAll those will eventually be saved into the same `PlayerPrefs` field, which means we only need one key (3):\n\n```cs\n// 3\nprivate const string HitCountKey = \"HitCountKeyJson\";\n```\n\nAs before, the `modifier` will indicate which modifier was used:\n\n```cs\n// 4\nprivate KeyCode modifier = default;\n```\n\nIn `Start()`, we then need to read the JSON. As before, we check if the `PlayerPrefs` key exists (5) and then read the data, this time using `GetString()` (as opposed to `GetInt()` before).\n\nTransforming this JSON into the actual object is then done using `JsonUtility.FromJson()` (6), which takes the string as an argument. It's a generic function and we need to provide the information about which object this JSON is supposed to be representing\u2014in this case, `HitCount`.\n\nIf the JSON can be read and transformed successfully, we can set the hit count fields (7) to their three values.\n\n```cs\nprivate void Start()\n{\n // 5\n // Check if the key exists. 
If not, we never saved to it.\n if (PlayerPrefs.HasKey(HitCountKey))\n {\n // 6\n var jsonString = PlayerPrefs.GetString(HitCountKey);\n var hitCount = JsonUtility.FromJson(jsonString);\n\n // 7\n if (hitCount != null)\n {\n hitCountUnmodified = hitCount.Unmodified;\n hitCountShift = hitCount.Shift;\n hitCountControl = hitCount.Control;\n }\n }\n}\n```\n\nThe detection for the key that was pressed is identical to the extended example since it does not involve loading or saving any data but is just a check for the key during `Update()`:\n\n```cs\nprivate void Update() // 8\n{\n // Check if a key was pressed.\n if (Input.GetKey(KeyCode.LeftShift)) // 9\n {\n // Set the LeftShift key.\n modifier = KeyCode.LeftShift; // 10\n }\n else if (Input.GetKey(KeyCode.LeftControl)) // 9\n {\n // Set the LeftControl key.\n modifier = KeyCode.LeftControl; // 10\n }\n else // 11\n {\n // In any other case reset to default and consider it unmodified.\n modifier = default; // 12\n }\n}\n```\n\nIn a very similar fashion, `OnMouseDown()` needs to save the data whenever it's changed.\n\n```cs\nprivate void OnMouseDown()\n{\n // Check if a key was pressed.\n switch (modifier)\n {\n case KeyCode.LeftShift: // 13\n // Increment the hit count and set it to PlayerPrefs.\n hitCountShift++; // 14\n break;\n case KeyCode.LeftControl: // 13\n // Increment the hit count and set it to PlayerPrefs.\n hitCountControl++; // 14\n break;\n default: // 15\n // Increment the hit count and set it to PlayerPrefs.\n hitCountUnmodified++; // 16\n break;\n }\n\n // 17\n var updatedCount = new HitCount\n {\n Unmodified = hitCountUnmodified,\n Shift = hitCountShift,\n Control = hitCountControl,\n };\n\n // 18\n var jsonString = JsonUtility.ToJson(updatedCount);\n PlayerPrefs.SetString(HitCountKey, jsonString);\n PlayerPrefs.Save();\n}\n```\n\nCompared to before, you see that checking the key and increasing the counter (13 - 16) is basically unchanged except for the save part that is now a bit different.\n\nFirst, we need to create a new `HitCount` object (17) and assign the three counts. Using `JsonUtility.ToJson()`, we can then (18) create a JSON string from this object and set it using the `PlayerPrefs`.\n\nRemember to also call `Save()` here to make sure data cannot get lost in case the game crashes without being able to call `OnApplicationQuit()`.\n\nRun the game, and after you've clicked the capsule a couple of times with or without Shift and Control, have a look at the result. The following screenshot shows the Windows registry which is where the `PlayerPrefs` get saved.\n\nThe location when using our example project is `HKEY_CURRENT_USER\\SOFTWARE\\Unity\\UnityEditor\\MongoDB Inc.\\UnityPersistenceExample` and as you can see, our JSON is right there, saved in plain text. This is also one of the big downsides to keep in mind when using `PlayerPrefs`: Data is not safe and can easily be edited when saved in plain text. Watch out for our future tutorial on encryption, which is one option to improve the safety of your data.\n\n![\n\n## Conclusion\n\nIn this tutorial, we have seen how to save and load data using `PlayerPrefs`. They are very simple and easy to use and a great choice for some simple data points. If it gets a bit more complex, you can save data using multiple fields or wrapping them into an object which can then be serialized using `JSON`.\n\nWhat happens if you want to persist multiple objects of the same class? Or multiple classes? Maybe with relationships between them? 
And what if the structure of those objects changes?\n\nAs you see, `PlayerPrefs` get to their limits really fast\u2014as easy as they are to use as limited they are.\n\nIn future tutorials, we will explore other options to persist data in Unity and how they can solve some or all of the above questions.\n\nPlease provide feedback and ask any questions in the Realm Community Forum.", "format": "md", "metadata": {"tags": ["C#", "Realm", "Unity"], "pageDescription": "Persisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well.\n\nIn this tutorial series, we will explore the options given to us by Unity and third-party libraries.", "contentType": "Tutorial"}, "title": "Saving Data in Unity3D Using PlayerPrefs", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/introducing-sync-geospatial-data", "action": "created", "body": "# Introducing Sync for Geospatial Data\n\nGeospatial queries have been one of the most requested features in the Atlas Device SDKs and Realm for a long time. As of today, we have added support in Kotlin, JS, and .NET with the rest to follow soon. Geospatial queries unlock a powerful set of location-based applications, and today we will look at how to leverage the power of using them with sync to make your application both simple and efficient. \n\nThe dataset used in the following examples can be downloaded to your own database by following the instructions in the geospatial queries docs.\n\nLet\u2019s imagine that we want to build a \u201crestaurants near me\u201d application where the primary use case is to provide efficient, offline-first search for restaurants within a walkable distance of the user\u2019s current location. How should we design such an app? Let\u2019s consider a few options:\n\n1. We could send the user\u2019s location-based queries to the server and have them processed there. This promises to deliver accurate results but is bottlenecked on the server\u2019s performance and may not scale well. We would like to avoid the frustrating user experience of having to wait on a loading icon after entering a search.\n2. We could load relevant/nearby data onto the user\u2019s device and do the search locally. This promises to deliver a fast search time and will be accurate to the degree that the data cached on the user\u2019s device is up to date for the current location. But the question is, how do we decide what data to send to the device, and how do we keep it up to date?\n\nWith flexible sync and geospatial queries, we now have the tools to build the second solution, and it is much more efficient than an app that uses a REST API to fetch data.\n\n## Filtering by radius\n\nA simple design will be to subscribe to all restaurant data that is within a reasonable walkable distance from the user\u2019s current location \u2014 let\u2019s say .5 kilometer (~0.31 miles). To enable geospatial queries to work in flexible sync, your data has to be in the right shape. For complete instructions on how to configure your app to support geospatial queries in sync, see the documentation. But basically, the location field has to be added to the list of queryable fields. The sync schema will look something like this:\n\n . 
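\n\n(The original schema snippet is not shown here; the following is only a rough, illustrative sketch.) Assuming a restaurant document that stores a GeoJSON-style point in a `location` field, an App Services schema along these lines would make `location` available for geospatial sync queries. The `Restaurant` title and the `name` field are placeholders, and your schema should follow your own data:\n\n```json\n{\n  \"title\": \"Restaurant\",\n  \"bsonType\": \"object\",\n  \"required\": [\"_id\"],\n  \"properties\": {\n    \"_id\": { \"bsonType\": \"objectId\" },\n    \"name\": { \"bsonType\": \"string\" },\n    \"location\": {\n      \"bsonType\": \"object\",\n      \"required\": [\"coordinates\", \"type\"],\n      \"properties\": {\n        \"coordinates\": { \"bsonType\": \"array\", \"items\": { \"bsonType\": \"double\" } },\n        \"type\": { \"bsonType\": \"string\" }\n      }\n    }\n  }\n}\n```\n\nThe key detail is the embedded `location` object holding a `coordinates` array of doubles (longitude first, then latitude) and a `type` string set to \"Point\", which is the shape the geospatial queries expect; with `location` added to the queryable fields, radius-based subscriptions can be evaluated against it. Refer to the linked documentation for the exact requirements for your app.\n\n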
Syncing these types of shapes is in our upcoming roadmap, but until that is available, you can query the MongoDB data using the Atlas App Services API to get the BSON representation and parse that to build a GeoPolygon that Realm queries accept. Being able to filter on arbitrary shapes opens up all sorts of interesting geofencing applications, granting the app the ability to react to a change in location.\n\nThe ability to use flexible sync with geospatial queries makes it simple to design an efficient location-aware application. We are excited to see what you will use these features to create!\n\n> **Ready to get started now?**\n>\n> Install one of our SDKs \u2014 start your journey with our docs or jump right into example projects with source code.\n>\n> Then, register for Atlas to connect to Atlas Device Sync, a fully-managed mobile backend as a service. Leverage out-of-the-box infrastructure, data synchronization capabilities, network handling, and much more to quickly launch enterprise-grade mobile apps. \n>\n> Finally, let us know what you think and get involved in our forums. See you there!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1a44f04ba566d1ae/656fabb4c6be9315f5e0128a/image6.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt344682eee682e5dc/656fabe4d4041c844014bd01/image7.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf2cab77d982c9d77/656fabfbc7fbbbe84612fff0/maps.jpg\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf426b51421040dc0/656fac2d358dcdd08dd73ca0/image5.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt18e2b28f3f616b7f/656fac56841cdf44f874bb27/image3.png", "format": "md", "metadata": {"tags": ["Realm"], "pageDescription": "Sync your data based on geospatial constraints using Atlas Device Sync in your applications.", "contentType": "News & Announcements"}, "title": "Introducing Sync for Geospatial Data", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/add-us-postal-abbreviations-atlas-search", "action": "created", "body": "# Add US Postal Abbreviations to Your Atlas Search in 5 Minutes\n\nThere are cases when it helps to have synonyms set up to work with your Atlas Search index. For example, if the search in your application needs to work with addresses, it might help to set up a list of common synonyms for postal abbreviations, so one could type in \u201cblvd\u201d instead of \u201cboulevard\u201d and still find all places with \u201cboulevard\u201d in the address.\n\nThis tutorial will show you how to set up your Atlas Search index to recognize US postal abbreviations.\n\n## Prerequisites\n\nTo be successful with this tutorial, you will need:\n* Python, to use a script that scrapes\u00a0a list of street suffix abbreviations\u00a0helpfully compiled by the United States Postal Service (USPS). This tutorial was written using Python 3.10.15, but you could try it on earlier versions of 3, if you\u2019d like.\n* A MongoDB Atlas cluster. Follow the\u00a0Get Started with Atlas\u00a0guide to create your account and a MongoDB cluster. For this tutorial, you can use your\u00a0free-forever MongoDB Atlas cluster!\u00a0Keep a note of your database username, password, and\u00a0connection string\u00a0as you will need those later.\n* Rosetta, if you\u2019re on a MacOS with an M1 chip. 
This will allow you to run MongoDB tools like\u00a0mongoimport\u00a0and\u00a0mongosh.\u00a0\n* mongosh for running commands in the MongoDB shell. If you don\u2019t already have it,\u00a0install mongosh.\n* A copy of\u00a0mongoimport. If you have MongoDB installed on your workstation, then you may already have\u00a0mongoimport\u00a0installed. If not, follow the instructions on the MongoDB website to\u00a0install mongoimport.\u00a0\n* We're going to be using a sample\\_restaurants dataset in this tutorial since it contains address data. For instructions on how to load sample data, see the\u00a0documentation. Also, you can\u00a0see all available sample datasets.\n\nThe examples shown here were all written on a MacOS but should run on any unix-type system. If you're running on Windows, we recommend running the example commands inside the\u00a0Windows Subsystem for Linux.\n\n## A bit about synonyms in Atlas Search\nTo learn about synonyms in Atlas Search, we suggest you start by checking out our\u00a0documentation. Synonyms\u00a0allow you to index and search your collection for words that have the same or nearly the same meaning, or, in the case of our tutorial, you can search using different ways to write out an address and still get the results you expect. To set up and use synonyms in Atlas Search, you will need to:\n\n1. Create a collection in the same database as the collection you\u2019re indexing\u00a0 containing the synonyms. Note that every document in the synonyms collection must have\u00a0a specific format.\n2. Reference your synonyms collection in your search index definition\u00a0via a synonym mapping.\n3. Reference your synonym mapping in the $search command with the\u00a0$text operator.\u00a0\n\nWe will walk you through these steps in the tutorial, but first, let\u2019s start with creating the JSON documents that will form our synonyms collection.\n\n## Scrape the USPS postal abbreviations page\n\nWe will use\u00a0the list of official street suffix abbreviations\u00a0and\u00a0a list of secondary unit designators from the USPS website to create a JSON document for each set of the synonyms.\n\nAll documents in the synonyms collection must have a\u00a0specific formatthat specifies the type of synonyms\u2014equivalent or explicit. Explicit synonyms have a one-way mapping. 
For example, if \u201cboat\u201d is explicitly mapped to \u201csail,\u201d we\u2019d be saying that if someone searches \u201cboat,\u201d we want to return all documents that include \u201csail\u201d and \u201cboat.\u201d However, if we search the word \u201csail,\u201d we would not get any documents that have the word \u201cboat.\u201d In the case of postal abbreviations, however, one can use all abbreviations interchangeably, so we will use the \u201cequivalent\u201d type of synonym in the mappingType field.\n\nHere is a sample document in the synonyms collection for all the possible abbreviations of \u201cavenue\u201d:\n```\n\u201cAvenue\u201d:\u00a0\n\n{\n\n\"mappingType\":\"equivalent\",\n\n\"synonyms\":\"AVENUE\",\"AV\",\"AVEN\",\"AVENU\",\"AVN\",\"AVNUE\",\"AVE\"]\n\n}\n```\nWe wrote the web scraping code for you in Python, and you can run it with the following commands to create a document for each synonym group:\n```\ngit clone https://github.com/mongodb-developer/Postal-Abbreviations-Synonyms-Atlas-Search-Tutorial/\u00a0\n\ncd Postal-Abbreviations-Synonyms-Atlas-Search-Tutorial\n\npython3 main.py\n```\nTo see details of the Python code, read the rest of the section.\n\nIn order to scrape the USPS postal website, we will need to import the following packages/libraries and install them using PIP:\u00a0[requests,\u00a0BeautifulSoup, and\u00a0pandas. We\u2019ll also want to import\u00a0json\u00a0and\u00a0re\u00a0for formatting our data when we\u2019re ready:\n```\nimport json\n\nimport requests\n\nfrom bs4 import BeautifulSoup\n\nimport pandas as pd\n\nimport re\n```\nLet\u2019s start with the Street Suffix Abbreviations page. We want to create objects that represent both the URL and the page itself:\n```\n# Create a URL object\n\nstreetsUrl = 'https://pe.usps.com/text/pub28/28apc_002.htm'\n\n# Create object page\n\nheaders = {\n\n\u00a0\u00a0\u00a0\u00a0\"User-Agent\": 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Mobile Safari/537.36'}\n\nstreetsPage = requests.get(streetsUrl, headers=headers)\n```\nNext, we want to get the information on the page. 
We\u2019ll start by parsing the HTML, and then get the table by its id:\n```\n# Obtain page's information\n\nstreetsSoup = BeautifulSoup(streetsPage.text, 'html.parser')\n```\n\n```\n# Get the table by its id\n\nstreetsTable = streetsSoup.find('table', {'id': 'ep533076'})\n```\nNow that we have the table, we\u2019re going to want to transform it into a\u00a0dataframe, and then format it in a way that\u2019s useful for us:\n```\n# Transform the table into a list of dataframes\n\nstreetsDf = pd.read_html(str(streetsTable))\n```\nOne thing to take note of is that in the table provided on USPS\u2019s website, one primary name is usually mapped to multiple commonly used names.\n\nThis means we need to dynamically group together commonly used names by their corresponding primary name and compile that into a list:\n```\n# Group together all \"Commonly Used Street Suffix or Abbreviation\" entries\n\nstreetsGroup = streetsDf0].groupby(0)[1].apply(list)\n```\nOnce our names are all grouped together, we can loop through them and export them as individual JSON files.\n```\nfor x in range(streetsGroup.size):\n\n\u00a0\u00a0\u00a0\u00a0dictionary = {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"mappingType\": \"equivalent\",\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"synonyms\": streetsGroup[x]\n\n\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0# export the JSON into a file\n\n\u00a0\u00a0\u00a0\u00a0with open(streetsGroup.index.values[x] + \".json\", \"w\") as outfile:\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0json.dump(dictionary, outfile)\n```\nNow, let\u2019s do the same thing for the Secondary Unit Designators page:\n\nJust as before, we\u2019ll start with getting the page and transforming it to a dataframe:\n```\n# Create a URL object\n\nunitsUrl = 'https://pe.usps.com/text/pub28/28apc_003.htm'\n\nunitsPage = requests.get(unitsUrl, headers=headers)\n\n# Obtain page's information\n\nunitsSoup = BeautifulSoup(unitsPage.text, 'html.parser')\n\n# Get the table by its id\n\nunitsTable = unitsSoup.find('table', {'id': 'ep538257'})\n\n# Transform the table into a list of dataframes\n\nunitsDf = pd.read_html(str(unitsTable))\n```\nIf we look at the table more closely, we can see that one of the values is blank. While it makes sense that the USPS would include this in the table, it\u2019s not something that we want in our synonyms list.\n![Table with USPS descriptions and abbreviations\nTo take care of that, we\u2019ll simply remove all rows that have blank values:\n```\nunitsDf0] = unitsDf[0].dropna()\n```\nNext, we\u2019ll take our new dataframe and turn it into a list:\n```\n# Create a 2D list that we will use for our synonyms\n\nunitsList = unitsDf[0][[0, 2]].values.tolist()\n```\nYou may have noticed that some of the values in the table have asterisks in them. Let\u2019s quickly get rid of them so they won\u2019t be included in our synonym mappings:\n```\n# Remove all non-alphanumeric characters\n\nunitsList = [[re.sub(\"[^ \\w]\",\" \",x).strip().lower() for x in y] for y in unitsList]\n```\nNow we can now loop through them and export them as individual JSON files just as we did before. 
The one thing to note is that we want to restrict the range on which we\u2019re iterating to include only the relevant data we want:\n```\n# Restrict the range to only retrieve the results we want\n\nfor x in range(1, len(unitsList) - 1):\n\n\u00a0\u00a0\u00a0\u00a0dictionary = {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"mappingType\": \"equivalent\",\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"synonyms\": unitsList[x]\n\n\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0# export the JSON into a file\n\n\u00a0\u00a0\u00a0\u00a0with open(unitsList[x][0] + \".json\", \"w\") as outfile:\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0json.dump(dictionary, outfile)\n```\n## Create a synonyms collection with JSON schema validation\nNow that we created the JSON documents for abbreviations, let\u2019s load them all into a collection in the sample\\_restaurants database. If you haven\u2019t already created a MongoDB cluster, now is a good time to do that and load the sample data in.\n\nThe first step is to connect to your Atlas cluster. We will use mongosh to do it. If you don\u2019t have mongosh installed, follow the\u00a0[instructions.\n\nTo connect to your Atlas cluster, you will need a\u00a0connection string. Choose the \u201cConnect with the MongoDB Shell\u201d option and follow the instructions. Note that you will need to connect with a\u00a0database user\u00a0that has permissions to modify the database, since we would be creating a collection in the sample\\_restaurant database. The command you need to enter in the terminal will look something like:\n```\nmongosh \"mongodb+srv://cluster0.XXXXX.mongodb.net/sample_restaurant\" --apiVersion 1 --username \n```\nWhen prompted for the password, enter the database user\u2019s password.\n\nWe created our synonym JSON documents in the right format already, but let\u2019s make sure that if we decide to add more documents to this collection, they will also have the correct format. To do that, we will create a synonyms collection with a validator that uses\u00a0$jsonSchema. The commands below will create a collection with the name \u201cpostal\\_synonyms\u201d in the sample\\_restaurants database and ensure that only documents with correct format are inserted into the collection.\n```\nuse('sample_restaurants')\n\ndb.createCollection(\"postal_synonyms\", { validator: { $jsonSchema: { \"bsonType\": \"object\", \"required\": \"mappingType\", \"synonyms\"], \"properties\": { \"mappingType\": { \"type\": \"string\", \"enum\": [\"equivalent\", \"explicit\"], \"description\": \"must be a either equivalent or explicit\" }, \"synonyms\": { \"bsonType\": \"array\", \"items\": { \"type\": \"string\" }, \"description\": \"must be an Array with each item a string and is required\" }, \"input\": { \"type\": \"array\", \"items\": { \"type\": \"string\" }, \"description\": \"must be an Array and is each item is a string\" } }, \"anyOf\": [{ \"not\": { \"properties\": { \"mappingType\": { \"enum\": [\"explicit\"] } }, \"required\": [\"mappingType\"] } }, { \"required\": [\"input\"] }] } } })\n\n```\n## Import the JSON files into the synonyms collection\nWe will use mongoimport to import all the JSON files we created.\n\nYou will need a\u00a0[connection string\u00a0for your Atlas cluster to use in the mongoimport command. 
If you don\u2019t already have mongoimport installed, use\u00a0the\u00a0instructions\u00a0in the MongoDB documentation.\n\nIn the terminal, navigate to the folder where all the JSON files for postal abbreviation synonyms were created.\n```\ncat *.json | mongoimport --uri 'mongodb+srv://:@cluster0.pwh9dzy.mongodb.net/sample_restaurants?retryWrites=true&w=majority' --collection='postal_synonyms'\n\n```\nIf you liked mongoimport, check out this\u00a0very helpful mongoimport guide.\n\nTake a look at the synonyms collections you just created in Atlas. You should see around 229 documents there.\n\n## Create a search index with synonyms mapping in JSON Editor\n\nNow that we created the synonyms collection in our sample\\_restaurants database, let\u2019s put it to use.\n\nLet\u2019s start by creating a search index. Navigate to the Search tab in your Atlas cluster and click the \u201cCREATE INDEX\u201d button.\n\nSince the Visual Index builder doesn\u2019t support synonym mappings yet, we will choose JSON Editor and click Next:\n\nIn the JSON Editor, pick restaurants collection in the sample\\_restaurants database and enter the following into the index definition. Here, the source collection name refers to the name of the collection with all the postal abbreviation synonyms, which we named \u201cpostal\\_synonyms.\u201d\n```\n{\n\n\u00a0\u00a0\"mappings\": {\n\n\u00a0\u00a0\u00a0\u00a0\"dynamic\": true\n\n\u00a0\u00a0},\n\n\u00a0\u00a0\"synonyms\": \n\n\u00a0\u00a0\u00a0\u00a0{\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"analyzer\": \"lucene.standard\",\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"name\": \"synonym_mapping\",\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"source\": {\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"collection\": \"postal_synonyms\"\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0]\n\n}\n\n```\n![The Create Search Index JSON Editor UI in Atlas\n\nWe are indexing the restaurants collection and creating a synonym mapping with the name \u201csynonym\\_mapping\u201d that references the synonyms collection \u201cpostal\\_synonyms.\u201d\n\nClick on Next and then on Create Search Index, and wait for the search index to build.\n\nOnce the index is active, we\u2019re ready to test it out.\n\n## Test that synonyms are working (aggregation pipeline in Atlas or Compass)\n\nNow that we have an active search index, we\u2019re ready to test that our synonyms are working. Let\u2019s head to the Aggregation pipeline in the Collections tab to test different calls to $search. You can also\u00a0use\u00a0Compass, the MongoDB GUI, if you prefer.\n\nChoose $search from the list of pipeline stages. The UI gives us a helpful placeholder for the $search command\u2019s arguments.\n\nLet\u2019s look for all restaurants that are located on a boulevard. We will search in the \u201caddress.street\u201d field, so the arguments to the $search stage will look like this:\n```\n{\n\n\u00a0\u00a0index: 'default',\n\n\u00a0\u00a0text: {\n\n\u00a0\u00a0\u00a0\u00a0query: 'boulevard',\n\n\u00a0\u00a0\u00a0\u00a0path: 'address.street'\n\n\u00a0\u00a0}\n\n}\n\n```\nLet\u2019s add a $count stage after the $search stage to see how many restaurants with an address that contains \u201cboulevard\u201d we found:\n\nAs expected, we found a lot of restaurants with the word \u201cboulevard\u201d in the address. But what if we don\u2019t want to have users type \u201cboulevard\u201d in the search bar? 
What would happen if we put in \u201cblvd,\u201d for example?\n\n```\n{\n\n\u00a0\u00a0index: 'default',\n\n\u00a0\u00a0text: {\n\n\u00a0\u00a0\u00a0\u00a0query: blvd,\n\n\u00a0\u00a0\u00a0\u00a0path: 'address.street'\n\n\u00a0\u00a0}\n\n}\n```\n\nLooks like it found us restaurants with addresses that have \u201cblvd\u201d in them. What about the addresses with \u201cboulevard,\u201d though? Those did not get picked up by the search.\u00a0\n\nAnd what if we weren\u2019t sure how to spell \u201cboulevard\u201d and just searched for \u201cboul\u201d?\u00a0USPS\u2019s website\u00a0tells us it\u2019s an acceptable abbreviation for boulevard, but our $search finds nothing.\n\nThis is where our synonyms come in! We need to add a synonyms option to the text operator in the $search command and reference the synonym mapping\u2019s name:\n```\n{\n\n\u00a0\u00a0index: 'default',\n\n\u00a0\u00a0text: {\n\n\u00a0\u00a0\u00a0\u00a0query: 'blvd',\n\n\u00a0\u00a0\u00a0\u00a0path: 'address.street',\n\n\u00a0\u00a0\u00a0\u00a0synonyms:'synonym_mapping'\n\n\u00a0\u00a0}\n\n}\n```\n\nAnd there you have it! We found all the restaurants on boulevards, regardless of which way the address was abbreviated, all thanks to our synonyms.\n\n## Conclusion\n\nSynonyms is just one of many features\u00a0Atlas Search\u00a0offers to give you all the necessary search functionality in your application. All of these features are available right now on\u00a0MongoDB Atlas. We just showed you how to add support for common postal abbreviations to your Atlas Search index\u2014what can you do with Atlas Search next? Try it now on your free-forever\u00a0MongoDB Atlas\u00a0cluster and head over to\u00a0community forums\u00a0if you have any questions!", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "This tutorial will show you how to set up your Atlas Search index to recognize US postal abbreviations. ", "contentType": "Tutorial"}, "title": "Add US Postal Abbreviations to Your Atlas Search in 5 Minutes", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/atlas-flask-and-azure-app-service", "action": "created", "body": "# Scaling for Demand: Deploying Python Applications Using MongoDB Atlas on Azure App Service\n\nManaging large amounts of data locally can prove to be a challenge, especially as the amount of saved data grows. Fortunately, there is an efficient solution available. By utilizing the features of Flask, MongoDB Atlas, and Azure App Service, you can build and host powerful web applications that are capable of storing and managing tons of data in a centralized, secure, and scalable manner. Say goodbye to unreliable local files and hello to a scalable solution. \n\nThis in-depth tutorial will teach you how to build a functional CRUD (Create, Read, Update, and Delete) Flask application that connects to a MongoDB Atlas database, and is hosted on Azure App Service. Using Azure App Service and MongoDB together can be a great way to build and host web applications. Azure App Service makes it easy to build and deploy web apps, while MongoDB is great for storing and querying large amounts of data. With this combination, you can focus on building your application and let Azure take care of the underlying infrastructure and scaling. \n\nThis tutorial is aimed at beginners, but feel free to skip through this article and focus on the aspects necessary to your project. 
\n\nWe are going to be making a virtual bookshelf filled with some of my favorite books. Within this bookshelf, we will have the power to add a new book, view all the books in our bookshelf, exchange a book for another one of my favorites, or remove a book we would like to read. At the end of our tutorial, our bookshelf will be hosted so anyone with our website link can enjoy our book list too.\n\n### Requirements\nBefore we begin, there are a handful of prerequisites we need:\n\n* MongoDB Atlas account.\n* Microsoft Azure App Services subscription. \n* Postman Desktop (or another way to test our functions).\n* Python 3.9+. \n* GitHub Repository.\n\n### Setting up a MongoDB Atlas cluster\nWithin MongoDB Atlas, we need to create a free cluster. Follow the instructions in our MongoDB Atlas Tutorial. Once your cluster has provisioned, create a database and collection within Atlas. Let\u2019s name our database \u201cbookshelf\u201d and our collection \u201cbooks.\u201d Click on \u201cInsert Document\u201d and add in a book so that we have some data to start with. Your setup should look like this:\n\nNow that we have our bookshelf set up, we are ready to connect to it and utilize our CRUD operations. Before we get started, let\u2019s focus on *how* to properly connect.\n\n## Cluster security access\nNow that we have our cluster provisioned and ready to use, we need to make sure we have proper database access. Through Atlas, we can do this by heading to the \u201cSecurity\u201d section on the left-hand side of the screen. Ensure that under \u201cDatabase Access,\u201d you have enabled a user with at least \u201cRead and Write'' access. Under \u201cNetwork Access,\u201d ensure you\u2019ve added in any and all IP addresses that you\u2019re planning on accessing your database from. An easy way to do this is to set your IP address access to \u201c0.0.0.0/0.\u201d This allows you to access your cluster from any IP address. Atlas provides additional optional security features through Network Peering and Private Connections, using all the major cloud providers. Azure Private Link is part of this additional security feature, or if you\u2019ve provisioned an M10 or above cluster, the use of Azure Virtual Private Connection. \n\n## Setting up a Python virtual environment\nBefore we open up our Visual Studio Code, use your terminal to create a directory for where your bookshelf project will live.\n\nOnce we have our directory made, open it up in VSCode and access the terminal inside of VSCode. We are going to set up our Python virtual environment. We do this so all our files have a fun spot to live, where nothing else already downloaded can bother them. \n\nSet up your environment with:\n```\npython3 -m venv venv\n```\nActivate your environment with:\n```\nsource venv/bin/activate\n```\nYou\u2019ll know you\u2019re in your virtual environment when you see the little (venv) at the beginning of your hostname in your command line. \n\nOnce we are in our virtual environment, we are ready to set up our project requirements. A \u2018requirements.txt\u2019 file is used to specify the dependencies (various packages and their versions) required by the project to run. It helps ensure the correct versions of packages are installed when deploying the project to a new environment. This makes it much easier to reproduce the development environment and prevents any compatibility issues that may arise when using different versions of dependencies. 
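\n\nAs an optional sanity check (not required for the tutorial), you can confirm that the virtual environment\u2019s interpreter is the one in use and, later on, list the exact package versions installed in it:\n```\n# Should print a path ending in .../venv/bin/python3 inside your project directory\nwhich python3\n\n# After installing packages, show the exact versions resolved in this environment\npip freeze\n```\nThis makes it easy to keep the \u2018requirements.txt\u2019 file we are about to create in sync with what is actually installed.\n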
\n\n## Setting up our \u2018requirements.txt\u2019 file\nOur \u2018requirements.txt\u2019 file will consist of four various dependencies this project requires. The first is Flask. Flask is a web micro-framework for Python. It provides the basic tools for building web apps, such as routing and request handling. Flask allows for easy integration with other libraries and frameworks and allows for flexibility and customizability. If you\u2019ve never worked with Flask before, do not worry. By the end of this tutorial, you will have a clear understanding of how useful Flask can be.\n\nThe second dependency we have is PyMongo. PyMongo is a Python library for working with MongoDB. It provides a convenient way to interact with MongoDB databases and collections. We will be using it to connect to our database.\n\nThe third dependency we have is Python-dotenv. This is a tool used to store and access important information, like passwords and secret keys, in a safe and secure manner. Instead of hard-coding this information, Python-dotenv allows us to keep this information in an environment variable in a separate file that isn\u2019t shared with anyone else. Later in this tutorial, we will go into more detail on how to properly set up environment variables in our project. \n\nThe last dependency we have in our file is Black. Black is a code formatter for Python and it enforces a consistent coding style, making it easier for developers to read and maintain the code. By using a common code style, it can improve readability and maintainability.\n\nInclude these four dependencies in your \u2018requirements.txt\u2019 file.\n\n```\nFlask==2.2.2\npymongo==4.3.3\npython-dotenv==0.21.1\nblack==22.12.0\n```\nThis way, we can install all our dependencies in one step:\n```\npip install -r requirements.txt\n```\n\n***Troubleshooting***: After successfully installing PyMongo, a line in your terminal saying `dnspython has been installed` will likely pop up. It is worth noting that without `dnspython` properly downloaded, our next package `dotenv` won\u2019t work. If, when attempting to run our script later, you are getting `ModuleNotFoundError: No module named dotenv`, include `dnspython==2.2.1` in your \u2018requirements.txt\u2019 file and rerun the command from above.\n\n## Setting up our \u2018app.py\u2019 file\nOur \u2018app.py\u2019 file is the main file where our code for our bookshelf project will live. Create a new file within our \u201cazuredemo\u201d directory and name it \u2018app.py\u2019. It is time for us to include our imports:\n```\nimport bson \nimport os\nfrom dotenv import load_dotenv \nfrom flask import Flask, render_template, request\nfrom pymongo import MongoClient\nfrom pymongo.collection import Collection\nfrom pymongo.database import Database\n```\nHere we have our environment variable imports, our Flask imports, our PyMongo imports, and the BSON import we need in order to work with binary JSON data. \n\nOnce we have our imports set up, we are ready to connect to our MongoDB Atlas cluster and implement our CRUD functions, but first let\u2019s test and make sure Flask is properly installed. \n\nRun this very simple Flask app: \n```\napp: Flask = Flask(__name__)\n# our initial form page \n@app.route(\u2018/\u2019) \ndef index():\nreturn \u201cHi!\u201d\n```\nHere, we continue on to creating a new Flask application object, which we named \u201capp\u201d and give it the name of our current file. We then create a new route for the application. 
This tells the server which URL to listen for and which function to run when that URL is requested. In this specific example, the route is the homepage, and the function that runs returns the string \u201cHi!\u201d.\n\nRun your flask app using:\n```\nflask run\n```\n\nThis opens up port 5000, which is Flask\u2019s default port, but you can always switch the port you\u2019re using by running the command: \n```\nflask run -p port number]\n```\n\nWhen we access [http://127.0.0.1:5000, we see: \n\nSo, our incredibly simple Flask app works! Amazing. Let\u2019s now connect it to our database. \n\n## Connecting our Flask app to MongoDB\nAs mentioned above, we are going to be using a database environment variable to connect our database. In order to do this, we need to set up an .env file. Add this file in the same directory we\u2019ve been working with and include your MongoDB connection string. Your connection string is a URL-like string that is used to connect to a MongoDB server. It includes all the necessary details to connect to your specific cluster. This is how your setup should look:\n\nChange out the `username` and `password` for your own. Make sure you have set the proper Network Access points from the paragraph above. \n\nWe want to use environment variables so we can keep them separate from our code. This way, there is privacy since the `CONNECTION_STRING` contains sensitive information. It is crucial for security and maintainability purposes.\n\nOnce you have your imports in, we need to add a couple lines of code above our Flask instantiation so we can connect to our .env file holding our `CONNECTION_STRING`, and connect to our Atlas database. At this point, your app.py should look like this:\n\n```\nimport bson \nimport os\nfrom dotenv import load_dotenv \nfrom flask import Flask, render_template, request\nfrom pymongo import MongoClient\nfrom pymongo.collection import Collection\nfrom pymongo.database import Database\n# access your MongoDB Atlas cluster\nload_dotenv()\nconnection_string: str = os.environ.get(\u201cCONNECTION_STRING\u201d)\nmongo_client: MongoClient = MongoClient(connection_string)\n\n# add in your database and collection from Atlas \ndatabase: Database = mongo_client.get_database(\u201cbookshelf\u201d)\ncollection: Collection = database.get_collection(\u201cbooks\u201d)\n# instantiating new object with \u201cname\u201d\napp: Flask = Flask(__name__)\n\n# our initial form page\n@app.route(\u2018/\u2019)\ndef index():\nreturn \u201cHi!\u201d\n```\n\nLet\u2019s test `app.py` and ensure our connection to our cluster is properly in place. \nAdd in these two lines after your `collection = database\u201cbooks\u201d]` line and before your `#instantiating new object with name` line to check and make sure your Flask application is really connected to your database:\n\n```\nbook = {\u201ctitle\u201d: \u201cThe Great Gatsby\u201d, \u201cauthor\u201d: \u201cF. Scott Fitzgerald\u201d, \u201cyear\u201d: 1925}\ncollection.insert_one(book)\n```\n\nRun your application, access Atlas, and you should see the additional copy of \u201cThe Great Gatsby\u201d added. \n\n![screenshot of our \u201cbooks\u201d collection showing both copies of \u201cThe Great Gatsby\u201d\n\nAmazing! We have successfully connected our Flask application to MongoDB. Let\u2019s start setting up our CRUD (Create, Read, Update, Delete) functions. \n\nFeel free to delete those two added lines of code and manually remove both the Gatsby documents from Atlas. 
This was for testing purposes!\n\n## Creating CRUD functions\nRight now, we have hard-coded in our \u201cHi!\u201d on the screen. Instead, it\u2019s easier to render a template for our homepage. To do this, create a new folder called \u201ctemplates\u201d in your directory. Inside of this folder, create a file called: `index.html`. Here is where all the HTML and CSS for our homepage will go. This is highly customizable and not the focus of the tutorial, so please access this code from my Github (or make your own!).\n\nOnce our `index.html` file is complete, let\u2019s link it to our `app.py` file so we can read everything correctly. This is where the addition of the `render_template` import comes in. Link your `index.html` file in your initial form page function like so:\n```\n# our initial form page\n@app.route(\u2018/\u2019)\ndef index():\nreturn render_template(\u201cindex.html\u201d)\n```\n\nWhen you run it, this should be your new homepage when accessing http://127.0.0.1:5000/:\n\nWe are ready to move on to our CRUD functions.\n\n#### Create and read functions\nWe are combining our two Create and Read functions. This will allow us to add in a new book to our bookshelf, and be able to see all the books we have in our bookshelf depending on which request method we choose.\n\n```\n# CREATE and READ \n@app.route('/books', methods=\"GET\", \"POST\"])\ndef books():\n if request.method == 'POST':\n # CREATE\n book: str = request.json['book']\n pages: str = request.json['pages']\n\n # insert new book into books collection in MongoDB\n collection.insert_one({\"book\": book, \"pages\": pages})\n\n return f\"CREATE: Your book {book} ({pages} pages) has been added to your bookshelf.\\n \"\n\n elif request.method == 'GET':\n # READ\n bookshelf = list(collection.find())\n novels = []\n\n for titles in bookshelf:\n book = titles['book']\n pages = titles['pages']\n shelf = {'book': book, 'pages': pages}\n novels.insert(0,shelf)\n\n return novels\n```\n\nThis function is connected to our \u2018/books\u2019 route and depending on which request method we send, we can either add in a new book, or see all the books we have already in our database. We are not going to be validating any of the data in this example because it is out of scope, but please use Postman, cURL, or a similar tool to verify the function is properly implemented. For this function, I inserted:\n\n```\n{\n \u201cbook\u201d: \u201cThe Odyssey\u201d,\n \u201cpages\u201d: 384\n}\n```\nIf we head over to our Atlas portal, refresh, and check on our \u201cbookshelf\u201d database and \u201cbooks\u201d collection, this is what we will see:\n\n![screenshot of our \u201cbooks\u201d collection showing \u201cThe Odyssey\u201d\n\nLet\u2019s insert one more book of our choosing just to add some more data to our database. I\u2019m going to add in \u201c*The Perks of Being a Wallflower*.\u201d\n\nAmazing! Read the database collection back and you should see both novels. \n\nLet\u2019s move onto our UPDATE function.\n\n#### Update\nFor this function, we want to exchange a current book in our bookshelf with a different book. 
\n\n```\n# UPDATE\n@app.route(\"/books/\", methods = 'PUT'])\ndef update_book(book_id: str):\n new_book: str = request.json['book']\n new_pages: str = request.json['pages']\n collection.update_one({\"_id\": bson.ObjectId(book_id)}, {\"$set\": {\"book\": new_book, \"pages\": new_pages}})\n\n return f\"UPDATE: Your book has been updated to: {new_book} ({new_pages} pages).\\n\"\n\n```\n\nThis function allows us to exchange a book we currently have in our database with a new book. The exchange takes place via the book ID. To do so, access Atlas and copy in the ID you want to use and include this at the end of the URL. For this, I want to switch \u201cThe Odyssey\u201d with \u201cThe Stranger\u201d. Please use your testing tool to communicate to the update endpoint and view the results in Atlas. \n\nOnce you hit send and refresh your Atlas database, you\u2019ll see: \n![screenshot of our \u201cbooks\u201d collection with \u201cThe Stranger\u201d and \u201cThe Perks of Being a Wallflower\u201d\n\n\u201cThe Odyssey\u201d has been exchanged with \u201cThe Stranger\u201d! \n\nNow, let\u2019s move onto our last function: the DELETE function. \n\n#### Delete\n```\n# DELETE\n@app.route(\"/books/\", methods = 'DELETE'])\ndef remove_book(book_id: str):\n collection.delete_one({\"_id\": bson.ObjectId(book_id)})\n\n return f\"DELETE: Your book (id = {book_id}) has been removed from your bookshelf.\\n\"\n```\nThis function allows us to remove a specific book from our bookshelf. Similarly to the UPDATE function, we need to specify which book we want to delete through the URL route using the novels ID. Let\u2019s remove our latest book from the bookshelf to read, \u201cThe Stranger\u201d. \n\nCommunicate with the delete endpoint and execute the function. \n\nIn Atlas our results are shown:\n\n![screenshot of the \u201cbooks\u201d collection showing \u201cThe Perks of Being a Wallflower\u201d\n\n\u201cThe Stranger\u201d has been removed!\n\nCongratulations, you have successfully created a Flask application that can utilize CRUD functionalities, while using MongoDB Atlas as your database. That\u2019s huge. But\u2026no one else can use your bookshelf! It\u2019s only hosted locally. Microsoft Azure App Service can help us out with this. Let\u2019s host our Flask app on App Service. \n\n## Host your application on Microsoft Azure App Service\nWe are using Visual Studio Code for this demo, so make sure you have installed the Azure extension and you have signed into your subscription. There are other ways to work with Azure App Service, and to use Visual Studio Code is a personal preference. \n\nIf you\u2019re properly logged in, you\u2019ll see your Azure subscription on the left-hand side. \n\nClick the (+) sign next to Resources:\n\nClick on \u201cCreate App Service Web App\u201d:\n\nEnter a new name. This will serve as your website URL, so make sure it\u2019s not too hectic:\n\nSelect your runtime stack. Mine is Python 3.9:\n\nSelect your pricing tier. The free tier will work for this demo. \n\nIn the Azure Activity Log, you will see the web app being created. \n\nYou will be asked to deploy your web app, and then choose the folder you want to deploy:\n\nIt will start deploying, as you\u2019ll see through the \u201cOutput Window\u201d in the Azure App Service Log. \n\nOnce it\u2019s done, you\u2019ll see a button that says \u201cBrowse Website.\u201d Click on it. \n\nAs you can see, our application is now hosted at a different location! 
It now has its own URL.\n\nLet\u2019s make sure we can still utilize our CRUD operations with our new URL. Test again for each function.\n\nAt each step, if we refresh our MongoDB Atlas database, we will see these changes take place there as well. Great job!\n\n## Conclusion\nCongratulations! We have successfully created a Flask application from scratch, connected it to our MongoDB database, and hosted it on Azure App Service. These skills will continue to come in handy and I hope you enjoyed the journey. Separately, Azure App Service and MongoDB host a variety of benefits. Together, they are unstoppable! Combined, they provide a powerful platform for building and scaling web applications that can handle large amounts of data. Azure App Service makes it easy to deploy and scale web apps, while MongoDB provides a flexible and scalable data storage solution. \n\nGet information on MongoDB Atlas, Azure App Service, and Flask.\n\nIf you liked this tutorial and would like to dive even further into MongoDB Atlas and the functionalities available, please view my YouTube video. \n\n", "format": "md", "metadata": {"tags": ["Python", "MongoDB", "Azure"], "pageDescription": "This tutorial will show you how to create a functional Flask application that connects to a MongoDB Atlas database and is hosted on Azure App Service.", "contentType": "Tutorial"}, "title": "Scaling for Demand: Deploying Python Applications Using MongoDB Atlas on Azure App Service", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/java-single-collection-springpart2", "action": "created", "body": "# Single-Collection Designs in MongoDB with Spring Data (Part 2)\n\nIn\u00a0Part 1 of this two-part series, we discussed single-collection design patterns in MongoDB and how they can be used to avoid the need for computationally expensive joins across collections. In this second part of the series, we will provide examples of how the single-collection pattern can be utilized in Java applications using\u00a0Spring Data MongoDB\u00a0and, in particular, how documents representing different classes but residing in the same collection can be accessed.\n\n## Accessing polymorphic single collection data using Spring Data MongoDB\n\nWhilst official, native idiomatic interfaces for MongoDB are available for 12 different programming languages, with community-provided interfaces available for many more, many of our customers have significant existing investment and knowledge developing Java applications using Spring Data. A common question we are asked is how can polymorphic single-collection documents be accessed using Spring Data MongoDB?\n\nIn the next few steps, I will show you how the Spring Data template model can be used to map airline, aircraft, and ADSB position report documents in a single collection named **aerodata**, to corresponding POJOs in a Spring application.\n\nThe code examples that follow were created using the Netbeans IDE, but any IDE supporting Java IDE, including Eclipse and IntelliJ IDEA, can be used.\n\nTo get started, visit the\u00a0Spring Initializr website\u00a0and create a new Spring Boot project, adding Spring Data MongoDB as a dependency. In my example, I\u2019m using Gradle, but you can use Maven, if you prefer.\n\nGenerate your template project, unpack it, and open it in your IDE:\n\nAdd a package to your project to store the POJO, repository class, and interface definitions. 
(In my project, I created a package called (**com.mongodb.devrel.gcr.aerodata**). For our demo, we will add four POJOs \u2014 **AeroData**, **Airline**, **Aircraft**, and **ADSBRecord** \u2014 to represent our data, with four corresponding repository interface definitions. **AeroData** will be an abstract base class from which the other POJOs will extend:\n\n```\npackage com.mongodb.devrel.gcr.aerodata;\n\nimport org.springframework.data.annotation.Id;\nimport org.springframework.data.mongodb.core.mapping.Document;\n\n@Document(collection = \"aeroData\")\npublic abstract class AeroData {\n \n @Id\n public String id;\n public Integer recordType;\n \n //Getters and Setters...\n \n}\n```\n\n``` java\npackage com.mongodb.devrel.gcr.aerodata;\n\nimport org.springframework.data.mongodb.repository.MongoRepository;\n\npublic interface AeroDataRepository extends MongoRepository{\n\n}\n```\n\n``` java\npackage com.mongodb.devrel.gcr.aerodata;\n\nimport org.springframework.data.annotation.TypeAlias;\nimport org.springframework.data.mongodb.core.mapping.Document;\n\n@Document(collection = \"aeroData\")\n@TypeAlias(\"AirlineData\")\npublic class Airline extends AeroData{\n\n public String airlineName;\n public String country;\n public String countryISO;\n public String callsign;\n public String website;\n\n public Airline(String id, String airlineName, String country, String countryISO, String callsign, String website) {\n this.id = id;\n this.airlineName = airlineName;\n this.country = country;\n this.countryISO = countryISO;\n this.callsign = callsign;\n this.website = website;\n }\n\n @Override\n public String toString() {\n return String.format(\n \"Airlineid=%s, name='%s', country='%s (%s)', callsign='%s', website='%s']\",\n id, airlineName, country, countryISO, callsign, website);\n }\n\n}\n```\n\n``` java\npackage com.mongodb.devrel.gcr.aerodata;\n\nimport org.springframework.data.mongodb.repository.MongoRepository;\n\npublic interface AirlineRepository extends MongoRepository{\n\n}\n```\n\n``` java\npackage com.mongodb.devrel.gcr.aerodata;\n\nimport org.springframework.data.annotation.TypeAlias;\nimport org.springframework.data.mongodb.core.mapping.Document;\n\n@Document(collection = \"aeroData\")\n@TypeAlias(\"AircraftData\")\npublic class Aircraft extends AeroData{\n\n public String tailNumber;\n public String manufacturer;\n public String model;\n\n public Aircraft(String id, String tailNumber, String manufacturer, String model) {\n this.id = id;\n this.tailNumber = tailNumber;\n this.manufacturer = manufacturer;\n this.model = model;\n }\n\n @Override\n public String toString() {\n return String.format(\n \"Aircraft[id=%s, tailNumber='%s', manufacturer='%s', model='%s']\",\n id, tailNumber, manufacturer, model);\n }\n\n}\n```\n\n``` java\npackage com.mongodb.devrel.gcr.aerodata;\n\nimport org.springframework.data.mongodb.repository.MongoRepository;\n\npublic interface AircraftRepository extends MongoRepository{\n\n}\n```\n\n``` java\npackage com.mongodb.devrel.gcr.aerodata;\n\nimport java.util.Date;\nimport org.springframework.data.annotation.TypeAlias;\nimport org.springframework.data.mongodb.core.mapping.Document;\n\n@Document(collection = \"aeroData\")\n@TypeAlias(\"ADSBRecord\")\npublic class ADSBRecord extends AeroData {\n\n public Integer altitude; \n public Integer heading;\n public Integer speed;\n public Integer verticalSpeed;\n public Date timestamp;\n public GeoPoint geoPoint;\n\n public ADSBRecord(String id, Integer altitude, Integer heading, Integer speed, Integer 
verticalSpeed, Date timestamp, GeoPoint geoPoint) {\n this.id = id;\n this.altitude = altitude;\n this.heading = heading;\n this.speed = speed;\n this.verticalSpeed = verticalSpeed;\n this.timestamp = timestamp;\n this.geoPoint = geoPoint;\n }\n\n @Override\n public String toString() {\n return String.format(\n \"ADSB[id=%s, altitude='%d', heading='%d', speed='%d', verticalSpeed='%d' timestamp='%tc', latitude='%f', longitude='%f']\",\n id, altitude, heading, speed, verticalSpeed, timestamp, geoPoint == null ? null : geoPoint.coordinates[1], geoPoint == null ? null : geoPoint.coordinates[0]);\n }\n}\n```\n\n``` java\npackage com.mongodb.devrel.gcr.aerodata;\n\nimport org.springframework.data.mongodb.repository.MongoRepository;\n\npublic interface ADSBRecordRepository extends MongoRepository{\n\n}\n```\n\nWe\u2019ll also add a **GeoPoint** class to hold location information within the **ADSBRecord** objects:\n\n``` java\npackage com.mongodb.devrel.gcr.aerodata;\n\npublic class GeoPoint {\n public String type;\n public Double[] coordinates;\n\n //Getters and Setters...\n}\n```\n\nNote the annotations used in the four main POJO classes. We\u2019ve used the \u201c**@Document**\u201d\u00a0annotation to specify the MongoDB collection into which data for each class should be saved. In each case, we\u2019ve specified the \u201c**aeroData**\u201d collection. In the **Airline**, **Aircraft**, and **ADSBRecord** classes, we\u2019ve also used the \u201c**@TypeAlias**\u201d annotation. Spring Data will automatically add a \u201c**\\_class**\u201d field to each of our documents containing the Java class name of the originating object. The **TypeAlias** annotation allows us to override the value saved in this field and can be useful early in a project\u2019s development if it\u2019s suspected the class type may change. Finally, in the **AeroData** abstract class, we\u2019ve used the \u201c@id\u201d annotation to specify the field Spring Data will use in the MongoDB \\_id field of our documents.\n\nLet\u2019s go ahead and update our project to add and retrieve some data. Start by adding your MongoDB connection URI to application.properties. (A free MongoDB Atlas cluster can be created if you need one by signing up at [cloud.mongodb.com.)\n\n```\nspring.data.mongodb.uri=mongodb://myusername:mypassword@abc-c0-shard-00-00.ftspj.mongodb.net:27017,abc-c0-shard-00-01.ftspj.mongodb.net:27017,abc-c0-shard-00-02.ftspj.mongodb.net:27017/air_tracker?ssl=true&replicaSet=atlas-k9999h-shard-0&authSource=admin&retryWrites=true&w=majority\n```\n\nNote that having unencrypted user credentials in a properties file is obviously not best practice from a security standpoint and this approach should only be used for testing and educational purposes. For more details on options for connecting to MongoDB securely, including the use of keystores and cloud identity mechanisms, refer to the\u00a0MongoDB documentation.\n\nWith our connection details in place, we can now update the main application entry class. 
Because we are not using a view or controller, we\u2019ll set the application up as a **CommandLineRunner** to view output on the command line:\n\n```java\npackage com.mongodb.devrel.gcr.aerodata;\n\nimport java.util.Date;\nimport java.util.Optional;\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.boot.CommandLineRunner;\nimport org.springframework.boot.SpringApplication;\nimport org.springframework.boot.autoconfigure.SpringBootApplication;\n\n@SpringBootApplication\npublic class AerodataApplication implements CommandLineRunner {\n\n @Autowired\n private AirlineRepository airlineRepo;\n \n @Autowired\n private AircraftRepository aircraftRepo;\n \n @Autowired\n private ADSBRecordRepository adsbRepo;\n\n public static void main(String] args) {\n SpringApplication.run(AerodataApplication.class, args);\n }\n\n @Override\n public void run(String... args) throws Exception {\n\n // save an airline\n airlineRepo.save(new Airline(\"DAL\", \"Delta Air Lines\", \"United States\", \"US\", \"DELTA\", \"delta.com\"));\n \n // add some aircraft aircraft\n aircraftRepo.save(new Aircraft(\"DAL_a93d7c\", \"N695CA\", \"Bombardier Inc\", \"CL-600-2D24\"));\n aircraftRepo.save(new Aircraft(\"DAL_ab8379\", \"N8409N\", \"Bombardier Inc\", \"CL-600-2B19\"));\n aircraftRepo.save(new Aircraft(\"DAL_a36f7e\", \"N8409N\", \"Airbus Industrie\", \"A319-114\"));\n \n //Add some ADSB position reports\n Double[] coords1 = {55.991776, -4.776722};\n GeoPoint geoPoint = new GeoPoint(coords1);\n adsbRepo.save(new ADSBRecord(\"DAL_a36f7e_1\", 38825, 319, 428, 1024, new Date(1656980617041l), geoPoint));\n Double[] coords2 = {55.994843, -4.781466};\n geoPoint = new GeoPoint(coords2);\n adsbRepo.save(new ADSBRecord(\"DAL_a36f7e_2\", 38875, 319, 429, 1024, new Date(1656980618041l), geoPoint));\n Double[] coords3 = {55.99606, -4.783344};\n geoPoint = new GeoPoint(coords3);\n adsbRepo.save(new ADSBRecord(\"DAL_a36f7e_3\", 38892, 319, 428, 1024, new Date(1656980619041l), geoPoint)); \n \n\n // fetch all airlines\n System.out.println(\"Airlines found with findAll():\");\n System.out.println(\"-------------------------------\");\n for (Airline airline : airlineRepo.findAll()) {\n System.out.println(airline);\n }\n // fetch a specific airline by ICAO ID\n System.out.println(\"Airline found with findById():\");\n System.out.println(\"-------------------------------\");\n Optional airlineResponse = airlineRepo.findById(\"DAL\");\n System.out.println(airlineResponse.get());\n \n System.out.println();\n\n // fetch all aircraft\n System.out.println(\"Aircraft found with findAll():\");\n System.out.println(\"-------------------------------\");\n for (Aircraft aircraft : aircraftRepo.findAll()) {\n System.out.println(aircraft);\n }\n // fetch a specific aircraft by ICAO ID\n System.out.println(\"Aircraft found with findById():\");\n System.out.println(\"-------------------------------\");\n Optional aircraftResponse = aircraftRepo.findById(\"DAL_a36f7e\");\n System.out.println(aircraftResponse.get());\n \n System.out.println();\n \n // fetch all adsb records\n System.out.println(\"ADSB records found with findAll():\");\n System.out.println(\"-------------------------------\");\n for (ADSBRecord adsb : adsbRepo.findAll()) {\n System.out.println(adsb);\n }\n // fetch a specific ADSB Record by ID\n System.out.println(\"ADSB Record found with findById():\");\n System.out.println(\"-------------------------------\");\n Optional adsbResponse = adsbRepo.findById(\"DAL_a36f7e_1\");\n 
System.out.println(adsbResponse.get());\n System.out.println();\n \n }\n\n}\n```\n\nSpring Boot takes care of a lot of details in the background for us, including establishing a connection to MongoDB and autowiring our repository classes. On running the application, we are:\n\n1. Using the save method on the **Airline**, **Aircraft**, and **ADSBRecord** repositories respectively to add an airline, three aircraft, and three ADSB position report documents to our collection.\n2. Using the findAll and findById methods on the **Airline**, **Aircraft**, and **ADSBRecord** repositories respectively to retrieve, in turn, all airline documents, a specific airline document, all aircraft documents, a specific aircraft document, all ADSB position report documents, and a specific ADSB position report document.\n\nIf everything is configured correctly, we should see the following output on the command line:\n\n```bash\nAirlines found with findAll():\n-------------------------------\nAirline[id=DAL, name='Delta Air Lines', country='United States (US)', callsign='DELTA', website='delta.com']\nAirline[id=DAL_a93d7c, name='null', country='null (null)', callsign='null', website='null']\nAirline[id=DAL_ab8379, name='null', country='null (null)', callsign='null', website='null']\nAirline[id=DAL_a36f7e, name='null', country='null (null)', callsign='null', website='null']\nAirline[id=DAL_a36f7e_1, name='null', country='null (null)', callsign='null', website='null']\nAirline[id=DAL_a36f7e_2, name='null', country='null (null)', callsign='null', website='null']\nAirline[id=DAL_a36f7e_3, name='null', country='null (null)', callsign='null', website='null']\nAirline found with findById():\n-------------------------------\nAirline[id=DAL, name='Delta Air Lines', country='United States (US)', callsign='DELTA', website='delta.com']\n\nAircraft found with findAll():\n-------------------------------\nAircraft[id=DAL, tailNumber='null', manufacturer='null', model='null']\nAircraft[id=DAL_a93d7c, tailNumber='N695CA', manufacturer='Bombardier Inc', model='CL-600-2D24']\nAircraft[id=DAL_ab8379, tailNumber='N8409N', manufacturer='Bombardier Inc', model='CL-600-2B19']\nAircraft[id=DAL_a36f7e, tailNumber='N8409N', manufacturer='Airbus Industrie', model='A319-114']\nAircraft[id=DAL_a36f7e_1, tailNumber='null', manufacturer='null', model='null']\nAircraft[id=DAL_a36f7e_2, tailNumber='null', manufacturer='null', model='null']\nAircraft[id=DAL_a36f7e_3, tailNumber='null', manufacturer='null', model='null']\nAircraft found with findById():\n-------------------------------\nAircraft[id=DAL_a36f7e, tailNumber='N8409N', manufacturer='Airbus Industrie', model='A319-114']\n\nADSB records found with findAll():\n-------------------------------\nADSB[id=DAL, altitude='null', heading='null', speed='null', verticalSpeed='null' timestamp='null', latitude='null', longitude='null']\nADSB[id=DAL_a93d7c, altitude='null', heading='null', speed='null', verticalSpeed='null' timestamp='null', latitude='null', longitude='null']\nADSB[id=DAL_ab8379, altitude='null', heading='null', speed='null', verticalSpeed='null' timestamp='null', latitude='null', longitude='null']\nADSB[id=DAL_a36f7e, altitude='null', heading='null', speed='null', verticalSpeed='null' timestamp='null', latitude='null', longitude='null']\nADSB[id=DAL_a36f7e_1, altitude='38825', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:37 BST 2022', latitude='55.991776', longitude='-4.776722']\nADSB[id=DAL_a36f7e_2, altitude='38875', heading='319', speed='429', 
verticalSpeed='1024' timestamp='Tue Jul 05 01:23:38 BST 2022', latitude='55.994843', longitude='-4.781466']\nADSB[id=DAL_a36f7e_3, altitude='38892', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:39 BST 2022', latitude='55.996060', longitude='-4.783344']\nADSB Record found with findById():\n-------------------------------\nADSB[id=DAL_a36f7e_1, altitude='38825', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:37 BST 2022', latitude='55.991776', longitude='-4.776722']\n```\n\nAs you can see, our data has been successfully added to the MongoDB collection, and we are able to retrieve the data. However, there is a problem. The findAll methods of each of the repository objects are returning a result for every document in our collection, not just the documents of the class type associated with each repository. As a result, we are seeing seven documents being returned for each record type \u2014 airline, aircraft, and ADSB \u2014 when we would expect to see only one airline, three aircraft, and three ADSB position reports. Note this issue is common across all the \u201cAll\u201d repository methods \u2014 findAll, deleteAll, and notifyAll. A call to the deleteAll method on the airline repository would result in all documents in the collection being deleted, not just airline documents.\n\nTo address this, we have two options: We could override the standard Spring Boot repository findAll (and deleteAll/notifyAll) methods to factor in the class associated with each calling repository class, or we could extend the repository interface definitions to include methods to specifically retrieve only documents of the corresponding class. In our exercise, we\u2019ll concentrate on the latter approach by updating our repository interface definitions:\n\n```java\npackage com.mongodb.devrel.gcr.aerodata;\n\nimport java.util.List;\nimport java.util.Optional;\nimport org.springframework.data.mongodb.repository.MongoRepository;\nimport org.springframework.data.mongodb.repository.Query;\n\npublic interface AirlineRepository extends MongoRepository<Airline, String>{\n \n @Query(\"{_class: \\\"AirlineData\\\"}\")\n List<Airline> findAllAirlines();\n \n @Query(value=\"{_id: /^?0/, _class: \\\"AirlineData\\\"}\", sort = \"{_id: 1}\")\n Optional<Airline> findAirlineByIcaoAddr(String icaoAddr);\n\n}\n```\n\n```java\npackage com.mongodb.devrel.gcr.aerodata;\n\nimport java.util.List;\nimport org.springframework.data.mongodb.repository.MongoRepository;\nimport org.springframework.data.mongodb.repository.Query;\n\npublic interface AircraftRepository extends MongoRepository<Aircraft, String>{\n \n @Query(\"{_class: \\\"AircraftData\\\"}\")\n List<Aircraft> findAllAircraft();\n \n @Query(\"{_id: /^?0/, _class: \\\"AircraftData\\\"}\")\n List<Aircraft> findAircraftDataByIcaoAddr(String icaoAddr);\n \n}\n```\n\n```java\npackage com.mongodb.devrel.gcr.aerodata;\n\nimport java.util.List;\nimport org.springframework.data.mongodb.repository.MongoRepository;\nimport org.springframework.data.mongodb.repository.Query;\n\npublic interface ADSBRecordRepository extends MongoRepository<ADSBRecord, String>{\n \n @Query(value=\"{_class: \\\"ADSBRecord\\\"}\",sort=\"{_id: 1}\")\n List<ADSBRecord> findAllADSBRecords();\n \n @Query(value=\"{_id: /^?0/, _class: \\\"ADSBRecord\\\"}\", sort = \"{_id: 1}\")\n List<ADSBRecord> findADSBDataByIcaoAddr(String icaoAddr);\n \n}\n```\n\nIn each interface, we\u2019ve added two new function definitions \u2014 one to return all documents of the relevant type, and one to allow documents to be returned when searching by ICAO address. 
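For completeness, the first option would have meant supplying a custom implementation that constrains the base queries by the `_class` discriminator. The following is only a rough, hypothetical sketch of that idea for the airline repository, assuming a `MongoTemplate` bean and the type alias values used throughout this series; it is not the route we take here:

```java
package com.mongodb.devrel.gcr.aerodata;

import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

// Hypothetical fragment: wired in as a custom repository implementation, a class
// like this could supply a findAll() that only ever returns airline documents.
public class AirlineRepositoryCustomImpl {

    @Autowired
    private MongoTemplate mongoTemplate;

    public List<Airline> findAll() {
        // Constrain the query to documents whose discriminator marks them as airlines.
        Query query = new Query(Criteria.where("_class").is("AirlineData"));
        return mongoTemplate.find(query, Airline.class);
    }
}
```

In this exercise, though, the annotated interface methods above give us everything we need.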
Using the @Query annotation, we are able to format the queries as needed.\n\nWith our function definitions in place, we can now update the main application class:\n\n```java\npackage com.mongodb.devrel.gcr.aerodata;\n\nimport java.util.Date;\nimport java.util.Optional;\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.boot.CommandLineRunner;\nimport org.springframework.boot.SpringApplication;\nimport org.springframework.boot.autoconfigure.SpringBootApplication;\n\n@SpringBootApplication\npublic class AerodataApplication implements CommandLineRunner {\n\n @Autowired\n private AirlineRepository airlineRepo;\n \n @Autowired\n private AircraftRepository aircraftRepo;\n \n @Autowired\n private ADSBRecordRepository adsbRepo;\n\n public static void main(String[] args) {\n SpringApplication.run(AerodataApplication.class, args);\n }\n\n @Override\n public void run(String... args) throws Exception {\n\n //Delete any records from a previous run;\n airlineRepo.deleteAll();\n\n // save an airline\n airlineRepo.save(new Airline(\"DAL\", \"Delta Air Lines\", \"United States\", \"US\", \"DELTA\", \"delta.com\"));\n \n // add some aircraft aircraft\n aircraftRepo.save(new Aircraft(\"DAL_a93d7c\", \"N695CA\", \"Bombardier Inc\", \"CL-600-2D24\"));\n aircraftRepo.save(new Aircraft(\"DAL_ab8379\", \"N8409N\", \"Bombardier Inc\", \"CL-600-2B19\"));\n aircraftRepo.save(new Aircraft(\"DAL_a36f7e\", \"N8409N\", \"Airbus Industrie\", \"A319-114\"));\n \n //Add some ADSB position reports\n Double[] coords1 = {-4.776722, 55.991776};\n GeoPoint geoPoint = new GeoPoint(coords1);\n adsbRepo.save(new ADSBRecord(\"DAL_a36f7e_1\", 38825, 319, 428, 1024, new Date(1656980617041l), geoPoint));\n Double[] coords2 = {-4.781466, 55.994843};\n geoPoint = new GeoPoint(coords2);\n adsbRepo.save(new ADSBRecord(\"DAL_a36f7e_2\", 38875, 319, 429, 1024, new Date(1656980618041l), geoPoint));\n Double[] coords3 = {-4.783344, 55.99606};\n geoPoint = new GeoPoint(coords3);\n adsbRepo.save(new ADSBRecord(\"DAL_a36f7e_3\", 38892, 319, 428, 1024, new Date(1656980619041l), geoPoint)); \n \n\n // fetch all airlines\n System.out.println(\"Airlines found with findAllAirlines():\");\n System.out.println(\"-------------------------------\");\n for (Airline airline : airlineRepo.findAllAirlines()) {\n System.out.println(airline);\n }\n System.out.println();\n // fetch a specific airline by ICAO ID\n System.out.println(\"Airlines found with findAirlineByIcaoAddr(\\\"DAL\\\"):\");\n System.out.println(\"-------------------------------\");\n Optional airlineResponse = airlineRepo.findAirlineByIcaoAddr(\"DAL\");\n System.out.println(airlineResponse.get());\n \n System.out.println();\n\n // fetch all aircraft\n System.out.println(\"Aircraft found with findAllAircraft():\");\n System.out.println(\"-------------------------------\");\n for (Aircraft aircraft : aircraftRepo.findAllAircraft()) {\n System.out.println(aircraft);\n }\n System.out.println();\n // fetch Aircraft Documents specific to airline \"DAL\"\n System.out.println(\"Aircraft found with findAircraftDataByIcaoAddr(\\\"DAL\\\"):\");\n System.out.println(\"-------------------------------\");\n for (Aircraft aircraft : aircraftRepo.findAircraftDataByIcaoAddr(\"DAL\")) {\n System.out.println(aircraft);\n }\n \n System.out.println();\n \n // fetch Aircraft Documents specific to aircraft \"a36f7e\"\n System.out.println(\"Aircraft found with findAircraftDataByIcaoAddr(\\\"DAL_a36f7e\\\"):\");\n System.out.println(\"-------------------------------\");\n for 
(Aircraft aircraft : aircraftRepo.findAircraftDataByIcaoAddr(\"DAL_a36f7e\")) {\n System.out.println(aircraft);\n }\n \n System.out.println();\n \n // fetch all adsb records\n System.out.println(\"ADSB records found with findAllADSBRecords():\");\n System.out.println(\"-------------------------------\");\n for (ADSBRecord adsb : adsbRepo.findAllADSBRecords()) {\n System.out.println(adsb);\n }\n System.out.println();\n // fetch ADSB Documents specific to airline \"DAL\"\n System.out.println(\"ADSB Documents found with findADSBDataByIcaoAddr(\\\"DAL\\\"):\");\n System.out.println(\"-------------------------------\");\n for (ADSBRecord adsb : adsbRepo.findADSBDataByIcaoAddr(\"DAL\")) {\n System.out.println(adsb);\n }\n \n System.out.println();\n \n // fetch ADSB Documents specific to aircraft \"a36f7e\"\n System.out.println(\"ADSB Documents found with findADSBDataByIcaoAddr(\\\"DAL_a36f7e\\\"):\");\n System.out.println(\"-------------------------------\");\n for (ADSBRecord adsb : adsbRepo.findADSBDataByIcaoAddr(\"DAL_a36f7e\")) {\n System.out.println(adsb);\n }\n }\n}\n```\n\nNote that as well as the revised search calls, we also added a call to deleteAll on the airline repository to remove data added by prior runs of the application.\n\nWith the updates in place, when we run the application, we should now see the expected output:\n\n```bash\nAirlines found with findAllAirlines():\n-------------------------------\nAirline[id=DAL, name='Delta Air Lines', country='United States (US)', callsign='DELTA', website='delta.com']\n\nAirlines found with findAirlineByIcaoAddr(\"DAL\"):\n-------------------------------\nAirline[id=DAL, name='Delta Air Lines', country='United States (US)', callsign='DELTA', website='delta.com']\n\nAircraft found with findAllAircraft():\n-------------------------------\nAircraft[id=DAL_a93d7c, tailNumber='N695CA', manufacturer='Bombardier Inc', model='CL-600-2D24']\nAircraft[id=DAL_ab8379, tailNumber='N8409N', manufacturer='Bombardier Inc', model='CL-600-2B19']\nAircraft[id=DAL_a36f7e, tailNumber='N8409N', manufacturer='Airbus Industrie', model='A319-114']\n\nAircraft found with findAircraftDataByIcaoAddr(\"DAL\"):\n-------------------------------\nAircraft[id=DAL_a36f7e, tailNumber='N8409N', manufacturer='Airbus Industrie', model='A319-114']\nAircraft[id=DAL_a93d7c, tailNumber='N695CA', manufacturer='Bombardier Inc', model='CL-600-2D24']\nAircraft[id=DAL_ab8379, tailNumber='N8409N', manufacturer='Bombardier Inc', model='CL-600-2B19']\n\nAircraft found with findAircraftDataByIcaoAddr(\"DAL_a36f7e\"):\n-------------------------------\nAircraft[id=DAL_a36f7e, tailNumber='N8409N', manufacturer='Airbus Industrie', model='A319-114']\n\nADSB records found with findAllADSBRecords():\n-------------------------------\nADSB[id=DAL_a36f7e_1, altitude='38825', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:37 BST 2022', latitude='55.991776', longitude='-4.776722']\nADSB[id=DAL_a36f7e_2, altitude='38875', heading='319', speed='429', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:38 BST 2022', latitude='55.994843', longitude='-4.781466']\nADSB[id=DAL_a36f7e_3, altitude='38892', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:39 BST 2022', latitude='55.996060', longitude='-4.783344']\n\nADSB Documents found with findADSBDataByIcaoAddr(\"DAL\"):\n-------------------------------\nADSB[id=DAL_a36f7e_1, altitude='38825', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:37 BST 2022', 
latitude='55.991776', longitude='-4.776722']\nADSB[id=DAL_a36f7e_2, altitude='38875', heading='319', speed='429', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:38 BST 2022', latitude='55.994843', longitude='-4.781466']\nADSB[id=DAL_a36f7e_3, altitude='38892', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:39 BST 2022', latitude='55.996060', longitude='-4.783344']\n\nADSB Documents found with findADSBDataByIcaoAddr(\"DAL_a36f7e\"):\n-------------------------------\nADSB[id=DAL_a36f7e_1, altitude='38825', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:37 BST 2022', latitude='55.991776', longitude='-4.776722']\nADSB[id=DAL_a36f7e_2, altitude='38875', heading='319', speed='429', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:38 BST 2022', latitude='55.994843', longitude='-4.781466']\nADSB[id=DAL_a36f7e_3, altitude='38892', heading='319', speed='428', verticalSpeed='1024' timestamp='Tue Jul 05 01:23:39 BST 2022', latitude='55.996060', longitude='-4.783344']\n```\n\nIn this two-part post, we have seen how polymorphic single-collection designs in MongoDB can provide all the query flexibility of normalized relational designs, whilst simultaneously avoiding anti-patterns such as unbounded arrays and unnecessary joins. This makes the resulting collections highly performant from a search standpoint and amenable to horizontal scaling. We have also shown how we can work with these designs using Spring Data MongoDB.\n\nThe example source code used in this series is\u00a0[available in Github.", "format": "md", "metadata": {"tags": ["Java", "MongoDB", "Spring"], "pageDescription": "In the second part of the series, we will provide examples of how the single-collection pattern can be utilized in Java applications using Spring Data MongoDB.", "contentType": "Tutorial"}, "title": "Single-Collection Designs in MongoDB with Spring Data (Part 2)", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/easy-migration-relational-database-mongodb-relational-migrator", "action": "created", "body": "# Easy Migration: From Relational Database to MongoDB with MongoDB Relational Migrator\n\nDefining the process of data migration from a relational database to MongoDB has always been a complex task. Some have opted for a custom approach, adopting custom solutions such as scripts, whereas others have preferred to use third-party tools. 
\n\nIt is in this context that the Relational Migrator enters the picture, melting the complexity of this transition from a relational database to MongoDB as naturally as the sun melts the snow.\n\n## How Relational Migrator comes to our help\n\nIn the context of a relational database to MongoDB migration project, several questions arise \u2014 for example:\n\n- What tool should you use to best perform this migration?\n- How can this migration process be made time-optimal for a medium/large size database?\n- How will the data need to be modeled on MongoDB?\n- How much time/resources will it take to restructure SQL queries to MQL?\n\nConsider the following architecture, as an example:\n\n:\n\nLogstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite \"stash.\"\n\nThis tool effectively achieves the goal by dynamically performing the following operations:\n\n- Ingestion of data from the source \u2014 PostgreSQL\n- Data transformation \u2014 Logstash\n- Distribution of the transformed data to the destination \u2014 MongoDB\n\nGreat! So, it will be possible to migrate data and benefit from high flexibility in its transformation, and we also assume relatively short time frames because we've done some very good tuning, but different pipelines will have to be defined manually. \n\nLet us concretize with an example what we have been telling ourselves so far by considering the following scheme:\n\n!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb8fd2b6686f0d533/65947c900543c5aba18f25c0/image4.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb675990ee35856de/65947cc42f46f772668248fa/image9.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5a1c48d93e93ae2a/65947d0f13cde9e855200a1f/image1.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta4c41c7a110ed3fa/65947d3ca8ee437ac719978a/image6.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd5909d7af57817b6/65947d7a1f8952fb3f91391a/image5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5be52451fdb9c083/65947db3254effecce746c7a/image7.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte510e7b6a9b0a51c/65947dd3a8ee432b5f199799/image3.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt038cad66aa0954dc/65947dfeb0fbcbe233627e12/image8.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb204b4ea9c897b43/65947e27a8ee43168219979e/image2.png", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to migrate from a relational database management system (RDBMS) to the Document Model of MongoDB using the MongoDB Relational Migrator utility.", "contentType": "Tutorial"}, "title": "Easy Migration: From Relational Database to MongoDB with MongoDB Relational Migrator", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/build-smart-applications-atlas-vector-search-google-vertex-ai", "action": "created", "body": "# Build Smart Applications With Atlas Vector Search and Google Vertex AI\n\nThe application development landscape is evolving very rapidly. 
Today, users crave intuitive, context-aware experiences that understand their intent and deliver relevant results even when queries aren't perfectly phrased, putting an end to the keyword-based search practices. This is where MongoDB Atlas and Google Cloud Vertex AI can help users build and deploy scalable and resilient applications. \n\nMongoDB Atlas Vector Search is a cutting-edge tool that indexes and stores high-dimensional vectors, representing the essence of your data. It allows you to perform lightning-fast similarity searches, retrieving results based on meaning and context. Google Vertex AI is a comprehensive AI platform that houses an abundance of pre-trained models and tools, including the powerful Vertex AI PALM. This language model excels at extracting semantic representations from text data, generating those crucial vectors that fuel MongoDB Atlas Vector Search. \n\nVector Search can be useful in a variety of contexts, such as natural language processing and recommendation systems. It is a powerful technique that can be used to find similar data based on its meaning.\n\nIn this tutorial, we will see how to get started with MongoDB Atlas and Vertex AI. If you are new to MongoDB Atlas, refer to the documentation to get set up from Google Cloud Marketplace or use the Atlas registration page. \n\n## Before we begin\n\nMake sure that you have the below prerequisites set up before starting to test your application.\n\n1. MongoDB Atlas access, either by the registration page or from Google Cloud Marketplace\n2. Access to Google Cloud Project to deploy and create a Compute Engine instance\n\n## How to get set up\n\nLet us consider a use case where we are loading sample PDF documents to MongoDB Atlas as vectors and deploying an application on Google Cloud to perform a vector search on the PDF documents. \n\nWe will start with the creation of MongoDB Atlas Vector Search Index on the collection to store and retrieve the vectors generated by the Google Vertex AI PALM model. To store and access vectors on MongoDB Atlas, we need to create an Atlas Search index. \n\n### Create an Atlas Search index \n\n1. Navigate to the **Database Deployments** page for your project.\n2. Click on **Create Database.** Name your Database **vertexaiApp** and your collection **chat-vec**.\n3. Click **Atlas Search** from the Services menu in the navigation bar.\n4. Click **Create Search Index** and select **JSON Editor** under **Atlas Vector Search**. Then, click **Next.**\n5. In the **Database and Collection** section, find the database **vertexaiApp**, and select the **chat-vec** collection.\n6. Replace the default definition with the following index definition and then click **Next**. Click on **Create Search index** on the review page.\n\n```json\n{\n \"fields\": \n {\n \"type\":\"vector\",\n \"path\":\"vec\",\n \"numDimensions\":768,\n \"similarity\": \"cosine\"\n }\n ]\n}\n```\n\n### Create a Google Cloud Compute instance\n\nWe will create a [Google Cloud virtual machine instance to run and deploy the application. The Google Cloud VM can have all the default configurations. 
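If you prefer the command line to the console, roughly the same machine can be provisioned with the gcloud CLI. Treat the following as a sketch only: the zone, machine type, and image are assumptions, so adjust them to match the settings described below.

```bash
# Sketch: provision a VM for the chat app (adjust name, zone, machine type, and image)
gcloud compute instances create vertexai-chatapp \
  --zone=us-central1-a \
  --machine-type=n1-standard-1 \
  --image-family=debian-12 \
  --image-project=debian-cloud \
  --boot-disk-size=100GB \
  --scopes=https://www.googleapis.com/auth/cloud-platform
```

The console walkthrough that follows covers the same ground and also reserves a static external IP for the app.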
To begin, log into your Google Cloud Console and perform the following steps:\n\n- In the Google Cloud console, click on **Navigation menu > Compute Engine.**\n- Create a new VM instance with the below configurations:\n - **Name:** vertexai-chatapp\n - **Region**: region near your physical location\n - **Machine configurations:**\n - Machine type: High Memory, n1-standard-1\n- Boot disk: Click on **CHANGE**\n - Increase the size to 100 GB.\n - Leave the other options to default (Debian).\n- Access: Select **Allow full access** to all Cloud APIs.\n- Firewall: Select all.\n- Advanced options:\n - Networking: Expand the default network interface.\n - For External IP range: Expand the section and click on **RESERVE STATIC EXTERNAL IP ADDRESS**. This will help users to access the deployed application from the internet.\n - Name your IP and click on **Done**.\n- Click on **CREATE** and the VM will be created in about two to three minutes.\n\n### Deploy the application\n\nOnce the VM instance is created, SSH into the VM instance and clone the GitHub repository.\n\n```\ngit clone https://github.com/mongodb-partners/MongoDB-VertexAI-Qwiklab.git\n```\n\nThe repository contains a script to create and deploy a Streamlit application to transform and store PDFs in MongoDB Atlas, then search them lightning-fast with Atlas Vector Search. The app.py script in the repository uses Python and LangChain to leverage MongoDB Atlas as our data source and Google Vertex AI for generating embeddings. \n\nWe start by setting up connections and then utilize LangChain\u2019s ChatVertexAI and Google's Vertex AI embeddings to transform the PDF being loaded into searchable vectors. Finally, we constructed the Streamlit app structure, enabling users to input queries and view the top retrieved documents based on vector similarity.\n\nInstall the required dependencies on your virtual machine using the below commands:\n\n```bash\nsudo apt update\nsudo apt install python3-pip\nsudo apt install git\ngit --version\npip3 --version\ncd MongoDB-VertexAI-Qwiklab\npip3 install -r requirements.txt\n```\n\nOnce the requirements are installed, you can run the application using the below command. 
Open the application using the public IP of your VM and the port mentioned in the command output:\n\n```bash\nstreamlit run app.py\n```\n\n using our pay-as-you-go model and take advantage of our simplified billing.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blted03916b8de19681/65a7ede75d51352518fb89d6/1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltacbaf40c14ecccbc/65a7ee01cdbb961f8ac4faa4/2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc7a3b3c7a2db880c/65a7ee187a1dd7b984e136ba/3.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "Google Cloud"], "pageDescription": "Learn how to leverage MongoDB Atlas Vector Search to perform semantic search, Google Vertex AI for AI capabilities, and LangChain for seamless integration to build smart applications.", "contentType": "Tutorial"}, "title": "Build Smart Applications With Atlas Vector Search and Google Vertex AI", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/johns-hopkins-university-covid-19-rest-api", "action": "created", "body": "# A Free REST API for Johns Hopkins University COVID-19 dataset\n\n## TL;DR\n\n> Here is the REST API Documentation in Postman.\n\n## News\n\n### November 15th, 2023\n\n- John Hopkins University (JHU) has stopped collecting data as of March 10th, 2023.\n- Here is JHU's GitHub repository.\n- First data entry is 2020-01-22, last one is 2023-03-09.\n- Current REST API is implemented using Third-Party Services which is now deprecated.\n- Hosting the REST API honestly isn't very valuable now as the data isn't updated anymore and the entire cluster is available below.\n- The REST API will be removed on November 1st, 2024; but possibly earlier as it's currently mostly being queried for dates after the last entry.\n\n### December 10th, 2020\n\n- Added 3 new calculated fields:\n - confirmed_daily.\n - deaths_daily.\n - recovered_daily.\n\n### September 10th, 2020\n\n- Let me know what you think in our topic in the community forum.\n- Fixed a bug in my code which was failing if the IP address wasn't collected properly.\n\n## Introduction\n\nRecently, we built the MongoDB COVID-19 Open Data project using the dataset from Johns Hopkins University (JHU).\n\nThere are two big advantages to using this cluster, rather than directly using JHU's CSV files:\n\n- It's updated automatically every hour so any update in JHU's repo will be there after a maximum of one hour.\n- You don't need to clean, parse and transform the CSV files, our script does this for you!\n\nThe MongoDB Atlas cluster is freely accessible using the user `readonly` and the password `readonly` using the connection string:\n\n```none\nmongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19\n```\n\nYou can use this cluster to build your application, but what about having a nice and free REST API to access this curated dataset?!\n\n## COVID-19 REST API\n\n> Here is the REST API Documentation in Postman.\n\nYou can use the button in the top right corner **Run in Postman** to directly import these examples in Postman and give them a spin.\n\n that can help you scale and hopefully solve this global pandemic.\n\n## But how did I build this?\n\nSimple and easy, I used the MongoDB App Services Third-Party HTTP services to build my HTTP webhooks.\n\n> Third-Party Services are now deprecated. 
Please use custom HTTPS Endpoints instead from now on.\n\nEach time you call an API, a serverless JavaScript function is executed to fetch your documents. Let's look at the three parts of this function separately, for the **Global & US** webhook (the most detailed cllection!):\n\n- First, I log the IP address each time a webhook is called. I'm using the IP address for my `_id` field which permits me to use an upsert operation.\n\n```javascript\nfunction log_ip(payload) {\n const log = context.services.get(\"pre-prod\").db(\"logs\").collection(\"ip\");\n let ip = \"IP missing\";\n try {\n ip = payload.headers\"X-Envoy-External-Address\"][0];\n } catch (error) {\n console.log(\"Can't retrieve IP address.\")\n }\n console.log(ip);\n log.updateOne({\"_id\": ip}, {\"$inc\": {\"queries\": 1}}, {\"upsert\": true})\n .then( result => {\n console.log(\"IP + 1: \" + ip);\n });\n}\n```\n\n- Then I retrieve the query parameters and I build the query that I'm sending to the MongoDB cluster along with the [projection and sort options.\n\n```javascript\nfunction isPositiveInteger(str) {\n return ((parseInt(str, 10).toString() == str) && str.indexOf('-') === -1);\n}\n\nexports = function(payload, response) {\n log_ip(payload);\n\n const {uid, country, state, country_iso3, min_date, max_date, hide_fields} = payload.query;\n const coll = context.services.get(\"mongodb-atlas\").db(\"covid19\").collection(\"global_and_us\");\n\n var query = {};\n var project = {};\n const sort = {'date': 1};\n\n if (uid !== undefined && isPositiveInteger(uid)) {\n query.uid = parseInt(uid, 10);\n }\n if (country !== undefined) {\n query.country = country;\n }\n if (state !== undefined) {\n query.state = state;\n }\n if (country_iso3 !== undefined) {\n query.country_iso3 = country_iso3;\n }\n if (min_date !== undefined && max_date === undefined) {\n query.date = {'$gte': new Date(min_date)};\n }\n if (max_date !== undefined && min_date === undefined) {\n query.date = {'$lte': new Date(max_date)};\n }\n if (min_date !== undefined && max_date !== undefined) {\n query.date = {'$gte': new Date(min_date), '$lte': new Date(max_date)};\n }\n if (hide_fields !== undefined) {\n const fields = hide_fields.split(',');\n for (let i = 0; i < fields.length; i++) {\n projectfields[i].trim()] = 0\n }\n }\n\n console.log('Query: ' + JSON.stringify(query));\n console.log('Projection: ' + JSON.stringify(project));\n // [...]\n};\n```\n\n- Finally, I build the answer with the documents from the cluster and I'm adding a `Contact` header so you can send us an email if you want to reach out.\n\n```javascript\nexports = function(payload, response) {\n // [...]\n coll.find(query, project).sort(sort).toArray()\n .then( docs => {\n response.setBody(JSON.stringify(docs));\n response.setHeader(\"Contact\",\"devrel@mongodb.com\");\n });\n};\n```\n\nHere is the entire JavaScript function if you want to copy & paste it.\n\n```javascript\nfunction isPositiveInteger(str) {\n return ((parseInt(str, 10).toString() == str) && str.indexOf('-') === -1);\n}\n\nfunction log_ip(payload) {\n const log = context.services.get(\"pre-prod\").db(\"logs\").collection(\"ip\");\n let ip = \"IP missing\";\n try {\n ip = payload.headers[\"X-Envoy-External-Address\"][0];\n } catch (error) {\n console.log(\"Can't retrieve IP address.\")\n }\n console.log(ip);\n log.updateOne({\"_id\": ip}, {\"$inc\": {\"queries\": 1}}, {\"upsert\": true})\n .then( result => {\n console.log(\"IP + 1: \" + ip);\n });\n}\n\nexports = function(payload, response) {\n log_ip(payload);\n\n const {uid, 
country, state, country_iso3, min_date, max_date, hide_fields} = payload.query;\n const coll = context.services.get(\"mongodb-atlas\").db(\"covid19\").collection(\"global_and_us\");\n\n var query = {};\n var project = {};\n const sort = {'date': 1};\n\n if (uid !== undefined && isPositiveInteger(uid)) {\n query.uid = parseInt(uid, 10);\n }\n if (country !== undefined) {\n query.country = country;\n }\n if (state !== undefined) {\n query.state = state;\n }\n if (country_iso3 !== undefined) {\n query.country_iso3 = country_iso3;\n }\n if (min_date !== undefined && max_date === undefined) {\n query.date = {'$gte': new Date(min_date)};\n }\n if (max_date !== undefined && min_date === undefined) {\n query.date = {'$lte': new Date(max_date)};\n }\n if (min_date !== undefined && max_date !== undefined) {\n query.date = {'$gte': new Date(min_date), '$lte': new Date(max_date)};\n }\n if (hide_fields !== undefined) {\n const fields = hide_fields.split(',');\n for (let i = 0; i < fields.length; i++) {\n project[fields[i].trim()] = 0\n }\n }\n\n console.log('Query: ' + JSON.stringify(query));\n console.log('Projection: ' + JSON.stringify(project));\n\n coll.find(query, project).sort(sort).toArray()\n .then( docs => {\n response.setBody(JSON.stringify(docs));\n response.setHeader(\"Contact\",\"devrel@mongodb.com\");\n });\n};\n```\n\nOne detail to note: the payload is limited to 1MB per query. If you want to consume more data, I would recommend using the MongoDB cluster directly, as mentioned earlier, or I would filter the output to only the return the fields you really need using the `hide_fields` parameter. See the [documentation for more details.\n\n## Examples\n\nHere are a couple of example of how to run a query.\n\n- With this one you can retrieve all the metadata which will help you populate the query parameters in your other queries:\n\n```shell\ncurl --location --request GET 'https://webhooks.mongodb-stitch.com/api/client/v2.0/app/covid-19-qppza/service/REST-API/incoming_webhook/metadata'\n```\n\n- The `covid19.global_and_us` collection is probably the most complete database in this system as it combines all the data from JHU's time series into a single collection. With the following query, you can filter down what you need from this collection:\n\n```shell\ncurl --location --request GET 'https://webhooks.mongodb-stitch.com/api/client/v2.0/app/covid-19-qppza/service/REST-API/incoming_webhook/global_and_us?country=Canada&state=Alberta&min_date=2020-04-22T00:00:00.000Z&max_date=2020-04-27T00:00:00.000Z&hide_fields=_id,%20country,%20country_code,%20country_iso2,%20country_iso3,%20loc,%20state'\n```\n\nAgain, the REST API documentation in Postman is the place to go to review all the options that are offered to you.\n\n## Wrap Up\n\nI truly hope you will be able to build something amazing with this REST API. 
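As a parting example, everything the webhooks return can also be pulled straight from the cluster itself. Connecting with mongosh and the read-only credentials shared earlier is a one-liner (a sketch that assumes you have mongosh installed locally):

```shell
mongosh "mongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/covid19"
# then, inside the shell, for example:
# db.global_and_us.findOne()
```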
Even if it won't save the world from this COVID-19 pandemic, I hope it will be a great source of motivation and training for your next pet project.\n\nSend me a tweet with your project, I will definitely check it out!\n\n> If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n\n[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blteee2f1e2d29d4361/6554356bf146760db015a198/postman_arrow.png\n", "format": "md", "metadata": {"tags": ["Atlas", "Serverless", "Postman API"], "pageDescription": "Making the Johns Hopkins University COVID-19 Data open and accessible to all, with MongoDB, through a simple REST API.", "contentType": "Article"}, "title": "A Free REST API for Johns Hopkins University COVID-19 dataset", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/wordle-solving-mongodb-query-api-operators", "action": "created", "body": "# Wordle Solving Using MongoDB Query API Operators\n\nThis article details one of my MongoDB Atlas learning journeys. I joined MongoDB in the fall of 2022 as a Developer Advocate for Atlas Search. With a couple of decades of Lucene experience, I know search, but I had little experience with MongoDB itself. As part of my initiation, I needed to learn the MongoDB Query API, and coupled my learning with my Wordle interest.\n\n## Game on: Introduction to Wordle\n\nThe online game Wordle took the world by storm in 2022. For many, including myself, Wordle has become a part of the daily routine. If you\u2019re not familiar with Wordle, let me first apologize for introducing you to your next favorite time sink. The Wordle word guessing game gives you six chances to guess the five-letter word of the day. After a guess, each letter of the guessed word is marked with clues indicating how well it matches the answer. Let\u2019s jump right into an example, with our first guess being the word `ZESTY`. Wordle gives us these hints after that guess:\n\nThe hints tell us that the letter E is in the goal word though not in the second position and that the letters `Z`, `S`, `T`, and `Y` are not in the solution in any position. Our next guess factors in these clues, giving us more information about the answer:\n\nDo you know the answer at this point? Before we reveal it, let\u2019s learn some MongoDB and build a tool to help us choose possible solutions given the hints we know.\n\n## Modeling the data as BSON in MongoDB\n\nWe can easily use the forever free tier of hosted MongoDB, called Atlas. To play along, visit the Atlas homepage and create a new account or log in to your existing one.\n\nOnce you have an Atlas account, create a database to contain a collection of words. All of the possible words that can be guessed or used as daily answers are built into the source code of the single page Wordle app itself. These words have been extracted into a list that we can quickly ingest into our Atlas collection.\n\nI created a repository for the data and code here. 
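If you just want a feel for what that ingest step looks like, it is typically a single `mongoimport` command along these lines (a sketch only; the database, collection, and file names here are assumptions):

```
mongoimport --uri "mongodb+srv://<user>:<password>@<cluster>.mongodb.net/wordle" \
  --collection words --file words.json --jsonArray
```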
The README shows how to import the word list into your Atlas collection so you can play along.\n\nThe query operations needed are:\n\n* Find words that have a specific letter in an exact position.\n* Find words that do not contain any of a set of letters.\n* Find words that contain a set of specified letters, but not in any known positions.\n\nIn order to accommodate these types of criteria, a word document looks like this, using the word MONGO to illustrate:\n\n```\n{\n \"_id\":\"MONGO\",\n \"letter1\":\"M\",\n \"letter2\":\"O\",\n \"letter3\":\"N\",\n \"letter4\":\"G\",\n \"letter5\":\"O\",\n \"letters\":\"M\",\"O\",\"N\",\"G\"]\n}\n```\n\n## Finding the matches with the MongoDB Query API\n\nEach word is its own document and structured to facilitate the types of queries needed. I come from a background of full-text search where it makes sense to break down documents into the atomic findable units for clean query-ability and performance. There are, no doubt, other ways to implement the document structure and query patterns for this challenge, but bear with me while we learn how to use MongoDB Query API with this particular structure. Each letter position of the word has its own field, so we can query for exact matches. There is also a catch-all field containing an array of all unique characters in the word so queries do not have to be necessarily concerned with positions.\n\nLet\u2019s build up the MongoDB Query API to find words that match the hints from our initial guess. First, what words do not contain `Z`, `S`, `T`, or `Y`? Using MongoDB Query API [query operators in a `.find()` API call, we can use the `$nin` \\(not in\\) operator as follows:\n\n```\n{\n \"letters\":{\n \"$nin\":\"Z\",\"S\",\"T\",\"Y\"]\n }\n}\n```\nIndependently, a `.find()` for all words that have a letter `E` but not in the second position looks like this, using the [`$all` operator as there could be potentially multiple letters we know are in the solution but not which position they are in:\n\n```\n{\n \"letters\":{\n \"$all\":\"E\"]\n },\n \"letter2\":{\"$nin\":[\"E\"]}\n}\n```\n\nTo find the possible solutions, we combine all criteria for all the hints. After our `ZESTY` guess, the full `.find()` criteria is:\n\n```\n{\n \"letters\":{\n \"$nin\":[\"Z\",\"S\",\"T\",\"Y\"],\n \"$all\":[\"E\"]\n },\n \"letter2\":{\"$nin\":[\"E\"]}\n}\n```\n\nOut of the universe of all 2,309 words, there are 394 words possible after our first guess.\n\nNow on to our second guess, `BREAD`, which gave us several other tidbits of information about the answer. We now know that the answer also does not contain the letters `B` or `D`, so we add that to our letters field `$nin` clause. We also know the answer has an `R` and `A` somewhere, but not in the positions we initially guessed. And we have now know the third letter is an `E`, which is matched using the [`$eq` operator. Combining all of this information from both of our guesses, `ZESTY` and `BREAD`, we end up with this criteria:\n\n```\n{\n \"letters\":{\n \"$nin\":\"Z\",\"S\",\"T\",\"Y\",\"B\",\"D\"],\n \"$all\":[\"E\",\"R\",\"A\"]\n },\n \"letter2\":{\"$nin\":[\"E\",\"R\"]},\n \"letter3\":{\"$eq\":\"E\"},\n \"letter4\":{\"$nin\":[\"A\"]}\n}\n```\n\nHas the answer revealed itself yet to you? 
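If you would like to check for yourself, the combined criteria can be pasted straight into a `.find()` call in mongosh or the Atlas UI, assuming the word documents were imported into a collection named `words`:

```
db.words.find({
  "letters":{
    "$nin":["Z","S","T","Y","B","D"],
    "$all":["E","R","A"]
  },
  "letter2":{"$nin":["E","R"]},
  "letter3":{"$eq":"E"},
  "letter4":{"$nin":["A"]}
})
```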
If not, go ahead and import the word list into your Atlas cluster and run the aggregation.\n\nIt\u2019s tedious to accumulate all of the hints into `.find()` criteria manually, and duplicate letters in the answer can present a challenge when translating the color-coded hints to MongoDB Query API, so I wrote a bit of Ruby code to handle the details. From the command-line, using [this code, the possible words after our first guess looks like this\u2026.\n\n```\n$ ruby word_guesser.rb \"ZESTY x~xxx\"\n{\"letters\":{\"$nin\":\"Z\",\"S\",\"T\",\"Y\"],\"$all\":[\"E\"]},\"letter2\":{\"$nin\":[\"E\"]}}\nABIDE\nABLED\nABODE\nABOVE\n.\n.\n.\nWOVEN\nWREAK\nWRECK\n394\n```\n\nThe output of running `word_guesser.rb` consists first of the MongoDB Query API generated, followed by all of the possible matching words given the hints provided, ending with the number of words listed. The command-line arguments to the word guessing script are one or more quoted strings consisting of the guessed word and a representation of the hints provided from that word where `x` is a greyed out letter, `~` is a yellow letter, and `^` is a green letter. It\u2019s up to the human solver to pick one of the listed words to try for the next guess. After our second guess, the command and output are:\n\n```\n$ ruby word_guesser.rb \"ZESTY x~xxx\" \"BREAD x~^~x\"\n{\"letters\":{\"$nin\":[\"Z\",\"S\",\"T\",\"Y\",\"B\",\"D\"],\"$all\":[\"E\",\"R\",\"A\"]},\"letter2\":{\"$nin\":[\"E\",\"R\"]},\"letter3\":{\"$eq\":\"E\"},\"letter4\":{\"$nin\":[\"A\"]}}\nOPERA\n1\n```\n\nVoila, solved! Only one possible word after our second guess.\n\n![OPERA\n\nIn summary, this fun exercise allowed me to learn MongoDB\u2019s Query API operators, specifically `$all`, `$eq`, and `$nin` operators for this challenge.\n\nTo learn more about the MongoDB Query API, check out these resources:\n* Introduction to MongoDB Query API\n* Getting Started with Atlas and the MongoDB Query Language \\(MQL\\)(now referred to as the MongoDB Query API)\n* The free MongoDB CRUD Operations: Insert and Find Documents course at MongoDB University", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Let\u2019s learn a few MongoDB Query API operators while solving Wordle", "contentType": "Article"}, "title": "Wordle Solving Using MongoDB Query API Operators", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/media-storage-integrating-azure-blob-storage-mongodb-spring-boot", "action": "created", "body": "# Seamless Media Storage: Integrating Azure Blob Storage and MongoDB with Spring Boot\n\nFrom social media to streaming services, many applications require a mixture of different types of data. If you are designing an application that requires storing images or videos, a good idea is to store your media using a service specially designed to handle large objects of unstructured data. \n\nYour MongoDB database is not the best place to store your media files directly. The maximum BSON document size is 16MB. This helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. This provides an obstacle as this limit can easily be surpassed by images or videos. \n\nMongoDB provides GridFS as a solution to this problem. MongoDB GridFS is a specification for storing and retrieving large files that exceed the BSON-document size limit and works by dividing the file into chunks and storing each chunk as a separate document. 
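For a sense of what the GridFS route looks like in code, here is a minimal sketch using the sync Java driver's `GridFSBuckets` helper. It is illustrative only (the connection string, database, file, and bucket names are placeholders) and is not part of the application we build below.

```java
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.gridfs.GridFSBucket;
import com.mongodb.client.gridfs.GridFSBuckets;
import org.bson.types.ObjectId;

import java.io.FileInputStream;
import java.io.InputStream;

public class GridFsUploadSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string, database, and bucket names
        MongoDatabase db = MongoClients.create("<your-connection-string>").getDatabase("<your-database>");
        GridFSBucket bucket = GridFSBuckets.create(db, "media");
        try (InputStream stream = new FileInputStream("video.mp4")) {
            // The driver splits the stream into chunks and stores them in the bucket's collections
            ObjectId fileId = bucket.uploadFromStream("video.mp4", stream);
            System.out.println("Stored file with id: " + fileId);
        }
    }
}
```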
In a second collection, it stores the metadata for these files, including what chunks each file is composed of. While this may work for some use cases, oftentimes, it is a good idea to use a service dedicated to storing large media files and linking to that in your MongoDB document. Azure Blob (**B**inary **L**arge **Ob**jects) Storage is optimized for storing massive amounts of unstructured data and designed for use cases such as serving images directly to a browser, streaming video and audio, etc. Unstructured data is data that doesn't conform to a specific data model or format, such as binary data (how we store our media files).\n\nIn this tutorial, we are going to build a Java API with Spring Boot that allows you to upload your files, along with any metadata you wish to store. When you upload your file, such as an image or video, it will upload to Azure Blob Storage. It will store the metadata, along with a link to where the file is stored, in your MongoDB database. This way, you get all the benefits of MongoDB databases while taking advantage of how Azure Blob Storage deals with these large files.\n\n or higher\n - Maven or Gradle, but this tutorial will reference Maven\n - A MongoDB cluster deployed and configured; if you need help, check out our MongoDB Atlas tutorial on how to get started\n - An Azure account with an active subscription\n\n## Set up Azure Storage\n\nThere are a couple of different ways you can set up your Azure storage, but we will use the Microsoft Azure Portal. Sign in with your Azure account and it will take you to the home page. At the top of the page, search \"Storage accounts.\"\n\n.\n\nSelect the subscription and resource group you wish to use, and give your storage account a name. The region, performance, and redundancy settings are depending on your plans with this application, but the lowest tiers have all the features we need.\n\nIn networking, select to enable public access from all networks. This might not be desirable for production but for following along with this tutorial, it allows us to bypass configuring rules for network access.\n\nFor everything else, we can accept the default settings. Once your storage account is created, we\u2019re going to navigate to the resource. You can do this by clicking \u201cGo to resource,\u201d or return to the home page and it will be listed under your resources.\n\nThe next step is to set up a container. A container organizes a set of blobs, similar to a directory in a file system. A storage account can include an unlimited number of containers, and a container can store an unlimited number of blobs. On the left blade, select the containers tab, and click the plus container option. A menu will come up where you name your container (and configure access level if you don't want the default, private). Now, let's launch our container!\n\nIn order to connect your application to Azure Storage, create your `Shared Access Signature` (SAS). SAS allows you to have granular control over how your client can access the data. Select \u201cShared access signature\u201d from the left blade menu and configure it to allow the services and resource types you wish to allow. For this tutorial, select \u201cObject\u201d under the allowed resource types. Object is for blob-level APIs, allowing operations on individual blobs, like uploading, downloading, or deleting an image. The rest of the settings you can leave as the default configuration. 
If you would like to learn more about what configurations are best suited for your application, check out Microsoft\u2019s documentation. Once you have configured it to your desired settings, click \u201cGenerate SAS and connection string.\u201d Your SAS will be generated below this button.\n\n and click \u201cConnect.\u201d If you need help, check out our guide in the docs.\n\n. With MongoRepository, you don\u2019t need to provide implementation code for many basic CRUD methods for your MongoDB database, such as save, findById, findAll, delete, etc. Spring Data MongoDB automatically generates the necessary implementation based on the method names and conventions.\n\nNow that we have the repository set up, it's time to set up our service layer. This acts as the intermediate between our repository (data access layer) and our controller (REST endpoints) and contains the applications business logic. We'll create another package `com.example.azureblob.service` and add our class `ImageMetadataService.java`.\n\n```java\n@Service\npublic class ImageMetadataService {\n\n @Autowired\n private ImageMetadataRepository imageMetadataRepository;\n \n @Value(\"${spring.cloud.azure.storage.blob.container-name}\")\n private String containerName;\n \n @Value(\"${azure.blob-storage.connection-string}\")\n private String connectionString;\n\n private BlobServiceClient blobServiceClient;\n\n @PostConstruct\n public void init() {\n blobServiceClient = new BlobServiceClientBuilder().connectionString(connectionString).buildClient();\n }\n\n public ImageMetadata save(ImageMetadata metadata) {\n return imageMetadataRepository.save(metadata);\n }\n\n public List findAll() {\n return imageMetadataRepository.findAll();\n }\n\n public Optional findById(String id) {\n return imageMetadataRepository.findById(id);\n }\n \n public String uploadImageWithCaption(MultipartFile imageFile, String caption) throws IOException {\n String blobFileName = imageFile.getOriginalFilename();\n BlobClient blobClient = blobServiceClient.getBlobContainerClient(containerName).getBlobClient(blobFileName);\n\n blobClient.upload(imageFile.getInputStream(), imageFile.getSize(), true);\n\n String imageUrl = blobClient.getBlobUrl();\n \n ImageMetadata metadata = new ImageMetadata();\n metadata.setCaption(caption);\n metadata.setImageUrl(imageUrl);\n \n imageMetadataRepository.save(metadata);\n\n return \"Image and metadata uploaded successfully!\";\n }\n}\n```\n\nHere we have a couple of our methods set up for finding our documents in the database and saving our metadata. Our `uploadImageWithCaption` method contains the integration with Azure Blob Storage. Here you can see we create a `BlobServiceClient` to interact with Azure Blob Storage. After it succeeds in uploading the image, it gets the URL of the uploaded blob. It then stores this, along with our other metadata for the image, in our MongoDB database.\n\nOur last step is to set up a controller to establish our endpoints for the application. In a Spring Boot application, controllers handle requests, process data, and produce responses, making it possible to expose APIs and build web applications. 
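One thing worth confirming before moving on to the REST layer: the service above reads its settings through `@Value`, so they need to exist in your configuration. A minimal `application.properties` sketch could look like the following, where the Azure keys mirror the placeholders in the service class, the MongoDB URI is the standard Spring Data property, and every value is a stand-in for your own:

```properties
spring.data.mongodb.uri=mongodb+srv://<user>:<password>@<cluster>.mongodb.net/<database>
spring.cloud.azure.storage.blob.container-name=<your-container-name>
azure.blob-storage.connection-string=<your-blob-connection-string>
```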
Create a package `com.example.azureblob.service` and add the class `ImageMetadataController.java`.\n\n```java\n@RestController\n@RequestMapping(\"/image-metadata\")\npublic class ImageMetadataController {\n\n@Autowired\nprivate ImageMetadataService imageMetadataService;\n\n@PostMapping(\"/upload\")\npublic String uploadImageWithCaption(@RequestParam(\"image\") MultipartFile imageFile, @RequestParam(\"caption\") String caption) throws IOException {\nreturn imageMetadataService.uploadImageWithCaption(imageFile, caption);\n}\n\n@GetMapping(\"/\")\npublic List getAllImageMetadata() {\nreturn imageMetadataService.findAll();\n}\n\n@GetMapping(\"/{id}\")\npublic ImageMetadata getImageMetadataById(@PathVariable String id) {\nreturn imageMetadataService.findById(id).orElse(null);\n}\n}\n```\n\nHere we're able to retrieve all our metadata documents or search by `_id`, and we are able to upload our documents. \n\nThis should be everything you need to upload your files and store the metadata in MongoDB. Let's test it out! You can use your favorite tool for testing APIs but I'll be using a cURL command. \n\n```console\ncurl -F \"image=mongodb-is-webscale.png\" -F \"caption=MongoDB is Webscale\" http://localhost:8080/blob/upload\n```\n\nNow, let's check how that looks in our database and Azure storage. If we look in our collection in MongoDB, we can see our metadata, including the URL to the image. Here we just have a few fields, but depending on your application, you might want to store information like when this document was created, the filetype of the data being stored, or even the size of the file.\n\n, such as How to Use Azure Functions with MongoDB Atlas in Java.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb4993e491194e080/65783b3f2a3de32470d701d9/image3.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte01b0191c2bca39b/65783b407cf4a90e91f5d32e/image5.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt64cc0729a4a237b9/65783b400d1fdc1b0b574582/image1.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf9ecb0e59cab32f5/65783b40bd48af73c3f67c16/image6.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfb7a72157e335d34/65783b400f54458167a01f00/image4.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3117132d54aee8a1/65783b40fd77da8557159020/image7.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0807cbd5641b1011/65783b402813253a07cd197e/image2.png", "format": "md", "metadata": {"tags": ["Atlas", "Java", "Spring", "Azure"], "pageDescription": "This tutorial describes how to build a Spring Boot Application to upload your media files into Azure Blob Storage, while storing associated metadata in MongoDB.", "contentType": "Tutorial"}, "title": "Seamless Media Storage: Integrating Azure Blob Storage and MongoDB with Spring Boot", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/azure-functions-mongodb-atlas-java", "action": "created", "body": "# How to Use Azure Functions with MongoDB Atlas in Java\n\nCloud computing is one of the most discussed topics in the tech industry. Having the ability to scale your infrastructure up and down instantly is just one of the many benefits associated with serverless apps. 
In this article, we are going write the function as a service (FaaS) \u2014 i.e., a serverless function that will interact with data via a database, to produce meaningful results. FaaS can also be very useful in A/B testing when you want to quickly release an independent function without going into actual implementation or release. \n\n> In this article, you'll learn how to use MongoDB Atlas, a cloud database, when you're getting started with Azure functions in Java.\n\n## Prerequisites\n\n1. A Microsoft Azure account that we will be using for running and deploying our serverless function. If you don't have one, you can sign up for free.\n2. A MongoDB Atlas account, which is a cloud-based document database. You can sign up for an account for free. \n3. IntelliJ IDEA Community Edition to aid our development\n activities for this tutorial. If this is not your preferred IDE, then you can use other IDEs like Eclipse, Visual Studio, etc., but the steps will be slightly different.\n4. An Azure supported Java Development Kit (JDK) for Java, version 8 or 11.\n5. A basic understanding of the Java programming language.\n\n## Serverless function: Hello World!\n\nGetting started with the Azure serverless function is very simple, thanks to the Azure IntelliJ plugin, which offers various features \u2014 from generating boilerplate code to the deployment of the Azure function. So, before we jump into actual code, let's install the plugin.\n\n### Installing the Azure plugin\n\nThe Azure plugin can be installed on IntelliJ in a very standard manner using the IntelliJ plugin manager. Open Plugins and then search for \"_Azure Toolkit for IntelliJ_\" in the Marketplace. Click Install.\n\nWith this, we are ready to create our first Azure function.\n\n### First Azure function\n\nNow, let's create a project that will contain our function and have the necessary dependencies to execute it. Go ahead and select File > New > Project from the menu bar, select Azure functions from Generators as shown below, and hit Next.\n\nNow we can edit the project details if needed, or you can leave them on default.\n\nIn the last step, update the name of the project and location. \n\nWith this complete, we have a bootstrapped project with a sample function implementation. Without further ado, let's run this and see it in action.\n\n### Deploying and running\n\nWe can deploy the Azure function either locally or on the cloud. Let's start by deploying it locally. To deploy and run locally, press the play icon against the function name on line 20, as shown in the above screenshot, and select run from the dialogue.\n\nCopy the URL shown in the console log and open it in the browser to run the Azure function.\n\nThis will prompt passing the name as a query parameter as defined in the bootstrapped function.\n\n```java\nif (name == null) {\n return request.createResponseBuilder(HttpStatus.BAD_REQUEST)\n .body(\"Please pass a name on the query string or in the request body\").build();\n} else {\n return request.createResponseBuilder(HttpStatus.OK).body(\"Hello, \" + name).build();\n}\n```\n\nUpdate the URL by appending the query parameter `name` to\n`http://localhost:XXXXX/api/HttpExample?name=World`, which will print the desired result.\n\nTo learn more, you can also follow the official guide.\n\n## Connecting the serverless function with MongoDB Atlas\n\nIn the previous step, we created our first Azure function, which takes user input and returns a result. But real-world applications are far more complicated than this. 
In order to create a real-world function, which we will do in the next section, we need to \nunderstand how to connect our function with a database, as logic operates over data and databases hold the data.\n\nSimilar to the serverless function, let's use a database that is also on the cloud and has the ability to scale up and down as needed. We'll be using MongoDB Atlas, which is a document-based cloud database.\n\n### Setting up an Atlas account\n\nCreating an Atlas account is very straightforward, free forever, and perfect to validate any MVP project idea, but if you need a guide, you can follow the documentation.\n\n### Adding the Azure function IP address in Atlas Network Config\n\nThe Azure function uses multiple IP addresses instead of a single address, so let's add them to Atlas. To get the range of IP addresses, open your Azure account and search networking inside your Azure virtual machine. Copy the outbound addresses from outbound traffic.\n\nOne of the steps while creating an account with Atlas is to add the IP address for accepting incoming connection requests. This is essential to prevent unwanted access to our database. In our case, Atlas will get all the connection requests from the Azure function, so let's add this address.\n\nAdd these to the IP individually under Network Access. \n\n### Installing dependency to interact with Atlas\n\nThere are various ways of interacting with Atlas. Since we are building a service using a serverless function in Java, my preference is to use MongoDB Java driver. So, let's add the dependency for the driver in the `build.gradle` file.\n\n```groovy\ndependencies {\n implementation 'com.microsoft.azure.functions:azure-functions-java-library:3.0.0'\n // dependency for MongoDB Java driver\n implementation 'org.mongodb:mongodb-driver-sync:4.9.0'\n}\n```\n\nWith this, our project is ready to connect and interact with our cloud database.\n\n## Building an Azure function with Atlas\n\nWith all the prerequisites done, let's build our first real-world function using the MongoDB sample dataset for movies. In this project, we'll build two functions: One returns the count of the\ntotal movies in the collection, and the other returns the movie document based on the year of release.\n\nLet's generate the boilerplate code for the function by right-clicking on the package name and then selecting New > Azure function class. We'll call this function class `Movies`.\n\n```java\npublic class Movies {\n /**\n * This function listens at endpoint \"/api/Movie\". Two ways to invoke it using \"curl\" command in bash:\n * 1. curl -d \"HTTP Body\" {your host}/api/Movie\n * 2. curl {your host}/api/Movie?name=HTTP%20Query\n */\n @FunctionName(\"Movies\")\n public HttpResponseMessage run(\n @HttpTrigger(name = \"req\", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage> request,\n final ExecutionContext context) {\n context.getLogger().info(\"Java HTTP trigger processed a request.\");\n\n // Parse query parameter\n String query = request.getQueryParameters().get(\"name\");\n String name = request.getBody().orElse(query);\n\n if (name == null) {\n return request.createResponseBuilder(HttpStatus.BAD_REQUEST).body(\"Please pass a name on the query string or in the request body\").build();\n } else {\n return request.createResponseBuilder(HttpStatus.OK).body(\"Hello, \" + name).build();\n }\n }\n}\n```\n\nNow, let's:\n\n1. Update `@FunctionName` parameter from `Movies` to `getMoviesCount`.\n2. 
Rename the function name from `run` to `getMoviesCount`.\n3. Remove the `query` and `name` variables, as we don't have any query parameters.\n\nOur updated code looks like this.\n\n```java\npublic class Movies {\n\n @FunctionName(\"getMoviesCount\")\n public HttpResponseMessage getMoviesCount(\n @HttpTrigger(name = \"req\", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage> request,\n final ExecutionContext context) {\n context.getLogger().info(\"Java HTTP trigger processed a request.\");\n\n return request.createResponseBuilder(HttpStatus.OK).body(\"Hello\").build();\n }\n}\n```\n\nTo connect with MongoDB Atlas using the Java driver, we first need a connection string that can be found when we press to connect to our cluster on our Atlas account. For details, you can also refer to the documentation.\n\nUsing the connection string, we can create an instance of `MongoClients` that can be used to open connection from the `database`.\n\n```java\npublic class Movies {\n\n private static final String MONGODB_CONNECTION_URI = \"mongodb+srv://xxxxx@cluster0.xxxx.mongodb.net/?retryWrites=true&w=majority\";\n private static final String DATABASE_NAME = \"sample_mflix\";\n private static final String COLLECTION_NAME = \"movies\";\n private static MongoDatabase database = null;\n\n private static MongoDatabase createDatabaseConnection() {\n if (database == null) {\n try {\n MongoClient client = MongoClients.create(MONGODB_CONNECTION_URI);\n database = client.getDatabase(DATABASE_NAME);\n } catch (Exception e) {\n throw new IllegalStateException(\"Error in creating MongoDB client\");\n }\n }\n return database;\n }\n \n /*@FunctionName(\"getMoviesCount\")\n public HttpResponseMessage run(\n @HttpTrigger(name = \"req\", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage> request,\n final ExecutionContext context) {\n context.getLogger().info(\"Java HTTP trigger processed a request.\");\n\n return request.createResponseBuilder(HttpStatus.OK).body(\"Hello\").build();\n }*/\n}\n```\n\nWe can query our database for the total number of movies in the collection, as shown below.\n\n```java\nlong totalRecords=database.getCollection(COLLECTION_NAME).countDocuments();\n```\n\nUpdated code for `getMoviesCount` function looks like this.\n\n```java\n@FunctionName(\"getMoviesCount\")\npublic HttpResponseMessage getMoviesCount(\n @HttpTrigger(name = \"req\",\n methods = {HttpMethod.GET},\n authLevel = AuthorizationLevel.ANONYMOUS\n ) HttpRequestMessage> request,\n final ExecutionContext context) {\n\n if (database != null) {\n long totalRecords = database.getCollection(COLLECTION_NAME).countDocuments();\n return request.createResponseBuilder(HttpStatus.OK).body(\"Total Records, \" + totalRecords + \" - At:\" + System.currentTimeMillis()).build();\n } else {\n return request.createResponseBuilder(HttpStatus.INTERNAL_SERVER_ERROR).build();\n }\n}\n```\n\nNow let's deploy this code locally and on the cloud to validate the output. We'll use Postman.\n\nCopy the URL from the console output and paste it on Postman to validate the output.\n\nLet's deploy this on the Azure cloud on a `Linux` machine. 
Click on `Azure Explore` and select Functions App to create a virtual machine (VM).\n\nNow right-click on the Azure function and select Create.\n\nChange the platform to `Linux` with `Java 1.8`.\n\n> If for some reason you don't want to change the platform and would like use Window OS, then add standard DNS route before making a network request.\n> ```java \n> System.setProperty(\"java.naming.provider.url\", \"dns://8.8.8.8\"); \n> ```\n\nAfter a few minutes, you'll notice the VM we just created under `Function App`. Now, we can deploy our app onto it.\n\nPress Run to deploy it.\n\nOnce deployment is successful, you'll find the `URL` of the serverless function.\n\nAgain, we'll copy this `URL` and validate using Postman.\n\nWith this, we have successfully connected our first function with\nMongoDB Atlas. Now, let's take it to next level. We'll create another function that returns a movie document based on the year of release. \n\nLet's add the boilerplate code again.\n\n```java\n@FunctionName(\"getMoviesByYear\")\npublic HttpResponseMessage getMoviesByYear(\n @HttpTrigger(name = \"req\",\n methods = {HttpMethod.GET},\n authLevel = AuthorizationLevel.ANONYMOUS\n ) HttpRequestMessage> request,\n final ExecutionContext context) {\n\n}\n```\n\nTo capture the user input year that will be used to query and gather information from the collection, add this code in:\n\n```java\nfinal int yearRequestParam = valueOf(request.getQueryParameters().get(\"year\"));\n```\n\nTo use this information for querying, we create a `Filters` object that can pass as input for `find` function. \n\n```java\nBson filter = Filters.eq(\"year\", yearRequestParam);\nDocument result = collection.find(filter).first();\n```\n\nThe updated code is:\n\n```java\n@FunctionName(\"getMoviesByYear\")\npublic HttpResponseMessage getMoviesByYear(\n @HttpTrigger(name = \"req\",\n methods = {HttpMethod.GET},\n authLevel = AuthorizationLevel.ANONYMOUS\n ) HttpRequestMessage> request,\n final ExecutionContext context) {\n\n final int yearRequestParam = valueOf(request.getQueryParameters().get(\"year\"));\n MongoCollection collection = database.getCollection(COLLECTION_NAME);\n\n if (database != null) {\n Bson filter = Filters.eq(\"year\", yearRequestParam);\n Document result = collection.find(filter).first();\n return request.createResponseBuilder(HttpStatus.OK).body(result.toJson()).build();\n } else {\n return request.createResponseBuilder(HttpStatus.BAD_REQUEST).body(\"Year missing\").build();\n }\n}\n```\n\nNow let's validate this against Postman. \n\nThe last step in making our app production-ready is to secure the connection `URI`, as it contains credentials and should be kept private. One way of securing it is storing it into an environment variable. \n\nAdding an environment variable in the Azure function can be done via the Azure portal and Azure IntelliJ plugin, as well. For now, we'll use the Azure IntelliJ plugin, so go ahead and open Azure Explore in IntelliJ. \n\nThen, we select `Function App` and right-click `Show Properties`.\n\nThis will open a tab with all existing properties. We add our property into it. \n\nNow we can update our function code to use this variable. From \n\n```java\nprivate static final String MONGODB_CONNECTION_URI = \"mongodb+srv://xxxxx:xxxx@cluster0.xxxxx.mongodb.net/?retryWrites=true&w=majority\";\n```\nto \n\n```java\nprivate static final String MONGODB_CONNECTION_URI = System.getenv(\"MongoDB_Connection_URL\");\n```\n\nAfter redeploying the code, we are all set to use this app in production. 
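\n\nTo tie the pieces together, here is a minimal, consolidated sketch of what the finished function could look like once the connection string comes from the environment variable. It assumes the `MongoDB_Connection_URL` app setting configured above and the same `sample_mflix.movies` collection; the class name `MoviesCountFunction`, the `synchronized` guard, and the try/catch block are illustrative choices, not necessarily the exact code in the GitHub repository.\n\n```java\nimport com.microsoft.azure.functions.*;\nimport com.microsoft.azure.functions.annotation.*;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoDatabase;\n\nimport java.util.Optional;\n\npublic class MoviesCountFunction {\n\n    // Secrets come from the app setting added above, not from source code.\n    private static final String MONGODB_CONNECTION_URI = System.getenv(\"MongoDB_Connection_URL\");\n    private static final String DATABASE_NAME = \"sample_mflix\";\n    private static final String COLLECTION_NAME = \"movies\";\n    private static MongoDatabase database = null;\n\n    // Lazily create one shared client so warm invocations reuse the connection.\n    private static synchronized MongoDatabase createDatabaseConnection() {\n        if (database == null) {\n            MongoClient client = MongoClients.create(MONGODB_CONNECTION_URI);\n            database = client.getDatabase(DATABASE_NAME);\n        }\n        return database;\n    }\n\n    @FunctionName(\"getMoviesCount\")\n    public HttpResponseMessage getMoviesCount(\n            @HttpTrigger(name = \"req\",\n                    methods = {HttpMethod.GET},\n                    authLevel = AuthorizationLevel.ANONYMOUS)\n            HttpRequestMessage<Optional<String>> request,\n            final ExecutionContext context) {\n\n        try {\n            long totalRecords = createDatabaseConnection()\n                    .getCollection(COLLECTION_NAME)\n                    .countDocuments();\n            return request.createResponseBuilder(HttpStatus.OK)\n                    .body(\"Total Records: \" + totalRecords)\n                    .build();\n        } catch (Exception e) {\n            context.getLogger().severe(\"Unable to reach MongoDB Atlas: \" + e.getMessage());\n            return request.createResponseBuilder(HttpStatus.INTERNAL_SERVER_ERROR).build();\n        }\n    }\n}\n```\n\nCalling `createDatabaseConnection()` inside the handler, rather than only checking `database != null`, ensures the first request initializes the connection and warm invocations reuse it.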
\n\n## Summary\nThank you for reading \u2014 hopefully you find this article informative! The complete source code of the app can be found on GitHub.\n\nIf you're looking for something similar using the Node.js runtime, check out the other tutorial on the subject.\n\nWith MongoDB Atlas on Microsoft Azure, developers receive access to the most comprehensive, secure, scalable, and cloud\u2013based developer data platform on the market. Now, with the availability of Atlas on the Azure Marketplace, it\u2019s never been easier for users to start building with Atlas while streamlining procurement and billing processes. Get started today through the Atlas on Azure Marketplace listing.\n\nIf you have any queries or comments, you can share them on the MongoDB forum or tweet me @codeWithMohit.", "format": "md", "metadata": {"tags": ["Atlas", "Java", "Azure"], "pageDescription": "In this article, you'll learn how to use MongoDB Atlas, a cloud database, when you're getting started with Azure functions in Java.", "contentType": "Tutorial"}, "title": "How to Use Azure Functions with MongoDB Atlas in Java", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/langchain-vector-search", "action": "created", "body": "# Introduction to LangChain and MongoDB Atlas Vector Search\n\nIn this tutorial, we will leverage the power of LangChain, MongoDB, and OpenAI to ingest and process data created after ChatGPT-3.5. Follow along to create your own chatbot that can read lengthy documents and provide insightful answers to complex queries!\n\n### What is LangChain?\nLangChain is a versatile Python library that enables developers to build applications that are powered by large language models (LLMs). LangChain actually helps facilitate the integration of various LLMs (ChatGPT-3, Hugging Face, etc.) in other applications and understand and utilize recent information. As mentioned in the name, LangChain chains together different components, which are called links, to create a workflow. Each individual link performs a different task in the process, such as accessing a data source, calling a language model, processing output, etc. Since the order of these links can be moved around to create different workflows, LangChain is super flexible and can be used to build a large variety of applications. \n\n### LangChain and MongoDB\nMongoDB integrates nicely with LangChain because of the semantic search capabilities provided by MongoDB Atlas\u2019s vector search engine. This allows for the perfect combination where users can query based on meaning rather than by specific words! Apart from MongoDB LangChain Python integration and MongoDB LangChain Javascript integration, MongoDB recently partnered with LangChain on the LangChain templates release to make it easier for developers to build AI-powered apps.\n\n## Prerequisites for success\n\n - MongoDB Atlas account\n - OpenAI API account and your API key\n - IDE of your choice (this tutorial uses Google Colab)\n\n## Diving into the tutorial\nOur first step is to ensure we\u2019re downloading all the crucial packages we need to be successful in this tutorial. In Google Colab, please run the following command:\n\n```\n!pip install langchain pypdf pymongo openai python-dotenv tiktoken\n```\nHere, we\u2019re installing six different packages in one. 
The first package is `langchain` (the package for the framework we are using to integrate language model capabilities), `pypdf` (a library for working with PDF documents in Python), `pymongo` (the official MongoDB driver for Python so we can interact with our database from our application), `openai` (so we can use OpenAI\u2019s language models), `python-dotenv` (a library used to read key-value pairs from a .env file), and `tiktoken` (a package for token handling). \n\n### Environment configuration\nOnce this command has been run and our packages have been successfully downloaded, let\u2019s configure our environment. Prior to doing this step, please ensure you have saved your OpenAI API key and your connection string from your MongoDB Atlas cluster in a `.env` file at the root of your project. Help on finding your MongoDB Atlas connection string can be found in the docs.\n\n```\nimport os\nfrom dotenv import load_dotenv\nfrom pymongo import MongoClient\n\nload_dotenv(override=True)\n\n# Add an environment file to the notebook root directory called .env with MONGO_URI=\"xxx\" to load these environment variables\n\nOPENAI_API_KEY = os.environ\"OPENAI_API_KEY\"]\nMONGO_URI = os.environ[\"MONGO_URI\"]\nDB_NAME = \"langchain-test-2\"\nCOLLECTION_NAME = \"test\"\nATLAS_VECTOR_SEARCH_INDEX_NAME = \"default\"\n\nEMBEDDING_FIELD_NAME = \"embedding\"\nclient = MongoClient(MONGO_URI)\ndb = client[DB_NAME]\ncollection = db[COLLECTION_NAME]\n```\n\nPlease feel free to name your database, collection, and even your vector search index anything you like. Just continue to use the same names throughout the tutorial. The success of this code block ensures that both your database and collection are created in your MongoDB cluster. \n\n### Loading in our data\nWe are going to be loading in the `GPT-4 Technical Report` PDF. As mentioned above, this report came out after OpenAI\u2019s ChatGPT information cutoff date, so the learning model isn\u2019t trained to answer questions about the information included in this 100-page document. \n\nThe LangChain package will help us answer any questions we have about this PDF. Let\u2019s load in our data:\n\n```\nfrom langchain.document_loaders import PyPDFLoader\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.vectorstores import MongoDBAtlasVectorSearch\n\nloader = PyPDFLoader(\"https://arxiv.org/pdf/2303.08774.pdf\")\ndata = loader.load()\n\ntext_splitter = RecursiveCharacterTextSplitter(chunk_size = 500, chunk_overlap = 50)\ndocs = text_splitter.split_documents(data)\n\n# insert the documents in MongoDB Atlas Vector Search\nx = MongoDBAtlasVectorSearch.from_documents(\ndocuments=docs, embedding=OpenAIEmbeddings(disallowed_special=()), collection=MONGODB_COLLECTION, index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME\n)\n```\nIn this code block, we are loading in our PDF, using a command to split up the data into various chunks, and then we are inserting the documents into our collection so we can use our search index on the inserted data. \n\nTo test and make sure our data is properly loaded in, run a test: \n```\ndocs[0]\n```\nYour output should look like this:\n![output from our docs[0] command to see if our data is loaded correctly\n\n### Creating our search index\nLet\u2019s head over to our MongoDB Atlas user interface to create our Vector Search Index. \nFirst, click on the \u201cSearch\u201d tab and then on \u201cCreate Search Index.\u201d You\u2019ll be taken to this page. 
Please click on \u201cJSON Editor.\u201d \n\nPlease make sure the correct database and collection are pressed, and make sure you have the correct index name chosen that was defined above. Then, paste in the search index we are using for this tutorial:\n\n```\n{\n \"fields\": \n {\n \"type\": \"vector\",\n \"path\": \"embedding\", \n \"numDimensions\": 1536, \n \"similarity\": \"cosine\" \n },\n {\n \"type\": \"filter\",\n \"path\": \"source\" \n }\n \n ]\n}\n\n```\nThese fields are to specify the field name in our documents. With `embedding`, we are specifying that the dimensions of the model used to embed are `1536`, and the similarity function used to find the nearest k neighbors is `cosine`. It\u2019s crucial that the dimensions in our search index match that of the language model we are using to embed our data. \n\nCheck out our [Vector Search documentation for more information on the index configuration settings.\n\nOnce set up, it\u2019ll look like this:\n\nCreate the search index and let it load. \n\n## Querying our data\nNow, we\u2019re ready to query our data! We are going to show various ways of querying our data in this tutorial. We are going to utilize filters along with Vector Search to see our results. Let\u2019s get started. Please ensure you are connected to your cluster prior to attempting to query or it will not work.\n\n### Semantic search in LangChain\nTo get started, let\u2019s first see an example using LangChain to perform a semantic search:\n\n```\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.vectorstores import MongoDBAtlasVectorSearch\n\nvector_search = MongoDBAtlasVectorSearch.from_connection_string(\n MONGO_URI,\n DB_NAME + \".\" + COLLECTION_NAME,\n OpenAIEmbeddings(disallowed_special=()),\n index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME\n)\nquery = \"gpt-4\"\nresults = vector_search.similarity_search(\n query=query,\n k=20,\n)\n\nfor result in results:\n print( result)\n```\nThis gives the output:\n\nThis gives us the relevant results that semantically match the intent behind the question. Now, let\u2019s see what happens when we ask a question using LangChain.\n\n### Question and answering in LangChain\nRun this code block to see what happens when we ask questions to see our results: \n```\nqa_retriever = vector_search.as_retriever(\n search_type=\"similarity\",\n search_kwargs={\n \"k\": 200,\n \"post_filter_pipeline\": {\"$limit\": 25}]\n }\n)\nfrom langchain.prompts import PromptTemplate\nprompt_template = \"\"\"Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\n\"\"\"\nPROMPT = PromptTemplate(\n template=prompt_template, input_variables=[\"context\", \"question\"]\n)\nfrom langchain.chains import RetrievalQA\nfrom langchain.chat_models import ChatOpenAI\nfrom langchain.llms import OpenAI\n\nqa = RetrievalQA.from_chain_type(llm=OpenAI(),chain_type=\"stuff\", retriever=qa_retriever, return_source_documents=True, chain_type_kwargs={\"prompt\": PROMPT})\n\ndocs = qa({\"query\": \"gpt-4 compute requirements\"})\n\nprint(docs[\"result\"])\nprint(docs['source_documents'])\n```\nAfter this is run, we get the result: \n```\nGPT-4 requires a large amount of compute for training, it took 45 petaflops-days of compute to train the model. [Document(page_content='gpt3.5Figure 4. GPT performance on academic and professional exams. 
In each case, we simulate\n```\nThis provides a succinct answer to our question, based on the data source provided. \n\n## Conclusion\nCongratulations! You have successfully loaded in external data and queried it using LangChain and MongoDB. For more information on MongoDB Vector Search, please visit our [documentation \n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "This comprehensive tutorial takes you through how to integrate LangChain with MongoDB Atlas Vector Search.", "contentType": "Tutorial"}, "title": "Introduction to LangChain and MongoDB Atlas Vector Search", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/gaming-startups-2023", "action": "created", "body": "# MongoDB Atlas for Gaming, Startups to Watch in 2023\n\nIn the early days, and up until a decade ago, games were mostly about graphics prowess and fun game play that keep players coming back, wanting for more. And that's still the case today, but modern games have proven that data is also a crucial part of video games.\n\nAs developers leverage a data platform like MongoDB Atlas for gaming, they can do more, faster, and make the game better by focusing engineering resources on the player's experience, which can be tailored thanks to insights leveraged during the game sessions. The experience can continue outside the game too, especially with the rise in popularity of eSports and their legions of fans who gather around a fandom.\n\n## Yile Technology\n\n*Mezi Wu, Research and Development Manager at Yile Technology (left) and Yi-Zheng Lin, Senior Database Administrator of Yile Technology*\n\nYile Technology Co. Ltd. is a mobile game development company founded in 2018 in Taiwan. Since then, it has developed social games that have quickly acquired a large audience. For example, its Online808 social casino game has rapidly crossed the 1M members mark as Yile focuses intensely on user experience improvement and game optimization.\n\nYile developers leverage the MongoDB Atlas platform for two primary reasons. First, it's about performance. Yile developers realized early in their success that even cloud relational databases (RDBMS) were challenging to scale horizontally. Early tests showed RDBMS could not achieve Yile's desired goal of having a 0.5s minimum game response time.\n\n\"Our team sought alternatives to find a database with much stronger horizontal scalability. After assessing the pros and cons of a variety of solutions on the market, we decided to build with MongoDB's document database,\" Mezi Wu, Research and Development Manager at Yile Technology, said.\n\nThe R&D team thought MongoDB was easy to use and supported by vast online resources, including discussion forums. It only took one month to move critical data back-end components, like player profiles, from RDBMS to MongoDB and eliminate game database performance issues.\n\nThe second is about operations. Wu said, \"MongoDB Atlas frees us from the burden of basic operational maintenance and maximizes the use of our most valuable resources: our people.\"\n\nThat's why after using the self-managed MongoDB community version at first, Yile Technology moved to the cloud-managed version of MongoDB, MongoDB Atlas, to alleviate the maintenance and monitoring burden experienced by the R&D team after a game's launch. 
It's natural to overwatch the infrastructure after a new launch, but the finite engineering resources are best applied to optimizing the game and adding new features.\n\n\"Firstly, with support from the MongoDB team, we have gained a better understanding of MongoDB features and advantages and become more precise in our usage. Secondly, MongoDB Atlas provides an easy-to-use operation interface, which is faster and more convenient for database setup and can provide a high-availability architecture with zero downtime,\" says Yi-Zheng Lin, Senior Database Administrator.\n\nHaving acquired experience and confidence, now validated by rapid success, Yile Technology plans to expand its usage of MongoDB further. The company is interested in the MongoDB transaction features for its cash flow data and the MongoDB aggregation pipeline to analyze users' behavior.\n\n## Beamable\n\nBased in Boston, USA, Beamable is a company that streamlines game development and deployment for game developers. Beamable does that by providing a game server architecture that handles the very common needs of backend game developers, which offloads a sizable chunk of the development process, leaving more time to fine-tune game mechanics and stickiness.\n\nGame data (also called game state) is a very important component in game development, but the operations and tools required to maximize its utilization and efficiency are almost as critical. Building such tools and processes can be daunting, especially for smaller up-and-coming game studios, no matter how talented.\n\nFor example, Beamable lets developers integrate, manage, and analyze their data with a web dashboard called LiveOps Portal so engineers don't have to build an expensive custom live games solution. That's only one of the many game backend aspects Beamable handles, so check the whole list on their features page.\n\nBeamable's focus on integrating itself into the development workflow is one of the most crucial advantages of their offering, because every game developer wants to tweak things right in the game's editor --- for example, in Unity, for which Beamable's integration is impressive and complete.\n\nTo achieve such a feat, Beamable built the platform on top of MongoDB Atlas \"from day one\" according to Ali El Rhermoul (listen to the podcast ep. 151), and therefore started on a solid\n\nand scalable developer data platform to innovate upon, leaving the database operations to MongoDB, while focusing on adding value to their customers. Beamable helps many developers, which translates into an enormous aggregated amount of data.\n\nAdditionally, MongoDB's document model works really well for games and that has been echoed many times in the games industry. Games have some of the most rapidly changing schemas, and some games offer new features, items, and rewards on a daily basis, if not hourly.\n\nWith Beamable, developers can easily add game features such as leaderboards, commerce offers, or even identity management systems that are GDPR-compatible. Beamable is so confident in its platform that developers can try for free with a solid feature set, and seamlessly upgrade to get active support or enterprise features.\n\n## Bemyfriends\n\nbemyfriends is a South Korean company that built a SaaS solution called b.stage, which lets creators, brands, talents, and IP holders connect with their fans in meaningful, agreeable, and effective ways, including monetization. 
bemyfriends is different from any other competitor because the creators are in control and own entirely all data created or acquired, even if they decide to leave.\n\nWith b.stage, creators have a dedicated place where they can communicate, monetize, and grow their businesses at their own pace, free from feed algorithms. There, they can nurture their fans into super fans. b.stage supports multiple languages (system and content) out of the box. However, membership, e-commerce, live-streaming, content archives, and even community features (including token-gated ones) are also built-in and integrated to single admin.\n\nBuilt-in analytics tools and dashboards are available for in-depth analysis without requiring external tool integration. Creators can focus on their content and fans without worrying about complex technical implementations. That makes b.stage a powerful and straightforward fandom solution with high-profile creators, such as eSports teams T1, KT Rolster and Nongshim Redforce, three teams with millions of gamer fans in South Korea and across the world.\n\nbemyfriends uses MongoDB as its primary data platform. June Kay Kim (CTO, bemyfriends) explained that engineers initially tested with an RDBMS solution but quickly realized that scaling a relational database at the required scale would be difficult. MongoDB's scalability and performance were crucial criteria in the data platform selection.\n\nAdditionally, MongoDB's flexible schema was an essential feature for the bemyfriends team. Their highly innovative product demands many different data schemas, and each can be subject to frequent modifications to integrate the latest features creators need.\n\nWhile managing massive fandoms, downtime is not an option, so the ability to make schema modifications without incurring downtime was also a requirement for the data platform. For all these reasons, bemyfriends use MongoDB Atlas to power the vast majority of the data in their SaaS solution.\n\nBuilding with the corporate slogan of \"Whatever you make, we will help you make more of it!,\" bemyfriend has created a fantastic tool for fandom business, whether their fans are into music, movies, games, or a myriad of other things --- the sky's the limit. Creators can focus on their fandom, knowing the most crucial piece of their fandom business, the data, is truly theirs.\n\n## Diagon\n\nDiagon is a gaming company based in Lagos, Nigeria. They are building a hyper-casual social gaming platform called \"CASUAL by Diagon\" where users can access several games. There are about 10 games at the moment, and Diagon is currently working on developing and publishing more games on its platform, by working with new game developers currently joining the in-house team. The building of an internal game development team will be coming with the help of a fresh round of funding for the start-up (Diagon Pre-Seed Round).\n\nThe games are designed to be very easy to play so that more people can play games while having a break, waiting in line, or during other opportune times. Not only do players have the satisfaction of progressing and winning the games, but there's also a social component.\n\nDiagon has a system of leaderboards to help the best players gain visibility within the community. At the same time, raffles make people more eager to participate, regardless of their gaming skills.\n\nDiagon utilized MongoDB from the start, and one key defining factor was MongoDB's flexible schema. 
It means that the same collection (\"table,\" in RDBMS lingo) can contain documents using multiple schemas, or schema versions, as long as the code can handle them. This flexibility allows game developers to quickly add properties or new data types without incurring downtime, thus accelerating the pace of innovation.\n\nDiagon also runs on MongoDB Atlas, the MongoDB platform, which handles the DevOps aspect of the database, leaving developers to focus on making their games better. \"Having data as objects is the future,\" says Jeremiah Onojah, Founder of and Product Developer at Diagon. And Diagion's engineers are just getting started: \"I believe there's so much more to get out of MongoDB,\" he adds, noting that future apps are planned to run on MongoDB.\n\nFor example, an area of interest for Onojah is MongoDB Atlas Search, a powerful integrated Search feature, powered by Lucene. Atlas developers can tap into this very advanced search engine without having to integrate a third-party system, thanks to the unified MongoDB Query Language (MQL).\n\nDiagon is growing fast and has a high retention rate of 20%. Currently, 80% of its user base comes from Nigeria, but the company already sees users coming from other locations, which demonstrates that growth could be worldwide. Diagon is one of the startups from the MongoDB Startup Program.\n\n## Conclusion\n\nMongoDB Atlas is an ideal developer data platform for game developers, whether you are a solo developer or working on AAA titles. Developers agree that MongoDB's data model helps them change their data layer quicker to match the desired outcome.\n\nAll the while, MondoDB Atlas enables their applications to reach global scale and high availability (99.995% SLA) without involving complex operations. Finally, the unique Atlas data services --- like full-text search, data lake, analytics workload, mobile sync, and Charts ---\u00a0make it easy to extract insights from past and real-time data.\n\nCreate a free MongoDB Atlas cluster and start prototyping your next game back end. Listen to the gaming MongoDB podcast playlist to learn more about how other developers use MongoDB. If you are going to GDC 2023, come to our booth, talks, user group meetup, and events. They are all listed at mongodb.com/gdc.\n\nLast and not least, if your startup uses MongoDB, our MongoDB startup program can help you reach the next level faster, with Atlas credits and access to MongoDB experts.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "This article highlights startups in the games industry that use MongoDB as a backend. Their teams describe why they chose MongoDB Atlas and how it makes their development more productive.", "contentType": "Article"}, "title": "MongoDB Atlas for Gaming, Startups to Watch in 2023", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/introducing-realm-flipper-plugin", "action": "created", "body": "# Technical Preview of a Realm Flipper Plugin\n\nReact Native is a framework built by many components, and often, there are multiple ways to do the same thing. Debugging is an example of that. React Native exposes the Chrome DevTools Protocol, and you are able to debug mobile apps using the Chrome browser. 
Moreover, if you are using MacOS, you can debug your app running on iOS using Safari.\n\nFlipper is a new tool for mobile developers, and in particular, in the React Native community, it\u2019s growing in popularity.\n\nIn the past, debugging a React Native app with Realm JavaScript has virtually been impossible. Switching to the\u00a0new React Native architecture, it has been possible to switch from the Chrome debugger to Flipper by using the new JavaScript engine, Hermes.\n\nDebugging is more than setting breakpoints and single stepping through code. Inspecting your database is just as important.\n\nFlipper itself can be downloaded, or\u00a0you can use Homebrew\u00a0if you are a Mac user. The plugin is available for installation in the Flipper plugin manager and on npm for the mobile side.\u00a0\n\nRead more about\u00a0getting started with the Realm Flipper plugin.\n\nIn the last two years, Realm has been investing in providing a better experience for React Native developers. Over the course of 10 weeks, a team of three interns investigated how Realm can increase developer productivity and enhance the developer experience by developing a Realm plugin for Flipper to inspect a Realm database.\n\nThe goal with the Realm Flipper Plugin is to offer a simple-to-use and powerful debugging tool for Realm databases. It enables you to explore and modify Realm directly from the user interface.\n\n## Installation\n\nThe Flipper support consists of two components. First, you need to install the `flipper-realm-plugin`\u00a0in the Flipper desktop application. You can find it in Flipper\u2019s plugin manager \u2014 simply search for it by name.\n\nSecond, you have to add Flipper support to your React Native app. Add\u00a0`realm-flipper-plugin-device`\u00a0to your app\u2019s dependencies, and add the component `` to your app\u2019s source code (realms is an array of Realm instances).\n\nOnce you launch your app \u2014 on device or simulator \u2014 you can access your database from the Flipper desktop application.\n\n## Features\n\nLive objects are a key concept of Realm. Query results and individual objects will automatically be updated when the underlying database is changed. The Realm Flipper plugin supports live objects. This means whenever objects in a Realm change, it\u2019s reflected in the plugin. This makes it easy to see what is happening inside an application. Data can either be filtered using\u00a0Realm Query Language\u00a0or explored directly in the table. Additionally, the plugin enables you to traverse linked objects inside the table or in a JSON view.\n\nThe schema tab shows an overview of the currently selected schema and its properties.\n\nSchemas are not only presented in a table but also as a directed graph, which makes it even easier to see dependencies.\n\nSee a demonstration of our plugin.\n\n## Looking ahead\n\nCurrently, our work on Hermes is only covered by pre-releases. In the near future,\u00a0we will release version 11.0.0. The Realm Flipper plugin will hopefully prove to be a useful tool when you are debugging your React Native app once you switch to Hermes.\n\nFrom the start, the plugin was split into two components. One component runs on the device, and the other runs on the desktop (inside the Flipper desktop application). 
This will make it possible to add a database inspector within an IDE such as VSCode.\n\n## Relevant links\n\n* Desktop plugin on npm\n* Device plugin on npm\n* GitHub repository\n* Flipper download\n* Flipper documentation\n* Realm Node.js documentation\n\n## Disclaimer\n\nThe Realm Flipper plugin is still in the early stage of development. We are putting it out to our community to get a better understanding of what their needs are.\n\nThe plugin is likely to change over time, and for now, we cannot commit to any promises regarding bug fixes or new features. As always, you are welcome to create pull requests and\u00a0issues.\n\nAnd don\u2019t forget \u2014\u00a0if you have questions, comments, or feedback, we\u2019d love to hear from you in the\u00a0MongoDB Community Forums.", "format": "md", "metadata": {"tags": ["JavaScript", "Realm", "React Native"], "pageDescription": "Click here for a brief introduction to the Realm Flipper plugin for React Native developers.", "contentType": "Article"}, "title": "Technical Preview of a Realm Flipper Plugin", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-data-federation-setup", "action": "created", "body": "# MongoDB Data Federation Setup\n\nAs an avid traveler, you have a love for staying at Airbnbs and have been keeping detailed notes about each one you\u2019ve stayed in over the years. These notes are spread out across different storage locations, like MongoDB Atlas and AWS S3, making it a challenge to search for a specific Airbnb with the amenities your girlfriend desires for your upcoming Valentine\u2019s Day trip. Luckily, there is a solution to make this process a lot easier. By using MongoDB\u2019s Data Federation feature, you can combine all your data into one logical view and easily search for the perfect Airbnb without having to worry about where the data is stored. This way, you can make your Valentine\u2019s Day trip perfect without wasting time searching through different databases and storage locations.\n\nDon\u2019t know how to utilize MongoDB\u2019s Data Federation feature? This tutorial will guide you through exactly how to combine your Airbnb data together for easier query-ability.\n\n## Tutorial Necessities\n\nBefore we jump in, there are a few necessities we need to have in order to be on the same page. This tutorial requires:\n\n* MongoDB Atlas.\n* An Amazon Web Services (AWS) account.\n* Access to the AWS Management Console.\n* AWS CLI.\n* MongoDB Compass.\n\n### Importing our sample data\n\nOur first step is to import our Airbnb data into our Atlas cluster and our S3 bucket, so we have data to work with throughout this tutorial. Make sure to import the dataset into both of these storage locations.\n\n### Importing via MongoDB Atlas\n\nStep 1: Create a free tier shared cluster.\n\n\u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\nStep 2: Once your cluster is set up, click the three ellipses and click \u201cLoad Sample Dataset.\"\n\nStep 3: Once you get this green message you\u2019ll know your sample dataset (Airbnb notes) is properly loaded into your cluster.\n\n### Importing via AWS S3\nStep 1: We will be using this sample data set. Please download it locally. It contains the sample data we are working with along with the S3 bucket structure necessary for this demo. 
\n\nStep 2: Once the data set is downloaded, access your AWS Management Console and navigate to their S3 service. \n\nStep 3: Hit the button \u201cCreate Bucket\u201d and follow the instructions to create your bucket and upload the sampledata.zip. \n\nStep 4: Make sure to unzip your file before uploading the folders into S3. \n\nStep 5: Once your data is loaded into the bucket, you will see several folders, each with varying data types.\n\nStep 6: Follow the path: Amazon S3 > Buckets > atlas-data-federation-demo > json/ > airbnb/ to view your Airbnb notes. Your bucket structure should look like this:\n\nCongratulations! You have successfully uploaded your extensive Airbnb notes in not one but two storage locations. Now, let\u2019s see how to retrieve this information in one location using Data Federation so we can find the perfect Airbnb. In order to do so, we need to get comfortable with the MongoDB Atlas Data Federation console. \n\n## Connecting MongoDB Atlas to S3\n\nInside the MongoDB Atlas console, on the left side, click on Data Federation.\n\nHere, click \u201cset up manually\u201d in the \"create new federated database\" dropdown in the top right corner of the UI. This will lead us to a page where we can add in our data sources. You can rename your Federated Database Instance to anything you like. Once you save it, you will not be able to change the name.\n\nLet\u2019s add in our data sources from our cluster and our bucket!\n\n### Adding in data source via AWS S3 Bucket:\nStep 1: Click on \u201cAdd Data Source.\u201d \n\nStep 2: Select the \u201cAmazon S3\u201d button and hit \u201cNext.\u201d \n\nStep 3: From here, click Next on the \u201cAuthorize an AWS IAM Role\u201d:\n\nStep 4: Click on \u201cCreate New Role in the AWS CLI\u201d: \n\nStep 5: Now, you\u2019re going to want to make sure you have AWS CLI configured on your laptop. \n\nStep 6: Follow the steps below the \u201cCreate New Role with the AWS CLI\u201d in your AWS CLI. \n\n```\naws iam create-role \\\n --role-name datafederation \\\n --assume-role-policy-document file://role-trust-policy.json\n```\n\nStep 7: You can find your \u201cARN\u201d directly in your terminal. Copy that in \u2014 it should look like this:\n\n```\narn:aws:iam::7***************:role/datafederation\n```\n\nStep 8: Enter the bucket name containing your Airbnb notes:\n\nStep 9: Follow the instructions in Atlas and save your policy role. \n\nStep 10: Copy the CLI commands listed on the screen and paste them into your terminal like so:\n\n```\naws iam put-role-policy \\\n --role-name datafederation \\\n --policy-name datafederation-policy \\\n --policy-document file://adl-s3-policy.json\n```\n\nStep 11: Access your AWS Console, locate your listingsAndReviews.json file located in your S3 bucket, and copy the S3 URI. \n\nStep 12: Enter it back into your \u201cDefine \u2018Data Sources\u2019 Using Paths Inside Your S3\u201d screen and change each step of the tree to \u201cstatic.\u201d \n\nStep 13: Drag your file from the left side of the screen to the middle where it says, \u201cDrag the dataset to your Federated Database.\u201d Following these steps correctly will result in a page similar to the screenshot below.\n\nYou have successfully added in your Airbnb notes from your S3 bucket. Nice job. Let's do the same thing for the notes saved in our Atlas cluster. 
\n\n### Adding in data source via MongoDB Atlas cluster\nStep 1: Click \u201cAdd Data Sources.\u201d\n\nStep 2: Select \u201cMongoDB Atlas Cluster\u201d and provide the cluster name along with our sample_airbnb collection. These are your Atlas Airbnb notes. \n\nStep 3: Click \u201cNext\u201d and your sample_airbnb.listingsAndReviews will appear in the left-hand side of the console.\n\nStep 4: Drag it directly under your Airbnb notes from your S3 bucket and hit \u201cSave.\u201d Your console should look like this when done:\n\nGreat job. You have successfully imported your Airbnb notes from both your S3 bucket and your Atlas cluster into one location. Let\u2019s connect to our Federated Database and see our data combined in one easily query-able location. \n\n## Connect to your federated database\nWe are going to connect to our Federated Database using MongoDB Compass. \n\nStep 1: Click the green \u201cConnect\u201d button and then select \u201cConnect using MongoDB Compass.\u201d \n\nStep 2: Copy in the connection string, making sure to switch out the user and password for your own. This user must have admin access in order to access the data. \n\nStep 3: Once you\u2019re connected to Compass, click on \u201cVirtualDatabase0\u201d and once more on \u201cVirtualCollection0.\u201d \n\nAmazing job. You can now look at all your Airbnb notes in one location! \n\n## Conclusion\n\nIn this tutorial, we have successfully stored your Airbnb data in various storage locations, combined these separate data sets into one via Data Federation, and successfully accessed our data back through MongoDB Compass. Now you can look for and book the perfect Airbnb for your trip in a fraction of the time. ", "format": "md", "metadata": {"tags": ["Atlas", "AWS"], "pageDescription": "This tutorial will guide you through exactly how to combine your Airbnb data together for easier query-ability. ", "contentType": "Tutorial"}, "title": "MongoDB Data Federation Setup", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-search-java", "action": "created", "body": "# Using Atlas Search from Java\n\nDear fellow developer, welcome! \n\nAtlas Search is a full-text search engine embedded in MongoDB Atlas that gives you a seamless, scalable experience for building relevance-based app features. Built on Apache Lucene, Atlas Search eliminates the need to run a separate search system alongside your database. The gateway to Atlas Search is the `$search` aggregation pipeline stage.\n\nThe $search stage, as one of the newest members of the MongoDB aggregation pipeline family, has gotten native, convenient support added to various language drivers. Driver support helps developers build concise and readable code. This article delves into using the Atlas Search support built into the MongoDB Java driver, where we\u2019ll see how to use the driver, how to handle `$search` features that don\u2019t yet have native driver convenience methods or have been released after the driver was released, and a glimpse into Atlas Search relevancy scoring. Let\u2019s get started!\n\n## New to search?\n\nFull-text search is a deceptively sophisticated set of concepts and technologies. From the user perspective, it\u2019s simple: good ol\u2019 `?q=query` on your web applications URL and relevant documents are returned, magically. 
There\u2019s a lot behind the classic magnifying glass search box, from analyzers, synonyms, fuzzy operators, and facets to autocomplete, relevancy tuning, and beyond. We know it\u2019s a lot to digest. Atlas Search works hard to make things easier and easier for developers, so rest assured you\u2019re in the most comfortable place to begin your journey into the joys and power of full-text search. We admittedly gloss over details here in this article, so that you get up and running with something immediately graspable and useful to you, fellow Java developers. By following along with the basic example provided here, you\u2019ll have the framework to experiment and learn more about details elided.\n\n## Setting up our Atlas environment\nWe need two things to get started, a database and data. We\u2019ve got you covered with both. First, start with logging into your Atlas account. If you don\u2019t already have an Atlas account, follow the steps for the Atlas UI in the \u201cGet Started with Atlas\u201d tutorial.\n\n### Opening network access\nIf you already had an Atlas account or perhaps like me, you skimmed the tutorial too quickly and skipped the step to add your IP address to the list of trusted IP addresses, take care of that now. Atlas only allows access to the IP addresses and users that you have configured but is otherwise restricted.\n\n### Indexing sample data\nNow that you\u2019re logged into your Atlas account, add the sample datasets to your environment. Specifically, we are using the sample_mflix collection here. Once you\u2019ve added the sample data, turn Atlas Search on for that collection by navigating to the Search section in the Databases view, and clicking \u201cCreate Search Index.\u201d\n\nOnce in the \u201cCreate Index\u201d wizard, use the Visual Editor, pick the sample_mflix.movies collection, leave the index name as \u201cdefault\u201d, and finally, click \u201cCreate Search Index.\u201d \n\nIt\u2019ll take a few minutes for the search index to be built, after which an e-mail notification will be sent. The indexing processing status can be tracked in the UI, as well.\n\nHere\u2019s what the Search section should now look like for you:\n\nVoila, now you\u2019ve got the movie data indexed into Atlas Search and can perform sophisticated full text queries against it. Go ahead and give it a try using the handy Search Tester, by clicking the \u201cQuery\u201d button. Try typing in some of your favorite movie titles or actor names, or even words that would appear in the plot or genre.\n\nBehind the scenes of the Search Tester lurks the $search pipeline stage. Clicking \u201cEdit $search Query\u201d exposes the full $search stage in all its JSON glory, allowing you to experiment with the syntax and behavior.\n\nThis is our first glimpse into the $search syntax. The handy \u201ccopy\u201d (the top right of the code editor side panel) button copies the code to your clipboard so you can paste it into your favorite MongoDB aggregation pipeline tools like Compass, MongoDB shell, or the Atlas UI aggregation tool (shown below). There\u2019s an \u201caggregation pipeline\u201d link there that will link you directly to the aggregation tool on the current collection.\n\nAt this point, your environment is set up and your collection is Atlas search-able. 
Now it\u2019s time to do some coding!\n\n## Click, click, click, \u2026 code!\n\nLet\u2019s first take a moment to reflect on and appreciate what\u2019s happened behind the scenes of our wizard clicks up to this point:\n\n* A managed, scalable, reliable MongoDB cluster has spun up.\n* Many sample data collections were ingested, including the movies database used here.\n* A triple-replicated, flexible, full-text index has been configured and built from existing content and stays in sync with database changes.\n\nThrough the Atlas UI and other tools like MongoDB Compass, we are now able to query our movies collection in, of course, all the usual MongoDB ways, and also through a proven and performant full-text index with relevancy-ranked results. It\u2019s now up to us, fellow developers, to take it across the finish line and build the applications that allow and facilitate the most useful or interesting documents to percolate to the top. And in this case, we\u2019re on a mission to build Java code to search our Atlas Search index. \n\n## Our coding project challenge\n\nLet\u2019s answer this question from our movies data:\n\n> What romantic, drama movies have featured Keanu Reeves? \n\nYes, we could answer this particular question knowing the precise case and spelling of each field value in a direct lookup fashion, using this aggregation pipeline:\n\n \n {\n $match: {\n cast: {\n $in: [\"Keanu Reeves\"],\n },\n genres: {\n $all: [\"Drama\", \"Romance\"],\n },\n },\n }\n ]\n\nLet\u2019s suppose we have a UI that allows the user to select one or more genres to filter, and a text box to type in a free form query (see the resources at the end for a site like this). If the user had typed \u201ckeanu reeves\u201d, all lowercase, the above $match would not find any movies. Doing known, exact value matching is an important and necessary capability, to be sure, yet when presenting free form query interfaces to humans, we need to allow for typos, case insensitivity, voice transcription mistakes, and other inexact, fuzzy queries. \n\n![using $match with lowercase \u201ckeanu reeves\u201d, no matches!\n\nUsing the Atlas Search index we\u2019ve already set up, we can now easily handle a variety of full text queries. We\u2019ll stick with this example throughout so you can compare and contrast doing standard $match queries to doing sophisticated $search queries.\n\n## Know the $search structure\nUltimately, regardless of the coding language, environment, or driver that we use, a BSON representation of our aggregation pipeline request is handled by the server. The Aggregation view in Atlas UI and very similarly in Compass, our useful MongoDB client-side UI for querying and analyzing MongoDB data, can help guide you through the syntax, with links directly to the pertinent Atlas Search aggregation pipeline documentation. \n\nRather than incrementally building up to our final example, here\u2019s the complete aggregation pipeline so you have it available as we adapt this to Java code. 
This aggregation pipeline performs a search query, filtering results to movies that are categorized as both Drama and Romance genres, that have \u201ckeanu reeves\u201d in the cast field, returning only a few fields of the highest ranked first 10 documents.\n\n \n {\n \"$search\": {\n \"compound\": {\n \"filter\": [\n {\n \"compound\": {\n \"must\": [\n {\n \"text\": {\n \"query\": \"Drama\",\n \"path\": \"genres\"\n }\n },\n {\n \"text\": {\n \"query\": \"Romance\",\n \"path\": \"genres\"\n }\n }\n ]\n }\n }\n ],\n \"must\": [\n {\n \"phrase\": {\n \"query\": \"keanu reeves\",\n \"path\": {\n \"value\": \"cast\"\n }\n }\n }\n ]\n },\n \"scoreDetails\": true\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"title\": 1,\n \"cast\": 1,\n \"genres\": 1,\n \"score\": {\n \"$meta\": \"searchScore\"\n },\n \"scoreDetails\": {\n \"$meta\": \"searchScoreDetails\"\n }\n }\n },\n {\n \"$limit\": 10\n }\n ]\n\nAt this point, go ahead and copy the above JSON aggregation pipeline and paste it into Atlas UI or Compass. There\u2019s a nifty feature (the \" TEXT\" mode toggle) where you can paste in the entire JSON just copied. Here\u2019s what the results should look like for you:\n\n![three-stage aggregation pipeline in Compass\n\nAs we adapt the three-stage aggregation pipeline to Java, we\u2019ll explain things in more detail.\n\nWe spend the time here emphasizing this JSON-like structure because it will help us in our Java coding. It\u2019ll serve us well to also be able to work with this syntax in ad hoc tools like Compass in order to experiment with various combinations of options and stages to arrive at what serves our applications best, and be able to translate that aggregation pipeline to Java code. It\u2019s also the most commonly documented query language/syntax for MongoDB and Atlas Search; it\u2019s valuable to be savvy with it.\n\n## Now back to your regularly scheduled Java\n\nVersion 4.7 of the MongoDB Java driver was released in July of last year (2022), adding convenience methods for the Atlas `$search` stage, while Atlas Search was made generally available two years prior. In that time, Java developers weren\u2019t out of luck, as direct BSON Document API calls to construct a $search stage work fine. Code examples in that time frame used `new Document(\"$search\",...)`. This article showcases a more comfortable way for us Java developers to use the `$search` stage, allowing clearly named and strongly typed parameters to guide you. Your IDE\u2019s method and parameter autocompletion will be a time-saver to more readable and reliable code.\n\nThere\u2019s a great tutorial on using the MongoDB Java driver in general.\n\nThe full code for this tutorial is available on GitHub. \n\nYou\u2019ll need a modern version of Java, something like:\n\n $ java --version\n openjdk 17.0.7 2023-04-18\n OpenJDK Runtime Environment Homebrew (build 17.0.7+0)\n OpenJDK 64-Bit Server VM Homebrew (build 17.0.7+0, mixed mode, sharing)\n\nNow grab the code from our repository using `git clone` and go to the working directory:\n\n git clone https://github.com/mongodb-developer/getting-started-search-java\n cd getting-started-search-java\n\nOnce you clone that code, copy the connection string from the Atlas UI (the \u201cConnect\u201d button on the Database page). You\u2019ll use this connection string in a moment to run the code connecting to your cluster. 
\n\nNow open a command-line prompt to the directory where you placed the code, and run: \n\n ATLAS_URI=\"<>\" ./gradlew run \n\nBe sure to fill in the appropriate username and password in the connection string. If you don\u2019t already have Gradle installed, the `gradlew` command should install it the first time it is executed. At this point, you should get a few pages of flurry of output to your console. If the process hangs for a few seconds and then times out with an error message, check your Atlas network permissions, the connection string you have specified the `ATLAS_URI` setting, including the username and password.\n\nUsing the `run` command from Gradle is a convenient way to run the Java `main()` of our `FirstSearchExample`. It can be run in other ways as well, such as through an IDE. Just be sure to set the `ATLAS_URI` environment variable for the environment running the code.\n\nIdeally, at this point, the code ran successfully, performing the search query that we have been describing, printing out these results:\n\n Sweet November\n Cast: Keanu Reeves, Charlize Theron, Jason Isaacs, Greg Germann]\n Genres: [Drama, Romance]\n Score:6.011996746063232\n\n Something's Gotta Give\n Cast: [Jack Nicholson, Diane Keaton, Keanu Reeves, Frances McDormand]\n Genres: [Comedy, Drama, Romance]\n Score:6.011996746063232\n\n A Walk in the Clouds\n Cast: [Keanu Reeves, Aitana S\u00e8nchez-Gij\u00e8n, Anthony Quinn, Giancarlo Giannini]\n Genres: [Drama, Romance]\n Score:5.7239227294921875\n\n The Lake House\n Cast: [Keanu Reeves, Sandra Bullock, Christopher Plummer, Ebon Moss-Bachrach]\n Genres: [Drama, Fantasy, Romance]\n Score:5.7239227294921875\n \nSo there are four movies that match our criteria \u2014 our initial mission has been accomplished.\n\n## Java $search building\nLet\u2019s now go through our project and code, pointing out the important pieces you will be using in your own project. First, our `build.gradle` file specifies that our project depends on the MongoDB Java driver, down to the specific version of the driver. There\u2019s also a convenient `application` plugin so that we can use the `run` target as we just did.\n\n plugins {\n id 'java'\n id 'application'\n }\n\n group 'com.mongodb.atlas'\n version '1.0-SNAPSHOT'\n\n repositories {\n mavenCentral()\n }\n\n dependencies {\n implementation 'org.mongodb:mongodb-driver-sync:4.10.1'\n implementation 'org.apache.logging.log4j:log4j-slf4j-impl:2.17.1'\n }\n\n application {\n mainClass = 'com.mongodb.atlas.FirstSearchExample'\n }\n\nSee [our docs for further details on how to add the MongoDB Java driver to your project.\n\nIn typical Gradle project structure, our Java code resides under `src/main/java/com/mongodb/atlas/` in FirstSearchExample.java. \n\nLet\u2019s walk through this code, section by section, in a little bit backward order. First, we open a connection to our collection, pulling the connection string from the `ATLAS_URI` environment variable:\n\n // Set ATLAS_URI in your environment\n String uri = System.getenv(\"ATLAS_URI\");\n if (uri == null) {\n throw new Exception(\"ATLAS_URI must be specified\");\n }\n\n MongoClient mongoClient = MongoClients.create(uri);\n MongoDatabase database = mongoClient.getDatabase(\"sample_mflix\");\n MongoCollection collection = database.getCollection(\"movies\");\n\nOur ultimate goal is to call `collection.aggregate()` with our list of pipeline stages: search, project, and limit. There are driver convenience methods in `com.mongodb.client.model.Aggregates` for each of these. 
\n\n AggregateIterable aggregationResults = collection.aggregate(Arrays.asList(\n searchStage,\n project(fields(excludeId(),\n include(\"title\", \"cast\", \"genres\"),\n metaSearchScore(\"score\"),\n meta(\"scoreDetails\", \"searchScoreDetails\"))),\n limit(10)));\n\nThe `$project` and `$limit` stages are both specified fully inline above. We\u2019ll define `searchStage` in a moment. The `project` stage uses `metaSearchScore`, a Java driver convenience method, to map the Atlas Search computed score (more on this below) to a pseudo-field named `score`. Additionally, Atlas Search can provide the score explanations, which itself is a performance hit to generate so only use for debugging and experimentation. Score explanation details must be requested as an option on the `search` stage for them to be available for projection here. There is not a convenience method for projecting scoring explanations, so we use the generic `meta()` method to provide the pseudo-field name and the key of the meta value Atlas Search returns for each document. The Java code above generates the following aggregation pipeline, which we had previously done manually above, showing it here to show the Java code and the corresponding generated aggregation pipeline pieces.\n\n \n {\n \"$search\": { ... }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"title\": 1,\n \"cast\": 1,\n \"genres\": 1,\n \"score\": {\n \"$meta\": \"searchScore\"\n },\n \"scoreDetails\": {\n \"$meta\": \"searchScoreDetails\"\n }\n }\n },\n {\n \"$limit\": 10\n }\n ]\n\nThe `searchStage` consists of a search operator and an additional option. We want the relevancy scoring explanation details of each document generated and returned, which is enabled by the `scoreDetails` setting that was developed and released after the Java driver version was released. Thankfully, the Java driver team built in pass-through capabilities to be able to set arbitrary options beyond the built-in ones to future-proof it. `SearchOptions.searchOptions().option()` allows us to set the `scoreDetails` option on the `$search` stage to true. Reiterating the note from above, generating score details is a performance hit on Lucene, so only enable this setting for debugging or experimentation while inspecting but do not enable it in performance sensitive environments.\n\n Bson searchStage = search(\n compound()\n .filter(List.of(genresClause))\n .must(List.of(SearchOperator.of(searchQuery))),\n searchOptions().option(\"scoreDetails\", true)\n );\n\nThat code builds this structure:\n\n \"$search\": {\n \"compound\": {\n \"filter\": [ . . . ],\n \"must\": [ . . . ]\n },\n \"scoreDetails\": true\n }\n\nWe\u2019ve left a couple of variables to fill in: `filters` and `searchQuery`. \n\n> What are filters versus other compound operator clauses? 
\n> * `filter`: clauses to narrow the query scope, not affecting the resultant relevancy score\n> * `must`: required query clauses, affecting relevancy scores\n> * `should`: optional query clauses, affecting relevancy scores\n> * `mustNot`: clauses that must not match\n\nOur (non-scoring) filter is a single search operator clause that combines required criteria for genres Drama and Romance:\n\n SearchOperator genresClause = SearchOperator.compound()\n .must(Arrays.asList(\n SearchOperator.text(fieldPath(\"genres\"),\"Drama\"),\n SearchOperator.text(fieldPath(\"genres\"), \"Romance\")\n ));\n\nAnd that code builds this query operator structure:\n\n \"compound\": {\n \"must\": [\n {\n \"text\": {\n \"query\": \"Drama\",\n \"path\": \"genres\"\n }\n },\n {\n \"text\": {\n \"query\": \"Romance\",\n \"path\": \"genres\"\n }\n }\n ]\n }\n\nNotice how we nested the `genresClause` within our `filter` array, which takes a list of `SearchOperator`s. `SearchOperator` is a Java driver class with convenience builder methods for some, but not all, of the available Atlas Search search operators. You can see we used `SearchOperator.text()` to build up the genres clauses. \n\nLast but not least is the primary (scoring!) `phrase` search operator clause to search for \u201ckeanu reeves\u201d within the `cast` field. Alas, this is one search operator that currently does not have built-in `SearchOperator` support. Again, kudos to the Java driver development team for building in a pass-through for arbitrary BSON objects, provided we know the correct JSON syntax. Using `SearchOperator.of()`, we create an arbitrary operator out of a BSON document. Note: This is why it was emphasized early on to become savvy with the JSON structure of the aggregation pipeline syntax.\n\n Document searchQuery = new Document(\"phrase\",\n new Document(\"query\", \"keanu reeves\")\n .append(\"path\", \"cast\"));\n\n## And the results are\u2026\n\nSo now we\u2019ve built the aggregation pipeline. To show the results (shown earlier), we simply iterate through `aggregationResults`:\n\n aggregationResults.forEach(doc -> {\n System.out.println(doc.get(\"title\"));\n System.out.println(\" Cast: \" + doc.get(\"cast\"));\n System.out.println(\" Genres: \" + doc.get(\"genres\"));\n System.out.println(\" Score:\" + doc.get(\"score\"));\n // printScoreDetails(2, doc.toBsonDocument().getDocument(\"scoreDetails\"));\n System.out.println(\"\");\n });\n\nThe results are ordered in descending score order. Score is a numeric factor based on the relationship between the query and each document. In this case, the only scoring component to our query was a phrase query of \u201ckeanu reeves\u201d. Curiously, our results have documents with different scores! Why is that? If we covered everything, this article would never end, so addressing the scoring differences is beyond this scope, but we\u2019ll explain a bit below for bonus and future material.\n\n## Conclusion\nYou\u2019re now an Atlas Search-savvy Java developer \u2014 well done! You\u2019re well on your way to enhancing your applications with the power of full-text search. With just the steps and code presented here, even without additional configuration and deeper search understanding, the power of search is available to you. \n\nThis is only the beginning. And it is important, as we refine our application to meet our users\u2019 demanding relevancy needs, to continue the Atlas Search learning journey. \n\n### For further information\nWe finish our code with some insightful diagnostic output. 
An aggregation pipeline execution can be [*explain*ed, dumping details of execution plans and performance timings. In addition, the Atlas Search process, `mongot`, provides details of `$search` stage interpretation and statistics.\n\n System.out.println(\"Explain:\");\n System.out.println(format(aggregationResults.explain().toBsonDocument()));\n\nWe\u2019ll leave delving into those details as an exercise to the reader, noting that you can learn a lot about how queries are interpreted/analyzed by studying the explain() output. \n\n## Bonus section: relevancy scoring\nSearch relevancy is a scientific art. Without getting into mathematical equations and detailed descriptions of information retrieval research, let\u2019s focus on the concrete scoring situation presented in our application here. The scoring component of our query is a phrase query of \u201ckeanu reeves\u201d on the cast field. We do a `phrase` query rather than a `text` query so that we search for those two words contiguously, rather than \u201ckeanu OR reeves\u201d (\u201ckeanu\u201d is a rare term, of course, but there are many \u201creeves\u201d).\n\nScoring takes into account the field length (the number of terms/words in the content), among other factors. Underneath, during indexing, each value of the cast field is run through an analysis process that tokenizes the text. Tokenization is a process splitting the content into searchable units, called terms. A \u201cterm\u201d could be a word or fragment of a word, or the exact text, depending on the analyzer settings. Take a look at the `cast` field values in the returned movies. Using the default, `lucene.standard`, analyzer, the tokens emitted split at whitespace and other word boundaries, such as the dash character.\n\nNow do you see how the field length (number of terms) varies between the documents? If you\u2019re curious of the even gnarlier details of how Lucene performs the scoring for our query, uncomment the `printScoreDetails` code in our results output loop.\n\nDon\u2019t worry if this section is a bit too much to take in right now. Stay tuned \u2014 we\u2019ve got some scoring explanation content coming shortly.\n\nWe could quick fix the ordering to at least not bias based on the absence of hyphenated actor names. Moving the queryClause into the `filters` section, rather than the `must` section, such that there would be no scoring clauses, only filtering ones, will leave all documents of equal ranking.\n\n## Searching for more?\nThere are many useful Atlas Search resources available, several linked inline above; we encourage you to click through those to delve deeper. These quick three steps will have you up and searching quickly:\n\n1. Create an Atlas account\n2. Add some content\n3. 
Create an Atlas Search index\n\nPlease also consider taking the free MongoDB University Atlas Search course.\n\nAnd finally, we\u2019ll leave you with the slick demonstration of Atlas Search on the movies collection at https://www.atlassearchmovies.com/ (though note that it fuzzily searches all searchable text fields, not just the cast field, and does so with OR logic querying, which is different than the `phrase` query only on the `cast` field we performed here).", "format": "md", "metadata": {"tags": ["Atlas", "Java"], "pageDescription": "This article delves into using the Atlas Search support built into the MongoDB Java driver", "contentType": "Article"}, "title": "Using Atlas Search from Java", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/amazon-sagemaker-and-mongodb-vector-search-part-1", "action": "created", "body": "# Part #1: Build Your Own Vector Search with MongoDB Atlas and Amazon SageMaker\n\nHave you heard about machine learning, models, and AI but don't quite know where to start? Do you want to search your data semantically? Are you interested in using vector search in your application?\n\nThen you\u2019ve come to the right place!\n\nThis series will introduce you to MongoDB Atlas Vector Search and Amazon SageMaker, and how to use both together to semantically search your data.\n\nThis first part of the series will focus on the architecture of such an application \u2014 i.e., the parts you need, how they are connected, and what they do.\n\nThe following parts of the series will then dive into the details of how the individual elements presented in this architecture work (Amazon SageMaker in Part 2 and MongoDB Atlas Vector Search in Part 3) and their actual configuration and implementation. If you are just interested in one of these two implementations, have a quick look at the architecture pictures and then head to the corresponding part of the series. But to get a deep understanding of Vector Search, I recommend reading the full series.\n\nLet\u2019s start with why though: Why should you use MongoDB Atlas Vector Search and Amazon SageMaker?\n\n## Components of your application\n\nIn machine learning, an embedding model is a type of model that learns to represent objects \u2014 such as words, sentences, or even entire documents \u2014 as vectors in a high-dimensional space. These vectors, called embeddings, capture semantic relationships between the objects.\n\nOn the other hand, a large language model, which is a term you might have heard of, is designed to understand and generate human-like text. It learns patterns and relationships within language by processing vast amounts of text data. While it also generates embeddings as an internal representation, the primary goal is to understand and generate coherent text.\n\nEmbedding models are often used in tasks like natural language processing (NLP), where understanding semantic relationships is crucial. For example, word embeddings can be used to find similarities between words based on their contextual usage.\n\nIn summary, embedding models focus on representing objects in a meaningful way in a vector space, while large language models are more versatile, handling a wide range of language-related tasks by understanding and generating text.\n\nFor our needs in this application, an embedding model is sufficient. 
In particular, we will be using All MiniLM L6 v2 by Hugging Face.\n\nAmazon SageMaker isn't just another AWS service; it's a versatile platform designed by developers, for developers. It empowers us to take control of our machine learning projects with ease. Unlike traditional ML frameworks, SageMaker simplifies the entire ML lifecycle, from data preprocessing to model deployment. As software engineers, we value efficiency, and SageMaker delivers precisely that, allowing us to focus more on crafting intelligent models and less on infrastructure management. It provides a wealth of pre-built algorithms, making it accessible even for those not deep into the machine learning field.\n\nMongoDB Atlas Vector Search is a game-changer for developers like us who appreciate the power of simplicity and efficiency in database operations. Instead of sifting through complex queries and extensive code, Atlas Vector Search provides an intuitive and straightforward way to implement vector-based search functionality. As software engineers, we know how crucial it is to enhance the user experience with lightning-fast and accurate search results. This technology leverages the benefits of advanced vector indexing techniques, making it ideal for projects involving recommendation engines, content similarity, or even gaming-related features. With MongoDB Atlas Vector Search, we can seamlessly integrate vector data into our applications, significantly reducing development time and effort. It's a developer's dream come true \u2013 practical, efficient, and designed to make our lives easier in the ever-evolving world of software development.\n\n## Generating and updating embeddings for your data\n\nThere are two steps to using Vector Search in your application.\n\nThe first step is to actually create vectors (also called embeddings or embedding vectors), as well as update them whenever your data changes. The easiest way to watch for newly inserted and updated data from your server application is to use MongoDB Atlas triggers and watch for exactly those two events. The triggers themselves are out of the scope of this tutorial but you can find other great resources about how to set them up in Developer Center.\n\nThe trigger then executes a script that creates new vectors. This can, for example, be done via MongoDB Atlas Functions or as in this diagram, using AWS Lambda. The script itself then uses the Amazon SageMaker endpoint with your desired model deployed via the REST API to create or update a vector in your Atlas database.\n\nThe important bit here that makes the usage so easy and the performance so great is that the data and the embeddings are saved inside the same database:\n\n> Data that belongs together gets saved together.\n\nHow to deploy and prepare this SageMaker endpoint and offer it as a REST service for your application will be discussed in detail in Part 2 of this tutorial.\n\n## Querying your data\n\nThe other half of your application will be responsible for taking in queries to semantically search your data.\n\nNote that a search has to be done using the vectorized version of the query. And the vectorization has to be done with the same model that we used to vectorize the data itself. The same Amazon SageMaker endpoint can, of course, be used for that.\n\nTherefore, whenever a client application sends a request to the server application, two things have to happen.\n\n1. The server application needs to call the REST service that provides the Amazon SageMaker endpoint (see the previous section).\n2. 
With the vector received, the server application then needs to execute a search using Vector Search to retrieve the results from the database.\n\nThe implementation of how to query Atlas can be found in Part 3 of this tutorial.\n\n## Wrapping it up\n\nThis short, first part of the series has provided you with an overview of a possible architecture to use Amazon SageMaker and MongoDB Atlas Vector Search to semantically search your data.\n\nHave a look at Part 2 if you are interested in how to set up Amazon SageMaker and Part 3 to go into detail about MongoDB Atlas Vector Search.\n\n\u2705 Sign-up for a free cluster.\n\n\u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\n\u2705 Get help on our Community Forums.\n", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI", "Serverless", "AWS"], "pageDescription": "In this series, we look at how to use Amazon SageMaker and MongoDB Atlas Vector Search to semantically search your data.", "contentType": "Tutorial"}, "title": "Part #1: Build Your Own Vector Search with MongoDB Atlas and Amazon SageMaker", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/searching-nearby-points-interest-mapbox", "action": "created", "body": "# Searching for Nearby Points of Interest with MongoDB and Mapbox\n\nWhen it comes to location data, MongoDB's ability to work with GeoJSON through geospatial queries is often under-appreciated. Being able to query for intersecting or nearby coordinates while maintaining performance is functionality a lot of organizations are looking for.\n\nTake the example of maintaining a list of business locations or even a fleet of vehicles. Knowing where these locations are, relative to a particular position isn't an easy task when doing it manually.\n\nIn this tutorial we're going to explore the `$near` operator within a MongoDB Realm application to find stored points of interest within a particular proximity to a position. These points of interest will be rendered on a map using the Mapbox service.\n\nTo get a better idea of what we're going to accomplish, take the following animated image for example:\n\nWe're going to pre-load our MongoDB database with a few points of interest that are formatted using the GeoJSON specification. When clicking around on the map, we're going to use the `$near` operator to find new points of interest that are within range of the marker.\n\n## The Requirements\n\nThere are numerous components that must be accounted for to be successful with this tutorial:\n\n- A MongoDB Atlas free tier cluster or better to store the data.\n- A MongoDB Realm application to access the data from a client-facing application.\n- A Mapbox free tier account or better to render the data on a map.\n\nThe assumption is that MongoDB Atlas has been properly configured and that MongoDB Realm is using the MongoDB Atlas cluster.\n\n>MongoDB Atlas can be used for FREE with a M0 sized cluster. Deploy MongoDB in minutes within the MongoDB Cloud.\n\nIn addition to Realm being pointed at the Atlas cluster, anonymous authentication for the Realm application should be enabled and an access rule should be defined for the collection. All users should be able to read all documents for this tutorial.\n\nIn this example, Mapbox is a third-party service for showing interactive map tiles. 
An account is necessary and an access token to be used for development should be obtained. You can learn how in the Mapbox documentation.\n\n## MongoDB Geospatial Queries and the GeoJSON Data Model\n\nBefore diving into geospatial queries and creating an interactive client-facing application, a moment should be taken to understand the data and indexes that must be created within MongoDB.\n\nTake the following example document:\n\n``` json\n{\n \"_id\": \"5ec6fec2318d26b626d53c61\",\n \"name\": \"WorkVine209\",\n \"location\": {\n \"type\": \"Point\",\n \"coordinates\": \n -121.4123,\n 37.7621\n ]\n }\n}\n```\n\nLet's assume that documents that follow the above data model exist in a **location_services** database and a **points_of_interest** collection.\n\nTo be successful with our queries, we only need to store the location type and the coordinates. This `location` field makes up a [GeoJSON feature, which follows a specific format. The `name` field, while useful isn't an absolute requirement. Some other optional fields might include an `address` field, `hours_of_operation`, or similar.\n\nBefore being able to execute the geospatial queries that we want, we need to create a special index.\n\nThe following index should be created:\n\n``` none\ndb.points_of_interest.createIndex({ location: \"2dsphere\" });\n```\n\nThe above index can be created numerous ways, for example, you can create it using the MongoDB shell, Atlas, Compass, and a few other ways. Just note that the `location` field is being classified as a `2dsphere` for the index.\n\nWith the index created, we can execute a query like the following:\n\n``` none\ndb.points_of_interest.find({\n \"location\": {\n \"$near\": {\n \"$geometry\": {\n \"type\": \"Point\",\n \"coordinates\": -121.4252, 37.7397]\n },\n \"$maxDistance\": 2500\n }\n }\n});\n```\n\nNotice in the above example, we're looking for documents that have a `location` field within 2,500 meters of the point provided in the filter.\n\nWith an idea of how the data looks and how the data can be accessed, let's work towards creating a functional application.\n\n## Interacting with Places using MongoDB Realm and Mapbox\n\nLike previously mentioned, you should already have a Mapbox account and MongoDB Realm should already be configured.\n\nOn your computer, create an **index.html** file with the following boilerplate code:\n\n``` xml\n\n \n \n \n\n \n \n \n \n\n```\n\nIn the above code, we're including both the Mapbox library as well as the MongoDB Realm SDK. We're creating a `map` placeholder component which will show our map, and it is lightly styled with CSS.\n\nYou can run this file locally, serve it, or host it on [MongoDB Realm.\n\nWithin the `", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to use the $near operator in a MongoDB geospatial query to find nearby points of interest.", "contentType": "Tutorial"}, "title": "Searching for Nearby Points of Interest with MongoDB and Mapbox", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/bash/get-started-atlas-aws-cloudformation", "action": "created", "body": "# Get Started with MongoDB Atlas and AWS CloudFormation\n\nIt's pretty amazing that we can now deploy and control massive systems\nin the cloud from our laptops and phones. And it's so easy to take for\ngranted when it all works, but not so awesome when everything is broken\nafter coming back on Monday morning after a long weekend! 
On top of\nthat, the tooling that's available is constantly changing and updating\nand soon you are drowning in dependabot PRs.\n\nThe reality of setting up and configuring all the tools necessary to\ndeploy an app is time-consuming, error-prone, and can result in security\nrisks if you're not careful. These are just a few of the reasons we've\nall witnessed the incredible growth of DevOps tooling as we continue the\nevolution to and in the cloud.\n\nAWS CloudFormation is an\ninfrastructure-as-code (IaC) service that helps you model and set up\nyour Amazon Web Services resources so that you can spend less time\nmanaging those resources and more time focusing on your applications\nthat run in AWS. CloudFormation, or CFN, let's users create and manage\nAWS resources directly from templates which provide dependable out of\nthe box blueprint deployments for any kind of cloud app.\n\nTo better serve customers using modern cloud-native workflows, MongoDB\nAtlas supports native CFN templates with a new set of Resource Types.\nThis new integration allows you to manage complete MongoDB Atlas\ndeployments through the AWS CloudFormation console and CLI so your apps\ncan securely consume data services with full AWS cloud ecosystem\nsupport.\n\n## Launch a MongoDB Atlas Stack on AWS CloudFormation\n\nWe created a helper project that walks you through an end-to-end example\nof setting up and launching a MongoDB Atlas stack in AWS CloudFormation.\n\nThe\nget-started-aws-cfn\nproject builds out a complete MongoDB Atlas deployment, which includes a\nMongoDB Atlas project, cluster, AWS IAM role-type database user, and IP\naccess list entry.\n\n>\n>\n>You can also use the AWS Quick Start for MongoDB\n>Atlas\n>that uses the same resources for CloudFormation and includes network\n>peering to a new or existing VPC.\n>\n>\n\nYou're most likely already set up to run the\nget-started-aws-cfn\nsince the project uses common tools like the AWS CLI and Docker, but\njust in case, head over to the\nprerequisites\nsection to check your development machine. (If you haven't already,\nyou'll want to create a MongoDB Atlas\naccount.)\n\nThe project has two main parts: \"get-setup' will deploy and configure\nthe MongoDB Atlas CloudFormation resources into the AWS region of your\nchoice, while \"get-started' will launch your complete Atlas deployment.\n\n## Step 1) Get Set Up\n\nClone the\nget-started-aws-cfn\nrepo and get\nsetup:\n\n``` bash\ngit clone https://github.com/mongodb-developer/get-started-aws-cfn\ncd get-started-aws-cfn\n./get-setup.sh\n```\n\n## Step 2) Get Started\n\nRun the get-started script:\n\n``` bash\n./get-started.sh\n```\n\nOnce the stack is launched, you will start to see resources getting\ncreated in the AWS CloudFormation console. The Atlas cluster takes a few\nminutes to spin up completely, and you can track the progress in the\nconsole or through the AWS CLI.\n\n## Step 3) Get Connected\n\nOnce your MongoDB Atlas cluster is deployed successfully, you can find\nits connection string under the Outputs tab as the value for the\n\"ClusterSrvAddress' key.\n\nThe Get-Started project also has a helper script to combine the AWS and\nMongoDB shells to securely connect via an AWS IAM role session. Check\nout connecting to your\ncluster\nfor more information.\n\nWhat next? You can connect to your MongoDB Atlas cluster from the mongo\nshell, MongoDB\nCompass, or any of\nour supported\ndrivers. 
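If you want a quick programmatic sanity check at this point, the sketch below uses the Python driver (pymongo). It is only an illustration: the SRV address is a placeholder for the `ClusterSrvAddress` output value, and it assumes your shell session already carries AWS credentials for the IAM role mapped to the database user — neither is created by the snippet itself.

``` python
# Minimal connectivity check (hypothetical values); requires: pip install "pymongo[aws]"
from pymongo import MongoClient

# Placeholder -- paste the value of the "ClusterSrvAddress" stack output here.
srv_address = "mongodb+srv://your-cluster.example.mongodb.net"

# MONGODB-AWS authentication reuses the AWS credentials already present in the
# environment (for example, the IAM role session from the helper script).
client = MongoClient(srv_address,
                     authMechanism="MONGODB-AWS",
                     authSource="$external")

print(client.admin.command("ping"))  # {'ok': 1.0} confirms the cluster is reachable
```
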
We have\nguides for those using Atlas with popular languages: Here's one for how\nto connect to Atlas with\nNode.js\nand another for getting started with\nJava.\n\n## Conclusion\n\nUse the MongoDB Atlas CloudFormation Resources to power everything from\nthe most basic \"hello-world' apps to the most advanced devops pipelines.\nJump start your new projects today with the MongoDB Atlas AWS\nCloudFormation Get-Started\nproject!\n\n>\n>\n>If you have questions, please head to our developer community\n>website where the MongoDB engineers and\n>the MongoDB community will help you build your next big idea with\n>MongoDB.\n>\n>\n\n\u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.", "format": "md", "metadata": {"tags": ["Bash", "Atlas", "AWS"], "pageDescription": "Learn how to get started with MongoDB Atlas and AWS CloudFormation.", "contentType": "Code Example"}, "title": "Get Started with MongoDB Atlas and AWS CloudFormation", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/hashicorp-vault-kmip-secrets-engine-mongodb", "action": "created", "body": "# How to Set Up HashiCorp Vault KMIP Secrets Engine with MongoDB CSFLE or Queryable Encryption\n\nEncryption is proven and trusted and has been around for close to 60 years, but there are gaps. So when we think about moving data (TLS encryption) and storing data (storage encryption), most databases have that covered. But as soon as data is in use, processed by the database, it's in plain text and more vulnerable to insider access and active breaches. Most databases do not have this covered.\n\nWith MongoDB\u2019s Client-Side Field Level Encryption (CSFLE) and Queryable Encryption, applications can encrypt sensitive plain text fields in documents prior to transmitting data to the server. This means that data processed by database (in use) will not be in plain text as it\u2019s always encrypted and most importantly still can be queried. The encryption keys used are typically stored in a key management service.\n\nOrganizations with a multi-cloud strategy face the challenge of how to manage encryption keys across cloud environments in a standardized way, as the public cloud KMS services use proprietary APIs \u2014 e.g., AWS KMS, Azure Key Vault, or GCP KMS \u2014 to manage encryption keys. Organizations wanting to have a standardized way of managing the lifecycle of encryption keys can utilize KMIP, Key Management Interoperability Protocol.\n\nAs shown in the diagram above, KMIPs eliminate the sprawl of encryption key management services in multiple cloud providers by utilizing a KMIP-enabled key provider. 
MongoDB CSFLE and Queryable Encryption support KMIP as a key provider.\n\nIn this article, I will showcase how to use MongoDB Queryable Encryption and CSFLE with Hashicorp Key Vault KMIP Secrets Engine to have a standardized way of managing the lifecycle of encryption keys regardless of cloud provider.\n\n## Encryption terminology\n\nBefore I dive deeper into how to actually use MongoDB CSFLE and Queryable Encryption, I will explain encryption terminology and the common practice to encrypt plain text data.\n\n**Customer Master Key (CMK)** is the encryption key used to protect (encrypt) the Data Encryption Keys, which is on the top level of the encryption hierarchy.\n\n**The Data Encryption Key (DEK)** is used to encrypt the data that is plain text. Once plain text is encrypted by the DEK, it will be in cipher text.\n\n**Plain text** data is unencrypted information that you wish to protect.\n\n**Cipher text** is encrypted information unreadable by a human or computer without decryption.\n\n**Envelope encryption** is the practice of encrypting **plain text** data with a **data encryption key** (DEK) and then encrypting the data key using the **customer master key**.\n\n**The prerequisites to enable querying in CSFLE or Queryable Encryption mode are:**\n\n* A running Key Management System which supports the KMIP standard \u2014 e.g., HashiCorp Key Vault. Application configured to use the KMIP endpoint.\n* Data Encryption Keys (DEK) created and an encryption JSON schema that is used by a MongoDB driver to know which fields to encrypt.\n* An authenticated MongoDB connection with CSFLE/Queryable Encryption enabled.\n* You will need a supported server version and a compatible driver version.\u00a0For this tutorial we are going to use MongoDB Atlas version 6.0. Refer to documentation to see what driver versions for\u00a0CSFLE\u00a0or\u00a0Queryable Encryption\u00a0is required.\n\nOnce the above are fulfilled, this is what happens when a query is executed.\n\n**Step 1:** Upon receiving a query, the MongoDB driver checks to see if any encrypted fields are involved using the JSON encryption schema that is configured when connecting to the database.\n\n**Step 2:** The MongoDB driver requests the Customer Master Key (CMK) key from the KMIP key provider. In our setup, it will be HashiCorp Key Vault.\n\n**Step 3:** The MongoDB driver decrypts the data encryptions keys using the CMK. The DEK is used to encrypt/decrypt the plain text fields. What fields to encrypt/decrypt are defined in the JSON encryption schema. The encrypted data encryption keys\u00a0 are stored in a key vault collection in your MongoDB cluster.\n\n**Step 4:** The driver submits the query to the MongoDB server with the encrypted fields rendered as ciphertext.\n\n**Step 5:** MongoDB returns the encrypted results of the query to the MongoDB driver, still as ciphertext.\n\n**Step 6:** MongoDB Driver decrypts the encrypted fields using DEK to plain text and returns it to the authenticated client.\n\nNext is to actually set up and configure the prerequisites needed to enable querying MongoDB in CSFLE or Queryable Encryption mode.\n\n## What you will set up\n\nSo let's look at what's required to install, configure, and run to implement what's described in the section above.\n\n* MongoDB Atlas cluster:\u00a0MongoDB Atlas is a fully managed data platform for modern applications. Storing data the way it\u2019s accessed as documents makes developers more productive. 
It provides a document-based database that is cost-efficient and resizable while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups. It allows you to focus on your applications by providing the foundation of high performance, high availability, security, and compatibility they need.\u00a0 For this tutorial we are going to use MongoDB Atlas version 6.0. Refer to documentation to see what driver versions for\u00a0CSFLE\u00a0or\u00a0Queryable Encryption\u00a0is required.\n* Hashicorp Vault Enterprise: Run and configure the Hashicorp Key Vault **KMIP** Secrets Engine, along with Scopes, Roles, and Certificates.\n* Python application: This showcases how CSFLE and Queryable Encryption can be used with HashiCorp Key Vault. I will show you how to configure DEK, JSON Schema, and a MongoDB authenticated client to connect to a database and execute queries that can query on encrypted data stored in a collection in MongoDB Atlas.\n\n## Prerequisites\n\nFirst off, we need to have at least an Atlas account to provision Atlas and then somewhere to run our automation. You can\u00a0get an Atlas account for free\u00a0at mongodb.com. If you want to take this tutorial for a spin, take the time and create your Atlas account now.\n\nYou will also need to have Docker installed as we are using a docker container where we have prebaked an image containing all needed dependencies, such as HashiCorp Key Vault, MongoDB Driver, and crypto library.. For more information on how to install Docker, see\u00a0Get Started with Docker. Also, install the latest version of\u00a0MongoDB Compass, which we will use to actually see if the fields in collection have been encrypted.\n\nNow we are almost ready to get going. You\u2019ll need to clone this tutorial\u2019s\u00a0Github repository. You can clone the repo by using the below command:\n\n```\ngit clone https://github.com/mongodb-developer/mongodb-kmip-fle-queryable\n```\n\nThere are main four steps to get this tutorial running:\n\n* Retrieval of trial license key for Hashicorp Key Vault\n* Update database connection string\n* Start docker container, embedded with Hashicorp Key Vault\n* Run Python application, showcasing CSFLE and Queryable Encryption\n\n## Retrieval of trial license key for Hashicorp Key Vault\n\nNext is to request a\u00a0trial license key for Hashicorp Enterprise Key Vault from the Hashicorp\u00a0product page. Copy the generated license key that is generated.\n\nReplace the content of **license.txt** with the generated license key in the step above. The file is located in the cloned github repository at location kmip-with-hashicorp-key-vault/vault/license.txt.\n\n## Update database connection string\n\nYou will need to update the connection string so the Python application can connect to your MongoDB Atlas cluster. 
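Before editing the configuration files, it can help to confirm that the connection string itself is valid. Below is a minimal check using pymongo; the username, password, and cluster address are placeholders you must replace with your own Atlas values.

```
from pymongo import MongoClient

# Placeholder URI -- substitute your own Atlas username, password, and cluster address.
uri = "mongodb+srv://<username>:<password>@<cluster-address>/?retryWrites=true&w=majority"

client = MongoClient(uri)
print(client.admin.command("ping"))  # {'ok': 1.0} means the URI and Atlas network access are working
```
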
It\u2019s best to update both configuration files as this tutorial will demonstrate both CSFLE and Queryable Encryption.\n\n**For CSFLE**: Open file\u00a0kmip-with-hashicorp-key-vault/configuration\\_fle.py\u00a0line 3, and update connection\\_uri.\n\n```\nencrypted_namespace = \"DEMO-KMIP-FLE.users\"\nkey_vault_namespace = \"DEMO-KMIP-FLE.datakeys\"\nconnection_uri = \"mongodb+srv://:@?retryWrites=true&w=majority\"\n# Configure the \"kmip\" provider.\nkms_providers = {\n \"kmip\": {\n \"endpoint\": \"localhost:5697\"\n }\n}\nkms_tls_options = {\n \"kmip\": {\n \"tlsCAFile\": \"vault/certs/FLE/vv-ca.pem\",\n \"tlsCertificateKeyFile\": \"vault/certs/FLE/vv-client.pem\"\n }\n}\n```\n\nReplace , , with your Atlas cluster connection configuration, after you have updated with your Atlas cluster connection details. You should have something looking like this:\n\n```\nencrypted_namespace = \"DEMO-KMIP-FLE.users\"\nkey_vault_namespace = \"DEMO-KMIP-FLE.datakeys\"\nconnection_uri = \"mongodb+srv://admin:mPassword@demo-cluster.tcrpd.mongodb.net/myFirstDatabase?retryWrites=true&w=majority\"\n# Configure the \"kmip\" provider.\nkms_providers = {\n \"kmip\": {\n \"endpoint\": \"localhost:5697\"\n }\n}\nkms_tls_options = {\n \"kmip\": {\n \"tlsCAFile\": \"vault/certs/FLE/vv-ca.pem\",\n \"tlsCertificateKeyFile\": \"vault/certs/FLE/vv-client.pem\"\n }\n}\n```\n\n**For Queryable Encryption**: Open file kmip-with-hashicorp-key-vault/configuration\\_queryable.py in the cloned Github repository, update line 3, replace , , with your Atlas cluster connection configuration. So you should have something looking like this, after you have updated with your Atlas cluster connection details.\n\n```\nencrypted_namespace = \"DEMO-KMIP-QUERYABLE.users\"\nkey_vault_namespace = \"DEMO-KMIP-QUERYABLE.datakeys\"\nconnection_uri = \"mongodb+srv://admin:mPassword@demo-cluster.tcrpd.mongodb.net/myFirstDatabase?retryWrites=true&w=majority\"\n\n# Configure the \"kmip\" provider.\nkms_providers = {\n \"kmip\": {\n \"endpoint\": \"localhost:5697\"\n }\n}\nkms_tls_options = {\n \"kmip\": {\n \"tlsCAFile\": \"vault/certs/QUERYABLE/vv-ca.pem\",\n \"tlsCertificateKeyFile\": \"vault/certs/QUERYABLE/vv-client.pem\"\n }\n}\n```\n\n## Start Docker container\n\nA prebaked docker image is prepared that has HashiCorp Vault installed\u00a0 and a Mongodb shared library. The MongoDB shared library\u00a0is the translation layer that takes an unencrypted query and translates it into an encrypted format that the server understands.\u00a0 It is what makes it so that you don't need to rewrite all of your queries with explicit encryption calls.\u00a0You don't need to build the docker image, as it\u2019s already published at docker hub. Start container in root of this repo. Container will be started and the current folder will be mounted to kmip in the running container. Port 8200 is mapped so you will be able to access the Hashicorp Key Vault Console running in the docker container. The ${PWD} is used to set the current path you are running the command from. If running this tutorial on Windows shell, replace ${PWD} with the full path to the root of the cloned Github repository.\n\n```\ndocker run -p 8200:8200 -it -v ${PWD}:/kmip piepet/mongodb-kmip-vault:latest\n```\n\n## Start Hashicorp Key Vault server\n\nRunning the below commands within the started docker container will start Hashicorp Vault Server and configure the Hashicorp KMIP Secrets engine. 
Scopes, Roles, and Certificates will be generated, vv-client.pem, vv-ca.pem, vv-key.pem, separate for CSFLE or Queryable Encryption.\n\n```\ncd kmip \n./start_and_configure_vault.sh -a\n```\n\nWait until you see the below output in your command console:\n\nYou can now access the Hashicorp Key Vault console, by going to url\u00a0http://localhost:8200/. You should see this in your browser:\n\nLet\u2019s sign in to the Hashicorp console to see what has been configured. Use the \u201cRoot token\u201d outputted in your shell console. Once you are logged in you should see this:\n\nThe script that you just executed \u2014\u00a0`./start_and_configure_vault.sh -a` \u00a0\u2014 uses the Hashicorp Vault cli to create all configurations needed, such as Scopes, Roles, and Certificates. You can explore what's created by clicking demo/kmip.\n\nIf you want to utilize the Hashicorp Key Vault server from outside the docker container, you will need to add port 5697.\n\n## Run CSFLE Python application encryption\n\nA sample Python application will be used to showcase the capabilities of CSFLE where the encryption schema is defined on the database. Let's start by looking at the main method of the Python application in the file located at `kmip-with-hashicorp-key-vault/vault_encrypt_with_csfle_kmip.py`.\n\n```\ndef main():\n reset()\n #1,2 Configure your KMIP Provider and Certificates\n kmip_provider_config = configure_kmip_provider()\n #3 Configure Encryption Data Keys\n data_keys_config = configure_data_keys(kmip_provider_config)\n #4 Create collection with Validation Schema for CSFLE defined, will be stored in\n create_collection_with_schema_validation(data_keys_config)\n #5 Configure Encrypted Client\n secure_client=configure_csfle_session()\n #6 Run Query\n create_user(secure_client)\nif __name__ == \"__main__\":\n main()\n```\n\n**Row 118:** Drops database, just to simplify rerunning this tutorial. In a production setup, this would be removed.\n\n**Row 120:**\u00a0Configures the MongoDB driver to use the Hashicorp Vault KMIP secrets engine, as the key provider. This means that CMK will be managed by the Hashicorp Vault KMIP secrets engine.\n\n**Row 122:**\u00a0Creates Data Encryption Keys to be used to encrypt/decrypt fields in collection. The encrypted data encryption keys will be stored in the database\u00a0**DEMO-KMIP-FLE**\u00a0in collection\u00a0**datakeys**.\n\n**Row 124:**\u00a0Creates collection and attaches\u00a0Encryption JSON schema that defines which fields need to be encrypted.\n\n**Row 126:**\u00a0Creates a MongoClient that enables CSFLE and uses Hashicorp Key Vault KMIP Secrets Engine as the key provider.\n\n**Row 128:** Inserts a user into database **DEMO-KMIP-FLE** and collection **users**, using the MongoClient that is configured at row 126. It then does a lookup on the SSN field to validate that MongoDB driver can query on encrypted data.\n\nLet's start the Python application by executing the below commands in the running docker container:\n\n```\ncd /kmip/kmip-with-hashicorp-key-vault/ \npython3.8 vault_encrypt_with_csfle_kmip.py\n```\n\nStart MongoDB Compass, connect to your database DEMO-KMIP-FLE, and review the collection users. Fields that should be encrypted are ssn, contact.mobile, and contact.email. 
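As a complement to checking in Compass, you can also verify the encryption from code: a client that is *not* configured for CSFLE has no access to the data encryption keys, so it can only see ciphertext. The sketch below reuses the connection string and the database/collection names from `configuration_fle.py`; treat it as an illustration rather than part of the tutorial scripts.

```
from pymongo import MongoClient

# Same value as connection_uri in configuration_fle.py
connection_uri = "<your Atlas connection string>"

# A plain client with no CSFLE configuration cannot decrypt the DEK-protected
# fields, so they come back as raw ciphertext (BSON Binary, subtype 6).
plain_client = MongoClient(connection_uri)
doc = plain_client["DEMO-KMIP-FLE"]["users"].find_one()

print(list(doc.keys()))   # document structure is still visible
print(type(doc["ssn"]))   # bson.binary.Binary -> ciphertext, not the plain-text SSN
```
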
You should now be able to see in Compass that fields that are encrypted are masked by \\*\\*\\*\\*\\*\\* shown as value \u2014 see the picture below:\n\n## Run Queryable Encryption Python application\n\nA sample Python application will be used to showcase the capabilities of Queryable Encryption, currently in Public Preview, with schema defined on the server. Let's start by looking at the main method of the Python application in the file located at `kmip-with-hashicorp-key-vault/vault_encrypt_with_queryable_kmip.py`.\n\n```\ndef main():\n reset()\n #1,2 Configure your KMIP Provider and Certificates\n kmip_provider_config = configure_kmip_provider()\n #3 Configure Encryption Data Keys\n data_keys_config = configure_data_keys(kmip_provider_config)\n #4 Create Schema for Queryable Encryption, will be stored in database\n encrypted_fields_map = create_schema(data_keys_config)\n #5 Configure Encrypted Client\n secure_client = configure_queryable_session(encrypted_fields_map)\n #6 Run Query\n create_user(secure_client)\nif __name__ == \"__main__\":\n main()\n```\n\n**Row 121:** Drops database, just to simplify rerunning application. In a production setup, this would be removed.\n\n**Row 123:** Configures the MongoDB driver to use the Hashicorp Vault KMIP secrets engine, as the key provider. This means that CMK will be managed by the Hashicorp Vault KMIP secrets engine.\n\n**Row 125:** Creates Data Encryption Keys to be used to encrypt/decrypt fields in collection. The encrypted data encryption keys will be stored in the database **DEMO-KMIP-QUERYABLE** in collection datakeys.\n\n**Row 127:** Creates Encryption Schema that defines which fields need to be encrypted. It\u2019s important to note the encryption schema has a different format compared to CSFLE Encryption schema.\n\n**Row 129:** Creates a MongoClient that enables Queryable Encryption and uses Hashicorp Key Vault KMIP Secrets Engine as the key provider.\n\n**Row 131:** Inserts a user into database **DEMO-KMIP-QUERYABLE** and collection **users**, using the MongoClient that is configured at row 129. It then does a lookup on the SSN field to validate that MongoDB driver can query on encrypted data.\n\nLet's start the Python application to test Queryable Encryption.\n\n```\ncd /kmip/kmip-with-hashicorp-key-vault/ \npython3.8 vault_encrypt_with_queryable_kmip.py\n```\n\nStart MongoDB Compass, connect to your database DEMO-KMIP-QUERYABLE, and review the collection users. Fields that should be encrypted are ssn, contact.mobile, and contact.email. You should now be able to see in Compass that fields that are encrypted are masked by \\*\\*\\*\\*\\*\\* shown as value, as seen in the picture below.\n\n### Cleanup\n\nIf you want to rerun the tutorial, run the following in the root of this git repository outside the docker container.\n\n```\n./cleanup.sh\n```\n\n## Conclusion\n\nIn this blog, you have learned how to configure and set up CSFLE and Queryble Encryption with Hashicorp Key Vault KMIP secrets engine. By utilizing KMIP, you will have a standardized way of managing the lifecycle of encryption keys, regardless of Public Cloud KMS services.. 
Learn more about\u00a0CSFLE\u00a0and\u00a0Queryable Encryption.", "format": "md", "metadata": {"tags": ["Atlas", "Python"], "pageDescription": "In this blog, learn how to use Hashicorp Vault KMIP Secrets Engine with CSFLE and Queryable Encryption to have a standardized way of managing encryption keys.", "contentType": "Tutorial"}, "title": "How to Set Up HashiCorp Vault KMIP Secrets Engine with MongoDB CSFLE or Queryable Encryption", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/how-seamlessly-use-mongodb-atlas-ibm-watsonx-ai-genai-applications", "action": "created", "body": "# How to Seamlessly Use MongoDB Atlas and IBM watsonx.ai LLMs in Your GenAI Applications\n\nOne of the challenges of e-commerce applications is to provide relevant and personalized product recommendations to customers. Traditional keyword-based search methods often fail to capture the semantic meaning and intent of the user search queries, and return results that do not meet the user\u2019s needs. In turn, they fail to convert into a successful sale. To address this problem, RAG (retrieval-augmented generation) is used as a framework powered by MongoDB Atlas Vector Search, LangChain, and IBM watsonx.ai. \n\nRAG is a natural language generation (NLG) technique that leverages a retriever module to fetch relevant documents from a large corpus and a generator module to produce text conditioned on the retrieved documents. Here, the RAG framework is used to power product recommendations as an extension to existing semantic search techniques.\n\n- RAG use cases can be easily built using the vector search capabilities of MongoDB Atlas to store and query large-scale product embeddings that represent the features and attributes of each product. Because of MongoDB\u2019s flexible schema, these are stored right alongside the product embeddings, eliminating the complexity and latency of having to retrieve the data from separate tables or databases.\n- RAG then retrieves the most similar products to the user query based on the cosine similarity of their embeddings, and generates natural language reasons that highlight why these products are relevant and appealing to the user. \n- RAG can also enhance the user experience (UX) by handling complex and diverse search queries, such as \"a cozy sweater for winter\" or \"a gift for my daughter who is interested in science\", and provides accurate and engaging product recommendations that increase customer satisfaction and loyalty.\n\n is IBM\u2019s next-generation enterprise studio for AI builders, bringing together new generative AI capabilities with traditional machine learning (ML) that span the entire AI lifecycle. With watsonx.ai, you can train, validate, tune, and deploy foundation and traditional ML models.\n\nwatsonx.ai brings forth a curated library of foundation models, including IBM-developed models, open-source models, and models sourced from third-party providers. Not all models are created equal, and the curated library provides enterprises with the optionality to select the model best suited to a particular use case, industry, domain, or even price performance. Further, IBM-developed models, such as the Granite model series, offer another level of enterprise-readiness, transparency, and indemnification for production use cases. We\u2019ll be using Granite models in our demonstration. 
For the interested reader, IBM has published information about its data and training methodology for its Granite foundation models.\n\n## How to build a custom RAG-powered product discovery pipeline\n\nFor this tutorial, we will be using an e-commerce products dataset containing over 10,000 product details. We will be using the sentence-transformers/all-mpnet-base-v2 model from Hugging Face to generate the vector embeddings to store and retrieve product information. You will need a Python notebook or an IDE, a MongoDB Atlas account, and a wastonx.ai account for hands-on experience.\n\nFor convenience, the notebook to follow along and execute in your environment is available on GitHub.\n\n### Python dependencies\n\n* `langchain`: Orchestration framework\n\n* `ibm-watson-machine-learning`: For IBM LLMs\n\n* `wget`: To download knowledge base data\n\n* `sentence-transformers`: For embedding model\n\n* `pymongo`: For the MongoDB Atlas vector store\n\n### watsonx.ai dependencies\n\nWe\u2019ll be using the watsonx.ai foundation models and Python SDK to implement our RAG pipeline in LangChain.\n\n1. **Sign up for a free watsonx.ai trial on IBM cloud**. Register and get set up.\n2. **Create a watsonx.ai Project**. During onboarding, a sandbox project can be quickly created for you. You can either use the sandbox project or create one; the link will work once you have registered and set up watsonx.ai. If more help is needed, you can read the documentation.\n3. **Create an API key to access watsonx.ai foundation models**. Follow the steps to create your API key.\n4. **Install and use watsonx.ai**. Also known as the IBM Watson Machine Learning SDK, watsonx.ai SDK information is available on GitHub. Like any other Python module, you can install it with a pip install. Our example notebook takes care of this for you.\n\nWe will be running all the code snippets below in a Jupyter notebook. You can choose to run these on VS Code or any other IDE of your choice.\n\n**Initialize the LLM**\n\nInitialize the watsonx URL to connect by running the below code blocks in your Jupyter notebook:\n\n```python\n# watsonx URL\n\ntry:\n wxa_url = os.environ\"WXA_URL\"]\n\nexcept KeyError:\n wxa_url = getpass.getpass(\"Please enter your watsonx.ai URL domain (hit enter): \")\n```\n\nEnter the URL for accessing the watsonx URL domain. For example: https://us-south.ml.cloud.ibm.com.\n\nTo be able to access the LLM models and other AI services on watsonx, you need to initialize the API key. You init the API key by running the following code block in you Jupyter notebook:\n\n```python\n# watsonx API Key\n\ntry:\n wxa_api_key = os.environ[\"WXA_API_KEY\"]\nexcept KeyError:\n wxa_api_key = getpass.getpass(\"Please enter your watsonx.ai API key (hit enter): \")\n```\n\nYou will be prompted when you run the above code to add the IAM API key you fetched earlier.\n\nEach experiment can tagged or executed under specific projects. 
To fetch the relevant project, we can initialize the project ID by running the below code block in the Jupyter notebook:\n\n```python\n# watsonx Project ID\n\ntry:\n wxa_project_id = os.environ[\"WXA_PROJECT_ID\"]\nexcept KeyError:\n wxa_project_id = getpass.getpass(\"Please enter your watsonx.ai Project ID (hit enter): \")\n```\n\nYou can find the project ID alongside your IAM API key in the settings panel in the watsonx.ai portal.\n\n**Language model**\n\nIn the code example below, we will initialize Granite LLM from IBM and then demonstrate how to use the initialized LLM with the LangChain framework before we build our RAG.\n\nWe will use the query: \"I want to introduce my daughter to science and spark her enthusiasm. What kind of gifts should I get her?\" \n\nThis will help us demonstrate how the LLM and vector search work in an RAG framework at each step.\n\nFirstly, let us initialize the LLM hosted on the watsonx cloud. To access the relevant Granite model from watsonx, you need to run the following code block to initialize and test the model with our sample query in the Jupyter notebook: \n\n```python\nfrom ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes\nfrom ibm_watson_machine_learning.foundation_models import Model\nfrom ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams\nfrom ibm_watson_machine_learning.foundation_models.utils.enums import DecodingMethods\n\nparameters = {\n GenParams.DECODING_METHOD: DecodingMethods.GREEDY,\n GenParams.MIN_NEW_TOKENS: 1,\n GenParams.MAX_NEW_TOKENS: 100\n}\n\nmodel = Model(\n model_id=ModelTypes.GRANITE_13B_INSTRUCT,\n params=parameters,\n credentials={\n \"url\": wxa_url,\n \"apikey\": wxa_api_key\n },\n project_id=wxa_project_id\n)\n\nfrom ibm_watson_machine_learning.foundation_models.extensions.langchain import WatsonxLLM\n\ngranite_llm_ibm = WatsonxLLM(model=model)\n\n# Sample query chosen in the example to evaluate the RAG use case\nquery = \"I want to introduce my daughter to science and spark her enthusiasm. What kind of gifts should I get her?\"\n\n# Sample LLM query without RAG framework\nresult = granite_llm_ibm(query)\n```\n\nOutput:\n\n![Jupyter Notebook Output][3]\n\n### Initialize MongoDB Atlas for vector search\n\nPrior to starting this section, you should have already set up a cluster in MongoDB Atlas. If you have not created one for yourself, then you can follow the steps in the [MongoDB Atlas tutorial to create an account in Atlas (the developer data platform) and a cluster with which we can store and retrieve data. It is also advised that the users spin an Atlas dedicated cluster with size M10 or higher for this tutorial.\n\nNow, let us see how we can set up MongoDB Atlas to provide relevant information to augment our RAG framework. \n\n**Init Mongo client**\n\nWe can connect to the MongoDB Atlas cluster using the connection string as detailed in the tutorial link above. To initialize the connection string, run the below code block in your Jupyter notebook:\n\n```python\nfrom pymongo import MongoClient\n\ntry:\n MONGO_CONN = os.environ\"MONGO_CONN\"]\nexcept KeyError:\n MONGO_CONN = getpass.getpass(\"Please enter your MongoDB connection String (hit enter): \")\n```\n\nWhen prompted, you can enter your MongoDB Atlas connection string.\n\n**Download and load data to MongoDB Atlas**\n\nIn the steps below, we demonstrate how to download the products dataset from the provided URL link and add the documents to the respective collection in MongoDB Atlas. 
We will also be embedding the raw product texts as vectors before adding them in MongoDB. You can do this by running the following lines of code your Jupyter notebook:\n\n```python\nimport wget\n\nfilename = './amazon-products.jsonl'\nurl = \"https://github.com/ashwin-gangadhar-mdb/mbd-watson-rag/raw/main/amazon-products.jsonl\"\n\nif not os.path.isfile(filename):\n wget.download(url, out=filename)\n\n# Load the documents using Langchain Document Loader\nfrom langchain.document_loaders import JSONLoader\n\nloader = JSONLoader(file_path=filename, jq_schema=\".text\",text_content=False,json_lines=True)\ndocs = loader.load()\n\n# Initialize Embedding for transforming raw documents to vectors**\nfrom langchain.embeddings import HuggingFaceEmbeddings\nfrom tqdm import tqdm as notebook_tqdm\n\nembeddings = HuggingFaceEmbeddings()\n\n# Initialize MongoDB client along with Langchain connector module\nfrom langchain.vectorstores import MongoDBAtlasVectorSearch\n\nclient = MongoClient(MONGO_CONN)\nvcol = client[\"amazon\"][\"products\"]\nvectorstore = MongoDBAtlasVectorSearch(vcol, embeddings)\n\n# Load documents to collection in MongoDB**\nvectorstore.add_documents(docs)\n```\n\nYou will be able to see the documents have been created in `amazon` database under the collection `products`.\n\n![MongoDB Atlas Products Collection][4]\n\nNow all the product information is added to the respective collection, we can go ahead and create a vector index by following the steps given in the [Atlas Search index tutorial. You can create the search index using both the Atlas UI as well as programmatically. Let us look at the steps if we are doing this using the Atlas UI.\n\n.\n\n**Sample query to vector search**\n\nWe can test the vector similarity search by running the sample query with the LangChain MongoDB Atlas Vector Search connector. Run the following code in your Jupyter notebook:\n\n```python\ntexts_sim = vectorstore.similarity_search(query, k=3)\n\nprint(\"Number of relevant texts: \" + str(len(texts_sim)))\nprint(\"First 100 characters of relevant texts.\")\n\nfor i in range(len(texts_sim)):\n print(\"Text \" + str(i) + \": \" + str(texts_simi].page_content[0:100]))\n```\n\n![Sample Vector Search Query Output][7]\n\nIn the above example code, we are able to use our sample text query to retrieve three relevant products. Further in the tutorial, let\u2019s see how we can combine the capabilities of LLMs and vector search to build a RAG framework. For further information on various operations you can perform with the `MongoDBAtlasVectorSearch` module in LangChain, you can visit the [Atlas Vector Search documentation.\n\n### RAG chain\n\nIn the code snippets below, we demonstrate how to initialize and query the RAG chain. We also introduce methods to improve the output from RAG so you can customize your output to cater to specific needs, such as the reason behind the product recommendation, language translation, summarization, etc.\n\nSo, you can set up the RAG chain and execute to get the response for our sample query by running the following lines of code in your Jupyter notebook:\n\n```python\nfrom langchain.chains import RetrievalQA\nfrom langchain.prompts import PromptTemplate\n\nprompt = PromptTemplate(template=\"\"\"\n\nUse the following pieces of context to answer the question at the end. 
If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\n{context}\n\n##Question:{question} \\n\\\n\n##Top 3 recommendations from Products and reason:\\n\"\"\",input_variables=[\"context\",\"question\"])\n\nchain_type_kwargs = {\"prompt\": prompt}\nretriever = vectorstore.as_retriever(search_type=\"mmr\", search_kwargs={'k': 6, 'lambda_mult': 0.25})\n\nqa = RetrievalQA.from_chain_type(llm=granite_llm_ibm, chain_type=\"stuff\",\n retriever=retriever,\n chain_type_kwargs=chain_type_kwargs)\n\nres = qa.run(query)\n\nprint(f\"{'-'*50}\")\nprint(\"Query: \" + query)\nprint(f\"Response:\\n{res}\\n{'-'*50}\\n\")\n```\nThe output will look like this:\n\n![Sample RAG Chain Output][8]\n\nYou can see from the example output that the RAG chain is able to recommend products based on the query as well as provide a reasoning or explanation as to how each product suggestion is relevant to the query, thereby enhancing the user experience.\n\n## Conclusion\n\nIn this tutorial, we demonstrated how to use watsonx LLMs along with Atlas Vector Search to build a RAG framework. We also demonstrated how to use the RAG framework efficiently to meet specific application needs, such as providing the reasoning for product suggestions. By following the steps in the article, we were also able to bring the power of machine learning models to a private knowledge base that is stored in the Atlas Developer Data Platform.\n\nIn summary, RAG is a powerful NLG technique that can generate product recommendations as an extension to semantic search using the vector search capabilities provided by MongoDB Atlas. RAG can also improve the UX of product recommendations by providing more personalized, diverse, informative, and engaging descriptions.\n\n## Next steps\n\nExplore more details on how you can build generative AI applications using various assisted technologies and MongoDB Atlas Vector Search.\n\nTo learn more about Atlas Vector Search, visit the product page or the documentation for creating a vector search index or running vector search queries.\n\nTo learn more about watsonx, visit the IBM watsonx page.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltabec5b11a292b3d6/6553a958c787a446c12ab071/image1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte019110c36fc59f5/6553a916c787a4d4282ab069/image3.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltabec5b11a292b3d6/6553a958c787a446c12ab071/image1.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta69be6d193654a53/6553a9ad4d2859f3c8afae47/image5.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt70003655ac1919b7/6553a9d99f2b9963f6bc99de/image6.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcbdae931b43cc17a/6553a9f788cbda51858566f6/image2.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0b43cc03bf7bb27f/6553aa124452cc3ed9f9523d/image7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf6c21ef667b8470b/6553aa339f2b993db7bc99e3/image4.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI"], "pageDescription": "Learn how to build a RAG framework using MongoDB Atlas Vector Search and IBM watsonx LLMs.", "contentType": "Tutorial"}, "title": "How to Seamlessly Use MongoDB Atlas and IBM watsonx.ai LLMs in Your GenAI Applications", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url":
"https://www.mongodb.com/developer/products/mongodb/six-principles-building-robust-flexible-shared-data-applications", "action": "created", "body": "# The Six Principles for Building Robust Yet Flexible Shared Data Applications\n\nI've spent my seven years employed at MongoDB Inc. thinking about how organisations can better build fluid data-intensive applications. Over the years, in conversations with clients, I've tried to convey my opinions of how this can be achieved, but in hindsight, I've only had limited success, due to my inability to articulate the \"why\" and the \"how\" properly. In fact, the more I reflect, the more I realise it's a theme I've been jostling with for most of my IT career. For example, back in 2008, when SOAP was still commonplace for building web services, I touched on a similar theme in my blog post Web Service Messaging Nirvana. Now, after quite some time, I feel like I've finally been able to locate the signals in the noise, and capture these into something cohesive and positively actionable by others...\n\nSo, I've now brought together a set of techniques I've identified to effectively deliver resilient yet evolvable data-driven applications, in a recorded online 45-minute talk, which you can view below.\n\n>The Six Principles For Resilient Evolvability by Paul Done.\n>\n>:youtube]{vid=ms-2kgZbdGU}\n\nYou can also scan through the slides I used for the talk, [here.\n\nI've also shared, on Github, a sample Rust application I built that highlights some of the patterns described.\n\nIn my talk, you will hear about the potential friction that can occur with multiple applications on different release trains, due to overlapping dependencies on a shared data set. Without forethought, the impact of making shared data model changes to meet new requirements for one application can result in needing to modify every other application too, dramatically reducing business agility and flexibility. You might be asking yourself, \"If this shared data is held in a modern real-time operational database like MongoDB, why isn't MongoDB's flexible data model sufficient to allow applications and services to easily evolve?\" My talk will convey why this is a naive assumption made by some, and why the adoption of specific best practices, in your application tier, is also required to mitigate this.\n\nIn the talk, I identify the resulting best practices as a set of six key principles, which I refer to as \"resilient evolvability.\" Below is a summary of the six principles:\n\n1. Support optional fields. Field absence conveys meaning.\n2. For Finds, only ask for fields that are your concern, to support variability and to reduce change dependency.\n3. For Updates, always use in-place operators, changing targeted fields only. Replacing whole documents blows away changes made by other applications.\n4. For the rare data model Mutative Changes, adopt \"Interim Duplication\" to reduce delaying high priority business requirements.\n5. Facilitate entity variance, because real-world entities do vary, especially when a business evolves and diversifies.\n6. 
Only use Document Mappers if they are NOT \"all or nothing,\" and only if they play nicely with the other five principles.\n\nAdditionally, in the talk, I capture my perspective on the three different distributed application/data architectural combinations I often see, which I call \"The Data Access Triangle.\"\n\n \n\nIn essence, my talk is primarily focussed on how to achieve agility and flexibility when Shared Data is being used by many applications or services, but some of the principles will still apply when using Isolated Data or Duplicated Data for each application or service.\n\n## Wrap-Up\n\nFrom experience, by adopting the six principles, I firmly believe:\n\n- Your software will enable varying structured data which embraces, rather than inhibits, real-world requirements.\n- Your software won't break when additive data model changes occur, to rapidly meet new business requirements.\n- You will have a process to deal with mutative data model changes, which reduces delays in delivering new business requirements.\n\nThis talk and its advice is the culmination of many years trying to solve and address the problems in this space. I hope you will find my guidance to be a useful contribution to your work and a set of principles everyone can build on in the future.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to build robust yet flexible shared data applications which don't break when data model changes occur, to rapidly meet new business requirements.", "contentType": "Article"}, "title": "The Six Principles for Building Robust Yet Flexible Shared Data Applications", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/mongodb-classmaps-optimal-performance", "action": "created", "body": "# How to Set Up MongoDB Class Maps for C# for Optimal Query Performance and Storage Size\n\n> Starting out with MongoDB and C#? These tips will help you get your class maps right from the beginning to support your desired schema.\n\nWhen starting my first projects with MongoDB and C# several years ago, what captivated me the most was how easy it was to store plain old CLR objects (POCOs) in a collection without having to create a static relational structure first and maintaining it painfully over the course of development. \n\nThough MongoDB and C# have their own set of data types and naming conventions, the MongoDB C# Driver connects the two in a very seamless manner. At the center of this, class maps are used to describe the details of the mapping. \n\nThis post shows how to fine-tune the mapping in key areas and offers solutions to common scenarios.\n\n## Automatic mapping\n\nEven if you don't define a class map explicitly, the driver will create one as soon as the class is used for a collection. In this case, the properties of the POCO are mapped to elements in the BSON document based on the name. The driver also tries to match the property type to the BSON type of the element in MongoDB.\n\nThough automatic mapping of a class will make sure that POCOs can be stored in a collection easily, tweaking the mapping is rewarded by better memory efficiency and enhanced query performance. 
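\n\nAs a baseline for the adjustments discussed in the rest of this post, here is a minimal sketch of automatic mapping in action. The class, database, and collection names are illustrative and not taken from a specific project:\n\n```csharp\nusing MongoDB.Bson;\nusing MongoDB.Driver;\n\n// Storing an instance works out of the box: the driver auto-maps BlogPost on\n// first use, matching property names and types to BSON elements.\nvar client = new MongoClient(\"mongodb://localhost:27017\"); // replace with your connection string\nvar posts = client.GetDatabase(\"blog\").GetCollection<BlogPost>(\"posts\");\nposts.InsertOne(new BlogPost { Title = \"Automatic mapping\" });\n\n// No attributes and no registered class map are defined for this POCO.\npublic class BlogPost\n{\n    public ObjectId Id { get; set; }\n    public string Title { get; set; } = string.Empty;\n}\n```\n\n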
Also, if you are working with existing data, customizing the mapping allows POCOs to follow C# and .NET naming conventions without changing the schema of the data in the collection.\n\n## Declarative vs. imperative mapping\n\nAdjusting the class map can be as easy as adding attributes to the declaration of a POCO (declarative mapping). These attributes are used by the driver when the class map is auto-mapped. This happens when the class is first used to access data in a collection:\n\n```csharp\npublic class BlogPost\n{\n // ...\n BsonElement(\"title\")]\n public string Title { get; set; } = string.Empty;\n // ...\n}\n```\n\nThe above sample shows how the `BsonElement` attribute is used to adjust the name of the `Title` property in a document in MongoDB:\n\n```BSON\n{\n // ...\n \"title\": \"Blog post title\",\n // ...\n}\n```\n\nHowever, there are scenarios when declarative mapping is not applicable: If you cannot change the POCOs because they are defined in a third-party libary or if you want to separate your POCOs from MongoDB-related code parts, there also is the option to define the class maps imperatively by calling methods in code:\n\n```csharp\nBsonClassMap.RegisterClassMap(cm =>\n{\n cm.AutoMap();\n cm.MapMember(x => x.Title).SetElementName(\"title\");\n});\n```\n\nThe code above first performs the auto-mapping and then includes the `Title` property in the mapping as an element named `title` in BSON, thus overriding the auto-mapping for the specific property.\n\nOne thing to keep in mind is that the class map needs to be registered before the driver starts the automatic mapping process for a class. It is a good idea to include it in the bootstrapping process of the application.\n\nThis post will use declarative mapping for better readability but all of the adjustments can also be made using imperative mapping, as well. You can find an imperative class map that contains all the samples at the end of the post. \n## Adjusting property names\n\nWhether you are working with existing data or want to name properties differently in BSON for other reasons, you can use the `BsonElement(\"specificElementName\")` attribute introduced above. This is especially handy if you only want to change the name of a limited set of properties. \n\nIf you want to change the naming scheme in a widespread fashion, you can use a convention that is applied when auto-mapping the classes. The driver offers a number of conventions out-of-the-box (see the namespace [MongoDB.Bson.Serialization.Conventions) and offers the flexibility to create custom ones if those are not sufficient. \n\nAn example is to name the POCO properties according to C# naming guidelines in Pascal case in C#, but name the elements in camel case in BSON by adding the CamelCaseElementNameConvention: \n\n```csharp\nvar pack = new ConventionPack();\npack.Add(new CamelCaseElementNameConvention());\nConventionRegistry.Register(\n \"Camel Case Convention\",\n pack, \n t => true);\n```\n\nPlease note the predicate in the last parameter. This can be used to fine-tune whether the convention is applied to a type or not. In our sample, it is applied to all classes. \nThe above code needs to be run before auto-mapping takes place. You can still apply a `BsonElement` attribute here and there if you want to overwrite some of the names.\n\n## Using ObjectIds as identifiers\n\nMongoDB uses ObjectIds as identifiers for documents by default for the \u201c_id\u201d field. 
This is a data type that is unique to a very high probability and needs 12 bytes of memory. If you are working with existing data, you will encounter ObjectIds for sure. Also, when setting up new documents, ObjectIds are the preferred choice for identifiers. In comparison to GUIDs (UUIDs), they require less storage space and are ordered so that identifiers that are created later receive higher values.\n\nIn C#, properties can use `ObjectId` as their type. However, using `string` as the property type in C# simplifies the handling of the identifiers and increases interoperability with other frameworks that are not specific to MongoDB (e.g. OData). \n\nIn contrast, MongoDB should serialize the identifiers with the specific BSON type ObjectId to reduce storage size. In addition, performing a binary comparison on ObjectIds is much safer than comparing strings as you do not have to take letter casing, etc. into account.\n\n```csharp\npublic class BlogPost\n{\n BsonRepresentation(BsonType.ObjectId)]\n public string Id { get; set; } = ObjectId.GenerateNewId().ToString();\n // ...\n [BsonRepresentation(BsonType.ObjectId)]\n public ICollection TopComments { get; set; } = new List();\n}\n```\n\nBy applying the `BsonRepresentation` attribute, the `Id` property is serialized as an `ObjectId` in BSON. Also, the array of identifiers in `TopComments` also uses ObjectIds as their data type for the array elements: \n\n```BSON\n{ \n \"_id\" : ObjectId(\"6569b12c6240d94108a10d20\"), \n // ...\n \"TopComments\" : [\n ObjectId(\"6569b12c6240d94108a10d21\"), \n ObjectId(\"6569b12c6240d94108a10d22\")\n ] \n}\n```\n## Serializing GUIDs in a consistent way\n\nWhile `ObjectId` is the default type of identifier for MongoDB, GUIDs or UUIDs are a data type that is used for identifying objects in a variety of programming languages. In order to store and query them efficiently, using a binary format instead of strings is also preferred. \n\nIn the past, GUIDs/UUIDs have been stored as BSON type binary of subtype 3; drivers for different programming environments serialized the value differently. Hence, reading GUIDs with the C# driver that had been serialized with a Java driver did not yield the same value. To fix this, the new binary subtype 4 was introduced by MongoDB. GUIDs/UUIDs are then serialized in the same way across drivers and languages. \n\nTo provide the flexibility to both handle existing values and new values on a property level, the MongoDB C# Driver introduced a new way of handling GUIDs. This is referred to as `GuidRepresentationMode.V3`. For backward compatibility, when using Version 2.x of the MongoDB C# Driver, the GuidRepresentationMode is V2 by default (resulting in binary subtype 3). This is set to change with MongoDB C# Driver version 3. It is a good idea to opt into using V3 now and specify the subtype that should be used for GUIDs on a property level. For new GUIDs, subtype 4 should be used. \n\nThis can be achieved by running the following code before creating the client: \n\n```csharp\nBsonDefaults.GuidRepresentationMode \n= GuidRepresentationMode.V3;\n```\n\nKeep in mind that this setting requires the representation of the GUID to be specified on a property level. 
Otherwise, a `BsonSerializationException` will be thrown informing you that \"GuidSerializer cannot serialize a Guid when GuidRepresentation is Unspecified.\" To fix this, add a `BsonGuidRepresentation` attribute to the property: \n\n```csharp\n[BsonGuidRepresentation(GuidRepresentation.Standard)]\npublic Guid MyGuid { get; set; } = Guid.NewGuid();\n```\n\nThere are various settings available for `GuidRepresentation`. For new GUIDs, `Standard` is the preferred value, while the other values (e.g., `CSharpLegacy`) support the serialization of existing values in binary subtype 3. \n\nFor a detailed overview, see the [documentation of the driver. \n\n## Processing extra elements\n\nMaybe you are working with existing data and only some part of the elements is relevant to your use case. Or you have older documents in your collection that contain elements that are not relevant anymore. Whatever the reason, you want to keep the POCO minimal so that it only comprises the relevant properties. \n\nBy default, the MongoDB C# Driver is strict and raises a `FormatException` if it encounters elements in a BSON document that cannot be mapped to a property on the POCO: \n\n```\"Element '...]' does not match any field or property of class [...].\"```\n Those elements are called \"extra elements.\"\n\nOne way to handle this is to simply ignore extra elements by applying the `BsonIgnoreExtraElements` attribute to the POCO: \n\n```csharp\n[BsonIgnoreExtraElements]\npublic class BlogPost \n{\n // ...\n}\n```\n\nIf you want to use this behavior on a large scale, you can again register a convention: \n\n```csharp\nvar pack = new ConventionPack();\npack.Add(new IgnoreExtraElementsConvention(true));\nConventionRegistry.Register(\n \"Ignore Extra Elements Convention\",\n pack, \n t => true);\n```\nBe aware that if you use _replace_ when storing the document, extra properties that C# does not know about will be lost. \n\nOn the other hand, MongoDB's flexible schema is built for handling documents with different elements. If you are interested in the extra properties or you want to safeguard for a replace, you can add a dictionary to your POCO and mark it with a `BsonExtraElements` attribute. The dictionary is filled with the content of the properties upon deserialization: \n\n```csharp\npublic class BlogPost\n{\n // ...\n [BsonExtraElements()]\n public IDictionary ExtraElements { get; set; } = new Dictionary();\n}\n```\nEven when replacing a document that contains an extra-elements-dictionary, the key-value pairs of the dictionary are serialized as elements so that their content is not lost (or even updated if the value in the dictionary has been changed). \n\n## Serializing calculated properties\n\nPre-calculation is key for great query performance and is a common pattern when working with MongoDB. In POCOs, this is supported by adding read-only properties, e.g.: \n\n```csharp\npublic class BlogPost\n{\n // ...\n public DateTime CreatedAt { get; set; } = DateTime.UtcNow;\n public DateTime? UpdatedAt { get; set; }\n public DateTime LastChangeAt => UpdatedAt ?? CreatedAt;\n}\n```\n\nBy default, the driver excludes read-only properties from serialization. This can be fixed easily by applying a `BsonElement` attribute to the property \u2014 you don't need to change the name: \n\n```csharp\npublic class BlogPost\n{\n // ...\n public DateTime CreatedAt { get; set; } = DateTime.UtcNow;\n public DateTime? UpdatedAt { get; set; }\n [BsonElement()]\n public DateTime LastChangeAt => UpdatedAt ?? 
CreatedAt;\n}\n```\n\nAfter this change, the read-only property is included in the document and it can be used in indexes and queries: \n\n```BSON\n{ \n // ...\n \"CreatedAt\" : ISODate(\"2023-12-01T12:16:34.441Z\"), \n \"UpdatedAt\" : null, \n \"LastChangeAt\" : ISODate(\"2023-12-01T12:16:34.441Z\") \n}\n```\n## Custom serializers\n\nCommon scenarios are very well supported by the MongoDB C# Driver. If this is not enough, you can create a [custom serializer that supports your specific scenario. \n\nCustom serializers can be used to handle documents with different data for the same element. For instance, if some documents store the year as an integer and others as a string, a custom serializer can analyze the BSON type during deserialization and read the value accordingly. \n\nHowever, this is a last resort that you will rarely need to use as the existing options offered by the MongoDB C# Driver cover the vast majority of use cases. \n\n## Conclusion\n\nAs you have seen, the MongoDB C# Driver offers a lot of options to tweak the mapping between POCOs and BSON documents. POCOs can follow C# conventions while at the same time building upon a schema that offers good query performance and reduced storage consumption. \n\nIf you have questions or comments, join us in the MongoDB Developer Community!\n\n### Appendix: sample for imperative class map\n\n```csharp\nBsonClassMap.RegisterClassMap(cm =>\n{\n // Perform auto-mapping to include properties \n // without specific mappings\n cm.AutoMap();\n // Serialize string as ObjectId\n cm.MapIdMember(x => x.Id)\n .SetSerializer(new StringSerializer(BsonType.ObjectId));\n // Serialize ICollection as array of ObjectIds\n cm.MapMember(x => x.TopComments)\n .SetSerializer(\n new IEnumerableDeserializingAsCollectionSerializer, string, List>(\n new StringSerializer(BsonType.ObjectId)));\n // Change member name\n cm.MapMember(x => x.Title).SetElementName(\"title\");\n // Serialize Guid as binary subtype 4\n cm.MapMember(x => x.MyGuid).SetSerializer(new GuidSerializer(GuidRepresentation.Standard));\n // Store extra members in dictionary\n cm.MapExtraElementsMember(x => x.ExtraElements);\n // Include read-only property\n cm.MapMember(x => x.LastChangeAt);\n});\n```\n\n", "format": "md", "metadata": {"tags": ["C#", ".NET"], "pageDescription": "", "contentType": "Article"}, "title": "How to Set Up MongoDB Class Maps for C# for Optimal Query Performance and Storage Size", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/pymongoarrow-bigframes-using-python", "action": "created", "body": "# Orchestrating MongoDB & BigQuery for ML Excellence with PyMongoArrow and BigQuery Pandas Libraries\n\nIn today's data-driven world, the ability to analyze and efficiently move data across different platforms is crucial. MongoDB Atlas and Google BigQuery are two powerful platforms frequently used for managing and analyzing data. While they excel in their respective domains, connecting and transferring data between them seamlessly can pose challenges. However, with the right tools and techniques, this integration becomes not only possible but also streamlined.\n\nOne effective way to establish a smooth pipeline between MongoDB Atlas and BigQuery is by leveraging PyMongoArrow and pandas-gbq, two powerful Python libraries that facilitate data transfer and manipulation. PyMongoArrow acts as a bridge between MongoDB and Arrow, a columnar in-memory analytics layer, enabling efficient data conversion. 
On the other hand, pandas-gbq is a Python client library for Google BigQuery, allowing easy interaction with BigQuery datasets.\n\nBeyond simple transfers, this approach also lets you work with data read from both the Google BigQuery and MongoDB Atlas platforms without physically moving the data between these platforms. This will simplify the effort required by data engineers to move the data and offers a faster way for data scientists to build machine learning (ML) models.\n\nLet's discuss each of the implementation advantages with examples.\n\n### ETL data from MongoDB to BigQuery\n\nLet\u2019s consider a sample shipwreck dataset available on MongoDB Atlas for this use case.\n\nUse the commands below to install the required libraries in the notebook environment of your choice. For easy and scalable setup, use BigQuery Jupyter notebooks\u00a0or managed VertexAI workbench\u00a0notebooks.\n\nFollow the MongoDB Atlas setup documentation\u00a0for setting up your cluster, network access, and authentication. Load a sample dataset\u00a0to your Atlas cluster. Get the Atlas connection string\u00a0and replace the URI string below with your connection string. The below script is also available in the GitHub repository\u00a0with steps to set up.\n\n```python\n#Read data from MongoDB\nimport certifi\nimport pprint\nimport pymongo\nimport pymongoarrow\nfrom pymongo import MongoClient\n\nclient = MongoClient(\"URI string\",tlsCAFile=certifi.where())\n\n#Initialize database and collection\ndb = client.get_database(\"sample_geospatial\")\ncol = db.get_collection(\"shipwrecks\")\n\nfor doc in col.find({}):\n  pprint.pprint(doc)\n\nfrom pymongoarrow.monkey import patch_all\npatch_all()\n\n#Create Dataframe for data read from MongoDB\nimport pandas as pd\ndf = col.find_pandas_all({})\n```\n\nTransform the data to the required format \u2014 e.g., transform and remove the unsupported data formats, like the MongoDB object ID, or convert the MongoDB object to JSON before writing it to BigQuery.\u00a0Please refer to the documentation to learn more about data types\u00a0supported by pandas-gbq and PyMongoArrow.\n\n```python\n#Transform the schema to the required format.\n#e.g. the object id is not supported in dataframes and can be removed or converted to string.\n\ndel(df[\"_id\"])\n```\n\nOnce you have retrieved data from MongoDB Atlas and converted it into a suitable format using PyMongoArrow, you can proceed to transfer it to BigQuery using either the\u00a0pandas-gbq or google-cloud-bigquery\u00a0library. In this article, we are using pandas-gbq. Refer to the documentation\u00a0for more details on the differences between the pandas-gbq and google-cloud-bigquery libraries. Ensure you have a dataset in BigQuery to which you want to load the MongoDB data. You can create a new dataset or use an existing one.\n\n```python\n#Write the transformed data to BigQuery.\n\nimport pandas_gbq\n\npandas_gbq.to_gbq(df[0:100], \"gcp-project-name.bigquery-dataset-name.bigquery-table-name\", project_id=\"gcp-project-name\")\n```\n\nAs you embark on building your pipeline, optimizing the data transfer process between MongoDB Atlas and BigQuery is essential for performance. A few points to consider:\n\n1. Batch DataFrames into chunks, especially when dealing with large datasets, to prevent memory issues.\n1. Handle schema mapping and data type conversions properly to ensure compatibility between the source and destination databases.\n1.
With the right tools like Google colab, VertexAI Workbench etc, this pipeline can become a cornerstone of your data ecosystem, facilitating smooth and reliable data movement between MongoDB Atlas and Google BigQuery.\n\n### Introduction to Google BigQuery DataFrames (bigframes)\n\nGoogle bigframes is a Python API that provides a pandas-compatible DataFrame and machine learning capabilities powered by the BigQuery engine. It provides a familiar pandas interface for data manipulation and analysis. Once the data from MongoDB is written into BigQuery, the BigQuery DataFrames\u00a0can unlock the user-friendly solution for analyzing petabytes of data with ease. The pandas DataFrame can be read directly into BigQuery DataFrames\u00a0using the Python bigframes.pandas\u00a0library. Install the bigframes library to use BigQuery DataFrames.\n\n```\n!pip install bigframes\n```\n\nBefore reading the pandas DataFrames into BigQuery DataFrames, rename the columns as per Google's [schema guidelines. (Please note that at the time of publication, the feature may not be GA).\n\n```python\nimport bigframes.pandas as bpd\nbigframes.options.bigquery.project = \"GCP project ID\"\n\n# df = \nbdf = bpd.read_pandas(df)\n```\n\nFor more information on using Google Cloud Bigquery DataFrames, visit the Google Cloud documentation.\n\n## Conclusion\n\nCreating a robust pipeline between MongoDB Atlas and BigQuery using PyMongoArrow and pandas-gbq opens up a world of possibilities for efficient data movement and analysis. This integration allows for the seamless transfer of data, enabling organizations to leverage the strengths of both platforms for comprehensive data analytics and decision-making.\n\n### Further reading\n\n- Learn more about MongoDB PyMongoArrow\u00a0libraries and how to use them.\n- Read more about Google BigQuery DataFrames\u00a0for pandas and ML.\n- Load DataFrames to BigQuery with Google pandas-gbq.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2557413e5cba18f3/65c3cbd20872227d14497236/image1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7459376aeef23b01/65c3cbd2245ed9a8b190fd38/image2.png", "format": "md", "metadata": {"tags": ["MongoDB", "Python", "Pandas", "Google Cloud", "AI"], "pageDescription": "Orchestrating MongoDB & BigQuery for ML Excellence with PyMongoArrow and BigQuery Pandas Librarie", "contentType": "Tutorial"}, "title": "Orchestrating MongoDB & BigQuery for ML Excellence with PyMongoArrow and BigQuery Pandas Libraries", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/kotlin/mastering-kotlin-creating-api-ktor-mongodb-atlas", "action": "created", "body": "# Mastering Kotlin: Creating an API With Ktor and MongoDB Atlas\n\nKotlin's simplicity, Java interoperability, and Ktor's user-friendly framework combined with MongoDB Atlas' flexible cloud database provide a robust stack for modern software development.\n\nTogether, we'll demonstrate and set up the Ktor project, implement CRUD operations, define API route endpoints, and run the application. By the end, you'll have a solid understanding of Kotlin's capabilities in API development and the tools needed to succeed in modern software development.\n\n## Demonstration\n\n.\n\n.\n\nOnce your account is created, access the **Overview** menu, then **Connect**, and select **Kotlin**. After that, our connection **string** will be available as shown in the image below:\n\nAny questions? 
Come chat with us in the MongoDB Developer Community. \n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt02db98d69407f577/65ce9329971dbb3e733ff0fa/1.gif\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt45cf0f5548981055/65ce9d77719d5654e2e86701/1.gif\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt79f17d2600ccd262/65ce9350800623c03507858f/2.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltddd64b3284f120d4/65ce93669be818cb46d5a628/3.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt69bc7f53eff9cace/65ce937f76c8be10aa75c034/4.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt317cc4b60864461a/65ce939b6b67f967a3ee2723/5.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt69681844c8a7b8b0/65ce93bb8e125b05b739af2c/6.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt49fa907ced0329aa/65ce93df915aea23533354e0/7.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta3c3a68abaced0e0/65ce93f0fc5dbd56d22d5e4c/8.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blteeebb531c794f095/65ce9400c3164b51b2b471dd/9.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4f1f4abbb7515c18/65ce940f5c321d8136be1c12/10.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb837240b887cdde2/65ce9424f09ec82e0619a7ab/11.png\n [13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5c37f97d439b319a/65ce9437bccfe25e8ce992ab/12.png\n [14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4791dadc3ebecb2e/65ce9448c3164b1ffeb471e9/13.png", "format": "md", "metadata": {"tags": ["Kotlin"], "pageDescription": "", "contentType": "Tutorial"}, "title": "Mastering Kotlin: Creating an API With Ktor and MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/atlas-search-with-csharp", "action": "created", "body": "# MongoDB Atlas Search with .NET Blazor for Full-Text Search\n\nImagine being presented with a website with a large amount of data and not being able to search for what you want. Instead, you\u2019re forced to sift through piles of results with no end in sight.\n\nThat is, of course, the last thing you want for yourself or your users. 
So in this tutorial, we\u2019ll see how you can easily implement search with autocomplete in your .NET Blazor application using MongoDB Atlas Search.\n\nAtlas Search is the easiest and fastest way to implement relevant searches into your MongoDB Atlas-backed applications, making it simpler for developers to focus on implementing other things.\n\n## Prerequisites\nIn order to follow along with this tutorial, you will need a few things in place before you start:\n\n - An IDE or text editor that can support C# and Blazor for the most seamless development experience, such as Visual Studio, Visual Studio Code with the C# DevKit Extension installed, and JetBrains Rider.\n - An Atlas M0\n cluster,\n our free forever tier, perfect for development.\n - The sample dataset\n loaded into the\n cluster.\n - Your cluster connection\n string for use in your application settings later on.\n - A fork of the GitHub\n repo that we\n will be adding search to.\n\nOnce you have forked and then cloned the repo and have it locally, you will need to add your connection string into ```appsettings.Development.json``` and ```appsettings.json``` in the placeholder section in order to connect to your cluster when running the project.\n\n> If you don\u2019t want to follow along, the repo has a branch called \u201cfull-text-search\u201d which has the final result implemented.\n\n## Creating Atlas Search indexes\nBefore we can start adding Atlas Search to our application, we need to create search indexes inside Atlas. These indexes enable full-text search capabilities on our database. We want to specify what fields we wish to index.\n\nAtlas Search does support dynamic indexes, which apply to all fields and adapt to any document shape changes. But for this tutorial, we are going to add a search index for a specific field, \u201ctitle.\u201d\n\n 1. Inside Atlas, click \u201cBrowse Collections\u201d to open the data explorer to view your newly loaded sample data.\n 2. Select the \u201cAtlas Search\u201d tab at the top.\n 3. Click the green \u201cCreate Search Index\u201d button to load the index creation wizard.\n 4. Select Visual Editor and then click \u201cNext.\u201d\n 5. Give your index a name. What you choose is up to you.\n 6. For \u201cDatabase and Collection,\u201d select \u201csample_mflix\u201d to expand the database and select the \u201cmovies\u201d collection. Then, click \u201cNext.\u201d\n 7. In the final review section, click the \u201cRefine Your Index\u201d button below the \u201cIndex Configurations\u201d table as we want to make some changes.\n 8. Click \u201c+ Add Field Mapping\u201d about halfway down the page.\n 9. In \u201cField Name,\u201d search for \u201ctitle.\u201d\n 10. For \u201cData Type,\u201d select \u201cAutocomplete.\u201d This is because we want to have autocomplete available in our application so users can see results as they start typing.\n 11. Click the \u201cAdd\u201d button in the bottom right corner.\n 12. 
Click \u201cSave\u201d and then \u201cCreate Search Index.\u201d\n\nAfter a few minutes, the search index will be set up and the application will be ready to be \u201csearchified.\u201d\n\nIf you prefer to use the JSON editor to simply copy and paste, you can use the following:\n```json\n{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"title\": {\n \"type\": \"autocomplete\"\n }\n }\n }\n}\n```\n## Implementing backend functionality\nNow the database is set up to support Atlas Search with our new indexes, it's time to update the code in the application to support search. The code has an interface and service for talking to Atlas using the MongoDB C# driver which can be found in the ```Services``` folder.\n### Adding a new method to IMongoDBService\nFirst up is adding a new method for searching to the interface.\n\nOpen ```IMongoDBService.cs``` and add the following code:\n\n```csharp\npublic IEnumerable MovieSearchByText (string textToSearch);\n```\n\nWe return an IEnumerable of movie documents because multiple documents might match the search terms.\n\n### Implementing the method in MongoDBService\n\nNext up is adding the implementation to the service.\n\n 1. Open ```MongoDBService.cs``` and paste in the following code:\n```csharp\npublic IEnumerable MovieSearchByText(string textToSearch)\n{ \n// define fuzzy options\n SearchFuzzyOptions fuzzyOptions = new SearchFuzzyOptions()\n {\n MaxEdits = 1,\n PrefixLength = 1, \n MaxExpansions = 256\n };\n \n // define and run pipeline\n var movies = _movies.Aggregate().Search(Builders.Search.Autocomplete(movie => movie.Title, \n textToSearch, fuzzy: fuzzyOptions), indexName: \"title\").Project(Builders.Projection\n .Exclude(movie => movie.Id)).ToList();\n return movies;\n} \n```\n Replace the value for ```indexName``` with the name you gave your search index.\n\nFuzzy search allows for approximate matching to a search term which can be helpful with things like typos or spelling mistakes. So we set up some fuzzy search options here, such as how close to the right term the characters need to be and how many characters at the start that must exactly match. \n\nAtlas Search is carried out using the $search aggregation stage, so we call ```.Aggregate()``` on the movies collection and then call the ``Search``` method.\n\nWe then pass a builder to the search stage to search against the title using our passed-in search text and the fuzzy options from earlier.\n\nThe ```.Project()``` stage is optional but we\u2019re going to include it because we don\u2019t use the _id field in our application. So for performance reasons, it is always good to exclude any fields you know you won\u2019t need to be returned.\n\nYou will also need to make sure the following using statements are present at the top of the class for the code to run later:\n\n```chsarp\nusing SeeSharpMovies.Models;\nusing MongoDB.Driver;\nusing MongoDB.Driver.Search;\n```\nJust like that, the back end is ready to accept a search term, search the collection for any matching documents, and return the result.\n## Implementing frontend functionality\nNow the back end is ready to accept our searches, it is time to implement it on the front end so users can search. This will be split into two parts: the code in the front end for talking to the back end, and the search bar in HTML for typing into.\n\n### Adding code to handle search\nThis application uses razor pages which support having code in the front end. 
If you look inside ```Home.razor``` in the ```Components/Pages``` folder, you will see there is already some code there for requesting all movies and pagination.\n\n 1. Inside the ```@code``` block, underneath the existing variables, add the following code:\n```csharp\nstring searchTerm;\nTimer debounceTimer;\nint debounceInterval = 200;\n```\nAs expected, there is a string variable to hold the search term, but the other two values might not seem obvious. In development, where you are accepting input and then calling some kind of service, you want to avoid calling it too often. So you can implement something called *debounce* which handles that. You will see that implemented later but it uses a timer and an interval \u2014 in this case, 200 milliseconds.\n\n2. Add the following code after the existing methods:\n```csharp\nprivate void SearchMovies()\n {\n if (string.IsNullOrWhiteSpace(searchTerm))\n {\n movies = MongoDBService.GetAllMovies();\n }\n else\n {\n movies = MongoDBService.MovieSearchByText(searchTerm);\n }\n }\n\nvoid DebounceSearch(object state)\n {\n if (string.IsNullOrWhiteSpace(searchTerm))\n {\n SearchMovies();\n }\n else\n {\n InvokeAsync(() =>\n {\n SearchMovies();\n StateHasChanged();\n });\n }\n }\n\nvoid OnSearchInput(ChangeEventArgs e)\n {\n searchTerm = e.Value.ToString();\n debounceTimer?.Dispose();\n debounceTimer = new Timer(DebounceSearch, null, debounceInterval, Timeout.Infinite);\n }\n\n```\nSearchMovies: This method handles an empty search box as trying to search on nothing will cause it to error. So if there is nothing in the search box, it fetches all movies again. Otherwise, it calls the backend method we implemented previously to search by that term.\n\nDebounceSearch: This calls search movies and if there is a search term available, it also tells the component that the stage has changed.\n\nOnSearchInput: This will be called later by our search box but this is an event handler that says that when there is a change event, set the search term to the value of the box, reset the debounce timer, and start it again from the timer interval, passing in the ```DebounceSearch``` method as a callback function.\n\nNow we have the code to smoothly handle receiving input and calling the back end, it is time to add the search box to our UI.\n\n### Adding a search bar\nAdding the search bar is really simple. We are going to add it to the header component already present on the home page.\n\nAfter the link tag with the text \u201cSee Sharp Movies,\u201d add the following HTML:\n\n```html\n\n \n\n```\n## Testing the search functionality\nNow we have the backend code available and the front end has the search box and a way to send the search term to the back end, it's time to run the application and see it in action.\n\nRun the application, enter a search term in the box, and test the result. \n\n## Summary \nExcellent! You now have a Blazor application with search functionality added and a good starting point for using full-text search in your applications going forward.\n\nIf you want to learn more about Atlas Search, including more features than just autocomplete, you can take an amazing Atlas Search workshop created by my colleague or view the docs]https://www.mongodb.com/docs/manual/text-search/). 
If you have questions or feedback, join us in the [Community Forums.\n", "format": "md", "metadata": {"tags": ["C#", ".NET"], "pageDescription": "In this tutorial, learn how to add Atlas Search functionality with autocomplete and fuzzy search to a .NET Blazor application.", "contentType": "Tutorial"}, "title": "MongoDB Atlas Search with .NET Blazor for Full-Text Search", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/introducing-atlas-stream-processing-support-mongodb-vs-code-extension", "action": "created", "body": "# Introducing Atlas Stream Processing Support Within the MongoDB for VS Code Extension\n\nAcross industries, teams are building applications that need access to low-latency data to deliver compelling experiences and gain valuable business insights. Stream processing is a fundamental building block powering these applications. Stream processing lets developers discover and act on streaming data (data in motion), and combine that data when necessary with data at rest (data stored in a database). MongoDB is a natural fit for streaming data with its capabilities around storing and querying unstructured data and an effective query API. MongoDB Atlas Stream Processing is a service within MongoDB Atlas that provides native stream processing capabilities. In this article, you will learn how to use the MongoDB for VS Code extension to create and manage stream processors in MongoDB Atlas.\n\n## Installation\n\nMongoDB support for VS Code is provided by the MongoDB for VS Code extension. To install the MongoDB for VS Code extension, launch VS Code, open the Extensions view, and search for MongoDB to filter the results. Select the MongoDB for VS Code extension.\n\n or visit the online documentation.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt666c23a3d692a93f/65ca49389778069713c044c0/1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf7ffa77814bb0f50/65ca494dfaacae5fb31fbf4e/2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltefc76078f205a62a/65ca495d8a7a51c5d10a6474/3.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt79292cfdc1f0850b/65ca496fdccfc6374daaf101/4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta3bccb1ba48cdb6a/65ca49810ad0380459881a98/5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfa4cf4e38b4e9feb/65ca499676283276edc5e599/6.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt795a00f7ec1d1d12/65ca49ab08fffd1cdc721948/7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0c505751ace8d8ad/65ca49bc08fffd774372194c/8.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt214f210305140aa8/65ca49cc8a7a5127a00a6478/9.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf09aabfaf76e1907/65ca49e7f48bc2130d50eb36/10.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5b7e455dfdb6982f/65ca49f80167d0582c8f8e88/11.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta8ed4dee7ddc3359/65ca4a0aedad33ddf7fae3ab/12.png\n [13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt856ff7e440e2c786/65ca4a18862c423b4dfb5c91/13.png", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to use the MongoDB for VS Code extension to create and manage stream processors in MongoDB Atlas.", 
"contentType": "Tutorial"}, "title": "Introducing Atlas Stream Processing Support Within the MongoDB for VS Code Extension", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/mongodb-dataflow-templates-udf-enhancement", "action": "created", "body": "# UDF Announcement for MongoDB to BigQuery Dataflow Templates\n\nMany enterprise customers using MongoDB Atlas as their core operational database also use BigQuery for their Batch and AI/ML based analytics, making it pivotal for seamless transfer of data between these entities. Since the announcement of the Dataflow templates (in October Of 2022) on moving data between MongoDB and BigQuery, we have seen a lot of interest from customers as it made it effortless for an append-only, one-to-one migration of data. Though the three Dataflow templates provided cater to most of the common use cases, there was also a demand to be able to do transformations as part of these templates.\n\nWe are excited to announce the addition of the ability to write your own user-defined functions (UDFs) in these Dataflow pipelines! This new feature allows you to use UDFs in JavaScript to transform and analyze data within BigQuery. With UDFs, you can define custom logic and business rules that can be applied to your data as it is being processed by Dataflow. This allows you to perform complex transformations like transforming fields, concatenating fields, deleting fields, converting embedded documents to separate documents, etc. These UDFs take unprocessed documents as input parameters and return the processed documents as output.\n\nTo use UDFs with BigQuery Dataflow, simply write your JavaScript function and store it in the Google cloud storage bucket. Use the Dataflow templates\u2019 optional parameter to read these UDFs while running the templates. The function will be executed on the data as it is being processed, allowing you to apply custom logic and transformations to your data during the transfer.\n\n## How to set it up\nLet\u2019s have a quick look at how to set up a sample UDF to process (transform a field, flatten an embedded document, and delete a field) from an input document before writing the processed data to BigQuery.\n\n### Set up MongoDB \n\n1. MongoDB Atlas setup through registration.\n2. MongoDB Atlas setup through GCP Marketplace. (MongoDB Atlas is available pay as you go in the GC marketplace).\n3. Create your MongoDB cluster.\n4. Click on **Browse collections** and click on **+Create Database**.\n\n5: Name your database **Sample_Company** and collection **Sample_Employee**.\n\n6: Click on **INSERT DOCUMENT**.\n\nCopy and paste the below document and click on **Insert**.\n```\n{\n \"Name\":\"Venkatesh\",\n \"Address\":{\"Phone\":{\"$numberLong\":\"123455\"},\"City\":\"Honnavar\"},\n \"Department\":\"Solutions Consulting\",\n \"Direct_reporting\": \"PS\"\n}\n```\n7: To have authenticated access on the MongoDB Sandbox cluster from Google console, we need to create database users.\n\nClick on the **Database Access** from the left pane on the Atlas Dashboard. \n\nChoose to **Add New User** using the green button on the left. Enter the username `appUser` and password `appUser123`. We will use built-in roles; click **Add Default Privileges** and in the **Default Privileges** section, add the roles readWriteAnyDatabase. 
Then press the green **Add User** button to create the user.\n\n8: Whitelist the IPs.\n\nFor the purpose of this demo, we will allow access from any IP, i.e., 0.0.0.0/0. However, this is not recommended for a production setup, where the recommendation will be to use VPC Peering and private IPs.\n\n### Set up Google Cloud\n\n1. Create a cloud storage bucket.\n2. On your local machine, create a JavaScript file **transform.js** and add the below sample code.\n\n```\nfunction transform(inputDoc) {\n var outputDoc = inputDoc;\n outputDoc[\"City\"] = inputDoc[\"Address\"][\"City\"];\n delete outputDoc[\"Address\"];\n return outputDoc;\n}\n```\n\nThis function takes the document read from MongoDB by the Apache Beam MongoDB IO connector, flattens the embedded document Address/City to a top-level City field, deletes the Address field, and returns the updated document.\n\n3: Upload the JavaScript file to the Google Cloud storage bucket.\n\n4: Create a BigQuery Dataset in your project in the region close to your physical location.\n\n5: Create a Dataflow pipeline.\n\na. Click on the **Create Job from the template** button at the top.\n\nb. Job Name: **mongodb-udf**.\n\nc. Region: Same as your BigQuery dataset region.\n\nd. MongoDB connection URI: Copy the connection URI for connecting applications from MongoDB Atlas.\n\ne. MongoDB Database: **Sample_Company**.\n\nf. MongoDB Collection: **Sample_Employee**.\n\ng. BigQuery Destination Table: Copy the destination table link from the BigQuery Dataset details page.\n\nh. The destination table is in the format: bigquery-project:**sample_dataset.sample_company**.\n\ni. User Option: **FLATTEN**.\n\nj. Click on **show optional parameters**.\n\nk. Cloud storage location of your JavaScript UDF: Browse to the UDF file loaded to your bucket location. This is the new feature that allows running the UDF and applying the transformations before inserting into BigQuery.\n\nl. Name of your JavaScript function: **transform**.\n\n6: Click on **RUN JOB** to start running the pipeline. Once the pipeline finishes running, your graph should show **Succeeded** on each stage as shown below.\n\n7: After completion of the job, you will be able to see the transformed document inserted into BigQuery.\n\n## Conclusion\nIn this blog, we introduced UDFs to the MongoDB to BigQuery Dataflow templates and their capability to transform the documents read from MongoDB using custom user-defined JavaScript functions stored on Google Cloud storage buckets.
This blog also includes a simple tutorial on how to set up MongoDB Atlas, Google Cloud, and the UDFs.\n\n### Further reading\n\n* A data pipeline for MongoDB Atlas and BigQuery using Dataflow.\n* A data pipeline for MongoDB Atlas and BigQuery using the Confluent connector.\n* Run analytics using BigQuery using BigQuery ML.\n* Set up your first MongoDB cluster using Google Marketplace.\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "AI"], "pageDescription": "Learn how to transform the MongoDB Documents using user-defined JavaScript functions in Dataflow templates.", "contentType": "Tutorial"}, "title": "UDF Announcement for MongoDB to BigQuery Dataflow Templates", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/improving-storage-read-performance-free-flat-vs-structured-schemas", "action": "created", "body": "# Improving Storage and Read Performance for Free: Flat vs Structured Schemas\n\nWhen developers or administrators who had previously only been \"followers of the word of relational data modeling\" start to use MongoDB, it is common to see documents with flat schemas. This behavior happens because relational data modeling makes you think about data and schemas in a flat, two-dimensional structure called tables.\n\nIn MongoDB, data is stored as BSON documents, almost a binary representation of JSON documents, with slight differences. Because of this, we can create schemas with more dimensions/levels. More details about BSON implementation can be found in its specification. You can also learn more about its differences from JSON. \n\nMongoDB documents are composed of one or more key/value pairs, where the value of a field can be any of the BSON data types, including other documents, arrays, or arrays of documents.\n\nUsing documents, arrays, or arrays of documents as values for fields enables the creation of a structured schema, where one field can represent a group of related information. This structured schema is an alternative to a flat schema. \n\nLet's see an example of how to write the same `user` document using the two schemas:\n\n.\n- Documents with 10, 25, 50, and 100 fields were utilized for the flat schema.\n- Documents with 2x5, 5x5, 10x5, and 20x5 fields were used for the structured schema, where 2x5 means two fields of type document with five fields for each document.\n- Each collection had 10.000 documents generated using faker/npm.\n- To force the MongoDB engine to loop through all documents and all fields inside each document, all queries were made searching for a field and value that wasn't present in the documents.\n- Each query was executed 100 times in a row for each document size and schema.\n- No concurrent operation was executed during each test.\n\nNow, to the test results:\n\n| **Documents** | **Flat** | **Structured** | **Difference** | **Improvement** |\n| ------------- | -------- | -------------- | -------------- | --------------- |\n| 10 / 2x5 | 487 ms | 376 ms | 111 ms | 29,5% |\n| 25 / 5x5 | 624 ms | 434 ms | 190 ms | 43,8% |\n| 50 / 10x5 | 915 ms | 617 ms | 298 ms | 48,3% |\n| 100 / 20x5 | 1384 ms | 891 ms | 493 ms | 55,4% |\n\nAs our theory predicted, traversing a structured document is faster than traversing a flat one. 
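\n\nTo make the comparison concrete, the two shapes being compared look roughly like the following. This is an illustrative `user` document with made-up fields, not the exact documents generated for the test:\n\n```javascript\n// Flat schema: every value is a top-level field\nconst flatUser = {\n  \"name\": \"Jane Doe\",\n  \"address_city\": \"New York\",\n  \"address_country\": \"USA\",\n  \"company_name\": \"MongoDB\",\n  \"company_role\": \"Developer\"\n};\n\n// Structured schema: related fields are grouped into embedded documents\nconst structuredUser = {\n  \"name\": \"Jane Doe\",\n  \"address\": { \"city\": \"New York\", \"country\": \"USA\" },\n  \"company\": { \"name\": \"MongoDB\", \"role\": \"Developer\" }\n};\n```\n\n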
The gains presented in this test shouldn't be considered for all cases when comparing structured and flat schemas, the improvements in traversing will depend on how the nested fields and documents are organized.\n\nThis article showed how to better use your MongoDB deployment by changing the schema of your document for the same data/information. Another option to extract more performance from your MongoDB deployment is to apply the common schema patterns of MongoDB. In this case, you will analyze which data you should put in your document/schema. The article Building with Patterns has the most common patterns and will significantly help.\n\nThe code used to get the above results is available in the GitHub repository.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0d2f0e3700c6e2ac/65b3f5ce655e30caf6eb9dba/schema-comparison.jpg\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte533958fd8753347/65b3f611655e30a264eb9dc4/image1.png", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to optimize the size of your documents within MongoDB by changing how you structure your schema.", "contentType": "Article"}, "title": "Improving Storage and Read Performance for Free: Flat vs Structured Schemas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/streamlining-cloud-native-development-gitpod-atlas", "action": "created", "body": "# Streamlining Cloud-Native Development with Gitpod and MongoDB Atlas\n\nDevelopers are increasingly shifting from the traditional development model of writing code and testing the entire application stack locally to remote development environments that are more cloud-native. This allows them to have environments that are configurable as-code, are easily reproducible for any team member, and are quick enough to spin up and tear down that each pull request can have an associated ephemeral environment for code reviews.\n\nAs new platforms and services that developers use on a daily basis are more regularly provided as cloud-first or cloud-only offerings, it makes sense to leverage all the advantages of the cloud for the entire development lifecycle and have the development environment more effectively mirror the production environment.\n\nIn this blog, we\u2019ll look at how Gitpod, with its Cloud Development Environment (CDE), is a perfect companion for MongoDB Atlas when it comes to a cloud-native development experience. We are so excited about the potential of this combined development experience that we invested in Gitpod\u2019s most recent funding round.\n\nAs an example, let\u2019s look at a simple Node.js application that exposes an API to retrieve quotes from popular authors. You can find the source code on Github. You should be able to try out the end-to-end setup yourself by going to Gitpod. 
The project is configured to use a free cluster in Atlas and, assuming you don\u2019t have one already running in your Atlas account, everything should work out of the box.\n\nThe code for the application is straightforward and is mostly contained in app.js, but the most interesting part is how the Gitpod development environment is set up: With just a couple of configuration files added to the GitHub repository, **a developer who works on this project for the first time can have everything up and running, including the MongoDB cluster needed for development seeded with test data, in about 30 seconds!**\n\nLet\u2019s take a look at how that is possible.\n\nWe\u2019ll start with the Dockerfile. Gitpod provides an out-of-the-box Docker image for the development environment that contains utilities and support for the most common programming languages. In our case, we prefer to start with a minimal image and add only what we need to it: the Atlas CLI (and the MongoDB Shell that comes with it) to manage resources in Atlas and Node.js.\n\n```dockerfile\nFROM gitpod/workspace-base:2022-09-07-02-19-02\n\n# Install MongoDB Tooling\nRUN sudo apt-get install gnupg\nRUN wget -qO - https://pgp.mongodb.com/server-5.0.asc | sudo apt-key add -\nRUN echo \"deb arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/5.0 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-5.0.list\nRUN sudo apt-get update\nRUN sudo apt-get install -y mongodb-atlas\n\n# Install Node 18\nRUN curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -\nRUN sudo apt-get install -y nodejs\n\n# Copy Atlas script\nCOPY mongodb-utils.sh /home/gitpod/.mongodb-utils.sh\nRUN echo \"source ~/.mongodb-utils.sh\" >> .bash_aliases\n\n```\n\nTo make things a little easier and cleaner, we\u2019ll also add to the container a [mongodb-utils.sh file and load it into bash_aliases. It\u2019s a bash script that contains convenience functions that wrap some of the Atlas CLI commands to make them easier to use within the Gitpod environment.\n\nThe second half of the configuration is contained in .gitpod.yml. This file may seem a little verbose, but what it does is relatively simple. Let\u2019s take a closer look at these configuration details in the following sections of this article.\n\n## Ephemeral cluster for development\nOur Quotes API application uses MongoDB to store data: All the quotes with their metadata are in a MongoDB collection. Atlas is the best way to run MongoDB so we will be using that. Plus, because we are using Atlas, we can also take advantage of Atlas Search to offer full-text search capabilities to our API users.\n\nSince we want our development environment to have characteristics that are compatible with what we\u2019ll have in production, we will use Atlas for our development needs as well. In particular, we want to make sure that every time a developer starts a Gitpod environment, a corresponding ephemeral cluster is created in Atlas and seeded with test data.\n\nWith some simple configuration, Gitpod takes care of all of this in a fully automated way. The `atlas_up` script creates a cluster with the same name as the Gitpod workspace. This way, it\u2019s easy to see what clusters are being used for development.\n\n```bash\nif ! -n \"${MONGODB_ATLAS_PROJECT_ID+1}\" ]; then\n echo \"\\$MONGODB_ATLAS_PROJECT_ID is not set. Lets try to login.\"\n if ! 
atlas auth whoami &> /dev/null ; then\n atlas auth login --noBrowser\n fi\nfi\nMONGODB_CONNECTION_STRING=$(atlas_up)\n```\n\nThe script above is a little sophisticated as it takes care of opening the browser and logging you in with your Atlas account if it\u2019s the first time you\u2019ve set up Gitpod with this project. Once you are set up the first time, you can choose to generate API credentials and skip the login step in the future. The instructions on how to do that are in the [README file included in the repository.\n\n## Development cluster seeded with sample data\nWhen developing an application, it\u2019s convenient to have test data readily available. In our example, the repository contains a zipped dataset in JSON format. During the initialization of the workspace, once the cluster is deployed, we connect to it with the MongoDB Shell (mongosh) and run a script that loads the unzipped dataset into the cluster.\n\n```bash\nunzip data/quotes.zip -d data\nmongosh $MONGODB_CONNECTION_STRING data/_load.js\n```\n\n## Creating an Atlas Search index\nAs part of our Quotes API, we provide an endpoint to search for quotes based on their content or their author. With Atlas Search and the MongoDB Query API, it is extremely easy to configure full-text search for a given collection, and we\u2019ll use that in our application.\n\nAs we want the environment to be ready to code, as part of the initialization, we also create a search index. For convenience, we included the `data/_create-search-index.sh` script that takes care of that by calling the `atlas cluster search index create command` and passing the right parameters to it.\n\n## Cleaning things up\nTo make the cluster truly ephemeral and start with a clean state every time we start a new workspace, we want to make sure we terminate it once it is no longer needed.\n\nFor this example, we\u2019ve used a free cluster, which is perfect for most development use cases. However, if you need better performance, you can always configure your environment to use a paid cluster (see the `--tier` option of the Atlas CLI). Should you choose to do so, it is even more important to terminate the cluster when it is no longer needed so you can avoid unnecessary costs.\n\nTo do that, we wait for the Gitpod environment to be terminated. That is what this section of the configuration file does:\n\n```yml\ntasks:\n - name: Cleanup Atlas Cluster\n command: |\n atlas_cleanup_when_done\n```\n\nThe `atlas_cleanup_when_done` script waits for the SIGTERM sent to the Gitpod container and, once it receives it, it sends a command to the Atlas CLI to terminate the cluster.\n\n## End-to-end developer experience\nDuring development, it is often useful to look at the data stored in MongoDB. As Gitpod integrates very well with VS Code, we can configure it so the MongoDB for VS Code extension is included in the setup.\n\nThis way, whoever starts the environment has the option of connecting to the Atlas cluster directly from within VS Code to explore their data, and test their queries. 
MongoDB for VS Code is also a useful tool to insert and edit data into your test database: With its Playground functionality, it is really easy to execute any CRUD operation, including scripting the insertion of fake test data.\n\nAs this is a JavaScript application, we also include the Standard VS Code extension for linting and code formatting.\n\n```yml\nvscode:\n extensions:\n - mongodb.mongodb-vscode\n - standard.vscode-standard\n```\n\n## Conclusion\nMongoDB Atlas is the ideal data platform across the entire development lifecycle. With Atlas, developers get a platform that is 100% compatible with production, including services like Atlas Search that runs next to the core database. And as developers shift towards Cloud Development Environments like Gitpod, they can get an even more sophisticated experience developing in the cloud with Atlas and always be ready to code. Check out the source code provided in this article and give MongoDB Atlas a try with Gitpod.\n\nQuestions? Comments? Head to the MongoDB Developer Community to join the conversation.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "More developers are moving from local development to working in cloud-native, remote development environments. Together, MongoDB and Gitpod make a perfect pair for developers looking for this type of seamless cloud development experience.", "contentType": "Tutorial"}, "title": "Streamlining Cloud-Native Development with Gitpod and MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/leveraging-mongodb-atlas-vector-search-langchain", "action": "created", "body": "# Leveraging MongoDB Atlas Vector Search with LangChain\n\n## Introduction to Vector Search in MongoDB Atlas\n\nVector search engines \u2014 also termed as vector databases, semantic search, or cosine search \u2014 locate the closest entries to a specified vectorized query. While the conventional search methods hinge on keyword references, lexical match, and the rate of word appearances, vector search engines measure similarity by the distance in the embedding dimension. Finding related data becomes searching for the nearest neighbors of your query. \n\nVector embeddings act as the numeric representation of data and its accompanying context, preserved in high-dimensional (dense) vectors. There are various models, both proprietary (like those from OpenAI and Hugging Face) and open-source ones (like FastText), designed to produce these embeddings. These models can be trained on millions of samples to deliver results that are both more pertinent and precise. In certain situations, the numeric data you've gathered or designed to showcase essential characteristics of your documents might serve as embeddings. The crucial part is to have an efficient search mechanism, like MongoDB Atlas.\n\n), choose _Search_ and _Create Search Index_. Please also visit the official MongoDB documentation to learn more.\n\n in your user settings.\n\nTo install LangChain, you'll first need to update pip for Python or npm for JavaScript, then use the respective install command. 
Here are the steps:\n\nFor Python version, use:\n\n```\npip3 install pip --upgrade\npip3 install langchain\n```\n\nWe will also need other Python modules, such as ``pymongo`` for communication with MongoDB Atlas, ``openai`` for communication with the OpenAI API, and ``pypdf` `and ``tiktoken`` for other functionalities.\n\n```\npip3 install pymongo openai pypdf tiktoken\n```\n\n### Start using Atlas Vector Search \n\nIn our exercise, we utilize a publicly accessible PDF document titled \"MongoDB Atlas Best Practices\" as a data source for constructing a text-searchable vector space. The implemented Python script employs several modules to process, vectorize, and index the document's content into a MongoDB Atlas collection.\n\nIn order to implement it, let's begin by setting up and exporting the environmental variables. We need the Atlas connection string and the OpenAI API key.\n\n```\nexport OPENAI_API_KEY=\"xxxxxxxxxxx\"\nexport ATLAS_CONNECTION_STRING=\"mongodb+srv://user:passwd@vectorsearch.abc.mongodb.net/?retryWrites=true\"\n```\n\nNext, we can execute the code provided below. This script retrieves a PDF from a specified URL, segments the text, and indexes it in MongoDB Atlas for text search, leveraging LangChain's embedding and vector search features. The full code is accessible on GitHub.\n\n```\nimport os\nfrom pymongo import MongoClient\nfrom langchain.document_loaders import PyPDFLoader\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\nfrom langchain.embeddings import OpenAIEmbeddings\nfrom langchain.vectorstores import MongoDBAtlasVectorSearch\n\n# Define the URL of the PDF MongoDB Atlas Best Practices document\npdf_url = \"https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4HkJP\"\n\n# Retrieve environment variables for sensitive information\nOPENAI_API_KEY = os.getenv('OPENAI_API_KEY')\nif not OPENAI_API_KEY:\n raise ValueError(\"The OPENAI_API_KEY environment variable is not set.\")\n\nATLAS_CONNECTION_STRING = os.getenv('ATLAS_CONNECTION_STRING')\nif not ATLAS_CONNECTION_STRING:\n raise ValueError(\"The ATLAS_CONNECTION_STRING environment variable is not set.\")\n\n# Connect to MongoDB Atlas cluster using the connection string\ncluster = MongoClient(ATLAS_CONNECTION_STRING)\n\n# Define the MongoDB database and collection names\nDB_NAME = \"langchain\"\nCOLLECTION_NAME = \"vectorSearch\"\n\n# Connect to the specific collection in the database\nMONGODB_COLLECTION = clusterDB_NAME][COLLECTION_NAME]\n\n# Initialize the PDF loader with the defined URL\nloader = PyPDFLoader(pdf_url)\ndata = loader.load()\n\n# Initialize the text splitter\ntext_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)\n\n# Split the document into manageable segments\ndocs = text_splitter.split_documents(data)\n\n# Initialize MongoDB Atlas vector search with the document segments\nvector_search = MongoDBAtlasVectorSearch.from_documents(\n documents=docs,\n embedding=OpenAIEmbeddings(),\n collection=MONGODB_COLLECTION,\n index_name=\"default\" # Use a predefined index name\n)\n# At this point, 'docs' are split and indexed in MongoDB Atlas, enabling text search capabilities.\n```\n\nUpon completion of the script, the PDF has been segmented and its vector representations are now stored within the ``langchain.vectorSearch`` namespace in MongoDB Atlas.\n\n![embedding results][5]\n\n### Execute similarities searching query in Atlas Vector Search\n\n\"`MongoDB Atlas auditing`\" serves as our search statement for initiating similarity searches. 
By utilizing the `OpenAIEmbeddings` class, we'll generate vector embeddings for this phrase. Following that, a similarity search will be executed to find and extract the three most semantically related documents from our MongoDB Atlas collection that align with our search intent.\n\nIn the first step, we need to create a ``MongoDBAtlasVectorSearch`` object:\n\n```\ndef create_vector_search():\n \"\"\"\n Creates a MongoDBAtlasVectorSearch object using the connection string, database, and collection names, along with the OpenAI embeddings and index configuration.\n\n :return: MongoDBAtlasVectorSearch object\n \"\"\"\n vector_search = MongoDBAtlasVectorSearch.from_connection_string(\n ATLAS_CONNECTION_STRING,\n f\"{DB_NAME}.{COLLECTION_NAME}\",\n OpenAIEmbeddings(),\n index_name=\"default\"\n )\n return vector_search\n```\n\nSubsequently, we can perform a similarity search.\n\n```\ndef perform_similarity_search(query, top_k=3):\n \"\"\"\n This function performs a similarity search within a MongoDB Atlas collection. It leverages the capabilities of the MongoDB Atlas Search, which under the hood, may use the `$vectorSearch` operator, to find and return the top `k` documents that match the provided query semantically.\n\n :param query: The search query string.\n :param top_k: Number of top matches to return.\n :return: A list of the top `k` matching documents with their similarity scores.\n \"\"\"\n\n # Get the MongoDBAtlasVectorSearch object\n vector_search = create_vector_search()\n \n # Execute the similarity search with the given query\n results = vector_search.similarity_search_with_score(\n query=query,\n k=top_k,\n )\n \n return results\n\n# Example of calling the function directly\nsearch_results = perform_similarity_search(\"MongoDB Atlas auditing\")\n```\n\nThe function returns the most semantically relevant documents from a MongoDB Atlas collection that correspond to a specified search query. When executed, it will provide a list of documents that are most similar to the query \"`MongoDB Atlas auditing`\". Each entry in this list includes the document's content that matches the search along with a similarity score, reflecting how closely each document aligns with the intent of the query. The function returns the top k matches, which by default is set to 5 but can be specified for any number of top results desired. Please find the [code on GitHub. \n\n## Summary\n\nMongoDB Atlas Vector Search enhances AI applications by facilitating the embedding of vector data into MongoDB documents. It simplifies the creation of search indices and the execution of KNN searches through the ``$vectorSearch`` MQL stage, utilizing the Hierarchical Navigable Small Worlds algorithm for efficient nearest neighbor searches. The collaboration with LangChain leverages this functionality, contributing to more streamlined and powerful semantic search capabilities\n. Harness the potential of MongoDB Atlas Vector Search and LangChain to meet your semantic search needs today!\n\nIn the next blog post, we will delve into LangChain Templates, a new feature set to enhance the capabilities of MongoDB Atlas Vector Search. Alongside this, we will examine the role of retrieval-augmented generation (RAG) in semantic search and AI development. Stay tuned for an in-depth exploration in our upcoming article!\n\nQuestions? Comments? 
We\u2019d love to continue the conversation over in the Developer Community forum.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte7a2e75d0a8966e6/6553d385f1467608ae159f75/1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdc7192b71b0415f1/6553d74b88cbdaf6aa8571a7/2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfd79fc3b47ce4ad8/6553d77b38b52a4917584197/3.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta6bbbb7c921bb08c/65a1b3ecd2ebff119d6f491d/atlas-search-create-search-index.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt627e7a7dd7b1a208/6553d7b28c5bd6f5f8c993cf/4.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI"], "pageDescription": "Discover the integration of MongoDB Atlas Vector Search with LangChain, explored in Python in this insightful article. It highlights how advanced semantic search capabilities and high-dimensional embeddings revolutionize data retrieval. Understand the use of MongoDB Atlas' $vectorSearch operator and how Python enhances the functionality of LangChain in building AI-driven applications. This guide offers a comprehensive overview for harnessing these cutting-edge tools in data analysis and AI-driven search processes.", "contentType": "Tutorial"}, "title": "Leveraging MongoDB Atlas Vector Search with LangChain", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/java-spring-bulk-writes", "action": "created", "body": "# Implementing Bulk Writes using Spring Boot for MongoDB\n\n## Introduction\nThe Spring Data Framework is used extensively in applications as it makes it easier to access different kinds of persistence stores. This article will show how to use Spring Data MongoDB to implement bulk insertions.\n\nBulkOperations is an interface that contains a list of write operations to be applied to the database. They can be any combination of\n`InsertOne`,\n`updateOne`\n`updateMany`\n`replaceOne`\n`deleteOne`\n`deleteMany`\n\nA bulkOperation can be ordered or unordered. Ordered operations will be applied sequentially and if an error is detected, will return with an error code. Unordered operations will be applied in parallel and are thus potentially faster, but it is the responsibility of the application to check if there were errors during the operations. For more information please refer to the bulk write operations section of the MongoDB documentation.\n\n## Getting started\nA POM file will specify the version of Spring Data that the application will use. Care must be taken to use a version of Spring Data that utilizes a compatible version of the MongoDB Java Driver. You can verify this compatibility in the MongoDB Java API documentation.\n```\n\n \n org.springframework.boot\n spring-boot-starter-data-mongodb\n 2.7.2\n \n```\n\n## Application class\nThe top level class is a SpringBootApplication that implements a CommandLineRunner , like so: \n```\n@SpringBootApplication\npublic class SpringDataBulkInsertApplication implements CommandLineRunner {\n\n @Value(\"${documentCount}\")\n private int count;\n private static final Logger LOG = LoggerFactory\n .getLogger(SpringDataBulkInsertApplication.class);\n\n @Autowired\n private CustomProductsRepository repository;\n\n public static void main(String] args) {\n SpringApplication.run(SpringDataBulkInsertApplication.class, args);\n }\n\n @Override\n public void run(String... 
args) {\n\n repository.bulkInsertProducts(count);\n LOG.info(\"End run\");\n }\n}\n\n```\n\nNow we need to write a few classes to implement our bulk insertion application.\n\n## Configuration class\nWe will implement a class that holds the configuration to the MongoClient object that the Spring Data framework will utilize.\n\nThe `@Configuration` annotation will allow us to retrieve values to configure access to the MongoDB Environment. For a good explanation of Java-based configuration see [JavaConfig in the Spring reference documentation for more details.\n\n```\n@Configuration\npublic class MongoConfig {\n @Value(\"${mongodb.uri}\")\n private String uri;\n\n @Value(\"${mongodb.database}\")\n private String databaseName;\n\n @Value(\"${truststore.path}\")\n private String trustStorePath;\n @Value(\"${truststore.pwd}\")\n private String trustStorePwd;\n\n @Value(\"${mongodb.atlas}\")\n private boolean atlas;\n\n @Bean\n public MongoClient mongo() {\n ConnectionString connectionString = new ConnectionString(uri);\n MongoClientSettings mongoClientSettings = MongoClientSettings.builder()\n .applyConnectionString(connectionString)\n .applyToSslSettings(builder -> {\n if (!atlas) {\n // Use SSLContext if a trustStore has been provided\n if (!trustStorePath.isEmpty()) {\n SSLFactory sslFactory = SSLFactory.builder()\n .withTrustMaterial(Paths.get(trustStorePath), trustStorePwd.toCharArray())\n .build();\n SSLContext sslContext = sslFactory.getSslContext();\n builder.context(sslContext);\n builder.invalidHostNameAllowed(true);\n }\n }\n builder.enabled(true);\n })\n .build();\n return MongoClients.create(mongoClientSettings);\n }\n\n @Bean\n public MongoTemplate mongoTemplate() throws Exception {\n return new MongoTemplate(mongo(), databaseName);\n }\n}\n```\n\nIn this implementation we are using a flag, mongodb.atlas, to indicate that this application will connect to Atlas. If the flag is false, an SSL Context may be created using a trustStore, This presents a certificate for the root certificate authority in the form of a truststore file pointed to by truststore.path, protected by a password (`truststore.pwd`) at the moment of creation. If needed the client can also offer a keystore file, but this is not implemented.\n\nThe parameter mongodb.uri should contain a valid MongoDB URI. The URI contains the hosts to which the client connects, the user credentials, etcetera. \n\n## The document class\nThe relationship between MongoDB collection and the documents that it contains is implemented via a class that is decorated by the @Document annotation. This class defines the fields of the documents and the annotation defines the name of the collection.\n\n```\n@Document(\"products\")\npublic class Products {\n\n private static final Logger LOG = LoggerFactory\n .getLogger(Products.class);\n @Id\n private String id;\n private String name;\n private int qty;\n private double price;\n private Date available;\n private Date unavailable;\n private String skuId;\n```\n\nSetters and getters need to be defined for each field. The @Id annotation indicates our default index. If this field is not specified, MongoDB will assign an ObjectId value which will be unique.\n\n## Repository classes\nThe repository is implemented with two classes, one an interface and the other the implementation of the interface. The repository classes flesh out the interactions of the application with the database. 
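The interface half of the pair is not reproduced in this article. A minimal sketch of what it could look like, assuming only the single bulk-insert method used by the implementation shown below, is:

```java
public interface CustomProductsRepository {

    // Drops and repopulates the products collection with `count` randomly
    // generated documents in a single bulk operation, returning the number
    // of documents written.
    int bulkInsertProducts(int count);
}
```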
A method in the repository is responsible for the bulk insertion:\n\n```\n@Component\npublic class CustomProductsRepositoryImpl implements CustomProductsRepository {\n\n private static final Logger LOG = LoggerFactory\n .getLogger(CustomProductsRepository.class);\n\n @Autowired\n MongoTemplate mongoTemplate;\n\n public int bulkInsertProducts(int count) {\n\n LOG.info(\"Dropping collection...\");\n mongoTemplate.dropCollection(Products.class);\n LOG.info(\"Dropped!\");\n\n Instant start = Instant.now();\n mongoTemplate.setWriteConcern(WriteConcern.W1.withJournal(true));\n\n Products [] productList = Products.RandomProducts(count);\n BulkOperations bulkInsertion = mongoTemplate.bulkOps(BulkOperations.BulkMode.UNORDERED, Products.class);\n\n for (int i=0; i", "format": "md", "metadata": {"tags": ["Java", "Spring"], "pageDescription": "Learn how to use Spring Data MongoDB to implement bulk insertions for your application", "contentType": "Tutorial"}, "title": "Implementing Bulk Writes using Spring Boot for MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/mongodb-atlas-with-terraform", "action": "created", "body": "# MongoDB Atlas with Terraform\n\nIn this tutorial, I will show you how to start using MongoDB Atlas with Terraform and create some simple resources. This first part is simpler and more introductory, but in the next article, I will explore more complex items and how to connect the creation of several resources into a single module. The tutorial is aimed at people who want to maintain their infrastructure as code (IaC) in a standardized and simple way. If you already use or want to use IaC on the MongoDB Atlas platform, this article is for you.\n\nWhat are modules?\n\nThey are code containers for multiple resources that are used together. They serve several important purposes in building and managing infrastructure as code, such as:\n\n 1. Code reuse.\n 2. Organization.\n 3. Encapsulation.\n 4. Version management.\n 5. Ease of maintenance and scalability.\n 6. Sharing in the community.\n\nEverything we do here is contained in the provider/resource documentation.\n\n> Note: We will not use a backend file. However, for productive implementations, it is extremely important and safer to store the state file in a remote location such as an S3, GCS, Azurerm, etc\u2026\n\n## Creating a project\n\nIn this first step, we will dive into the process of creating a project using Terraform. Terraform is a powerful infrastructure-as-code tool that allows you to manage and provision IT resources in an efficient and predictable way. By using it in conjunction with MongoDB Atlas, you can automate the creation and management of database resources in the cloud, ensuring a consistent and reliable infrastructure.\n\nTo get started, you'll need to install Terraform in your development environment. This step is crucial as it is the basis for running all the scripts and infrastructure definitions we will create. After installation, the next step is to configure Terraform to work with MongoDB Atlas. You will need an API key that has permission to create a project at this time.\n\nTo create an API key, you must:\n 1. Select **Access Manager** at the top of the page, and click **Organization Access**. \n 2. Click **Create API Key**.\n ![Organization Access Manager for your organization][1]\n 3. Enter a brief description of the API key and the necessary permission. In this case, I put it as Organization Owner. 
After that, click **Next**.\n ![Screen to create your API key][2]\n 4. Your API key will be displayed on the screen.\n ![Screen with information about your API key][3]\n 5. Release IP in the Access List (optional): If you have enabled your organization to use API keys, the requestor's IP must be released in the Access List; you must include your IP in this list. To validate whether it is enabled or not, go to **Organization Settings -> Require IP Access List** for the Atlas Administration API. In my case, it is disabled, as it is just a demonstration, but in case you are using this in an organization, I strongly advise you to enable it.\n ![Validate whether the IP Require Access List for APIs is enabled in Organization Settings][4]\n\nAfter creating an API key, let's start working with Terraform. You can use the IDE of your choice; I will be using VS Code. Create the files within a folder. The files we will need at this point are:\n - main.tf: In this file, we will define the main resource, `mongodbatlas_project`. Here, you will configure the project name and organization ID, as well as other specific settings, such as teams, limits, and alert settings.\n - provider.tf: This file is where we define the provider we are using \u2014 in our case, `mongodbatlas`. Here, you will also include the access credentials, such as the API key.\n - terraform.tfvars: This file contains the variables that will be used in our project \u2014 for example, the project name, team information, and limits, among others.\n - variable.tf: Here, we define the variables mentioned in the terraform.tfvars file, specifying the type and, optionally, a default value.\n - version.tf: This file is used to specify the version of Terraform and the providers we are using.\n\nThe main.tf file is the heart of our Terraform project. In it, you start with the data source declaration `mongodbatlas_roles_org_id` to obtain the `org_id`, which is essential for creating the project.\nNext, you define the `mongodbatlas_project` resource with several settings. 
Here are some examples:\n - `name` and `org_id` are basic settings for the project name and organization ID.\n - Dynamic blocks are used to dynamically configure teams and limits, allowing flexibility and code reuse.\n - Other settings, like `with_default_alerts_settings` and `is_data_explorer_enabled`, are options for customizing the behavior of your MongoDB Atlas project.\n\nIn the main.tf file, we will then add our project resource, called `mongodbatlas_project`.\n\n```tf\ndata \"mongodbatlas_roles_org_id\" \"org\" {}\n\nresource \"mongodbatlas_project\" \"default\" {\n name = var.name\n org_id = data.mongodbatlas_roles_org_id.org.org_id\n\n dynamic \"teams\" {\n for_each = var.teams\n content {\n team_id = teams.value.team_id\n role_names = teams.value.role_names\n }\n }\n\n dynamic \"limits\" {\n for_each = var.limits\n content {\n name = limits.value.name\n value = limits.value.value\n }\n }\n\n with_default_alerts_settings = var.with_default_alerts_settings\n is_collect_database_specifics_statistics_enabled = var.is_collect_database_specifics_statistics_enabled\n is_data_explorer_enabled = var.is_data_explorer_enabled\n is_extended_storage_sizes_enabled = var.is_extended_storage_sizes_enabled\n is_performance_advisor_enabled = var.is_performance_advisor_enabled\n is_realtime_performance_panel_enabled = var.is_realtime_performance_panel_enabled\n is_schema_advisor_enabled = var.is_schema_advisor_enabled\n}\n```\n\nIn the provider file, we will define the provider we are using and the API key that will be used. As we are just testing, I will specify the API key as a variable that we will input into our code. However, when you are using it in production, you will not want to pass the API key in the code in exposed text, so it is possible to pass it through environment variables or even AWS secret manager.\n```tf\nprovider \"mongodbatlas\" {\n public_key = var.atlas_public_key\n private_key = var.atlas_private_key\n}\n```\n\nIn the variable.tf file, we will specify the variables that we are waiting for a user to pass. As I mentioned earlier, the API key is an example.\n```tf\nvariable \"name\" {\n description = <= 0.12\"\n required_providers {\n mongodbatlas = {\n source = \"mongodb/mongodbatlas\"\n version = \"1.14.0\"\n }\n }\n}\n```\n - `required_version = \">= 0.12\"`: This line specifies that your Terraform project requires, at a minimum, Terraform version 0.12. By using >=, you indicate that any version of Terraform from 0.12 onward is compatible with your project. This offers some flexibility by allowing team members and automation systems to use newer versions of Terraform as long as they are not older than 0.12.\n - `required_providers`: This section lists the providers required for your Terraform project. In your case, you are specifying the mongodbatlas provider.\n - `source = \"mongodb/mongodbatlas\"`: This defines the source of the mongodbatlas provider. Here, mongodb/mongodbatlas is the official identifier of the MongoDB Atlas provider in the Terraform Registry.\n - `version = \"1.14.0\":` This line specifies the exact version of the mongodbatlas provider that your project will use, which is version 1.14.0. Unlike Terraform configuration, where we specify a minimum version, here you are defining a provider-specific version. 
This ensures that everyone using your code will work with the same version of the provider, avoiding discrepancies and issues related to version differences.\n\nFinally, we have the variable file that will be included in our code, .tfvars.\n```tf\nname = \"project-test\"\natlas_public_key = \"YOUR PUBLIC KEY\"\natlas_private_key = \"YOUR PRIVATE KEY\"\n```\n\nWe are specifying the value of the name variable, which is the name of the project and the public/private key of our provider. You may wonder, \"Where are the other variables that we specified in the main.tf and variable.tf files?\" The answer is: These variables were specified with a default value within the variable.tf file \u2014 for example, the limits value:\n```tf\nvariable \"limits\" {\n description = <", "format": "md", "metadata": {"tags": ["Atlas", "Terraform"], "pageDescription": "Learn how to get started with organising your MongoDB deployment with Terraform, using code to build and maintain your infrastructure.", "contentType": "Tutorial"}, "title": "MongoDB Atlas with Terraform", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/python-data-access-layer", "action": "created", "body": "# Building a Python Data Access Layer\n\nThis tutorial will show you how to use some reasonably advanced Python techniques to wrap BSON documents in a way that makes them feel much more like Python objects and allows different ways to access the data within. It's the first in a series demonstrating how to build a Python data access layer for MongoDB.\n\n## Coding with Mark?\n\nThis tutorial is loosely based on the first episode of a new livestream I host, called \"Coding with Mark.\" I'm streaming on Wednesdays at 2 p.m. GMT (that's 9 a.m. ET or 6 a.m. PT, if you're an early riser!). If that time doesn't work for you, you can always catch up by watching the recordings!\n\nFor the first few episodes, you can follow along as I attempt to build a different kind of Pythonic data access layer, a library to abstract underlying database modeling changes from a hypothetical application. One of the examples I'll use later on in this series is a microblogging platform, along the lines of Twitter/X or Bluesky. In order to deal with huge volumes of data, various modeling techniques are required, and my library will attempt to find ways to make these data modeling choices invisible to the application, making it easier to develop while remaining possible to change the underlying data model.\n\nI'm using some pretty advanced programming and metaprogramming techniques to hide away some quite clever functionality. It's going to be a good series whether you're looking to improve either your Python or your MongoDB skills.\n\nIf that doesn't sound exciting enough, I'm lining up some awesome guests from the Python community, and in the future, we may branch away from Python and into other strange and wonderful worlds.\n\n## Why a data access layer?\n\nIn any well-architected application of a reasonable size, you'll usually find that the codebase is split into at least three areas of concern:\n\n 1. A presentation layer is concerned with formatting data for consumption by a client. This may generate web pages to be viewed by a person in a browser, but increasingly, this may be an API endpoint, either driving an app that runs on a user's computer (or within their browser) or providing data to other services within a broader service-based architecture. 
This layer is also responsible for receiving data from a client and parsing it into data that can be used by the business logic layer.\n 2. A business logic layer sits behind the presentation layer and provides the \"brains\" of an application, making decisions on what actions to take based on user requests or data input into the application. \n 3. The data access layer, where I'm going to be focusing, provides a layer of abstraction over the database. Its responsibility is to request data from the database and provide them in a usable form to the business logic layer, but also to take requests from the business logic layer and to appropriately store data in the database.\n\n to work with documents. An ORM is an Object-Relational Mapper library and handles mapping between relational data in a tabular database and objects in your application.\n\n## Why not an ODM?\n\nGood question! Many great ODMs have been developed for MongoDB. ODM is short for \"Object Document Mapper\" and describes a type of library that attempts to map between MongoDB documents and your application objects. Just within the Python ecosystem, there is MongoEngine, ODMantic, PyMODM, and more recently, Beanie and Bunnet. The last two are more or less the same, but Beanie is built on asyncio and Bunnet is synchronous. We're especially big fans of Beanie at MongoDB, and because it's built on Pydantic, it works especially well with FastAPI.\n\nOn the other hand, most ODMs are essentially solving the same problem \u2014 abstracting away MongoDB's powerful query language to make it easier to read and write, and modeling document schemas as objects so that data can be directly serialized and deserialized between the application and MongoDB.\n\nOnce your data model becomes relatively sophisticated, however, if you're implementing one or more patterns to improve the performance and scalability of your application, the way your data is stored is not necessarily the way you logically think about it within your application.\n\nOn top of that, if you're working with a very large dataset, then data migration may not be feasible, meaning that different subsets of your data will be stored in different ways! A good data access layer should be able to abstract over these differences so that your application doesn't need to be rewritten each time you evolve your schema for one reason or another.\n\nAm I just building another ODM? Well, yes, probably. I'm just a little reluctant to use the term because I think it comes along with some of the preconceptions I've mentioned here. If it is an ODM, it's one which will have a focus on the \u201cM.\u201d\n\nAnd partly, I just think it's a fun thing to build. It's an experiment. Let's see if it works!\n\n## Introducing DocBridge\n\nYou can check out the current library in the project's GitHub repo. 
At the time of writing, the README contains what could be described as a manifesto:\n\n- Managing large amounts of data in MongoDB while keeping a data schema flexible is challenging.\n- This ODM is not an active record implementation, mapping documents in the database directly into similar objects in code.\n- This ODM is designed to abstract underlying documents, mapping potentially multiple document schemata into a shared object representation.\nIt should also simplify the evolution of documents in the database, automatically migrating individual documents' schemas either on-read or on-write.\n- There should be \"escape hatches\" so that unforeseen mappings can be implemented, hiding away the implementation code behind hopefully reusable components.\n\n## Starting a New Framework\n\nI think that's enough waffle. Let's get started. \n\nIf you want to get a look at how this will all work once it all comes together, skip to the end, where I'll also show you how it can be used with PyMongo queries. For the moment, I'm going to dive right in and start implementing a class for wrapping BSON documents to make it easier to abstract away some of the details of the document structure. In later tutorials, I may start to modify the way queries are done, but at the moment, I just want to wrap individual documents. \n\nI want to define classes that encapsulate data from the database, so let's call that class `Document`. At the moment, I just need it to store away an underlying \"raw\" document, which PyMongo (and Motor) both provide as dict implementations:\n\n```python\nclass Document:\n def __init__(self, doc, *, strict=False):\n self._doc = doc\n self._strict = strict\n```\n\nI've defined two parameters that are stored away on the instance: `doc` and `strict`. The first will hold the underlying BSON document so that it can be accessed, and `strict` is a boolean flag I'll explain below. In this tutorial, I'm mostly ignoring details of using PyMongo or Motor to access MongoDB \u2014 I'm just working with BSON document data as a plain old dict.\n\nWhen a Document instance wraps a MongoDB document, if `strict` is `False`, then it will allow any field in the document to automatically be looked up as if it was a normal Python attribute of the Document instance that wraps it. If `strict` is `True`, then it won't allow this dynamic lookup.\n\nSo, if I have a MongoDB document that contains { 'name': 'Jones' }, then wrapping it with a Document will behave like this:\n\n```python\n>>> relaxed_doc = Document({ 'name': 'Jones' })\n>>> relaxed_doc.name\n\"Jones\"\n\n>>> strict_doc = Document({ 'name': 'Jones' }, strict=True)\n>>> strict_doc.name\nTraceback (most recent call last):\n File \"\", line 1, in \n File \".../docbridge/__init__.py\", line 33, in __getattr__\n raise AttributeError(\nAttributeError: 'Document' object has no attribute 'name'\n```\n\nThe class doesn't do this magic attribute lookup by itself, though! To get that behavior, I'll need to implement `__getattr__`. This is a \"magic\" or \"dunder\" method that is automatically called by Python when an attribute is requested that is not actually defined on the instance or the class (or any of the superclasses). 
As a fallback, Python will call `__getattr__` if your class implements it and provide the name of the attribute that's been requested.\n\n```python\ndef __getattr__(self, attr):\n if not self._strict:\n return self._docattr]\n else:\n raise AttributeError(\n f\"{self.__class__.__name__!r} object has no attribute {attr!r}\"\n )\n```\n\nThis implements the logic I've described above (although it differs slightly from the code in [the repository because there were a couple of bugs in that!).\n\nThis is a neat way to make a dictionary look like an object and allows document fields to be looked up as if they were attributes. It does currently require those attribute names to be exactly the same as the underlying fields, though, and it only works at the top level of the document. In order to make the encapsulation more powerful, I need to be able to configure how data is looked up on a per-field basis. First, let's handle how to map an attribute to a different field name.\n\n## Let's abstract field names\n\nThe first abstraction I'd like to implement is the ability to have a different field name in the BSON document to the one that's exposed by the Document object. Let's say I have a document like this:\n\n```javascript\n{\n \"cocktailName\": \"Old Fashioned\"\n}\n```\n\nThe field name uses camelCase instead of the more idiomatic snake_case (which would be \"cocktail_name\" instead of \"cocktailName\"). At this point, I could change the field name with a MongoDB query, but that's both not very sensible (because it's not that important) and potentially may be controversial with other teams using the same database that may be more used to using camelCase names. So let's add the ability to explicitly map from one attribute name to a different field name in the wrapped document.\n\nI'm going to do this using metaprogramming, but in this case, it doesn't require me to write a custom metaclass! Let's assume that I'm going to subclass `Document` to provide a specific mapping for cocktail recipe documents.\n\n```python\nclass Cocktail(Document):\n cocktail_name = Field(field_name=\"cocktailName\")\n```\n\nThis may look similar to some patterns you've seen used by other ODMs or with, say, a Django model. Under the hood, `Field` needs to implement the Descriptor Protocol so that we can intercept attribute lookup for `cocktail_name` on instances of the `Cocktail` class and return data contained in the underlying BSON document.\n\n## The Descriptor Protocol\n\nThe name sounds highly technical, but all it really means is that I'm going to implement a couple of methods on `Field` so that Python can treat it differently in two different ways:\n\n`__set_name__` is called by Python when the Field is attached to a class (in this case, the Cocktail class). It's called with, you guessed it, the name of the field \u2014 in this case, \"cocktail_name.\"\n`__get__` is called by Python whenever the attribute is looked up on a Cocktail instance. So in this case, if I had a Cocktail instance called `my_cocktail`, then accessing `cocktail.cocktail_name` will call Field.__get__() under the hood, providing the Field instance, and the class that the field is attached to as arguments. This allows you to return whatever you think should be returned by this attribute access \u2014 which is the underlying BSON document's \"cocktailName\" value.\n\nHere's my implementation of `Field`. 
I've simplified it from the implementation in GitHub, but this implements everything I've described above.\n\n```python\nclass Field:\n def __init__(self, field_name=None):\n \"\"\"\n Initialize a Field attribute, mapping to an underlying BSON field.\n\n field_name is the name of the underlying BSON field.\n If field_name is None (the default), use the attribute name for lookup in the doc.\n \"\"\"\n self.field_name = None\n\n def __set_name__(self, owner, name):\n \"\"\"\n Called by Python when this Field instance is attached to a class (the owner).\n \"\"\"\n self.name = name # this is the *attribute* name on the class.\n\n # If no field_name was provided, then default to using the attribute\n # name to look up the BSON field:\n if self.field_name is None:\n self.field_name = name\n\n def __get__(self, ob, cls):\n \"\"\"\n Called by Python when this attribute is looked up on an instance of\n the class it's attached to.\n \"\"\"\n try:\n # Look up the BSON field and return it:\n return ob._docself.field_name]\n except KeyError as ke:\n raise ValueError(\n f\"Attribute {self.name!r} is mapped to missing document property {self.field_name!r}.\"\n ) from ke\n```\n\nWith the code above, I've implemented a Field object, which can be attached to a Document class. It gives you the ability to allow field lookups on the underlying BSON document, with an optional mapping between the attribute name and the underlying field name. \n\n## Let's abstract document versioning\n\nA very common pattern in MongoDB is the [schema versioning pattern, which is very important if you want to maintain the evolvability of your data. (This is a term coined by Martin Kleppmann in his book, Designing Data Intensive Applications.)\n\nThe premise is that over time, your document schema will need to change, either for efficiency reasons or just because your requirements have changed. MongoDB allows you to store documents with different structures within a single collection so a changing schema doesn't require you to change all of your documents in one go \u2014 which can be infeasible with very large datasets anyway.\n\nInstead, the schema versioning pattern suggests that when your schema changes, as you update individual documents to the new structure, you update a field that specifies the schema version of each document.\n\nFor example, I might start with a document representing a person, like this:\n\n```javascript\n{\n\"name\": \"Mark Smith\",\n\"schema_version\": 1,\n}\n```\n\nBut eventually, I might realize that I need to break up the user's name:\n\n```javascript\n{\n \"full_name\": \"Mark Smith\"\n\"first_name\": \"Mark\",\n\"last_name\": \"Smith\",\n\"schema_version\": 2,\n}\n```\n\nIn this example, when I load a document from this collection, I won't know in advance whether it's version 1 or 2, so when I request the name of the person, it may be stored in \"name\" or \"full_name\" depending on whether the particular document has been upgraded or not.\n\nFor this, I've designed a different kind of \"Field\" descriptor, called a \"FallthroughField.\" This one will take a list of field names and will attempt to look them up in turn. 
In this way, I can avoid checking the \"schema_version\" field in the underlying document, but it will still work with both older and newer documents.\n\n`FallthroughField` looks like this:\n\n```python\nclass Fallthrough:\n def __init__(self, field_names: Sequencestr]) -> None:\n self.field_names = field_names\n\n def __get__(self, ob, cls):\n for field_name in self.field_names: # loop through the field names until one returns a value.\n try:\n return ob._doc[field_name]\n except KeyError:\n pass\n else:\n raise ValueError(\n f\"Attribute {self.name!r} references the field names {', '.join([repr(fn) for fn in self.field_names])} which are not present.\"\n )\n\n def __set_name__(self, owner, name):\n self.name = name\n```\n\nObviously, changing a field name is a relatively trivial schema change. I have big plans for how I can use descriptors to abstract away lots of complexity in the underlying document model.\n\n## What does it look like?\nThis tutorial has shown a lot of implementation code. Now, let me show you what it looks like to use this library in practice:\n\n```python\nimport os\nfrom docbridge import Document, Field, FallthroughField\nfrom pymongo import MongoClient\n\ncollection = (\n MongoClient(os.environ[\"MDB_URI\"])\n .get_database(\"docbridge_test\")\n .get_collection(\"people\")\n)\n\ncollection.delete_many({}) # Clean up any leftover documents.\n# Insert a couple of sample documents:\ncollection.insert_many(\n [\n {\n \"name\": \"Mark Smith\",\n \"schema_version\": 1,\n },\n {\n \"full_name\": \"Mark Smith\",\n \"first_name\": \"Mark\",\n \"last_name\": \"Smith\",\n \"schema_version\": 2,\n },\n ]\n)\n\n# Define a mapping for \"person\" documents:\nclass Person(Document):\n version = Field(\"schema_version\")\n name = FallthroughField(\n [\n \"name\", # v1\n \"full_name\", # v2\n ]\n )\n\n# This finds all the documents in the collection, but wraps each BSON document with a Person wrapper:\npeople = (Person(doc, None) for doc in collection.find())\nfor person in people:\n print(\n \"Name:\",\n person.name,\n ) # The name (or full_name) of the underlying document.\n print(\n \"Document version:\",\n person.version, # The schema_version field of the underlying document.\n )\n```\n\nIf you run this, it prints out the following:\n\n```\n$ python examples/why/simple_example.py\nName: Mark Smith\nDocument version: 1\nName: Mark Smith\nDocument version: 2\n```\n\n## Upcoming features\n\nI'll be the first to admit that this was a long tutorial given that effectively, I've so far just written an object wrapper around a dictionary that can conduct some simple name remapping. But it's a great start for some of the more advanced features that are upcoming: \n\n- The ability to automatically upgrade the data in a document when data is [calculated or otherwise written back to the database\n- Recursive class definitions to ensure that you have the full power of the framework no matter how nested your data is\n- The ability to transparently handle the subset and extended reference patterns to lazily load data from across documents and collections\n- More advanced name remapping to build Python objects that feel like Python objects, on documents that may have dramatically different conventions\n- Potentially some tools to help build complex queries against your data\n\nBut the _next_ thing to do is to take a step back from writing library code and do some housekeeping. 
I'm building a test framework to help test directly against MongoDB while having my test writes rolled back after every test, and I'm going to package and publish the docbridge library. You can check out the livestream recording where I attempt this, or you can wait for the accompanying tutorial, which will be written any day now.\n\nI'm streaming on the MongoDB YouTube channel nearly every Tuesday, at 2 p.m. GMT! Come join me \u2014 it's always helpful to have more people spot the bugs I'm creating as I write the code!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd0476a5d51e59056/6579eacbda6bef79e4d28370/application-architecture.png", "format": "md", "metadata": {"tags": ["MongoDB", "Python"], "pageDescription": "Let's build an Object-Document Mapper with some reasonably advanced Python!", "contentType": "Tutorial"}, "title": "Building a Python Data Access Layer", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/schema-performance-evaluation", "action": "created", "body": "# Schema Performance Evaluation in MongoDB Using PerformanceBench\n\nMongoDB is often incorrectly described as being schemaless. While it is true that MongoDB offers a level of flexibility when working with schema designs that traditional relational databases systems cannot match, as with any database system, the choice of schema design employed by an application built on top of MongoDB will still ultimately determine whether the application is able to meet its performance objectives and SLAs.\n\nFortunately, a number of design patterns (and corresponding anti-patterns) exist to help guide application developers design appropriate schemas for their MongoDB applications. A significant part of our role as developer advocates within the\u00a0global strategic account team at MongoDB involves educating developers new to MongoDB on the use of these design patterns and how they differ from those they may have previously used working with relational database systems. My colleague,\u00a0Daniel Coupal, contributed to a fantastic set of blog posts on the most common\u00a0patterns\u00a0and\u00a0anti-patterns\u00a0we see working with MongoDB.\n\nWhilst schema design patterns provide a great starting point for guiding our design process, for many applications, there may come a point where it becomes unclear which one of a set of alternative designs will best support the application\u2019s anticipated workloads. In these situations, a quote by Rear Admiral Grace Hopper that my manager, Rick Houlihan, made me aware of rings true:*\u201cOne accurate measurement is worth a thousand expert opinions.\u201d*\n\nIn this article, we will explore using\u00a0PerformanceBench, a Java framework application used by my team when evaluating candidate data models for a customer workload.\n\n## PerformanceBench\n\nPerformanceBench is a simple Java framework designed to allow developers to assess the relative performance of different database design patterns within MongoDB.\n\nPerformanceBench defines its functionality in terms of ***models*** (the design patterns being assessed) and ***measures*** (the operations to be measured against each model). As an example, a developer may wish to assess the relative performance of a design based on having data spread across multiple collections and accessed using **$lookup** (join) aggregations, versus one based on a hierarchical model where related documents are embedded within each other. 
In this scenario, the models might be respectively referred to as *multi-collection* and *hierarchical*, with the \"measures\" for each being CRUD operations: *Create*, *Read*, *Update*, and *Delete*.\n\nThe framework allows Java classes to be developed that implement a defined interface known as \u201c**SchemaTest**,\u201d with one class for each model to be tested. Each **SchemaTest** class implements the functionality to execute the measures defined for that model and returns, as output, an array of documents with the results of the execution of each measure \u2014 typically timing data for the measure execution, plus any metadata needed to later identify the parameters used for the specific execution. PerformanceBench stores these returned documents in a MongoDB collection for later analysis and evaluation.\n\nPerformanceBench is configured via a JSON format configuration file which contains an array of documents \u2014 one for each model being tested. Each model document in the configuration file contains a set of standard fields that are common across all models being tested, plus a set of custom fields specific to that model. Developers implementing **SchemaTest** model classes are free to include whatever custom parameters their testing of a specific model requires.\n\nWhen executed, PerformanceBench uses the data in the configuration file to identify the implementing class for each model to be tested and its associated measures. It then instructs the implementing classes to execute a specified number of iterations of each measure, optionally using multiple threads to simulate multi-user/multi-client environments.\n\nFull details of the **SchemaTest** interface and the format of the PerformanceBench JSON configuration file are provided in the GitHub readme file for the project.\n\nThe PerformanceBench source in Github was developed using IntelliJ IDEA 2022.2.3 with OpenJDK Runtime Environment Temurin-17.0.3+7 (build 17.0.3+7). \n\nThe compiled application has been run on Amazon Linux using OpenJDK 17.0.5 (2022-10-18 LTS - Corretto).\n\n## Designing SchemaTest model classes: factors to consider\nOther than the requirement to implement the SchemaTest interface, PerformanceBench gives model class developers wide latitude in designing their classes in whatever way is needed to meet the requirements of their test cases. However, there are some common considerations to take into account.\n\n### Understand the intention of the SchemaTest interface methods\nThe **SchemaTest** interface defines the following four methods:\n\n```java\npublic void initialize(JSONObject args);\n```\n\n```java\npublic String name();\n```\n\n```java\npublic void warmup(JSONObject args);\n```\n\n```java\npublic Document] executeMeasure(int opsToTest, String measure, JSONObject args);\n```\n\n```java\npublic void cleanup(JSONObject args);\n```\n\nThe **initialize** method is intended to allow implementing classes to carry out any necessary steps prior to measures being executed. This could, for example, include establishing and verifying connection to the database, building or preparing a test data set, and/or removing the results of prior execution runs. PerformanceBench calls initialize immediately after instantiating an instance of the class, but before any measures are executed.\n\nThe **name** method should return a string name for the implementing class. Class implementers can set the returned value to anything that makes sense for their use case. 
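For example, a hypothetical model class for a multi-collection design could simply return a fixed label (the label here is illustrative, not taken from the example code):

```java
@Override
public String name() {
    // Any string that identifies this model in log output will do
    return "APIMonitorMultiCollection";
}
```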
Currently, PerformanceBench only uses this method to add context to logging messages.\n\nThe **warmup** method is called by PerformanceBench prior to any iterations of any measure being executed. It is designed to allow model class implementers to attempt to create an environment that accurately reflects the expected state of the database in real-life. This could, for example, include carrying out queries designed to seed the MongoDB cache with an appropriate working set of data.\n\nThe **executeMeasure** method allows PerformanceBench to instruct a model-implementing class to execute a defined number of iterations of a specified measure. Typically, the method implementation will contain a case statement redirecting execution to the code for each defined measure. However, there is no requirement to implement in that way. The return from this method should be an array of BSON **Document** objects containing the results of each test iteration. Implementers are free to include whatever fields are necessary in these documents to support the metrics their use case requires.\n\nThe **cleanup** method is called by PerformanceBench after all iterations of all measures have been executed by the implementing class and is designed primarily to allow test data to be deleted or reset ahead of future test executions. However, the method can also be used to execute any other post test-run functionality necessary for a given use case. This may, for example, include calculating average/mean/percentile execution times for a test run, or for cleanly disconnecting from a database.\n\n### Execute measures using varying test data sets\nWhen assessing a given model, it is important to measure the model\u2019s performance against varying data sets. For example, the following can all impact the performance of different search and data manipulation operations:\n\n* Overall database and collection sizes \n* Individual document sizes\n* Available CPU and memory on the MongoDB servers being used\n* Total number of documents within individual collections.\n\nExecuting a sequence of measures using different test data sets can help to identify if there is a threshold beyond which one model may perform better than another. It may also help to identify the amount of memory needed to store the working set of data necessary for the workload being tested to avoid excessive paging. Model-implementing classes should ensure that they add sufficient metadata to the results documents they generate to allow the conditions of the test to be identified during later analysis.\n\n### Ensure queries are supported by appropriate indexes \nAs with most databases, query performance in MongoDB is dependent on appropriate indexes existing on collections being queried. Model class implementers should ensure any such indexes needed by their test cases either exist or are created during the call to their classes\u2019 **initialize** method. Index size compared with available cache memory should be considered, and often, finding the point at which performance is negatively impacted by paging of indexes is a major objective of PerformanceBench testing. \n\n### Remove variables such as network latency\nWith any testing regime, one goal should be to limit the number of variables potentially impacting performance discrepancies between test runs so differences in measured performance can be attributed with confidence to the intentional differences in test conditions. 
Items that come under this heading include network latency between the server running PerformanceBench and the MongoDB cluster servers. When working with MongoDB Atlas in a cloud environment, for example, specifying dedicated rather than shared servers can help avoid background load on the servers impacting performance, whilst deploying all servers in the same availability zone/region can reduce potential impacts from varying network latency. \n\n### Model multi-user environments realistically\nPerformanceBench allows measures to be executed concurrently in multiple threads to simulate a multi-user environment. However, if making use of this facility, put some thought into how to accurately model real user behavior. It is rare, for example, for users to execute a complex ad-hoc aggregation pipeline and immediately execute another on its completion. Your model class may therefore want to insert a delay between execution of measure iterations to attempt to model a realistic length of time you may expect between query requests from an individual user in a realistic production environment.\n\n## APIMonitor: an example PerformanceBench model implementation\nThe PerformaceBench GitHub repository includes example model class implementations for a hypothetical application designed to report on success and failure rates of calls to a set of APIs monitored by observability software. \n\nData for the application is stored in two document types in two different collections. \n\nThe **APIDetails** collection contains one document for each monitored API with metadata about that API:\n\n```json\n{\n \"_id\": \"api#9\",\n \"apiDetails\": {\n \"appname\": \"api#9\",\n \"platform\": \"Linux\",\n \"language\": {\n \"name\": \"Java\",\n \"version\": \"11.8.202\"\n },\n \"techStack\": {\n \"name\": \"Springboot\",\n \"version\": \"UNCATEGORIZED\"\n },\n \"environment\": \"PROD\"\n },\n \"deployments\": {\n \"region\": \"UK\",\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1669164599000\"\n }\n }\n }\n}\n```\n\nThe second collection, **APIMetrics**, is designed to represent the output from monitoring software with one document generated for each API at 15-minute intervals, giving the total number of calls to the API, the number that were successful, and the number that failed:\n\n```json\n{\n \"_id\": \"api#1#S#2\",\n \"appname\": \"api#1\",\n \"creationDate\": {\n \"$date\": {\n \"$numberLong\": \"1666909520000\"\n }\n },\n \"transactionVolume\": 54682,\n \"errorCount\": 33302,\n \"successCount\": 21380,\n \"region\": \"TK\",\n \"year\": 2022,\n \"monthOfYear\": 10,\n \"dayOfMonth\": 27,\n \"dayOfYear\": 300\n}\n```\n\nThe documents include a deployment region value for each API (one of \u201cTokyo,\u201d \u201cHong Kong,\u201d \u201cIndia,\u201d or \u201cUK\u201d). The sample model classes in the repository are designed to compare the performance of options for running aggregation pipelines that calculate the total number of calls, the overall success rate, and the corresponding failure rate for all the APIs in a given region, for a given time period. \n\nFour approaches are evaluated:\n\n1. Carrying out an aggregation pipeline against the **APIDetails** collection that includes a **$lookup** stage to perform a join with and summarization of relevant data in the **APIMetrics** collection.\n2. 
Carrying out an initial query against the **APIDetails** collection to produce a list of the API ids for a given region and use that list as input to an **$in** clause as part of a **$match** stage in a separate aggregation pipeline against the APIMetrics collection to summarize the relevant monitoring data.\n3. A third approach that uses an equality clause on the region information in each document as part of the initial **$match** stage of a pipeline against the APIMetrics collection to summarize the relevant monitoring data. This approach is designed to test whether an equality match against a single value performs better than one using an **$in** clause with a large number of possible values, as used in the second approach. Two measures are implemented in this model: one that queries the two collections sequentially using the standard MongoDB Java driver, and one that queries the two collections in parallel using the MongoDB [Java Reactive Streams driver.\n4. A fourth approach that adds a third collection called **APIPreCalc** that stores documents with pre-calculated total calls, total failed calls, and total successful calls for each API for each complete day, month, and year in the data set, with the aim of reducing the number of documents and size of calculations the aggregation pipeline has to execute. This model is an example implementation of the Computed schema design pattern and also uses the MongoDB Java Reactive Streams driver to query the collections in parallel.\n\nFor the fourth approach, the pre-computed documents in the **APIPreCalc** collection look like the following:\n\n```json\n{\n \"_id\": \"api#379#Y#2022\",\n \"transactionVolume\": 166912052,\n \"errorCount\": 84911780,\n \"successCount\": 82000272,\n \"region\": \"UK\",\n \"appname\": \"api#379\",\n \"metricsCount\": {\n \"$numberLong\": \"3358\"\n },\n \"year\": 2022,\n \"type\": \"year_precalc\",\n \"dateTag\": \"2022\"\n},\n{\n \"_id\": \"api#379#Y#2022#M#11\",\n \"transactionVolume\": 61494167,\n \"errorCount\": 31247475,\n \"successCount\": 30246692,\n \"region\": \"UK\",\n \"appname\": \"api#379\",\n \"metricsCount\": {\n \"$numberLong\": \"1270\"\n },\n \"year\": 2022,\n \"monthOfYear\": 11,\n \"type\": \"month_precalc\",\n \"dateTag\": \"2022-11\"\n},\n{\n \"_id\": \"api#379#Y#2022#M#11#D#19\",\n \"transactionVolume\": 4462897,\n \"errorCount\": 2286438,\n \"successCount\": 2176459,\n \"region\": \"UK\",\n \"appname\": \"api#379\",\n \"metricsCount\": {\n \"$numberLong\": \"96\"\n },\n \"year\": 2022,\n \"monthOfYear\": 11,\n \"dayOfMonth\": 19,\n \"type\": \"dom_precalc\",\n \"dateTag\": \"2022-11-19\"\n}\n```\n\nNote the **type** field in the documents used to differentiate between totals for a year, month, or day of month.\n\nFor the purposes of showing how PerformanceBench organizes models and measures, in the PerformanceBench GitHub repository, the first and second approaches are implemented as two separate **SchemaTest** model classes, each with a single measure, while the third and fourth approaches are implemented in a third **SchemaTest** model class with two measures \u2014 one for each approach.\n\n### APIMonitorLookupTest class\nThe first model, implementing the **$lookup approach**, is implemented in package **com.mongodb.devrel.pods.performancebench.models.apimonitor_lookup** in a class named **APIMonitorLookupTest**.\n\nThe aggregation pipeline implemented by this approach is:\n\n```json\n\n {\n $match: {\n \"deployments.region\": \"HK\",\n },\n },\n {\n $lookup: {\n from: 
\"APIMetrics\",\n let: {\n apiName: \"$apiDetails.appname\",\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $and: [\n {\n $eq: [\"$apiDetails.appname\", \"$$apiName\"],\n },\n {\n $gte: [\n \"$creationDate\", ISODate(\"2022-11-01\"),\n ],\n },\n ],\n },\n },\n },\n {\n $group: {\n _id: \"apiDetails.appName\",\n totalVolume: {\n $sum: \"$transactionVolume\",\n },\n totalError: {\n $sum: \"$errorCount\",\n },\n totalSuccess: {\n $sum: \"$successCount\",\n },\n },\n },\n {\n $project: {\n aggregatedResponse: {\n totalTransactionVolume: \"$totalVolume\",\n errorRate: {\n $cond: [\n {\n $eq: [\"$totalVolume\", 0],\n },\n 0,\n {\n $multiply: [\n {\n $divide: [\n \"$totalError\",\n \"$totalVolume\",\n ],\n },\n 100,\n ],\n },\n ],\n },\n successRate: {\n $cond: [\n {\n $eq: [\"$totalVolume\", 0],\n },\n 0,\n {\n $multiply: [\n {\n $divide: [\n \"$totalSuccess\",\n \"$totalVolume\",\n ],\n },\n 100,\n ],\n },\n ],\n },\n },\n _id: 0,\n },\n },\n ],\n as: \"results\",\n },\n },\n]\n```\n\nThe pipeline is executed against the **APIDetails** collection and is run once for each of the four geographical regions. The **$lookup** stage of the pipeline contains its own sub-pipeline which is executed against the **APIMetrics** collection once for each API belonging to each region.\n\nThis results in documents looking like the following being produced:\n\n```json\n{\n \"_id\": \"api#100\",\n \"apiDetails\": {\n \"appname\": \"api#100\",\n \"platform\": \"Linux\",\n \"language\": {\n \"name\": \"Java\",\n \"version\": \"11.8.202\"\n },\n \"techStack\": {\n \"name\": \"Springboot\",\n \"version\": \"UNCATEGORIZED\"\n },\n \"environment\": \"PROD\"\n },\n \"deployments\": [\n {\n \"region\": \"HK\",\n \"createdAt\": {\n \"$date\": {\n \"$numberLong\": \"1649399685000\"\n }\n }\n }\n ],\n \"results\": [\n {\n \"aggregatedResponse\": {\n \"totalTransactionVolume\": 43585837,\n \"errorRate\": 50.961542851637795,\n \"successRate\": 49.038457148362205\n }\n }\n ]\n}\n```\n\nOne document will be produced for each API in each region. The model implementation records the total time taken (in milliseconds) to generate all the documents for a given region and returns this in a results document to PerformanceBench. The results documents look like:\n\n```json\n{\n \"_id\": {\n \"$oid\": \"6389b6581a3cd92944057c6c\"\n },\n \"startTime\": {\n \"$numberLong\": \"1669962059685\"\n },\n \"duration\": {\n \"$numberLong\": \"1617\"\n },\n \"model\": \"APIMonitorLookupTest\",\n \"measure\": \"USEPIPELINE\",\n \"region\": \"HK\",\n \"baseDate\": {\n \"$date\": {\n \"$numberLong\": \"1667260800000\"\n }\n },\n \"apiCount\": 189,\n \"metricsCount\": 189,\n \"threads\": 3,\n \"iterations\": 1000,\n \"clusterTier\": \"M10\",\n \"endTime\": {\n \"$numberLong\": \"1669962061302\"\n }\n}\n```\n\nAs can be seen, as well as the region, start time, end time, and duration of the execution run, the result documents also include: \n\n* The model name and measure executed (in this case, **\u2018USEPIPELINE\u2019**).\n* The number of APIs (**apiCount**) found for this region, and number of APIs for which metrics were able to be generated (**metricsCount**). These numbers should always match and are included as a sanity check that data was generated correctly by the measure.\n* The number of **threads** and **iterations** used for the execution of the measure. PerformanceBench allows measures to be executed a defined number of times (iterations) to allow a good average to be determined. 
Executions can also be run in one or more concurrent threads to simulate multi-user/multi-client environments. In the above example, three threads each concurrently executed 1,000 iterations of the measure (3,000 total iterations).\n* The MongoDB Atlas cluster tier on which the measures were executed. This is simply used for tracking purposes when analyzing the results and could be set to any value by the class developer. In the sample class implementations, the value is set to match a corresponding value in the PerformanceBench configuration file. Importantly, it remains the user\u2019s responsibility to ensure the cluster tier being used matches what is written to the results documents.\n* **baseDate** indicates the date period for which monitoring data was summarized. For a given **baseDate**, the summarized period is always **baseDate** to the current date (inclusive). An earlier **baseDate** will therefore result in more data being summarized.\n\nWith a single measure defined for the model, and with three threads each carrying out 1,000 iterations of the measure, an array of 3,000 results documents will be returned by the model class to PerformanceBench. PerformanceBench then writes these documents to a collection for later analysis.\n\nTo support the aggregation pipeline, the model implementation creates the following indexes in its **initialize** method implementation:\n\n**APIDetails: {\"deployments.region\": 1}**\n**APIMetrics: {\"appname\": 1, \"creationDate\": 1}**\n\nThe model temporarily drops any existing indexes on the collections to avoid contention for memory cache space. The above indexes are subsequently dropped in the model\u2019s **cleanup** method implementation, and all original indexes restored.\n\n### APIMonitorMultiQueryTest class\nThe second model carries out an initial query against the **APIDetails** collection to produce a list of the API ids for a given region and then uses that list as input to an **$in** clause as part of a **$match** stage in an aggregation pipeline against the **APIMetrics** collection. It is implemented in package **com.mongodb.devrel.pods.performancebench.models.apimonitor_multiquery** in a class named **APIMonitorMultiQueryTest**.\n\nThe initial query, carried out against the **APIDetails** collection, looks like:\n\n```\ndb.APIDetails.find({\"deployments.region\": \"HK\"})\n```\n\nThis query is carried out for each of the four regions in turn and, from the returned documents, a list of the APIs belonging to each region is generated. 
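\n\nAs an illustration only (this is a sketch rather than the repository\u2019s actual code; the helper method and connection handling are assumed), the per-region list could be gathered with the standard MongoDB Java driver along these lines:\n\n```java\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport org.bson.Document;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport static com.mongodb.client.model.Filters.eq;\n\npublic class RegionApiListSketch {\n    // Returns the appname of every API deployed in the given region.\n    // \"uri\" is assumed to hold the connection string taken from the\n    // \"custom\" section of the PerformanceBench configuration file.\n    public static List<String> apiIdsForRegion(String uri, String region) {\n        try (MongoClient client = MongoClients.create(uri)) {\n            MongoCollection<Document> apiDetails = client\n                    .getDatabase(\"APIMonitor\")\n                    .getCollection(\"APIDetails\");\n            List<String> apiIds = new ArrayList<>();\n            for (Document api : apiDetails.find(eq(\"deployments.region\", region))) {\n                // appname lives in the embedded apiDetails sub-document\n                apiIds.add(api.get(\"apiDetails\", Document.class).getString(\"appname\"));\n            }\n            return apiIds;\n        }\n    }\n}\n```\n\n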
The generated list is then used as the input to a **$in** clause in the **$match** stage of the following aggregation pipeline run against the APIMetrics collection:\n\n```\n[\n {\n $match: {\n \"apiDetails.appname\": {$in: [\"api#1\", \"api#2\", \"api#3\"]},\n creationDate: {\n $gte: ISODate(\"2022-11-01\"),\n },\n },\n },\n {\n $group: {\n _id: \"$apiDetails.appname\",\n totalVolume: {\n $sum: \"$transactionVolume\",\n },\n totalError: {\n $sum: \"$errorCount\",\n },\n totalSuccess: {\n $sum: \"$successCount\",\n },\n },\n },\n {\n $project: {\n aggregatedResponse: {\n totalTransactionVolume: \"$totalVolume\",\n errorRate: {\n $cond: [\n {\n $eq: [\"$totalVolume\", 0],\n },\n 0,\n {\n $multiply: [\n {\n $divide: [\"$totalError\", \"$totalVolume\"],\n },\n 100,\n ],\n },\n ],\n },\n successRate: {\n $cond: [\n {\n $eq: [\"$totalVolume\", 0],\n },\n 0,\n {\n $multiply: [\n {\n $divide: [\n \"$totalSuccess\",\n \"$totalVolume\",\n ],\n },\n 100,\n ],\n },\n ],\n },\n },\n },\n },\n]\n```\n\nThis pipeline is essentially the same as the sub-pipeline in the **$lookup** stage of the aggregation used by the **APIMonitorLookupTest** class, the main difference being that this pipeline returns the summary documents for all APIs in a region using a single execution, whereas the sub-pipeline is executed once per API as part of the **$lookup** stage in the **APIMonitorLookupTest** class. Note that the pipeline shown above has only three API values listed in its **$in** clause. In reality, the list generated during testing was between two and three hundred items long for each region.\n\nWhen the documents are returned from the pipeline, they are merged with the corresponding API details documents retrieved from the initial query to create a set of documents equivalent to those created by the pipeline in the **APIMonitorLookupTest** class. From there, the model implementation creates the same summary documents to be returned to and saved by PerformanceBench.\n\nTo support the pipeline, the model implementation creates the following indexes in its **initialize** method implementation:\n\n**APIDetails: {\"deployments.region\": 1}**\n**APIMetrics: {\"appname\": 1, \"creationDate\": 1}**\n\nAs with the **APIMonitorLookupTest** class, this model temporarily drops any existing indexes on the collections to avoid contention for memory cache space. The above indexes are subsequently dropped in the model\u2019s **cleanup** method implementation, and all original indexes restored.\n\n### APIMonitorRegionTest class\nThe third model class, **com.mongodb.devrel.pods.performancebench.models.apimonitor_regionquery.APIMonitorRegionTest**, implements two measures, both similar to the measure in **APIMonitorMultiQueryTest**, but where the **$in** clause in the **$match** stage is replaced with a equivalency check on the **\u201dregion\u201d** field. The purpose of these measures is to assess whether an equivalency check against the region field provides any performance benefit versus an **$in** clause where the list of matching values could be several hundred items long. 
The difference between the two measures in this model, named **\u201cQUERYSYNC\u201d** and **\u201cQUERYASYNC\u201d** respectively, is that the first performs the initial find query against the **APIDetails** collection, and then the aggregation pipeline against the **APIMetrics** collection in sequence, whilst the second model uses the [Reactive Streams MongoDB Driver to carry out the two operations in parallel to assess whether that provides any performance benefit.\n\nWith these changes, the match stage of the aggregation pipeline for this model looks like:\n\n```json\n {\n $match: {\n \"deployments.region\": \"HK\",\n creationDate: {\n $gte: ISODate(\"2022-11-01\"),\n },\n },\n }\n```\n\nIn all other regards, the pipeline and the subsequent processes for creating summary documents to pass back to PerformanceBench are the same as those used in **APIMonitorMultiQueryTest**.\n\n### APIMonitorPrecomputeTest class\nThe fourth model class, **com.mongodb.devrel.pods.performancebench.models.apimonitor_precompute.APIMonitorPrecomputeTest**, implements a single measure named **\u201cPRECOMPUTE\u201d**. This measure makes use of a third collection named **APIPreCalc** that contains precalculated summary data for each API for each complete day, month, and year in the data set. The intention with this measure is to assess what, if any, performance gain can be obtained by reducing the number of documents and resulting calculations the aggregation pipeline is required to carry out.\n\nThe measure calculates complete days, months, and years between the **baseDate** specified in the configuration file, and the current date. The total number of calls, failed calls and successful calls for each API for each complete day, month, or year is then retrieved from **APIPreCalc**. 
A **$unionWith** stage in the pipeline is then used to combine these values with the metrics for the partial days at either end of the period (the basedate and current date) retrieved from **APIMetrics**.\n\nThe pipeline used for this measure looks like:\n\n```json\n\n {\n \"$match\": {\n \"region\": \"UK\",\n \"dateTag\": {\n \"$in\": [\n \"2022-12\",\n \"2022-11-2\",\n \"2022-11-3\",\n \"2022-11-4\",\n \"2022-11-5\",\n \"2022-11-6\",\n \"2022-11-7\",\n \"2022-11-8\",\n \"2022-11-9\",\n \"2022-11-10\"\n ]\n }\n }\n },\n {\n \"$unionWith\": {\n \"coll\": \"APIMetrics\",\n \"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\n \"$or\": [\n {\n \"$and\": [\n {\n \"$eq\": [\n \"$region\",\n \"UK\"\n ]\n },\n {\n \"$eq\": [\n \"$year\", 2022\n ]\n },\n {\n \"$eq\": [\n \"$dayOfYear\",\n 305\n ]\n },\n {\n \"$gte\": [\n \"$creationDate\",\n {\n \"$date\": \"2022-11-01T00:00:00Z\"\n }\n ]\n }\n ]\n },\n {\n \"$and\": [\n {\n \"$eq\": [\n \"$region\",\n \"UK\"\n ]\n },\n {\n \"$eq\": [\n \"$year\",\n 2022\n ]\n },\n {\n \"$eq\": [\n \"$dayOfYear\",\n 315\n ]\n },\n {\n \"$lte\": [\n \"$creationDate\",\n {\n \"$date\": \"2022-11-11T01:00:44.774Z\"\n }\n ]\n }\n ]\n }\n ]\n }\n }\n }\n ]\n }\n },\n {\n \"$group\": {\n \u2026\n }\n },\n {\n \"$project\": {\n \u2026\n }\n }\n ]\n```\n\nThe **$group** and **$project** stages are identical to the prior models and are not shown above.\n\nTo support the queries and carried out by the pipeline, the model creates the following indexes in its **initialize** method implementation:\n\n**APIDetails: {\"deployments.region\": 1}**\n**APIMetrics: {\"region\": 1, \"year\": 1, \"dayOfYear\": 1, \"creationDate\": 1}**\n**APIPreCalc: {\"region\": 1, \"dateTag\": 1}**\n\n### Controlling PerformanceBench execution \u2014 config.json\nThe execution of PerformanceBench is controlled by a configuration file in JSON format. The name and path to this file is passed as a command line argument using the **-c** flag. 
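\n\nFor example, assuming the project has been built into a runnable jar (the jar name below is illustrative; see the repository\u2019s readme for the exact build and run instructions), an invocation might look like:\n\n```\njava -jar PerformanceBench.jar -c config.json\n```\n\n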
In the PerformanceBench GitHub repository, the file is called **config.json**:\n\n```json\n{\n \"models\": [\n {\n \"namespace\": \"com.mongodb.devrel.pods.performancebench.models.apimonitor_lookup\",\n \"className\": \"APIMonitorLookupTest\",\n \"measures\": [\"USEPIPELINE\"],\n \"threads\": 2,\n \"iterations\": 500,\n \"resultsuri\": \"mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority\",\n \"resultsCollectionName\": \"apimonitor_results\",\n \"resultsDBName\": \"performancebenchresults\",\n \"custom\": {\n \"uri\": \"mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority\",\n \"apiCollectionName\": \"APIDetails\",\n \"metricsCollectionName\": \"APIMetrics\",\n \"precomputeCollectionName\": \"APIPreCalc\",\n \"dbname\": \"APIMonitor\",\n \"regions\": [\"UK\", \"TK\", \"HK\", \"IN\" ],\n \"baseDate\": \"2022-11-01T00:00:00.000Z\",\n \"clusterTier\": \"M40\",\n \"rebuildData\": false,\n \"apiCount\": 1000\n }\n },\n {\n \"namespace\": \"com.mongodb.devrel.pods.performancebench.models.apimonitor_multiquery\",\n \"className\": \"APIMonitorMultiQueryTest\",\n \"measures\": [\"USEINQUERY\"],\n \"threads\": 2,\n \"iterations\": 500,\n \"resultsuri\": \"mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority\",\n \"resultsCollectionName\": \"apimonitor_results\",\n \"resultsDBName\": \"performancebenchresults\",\n \"custom\": {\n \"uri\": \"mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority\",\n \"apiCollectionName\": \"APIDetails\",\n \"metricsCollectionName\": \"APIMetrics\",\n \"precomputeCollectionName\": \"APIPreCalc\",\n \"dbname\": \"APIMonitor\",\n \"regions\": [\"UK\", \"TK\", \"HK\", \"IN\" ],\n \"baseDate\": \"2022-11-01T00:00:00.000Z\",\n \"clusterTier\": \"M40\",\n \"rebuildData\": false,\n \"apiCount\": 1000\n }\n },\n {\n \"namespace\": \"com.mongodb.devrel.pods.performancebench.models.apimonitor_regionquery\",\n \"className\": \"APIMonitorRegionQueryTest\",\n \"measures\": [\"QUERYSYNC\",\"QUERYASYNC\"],\n \"threads\": 2,\n \"iterations\": 500,\n \"resultsuri\": \"mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority\",\n \"resultsCollectionName\": \"apimonitor_results\",\n \"resultsDBName\": \"performancebenchresults\",\n \"custom\": {\n \"uri\": \"mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority\",\n \"apiCollectionName\": \"APIDetails\",\n \"metricsCollectionName\": \"APIMetrics\",\n \"precomputeCollectionName\": \"APIPreCalc\",\n \"dbname\": \"APIMonitor\",\n \"regions\": [\"UK\", \"TK\", \"HK\", \"IN\" ],\n \"baseDate\": \"2022-11-01T00:00:00.000Z\",\n \"clusterTier\": \"M40\",\n \"rebuildData\": false,\n \"apiCount\": 1000\n }\n },\n {\n \"namespace\": \"com.mongodb.devrel.pods.performancebench.models.apimonitor_precompute\",\n \"className\": \"APIMonitorPrecomputeTest\",\n \"measures\": [\"PRECOMPUTE\"],\n \"threads\": 2,\n \"iterations\": 500,\n \"resultsuri\": \"mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority\",\n \"resultsCollectionName\": \"apimonitor_results\",\n \"resultsDBName\": \"performancebenchresults\",\n \"custom\": {\n \"uri\": \"mongodb+srv://myuser:mypass@my_atlas_instance.mongodb.net/?retryWrites=true&w=majority\",\n \"apiCollectionName\": \"APIDetails\",\n \"metricsCollectionName\": \"APIMetrics\",\n \"precomputeCollectionName\": \"APIPreCalc\",\n \"dbname\": \"APIMonitor\",\n \"regions\": [\"UK\", \"TK\", \"HK\", 
\"IN\" ],\n \"baseDate\": \"2022-11-01T00:00:00.000Z\",\n \"clusterTier\": \"M40\",\n \"rebuildData\": false,\n \"apiCount\": 1000\n }\n }\n ]\n}\n```\n\nThe document contains a single top-level field called \u201cmodels,\u201d the value of which is an array of sub-documents, each of which describes a model and its corresponding measures to be executed. PerformanceBench attempts to execute the models and measures in the order they appear in the file.\n\nFor each model, the configuration file defines the Java class implementing the model and its measures, the number of concurrent threads there should be executing each measure, the number of iterations of each measure each thread should execute, an array listing the names of the measures to be executed, and the connection URI, database name, and collection name where PerformanceBench should write results documents.\n\nAdditionally, there is a \u201ccustom\u201d sub-document for each model where model class implementers can add any parameters specific to their model implementations. In the case of the **APIMonitor** class implementations, this includes the connection URI, database name and collection names where the test data resides, an array of acronyms for the geographic regions, the base date from which monitoring data should be summarized (summaries are based on values for **baseDate** to the current date, inclusive), and the Atlas cluster tier on which the tests were run (this is included in the results documents to allow comparison of performance of different tiers). The custom parameters also include a flag indicating if the test data set should be rebuilt before any of the measures for a model are executed and, if so, how many APIs data should be built for. The data rebuild code included in the sample model implementations builds data for the given number of APIs with the data for each API starting from a random date within the last 90 days.\n\n### Summarizing results of the APIMonitor tests\nBy having PerformanceBench save the results of each test to a MongoDB collection, we are able to carry out analysis of the results in a variety of ways. The [MongoDB aggregation framework includes over 20 different available stages and over 150 available expressions allowing enormous flexibility in performing analysis, and if you are using MongoDB Atlas, you have access to Atlas Charts, allowing you to quickly and easily visually display and analyze the data in a variety of chart formats.\n\nFor analyzing larger data sets, the MongoDB driver for Python or Connector for Apache Spark could be considered. \n\nThe output from one simulated test run generated the following results:\n #### Test setup\n \n\nNote that the AWS EC2 server used to run PerformanceBench was located within the same AWS availability zone as the MongoDB Atlas cluster in order to minimize variations in measurements due to variable network latency.\n\nThe above conditions resulted in a total of 20,000 results documents being written by PerformanceBench to MongoDB (five measures, executed 500 times for each of four geographical regions, by two threads). 
Atlas Charts was used to display the results:\n\nA further aggregation pipeline was then run on the results to find, for each measure, run by each model:\n\n* The shortest iteration execution time\n* The longest iteration execution time\n* The mean iteration execution time\n* The 95 percentile execution time\n* The number of iterations completed per second.\n\nThe pipeline used was:\n\n```json\n\n {\n $group: {\n _id: {\n model: \"$model\",\n measure: \"$measure\",\n region: \"$region\",\n baseDate: \"$baseDate\",\n threads: \"$threads\",\n iterations: \"$iterations\",\n clusterTier: \"$clusterTier\",\n },\n max: {\n $max: \"$duration\",\n },\n min: {\n $min: \"$duration\",\n },\n mean: {\n $avg: \"$duration\",\n },\n stddev: {\n $stdDevPop: \"$duration\",\n }\n },\n },\n {\n $project: {\n model: \"$_id.model\",\n measure: \"$_id.measure\",\n region: \"$_id.region\",\n baseDate: \"$_id.baseDate\",\n threads: \"$_id.threads\",\n iterations: \"$_id.iterations\",\n clusterTier: \"$_id.clusterTier\",\n max: 1,\n min: 1,\n mean: {\n $round: [\"$mean\"],\n },\n \"95th_Centile\": {\n $round: [\n {\n $sum: [\n \"$mean\",\n {\n $multiply: [\"$stddev\", 2],\n },\n ],\n },\n ],\n },\n throuput: {\n $round: [\n {\n $divide: [\n \"$count\",\n {\n $divide: [\n {\n $subtract: [\"$end\", \"$start\"],\n },\n 1000,\n ],\n },\n ],\n },\n 2,\n ],\n },\n _id: 0,\n },\n },\n]\n```\n\nThis produced the following results:\n\n![Table of summary results \n\nAs can be seen, the pipelines using the **$lookup** stage and the equality searches on the **region** values in APIMetrics performed significantly slower than the other approaches. In the case of the **$lookup** based pipeline, this was most likely because of the overhead of marshaling one call to the sub-pipeline within the lookup for every API (1,000 total calls to the sub-pipeline for each iteration), rather than one call per geographic region (four calls total for each iteration) in the other approaches. With two threads each performing 500 iterations of each measure, this would mean marshaling 1,000,000 calls to the sub-pipeline with the **$lookup** approach as opposed to 4,000 calls for the other measures. \n\nIf verification of the results indicated they were accurate, this would be a good indicator that an approach that avoided using a **$lookup** aggregation stage would provide better query performance for this particular use case. In the case of the pipelines with the equality clause on the region field (**QUERYSYNC** and **QUERYASYNC**), their performance was likely impacted by having to sort a large number of documents by **APIID** in the **$group** stage of their pipeline. In contrast, the pipeline using the **$in** clause (**USEINQUERY**) utilized an index on the **APPID** field, meaning documents were returned to the pipeline already sorted by **APPID** \u2014 this likely gave it enough of an advantage during the **$group** stage of the pipeline for it to consistently complete the stage faster. Further investigation and refinement of the indexes used by the **QUERYSYNC** and **QUERYASYNC** measures could reduce their performance deficit.\n\nIt\u2019s also noticeable that the precompute model was between 25 and 40 times faster than the other approaches. By using the precomputed values for each API, the number of documents the pipeline needed to aggregate was reduced from as much as 96,000, to, at most, 1,000 for each full day being measured, and from as much as 2,976,000 to, at most, 1,000 for each complete month being measured. 
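\n\nThese figures follow from the 15-minute sampling interval and the 1,000 APIs configured for the test run:\n\n```\nAPIMetrics:  4 documents/hour x 24 hours   =        96 documents per API per day\n             96 x 1,000 APIs               =    96,000 documents per full day\n             96 x 31 days x 1,000 APIs     = 2,976,000 documents per full month\nAPIPreCalc:  1 document per API per period =     1,000 documents at most\n```\n\n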
This has a significant impact on throughput and underlies the value of the computed schema design pattern. \n\n## Final thoughts\nPerformanceBench provides a quick way to organize, create, execute, and record the results of tests to measure how different schema designs perform when executing different workloads. However, it is important to remember that the accuracy of the results will depend on how well the implemented model classes simulate the real life access patterns and workloads they are intended to model. \n\nEnsuring the models accurately represent the workloads and schemas being measured is the job of the implementing developers, and PerformanceBench can only provide the framework for executing those models. It cannot improve or provide any guarantee that the results it records are an accurate prediction of an application\u2019s real world performance.\n\n**Finally, it is important to understand that PerformanceBench, while free to download and use, is not in any way endorsed or supported by MongoDB.**\n\nThe repository for PerformanceBench can be found on Github. The project was created in IntelliJ IDEA using Gradle.\n", "format": "md", "metadata": {"tags": ["MongoDB", "Java"], "pageDescription": "Learn how to use PerformanceBench, a Java-based framework application, to carry out empirical performance comparisons of schema design patterns in MongoDB.", "contentType": "Tutorial"}, "title": "Schema Performance Evaluation in MongoDB Using PerformanceBench", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/deploy-mongodb-atlas-terraform-aws", "action": "created", "body": "# How to Deploy MongoDB Atlas with Terraform on AWS\n\n**MongoDB Atlas** is the multi-cloud developer data platform that provides an integrated suite of cloud database and data services. We help to accelerate and simplify how you build resilient and performant global applications on the cloud provider of your choice.\n\n**HashiCorp Terraform** is an Infrastructure-as-Code (IaC) tool that lets you define cloud resources in human-readable configuration files that you can version, reuse, and share. Hence, we built the **Terraform MongoDB Atlas Provider** that automates infrastructure deployments by making it easy to provision, manage, and control Atlas configurations as code on any of the three major cloud providers.\n\nIn addition, teams can also choose to deploy MongoDB Atlas through the MongoDB Atlas CLI (Command-Line Interface), Atlas Administration API, AWS CloudFormation, and as always, with the Atlas UI (User Interface).\n\nIn this blog post, we will learn how to deploy MongoDB Atlas hosted on AWS using Terraform. In addition, we will explore how to use Private Endpoints with AWS Private Link to provide increased security with private connectivity for your MongoDB Atlas cluster without exposing traffic to the public internet.\n\nWe designed this Quickstart for beginners with no experience with MongoDB Atlas, HashiCorp Terraform, or AWS and seeking to set up their first environment. Feel free to access all source code described below from this\u00a0GitHub repo.\n\nLet\u2019s get started:\n\n## Step 1: Create a MongoDB Atlas account\n\nSign up for a free MongoDB Atlas account, verify your email address, and log into your new account.\n\nAlready have an AWS account? 
Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\n## Step 2: Generate MongoDB Atlas API access keys\n\nOnce you have an account created and are logged into MongoDB Atlas, you will need to generate an API key to authenticate the Terraform\u00a0MongoDB Atlas Provider.\n\nGo to the top of the Atlas UI, click the **Gear Icon** to the right of the organization name you created, click **Access Manager** in the lefthand menu bar, click the **API Keys** tab, and then click the green **Create API Key** box.\n\nEnter a description for the API key that will help you remember what it\u2019s being used for \u2014 for example \u201cTerraform API Key.\u201d Next, you\u2019ll select the appropriate permission for what you want to accomplish with Terraform. Both the Organization Owner and Organization Project Creator roles (see role descriptions below) will provide access to complete this task, but by using the principle of least privilege, let\u2019s select the Organization Project Creator role in the dropdown menu and click Next.\n\nMake sure to copy your private key and store it in a secure location. After you leave this page, your full private key will\u00a0**not**be accessible.\n\n## Step 3: Add API Key Access List entry\n\nMongoDB Atlas API keys have\u00a0specific endpoints\u00a0that require an API Key Access List. Creating an API Key Access List ensures that API calls must originate from IPs or CIDR ranges given access. As a good refresher,\u00a0learn more about cloud networking.\n\nOn the same page, scroll down and click\u00a0**Add Access List Entry**. If you are unsure of the IP address that you are running Terraform on (and you are performing this step from that machine), simply click\u00a0**Use Current IP Address**\u00a0and\u00a0**Save**. Another option is to open up your IP Access List to all, but this comes with significant potential risk. To do this, you can add the following two CIDRs:\u00a0**0.0.0.0/1** and\u00a0**128.0.0.0/1**. These entries will open your IP Access List to at most 4,294,967,296 (or 2^32) IPv4 addresses and should be used with caution.\n\n## Step 4: Set up billing method\n\nGo to the lefthand menu bar and click\u00a0**Billing**\u00a0and then\u00a0**Add Payment Method**. Follow the steps to ensure there is a valid payment method for your organization. Note when creating a free (forever) or M0 cluster tier, you can skip this step.\n\n## Step 5: Install Terraform\n\nGo to the official HashiCorp Terraform\u00a0downloads\u00a0page and follow the instructions to set up Terraform on the platform of your choice. For the purposes of this demo, we will be using an Ubuntu/Debian environment.\n\n## Step 6: Defining the MongoDB Atlas Provider with environment variables\n\nWe will need to configure the MongoDB Atlas Provider using the MongoDB Atlas API Key you generated earlier (Step 2). We will be securely storing these secrets as\u00a0Environment Variables.\n\nFirst, go to the terminal window and create Environment Variables with the below commands. This prevents you from having to hard-code secrets directly into Terraform config files (which is not recommended):\n\n```\nexport MONGODB_ATLAS_PUBLIC_KEY=\"\"\nexport MONGODB_ATLAS_PRIVATE_KEY=\"\"\n```\n\nNext, create in an empty directory with an empty file called **provider.tf**. Here we will input the following code to define the MongoDB Atlas Provider. 
This will automatically grab the most current version of the Terraform MongoDB Atlas Provider.\n\n```\n# Define the MongoDB Atlas Provider\nterraform {\n required_providers {\n mongodbatlas = {\n source = \"mongodb/mongodbatlas\"\n }\n }\n required_version = \">= 0.13\"\n}\n```\n\n## Step 7: Creating variables.tf, terraform.tfvars, and main.tf files\n\nWe will now create a **variables.tf** file to declare all the Terraform variables used as part of this exercise and all of which are of type string. Next, we\u2019ll define values (i.e. our secrets) for each of these variables in the **terraform.tfvars** file. Note as with most secrets, best practice is not to upload them (or the **terraform.tfvars** file itself) to public repos.\n\n```\n# Atlas Organization ID \nvariable \"atlas_org_id\" {\n type = string\n description = \"Atlas Organization ID\"\n}\n# Atlas Project Name\nvariable \"atlas_project_name\" {\n type = string\n description = \"Atlas Project Name\"\n}\n\n# Atlas Project Environment\nvariable \"environment\" {\n type = string\n description = \"The environment to be built\"\n}\n\n# Cluster Instance Size Name \nvariable \"cluster_instance_size_name\" {\n type = string\n description = \"Cluster instance size name\"\n}\n\n# Cloud Provider to Host Atlas Cluster\nvariable \"cloud_provider\" {\n type = string\n description = \"AWS or GCP or Azure\"\n}\n\n# Atlas Region\nvariable \"atlas_region\" {\n type = string\n description = \"Atlas region where resources will be created\"\n}\n\n# MongoDB Version \nvariable \"mongodb_version\" {\n type = string\n description = \"MongoDB Version\"\n}\n\n# IP Address Access\nvariable \"ip_address\" {\n type = string\n description = \"IP address used to access Atlas cluster\"\n}\n```\n\nThe example below specifies to use the most current MongoDB version (as of this writing), which is 6.0, a M10 cluster tier which is great for a robust development environment and will be deployed on AWS in the US\\_WEST\\_2 Atlas region. For specific details about all the available options besides M10 and US\\_WEST\\_2, please see the documentation.\n\n```\natlas_org_id = \"\"\natlas_project_name = \"myFirstProject\"\nenvironment = \"dev\"\ncluster_instance_size_name = \"M10\"\ncloud_provider = \"AWS\"\natlas_region = \"US_WEST_2\"\nmongodb_version = \"6.0\"\nip_address = \"\"\n```\n\nNext, create a **main.tf** file, which we will populate together to create the minimum required resources to create and access your cluster: a MongoDB Atlas Project (Step 8), Database User/Password (Step 9), IP Access List (Step 10), and of course, the MongoDB Atlas Cluster itself (Step 11). We will then walk through how to create Terraform Outputs (Step 12) so you can access your Atlas cluster and then create a Private Endpoint with AWS PrivateLink (Step 13). If you are already familiar with any of these steps, feel free to skip ahead.\n\nNote: As infrastructure resources get created, modified, or destroyed, several more files will be generated in your directory by Terraform (for example the **terraform.tfstate** file). It is best practice not to modify these additional files directly unless you know what you are doing.\n\n## Step 8: Create MongoDB Atlas project\n\nMongoDB Atlas Projects helps to organize and provide granular access controls to our resources inside our MongoDB Atlas Organization. 
Note MongoDB Atlas \u201cGroups\u201d and \u201cProjects\u201d are synonymous terms.\n\nTo create a Project using Terraform, we will need the **MongoDB Atlas Organization ID** with at least the Organization Project Creator role (defined when we created the MongoDB Atlas Provider API Keys in Step 2).\n\nTo get this information, simply click on **Settings** on the lefthand menu bar in the Atlas UI and click the copy icon next to Organization ID. You can now paste this information as the atlas\\_org\\_id in your **terraform.tfvars** file.\n\nNext in the **main.tf** file, we will use the resource **mongodbatlas\\_project** from the Terraform MongoDB Atlas Provider to create our Project. To do this, simply input:\n\n```\n# Create a Project\nresource \"mongodbatlas_project\" \"atlas-project\" {\n org_id = var.atlas_org_id\n name = var.atlas_project_name\n}\n```\n\n## Step 9: Create MongoDB Atlas user/password\n\nTo authenticate a client to MongoDB, like the MongoDB Shell or your application code using a\u00a0MongoDB Driver\u00a0(officially supported in Python, Node.js, Go, Java, C#, C++, Rust, and several others), you must add a corresponding Database User to your MongoDB Atlas Project. See the\u00a0documentation\u00a0for more information on available user roles so you can customize the user\u2019s RBAC (Role Based Access Control) as per your team\u2019s needs.\n\nFor now, simply input the following code as part of the next few lines in the **main.tf** file to create a Database User with a random 16-character password. This will use the resource\u00a0**mongodbatlas_database_user**\u00a0from the Terraform MongoDB Atlas Provider. The database user_password is a sensitive secret, so to access it, you will need to input the \u201c**terraform output -json user_password**\u201d command in your terminal window after our deployment is complete to reveal.\n\n```\n# Create a Database User\nresource \"mongodbatlas_database_user\" \"db-user\" {\n username = \"user-1\"\n password = random_password.db-user-password.result\n project_id = mongodbatlas_project.atlas-project.id\n auth_database_name = \"admin\"\n roles {\n role_name = \"readWrite\"\n database_name = \"${var.atlas_project_name}-db\"\n }\n}\n\n# Create a Database Password\nresource \"random_password\" \"db-user-password\" {\n length = 16\n special = true\n override_special = \"_%@\"\n}\n```\n\n## Step 10: Create IP access list\n\nNext, we will create the IP Access List by inputting the following into your\u00a0**main.tf** file. Be sure to lookup the IP address (or CIDR range) of the machine where you\u2019ll be connecting to your MongoDB Atlas cluster from and paste it into the\u00a0**terraform.tfvars** file (as shown in code block in Step 7). This will use the resource\u00a0**mongodbatlas_project_ip_access_list**\u00a0from the Terraform MongoDB Atlas Provider.\n\n```\n# Create Database IP Access List \nresource \"mongodbatlas_project_ip_access_list\" \"ip\" {\n project_id = mongodbatlas_project.atlas-project.id\n ip_address = var.ip_address\n}\n```\n\n## Step 11: Create MongoDB Atlas cluster\n\nWe will now use the **mongodbatlas_advanced_cluster** resource to create a MongoDB Atlas Cluster. 
With this resource, you can not only create a deployment, but you can manage it over its lifecycle, scaling compute and storage independently, enabling cloud backups, and creating analytics nodes.\n\nIn this example, we group three database servers together to create a\u00a0replica set\u00a0with a primary server and two secondary replicas duplicating the primary's data. This architecture is primarily designed with high availability in mind and can automatically handle failover if one of the servers goes down \u2014 and recover automatically when it comes back online. We call all these nodes\u00a0*electable*\u00a0because an election is held between them to work out which one is primary.\n\nWe will also set the optional\u00a0*backup_enabled* flag to true. This provides increased data resiliency by enabling localized backup storage using the native snapshot functionality of the cluster's cloud service provider (see\u00a0documentation).\n\nLastly, we create one\u00a0*analytics*\u00a0node. Analytics nodes are read-only nodes that can exclusively be used to execute database queries on. That means that this analytics workload is isolated to this node only so operational performance isn't impacted. This makes analytic nodes ideal to run longer and more computationally intensive analytics queries on without impacting your replica set performance (see\u00a0documentation).\n\n```\n# Create an Atlas Advanced Cluster \nresource \"mongodbatlas_advanced_cluster\" \"atlas-cluster\" {\n project_id = mongodbatlas_project.atlas-project.id\n name = \"${var.atlas_project_name}-${var.environment}-cluster\"\n cluster_type = \"REPLICASET\"\n backup_enabled = true\n mongo_db_major_version = var.mongodb_version\n replication_specs {\n region_configs {\n electable_specs {\n instance_size = var.cluster_instance_size_name\n node_count = 3\n }\n analytics_specs {\n instance_size = var.cluster_instance_size_name\n node_count = 1\n }\n priority = 7\n provider_name = var.cloud_provider\n region_name = var.atlas_region\n }\n }\n}\n```\n\n## Step 12: Create Terraform outputs\n\nYou can output information from your Terraform configuration to the terminal window of the machine executing Terraform commands. This can be especially useful for values you won\u2019t know until the resources are created, like the random password for the database user or the connection string to your Atlas cluster deployment. The code below in the\u00a0**main.tf** file will output these values to the terminal display for you to use after Terraform completes.\n\n```\n# Outputs to Display\noutput \"atlas_cluster_connection_string\" { value = mongodbatlas_advanced_cluster.atlas-cluster.connection_strings.0.standard_srv }\noutput \"ip_access_list\" { value = mongodbatlas_project_ip_access_list.ip.ip_address }\noutput \"project_name\" { value = mongodbatlas_project.atlas-project.name }\noutput \"username\" { value = mongodbatlas_database_user.db-user.username } \noutput \"user_password\" { \n sensitive = true\n value = mongodbatlas_database_user.db-user.password \n }\n```\n\n## Step 13: Set up a private endpoint to your MongoDB Atlas cluster\n\nIncreasingly, we see our customers want their data to traverse only private networks. 
One of the best ways to connect to Atlas over a private network from AWS is to use\u00a0AWS PrivateLink, which establishes a one-way connection that preserves your perceived network trust boundary while eliminating additional security controls associated with other options like VPC peering (Azure Private Link and GCP Private Service Connect are supported, as well).\u00a0Learn more about AWS Private Link with MongoDB Atlas.\n\nTo get started, we will need to first\u00a0**Install the AWS CLI**. If you have not already done so, also see\u00a0AWS Account Creation and\u00a0AWS Access Key Creation\u00a0for more details.\n\nNext, let\u2019s go to the terminal and create AWS Environment Variables with the below commands (similar as we did in Step 6 with our MongoDB Atlas credentials). Use the same region as above, except we will use the AWS naming convention instead, i.e., \u201cus-west-2\u201d.\n\n```\nexport AWS_ACCESS_KEY_ID=\"\"\nexport AWS_SECRET_ACCESS_KEY=\"\"\nexport AWS_DEFAULT_REGION=\"\"\n```\n\nThen, we add the AWS provider to the **provider.tf** file. This will enable us to now deploy AWS resources from the\u00a0**Terraform AWS Provider**\u00a0in addition to MongoDB Atlas resources from the\u00a0**Terraform MongoDB Atlas Provider**\u00a0directly from the same Terraform config files.\n\n```\n# Define the MongoDB Atlas and AWS Providers\nterraform {\n required_providers {\n mongodbatlas = {\n source = \"mongodb/mongodbatlas\"\n }\n aws = {\n source = \"hashicorp/aws\"\n }\n }\n required_version = \">= 0.13\"\n}\n```\n\nWe now add a new entry in our **variables.tf** and **terraform.tfvars** files for the desired AWS region. As mentioned earlier, we will be using \u201cus-west-2\u201d which is the AWS region in Oregon, USA.\n\n**variables.tf**\n\n```\n# AWS Region\nvariable \"aws_region\" {\n type = string\n description = \"AWS Region\"\n}\n```\n\n**terraform.tfvars**\n\n```\naws_region = \"us-west-2\"\n```\n\nNext we create two more files for each of the new types of resources to be deployed: **aws-vpc.tf** to create a full network configuration on the AWS side and **atlas-pl.tf** to create both the\u00a0Amazon VPC Endpoint and the\u00a0MongoDB Atlas Endpoint of the PrivateLink. In your environment, you may already have an AWS network created. If so, then you\u2019ll want to include the correct values in the **atlas-pl.tf** file and won\u2019t need **aws-vpc.tf** file. To get started quickly, we will simply\u00a0git clone them from\u00a0our repo.\n\nAfter that we will use a\u00a0Terraform Data Source and wait until the PrivateLink creation is completed so we can get the new connection string for the PrivateLink connection. 
In the\u00a0**main.tf**, simply add the below:\n\n```\ndata \"mongodbatlas_advanced_cluster\" \"atlas-cluser\" {\n project_id = mongodbatlas_project.atlas-project.id\n name = mongodbatlas_advanced_cluster.atlas-cluster.name\n depends_on = mongodbatlas_privatelink_endpoint_service.atlaseplink]\n}\n```\n\nLastly, staying in the **main.tf** file, we add the below additional output code snippet in order to display the [Private Endpoint-Aware Connection String to the terminal:\n\n```\noutput \"privatelink_connection_string\" {\n value = lookup(mongodbatlas_advanced_cluster.atlas-cluster.connection_strings0].aws_private_link_srv, aws_vpc_endpoint.ptfe_service.id)\n}\n```\n\n## Step 14: Initializing Terraform\n\nWe are now all set to start creating our first MongoDB Atlas deployment!\n\nOpen the terminal console and type the following command:\u00a0**terraform init** to initialize Terraform. This will download and install both the Terraform AWS and MongoDB Atlas Providers (if you have not done so already).\n\n![terraform init command, Terraform has been successfully initialized!\n\n## Step 15: Review Terraform deployment\n\nNext, we will run the\u00a0**terraform plan**\u00a0command. This will output what Terraform plans to do, such as creation, changes, or destruction of resources. If the output is not what you expect, then it\u2019s likely an issue in your Terraform configuration files.\n\n## Step 16: Apply the Terraform configuration\n\nNext, use the\u00a0**terraform apply** command to deploy the infrastructure. If all looks good, input\u00a0**yes**\u00a0to approve terraform build.\n\n**Success!**\n\nNote new AWS and MongoDB Atlas resources can take \\~15 minutes to provision and the provider will continue to give you a status update until it is complete. You can also check on progress through the Atlas UI and AWS Management Console.\n\nThe connection string shown in the output can be used to access (including performing CRUD operations) on your MongoDB database via the\u00a0MongoDB Shell,\u00a0MongoDB Compass GUI, and\u00a0Data Explorer in the UI (as shown below). Learn more about\u00a0how to interact with data in MongoDB Atlas\u00a0with the MongoDB Query Language (MQL). As a pro tip, I regularly leverage the\u00a0MongoDB Cheat Sheet\u00a0to quickly reference key commands.\n\nLastly, as a reminder, the database user_password is a sensitive secret, so to access it, you will need to input the \u201c**terraform output -json user_password**\u201d command in your terminal window to reveal.\n\n## Step 17: Terraform destroy\n\nFeel free to explore more complex environments (including code examples for deploying MongoDB Atlas Clusters from other cloud vendors) which you can find in our public repo examples. When ready to delete all infrastructure created, you can leverage the **terraform destroy command**.\n\nHere the resources we created earlier will all now be terminated. If all looks good, input **yes**:\n\nAfter a few minutes, we are back to an empty slate on both our MongoDB Atlas and AWS environments. It goes without saying, but please be mindful when using the terraform destroy command in any kind of production environment.\n\nThe HashiCorp Terraform MongoDB Atlas Provider is an open source project licensed under the Mozilla Public License 2.0 and we welcome community contributions. 
To learn more, see our\u00a0contributing guidelines.\u00a0As always, feel free to\u00a0contact us\u00a0with any issues.\n\nHappy Terraforming with MongoDB Atlas on AWS!", "format": "md", "metadata": {"tags": ["Atlas", "AWS", "Terraform"], "pageDescription": "A beginner\u2019s guide to start deploying Atlas clusters today with Infrastructure as Code best practices", "contentType": "Tutorial"}, "title": "How to Deploy MongoDB Atlas with Terraform on AWS", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/autocomplete-atlas-search-nextjs", "action": "created", "body": "# Adding Autocomplete To Your NextJS Applications With Atlas Search\n\n## Introduction\n\nImagine landing on a webpage with thousands of items and you have to scroll through all of them to get what you are looking for. You will agree that it's a bad user experience. For such a website, the user might have to leave for an alternative website, which I'm sure is not what any website owner would want.\n\nProviding users with an excellent search experience, such that they can easily search for what they want to see, is crucial for giving them a top-notch user experience.\n\nThe easiest way to incorporate rich, fast, and relevant searches into your applications is through MongoDB Atlas Search, a component of MongoDB Atlas.\u00a0\n\n## Explanation of what we will be building\n\nIn this guide, I will be showing you how I created a text search for a home rental website and utilize Atlas Search to integrate full-text search functionality, also incorporating autocomplete into the search box.\n\nThis search will give users the ability to search for homes by country.\n\nLet's look at the technology we will be using in this project.\n\n## Overview of the technologies and tools that will be used\n\nIf you'd like to follow along, here's what I'll be using.\n\n### Frontend framework\n\nNextJS: We will be using NextJS to build our front end. NextJS is a popular JavaScript framework for building server-rendered React applications.\n\nI chose this framework because it provides a simple setup and helps with optimizations such as automatic code splitting and optimized performance for faster load times. Additionally, it has a strong community and ecosystem, with a large number of plugins and examples available, making it an excellent choice for building both small- and large-scale web applications.\n\n### Backend framework\n\nNodeJS and ExpressJS: We will be using these to build our back end. Both are used together for building server-side web applications.\n\nI chose these frameworks because Node.js is an open-source, cross-platform JavaScript runtime environment for building fast, scalable, and high-performance server-side applications. Express.js, on the other hand, is a popular minimal and flexible Node.js web application framework that provides a robust set of features for building web and mobile applications.\n\n### Database service provider\n\nMongoDB Atlas is a fully managed cloud database service provided by MongoDB. It's a cloud-hosted version of the popular NoSQL database (MongoDB) and offers automatic scalability, high availability, and built-in security features. 
With MongoDB Atlas, developers can focus on building their applications rather than managing the underlying infrastructure, as the service takes care of database setup, operation, and maintenance.\n\n### MongoDB Atlas Search\n\nMongoDB Atlas Search is a full-text search and analytics engine integrated with MongoDB Atlas. It enables developers to add search functionality to their applications by providing fast and relevant search results, including text search and faceted search, and it also supports autocomplete and geospatial search.\n\nMongoDB Atlas Search is designed to be highly scalable and easy to use.\n\n## Pre-requisites\n\nThe full source of this application can be found on Github.\n\n## Project setup\n\nLet's get to work!\n\n### Setting up our project\n\nTo start with, let's clone the repository that contains the starting source code on Github.\n\n```bash\ngit clone https://github.com/mongodb-developer/search-nextjs/\ncd search-nextjs\n```\n\nOnce the clone is completed, in this repository, you will see two sub-folders:\n\n`mdbsearch`: Contains the Nextjs project (front end)\n`backend`: Contains the Node.js project (back end)\n\nOpen the project with your preferred text editor. With this done, let's set up our MongoDB environment.\n\n### Setting up a MongoDB account\n\nTo set up our MongoDB environment, we will need to follow the below instructions from the MongoDB official documentation.\n\n- Sign Up for a Free MongoDB Account\n\n- Create a Cluster\n\n- Add a Database User\n\n- Configure a Network Connection\n\n- Load Sample Data\n\n- Get Connection String\n\nYour connection string should look like this: mongodb+srv://user:\n\n### Identify a database to work with\n\nWe will be working with the `sample-airbnb` sample data from MongoDB for this application because it contains appropriate entries for the project.\n\nIf you complete the above steps, you should have the sample data loaded in your cluster. Otherwise, check out the documentation on how to load sample data.\n\n## Start the Node.js backend API\n\nThe API for our front end will be provided by the Node.js back end. To establish a connection to your database, let's create a `.env` file and update it with the connection string.\n\n```bash\ncd backend\nnpm install\ntouch .env\n```\n\nUpdate .env as below\n\n```bash\nPORT=5050\nMONGODB_URI=\n```\n\nTo start the server, we can either utilize the node executable or, for ease during the development process, use `nodemon`. This tool can automatically refresh your server upon detecting modifications made to the source code. For further information on tool installation, visit the official website.\n\nRun the below code\n\n```bash\nnpx nodemon .\n```\n\nThis command will start the server. You should see a message in your console confirming that the server is running and the database is connected.\u00a0\n\n## Start the NextJs frontend application\n\nWith the back end running, let's start the front end of your application. Open a new terminal window and navigate to the `mdbsearch` folder. Then, install all the necessary dependencies for this project and initiate the project by running the npm command. 
Let's also create a `.env` file and update it with the backend url.\n\n```bash\ncd ../mdbsearch\nnpm install\ntouch .env\n```\n\nCreate a .env file, and update as below:\n\n```bash\nNEXT_PUBLIC_BASE_URL=http://localhost:5050/\n```\n\nStart the application by running the below command.\n\n```bash\nnpm run dev\n```\n\nOnce the application starts running, you should see the page below running at http://localhost:3000. The running back end is already connected to our front end, hence, during the course of this implementation, we need to make a few modifications to our code.\n\nWith this data loading from the MongoDB database, next, let's proceed to implement the search functionality.\n\n## Implementing text search in our application with MongoDB Altas Search\n\nTo be able to search through data in our collection, we need to follow the below steps:\n\n### Create a search index\n\nThe MongoDB free tier account gives us the room to create, at most, three search indexes.\n\nFrom the previously created cluster, click on the Browse collections button, navigate to Search, and at the right side of the search page, click on the Create index button. On this screen, click Next to use the visual editor, add an index name (in our case, `search_home`), select the `listingsAndReviews` collection from the `sample_airbnb` database, and click Next.\n\nFrom this screen, click on Refine Your Index. Here is where we will specify the field in our collection that will be used to generate search results. In our case --- `address` and `property_type` --- `address` field is an object that has a `country` property, which is our target.\n\nTherefore, on this screen, we need to toggle off the Enable Dynamic Mapping option. Under Field Mapping, click the Add Field Mapping button. In the Field Name input, type `address.country`, and in the Data Type, make sure String is selected. Then, scroll to the bottom of the dialog and click the Add button. Create another Field Mapping for `property_type`. Data Type should also be String.\n\nThe index configuration should be as shown below.\n\nAt this point, scroll to the bottom of the screen and click on Save Changes. Then, click the Create Search Index button. Then wait while MongoDB creates your search index. It usually takes a few seconds to be active. Once active, we can start querying our collection with this index.\n\nYou can find detailed information on how to create a search index in the documentation.\n\n## Testing our search index\n\nMongoDB provides a search tester, which can be used to test our search indexes before integrating them into our application. To test our search index, let's click the QUERY button in the search index. This will take us to the Search Tester screen.\n\nRemember, we configure our search index to return results from `address.country` or `property_type`. So, you can test with values like `spain`, `brazil`, \n`apartment`, etc. These values will return results, and we can explore each result document to see where the result is found from these fields.\n\nTest with values like `span` and `brasil`. These will return no data result because it's not an exact match. MongoDB understands that scenarios like these are likely to happen. So, Atlas Search has a fuzzy matching feature. With fuzzy matching, the search tool will be searching for not only exact matching keywords but also for matches that might have slight variations, which we will be using in this project. 
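To make that concrete, below is a small standalone sketch (not code from this tutorial's repository) of what a fuzzy `text` query against the `search_home` index looks like when run with the Node.js driver. It assumes the same `MONGODB_URI` environment variable used by the backend; `maxEdits: 2` (the default) allows up to two single-character edits per term, which is why a typo like `brasil` can still match `Brazil`.

```javascript
// Standalone sketch: a fuzzy text query against the search_home index using the
// Node.js driver. MONGODB_URI is assumed to hold your Atlas connection string.
const { MongoClient } = require("mongodb");

async function fuzzyCountrySearch(term) {
  const client = new MongoClient(process.env.MONGODB_URI);
  await client.connect();
  try {
    const collection = client.db("sample_airbnb").collection("listingsAndReviews");
    return await collection
      .aggregate([
        {
          $search: {
            index: "search_home",
            text: {
              query: term, // e.g. "brasil" instead of "brazil"
              path: "address.country",
              fuzzy: { maxEdits: 2 }, // 2 is the default; tolerates small typos
            },
          },
        },
        { $limit: 5 },
        { $project: { _id: 0, name: 1, "address.country": 1 } },
      ])
      .toArray();
  } finally {
    await client.close();
  }
}

// Example: fuzzyCountrySearch("brasil").then(console.log);
```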
You can find the details on fuzzy search in the documentation.\n\nWith the search index created and tested, we can now implement it in our application. But before that, we need to understand what a MongoDB aggregation pipeline is.\n\n## Consume search index in our backend application\n\nNow that we have the search index configured, let's try to integrate it into the API used for this project. Open `backend/index.js` file, find the comment Search endpoint goes here , and update it with the below code.\n\nStart by creating the route needed by our front end.\n\n```javascript\n// Search endpoint goes here\napp.get(\"/search/:search\", async (req, res) => {\n\u00a0\u00a0const queries = JSON.parse(req.params.search)\n\u00a0// Aggregation pipeline goes here\n\u00a0});\n```\n\nIn this endpoint, `/search/:search` we create a two-stage aggregation pipeline: `$search` and `$project`. `$search` uses the index `search_home`, which we created earlier. The `$search` stage structure will be based on the query parameter that will be sent from the front end while the `$project` stage simply returns needed fields from the `$search` result.\n\nThis endpoint will receive the `country` and `property_type`, so we can start building the aggregation pipeline. There will always be a category property. We can start by adding this.\n\n```javascript\n// Start building the search aggregation stage\nlet searcher_aggregate = {\n\u00a0\u00a0\u00a0\u00a0\"$search\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"index\": 'search_home',\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"compound\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"must\": \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0// get home where queries.category is property_type\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{ \"text\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"query\": queries.category,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"path\": 'property_type',\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"fuzzy\": {}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}},\n\n// get home where queries.country is address.country\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{\"text\": {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"query\": queries.country,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"path\": 'address.country',\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"fuzzy\": {}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0]}\n\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0};\n```\n\nWe don't necessarily want to send all the fields back to the frontend, so we can use a projection stage to limit the data we send over.\n\n```javascript\napp.get(\"/search/:search\", async (req, res) => {\n\u00a0const queries = JSON.parse(req.params.search)\n\u00a0\u00a0// Start building the search aggregation stage\n\u00a0\u00a0let searcher_aggregate = { ...\u00a0 };\n\u00a0\u00a0// A projection will help us return only the required fields\n\u00a0\u00a0let projection = {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'$project': {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'accommodates': 1,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'price': 1,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'property_type': 1,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'name': 
1,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'description': 1,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'host': 1,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'address': 1,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'images': 1,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"review_scores\": 1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0};\n\u00a0});\n```\n\nFinally, we can use the `aggregate` method to run this aggregation pipeline, and return the first 50 results to the front end.\n\n```javascript\napp.get(\"/search/:search\", async (req, res) => {\n\u00a0\u00a0const queries = JSON.parse(req.params.search)\n\u00a0\u00a0\u00a0// Start building the search aggregation stage\n\u00a0\u00a0let searcher_aggregate = { ... };\n // A projection will help us return only the required fields\n\u00a0\u00a0let projection = { ... };\n // We can now execute the aggregation pipeline, and return the first 50 elements\n\u00a0\u00a0let results = await itemCollection.aggregate([ searcher_aggregate, projection ]).limit(50).toArray();\n\u00a0\u00a0res.send(results).status(200);\n\u00a0});\n```\n\nThe result of the pipeline will be returned when a request is made to `/search/:search`.\n\nAt this point, we have an endpoint that can be used to search for homes by their country.\n\nThe full source of this endpoint can be located on [Github .\n\n## Implement search feature in our frontend application\n\nFrom our project folder, open the `mdbsearch/components/Header/index.js` file.Find the `searchNow`function and update it with the below code.\n\n```javascript\n//Search function goes here\n\u00a0\u00a0\u00a0\u00a0const searchNow = async (e) => {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0setshow(false)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0let search_params = JSON.stringify({\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0country: country,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0category: `${activeCategory}`\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0})\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0setLoading(true)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0await fetch(`${process.env.NEXT_PUBLIC_BASE_URL}search/${search_params}`)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.then((response) => response.json())\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.then(async (res) => {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0updateCategory(activeCategory, res)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0router.query = { country, category: activeCategory };\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0setcountryValue(country);\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0router.push(router);\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0})\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.catch((err) => console.log(err))\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.finally(() => setLoading(false))\n\u00a0\u00a0\u00a0\u00a0}\n\n```\n\nNextFind the `handleChange`function, let update 
it with the below code\u00a0\n\n```javascript\n\u00a0\u00a0const handleChange = async (e) => {\n //Autocomplete function goes here\n\u00a0setCountry(e.target.value);\n}\n```\n\nWith the above update, let's explore our application. Start the application by running `npm run dev` in the terminal. Once the page is loaded, choose a property type, and then click on \"search country.\" At the top search bar, type `brazil`. Finally, click the search button. You should see the result as shown below.\n\nThe search result shows data where `address.country` is brazil and `property_type` is apartment. Explore the search with values such as braz, brzl, bral, etc., and we will still get results because of the fuzzy matching feature.\n\nNow, we can say the experience on the website is good. However, we can still make it better by adding an autocomplete feature to the search functionality.\n\n## Add autocomplete to search box\n\nMost modern search engines commonly include an autocomplete dropdown that provides suggestions as you type. Users prefer to quickly find the correct match instead of browsing through an endless list of possibilities. This section will demonstrate how to utilize Atlas Search autocomplete capabilities to implement this feature in our search box.\n\nIn our case, we are expecting to see suggestions of countries as we type into the country search input. To implement this, we need to create another search index.\n\nFrom the previously created cluster, click on the Browse collections button and navigate to Search. At the right side of the search page, click on the Create index button. On this screen, click Next to use the visual editor, add an index name (in our case, `country_autocomplete`), select the listingsAndReviews collection from the sample_airbnb database, and click Next.\n\nFrom this screen, click on Refine Your Index. We need to toggle off the Enable Dynamic Mapping option.\n\nUnder Field Mapping, click the Add Field Mapping button. In the Field Name input, type `address.country`, and in the Data Type, this time, make sure Autocomplete is selected. Then scroll to the bottom of the dialog and click the Add button.\n\nAt this point, scroll to the bottom of the screen and Save Changes. Then, click the Create Search Index button. 
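If you would rather script this step than click through the visual editor, the same mapping can be expressed as an index definition and created programmatically. The sketch below is not part of the tutorial code and makes a few assumptions: a recent Node.js driver that exposes `createSearchIndex`, a cluster tier that allows search index management from the driver, and the same `MONGODB_URI` variable as the backend. The Atlas UI may also add explicit defaults (such as edgeGram tokenization and min/max gram sizes) to the `autocomplete` mapping.

```javascript
// Sketch only (the tutorial itself uses the Atlas UI): creating the
// country_autocomplete index from code. Dynamic mapping is disabled and
// address.country is mapped as an autocomplete field, mirroring the UI steps.
const { MongoClient } = require("mongodb");

async function createCountryAutocompleteIndex() {
  const client = new MongoClient(process.env.MONGODB_URI);
  await client.connect();
  try {
    const collection = client.db("sample_airbnb").collection("listingsAndReviews");
    await collection.createSearchIndex({
      name: "country_autocomplete",
      definition: {
        mappings: {
          dynamic: false,
          fields: {
            address: {
              type: "document",
              fields: {
                country: { type: "autocomplete" },
              },
            },
          },
        },
      },
    });
  } finally {
    await client.close();
  }
}
```

Whether you create it from the UI or from code, the index build itself happens asynchronously on the cluster.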
Wait while MongoDB creates your search index --- it usually takes a few seconds to be active.\n\nOnce done, we should have two search indexes, as shown below.\n\n## Implement autocomplete API in our backend application\n\nWith this done, let's update our backend API as below:\n\nOpen the `backend/index.js` file, and update it with the below code:\n\n```javascript\n//Country autocomplete endpoint goes here\napp.get(\"/country/autocomplete/:param\", async (req, res) => {\n\u00a0\u00a0let\u00a0 results = await itemCollection.aggregate(\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'$search': {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'index': 'country_autocomplete',\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'autocomplete': {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'query': req.params.param,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'path': 'address.country',\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'highlight': {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'path': [ 'address.country']\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}, {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'$limit': 1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}, {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'$project': {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'address.country': 1,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'highlights': {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'$meta': 'searchHighlights'\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0]).toArray();\n\n\u00a0\u00a0res.send(results).status(200);\n\u00a0});\n```\n\nThe above endpoint will return a suggestion of countries as the user types in the search box. In a three-stage aggregation pipeline, the first stage in the pipeline uses the `$search` operator to perform an autocomplete search on the `address.country` field of the documents in the `country_autocomplete` index. The query parameter is set to the user input provided in the URL parameter, and the `highlight` parameter is used to return the matching text with highlighting.\n\nThe second stage in the pipeline limits the number of results returned to one.\n\nThe third stage in the pipeline uses the `$project` operator to include only the `address.country` field and the search highlights in the output\n\n## Implement autocomplete in our frontend application\n\nLet's also update the front end as below. From our project folder, open the `mdbsearch/components/Header/index.js` file. 
Find the `handeChange` function, and let's update it with the below code.\n\n```javascript\n//Autocomplete function goes here\n\u00a0\u00a0\u00a0\u00a0const handleChange = async (e) => {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0setCountry(e.target.value);\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0if(e.target.value.length > 1){\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0await fetch(`${process.env.NEXT_PUBLIC_BASE_URL}country/autocomplete/${e.target.value}`)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.then((response) => response.json())\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0.then(async (res) => {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0setsug_countries(res)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0})\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0else{\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0setsug_countries([])\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0}\n```\n\nThe above function will make a HTTP request to the `country/autocomplete` and save the response in a variable.\n\nWith our code updated accordingly, let's explore our application. Everything should be fine now. We should be able to search homes by their country, and we should get suggestions as we type into the search box.\n\n![showing text autocomplete.\n\nVoila! We now have fully functional text search for a home rental website. This will improve the user experience on the website.\n\n## Summary\n\nTo have a great user experience on a website, you'll agree with me that it's crucial to make it easy for your users to search for what they are looking for. In this guide, I showed you how I created a text search for a home rental website with MongoDB Atlas Search. This search will give users the ability to search for homes by their country.\n\nMongoDB Atlas Search is a full-text search engine that enables developers to build rich search functionality into their applications, allowing users to search through large volumes of data quickly and easily. Atlas Search also supports a wide range of search options, including fuzzy matching, partial word matching, and wildcard searches. Check out more on MongoDB Atlas Search from the official documentation.\n\nQuestions? Comments? Let's continue the conversation! Head over to the MongoDB Developer Community --- we'd love to hear from you.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "In this tutorial, you will learn how to add the autocomplete feature to a website built with NextJS.", "contentType": "Tutorial"}, "title": "Adding Autocomplete To Your NextJS Applications With Atlas Search", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/mistral-ai-integration", "action": "created", "body": "# Revolutionizing AI Interaction: Integrating Mistral AI and MongoDB for a Custom LLM GenAI Application\n\nLarge language models (LLMs) are known for their ability to converse with us in an almost human-like manner. 
Yet, the complexity of their inner workings often remains covered in mystery, sparking intrigue. This intrigue intensifies when we factor in the privacy challenges associated with AI technologies.\n\nIn addition to privacy concerns, cost is another significant challenge. Deploying a large language model is crucial for AI applications, and there are two primary options: self-hosted or API-based models. With API-based LLMs, the model is hosted by a service provider, and costs accrue with each API request. In contrast, a self-hosted LLM runs on your own infrastructure, giving you complete control over costs. The bulk of expenses for a self-hosted LLM pertains to the necessary hardware.\n\nAnother aspect to consider is the availability of LLM models. With API-based models, during times of high demand, model availability can be compromised. In contrast, managing your own LLM ensures control over availability. You will be able to make sure all your queries to your self-managed LLM can be handled properly and under your control.\n\nMistral AI, a French startup, has introduced innovative solutions with the Mistral 7B model, Mistral Mixture of Experts, and Mistral Platform, all standing for a spirit of openness. This article explores how Mistral AI, in collaboration with MongoDB, a developer data platform that unifies operational, analytical, and vector search data services, is revolutionizing our interaction with AI. We will delve into the integration of Mistral AI with MongoDB Atlas and discuss its impact on privacy, cost efficiency, and AI accessibility.\n\n## Mistral AI: a game-changer\nMistral AI has emerged as a pivotal player in the open-source AI community, setting new standards in AI innovation. Let's break down what makes Mistral AI so transformative.\n\n### A beacon of openness: Mistral AI's philosophy\nMistral AI's commitment to openness is at the core of its philosophy. This commitment extends beyond just providing open-source code; it's about advocating for transparent and adaptable AI models. By prioritizing transparency, Mistral AI empowers users to truly own and shape the future of AI. This approach is fundamental to ensuring AI remains a positive, accessible force for everyone.\n\n### Unprecedented performance with Mistral 8x7B\nMistral AI has taken a monumental leap forward with the release of Mixtral 8x7B, an innovative sparse mixture of experts model (SMoE) with open weights. An SMoE is a neural network architecture that boosts traditional model efficiency and scalability. It utilizes specialized \u201cexpert\u201d sub-networks to handle different input segments. Mixtral incorporates eight of these expert sub-networks.\n\nLicensed under Apache 2.0, Mixtral sets a new benchmark in the AI landscape. Here's a closer look at what makes Mixtral 8x7B a groundbreaking advancement.\n\n### High-performance with sparse architectures\nMixtral 8x7B stands out for its efficient utilization of parameters and high-quality performance. Despite its total parameter count of 46.7 billion, it operates using only 12.9 billion parameters per token. This unique architecture allows Mixtral to maintain the speed and cost efficiency of a 12.9 billion parameter model while offering the capabilities of a much larger model.\n\n### Superior performance, versatility, and cost-performance optimization\nMixtral rivals leading models like Llama 2 70B and GPT-3.5, excelling in handling large contexts, multilingual processing, code generation, and instruction-following. 
The Mixtral 8x7B model combines cost efficiency with high performance, using a sparse mixture of experts network for optimized resource usage, offering premium outputs at lower costs compared to similar models.\n\n## Mistral \u201cLa plateforme\u201d\nMistral AI's beta platform offers developers generative models focusing on simplicity: Mistral-tiny for cost-effective, English-only text generation (7.6 MT-Bench score), Mistral-small for multilingual support including coding (8.3 score), and Mistral-medium for high-quality, multilingual output (8.6 score). These user-friendly, accurately fine-tuned models facilitate efficient AI deployment, as demonstrated in our article using the Mistral-tiny and the platform's embedding model.\n\n## Why MongoDB Atlas as a vector store?\nMongoDB Atlas is a unique, fully-managed platform integrating enterprise data, vector search, and analytics, allowing the creation of tailored AI applications. It goes beyond standard vector search with a comprehensive ecosystem, including models like Mistral, setting it apart in terms of unification, scalability, and security.\n\nMongoDB Atlas unifies operational, analytical, and vector search data services to streamline the building of generative AI-enriched apps. From proof-of-concept to production, MongoDB Atlas empowers developers with scalability, security, and performance for their mission-critical production applications.\n\nAccording to the Retool AI report, MongoDB takes the lead, earning its place as the top-ranked vector database.\n\n \n\n - Vector store easily works together with current MongoDB databases, making it a simple addition for groups already using MongoDB for managing their data. This means they can start using vector storage without needing to make big changes to their systems.\n \n - MongoDB Atlas is purpose-built to handle large-scale, operation-critical applications, showcasing its robustness and reliability. This is especially important in applications where it's critical to have accurate and accessible data. \n\n - Data in MongoDB Atlas is stored in JSON format, making it an ideal choice for managing a variety of data types and structures. This is particularly useful for AI applications, where the data type can range from embeddings and text to integers, floating-point values, GeoJSON, and more.\n\n - MongoDB Atlas is designed for enterprise use, featuring top-tier security, the ability to operate across multiple cloud services, and is fully managed. This ensures organizations can trust it for secure, reliable, and efficient operations.\n\nWith MongoDB Atlas, organizations can confidently store and retrieve embeddings alongside your existing data, unlocking the full potential of AI for their applications.\n\n## Overview and implementation of your custom LLM GenAI app\nCreating a self-hosted LLM GenAI application integrates the power of open-source AI with the robustness of an enterprise-grade vector store like MongoDB. Below is a detailed step-by-step guide to implementing this innovative system:\n\n### 1. Data acquisition and chunk\nThe first step is gathering data relevant to your application's domain, including text documents, web pages, and importantly, operational data already stored in MongoDB Atlas. Leveraging Atlas's operational data adds a layer of depth, ensuring your AI application is powered by comprehensive, real-time data, which is crucial for contextually enriched AI responses.\n\nThen, we divide the data into smaller, more manageable chunks. 
This division is crucial for efficient data processing, guaranteeing the AI model interacts with data that is both precise and reflective of your business's operational context.\n\n### 2.1 Generating embeddings\nUtilize **Mistral AI embedding endpoint** to transform your segmented text data into embeddings. These embeddings are numerical representations that capture the essence of your text, making it understandable and usable by AI models.\n\n### 2.2 Storing embeddings in MongoDB vector store\nOnce you have your embeddings, store them in MongoDB\u2019s vector store. MongoDB Atlas, with its advanced search capabilities, allows for the efficient storing and managing of these embeddings, ensuring that they are easily accessible when needed.\n\n### 2.3 Querying your data\nUse **MongoDB\u2019s vector search** capability to query your stored data. You only need to create a vector search index on the embedding field in your document. This powerful feature enables you to perform complex searches and retrieve the most relevant pieces of information based on your query parameters.\n\n### 3. & 4. Embedding questions and retrieving similar chunks\nWhen a user poses a question, generate an embedding for this query. Then, using MongoDB\u2019s search functionality, retrieve data chunks that are most similar to this query embedding. This step is crucial for finding the most relevant information to answer the user's question.\n\n### 5. Contextualized prompt creation\nCombine the retrieved segments and the original user query to create a comprehensive prompt. This prompt will provide a context to the AI model, ensuring that the responses generated are relevant and accurate.\n\n### 6. & 7. Customized answer generation from Mistral AI\nFeed the contextualized prompt into the Mistral AI 7B LLM. The model will then generate a customized answer based on the provided context. This step leverages the advanced capabilities of Mistral AI to provide specific, accurate, and relevant answers to user queries.\n\n## Implementing a custom LLM GenAI app with Mistral AI and MongoDB Atlas\n\nNow that we have a comprehensive understanding of Mistral AI and MongoDB Atlas and the overview of your next custom GenAI app, let\u2019s dive into implementing a custom large language model GenAI app. This app will allow you to have your own personalized AI assistant, powered by the Mistral AI and supported by the efficient data management of MongoDB Atlas.\n\nIn this section, we\u2019ll explain the prerequisites and four parts of the code:\n\n - Needed libraries\n - Data preparation process\n - Question and answer process\n - User interface through Gradio\n\n### 0. Prerequisites\nAs explained above, in this article, we are going to leverage the Mistral AI model through Mistral \u201cLa plateforme.\u201d To get access, you should first create an account on Mistral AI. You may need to wait a few hours (or one day) before your account is activated. \n\nOnce your account is activated, you can add your subscription. Follow the instructions step by step on the Mistral AI platform.\n\nOnce you have set up your subscription, you can then generate your API key for future usage. \n\nBesides using the Mistral AI \u201cLa plateforme,\u201d you have another option to implement the Mistral AI model on a machine featuring Nvidia V100, V100S, or A100 GPUs (not an exhaustive list). 
If you want to deploy a self-hosted large language model on a public or private cloud, you can refer to my previous article on how to deploy Mistral AI within 10 minutes.\n\n### 1. Import needed libraries\nThis section shows the versions of the required libraries. Personally, I run my code in VScode. So you need to install the following libraries beforehand. Here is the version at the moment I\u2019m running the following code.\n\n```\nmistralai \n 0.0.8\npymongo 4.3.3\ngradio 4.10.0\ngradio_client 0.7.3\nlangchain 0.0.348\nlangchain-core 0.0.12\npandas 2.0.3\n```\n\nThese include libraries for data processing, web scraping, AI models, and database interactions.\n\n```\nimport gradio as gr\nimport os\nimport pymongo\nimport pandas as pd\nfrom mistralai.client import MistralClient\nfrom mistralai.models.chat_completion import ChatMessage\nfrom langchain.document_loaders import PyPDFLoader\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\n```\n\n### 2. Data preparation\nThe data_prep() function loads data from a PDF, a document, or a specified URL. It extracts text content from a webpage/documentation, removes unwanted elements, and then splits the data into manageable chunks.\n\nOnce the data is chunked, we use the Mistral AI embedding endpoint to compute embeddings for every chunk and save them in the document. Afterward, each document is added to a MongoDB collection.\n\n```\ndef data_prep(file):\n # Set up Mistral client\n api_key = os.environ\"MISTRAL_API_KEY\"]\n client = MistralClient(api_key=api_key)\n\n # Process the uploaded file\n loader = PyPDFLoader(file.name)\n pages = loader.load_and_split()\n\n # Split data\n text_splitter = RecursiveCharacterTextSplitter(\n chunk_size=100,\n chunk_overlap=20,\n separators=[\"\\n\\n\", \"\\n\", \"(?<=\\. )\", \" \", \"\"],\n length_function=len,\n )\n docs = text_splitter.split_documents(pages)\n\n # Calculate embeddings and store into MongoDB\n text_chunks = [text.page_content for text in docs]\n df = pd.DataFrame({'text_chunks': text_chunks})\n df['embedding'] = df.text_chunks.apply(lambda x: get_embedding(x, client))\n\n collection = connect_mongodb()\n df_dict = df.to_dict(orient='records')\n collection.insert_many(df_dict)\n\n return \"PDF processed and data stored in MongoDB.\"\n```\n\n### Connecting to MongoDB server\nThe `connect_mongodb()` function establishes a connection to a MongoDB server. It returns a collection object that can be used to interact with the database. This function will be called in the `data_prep()` function. \n\nIn order to get your MongoDB connection string, you can go to your MongoDB Atlas console, click the \u201cConnect\u201d button on your cluster, and choose the Python driver. \n\n![Connect to your cluster\n\n```\ndef connect_mongodb():\n # Your MongoDB connection string\n mongo_url = os.environ\"MONGO_URI\"]\n client = pymongo.MongoClient(mongo_url)\n db = client[\"mistralpdf\"]\n collection = db[\"pdfRAG\"]\n return collection\n```\n\nYou can import your mongo_url by doing the following command in shell.\n\n```\nexport MONGO_URI=\"Your_cluster_connection_string\"\n```\n\n### Getting the embedding\nThe get_embedding(text) function generates an embedding for a given text. It replaces newline characters and then uses Mistral AI \u201cLa plateforme\u201d embedding endpoints to get the embedding. 
This function will be called in both data preparation and question and answering processes.\n\n```\ndef get_embedding(text, client):\n text = text.replace(\"\\n\", \" \")\n embeddings_batch_response = client.embeddings(\n model=\"mistral-embed\",\n input=text,\n )\n return embeddings_batch_response.data[0].embedding\n```\n\n### 3. Question and answer function\nThis function is the core of the program. It processes a user's question and creates a response using the context supplied by Mistral AI. \n\n![Question and answer process\n\nThis process involves several key steps. Here\u2019s how it works:\n\n - Firstly, we generate a numerical representation, called an embedding, through a Mistral AI embedding endpoint, for the user\u2019s question. \n - Next, we run a vector search in the MongoDB collection to identify the documents similar to the user\u2019s question.\n - It then constructs a contextual background by combining chunks of text from these similar documents. We prepare an assistant instruction by combining all this information. \n - The user\u2019s question and the assistant\u2019s instruction are prepared into a prompt for the Mistral AI model. \n - Finally, Mistral AI will generate responses to the user thanks to the retrieval-augmented generation process.\n\n```\ndef qna(users_question):\n # Set up Mistral client\n api_key = os.environ\"MISTRAL_API_KEY\"]\n client = MistralClient(api_key=api_key)\n\n question_embedding = get_embedding(users_question, client)\n print(\"-----Here is user question------\")\n print(users_question)\n documents = find_similar_documents(question_embedding)\n \n print(\"-----Retrieved documents------\")\n print(documents)\n for doc in documents:\n doc['text_chunks'] = doc['text_chunks'].replace('\\n', ' ')\n \n for document in documents:\n print(str(document) + \"\\n\")\n\n context = \" \".join([doc[\"text_chunks\"] for doc in documents])\n template = f\"\"\"\n You are an expert who loves to help people! Given the following context sections, answer the\n question using only the given context. If you are unsure and the answer is not\n explicitly written in the documentation, say \"Sorry, I don't know how to help with that.\"\n\n Context sections:\n {context}\n\n Question:\n {users_question}\n\n Answer:\n \"\"\"\n messages = [ChatMessage(role=\"user\", content=template)]\n chat_response = client.chat(\n model=\"mistral-tiny\",\n messages=messages,\n )\n formatted_documents = '\\n'.join([doc['text_chunks'] for doc in documents])\n\n return chat_response.choices[0].message, formatted_documents\n```\n\n### The last configuration on the MongoDB vector search index\nIn order to run a vector search query, you only need to create a vector search index in MongoDB Atlas as follows. (You can also learn more about [how to create a vector search index.)\n\n```\n{\n \"type\": \"vectorSearch\",\n \"fields\": \n {\n \"numDimensions\": 1536,\n \"path\": \"'embedding'\",\n \"similarity\": \"cosine\",\n \"type\": \"vector\"\n }\n ]\n}\n```\n\n### Finding similar documents\nThe find_similar_documents(embedding) function runs the vector search query in a MongoDB collection. This function will be called when the user asks a question. 
We will use this function to find similar documents to the questions in the question and answering process.\n\n```\ndef find_similar_documents(embedding):\n collection = connect_mongodb()\n documents = list(\n collection.aggregate([\n {\n \"$vectorSearch\": {\n \"index\": \"vector_index\",\n \"path\": \"embedding\",\n \"queryVector\": embedding,\n \"numCandidates\": 20,\n \"limit\": 10\n }\n },\n {\"$project\": {\"_id\": 0, \"text_chunks\": 1}}\n ]))\n return documents\n```\n\n### 4. Gradio user interface\nIn order to have a better user experience, we wrap the PDF upload and chatbot into two tabs using Gradio. Gradio is a Python library that enables the fast creation of customizable web applications for machine learning models and data processing workflows. You can put this code at the end of your Python file. Inside of this function, depending on which tab you are using, either data preparation or question and answering, we will call the explained dataprep() function or qna() function. \n\n```\n# Gradio Interface for PDF Upload\npdf_upload_interface = gr.Interface(\n fn=data_prep,\n inputs=gr.File(label=\"Upload PDF\"),\n outputs=\"text\",\n allow_flagging=\"never\"\n)\n\n# Gradio Interface for Chatbot\nchatbot_interface = gr.Interface(\n fn=qna,\n inputs=gr.Textbox(label=\"Enter Your Question\"),\n outputs=[\n gr.Textbox(label=\"Mistral Answer\"),\n gr.Textbox(label=\"Retrieved Documents from MongoDB Atlas\")\n ],\n allow_flagging=\"never\"\n)\n\n# Combine interfaces into tabs\niface = gr.TabbedInterface(\n [pdf_upload_interface, chatbot_interface],\n [\"Upload PDF\", \"Chatbot\"]\n)\n\n# Launch the Gradio app\niface.launch()\n```\n\n![Author\u2019s Gradio UI to upload PDF\n\n## Conclusion\nThis detailed guide has delved into the dynamic combination of Mistral AI and MongoDB, showcasing how to develop a bespoke large language model GenAI application. Integrating the advanced capabilities of Mistral AI with MongoDB's robust data management features enables the creation of a custom AI assistant that caters to unique requirements.\n\nWe have provided a straightforward, step-by-step methodology, covering everything from initial data gathering and segmentation to the generation of embeddings and efficient data querying. 
This guide serves as a comprehensive blueprint for implementing the system, complemented by practical code examples and instructions for setting up Mistral AI on a GPU-powered machine and linking it with MongoDB.\n\nLeveraging Mistral AI and MongoDB Atlas, users gain access to the expansive possibilities of AI applications, transforming our interaction with technology and unlocking new, secure ways to harness data insights while maintaining privacy.\n\n### Learn more\nTo learn more about how Atlas helps organizations integrate and operationalize GenAI and LLM data, take a look at our Embedding Generative AI whitepaper to explore RAG in more detail.\n\nIf you want to know more about how to deploy a self-hosted Mistral AI with MongoDB, you can refer to my previous articles: Unleashing AI Sovereignty: Getting Mistral.ai 7B Model Up and Running in Less Than 10 Minutes and Starting Today with Mistral AI & MongoDB: A Beginner\u2019s Guide to a Self-Hosted LLM Generative AI Application.\nMixture of Experts Explained\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "This tutorial will go over how to integrate Mistral AI and MongoDB for a custom LLM genAI application.", "contentType": "Tutorial"}, "title": "Revolutionizing AI Interaction: Integrating Mistral AI and MongoDB for a Custom LLM GenAI Application", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/boosting-ai-build-chatbot-data-mongodb-atlas-vector-search-langchain-templates-using-rag-pattern", "action": "created", "body": "# Boosting AI: Build Your Chatbot Over Your Data With MongoDB Atlas Vector Search and LangChain Templates Using the RAG Pattern\n\nIn this tutorial, I will show you the simplest way to implement an AI chatbot-style application using MongoDB Atlas Vector Search with LangChain Templates and the retrieval-augmented generation (RAG) pattern for more precise chat responses.\n\n## Retrieval-augmented generation (RAG) pattern\n\nThe retrieval-augmented generation (RAG) model enhances LLMs by supplementing them with additional, relevant data, ensuring grounded and precise responses for business purposes. Through vector search, RAG identifies and retrieves pertinent documents from databases, which it uses as context sent to the LLM along with the query, thereby improving the LLM's response quality. This approach decreases inaccuracies by anchoring responses in factual content and ensures responses remain relevant with the most current data. RAG optimizes token use without expanding an LLM's token limit, focusing on the most relevant documents to inform the response process.\n\n. This collaboration has produced a retrieval-augmented generation template that capitalizes on the strengths of MongoDB Atlas Vector Search along with OpenAI's technologies. The template offers a developer-friendly approach to crafting and deploying chatbot applications tailored to specific data sets. The LangChain templates serve as a deployable reference framework, accessible as a REST API via LangServe.\n\nThe alliance has also been instrumental in showcasing the latest Atlas Vector Search advancements, notably the `$vectorSearch` aggregation stage, now embedded within LangChain's Python and JavaScript offerings. The joint venture is committed to ongoing development, with plans to unveil more templates. 
These future additions are intended to further accelerate developers' abilities to realise and launch their creative projects.\n\n## LangChain Templates\n\nLangChain Templates present a selection of reference architectures that are designed for quick deployment, available to any user. These templates introduce an innovative system for the crafting, exchanging, refreshing, acquiring, and tailoring of diverse chains and agents. They are crafted in a uniform format for smooth integration with LangServe, enabling the swift deployment of production-ready APIs. Additionally, these templates provide a free sandbox for experimental and developmental purposes. \n\nThe `rag-mongo` template is specifically designed to perform retrieval-augmented generation utilizing MongoDB and OpenAI technologies. We will take a closer look at the `rag-mongo` template in the following section of this tutorial.\n\n## Using LangChain RAG templates\n\nTo get started, you only need to install the `langchain-cli`.\n\n```\npip3 install -U \"langchain-cliserve]\"\n```\n\nUse the LangChain CLI to bootstrap a LangServe project quickly. The application will be named `my-blog-article`, and the name of the template must also be specified. I\u2019ll name it `rag-mongo`.\n\n```\nlangchain app new my-blog-article --package rag-mongo\n```\n\nThis will create a new directory called my-app with two folders:\n\n* `app`: This is where LangServe code will live.\n* `packages`: This is where your chains or agents will live.\n\nNow, it is necessary to modify the `my-blog-article/app/server.py` file by adding the [following code:\n\n```\nfrom rag_mongo import chain as rag_mongo_chain\nadd_routes(app, rag_mongo_chain, path=\"/rag-mongo\")\n```\n\nWe will need to insert data to MongoDB Atlas. In our exercise, we utilize a publicly accessible PDF document titled \"MongoDB Atlas Best Practices\" as a data source for constructing a text-searchable vector space. The data will be ingested into the MongoDB `langchain.vectorSearch`namespace.\n\nIn order to do it, navigate to the directory `my-blog-article/packages/rag-mongo` and in the file `ingest.py`, change the default names of the MongoDB database and collection. Additionally, modify the URL of the document you wish to use for generating embeddings.\n\n```\ncd my-blog-article/packages/rag-mongo \n```\n\nMy `ingest.py` is located on GitHub. Note that if you change the database and collection name in `ingest.py`, you also need to change it in `rag_mongo`/`chain.py`. My `chain.py` is also located on GitHub. Next, export your OpenAI API Key and MongoDB Atlas URI.\n\n```\nexport OPENAI_API_KEY=\"xxxxxxxxxxx\"\nexport MONGO_URI\n=\"mongodb+srv://user:passwd@vectorsearch.abc.mongodb.net/?retryWrites=true\"\n```\n\nCreating and inserting embeddings into MongoDB Atlas using LangChain templates is very easy. You just need to run the `ingest.py`script. It will first load a document from a specified URL using the PyPDFLoader. Then, it splits the text into manageable chunks using the `RecursiveCharacterTextSplitter`. Finally, the script uses the OpenAI Embeddings API to generate embeddings for each chunk and inserts them into the MongoDB Atlas `langchain.vectorSearch` namespace.\n\n```\npython3 ingest.py\n```\n\nNow, it's time to initialize Atlas Vector Search. We will do this through the Atlas UI. In the Atlas UI, choose `Search` and then `Create Search`. 
Afterwards, choose the JSON Editor to declare the index parameters as well as the database and collection where the Atlas Vector Search will be established (`langchain.vectorSearch`). Set index name as `default`. The definition of my index is presented below.\n\n```\n{\n \"type\": \"vectorSearch\",\n \"fields\": \n {\n \"path\": \"embedding\",\n \"dimensions\": 1536,\n \"similarity\": \"cosine\",\n \"type\": \"vector\"\n }\n]\n}\n```\n\n A detailed procedure is [available on GitHub. \n\nLet's now take a closer look at the central component of the LangChain `rag-mongo` template: the `chain.py` script. This script utilizes the `MongoDBAtlasVectorSearch` \n\nclass and is used to create an object \u2014 `vectorstore` \u2014 that interfaces with MongoDB Atlas's vector search capabilities for semantic similarity searches. The `retriever` is then configured from `vectorstore` to perform these searches, specifying the search type as \"similarity.\" \n\n```\nvectorstore = MongoDBAtlasVectorSearch.from_connection_string(\n MONGO_URI,\n DB_NAME + \".\" + COLLECTION_NAME,\n OpenAIEmbeddings(disallowed_special=()),\n index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,\n)\nretriever = vectorstore.as_retriever()\n```\n\nThis configuration ensures the most contextually relevant document is retrieved from the database. Upon retrieval, the script merges this document with a user's query and leverages the `ChatOpenAI` class to process the input through OpenAI's GPT models, crafting a coherent answer. To further enhance this process, the ChatOpenAI class is initialized with the `gpt-3.5-turbo-16k-0613` model, chosen for its optimal performance. The temperature is set to 0, promoting consistent, deterministic outputs for a streamlined and precise user experience.\n\n```\nmodel = ChatOpenAI(model_name=\"gpt-3.5-turbo-16k-0613\",temperature=0) \n```\n\nThis class permits tailoring API requests, offering control over retry attempts, token limits, and response temperature. It adeptly manages multiple response generations, response caching, and callback operations. Additionally, it facilitates asynchronous tasks to streamline response generation and incorporates metadata and tagging for comprehensive API run tracking.\n\n## LangServe Playground\n\nAfter successfully creating and storing embeddings in MongoDB Atlas, you can start utilizing the LangServe Playground by executing the `langchain serve` command, which grants you access to your chatbot.\n\n```\nlangchain serve\n\nINFO: Will watch for changes in these directories: \nINFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)\nINFO: Started reloader process 50552] using StatReload\nINFO: Started server process [50557]\nINFO: Waiting for application startup.\n\nLANGSERVE: Playground for chain \"/rag-mongo\" is live at:\nLANGSERVE: \u2502\nLANGSERVE: \u2514\u2500\u2500> /rag-mongo/playground\nLANGSERVE:\nLANGSERVE: See all available routes at /docs\n```\n\nThis will start the FastAPI application, with a server running locally at `http://127.0.0.1:8000`. All templates can be viewed at `http://127.0.0.1:8000/docs`, and the playground can be accessed at `http://127.0.0.1:8000/rag-mongo/playground/`.\n\nThe chatbot will answer questions about best practices for using MongoDB Atlas with the help of context provided through vector search. Questions on other topics will not be considered by the chatbot.\n\nGo to the following URL:\n\n```\nhttp://127.0.0.1:8000/rag-mongo/playground/\n```\n\nAnd start using your template! 
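The playground is the quickest way to try the chain, but every LangServe route is also a plain REST endpoint, so you can call it from application code as well. Below is a hedged sketch using the built-in `fetch` available in Node.js 18+ (any HTTP client would do); it assumes the `langchain serve` process above is still running locally on port 8000 and mirrors the request/response shape shown in the curl examples later in this section.

```javascript
// Sketch: calling the /rag-mongo/invoke route from application code instead of
// the playground. Assumes the LangServe app is running on 127.0.0.1:8000.
async function askRagMongo(question) {
  const response = await fetch("http://127.0.0.1:8000/rag-mongo/invoke", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input: question }),
  });
  if (!response.ok) {
    throw new Error(`LangServe returned HTTP ${response.status}`);
  }
  const payload = await response.json();
  return payload.output; // callback_events and metadata are also returned
}

// Example: askRagMongo("Does MongoDB support transactions?").then(console.log);
```

Back in the playground UI, the same chain powers the interactive chat.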
You can ask questions related to MongoDB Atlas in the chat.\n\n![LangServe Playground][2]\n\nBy expanding the `Intermediate steps` menu, you can trace the entire process of formulating a response to your question. This process encompasses searching for the most pertinent documents related to your query, and forwarding them to the Open AI API to serve as the context for the query. This methodology aligns with the RAG pattern, wherein relevant documents are retrieved to furnish context for generating a well-informed response to a specific inquiry.\n\nWe can also use `curl` to interact with `LangServe` REST API and contact endpoints, such as `/rag-mongo/invoke`:\n\n```\ncurl -X POST \"https://127.0.0.1:8000/rag-mongo/invoke\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\"input\": \"Does MongoDB support transactions?\"}'\n```\n\n```\n{\"output\":\"Yes, MongoDB supports transactions.\",\"callback_events\":[],\"metadata\":{\"run_id\":\"06c70537-8861-4dd2-abcc-04a85a50bcb6\"}}\n```\n\nWe can also send batch requests to the API using the `/rag-mongo/batch` endpoint, for example:\n\n```\ncurl -X POST \"https://127.0.0.1:8000/rag-mongo/batch\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"inputs\": [\n \"What options do MongoDB Atlas Indexes include?\",\n \"Explain Atlas Global Cluster\",\n \"Does MongoDB Atlas provide backups?\"\n ],\n \"config\": {},\n \"kwargs\": {}\n }'\n```\n\n```\n{\"output\":[\"MongoDB Atlas Indexes include the following options:\\n- Compound indexes\\n- Geospatial indexes\\n- Text search indexes\\n- Unique indexes\\n- Array indexes\\n- TTL indexes\\n- Sparse indexes\\n- Partial indexes\\n- Hash indexes\\n- Collated indexes for different languages\",\"Atlas Global Cluster is a feature provided by MongoDB Atlas, a cloud-based database service. It allows users to set up global clusters on various cloud platforms such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform. \\n\\nWith Atlas Global Cluster, users can easily distribute their data across different regions by just a few clicks in the MongoDB Atlas UI. The deployment and management of infrastructure and database resources required for data replication and distribution are taken care of by MongoDB Atlas. \\n\\nFor example, if a user has an accounts collection that they want to distribute among their three regions of business, Atlas Global Cluster ensures that the data is written to and read from different regions, providing high availability and low latency access to the data.\",\"Yes, MongoDB Atlas provides backups.\"],\"callback_events\":[],\"metadata\":{\"run_ids\":[\"1516ba0f-1889-4688-96a6-d7da8ff78d5e\",\"4cca474f-3e84-4a1a-8afa-e24821fb1ec4\",\"15cd3fba-8969-4a97-839d-34a4aa167c8b\"]}}\n```\n\nFor comprehensive documentation and further details, please visit `http://127.0.0.1:8000/docs`.\n\n## Summary\n\nIn this article, we've explored the synergy of MongoDB Atlas Vector Search with LangChain Templates and the RAG pattern to significantly improve chatbot response quality. By implementing these tools, developers can ensure their AI chatbots deliver highly accurate and contextually relevant answers. Step into the future of chatbot technology by applying the insights and instructions provided here. Elevate your AI and engage users like never before. Don't just build chatbots \u2014 craft intelligent conversational experiences. 
[Start now with MongoDB Atlas and LangChain!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbe1d8daf4783a8a1/6578c9297cf4a90420f5d76a/Boosting_AI_-_1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt435374678f2a3d2a/6578cb1af2362505ae2f7926/Boosting_AI_-_2.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI"], "pageDescription": "Discover how to enhance your AI chatbot's accuracy with MongoDB Atlas Vector Search and LangChain Templates using the RAG pattern in our comprehensive guide. Learn to integrate LangChain's retrieval-augmented generation model with MongoDB for precise, data-driven chat responses. Ideal for developers seeking advanced AI chatbot solutions.", "contentType": "Tutorial"}, "title": "Boosting AI: Build Your Chatbot Over Your Data With MongoDB Atlas Vector Search and LangChain Templates Using the RAG Pattern", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/cpp/turn-ble", "action": "created", "body": "# Turn BLE: Implementing BLE Sensors with MCU Devkits\n\nIn the first episode of this series, I shared with you the project that I plan to implement. I went through the initial planning and presented a selection of MCU devkit boards that would be suitable for our purposes.\n\nIn this episode, I will try and implement BLE communication on one of the boards. Since the idea is to implement\nthis project as if it were a proof of concept (PoC), once I am moderately successful with one implementation, I will stop there and move forward to the next step, which is implementing the BLE central role in the Raspberry Pi.\n\nAre you ready? Then buckle up for the Bluetooth bumps ahead!\n\n# Table of Contents\n\n1. Concepts\n 1. Bluetooth classic vs BLE\n 2. BLE data\n 3. BLE roles\n2. Setup\n 1. Development environment\n 2. Testing environment\n3. BLE sensor implementation\n 1. First steps\n 2. Read from a sensor\n 3. BLE peripheral GAP\n 4. Add a sensor service\n 5. Add notifications\n4. Recap\n\n# Concepts\n\n## Bluetooth classic vs BLE\n\nBluetooth is a technology for wireless communications. Although we talk about Bluetooth as if it were a single thing, Bluetooth Classic and Bluetooth Low Energy are mostly different beasts and also incompatible. Bluetooth Classic has a higher transfer rate (up to 3Mb/s) than Bluetooth Low Energy (up to 2Mb/s), but with great transfer rate comes\ngreat power consumption (as Spidey's uncle used to say).\n, a mechanism created by Microsoft and implemented by some boards that emulates a storage device when connected to the USB port. You can then drop a file into that storage device in a special format. The file contains the firmware that you want to install with some metadata and redundancy and, after some basic verifications, it gets flashed to the microcontroller automatically.\n\nIn this case, we are going to flash the latest version of MicroPython to the RP2. We press and hold down the BOOTSEL button while we plug the board to the USB, and we drop the latest firmware UF2 file into the USB mass storage device that appears and that is called RPI-RP2. The firmware will be flashed and the board rebooted.\n a profile so you can have different extensions for different boards if needed. 
In this profile, you can also install the recommended Python extensions to help you with the python code.\n\nLet's start by creating a new directory for our project and open VSCode there:\n```sh\nmkdir BLE-periph-RP2\ncd BLE-periph-RP2\ncode .\n```\n\nThen, let's initialize the project so code completion works. From the main menu, select `View` -> `Command Palette` (or Command + Shift + P) and find `MicroPico: Configure Project`. This command will add a file to the project and various buttons to the bottom left of your editor that will allow you to upload the files to the board, execute them, and reset it, among other things.\n\nYou can find all of the code that is explained in the repository. Feel free to make pull requests where you see they fit or ask questions.\n\n## Testing environment\n\nSince we are only going to develop the BLE peripheral, we will need some existing tool to act as the BLE central. There are several free mobile apps available that will do that. I am going to use \"nRF Connect for Mobile\" (Android or iOS), but there are others that can help too, like LightBlue (macOS/iOS or Android).\n\n# BLE sensor implementation\n\n## First steps\n\n1. MicroPython loads and executes code stored in two files, called `boot.py` and `main.py`, in that order. The first one is used to configure some board features, like network or peripherals, just once and only after (re)starting the board. It must only contain that to avoid booting problems. The `main.py` file gets loaded and executed by MicroPython right after `boot.py`, if it exists, and that contains the application code. Unless explicitly configured, `main.py` runs in a loop, but it can be stopped more easily. In our case, we don't need any prior configuration, so let's start with a `main.py` file.\n2. Let's start by blinking the builtin LED. So the first thing that we are going to need is a module that allows us to work with the different capabilities of the board. That module is named `machine` and we import it, just to have access to the pins:\n ```python\n from machine import Pin\n ```\n3. We then get an instance of the pin that is connected to the LED that we'll use to output voltage, switching it on or off:\n ```python\n led = Pin('LED', Pin.OUT)\n ```\n4. We create an infinite loop and turn on and off the LED with the methods of that name, or better yet, with the `toggle()` method.\n ```python\n while True:\n led.toggle()\n ```\n5. This is going to switch the led on and off so fast that we won't be able to see it, so let's introduce a delay, importing the `time` module:\n ```python\n import time\n \n while True:\n time.sleep_ms(500)\n ```\n6. Run the code using the `Run` button at the left bottom of VSCode and see the LED blinking. Yay!\n\n## Read from a sensor\n\nOur devices are going to be measuring the noise level from a microphone and sending it to the collecting station. However, our Raspberry Pi Pico doesn't have a microphone builtin, so we are going to start by using the temperature sensor that the RP2 has to get some measurements.\n\n1. First, we import the analog-to-digital-converting capabilities:\n ```python\n from machine import ADC\n ```\n2. The onboard sensor is on the fifth (index 4) ADC channel, so we get a variable pointing to it:\n ```python\n adc = ADC(4)\n ```\n3. In the main loop, read the voltage. It is a 16-bit unsigned integer, in the range 0V to 3.3V, that converts into degrees Celsius according to the specs of the sensor. 
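In case the constants in that formula look like magic numbers: the raw reading is first scaled to a voltage, and the RP2040 datasheet specifies that the sensor reads roughly 0.706 V at 27 °C and drops by about 1.721 mV per additional degree. A step-by-step version of the same conversion (equivalent to the one-liner in the next snippet) looks like this:
   ```python
   from machine import ADC

   adc = ADC(4)                        # on-board temperature sensor channel
   raw = adc.read_u16()                # unsigned 16-bit reading, 0..65535
   voltage = raw * 3.3 / 65535         # scale the reading to the 0 V..3.3 V range
   temperature = 27.0 - (voltage - 0.706) / 0.001721  # ~0.706 V at 27 °C, ~-1.721 mV/°C slope
   ```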
Print the value:\n ```python\n temperature = 27.0 - ((adc.read_u16() * 3.3 / 65535) - 0.706) / 0.001721\n print(\"T: {}\u00baC\".format(temperature))\n ```\n4. We run this new version of the code and the measurements should be updated every half a second.\n\n## BLE peripheral GAP\n\nWe are going to start by advertising the device name and its characteristics. That is done with the Generic Access Profile (GAP) for the peripheral role. We could use the low level interface to Bluetooth provided by the `bluetooth` module or the higher level interface provided by `aioble`. The latter is simpler and recommended in the MicroPython manual, but the documentation is a little bit lacking. We are going to start with this one and read its source code when in doubt.\n\n1. We will start by importing the `aioble` and `bluetooth`, i.e. the low level bluetooth (used here only for the UUIDs):\n ```python\n import aioble\n import bluetooth\n ```\n2. All devices must be able to identify themselves via the Device Information Service, identified with the UUID 0x180A. We start by creating this service:\n ```python\n # Constants for the device information service\n _SVC_DEVICE_INFO = bluetooth.UUID(0x180A)\n svc_dev_info = aioble.Service(_SVC_DEVICE_INFO)\n ```\n3. Then, we are going to add some read-only characteristics to that service, with initial values that won't change:\n ```python\n _CHAR_MANUFACTURER_NAME_STR = bluetooth.UUID(0x2A29)\n _CHAR_MODEL_NUMBER_STR = bluetooth.UUID(0x2A24)\n _CHAR_SERIAL_NUMBER_STR = bluetooth.UUID(0x2A25)\n _CHAR_FIRMWARE_REV_STR = bluetooth.UUID(0x2A26)\n _CHAR_HARDWARE_REV_STR = bluetooth.UUID(0x2A27)\n aioble.Characteristic(svc_dev_info, _CHAR_MANUFACTURER_NAME_STR, read=True, initial='Jorge')\n aioble.Characteristic(svc_dev_info, _CHAR_MODEL_NUMBER_STR, read=True, initial='J-0001')\n aioble.Characteristic(svc_dev_info, _CHAR_SERIAL_NUMBER_STR, read=True, initial='J-0001-0000')\n aioble.Characteristic(svc_dev_info, _CHAR_FIRMWARE_REV_STR, read=True, initial='0.0.1')\n aioble.Characteristic(svc_dev_info, _CHAR_HARDWARE_REV_STR, read=True, initial='0.0.1')\n ```\n4. Now that the service is created with the relevant characteristics, we register it:\n ```python\n aioble.register_services(svc_dev_info)\n ```\n5. We can now create an asynchronous task that will take care of handling the connections. By definition, our peripheral can only be connected to one central device. We enable the Generic Access Protocol (GAP), a.k.a General Access service, by starting to advertise the registered services and thus, we accept connections. We could disallow connections (`connect=False`) for connection-less devices, such as beacons. Device name and appearance are mandatory characteristics of GAP, so they are parameters of the `advertise()` method.\n ```python\n from micropython import const\n \n _ADVERTISING_INTERVAL_US = const(200_000)\n _APPEARANCE = const(0x0552) # Multi-sensor\n \n async def task_peripheral():\n \"\"\" Task to handle advertising and connections \"\"\"\n while True:\n async with await aioble.advertise(\n _ADVERTISING_INTERVAL_US,\n name='RP2-SENSOR',\n appearance=_APPEARANCE,\n services=_DEVICE_INFO_SVC]\n ) as connection:\n print(\"Connected from \", connection.device)\n await connection.disconnected() # NOT connection.disconnect()\n print(\"Disconnect\")\n ```\n6. It would be useful to know when this peripheral is connected so we can do what is needed. 
We create a global boolean variable and expose it to be changed in the task for the peripheral:\n ```python\n connected=False\n \n async def task_peripheral():\n \"\"\" Task to handle advertising and connections \"\"\"\n global connected\n while True:\n connected = False\n async with await aioble.advertise(\n _ADVERTISING_INTERVAL_MS,\n appearance=_APPEARANCE,\n name='RP2-SENSOR',\n services=[_SVC_DEVICE_INFO]\n ) as connection:\n print(\"Connected from \", connection.device)\n connected = True\n ```\n7. We can provide visual feedback about the connection status in another task:\n ```python\n async def task_flash_led():\n \"\"\" Blink the on-board LED, faster if disconnected and slower if connected \"\"\"\n BLINK_DELAY_MS_FAST = const(100)\n BLINK_DELAY_MS_SLOW = const(500)\n while True:\n led.toggle()\n if connected:\n await asyncio.sleep_ms(BLINK_DELAY_MS_SLOW)\n else:\n await asyncio.sleep_ms(BLINK_DELAY_MS_FAST)\n ```\n8. Next, we import [`asyncio` to use it with the async/await mechanism:\n ```python\n import uasyncio as asyncio\n ```\n9. And move the sensor read into another task:\n ```python\n async def task_sensor():\n \"\"\" Task to handle sensor measures \"\"\"\n while True:\n temperature = 27.0 - ((adc.read_u16() * 3.3 / 65535) - 0.706) / 0.001721\n print(\"T: {}\u00b0C\".format(temperature))\n time.sleep_ms(_TEMP_MEASUREMENT_INTERVAL_MS)\n ```\n10. We define a constant for the interval between temperature measurements:\n ```python\n _TEMP_MEASUREMENT_INTERVAL_MS = const(15_000)\n ```\n11. And replace the delay with an asynchronous compatible implementation:\n ```python\n await asyncio.sleep_ms(_TEMP_MEASUREMENT_FREQUENCY)\n ```\n12. We delete the import of the `time` module that we won't be needing anymore.\n13. Finally, we create a main function where all the tasks are instantiated:\n ```python\n async def main():\n \"\"\" Create all the tasks \"\"\"\n tasks = \n asyncio.create_task(task_peripheral()),\n asyncio.create_task(task_flash_led()),\n asyncio.create_task(task_sensor()),\n ]\n asyncio.gather(*tasks)\n ```\n14. And launch main when the program starts:\n ```python\n asyncio.run(main())\n ```\n15. Wash, rinse, repeat. I mean, run it and try to connect to the device using one of the applications mentioned above. You should be able to find and read the hard-coded characteristics.\n\n## Add a sensor service\n\n1. We define a new service, like what we did with the *device info* one. In this case, it is an Environmental Sensing Service (ESS) that exposes one or more characteristics for different types of environmental measurements.\n ```python\n # Constants for the Environmental Sensing Service\n _SVC_ENVIRONM_SENSING = bluetooth.UUID(0x181A)\n svc_env_sensing = aioble.Service(_SVC_ENVIRONM_SENSING)\n ```\n2. We also define a characteristic for\u2026 yes, you guessed it, a temperature measurement:\n ```python\n _CHAR_TEMP_MEASUREMENT = bluetooth.UUID(0x2A1C)\n temperature_char = aioble.Characteristic(svc_env_sensing, _CHAR_TEMP_MEASUREMENT, read=True)\n ```\n3. We then add the service to the one that we registered:\n ```python\n aioble.register_services(svc_dev_info, svc_env_sensing)\n ```\n4. And also to the services that get advertised:\n ```python\n services=[_SVC_DEVICE_INFO, _SVC_ENVIRONM_SENSING]\n ```\n5. The format in which the data must be written is specified in the \"[GATT Specification Supplement\" document. My advice is that before you select the characteristic that you are going to use, you check the data that is going to be contained there. 
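To illustrate with the characteristic we are about to use: the Temperature Measurement characteristic (0x2A1C) carries a flags byte (0x00 for Celsius, no timestamp, no temperature type) followed by the temperature as an IEEE-11073 32-bit FLOAT, i.e., a 24-bit little-endian mantissa plus a signed 8-bit exponent. The following is my own self-contained sketch of that layout; treat the flag values as my reading of the GATT Specification Supplement and verify them against the document itself:
   ```python
   import struct

   def encode_temperature_measurement(celsius, precision=2):
       """Sketch: encode a Celsius value as a Temperature Measurement (0x2A1C) payload.

       Layout assumed here: one flags byte (0x00 = Celsius, no timestamp, no type),
       then an IEEE-11073 32-bit FLOAT (24-bit little-endian mantissa, signed 8-bit exponent).
       """
       mantissa = int(celsius * (10 ** precision))
       exponent = -precision
       flags = 0x00
       return bytes([flags]) + mantissa.to_bytes(3, "little", signed=True) + struct.pack("b", exponent)

   print(encode_temperature_measurement(23.5).hex())  # 002e0900fe -> flags 0x00, mantissa 2350, exponent -2
   ```
   Note that MicroPython's `int.to_bytes` may not accept the `signed` keyword argument; for positive temperature values it makes no difference.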
For this characteristic, we need to encode the temperature encoded as a IEEE 11073-20601 memfloat32 :cool: :\n ```python\n def _encode_ieee11073(value, precision=2):\n \"\"\" Binary representation of float value as IEEE-11073:20601 32-bit FLOAT \"\"\"\n return int(value * (10 ** precision)).to_bytes(3, 'little', True) + struct.pack('\n## Add notifications\n\nThe \"GATT Specification Supplement\" document states that notifications should be implemented adding a \"Client Characteristic Configuration\" descriptor, where they get enabled and initiated. Once the notifications are enabled, they should obey the trigger conditions set in the \"ES Trigger Setting\" descriptor. If two or three (max allowed) trigger descriptors are defined for the same characteristic, then the \"ES Configuration\" descriptor must be present too to define if the triggers should be combined with OR or AND. Also, to change the values of these descriptors, client binding --i.e. persistent pairing-- is required.\n\nThis is a lot of work for a proof of concept, so we are going to simplify it by notifying every time the sensor is read. Let me make myself clear, this is **not** the way it should be done. We are cutting corners here, but my understanding at this point in the project is that we can postpone this part of the implementation because it does not affect the viability of our device. We add a to-do to remind us later that we will need to do this, if we decide to go with Bluetooth sensors over MQTT.\n\n1. We change the characteristic declaration to enable notifications:\n ```python\n temperature_char = aioble.Characteristic(svc_env_sensing, _CHAR_TEMP_MEASUREMENT, read=True, notify=True)\n ```\n2. We add a descriptor, although we are going to ignore it for now:\n ```python\n _DESC_ES_TRIGGER_SETTING = bluetooth.UUID(0x290D)\n aioble.Descriptor(temperature_char, _DESC_ES_TRIGGER_SETTING, write=True, initial=struct.pack(\"\n# Recap\n\nIn this article, I have covered some relevant Bluetooth Low Energy concepts and put them in practice by using them in writing the firmware of a Raspberry Pi Pico board. In this firmware, I used the on-board LED, read from the on-board temperature sensor, and implemented a BLE peripheral that offered two services and a characteristic that depended on measured data and could push notifications.\n\nWe haven't connected a microphone to the board or read noise levels using it yet. I have decided to postpone this until we have decided which mechanism will be used to send the data from the sensors to the collecting stations: BLE or MQTT. If, for any reason, I have to switch boards while implementing the next steps, this time investment would be lost. So, it seems reasonable to move this part to later in our development effort.\n\nIn my next article, I will guide you through how we need to interact with Bluetooth from the command line and how Bluetooth can be used for our software using DBus. 
The goal is to understand what we need to do in order to move from theory to practice using C++ later.\n\nIf you have questions or feedback, join me in the MongoDB Developer Community!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt26289a1e0bd71397/6565d3e3ca38f02d5bd3045f/bluetooth.jpg\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte87a248b6f6e9663/6565da9004116d59842a0c77/RP2-bootsel.JPG", "format": "md", "metadata": {"tags": ["C++", "Python"], "pageDescription": "After having sketched the plan in our first article, this is the first one where we start coding. In this hands-on article, you will understand how to write firmware for a Raspberry Pi Pico (RP2) board try that implements offering sensor data through Bluetooth Low Energy communication.", "contentType": "Tutorial"}, "title": "Turn BLE: Implementing BLE Sensors with MCU Devkits", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/multicloud-clusters-with-andrew-davidson", "action": "created", "body": "# MongoDB Atlas Multicloud Clusters\n\nIn this episode of the podcast, Nic and I are joined by Andrew Davidson,\nVP of Cloud Product at MongoDB. Andrew shares some details of the latest\ninnovation in MongoDB Atlas and talks about some of the ways multi-cloud\nclusters can help developers.\n\n:youtube]{vid=GWKa_VJNv7I} \n\nMichael Lynn (00:00): Welcome to the podcast. On this episode, Nick and\nI sit down with Andrew Davidson, VP of cloud product here at MongoDB.\nWe're talking today about the latest innovation built right into MongoDB\nAtlas, our database-as-a-service multi-cloud. So this gives you the\nability to deploy and manage your instances of MongoDB in the cloud\nacross the three major cloud providers: AWS, Azure, and GCP. Andrew\ntells us all about this innovation and how it could be used and some of\nthe benefits. So stay tuned. I hope you enjoyed the episode.\n\nMichael Lynn (00:52): Andrew Davidson, VP of cloud product with MongoDB.\nHow are you, sir?\n\nAndrew Davidson (00:57): Good to see you, Mike. I'm doing very well.\nThank you. It's been a busy couple of weeks and I'm super excited to be\nhere to talk to you about what we've been doing.\n\nMichael Lynn (01:05): Absolutely. We're going to talk about multi-cloud\ntoday and innovation added to MongoDB Atlas. But before we get there,\nAndrew, I wonder if you would just explain or just introduce yourself to\nthe audience. Who are you and what do you do?\n\nAndrew Davidson (01:19): Sure. Yeah. Yeah. So as Mike introed me\nearlier, I'm VP of cloud products here at MongoDB, which basically means\nthat I focus on our cloud business and what we're bringing to market for\nour customers and also thinking about how those services for our\ncustomers evolve over time and the roadmap around them and how we\nexplain them to the world as well and how our users use them and over\ntime, grow on them in deep partnership with us. So I've been around\nMongoDB for quite some time, for eight years. In that time, have really\nsort of seen this huge shift that everyone involved at MongoDB has been\npart of with our DNA shifting from being more of a software company, to\nbeing a true cloud company. It's been a really, a five-year journey over\nthe last five years. To me, this announcement we made last week that\nMike was just alluding to is really the culmination in many ways of that\njourney. So couldn't be more excited.\n\nMichael Lynn (02:12): Yeah, fantastic. 
Eight years. Eight years at a\nsoftware company is a lifetime. You were at Google prior to this. What\ndid you do at Google?\n\nAndrew Davidson (02:23): I was involved in a special team. They're\ncalled Ground Truth. It was remapping the world and it was all about\nbuilding a new map dataset using Google's unique street view and other\ninputs to basically make all of the maps that you utilize every day on\nGoogle maps better and for Google to be able to evolve that dataset\nfaster. So it was a very human project that involved thousands of human\noperators doing an enormous amount of complex work because the bottom\nline was, this is not something that you could do with ML at that point\nanyway. I'm sure they've evolved a little bit since then. It's been a\nlong time.\n\nMichael Lynn (02:59): Fantastic. So in your eight years, what other\nthings have you done at MongoDB?\n\nAndrew Davidson (03:05): So I really started out focusing on our\ntraditional, on-prem management software, something called MongoDB ops\nmanager, which was kind of the core differentiated in our enterprise\nadvanced offering. At that time, the company was more focused on\nessentially, monetizing getting off the ground, through traditional IT\noperations. Even though we were always about developers and developers\nwere always building great new applications on the database, in a way,\nwe had sort of moved our focus from a monetization perspective towards a\nmore ops centered view, and I was a big part of that. But I was able to\nmake that shift and kind of recenter, recenter on the developer when we\nkind of moved into a true cloud platform and that's been a lot of fun\never since.\n\nMichael Lynn (03:52): Yeah. Amazing journey. So from ops manager to\nAtlas. I want to be cognizant that not all of our listeners will be\nfamiliar with Atlas. So maybe give a description of what Atlas is from\nyour perspective.\n\nAndrew Davidson (04:08): Totally. Yeah. So MongoDB Atlas as a global\ncloud database service. It's available on the big three cloud providers,\nAWS, Google Cloud, and Azure. And it's truly elastic and declarative,\nmeaning you can describe a database cluster in any part of the world, in\nany region, 79 regions across the three providers and Atlas does all the\nheavy lifting to get you there, to do the lifecycle management. You can\ndo infrastructure as code, you can manage your database clusters in\nTerraform, or you can use our beautiful user interface to learn and\ndeploy. We realized it's not enough to have an elastic database service.\nThat's the starting point. It's also not enough to have the best modern\ndatabase, one that's so native to developers, one that speaks to that\nrich data model of MongoDB with the secondary indexes and all the rest.\nReally, we needed to go beyond the database.\n\nAndrew Davidson (04:54): So we focused heavily on helping our customers\nwith prescriptive guidance, schema advice, index suggestions, and you'll\nsee us keep evolving there because we recognize that really every week,\ntens of thousands of people are coming onto the platform for the first\ntime. We need to just lower the barrier to entry to build successful\napplications on the database. We've also augmented Atlas with key\nplatform expansions by including search. We have Lucene-based search\nindexes now native to Atlas. So you don't have to ETL that data to a\nsearch engine and basically, build search right into your operational\napplications. We've got online archive for data tiering into object\nstorage economics. 
With MongoDB Realm, we now have synchronization all\nthe way back to the Realm mobile database and data access services all\nnative to the platform. So it's all very exciting, but fundamentally\nwhat has been missing until just last week was true multi-cloud\nclusters, the ability to mix and match those databases across the clouds\nto have replicas that span the cloud providers or to seamlessly move\nfrom one provider to the other with no downtime, no change in connection\nstring. So that's really exciting.\n\nNic Raboy (06:02): Hey, Andrew, I have a question for you. This is a\nquestion that I received quite a bit. So when setting up your Atlas\ncluster, you're of course asked to choose between Amazon, Google, and\nMicrosoft for your hosting. Can you maybe talk about how that's\ndifferent or what that's really for in comparison to the multi-cloud\nthat we're talking about today?\n\nAndrew Davidson (06:25): Yeah, sure. Look, being intellectually honest,\nmost customers of ours, most developers, most members of the community\nhave a preferred cloud platform and all of the cloud platforms are great\nin their own ways. I think they shine in so many ways. There's lots of\nreasons why folks will start on Google, or start on Azure, or start at\nAWS. Usually, there's that preferred provider. So most users will deploy\nan Atlas cluster into their target provider where their other\ninfrastructure lives, where their application tier lives, et cetera.\nThat's where the world is today for the most part. We know though that\nwe're kind of at the bleeding edge of a new change that's happening in\nthis market where over time, people are going to start more and more,\nmixing and take advantage of the best of the different cloud providers.\nSo I think those expectations are starting to shift and over time,\nyou'll see us probably boost the prominence of the multi-cloud option as\nthe market kind of moves there as well.\n\nMichael Lynn (07:21): So this is available today and what other\nrequirements are there if I want to deploy an instance of MongoDB and\nleverage multi-cloud?\n\nAndrew Davidson (07:30): Yeah, that's a great question. Fundamentally,\nin order to use the multi-cloud database cluster, I think it kind of\ndepends on what your use case is, what you're trying to achieve. But\ngenerally speaking, database in isolation on a cloud provider isn't\nenough. You need to use something that's connecting to and using that\ndatabase. So broadly speaking, you're going to want to have an\napplication tier that's able to connect the database and if you're\nacross multiple clouds and you're doing that for various reasons, like\nfor example, high availability resiliency to be able to withstand the\nadage of a full cloud provider, well then you would want your app tier\nto also be multi-cloud.\n\nAndrew Davidson (08:03): That's the kind of thing that traditionally,\nfolks have not thought was easy, but it's getting easier all the time.\nThat's why it kind of... We're opening this up at the data tier, and\nthen others, the Kubernetes platform, et cetera, are really opening up\nthat portability at the app tier and really making this possible for the\nmarket. But before we sort of keep focusing on kind of where we are\ntoday, I think it wouldn't hurt to sort of rewind a little bit and talk\nabout why multi-cloud is so difficult.\n\nMichael Lynn (08:32): That makes sense.\n\nAndrew Davidson (08:35): There's broadly been two main reasons why\nmulti-cloud is so hard. 
They kind of boil down to data and how much data\ngravity there is. Of course, that's what our announcement is about\nchanging. In other words, your data has to be stored in one cloud or\nanother, or traditionally had to be. So actually moving that data to\nanother cloud or making it present or available in the other cloud, that\nwas enormously difficult and traditionally, made it so that people just\nfelt multi-cloud was essentially not achievable. The second key reason\nmulti-cloud has traditionally been very difficult is that there hasn't\nbeen essentially, a community created or company backed sort of way of\nstandardizing operations around a multi-cloud posture.\n\nAndrew Davidson (09:21): In other words, you had to go so deep in your\nAWS environment, or your Google environment, your Azure environment, to\nmanage all that infrastructure to be completely comfortable with the\ngovernance and life cycle management, that the idea of going and\nlearning to go do that again in another cloud platform was just\noverwhelming. Who wants to do that? What's starting to change that\nthough, is that there's sort of best in class software vendors, as well\nas SaaS offerings that are starting to basically, essentially build\nconsistency around the clouds and really are best in breed for doing so.\nSo when you look at what maybe Datadog is doing for monitoring or what\nHashi Corp is doing with Terraform and vault, infrastructure is code and\nsecrets management, all the other exciting announcements they're always\nmaking, these dynamics are all kind of contributing to making it\npossible for customers to actually start truly doing this. Then we're\ncoming in now with true multi-cloud data tier. So it's highly\ncomplimentary with those other offerings. I think over the next couple\nof years, this is going to start becoming very popular.\n\nMichael Lynn (10:26): Sort of the next phase in the evolution of cloud\ncomputing?\n\nAndrew Davidson (10:29): Totally, totally.\n\nMichael Lynn (10:30): I thought it might be good if we could take a look\nat it. I know that some of the folks listening to this will be just\nthat, just listening to it. So we'll try and talk our way through it as\nwell. But let's give folks a peek at what this thing looks like. So I'm\ngoing to share my screen here.\n\nAndrew Davidson (10:48): Cool. Yeah. While you're pulling that up-\n\\[crosstalk 00:10:50\\] Go ahead, Nic. Sorry.\n\nNic Raboy (10:51): I was going to ask, and then maybe this is something\nthat Mike is going to show when he brings up his screen-\n\nAndrew Davidson (10:55): Yeah.\n\nNic Raboy (10:56): ... but from a user perspective, how much involvement\ndoes the multi-cloud wire? Is it something that just happens behind the\nscenes and I don't have to worry a thing about it, or is there going to\nbe some configurations that we're going to see?\n\nAndrew Davidson (11:11): Yeah. It's pretty straightforward. It's a very\nintuitive user interface for setting it up and then boom, your cluster's\nmulti-cloud, which Mike will show, but going back to the question\nbefore, in order to take... Depending on what use case you've got for\nmulti-cloud, and I would say there's about maybe four kinds of use cases\nand happy to go through them, depending on the use case, I think there's\na different set of things you're going to need to worry about for how to\nuse this from the perspective of your applications.\n\nMichael Lynn (11:36): Okay. So for the folks listening in, I've opened\nmy web browser and I'm visiting cloud.MongoDB.com. 
I provided my\ncredentials and I'm logged into my Atlas console. So I'm on the first\ntab, which is Atlas, and I'm looking at the list of clusters that I've\npreviously deployed. I've got a free tier cluster and some additional\nproject-based clusters. Let's say I want to deploy a new instance of\nMongoDB, and I want to make use of multi-cloud. The first thing I'm\ngoing to do is click the \"Create New Cluster\" button, and that's going\nto bring up the deployment wizard. Here's where you make all the\ndecisions about what you want that cluster to look like. Andrew, feel\nfree to add color as I go through this.\n\nAndrew Davidson (12:15): Totally.\n\nMichael Lynn (12:16): So the first question is a global cluster\nconfiguration. Just for this demo, I'm going to leave that closed. We'll\nleave that for another day. The second panel is cloud provider and\nregion, and here's where it gets interesting. Now, Andrew, at the\nbeginning when you described what Atlas is, you mentioned that Atlas is\navailable on the top three cloud providers. So we've got AWS, Google\nCloud, and Azure, but really, doesn't it exist above the provider?\n\nAndrew Davidson (12:46): In many ways, it does. You're right. Look,\nthinking about kind of the history of how we got here, Atlas was\nlaunched maybe near... about four and a half years ago in AWS and then\nmaybe three and a half years ago on Google Cloud and Azure. Ever since\nthat moment, we've just been deepening what Atlas is on all three\nproviders. So we've gotten to the point where we can really sort of\nthink about the database experience in a way that really abstracts away\nthe complexity of those providers and all of those years of investment\nin each of them respectively, is what has enabled us to sort of unify\nthem together today in a way that frankly, would just be a real\nchallenge for someone to try and do on their own.\n\nAndrew Davidson (13:28): The last thing you want to be trying to set up\nis a distributed database service across multiple clouds. We've got some\ncustomers who've tried to do it and it's giant undertaking. We've got\nlarge engineering teams working on this problem full time and boom, here\nit is. So now, you can take advantage of it. We do it once, everyone\nelse can use it a thousand times. That's the beauty of it.\n\nMichael Lynn (13:47): Beautiful. Fantastic. I was reading the update on\nthe release schedule changes for MongoDB, the core server product, and I\nwas just absolutely blown away with the amount of hours that goes into a\nmajor release, just incredible amount of hours and then on top of that,\nthe ability that you get with Atlas to deploy that in multiple cloud's\npretty incredible.\n\nNic Raboy (14:09): Let me interject here for a second. We've got a\nquestion coming in from the chat. So off the band is asking, \"Will Atlas\nsupport DigitalOcean or OVH or Ali Cloud?\"\n\nAndrew Davidson (14:19): Great questions. We don't have current plans to\ndo so, but I'll tell you. Everything about our roadmap is about customer\ndemand and what we're hearing from you. So hearing that from you right\nnow helps us think about it.\n\nMichael Lynn (14:31): Great. Love the questions. Keep them coming. So\nback to the screen. We've got our create new cluster wizard up and I'm\nin the second panel choosing the cloud provider and region. What I\nnotice, something new I haven't seen before, is there's a call-out box\nthat is labeled, \"multi-cloud multi-region workload isolation.\" So this\nis the key to multi-cloud. 
Am I right?\n\nAndrew Davidson (14:54): That's right.\n\nMichael Lynn (14:54): So if I toggle that radio button over to on, I see\nsome additional options available to me and here is where I'm going to\nspecify the electable nodes in a cluster. So we have three possible\nconfigurations. We've got the electable nodes for high availability. We\nhave the ability or the option to add read-only nodes, and we can\nspecify the provider and region. We've got an option to add analytics\nnodes. Let's just focus on the electable nodes for the moment. By\ndefault, AWS is selected. I think that's because I selected AWS as the\nprovider, but if I click \"Add a Provider/Region,\" I now have the ability\nto change the provider to let's say, GCP, and then I can select a\nregion. Of course, the regions are displaying Google's data center list.\nSo I can choose something that's near the application. I'm in\nPhiladelphia, so North Virginia is probably the closest. So now, we have\na multi-cloud, multi-provider deployment. Any other notes or things you\nwant to call out, Andrew?\n\nAndrew Davidson (16:01): Yeah- \\[crosstalk 00:16:02\\]\n\nNic Raboy (16:01): Actually, Mike, real quick.\n\nMichael Lynn (16:03): Yeah.\n\nNic Raboy (16:04): I missed it. When you added GCP, did you select two\nor did it pre-populate with that? I'm wondering what's the thought\nprocess behind how it calculated each of those node numbers.\n\nAndrew Davidson (16:15): It's keeping them on automatically. For\nelectrical motors, you have to have an odd number. That's based on-\n\\[crosstalk 00:16:20\\]\n\nNic Raboy (16:20): Got it.\n\nAndrew Davidson (16:20): ... we're going to be using a raft-like\nconsensus protocol, which allows us to maintain read and write\navailability continuously as long as majority quorum is online. So if\nyou add a third one, if you add Azure, for example, for fun, why not?\nWhat that means is we're now spread across three cloud providers and\nyou're going to have to make an odd number... You're going to have to\neither make it 111 or 221, et cetera. What this means is you can now\nwithstand a global outage of any of the three cloud providers and still\nhave your application be continuously available for both reads and\nwrites because the other two cloud providers will continue to be online\nand that's where you'll receive your majority quorum from.\n\nAndrew Davidson (17:03): So I think what we've just demonstrated here is\nkind of one of the four sort of dominant use cases for multi-cloud,\nwhich is high availability resilience. It's kind of a pretty intuitive\none. In practice, a lot of people would want to use this in the context\nof countries that have fewer cloud regions. In the US, we're a bit\nspoiled. There's a bunch of AWS regions, bunch of Azure regions, a bunch\nof Google Cloud regions. But if you're a UK based, France based, Canada\nbased, et cetera, your preferred cloud provider might have just one\nregion that country. So being able to expand into other regions from\nanother cloud provider, but keep data in your country for data\nsovereignty requirements can be quite compelling.\n\nMichael Lynn (17:46): So I would never want to deploy a single node in\neach of the cloud providers, right? We still want a highly available\ncluster deployed in each of the individual cloud providers. Correct?\n\nAndrew Davidson (17:57): You can do 111. The downside with 111 is that\nduring maintenance rounds, you would essentially have rights that would\nmove to the second region on your priority list. 
That's broadly\nreasonable actually, if you're using majority rights from a right\nconcern perspective. It kind of depends on what you want to optimize\nfor. One other thing I want to quickly show, Mike, is that there's\nlittle dotted lines on the left side or triple bars on the left side.\nYou can actually drag and drop your preferred regional order with that.\nThat basically is choosing which region by default will take rights if\nthat region's online.\n\nMichael Lynn (18:35): So is zone deployment with the primary, in this\ncase, I've moved Azure to the top, that'll take the highest priority and\nthat will be my primary right receiver.\n\nAndrew Davidson (18:47): Exactly. That would be where the primaries are.\nIf Azure were to be down or Azure Virginia were to be down, then what\nwould have initially been a secondary in USC's one on AWS would be\nelected primary and that's where rights would start going.\n\nMichael Lynn (19:03): Got you. Yeah.\n\nAndrew Davidson (19:04): Yeah.\n\nMichael Lynn (19:05): So you mentioned majority rights. Can you explain\nwhat that is for anyone who might be new to that concept?\n\nAndrew Davidson (19:12): Yeah, so MongoDB has a concept of a right\nconcern and basically our best practice is to configure your rights,\nwhich is a MongoDB client side driver configuration to utilize the right\nconcern majority, which essentially says the driver will not acknowledge\nthe right from the perspective of the database and move on to the next\noperation until the majority of the nodes in the replica set have\nacknowledged that right. What that kind of guarantees you is that you're\nnot allowing your rights to sort of essentially, get past what your\nreplica set can keep up with. So in a world in which you have really\nbursty momentary rights, you might consider a right concern of one, just\nmake sure it goes to the primary, but that can have some risks at scale.\nSo we recommend majority.\n\nMichael Lynn (20:01): So in the list of use cases, you mentioned the\nfirst and probably the most popular, which was to provide additional\naccess and availability in a region where there's only one provider data\ncenter. Let's talk about some of the other reasons why would someone\nwant to deploy multi-cloud,\n\nAndrew Davidson (20:19): Great question. The second, which actually\nthink may even be more popular, although you might tell me, \"It's not\nexactly as multi-cloudy as what we just talked about,\" but what I think\nis going to be the most popular is being able to move from one cloud\nprovider to the other with no downtime. In other words, you're only\nmulti-cloud during the transition, then you're on the other cloud. So\nit's kind of debatable, but having that freedom, that flexibility, and\nbasically the way this one would be configured, Mike, is if you were to\nclick \"Cancel\" here and just go back to the single cloud provider view,\nin a world in which you have a cluster deployed on AWS just like you\nhave now, if this was a deployed cluster, you could just go to the top,\nselect Azure or GCP, click \"Deploy,\" and we would just move you there.\nThat's also possible now.\n\nAndrew Davidson (21:07): The reason I think this will be the most\ncommonly used is there's lots of reasons why folks need to be able to\nmove from one cloud provider to the other. Sometimes you have sort of an\norganization that's been acquired into another organization and there's\na consolidation effort underway. 
Sometimes there's just a feeling that\nanother cloud provider has key capabilities that you want to start\ntaking advantage of more, so you want to make the change. Other times,\nit's about really feeling more future-proof and just being able to not\nbe locked in and make that change. So this one, I think, is more of a\nsort of boardroom level concern, as well as a developer empowerment\nthing. It's really exciting to have at your fingertips, the power to\nfeel like I can just move my data around to anywhere in the world across\n79 regions and nothing's holding me back from doing that. When you sit\nat your workstation, that's really exciting.\n\nMichael Lynn (22:00): Back to that comment you made earlier, really\nreducing that data gravity-\n\nAndrew Davidson (22:05): Totally.\n\nMichael Lynn (22:05): ... and increasing fungibility. Yeah, go ahead,\nNic.\n\nNic Raboy (22:09): Yeah. So you mentioned being able to move things\naround. So let me ask the same scenario, same thing, but when Mike was\nable to change the priority of each of those clouds, can we change the\npriority after deployment? Say Amazon is our priority right now for the\nnext year, but then after that, Google is our now top priority. Can we\nchange that after the fact?\n\nAndrew Davidson (22:34): Absolutely. Very great point. In general with\nAtlas, traditionally, the philosophy was always that basically\neverything in this cluster builder that Mike's been showing should be\nthe kind of thing that you could configure when you first deploying\ndeclaratively, and that you could then change and Atlas will just do the\nheavy lifting to get you to that new declarative state. However, up\nuntil last week, the only major exception to that was you couldn't\nchange your cloud provider. You could already change the region inside\nthe cloud provider, change your multi-region configs, et cetera. But\nnow, you can truly change between cloud providers, change the order of\npriority for a multi-region environment that involves multiple cloud\nproviders. All of those things can easily be changed.\n\nAndrew Davidson (23:15): When you make those changes, these are all no\ndowntime operations. We make that possible by doing everything in a\nrolling manner on the backend and taking advantage of MongoDB's, in what\nwe were talking about earlier, the distributed system, the consensus\nthat allows us to ensure that we always have majority quorum online, and\nit would just do all that heavy lifting to get you from any state to any\nother state in a wall preserving that majority. It's really kind of a\nbeautiful thing.\n\nMichael Lynn (23:39): It is. And so powerful. So what we're showing here\nis the deployer, like you said, but all this same screen comes up when I\ntake a look at a previously deployed instance of MongoDB and I can make\nchanges right in that same way.\n\nAndrew Davidson (23:55): Exactly.\n\nMichael Lynn (23:55): Very powerful.\n\nAndrew Davidson (23:56): Exactly.\n\nMichael Lynn (23:56): Yeah.\n\nAndrew Davidson (23:57): So there's a few other use cases I think we\nshould just quickly talk about because we've gone through two sort of\nfuture-proof mobility moving from one to the other. We talked about high\navailability resilience and how that's particularly useful in countries\nwhere you might want to keep data in country and you might not have as\nmany cloud provider regions in that country. 
But the third use case\nthat's pretty exciting is, and I think empowering more for developers,\nis sometimes you want to take advantage of the best capabilities of the\ndifferent cloud providers. You might love AWS because you just love\nserverless and you love Lambda, and who doesn't? So you want to be there\nfor that aspect of your application.\n\nAndrew Davidson (24:34): Maybe you also want to be able to take\nadvantage of some of the capabilities that Google offers around machine\nlearning and AI, and maybe you want to be able to have the ML jobs on\nthe Google side be able to access your data with low latency in that\ncloud provider region. Well, now you can have a read replica in that\nGoogle cloud region and do that right there. Maybe you want to take\nadvantage of Azure dev ops, just love the developer centricity that\nwe're seeing from Microsoft and Azure these days, and again, being able\nto kind of mix and match and take advantage of the cloud provider you\nwant unlocks possibilities and functional capabilities that developers\njust haven't really had at their fingertips before. So that's pretty\nexciting too.\n\nMichael Lynn (25:18): Great. So any other use cases that we want to\nmention?\n\nAndrew Davidson (25:23): Yeah. The final one is kind of a little bit of\na special category. It's more about saying that sometimes... So many of\nour own customers and people listening are themselves, building software\nservices and cloud services on top of MongoDB Atlas. For people doing\nthat, you'll likely be aware that sometimes your end customers will\nstipulate which underlying cloud provider you need to use for them. It's\na little frustrating when they do that. It's kind of like, \"Oh my, I\nhave to go use a different cloud provider to service you.\" You can duke\nit out with them and maybe make it happen without doing that. But now,\nyou have the ability to just easily service your end customers without\nthat getting in the way. If they have a rule that a certain cloud\nprovider has to be used, you can just service them too. So we power so\nmany layers of the infrastructure stack, so many SaaS services and\nplatforms, so many of them, this is very compelling.\n\nMichael Lynn (26:29): So if I've got my data in AWS, they have a VPC, I\ncan establish a VPC between the application and the database?\n\nAndrew Davidson (26:36): Correct.\n\nMichael Lynn (26:37): And the same with Google and Azure.\n\nAndrew Davidson (26:39): Yeah. There's an important note. MongoDB Atlas\noffers VPC peering, as well as private link on AWS and Azure. We offer\nVPC peering on Google as well. In the context of our multi-cloud\nclusters that we've just announced, we don't yet have support for\nprivate link and VPC peering. You're going to use public IP access list\nmanagement. That will be coming, along with global cluster support,\nthose will be coming in early 2021 as our current forward-looking\nstatement. Obviously, everything forward looking... There's uncertainty\nthat you want me to disclaimer in there, but what we've launched today\nis really first and foremost, for accessless management. However, when\nyou move one cluster from one cloud to the other, you can absolutely\ntake advantage of peering today or privately.\n\nNic Raboy (27:30): Because Mike has it up on his screen, am I able to\nremove nodes from a cloud region on demand, at will?\n\nAndrew Davidson (27:37): Absolutely. 
You can just add more replicas.\nJust as we were saying, you can move from one to the other or sort of\nchange your preferred order of where the rights go, you can add more\nreplicas in any cloud at any time or remove them at any time \\[crosstalk\n00:27:53\\] ... of Atlas vertical auto scaling too.\n\nNic Raboy (27:55): That was what I was going to ask. So how does that\nwork? How would you tell it, if it's going to auto-scale, could you tell\nit to auto-scale? How does it balance between three different clouds?\n\nAndrew Davidson (28:07): That's a great question. The way Atlas\nauto-scaling works is you really... So if you choose an M30, you can see\nthe auto-scaling in there.\n\nNic Raboy (28:20): For people who are listening, this is all in the\ncreate a new cluster screen.\n\nAndrew Davidson (28:25): Basically, the way it works is we will\nvertically scale you. If any of the nodes in the cluster are\nessentially, getting to the point where they require scaling based on\nunderlying compute requirements, the important thing to note is that\nit's a common misconception, I guess you could say, on MongoDB that you\nmight want to sort of scale only certain replicas and not others. In\ngeneral, you would want to scale them all symmetrically. The reason for\nthat is that the workload needs to be consistent across all the nodes\nand the replica sets. That's because even though the rights go to the\nprimary, the secondaries have to keep up with those rights too. Anyway.\n\nMichael Lynn (29:12): I just wanted to show that auto-scale question\nhere.\n\nAndrew Davidson (29:16): Oh, yes.\n\nMichael Lynn (29:17): Yeah, there we go. So if I'm deploying an M30, I\nget to specify at a minimum, I want to go down to an M20 and at a\nmaximum, based on the read-write profile and the activity application, I\nwant to go to a maximum of an M50, for example.\n\nAndrew Davidson (29:33): Exactly.\n\nNic Raboy (29:35): But maybe I'm missing something or maybe it's not\neven important based on how things are designed. Mike is showing how to\nscale up and down from M20 to M50, but what if I wanted all of the new\nnodes to only appear on my third priority tier? Is that a thing?\n\nAndrew Davidson (29:55): Yeah, that's a form of auto-scaling that's\ndefinitely... In other words, you're basically saying... Essentially,\nwhat you're getting at is what if I wanted to scale my read throughput\nby adding more read replicas?\n\nNic Raboy (30:04): Sure.\n\nAndrew Davidson (30:05): It's generally speaking, not the way we\nrecommend scaling. We tend to recommend vertical scaling as opposed to\nadding read replicas. \\[crosstalk 00:30:14\\]\n\nNic Raboy (30:14): Got it.\n\nAndrew Davidson (30:14): The reason for that with MongoDB is that if you\nscale reads with replicas, the risk is that you could find yourself in a\ncompounding failure situation where you're overwhelming all your\nreplicas somehow, and then one goes down and then all of a sudden, you\nhave the same workload going to an even smaller pool. So we tend to\nvertically scale and/or introduce sharding once you're talking about\nthat kind of level of scale. However, there's scenarios, in which to\nyour point, you kind of want to have read replicas in other regions,\nlet's say for essentially,. servicing traffic from that region at low\nlatency and those kinds of use cases. That's where I think you're right.\nOver time, we'll probably see more exotic forms of auto-scaling we'll\nwant to introduce. It's not there today.\n\nMichael Lynn (31:00): Okay. 
So going back and we'll just finish out our\ncreate a new cluster. Create a new cluster, I'll select multi-cloud and\nI'll select electable nodes into three providers.\n\nAndrew Davidson (31:15): So analytics on Azure- \\[crosstalk 00:31:18\\]\nThat's fine. That's totally fine.\n\nMichael Lynn (31:20): Okay.\n\nAndrew Davidson (31:21): Not a problem.\n\nMichael Lynn (31:22): Okay. So a single cluster across AWS, GCP, and\nAzure, and we've got odd nodes. Okay. Looking good there. We'll select\nour cluster tier. Let's say an M30 is fine and we'll specify the amount\nof disk. Okay. So anything else that we want to bring into the\ndiscussion? Any other features that we're missing?\n\nAndrew Davidson (31:47): Not that I can think of. I'll say we've\ndefinitely had some interesting early adoption so far. I'm not going to\nname names, but we've seen folks, both take advantage of moving between\nthe cloud providers, we've seen some folks who have spread their\nclusters across multiple cloud providers in a target country like I\nmentioned, being able to keep my data in Canada, but across multiple\ncloud providers. We've seen use cases in e-commerce. We've seen use\ncases in healthcare. We've seen use cases in basically monitoring. We've\nseen emergency services use cases. So it's just great early validation\nto have this out in the market and to have so much enthusiasm for the\ncustomers. So if anyone is keen to try this out, it's available to try\non MongoDB Atlas today.\n\nNic Raboy (32:33): So this was a pretty good episode. Actually, we have\na question coming. Let's address this one first. Just curious that M\nstands for multi-tiered? Where did this naming convention derive from?\n\nAndrew Davidson (32:48): That's a great question. The cluster tiers in\nAtlas from the very beginning, we use this nomenclature of the M10, the\nM20, the M30. The not-so-creative answer is that it stands for MongoDB,\n\\[crosstalk 00:33:00\\] but it's a good point that now we can start\nclaiming that it has to do with multi-cloud, potentially. I like that.\n\nMichael Lynn (33:08): Can you talk anything about the roadmap? Is there\nanything that you can share about what's coming down the pike?\n\nAndrew Davidson (33:13): Look, we're just going to keep going bigger,\nfaster, more customers, more scale. It's just so exciting. We're now\npowering on Atlas some of the biggest games in the world, some of the\nmost popular consumer financial applications, applications that make\nconsumers' lives work, applications that enable manufacturers to\ncontinue building all the things that we rely on, applications that\npower for a truly global audience. We're seeing incredible adoption and\ngrowth and developing economies. It's just such an exciting time and\nbeing on the front edge of seeing developers really just transforming\nthe economy, the digital transformation that's happening.\n\nAndrew Davidson (33:57): We're just going to continue it, focus on where\nour customers want us to go to unlock more value for them, keep going\nbroader on the data platform. I think I mentioned that search is a big\nfocus for us, augmenting the traditional operational transactional\ndatabase, realm, the mobile database community, and essentially making\nit possible to build those great mobile applications and have them\nsynchronize back up to the cloud mothership. I'm super excited about\nthat and the global run-up to the rollout of 5g. 
I think the possibility\nin mobile are just going to be incredible to watch in the coming year.\nYeah, there's just a lot. There's going to be a lot happening and we're\nall going to be part of it together.\n\nMichael Lynn (34:34): Sounds awesome.\n\nNic Raboy (34:34): If people wanted to get in contact with you after\nthis episode airs, you on Twitter, LinkedIn? Where would you prefer\npeople to reach out?\n\nAndrew Davidson (34:43): I would just recommend people email directly:\n. Love to hear product feedback, how we\ncan improve. That's what we're here for is to hear it from you directly,\nconnect you with the right people, et cetera.\n\nMichael Lynn (34:56): Fantastic. Well, Andrew, thanks so much for taking\ntime out of your busy day. This has been a great conversation. Really\nenjoyed learning more about multi-cloud and I look forward to having you\non the podcast again.\n\nAndrew Davidson (35:08): Thanks so much. Have a great rest of your day,\neverybody.\n\n## Summary\n\nWith multi-cloud clusters on MongoDB Atlas, customers can realize the\nbenefits of a multi-cloud strategy with true data portability and a\nsimplified management experience. Developers no longer have to deal with\nmanual data replication, and businesses can focus their technical\nresources on building differentiated software.\n\n## Related Links\n\nCheck out the following resources for more information:\n\n[Introducing Multi-Cloud\nClusters\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn about multi-cloud clusters with Andrew Davidson", "contentType": "Podcast"}, "title": "MongoDB Atlas Multicloud Clusters", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/amazon-sagemaker-and-mongodb-vector-search-part-3", "action": "created", "body": "# Part #3: Semantically Search Your Data With MongoDB Atlas Vector Search\n\n This final part of the series will show you how to use the Amazon SageMaker endpoint created in the previous part and perform a semantic search on your data using MongoDB Atlas Vector Search. The two parts shown in this tutorial will be:\n\n- Creating and updating embeddings/vectors for your data.\n- Creating vectors for a search query and sending them via Atlas Vector Search.\n\n## Creating a MongoDB cluster and loading the sample data\n\nIf you haven\u2019t done so, create a new cluster in your MongoDB Atlas account. 
Make sure to check `Add sample dataset` to get the sample data we will be working with right away into your cluster.\n\n before continuing.\n\n## Preparing embeddings\n\nAre you ready for the final part?\n\nLet\u2019s have a look at the code (here, in Python)!\n\nYou can find the full repository on GitHub.\n\nIn the following section, we will look at the three relevant files that show you how you can implement a server app that uses the Amazon SageMaker endpoint.\n\n## Accessing the endpoint: sagemaker.py\n\nThe `sagemaker.py` module is the wrapper around the Lambda/Gateway endpoint that we created in the previous example.\n\nMake sure to create a `.env` file with the URL saved in `EMBDDING_SERVICE`.\n\nIt should look like this:\n```\nMONGODB_CONNECTION_STRING=\"mongodb+srv://:@.mongodb.net/?retryWrites=true&w=majority\"\nEMBEDDING_SERVICE=\"https://.amazonaws.com/TEST/sageMakerResource\"\n```\n\nThe following function will then attach the query that we want to search for to the URL and execute it.\n\n```\nimport os\nfrom typing import Optional\nfrom urllib.parse import quote\n\nimport requests\nfrom dotenv import load_dotenv\n\nload_dotenv()\n\nEMBEDDING_SERVICE = os.environ.get(\"EMBEDDING_SERVICE\")\n```\n\nAs a result, we expect to find the vector in a JSON field called `embedding`.\n\n```\ndef create_embedding(plot: str) -> Optionalfloat]:\n encoded_plot = quote(plot)\n embedding_url = f\"{EMBEDDING_SERVICE}?query={encoded_plot}\"\n\n embedding_response = requests.get(embedding_url)\n embedding_vector = embedding_response.json()[\"embedding\"]\n\n return embedding_vector\n```\n\n## Access and searching the data: atlas.py\n\nThe module `atlas.py` is the wrapper around everything MongoDB Atlas.\n\nSimilar to `sagemaker.py`, we first grab the `MONGODB_CONNECTION_STRING` that you can retrieve in [Atlas by clicking on `Connect` in your cluster. It\u2019s the authenticated URL to your cluster. 
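If you want to confirm that the connection string works before wiring it into the application, a quick ping from a Python shell is enough. Here is a minimal check, assuming `pymongo` is installed and the placeholder below is replaced with your real connection string:

```python
from pymongo import MongoClient

# Placeholder connection string -- substitute your own user, password, and cluster host.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net/?retryWrites=true&w=majority")
print(client.admin.command("ping"))  # an "ok": 1 in the output means Atlas is reachable with these credentials
```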
We need to save MONGODB_CONNECTION_STRING to our .env file too.\n\nWe then go ahead and define a bunch of variables that we\u2019ve set in earlier parts, like `VectorSearchIndex` and `embedding`, along with the automatically created `sample_mflix` demo data.\n\nUsing the Atlas driver for Python (called PyMongo), we then create a `MongoClient` which holds the connection to the Atlas cluster.\n\n```\nimport os\n\nfrom dotenv import load_dotenv\nfrom pymongo import MongoClient, UpdateOne\n\nfrom sagemaker import create_embedding\n\nload_dotenv()\n\nMONGODB_CONNECTION_STRING = os.environ.get(\"MONGODB_CONNECTION_STRING\")\nDATABASE_NAME = \"sample_mflix\"\nCOLLECTION_NAME = \"embedded_movies\"\nVECTOR_SEARCH_INDEX_NAME = \"VectorSearchIndex\"\nEMBEDDING_PATH = \"embedding\"\nmongo_client = MongoClient(MONGODB_CONNECTION_STRING)\ndatabase = mongo_clientDATABASE_NAME]\nmovies_collection = database[COLLECTION_NAME]\n```\n\nThe first step will be to actually prepare the already existing data with embeddings.\n\nThis is the sole purpose of the `add_missing_embeddings` function.\n\nWe\u2019ll create a filter for the documents with missing embeddings and retrieve those from the database, only showing their plot, which is the only field we\u2019re interested in for now.\n\nAssuming we will only find a couple every time, we can then go through them and call the `create_embedding` endpoint for each, creating an embedding for the plot of the movie.\n\nWe\u2019ll then add those new embeddings to the `movies_to_update` array so that we eventually only need one `bulk_write` to the database, which makes the call more efficient.\n\nNote that for huge datasets with many embeddings to create, you might want to adjust the lambda function to take an array of queries instead of just a single query. For this simple example, it will do.\n\n```\ndef add_missing_embeddings():\n movies_with_a_plot_without_embedding_filter = {\n \"$and\": [\n {\"plot\": {\"$exists\": True, \"$ne\": \"\"}},\n {\"embedding\": {\"$exists\": False}},\n ]\n }\n only_show_plot_projection = {\"plot\": 1}\n\n movies = movies_collection.find(\n movies_with_a_plot_without_embedding_filter,\n only_show_plot_projection,\n )\n\n movies_to_update = []\n\n for movie in movies:\n embedding = create_embedding(movie[\"plot\"])\n update_operation = UpdateOne(\n {\"_id\": movie[\"_id\"]},\n {\"$set\": {\"embedding\": embedding}},\n )\n movies_to_update.append(update_operation)\n\n if movies_to_update:\n result = movies_collection.bulk_write(movies_to_update)\n print(f\"Updated {result.modified_count} movies\")\n\n else:\n print(\"No movies to update\")\n```\n\nNow that the data is prepared, we add two more functions that we need to offer a nice REST service for our client application.\n\nFirst, we want to be able to update the plot, which inherently means we need to update the embeddings again.\n\nThe `update_plot` is similar to the initial `add_missing_embeddings` function but a bit simpler since we only need to update one document.\n\n```\ndef update_plot(title: str, plot: str) -> dict:\n embedding = create_embedding(plot)\n\n result = movies_collection.find_one_and_update(\n {\"title\": title},\n {\"$set\": {\"plot\": plot, \"embedding\": embedding}},\n return_document=True,\n )\n\n return result\n```\n\nThe other function we need to offer is the actual vector search. 
This can be done using the [MongoDB Atlas aggregation pipeline that can be accessed via the Atlas driver.\n\nThe `$vectorSearch` stage needs to include the index name we want to use, the path to the embedding, and the information about how many results we want to get. This time, we only want to retrieve the title, so we add a `$project` stage to the pipeline. Make sure to use `list` to turn the cursor that the search returns into a python list.\n\n```\ndef execute_vector_search(vector: float]) -> list[dict]:\n vector_search_query = {\n \"$vectorSearch\": {\n \"index\": VECTOR_SEARCH_INDEX_NAME,\n \"path\": EMBEDDING_PATH,\n \"queryVector\": vector,\n \"numCandidates\": 10,\n \"limit\": 5,\n }\n }\n projection = {\"$project\": {\"_id\": 0, \"title\": 1}}\n results = movies_collection.aggregate([vector_search_query, projection])\n results_list = list(results)\n\n return results_list\n```\n\n## Putting it all together: main.py\n\nNow, we can put it all together. Let\u2019s use Flask to expose a REST service for our client application.\n\n```\nfrom flask import Flask, request, jsonify\n\nfrom atlas import execute_vector_search, update_plot\nfrom sagemaker import create_embedding\n\napp = Flask(__name__)\n```\n\nOne route we want to expose is `/movies/` that can be executed with a `PUT` operation to update the plot of a movie given the title. The title will be a query parameter while the plot is passed in via the body. This function is using the `update_plot` that we created before in `atlas.py` and returns the movie with its new plot on success.\n\n```\n@app.route(\"/movies/\", methods=[\"PUT\"])\ndef update_movie(title: str):\n try:\n request_json = request.get_json()\n plot = request_json[\"plot\"]\n updated_movie = update_plot(title, plot)\n\n if updated_movie:\n return jsonify(\n {\n \"message\": \"Movie updated successfully\",\n \"updated_movie\": updated_movie,\n }\n )\n else:\n return jsonify({\"error\": f\"Movie with title {title} not found\"}), 404\n\n except Exception as e:\n return jsonify({\"error\": str(e)}), 500\n```\n\nThe other endpoint, finally, is the vector search: `/movies/search`.\n\nA `query` is `POST`\u2019ed to this endpoint which will then use `create_embedding` first to create a vector from this query. Note that we need to also create vectors for the query because that\u2019s what the vector search needs to compare it to the actual data (or rather, its embeddings).\n\nWe then call `execute_vector_search` with this `embedding` to retrieve the results, which will be returned on success.\n\n```\n@app.route(\"/movies/search\", methods=[\"POST\"])\ndef search_movies():\n try:\n request_json = request.get_json()\n query = request_json[\"query\"]\n embedding = create_embedding(query)\n\n results = execute_vector_search(embedding)\n\n jsonified_results = jsonify(\n {\n \"message\": \"Movies searched successfully\",\n \"results\": results,\n }\n )\n\n return jsonified_results\n\n except Exception as e:\n return jsonify({\"error\": str(e)}), 500\n\nif __name__ == \"__main__\":\n app.run(debug=True)\n```\n\nAnd that\u2019s about all you have to do. Easy, wasn\u2019t it?\n\nGo ahead and run the Flask app (main.py) and when ready, send a cURL to see Atlas Vector Search in action. 
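If you are starting from a fresh checkout, a minimal way to get the app running might look like this (a sketch that assumes you install the dependencies used above, namely Flask, PyMongo, Requests, and python-dotenv, and that your `.env` file sits next to the code):\n\n```\npip install flask pymongo requests python-dotenv\npython main.py\n```\n\nBy default, Flask\u2019s development server listens on port 5000, which is what the example below assumes.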
Here is an example when running it locally:\n\n```\ncurl -X POST -H \"Content-Type: application/json\" -d '{\"query\": \"A movie about the Earth, Mars and an invasion.\"}' http://127.0.0.1:5000/movies/search\n```\n\nThis should lead to the following result:\n\n```\n{\n \"message\": \"Movies searched successfully\",\n \"results\": [\n {\n \"title\": \"The War of the Worlds\"\n },\n {\n \"title\": \"The 6th Day\"\n },\n {\n \"title\": \"Pixels\"\n },\n {\n \"title\": \"Journey to Saturn\"\n },\n {\n \"title\": \"Moonraker\"\n }\n ]\n}\n```\n\nWar of the Worlds \u2014 a movie about Earth, Mars, and an invasion. And what a great one, right?\n\n## That\u2019s a wrap!\n\nOf course, this is just a quick and short overview of how to use Amazon SageMaker to create vectors and then search via Vector Search.\n\nWe do have a full workshop for you to learn about all those parts in detail. Please visit the [Search Lab GitHub page to learn more.\n\n\u2705 Sign-up for a free cluster.\n\n\u2705 Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\n\u2705 Get help on our Community Forums.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt39d5ab8bbebc44c9/65cc9cbbdccfc66fb1aafbcc/image31.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdf806894dc3b136b/65cc9cc023dbefeab0ffeefd/image27.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5fc901d90dfa6ff0/65cc9cc31a7344b317bc5e49/image11.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1b5a88f35287c71c/65cc9cd50167d02ac58f99e2/image23.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8fd38c8f35fdca2c/65cc9ccbdccfc666d2aafbd0/image8.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt198377737b0b106e/65cc9cd5fce01c5c5efc8603/image26.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI", "AWS", "Serverless"], "pageDescription": "In this series, we look at how to use Amazon SageMaker and MongoDB Atlas Vector Search to semantically search your data.", "contentType": "Tutorial"}, "title": "Part #3: Semantically Search Your Data With MongoDB Atlas Vector Search", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/local-development-mongodb-atlas-cli-docker", "action": "created", "body": "# Local Development with the MongoDB Atlas CLI and Docker\n\nNeed a consistent development and deployment experience as developers work across teams and use different machines for their daily tasks? That is where Docker has you covered with containers. A common experience might include running a local version of MongoDB Community in a container and an application in another container. This strategy works for some organizations, but what if you want to leverage all the benefits that come with MongoDB Atlas in addition to a container strategy for your application development?\n\nIn this tutorial we'll see how to create a MongoDB-compatible web application, bundle it into a container with Docker, and manage creation as well as destruction for MongoDB Atlas with the Atlas CLI during container deployment.\n\nIt should be noted that this tutorial was intended for a development or staging setting through your local computer. It is not advised to use all the techniques found in this tutorial in a production setting. 
Use your best judgment when it comes to the code included.\n\nIf you\u2019d like to try the results of this tutorial, check out the repository and instructions on GitHub.\n\n## The prerequisites\n\nThere are a lot of moving parts in this tutorial, so you'll need a few things prior to be successful:\n\n- A MongoDB Atlas account\n- Docker\n- Some familiarity with Node.js and JavaScript\n\nThe Atlas CLI can create an Atlas account for you along with any keys and ids, but for the scope of this tutorial you'll need one created along with quick access to the \"Public API Key\", \"Private API Key\", \"Organization ID\", and \"Project ID\" within your account. You can see how to do this in the documentation.\n\nDocker is going to be the true star of this tutorial. You don't need anything beyond Docker because the Node.js application and the Atlas CLI will be managed by the Docker container, not your host computer.\n\nOn your host computer, create a project directory. The name isn't important, but for this tutorial we'll use **mongodbexample** as the project directory.\n\n## Create a simple Node.js application with Express Framework and MongoDB\n\nWe're going to start by creating a Node.js application that communicates with MongoDB using the Node.js driver for MongoDB. The application will be simple in terms of functionality. It will connect to MongoDB, create a database and collection, insert a document, and expose an API endpoint to show the document with an HTTP request.\n\nWithin the project directory, create a new **app** directory for the Node.js application to live. Within the **app** directory, using a command line, execute the following:\n\n```bash\nnpm init -y\nnpm install express mongodb\n```\n\nIf you don't have Node.js installed, just create a **package.json** file within the **app** directory with the following contents:\n\n```json\n{\n \"name\": \"mongodbexample\",\n \"version\": \"1.0.0\",\n \"description\": \"\",\n \"main\": \"main.js\",\n \"scripts\": {\n \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\",\n \"start\": \"node main.js\"\n },\n \"keywords\": ],\n \"author\": \"\",\n \"license\": \"ISC\",\n \"dependencies\": {\n \"express\": \"^4.18.2\",\n \"mongodb\": \"^4.12.1\"\n }\n}\n```\n\nNext, we'll need to define our application logic. Within the **app** directory we need to create a **main.js** file. 
Within the **main.js** file, add the following JavaScript code:\n\n```javascript\nconst { MongoClient } = require(\"mongodb\");\nconst Express = require(\"express\");\n\nconst app = Express();\n\nconst mongoClient = new MongoClient(process.env.MONGODB_ATLAS_URI);\nlet database, collection;\n\napp.get(\"/data\", async (request, response) => {\n try {\n const results = await collection.find({}).limit(5).toArray();\n response.send(results);\n } catch (error) {\n response.status(500).send({ \"message\": error.message });\n }\n});\n\nconst server = app.listen(3000, async () => {\n try {\n await mongoClient.connect();\n database = mongoClient.db(process.env.MONGODB_DATABASE);\n collection = database.collection(`${process.env.MONGODB_COLLECTION}`);\n collection.insertOne({ \"firstname\": \"Nic\", \"lastname\": \"Raboy\" });\n console.log(\"Listening at :3000\");\n } catch (error) {\n console.error(error);\n }\n});\n\nprocess.on(\"SIGTERM\", async () => {\n if(process.env.CLEANUP_ONDESTROY == \"true\") {\n await database.dropDatabase();\n }\n mongoClient.close();\n server.close(() => {\n console.log(\"NODE APPLICATION TERMINATED!\");\n });\n});\n```\n\nThere's a lot happening in the few lines of code above. We're going to break it down!\n\nBefore we break down the pieces, take note of the environment variables used throughout the JavaScript code. We'll be passing these values through Docker in the end so we have a more dynamic experience with our local development.\n\nThe first important snippet of code to focus on is the start of our application service:\n\n```javascript\nconst server = app.listen(3000, async () => {\n try {\n await mongoClient.connect();\n database = mongoClient.db(process.env.MONGODB_DATABASE);\n collection = database.collection(`${process.env.MONGODB_COLLECTION}`);\n collection.insertOne({ \"firstname\": \"Nic\", \"lastname\": \"Raboy\" });\n console.log(\"Listening at :3000\");\n } catch (error) {\n console.error(error);\n }\n});\n```\n\nUsing the client that was configured near the top of the file, we can connect to MongoDB. Once connected, we can get a reference to a database and collection. This database and collection doesn't need to exist before that because it will be created automatically when data is inserted. With the reference to a collection, we insert a document and begin listening for API requests through HTTP.\n\nThis brings us to our one and only endpoint:\n\n```javascript\napp.get(\"/data\", async (request, response) => {\n try {\n const results = await collection.find({}).limit(5).toArray();\n response.send(results);\n } catch (error) {\n response.status(500).send({ \"message\": error.message });\n }\n});\n```\n\nWhen the `/data` endpoint is consumed, the first five documents in our collection are returned to the user. Otherwise if there was some issue, an error message would be returned.\n\nThis brings us to something optional, but potentially valuable when it comes to a Docker deployment for local development:\n\n```javascript\nprocess.on(\"SIGTERM\", async () => {\n if(process.env.CLEANUP_ONDESTROY == \"true\") {\n await database.dropDatabase();\n }\n mongoClient.close();\n server.close(() => {\n console.log(\"NODE APPLICATION TERMINATED!\");\n });\n});\n```\n\nThe above code says that when a termination event is sent to the application, drop the database we had created and close the connection to MongoDB as well as the Express Framework service. This could be useful if we want to undo everything we had created when the container stops. 
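In practice, that termination event is what Docker sends when you stop the container. Once everything is containerized later in this tutorial, stopping the deployment is what delivers that signal:\n\n```bash\n# Stopping the deployment sends SIGTERM to the container's main process,\n# which is eventually forwarded to this Node.js application.\ndocker-compose down\n```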
If you want your changes to persist, it might not be necessary. For example, if you want your data to exist between container deployments, persistence would be required. On the other hand, maybe you are using the container as part of a test pipeline and you want to clean up when you\u2019re done, the termination commands could be valuable.\n\nSo we have an environment variable heavy Node.js application. What's next?\n\n## Deploying a MongoDB Atlas cluster with network rules, user roles, and sample data\n\nWhile we have the application, our MongoDB Atlas cluster may not be available to us. For example, maybe this is our first time being exposed to Atlas and nothing has been created yet. We need to be able to quickly and easily create a cluster, configure our IP access rules, specify users and permissions, and then connect with our Node.js application.\n\nThis is where the MongoDB Atlas CLI does the heavy lifting!\n\nThere are many different ways to create a script. Some like Bash, some like ZSH, some like something else. We're going to be using ZX which is a JavaScript wrapper for Bash.\n\nWithin your project directory, not your **app** directory, create a **docker_run_script.mjs** file with the following code:\n\n```javascript\n#!/usr/bin/env zx\n\n$.verbose = true;\n\nconst runtimeTimestamp = Date.now();\n\nprocess.env.MONGODB_CLUSTER_NAME = process.env.MONGODB_CLUSTER_NAME || \"examples\";\nprocess.env.MONGODB_USERNAME = process.env.MONGODB_USERNAME || \"demo\";\nprocess.env.MONGODB_PASSWORD = process.env.MONGODB_PASSWORD || \"password1234\";\nprocess.env.MONGODB_DATABASE = process.env.MONGODB_DATABASE || \"business_\" + runtimeTimestamp;\nprocess.env.MONGODB_COLLECTION = process.env.MONGODB_COLLECTION || \"people_\" + runtimeTimestamp;\nprocess.env.CLEANUP_ONDESTROY = process.env.CLEANUP_ONDESTROY || false;\n\nvar app;\n\nprocess.on(\"SIGTERM\", () => { \n app.kill(\"SIGTERM\");\n});\n\ntry {\n let createClusterResult = await $`atlas clusters create ${process.env.MONGODB_CLUSTER_NAME} --tier M0 --provider AWS --region US_EAST_1 --output json`;\n await $`atlas clusters watch ${process.env.MONGODB_CLUSTER_NAME}`\n let loadSampleDataResult = await $`atlas clusters loadSampleData ${process.env.MONGODB_CLUSTER_NAME} --output json`;\n} catch (error) {\n console.log(error.stdout);\n}\n\ntry {\n let createAccessListResult = await $`atlas accessLists create --currentIp --output json`;\n let createDatabaseUserResult = await $`atlas dbusers create --role readWriteAnyDatabase,dbAdminAnyDatabase --username ${process.env.MONGODB_USERNAME} --password ${process.env.MONGODB_PASSWORD} --output json`;\n await $`sleep 10`\n} catch (error) {\n console.log(error.stdout);\n}\n\ntry {\n let connectionString = await $`atlas clusters connectionStrings describe ${process.env.MONGODB_CLUSTER_NAME} --output json`;\n let parsedConnectionString = new URL(JSON.parse(connectionString.stdout).standardSrv);\n parsedConnectionString.username = encodeURIComponent(process.env.MONGODB_USERNAME);\n parsedConnectionString.password = encodeURIComponent(process.env.MONGODB_PASSWORD);\n parsedConnectionString.search = \"retryWrites=true&w=majority\";\n process.env.MONGODB_ATLAS_URI = parsedConnectionString.toString();\n app = $`node main.js`;\n} catch (error) {\n console.log(error.stdout);\n}\n```\n\nOnce again, we're going to break down what's happening!\n\nLike with the Node.js application, the ZX script will be using a lot of environment variables. 
In the end, these variables will be passed with Docker, but you can hard-code them at any time if you want to test things outside of Docker.\n\nThe first important thing to note is the defaulting of environment variables:\n\n```javascript\nprocess.env.MONGODB_CLUSTER_NAME = process.env.MONGODB_CLUSTER_NAME || \"examples\";\nprocess.env.MONGODB_USERNAME = process.env.MONGODB_USERNAME || \"demo\";\nprocess.env.MONGODB_PASSWORD = process.env.MONGODB_PASSWORD || \"password1234\";\nprocess.env.MONGODB_DATABASE = process.env.MONGODB_DATABASE || \"business_\" + runtimeTimestamp;\nprocess.env.MONGODB_COLLECTION = process.env.MONGODB_COLLECTION || \"people_\" + runtimeTimestamp;\nprocess.env.CLEANUP_ONDESTROY = process.env.CLEANUP_ONDESTROY || false;\n```\n\nThe above snippet isn't a requirement, but if you want to avoid setting or passing around variables, defaulting them could be helpful. In the above example, the use of `runtimeTimestamp` will allow us to create a unique database and collection should we want to. This could be useful if numerous developers plan to use the same Docker images to deploy containers because then each developer would be in a sandboxed area. If the developer chooses to undo the deployment, only their unique database and collection would be dropped.\n\nNext we have the following:\n\n```javascript\nprocess.on(\"SIGTERM\", () => { \n app.kill(\"SIGTERM\");\n});\n```\n\nWe have something similar in the Node.js application as well. We have it in the script because eventually the script controls the application. So when we (or Docker) stops the script, the same stop event is passed to the application. If we didn't do this, the application would not have a graceful shutdown and the drop logic wouldn't be applied.\n\nNow we have three try / catch blocks, each focusing on something particular.\n\nThe first block is responsible for creating a cluster with sample data:\n\n```javascript\ntry {\n let createClusterResult = await $`atlas clusters create ${process.env.MONGODB_CLUSTER_NAME} --tier M0 --provider AWS --region US_EAST_1 --output json`;\n await $`atlas clusters watch ${process.env.MONGODB_CLUSTER_NAME}`\n let loadSampleDataResult = await $`atlas clusters loadSampleData ${process.env.MONGODB_CLUSTER_NAME} --output json`;\n} catch (error) {\n console.log(error.stdout);\n}\n```\n\nIf the cluster already exists, an error will be caught. We have three blocks because in our scenario, it is alright if certain parts already exist.\n\nNext we worry about users and access:\n\n```javascript\ntry {\n let createAccessListResult = await $`atlas accessLists create --currentIp --output json`;\n let createDatabaseUserResult = await $`atlas dbusers create --role readWriteAnyDatabase,dbAdminAnyDatabase --username ${process.env.MONGODB_USERNAME} --password ${process.env.MONGODB_PASSWORD} --output json`;\n await $`sleep 10`\n} catch (error) {\n console.log(error.stdout);\n}\n```\n\nWe want our local IP address to be added to the access list and we want a user to be created. In this example, we are creating a user with extensive access, but you may want to refine the level of permission they have in your own project. For example, maybe the container deployment is meant to be a sandboxed experience. In this scenario, it makes sense that the user created access only the database and collection in the sandbox. 
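The Atlas CLI supports that kind of scoping through the `roleName@dbName` format of the `--role` flag, so a hypothetical, narrower version of the user-creation command above could look like this (a sketch only; it reuses the timestamped database name the script generates):\n\n```javascript\n// Hypothetical narrower alternative: limit the user's role to the sandbox database\nlet createScopedUserResult = await $`atlas dbusers create --role readWrite@${process.env.MONGODB_DATABASE} --username ${process.env.MONGODB_USERNAME} --password ${process.env.MONGODB_PASSWORD} --output json`;\n```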
We `sleep` after these commands because they are not instant and we want to make sure everything is ready before we try to connect.\n\nFinally we try to connect:\n\n```javascript\ntry {\n let connectionString = await $`atlas clusters connectionStrings describe ${process.env.MONGODB_CLUSTER_NAME} --output json`;\n let parsedConnectionString = new URL(JSON.parse(connectionString.stdout).standardSrv);\n parsedConnectionString.username = encodeURIComponent(process.env.MONGODB_USERNAME);\n parsedConnectionString.password = encodeURIComponent(process.env.MONGODB_PASSWORD);\n parsedConnectionString.search = \"retryWrites=true&w=majority\";\n process.env.MONGODB_ATLAS_URI = parsedConnectionString.toString();\n app = $`node main.js`;\n} catch (error) {\n console.log(error.stdout);\n}\n```\n\nAfter the first try / catch block finishes, we'll have a connection string. We can finalize our connection string with a Node.js URL object by including the username and password, then we can run our Node.js application. Remember, the environment variables and any manipulations we made to them in our script will be passed into the Node.js application.\n\n## Transition the MongoDB Atlas workflow to containers with Docker and Docker Compose\n\nAt this point, we have an application and we have a script for preparing MongoDB Atlas and launching the application. It's time to get everything into a Docker image to be deployed as a container.\n\nAt the root of your project directory, add a **Dockerfile** file with the following:\n\n```dockerfile\nFROM node:18\n\nWORKDIR /usr/src/app\n\nCOPY ./app/* ./\nCOPY ./docker_run_script.mjs ./\n\nRUN curl https://fastdl.mongodb.org/mongocli/mongodb-atlas-cli_1.3.0_linux_x86_64.tar.gz --output mongodb-atlas-cli_1.3.0_linux_x86_64.tar.gz\nRUN tar -xvf mongodb-atlas-cli_1.3.0_linux_x86_64.tar.gz && mv mongodb-atlas-cli_1.3.0_linux_x86_64 atlas_cli\nRUN chmod +x atlas_cli/bin/atlas\nRUN mv atlas_cli/bin/atlas /usr/bin/\n\nRUN npm install -g zx\nRUN npm install\n\nEXPOSE 3000\n\nCMD [\"./docker_run_script.mjs\"]\n```\n\nThe custom Docker image will be based on a Node.js image which will allow us to run our Node.js application as well as our ZX script.\n\nAfter our files are copied into the image, we run a few commands to download and extract the MongoDB Atlas CLI.\n\nFinally, we install ZX and our application dependencies and run the ZX script. The `CMD` command for running the script is done when the container is run. Everything else is done when the image is built.\n\nWe could build our image from this **Dockerfile** file, but it is a lot easier to manage when there is a Compose configuration. Within the project directory, create a **docker-compose.yml** file with the following YAML:\n\n```yaml\nversion: \"3.9\"\nservices:\n web:\n build:\n context: .\n dockerfile: Dockerfile\n ports:\n - \"3000:3000\"\n environment:\n MONGODB_ATLAS_PUBLIC_API_KEY: YOUR_PUBLIC_KEY_HERE\n MONGODB_ATLAS_PRIVATE_API_KEY: YOUR_PRIVATE_KEY_HERE\n MONGODB_ATLAS_ORG_ID: YOUR_ORG_ID_HERE\n MONGODB_ATLAS_PROJECT_ID: YOUR_PROJECT_ID_HERE\n MONGODB_CLUSTER_NAME: examples\n MONGODB_USERNAME: demo\n MONGODB_PASSWORD: password1234\n # MONGODB_DATABASE: sample_mflix\n # MONGODB_COLLECTION: movies\n CLEANUP_ONDESTROY: true\n```\n\nYou'll want to swap the environment variable values with your own. 
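If you'd rather keep the Atlas API keys out of the Compose file entirely, one option is to move the sensitive values into a separate file that stays out of version control and point Compose at it with `env_file` (a sketch; the file name `atlas.env` is just an example):\n\n```yaml\nservices:\n  web:\n    # ...same build, ports, and non-secret environment settings as shown above...\n    env_file:\n      - ./atlas.env\n```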
In the above example, the database and collection variables are commented out so the defaults would be used in the ZX script.\n\nTo see everything in action, execute the following from the command line on the host computer:\n\n```bash\ndocker-compose up\n```\n\nThe above command will use the **docker-compose.yml** file to build the Docker image if it doesn't already exist. The build process will bundle our files, install our dependencies, and obtain the MongoDB Atlas CLI. When Compose deploys a container from the image, the environment variables will be passed to the ZX script responsible for configuring MongoDB Atlas. When ready, the ZX script will run the Node.js application, further passing the environment variables. If the `CLEANUP_ONDESTROY` variable was set to `true`, when the container is stopped the database and collection will be removed.\n\n## Conclusion\n\nThe [MongoDB Atlas CLI can be a powerful tool for bringing MongoDB Atlas to your local development experience on Docker. Essentially you would be swapping out a local version of MongoDB with Atlas CLI logic to manage a more feature-rich cloud version of MongoDB.\n\nMongoDB Atlas enhances the MongoDB experience by giving you access to more features such as Atlas Search, Charts, and App Services, which allow you to build great applications with minimal effort.", "format": "md", "metadata": {"tags": ["MongoDB", "Bash", "JavaScript", "Docker", "Node.js"], "pageDescription": "Learn how to use the MongoDB Atlas CLI with Docker in this example that includes JavaScript and Node.js.", "contentType": "Tutorial"}, "title": "Local Development with the MongoDB Atlas CLI and Docker", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/pytest-fixtures-and-pypi", "action": "created", "body": "# Testing and Packaging a Python Library\n\n# Testing & Packaging a Python Library\n\nThis tutorial will show you how to build some helpful pytest\u00a0fixtures for testing code that interacts with a MongoDB database. On top of that, I'll show how to package a Python library using the popular hatchling\u00a0library, and publish it to PyPI.\n\nThis the second tutorial in a series! Feel free to check out the first tutorial\u00a0if you like, but it's not necessary if you want to just read on.\n\n## Coding with Mark?\n\nThis tutorial is loosely based on the second episode of a new livestream I host, called \"Coding with Mark.\" I'm streaming on Wednesdays at 2 p.m. GMT (that's 9 a.m. Eastern or 6 a.m. Pacific, if you're an early riser!). If that time doesn't work for you, you can always catch up by watching the recording!\n\nCurrently, I'm building an experimental data access layer library that should provide a toolkit for abstracting complex document models from the business logic layer of the application that's using them.\n\nYou can check out the code in the project's GitHub repository!\n\n## The problem with testing data\n\nTesting is easier when the code you're testing is relatively standalone and can be tested in isolation. Sadly, code that works with data within MongoDB is at the other end of the spectrum \u2014 it's an integration test by definition because you're testing your integration with MongoDB.\n\nYou have two options when writing test that works with MongoDB:\n\n- Mock out MongoDB, so instead of working with MongoDB, your code works with an object that just *looks like*\u00a0MongoDB but doesn't really store data. 
mongomock\u00a0is a good solution if you're following this technique.\n- Work directly with MongoDB, but ensure the database is in a known state before your tests run (by loading test data into an empty database) and then clean up any changes you make after your tests are run.\n\nThe first approach is architecturally simpler \u2014 your tests don't run against MongoDB, so you don't need to configure or run a real MongoDB server. On the other hand, you need to manage an object that pretends to be a `MongoClient`, or a `Database`, or a `Collection`, so that it responds in accurate ways to any calls made against it. And because it's not a real MongoDB connection, it's easy to use those objects in ways that don't accurately reflect a real MongoDB connection.\n\nMy preferred approach is the latter: My tests will run against a real MongoDB instance, and I will have the test framework clean up my database after each run\u00a0using transactions. This makes it harder to run the tests and they may run more slowly, but it should do a better job of highlighting real problems interacting with MongoDB itself.\n\n### Some alternative approaches\n\nBefore I ran in and decided to write my own plugin for pytest, I decided to see what others have done before me. I am building my own ODM, after all \u2014 there's only so much room for Not Invented Here\u2122 in my life. There are two reasonably popular pytest integrations for use with MongoDB: pytest-mongo\u00a0and pytest-mongodb. Sadly, neither did quite what I wanted. But they both look good \u2014 if they do what *you*\u00a0want, then I recommend using them.\n\n### pytest-mongo\n\nPytest-mongo is a pytest plugin that enables you to test code that relies on a running MongoDB database. It allows you to specify fixtures for the MongoDB process and client, and it will spin up a MongoDB process to run tests against, if you configure it to do so.\n\n### pytest-mongodb\n\nPytest-mongo is a pytest plugin that enables you to test code that relies on a database connection to a MongoDB and expects certain data to be present. It allows you to specify fixtures for database collections in JSON/BSON or YAML format. Under the hood, it uses mongomock to simulate a MongoDB connection, or you can use a MongoDB connection, if you prefer.\n\nBoth of these offer useful features \u2014 especially the ability to provide fixture data that's specified in files on disk. Pytest-mongo even provides the ability to clean up the database after each test! When I looked a bit further, though, it does this by deleting all the collections in the test database, which is not the behavior I was looking for.\n\nI want to use MongoDB transactions\u00a0to automatically roll back any changes that are made by each test.\u00a0This way, the test won't actually commit any changes to MongoDB, and only the changes it would have made are rolled back, so the database will be efficiently left in the correct state after each test run.\n\n## Pytest fixtures for MongoDB\n\nI'm going to use pytest's fixtures\u00a0feature to provide both a MongoDB connection object and a transaction session to each test that requires them. Behind the scenes, each fixture object will clean up after itself when it is finished.\n\n### How fixtures work\n\nFixtures in pytest are defined as functions, usually in a file called `conftest.py`. The thing that often surprises people new to fixtures, however, is that pytest will magically provide them to any test function with a parameter with the same name as the fixture. 
It's a form of dependency injection and is probably easier to show than to describe:\n\n```python\n# conftest.py\ndef sample_fixture():\n\n\u00a0 \u00a0 assert sample_fixture == \"Hello, World\"\n```\n\nAs well as pytest providing fixture values to test functions, it will also do the same with other fixture functions. I'll be making use of this in the second fixture I write.\n\nFixtures are called once for their scope, and by default, a fixture's scope is \"function\" which means it'll be called once for each test function. I want my \"session\" fixture to be called (and rolled back) for each function, but it will be much more efficient for my \"mongodb\" client fixture to be called once per session \u2014 i.e., at the start of my whole test run.\n\nThe final bit of pytest fixture theory I want to explain is that if you want something cleaned up *after*\u00a0a scope is over \u2014 for example, when the test function is complete \u2014 the easiest way to accomplish this is to write a generator function using yield instead of return, like this:\n\n```python\ndef sample_fixture():\n\u00a0 \u00a0 # Any code here will be executed *before* the test run\n\u00a0 \u00a0 yield \"Hello, World\"\n\u00a0 \u00a0 # Any code here will be executed *after* the test run\n```\n\nI don't know about you, but despite the magic, I really like this setup. It's nice and consistent, once you know how to use it.\n\n### A MongoClient fixture\n\nThe first fixture I need is one that returns a MongoClient instance that is connected to a MongoDB cluster.\n\nIncidentally, MongoDB Atlas Serverless\u00a0clusters are perfect for this as they don't cost anything when you're not using them. If you're only running your tests a few times a day, or even less, then this could be a good way to save on hosting costs for test infrastructure.\n\nI want to provide configuration to the test runner via an environment variable, `MDB_URI`, which will be the connection string provided by Atlas. In the future, I may want to provide the connection string via a command-line flag, which is something you can do with pytest, but I'll leave that to later.\n\nAs I mentioned before, the scope of the fixture should be \"session\" so that the client is configured once at the start of the test run and then closed at the end. I'm actually going to leave clean-up to Python, so I won't do that explicitly myself.\n\nHere's the fixture:\n\n```python\nimport pytest\nimport pymongo\nimport os\n\n@pytest.fixture(scope=\"session\")\ndef mongodb():\n\u00a0 \u00a0 client = pymongo.MongoClient(os.environ\"MDB_URI\"])\n\u00a0 \u00a0 assert client.admin.command(\"ping\")[\"ok\"] != 0.0 \u00a0# Check that the connection is okay.\n\u00a0 \u00a0 return client\n```\n\nThe above code means that I can write a test that reads from a MongoDB cluster:\n\n```python\n# test_fixtures.py\n\ndef test_mongodb_fixture(mongodb):\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"\"\" This test will pass if MDB_URI is set to a valid connection string. \"\"\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0assert mongodb.admin.command(\"ping\")[\"ok\"] > 0\n```\n\n### Transactions in MongoDB\n\nAs I mentioned, the fixture above is fine for reading from an existing database, but any changes made to the data would be persisted after the tests were finished. In order to correctly clean up after the test run, I need to start a transaction before the test run and then abort the transaction after the test run so that any changes are rolled back. 
This is how Django's test runner works with relational databases!\n\nIn MongoDB, to create a transaction, you first need to start a session which is done with the `start_session` method on the MongoClient object. Once you have a session, you can call its `start_transaction` method to start a transaction and its `abort_transaction` method to roll back any database updates that were run between the two calls.\n\nOne warning here: You *must*\u00a0provide the session object to all your queries or they won't be considered part of the session you've started. All of this together looks like this:\n\n```python\nsession = mongodb.start_session()\nsession.start_transaction()\nmy_collection.insert_one(\n\u00a0 \u00a0 {\"this document\": \"will be erased\"},\n\u00a0 \u00a0 session=session,\n)\nsession.abort_transaction()\n```\n\nThat's not too bad. Now, I'll show you how to wrap up that logic in a fixture.\n\n### Wrapping up a transaction in a fixture\n\nThe fixture takes the code above, replaces the middle with a `yield` statement, and wraps it in a fixture function:\n\n```python\n@pytest.fixture\ndef rollback_session(mongodb):\n\u00a0 \u00a0 session = mongodb.start_session()\n\u00a0 \u00a0 session.start_transaction()\n\u00a0 \u00a0 try:\n\u00a0 \u00a0 \u00a0 \u00a0 yield session\n\u00a0 \u00a0 finally:\n\u00a0 \u00a0 \u00a0 \u00a0 session.abort_transaction()\n```\n\nThis time, I haven't specified the scope of the fixture, so it defaults to \"function\" which means that the `abort_transaction` call will be made after each test function is executed.\n\nJust to be sure that the test fixture both rolls back changes but also allows subsequent queries to access data inserted during the transaction, I have a test in my `test_docbridge.py` file:\n\n```python\ndef test_update_mongodb(mongodb, rollback_session):\n\u00a0 \u00a0 mongodb.docbridge.tests.insert_one(\n\u00a0 \u00a0 \u00a0 \u00a0 {\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"_id\": \"bad_document\",\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"description\": \"If this still exists, then transactions aren't working.\",\n\u00a0 \u00a0 \u00a0 \u00a0 },\n\u00a0 \u00a0 \u00a0 \u00a0 session=rollback_session,\n\u00a0 \u00a0 )\n\u00a0 \u00a0 assert (\n\u00a0 \u00a0 \u00a0 \u00a0 mongodb.docbridge.tests.find_one(\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 {\"_id\": \"bad_document\"}, session=rollback_session\n\u00a0 \u00a0 \u00a0 \u00a0 )\n\u00a0 \u00a0 \u00a0 \u00a0 != None\n\u00a0 \u00a0 )\n```\n\nNote that the calls to `insert_one` and `find_one` both provide the `rollback_session` fixture value as a `session` argument. If you forget it, unexpected things will happen!\n\n## Packaging a Python library\n\nPackaging a Python library has always been slightly daunting, and it's made more so by the fact that these days, the packaging ecosystem changes quite a bit. 
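Despite that churn, the amount of configuration a simple library actually needs is small. A minimal `pyproject.toml` for a package like this one might look roughly like the following (an illustrative sketch rather than the project's actual file; the description and Python version constraint are placeholders):\n\n```toml\n[build-system]\nrequires = [\"hatchling\"]\nbuild-backend = \"hatchling.build\"\n\n[project]\nname = \"docbridge\"\nversion = \"0.0.1\"\ndescription = \"An experimental data access layer for MongoDB\"\nrequires-python = \">=3.8\"\ndependencies = [\n    \"pymongo\",\n]\n```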
At the time of writing, a good back end for building Python packages is [hatchling\u00a0from the Hatch project.\n\nIn broad terms, for a simple Python package, the steps to publishing your package are these:\n\n- Describe your package.\n- Build your package.\n- Push the package to PyPI.\n\nBefore you go through these steps, it's worth installing the following packages into your development environment:\n\n- build - used for installing your build dependencies and packaging your project\n- twine - used for securely pushing your packages to PyPI\n\nYou can install both of these with:\n\n```\npython -m pip install \u2013upgrade build twine\n```\n\n### Describing the package\n\nFirst, you need to describe your project. Once upon a time, this would have required a `setup.py` file. These days, `pyproject.toml` is the way to go. I'm just going to link to the `pyproject.toml` file in GitHub. You'll see that the file describes the project. It lists `pymongo` as a dependency. It also states that \"hatchling.build\" is the build back end in a couple of lines toward the top of the file.\n\nIt's not super interesting, but it does allow you to do the next step...\n\n### Building the package\n\nOnce you've described your project, you can build a distribution from it by running the following command:\n\n```\n$ python -m build\n* Creating venv isolated environment...\n* Installing packages in isolated environment... (hatchling)\n* Getting build dependencies for sdist...\n* Building sdist...\n* Building wheel from sdist\n* Creating venv isolated environment...\n* Installing packages in isolated environment... (hatchling)\n* Getting build dependencies for wheel...\n* Building wheel...\nSuccessfully built docbridge-0.0.1.tar.gz and docbridge-0.0.1-py3-none-any.whl\n```\n\n### Publishing\u00a0to PyPI\n\nOnce the wheel and gzipped tarballs have been created, they can be published to PyPI (assuming the library name is still unique!) by running Twine:\n\n```\n$ python -m twine upload dist/*\nUploading distributions to https://upload.pypi.org/legacy/\nEnter your username: bedmondmark\nEnter your password: \nUploading docbridge-0.0.1-py3-none-any.whl\n100% \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 6.6/6.6 kB \u2022 00:00 \u2022 ?\nUploading docbridge-0.0.1.tar.gz\n100% \u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u25018.5/8.5 kB \u2022 00:00 \u2022 ?\nView at:\nhttps://pypi.org/project/docbridge/0.0.1/\n```\n\nAnd that's it! I don't know about you, but I always go and check that it really worked.\n\n, and sometimes they're extended references!\n\nI'm really excited about some of the abstraction building blocks I have planned, so make sure to read my next tutorial, or if you prefer, join me\u00a0on the livestream\u00a0at 2 p.m. 
GMT on Wednesdays!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt842ca6201f83fbce/659683fc2d261259bee75968/image1.png", "format": "md", "metadata": {"tags": ["MongoDB", "Python"], "pageDescription": "As part of the coding-with-mark series, see how to build some helpful pytest fixtures for testing code that interacts with a MongoDB database, and how to package a Python library using the popular hatchling library.", "contentType": "Tutorial"}, "title": "Testing and Packaging a Python Library", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/superduperdb-ai-development-with-mongodb", "action": "created", "body": "# Using SuperDuperDB to Accelerate AI Development on MongoDB Atlas Vector Search\n\n## Introduction\n\nAre\u00a0you interested in getting started with vector search and AI on MongoDB Atlas but don\u2019t know where to start? The journey can be daunting; developers are confronted with questions such as:\n\n- Which model should I use?\n- Should I go with an open or closed source?\n- How do I correctly apply my model to my data in Atlas to create vector embeddings?\n- How do I configure my Atlas vector search index correctly?\n- Should I chunk my text or apply a vectorizing model to the text directly?\n- How and where can I robustly serve my model to be ready for new searches, based on incoming text queries?\n\nSuperDuperDB is an open-source Python project\u00a0designed to accelerate AI development with the database and assist in answering such questions, allowing developers to focus on what they want to build, without getting bogged down in the details of exactly how vector search and AI more generally are implemented.\n\nSuperDuperDB includes computation of model outputs and model training which directly work with data in your database, as well as first-class support for vector search. In particular, SuperDuperDB supports MongoDB community and Atlas deployments.\n\nYou can follow along with the code below, but if you prefer, all of the code is available in the SuperDuperDB GitHub repository.\n\n## Getting started with SuperDuperDB\n\nSuperDuperDB is super-easy to install using pip:\n\n```\npython -m pip install -U superduperdbapis]\n```\n\nOnce you\u2019ve installed SuperDuperDB, you\u2019re ready to connect to your MongoDB Atlas deployment:\n\n```python\nfrom\u00a0superduperdb import\u00a0superduper\n\ndb = superduper(\"mongodb+srv://:@...mongodb.net/documents\")\n```\n\nThe trailing characters after the last \u201c/\u201d denote the database you\u2019d like to connect to. In this case, the database is called \"documents.\" You should make sure that the user is authorized to access this database.\n\nThe variable `db`\u00a0is a connector that is simultaneously:\n\n- A database client.\n- An artifact store for AI models (stores large file objects).\n- A meta-data store, storing important information about your models as they relate to the database.\n- A query interface allowing you to easily execute queries including vector search, without needing to explicitly handle the logic of converting the queries into vectors.\n\n## Connecting SuperDuperDB with AI models\n\n*Let\u2019s see this in action.*\n\nWith SuperDuperDB, developers can import model wrappers that support a variety of open-source projects as well as AI API providers, such as OpenAI. 
Developers may even define and program their own models.\n\nFor example, to create a vectorizing model using the OpenAI API, first set your `OPENAI_API_KEY`\u00a0as an environment variable:\n\n```shell\nexport\u00a0OPENAI_API_KEY=\"sk-...\"\n```\n\nNow, simply import the OpenAI model wrapper:\n\n```python\nfrom\u00a0superduperdb.ext.openai.model import\u00a0OpenAIEmbedding\n\nmodel = OpenAIEmbedding(\n \u00a0 \u00a0identifier='text-embedding-ada-002', model='text-embedding-ada-002')\n```\n\nTo check this is working, you can apply this model to a single text snippet using the `predict`\n\nmethod, specifying that this is a single data point with `one=True`.\n\n```python\n>>> model.predict('This is a test', one=True)\n[-0.008146246895194054,\n -0.0036965329200029373,\n -0.0006024622125551105,\n -0.005724836140871048,\n -0.02455105632543564,\n 0.01614714227616787,\n...]\n```\n\nAlternatively, we can also use an open-source model (not behind an API), using, for instance, the [`sentence-transformers`\u00a0library:\n\n```python\nimport\u00a0sentence_transformers\nfrom superduperdb.components.model import Model\n```\n\n```python\nfrom superduperdb import vector\n```\n\n```python\nmodel = Model(\n \u00a0 \u00a0identifier='all-MiniLM-L6-v2',\n \u00a0 \u00a0object=sentence_transformers.SentenceTransformer('all-MiniLM-L6-v2'),\n \u00a0 \u00a0encoder=vector(shape=(384,)),\n \u00a0 \u00a0predict_method='encode',\n \u00a0 \u00a0postprocess=lambda\u00a0x: x.tolist(),\n \u00a0 \u00a0batch_predict=True,\n)\n```\n\nThis code snippet uses the base `Model`\u00a0wrapper, which supports arbitrary model class instances, using both open-sourced and in-house code. One simply supplies the class instance to the object parameter, optionally specifying `preprocess`\u00a0and/or `postprocess`\u00a0functions.\u00a0The `encoder`\u00a0argument tells Atlas Vector Search what size the outputs of the model are, and the `batch_predict=True`\u00a0option makes computation quicker.\n\nAs before, we can test the model:\n\n```python\n>>> model.predict('This is a test', one=True)\n-0.008146246895194054,\n -0.0036965329200029373,\n -0.0006024622125551105,\n -0.005724836140871048,\n -0.02455105632543564,\n 0.01614714227616787,\n...]\n```\n\n## Inserting and querying data via SuperDuperDB\n\nLet\u2019s add some data to MongoDB using the `db`\u00a0connection. We\u2019ve prepared some data from the PyMongo API to add a meta twist to this walkthrough. You can download this data with this command:\n\n```shell\ncurl -O https://superduperdb-public.s3.eu-west-1.amazonaws.com/pymongo.json\n```\n\n```python\nimport\u00a0json\nfrom superduperdb.backends.mongodb.query import Collection\nfrom superduperdb.base.document import Document as D\n\nwith\u00a0open('pymongo.json') as\u00a0f:\n \u00a0 \u00a0data = json.load(f)\n\ndb.execute(\n \u00a0 \u00a0Collection('documents').insert_many([D(r) for\u00a0r in\u00a0data])\n)\n```\n\nYou\u2019ll see from this command that, in contrast to `pymongo`, `superduperdb`\n\nincludes query objects (`Collection(...)...`). 
This allows `superduperdb`\u00a0to pass the queries around to models, computations, and training runs, as well as save the queries for future use.\\\nOther than this fact, `superduperdb`\u00a0supports all of the commands that are supported by the core `pymongo`\u00a0API.\n\nHere is an example of fetching some data with SuperDuperDB:\n\n```python\n>>> r = db.execute(Collection('documents').find_one())\n>>> r\nDocument({\n \u00a0 \u00a0'key': 'pymongo.mongo_client.MongoClient', \n \u00a0 \u00a0'parent': None, \n \u00a0 \u00a0'value': '\\nClient for a MongoDB instance, a replica set, or a set of mongoses.\\n\\n', \n \u00a0 \u00a0'document': 'mongo_client.md',\n \u00a0 \u00a0'res': 'pymongo.mongo_client.MongoClient',\n \u00a0 \u00a0'_fold': 'train',\n \u00a0 \u00a0'_id': ObjectId('652e460f6cc2a5f9cc21db4f')\n})\n```\n\nYou can see that the usual data from MongoDB is wrapped with the `Document`\u00a0class.\n\nYou can recover the unwrapped document with `unpack`:\n\n```python\n>>> r.unpack()\n{'key': 'pymongo.mongo_client.MongoClient',\n 'parent': None,\n 'value': '\\nClient for a MongoDB instance, a replica set, or a set of mongoses.\\n\\n',\n 'document': 'mongo_client.md',\n 'res': 'pymongo.mongo_client.MongoClient',\n '_fold': 'train',\n '_id': ObjectId('652e460f6cc2a5f9cc21db4f')}\n```\n\nThe reason `superduperdb`\u00a0uses the `Document`\u00a0abstraction is that, in SuperDuperDB, you don't need to manage converting data to bytes yourself. We have a system of configurable and user-controlled types, or \"Encoders,\" which allow users to insert, for example, images directly.\u00a0*(This is a topic of an upcoming tutorial!)*\n\n## Configuring models to work with vector search on MongoDB Atlas using SuperDuperDB\n\nNow you have chosen and tested a model and inserted some data, you may configure vector search on MongoDB Atlas using SuperDuperDB. 
To do that, execute this command:\n\n```python\nfrom superduperdb import VectorIndex\nfrom superduperdb import Listener\n\ndb.add(\n \u00a0 \u00a0VectorIndex(\n \u00a0 \u00a0 \u00a0 \u00a0identifier='pymongo-docs',\n \u00a0 \u00a0 \u00a0 \u00a0indexing_listener=Listener(\n \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0model=model,\n \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0key='value',\n \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0select=Collection('documents').find(),\n \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0predict_kwargs={'max_chunk_size': 1000},\n \u00a0 \u00a0 \u00a0 \u00a0),\n \u00a0 \u00a0)\n)\n```\n\nThis command tells `superduperdb`\u00a0to do several things:\n\n- Search the \"documents\"\u00a0collection\n- Set up a vector index on our Atlas cluster, using the text in the \"value\"\u00a0field (Listener)\n- Use the model\u00a0variable to create vector embeddings\n\nAfter receiving this command, SuperDuperDB:\n\n- Configures a MongoDB Atlas knn-index in the \"documents\"\u00a0collection.\n- Saves the model\u00a0object in the SuperDuperDB model store hosted on gridfs.\n- Applies model\u00a0to all data in the \"documents\"\u00a0collection, and saves the vectors in the documents.\n- Saves the fact that the model\u00a0is connected to the \"pymongo-docs\"\u00a0vector index.\n\nIf you\u2019d like to \u201creload\u201d your model in a later session, you can do this with the `load`\u00a0command:\n\n```python\n>>> db.load(\"model\", 'all-MiniLM-L6-v2')\n```\n\nTo look at what happened during the creation of the VectorIndex, we can see that the individual documents now contain vectors:\n\n```python\n>>> db.execute(Collection('documents').find_one()).unpack()\n{'key': 'pymongo.mongo_client.MongoClient',\n 'parent': None,\n 'value': '\\nClient for a MongoDB instance, a replica set, or a set of mongoses.\\n\\n',\n 'document': 'mongo_client.md',\n 'res': 'pymongo.mongo_client.MongoClient',\n '_fold': 'train',\n '_id': ObjectId('652e460f6cc2a5f9cc21db4f'),\n '_outputs': {'value': {'text-embedding-ada-002': [-0.024740776047110558,\n \u00a0 \u00a00.013489063829183578,\n \u00a0 \u00a00.021334229037165642,\n \u00a0 \u00a0-0.03423869237303734,\n \u00a0 \u00a0...]}}}\n```\n\nThe outputs of models are always saved in the `\"_outputs..\"`\u00a0path of the documents. This allows MongoDB Atlas Vector Search to know where to look to create the fast vector lookup index.\n\nYou can verify also that MongoDB Atlas has created a `knn`\u00a0vector search index by logging in to your Atlas account and navigating to the search tab. It will look like this:\n\n![The MongoDB Atlas UI, showing a list of indexes attached to the documents collection.][1]\n\nThe green ACTIVE\u00a0status indicates that MongoDB Atlas has finished comprehending and \u201corganizing\u201d the vectors so that they may be searched quickly.\n\nIf you navigate to the **\u201c...\u201d**\u00a0sign on **Actions**\u00a0and click **edit with JSON editor**\\*,\\*\u00a0then you can inspect the explicit index definition which was automatically configured by `superduperdb`:\n\n![The MongoDB Atlas cluster UI, showing the vector search index details.][2]\n\nYou can confirm from this definition that the index looks into the `\"_outputs..\"`\u00a0path of the documents in our collection.\n\n## Querying vector search with a high-level API with SuperDuperDB\n\nNow that our index is ready to go, we can perform some \u201csearch-by-meaning\u201d queries using the `db`\u00a0connection:\n\n```python\n>>> query = 'Query the database'\n>>> result = db.execute(\n... 
\u00a0 \u00a0Collection('documents')\n... \u00a0 \u00a0 \u00a0 \u00a0.like(D({'value': query}), vector_index='pymongo-docs', n=5)\n... \u00a0 \u00a0 \u00a0 \u00a0.find({}, {'value': 1, 'key': 1})\n... )\n>>> for\u00a0r in\u00a0result:\n... \u00a0 \u00a0print(r.unpack())\n\n{'key': 'find', 'value': '\\nQuery the database.\\n\\nThe filter argument is a query document that all results\\nmust match. For example:\\n\\n`pycon\\n>>> db'}\n{'key': 'database_name', 'value': '\\nThe name of the database this command was run against.\\n\\n'}\n{'key': 'aggregate', 'value': '\\nPerform a database-level aggregation.\\n\\nSee the [aggregation pipeline\n- GitHub\n- Documentation\n- Blog\n- Example use cases and apps\n- Slack community\n- LinkedIn\n- Twitter\n- YouTube\n\n## Contributors are welcome!\n\nSuperDuperDB is open source and permissively licensed under the Apache 2.0 license. We would like to encourage developers interested in open-source development to contribute to\u00a0our discussion forums and issue boards and make their own pull requests. We'll see you on GitHub!\n\n## Become a Design Partner!\n\nWe are looking for visionary organizations we can help to identify and implement transformative AI applications for their business and products. We're offering this absolutely for free. If you would like to learn more about this opportunity, please reach out to us via email: .\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1ea0a942a4e805fc/65d63171c520883d647f9cb9/image2.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5f3999da670dc6cd/65d631712e0c64553cca2ae4/image1.png", "format": "md", "metadata": {"tags": ["Atlas", "Python"], "pageDescription": "Discover how you can use SuperDuperDB to describe complex AI pipelines built on MongoDB Atlas Vector Search and state of the art LLMs.", "contentType": "Article"}, "title": "Using SuperDuperDB to Accelerate AI Development on MongoDB Atlas Vector Search", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/srv-connection-strings", "action": "created", "body": "# MongoDB 3.6: Here to SRV you with easier replica set connections\n\nIf you have logged into MongoDB Atlas\nrecently - and you should, the entry-level tier is free! - you may have\nnoticed a strange new syntax on 3.6 connection strings.\n\n## MongoDB Seed Lists\n\nWhat is this `mongodb+srv` syntax?\n\nWell, in MongoDB 3.6 we introduced the concept of a seed\nlist\nthat is specified using DNS records, specifically\nSRV and\nTXT records. You will recall\nfrom using replica sets with MongoDB that the client must specify at\nleast one replica set member (and may specify several of them) when\nconnecting. This allows a client to connect to a replica set even if one\nof the nodes that the client specifies is unavailable.\n\nYou can see an example of this URL on a 3.4 cluster connection string:\n\nNote that without the SRV record configuration we must list several\nnodes (in the case of Atlas we always include all the cluster members,\nthough this is not required). We also have to specify the `ssl` and\n`replicaSet` options.\n\nWith the 3.4 or earlier driver, we have to specify all the options on\nthe command line using the MongoDB URI\nsyntax.\n\nThe use of SRV records eliminates the requirement for every client to\npass in a complete set of state information for the cluster. 
Instead, a\nsingle SRV record identifies all the nodes associated with the cluster\n(and their port numbers) and an associated TXT record defines the\noptions for the URI.\n\n## Reading SRV and TXT Records\n\nWe can see how this works in practice on a MongoDB Atlas cluster with a\nsimple Python script.\n\n``` python\nimport srvlookup #pip install srvlookup\nimport sys \nimport dns.resolver #pip install dnspython \n\nhost = None \n\nif len(sys.argv) > 1 : \n host = sys.argv1] \n\nif host : \n services = srvlookup.lookup(\"mongodb\", domain=host) \n for i in services:\n print(\"%s:%i\" % (i.hostname, i.port)) \n for txtrecord in dns.resolver.query(host, 'TXT'): \n print(\"%s: %s\" % ( host, txtrecord))\n\nelse: \n print(\"No host specified\") \n```\n\nWe can run this script using the node specified in the 3.6 connection\nstring as a parameter.\n\n![The node is specified in the connection string\n\n``` sh\n$ python mongodb_srv_records.py\nfreeclusterjd-ffp4c.mongodb.net\nfreeclusterjd-shard-00-00-ffp4c.mongodb.net:27017\nfreeclusterjd-shard-00-01-ffp4c.mongodb.net:27017\nfreeclusterjd-shard-00-02-ffp4c.mongodb.net:27017\nfreeclusterjd-ffp4c.mongodb.net: \"authSource=admin&replicaSet=FreeClusterJD-shard-0\" \n$ \n```\n\nYou can also do this lookup with nslookup:\n\n``` sh\nJD10Gen-old:~ jdrumgoole$ nslookup\n> set type=SRV > \\_mongodb._tcp.rs.joedrumgoole.com\nServer: 10.65.141.1\nAddress: 10.65.141.1#53\n\nNon-authoritative answer:\n\\_mongodb._tcp.rs.joedrumgoole.com service = 0 0 27022 rs1.joedrumgoole.com.\n\\_mongodb._tcp.rs.joedrumgoole.com service = 0 0 27022 rs2.joedrumgoole.com.\n\\_mongodb._tcp.rs.joedrumgoole.com service = 0 0 27022 rs3.joedrumgoole.com.\n\nAuthoritative answers can be found from:\n> set type=TXT\n> rs.joedrumgoole.com\nServer: 10.65.141.1\nAddress: 10.65.141.1#53\n\nNon-authoritative answer:\nrs.joedrumgoole.com text = \"authSource=admin&replicaSet=srvdemo\"\n```\n\nYou can see how this could be used to construct a 3.4 style connection\nstring by comparing it with the 3.4 connection string above.\n\nAs you can see, the complexity of the cluster and its configuration\nparameters are stored in the DNS server and hidden from the end user. If\na node's IP address or name changes or we want to change the replica set\nname, this can all now be done completely transparently from the\nclient's perspective. We can also add and remove nodes from a cluster\nwithout impacting clients.\n\nSo now whenever you see `mongodb+srv` you know you are expecting a SRV\nand TXT record to deliver the client connection string.\n\n## Creating SRV and TXT records\n\nOf course, SRV and TXT records are not just for Atlas. You can also\ncreate your own SRV and TXT records for your self-hosted MongoDB\nclusters. All you need for this is edit access to your DNS server so you\ncan add SRV and TXT records. In the examples that follow we are using\nthe AWS Route 53 DNS service.\n\nI have set up a demo replica set on AWS with a three-node setup. They\nare\n\n``` sh\nrs1.joedrumgoole.com \nrs2.joedrumgoole.com \nrs3.joedrumgoole.com\n```\n\nEach has a mongod process running on port 27022. 
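For reference, the records we are about to create for these hosts, written out as plain zone-file entries, will look roughly like this (the values mirror the nslookup output shown earlier, and the TTL of 300 is illustrative):\n\n```\n_mongodb._tcp.rs.joedrumgoole.com. 300 IN SRV 0 0 27022 rs1.joedrumgoole.com.\n_mongodb._tcp.rs.joedrumgoole.com. 300 IN SRV 0 0 27022 rs2.joedrumgoole.com.\n_mongodb._tcp.rs.joedrumgoole.com. 300 IN SRV 0 0 27022 rs3.joedrumgoole.com.\nrs.joedrumgoole.com. 300 IN TXT \"authSource=admin&replicaSet=srvdemo\"\n```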
I have set up a\nsecurity group that allows access to my local laptop and the nodes\nthemselves so they can see each other.\n\nI also set up the DNS names for the above nodes in AWS Route 53.\n\nWe can start the mongod processes by running the following command on\neach node.\n\n``` sh\n$ sudo /usr/local/m/versions/3.6.3/bin/mongod --auth --port 27022 --replSet srvdemo --bind_ip 0.0.0.0 --keyFile mdb_keyfile\"\n```\n\nNow we need to set up the SRV and TXT records for this cluster.\n\nThe SRV record points to the server or servers that will comprise the\nmembers of the replica set. The TXT record defines the options for the\nreplica set, specifically the database that will be used for\nauthorization and the name of the replica set. It is important to note\nthat the **mongodb+srv** format URI implicitly adds \"ssl=true\". In our\ncase SSL is not used for the demo so we have to append \"&ssl=false\" to\nthe client connector. Note that the SRV record is specifically designed\nto look up the **mongodb** service referenced at the start of the URL.\n\nThe settings in AWS Route 53 are:\n\nWhich leads to the following entry in the zone file for Route 53.\n\nNow we can add the TXT record. By convention, we use the same name as\nthe SRV record (`rs.joedrumgoole.com`) so that MongoDB knows where to\nfind the TXT record.\n\nWe can do this on AWS Route 53 as follows:\n\nThis will create the following TXT record.\n\nNow we can access this service as :\n\n``` sh\nmongodb+srv://rs.joedrumgoole.com/test\n```\n\nThis will retrieve a complete URL and connection string which can then\nbe used to contact the service.\n\nThe whole process is outlined below:\n\nOnce your records are set up, you can easily change port numbers without\nimpacting clients and also add and remove cluster members.\n\nSRV records are another way in which MongoDB is making life easier for\ndatabase developers everywhere.\n\nYou should also check out full documentation on SRV and TXT records in\nMongoDB\n3.6.\n\nYou can sign up for a free MongoDB Atlas tier\nwhich is suitable for single user use.\n\nFind out how to use your favorite programming language with MongoDB via\nour MongoDB drivers.\n\nPlease visit MongoDB University for\nfree online training in all aspects of MongoDB.\n\nFollow Joe Drumgoole on twitter for\nmore news about MongoDB.\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "SRV records are another way in which MongoDB is making life easier for database developers everywhere.", "contentType": "News & Announcements"}, "title": "MongoDB 3.6: Here to SRV you with easier replica set connections", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/audio-find-atlas-vector-search", "action": "created", "body": "# Audio Find - Atlas Vector Search for Audio\n\n## Introduction\n\nAs we venture deeper into the realm of digital audio, the frontiers of music discovery are expanding. The pursuit for a more personalized audio experience has led us to develop a state-of-the-art music catalog system. This system doesn't just archive music; it understands it. By utilizing advanced sound embeddings and leveraging the power of MongoDB Atlas Vector Search, we've crafted an innovative platform that recommends songs not by genre or artist, but by the intrinsic qualities of the music itself.\n\nThis article was done together with a co-writer, Ran Shir, music composer and founder of Cues Assets , a production music group. 
We have researched and developed the following architecture to allow businesses to take advantage of their audio materials for searches.\n\n### Demo video for the main flow\n:youtube]{vid=RJRy0-kEbik}\n\n## System architecture overview\n\nAt the heart of this music catalog is a Python service, intricately detailed in our Django-based views.py. This service is the workhorse for generating sound embeddings, using the Panns-inference model to analyze and distill the unique signatures of audio files uploaded by users. Here's how our sophisticated system operates:\n\n**Audio file upload and storage:**\n\nA user begins by uploading an MP3 file through the application's front end. This file is then securely transferred to Amazon S3, ensuring that the user's audio is stored safely in the cloud.\n\n**Sound embedding generation:**\nWhen an audio file lands in our cloud storage, our Django service jumps into action. It downloads the file from S3, using the Python requests library, into a temporary storage on the server to avoid any data loss during processing.\n\n**Normalization and embedding processing:**\n\nThe downloaded audio file is then processed to extract its features. Using librosa, a Python library for audio analysis, the service loads the audio file and passes it to our Panns-inference model. The model, running on a GPU for accelerated computation, computes a raw 4096 members embedding vector which captures the essence of the audio.\n\n**Embedding normalization:**\n\nThe raw embedding is then normalized to ensure consistent comparison scales when performing similarity searches. This normalization step is crucial for the efficacy of vector search, enabling a fair and accurate retrieval of similar songs.\n\n**MongoDB Atlas Vector Search integration:**\n\nThe normalized embedding is then ready to be ingested by MongoDB Atlas. Here, it's indexed alongside the metadata of the audio file in the \"embeddings\" field. This indexing is what powers the vector search, allowing the application to perform a K-nearest neighbor (KNN) search to find and suggest the songs most similar to the one uploaded by the user.\n\n**User interaction and feedback:**\n\nBack on the front end, the application communicates with the user, providing status updates during the upload process and eventually serving the results of the similarity search, all in a user-friendly and interactive manner.\n\n![Sound Catalog Similarity Architecture\n\nThis architecture encapsulates a blend of cloud technology, machine learning, and database management to deliver a unique music discovery experience that's as intuitive as it is revolutionary.\n\n## Uploading and storing MP3 files\n\nThe journey of an MP3 file through our system begins the moment a user selects a track for upload. The frontend of the application, built with user interaction in mind, takes the first file from the dropped files and prepares it for upload. This process is initiated with an asynchronous call to an endpoint that generates a signed URL from AWS S3. This signed URL is a token of sorts, granting temporary permission to upload the file directly to our S3 bucket without compromising security or exposing sensitive credentials.\n\n### Frontend code for file upload\n\nThe frontend code, typically written in JavaScript for a web application, makes use of the `axios` library to handle HTTP requests. When the user selects a file, the code sends a request to our back end to retrieve a signed URL. With this URL, the file can be uploaded to S3. 
The application handles the upload status, providing real-time feedback to the user, such as \"Uploading...\" and then \"Searching based on audio...\" upon successful upload. This interactive feedback loop is crucial for user satisfaction and engagement.\n\n```javascript\nasync uploadFiles(files) {\n const file = files0]; // Get the first file from the dropped files\n if (file) {\n try {\n this.imageStatus = \"Uploading...\";\n // Post a request to the backend to get a signed URL for uploading the file\n const response = await axios.post('https://[backend-endpoint]/getSignedURL', {\n fileName: file.name,\n fileType: file.type\n });\n const { url } = response.data;\n // Upload the file to the signed URL\n const resUpload = await axios.put(url, file, {\n headers: {\n 'Content-Type': file.type\n }\n });\n console.log('File uploaded successfully');\n console.log(resUpload.data);\n\n this.imageStatus = \"Searching based on image...\";\n // Post a request to trigger the audio description generation\n const describeResponse = await axios.post('https://[backend-endpoint]/labelsToDescribe', {\n fileName: file.name\n });\n\n const prompt = describeResponse.data;\n this.searchQuery = prompt;\n this.$refs.dropArea.classList.remove('drag-over');\n if (prompt === \"I'm sorry, I can't provide assistance with that request.\") {\n this.imageStatus = \"I'm sorry, I can't provide assistance with that request.\"\n throw new Error(\"I'm sorry, I can't provide assistance with that request.\");\n }\n this.fetchListings();\n // If the request is successful, show a success message\n this.showSuccessPopup = true;\n this.imageStatus = \"Drag and drop an image here\"\n\n // Auto-hide the success message after 3 seconds\n setTimeout(() => {\n this.showSuccessPopup = false;\n }, 3000);\n } catch (error) {\n console.error('File upload failed:', error);\n // In case of an error, reset the UI and show an error message\n this.$refs.dropArea.classList.remove('drag-over');\n this.showErrorPopup = true;\n\n // Auto-hide the error message after 3 seconds\n setTimeout(() => {\n this.showErrorPopup = false;\n }, 3000);\n\n // Reset the status message after 6 seconds\n setTimeout(() => {\n this.imageStatus = \"Drag and drop an image here\"\n }, 6000);\n\n }\n }\n}\n```\n\n### Backend Code for Generating Signed URLs\n\nOn the backend, a Serverless function written for the MongoDB Realm platform interacts with AWS SDK. It uses stored AWS credentials to access S3 and create a signed URL, which it then sends back to the frontend. 
This URL contains all the necessary information for the file upload, including the file name, content type, and access control settings.\n\n```javascript\n// Serverless function to generate a signed URL for file uploads to AWS S3\nexports = async function({ query, headers, body}, response) {\n \n // Import the AWS SDK\n const AWS = require('aws-sdk');\n\n // Update the AWS configuration with your access keys and region\n AWS.config.update({\n accessKeyId: context.values.get('YOUR_AWS_ACCESS_KEY'), // Replace with your actual AWS access key\n secretAccessKey: context.values.get('YOUR_AWS_SECRET_KEY'), // Replace with your actual AWS secret key\n region: 'eu-central-1' // The AWS region where your S3 bucket is hosted\n });\n \n // Create a new instance of the S3 service\n const s3 = new AWS.S3();\n // Parse the file name and file type from the request body\n const { fileName, fileType } = JSON.parse(body.text())\n \n // Define the parameters for the signed URL\n const params = {\n Bucket: 'YOUR_S3_BUCKET_NAME', // Replace with your actual S3 bucket name\n Key: fileName, // The name of the file to be uploaded\n ContentType: fileType, // The content type of the file to be uploaded\n ACL: 'public-read' // Access control list setting to allow public read access\n };\n \n // Generate the signed URL for the 'putObject' operation\n const url = await s3.getSignedUrl('putObject', params);\n \n // Return the signed URL in the response\n return { 'url' : url }\n};\n```\n\n## Sound embedding with Panns-inference model\nOnce an MP3 file is securely uploaded to S3, a Python service, which interfaces with our Django back end, takes over. This service is where the audio file is transformed into something more \u2014 a compact representation of its sonic characteristics known as a sound embedding. Using the librosa library, the service reads the audio file, standardizing the sample rate to ensure consistency across all files. 
The Panns-inference model then takes a slice of the audio waveform and infers its embedding.\n\n```python\nimport tempfile\nfrom django.http import JsonResponse\nfrom django.views.decorators.csrf import csrf_exempt\nfrom panns_inference import AudioTagging\nimport librosa\nimport numpy as np\nimport os\nimport json\nimport requests\n\n# Function to normalize a vector\ndef normalize(v):\n norm = np.linalg.norm(v)\n return v / norm if norm != 0 else v\n\n# Function to generate sound embeddings from an audio file\ndef get_embedding(audio_file):\n # Initialize the AudioTagging model with the specified device\n model = AudioTagging(checkpoint_path=None, device='gpu')\n # Load the audio file with librosa, normalizing the sample rate to 44100\n a, _ = librosa.load(audio_file, sr=44100)\n # Add an extra dimension to the array to fit the model's input requirements\n query_audio = a[None, :]\n # Perform inference to get the embedding\n _, emb = model.inference(query_audio)\n # Normalize the embedding before returning\n return normalize(emb[0])\n\n# Django view to handle the POST request for downloading and embedding\n@csrf_exempt\ndef download_and_embed(request):\n if request.method == 'POST':\n try:\n # Parse the request body to get the file name\n body_data = json.loads(request.body.decode('utf-8'))\n file_name = body_data.get('file_name')\n\n # If the file name is not provided, return an error\n if not file_name:\n return JsonResponse({'error': 'Missing file_name in the request body'}, status=400)\n\n # Construct the file URL (placeholder) and send a request to get the file\n file_url = f\"https://[s3-bucket-url].amazonaws.com/{file_name}\"\n response = requests.get(file_url)\n\n # If the file is successfully retrieved\n if response.status_code == 200:\n # Create a temporary file to store the downloaded content\n with tempfile.NamedTemporaryFile(delete=False, suffix=\".mp3\") as temp_audio_file:\n temp_audio_file.write(response.content)\n temp_audio_file.flush()\n # Log the temporary file's name and size for debugging\n print(f\"Temp file: {temp_audio_file.name}, size: {os.path.getsize(temp_audio_file.name)}\")\n\n # Generate the embedding for the downloaded file\n embedding = get_embedding(temp_audio_file.name)\n # Return the embedding as a JSON response\n return JsonResponse({'embedding': embedding.tolist()})\n else:\n # If the file could not be downloaded, return an error\n return JsonResponse({'error': 'Failed to download the file'}, status=400)\n except json.JSONDecodeError:\n # If there is an error in the JSON data, return an error\n return JsonResponse({'error': 'Invalid JSON data in the request body'}, status=400)\n\n # If the request method is not POST, return an error\n return JsonResponse({'error': 'Invalid request'}, status=400)\n```\n\n### Role of Panns-inference model\n\nThe [Panns-inference model is a deep learning model trained to understand and capture the nuances of audio content. It generates a vector for each audio file, which is a numerical representation of the file's most defining features. This process turns a complex audio file into a simplified, quantifiable form that can be easily compared against others.\n\nFor more information and setting up this model see the following github example.\n\n## Vector search with MongoDB Atlas\n\n**Storing and indexing embeddings in MongoDB Atlas**\n\nMongoDB Atlas is where the magic of searchability comes to life. The embeddings generated by our Python service are stored in a MongoDB Atlas collection. 
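The ingestion step itself is not shown in the article; a minimal sketch of what it could look like with PyMongo follows. The connection string, database, and collection names are assumptions, and `get_embedding` is the helper defined in the Django service above.

```python
from pymongo import MongoClient

# Assumptions: the connection string, database, and collection names are illustrative.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
songs = client["music_catalog"]["songs"]

def store_song_embedding(file_name: str, local_path: str) -> None:
    # Compute a normalized 4096-dimensional embedding with the helper shown earlier.
    embedding = get_embedding(local_path)
    # Store the vector alongside the file name, matching the "songs" document shape.
    songs.insert_one({
        "file": file_name,
        "embeddings": embedding.tolist(),
    })
```

Because the vector is stored in the same document as the song's metadata, the index definition that follows can reference the `embeddings` field directly.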
Atlas, with its robust indexing capabilities, allows us to index these embeddings efficiently, enabling rapid and accurate vector searches.\nThis is the index definition used on the \u201csongs\u201d collection:\n\n```json\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"embeddings\": {\n \"dimensions\": 4096,\n \"similarity\": \"dotProduct\",\n \"type\": \"knnVector\"\n },\n \"file\": {\n \"normalizer\": \"none\",\n \"type\": \"token\"\n }\n }\n }\n}\n```\nThe \"file\" field is indexed with a \"token\" type for file name filtering logic, explained later in the article.\n\n**Songs collection sample document:**\n```json\n{ \n_id : ObjectId(\"6534dd09164a19b0ac1f7311\"),\n file : \"Glorious Outcame Full Mix.mp3\",\nembeddings : Array (4096)]\n}\n```\n\n### Vector search functionality\n\nVector search in MongoDB Atlas employs a K-nearest neighbor (KNN) algorithm to find the closest embeddings to the one provided by the user's uploaded file. When a user initiates a search, the system queries the Atlas collection, searching through the indexed embeddings to find and return a list of songs with the most similar sound profiles.\nThis combination of technologies \u2014 from the AWS S3 storage and signed URL generation to the processing power of the Panns-inference model, all the way to the search capabilities of MongoDB Atlas \u2014 creates a seamless experience. Users can not only upload their favorite tracks but also discover new ones that carry a similar auditory essence, all within an architecture built for scale, speed, and accuracy.\n\n### Song Lookup and similarity search\n\n**'\u201cGet Songs\u201d functionality**\nThe \u201cGet Songs\u201d feature is the cornerstone of the music catalog, enabling users to find songs with a similar auditory profile to their chosen track. When a user uploads a song, the system doesn't just store the file; it actively searches for and suggests tracks with similar sound embeddings. 
This is achieved through a similarity search, which uses the sound embeddings stored in the MongoDB Atlas collection.\n\n```javascript\n// Serverless function to perform a similarity search on the 'songs' collection in MongoDB Atlas\nexports = async function({ query, body }, response) {\n // Initialize the connection to MongoDB Atlas\n const mongodb = context.services.get('mongodb-atlas');\n // Connect to the specific database\n const db = mongodb.db('YourDatabaseName'); // Replace with your actual database name\n // Connect to the specific collection within the database\n const songsCollection = db.collection('YourSongsCollectionName'); // Replace with your actual collection name\n\n // Parse the incoming request body to extract the embedding vector\n const parsedBody = JSON.parse(body.text());\n console.log(JSON.stringify(parsedBody)); // Log the parsed body for debugging\n\n // Perform a vector search using the parsed embedding vector\n let foundSongs = await songs.aggregate([\n { \"$vectorSearch\": {\n \"index\" : \"default\",\n \"queryVector\": parsedBody.embedding,\n \"path\": \"embeddings\",\n \"numCandidates\": 15,\n \"limit\" : 15\n }\n }\n ]).toArray()\n \n // Map the found songs to a more readable format by stripping unnecessary path components\n let searchableSongs = foundSongs.map((song) => {\n // Extract a cleaner, more readable song title\n let shortName = song.name.replace('.mp3', '');\n return shortName.replace('.wav', ''); // Handle both .mp3 and .wav file extensions\n });\n\n // Prepare an array of $unionWith stages to combine results from multiple collections if needed\n let unionWithStages = searchableSongs.slice(1).map((songTitle) => {\n return {\n $unionWith: {\n coll: 'RelatedSongsCollection', // Name of the other collection to union with\n pipeline: [\n { $match: { \"songTitleField\": songTitle } }, // Match the song titles against the related collection\n ],\n },\n };\n });\n\n // Execute the aggregation query with a $match stage for the first song, followed by any $unionWith stages\n const relatedSongsCollection = db.collection('YourRelatedSongsCollectionName'); // Replace with your actual related collection name\n const locatedSongs = await relatedSongsCollection.aggregate([\n { $match: { \"songTitleField\": searchableSongs[0] } }, // Start with the first song's match stage\n ...unionWithStages, // Include additional stages for related songs\n ]).toArray();\n\n // Return the array of located songs as the response\n return locatedSongs;\n};\n```\nSince embeddings are stored together with the songs data we can use the embedding field when performing a lookup of nearest N neighbours. This approach implements the \"More Like This\" button.\n\n```javascript\n// Get input song 3 neighbours which are not itself. \"More Like This\"\n let foundSongs = await songs.aggregate([\n { \"$vectorSearch\": {\n \"index\" : \"default\",\n \"queryVector\": songDetails.embeddings,\n \"path\": \"embeddings\",\n \"filter\" : { \"file\" : { \"$ne\" : fullSongName}},\n \"numCandidates\": 15,\n \"limit\" : 3\n }}\n ]).toArray()\n``` \nThe code [filter out the searched song itself.\n\n## Backend code for similarity search\n\nThe backend code responsible for the similarity search is a serverless function within MongoDB Atlas. It executes an aggregation pipeline that begins with a vector search stage, leveraging the `$vectorSearch` operator with `queryVector` to perform a K-nearest neighbor search. 
The search is conducted on the \"embeddings\" field, comparing the uploaded track's embedding with those in the collection to find the closest matches. The results are then mapped to a more human-readable format, omitting unnecessary file path information for the user's convenience.\n```javascript\n let foundSongs = await songs.aggregate(\n { \"$vectorSearch\": {\n \"index\" : \"default\",\n \"queryVector\": parsedBody.embedding,\n \"path\": \"embeddings\",\n \"numCandidates\": 15,\n \"limit\" : 15\n }\n }\n ]).toArray()\n```\n\n## Frontend functionality\n\n**Uploading and searching for similar songs**\n\nThe front end provides a drag-and-drop interface for users to upload their MP3 files easily. Once a file is selected and uploaded, the front end communicates with the back end to initiate the search for similar songs based on the generated embedding. This process is made transparent to the user through real-time status updates.\n\n** User Interface and Feedback Mechanisms **\n\nThe user interface is designed to be intuitive, with clear indications of the current process \u2014 whether it's uploading, searching, or displaying results. Success and error popups inform the user of the status of their request. A success popup confirms the upload and successful search, while an error popup alerts the user to any issues that occurred during the process. These popups are designed to auto-dismiss after a short duration to keep the interface clean and user-friendly.\n\n## Challenges and solutions\n\n### Developmental challenges\n\nOne of the challenges faced was ensuring the seamless integration of various services, such as AWS S3, MongoDB Atlas, and the Python service for sound embeddings. Handling large audio files and processing them efficiently required careful consideration of file management and server resources.\n\n### Overcoming the challenges\n\nTo overcome these issues, we utilized temporary storage for processing and optimized the Python service to handle large files without significant memory overhead. Additionally, the use of serverless functions within MongoDB Atlas allowed us to manage compute resources effectively, scaling with the demand as needed.\n\n## Conclusion\n\nThis music catalog represents a fusion of cloud storage, advanced audio processing, and modern database search capabilities. It offers an innovative way to explore music by sound rather than metadata, providing users with a uniquely tailored experience.\n\nLooking ahead, potential improvements could include enhancing the [Panns-inference model for even more accurate embedding generation and expanding the database to accommodate a greater variety of audio content. Further refinements to the user interface could also be made, such as incorporating user feedback to improve the recommendation algorithm continually.\nLooking ahead, potential improvements could include enhancing the model for even more accurate embedding generation and expanding the database to accommodate a greater variety of audio content. Further refinements to the user interface could also be made, such as incorporating user feedback to improve the recommendation algorithm continually.\nIn conclusion, the system stands as a testament to the possibilities of modern audio technology and database management, offering users a powerful tool for music discovery and promising avenues for future development.\n\n**Special Thanks:** Ran Shir and Cues Assets group for the work, research efforts and materials.\n\nWant to continue the conversation? 
Meet us over in the MongoDB Community forums!", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Python", "AI", "Django", "AWS"], "pageDescription": "This in-depth article explores the innovative creation of a music catalog system that leverages the power of MongoDB Atlas's vector search and a Python service for sound embedding. Discover how sound embeddings are generated using the Panns-inference model via S3 hosted files, and how similar songs are identified, creating a dynamic and personalized audio discovery experience.", "contentType": "Article"}, "title": "Audio Find - Atlas Vector Search for Audio", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/query-analytics-part-1", "action": "created", "body": "# Query Analytics Part 1: Know Your Queries\n\nDo you know what your users are searching for? What they\u2019re finding? Or not finding?\n\nThe quality of search results drives users toward or away from using a service. If you can\u2019t find it, it doesn\u2019t exist\u2026 or it may as well not exist. A lack of discoverability leads to a lost customer. A library patron can\u2019t borrow a book they can\u2019t find. The bio-medical researcher won\u2019t glean insights from research papers or genetic information that is not in the search results. If users aren\u2019t finding what they need, expect, or what delights them, they\u2019ll go elsewhere.\n\nAs developers, we\u2019ve successfully deployed full-text search into our application. We can clearly see that our test queries are able to match what we expect, and the relevancy of those test queries looks good. But as we know, our users immediately try things we didn\u2019t think to try and account for and will be presented with results that may or may not be useful to them. If you\u2019re selling items from your search results page and \u201cSorry, no results match your query\u201d comes up, how much money have you _not_ made? Even more insidious are results for common queries that aren\u2019t providing the best results you have to offer; while users get results, there might not be the desired product within quick and easy reach to click and buy now.\n\nHaving Atlas Search enabled and in production is really the beginning of your search journey and also the beginning of the value you\u2019ll get out of a well-tuned, and monitored, search engine. Atlas Search provides Query Analytics, giving us actionable insights into the `$search` activity of our Atlas Search indexes. \n\nNote: Query Analytics is available in public preview for all MongoDB Atlas clusters on an M10 or higher running MongoDB v5.0 or higher to view the analytics information for the tracked search terms in the Atlas UI. Atlas Search doesn't track search terms or display analytics for queries on free and shared-tier clusters.\n\nCallout section: Atlas Search Query Analytics focuses entirely on the frequency and number of results returned from each $search call. There are also several search metrics available for operational monitoring of CPU, memory, index size, and other useful data points.\n\n## Factors that influence search results quality\n\nYou might be thinking, \u201cHey, I thought this Atlas Search thing would magically make my search results work well \u2014 why aren\u2019t the results as my users expect? 
Why do some seemingly reasonable queries return no results or not quite the best results?\u201d\n\nConsider these various types of queries of concern:\n\n| Query challenge | Example |\n| :-------- | :------- |\n| Common name typos/variations | Jacky Chan, Hairy Potter, Dotcor Suess |\n| Relevancy challenged | the purple rain, the the yes, there\u2019s a band called that], to be or not to be |\n| Part numbers, dimensions, measurements | \u215d\u201d driver bit, 1/2\" wrench, size nine dress, Q-36, Q36, Q 36 |\n| Requests for assistance | Help!, support, want to return a product, how to redeem a gift card, fax number |\n| Because you know better | cheap sushi [the user really wants \u201cgood\u201d sushi, don\u2019t recommend the cheap stuff], blue shoes [boost the brands you have in stock that make you the most money], best guitar for a beginner |\n| Word stems | Find nemo, finds nemo, finding nemo |\n| Various languages, character sets, romanization | Flughafen, integra\u00e7ao,\u4e2d\u6587, ko\u2019nichiwa |\n| Context, such as location, recency, and preferences | MDB [boost most recent news of this company symbol], pizza [show me nearby and open restaurants] |\n\nConsider the choices we made, or were dynamically made for us, when we built our Atlas Search index \u2014 specifically, the analyzer choices we make per string field. What we indexed determines what is searchable and in what ways it is searchable. A default `lucene.standard` analyzed field gives us pretty decent, language-agnostic \u201cwords\u201d as searchable terms in the index. That\u2019s the default and not a bad one. However, if your content is in a particular language, it may have some structural and syntactic rules that can be incorporated into the index and queries too. If you have part numbers, item codes, license plates, or other types of data that are precisely specified in your domain, users will enter them without the exact special characters, spacing, or case. Often, as developers or domain experts of a system, we don\u2019t try the wrong or _almost_ correct syntax or format when testing our implementation, but our users do.\n\nWith the number of ways that search results can go astray, we need to be keeping a close eye on what our users are experiencing and carefully tuning and improving.\n\n## Virtuous search query management cycle\n\nMaintaining a healthy search-based system deserves attention to the kinds of challenges just mentioned. A healthy search system management cycle includes these steps:\n\n1. (Re-)deploy search\n2. Measure and test\n3. Make adjustments\n4. Go to 1, repeat\n\n### (Re-)deploying search\n\nHow you go about re-deploying the adjustments will depend on the nature of the changes being made, which could involve index configuration and/or application or query adjustments. \n\nHere\u2019s where the [local development environment for Atlas could be useful, as a way to make configuration and app changes in a comfortable local environment, push the changes to a broader staging environment, and then push further into production when ready.\n\n### Measure and test\n\nYou\u2019ll want to have a process for analyzing the search usage of your system, by tracking queries and their results over time. 
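For example, a movie-title search that records the user's raw input for Query Analytics might look like the stage below (the index, operator, and field names here are illustrative):

```
{
  "$search": {
    "index": "default",
    "text": {
      "query": "purple rain",
      "path": "title"
    },
    "tracking": {
      "searchTerms": "purple rain"
    }
  }
}
```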
Tracking queries simply requires the addition of `searchTerms` tracking information to your search queries, as in this template:\n\n```\n{\n $search: {\n \"index\": \"\",\n \"\": {\n \n },\n \"tracking\": {\n \"searchTerms\": \"\"\n }\n }\n}\n```\n\n### Make adjustments\n\nYou\u2019ve measured, twice even, and you\u2019ve spotted a query or class of queries that need some fine-tuning. It\u2019s part art and part science to tune queries, and with a virtuous search query management cycle in place to measure and adjust, you can have confidence that changes are improving the search results for you and your customers.\n\nNow, apply these adjustments, test, repeat, adjust, re-deploy, test... repeat.\n\nSo far, we\u2019ve laid the general rationale and framework for this virtuous cycle of query analysis and tuning feedback loop. Let\u2019s now see what actionable insights can be gleaned from Atlas Search Query Analytics.\n\n## Actionable insights\n\nThe Atlas Search Query Analytics feature provides two reports of search activity: __All Tracked Search Queries__ and __Tracked Search Queries with No Results__. Each report provides the top tracked \u201csearch terms\u201d for a selected time period, from the last day up to the last 90 days.\n\nLet\u2019s talk about the significance of each report.\n\n### All Tracked Search Queries\n\nWhat are the most popular search terms coming through your system over the last month? This report rolls that up for you.\n\n. \n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt66de710be3567815/6597dec2dc76629c3b7ebbf0/last_30_all_search_queries_chart.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt859876a30d03a803/6597dff21c5d7c16060f3a34/last_30_top_search_terms.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt70d62c57c413a8b6/6597e06ab05b9eccd9d73b49/search_terms_agg_pipeline.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6f6c0668307d3bb2/6597e0dc1c5d7ca8bc0f3a38/last_30_no_results.png", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Do you know what your users are searching for? Atlas Search Query Analytics, gives us actionable insights.", "contentType": "Article"}, "title": "Query Analytics Part 1: Know Your Queries", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-semantic-kernel", "action": "created", "body": "# Building AI Applications with Microsoft Semantic Kernel and MongoDB Atlas Vector Search\n\nWe are excited to announce native support for MongoDB Atlas Vector Search in Microsoft Semantic Kernel. With this integration, users can bring the power of LLMs (large language models) to their proprietary data securely, and build generative AI applications using RAG (retrieval-augmented generation) with programming languages like Python and C#. The accompanying tutorial will walk you through an example.\n\n## What is Semantic Kernel?\nSemantic Kernel is a polyglot, open-source SDK that lets users combine various AI services with their applications. Semantic Kernel uses connectors to allow you to swap out AI services without rewriting code. Components of Semantic Kernel include: \n\n - AI services: Supports AI services like OpenAI, Azure OpenAI, and Hugging Face. 
\n - Programming languages: Supports conventional programming languages like C# Python, and Java.\n - Large language model (LLM) prompts: Supports the latest in LLM AI prompts with prompt templating, chaining, and planning capabilities. \n - Memory: Provides different vectorized stores for storing data, including MongoDB.\n\n## What is MongoDB Atlas Vector Search?\nMongoDB Atlas Vector Search is a fully managed service that simplifies the process of effectively indexing high-dimensional vector embedding data within MongoDB and being able to perform fast vector similarity searches. \n\nEmbedding refers to the representation of words, phrases, or other entities as dense vectors in a continuous vector space. It's designed to ensure that words with similar meanings are grouped closer together. This method helps computer models better understand and process language by recognizing patterns and relationships between words and is what allows us to search by semantic meaning.\n\nWhen data is converted into numeric vector embeddings using encoding models, these embeddings can be stored directly alongside their respective source data within the MongoDB database. This co-location of vector embeddings and the original data not only enhances the efficiency of queries but also eliminates potential synchronization issues. By avoiding the need to maintain separate databases or synchronization processes for the source data and its embeddings, MongoDB provides a seamless and integrated data retrieval experience.\n\nThis consolidated approach streamlines database management and allows for intuitive and sophisticated semantic searches, making the integration of AI-powered experiences easier.\n\n## Microsoft Semantic Kernel and MongoDB\nThis combination enables developers to build AI-powered intelligent applications using MongoDB Atlas Vector Search and large language models from providers like OpenAI, Azure OpenAI, and Hugging Face. \n\nDespite all their incredible capabilities, LLMs have a knowledge cutoff date and often need to be augmented with proprietary, up-to-date information for the particular business that an application is being built for. This \u201clong-term memory for LLM\u201d capability for AI-powered intelligent applications is typically powered by leveraging vector embeddings. Semantic Kernel allows for storing and retrieving this vector context for AI apps using the memory plugin (which now has support for MongoDB Atlas Vector Search). \n\n## Tutorial\nAtlas Vector Search is integrated in this tutorial to provide a way to interact with our memory store that was created through our MongoDB and Semantic Kernel connector.\n\nThis tutorial takes you through how to use Microsoft Semantic Kernel to properly upload and embed documents into your MongoDB Atlas cluster, and then conduct queries using Microsoft Semantic Kernel as well, all in Python!\n\n## Pre-requisites \n\n - MongoDB Atlas cluster \n - IDE of your choice (this tutorial uses Google Colab \u2014 please refer to it if you\u2019d like to run the commands directly)\n - OpenAI API key\n\nLet\u2019s get started!\n\n## Setting up our Atlas cluster\nVisit the MongoDB Atlas dashboard and set up your cluster. In order to take advantage of the `$vectorSearch` operator in an aggregation pipeline, you need to run MongoDB Atlas 6.0.11 or higher. This tutorial can be built using a free cluster. \n\nWhen you\u2019re setting up your deployment, you\u2019ll be prompted to set up a database user and rules for your network connection. 
Please ensure you save your username and password somewhere safe and have the correct IP address rules in place so your cluster can connect properly. \n\nIf you need more help getting started, check out our tutorial on MongoDB Atlas. \n\n## Installing the latest version of Semantic Kernel\nIn order to be successful with our tutorial, let\u2019s ensure we have the most up-to-date version of Semantic Kernel installed in our IDE. As of the creation of this tutorial, the latest version is 0.3.14. Please run this `pip` command in your IDE to get started:\n```\n!python -m pip install semantic-kernel==0.3.14.dev\n```\nOnce it has been successfully run, you will see various packages being downloaded. Please ensure `pymongo` is downloaded in this list. \n\n## Setting up our imports \nHere, include the information about our OpenAI API key and our connection string. \n\nLet\u2019s set up the necessary imports:\n```\nimport openai\nimport semantic_kernel as sk\nfrom semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion, OpenAITextEmbedding\nfrom semantic_kernel.connectors.memory.mongodb_atlas import MongoDBAtlasMemoryStore\n\nkernel = sk.Kernel()\n\nopenai.api_key = '\"))\nkernel.import_skill(sk.core_skills.TextMemorySkill())\n```\nImporting in OpenAI is crucial because we are using their data model to embed not only our documents but also our queries. We also want to import their Text Embedding library for this same reason. For this tutorial, we are using the embedding model `ada-002`, but please double check that you\u2019re using a model that is compatible with your OpenAI API key.\n\nOur `MongoDBAtlasMemoryStore` class is very important as it\u2019s the part that enables us to use MongoDB as our memory store. This means we can connect to the Semantic Kernel and have our documents properly saved and formatted in our cluster. For more information on this class, please refer to the repository. \n\nThis is also where you will need to incorporate your OpenAI API key along with your MongoDB connection string, and other important variables that we will use. The ones above are just a suggestion, but if they are changed while attempting the tutorial, please ensure they are consistent throughout. For help on accessing your OpenAI key, please read the section below.\n\n### Generate your OpenAI key\nIn order to generate our embeddings, we will use the OpenAI API. First, we\u2019ll need a secret key. To create your OpenAI key, you'll need to create an account. Once you have that, visit the OpenAI API and you should be greeted with a screen like the one below. Click on your profile icon in the top right of the screen to get the dropdown menu and select \u201cView API keys\u201d.\n\nHere, you can generate your own API key by clicking the \u201cCreate new secret key\u201d button. Give it a name and store it somewhere safe. This is all you need from OpenAI to use their API to generate your embeddings.\n\n## The need for retrieval-augmented generation (RAG)\nRetrieval-augmented regeneration, also known as RAG, is an NLP technique that can help improve the quality of large language models (LLMs). It\u2019s an artificial intelligence framework for getting data from an external knowledge source. The memory store we are creating using Microsoft Semantic Kernel is an example of this. But why is RAG necessary? Let\u2019s take a look at an example.\n\nLLMs like OpenAI GPT-3.5 exhibit an impressive and wide range of skills. 
They are trained on the data available on the internet about a wide range of topics and can answer queries accurately. Using Semantic Kernel, let\u2019s ask OpenAI\u2019s LLM if Albert Einstein likes coffee: \n```\n# Wrap your prompt in a function\nprompt = kernel.create_semantic_function(\"\"\"\nAs a friendly AI Copilot, answer the question: Did Albert Einstein like coffee?\n\"\"\")\n\nprint(prompt())\n```\nThe output received is:\n```\nYes, Albert Einstein was known to enjoy coffee. He was often seen with a cup of coffee in his hand and would frequently visit cafes to discuss scientific ideas with his colleagues over a cup of coffee.\n```\nSince this information was available on the public internet, the LLM was able to provide the correct answer.\n\nBut LLMs have their limitations: They have a knowledge cutoff (September 2021, in the case of OpenAI) and do not know about proprietary and personal data. They also have a tendency to hallucinate \u2014 that is, they may confidently make up facts and provide answers that may seem to be accurate but are actually incorrect. Here is an example to demonstrate this knowledge gap:\n\n```\nprompt = kernel.create_semantic_function(\"\"\"\nAs a friendly AI Copilot, answer the question: Did I like coffee?\n\"\"\")\n\nprint(prompt())\n```\nThe output received is:\n```\nAs an AI, I don't have personal preferences or experiences, so I can't say whether \"I\" liked coffee or not. However, coffee is a popular beverage enjoyed by many people around the world. It has a distinct taste and aroma that some people find appealing, while others may not enjoy it as much. Ultimately, whether someone likes coffee or not is a subjective matter and varies from person to person.\n``` \n\nAs you can see, there is a knowledge gap here because we don\u2019t have our personal data loaded in OpenAI that our query can access. So let\u2019s change that. Continue on through the tutorial to learn how to augment the knowledge base of the LLM with proprietary data. \n\n## Add some documents into our MongoDB cluster\nOnce we have incorporated our MongoDB connection string and our OpenAI API key, we are ready to add some documents into our MongoDB cluster. \n\nPlease ensure you\u2019re specifying the proper collection variable below that we set up above. \n```\nasync def populate_memory(kernel: sk.Kernel) -> None:\n# Add some documents to the semantic memory\nawait kernel.memory.save_information_async(\ncollection=MONGODB_COLLECTION, id=\"1\", text=\"We enjoy coffee and Starbucks\"\n)\nawait kernel.memory.save_information_async(\ncollection=MONGODB_COLLECTION, id=\"2\", text=\"We are Associate Developer Advocates at MongoDB\"\n)\nawait kernel.memory.save_information_async(\ncollection=MONGODB_COLLECTION, id=\"3\", text=\"We have great coworkers and we love our teams!\"\n)\nawait kernel.memory.save_information_async(\ncollection=MONGODB_COLLECTION, id=\"4\", text=\"Our names are Anaiya and Tim\"\n)\nawait kernel.memory.save_information_async(\ncollection=MONGODB_COLLECTION, id=\"5\", text=\"We have been to New York City and Dublin\"\n)\n```\nHere, we are using the `populate_memory` function to define five documents with various facts about Anaiya and Tim. As you can see, the name of our collection is called \u201crandomFacts\u201d, we have specified the ID for each document (please ensure each ID is unique, otherwise you will get an error), and then we have included a text phrase we want to embed. 
\n\nOnce you have successfully filled in your information and have run this command, let\u2019s add them to our cluster \u2014 aka let\u2019s populate our memory! To do this, please run the command:\n```\nprint(\"Populating memory...aka adding in documents\")\nawait populate_memory(kernel)\n```\nOnce this command has been successfully run, you should see the database, collection, documents, and their embeddings populate in your Atlas cluster. The screenshot below shows how the first document looks after running these commands.\n\n \nOnce the documents added to our memory have their embeddings, let\u2019s set up our search index and ensure we can generate embeddings for our queries. \n\n## Create a vector search index in MongoDB \nIn order to use the `$vectorSearch` operator on our data, we need to set up an appropriate search index. We\u2019ll do this in the Atlas UI. Select the \u201cSearch\" tab on your cluster and click \u201cCreate Search Index\u201d.\n\nWe want to choose the \"JSON Editor Option\" and click \"Next\".\n\nOn this page, we're going to select our target database, `semantic-kernel`, and collection, `randomFacts`.\n\nFor this tutorial, we are naming our index `defaultRandomFacts`. The index will look like this: \n\n```json\n{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"embedding\": {\n \"dimensions\": 1536,\n \"similarity\": \"dotProduct\",\n \"type\": \"knnVector\"\n }\n }\n }\n}\n```\nThe fields specify the embedding field name in our documents, `embedding`, the dimensions of the model used to embed, `1536`, and the similarity function to use to find K-nearest neighbors, `dotProduct`. It's very important that the dimensions in the index match that of the model used for embedding. This data has been embedded using the same model as the one we'll be using, but other models are available and may use different dimensions.\n\nCheck out our Vector Search documentation for more information on the index configuration settings.\n\n## Query documents using Microsoft Semantic Kernel\nIn order to query your new documents hosted in your MongoDB cluster \u201cmemory\u201d store, we can use the `memory.search_async` function. Run the following commands and watch the magic happen:\n\n```\nresult = await kernel.memory.search_async(MONGODB_COLLECTION, 'What is my job title?')\n\nprint(f\"Retrieved document: {result0].text}, {result[0].relevance}\")\n```\n\nNow you can ask any question and get an accurate response! \n\nExamples of questions asked and the results: \n![the result of the question: What is my job title?\n\n## Conclusion\nIn this tutorial, you have learned a lot of very useful concepts:\n\n - What Microsoft Semantic Kernel is and why it\u2019s important.\n - How to connect Microsoft Semantic Kernel to a MongoDB Atlas cluster.\n - How to add in documents to your MongoDB memory store (and embed them, in the process, through Microsoft Semantic Kernel).\n - How to query your new documents in your memory store using Microsoft Semantic Kernel.\n\nFor more information on MongoDB Vector Search, please visit the documentation, and for more information on Microsoft Semantic Kernel, please visit their repository and resources. \n\nIf you have any questions, please visit our MongoDB Developer Community Forum. 
", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Follow this comprehensive guide to getting started with Microsoft Semantic Kernel and MongoDB Atlas Vector Search.", "contentType": "Tutorial"}, "title": "Building AI Applications with Microsoft Semantic Kernel and MongoDB Atlas Vector Search", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-unity-persistence", "action": "created", "body": "# Saving Data in Unity3D Using Realm\n\n(Part 5 of the Persistence Comparison Series)\n\nWe started this tutorial series by looking at Unity and .NET native ways to persist data, like `PlayerPrefs`, `File`, and the `BinaryReader` / `BinaryWriter`. In the previous part, we then continued on to external libraries and with that, databases. We looked at ``SQLite` as one example.\n\nThis time, we will look at another database. One that makes it very easy and intuitive to work with data: the Realm Unity SDK.\n\nFirst, here is an overview over the complete series:\n\n- Part 1: PlayerPrefs\n- Part 2: Files\n- Part 3: BinaryReader and BinaryWriter\n- Part 4: SQLite\n- Part 5: Realm Unity SDK *(this tutorial)*\n\nSimilar to the previous parts, this tutorial can also be found in our Unity examples repository on the persistence-comparison branch.\n\nEach part is sorted into a folder. The four scripts we will be looking at in this tutorial are in the `Realm` sub folder. But first, let's look at the example game itself and what we have to prepare in Unity before we can jump into the actual coding.\n\n## Example game\n\n*Note that if you have worked through any of the other tutorials in this series, you can skip this section since we're using the same example for all parts of the series, so that it's easier to see the differences between the approaches.*\n\nThe goal of this tutorial series is to show you a quick and easy way to make some first steps in the various ways to persist data in your game.\n\nTherefore, the example we'll be using will be as simple as possible in the editor itself so that we can fully focus on the actual code we need to write.\n\nA simple capsule in the scene will be used so that we can interact with a game object. We then register clicks on the capsule and persist the hit count.\n\nWhen you open up a clean 3D template, all you need to do is choose `GameObject` -> `3D Object` -> `Capsule`.\n\nYou can then add scripts to the capsule by activating it in the hierarchy and using `Add Component` in the inspector.\n\nThe scripts we will add to this capsule showcasing the different methods will all have the same basic structure that can be found in `HitCountExample.cs`.\n\n```cs\nusing UnityEngine;\n\n/// \n/// This script shows the basic structure of all other scripts.\n/// \npublic class HitCountExample : MonoBehaviour\n{\n // Keep count of the clicks.\n SerializeField] private int hitCount; // 1\n\n private void Start() // 2\n {\n // Read the persisted data and set the initial hit count.\n hitCount = 0; // 3\n }\n\n private void OnMouseDown() // 4\n {\n // Increment the hit count on each click and save the data.\n hitCount++; // 5\n }\n}\n```\n\nThe first thing we need to add is a counter for the clicks on the capsule (1). Add a `[SerilizeField]` here so that you can observe it while clicking on the capsule in the Unity editor.\n\nWhenever the game starts (2), we want to read the current hit count from the persistence and initialize `hitCount` accordingly (3). 
This is done in the `Start()` method that is called whenever a scene is loaded for each game object this script is attached to.\n\nThe second part to this is saving changes, which we want to do whenever we register a mouse click. The Unity message for this is `OnMouseDown()` (4). This method gets called every time the `GameObject` that this script is attached to is clicked (with a left mouse click). In this case, we increment the `hitCount` (5) which will eventually be saved by the various options shown in this tutorial series.\n\n## Realm\n\n(See `HitCount.cs` and ``RealmExampleSimple.cs` in the repository for the finished version.)\n\nNow that you have seen the example and the increasing hit counter, the next step will be to actually persist it so that it's available the next time we start the game.\n\nAs described in the [documentation, you can install Realm in two different ways:\n\n- Install with NPM\n- Manually Install a Tarball\n\nLet's choose option #1 for this tutorial. The first thing we need to do is to import the Realm framework into Unity using the project settings.\n\nGo to `Windows` \u2192 `Package Manager` \u2192 cogwheel in the top right corner \u2192 `Advanced Project Settings`:\n\nWithin the `Scoped Registries`, you can add the `Name`, `URL`, and `Scope` as follows:\n\nThis adds `NPM` as a source for libraries. The final step is to tell the project which dependencies to actually integrate into the project. This is done in the `manifest.json` file which is located in the `Packages` folder of your project.\n\nHere you need to add the following line to the `dependencies`:\n\n```json\n\"io.realm.unity\": \"\"\n```\n\nReplace `` with the most recent Realm version found in https://github.com/realm/realm-dotnet/releases and you're all set.\n\nThe final `manifest.json` should look something like this:\n\n```json\n{\n \"dependencies\": {\n ...\n \"io.realm.unity\": \"10.13.0\"\n },\n \"scopedRegistries\": \n {\n \"name\": \"NPM\",\n \"url\": \"https://registry.npmjs.org/\",\n \"scopes\": [\n \"io.realm.unity\"\n ]\n }\n ]\n}\n```\n\nWhen you switch back to Unity, it will reload the dependencies. If you then open the `Package Manager` again, you should see `Realm` as a new entry in the list on the left:\n\n![Realm in Project Manager\n\nWe can now start using Realm in our Unity project.\n\nSimilar to other databases, we need to start by telling the Realm SDK how our database structure is supposed to look like. We have seen this in the previous tutorial with SQL, where we had to define tables and column for each class we want to save.\n\nWith Realm, this is a lot easier. We can just define in our code by adding some additional information to let know Realm how to read that code.\n\nLook at the following definition of `HitCount`. You will notice that the super class for this one is `RealmObject` (1). When starting your game, Realm will automatically look for all sub classes of `RealmObject` and know that it needs to be prepared to persist this kind of data. This is all you need to do to get started when defining a new class. One additional thing we will do here, though, is to define which of the properties is the primary key. We will see why later. 
Do this by adding the attribute `PrimaryKey` to the `Id` property (2).\n\n```cs\nusing Realms;\n\npublic class HitCount: RealmObject // 1\n{\n PrimaryKey] // 2\n public int Id { get; set; }\n public int Value { get; set; }\n\n private HitCount() { }\n\n public HitCount(int id)\n {\n Id = id;\n }\n}\n```\n\nWith our data structure defined, we can now look at what we have to do to elevate our example game so that it persists data using Realm. Starting with the `HitCountExample.cs` as the blueprint, we create a new file `RealmExampleSimple.cs`:\n\n```cs\nusing UnityEngine;\n\npublic class RealmExampleSimple : MonoBehaviour\n{\n [SerializeField] private int hitCount;\n\n private void Start()\n {\n hitCount = 0;\n }\n\n private void OnMouseDown()\n {\n hitCount++;\n }\n}\n```\n\nFirst, we'll add two more fields \u2014 `realm` and `hitCount` \u2014\u00a0and rename the `SerializeField` to `hitCounter` to avoid any name conflicts:\n\n```cs\n[SerializeField] private int hitCounter = 0;\n\nprivate Realm realm;\nprivate HitCount hitCount;\n```\n\nThose two additional fields will let us make sure we reuse the same realm for load and save. The same holds true for the `HitCount` object we need to create when starting the scene. To do this, substitute the `Start()` method with the following:\n\n```cs\nvoid Start()\n{\n realm = Realm.GetInstance(); // 1\n\n hitCount = realm.Find(1); // 2\n if (hitCount != null) // 3\n {\n hitCounter = hitCount.Value;\n }\n else // 4\n {\n hitCount = new HitCount(1); // 5\n realm.Write(() => // 6\n {\n realm.Add(hitCount);\n });\n }\n}\n```\n\nA new Realm is created by calling `Realm.GetInstance()` (1). We can then use this `realm` object to handle all operations we need in this example. Start by searching for an already existing `HitCount` object. `Realm` offers a `Find<>` function (2) that let's you search for a specific class that was defined before. Additionally, we can pass long a primary key we want to look for. For this simple example, we will only ever need one `HitCount` object and will just assign the primary key `1` for it and also search for this one here.\n\nThere are two situations that can happen: If the game has been started before, the realm will return a `hitCount` object and we can use that to load the initial state of the `hitCounter` (3) using the `hitCount.Value`. The other possibility is that the game has not been started before and we need to create the `HitCount` object (4). To create a new object in Realm, you first create it the same way you would create any other object in C# (5). Then we need to add this object to the database. Whenever changes are made to the realm, we need to wrap these changes into a write block to make sure we're prevented from conflicting with other changes that might be going on \u2014 for example, on a different thread (6).\n\nWhenever the capsule is clicked, the `hitCounter` gets incremented in `OnMouseDown()`. Here we need to add the change to the database, as well:\n\n```cs\nprivate void OnMouseDown()\n{\n hitCounter++;\n\n realm.Write(() => // 8\n {\n hitCount.Value = hitCounter; // 7\n });\n}\n```\n\nWithin `Start()`, we made sure to create a new `hitCount` object that can be used to load and save changes. So all we need to do here is to update the `Value` with the new `hitCounter` value (7). Note, as before, we need to wrap this change into a `Write` block to guarantee data safety.\n\nThis is all you need to do for your first game using Realm. Easy, isn't it?\n\nRun it and try it out! 
Then we will look into how to extend this a little bit.\n\n## Extended example\n\n(See `HitCountExtended.cs` and `RealmExampleExtended.cs` in the repository for the finished version.)\n\nTo make it easy to compare with the other parts of the series, all we will do in this section is add the key modifiers and save the three different versions:\n\n- Unmodified\n- Shift\n- Control\n\nAs you will see in a moment, this small change is almost too simple to create a whole section around it, but it will also show you how easy it is to work with Realm as you go along in your project.\n\nFirst, let's create a new `HitCountExtended.cs` so that we can keep and look at both structures side by side:\n\n```cs\nusing Realms;\n\npublic class HitCountExtended : RealmObject\n{\n [PrimaryKey]\n public int Id { get; set; }\n public int Unmodified { get; set; } // 1\n public int Shift { get; set; } // 2\n public int Control { get; set; } // 3\n\n private HitCountExtended() { }\n\n public HitCountExtended(int id)\n {\n Id = id;\n }\n}\n```\n\nCompared to the `HitCount.cs`, we've renamed `Value` to `Unmodified` (1) and added `Shift` (2) as well as `Control` (3). That's all we need to do in the entity that will hold our data. How do we need to adjust the `MonoBehaviour`?\n\nFirst, we'll update the outlets to the Unity editor (the `SerializeFields`) by replacing `hitCounter` with those three, similar to the previous tutorials:\n\n```cs\n[SerializeField] private int hitCountUnmodified = 0;\n[SerializeField] private int hitCountShift = 0;\n[SerializeField] private int hitCountControl = 0;\n```\n\nLikewise, we add a `KeyCode` field and use the `HitCountExtended` instead of the `HitCount`:\n\n```cs\nprivate KeyCode modifier = default;\nprivate Realm realm;\nprivate HitCountExtended hitCount;\n```\n\nLet's first adjust the loading of the data. Instead of searching for a `HitCount`, we now search for a `HitCountExtended`:\n\n```cs\nhitCount = realm.Find<HitCountExtended>(1);\n```\n\nIf it was found, we extract the three values and set them to the corresponding hit counters to visualize them in the Unity Editor:\n\n```cs\nif (hitCount != null)\n{\n hitCountUnmodified = hitCount.Unmodified;\n hitCountShift = hitCount.Shift;\n hitCountControl = hitCount.Control;\n}\n```\n\nIf no object was created yet, we will go ahead and create a new one like we did in the simple example:\n\n```cs\nelse\n{\n hitCount = new HitCountExtended(1);\n realm.Write(() =>\n {\n realm.Add(hitCount);\n });\n}\n```\n\nIf you have worked through the previous tutorials, you've seen the `Update()` function already. It will be the same for this tutorial as well, since all it does is detect whichever key modifier is pressed, independent of the way we later save that modifier:\n\n```cs\nprivate void Update()\n{\n // Check if a key was pressed.\n if (Input.GetKey(KeyCode.LeftShift)) // 1\n {\n // Set the LeftShift key.\n modifier = KeyCode.LeftShift;\n }\n else if (Input.GetKey(KeyCode.LeftControl)) // 2\n {\n // Set the LeftControl key.\n modifier = KeyCode.LeftControl;\n }\n else\n {\n // In any other case reset to default and consider it unmodified.\n modifier = default; // 3\n }\n}\n```\n\nThe important bits here are the checks for `LeftShift` and `LeftControl`, which exist in the enum `KeyCode` (1+2). To check if one of those keys is pressed in the current frame (remember, `Update()` is called once per frame), we use `Input.GetKey()` (1+2) and pass in the key we're interested in. 
If none of those two keys is pressed, we use the `Unmodified` version, which is just `default` in this case (3).\n\nThe final part that has to be adjusted is the mouse click that increments the counter. Depending on the `modifier` that was clicked, we increase the corresponding `hitCount` like so:\n\n```cs\nswitch (modifier)\n{\n case KeyCode.LeftShift:\n hitCountShift++;\n break;\n case KeyCode.LeftControl:\n hitCountControl++;\n break;\n default:\n hitCountUnmodified++;\n break;\n}\n```\n\nAfter we've done this, we once again update the realm like we did in the simple example, this time updating all three fields in the `HitCountExtended`:\n\n```cs\nrealm.Write(() =>\n{\n hitCount.Unmodified = hitCountUnmodified;\n hitCount.Shift = hitCountShift;\n hitCount.Control = hitCountControl;\n});\n```\n\nWith this, the modifiers are done for the Realm example and you can start the game and try it out.\n\n## Conclusion\n\nPersisting data in games leads you to many different options to choose from. In this tutorial, we've looked at Realm. It's an easy-to-use and -learn database that can be integrated into your game without much work. All we had to do was add it via NPM, define the objects we use in the game as `RealmObject`, and then use `Realm.Write()` to add and change data, along with `Realm.Find<>()` to retrieve data from the database.\n\nThere is a lot more that Realm can do that would go beyond the limits of what can be shown in a single tutorial.\n\nYou can find [more examples for local Realms in the example repository, as well. It contains examples for one feature you might ask for next after having worked through this tutorial: How do I synchronize my data between devices? Have a look at Realm Sync and some examples.\n\nI hope this series gave you some ideas and insights on how to save and load data in Unity games and prepares you for the choice of which one to pick.\n\nPlease provide feedback and ask any questions in the Realm Community Forum.", "format": "md", "metadata": {"tags": ["Realm", "C#"], "pageDescription": "Persisting data is an important part of most games. Unity offers only a limited set of solutions, which means we have to look around for other options as well.", "contentType": "Tutorial"}, "title": "Saving Data in Unity3D Using Realm", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/getting-started-azure-app-service-atlas", "action": "created", "body": "# Getting Started with MongoDB Atlas, NodeJS, and Azure App Service\n\nMongoDB Atlas and Azure are great friends! In fact, they became even better friends recently with the addition of the MongoDB Atlas Pay-as-You-Go Software as a Service (SaaS) subscription to the Azure Marketplace, allowing you to use your existing Azure credits to enjoy all the benefits of the MongoDB Atlas Developer Data Platform. So there is no better time to learn how you can take advantage of both of these.\n\nIn this article, we are going to see how you can deploy a MERN stack application to Azure Web Apps, part of Azure App Service, in a few simple steps. By the end of this, you will have your own version of the website that can be found here.\n\n## Prerequisites\nThere are a few things you will need in place in order to follow this article.\n\n1. 
Atlas Account and database cluster.\n **N.B.** You can follow the Getting Started with Atlas guide, to learn how to create a free Atlas account, create your first free-forever cluster and get your all important Connection String to the database.\n2. Azure Account.\n3. Have the mern-stack-azure-deployment-example forked to your own account.\n\n### Database Network Access\nMongoDB Atlas comes with database level security out of the box. This includes not only the users who can connect but also where you can connect from. For this reason, you will need to configure network access rules for who or what can access your applications. \n\nThe most common connection technique is via IP address. If you wish to use this with Azure, you will need to allow access from anywhere inside Atlas as we cannot predict what your application IP addresses will be over time.\n\nAtlas also supports the use of network peering and private connections using the major cloud providers. This includes Azure Private Link or Azure Virtual Private Connection (VPC) if you are using an M10 or above cluster.\n\n## What\u2019s the MERN Stack?\nBefore we get started deploying our MERN Stack application to Azure, it\u2019s good to cover what the MERN Stack is.\nMERN stands for MongoDB, Express, React, Node, and is named after the technologies that make up the stack.\n\n* **MongoDB**: a general-purpose document database\n* **Express**: Node.js web framework\n* **React**: a client-side JavaScript framework\n* **Node.js**: the most widely used JavaScript web server\n\n## Create the Azure App Service\nSo we have the pieces in place we need, including a place to store data and an awesome MERN stack repo ready to go. Now we need to create our Azure App Service instance so we can take advantage of its deployment and hosting capabilities:\n\n1. Inside the Azure Portal, in the search box at the top, search for *App Services* and select it.\n2. Click Create to trigger the creation wizard.\n3. Enter the following information:\n- **Subscription**: Choose your preferred existing subscription.\n***Note: When you first create an account, you are given a free trial subscription with $150 free credits you can use***\n- **Resource Group**: Use an existing or click the *Create new* link underneath the box to create a new one.\n- **Name**: Choose what you would like to call it. The name has to be unique as it is used to create a URL ending .azurewebsites.net but otherwise, the choice is yours.\n- **Publish**: Code.\n- **Runtime stack**: Node 18 LTS.- \n- **OS**: Linux.\n- **Region**: Pick the one closest to you.\n- **Pricing Plans**: F1 - this is the free version.\n\n4. Once you are happy, select Review + create in the bottom left.\n5. Click Create in the bottom left and await deployment.\n6. Once created, it will allow you to navigate to your new app service so we can start configuring it.\n\n## Configuring our new App Service\nNow that we have App Service set up, we need to add our connection string to our MongoDB Atlas cluster to app settings, so when deployed the application will be able to find the value and connect successfully. \n1. From the left-side menu in the Azure Portal inside your newly created App Service, click Configuration under the Settings section.\n2. We then need to add a new value in the Application Settings section. **NOT** the Connection String section, despite the name. Click the New application setting button under this section to add one.\n3. 
Add the following values:\n- **Name**: ATLAS_URI\n- **Value**: Your Atlas connection string from the cluster you created earlier.\n\n## Deploy to Azure App Services\nWe have our application, we have our app service and we have our connection string stored. Now it is time to link to our GitHub repo to take advantage of CI/CD goodness in Azure App Services.\n\n1. Inside your app service app, click Deployment Center on the left in the Deployment section.\n2. In the Settings tab that opens by default, from Source, select GitHub.\n3. Fill out the boxes under the GitHub section that appears to select the main branch of your fork of the MERN stack repo.\n4. Under Workflow Option: Make sure Add a workflow is the selected option.\n5. Click Save at the top.\n\nThis will trigger a GitHub Actions build. If you view this in GitHub, you will see it will fail because we need to make some changes to the YAML file it created to allow it to build and deploy successfully.\n\n### Configuring our GitHub Actions Workflow file\nNow that we have connected GitHub Actions and App Services, there is a new folder in the GitHub repo called .github with a subfolder called workflows. This is where you will find the yaml files that App Services auto generated for us in the last section.\n\nHowever, as mentioned, we need to adjust it slightly to work for us:\n1. In the jobs section, there will be a sub section for the build job. Inside this we need to replace the whole steps section with the code found in this gist\n - **N.B.** *The reason it is in a Gist is because indentation is really crucial in YAML and this makes sure the layout stays as it should be to make your life easier.*\n2. As part of this, we have named our app \u2018mern-app\u2019 so we need to make sure this matches in the deploy step. Further down in the jobs section of the yaml file, you will find the deploy section and its own steps subsection. In the first name step, you will see a bit where it says node-app. Change this to mern-app. This associates the build and deploy apps.\n\nThat\u2019s it! All you need to do now is commit the changes to the file. This will trigger a run of the GitHub Action workflow.\nOnce it builds successfully, you can go ahead and visit your website. \n\nTo find the URL of your website, visit the project inside the Azure Portal and in the Overview section you will find the link. \n\nYou should now have a working NodeJS application that uses MongoDB Atlas that is deployed to Azure App Services.\n\n## Summary\nYou are now well on your way to success with Azure App Services, NodeJS and MongoDB Atlas!\n\nIn this article, we created an Azure App Service, added our connection string inside Azure and then linked it up to our existing MERN stack example repo in GitHub, before customizing the generated workflow file for our application. 
Super simple and shows what can be done with the power of the cloud and MongoDB\u2019s Developer Data Platform!\n\nGet started with Atlas on Azure today!\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Node.js", "Azure"], "pageDescription": "How to easily deploy a MERN Stack application to Azure App Service.", "contentType": "Tutorial"}, "title": "Getting Started with MongoDB Atlas, NodeJS, and Azure App Service", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/swift/full-stack-swift", "action": "created", "body": "# Building a Full Stack application with Swift\n\n I recently revealed on Twitter something that may have come as a surprise to many of my followers from the Swift/iOS community: I had never written an iOS app before! I've been writing Swift for a few years now but have focused entirely on library development and server-side Swift.\n\nA highly compelling feature of Swift is that it allows you to write an iOS app and a corresponding backend \u2013 a complete, end-to-end application \u2013 all in the same language. This is similar to how using Node.js for a web app backend allows you to write Javascript everywhere.\n\nTo test this out and learn about iOS development, I decided to build a full-stack application entirely in Swift. I settled on a familiar CRUD app I've created a web version of before, an application that allows the user to manage a list of kittens and information about them.\n\nI chose to build the app using the following components:\n* A backend server, written using the popular Swift web framework Vapor and using the MongoDB Swift driver via MongoDBVapor to store data in MongoDB\n* An iOS application built with SwiftUI and using SwiftBSON to support serializing/deserializing data to/from extended JSON, a version of JSON with MongoDB-specific extensions to simplify type preservation\n* A SwiftPM package containing the code I wanted to share between the two above components\n\nI was able to combine all of this into a single code base with a folder structure as follows:\n```\nFullStackSwiftExample/\n\u251c\u2500\u2500 Models/\n\u2502 \u251c\u2500\u2500 Package.swift\n\u2502 \u2514\u2500\u2500 Sources/\n\u2502 \u2514\u2500\u2500 Models/\n\u2502 \u2514\u2500\u2500 Models.swift\n\u251c\u2500\u2500 Backend/\n\u2502 \u251c\u2500\u2500 Package.swift\n\u2502 \u2514\u2500\u2500 Sources/\n\u2502 \u251c\u2500\u2500 App/\n\u2502 \u2502 \u251c\u2500\u2500 configure.swift\n\u2502 \u2502 \u2514\u2500\u2500 routes.swift\n\u2502 \u2514\u2500\u2500 Run/\n\u2502 \u2514\u2500\u2500 main.swift\n\u2514\u2500\u2500 iOSApp/\n \u2514\u2500\u2500 Kittens/\n \u251c\u2500\u2500 KittensApp.swift\n \u251c\u2500\u2500 Utilities.swift\n \u251c\u2500\u2500 ViewModels/\n \u2502 \u251c\u2500\u2500 AddKittenViewModel.swift\n \u2502 \u251c\u2500\u2500 KittenListViewModel.swift\n \u2502 \u2514\u2500\u2500 ViewUpdateDeleteKittenViewModel.swift\n \u2514\u2500\u2500 Views/\n \u251c\u2500\u2500 AddKitten.swift\n \u251c\u2500\u2500 KittenList.swift\n \u2514\u2500\u2500 ViewUpdateDeleteKitten.swift\n```\n\nOverall, it was a great learning experience for me, and although the app is pretty basic, I'm proud of what I was able to put together! Here is the finished application, instructions to run it, and documentation on each component.\n\nIn the rest of this post, I'll discuss some of my takeaways from this experience.\n\n## 1. 
Sharing data model types made it straightforward to consistently represent my data throughout the stack.\n\nAs I mentioned above, I created a shared SwiftPM package for any code I wanted to use both in the frontend and backend of my application. In that package, I defined `Codable` types modeling the data in my application, for example:\n\n```swift\n/**\n* Represents a kitten.\n* This type conforms to `Codable` to allow us to serialize it to and deserialize it from extended JSON and BSON.\n* This type conforms to `Identifiable` so that SwiftUI is able to uniquely identify instances of this type when they\n* are used in the iOS interface.\n*/\npublic struct Kitten: Identifiable, Codable {\n /// Unique identifier.\n public let id: BSONObjectID\n\n /// Name.\n public let name: String\n\n /// Fur color.\n public let color: String\n\n /// Favorite food.\n public let favoriteFood: CatFood\n\n /// Last updated time.\n public let lastUpdateTime: Date\n\n private enum CodingKeys: String, CodingKey {\n // We store the identifier under the name `id` on the struct to satisfy the requirements of the `Identifiable`\n // protocol, which this type conforms to in order to allow usage with certain SwiftUI features. However,\n // MongoDB uses the name `_id` for unique identifiers, so we need to use `_id` in the extended JSON\n // representation of this type.\n case id = \"_id\", name, color, favoriteFood, lastUpdateTime\n }\n}\n```\n\nWhen you use separate code/programming languages to represent data on the frontend versus backend of an application, it's easy for implementations to get out of sync. But in this application, since the same exact model type gets used for the frontend **and** backend representations of kittens, there can't be any inconsistency.\n\nSince this type conforms to the `Codable` protocol, we also get a single, consistent definition for a kitten's representation in external data formats. The formats used in this application are:\n* Extended JSON, which the frontend and backend use to communicate via HTTP, and\n* BSON, which the backend and MongoDB use to communicate\n\nFor a concrete example of using a model type throughout the stack, when a user adds a new kitten via the UI, the data flows through the application as follows:\n1. The iOS app creates a new `Kitten` instance containing the user-provided data\n1. The `Kitten` instance is serialized to extended JSON via `ExtendedJSONEncoder` and sent in a POST request to the backend\n1. The Vapor backend deserializes a new instance of `Kitten` from the extended JSON data using `ExtendedJSONDecoder`\n1. The `Kitten` is passed to the MongoDB driver method `MongoCollection.insertOne()`\n1. The MongoDB driver uses its built-in `BSONEncoder` to serialize the `Kitten` to BSON and send it via the MongoDB wire protocol to the database\n\nWith all these transformations, it can be tricky to ensure that both the frontend and backend remain in sync in terms of how they model, serialize, and deserialize data. Using Swift everywhere and sharing these `Codable` data types allowed me to avoid those problems altogether in this app.\n\n## 2. Working in a single, familiar language made the development experience seamless.\n\nDespite having never built an iOS app before, I found my existing Swift experience made it surprisingly easy to pick up on the concepts I needed to implement the iOS portion of my application. 
I suspect it's more common that someone would go in the opposite direction, but I think iOS experience would translate well to writing a Swift backend too!\n\nI used several Swift language features such as protocols, trailing closures, and computed properties in both the iOS and backend code. I was also able to take advantage of Swift's new built-in features for concurrency throughout the stack. I used the `async` APIs on `URLSession` to send HTTP requests from the frontend, and I used Vapor and the MongoDB driver's `async` APIs to handle requests on the backend. It was much easier to use a consistent model and syntax for concurrent, asynchronous programming throughout the application than to try to keep straight in my head the concurrency models for two different languages at once.\n\nIn general, using the same language really made it feel like I was building a single application rather than two distinct ones, and greatly reduced the amount of context-switching I had to do as I alternated between work on the frontend and backend. \n\n## 3. SwiftUI and iOS development are really cool!\n\nMany of my past experiences trying to cobble together a frontend for school or personal projects using HTML and Javascript were frustrating. This time around, the combination of using my favorite programming language and an elegant, declarative framework made writing the frontend very enjoyable. More generally, it was great to finally learn a bit about iOS development and what most people writing Swift and that I know from the Swift community do!\n\n---\n\nIn conclusion, my first foray into iOS development building this full-stack Swift app was a lot of fun and a great learning experience. It strongly demonstrated to me the benefits of using a single language to build an entire application, and using a language you're already familiar with as you venture into programming in a new domain.\n\nI've included a list of references below, including a link to the example application. Please feel free to get in touch with any questions or suggestions regarding the application or the MongoDB libraries listed below \u2013 the best way to get in touch with me and my team is by filing a GitHub issue or Jira ticket!\n\n## References\n* Example app source code\n* MongoDB Swift driver and documentation\n* MongoDBVapor and documentation\n* SwiftBSON and documentation\n* Vapor\n* SwiftUI", "format": "md", "metadata": {"tags": ["Swift", "iOS"], "pageDescription": "Curious about mobile and server-side swift? Use this tutorial and example code!", "contentType": "Code Example"}, "title": "Building a Full Stack application with Swift", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/getting-started-mongodb-c", "action": "created", "body": "# Getting Started with MongoDB and C\n\n# Getting Started with MongoDB and C\n\nIn this article we'll install the MongoDB C driver on macOS, and use this driver to write some sample console applications that can interact with your MongoDB data by performing basic CRUD operations. We'll use Visual Studio Code to type in the code and the command line to compile and run our programs. 
If you want to try it out now, all source code is in the GitHub repository.\n\n## Table of contents\n\n- Prerequisites\n- Installation: VS Code, C Extensions, Xcode\n- Installing the C Driver\n- Hello World MongoDB!\n- Setting up the client and pinging MongoDB Atlas\n- Compiling and running our code\n- Connecting to the database and listing all collections\n- Creating a JSON object in C\n- CRUD in MongoDB using the C driver\n- Querying data\n- Inserting a new document\n- Deleting a document\n- Updating a document\n- Wrapping up\n\n## Prerequisites\n\n1. A MongoDB Atlas account with a cluster created.\n2. The sample dataset loaded into the Atlas cluster (or you can modify the sample code to use your own database and collection).\n3. Your machine\u2019s IP address whitelisted. Note: You can add 0.0.0.0/0 as the IP address, which should allow access from any machine. This setting is not recommended for production use.\n\n## VS Code, C extensions, Xcode\n\n1. We will use Visual Studio Code, available in macOS, Windows, and Linux, because it has official support for C code. Just download and install the appropriate version. \n2. We need the C extensions, which will be suggested when you open a C file for the first time. You can also open extensions and search for \"C/C++\" and install them. This will install several extensions: C/C++, C/C++ Themes, and CMake.\n3. The last step is to make sure we have a C compiler. For that, either install Xcode from the Mac App Store or run in a terminal:\n\n```bash\n$ xcode-select --install\n```\n\nAlthough we can use CMake to build our C applications (and you have detailed instructions on how to do it), we'll use VS Code to type our code in and the terminal to build and run our programs.\n\n## Installing the C driver\n\nIn macOS, if we have the package manager homebrew installed (which you should), then we just open a terminal and type in:\n\n```bash\n$ brew install mongo-c-driver\n```\n\nYou can also download the source code and build the driver, but using brew is just way more convenient. \n\n## Configuring VS Code extensions\n\nTo make autocomplete work in VS Code, we need to change the extension's config to make sure it \"sees\" these new libraries installed. We want to change our INCLUDE_PATH to allow both IntelliSense to check our code while typing it and be able to build our app from VS Code.\n\nTo do that, from VS Code, open the `.vscode` hidden folder, and then click on c_cpp_properties.json and add these lines:\n\n```javascript\n{\n \"configurations\": \n {\n \"name\": \"Mac\",\n \"includePath\": [\n \"/usr/local/include/libbson-1.0/**\",\n \"/usr/local/include/libmongoc-1.0/**\",\n \"${workspaceFolder}/**\"\n ],\n...\n}\n ]\n}\n```\n\nNow, open tasks.json and add these lines to the args array:\n\n```\n\"-I/usr/local/include/libmongoc-1.0\",\n\"-I/usr/local/include/libbson-1.0\",\n\"-lmongoc-1.0\",\n\"-lbson-1.0\",`\n```\n\nWith these, we're telling VS Code where to find the MongoDB C libraries so it can compile and check our code as we type. \n\n# Hello World MongoDB! \n\nThe source code is available on [GitHub. \n\n## Setting up the client and pinging MongoDB Atlas\n\nLet\u2019s start with a simple program that connects to the MongoDB Atlas cluster and pings the server. For that, we need to get the connection string (URI) to the cluster and add it in our code. The best way is to create a new environment variable with the key \u201cMONGODB_URI\u201d and value the connection string (URI). 
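If you do go the environment variable route, reading it back in C is just a call to `getenv` from the standard library. Here is a minimal, self-contained sketch (our own illustration, not part of the repository, and it assumes the variable is set in your shell):\n\n```c\n#include <stdio.h>\n#include <stdlib.h>\n\nint main(void) {\n // getenv returns NULL if the variable is not set, so always check before using it.\n const char *uri_string = getenv(\"MONGODB_URI\");\n if (!uri_string) {\n fprintf(stderr, \"MONGODB_URI is not set\\n\");\n return EXIT_FAILURE;\n }\n printf(\"Connection string loaded from the environment\\n\");\n return EXIT_SUCCESS;\n}\n```\n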
It\u2019s a good practice to keep the connection string decoupled from the code, but in this example, for simplicity, we'll have our connection string hardcoded.\n\nWe include the MongoDB driver and send an \"echo\" command from our `main` function. This example shows us how to initialize the MongoDB C client, how to create a command, manipulate JSON (in this case, BCON, BSON C Object Notation), send a command, process the response and error, and release any memory used.\n\n ```c\n // hello_mongo.c\n#include \n\nint main(int argc, char const *argv]) {\n // your MongoDB URI connection string\n const char *uri_string = \"mongodb+srv://\";\n // MongoDB URI created from above string\n mongoc_uri_t *uri;\n // MongoDB Client, used to connect to the DB\n mongoc_client_t *client;\n // Command to be sent, and reply\n bson_t *command, reply;\n // Error management\n bson_error_t error;\n // Misc\n char *str;\n bool retval;\n\n /*\n * Required to initialize libmongoc's internals\n */\n mongoc_init();\n\n /*\n * Optionally get MongoDB URI from command line\n */\n if (argc > 1) {\n uri_string = argv[1];\n }\n\n /*\n * Safely create a MongoDB URI object from the given string\n */\n uri = mongoc_uri_new_with_error(uri_string, &error);\n if (!uri) {\n fprintf(stderr,\n \"failed to parse URI: %s\\n\"\n \"error message: %s\\n\",\n uri_string, error.message);\n return EXIT_FAILURE;\n }\n\n /*\n * Create a new client instance, here we use the uri we just built\n */\n client = mongoc_client_new_from_uri(uri);\n if (!client) {\n return EXIT_FAILURE;\n }\n\n /*\n * Register the application name so we can track it in the profile logs\n * on the server. This can also be done from the URI (see other examples).\n */\n mongoc_client_set_appname(client, \"connect-example\");\n\n /*\n * Do work. This example pings the database and prints the result as JSON\n * BCON == BSON C Object Notation\n */\n command = BCON_NEW(\"ping\", BCON_INT32(1));\n\n // we run above command on our DB, using the client. We get reply and error\n // (if any)\n retval = mongoc_client_command_simple(client, \"admin\", command, NULL, &reply,\n &error);\n\n // mongoc_client_command_simple returns false and sets error if there are\n // invalid arguments or a server or network error.\n if (!retval) {\n fprintf(stderr, \"%s\\n\", error.message);\n return EXIT_FAILURE;\n }\n\n // if we're here, there's a JSON response\n str = bson_as_json(&reply, NULL);\n printf(\"%s\\n\", str);\n\n /*\n * Clean up memory\n */\n bson_destroy(&reply);\n bson_destroy(command);\n bson_free(str);\n\n /*\n * Release our handles and clean up libmongoc\n */\n\n mongoc_uri_destroy(uri);\n mongoc_client_destroy(client);\n mongoc_cleanup();\n\n return EXIT_SUCCESS;\n}\n ```\n\n## Compiling and running our code\n\nAlthough we can use way more sophisticated methods to compile and run our code, as this is just a C source code file and we're using just a few dependencies, I'll just compile from command line using good ol' gcc:\n\n ```bash\n gcc -o hello_mongoc hello_mongoc.c \\ \n -I/usr/local/include/libbson-1.0\n -I/usr/local/include/libmongoc-1.0 \\\n -lmongoc-1.0 -lbson-1.0\n ```\n\nTo run the code, just call the built binary:\n\n ```bash\n ./hello_mongo\n ```\n\nIn the [repo that accompanies this post, you'll find a shell script that builds and runs all examples in one go.\n\n## Connecting to the database and listing all collections\n\nNow that we have the skeleton of a C app, we can start using our database. 
In this case, we'll connect to the database` sample_mflix`, and we'll list all collections there.\n\nAfter connecting to the database, we list all connections with a simple` for` loop after getting all collection names with `mongoc_database_get_collection_names`.\n\n```c\nif ((collection_names =\n mongoc_database_get_collection_names(database, &error))) {\n for (i = 0; collection_namesi]; i++) {\n printf(\"%s\\n\", collection_names[i]);\n }\n\n }\n```\n\nThe complete sample follows.\n\n```c\n// list_collections.c\n#include \n\nint main(int argc, char const *argv[]) {\n // your MongoDB URI connection string\n const char *uri_string = \"mongodb+srv://\";\n\n // MongoDB URI created from above string\n mongoc_uri_t *uri;\n // MongoDB Client, used to connect to the DB\n mongoc_client_t *client;\n\n // Error management\n bson_error_t error;\n\n mongoc_database_t *database;\n mongoc_collection_t *collection;\n char **collection_names;\n unsigned i;\n\n /*\n * Required to initialize libmongoc's internals\n */\n mongoc_init();\n\n /*\n * Safely create a MongoDB URI object from the given string\n */\n uri = mongoc_uri_new_with_error(uri_string, &error);\n if (!uri) {\n fprintf(stderr,\n \"failed to parse URI: %s\\n\"\n \"error message: %s\\n\",\n uri_string, error.message);\n return EXIT_FAILURE;\n }\n\n /*\n * Create a new client instance, here we use the uri we just built\n */\n client = mongoc_client_new_from_uri(uri);\n if (!client) {\n return EXIT_FAILURE;\n }\n\n /*\n * Register the application name so we can track it in the profile logs\n * on the server. This can also be done from the URI (see other examples).\n */\n mongoc_client_set_appname(client, \"connect-example\");\n\n /*\n * Get a handle on the database \"db_name\" and collection \"coll_name\"\n */\n database = mongoc_client_get_database(client, \"sample_mflix\");\n\n// getting all collection names, here we're not passing in any options\n if ((collection_names = mongoc_database_get_collection_names_with_opts(\n database, NULL, &error))) {\n \n for (i = 0; collection_names[i]; i++) {\n printf(\"%s\\n\", collection_names[i]);\n }\n\n } else {\n fprintf(stderr, \"Error: %s\\n\", error.message);\n return EXIT_FAILURE;\n }\n\n /*\n * Release our handles and clean up libmongoc\n */\n\n mongoc_uri_destroy(uri);\n mongoc_client_destroy(client);\n mongoc_cleanup();\n\n return EXIT_SUCCESS;\n}\n```\n\nIf we compile and run it, we'l get this output:\n\n```\n$ ./list_collections \nsessions\nusers\ntheaters\nmovies\ncomments\n```\n\n## Creating a JSON object in C\n\nBeing a document-based database, creating JSON documents is crucial for any application that interacts with MongoDB. Being this is C code, we don't use JSON. Instead, we use BCON ([BSON C Object Notation, as mentioned above). 
To create a new document, we call `BCON_NEW`, and to convert it into a C string, we call `bson_as_canonical_extended_json.`\n\n```c\n// bcon.c\n// https://mongoc.org/libmongoc/current/tutorial.html#using-bcon\n#include \n\n// Creating the JSON doc:\n/*\n{\n born : ISODate(\"1906-12-09\"),\n died : ISODate(\"1992-01-01\"),\n name : {\n first : \"Grace\",\n last : \"Hopper\"\n },\n languages : \"MATH-MATIC\", \"FLOW-MATIC\", \"COBOL\" ],\n degrees: [ { degree: \"BA\", school: \"Vassar\" },\n { degree: \"PhD\", school: \"Yale\" } ]\n}\n*/\n\nint main(int argc, char *argv[]) {\n struct tm born = {0};\n struct tm died = {0};\n bson_t *document;\n char *str;\n\n born.tm_year = 6;\n born.tm_mon = 11;\n born.tm_mday = 9;\n\n died.tm_year = 92;\n died.tm_mon = 0;\n died.tm_mday = 1;\n\n // document = BCON_NEW(\"born\", BCON_DATE_TIME(mktime(&born) * 1000),\n // \"died\", BCON_DATE_TIME(mktime(&died) * 1000),\n // \"name\", \"{\",\n // \"first\", BCON_UTF8(\"Grace\"),\n // \"last\", BCON_UTF8(\"Hopper\"),\n // \"}\",\n // \"languages\", \"[\",\n // BCON_UTF8(\"MATH-MATIC\"),\n // BCON_UTF8(\"FLOW-MATIC\"),\n // BCON_UTF8(\"COBOL\"),\n // \"]\",\n // \"degrees\", \"[\",\n // \"{\", \"degree\", BCON_UTF8(\"BA\"), \"school\",\n // BCON_UTF8(\"Vassar\"), \"}\",\n // \"{\", \"degree\", BCON_UTF8(\"PhD\"),\"school\",\n // BCON_UTF8(\"Yale\"), \"}\",\n // \"]\");\n\n document = BCON_NEW(\"born\", BCON_DATE_TIME(mktime(&born) * 1000), \"died\",\n BCON_DATE_TIME(mktime(&died) * 1000), \"name\", \"{\",\n \"first\", BCON_UTF8(\"Grace\"), \"last\", BCON_UTF8(\"Hopper\"),\n \"}\", \"languages\", \"[\", BCON_UTF8(\"MATH-MATIC\"),\n BCON_UTF8(\"FLOW-MATIC\"), BCON_UTF8(\"COBOL\"), \"]\",\n \"degrees\", \"[\", \"{\", \"degree\", BCON_UTF8(\"BA\"), \"school\",\n BCON_UTF8(\"Vassar\"), \"}\", \"{\", \"degree\", BCON_UTF8(\"PhD\"),\n \"school\", BCON_UTF8(\"Yale\"), \"}\", \"]\");\n\n /*\n * Print the document as a JSON string.\n */\n str = bson_as_canonical_extended_json(document, NULL);\n printf(\"%s\\n\", str);\n bson_free(str);\n\n /*\n * Clean up allocated bson documents.\n */\n bson_destroy(document);\n return 0;\n}\n```\n\n## CRUD in MongoDB using the C driver\n\nNow that we've covered the basics of connecting to MongoDB, let's have a look at how to manipulate data.\n\n## Querying data\n\nProbably the most used function of any database is to retrieve data fast. In most use cases, we spend way more time accessing data than inserting or updating that same data. In this case, after creating our MongoDB client connection, we call `mongoc_collection_find_with_opts`, which will find data based on a query we can pass in. 
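The third argument of `mongoc_collection_find_with_opts` takes an optional BSON document with query options. As a rough sketch (this block is our own illustration rather than part of the sample code; `limit` and `sort` are standard find options), you could cap and order the results like this:\n\n```c\n// Hypothetical options document: return at most 10 movies, newest year first.\nbson_t *opts = BCON_NEW(\"limit\", BCON_INT64(10),\n \"sort\", \"{\", \"year\", BCON_INT32(-1), \"}\");\ncursor = mongoc_collection_find_with_opts(collection, query, opts, NULL);\n// ...iterate the cursor as shown below, then clean up.\nbson_destroy(opts);\nmongoc_cursor_destroy(cursor);\n```\n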
Once we have results, we can iterate through the returned cursor and do something with that data:\n\n```c\n// All movies from 1984!\n BSON_APPEND_INT32(query, \"year\", 1984);\n cursor = mongoc_collection_find_with_opts(collection, query, NULL, NULL);\n\n while (mongoc_cursor_next(cursor, &query)) {\n str = bson_as_canonical_extended_json(query, NULL);\n printf(\"%s\\n\", str);\n bson_free(str);\n }\n```\n\nThe complete sample follows.\n\n```c\n// find.c\n#include \"URI.h\"\n#include \n\nint main(int argc, char const *argv[]) {\n // your MongoDB URI connection string\n const char *uri_string = MY_MONGODB_URI;\n // MongoDB URI created from above string\n mongoc_uri_t *uri;\n // MongoDB Client, used to connect to the DB\n mongoc_client_t *client;\n\n // Error management\n bson_error_t error;\n\n mongoc_collection_t *collection;\n char **collection_names;\n unsigned i;\n\n // Query object\n bson_t *query;\n mongoc_cursor_t *cursor;\n\n char *str;\n\n /*\n * Required to initialize libmongoc's internals\n */\n mongoc_init();\n\n /*\n * Safely create a MongoDB URI object from the given string\n */\n uri = mongoc_uri_new_with_error(uri_string, &error);\n if (!uri) {\n fprintf(stderr,\n \"failed to parse URI: %s\\n\"\n \"error message: %s\\n\",\n uri_string, error.message);\n return EXIT_FAILURE;\n }\n\n /*\n * Create a new client instance, here we use the uri we just built\n */\n client = mongoc_client_new_from_uri(uri);\n if (!client) {\n puts(\"Error connecting!\");\n return EXIT_FAILURE;\n }\n\n /*\n * Register the application name so we can track it in the profile logs\n * on the server. This can also be done from the URI (see other examples).\n */\n mongoc_client_set_appname(client, \"connect-example\");\n\n /*\n * Get a handle on the database \"db_name\" and collection \"coll_name\"\n */\n collection = mongoc_client_get_collection(client, \"sample_mflix\", \"movies\");\n\n query = bson_new();\n\n // All movies from 1984!\n BSON_APPEND_INT32(query, \"year\", 1984);\n cursor = mongoc_collection_find_with_opts(collection, query, NULL, NULL);\n\n while (mongoc_cursor_next(cursor, &query)) {\n str = bson_as_canonical_extended_json(query, NULL);\n printf(\"%s\\n\", str);\n bson_free(str);\n }\n\n /*\n * Release our handles and clean up libmongoc\n */\n\n bson_destroy(query);\n\n mongoc_collection_destroy(collection);\n mongoc_uri_destroy(uri);\n mongoc_client_destroy(client);\n mongoc_cleanup();\n\n return EXIT_SUCCESS;\n}\n````\n\n## Inserting a new document\n\nOK, we know how to read data, but how about inserting fresh data in our MongoDB database? It's easy! 
We just create a BSON document to be inserted and call `mongoc_collection_insert_one.`\n\n```c\ndoc = bson_new();\n bson_oid_init(&oid, NULL);\n BSON_APPEND_OID(doc, \"_id\", &oid);\n BSON_APPEND_UTF8(doc, \"name\", \"My super new picture\");\n\n if (!mongoc_collection_insert_one(collection, doc, NULL, NULL, &error)) {\n fprintf(stderr, \"%s\\n\", error.message);\n }\n```\n\nThe complete sample follows.\n\n```c\n// insert.c\n#include \"URI.h\"\n#include \n\nint main(int argc, char const *argv[]) {\n // your MongoDB URI connection string\n const char *uri_string = MY_MONGODB_URI;\n // MongoDB URI created from above string\n mongoc_uri_t *uri;\n // MongoDB Client, used to connect to the DB\n mongoc_client_t *client;\n\n // Error management\n bson_error_t error;\n\n mongoc_collection_t *collection;\n char **collection_names;\n unsigned i;\n\n // Object id and BSON doc\n bson_oid_t oid;\n bson_t *doc;\n\n char *str;\n\n /*\n * Required to initialize libmongoc's internals\n */\n mongoc_init();\n\n /*\n * Safely create a MongoDB URI object from the given string\n */\n uri = mongoc_uri_new_with_error(uri_string, &error);\n if (!uri) {\n fprintf(stderr,\n \"failed to parse URI: %s\\n\"\n \"error message: %s\\n\",\n uri_string, error.message);\n return EXIT_FAILURE;\n }\n\n /*\n * Create a new client instance, here we use the uri we just built\n */\n client = mongoc_client_new_from_uri(uri);\n if (!client) {\n return EXIT_FAILURE;\n }\n\n /*\n * Register the application name so we can track it in the profile logs\n * on the server. This can also be done from the URI (see other examples).\n */\n mongoc_client_set_appname(client, \"connect-example\");\n\n /*\n * Get a handle on the database \"db_name\" and collection \"coll_name\"\n */\n collection = mongoc_client_get_collection(client, \"sample_mflix\", \"movies\");\n\n doc = bson_new();\n bson_oid_init(&oid, NULL);\n BSON_APPEND_OID(doc, \"_id\", &oid);\n BSON_APPEND_UTF8(doc, \"name\", \"My super new picture\");\n\n if (!mongoc_collection_insert_one(collection, doc, NULL, NULL, &error)) {\n fprintf(stderr, \"%s\\n\", error.message);\n } else {\n printf(\"Document inserted!\");\n /*\n * Print the document as a JSON string.\n */\n str = bson_as_canonical_extended_json(doc, NULL);\n printf(\"%s\\n\", str);\n bson_free(str);\n }\n\n /*\n * Release our handles and clean up libmongoc\n */\n\n mongoc_collection_destroy(collection);\n mongoc_uri_destroy(uri);\n mongoc_client_destroy(client);\n mongoc_cleanup();\n\n return EXIT_SUCCESS;\n}\n````\n\n## Deleting a document\n\nTo delete a document, we call `mongoc_collection_delete_one.` We need to pass in a document containing the query to restrict the documents we want to find and delete.\n\n```c\ndoc = bson_new();\n BSON_APPEND_OID(doc, \"_id\", &oid);\n\n if (!mongoc_collection_delete_one(collection, doc, NULL, NULL, &error)) {\n fprintf(stderr, \"Delete failed: %s\\n\", error.message);\n }\n```\n\nThe complete sample follows.\n\n```c\n// delete.c\n#include \"URI.h\"\n#include \n\nint main(int argc, char const *argv[]) {\n // your MongoDB URI connection string\n const char *uri_string = MY_MONGODB_URI;\n // MongoDB URI created from above string\n mongoc_uri_t *uri;\n // MongoDB Client, used to connect to the DB\n mongoc_client_t *client;\n\n // Error management\n bson_error_t error;\n\n mongoc_collection_t *collection;\n char **collection_names;\n unsigned i;\n\n // Object id and BSON doc\n bson_oid_t oid;\n bson_t *doc;\n\n char *str;\n\n /*\n * Required to initialize libmongoc's internals\n */\n 
mongoc_init();\n\n /*\n * Safely create a MongoDB URI object from the given string\n */\n uri = mongoc_uri_new_with_error(uri_string, &error);\n if (!uri) {\n fprintf(stderr,\n \"failed to parse URI: %s\\n\"\n \"error message: %s\\n\",\n uri_string, error.message);\n return EXIT_FAILURE;\n }\n\n /*\n * Create a new client instance, here we use the uri we just built\n */\n client = mongoc_client_new_from_uri(uri);\n if (!client) {\n return EXIT_FAILURE;\n }\n\n /*\n * Register the application name so we can track it in the profile logs\n * on the server. This can also be done from the URI (see other examples).\n */\n mongoc_client_set_appname(client, \"connect-example\");\n\n /*\n * Get a handle on the database \"db_name\" and collection \"coll_name\"\n */\n collection = mongoc_client_get_collection(client, \"sample_mflix\", \"movies\");\n\n // Let's insert one document in this collection!\n doc = bson_new();\n bson_oid_init(&oid, NULL);\n BSON_APPEND_OID(doc, \"_id\", &oid);\n BSON_APPEND_UTF8(doc, \"name\", \"My super new picture\");\n\n if (!mongoc_collection_insert_one(collection, doc, NULL, NULL, &error)) {\n fprintf(stderr, \"%s\\n\", error.message);\n } else {\n printf(\"Document inserted!\");\n /*\n * Print the document as a JSON string.\n */\n str = bson_as_canonical_extended_json(doc, NULL);\n printf(\"%s\\n\", str);\n bson_free(str);\n }\n\n bson_destroy(doc);\n\n // Delete the inserted document!\n\n doc = bson_new();\n BSON_APPEND_OID(doc, \"_id\", &oid);\n\n if (!mongoc_collection_delete_one(collection, doc, NULL, NULL, &error)) {\n fprintf(stderr, \"Delete failed: %s\\n\", error.message);\n } else {\n puts(\"Document deleted!\");\n }\n\n /*\n * Release our handles and clean up libmongoc\n */\n\n mongoc_collection_destroy(collection);\n mongoc_uri_destroy(uri);\n mongoc_client_destroy(client);\n mongoc_cleanup();\n\n return EXIT_SUCCESS;\n}\n````\n\n## Updating a document\n\nFinally, to update a document, we need to provide the query to find the document to update and a document with the fields we want to change.\n\n```c\nquery = BCON_NEW(\"_id\", BCON_OID(&oid));\nupdate =\n BCON_NEW(\"$set\", \"{\", \"name\", BCON_UTF8(\"Super new movie was boring\"),\n \"updated\", BCON_BOOL(true), \"}\");\n\nif (!mongoc_collection_update_one(collection, query, update, NULL, NULL,\n &error)) {\n fprintf(stderr, \"%s\\n\", error.message);\n}\n```\n\nThe complete sample follows.\n\n```c\n// update.c\n#include \"URI.h\"\n#include \n\nint main(int argc, char const *argv[]) {\n // your MongoDB URI connection string\n const char *uri_string = MY_MONGODB_URI;\n // MongoDB URI created from above string\n mongoc_uri_t *uri;\n // MongoDB Client, used to connect to the DB\n mongoc_client_t *client;\n\n // Error management\n bson_error_t error;\n\n mongoc_collection_t *collection;\n char **collection_names;\n unsigned i;\n\n // Object id and BSON doc\n bson_oid_t oid;\n bson_t *doc;\n\n // document to update and query to find it\n bson_t *update = NULL;\n bson_t *query = NULL;\n char *str;\n\n /*\n * Required to initialize libmongoc's internals\n */\n mongoc_init();\n\n /*\n * Safely create a MongoDB URI object from the given string\n */\n uri = mongoc_uri_new_with_error(uri_string, &error);\n if (!uri) {\n fprintf(stderr,\n \"failed to parse URI: %s\\n\"\n \"error message: %s\\n\",\n uri_string, error.message);\n return EXIT_FAILURE;\n }\n\n /*\n * Create a new client instance, here we use the uri we just built\n */\n client = mongoc_client_new_from_uri(uri);\n if (!client) {\n return EXIT_FAILURE;\n 
}\n\n /*\n * Register the application name so we can track it in the profile logs\n * on the server. This can also be done from the URI (see other examples).\n */\n mongoc_client_set_appname(client, \"connect-example\");\n\n /*\n * Get a handle on the database \"db_name\" and collection \"coll_name\"\n */\n collection = mongoc_client_get_collection(client, \"sample_mflix\", \"movies\");\n\n // we create a new BSON Document\n doc = bson_new();\n bson_oid_init(&oid, NULL);\n BSON_APPEND_OID(doc, \"_id\", &oid);\n BSON_APPEND_UTF8(doc, \"name\", \"My super new movie\");\n\n // Then we insert it in the movies collection\n if (!mongoc_collection_insert_one(collection, doc, NULL, NULL, &error)) {\n fprintf(stderr, \"%s\\n\", error.message);\n } else {\n printf(\"Document inserted!\\n\");\n /*\n * Print the document as a JSON string.\n */\n str = bson_as_canonical_extended_json(doc, NULL);\n printf(\"%s\\n\", str);\n bson_free(str);\n\n // now we search for that document to update it\n query = BCON_NEW(\"_id\", BCON_OID(&oid));\n update =\n BCON_NEW(\"$set\", \"{\", \"name\", BCON_UTF8(\"Super new movie was boring\"),\n \"updated\", BCON_BOOL(true), \"}\");\n\n if (!mongoc_collection_update_one(collection, query, update, NULL, NULL,\n &error)) {\n fprintf(stderr, \"%s\\n\", error.message);\n } else {\n printf(\"Document edited!\\n\");\n str = bson_as_canonical_extended_json(update, NULL);\n printf(\"%s\\n\", str);\n }\n }\n\n /*\n * Release our handles and clean up libmongoc\n */\n\n if (doc) {\n bson_destroy(doc);\n }\n if (query) {\n bson_destroy(query);\n }\n if (update) {\n bson_destroy(update);\n }\n\n mongoc_collection_destroy(collection);\n mongoc_uri_destroy(uri);\n mongoc_client_destroy(client);\n mongoc_cleanup();\n\n return EXIT_SUCCESS;\n}\n````\n\n## Wrapping up\n\nWith this article, we covered the installation of the MongoDB C driver, configuring VS Code as our editor and setting up other tools. Then, we created a few console applications that connect to MongoDB Atlas and perform basic CRUD operations.\n\n[Get more information about the C driver, and to try this code, the easiest way would be to register for a free MongoDB account. We can't wait to see what you build next!\n", "format": "md", "metadata": {"tags": ["Atlas", "C"], "pageDescription": "In this article we'll install the MongoDB C driver on macOS, and use it to write some sample console applications that can interact with your MongoDB data by performing basic CRUD operations, using Visual Studio Code.", "contentType": "Tutorial"}, "title": "Getting Started with MongoDB and C", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/use-effectively-realm-in-xamarin-forms", "action": "created", "body": "# How to Use Realm Effectively in a Xamarin.Forms App\n\nTaking care of persistence while developing a mobile application is fundamental nowadays. Even though mobile connection bandwidth, as well as coverage, has been steadily increasing over time, applications still are expected to work offline and in a limited connectivity environment.\n\nThis becomes even more cumbersome when working on applications that require a steady stream of data with the service in order to work effectively, such as collaborative applications.\n\nCaching data coming from a service is difficult, but Realm can ease the burden by providing a very natural way of storing and accessing data. 
This in turn will make the application more responsive and allow the end user to work seamlessly regardless of the connection status.\n\nThe aim of this article is to show how to use Realm effectively, particularly in a Xamarin.Forms app. We will take a look at **SharedGroceries**, an app to share grocery lists with friends and family, backed by a REST API. With this application, we wanted to provide an example that would be simple but also somehow complete, in order to cover different common use cases. The code for the application can be found in the repository here. \n\nBefore proceeding, please note that this is not an introductory article to Realm or Xamarin.Forms, so we expect you to have some familiarity with both. If you want to get an introduction to Realm, you can take a look at the documentation for the Realm .NET SDK. The official documentation for Xamarin.Forms and MVVM are valuable resources to learn about these topics.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by build: Deploy Sample for Free!\n\n## The architecture\n\nIn this section, we are going to discuss the difference between the architecture of an application backed by a classic SQL database and the architecture of an application that uses Realm.\n\n### Classic architecture\n\nIn an app backed by a classic SQL database, the structure of the application will be similar to the one shown in the diagram, where the arrows represent the dependency between different components of the application. The view model requests data from a repository that can retrieve it both from a remote data source (like a web service) when online and from a local database, depending on the situation. The repository also takes care of keeping the local database up to date with all the data retrieved from the web service.\nThis approach presents some issues:\n\n* *Combining data coming from both the remote data source and the local one is difficult.* For example, when opening a view in an application for the first time, it's quite common to show locally cached data while the data coming from a web service is being fetched. In this case, it's not easy to synchronize the retrieval, as well as merge the data coming from both sources to present in the view.\n* *The data coming from the local source is static.* The objects that are retrieved from the database are generally POCOs (plain old class object) and as such, they do not reflect the current state of the data present in the cache. For example, in order to keep the data shown to the user as fresh as possible, there could be a synchronization process in the background that is continuously retrieving data from the web service and inserting it into the database. It's quite complex to make this data available to the final user of the application, though, as with a classic SQL database we can get fresh data only with a new query, and this needs to be done manually, further increasing the need to coordinate different components of the application.\n* *Pagination is hard.* Objects are fully loaded from the database upon retrieval, and this can cause performance issues when working with big datasets. 
In this case, pagination could be required to keep the application performant, but this is not easy to implement.\n\n### Realm architecture\n\nWhen working with Realm, instead, the structure of the application should be similar to the one in the diagram above.\n\nIn this approach, the realm is directly accessed from the view model, and not hidden behind a repository like before. When information is retrieved from the web service, it is inserted into the database, and the view model can update the UI thanks to notifications coming from the realm. In our architecture, we have decided to call *DataService* the entity responsible for the flow of the data in the application.\n\nThere are several advantages to this approach:\n\n* *Single source of truth removes conflicts.* Because data is coming only from the realm, then there are no issues with merging and synchronizing data coming from multiple data sources on the UI. For example, when opening a view in an application for the first time, data coming from the realm is shown straight away. In the meantime, data from the web service is retrieved and inserted into the realm. This will trigger a notification in the view model that will update the UI accordingly.\n* *Objects and collections are live*. This means that the data coming from the realm is always the latest available locally. There is no need to query again the database to get the latest version of the data as with an SQL database.\n* *Objects and collections are lazily loaded.* This means that there is no need to worry about pagination, even when working with huge datasets.\n* *Bindings.* Realm works out of the box with data bindings in Xamarin.Forms, greatly simplifying the use of the MVVM pattern.\n\nAs you can see in the diagram, the line between the view model and the DataService is dashed, to indicate that is optional. Due to the fact that the view model is showing only data coming from the realm, it does not actually need to have a dependency on the DataService, and the retrieval of data coming from the web service can happen independently. For example, the DataService could continuously request data to the web service to keep the data fresh, regardless of what is being shown to the user at a specific time. This continuous request approach can also be used a SQL database solution, but that would require additional synchronization and queries, as the data coming from the database is static. Sometimes, though, data needs to be exchanged with the web service in consequence of specific actions from the user\u2014for example with pull-to-refresh\u2014and in this case, the view model needs to depend on the DataService.\n\n## SharedGroceries app\n\nIn this section, we are going to introduce our example application and how to run it.\n\nSharedGroceries is a simple collaborative app that allows you to share grocery lists with friends and family, backed by a REST API. We have decided to use REST as it is quite a common choice and allowed us to create a service easily. We are not going to focus too much on the REST API service, as it is outside of the scope of this article.\n\nLet's take a look at the application now. The screenshots here are taken from the iOS version of the application only, for simplicity:\n\n* (a) The first page of the application is the login page, where the user can input their username and password to login.\n* (b) After login, the user is presented with the shopping lists they are currently sharing. 
Additionally, the user can add a new list here.\n* (c) When clicking on a row, it goes to the shopping list page that shows the content of such list. From here, the user can add and remove items, rename them, and check/uncheck them when they have been bought.\n\nTo run the app, you first need to run the web service with the REST API. In order to do so, open the `SharedGroceriesWebService` project, and run it. This should start the web service on `http://localhost:5000` by default. After that, you can simply run the `SharedGroceries` project that contains the code for the Xamarin.Forms application. The app is already configured to connect to the web service at the default address.\n\nFor simplicity, we do not cover the case of registering users, and they are all created already on the web service. In particular, there are three predefined users\u2014`alice`, `bob`, and `charlie`, all with password set to `1234`\u2014that can be used to access the app. A couple of shopping lists are also already created in the service to make it easier to test the application.\n\n## Realm in practice\n\nIn this section, we are going to go into detail about the structure of the app and how to use Realm effectively. The structure follows the architecture that was described in the architecture section.\n\n### Rest API\n\nIf we start from the lower part of the architecture schema, we have the `RestAPI` namespace that contains the code responsible for the communication with the web service. In particular, the `RestAPIClient` is making HTTP requests to the `SharedGroceriesWebService`. The data is exchanged in the form of DTOs (Data Transfer Objects), simple objects used for the serialization and deserialization of data over the network. In this simple app, we could avoid using DTOs, and direcly use our Realm model objects, but it's always a good idea to use specific objects just for the data transfer, as this allows us to have independence between the local persistence model and the service model. With this separation, we don't necessarily need to change our local model in case the service model changes.\n\nHere you have the example of one of the DTOs in the app:\n\n``` csharp\npublic class UserInfoDTO\n{\n public Guid Id { get; set; }\n public string Name { get; set; }\n\n public UserInfo ToModel()\n {\n return new UserInfo\n {\n Id = Id,\n Name = Name,\n };\n }\n\n public static UserInfoDTO FromModel(UserInfo user)\n {\n return new UserInfoDTO\n {\n Id = user.Id,\n Name = user.Name,\n };\n }\n}\n```\n\n`UserInfoDTO` is just a container used for the serialization/deserialization of data transmitted in the API calls, and contains methods for converting to and from the local model (in this case, the `UserInfo` class).\n\n### RealmService\n\n`RealmService` is responsible for providing a reference to a realm:\n\n``` csharp\npublic static class RealmService\n{\n public static Realm GetRealm() => Realm.GetInstance();\n}\n```\n\nThe class is quite simple at the moment, as we are using the default configuration for the realm. Having a separate class becomes more useful, though, when we have a more complicated configuration for the realm, and we want avoid having code duplication.\n\nPlease note that the `GetRealm` method is creating a new realm instance when it is called. 
Because realm instances need to be used on the same thread where they have been created, this method can be used from everywhere in our code, without the need to worry about threading issues.\nIt's also important to dispose of realm instances when they are not needed anymore, especially on background threads.\n\n### DataService\n\nThe `DataService` class is responsible for managing the flow of data in the application. When needed, the class requests data from the `RestAPIClient`, and then persists it in the realm. A typical method in this class would look like this:\n\n``` csharp\npublic static async Task RetrieveUsers()\n{\n try\n {\n //Retrieve data from the API\n var users = await RestAPIClient.GetAllUsers();\n\n //Persist data in Realm\n using var realm = RealmService.GetRealm();\n realm.Write(() =>\n {\n realm.Add(users.Select(u => u.ToModel()), update: true);\n });\n }\n catch (HttpRequestException) //Offline/Service is not reachable\n {\n }\n}\n```\n\nThe `RetrieveUsers` method is first retrieving the list of users (in the form of DTOs) from the Rest API, and then inserting them into the realm, after a conversion from DTOs to model objects. Here you can see the use of the `using` declaration to dispose of the realm at the end of the try block.\n\n### Realm models\n\nThe definition of the model for Realm is generally straightforward, as it is possible to use a simple C# class as a model with very little modifications. In the following snippet, you can see the three model classes that we are using in SharedGroceries:\n\n``` csharp\npublic class UserInfo : RealmObject\n{\n [PrimaryKey]\n public Guid Id { get; set; }\n public string Name { get; set; }\n}\n\npublic class GroceryItem : EmbeddedObject\n{\n public string Name { get; set; }\n public bool Purchased { get; set; }\n}\n\npublic class ShoppingList : RealmObject\n{\n [PrimaryKey]\n public Guid Id { get; set; } = Guid.NewGuid();\n public string Name { get; set; }\n public ISet Owners { get; }\n public IList Items { get; }\n}\n```\n\nThe models are pretty simple, and strictly resemble the DTO objects that are retrieved from the web service. One of the few caveats when writing Realm model classes is to remember that collections (lists, sets, and dictionaries) need to be declared with a getter only property and the correspondent interface type (`IList`, `ISet`, `IDictionary`), as it is happening with `ShoppingList`.\n\nAnother thing to notice here is that `GroceryItem` is defined as an `EmbeddedObject`, to indicate that it cannot exist as an independent Realm object (and thus it cannot have a `PrimaryKey`), and has the same lifecycle of the `ShoppingList` that contains it. This implies that `GroceryItem`s get deleted when the parent `ShoppingList` is deleted.\n\n### View models\n\nWe will now go through the two main view models in the app, and discuss the most important points. We are going to skip `LoginViewModel`, as it is not particularly interesting.\n\n#### ShoppingListsCollectionViewModel\n\n`ShoppingListsCollectionViewModel` is the view model backing `ShoppingListsCollectionPage`, the main page of the application, that shows the list of shopping lists for the current user. 
Let's take a look look at the main elements:\n\n``` csharp\npublic class ShoppingListsCollectionViewModel : BaseViewModel\n{\n private readonly Realm realm;\n private bool loaded;\n\n public ICommand AddListCommand { get; }\n public ICommand OpenListCommand { get; }\n\n public IEnumerable Lists { get; }\n\n public ShoppingList SelectedList\n {\n get => null;\n set\n {\n OpenListCommand.Execute(value);\n OnPropertyChanged();\n }\n }\n\n public ShoppingListsCollectionViewModel()\n {\n //1\n realm = RealmService.GetRealm();\n Lists = realm.All();\n\n AddListCommand = new AsyncCommand(AddList);\n OpenListCommand = new AsyncCommand(OpenList);\n }\n\n internal override async void OnAppearing()\n {\n base.OnAppearing();\n\n IDisposable loadingIndicator = null;\n\n try\n {\n //2\n if (!loaded)\n {\n //Page is appearing for the first time, sync with service\n //and retrieve users and shopping lists\n loaded = true;\n loadingIndicator = DialogService.ShowLoading();\n await DataService.TrySync();\n await DataService.RetrieveUsers();\n await DataService.RetrieveShoppingLists();\n }\n else\n {\n DataService.FinishEditing();\n }\n }\n catch\n {\n await DialogService.ShowAlert(\"Error\", \"Error while loading the page\");\n }\n finally\n {\n loadingIndicator?.Dispose();\n }\n }\n\n //3\n private async Task AddList()\n {\n var newList = new ShoppingList();\n newList.Owners.Add(DataService.CurrentUser);\n realm.Write(() =>\n {\n return realm.Add(newList, true);\n });\n\n await OpenList(newList);\n }\n\n private async Task OpenList(ShoppingList list)\n {\n DataService.StartEditing(list.Id);\n await NavigationService.NavigateTo(new ShoppingListViewModel(list));\n }\n}\n```\n\nIn the constructor of the view model (*1*), we are initializing `realm` and also `Lists`. That is a queryable collection of `ShoppingList` elements, representing all the shopping lists of the user. `Lists` is defined as a public property with a getter, and this allows to bind it to the UI, as we can see in `ShoppingListsCollectionPage.xaml`:\n\n``` xml\n\n \n \n \n \n \n \n \n \n \n\n```\n\nThe content of the page is a `ListView` whose `ItemsSource` is bound to `Lists` (*A*). This means that the rows of the `ListView` are actually bound to the elements of `Lists` (that is, a collection of `ShoppingList`). A little bit down, we can see that each of the rows of the `ListView` is a `TextCell` whose text is bound to the variable `Name` of `ShoppingList` (*B*). Together, this means that this page will show a row for each of the shopping lists, with the name of list in the row.\n\nAn important thing to know is that, behind the curtains, Realm collections (like `Lists`, in this case) implement `INotifyCollectionChanged`, and that Realm objects implement `INotifyPropertyChanged`. This means that the UI will get automatically updated whenever there is a change in the collection (for example, by adding or removing elements), as well as whenever there is a change in an object (if a property changes). This greatly simplifies using the MVVM pattern, as implementing those interfaces manually is a tedious and error-prone process.\n\nComing back to `ShoppingListsCollectionViewModel`, in `OnAppearing`, we can see how the Realm collection is actually populated. If the page has not been loaded before (*2*), we call the methods `DataService.RetrieveUsers` and `DataService.RetrieveShoppingLists`, that retrieve the list of users and shopping lists from the service and insert them into the realm. 
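\n\nThe `RetrieveShoppingLists` method is not shown in the article, but it presumably mirrors the `RetrieveUsers` method we saw earlier. The sketch below is an assumption (including the `GetAllShoppingLists` call and the DTO conversion), not the actual implementation:\n\n``` csharp\n//Hedged sketch: the method and DTO names are assumptions based on RetrieveUsers.\npublic static async Task RetrieveShoppingLists()\n{\n    try\n    {\n        //Retrieve shopping list DTOs from the API\n        var lists = await RestAPIClient.GetAllShoppingLists();\n\n        //Persist them in Realm, updating existing lists by primary key\n        using var realm = RealmService.GetRealm();\n        realm.Write(() =>\n        {\n            realm.Add(lists.Select(l => l.ToModel()), update: true);\n        });\n    }\n    catch (HttpRequestException) //Offline/Service is not reachable\n    {\n    }\n}\n```\n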
Due to the fact that Realm collections are live, `Lists` will notify the UI that its contents have changed, and the list on the screen will get populated automatically.\nNote that there are also some more interesting elements here that are related to the synchronization of local data with the web service, but we will discuss them later.\n\nFinally, we have the `AddList` and `OpenList` methods (*3*) that are invoked, respectively, when the *Add* button is clicked or when a list is clicked. The `OpenList` method just passes the clicked `list` to the `ShoppingListViewModel`, while `AddList` first creates a new empty list, adds the current user in the list of owners, adds it to the realm, and then opens the list.\n\n#### ShoppingListViewModel\n\n`ShoppingListViewModel` is the view model backing `ShoppingListPage`, the page that shows the content of a certain list and allows us to modify it:\n\n``` csharp\npublic class ShoppingListViewModel : BaseViewModel\n{\n private readonly Realm realm;\n\n public ShoppingList ShoppingList { get; }\n public IEnumerable CheckedItems { get; }\n public IEnumerable UncheckedItems { get; }\n\n public ICommand DeleteItemCommand { get; }\n public ICommand AddItemCommand { get; }\n public ICommand DeleteCommand { get; }\n\n public ShoppingListViewModel(ShoppingList list)\n {\n realm = RealmService.GetRealm();\n\n ShoppingList = list;\n\n //1\n CheckedItems = ShoppingList.Items.AsRealmQueryable().Where(i => i.Purchased);\n UncheckedItems = ShoppingList.Items.AsRealmQueryable().Where(i => !i.Purchased);\n\n DeleteItemCommand = new Command(DeleteItem);\n AddItemCommand = new Command(AddItem);\n DeleteCommand = new AsyncCommand(Delete);\n }\n\n //2\n private void AddItem()\n {\n realm.Write(() =>\n {\n ShoppingList.Items.Add(new GroceryItem());\n });\n }\n\n private void DeleteItem(GroceryItem item)\n {\n realm.Write(() =>\n {\n ShoppingList.Items.Remove(item);\n });\n }\n\n private async Task Delete()\n {\n var confirmDelete = await DialogService.ShowConfirm(\"Deletion\",\n \"Are you sure you want to delete the shopping list?\");\n\n if (!confirmDelete)\n {\n return;\n }\n\n var listId = ShoppingList.Id;\n realm.Write(() =>\n {\n realm.Remove(ShoppingList);\n });\n\n await NavigationService.GoBack();\n }\n}\n```\n\nAs we will see in a second, the page is binding to two different collections, `CheckedItems` and `UncheckedItems`, that represent, respectively, the list of items that have been checked (purchased) and those that haven't been. In order to obtain those, `AsRealmQueryable` is called on `ShoppingList.Items`, to convert the `IList` to a Realm-backed query, that can be queried with LINQ.\n\nThe xaml code for the page can be found in `ShoppingListPage.xaml`. Here is the main content:\n\n``` xml\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n```\n\nThis page is composed by an external `StackLayout` (A) that contains:\n\n* (B) An `Editor` whose `Text` is bound to `ShoppingList.Name`. This allows the user to read and eventually modify the name of the list.\n* (C) A bindable `StackLayout` that is bound to `UncheckedItems`. This is the list of items that need to be purchased. 
Each of the rows of the `StackLayout` are bound to an element of `UncheckedItems`, and thus to a `GroceryItem`.\n* (D) A `Button` that allows us to add new elements to the list.\n* (E) A separator (the `BoxView`) and a `Label` that describe how many elements of the list have been ticked, thanks to the binding to `CheckedItems.Count`.\n* (F ) A bindable `StackLayout` that is bound to `CheckedItems`. This is the list of items that have been already purchased. Each of the rows of the `StackLayout` are bound to an element of `CheckedItems`, and thus to a `GroceryItem`.\n\nIf we focus our attention on on the `DataTemplate` of the first bindable `StackLayout`, we can see that each row is composed by three elements:\n\n* (H) A `Checkbox` that is bound to `Purchased` of `GroceryItem`. This allows us to check and uncheck items.\n* (I) An `Entry` that is bound to `Name` of `GroceryItem`. This allows us to change the name of the items.\n* (J) A `Button` that, when clicked, executed the `DeleteItemCommand` command on the view model, with `GroceryItem` as argument. This allows us to delete an item.\n\nPlease note that for simplicity, we have decided to use a bindable `StackLayout` to display the items of the shopping list. In a production application, it could be necessary to use a view that supports virtualization, such as a `ListView` or `CollectionView`, depending on the expected amount of elements in the collection.\n\nAn interesting thing to notice is that all the bindings are actually two-ways, so they go both from the view model to the page and from the page to the view model. This, for example, allows the user to modify the name of a shopping list, as well as check and uncheck items. The view elements are bound directly to Realm objects and collections (`ShoppingList`, `UncheckedItems`, and `CheckedItems`), and so all these changes are automatically persisted in the realm.\n\nTo make a more complete example about what is happening, let us focus on checking/unchecking items. When the user checks an item, the property `Purchased` of a `GroceryItem` is set to true, thanks to the bindings. This means that this item is no more part of `UncheckedItems` (defined as the collection of `GroceryItem` with `Purchased` set to false in the query (*1*)), and thus it will disappear from the top list. Now the item will be part of `CheckedItems` (defined as the collection of `GroceryItem` with `Purchased` set to true in the query (*1*)), and as such it will appear in the bottom list. Given that the number of elements in `CheckedItems` has changed, the text in `Label` (*E*) will be also updated.\n\nComing back to the view model, we then have the `AddItem`, `DeleteItem`, and `Delete` methods (*2*) that are invoked, respectively, when an item is added, when an item is removed, and when the whole list needs to be removed. The methods are pretty straightforward, and at their core just execute a write transaction modifying or deleting `ShoppingList`.\n\n## Editing and synchronization\n\nIn this section, we are going to discuss how shopping list editing is done in the app, and how to synchronize it back to the service.\n\nIn a mobile application, there are generally two different ways of approaching *editing*:\n\n* *Save button*. The user modifies what they need in the application, and then presses a save button to persist their changes when satisfied.\n* *Continuous save*. 
The changes by the user are continually saved by the application, so there is no need for an explicit save button.\n\nGenerally, the second choice is more common in modern applications, and for this reason, it is also the approach that we decided to use in our example.\n\nThe main editing in `SharedGroceries` happens in the `ShoppingListPage`, where the user can modify or delete shopping lists. As we discussed before, all the changes that are done by the user are automatically persisted in the realm thanks to the two-way bindings, and so the next step is to synchronize those changes back to the web service. Even though the changes are saved as they happen, we decided to synchronize those to the service only after the user is finished with modifying a certain list, and went away from the `ShoppingListPage`. This allows us to send the whole updated list to the service, instead of a series of individual updates. This is a choice that we made to keep the application simple, but obviously, the requirements could be different in another case.\n\nIn order to implement the synchronization mechanism we have discussed, we needed to keep track of which shopping list was being edited at a certain time and which shopping lists have already been edited (and so can be sent to the web service). This is implemented in the following methods from the `DataService` class:\n\n``` csharp\npublic static void StartEditing(Guid listId)\n{\n PreferencesManager.SetEditingListId(listId);\n}\n\npublic static void FinishEditing()\n{\n var editingListId = PreferencesManager.GetEditingListId();\n\n if (editingListId == null)\n {\n return;\n }\n\n //1\n PreferencesManager.RemoveEditingListId();\n //2\n PreferencesManager.AddReadyForSyncListId(editingListId.Value);\n\n //3\n Task.Run(TrySync);\n}\n\npublic static async Task TrySync()\n{\n //4\n var readyForSyncListsId = PreferencesManager.GetReadyForSyncListsId();\n\n //5\n var editingListId = PreferencesManager.GetEditingListId();\n\n foreach (var readyForSyncListId in readyForSyncListsId)\n {\n //6\n if (readyForSyncListId == editingListId) //The list is still being edited\n {\n continue;\n }\n\n //7\n var updateSuccessful = await UpdateShoppingList(readyForSyncListId);\n if (updateSuccessful)\n {\n //8\n PreferencesManager.RemoveReadyForSyncListId(readyForSyncListId);\n }\n }\n}\n```\n\nThe method `StartEditing` is called when opening a list in `ShoppingListsCollectionViewModel`:\n\n``` csharp\nprivate async Task OpenList(ShoppingList list)\n{\n DataService.StartEditing(list.Id);\n await NavigationService.NavigateTo(new ShoppingListViewModel(list));\n}\n```\n\nThis method persists to disk the `Id` of the list that is being currently edited.\n\nThe method `FinishEditing` is called in `OnAppearing` in `ShoppingListsCollectionViewModel`:\n\n``` csharp\ninternal override async void OnAppearing()\n{\n base.OnAppearing();\n\n if (!loaded)\n {\n ....\n await DataService.TrySync();\n ....\n }\n else\n {\n DataService.FinishEditing();\n }\n }\n\n}\n```\n\nThis method is called when `ShoppingListsCollectionPage` appears on screen, and so the user possibly went back from the `ShoppingListsPage` after finishing editing. This method removes the identifier of the shopping list that is currently being edited (if it exists)(*1*), and adds it to the collection of identifiers for lists that are ready to be synced (*2*). 
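\n\n(As a side note, the `PreferencesManager` used by these methods is not included in the article. A minimal, hypothetical version backed by Xamarin.Essentials `Preferences` could look like the sketch below, where the key names and the comma-separated encoding are assumptions.)\n\n``` csharp\n//Hypothetical sketch: key names and the comma-separated encoding are assumptions.\npublic static class PreferencesManager\n{\n    private const string EditingListIdKey = \"editingListId\";\n    private const string ReadyForSyncKey = \"readyForSyncListIds\";\n\n    public static void SetEditingListId(Guid listId) =>\n        Preferences.Set(EditingListIdKey, listId.ToString());\n\n    public static Guid? GetEditingListId()\n    {\n        var value = Preferences.Get(EditingListIdKey, string.Empty);\n        return string.IsNullOrEmpty(value) ? (Guid?)null : Guid.Parse(value);\n    }\n\n    public static void RemoveEditingListId() => Preferences.Remove(EditingListIdKey);\n\n    public static void AddReadyForSyncListId(Guid listId)\n    {\n        var ids = GetReadyForSyncListsId();\n        ids.Add(listId);\n        Preferences.Set(ReadyForSyncKey, string.Join(\",\", ids));\n    }\n\n    public static void RemoveReadyForSyncListId(Guid listId)\n    {\n        var ids = GetReadyForSyncListsId();\n        ids.Remove(listId);\n        Preferences.Set(ReadyForSyncKey, string.Join(\",\", ids));\n    }\n\n    public static List<Guid> GetReadyForSyncListsId()\n    {\n        var value = Preferences.Get(ReadyForSyncKey, string.Empty);\n        return value.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries)\n            .Select(Guid.Parse)\n            .ToList();\n    }\n}\n```\n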
Finally, it calls the method `TrySync` (*3*) in another thread.\n\nFinally, the method `TrySync` is called both in `DataService.FinishEditing` and in `ShoppingListsCollectionViewModel.OnAppearing`, as we have seen before. This method takes care of synchronizing all the local changes back to the web service:\n\n* It first retrieves the ids of the lists that are ready to be synced (*4*), and then the id of the (eventual) list being edited at the moment (*5*).\n* Then, for each of the identifiers of the lists ready to be synced (`readyForSyncListsId`), if the list is being edited right now (*6*), it just skips this iteration of the loop. Otherwise, it updates the shopping list on the service (*7*).\n* Finally, if the update was successful, it removes the identifier from the collection of lists that have been edited (*8*).\n\nThis method is called also in `OnAppearing` of `ShoppingListsCollectionViewModel` if this is the first time the corresponding page is loaded. We do so as we need to be sure to synchronize data back to the service when the application starts, in case there have been connection issues previously.\n\nOverall, this is probably a very simplified approach to synchronization, as we did not consider several problems that need to be addressed in a production application:\n\n* What happens if the service is not reachable? What is our retry policy?\n* How do we resolve conflicts on the service when data is being modified by multiple users?\n* How do we respect consistency of the data? How do we make sure that the changes coming from the web service are not overriding the local changes?\n\nThose are only part of the possible issues that can arise when working with synchronization, especially in a collaborative applications like ours.\n\n## Conclusion\n\nIn this article, we have shown how Realm can be used effectively in a Xamarin.Forms app, thanks to notifications, bindings, and live objects.\n\nThe use of Realm as the source of truth for the application greatly simplified the architecture of SharedGroceries and the automatic bindings, together with notifications, also streamlined the implementation of the MVVM pattern.\n\nNevertheless, synchronization in a collaborative app such as SharedGroceries is still hard. In our example, we have covered only part of the possible synchronization issues that can arise, but you can already see the amount of effort necessary to ensure that everything stays in sync between the mobile application and the web service.\n\nIn a following article, we are going to see how we can use Realm Sync to greatly simplify the architecture of the application and resolve our synchronization issues.", "format": "md", "metadata": {"tags": ["C#", "Realm", "Xamarin"], "pageDescription": "This article shows how to effectively use Realm in a Xamarin.Forms app using recommended patterns. ", "contentType": "Article"}, "title": "How to Use Realm Effectively in a Xamarin.Forms App", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/map-terms-concepts-sql-mongodb", "action": "created", "body": "# Mapping Terms and Concepts from SQL to MongoDB\n\nPerhaps, like me, you grew up on SQL databases. You can skillfully\nnormalize a database, and, after years of working with tables, you think\nin rows and columns as well.\n\nBut now you've decided to dip your toe into the wonderful world of NoSQL\ndatabases, and you're exploring MongoDB. Perhaps you're wondering what\nyou need to do differently. 
Can you just translate your rows and columns\ninto fields and values and call it a day? Do you really need to change\nthe way you think about storing your data?\n\nWe'll answer those questions and more in this three-part article series.\nBelow is a summary of what we'll cover today:\n\n- Meet Ron\n- Relational Database and Non-Relational Databases\n- The Document Model\n- Example Documents\n- Mapping Terms and Concepts from SQL to MongoDB\n- Wrap Up\n\n>\n>\n>This article is based on a presentation I gave at MongoDB World and\n>MongoDB.local Houston entitled \"From SQL to NoSQL: Changing Your\n>Mindset.\"\n>\n>If you prefer videos over articles, check out the\n>recording. Slides are available\n>here.\n>\n>\n\n## Meet Ron\n\nI'm a huge fan of the best tv show ever created: Parks and Recreation.\nYes, I wrote that previous sentence as if it were a fact, because it\nactually is.\n\nThis is Ron. Ron likes strong women, bacon, and staying off the grid.\n\nIn season 6, Ron discovers Yelp. Ron thinks Yelp\nis amazing, because he loves the idea of reviewing places he's been.\n\nHowever, Yelp is way too \"on the grid\" for Ron. He pulls out his beloved\ntypewriter and starts typing reviews that he intends to send via snail\nmail.\n\nRon writes some amazing reviews. Below is one of my favorites.\n\nUnfortunately, I see three big problems with his plan:\n\n1. Snail mail is way slower than posting the review to Yelp where it\n will be instantly available for anyone to read.\n2. The business he is reviewing may never open the letter he sends as\n they may just assume it's junk mail.\n3. No one else will benefit from his review. (These are exactly the\n type of reviews I like to find on Amazon!)\n\n### Why am I talking about Ron?\n\nOk, so why am I talking about Ron in the middle of this article about\nmoving from SQL to MongoDB?\n\nRon saw the value of Yelp and was inspired by the new technology.\nHowever, he brought his old-school ways with him and did not realize the\nfull value of the technology.\n\nThis is similar to what we commonly see as people move from a SQL\ndatabase to a NoSQL database such as MongoDB. They love the idea of\nMongoDB, and they are inspired by the power of the flexible document\ndata model. However, they frequently bring with them their SQL mindsets\nand don't realize the full value of MongoDB. In fact, when people don't\nchange the way they think about modeling their data, they struggle and\nsometimes fail.\n\nDon't be like Ron. (At least in this case, because, in most cases, Ron\nis amazing.) Don't be stuck in your SQL ways. Change your mindset and\nrealize the full value of MongoDB.\n\nBefore we jump into how to change your mindset, let's begin by answering\nsome common questions about non-relational databases and discussing the\nbasics of how to store data in MongoDB.\n\n## Relational Database and Non-Relational Databases\n\nWhen I talk with developers, they often ask me questions like, \"What use\ncases are good for MongoDB?\" Developers often have this feeling that\nnon-relational\ndatabases\n(or NoSQL databases) like MongoDB are for specific, niche use cases.\n\nMongoDB is a general-purpose database that can be used in a variety of\nuse cases across nearly every industry. 
For more details, see MongoDB\nUse Cases, MongoDB\nIndustries, and the MongoDB Use\nCase Guidance\nWhitepaper\nthat includes a summary of when you should evaluate other database\noptions.\n\nAnother common question is, \"If my data is relational, why would I use a\nnon-relational\ndatabase?\"\n\nMongoDB is considered a non-relational database. However, that doesn't\nmean MongoDB doesn't store relationship data well. (I know I just used a\ndouble-negative. Stick with me.) MongoDB stores relationship data in a\ndifferent way. In fact, many consider the way MongoDB stores\nrelationship data to be more intuitive and more reflective of the\nreal-world relationships that are being modeled.\n\nLet's take a look at how MongoDB stores data.\n\n## The Document Model\n\nInstead of tables, MongoDB stores data in documents. No, Clippy, I'm not\ntalking about Microsoft Word Documents.\n\nI'm talking about BSON\ndocuments. BSON is a\nbinary representation of JSON (JavaScript Object Notation)\ndocuments.\nDocuments will likely feel comfortable to you if you've used any of the\nC-family of programming languages such as C, C#, Go, Java, JavaScript,\nPHP, or Python.\n\nDocuments typically store information about one object as well as any\ninformation related to that object. Related documents are grouped\ntogether in collections. Related collections are grouped together and\nstored in a database.\n\nLet's discuss some of the basics of a document. Every document begins\nand ends with curly braces.\n\n``` json\n{\n}\n```\n\nInside of those curly braces, you'll find an unordered set of\nfield/value pairs that are separated by commas.\n\n``` json\n{\n field: value,\n field: value,\n field: value\n}\n```\n\nThe fields are strings that describe the pieces of data being stored.\n\nThe values can be any of the BSON data types.\nBSON has a variety of data\ntypes including\nDouble, String, Object, Array, Binary Data, ObjectId, Boolean, Date,\nNull, Regular Expression, JavaScript, JavaScript (with scope), 32-bit\nInteger, Timestamp, 64-bit Integer, Decimal128, Min Key, and Max Key.\nWith all of these types available for you to use, you have the power to\nmodel your data as it exists in the real world.\n\nEvery document is required to have a field named\n\\_id. The\nvalue of `_id` must be unique for each document in a collection, is\nimmutable, and can be of any type other than an array.\n\n## Example Documents\n\nOk, that's enough definitions. Let's take a look at a real example, and\ncompare and contrast how we would model the data in SQL vs MongoDB.\n\n### Storing Leslie's Information\n\nLet's say we need to store information about a user named Leslie. We'll\nstore her contact information including her first name, last name, cell\nphone number, and city. We'll also store some extra information about\nher including her location, hobbies, and job history.\n\n#### Storing Contact Information\n\nLet's begin with Leslie's contact information. When using SQL, we'll\ncreate a table named `Users`. We can create columns for each piece of\ncontact information we need to store: first name, last name, cell phone\nnumber, and city. To ensure we have a unique way to identify each row,\nwe'll include an ID column.\n\n**Users**\n\n| ID | first_name | last_name | cell | city |\n|-----|------------|-----------|------------|--------|\n| 1 | Leslie | Yepp | 8125552344 | Pawnee |\n\nNow let's store that same information in MongoDB. 
We can create a new\ndocument for Leslie where we'll add field/value pairs for each piece of\ncontact information we need to store. We'll use `_id` to uniquely\nidentify each document. We'll store this document in a collection named\n`Users`.\n\nUsers\n\n``` json\n{\n \"_id\": 1,\n \"first_name\": \"Leslie\",\n \"last_name\": \"Yepp\",\n \"cell\": \"8125552344\",\n \"city\": \"Pawnee\"\n}\n```\n\n#### Storing Latitude and Longitude\n\nNow that we've stored Leslie's contact information, let's store the\ncoordinates of her current location.\n\nWhen using SQL, we'll need to split the latitude and longitude between\ntwo columns.\n\n**Users**\n| ID | first_name | last_name | cell | city | latitude | longitude |\n|-----|------------|-----------|------------|--------|-----------|------------|\n| 1 | Leslie | Yepp | 8125552344 | Pawnee | 39.170344 | -86.536632 |\n\nMongoDB has an array data type, so we can store the latitude and\nlongitude together in a single field.\n\nUsers\n\n``` json\n{\n \"_id\": 1,\n \"first_name\": \"Leslie\",\n \"last_name\": \"Yepp\",\n \"cell\": \"8125552344\",\n \"city\": \"Pawnee\",\n \"location\": -86.536632, 39.170344 ]\n}\n```\n\nBonus Tip: MongoDB has a few different built-in ways to visualize\nlocation data including the [schema analyzer in MongoDB\nCompass\nand the Geospatial Charts in MongoDB\nCharts.\nI generated the map below with just a few clicks in MongoDB Charts.\n\n#### Storing Lists of Information\n\nWe're successfully storing Leslie's contact information and current\nlocation. Now let's store her hobbies.\n\nWhen using SQL, we could choose to add more columns to the Users table.\nHowever, since a single user could have many hobbies (meaning we need to\nrepresent a one-to-many relationship), we're more likely to create a\nseparate table just for hobbies. Each row in the table will contain\ninformation about one hobby for one user. When we need to retrieve\nLeslie's hobbies, we'll join the `Users` table and our new `Hobbies`\ntable.\n\n**Hobbies**\n| ID | user_id | hobby |\n|-----|---------|----------------|\n| 10 | 1 | scrapbooking |\n| 11 | 1 | eating waffles |\n| 12 | 1 | working |\n\nSince MongoDB supports arrays, we can simply add a new field named\n\"hobbies\" to our existing document. The array can contain as many or as\nfew hobbies as we need (assuming we don't exceed the 16 megabyte\ndocument size\nlimit).\nWhen we need to retrieve Leslie's hobbies, we don't need to do an\nexpensive join to bring the data together; we can simply retrieve her\ndocument in the `Users` collection.\n\nUsers\n\n``` json\n{\n \"_id\": 1,\n \"first_name\": \"Leslie\",\n \"last_name\": \"Yepp\",\n \"cell\": \"8125552344\",\n \"city\": \"Pawnee\",\n \"location\": -86.536632, 39.170344 ],\n \"hobbies\": [\"scrapbooking\", \"eating waffles\", \"working\"]\n}\n```\n\n##### Storing Groups of Related Information\n\nLet's say we also need to store Leslie's job history.\n\nJust as we did with hobbies, we're likely to create a separate table\njust for job history information. Each row in the table will contain\ninformation about one job for one user.\n\n**JobHistory**\n| ID | user_id | job_title | year_started |\n|-----|---------|----------------------------------------------------|--------------|\n| 20 | 1 | \"Deputy Director\" | 2004 |\n| 21 | 1 | \"City Councillor\" | 2012 |\n| 22 | 1 | \"Director, National Parks Service, Midwest Branch\" | 2014 |\n\nSo far in this article, we've used arrays in MongoDB to store\ngeolocation data and a list of Strings. 
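\n\nOne practical payoff of storing hobbies in an array is at query time. The following is just an illustrative mongosh query (assuming the `Users` collection shown above): finding every user with a given hobby is a single query on the array field, with no join required.\n\n``` javascript\n// Illustrative mongosh query: matches any document whose hobbies array contains the value.\ndb.Users.find({ hobbies: \"eating waffles\" })\n\n// The rough SQL equivalent needs a join:\n// SELECT u.* FROM Users u JOIN Hobbies h ON h.user_id = u.ID WHERE h.hobby = 'eating waffles';\n```\n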
Arrays can contain values of any\ntype, including objects. Let's create a document for each job Leslie has\nheld and store those documents in an array.\n\nUsers\n\n``` json\n{\n \"_id\": 1,\n \"first_name\": \"Leslie\",\n \"last_name\": \"Yepp\",\n \"cell\": \"8125552344\",\n \"city\": \"Pawnee\",\n \"location\": [ -86.536632, 39.170344 ],\n \"hobbies\": [\"scrapbooking\", \"eating waffles\", \"working\"],\n \"jobHistory\": [\n {\n \"title\": \"Deputy Director\",\n \"yearStarted\": 2004\n },\n {\n \"title\": \"City Councillor\",\n \"yearStarted\": 2012\n },\n {\n \"title\": \"Director, National Parks Service, Midwest Branch\",\n \"yearStarted\": 2014\n }\n ]\n}\n```\n\n### Storing Ron's Information\n\nNow that we've decided how we'll store information about our users in\nboth tables and documents, let's store information about Ron. Ron will\nhave almost all of the same information as Leslie. However, Ron does his\nbest to stay off the grid, so he will not be storing his location in the\nsystem.\n\n#### Skipping Location Data in SQL\n\nLet's begin by examining how we would store Ron's information in the\nsame tables that we used for Leslie's. When using SQL, we are required\nto input a value for every cell in the table. We will represent Ron's\nlack of location data with `NULL`. The problem with using `NULL` is that\nit's unclear whether the data does not exist or if the data is unknown,\nso many people discourage the use of `NULL`.\n\n**Users**\n| ID | first_name | last_name | cell | city | latitude | longitude |\n|-----|------------|--------------|------------|--------|-----------|------------|\n| 1 | Leslie | Yepp | 8125552344 | Pawnee | 39.170344 | -86.536632 |\n| 2 | Ron | Swandaughter | 8125559347 | Pawnee | NULL | NULL |\n\n**Hobbies**\n| ID | user_id | hobby |\n|-----|---------|----------------|\n| 10 | 1 | scrapbooking |\n| 11 | 1 | eating waffles |\n| 12 | 1 | working |\n| 13 | 2 | woodworking |\n| 14 | 2 | fishing |\n\n**JobHistory**\n| ID | user_id | job_title | year_started |\n|-----|---------|----------------------------------------------------|--------------|\n| 20 | 1 | \"Deputy Director\" | 2004 |\n| 21 | 1 | \"City Councillor\" | 2012 |\n| 22 | 1 | \"Director, National Parks Service, Midwest Branch\" | 2014 |\n| 23 | 2 | \"Director\" | 2002 |\n| 24 | 2 | \"CEO, Kinda Good Building Company\" | 2014 |\n| 25 | 2 | \"Superintendent, Pawnee National Park\" | 2018 |\n\n#### Skipping Location Data in MongoDB\n\nIn MongoDB, we have the option of representing Ron's lack of location\ndata in two ways: we can omit the `location` field from the document or\nwe can set `location` to `null`. Best practices suggest that we omit the\n`location` field to save space. You can choose if you want omitted\nfields and fields set to `null` to represent different things in your\napplications.\n\nUsers\n\n``` json\n{\n \"_id\": 2,\n \"first_name\": \"Ron\",\n \"last_name\": \"Swandaughter\",\n \"cell\": \"8125559347\",\n \"city\": \"Pawnee\",\n \"hobbies\": [\"woodworking\", \"fishing\"],\n \"jobHistory\": [\n {\n \"title\": \"Director\",\n \"yearStarted\": 2002\n },\n {\n \"title\": \"CEO, Kinda Good Building Company\",\n \"yearStarted\": 2014\n },\n {\n \"title\": \"Superintendent, Pawnee National Park\",\n \"yearStarted\": 2018\n }\n ]\n}\n```\n\n### Storing Lauren's Information\n\nLet's say we are feeling pretty good about our data models and decide to\nlaunch our apps using them.\n\nThen we discover we need to store information about a new user: Lauren\nBurhug. 
She's a fourth grade student who Ron teaches about government.\nWe need to store a lot of the same information about Lauren as we did\nwith Leslie and Ron: her first name, last name, city, and hobbies.\nHowever, Lauren doesn't have a cell phone, location data, or job\nhistory. We also discover that we need to store a new piece of\ninformation: her school.\n\n#### Storing New Information in SQL\n\nLet's begin by storing Lauren's information in the SQL tables as they\nalready exist.\n\n**Users**\n| ID | first_name | last_name | cell | city | latitude | longitude |\n|-----|------------|--------------|------------|--------|-----------|------------|\n| 1 | Leslie | Yepp | 8125552344 | Pawnee | 39.170344 | -86.536632 |\n| 2 | Ron | Swandaughter | 8125559347 | Pawnee | NULL | NULL |\n| 3 | Lauren | Burhug | NULL | Pawnee | NULL | NULL |\n\n**Hobbies**\n| ID | user_id | hobby |\n|-----|---------|----------------|\n| 10 | 1 | scrapbooking |\n| 11 | 1 | eating waffles |\n| 12 | 1 | working |\n| 13 | 2 | woodworking |\n| 14 | 2 | fishing |\n| 15 | 3 | soccer |\n\nWe have two options for storing information about Lauren's school. We\ncan choose to add a column to the existing Users table, or we can create\na new table. Let's say we choose to add a column named \"school\" to the\nUsers table. Depending on our access rights to the database, we may need\nto talk to the DBA and convince them to add the field. Most likely, the\ndatabase will need to be taken down, the \"school\" column will need to be\nadded, NULL values will be stored in every row in the Users table where\na user does not have a school, and the database will need to be brought\nback up.\n\n#### Storing New Information in MongoDB\n\nLet's examine how we can store Lauren's information in MongoDB.\n\nUsers\n\n``` json\n{\n \"_id\": 3,\n \"first_name\": \"Lauren\",\n \"last_name\": \"Burhug\",\n \"city\": \"Pawnee\",\n \"hobbies\": [\"soccer\"],\n \"school\": \"Pawnee Elementary\"\n}\n```\n\nAs you can see above, we've added a new field named \"school\" to Lauren's\ndocument. We do not need to make any modifications to Leslie's document\nor Ron's document when we add the new \"school\" field to Lauren's\ndocument. MongoDB has a flexible schema, so every document in a\ncollection does not need to have the same fields.\n\nFor those of you with years of experience using SQL databases, you might\nbe starting to panic at the idea of a flexible schema. (I know I started\nto panic a little when I was introduced to the idea.)\n\nDon't panic! This flexibility can be hugely valuable as your\napplication's requirements evolve and change.\n\nMongoDB provides [schema\nvalidation so\nyou can lock down your schema as much or as little as you'd like when\nyou're ready.\n\n## Mapping Terms and Concepts from SQL to MongoDB\n\nNow that we've compared how you model data in SQL and MongoDB, let's be a bit more explicit with the terminology. Let's map terms and concepts from SQL to MongoDB.\n\n**Row \u21d2 Document**\n\nA row maps roughly to a document.\n\nDepending on how you've normalized your data, rows across several tables could map to a single document. In our examples above, we saw that rows for Leslie in the `Users`, `Hobbies`, and `JobHistory` tables mapped to a single document.\n\n**Column \u21d2 Field**\n\nA column maps roughly to a field. For example, when we modeled Leslie's data, we had a `first_name` column in the `Users` table and a `first_name` field in a User document.\n\n**Table \u21d2 Collection**\n\nA table maps roughly to a collection. 
Recall that a collection is a group of documents. Continuing with our example above, our ``Users`` table maps to our ``Users`` collection.\n \n\n \n**Database \u21d2 Database**\n\nThe term ``database`` is used fairly similarly in both SQL and MongoDB.\n Groups of tables are stored in SQL databases just as groups of\n collections are stored in MongoDB databases.\n\n**Index \u21d2 Index**\n\nIndexes provide fairly similar functionality in both SQL and MongoDB.\n Indexes are data structures that optimize queries. You can think of them\n like an index that you'd find in the back of a book; indexes tell the\n database where to look for specific pieces of information. Without an\n index, all information in a table or collection must be searched.\n\n New MongoDB users often forget how much indexes can impact performance.\n If you have a query that is taking a long time to run, be sure you have\n an index to support it. For example, if we know we will be commonly\n searching for users by first or last name, we should add a text index on\n the first and last name fields.\n\n Remember: indexes slow down write performance but speed up read\n performance. For more information on indexes including the types of\n indexes that MongoDB supports, see the MongoDB\n Manual.\n\n**View \u21d2 View**\n\nViews are fairly similar in both SQL and MongoDB. In MongoDB, a view is\n defined by an aggregation pipeline. The results of the view are not\n stored\u2014they are generated every time the view is queried.\n\n To learn more about views, see the MongoDB\n Manual.\n\n MongoDB added support for On-Demand Materialized Views in version 4.2.\n To learn more, see the MongoDB\n Manual.\n\n**Join \u21d2 Embedding**\n\nWhen you use SQL databases, joins are fairly common. You normalize your\n data to prevent data duplication, and the result is that you commonly\n need to join information from multiple tables in order to perform a\n single operation in your application\n\n In MongoDB, we encourage you to model your data differently. Our rule of\n thumb is *Data that is accessed together should be stored together*. If\n you'll be frequently creating, reading, updating, or deleting a chunk of\n data together, you should probably be storing it together in a document\n rather than breaking it apart across several documents.\n\nYou can use embedding to model data that you may have broken out into separate tables when using SQL. When we modeled Leslie's data for MongoDB earlier, we saw that we embedded her job history in her User document instead of creating a separate ``JobHistory`` document.\n\n For more information, see the MongoDB Manual's pages on modeling one-to-one relationships with embedding and modeling one-to-many relationships with embedding.\n\n**Join \u21d2 Database References**\n\nAs we discussed in the previous section, embedding is a common solution\n for modeling data in MongoDB that you may have split across one or more\n tables in a SQL database.\n\n However, sometimes embedding does not make sense. Let's say we wanted to\n store information about our Users' employers like their names,\n addresses, and phone numbers. The number of Users that could be\n associated with an employer is unbounded. If we were to embed\n information about an employer in a ``User`` document, the employer data\n could be replicated hundreds or perhaps thousands of times. 
Instead, we\n can create a new ``Employers`` collection and create a database\n reference between ``User`` documents and ``Employer`` documents.\n\n For more information on modeling one-to-many relationships with\n database references, see the MongoDB\n Manual.\n\n**Left Outer Join \u21d2 $lookup (Aggregation Pipeline)**\n\nWhen you need to pull all of the information from one table and join it\n with any matching information in a second table, you can use a left\n outer join in SQL.\n\n MongoDB has a stage similar to a left outer join that you can use with\n the aggregation framework.\n\n For those not familiar with the aggregation framework, it allows you to\n analyze your data in real-time. Using the framework, you can create an\n aggregation pipeline that consists of one or more stages. Each stage\n transforms the documents and passes the output to the next stage.\n\n $lookup is an aggregation framework stage that allows you to perform a\n left outer join to an unsharded collection in the same database. \n\n For more information, see the MongoDB Manual's pages on the aggregation\n framework and $lookup.\n\nMongoDB University has a fantastic free course on the aggregation\n pipeline that will walk you in detail through using ``$lookup``: M121:\n The MongoDB Aggregation Framework.\n\n*Recursive Common Table Expressions \u21d2 $graphLookup (Aggregation Pipeline)**\n\n When you need to query hierarchical data like a company's organization\n chart in SQL, we can use recursive common table expressions.\n\n MongoDB provides an aggregation framework stage that is similar to\n recursive common table expressions: ``$graphLookup``. ``$graphLookup``\n performs a recursive search on a collection.\n\n For more information, see the MongoDB Manual's page on $graphLookup and MongoDB University's free course on the aggregation\n framework.\n\n**Multi-Record ACID Transaction \u21d2 Multi-Document ACID Transaction**\n\nFinally, let's talk about ACID transactions. Transactions group database operations together so they\n all succeed or none succeed. In SQL, we call these multi-record ACID\n transactions. In MongoDB, we call these multi-document ACID\n transactions.\n\nFor more information, see the MongoDB Manual.\n\n## Wrap Up\n\n We've just covered a lot of concepts and terminology. The three term\n mappings I recommend you internalize as you get started using MongoDB\n are: \n \n * Rows map to documents. \n * Columns map to fields. \n * Tables map to collections.\n\n I created the following diagram you can use as a reference in the future\n as you begin your journey using MongoDB.\n\n Be on the lookout for the next post in this series where we'll discuss\n the top four reasons you should use MongoDB.\n", "format": "md", "metadata": {"tags": ["MongoDB", "SQL"], "pageDescription": "Learn how SQL terms and concepts map to MongoDB.", "contentType": "Article"}, "title": "Mapping Terms and Concepts from SQL to MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/multi-modal-image-vector-search", "action": "created", "body": "# Build an Image Search Engine With Python & MongoDB\n\n# Building an Image Search Engine With Python & MongoDB\n\nI can still remember when search started to work in Google Photos \u2014 the platform where I store all of the photos I take on my cellphone. 
It seemed magical to me that some kind of machine learning technique could allow me to describe an image in my vast collection of photos and have the platform return that image to me, along with any similar images.\n\nOne of the techniques used for this is image classification, where a neural network is used to identify objects and even people in a scene, and the image is tagged with this data. Another technique \u2014 which is, if anything, more powerful \u2014 is the ability to generate a vector embedding for the image using an embedding model that works with both text and images.\n\nUsing a multi-modal embedding model like this allows you to generate a vector that can be stored and efficiently indexed in MongoDB Atlas, and then when you wish to retrieve an image, the same embedding model can be used to generate a vector that is then used to search for images that are similar to the description. It's almost like magic.\n\n## Multi-modal embedding models\n\nA multi-modal embedding model is a machine learning model that encodes information from various data types, like text and images, into a common vector space. It helps link different types of data for tasks such as text-to-image matching or translating between modalities.\n\nThe benefit of this is that text and images can be indexed in the same way, allowing images to be searched for by providing either text or another image. You could even search for an item of text with an image, but I can't think of a reason you'd want to do that. The downside of multi-modal models is that they are very complex to produce and thus aren't quite as \"clever\" as some of the single-mode models that are currently being produced.\n\nIn this tutorial, I'll show you how to use the clip-ViT-L-14\u00a0model, which encodes both text and images into the same vector space. Because we're using Python, I'll install the model directly into my Python environment to run locally. In production, you probably wouldn't want to have your embedding model running directly inside your web application because it too tightly couples your model, which requires a powerful GPU, to the rest of your application, which will usually be mostly IO-bound. In that case, you can host an appropriate model on Hugging Face\u00a0or a similar platform.\n\n### Describing the search engine\n\nThis example search engine is going to be very much a proof of concept. All the code is available in a Jupyter Notebook, and I'm going to store all my images locally on disk. In production, you'd want to use an object storage service like Amazon's S3.\n\nIn the same way, in production, you'd either want to host the model using a specialized service or some dedicated setup on the appropriate hardware, whereas I'm going to download and run the model locally.\n\nIf you've got an older machine, it may take a while to generate the vectors, but I found on a four-year-old Intel MacBook Pro I could generate about 1,000 embeddings in 30 minutes, or my MacBook Air M2 can do the same in about five minutes! Either way, maybe go away and make yourself a cup of coffee when the notebook gets to that step.\n\nThe search engine will use the same vector model to encode queries (which are text) into the same vector space that was used to encode image data, which means that a phrase describing an image should appear in a similar location to the image\u2019s location in the vector space. 
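\n\nTo make that concrete, here is a tiny illustration (not part of the notebook, and `photo.jpg` is just a placeholder path): encode a caption and an image with the same model, and the two vectors can be compared directly with cosine similarity.\n\n```python\nfrom PIL import Image\nfrom sentence_transformers import SentenceTransformer, util\n\n# Placeholder path; any local image will do.\nmodel = SentenceTransformer(\"clip-ViT-L-14\")\ntext_embedding = model.encode(\"a corgi standing in the snow\")\nimage_embedding = model.encode(Image.open(\"photo.jpg\"))\n\n# Both embeddings live in the same 768-dimensional space, so a text query\n# can be scored directly against image vectors.\nprint(util.cos_sim(text_embedding, image_embedding))\n```\n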
This is the magic of multi-modal vector models!\n\n## Getting ready to run the notebook\n\nAll of the code described in this tutorial is hosted on GitHub.\n\nThe first thing you'll want to do is create a virtual environment using your favorite technique. I tend to use venv, which comes with Python.\n\nOnce you've done that, install dependencies with:\n\n```shell\npip install -r requirements.txt\n```\n\nNext, you'll need to set an environment variable, `MONGODB_URI`, containing the connection string for your MongoDB cluster.\n\n```python\n# Set the value below to your cluster:\nexport MONGODB_URI=\"mongodb+srv://image_search_demo:my_password_not_yours@sandbox.abcde.mongodb.net/image_search_demo?retryWrites=true&w=majority\"\n```\n\nOne more thing you'll need is an \"images\" directory, containing some images to index! I downloaded \u00a0Kaggle's ImageNet 1000 (mini) dataset, which contains lots of images at around 4GB, but you can use a different dataset if you prefer. The notebook searches the \"images\" directory recursively, so you don't need to have everything at the top level.\n\nThen, you can fire up the notebook with:\n\n```shell\njupyter notebook \"Image Search.ipynb\"\n```\n\n## Understanding the code\n\nIf you've set up the notebook as described above, you should be able to execute it and follow the explanations in the notebook. In this tutorial, I'm going to highlight the most important code, but I'm not going to reproduce it all here, as I worked hard to make the notebook understandable on its own.\n\n## Setting up the collection\n\nFirst, let's configure a collection with an appropriate vector search index. In Atlas, if you connect to a cluster, you can configure vector search indexes in the Atlas Search tab, but I prefer to configure indexes in my code to keep everything self-contained.\n\nThe following code can be run many times but will only create the collection and associated search index on the first run. 
This is helpful if you want to run the notebook several times!\n\n```python\nclient = MongoClient(MONGODB_URI)\ndb = client.get_database(DATABASE_NAME)\n\n# Ensure the collection exists, because otherwise you can't add a search index to it.\ntry:\n\u00a0 \u00a0 db.create_collection(IMAGE_COLLECTION_NAME)\nexcept CollectionInvalid:\n\u00a0 \u00a0 # This is raised when the collection already exists.\n\u00a0 \u00a0 print(\"Images collection already exists\")\n\n# Add a search index (if it doesn't already exist):\ncollection = db.get_collection(IMAGE_COLLECTION_NAME)\nif len(list(collection.list_search_indexes(name=\"default\"))) == 0:\n\u00a0 \u00a0 print(\"Creating search index...\")\n\u00a0 \u00a0 collection.create_search_index(\n\u00a0 \u00a0 \u00a0 \u00a0 SearchIndexModel(\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 {\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"mappings\": {\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"dynamic\": True,\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"fields\": {\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"embedding\": {\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"dimensions\": 768,\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"similarity\": \"cosine\",\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"type\": \"knnVector\",\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 }\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 },\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 }\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 },\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 name=\"default\",\n\u00a0 \u00a0 \u00a0 \u00a0 )\n\u00a0 \u00a0 )\n\u00a0 \u00a0 print(\"Done.\")\nelse:\n\u00a0 \u00a0 print(\"Vector search index already exists\")\n```\n\nThe most important part of the code above is the configuration being passed to `create_search_index`:\n\n```python\n{\n\u00a0 \u00a0 \"mappings\": {\n\u00a0 \u00a0 \u00a0 \u00a0 \"dynamic\": True,\n\u00a0 \u00a0 \u00a0 \u00a0 \"fields\": {\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"embedding\": {\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"dimensions\": 768,\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"similarity\": \"cosine\",\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"type\": \"knnVector\",\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 }\n\u00a0 \u00a0 \u00a0 \u00a0 },\n\u00a0 \u00a0 }\n}\n```\n\nThis specifies that the index will index all fields in the document (because \"dynamic\" is set to \"true\") and that the \"embedding\" field should be indexed as a vector embedding, using cosine similarity. Currently, \"knnVector\" is the only kind supported by Atlas. 
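\n\nThe `dimensions` value has to match the output size of the embedding model exactly. If you ever switch models and want to double-check that size, one quick way is to encode a test string and inspect the shape; this snippet is only a sanity check, not part of the notebook:\n\n```python\nfrom sentence_transformers import SentenceTransformer\n\n# The same model is downloaded in the next section anyway.\nmodel = SentenceTransformer(\"clip-ViT-L-14\")\nprint(model.encode(\"any text\").shape)  # -> (768,)\n```\n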
The dimension of the vector is set to 768 because that is the number of vector dimensions used by the CLIP model.\n\n## Loading the CLIP model\n\nThe following line of code may not look like much, but the first time you execute it, it will download the clip-ViT-L-14 model, which is around 2GB:\n\n```python\n# Load CLIP model.\n# This may print out warnings, which can be ignored.\nmodel = SentenceTransformer(\"clip-ViT-L-14\")\n```\n\n## Generating and storing a vector embedding\n\nGiven a path to an image file, an embedding for that image can be generated with the following code:\n\n```python\nemb = model.encode(Image.open(path))\n```\n\nIn this line of code, `model`\u00a0is the SentenceTransformer I created above, and `Image`\u00a0comes from the Pillow\u00a0library and is used to load the image data.\n\nWith the embedding vector, a new document can be created with the code below:\n\n```python\ncollection.insert_one(\n\u00a0 \u00a0 {\n\u00a0 \u00a0 \u00a0 \u00a0 \"_id\": re.sub(\"images/\", \"\", path),\n\u00a0 \u00a0 \u00a0 \u00a0 \"embedding\": emb.tolist(),\n\u00a0 \u00a0 }\n)\n```\n\nI'm only storing the path to the image (as a unique identifier) and the embedding vector. In a real-world application, I'd store any image metadata my application required and probably a URL to an S3 object containing the image data itself.\n\n**Note:** Remember that vector queries can be combined with any other query technique you'd normally use in MongoDB! That's the huge advantage you get using Atlas Vector Search \u2014 it's part of MongoDB Atlas, so you can query and transform your data any way you want and even combine it with the power of Atlas Search for free text queries.\n\nThe Jupyter Notebook loads images in a loop \u2014 by default, it loads 10 images \u2014 but that's not nearly enough to see the benefits of an image search engine, so you'll probably want to change `NUMBER_OF_IMAGES_TO_LOAD`\u00a0to 1000 and run the image load code block again.\n\n## Searching for images\n\nOnce you've indexed a good number of images, it's time to test how well it works. I've defined two functions that can be used for this. The first function, `display_images`, takes a list of documents and displays the associated images in a grid. I'm not including the code here because it's a utility function.\n\nThe second function, `image_search`, takes a text phrase, encodes it as a vector embedding, and then uses MongoDB's `$vectorSearch`\u00a0aggregation stage to look up images that are closest to that vector location, limiting the result to the nine closest documents:\n\n```python\ndef image_search(search_phrase):\n\u00a0 \u00a0 \"\"\"\n\u00a0 \u00a0 Use MongoDB Vector Search to search for a matching image.\n\n\u00a0 \u00a0 The search_phrase is first converted to a vector embedding using\n\u00a0 \u00a0 the model loaded earlier in the Jupyter notebook. 
The vector is then used\n\u00a0 \u00a0 to search MongoDB for matching images.\n\u00a0 \u00a0 \"\"\"\n\u00a0 \u00a0 emb = model.encode(search_phrase)\n\u00a0 \u00a0 cursor = collection.aggregate(\n\u00a0 \u00a0 \u00a0 \u00a0 \n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 {\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"$vectorSearch\": {\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"index\": \"default\",\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"path\": \"embedding\",\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"queryVector\": emb.tolist(),\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"numCandidates\": 100,\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"limit\": 9,\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 }\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 },\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 {\"$project\": {\"_id\": 1, \"score\": {\"$meta\": \"vectorSearchScore\"}}},\n\u00a0 \u00a0 \u00a0 \u00a0 ]\n\u00a0 \u00a0 )\n\n\u00a0 \u00a0 return list(cursor)\n```\n\nThe `$project`\u00a0stage adds a \"score\" field that shows how similar each document was to the original query vector. 1.0 means \"exactly the same,\" whereas 0.0 would mean that the returned image was totally dissimilar.\n\nWith the display_images function and the image_search function, I can search for images of \"sharks in the water\":\n\n```python\ndisplay_images(image_search(\"sharks in the water\"))\n```\n\nOn my laptop, I get the following grid of nine images, which is pretty good!\n\n![A screenshot, showing a grid containing 9 photos of sharks][1]\n\nWhen I first tried the above search out, I didn't have enough images loaded, so the query above included a photo of a corgi standing on gray tiles. That wasn't a particularly close match! After I loaded some more images to fix the results of the shark query, I could still find the corgi image by searching for \"corgi on snow\" \u2014 it's the second image below. Notice that none of the images exactly match the query, but a couple are definitely corgis, and several are standing in the snow.\n\n```python\ndisplay_images(image_search(\"corgi in the snow\"))\n```\n\n![A grid of photos. Most photos contain either a dog or snow, or both. One of the dogs is definitely a corgi.][2]\n\nOne of the things I really love about vector search is that it's \"semantic\" so I can search by something quite nebulous, like \"childhood.\"\n\n```\ndisplay_images(image_search(\"childhood\"))\n```\n\n![A grid of photographs of children or toys or things like colorful erasers.][3]\n\nMy favorite result was when I searched for \"ennui\" (a feeling of listlessness and dissatisfaction arising from a lack of occupation or excitement) which returned photos of bored animals (and a teenager)!\n\n```\ndisplay_images(image_search(\"ennui\"))\n```\n![Photographs of animals looking bored and slightly sad, except for one photo which contains a young man looking bored and slightly sad.][4]\n\n## Next steps\n\nI hope you found this tutorial as fun to read as I did to write!\n\nIf you wanted to run this model in production, you would probably want to use a hosting service like [Hugging Face, but I really like the ability to install and try out a model on my laptop with a single line of code. 
Once the embedding generation, which is processor-intensive and thus a blocking task, is delegated to an API call, it would be easier to build a FastAPI wrapper around the functionality in this code. Then, you could build a powerful web interface around it and deploy your own customized image search engine.\n\nThis example also doesn't demonstrate much of MongoDB's query capabilities. The power of vector search with MongoDB Atlas is the ability to combine it with all the power of MongoDB's aggregation framework to query and aggregate your data. If I have some time, I may extend this example to filter by criteria like the date of each photo and maybe allow photos to be tagged manually, or to be automatically grouped into albums.\n\n## Further reading\n\n- MongoDB Atlas Vector Search documentation\n- $vectorSearch Aggregation Stage\n- What are Multi-Modal Models?\u00a0from Towards Data Science\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt09221cf1894adc69/65ba2289c600052b89d5b78e/image3.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt47aea2f5cb468ee2/65ba22b1c600057f4ed5b793/image4.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt04170aa66faebd34/65ba23355cdaec53863b9467/image1.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd06c1f2848a13c6f/65ba22f05f12ed09ffe2282c/image2.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "Jupyter"], "pageDescription": "Build a search engine for photographs with MongoDB Atlas Vector Search and a multi-modal embedding model.", "contentType": "Tutorial"}, "title": "Build an Image Search Engine With Python & MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/java-multi-doc-acid-transactions", "action": "created", "body": "# Java - MongoDB Multi-Document ACID Transactions\n\n## Introduction\n\nIntroduced in June 2018 with MongoDB 4.0, multi-document ACID transactions are now supported.\n\nBut wait... Does that mean MongoDB did not support transactions before that?\nNo, MongoDB has consistently supported transactions, initially in the form of single-document transactions.\n\nMongoDB 4.0 extends these transactional guarantees across multiple documents, multiple statements, multiple collections,\nand multiple databases. 
What good would a database be without any form of transactional data integrity guarantee?\n\nBefore delving into the details, you can access the code and experiment with multi-document ACID\ntransactions.\n\n``` bash\ngit clone git@github.com:mongodb-developer/java-quick-start.git\n```\n\n## Quick start\n\n### Last update: February 28th, 2024\n\n- Update to Java 21\n- Update Java Driver to 5.0.0\n- Update `logback-classic` to 1.2.13\n\n### Requirements\n\n- Java 21\n- Maven 3.8.7\n- Docker (optional)\n\n### Step 1: start MongoDB\n\nGet started with MongoDB Atlas and get a free cluster.\n\nOr you can start an ephemeral single node replica set using Docker for testing quickly:\n\n```bash\ndocker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:7.0.5 --replSet=RS && sleep 3 && docker exec mongo mongosh --quiet --eval \"rs.initiate();\"\n```\n\n### Step 2: start Java\n\nThis demo contains two main programs: `ChangeStreams.java` and `Transactions.java`.\n\n* The `ChangeSteams` class enables you to receive notifications of any data changes within the two collections used in\n this tutorial.\n* The `Transactions` class is the demo itself.\n\nYou need two shells to run them.\n\nFirst shell:\n\n```\nmvn compile exec:java -Dexec.mainClass=\"com.mongodb.quickstart.transactions.ChangeStreams\" -Dmongodb.uri=\"mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority\"\n```\n\nSecond shell:\n\n```\nmvn compile exec:java -Dexec.mainClass=\"com.mongodb.quickstart.transactions.Transactions\" -Dmongodb.uri=\"mongodb+srv://USERNAME:PASSWORD@cluster0-abcde.mongodb.net/test?w=majority\"\n```\n\n> Note: Always execute the `ChangeStreams` program first because it creates the `product` collection with the\n> required JSON Schema.\n\nLet\u2019s compare our existing single-document transactions with MongoDB 4.0\u2019s ACID-compliant multi-document transactions\nand see how we can leverage this new feature with Java.\n\n## Prior to MongoDB 4.0\n\nEven in MongoDB 3.6 and earlier, every write operation is represented as a **transaction scoped to the level of an\nindividual document** in the storage layer. 
Because the document model brings together related data that would otherwise\nbe modeled across separate parent-child tables in a tabular schema, MongoDB\u2019s atomic single-document operations provide\ntransaction semantics that meet the data integrity needs of the majority of applications.\n\nEvery typical write operation modifying multiple documents actually happens in several independent transactions: one for\neach document.\n\nLet\u2019s take an example with a very simple stock management application.\n\nFirst of all, I need a MongoDB replica set, so please follow the\ninstructions given above to start MongoDB.\n\nNow, let\u2019s insert the following documents into a `product` collection:\n\n```js\ndb.product.insertMany(\n { \"_id\" : \"beer\", \"price\" : NumberDecimal(\"3.75\"), \"stock\" : NumberInt(5) },\n { \"_id\" : \"wine\", \"price\" : NumberDecimal(\"7.5\"), \"stock\" : NumberInt(3) }\n])\n```\n\nLet\u2019s imagine there is a sale on, and we want to offer our customers a 20% discount on all our products.\n\nBut before applying this discount, we want to monitor when these operations are happening in MongoDB with [Change\nStreams.\n\nExecute the following in a MongoDB shell:\n\n```js\ncursor = db.product.watch({$match: {operationType: \"update\"}}]);\nwhile (!cursor.isClosed()) {\n let next = cursor.tryNext()\n while (next !== null) {\n printjson(next);\n next = cursor.tryNext()\n }\n}\n```\n\nKeep this shell on the side, open another MongoDB shell, and apply the discount:\n\n```js\nRS [direct: primary] test> db.product.updateMany({}, {$mul: {price:0.8}})\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 2,\n modifiedCount: 2,\n upsertedCount: 0\n}\nRS [direct: primary] test> db.product.find().pretty()\n[\n { _id: 'beer', price: Decimal128(\"3.00000000000000000\"), stock: 5 },\n { _id: 'wine', price: Decimal128(\"6.0000000000000000\"), stock: 3 }\n]\n```\n\nAs you can see, both documents were updated with a single command line but not in a single transaction.\nHere is what we can see in the change stream shell:\n\n```js\n{\n _id: {\n _data: '8265580539000000012B042C0100296E5A1004A7F55A5B35BD4C7DB2CD56C6CFEA9C49463C6F7065726174696F6E54797065003C7570646174650046646F63756D656E744B657900463C5F6964003C6265657200000004'\n },\n operationType: 'update',\n clusterTime: Timestamp({ t: 1700267321, i: 1 }),\n wallTime: ISODate(\"2023-11-18T00:28:41.601Z\"),\n ns: {\n db: 'test',\n coll: 'product'\n },\n documentKey: {\n _id: 'beer'\n },\n updateDescription: {\n updatedFields: {\n price: Decimal128(\"3.00000000000000000\")\n },\n removedFields: [],\n truncatedArrays: []\n }\n}\n{\n _id: {\n _data: '8265580539000000022B042C0100296E5A1004A7F55A5B35BD4C7DB2CD56C6CFEA9C49463C6F7065726174696F6E54797065003C7570646174650046646F63756D656E744B657900463C5F6964003C77696E6500000004'\n },\n operationType: 'update',\n clusterTime: Timestamp({ t: 1700267321, i: 2 }),\n wallTime: ISODate(\"2023-11-18T00:28:41.601Z\"),\n ns: {\n db: 'test',\n coll: 'product'\n },\n documentKey: {\n _id: 'wine'\n },\n updateDescription: {\n updatedFields: {\n price: Decimal128(\"6.0000000000000000\")\n },\n removedFields: [],\n truncatedArrays: []\n }\n}\n```\n\nAs you can see, the cluster times (see the `clusterTime` key) of the two operations are different: The operations\noccurred during the same second but the counter of the timestamp has been incremented by one.\n\nThus, here each document is updated one at a time, and even if this happens really fast, someone else could read the\ndocuments while the update 
is running and see only one of the two products with the discount.\n\nMost of the time, this is something you can tolerate in your MongoDB database because, as much as possible, we try to\nembed tightly linked (or related) data in the same document.\n\nConsequently, two updates on the same document occur within a single transaction:\n\n```js\nRS [direct: primary] test> db.product.updateOne({_id: \"wine\"},{$inc: {stock:1}, $set: {description : \"It's the best wine on Earth\"}})\n{\n acknowledged: true,\n insertedId: null,\n matchedCount: 1,\n modifiedCount: 1,\n upsertedCount: 0\n}\nRS [direct: primary] test> db.product.findOne({_id: \"wine\"})\n{\n _id: 'wine',\n price: Decimal128(\"6.0000000000000000\"),\n stock: 4,\n description: 'It's the best wine on Earth'\n}\n```\n\nHowever, sometimes, you cannot model all of your related data in a single document, and there are a lot of valid reasons\nfor choosing not to embed documents.\n\n## MongoDB 4.0 with multi-document ACID transactions\n\nMulti-document [ACID transactions in MongoDB closely resemble what\nyou may already be familiar with in traditional relational databases.\n\nMongoDB\u2019s transactions are a conversational set of related operations that must atomically commit or fully roll back with\nall-or-nothing execution.\n\nTransactions are used to make sure operations are atomic even across multiple collections or databases. Consequently,\nwith snapshot isolation reads, another user can only observe either all the operations or none of them.\n\nLet\u2019s now add a shopping cart to our example.\n\nFor this example, two collections are required because we are dealing with two different business entities: the stock\nmanagement and the shopping cart each client can create during shopping. The lifecycles of each document in these\ncollections are different.\n\nA document in the product collection represents an item I\u2019m selling. This contains the current price of the product and\nthe current stock. I created a POJO to represent\nit: Product.java.\n\n```js\n{ \"_id\" : \"beer\", \"price\" : NumberDecimal(\"3\"), \"stock\" : NumberInt(5) }\n```\n\nA shopping cart is created when a client adds their first item in the cart and is removed when the client proceeds to\ncheck out or leaves the website. I created a POJO to represent\nit: Cart.java.\n\n```js\n{\n \"_id\" : \"Alice\",\n \"items\" : \n {\n \"price\" : NumberDecimal(\"3\"),\n \"productId\" : \"beer\",\n \"quantity\" : NumberInt(2)\n }\n ]\n}\n```\n\nThe challenge here resides in the fact that I cannot sell more than I possess: If I have five beers to sell, I cannot have\nmore than five beers distributed across the different client carts.\n\nTo ensure that, I have to make sure that the operation creating or updating the client cart is atomic with the stock\nupdate. That\u2019s where the multi-document transaction comes into play.\nThe transaction must fail in case someone tries to buy something I do not have in my stock. 
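Before building that out, here is a minimal sketch of what such a transaction can look like with the Java driver's callback API, `ClientSession.withTransaction()`, which also retries transient transaction errors for you. This is not the demo code (the demo below uses the lower-level `startTransaction()`/`commitTransaction()` calls), and the plain `Document` collections here are just placeholders:

```java
import com.mongodb.TransactionOptions;
import com.mongodb.WriteConcern;
import com.mongodb.client.ClientSession;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import static com.mongodb.client.model.Filters.*;
import static com.mongodb.client.model.Updates.inc;

public class TransactionSketch {

    // Assumes 'client' is connected to a replica set and both collections already exist.
    static void aliceBuysTwoMoreBeers(MongoClient client,
                                      MongoCollection<Document> carts,
                                      MongoCollection<Document> products) {
        TransactionOptions txnOptions = TransactionOptions.builder()
                                                          .writeConcern(WriteConcern.MAJORITY)
                                                          .build();
        try (ClientSession session = client.startSession()) {
            session.withTransaction(() -> {
                // Both updates commit together, or not at all.
                carts.updateOne(session,
                                and(eq("_id", "Alice"), elemMatch("items", eq("productId", "beer"))),
                                inc("items.$.quantity", 2));
                products.updateOne(session, eq("_id", "beer"), inc("stock", -2));
                return null;
            }, txnOptions);
        }
    }
}
```

Either style works with the driver; the demo below sticks to the explicit API.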
I will add a constraint\non the product stock:\n\n```js\ndb.product.drop()\ndb.createCollection(\"product\", {\n validator: {\n $jsonSchema: {\n bsonType: \"object\",\n required: [ \"_id\", \"price\", \"stock\" ],\n properties: {\n _id: {\n bsonType: \"string\",\n description: \"must be a string and is required\"\n },\n price: {\n bsonType: \"decimal\",\n minimum: 0,\n description: \"must be a non-negative decimal and is required\"\n },\n stock: {\n bsonType: \"int\",\n minimum: 0,\n description: \"must be a non-negative integer and is required\"\n }\n }\n }\n }\n})\n```\n\n> Note that this is already included in the Java code of the `ChangeStreams` class.\n\nTo monitor our example, we are going to use MongoDB [Change Streams\nthat were introduced in MongoDB 3.6.\n\nIn ChangeStreams.java,\nI am going to monitor the database `test` which contains our two collections. It'll print each\noperation with its associated cluster time.\n\n```java\npackage com.mongodb.quickstart.transactions;\n\nimport com.mongodb.ConnectionString;\nimport com.mongodb.MongoClientSettings;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoDatabase;\nimport com.mongodb.client.model.CreateCollectionOptions;\nimport com.mongodb.client.model.ValidationAction;\nimport com.mongodb.client.model.ValidationOptions;\nimport org.bson.BsonDocument;\n\nimport static com.mongodb.client.model.changestream.FullDocument.UPDATE_LOOKUP;\n\npublic class ChangeStreams {\n\n private static final String CART = \"cart\";\n private static final String PRODUCT = \"product\";\n\n public static void main(String] args) {\n ConnectionString connectionString = new ConnectionString(System.getProperty(\"mongodb.uri\"));\n MongoClientSettings clientSettings = MongoClientSettings.builder()\n .applyConnectionString(connectionString)\n .build();\n try (MongoClient client = MongoClients.create(clientSettings)) {\n MongoDatabase db = client.getDatabase(\"test\");\n System.out.println(\"Dropping the '\" + db.getName() + \"' database.\");\n db.drop();\n System.out.println(\"Creating the '\" + CART + \"' collection.\");\n db.createCollection(CART);\n System.out.println(\"Creating the '\" + PRODUCT + \"' collection with a JSON Schema.\");\n db.createCollection(PRODUCT, productJsonSchemaValidator());\n System.out.println(\"Watching the collections in the DB \" + db.getName() + \"...\");\n db.watch()\n .fullDocument(UPDATE_LOOKUP)\n .forEach(doc -> System.out.println(doc.getClusterTime() + \" => \" + doc.getFullDocument()));\n }\n }\n\n private static CreateCollectionOptions productJsonSchemaValidator() {\n String jsonSchema = \"\"\"\n {\n \"$jsonSchema\": {\n \"bsonType\": \"object\",\n \"required\": [\"_id\", \"price\", \"stock\"],\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\",\n \"description\": \"must be a string and is required\"\n },\n \"price\": {\n \"bsonType\": \"decimal\",\n \"minimum\": 0,\n \"description\": \"must be a non-negative decimal and is required\"\n },\n \"stock\": {\n \"bsonType\": \"int\",\n \"minimum\": 0,\n \"description\": \"must be a non-negative integer and is required\"\n }\n }\n }\n }\"\"\";\n return new CreateCollectionOptions().validationOptions(\n new ValidationOptions().validationAction(ValidationAction.ERROR)\n .validator(BsonDocument.parse(jsonSchema)));\n }\n}\n```\n\nIn this example, we have five beers to sell.\n\nAlice wants to buy two beers, but we are **not** going to use a multi-document transaction for this. 
We will\nobserve in the change streams two operations at two different cluster times:\n\n- One creating the cart\n- One updating the stock\n\nThen, Alice adds two more beers to her cart, and we are going to use a transaction this time. The result in the change\nstream will be two operations happening at the same cluster time.\n\nFinally, she will try to order two extra beers but the jsonSchema validator will fail the product update (as there is only\none in stock) and result in a\nrollback. We will not see anything in the change stream.\nBelow is the source code\nfor [Transaction.java:\n\n```java\npackage com.mongodb.quickstart.transactions;\n\nimport com.mongodb.*;\nimport com.mongodb.client.*;\nimport com.mongodb.quickstart.transactions.models.Cart;\nimport com.mongodb.quickstart.transactions.models.Product;\nimport org.bson.BsonDocument;\nimport org.bson.codecs.configuration.CodecRegistry;\nimport org.bson.codecs.pojo.PojoCodecProvider;\nimport org.bson.conversions.Bson;\n\nimport java.math.BigDecimal;\nimport java.util.ArrayList;\nimport java.util.Collections;\nimport java.util.List;\n\nimport static com.mongodb.client.model.Filters.*;\nimport static com.mongodb.client.model.Updates.inc;\nimport static org.bson.codecs.configuration.CodecRegistries.fromProviders;\nimport static org.bson.codecs.configuration.CodecRegistries.fromRegistries;\n\npublic class Transactions {\n\n private static final BigDecimal BEER_PRICE = BigDecimal.valueOf(3);\n private static final String BEER_ID = \"beer\";\n private static final Bson filterId = eq(\"_id\", BEER_ID);\n private static final Bson filterAlice = eq(\"_id\", \"Alice\");\n private static final Bson matchBeer = elemMatch(\"items\", eq(\"productId\", \"beer\"));\n private static final Bson incrementTwoBeers = inc(\"items.$.quantity\", 2);\n private static final Bson decrementTwoBeers = inc(\"stock\", -2);\n private static MongoCollection cartCollection;\n private static MongoCollection productCollection;\n\n public static void main(String] args) {\n ConnectionString connectionString = new ConnectionString(System.getProperty(\"mongodb.uri\"));\n CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());\n CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);\n MongoClientSettings clientSettings = MongoClientSettings.builder()\n .applyConnectionString(connectionString)\n .codecRegistry(codecRegistry)\n .build();\n try (MongoClient client = MongoClients.create(clientSettings)) {\n MongoDatabase db = client.getDatabase(\"test\");\n cartCollection = db.getCollection(\"cart\", Cart.class);\n productCollection = db.getCollection(\"product\", Product.class);\n transactionsDemo(client);\n }\n }\n\n private static void transactionsDemo(MongoClient client) {\n clearCollections();\n insertProductBeer();\n printDatabaseState();\n System.out.println(\"\"\"\n ######### NO TRANSACTION #########\n Alice wants 2 beers.\n We have to create a cart in the 'cart' collection and update the stock in the 'product' collection.\n The 2 actions are correlated but can not be executed at the same cluster time.\n Any error blocking one operation could result in stock error or a sale of beer that we can't fulfill as we have no stock.\n ------------------------------------\"\"\");\n aliceWantsTwoBeers();\n sleep();\n removingBeersFromStock();\n System.out.println(\"####################################\\n\");\n printDatabaseState();\n sleep();\n System.out.println(\"\"\"\n 
######### WITH TRANSACTION #########\n Alice wants 2 extra beers.\n Now we can update the 2 collections simultaneously.\n The 2 operations only happen when the transaction is committed.\n ------------------------------------\"\"\");\n aliceWantsTwoExtraBeersInTransactionThenCommitOrRollback(client);\n sleep();\n System.out.println(\"\"\"\n ######### WITH TRANSACTION #########\n Alice wants 2 extra beers.\n This time we do not have enough beers in stock so the transaction will rollback.\n ------------------------------------\"\"\");\n aliceWantsTwoExtraBeersInTransactionThenCommitOrRollback(client);\n }\n\n private static void aliceWantsTwoExtraBeersInTransactionThenCommitOrRollback(MongoClient client) {\n ClientSession session = client.startSession();\n try {\n session.startTransaction(TransactionOptions.builder().writeConcern(WriteConcern.MAJORITY).build());\n aliceWantsTwoExtraBeers(session);\n sleep();\n removingBeerFromStock(session);\n session.commitTransaction();\n } catch (MongoException e) {\n session.abortTransaction();\n System.out.println(\"####### ROLLBACK TRANSACTION #######\");\n } finally {\n session.close();\n System.out.println(\"####################################\\n\");\n printDatabaseState();\n }\n }\n\n private static void removingBeersFromStock() {\n System.out.println(\"Trying to update beer stock : -2 beers.\");\n try {\n productCollection.updateOne(filterId, decrementTwoBeers);\n } catch (MongoException e) {\n System.out.println(\"######## MongoException ########\");\n System.out.println(\"##### STOCK CANNOT BE NEGATIVE #####\");\n throw e;\n }\n }\n\n private static void removingBeerFromStock(ClientSession session) {\n System.out.println(\"Trying to update beer stock : -2 beers.\");\n try {\n productCollection.updateOne(session, filterId, decrementTwoBeers);\n } catch (MongoException e) {\n System.out.println(\"######## MongoException ########\");\n System.out.println(\"##### STOCK CANNOT BE NEGATIVE #####\");\n throw e;\n }\n }\n\n private static void aliceWantsTwoBeers() {\n System.out.println(\"Alice adds 2 beers in her cart.\");\n cartCollection.insertOne(new Cart(\"Alice\", List.of(new Cart.Item(BEER_ID, 2, BEER_PRICE))));\n }\n\n private static void aliceWantsTwoExtraBeers(ClientSession session) {\n System.out.println(\"Updating Alice cart : adding 2 beers.\");\n cartCollection.updateOne(session, and(filterAlice, matchBeer), incrementTwoBeers);\n }\n\n private static void insertProductBeer() {\n productCollection.insertOne(new Product(BEER_ID, 5, BEER_PRICE));\n }\n\n private static void clearCollections() {\n productCollection.deleteMany(new BsonDocument());\n cartCollection.deleteMany(new BsonDocument());\n }\n\n private static void printDatabaseState() {\n System.out.println(\"Database state:\");\n printProducts(productCollection.find().into(new ArrayList<>()));\n printCarts(cartCollection.find().into(new ArrayList<>()));\n System.out.println();\n }\n\n private static void printProducts(List products) {\n products.forEach(System.out::println);\n }\n\n private static void printCarts(List carts) {\n if (carts.isEmpty()) {\n System.out.println(\"No carts...\");\n } else {\n carts.forEach(System.out::println);\n }\n }\n\n private static void sleep() {\n System.out.println(\"Sleeping 1 second...\");\n try {\n Thread.sleep(1000);\n } catch (InterruptedException e) {\n System.err.println(\"Oops!\");\n e.printStackTrace();\n }\n }\n}\n```\n\nHere is the console of the change stream:\n\n```\nDropping the 'test' database.\nCreating the 'cart' collection.\nCreating 
the 'product' collection with a JSON Schema.\nWatching the collections in the DB test...\nTimestamp{value=7304460075832180737, seconds=1700702141, inc=1} => Document{{_id=beer, price=3, stock=5}}\nTimestamp{value=7304460075832180738, seconds=1700702141, inc=2} => Document{{_id=Alice, items=[Document{{price=3, productId=beer, quantity=2}}]}}\nTimestamp{value=7304460080127148033, seconds=1700702142, inc=1} => Document{{_id=beer, price=3, stock=3}}\nTimestamp{value=7304460088717082625, seconds=1700702144, inc=1} => Document{{_id=Alice, items=[Document{{price=3, productId=beer, quantity=4}}]}}\nTimestamp{value=7304460088717082625, seconds=1700702144, inc=1} => Document{{_id=beer, price=3, stock=1}}\n```\n\nAs you can see here, we only get five operations because the two last operations were never committed to the database,\nand therefore, the change stream has nothing to show.\n\n- The first operation is the product collection initialization (create the product document for the beers).\n- The second and third operations are the first two beers Alice adds to her cart *without* a multi-doc transaction. Notice\n that the two operations do *not* happen at the same cluster time.\n- The two last operations are the two additional beers Alice adds to her cart *with* a multi-doc transaction. Notice\n that this time the two operations are atomic, and they are happening exactly at the same cluster time.\n\nHere is the console of the transaction Java process that sums up everything I said earlier.\n\n```\nDatabase state:\nProduct{id='beer', stock=5, price=3}\nNo carts...\n\n######### NO TRANSACTION #########\nAlice wants 2 beers.\nWe have to create a cart in the 'cart' collection and update the stock in the 'product' collection.\nThe 2 actions are correlated but can not be executed on the same cluster time.\nAny error blocking one operation could result in stock error or a sale of beer that we can't fulfill as we have no stock.\n------------------------------------\nAlice adds 2 beers in her cart.\nSleeping 1 second...\nTrying to update beer stock : -2 beers.\n####################################\n\nDatabase state:\nProduct{id='beer', stock=3, price=3}\nCart{id='Alice', items=[Item{productId=beer, quantity=2, price=3}]}\n\nSleeping 1 second...\n######### WITH TRANSACTION #########\nAlice wants 2 extra beers.\nNow we can update the 2 collections simultaneously.\nThe 2 operations only happen when the transaction is committed.\n------------------------------------\nUpdating Alice cart : adding 2 beers.\nSleeping 1 second...\nTrying to update beer stock : -2 beers.\n####################################\n\nDatabase state:\nProduct{id='beer', stock=1, price=3}\nCart{id='Alice', items=[Item{productId=beer, quantity=4, price=3}]}\n\nSleeping 1 second...\n######### WITH TRANSACTION #########\nAlice wants 2 extra beers.\nThis time we do not have enough beers in stock so the transaction will rollback.\n------------------------------------\nUpdating Alice cart : adding 2 beers.\nSleeping 1 second...\nTrying to update beer stock : -2 beers.\n######## MongoException ########\n##### STOCK CANNOT BE NEGATIVE #####\n####### ROLLBACK TRANSACTION #######\n####################################\n\nDatabase state:\nProduct{id='beer', stock=1, price=3}\nCart{id='Alice', items=[Item{productId=beer, quantity=4, price=3}]}\n```\n\n## Next steps\n\nThanks for taking the time to read my post. 
I hope you found it useful and interesting.\nAs a reminder, all the code is\navailable [on the GitHub repository\nfor you to experiment.\n\nIf you're seeking an easy way to begin with MongoDB, you can achieve that in just five clicks using\nour MongoDB Atlas cloud database service.\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB"], "pageDescription": "In this tutorial you'll learn more about multi-document ACID transaction in MongoDB with Java. You'll understand why they are necessary in some cases and how they work.", "contentType": "Quickstart"}, "title": "Java - MongoDB Multi-Document ACID Transactions", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/setup-multi-cloud-cluster-mongodb-atlas", "action": "created", "body": "# Create a Multi-Cloud Cluster with MongoDB Atlas\n\nMulti-cloud clusters on MongoDB Atlas are now generally available! Just as you might distribute your data across various regions, you can now distribute across multiple cloud providers as well. This gives you a lot more freedom and flexibility to run your application anywhere and move across any cloud without changing a single line of code.\n\nWant to use Azure DevOps for continuous integration and continuous deployment but Google Cloud for Vision AI? Possible! Need higher availability in Canada but only have a single region available in your current cloud provider? Add additional nodes from another Canadian region on a different cloud provider! These kinds of scenarios are what multi-cloud was made for!\n\nIn this post, I won't be telling you *why* multi-cloud is useful; there are several articles (like this one or that one) and a Twitch stream that do a great job of that already! Rather, in this post, I'd like to:\n\n- Show you how to set up a multi-cloud cluster in MongoDB Atlas.\n- Explain what each of the new multi-cloud options mean.\n- Acknowledge some new considerations that come with multi-cloud capabilities.\n- Answer some common questions surrounding multi-cloud clusters.\n\nLet's get started!\n\n## Requirements\n\nTo go through this tutorial, you'll need:\n\n- A MongoDB Cloud account\n- To create an M10 cluster or higher (note that this isn't covered by the free tier)\n\n## Quick Jump\n\n- How to Set Up a Multi-Cloud Cluster\n- How to Test a Primary Node Failover to a Different Cloud Provider\n- Differences between Electable, Read-Only, and Analytics Nodes\n- Choosing Your Electable Node Distribution\n- Multi-Cloud Considerations\n- Multi-Cloud FAQs\n\n## How to Set Up a Multi-Cloud Cluster\n\n1. Log into your MongoDB Cloud account.\n2. Select the organization and project you wish to create a multi-cloud cluster in. If you don't have either, first create an organization and project before proceeding.\n3. Click \"Build a Cluster\". (Alternatively, click \"Create a New Cluster\" toward the top-right of the screen, visible if you have at least one other cluster.)\n4. If this is the first cluster in your project, you'll be asked to choose what kind of cluster you'd like to create. Select \"Create a cluster\" for the \"Dedicated Multi-Region Clusters\" option.\n5. You are brought to the \"Create a Multi-Region Cluster\" screen. If not already in the ON position, toggle the \"Multi-Cloud, Multi-Region & Workload Isolation\" option:\n\n \n\n6. This will expand several more options for you to configure. 
These options determine the type and distribution of nodes in your cluster:\n\n \n\n >\n >\n >\ud83d\udca1 *What's the difference between \"Multi-Region\" and \"Multi-Cloud\" Clusters?*\n >\n >The introduction of multi-cloud capabilities in Atlas changes how Atlas defines geographies for a cluster. Now, when referencing a *multi-region* cluster, this can be a cluster that is hosted in: \n >- more than one region within one cloud provider, or\n >- more than one cloud provider. (A cluster that spans more than one cloud provider spans more than one region by design.)\n >- multiple regions across multiple cloud providers.\n >\n >As each cloud provider has its own set of regions, multi-cloud clusters are also multi-region clusters.\n >\n >\n\n7. Configure your cluster. In this step, you'll choose a combination of Electable, Read-Only, and Analytics nodes that will make up your cluster.\n\n >\n >\n >\ud83d\udca1 *Choosing Nodes for your Multi-Cloud Cluster*\n >\n >- **Electable nodes**: Additional candidate nodes (via region or cloud provider) and only nodes that can become the primary in case of a failure. Be sure to choose an odd number of total electable nodes (minimum of three); these recommended node distributions are a good place to start.\n >- **Read-Only nodes**: Great for local reads in specific areas.\n >- **Analytics nodes**: Great for isolating analytical workloads from your main, operational workloads.\n >\n >Still can't make a decision? Check out the detailed differences between Electable, Read-Only, and Analytics nodes for more information!\n >\n >\n\n As an example, here's my final configuration (West Coast-based, using a 2-2-1 electable node distribution):\n\n \n\n I've set up five electable nodes in regions closest to me, with a GCP Las Vegas region as the highest priority as I'm based in Las Vegas. Since both Azure and AWS offer a California region, the next closest ones available to me, I've chosen them as the next eligible regions. To accommodate my other service areas on the East Coast, I've also configured two read-only nodes: one in Virginia and one in Illinois. Finally, to separate my reporting queries, I've configured a dedicated node as an analytics node. I chose the same GCP Las Vegas region to reduce latency and cost.\n\n8. Choose the remaining options for your cluster:\n\n - Expand the \"Cluster Tier\" section and select the \"M10\" tier (or higher, depending on your needs).\n - Expand the \"Additional Settings\" section and select \"MongoDB 4.4,\" which is the latest version as of this time.\n - Expand the \"Cluster Name\" section and choose a cluster name. This name can't be changed after the cluster is created, so choose wisely!\n\n9. With all options set, click the \"Create Cluster\" button. After a short wait, your multi-cloud cluster will be created! When it's ready, click on your cluster name to see an overview of your nodes. Here's what mine looks like:\n\n \n\n As you can see, the GCP Las Vegas region has been set as my preferred region. Likewise, one of the nodes in that region is set as my primary. And as expected, the read-only and analytics nodes are set to the respective regions I've chosen:\n\n \n\n Sweet! You've just set up your own multi-cloud cluster. \ud83c\udf89 To test it out, you can continue onto the next section where you'll manually trigger an election and see your primary node restored to a different cloud provider!\n\n >\n >\n >\ud83c\udf1f You've just set up a multi-cloud cluster! 
If you've found this tutorial helpful or just want to share your newfound knowledge, consider sending a Tweet!\n >\n >\n\n## Testing a Primary Node Failover to a Different Cloud Provider\n\nIf you're creating a multi-cloud cluster for higher availability guarantees, you may be wondering how to test that it will actually work if one cloud provider goes down. Atlas offers self-healing clusters, powered by built-in automation tools, to ensure that in the case of a primary node outage, your cluster will still remain online as it elects a new primary node and reboots a new secondary node when possible. To test a primary being moved to a different cloud provider, you can follow these steps to manually trigger an election:\n\n1. From the main \"Clusters\" overview in Atlas, find the cluster you'd like to test. Select the three dots (...) to open the cluster's additional options, then click \"Edit Configuration\":\n\n \n\n2. You'll be brought to a similar configuration screen as when you created your cluster. Expand the \"Cloud Provider & Region\" section.\n\n3. Change your highest priority region to one of your lower-priority regions. For example, my current highest priority region is GCP Las Vegas (us-west4). To change it, I'll drag my Azure California (westus) region to the top, making it the new highest priority region:\n\n \n\n4. Click the \"Review Changes\" button. You'll be brought to a summary page where you can double-check the changes you are about to make:\n\n \n\n5. If everything looks good, click the \"Apply Changes\" button.\n\n6. After a short wait to deploy these changes, you'll see that your primary has been set to a node from your newly prioritized region and cloud provider. As you can see for my cluster, my primary is now set to a node in my Azure (westus) region:\n\n \n\n \ud83d\udca1 In the event of an actual outage, Atlas automatically handles this failover and election process for you! These steps are just here so that you can test a failover manually and visually inspect that your primary node has, indeed, been restored on a different cloud provider.\n\n There you have it! You've created a multi-cloud cluster on MongoDB Atlas and have even tested a manual \"failover\" to a new cloud provider. You can now grab the connection string from your cluster's Connect wizard and use it with your application.\n\n >\n >\n >\u26a1 Make sure you delete your cluster when finished with it to avoid any additional charges you may not want. To delete a cluster, click the three dots (...) on the cluster overview page of the cluster you want to delete, then click Terminate. Similar to GitHub, MongoDB Atlas will ask you to type the name of your cluster to confirm that you want to delete it, including all data that is on the cluster!\n >\n >\n\n## Differences between Electable, Read-Only, and Analytics Nodes\n\n### Electable Nodes\n\nThese nodes fulfill your availability needs by providing additional candidate nodes and/or alternative locations for your primary node. When the primary fails, electable nodes reduce the impact by failing over to an alternative node. 
And when wider availability is needed for a region, to comply with specific data sovereignty requirements, for example, an electable node from another cloud provider and similar region can help fill in the gap.\n\n\ud83d\udca1 When configuring electable nodes in a multi-cloud cluster, keep the following in mind:\n\n- Electable nodes are the *only ones that participate in replica set elections*.\n- Any Electable node can become the primary while the majority of nodes in a replica set remain available.\n- Spreading your Electable nodes across large distances can lead to longer election times.\n\nAs you select which cloud providers and regions will host your electable nodes, also take note of the order you place them in. Atlas prioritizes nodes for primary eligibility based on their order in the Electable nodes table. This means the *first row of the Electable nodes table is set as the highest priority region*. Atlas lets you know this as you'll see the \"HIGHEST\" badge listed as the region's priority.\n\nIf there are multiple nodes configured for this region, they will also rank higher in primary eligibility over any other regions in the table. The remaining regions (other rows in the Electable nodes table) and their corresponding nodes rank in the order that they appear, with the last row being the lowest priority region.\n\nAs an example, take this 2-2-1 node configuration:\n\nWhen Atlas prioritizes nodes for primary eligibility, it does so in this order:\n\nHighest Priority => Nodes 1 & 2 in Azure California (westus) region\n\nNext Priority => Nodes 3 & 4 in GCP Las Vegas (us-west4) region\n\nLowest Priority => Single node in AWS N. California (us-west-1) region\n\nTo change the priority order of your electable nodes, you can grab (click and hold the three vertical lines of the row) the region you'd like to move and drag it to the order you'd prefer.\n\nIf you need to change the primary cloud provider for your cluster after its creation, don't worry! You can do so by editing your cluster configuration via the Atlas UI.\n\n### Read-Only Nodes\n\nTo optimize local reads in specific areas, use read-only nodes. These nodes have distinct read-preference tags that allow you to direct queries to the regions you specify. So, you could configure a node for each of your serviceable regions, directing your users' queries to the node closest to them. This results in reduced latency for everyone! \ud83d\ude4c\n\n\ud83d\udca1 When configuring Read-only nodes in a multi-cloud cluster, keep the following in mind:\n\n- Read-only nodes don't participate in elections.\n- Because they don't participate in elections, they don't provide high availability.\n- Read-only nodes can't become the primary for their cluster.\n\nTo add a read-only node to your cluster, click \"+ Add a provider/region,\" then select the cloud provider, region, and number of nodes you'd like to add. If you want to remove a read-only node from your cluster, click the garbage can icon to the right of each row.\n\n### Analytics Nodes\n\nIf you need to run analytical workloads and would rather separate those from your main, operational workloads, use Analytics nodes. These nodes are great for complex or long-running operations, like reporting queries and ETL jobs, that can take up a lot of cluster resources and compete with your other traffic. The benefit of analytics nodes is that you can isolate those queries completely.\n\nAnalytics nodes have the same considerations as read-only nodes. 
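To actually route reporting traffic to these nodes, you can use read preference tags in your connection string. The example below is only a sketch: it assumes Atlas's pre-defined `nodeType:ANALYTICS` replica set tag and uses a placeholder hostname and credentials:

```
mongodb+srv://user:password@multi-cloud-demo.abcde.mongodb.net/test?readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS
```

With a `secondary` read preference plus that tag, drivers send the tagged queries only to the analytics nodes, keeping them off your operational nodes.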
They can also be added and removed from your cluster in the same way as the other nodes.\n\n## Choosing Your Electable Node Distribution\n\nDeploying an odd number of electable nodes ensures reliable elections. With this in mind, we require a minimum of three electable nodes to be configured. Depending on your scenario, these nodes can be divided in several different ways. We generally advise one of the following node distribution options:\n\n### **2-2-1**: *Two nodes in the highest-priority cloud region, two nodes in a lower-priority cloud region, one node in a different lower-priority region*\n\nTo achieve continuous read **and** write availability across any cloud provider and region outage, a 2-2-1 node distribution is needed. By spreading across multiple cloud providers, you gain higher availability guarantees. However, as 2-2-1 node distributions need to continuously replicate data to five nodes, across different regions and cloud providers, this can be the more costly configuration. If cost is a concern, then the 1-1-1 node distribution can be an effective alternative.\n\n### **1-1-1**: *One node in three different cloud regions*\n\nIn this configuration, you'll be able to achieve similar (but not quite exact) read and write availability to the 2-2-1 distribution with three cloud providers. The biggest difference, however, is that when a cloud provider *does* go down, you may encounter higher write latency, especially if your writes have to temporarily shift to a region that's farther away.\n\n## Multi-Cloud Considerations\n\nWith multi-cloud capabilities come new considerations to keep in mind. As you start creating more of your own multi-cloud clusters, be aware of the following:\n\n### Election/Replication Lag\n\nThe larger the number of regions you have or the longer the physical distances are between your nodes, the **longer your election times/replication** lag will be. You may have already experienced this if you have multi-region clusters, but it can be exacerbated as nodes are potentially spread farther apart with multi-cloud clusters.\n\n### Connection Strings\n\nIf you use the standard connection string format, removing an entire region from an existing multi-region cluster **may result in a new connection string**. Instead, **it is strongly recommended** that you use the DNS seedlist format to avoid potential service loss for your applications.\n\n### Host Names\n\nAtlas **does not** guarantee that host names remain consistent with respect to node types during topology changes. For example, in my cluster named \"multi-cloud-demo\", I had an Analytics node named `multi-cloud-demo-shard-00-05.opbdn.mongodb.net:27017`. When a topology change occurs, such as changing my selected regions or scaling the number of nodes in my cluster, Atlas does not guarantee that the specific host name `multi-cloud-demo-shard-00-05.opbdn.mongodb.net:27017` will still refer to an Analytics node.\n\n### Built-in Custom Write Concerns\n\nAtlas provides built-in custom write concerns for multi-region clusters. 
These can help improve data consistency by ensuring operations are propagated to a set number of regions before an operation can succeed.\n\n##### Custom Write Concerns for Multi-Region Clusters in MongoDB Atlas\n\n| Write Concern | Tags | Description |\n|----------------|-----------------|-------------------------------------------------------------------------------------------------------------|\n| `twoRegions` | `{region: 2}` | Write operations must be acknowledged by at least two regions in your cluster |\n| `threeRegions` | `{region: 3}` | Write operations must be acknowledged by at least three regions in your cluster |\n| `twoProviders` | `{provider: 2}` | Write operations must be acknowledged by at least two regions in your cluster with distinct cloud providers |\n\n## Multi-Cloud FAQs\n\n**Can existing clusters be modified to be multi-cloud clusters?** Yes. All clusters M10 or higher can be changed to a multi-cloud cluster through the cluster configuration settings in Atlas.\n\n**Can I deploy a multi-cloud sharded cluster?** Yes. Both multi-cloud replica sets and multi-cloud sharded clusters are available to deploy on Atlas.\n\n**Do multi-cloud clusters work the same way on all versions, cluster tiers, and clouds?** Yes. Multi-cloud clusters will behave very similarly to single-cloud multi-region clusters, which means it will also be subject to the same constraints.\n\n**What happens to the config servers in a multi-cloud sharded cluster?** Config servers will behave in the same way they do for existing sharded clusters on MongoDB Atlas today. If a cluster has two electable regions, there will be two config servers in the highest priority region and one config server in the next highest region. If a cluster has three or more electable regions, there will be one config server in each of the three highest priority regions.\n\n**Can I use a key management system for encryption at rest with a multi-cloud cluster?** Yes. Whichever KMS you prefer (Azure Key Vault, AWS KMS, or Google Cloud KMS) can be used, though only one KMS can be active at a time. Otherwise, key management for encryption at rest works in the same way as it does for single-cloud clusters.\n\n**Can I pin data to certain cloud providers for compliance requirements?** Yes. With Global Clusters, you can pin data to specific zones or regions to fulfill any data sovereignty requirements you may have.\n\nHave a question that's not answered here? Head over to our MongoDB Community Forums and start a topic! Our community of MongoDB experts and employees are always happy to help!\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn everything you need to know about multi-cloud clusters on MongoDB Atlas.", "contentType": "Tutorial"}, "title": "Create a Multi-Cloud Cluster with MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/introducing-mongodb-analyzer-dotnet", "action": "created", "body": "# Introducing the MongoDB Analyzer for .NET\n\n# Introducing the MongoDB Analyzer for .NET\nCorrect code culprits at compile time!\n\nAs C# and .NET developers, we know that it can sometimes be frustrating to work idiomatically with MongoDB queries and aggregations. Without a way to see if your LINQ query or Builder expression corresponds to the MongoDB Query API (formerly known as MQL) during development, you previously had to wait for runtime errors in order to troubleshoot your queries. 
We knew there had to be a way to work more seamlessly with C# and MongoDB.\n\nThat\u2019s why we\u2019ve built the MongoDB Analyzer for .NET! Instead of mentally mapping the idiomatic version of your query in C# to the MongoDB Query API, the MongoDB Analyzer can do it for you - and even provide the generated Query API expression right in your IDE. The MongoDB Analyzer even surfaces useful information and helpful warnings on invalid expressions at compile time, bringing greater visibility to the root causes of bugs. And when used together with the recently released LINQ3 provider (now supported in MongoDB C#/.NET Driver 2.14.0 and higher), you can compose and understand queries in a much more manageable way.\n\nLet\u2019s take a look at how to install and use the new MongoDB Analyzer as a NuGet package. We\u2019ll follow with some code samples so you can see why this is a must-have tool for Visual Studio!\n\n## Install MongoDB Analyzer as a NuGet Package\nIn Visual Studio, install the `MongoDB.Analyzer` NuGet package:\n\n*Package Manager*\n\n```\nInstall-Package MongoDB.Analyzer -Version 1.0.0\n```\n\n*.NET CLI*\n\n```\ndotnet add package MongoDB.Analyzer --version 1.0.0\n```\n\nOnce installed, it will be added to your project\u2019s Dependencies list, under Analyzers:\n\nAfter installing and once the analyzer has run, you\u2019ll find all of the diagnostic warnings output to the Error List panel. As you start to inspect your code, you\u2019ll also see that any unsupported expressions will be highlighted.\n\n## Inspecting Information Messages and Warnings\nAs you write LINQ or Builders expressions, an information tooltip can be accessed by hovering over the three grey dots under your expression:\n\n*Accessing the tooltip for a LINQ expression*\n\nThis tooltip displays the corresponding Query API language to the expression you are writing and updates in real-time! With the translated query at your tooltips, you can confirm the query being generated (and executed!) is the one you expect. \n\nThis is a far more efficient process of composing and testing queries\u2014focus on the invalid expressions instead of wasting time translating your code for the Query API! And if you ever need to copy the resulting queries generated, you can do so right from your IDE (from the Error List panel).\n\nAnother common issue the MongoDB Analyzer solves is surfacing unsupported expressions and invalid queries at compile time. You\u2019ll find all of these issues listed as warnings:\n\n*Unsupported expressions shown as warnings in Visual Studio\u2019s Error List*\n\nThis is quite useful as not all LINQ expressions are supported by the MongoDB C#/.NET driver. Similarly, supported expressions will differ depending on which version of LINQ you use.\n\n## Code Samples\u2014See the MongoDB Analyzer for .NET in Action\nNow that we know what the MongoDB Analyzer can do for us, let\u2019s see it live!\n\n### Builder Expressions\nThese are a few examples that show how Builder expressions are analyzed. As you\u2019ll see, the MongoDB Analyzer provides immediate feedback through the tooltip. 
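As a point of reference, the kind of Builders code being analyzed in the screenshots below might look roughly like this (the `Movie` class and field names are illustrative, not taken from the article's project):

```csharp
using MongoDB.Driver;

// Illustrative document class -- not the article's actual model.
public class Movie
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string[] Genres { get; set; }
    public double Score { get; set; }
}

public static class MovieQueries
{
    // Filter: matching genre, score greater than or equal to the minimum, and a title search term.
    public static FilterDefinition<Movie> BuildFilter(string genre, double minScore, string titleSearchTerm) =>
        Builders<Movie>.Filter.AnyEq(m => m.Genres, genre)
        & Builders<Movie>.Filter.Gte(m => m.Score, minScore)
        & Builders<Movie>.Filter.Regex(m => m.Title, titleSearchTerm);

    // Sort: score from lowest to highest, then title from Z to A.
    public static SortDefinition<Movie> BuildSort() =>
        Builders<Movie>.Sort.Ascending(m => m.Score).Descending(m => m.Title);
}
```

Expressions like `BuildFilter` and `BuildSort` are exactly the kind of code the analyzer inspects and translates.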
Hovering over your code shows you the supported Query API language that corresponds to the query/expression you are writing.\n\n*Builder Filter Definition - Filter movies by matching genre, score that is greater than or equal to minimum score, and a match on the title search term.*\n\n*Builder Sort Definition - Sort movies by score (lowest to highest) and title (from Z to A).*\n\n*Unsupported Builder Expression - Highlighted and shown as warning in Error List.*\n\n### LINQ Queries\nThe MongoDB Analyzer uses the default LINQ provider of the C#/.NET driver (LINQ2). Expressions that aren\u2019t supported in LINQ2 but are supported in LINQ3 will show the appropriate warnings, as you\u2019ll see in one of the following examples. If you\u2019d like to switch the LINQ provider the MongoDB Analyzer uses, set` \u201cDefaultLinqVersion\u201d: \u201cV3\u201d `in the `mongodb.analyzer.json` file.\n\n*LINQ Filter Query - Aggregation pipeline.*\n\n*LINQ Query - Get movie genre statistics; uses aggregation pipeline to group by and select a dynamic object.*\n\n*Unsupported LINQ Expression - GetHashCode() method unsupported.*\n \n\n*Unsupported LINQ Expression - Method referencing a lambda parameter unsupported.*\n\n*Unsupported LINQ2, but supported LINQ3 Expression - Trim() is not supported in LINQ2, but is supported in LINQ3.*\n\n## MongoDB Analyzer + New LINQ3 Provider = \ud83d\udc9a\nIf you\u2019d rather not see those \u201cunsupported in LINQ2, but supported in LINQ3\u201d warnings, now is also a good time to update to the latest MongoDB C#/.NET driver (2.14.1) which has LINQ3 support! While the full transition from LINQ2 to LINQ3 continues, you can explicitly configure your MongoClient to use the new LINQ provider like so:\n\n```csharp\nvar connectionString = \"mongodb://localhost\";\nvar clientSettings = MongoClientSettings.FromConnectionString(connectionString);\nclientSettings.LinqProvider = LinqProvider.V3;\nvar client = new MongoClient(clientSettings);\n```\n\n## Integrate MongoDB Analyzer for .NET into Your Pipelines\nThe MongoDB Analyzer can also be used from the CLI which means integrating this static analysis tool into your continuous integration and continuous deployment pipelines is seamless! For example, running `dotnet build` from the command line will output MongoDB Analyzer warnings to the terminal:\n\n*Running dotnet build command outputs warnings from the MongoDB Analyzer*\n\nAdding this as a step in your build pipeline can be a valuable gate check for your build. You\u2019ll save yourself a potential headache and catch unsupported expressions and invalid queries much earlier.\n\nAnother idea: Output a Static Analysis Results Interchange Format (SARIF) file and use it to generate explain plans for all of your queries. SARIF is a standard, JSON-based format for the output of static analysis tools, making a SARIF file an ideal place to grab the supported queries generated by the MongoDB Analyzer. \n\nTo output a SARIF file for your project, you\u2019ll need to add the `ErrorLog` option to your `.csproj` file. You\u2019ll be able to find it at the root of your project (unless you\u2019ve specified otherwise) the next time you build your project.\n\nWith this file, you can load it via a mongosh script, process the file to find and \u201cclean\u201d the found MongoDB Query API expressions, and generate explain plans for the list of queries. What can you do with this? A great example would be to output a build warning (or outright fail the build) if you catch any missing indexes! 
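For reference, the `ErrorLog` option mentioned above is a standard Roslyn compiler setting that goes in a `PropertyGroup` of your `.csproj`; the file name below is arbitrary:

```xml
<PropertyGroup>
  <!-- Write analyzer diagnostics, including MongoDB Analyzer warnings, to a SARIF file on every build. -->
  <ErrorLog>mongodb-analyzer-results.sarif</ErrorLog>
</PropertyGroup>
```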
Adding steps like these to your build and using the information from the explain plans, you can prevent potential performance issues from ever making it to production.\n\n## We Want to Hear From You!\nWith the release of the MongoDB Analyzer for .NET, we hope to speed up your development cycle and increase your productivity in three ways: 1) by making it easier for you to see how your idiomatic queries map to the MongoDB Query API, 2) by helping you spot unsupported expressions and invalid queries faster (at compile time, baby), and 3) by streamlining your development process by enabling static analysis for your MongoDB queries in your CI/CD pipelines!\n\nWe\u2019re quite eager to see the .NET and C# communities use this tool and are even more eager to hear your feedback. The MongoDB Analyzer is ready for you to install as a NuGet package and can be added to any existing project that uses the MongoDB .NET driver. We want to continue improving this tool and that can only be done with your help. If you find any issues, are missing critical functionality, or have an edge case that the MongoDB Analyzer doesn\u2019t fulfill, please let us know! You can also post in our Community Forums.\n\n**Additional Resources**\n\n* MongoDB Analyzer Docs", "format": "md", "metadata": {"tags": ["C#", ".NET"], "pageDescription": "Say hello to the MongoDB Analyzer for .NET. This tool translates your C# queries to their MongoDB Query API equivalent and warns you of unsupported expressions and invalid queries at compile time, right in Visual Studio.", "contentType": "News & Announcements"}, "title": "Introducing the MongoDB Analyzer for .NET", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/jdk-21-virtual-threads", "action": "created", "body": "# Java 21: Unlocking the Power of the MongoDB Java Driver With Virtual Threads\n\n## Introduction\n\nGreetings, dev community! Java 21 is here, and if you're using the MongoDB Java driver, this is a ride you won't want to\nmiss. Increased performance and non-blocking threads are on the menu today! 
\ud83d\ude80\n\nIn this article, we're going to take a stroll through some of the key features of Java 21 that are not just exciting\nfor Java devs in general but are particularly juicy for those of us pushing the boundaries with MongoDB.\n\n## JDK 21\n\nTo begin with, let's have a look at all the features released in Java 21, also known\nas JDK Enhancement Proposal (JEP).\n\n- JEP 430: String Templates (Preview)\n- JEP 431: Sequenced Collections\n- JEP 439: Generational ZGC\n- JEP 440: Record Patterns\n- JEP 441: Pattern Matching for switch\n- JEP 442: Foreign Function and Memory API (Third Preview)\n- JEP 443: Unnamed Patterns and Variables (Preview)\n- JEP 444: Virtual Threads\n- JEP 445: Unnamed Classes and Instance Main Methods (Preview)\n- JEP 446: Scoped Values (Preview)\n- JEP 448: Vector API (Sixth Incubator)\n- JEP 449: Deprecate the Windows 32-bit x86 Port for Removal\n- JEP 451: Prepare to Disallow the Dynamic Loading of Agents\n- JEP 452: Key Encapsulation Mechanism API\n- JEP 453: Structured Concurrency (Preview)\n\n## The Project Loom and MongoDB Java driver 4.11\n\nWhile some of these JEPs, like deprecations, might not be the most exciting, some are more interesting, particularly these three.\n\n- JEP 444: Virtual Threads\n- JEP 453: Structured Concurrency (Preview)\n- JEP 446: Scoped Values (Preview)\n\nLet's discuss a bit more about them.\n\nThese three JEPs are closely related to the Project Loom which is an\ninitiative within the Java\necosystem that introduces lightweight threads\ncalled virtual threads. These virtual threads\nsimplify concurrent programming, providing a more scalable and efficient alternative to traditional heavyweight threads.\n\nWith Project Loom, developers can create thousands of virtual threads without the\ntypical performance overhead, making it easier to write concurrent code. Virtual threads offer improved resource\nutilization and simplify code maintenance, providing a more accessible approach to managing concurrency in Java\napplications. The project aims to enhance the developer experience by reducing the complexities associated with thread\nmanagement while optimizing performance.\n\n> Since version 4.11 of the\nMongoDB Java driver, virtual threads are fully\nsupported.\n\nIf you want more details, you can read the epic in the MongoDB Jira which\nexplains the motivations for this support.\n\nYou can also read more about the Java\ndriver\u2019s new features\nand compatibility.\n\n## Spring Boot and virtual threads\n\nIn Spring Boot 3.2.0+, you just have to add the following property in your `application.properties` file\nto enable virtual threads.\n\n```properties\nspring.threads.virtual.enabled=true\n```\n\nIt's **huge** because this means that your accesses to MongoDB resources are now non-blocking \u2014 thanks to virtual threads.\n\nThis is going to dramatically improve the performance of your back end. Managing a large workload is now easier as all\nthe threads are non-blocking by default and the overhead of the context switching for the platform threads is almost\nfree.\n\nYou can read the blog post from Dan Vega to learn more\nabout Spring Boot and virtual threads.\n\n## Conclusion\n\nJava 21's recent release has unleashed exciting features for MongoDB Java driver users, particularly with the\nintroduction of virtual threads. 
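If you want to see what that enables outside of Spring, here is a minimal, illustrative sketch (not from the article's repository; the connection string and names are placeholders) that fires off thousands of blocking queries, each on its own virtual thread:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadsSketch {
    public static void main(String[] args) {
        // Placeholder connection string: point this at your own cluster.
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017");
             var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            MongoCollection<Document> products = client.getDatabase("test").getCollection("product");
            // Each blocking find() runs on a cheap virtual thread instead of a dedicated platform thread.
            IntStream.range(0, 10_000)
                     .forEach(i -> executor.submit(() -> products.find().first()));
        } // Closing the executor waits for the submitted tasks; the client is closed afterwards.
    }
}
```

On platform threads, the same workload would need a large, carefully tuned thread pool; on virtual threads, each blocking driver call simply parks its virtual thread while it waits.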
Since version 4.11, these lightweight threads offer a streamlined approach to\nconcurrent programming, enhancing scalability and efficiency.\n\nFor Spring Boot enthusiasts, embracing virtual threads is a game-changer for backend performance, making MongoDB\ninteractions non-blocking by default.\n\nCurious to experience these advancements? Dive into the future of Java development and explore MongoDB with Spring Boot\nusing\nthe Java Spring Boot MongoDB Starter in GitHub.\n\nIf you don't have one already, claim your free MongoDB cluster\nin MongoDB Atlas to get started with the above repository faster. \n\nAny burning questions? Come chat with us in the MongoDB Community Forums.\n\nHappy coding! \ud83d\ude80", "format": "md", "metadata": {"tags": ["MongoDB", "Java", "Spring"], "pageDescription": "Learn more about the new Java 21 release and Virtual Threads.", "contentType": "Article"}, "title": "Java 21: Unlocking the Power of the MongoDB Java Driver With Virtual Threads", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/graphql-apis-hasura", "action": "created", "body": "# Rapidly Build a Highly Performant GraphQL API for MongoDB With Hasura\n\n## Introduction\n\nIn 2012, GraphQL was introduced as a developer-friendly API spec that allows clients to request exactly the data they\nneed, making it efficient and fast. By reducing the need for multiple requests and limiting the over-fetching of data,\nGraphQL simplifies data retrieval, improving the developer experience. This leads to better applications by ensuring\nmore efficient data loading and less bandwidth usage, particularly important for mobile or low-bandwidth environments.\n\nUsing GraphQL \u2014 instead of REST \u2014 on MongoDB is desirable for many use cases, especially when there is a need to\nsimultaneously query data from multiple MongoDB instances, or when engineers need to join NoSQL data from MongoDB with\ndata from another source.\n\nHowever, engineers are often faced with difficulties in implementing GraphQL APIs and layering them onto their MongoDB\ndata sources. Often, this learning curve and the maintenance overhead inhibit adoption. Hasura was designed to address\nthis common challenge with adopting GraphQL.\n\nHasura is a low-code GraphQL API solution. With Hasura, even engineers unfamiliar with GraphQL can build feature-rich\nGraphQL APIs \u2014 complete with pagination, filtering, sorting, etc. \u2014 on MongoDB and dozens of other data sources in\nminutes. Hasura also supports data federation, enabling developers to create a unified GraphQL API across different\ndatabases and services. 
In this guide, we\u2019ll show you how to quickly connect Hasura to MongoDB and generate a secure,\nhigh-performance GraphQL API.\n\nWe will walk you through the steps to:\n\n- Create a project on Hasura Cloud.\n- Create a database on MongoDB Atlas.\n- Connect Hasura to MongoDB.\n- Generate a high-performance GraphQL API instantly.\n- Try out GraphQL queries with relationships.\n- Analyze query execution.\n\nWe will also go over how and why the generated API is highly performant.\n\nAt the end of this guide, you\u2019ll be able to create your own high-performance, production-ready GraphQL API with Hasura\nfor your existing or new MongoDB Atlas instance.\n\n## Guide to connecting Hasura with MongoDB\n\nYou will need a project on Hasura Cloud and a MongoDB database on Atlas to get started with the next steps.\n\n### Create a project on Hasura Cloud\n\nHead over\nto cloud.hasura.io\nto create an account or log in. Once you are on the Cloud Dashboard, navigate\nto Projects and create a new project by clicking on `New Project`.\n\n### Create a database on MongoDB Atlas\n\nNext, head over to MongoDB Atlas, create a project if you don\u2019t have one, and navigate to\nthe `Database` page under the\nDeployments section to deploy a cluster. You can follow the detailed guide\nin the docs, particularly until Step 4, in case you are stuck in any of the steps above.\n\n### Load sample dataset\n\nOnce the database deployment is complete, you might want to load some sample data for the cluster. You can do this by\nheading to the `Database` tab and under the newly created Cluster, click on the `...` that opens up with an option\nto `Load Sample Dataset`. This can take a few seconds.\n\nHasura compiles GraphQL queries into efficient database queries for\nhigh performance.\n\n> Read more\n> about how Hasura queries are efficiently compiled for high performance.\n\n### Iterating on the API with updates to collections\n\nAs the structure of a document in a collection changes, it should be as simple as updating the Hasura metadata to add or\nremove the modified fields. The schema is flexible, and you can update the logical model to get the API updates.
There\nare no database migrations required \u2014 just add or remove fields from the metadata to reflect in the API.\n\n## Summary\n\nThe integration of MongoDB with Hasura\u2019s GraphQL Engine brings a new level of efficiency and scalability to developers.\nBy leveraging Hasura\u2019s ability to create a unified GraphQL API from diverse data sources, developers can quickly expose\nMongoDB data over a secure, performant, and highly customizable GraphQL API.\n\nWe recommend a few resources to learn more about the integration.\n\n- Hasura docs for MongoDB Atlas integration\n- Running Hasura and MongoDB locally\n- It should\u2019ve been MongoDB all along!\n\nJoin the Hasura Discord server to engage with the Hasura community, and ask questions about\nGraphQL or Hasura\u2019s integration with MongoDB.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt41deae7313d3196d/65cd4b4108fffdec1972284c/image8.jpg\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt18494d21c0934117/65cd4b400167d0749f8f9e6c/image15.jpg\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1364bb8b705997c0/65cd4b41762832af2bc5f453/image10.jpg\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt73fbb5707846963e/65cd4b40470a5a9e9bcb86ae/image16.jpg\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt31ed46e340623a82/65cd4b418a7a5153870a741b/image2.jpg\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdcb4a8993e2bd50b/65cd4b408a7a5148a90a7417/image14.jpg\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta472e64238c46910/65cd4b41faacaed48c1fce7f/image6.jpg\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdd90ff83d842fb73/65cd4b4008fffd23ea722848/image19.jpg\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt32b456930b96b959/65cd4b41f48bc2469c50fa76/image7.jpg\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc3b49b304533ed87/65cd4b410167d01c2b8f9e70/image3.jpg\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9bf1445744157bef/65cd4b4100d72eb99cf537b1/image5.jpg\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfa807b7d9bee708b/65cd4b41ab4731a8b00eecbe/image11.jpg\n [13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf44e2aa8ff1e02a1/65cd4b419333f76f83109fb3/image4.jpg\n [14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb2722b7da74ada16/65cd4b41dccfc663efab00ae/image9.jpg\n [15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt933bb95de2d55385/65cd4b407c5d415bdb528a1b/image20.jpg\n [16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9cd80a46a72aee23/65cd4b4123dbef0a8bfff34c/image12.jpg\n [17]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt03ae958f364433f6/65cd4b41670d7e0076281bbd/image13.jpg\n [18]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5d1e291dba16fe93/65cd4b400ad03883cc882ad8/image18.jpg\n [19]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7b71a7e3f04444ae/65cd4b4023dbeffeccfff348/image17.jpg\n [20]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt722abb79eaede951/65cd4b419778063874c05447/image1.png", "format": "md", "metadata": {"tags": ["Atlas", "GraphQL"], "pageDescription": "Learn how to configure and deploy a GraphQL API that uses MongoDB collections and documents with Hasura.", "contentType": "Tutorial"}, "title": "Rapidly Build a 
Highly Performant GraphQL API for MongoDB With Hasura", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/coronavirus-map-live-data-tracker-charts", "action": "created", "body": "# Coronavirus Map and Live Data Tracker with MongoDB Charts\n\n## Updates\n\n### November 15th, 2023\n\n- John Hopkins University (JHU) has stopped collecting data as of March 10th, 2023.\n- Here is JHU's GitHub repository.\n- First data entry is 2020-01-22, last one is 2023-03-09.\n- The data isn't updated anymore and is available in this cluster in readonly mode.\n\n```\nmongodb+srv://readonly:readonly@covid-19.hip2i.mongodb.net/\n```\n\n### August 20th, 2020\n\n- Removed links to Thomas's dashboard as it's not supported anymore.\n- Updated some Charts in the dashboard as JHU discontinued the recovered cases.\n\n### April 21st, 2020\n\n- MongoDB Open Data COVID-19 is now available on the new MongoDB Developer Hub.\n- You can check our code samples in our Github repository.\n- The JHU dataset changed again a few times. It's not really stable and it makes it complicated to build something reliable on top of this service. This is the reason why we created our more accessible version of the JHU dataset.\n- It's the same data but transformed in JSON documents and available in a readonly MongoDB Cluster we built for you.\n\n### March 24th, 2020\n\n- Johns Hopkins University changed the dataset they release daily.\n- I created a new dashboard based using the new dataset.\n- My new dashboard updates **automatically every hour** as new data comes in.\n\n## Too Long, Didn't Read\n\nThomas Rueckstiess and myself came up with two MongoDB Charts dashboards with the Coronavirus dataset.\n\n> - Check out Maxime's dashboard.\n> - Check out Thomas's dashboard (not supported anymore).\n\nHere is an example of the charts we made using the Coronavirus dataset. More below and in the MongoDB Charts dashboards.\n\n:charts]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-4266-8264-d37ce88ff9fa theme=light autorefresh=3600}\n\n:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-479c-83b2-d37ce88ffa07 theme=dark autorefresh=3600}\n\n## Let The Data Speak\n\nWe have to make decisions at work every day.\n\n- Should we discontinue this project?\n- Should we hire more people?\n- Can we invest more in this branch? How much?\n\nLeaders make decisions. Great leaders make informed decisions, based on facts backed by data and not just based on assumptions, feelings or opinions.\n\nThe management of the Coronavirus outbreak obeys the same rules. To make the right decisions, we need accurate data.\n\nData about the Coronavirus is relatively easy to find. The [Johns Hopkins University has done a terrific job at gathering, cleaning and curating data from various sources. They wrote an excellent blog post which I encourage you to read.\n\nHaving data is great but it can also be overwhelming. That's why data visualisation is also very important. Data alone doesn't speak and doesn't help make informed decisions.\n\nJohns Hopkins University also did a great job on this part because they provided this dashboard to make this data more human accessible.\n\nThis is great... 
But we can do even better visualisations with MongoDB Charts.\n\n## Free Your Data With MongoDB Charts\n\nThomas Rueckstiess and I imported all the data from Johns Hopkins University (and we will keep importing new data as they are published) into a MongoDB database. If you are interested by the data import, you can check my Github repository.\n\nThen we used this data to produce a dashboard to monitor the progression of the virus.\n\n> Here is Maxime's dashboard. It's shared publicly for the greater good.\n\nMongoDB Charts also allows you to embed easily charts within a website... or a blog post.\n\nHere are a few of the graphs I was able to import in here with just two clicks.\n\n:charts]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-4593-8e0e-d37ce88ffa15 theme=dark autorefresh=3600}\n\n:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-43e7-8a6d-d37ce88ffa30 theme=light autorefresh=3600}\n\n:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-42b4-8b88-d37ce88ffa3a theme=light autorefresh=3600}\n\n:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-44c9-87f5-d37ce88ffa34 theme=light autorefresh=3600}\n\n:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-41a8-8106-d37ce88ffa2c theme=dark autorefresh=3600}\n\n:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-4cdc-8686-d37ce88ff9fc theme=dark autorefresh=3600}\n\n:charts[]{url=https://charts.mongodb.com/charts-open-data-covid-19-zddgb id=60da4f45-f168-47fd-88bd-d37ce88ffa0d theme=light autorefresh=3600 width=760 height=1000}\n\nAs you can see, [MongoDB Charts is really powerful and super easy to embed.\n\n## Participation\n\nIf you have a source of data that provides different or more accurate data about this virus. Please let me know on Twitter @MBeugnet or in the MongoDB community website. I will do my best to update this data and provide more charts.\n\n## Sources\n\n- MongoDB Open Data COVID-19 - Blog Post.\n- MongoDB Open Data COVID-19 - Github Repo.\n- Dashboard from Johns Hopkins University.\n- Blog post from Johns Hopkins University.\n- Public Google Spreadsheet (old version) - deprecated.\n- Public Google Spreadsheet (new version) - deprecated.\n- Public Google Spreadsheet (Time Series) - deprecated.\n- GitHub Repository with CSV dataset from Johns Hopkins University.\n- Image credit: Scientific Animations (CC BY-SA 4.0).\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how we put MongoDB Charts to use to track the global Coronavirus outbreak.", "contentType": "Article"}, "title": "Coronavirus Map and Live Data Tracker with MongoDB Charts", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/synchronize-mobile-applications-mongodb-atlas-google-cloud-mysql", "action": "created", "body": "# Synchronize Your Mobile Application With MongoDB Atlas and Google Cloud MySQL\n\nEnterprises around the world are looking to modernize their existing applications. They need a streamlined way to synchronize data from devices at the Edge into their cloud data stores. Whether their goals are business growth or fending off the competition, application modernization is the primary vehicle that will help them get there. 
\n\nOften the first step in this process is to move data from an existing relational database repository (like Oracle, SQL Server, DB2, or Postgres, for example) into a JSON-based flexible database in the cloud (like MongoDB, Aerospike, Couchbase, Cassandra, or DocumentDB). Sounds simple, right? I mean, really, if JSON (NoSQL) is so simple and flexible, why would data migration be hard? There must be a bunch of automated tools to facilitate this data migration, right? \n\nUnfortunately, the answers are \u201cNot really,\u201d \u201cBecause data synchronization is rarely simple,\u201d and \u201cThe available tools are often DIY-based and don\u2019t provide nearly the level of automation required to facilitate an ongoing, large-scale, production-quality, conflict-resolved data synchronization.\u201d\n\n## Why is this so complex?\n\n### Data modeling\n\nOne of the first challenges is data modeling. To effectively leverage the benefits inherent in a JSON-based schema, you need to include data modeling as part of your migration strategy. Simply flattening or de-normalizing a relational schema into nested JSON structures, or worse yet, simply moving from relational to JSON without any data modeling consideration, results in a JSON data repository that is slow, inefficient, and difficult to query. You need an intelligent data modeling platform that automatically creates the most effective JSON structures based on your application needs and the target JSON repository without requiring specialized resources like data scientists and data engineers. \n\n### Building and monitoring pipelines\n\nOnce you\u2019ve mapped the data, you need tools that allow you to build reliable, scalable data pipelines to move the data from the source to the target repository. Sadly, most of the tools available today are primarily DIY scripting tools that require both custom (often complex) coding to transform the data to the new schema properly and custom (often complex) monitoring to ensure that the new data pipelines are working reliably. You need a data pipeline automation and monitoring platform to move the data and ensure its quality. \n\n### DIY is hard\n\nThis process of data synchronization, pipeline automation, and monitoring is where most application modernization projects get bogged down and/or ultimately fail. These failed projects often consume significant resources before they fail, as well as affect the overall business functionality and outcomes, and lead to missed objectives. \n\n## CDC: MongoDB Atlas, Atlas Device Sync, and Dataworkz\n\nSynchronizing data between edge devices and various databases can be complex. Simplifying this is our goal, and we will demonstrate how to achieve bi-directional synchronization between mobile devices at MySQL in the cloud using MongoDB Atlas Device Sync and Dataworkz.\n\nLet's dive in.\n\n## Prerequisites \n\n- Accounts with MongoDB Atlas (this can be tested on free tiers), Dataworkz, and Google Cloud\n- Kafka\n- Debezium\n\n## Step 1: prepare your mobile application with Atlas Device Sync\n\nSet up a template app for this test by following the steps outlined in the docs. Once that step is complete, you will have a mobile application running locally, with automated synchronization back to MongoDB Atlas using the Atlas Device Sync SDK. \n\n## Step 2: set up a source database and target MongoDB Atlas Collection\n\nWe used GCP in us-west1-a and Cloud MySQL for this example. 
Be sure to include sample data.\n\n### Check if BinLog replication is already enabled\n\n and dataworkz.com to create accounts and begin your automated bi-directional data synchronization journey.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt85b18bd98135559d/65c54454245ed9597d91062b/1.jpg\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0f313395e6bb7e0a/65c5447cf0270544bcea8b0f/2.jpg\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt42fc95f74b034c2c/65c54497ab9c0fe8aab945fc/3.jpg\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0bf05ceea39406ef/65c544b068e9235c39e585bc/4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt852256fbd6c92a1d/65c544cf245ed9f5af910634/5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6f7a22db7bfeb113/65c544f78b3a0d12277c6c70/6.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt732e20dad37e46e1/65c54515fb34d04d731b1a80/7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc1f83c67ed8bf7e5/65c5453625aa94148c3513c5/8.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcfe7cf5ed073b181/65c5454a49edef40e16a355a/9.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt99688601bdbef64a/65c5455d211bae4eaea55ad0/10.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7317f754880d41ba/65c545780acbc5455311135a/11.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt690ff6f3783cb06f/65c5458b68e92372a8e585d2/12.png\n [13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0810a0d139692e45/65c545a5ab9c0fdc87b9460a/13.png\n [14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt39b298970485dc92/65c545ba4cd37037ee70ec51/14.png\n [15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6596816400af6c46/65c545d4eed32eadf6ac449d/15.png\n [16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4eaf0f6c5b0ca4a5/65c545e3ff4e591910ad0ed6/16.png\n [17]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9ffa8c37f5e26e5f/65c545fa4cd3709a7870ec56/17.png\n [18]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1678582be8e8cc07/65c5461225aa943f393513cd/18.png\n [19]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd4b2f6a8e389884a/65c5462625aa94d9b93513d1/19.png\n [20]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbd4a6e31ba6103f6/65c5463fd4db5559ac6efc99/20.png\n [21]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6e3be8f5f7f0e0f1/65c546547998dae7b86b5e4b/21.png\n [22]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd2fccad909850d63/65c54669d2c66186e28430d5/22.png\n [23]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blted07a5674eae79e0/65c5467d8b3a0d226c7c6c7f/23.png\n [24]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8b8b5ce479b8fc7b/65c5469125aa94964c3513d5/24.png", "format": "md", "metadata": {"tags": ["Atlas", "Google Cloud", "Mobile", "Kafka"], "pageDescription": "Learn how to set up automated, automated, bi-directional synchronization of data from mobile devices to MongoDB Atlas and Google Cloud MySQL.", "contentType": "Tutorial"}, "title": "Synchronize Your Mobile Application With MongoDB Atlas and Google Cloud MySQL", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": 
"https://www.mongodb.com/developer/products/realm/building-android-app", "action": "created", "body": "# Building an Android App\n\nAs technology and styles of work evolve, the need for apps to support mobile is as important as ever. In 2023, Android had around 70% market share, so the need for developers to understand how to develop apps for Android is vital.\n\nIn this tutorial, you will learn the basics of getting started with your first Android app, using the programming language Kotlin. Although historically, native Android apps have been written in Java, Kotlin was upgraded to the official language for Android by Google in 2019.\n\n## Prerequisites\nIn order to follow along with this tutorial, you will need to have downloaded and installed Android Studio. This is the official IDE for Android development and comes with all the tools you will need to get started. Windows, MacOS, and Linux support it, as well.\n\n> You won\u2019t need an Android device to run your apps, thanks to the use of the Android Emulator, which comes with Android Studio. You will be guided through setup when you first open up Android Studio.\n\n## Creating the project\nThe first step is to create the application. You can do this from the \u201cWelcome to Android Studio\u201d page that appears when you open Android Studio for the first time.\n\n> If you have opened it before and don\u2019t see this window but instead a list of recent projects, you can create a new project from the **File** menu.\n\n 1. Click **New Project**, which starts a wizard to guide you through\n creating a new project.\n2. In the **Templates** window, make sure the **Phone and Tablet** option is selected on the left. Select Empty Activity and then click\n Next.\n\n3. Give your project a name. I chose \"Hello Android\". \n\nFor Package name, this can be left as the default, if you want. In the future, you might update it to reflect your company name, making sure to leave the app name on the end. I know the backward nature of the Package name can seem confusing but it is just something to note, if you update it.\n\nMinimum SDK: If you make an app in the future intended for users, you might choose an earlier version of Android to support more devices, but this isn\u2019t necessary for this tutorial, so update it to a newer version. I went with API 33, aka \u201cTiramisu.\u201d Android gives all their operating system (OS) versions names shared with sweet things, all in alphabetical order.\n\n> Fun fact: I created my first ever Android app back when the OS version was nicknamed Jelly Bean!\n\nYou can leave the other values as default, although you may choose to update the **Save** location. Once done, press the **Finish** button.\n\nIt will then open your new application inside the main Android Studio window. It can take a minute or two to index and build everything, so if you don\u2019t see much straight away, don\u2019t worry. Once completed, you will see the ```MainActivity.kt``` file open in the editor and the project structure on the left.\n\n## Running the app for the first time\nAlthough we haven\u2019t made any code changes yet, the Empty Activity template comes with a basic UI already. So let\u2019s run our app and see what it looks like out of the box.\n\n 1. Select the **Run** button that looks like a play button at the top of the Android Studio window. Or you can select the hamburger menu in the top left and go to **Run -> Run \u2018app\u2019**.\n 2. 
Once it has been built and deployed, you will see it running in the Running Devices area to the right of the editor window. Out of the box, it just says \u201cHello Android.\u201d\n\nCongratulations! You have your first running native Android app!\n\n## Updating the UI\nNow your app is running, let\u2019s take a look at how to make some changes to the UI.\nAndroid supports two types of UI: XML-based layouts and Jetpack Compose, known as Compose. Compose is now the recommended solution for Android, and this is what our new app is using, so we will continue to use it.\n\nCompose uses composable functions to define UI components. You can see this in action in the I\u2019m code inside ```MainActivity.kt``` where there is a function called ```Greeting``` with the attribute ```@Composable```. It takes in a string for a name and modifier and uses those inside a text element.\n\nWe are going to update the greeting function to now include a way to enter some text and a button to click that will update the label to say \u201cHello\u201d to the name you enter in the text box.\n\nReplace the existing code from ```class MainActivity : ComponentActivity() {``` onward with the following:\n```kotlin\nclass MainActivity : ComponentActivity() {\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContent {\n HelloAndroidTheme {\n // A surface container using the 'background' color from the theme\n Surface(\n modifier = Modifier.fillMaxSize(),\n color = MaterialTheme.colorScheme.background\n ) {\n Greeting()\n }\n }\n }\n }\n}\n\n@Composable\nfun Greeting() {\n\n var message by remember { mutableStateOf(\"\")}\n var greeting by remember { mutableStateOf(\"\") }\n\n Column (Modifier.padding(16.dp)) {\n TextField(\n value = message,\n onValueChange = { message = it },\n label = {Text(\"Enter your name..\")}\n )\n Button(onClick = { greeting = \"Hello, $message\" }) {\n Text(\"Say Hello\")\n }\n Text(greeting)\n }\n\n}\n\n@Preview(showBackground = true)\n@Composable\nfun GreetingPreview() {\n HelloAndroidTheme {\n Greeting()\n }\n}\n\n```\n\nLet\u2019s now take a look at what has changed.\n### OnCreate\n\nWe have removed the passing of a hardcoded name value here as a parameter to the Greeting function, as we will now get that from the text box.\n\n### Greeting\n We have added two function-scoped variables here for holding the values we want to update dynamically. \n\nWe then start defining our components. Now we have multiple components visible, we want to apply some layout and styling to them, so we have created a column so the three sub-components appear vertically. Inside the column definition, we also pass padding of 16dp.\n\nOur column layout contains a TextField for entering text. The value property is linked to our message variable. The onValueChanged property says that when the value of the box is changed, assign it to the message variable so it is always up to date. It also has a label property, which acts as a placeholder hint to the user.\n\nNext is the button. This has an onClick property where we define what happens when the button is clicked. In this case, it sets the value of the greeting variable to be \u201cHello,\u201d plus the message.\n\nLastly, we have a text component to display the greeting. Each time the button is clicked and the greeting variable is updated, that text field will update on the screen.\n\n### GreetingPreview\n\nThis is a function that allows you to preview your UI without running it on a device or emulator. 
It is very similar to the OnCreate function above where it specifies the default HelloAndroidTheme and then our Greeting component.\n\nIf you want to view the preview of your code, you can click the button in the top right corner of the editor window, to the left of the **Running Devices** area that shows a hamburger icon with a rectangle with rounded corners next to it. This is the split view button. It splits the view between your code and the preview.\n\n### Imports\nIf Android Studio is giving you error messages in the code, it might be because you are missing some import statements at the top of the file.\n\nExpand the imports section at the top and replace it with the following:\n\n```kotlin\nimport android.os.Bundle\nimport androidx.activity.ComponentActivity\nimport androidx.activity.compose.setContent\nimport androidx.compose.foundation.layout.Column\nimport androidx.compose.foundation.layout.fillMaxSize\nimport androidx.compose.foundation.layout.padding\nimport androidx.compose.material3.Button\nimport androidx.compose.material3.MaterialTheme\nimport androidx.compose.material3.Surface\nimport androidx.compose.material3.Text\nimport androidx.compose.material3.TextField\nimport androidx.compose.runtime.Composable\nimport androidx.compose.runtime.getValue\nimport androidx.compose.runtime.mutableStateOf\nimport androidx.compose.runtime.remember\nimport androidx.compose.runtime.setValue\nimport androidx.compose.ui.Modifier\nimport androidx.compose.ui.text.input.TextFieldValue\nimport androidx.compose.ui.tooling.preview.Preview\nimport androidx.compose.ui.unit.dp\nimport com.mongodb.helloandroid.ui.theme.HelloAndroidTheme\n```\nYou will need to update the last import statement to make sure that your package name matches as it may not be com.mongodb.helloandroid, for example.\n\n## Testing the app\n\nNow that we have updated the UI, let\u2019s run it and see our shiny new UI. Click the **Run** button again and wait for it to deploy to the emulator or your device, if you have one connected.\n\nTry playing around with what you enter and pressing the button to see the result of your great work!\n\n## Summary\nThere you have it, your first Android app written in Kotlin using Android Studio, just like that! Compose makes it super easy to create UIs in no time at all.\n\nIf you want to take it further, you might want to add the ability to store information that persists between app sessions. MongoDB has an amazing product, called Atlas Device Sync, that allows you to store data on the device for your app and have it sync to MongoDB Atlas. You can read more about this and how to get started in our Kotlin Docs.\n", "format": "md", "metadata": {"tags": ["Realm", "Kotlin", "Mobile", "Jetpack Compose", "Android"], "pageDescription": "", "contentType": "Tutorial"}, "title": "Building an Android App", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/rag-with-polm-stack-llamaindex-openai-mongodb", "action": "created", "body": "# How to Build a RAG System With LlamaIndex, OpenAI, and MongoDB Vector Database\n\n## Introduction\n\nLarge language models (LLMs) substantially benefit business applications, especially in use cases surrounding productivity. Although LLMs and their applications are undoubtedly advantageous, relying solely on the parametric knowledge of LLMs to respond to user inputs and prompts proves insufficient for private data or queries dependent on real-time data. 
This is why a non-parametric secure knowledge source that holds sensitive data and can be updated periodically is required to augment user inputs to LLMs with current and relevant information.\n\n**Retrieval-augmented generation (RAG) is a system design pattern that leverages information retrieval techniques and generative AI models to provide accurate and relevant responses to user queries by retrieving semantically relevant data to supplement user queries with additional context, combined as input to LLMs**.\n\n. The content of the following steps explains in some detail library classes, methods, and processes that are used to achieve the objective of implementing a RAG system.\n\n## Step 1: install libraries\n\nThe code snippet below installs various libraries that will provide functionalities to access LLMs, reranking models, databases, and collection methods, abstracting complexities associated with extensive coding into a few lines and method calls.\n\n- **LlamaIndex** : data framework that provides functionalities to connect data sources (files, PDFs, website or data source) to both closed (OpenAI, Cohere) and open source (Llama) large language models; the LlamaIndex framework abstracts complexities associated with data ingestion, RAG pipeline implementation, and development of LLM applications (chatbots, agents, etc.).\n- **LlamaIndex (MongoDB**): LlamaIndex extension library that imports all the necessary methods to connect to and operate with the MongoDB Atlas database.\n- **LlamaIndex (OpenAI**): LlamaIndex extension library that imports all the necessary methods to access the OpenAI embedding models.\n- **PyMongo:** a Python library for interacting with MongoDB that enables functionalities to connect to a cluster and query data stored in collections and documents.\n- **Hugging Face datasets:** Hugging Face library holds audio, vision, and text datasets.\n- **Pandas** : provides data structure for efficient data processing and analysis using Python.\n\n```shell\n!pip install llama-index\n\n!pip install llama-index-vector-stores-mongodb\n\n!pip install llama-index-embeddings-openai\n\n!pip install pymongo\n\n!pip install datasets\n\n!pip install pandas\n\n```\n\n## Step 2: data sourcing and OpenAI key setup\n\nThe command below assigns an OpenAI API key to the environment variable OPENAI\\_API\\_KEY. This ensures LlamaIndex creates an OpenAI client with the provided OpenAI API key to access features such as LLM models (GPT-3, GPT-3.5-turbo, and GPT-4) and embedding models (text-embedding-ada-002, text-embedding-3-small, and text-embedding-3-large).\n\n```\n%env OPENAI\\_API\\_KEY=openai\\_key\\_here \n```\n\nThe data utilised in this tutorial is sourced from Hugging Face datasets, specifically the AIatMongoDB/embedded\\_movies dataset. A datapoint within the movie dataset contains information corresponding to a particular movie; plot, genre, cast, runtime, and more are captured for each data point. 
After loading the dataset into the development environment, it is converted into a Pandas data frame object, which enables data structure manipulation and analysis with relative ease.\n\n``` python\nfrom datasets import load_dataset\n\nimport pandas as pd\n\n# https://huggingface.co/datasets/AIatMongoDB/embedded_movies\n\ndataset=load_dataset(\"AIatMongoDB/embedded_movies\")\n\n# Convert the dataset to a pandas dataframe\n\ndataset_df=pd.DataFrame(dataset['train'])\n\ndataset_df.head(5)\n\n```\n\n## Step 3: data cleaning, preparation, and loading\n\nThe operations within this step focus on enforcing data integrity and quality. The first process ensures that each data point's ```plot``` attribute is not empty, as this is the primary data we utilise in the embedding process. This step also ensures we remove the ```plot_embedding``` attribute from all data points as this will be replaced by new embeddings created with a different model, the ```text-embedding-3-small```.\n\n``` python\n# Remove data point where plot column is missing\n\ndataset_df=dataset_df.dropna(subset=['plot'])\n\nprint(\"\\nNumber of missing values in each column after removal:\")\n\nprint(dataset_df.isnull().sum())\n\n# Remove the plot_embedding from each data point in the dataset as we are going to create new embeddings with the new OpenAI embedding Model \"text-embedding-3-small\"\n\ndataset_df=dataset_df.drop(columns=['plot_embedding'])\n\ndataset_df.head(5)\n\n```\n\nAn embedding object is initialised from the ```OpenAIEmbedding``` model, part of the ```llama_index.embeddings``` module. Specifically, the ```OpenAIEmbedding``` model takes two parameters: the embedding model name, ```text-embedding-3-small``` for this tutorial, and the dimensions of the vector embedding.\n\nThe code snippet below configures the embedding model and LLM utilised throughout the development environment. The LLM utilised to respond to user queries is the default OpenAI model enabled via LlamaIndex and is initialised with the ```OpenAI()``` class. To ensure consistency across all consumers of LLMs and their configuration, LlamaIndex provides the \"Settings\" module, which enables a global configuration of the LLMs and embedding models utilised in the environment.\n\n```python\nfrom llama_index.core.settings import Settings\n\nfrom llama_index.llms.openai import OpenAI\n\nfrom llama_index.embeddings.openai import OpenAIEmbedding\n\nembed_model=OpenAIEmbedding(model=\"text-embedding-3-small\",dimensions=256)\n\nllm=OpenAI()\n\nSettings.llm=llm\n\nSettings.embed_model=embed_model\n\n```\n\nNext, it's crucial to appropriately format the dataset and its contents for MongoDB ingestion. In the upcoming steps, we'll transform the current structure of the dataset, ```dataset_df``` \u2014 presently a DataFrame object \u2014 into a JSON string. This dataset conversion is done in the line ```documents_json = dataset_df.to_json(orient='records')```, which serialises the DataFrame to a JSON string.\n\nBy specifying orient='records', each row of the DataFrame is converted into a separate JSON object.\n\nThe following step creates a list of Python dictionaries, ```documents_list```, each representing an individual record from the original DataFrame. The final step in this process is to convert each dictionary into manually constructed documents, which are first-class citizens that hold information extracted from a data source.
Documents within LlamaIndex hold information, such as metadata, that is utilised in downstream processing and ingestion stages in a RAG pipeline.\n\nOne important point to note is that when creating a LlamaIndex document manually, it's possible to configure the attributes of the documents that are utilised when passed as input to embedding models and LLMs. The ```excluded_llm_metadata_keys``` and ```excluded_embed_metadata_keys``` arguments on the document class constructor take a list of attributes to ignore when generating inputs for downstream processes within a RAG pipeline. A reason for doing this is to limit the context utilised within embedding models for more relevant retrievals, and in the case of LLMs, this is used to control the metadata information combined with user queries. Without configuration of either of the two arguments, a document by default utilises all content in its metadata as embedding and LLM input.\n\nAt the end of this step, a Python list contains several documents corresponding to each data point in the preprocessed dataset.\n\n```python\nimport json\nfrom llama_index.core import Document\nfrom llama_index.core.schema import MetadataMode\n\n# Convert the DataFrame to a JSON string representation\ndocuments_json = dataset_df.to_json(orient='records')\n# Load the JSON string into a Python list of dictionaries\ndocuments_list = json.loads(documents_json)\n\nllama_documents = []\n\nfor document in documents_list:\n\n # Value for metadata must be one of (str, int, float, None)\n document[\"writers\"] = json.dumps(document[\"writers\"])\n document[\"languages\"] = json.dumps(document[\"languages\"])\n document[\"genres\"] = json.dumps(document[\"genres\"])\n document[\"cast\"] = json.dumps(document[\"cast\"])\n document[\"directors\"] = json.dumps(document[\"directors\"])\n document[\"countries\"] = json.dumps(document[\"countries\"])\n document[\"imdb\"] = json.dumps(document[\"imdb\"])\n document[\"awards\"] = json.dumps(document[\"awards\"])\n\n # Create a Document object with the text and excluded metadata for llm and embedding models\n llama_document = Document(\n text=document[\"fullplot\"],\n metadata=document,\n excluded_llm_metadata_keys=[\"fullplot\", \"metacritic\"],\n excluded_embed_metadata_keys=[\"fullplot\", \"metacritic\", \"poster\", \"num_mflix_comments\", \"runtime\", \"rated\"],\n metadata_template=\"{key}=>{value}\",\n text_template=\"Metadata: {metadata_str}\\n-----\\nContent: {content}\",\n )\n\n llama_documents.append(llama_document)\n\n# Observing an example of what the LLM and Embedding model receive as input\nprint(\n \"\\nThe LLM sees this: \\n\",\n llama_documents[0].get_content(metadata_mode=MetadataMode.LLM),\n)\nprint(\n \"\\nThe Embedding model sees this: \\n\",\n llama_documents[0].get_content(metadata_mode=MetadataMode.EMBED),\n)\n\n```\n\nThe final step of processing before ingesting the data to the MongoDB vector store is to convert the list of LlamaIndex documents into another first-class citizen data structure known as nodes. 
Once we have the nodes generated from the documents, the next step is to generate embedding data for each node using the content in the text and metadata attributes.\n\n```\nfrom llama_index.core.node_parser import SentenceSplitter\n\nparser = SentenceSplitter()\nnodes = parser.get_nodes_from_documents(llama_documents)\n\nfor node in nodes:\n node_embedding = embed_model.get_text_embedding(\n node.get_content(metadata_mode=\"all\")\n )\n node.embedding = node_embedding\n \n```\n\n## Step 4: database setup and connection\n\nBefore moving forward, ensure the following prerequisites are met:\n\n- Database cluster setup on MongoDB Atlas\n- Obtained the URI to your cluster\n\nFor assistance with database cluster setup and obtaining the URI, refer to our guide for [setting up a MongoDB cluster, and our guide to get your connection string. Alternatively, follow Step 5 of this article on using embeddings in a RAG system, which offers detailed instructions on configuring and setting up the database cluster.\n\nOnce you have successfully created a cluster, create the database and collection within the MongoDB Atlas cluster by clicking **+ Create Database**. The database will be named `movies`, and the collection will be named `movies_records`.\n\n.\n\nIn the creation of a vector search index using the JSON editor on MongoDB Atlas, ensure your vector search index is named ```vector_index``` and the vector search index definition is as follows:\n\n```json\n{\n \"fields\": \n {\n \"numDimensions\": 256,\n \"path\": \"embedding\",\n \"similarity\": \"cosine\",\n \"type\": \"vector\"\n }\n ]\n}\n```\n\nAfter setting up the vector search index, data can be ingested and retrieved efficiently. Data ingestion is a trivial process achieved with less than three lines when leveraging LlamaIndex.\n\n## Step 6: data ingestion to vector database\n\nUp to this point, we have successfully done the following:\n\n- Loaded data sourced from Hugging Face\n- Provided each data point with embedding using the OpenAI embedding model\n- Set up a MongoDB database designed to store vector embeddings\n- Established a connection to this database from our development environment\n- Defined a vector search index for efficient querying of vector embeddings\n\nThe code snippet below also initialises a MongoDB Atlas vector store object via the LlamaIndex constructor ```MongoDBAtlasVectorSearch```. It's important to note that in this step, we reference the name of the vector search index previously created via the MongoDB Cloud Atlas interface. For this specific use case, the index name is ```vector_index```.\n\nThe crucial method that executes the ingestion of nodes into a specified vector store is the .add() method of the LlamaIndex MongoDB instance.\n\n```python\nfrom llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch\n\nvector_store = MongoDBAtlasVectorSearch(mongo_client, db_name=DB_NAME, collection_name=COLLECTION_NAME, index_name=\"vector_index\")\nvector_store.add(nodes)\n\n```\n\nThe last line in the code snippet above creates a LlamaIndex index. 
Within LlamaIndex, when documents are loaded into any of the index abstraction classes \u2014 ```SummaryIndex```, ``TreeIndex```, ```KnowledgeGraphIndex```, and especially ```VectorStoreIndex``` \u2014 an index that stores a representation of the original document is built in an in-memory vector store that also stores embeddings.\n\nBut since the MongoDB Atlas vector database is utilised in this RAG system to store the embeddings and also the index for our document, LlamaIndex enables the retrieval of the index from Atlas via the ```from_vector_store``` method of the ```VectorStoreIndex``` class.\n\n```python\nfrom llama_index.core import VectorStoreIndex, StorageContext\nindex = VectorStoreIndex.from_vector_store(vector_store)\n```\n\n## Step 7: handling user queries\n\nThe next step involves creating a LlamaIndex query engine. The query engine enables the functionality to utilise natural language to retrieve relevant, contextually appropriate information from a data index. The ```as_query_engine``` method provided by LlamaIndex abstracts the complexities of AI engineers and developers writing the implementation code to process queries appropriately for extracting information from a data source.\n\nFor our use case, the query engine satisfies the requirement of building a question-and-answer application. However, LlamaIndex does provide the ability to construct a chat-like application with the [Chat Engine functionality.\n\n``` python\n\nimport pprint\nfrom llama_index.core.response.notebook_utils import display_response\n\nquery_engine = index.as_query_engine(similarity_top_k=3)\nquery = \"Recommend a romantic movie suitable for the christmas season and justify your selecton\"\nresponse = query_engine.query(query)\ndisplay_response(response)\npprint.pprint(response.source_nodes)\n\n```\n\n----------\n\n## Conclusion\n\nIncorporating RAG architectural design patterns improves LLM performance within modern generative AI applications and introduces a cost-conscious approach to building robust AI infrastructure. Building a robust RAG system with minimal code implementation with components such as MongoDB as a vector database and LlamaIndex as the LLM orchestrator is a straightforward process, as this article demonstrates.\n\nIn particular, this tutorial covered the implementation of a RAG system that leverages the combined capabilities of Python, OpenAI, LlamaIndex, and the MongoDB vector database, also known as the POLM AI stack.\n\nIt should be mentioned that fine-tuning is still a viable strategy for improving the capabilities of LLMs and updating their parametric knowledge. However, for AI engineers who consider the economics of building and maintaining GenAI applications, exploring cost-effective methods that improve LLM capabilities is worth considering, even if it is experimental.\n\nThe associated cost of data sourcing, the acquisition of hardware accelerators, and the domain expertise needed for fine-tuning LLMs and foundation models often entail significant investment, making exploring more cost-effective methods, such as RAG systems, an attractive alternative.\n\nNotably, the cost implications of fine-tuning and model training underscore the need for AI engineers and developers to adopt a cost-saving mindset from the early stages of an AI project. Most applications today already, or will, have some form of generative AI capability supported by an AI infrastructure. 
To this point, it becomes a key aspect of an AI engineer's role to communicate and express the value of exploring cost-effective solutions to stakeholders and key decision-makers when developing AI infrastructure.\n\nAll code presented in this article is presented on GitHub. Happy hacking.\n\n----------\n\n## FAQ\n\n**Q: What is a retrieval-augmented generation (RAG) system?**\n\nRetrieval-augmented generation (RAG) is a design pattern that improves the capabilities of LLMs by using retrieval models to fetch semantically relevant information from a database. This additional context is combined with the user's query to generate more accurate and relevant responses from LLMs.\n\n**Q: What are the key components of an AI stack in a RAG system?**\n\nThe essential components include models (like GPT-3.5, GPT-4, or Llama), orchestrators or integrators for managing interactions between LLMs and data sources, and operational and vector databases for storing and retrieving data efficiently.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2d3edefc63969c9e/65cf3ec38d55b016fb614064/GenAI_Stack_(4).png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte6e94adc39a972d2/65cf3fe80b928c05597cf436/GenAI_Stack_(3).png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt304223ce674c707c/65cf4262e52e7542df43d684/GenAI_Stack_(5).png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt62d948b4a9813c34/65cf442f849f316aeae97372/GenAI_Stack_(6).png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt43cd95259d274718/65cf467b77f34c1fccca337e/GenAI_Stack_(7).png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI"], "pageDescription": "This article provides an in-depth tutorial on building a Retrieval-Augmented Generation (RAG) system using the combined capabilities of Python, OpenAI, LlamaIndex, and MongoDB's vector database, collectively referred to as the POLM AI stack.", "contentType": "Tutorial"}, "title": "How to Build a RAG System With LlamaIndex, OpenAI, and MongoDB Vector Database", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/why-use-mongodb-with-ruby", "action": "created", "body": "# Why Use MongoDB with Ruby\n\nBefore discovering Ruby and Ruby on Rails, I was a .NET developer. At that time, I'd make ad-hoc changes to my development database, export my table/function/stored procedure/view definitions to text files, and check them into source control with any code changes. Using `diff` functionality, I'd compare the schema changes that the DBAs needed to apply to production and we'd script that out separately.\n\nI'm sure better tools existed (and I eventually started using some of RedGate's tools), but I was looking for a change. At that time, the real magic of Ruby on Rails for me was the Active Record Migrations which made working with my database fit with my programming workflow. Schema management became less of a chore and there were `rake` tasks for anything I needed (applying migrations, rolling back changes, seeding a test database).\n\nSchema versioning and management with Rails was leaps and bounds better than what I was used to, and I didn't think this could get any better \u2014 but then I found MongoDB. 
\n\nWhen working with MongoDB, there's no need to `CREATE TABLE foo (id integer, bar varchar(255), ...)`; if a collection (or associated database) doesn't exist, inserting a new document will automatically create it for you. This means Active Record migrations are no longer needed, as this level of schema change management is no longer necessary.\n \nHaving the flexibility to define my data model directly within the code without needing to resort to the intermediary management that Active Record had facilitated just sort of made sense to me. I could now persist object state to my database directly, embed related model details, and easily form queries around these structures to quickly retrieve my data.\n\n## Flexible schema\n\nData in MongoDB has a flexible schema as collections do not enforce a strict document structure or schema by default. This flexibility gives you data-modeling choices to match your application and its performance requirements, which aligns perfectly with Ruby's focus on simplicity and productivity.\n\n## Let's try it out\n\nWe can easily demonstrate how to quickly get started with the MongoDB Ruby Driver using the following simple Ruby script that will connect to a cluster, insert a document, and read it back:\n \n```ruby\nrequire 'bundler/inline'\n \ngemfile do\n source 'https://rubygems.org'\n gem 'mongo'\nend\n \nclient = Mongo::Client.new('mongodb+srv://username:password@mycluster.mongodb.net/test')\ncollection = client[:foo]\ncollection.insert_one({ bar: \"baz\" })\n \nputs collection.find.first \n# => {\"_id\"=>BSON::ObjectId('62d83d9dceb023b20aff228a'), \"bar\"=>\"baz\"}\n```\n \nWhen the document above is inserted, an `_id` value of `BSON::ObjectId('62d83d9dceb023b20aff228a')` is created. All documents must have an `_id` field. However, if not provided, a default `_id` of type `ObjectId` will be generated. When running the above, you will get a different value for `_id`, or you may choose to explicitly set it to any value you like!\n \nFeel free to give the above example a spin using your existing MongoDB cluster or MongoDB Atlas cluster. If you don't have a MongoDB Atlas cluster, sign up for an always free tier cluster to get started.\n \n## Installation\n \nThe MongoDB Ruby Driver is hosted at RubyGems, or if you'd like to explore the source code, it can be found on GitHub.\n \nTo simplify the example above, we used `bundler/inline` to provide a single-file solution using Bundler. However, the `mongo` gem can be just as easily added to an existing `Gemfile` or installed via `gem install mongo`.\n \n## Basic CRUD operations\n \nOur sample above demonstrated how to quickly create and read a document.
Updating and deleting documents are just as painless as shown below:\n \n```ruby\n# set a new field 'counter' to 1\ncollection.update_one({ _id: BSON::ObjectId('62d83d9dceb023b20aff228a')}, :\"$set\" => { counter: 1 })\n \nputs collection.find.first \n# => {\"_id\"=>BSON::ObjectId('62d83d9dceb023b20aff228a'), \"bar\"=>\"baz\", \"counter\"=>1}\n \n# increment the field 'counter' by one\ncollection.update_one({ _id: BSON::ObjectId('62d83d9dceb023b20aff228a')}, :\"$inc\" => { counter: 1 })\n \nputs collection.find.first \n# => {\"_id\"=>BSON::ObjectId('62d83d9dceb023b20aff228a'), \"bar\"=>\"baz\", \"counter\"=>2}\n \n# remove the test document\ncollection.delete_one({ _id: BSON::ObjectId('62d83d9dceb023b20aff228a') })\n```\n \n## Object document mapper\n \nThough all interaction with your Atlas cluster can be done directly using the MongoDB Ruby Driver, most developers prefer a layer of abstraction such as an ORM or ODM. Ruby developers can use the Mongoid ODM to easily model MongoDB collections in their code and simplify interaction using a fluid API akin to Active Record's Query Interface.\n \nThe following example adapts the previous example to use Mongoid:\n```ruby\nrequire 'bundler/inline'\n \ngemfile do\n source 'https://rubygems.org'\n \n gem 'mongoid'\nend\n \nMongoid.configure do |config|\n config.clients.default = { uri: \"mongodb+srv://username:password@mycluster.mongodb.net/test\" }\nend\n \nclass Foo\n include Mongoid::Document\n \n field :bar, type: String\n field :counter, type: Integer, default: 1\nend\n \n# create a new instance of 'Foo', which will assign a default value of 1 to the 'counter' field\nfoo = Foo.create bar: \"baz\"\n \nputs foo.inspect \n# => \n \n# interact with the instance variable 'foo' and modify fields programmatically\nfoo.counter += 1\n \n# save the instance of the model, persisting changes back to MongoDB\nfoo.save!\n \nputs foo.inspect \n# => \n```\n \n## Summary\n \nWhether you're using Ruby/Rails to build a script/automation tool, a new web application, or even the next Coinbase, MongoDB has you covered with both a Driver that simplifies interaction with your data or an ODM that seamlessly integrates your data model with your application code.\n \n## Conclusion\n\nInteracting with your MongoDB data via Ruby \u2014 either using the Driver or the ODM \u2014 is straightforward, but you can also directly interface with your data from MongoDB Atlas using the built in Data Explorer. Depending on your preferences though, there are options:\n \n* MongoDB for Visual Studio Code allows you to connect to your MongoDB instance and enables you to interact in a way that fits into your native workflow and development tools. You can navigate and browse your MongoDB databases and collections, and prototype queries and aggregations for use in your applications.\n\n* MongoDB Compass is an interactive tool for querying, optimizing, and analyzing your MongoDB data. Get key insights, drag and drop to build pipelines, and more.\n\n* Studio 3T is an extremely easy to use 3rd party GUI for interacting with your MongoDB data.\n\n* MongoDB Atlas Data API lets you read and write data in Atlas with standard HTTPS requests. To use the Data API, all you need is an HTTPS client and a valid API key.\n \nRuby was recently added as a language export option to both MongoDB Compass and the MongoDB VS Code Extension. 
Using this integration you can easily convert an aggregation pipeline from either tool into code you can copy/paste into your Ruby application.", "format": "md", "metadata": {"tags": ["MongoDB", "Ruby"], "pageDescription": "Find out what makes MongoDB a great fit for your next Ruby on Rails application! ", "contentType": "Article"}, "title": "Why Use MongoDB with Ruby", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-rivet-graph-ai-integ", "action": "created", "body": "# Building AI Graphs with Rivet and MongoDB Atlas Vector Search to Power AI Applications\n\n## Introduction\n\nIn the rapidly advancing realm of database technology and artificial intelligence, the convergence of intuitive graphical interfaces and powerful data processing tools has created a new horizon for developers and data scientists. MongoDB Compass, with its rich features and user-friendly design, stands out as a flagship database management tool. The integration of AI capabilities, such as those provided by Rivet AI's graph builder, pushes the envelope further, offering unprecedented ease and efficiency in managing and analyzing data.\nThis article delves into the synergy between MongoDB Atlas, a database as a service, and Rivet AI\u2019s graph builder, exploring how this integration facilitates the visualization and manipulation of data. Rivet is a powerful tool developed by Ironclad, a partner that together with MongoDB wishes to make AI flows as easy and intuitive as possible.\n\nWe will dissect the high-level architecture that allows users to interact with their database in a more dynamic and insightful manner, thereby enhancing their ability to make data-driven decisions swiftly.\n\nMake sure to visit our Github repository for sample codes and a test run of this solution.\n\n## High-level architecture\n\nThe high-level architecture of the MongoDB Atlas and Rivet AI graph builder integration is centered around a seamless workflow that caters to both the extraction of data and its subsequent analysis using AI-driven insights.\n\n----------\n\n**Data extraction and structuring**: At the core of the workflow is the ability to extract and structure data within the MongoDB Atlas database. Users can define and manipulate documents and collections, leveraging MongoDB's flexible schema model. The MongoDB Compass interface allows for real-time querying and indexing, making the retrieval of specific data subsets both intuitive and efficient.\n\n**AI-enhanced analysis**: Once the data is structured, Rivet AI\u2019s graph builder comes into play. It provides a visual representation of operations such as object path extraction, which is crucial for understanding the relationships within the data. The graph builder enables the construction of complex queries and data transformations without the need to write extensive code.\n\n**Vectorization and indexing**: A standout feature is the ability to transform textual or categorical data into vector form using AI, commonly referred to as embedding. These embeddings capture the semantic relationships between data points and are stored back in MongoDB. This vectorization process is pivotal for performing advanced search operations, such as similarity searches and machine learning-based predictions.\n\n**Interactive visualization**: The entire process is visualized interactively through the graph builder interface. 
Each operation, from matching to embedding extraction and storage, is represented as nodes in a graph, making the data flow and transformation steps transparent and easy to modify.\n\n**Search and retrieval**: With AI-generated vectors stored in MongoDB, users can perform sophisticated search queries. Using techniques like k-nearest neighbors (k-NN), the system can retrieve documents that are semantically close to a given query, which is invaluable for recommendation systems, search engines, and other AI-driven applications.\n\n----------\n\n## Installation steps\n**Install Rivet**: To begin using Rivet, visit the official Rivet installation page and follow the instructions to download and install the Rivet application on your system.\n\n**Obtain an OpenAI API key**: Rivet requires an OpenAI API key to access certain AI features. Register for an OpenAI account if you haven't already, and navigate to the API section to generate your key.\n\n**Configure Rivet with OpenAI**: After installing Rivet, open the application and navigate to the settings. Enter your OpenAI API key in the OpenAI settings section. This will allow you to use OpenAI's features within Rivet.\n\n**Install the MongoDB plugin in Rivet**: Within Rivet, go to the plugins section and search for the MongoDB plugin. Install the plugin to enable MongoDB functionality within Rivet. This will involve entering your MongoDB Atlas connection string to connect to your database.\n\n**Connect Rivet to MongoDB Atlas**: Once your Atlas Search index is configured, return to Rivet and use the MongoDB plugin to connect to your MongoDB Atlas cluster by providing the necessary connection string and credentials.\nGet your Atlas cluster connection string and place under \"Settings\" => \"Plugins\":\n\n## Setup steps\n**Set up MongoDB Atlas Search**: Log in to your MongoDB Atlas account and select the cluster where your collection resides. Use MongoDB Compass to connect to your cluster and navigate to the collection you want to index.\n\n**Create a search index in Compass**: In Compass, click on the \"Indexes\" tab within your collection view. Create a new search index by selecting the \"Create Index\" option. Choose the fields you want to index, and configure the index options according to your search requirements.\n\nExample:\n```json\n{\n \"name\": \"default\",\n \"type\": \"vectorSearch\",\n \"fields\":\n {\n \"type\": \"vector\",\n \"path\": \"embedding\",\n \"numDimensions\": 1536,\n \"similarity\": \"dotProduct\"\n }]\n}\n```\n\n**Build and execute queries**: With the setup complete, you can now build queries in Rivet to retrieve and manipulate data in your MongoDB Atlas collection using the search index you created.\n\nBy following these steps, you'll be able to harness the power of MongoDB Atlas Search with the advanced AI capabilities provided by Rivet. Make sure to refer to the official documentation for detailed instructions and troubleshooting tips.\n\n## Simple example of storing and retrieving graph data\n\n### Storing data\n\nIn this example, we have a basic Rivet graph that processes data to be stored in a MongoDB database using the `rivet-plugin-mongodb`. The graph follows these steps:\n\n![Store Embedding and documents using Rivet\n\n**Extract object path**: The graph starts with an object containing product information \u2014 for example, { \"product\": \"shirt\", \"color\": \"green\" }. 
This data is then passed to a node that extracts specific information based on the object path, such as $.color, to be used in further processing.\n\n**Get embedding**: The next node in the graph, labeled 'GET EMBEDDING', uses the OpenAI service to generate an embedding vector from the input data. This embedding represents the extracted feature (in this case, the color attribute) in a numerical form that can be used for machine learning or similarity searches.\n\n**Store vector in MongoDB**: The resulting embedding vector is then sent to the 'STORE VECTOR IN MONGODB' node. This node is configured with the database name search and collection products, where it stores the embedding in a field named embedding. The operation completes successfully, as indicated by the 'COMPLETE' status.\n\n**In MongoDB Compass**, we see the following actions and configurations:\n\n**Index creation**: Under the search.products index, a new index is created for the embedding field. This index is configured for vector searches, with 1536 dimensions and using the `DotProduct` similarity measure. This index is of the type \u201cknnVector,\u201d which is suitable for k-nearest neighbors searches.\n\n**Atlas Search index**: The bottom right corner of the screenshot shows the MongoDB Compass interface for editing the \u201cdefault\u201d index. The provided JSON configuration sets up the index for Atlas Search, with dynamic field mappings.\n\nWith this graph and MongoDB set up, the Rivet application is capable of storing vector data in MongoDB and performing efficient vector searches using MongoDB's Atlas Search feature. This allows users to quickly retrieve documents based on the similarity of vector data, such as finding products with similar characteristics.\n\n### Retrieving data\nIn this Rivet graph setup, we see the process of creating an embedding from textual input and using it to perform a vector search within a MongoDB database:\n\n**Text input**: The graph starts with a text node containing the word \"forest.\" This input could represent a search term or a feature of interest.\n\n**Get embedding**: The 'GET EMBEDDING' node uses OpenAI's service to convert the text input into a numerical vector. This vector has a length of 1536, indicating the dimensionality of the embedding space.\n\n**Search MongoDB for closest vectors with KNN**: With the embedding vector obtained, the graph then uses a node labeled \u201cSEARCH MONGODB FOR CLOSEST VECTORS WITH KNN.\u201d This node is configured with the following parameters:\n\n```\nDatabase: search\nCollection: products\nPath: embedding\nK: 1\n```\nThis configuration indicates that the node will perform a k-nearest neighbor search to find the single closest vector within the products collection of the search database, comparing against the embedding field of the documents stored there.\n\nDifferent colors and their associated embeddings. Each document contains an embedding array, which is compared against the input vector to find the closest match based on the chosen similarity measure (not shown in the image).\n\n### Complex graph workflow for an enhanced grocery shopping experience using MongoDB and embeddings\n\nThis section delves into a sophisticated workflow that leverages Rivet's graph processing capabilities, MongoDB's robust searching features, and the power of machine learning embeddings. To facilitate that, we have used a workflow demonstrated in another tutorial: AI Shop with MongoDB Atlas. 
Through this workflow, we aim to transform a user's grocery list into a curated selection of products, optimized for relevance and personal preferences. This complex graph workflow not only improves user engagement but also streamlines the path from product discovery to purchase, thus offering an enhanced grocery shopping experience.\n\n### High-level flow overview\n\n**Graph input**: The user provides input, presumably a list of items or recipes they want to purchase.\n\n**Search MongoDB collection**: The graph retrieves the available categories as a bounding box to the engineered prompt.\n\n**Prompt creation**: A prompt is generated based on the user input, possibly to refine the search or interact with the user for more details.\n\n**Chat interaction**: The graph accesses OpenAI chat capabilities to produce an AI-based list of a structured JSON. \n\n**JSON extraction and object path extraction**: The relevant data is extracted from the JSON response of the OpenAI Chat.\n\n**Embedding generation**: The data is then processed to create embeddings, which are high-dimensional representations of the items.\n\n**Union of searches**: These embeddings are used to create a union of $search queries in MongoDB, which allows for a more sophisticated search mechanism that can consider multiple aspects of the items, like similarity in taste, price range, or brand preference.\n\n**Graph output**: The built query is outputted back from the graph.\n\n### Detailed breakdown\n**Part 1: Input to MongoDB Search**\n\nThe user input is taken and used to query the MongoDB collection directly. A chat system might be involved to refine this query or to interact with the user. The result of the query is then processed to extract relevant information using JSON and object path extraction methods.\n\n**Part 2: Embedding to union of searches**\n\nThe extracted object from Part 1 is taken and an embedding is generated using OpenAI's service. This embedding is used to create a more complex MongoDB $search query. The code node likely contains the logic to perform an aggregation query in MongoDB that uses the generated embeddings to find the best matches. The output is then formatted, possibly as a list of grocery items that match the user's initial input, enriched by the embeddings.\n\nThis graph demonstrates a sophisticated integration of natural language processing, database querying, and machine learning embedding techniques to provide a user with a rich set of search results. It takes simple text input and transforms it into a detailed query that understands the nuances of user preferences and available products. The final output would be a comprehensive and relevant set of grocery items tailored to the user's needs.\n\n## Connect your application to graph logic\n\nThis code snippet defines an Express.js route that handles `POST` requests to the endpoint `/aiRivetSearch`. 
The route's purpose is to provide an AI-enhanced search functionality for a grocery shopping application, utilizing Rivet for graph operations and MongoDB for data retrieval.\n\n```javascript\n// Define a new POST endpoint for handling AI-enhanced search with Rivet\napp.post('/aiRivetSearch', async (req, res) => {\n\n // Connect to MongoDB using a custom function that handles the connection logic\n db = await connectToDb();\n\n // Extract the search query sent in the POST request body\n const { query } = req.body;\n\n // Logging the query and environment variables for debugging purposes\n console.log(query);\n console.log(process.env.GRAPH_ID);\n console.log(\"Before running graph\");\n\n // Load the Rivet project graph from the filesystem to use for the search\n const project = await loadProjectFromFile('./server/ai_shop.graph.rivet-project');\n\n // Execute the loaded graph with the provided inputs and plugin settings\n const response = await runGraph(project, { \n graph: process.env.GRAPH_ID,\n openAiKey: process.env.OPEN_AI_KEY,\n inputs: {\n input: {\n type: \"string\",\n value: query\n }\n },\n pluginSettings: {\n rivetPluginMongodb: {\n mongoDBConnectionString: process.env.RIVET_MONGODB_CONNECTION_STRING,\n }\n }\n });\n\n // Parse the MongoDB aggregation pipeline from the graph response\n const pipeline = JSON.parse(response.result.value);\n\n // Connect to the 'products' collection in MongoDB and run the aggregation pipeline\n const collection = db.collection('products');\n const result = await collection.aggregate(pipeline).toArray();\n \n // Send the search results back to the client along with additional context\n res.json({ \n \"result\": result, \n \"searchList\": response.list.value, \n prompt: query, \n pipeline: pipeline \n });\n\n});\n```\n\nHere\u2019s a step-by-step explanation:\n\nEndpoint initialization: \n- An asynchronous POST route /aiRivetSearch is set up to handle incoming search queries.\nMongoDB connection:\n- The server establishes a connection to MongoDB using a custom connectToDb function. This function is presumably defined elsewhere in the codebase and handles the specifics of connecting to the MongoDB instance.\nRequest handling:\n- The server extracts the query variable from the request's body. This query is the text input from the user, which will be used to perform the search.\nLogging for debugging:\n- The query and relevant environment variables, such as GRAPH_ID (which likely identifies the specific graph to be used within Rivet), are logged to the console. This is useful for debugging purposes, ensuring the server is receiving the correct inputs.\nGraph loading and execution:\n- The server loads a Rivet project graph from a file in the server's file system.\n- Using Rivet's runGraph function, the loaded graph is executed with the provided inputs (the user's query) and plugin settings. The settings include the openAiKey and the MongoDB connection string from environment variables.\nResponse processing:\n- The result of the graph execution is logged, and the server parses the MongoDB aggregation pipeline from the result. 
The pipeline defines a sequence of data aggregation operations to be performed on the MongoDB collection.\nMongoDB aggregation:\n- The server connects to the \u201cproducts\u2019 collection within MongoDB.\n- It then runs the aggregation pipeline against the collection and waits for the results, converting the cursor returned by the aggregate function to an array with toArray().\nResponse generation:\n- Finally, the server responds to the client's POST request with a JSON object. This object includes the results of the aggregation, the user's original search list, the prompt used for the search, and the aggregation pipeline itself. The inclusion of the prompt and pipeline in the response can be particularly helpful for front-end applications to display the query context or for debugging.\n\nThis code combines AI and database querying to create a powerful search tool within an application, giving the user relevant and personalized results based on their input.\n\nThis and other sample codes can be tested on our Github repository.\n\n## Wrap-up: synergizing MongoDB with Rivet for innovative search solutions\n\nThe integration of MongoDB with Rivet presents a unique opportunity to build sophisticated search solutions that are both powerful and user-centric. MongoDB's flexible data model and powerful aggregation pipeline, combined with Rivet's ability to process and interpret complex data structures through graph operations, pave the way for creating dynamic, intelligent applications.\n\nBy harnessing the strengths of both MongoDB and Rivet, developers can construct advanced search capabilities that not only understand the intent behind user queries but also deliver personalized results efficiently. This synergy allows for the crafting of seamless experiences that can adapt to the evolving needs of users, leveraging the full spectrum of data interactions from input to insight.\n\nAs we conclude, it's clear that this fusion of database technology and graph processing can serve as a cornerstone for future software development \u2014 enabling the creation of applications that are more intuitive, responsive, and scalable. The potential for innovation in this space is vast, and the continued exploration of this integration will undoubtedly yield new methodologies for data management and user engagement.\n\nQuestions? Comments? Join us in the MongoDB Developer Community forum.\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "AI", "Node.js"], "pageDescription": "Join us in a journey through the convergence of database technology and AI in our article 'Building AI Graphs with Rivet and MongoDB Atlas Vector Search'. This guide offers a deep dive into the integration of Rivet AI's graph builder with MongoDB Atlas, showcasing how to visualize and manipulate data for AI applications. 
Whether you're a developer or a data scientist, this article provides valuable insights and practical steps for enhancing data-driven decision-making and creating dynamic, AI-powered solutions.", "contentType": "Tutorial"}, "title": "Building AI Graphs with Rivet and MongoDB Atlas Vector Search to Power AI Applications", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/interactive-rag-mongodb-atlas-function-calling-api", "action": "created", "body": "# Interactive RAG with MongoDB Atlas + Function Calling API\n\n## Introduction: Unveiling the Power of Interactive Knowledge Discovery\n\nImagine yourself as a detective investigating a complex case. Traditional retrieval-augmented generation (RAG) acts as your static assistant, meticulously sifting through mountains of evidence based on a pre-defined strategy. While helpful, this approach lacks the flexibility needed for today's ever-changing digital landscape.\n\nEnter interactive RAG \u2013 the next generation of information access. It empowers users to become active knowledge investigators by:\n\n* **Dynamically adjusting retrieval strategies:** Tailor the search to your specific needs by fine-tuning parameters like the number of sources, chunk size, and retrieval algorithms.\n* **Staying ahead of the curve:** As new information emerges, readily incorporate it into your retrieval strategy to stay up-to-date and relevant.\n* **Enhancing LLM performance:** Optimize the LLM's workload by dynamically adjusting the information flow, leading to faster and more accurate analysis.\n\nBefore you continue, make sure you understand the basics of:\n\n- LLMs.\n- RAG.\n- Using a vector database.\n\n_)\n\n## Optimizing your retrieval strategy: static vs. interactive RAG\n\nChoosing between static and interactive retrieval-augmented generation approaches is crucial for optimizing your application's retrieval strategy. Each approach offers unique advantages and disadvantages, tailored to specific use cases:\n\n**Static RAG:** A static RAG approach is pre-trained on a fixed knowledge base, meaning the information it can access and utilize is predetermined and unchanging. 
This allows for faster inference times and lower computational costs, making it ideal for applications requiring real-time responses, such as chatbots and virtual assistants.\n\n**Pros:**\n\n* **Faster response:** Pre-loaded knowledge bases enable rapid inference, ideal for real-time applications like chatbots and virtual assistants.\n* **Lower cost:** Static RAG requires fewer resources for training and maintenance, making it suitable for resource-constrained environments.\n* **Controlled content:** Developers have complete control over the model's knowledge base, ensuring targeted and curated responses in sensitive applications.\n* **Consistent results:** Static RAG provides stable outputs even when underlying data changes, ensuring reliability in data-intensive scenarios.\n\n**Cons:**\n\n* **Limited knowledge:** Static RAG is confined to its pre-loaded knowledge, limiting its versatility compared to interactive RAG accessing external data.\n* **Outdated information:** Static knowledge bases can become outdated, leading to inaccurate or irrelevant responses if not frequently updated.\n* **Less adaptable:** Static RAG can struggle to adapt to changing user needs and preferences, limiting its ability to provide personalized or context-aware responses.\n\n**Interactive RAG:** An interactive RAG approach is trained on a dynamic knowledge base, allowing it to access and process real-time information from external sources such as online databases and APIs. This enables it to provide up-to-date and relevant responses, making it suitable for applications requiring access to constantly changing data.\n\n**Pros:**\n\n* **Up-to-date information:** Interactive RAG can access and process real-time external information, ensuring current and relevant responses, which is particularly valuable for applications requiring access to frequently changing data.\n* **Greater flexibility:** Interactive RAG can adapt to user needs and preferences by incorporating feedback and interactions into their responses, enabling personalized and context-aware experiences.\n* **Vast knowledge base:** Access to external information provides an almost limitless knowledge pool, allowing interactive RAG to address a wider range of queries and deliver comprehensive and informative responses.\n\n**Cons:**\n\n* **Slower response:** Processing external information increases inference time, potentially hindering real-time applications.\n* **Higher cost:** Interactive RAG requires more computational resources, making it potentially unsuitable for resource-constrained environments.\n* **Bias risk:** External information sources may contain biases or inaccuracies, leading to biased or misleading responses if not carefully mitigated.\n* **Security concerns:** Accessing external sources introduces potential data security risks, requiring robust security measures to protect sensitive information.\n\n### Choosing the right approach\n\nWhile this tutorial focuses specifically on interactive RAG, the optimal approach depends on your application's specific needs and constraints. Consider:\n\n* **Data size and update frequency:** Static models are suitable for static or infrequently changing data, while interactive RAG is necessary for frequently changing data.\n* **Real-time requirements:** Choose static RAG for applications requiring fast response times. 
For less critical applications, interactive RAG may be preferred.\n* **Computational resources:** Evaluate your available resources when choosing between static and interactive approaches.\n* **Data privacy and security:** Ensure your chosen approach adheres to all relevant data privacy and security regulations.\n\n## Chunking: a hidden hero in the rise of GenAI\n\nNow, let's put our detective hat back on. If you have a mountain of evidence available for a particular case, you wouldn't try to analyze every piece of evidence at once, right? You'd break it down into smaller, more manageable pieces \u2014 documents, witness statements, physical objects \u2014 and examine each one carefully. In the world of large language models, this process of breaking down information is called _chunking_, and it plays a crucial role in unlocking the full potential of retrieval-augmented generation.\n\nJust like a detective, an LLM can't process a mountain of information all at once. Chunking helps it break down text into smaller, more digestible pieces called _chunks_. Think of these chunks as bite-sized pieces of knowledge that the LLM can easily analyze and understand. This allows the LLM to focus on specific sections of the text, extract relevant information, and generate more accurate and insightful responses.\n\nHowever, the size of each chunk isn't just about convenience for the LLM; it also significantly impacts the _retrieval vector relevance score_, a key metric in evaluating the effectiveness of chunking strategies. The process involves converting text to vectors, measuring the distance between them, utilizing ANN/KNN algorithms, and calculating a score for the generated vectors.\n\nHere is an example: Imagine asking \"What is a mango?\" and the LLM dives into its knowledge base, encountering these chunks:\n\n**High scores:**\n\n* **Chunk:** \"Mango is a tropical stone fruit with a sweet, juicy flesh and a single pit.\" (Score: 0.98)\n* **Chunk:** \"In India, mangoes are revered as the 'King of Fruits' and hold cultural significance.\" (Score: 0.92)\n* **Chunk:** \"The mango season brings joy and delicious treats like mango lassi and mango ice cream.\" (Score: 0.85)\n\nThese chunks directly address the question, providing relevant information about the fruit's characteristics, cultural importance, and culinary uses. High scores reflect their direct contribution to answering your query.\n\n**Low scores:**\n\n* **Chunk:** \"Volcanoes spew molten lava and ash, causing destruction and reshaping landscapes.\" (Score: 0.21)\n* **Chunk:** \"The stock market fluctuates wildly, driven by economic factors and investor sentiment.\" (Score: 0.42)\n* **Chunk:** \"Mitochondria, the 'powerhouses of the cell,' generate energy for cellular processes.\" (Score: 0.55)\n\nThese chunks, despite containing interesting information, are completely unrelated to mangoes. They address entirely different topics, earning low scores due to their lack of relevance to the query.\n\nCheck out ChunkViz v0.1 to get a feel for how chunk size (character length) breaks down text.\n\n stands out for GenAI applications. Imagine MongoDB as a delicious cake you can both bake and eat. Not only does it offer the familiar features of MongoDB, but it also lets you store and perform mathematical operations on your vector embeddings directly within the platform. 
This eliminates the need for separate tools and streamlines the entire process.\n\nBy leveraging the combined power of function calling API and MongoDB Atlas, you can streamline your content ingestion process and unlock the full potential of vector embeddings for your GenAI applications.\n\n, OpenAI or Hugging Face.\n\n ```python\n # Chunk Ingest Strategy\n self.text_splitter = RecursiveCharacterTextSplitter(\n # Set a really small chunk size, just to show.\n \nchunk_size=4000, # THIS CHUNK SIZE IS FIXED - INGEST CHUNK SIZE DOES NOT CHANGE\n chunk_overlap=200, # CHUNK OVERLAP IS FIXED\n length_function=len,\n add_start_index=True,\n )\n \n # load data from webpages using Playwright. One document will be created for each webpage\n \n # split the documents using a text splitter to create \"chunks\"\n \n loader = PlaywrightURLLoader(urls=urls, remove_selectors=\"header\", \"footer\"]) \n documents = loader.load_and_split(self.text_splitter)\n self.index.add_documents(\n documents\n ) \n ```\n\n2. **Vector index**: When employing vector search, it's necessary to [create a search index. This process entails setting up the vector path, aligning the dimensions with your chosen model, and selecting a vector function for searching the top K-nearest neighbors. \n ```python\n {\n \"name\": \"\",\n \"type\": \"vectorSearch\",\n \"fields\":\n {\n \"type\": \"vector\",\n \"path\": ,\n \"numDimensions\": ,\n \"similarity\": \"euclidean | cosine | dotProduct\"\n },\n ...\n ]\n }\n ```\n3. **Chunk retrieval**: Once the vector embeddings are indexed, an aggregation pipeline can be created on your embedded vector data to execute queries and retrieve results. This is accomplished using the [$vectorSearch operator, a new aggregation stage in Atlas.\n\n ```python\n def recall(self, text, n_docs=2, min_rel_score=0.25, chunk_max_length=800,unique=True):\n #$vectorSearch\n print(\"recall=>\"+str(text))\n response = self.collection.aggregate(\n {\n \"$vectorSearch\": {\n \"index\": \"default\",\n \n \"queryVector\": self.gpt4all_embd.embed_query(text), #GPT4AllEmbeddings()\n \"path\": \"embedding\",\n #\"filter\": {},\n \n \"limit\": 15, #Number (of type int only) of documents to return in the results. Value can't exceed the value of numCandidates.\n \n \"numCandidates\": 50 #Number of nearest neighbors to use during the search. You can't specify a number less than the number of documents to return (limit).\n }\n },\n {\n \"$addFields\":\n {\n \"score\": {\n \"$meta\": \"vectorSearchScore\"\n }\n }\n },\n {\n \"$match\": {\n \"score\": {\n \"$gte\": min_rel_score\n }\n }\n },{\"$project\":{\"score\":1,\"_id\":0, \"source\":1, \"text\":1}}])\n tmp_docs = []\n str_response = []\n for d in response:\n if len(tmp_docs) == n_docs:\n break\n if unique and d[\"source\"] in tmp_docs:\n continue\n tmp_docs.append(d[\"source\"])\n \n str_response.append({\"URL\":d[\"source\"],\"content\":d[\"text\"][:chunk_max_length],\"score\":d[\"score\"]})\n \n kb_output = f\"Knowledgebase Results[{len(tmp_docs)}]:\\n```{str(str_response)}```\\n## \\n```SOURCES: \"+str(tmp_docs)+\"```\\n\\n\"\n self.st.write(kb_output)\n return str(kb_output)\n ```\n\nIn this tutorial, we will mainly be focusing on the **CHUNK RETRIEVAL** strategy using the function calling API of LLMs and MongoDB Atlas as our **[data platform**.\n\n## Key features of MongoDB Atlas\nMongoDB Atlas offers a robust vector search platform with several key features, including:\n\n1. 
**$vectorSearch operator:**\nThis powerful aggregation pipeline operator allows you to search for documents based on their vector embeddings. You can specify the index to search, the query vector, and the similarity metric to use. $vectorSearch provides efficient and scalable search capabilities for vector data.\n\n2. **Flexible filtering:**\nYou can combine $vectorSearch with other aggregation pipeline operators like $match, $sort, and $limit to filter and refine your search results. This allows you to find the most relevant documents based on both their vector embeddings and other criteria.\n\n3. **Support for various similarity metrics:**\nMongoDB Atlas supports different similarity metrics like cosine similarity and euclidean distance, allowing you to choose the best measure for your specific data and task.\n\n4. **High performance:**\nThe vector search engine in MongoDB Atlas is optimized for large datasets and high query volumes, ensuring efficient and responsive search experiences.\n\n5. **Scalability:**\nMongoDB Atlas scales seamlessly to meet your growing needs, allowing you to handle increasing data volumes and query workloads effectively.\n\n**Additionally, MongoDB Atlas offers several features relevant to its platform capabilities:**\n\n* **Global availability:**\nYour data is stored in multiple data centers around the world, ensuring high availability and disaster recovery.\n* **Security:**\nMongoDB Atlas provides robust security features, including encryption at rest and in transit, access control, and data audit logging.\n* **Monitoring and alerting:**\nMongoDB Atlas provides comprehensive monitoring and alerting features to help you track your cluster's performance and identify potential issues.\n* **Developer tools:**\nMongoDB Atlas offers various developer tools and APIs to simplify development and integration with your applications.\n\n## OpenAI function calling:\nOpenAI's function calling is a powerful capability that enables users to seamlessly interact with OpenAI models, such as GPT-3.5, through programmable commands. This functionality allows developers and enthusiasts to harness the language model's vast knowledge and natural language understanding by incorporating it directly into their applications or scripts. Through function calling, users can make specific requests to the model, providing input parameters and receiving tailored responses. This not only facilitates more precise and targeted interactions but also opens up a world of possibilities for creating dynamic, context-aware applications that leverage the extensive linguistic capabilities of OpenAI's models. 
Whether for content generation, language translation, or problem-solving, OpenAI function calling offers a flexible and efficient way to integrate cutting-edge language processing into various domains.\n\n## Key features of OpenAI function calling:\n- Function calling allows you to connect large language models to external tools.\n- The Chat Completions API generates JSON that can be used to call functions in your code.\n- The latest models have been trained to detect when a function should be called and respond with JSON that adheres to the function signature.\n- Building user confirmation flows is recommended before taking actions that impact the world on behalf of users.\n- Function calling can be used to create assistants that answer questions by calling external APIs, convert natural language into API calls, and extract structured data from text.\n- The basic sequence of steps for function calling involves calling the model, parsing the JSON response, calling the function with the provided arguments, and summarizing the results back to the user.\n- Function calling is supported by specific model versions, including GPT-4 and GPT-3.5-turbo.\n- Parallel function calling allows multiple function calls to be performed together, reducing round-trips with the API.\n- Tokens are used to inject functions into the system message and count against the model's context limit and billing.\n\n.\n\n## Function calling API basics: actions\n\nActions are functions that an agent can invoke. There are two important design considerations around actions:\n\n * Giving the agent access to the right actions\n * Describing the actions in a way that is most helpful to the agent\n\n## Crafting actions for effective agents\n\n**Actions are the lifeblood of an agent's decision-making.** They define the options available to the agent and shape its interactions with the environment. Consequently, designing effective actions is crucial for building successful agents.\n\nTwo key considerations guide this design process:\n\n1. **Access to relevant actions:** Ensure the agent has access to actions necessary to achieve its objectives. Omitting critical actions limits the agent's capabilities and hinders its performance.\n2. **Action description clarity:** Describe actions in a way that is informative and unambiguous for the agent. Vague or incomplete descriptions can lead to misinterpretations and suboptimal decisions.\n\nBy carefully designing actions that are both accessible and well-defined, you equip your agent with the tools and knowledge necessary to navigate its environment and achieve its objectives.\n\nFurther considerations:\n\n* **Granularity of actions:** Should actions be high-level or low-level? High-level actions offer greater flexibility but require more decision-making, while low-level actions offer more control but limit adaptability.\n* **Action preconditions and effects:** Clearly define the conditions under which an action can be taken and its potential consequences. This helps the agent understand the implications of its choices.\n\nIf you don't give the agent the right actions and describe them in an effective way, you won\u2019t be able to build a working agent.\n\n_)\n\nAn LLM is then called, resulting in either a response to the user or action(s) to be taken. If it is determined that a response is required, then that is passed to the user, and that cycle is finished. If it is determined that an action is required, that action is then taken, and an observation (action result) is made. 
That action and corresponding observation are added back to the prompt (we call this an \u201cagent scratchpad\u201d), and the loop resets \u2014 i.e., the LLM is called again (with the updated agent scratchpad).\n\n## Getting started\n\nClone the demo Github repository.\n```bash\ngit clone git@github.com:ranfysvalle02/Interactive-RAG.git\n```\n\nCreate a new Python environment.\n```bash\npython3 -m venv env\n```\n\nActivate the new Python environment.\n```bash\nsource env/bin/activate\n```\n\nInstall the requirements.\n```bash\npip3 install -r requirements.txt\n```\nSet the parameters in params.py:\n```bash\n# MongoDB\nMONGODB_URI = \"\"\nDATABASE_NAME = \"genai\"\nCOLLECTION_NAME = \"rag\"\n\n# If using OpenAI\nOPENAI_API_KEY = \"\"\n\n# If using Azure OpenAI\n#OPENAI_TYPE = \"azure\"\n#OPENAI_API_VERSION = \"2023-10-01-preview\"\n#OPENAI_AZURE_ENDPOINT = \"https://.openai.azure.com/\"\n#OPENAI_AZURE_DEPLOYMENT = \"\"\n\n```\nCreate a Search index with the following definition:\n```JSON\n{\n \"type\": \"vectorSearch\",\n \"fields\": \n{\n \"numDimensions\": 384,\n \"path\": \"embedding\",\n \"similarity\": \"cosine\",\n \"type\": \"vector\"\n}\n ]\n}\n```\n\nSet the environment.\n```bash\nexport OPENAI_API_KEY=\n```\n\nTo run the RAG application:\n\n```bash\nenv/bin/streamlit run rag/app.py\n```\nLog information generated by the application will be appended to app.log.\n\n## Usage\nThis bot supports the following actions: answering questions, searching the web, reading URLs, removing sources, listing all sources, viewing messages, and resetting messages.\n\nIt also supports an action called iRAG that lets you dynamically control your agent's RAG strategy.\n\nEx: \"set RAG config to 3 sources and chunk size 1250\" => New RAG config:{'num_sources': 3, 'source_chunk_size': 1250, 'min_rel_score': 0, 'unique': True}.\n\nIf the bot is unable to provide an answer to the question from data stored in the Atlas Vector store and your RAG strategy (number of sources, chunk size, min_rel_score, etc), it will initiate a web search to find relevant information. You can then instruct the bot to read and learn from those results.\n\n## Demo\n\nLet's start by asking our agent a question \u2014 in this case, \"What is a mango?\" The first thing that will happen is it will try to \"recall\" any relevant information using vector embedding similarity. It will then formulate a response with the content it \"recalled\" or will perform a web search. Since our knowledge base is currently empty, we need to add some sources before it can formulate a response.\n\n![DEMO - Ask a Question][7]\n\nSince the bot is unable to provide an answer using the content in the vector database, it initiated a Google search to find relevant information. We can now tell it which sources it should \"learn.\" In this case, we'll tell it to learn the first two sources from the search results.\n\n![DEMO - Add a source][8]\n\n## Change RAG strategy\n\nNext, let's modify the RAG strategy! Let's make it only use one source and have it use a small chunk size of 500 characters.\n\n![DEMO - Change RAG strategy part 1][9]\n\nNotice that though it was able to retrieve a chunk with a fairly high relevance score, it was not able to generate a response because the chunk size was too small and the chunk content was not relevant enough to formulate a response. 
Since it could not generate a response with the small chunk, it performed a web search on the user's behalf.\n\nLet's see what happens if we increase the chunk size to 3,000 characters instead of 500.\n\n![DEMO - Change RAG strategy part 2][10]\n\nNow, with a larger chunk size, it was able to accurately formulate the response using the knowledge from the vector database!\n\n## List all sources\n\nLet's see what's available in the knowledge base of the agent by asking it, \u201cWhat sources do you have in your knowledge base?\u201d\n\n![DEMO - List all sources][11]\n\n## Remove a source of information\n\nIf you want to remove a specific resource, you could do something like:\n```\nUSER: remove source 'https://www.oracle.com' from the knowledge base\n```\n\nTo remove all the sources in the collection, we could do something like:\n\n![DEMO - Remove ALL sources][12]\n\nThis demo has provided a glimpse into the inner workings of our AI agent, showcasing its ability to learn and respond to user queries in an interactive manner. We've witnessed how it seamlessly combines its internal knowledge base with real-time web search to deliver comprehensive and accurate information. The potential of this technology is vast, extending far beyond simple question-answering. None of this would be possible without the magic of the function calling API.\n\n## Embracing the future of information access with interactive RAG\n\nThis post has explored the exciting potential of interactive retrievalaugmented generation (RAG) with the powerful combination of MongoDB Atlas and function calling API. We've delved into the crucial role of chunking, embedding, and retrieval vector relevance score in optimizing RAG performance, unlocking its true potential for information retrieval and knowledge management.\n\nInteractive RAG, powered by the combined forces of MongoDB Atlas and function calling API, represents a significant leap forward in the realm of information retrieval and knowledge management. By enabling dynamic adjustment of the RAG strategy and seamless integration with external tools, it empowers users to harness the full potential of LLMs for a truly interactive and personalized experience.\n\nIntrigued by the possibilities? 
Explore the full source code for the interactive RAG application and unleash the power of RAG with MongoDB Atlas and function calling API in your own projects!\n\nTogether, let's unlock the transformative potential of this potent combination and forge a future where information is effortlessly accessible and knowledge is readily available to all.\n\nView is the [full source code for the interactive RAG application using MongoDB Atlas and function calling API.\n\n### Additional MongoDB Resources\n\n- RAG with Atlas Vector Search, LangChain, and OpenAI\n- Taking RAG to Production with the MongoDB Documentation AI Chatbot\n- What is Artificial Intelligence (AI)?\n- Unlock the Power of Semantic Search with MongoDB Atlas Vector Search\n- Machine Learning in Healthcare:\nReal-World Use Cases and What You Need to Get Started\n- What is Generative AI?\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1c80f212af2260c7/6584ad159fa6cfce2b287389/interactive-rag-1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt56fc9b71e3531a49/6584ad51a8ee4354d2198048/interactive-rag-2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte74948e1721bdaec/6584ad51dc76626b2c7e977f/interactive-rag-3.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd026c60b753c27e3/6584ad51b0fbcbe79962669b/interactive-rag-4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8e9d94c7162ff93e/6584ad501f8952b2ab911de9/interactive-rag-5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta75a002d93bb01e6/6584ad50c4b62033affb624e/interactive-rag-6.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltae07f2c87cf53157/6584ad50b782f0967d583f29/interactive-rag-7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5c2eb90c4f462888/6584ad503ea3616a585750cd/interactive-rag-8.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt227dec581a8ec159/6584ad50bb2e10e5fb00f92d/interactive-rag-9.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd451cf7915c08958/6584ad503ea36155675750c9/interactive-rag-10.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt18aff0657fb9b496/6584ad509fa6cf3cca28738e/interactive-rag-11.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcd723718e3fb583f/6584ad4f0543c5e8fe8f0ef6/interactive-rag-12.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI"], "pageDescription": "Explore the cutting-edge of knowledge discovery with Interactive Retrieval-Augmented Generation (RAG) using MongoDB Atlas and Function Calling API. Learn how dynamic retrieval strategies, enhanced LLM performance, and real-time data integration can revolutionize your digital investigations. Dive into practical examples, benefits, and the future of interactive RAG in our in-depth guide. Perfect for developers and AI enthusiasts seeking to leverage advanced information access and management techniques.", "contentType": "Tutorial"}, "title": "Interactive RAG with MongoDB Atlas + Function Calling API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-data-federation-azure", "action": "created", "body": "# Atlas Data Federation with Azure Blob Storage\n\nFor as long as you have been reviewing restaurants, you've been storing your data in MongoDB. 
The plethora of data you've gathered is so substantial, you decide to team up with your friends to host this data online, so other restaurant-goers can decide where to eat, informed by your detailed insights. But your friend has been storing their data in Azure Blob storage. They use JSON now, but they have reviews upon reviews stored as `.csv` files. How can we get all this data pooled together without the often arduous process of migrating databases or transforming data? With MongoDB's Data Federation, you can combine all your data into one unified view, allowing you to easily search for the best French diner in your borough. \n\nThis tutorial will walk you through the steps of combining your MongoDB database with your Azure Blob storage, utilizing MongoDB's Data Federation.\n\n## Prerequisites\nBefore you begin, you'll need a few prerequisites to follow along with this tutorial, including:\n- A MongoDB Atlas account, if you don't have one already\n- A Microsoft Azure account with a storage account and container set up. If you don't have this, follow the steps in the Microsoft documentation for the storage account and the container.\n- Azure CLI, or you can install Azure PowerShell, but this tutorial uses Azure CLI. Sign in and configure your command line tool following the steps in the documentation for Azure CLI and Azure PowerShell.\n- Node.js 18 or higher and npm: Make sure you have Node.js and npm (Node.js package manager) installed. Node.js is the runtime environment required to run your JavaScript code server-side. npm is used to manage the dependencies.\n\n### Add your sample data\nTo have something to view when your data stores are connected, let's add some reviews to your blob. First, you'll add a review for a new restaurant you just reviewed in Manhattan. Create a file called example1.json, and copy in the following:\n\n```json\n{\n  \"address\": {\n    \"building\": \"518\",\n    \"coord\": [\n      { \"$numberDouble\": \"-74.006220\" },\n      { \"$numberDouble\": \"40.733740\" }\n    ],\n    \"street\": \"Hudson Street\",\n    \"zipcode\": \"10014\"\n  },\n  \"borough\": \"Manhattan\",\n  \"cuisine\": [\n    \"French\",\n    \"Filipino\"\n  ],\n  \"grades\": [\n    {\n      \"date\": { \"$date\": { \"$numberLong\": \"1705403605904\" } },\n      \"grade\": \"A\",\n      \"score\": { \"$numberInt\": \"12\" }\n    }\n  ],\n  \"name\": \"Justine's on Hudson\",\n  \"restaurant_id\": \"40356020\"\n}\n```\n\nUpload this file as a blob to your container:\n```bash\naz storage blob upload --account-name <StorageAccountName> --container-name <ContainerName> --name <BlobName> --file <PathToFile>\n```\n\nHere, `StorageAccountName` and `ContainerName` identify the storage account and container you created earlier, `BlobName` is the name you want to assign to your blob (just use the same name as the file), and `PathToFile` is the path to the file you want to upload (example1.json).\n\nBut you're not just restricted to JSON in your federated database. You're going to create another file, called example2.csv. Copy the following data into the file:\n\n```csv\nRestaurant ID,Name,Cuisine,Address,Borough,Latitude,Longitude,Grade Date,Grade,Score\n40356030,Sardi's,Continental,\"234 W 44th St, 10036\",Manhattan,40.757800,-73.987500,1927-09-09,A,11\n```\n\nLoad example2.csv to your blob using the same command as above.\n\nYou can list the blobs in your container to verify that your files were uploaded:\n\n```bash\naz storage blob list --account-name <StorageAccountName> --container-name <ContainerName> --output table\n```\n\n## Connect your databases using Data Federation\nThe first steps will be getting your MongoDB cluster set up. For this tutorial, you're going to create a free M0 cluster. 
Once this is created, click \"Load Sample Dataset.\" In the sample dataset, you'll see a database called `sample_restaurants` with a collection called `restaurants`, containing thousands of restaurants with reviews. This is the collection you'll focus on.\n\nNow that you have your Azure Storage and MongoDB cluster set up, you are ready to deploy your federated database instance.\n\n 1. Select \"Data Federation\" from the left-hand navigation menu.\n 2. Click \"Create New Federated Database\" and, from the dropdown, select \"Set up manually.\"\n 3. Choose Azure as your cloud provider and give your federated database instance a name.\n\nNext, you'll query your federated data from a simple Node.js application. This tutorial uses Node.js, but you can head over to the MongoDB Developer Center, where you'll find a whole variety of tutorials, or explore MongoDB with other languages.\n\nBefore you start, make sure you have Node.js installed in your environment. \n\n 1. Set up a new Node.js project:\n - Create a new directory for your project.\n - Initialize a new Node.js project by running `npm init -y` in your terminal within that directory.\n - Install the MongoDB Node.js driver by running `npm install mongodb`.\n 2. Create a JavaScript file:\n - Create a file named searchApp.js in your project directory.\n 3. Implement the application:\n - Edit searchApp.js to include the following code, which connects to your MongoDB database and creates a client.\n ```javascript\n const { MongoClient } = require('mongodb');\n \n // Connection URL\n const url = 'yourConnectionString';\n // Database Name\n const dbName = 'yourDatabaseName';\n // Collection Name\n const collectionName = 'yourCollectionName';\n\n // Create a new MongoClient\n const client = new MongoClient(url);\n ```\n - Now, create a function called `searchDatabase` that takes an input string and field from the command line and searches for documents containing that string in the specified field.\n ```javascript\n // Function to search for a string in the database\n async function searchDatabase(fieldName, searchString) {\n try {\n await client.connect();\n console.log('Connected successfully to server');\n const db = client.db(dbName);\n const collection = db.collection(collectionName);\n \n // Dynamic query based on field name (computed property key)\n const query = { [fieldName]: { $regex: searchString, $options: \"i\" } };\n const foundDocuments = await collection.find(query).toArray();\n console.log('Found documents:', foundDocuments);\n } finally {\n await client.close();\n }\n }\n ```\n - Lastly, create a main function to control the flow of the application.\n ```javascript\n // Main function to control the flow\n async function main() {\n // Input from command line arguments\n const fieldName = process.argv[2];\n const searchString = process.argv[3];\n\n if (!fieldName || !searchString) {\n console.log('Please provide both a field name and a search string as arguments.');\n return;\n }\n \n searchDatabase(fieldName, searchString)\n .catch(console.error);\n }\n \n main().catch(console.error);\n ```\n 4. Run your application with `node searchApp.js fieldName \"searchString\"`.\n - The script expects two command line arguments: the field name and the search string. It constructs a dynamic query object using these arguments, where the field name is determined by the first argument, and the search string is used to create a regex query.\n\nIn the terminal, you can type the query `node searchApp.js \"Restaurant ID\" \"40356030\"` to find your `example2.csv` file as if it was stored in a MongoDB database. 
Or maybe `node searchApp.js borough \"Manhattan\"`, to find all restaurants in your virtual database (across all your databases) in Manhattan. You're not just limited to simple queries. Most operators and aggregations are available on your federated database. There are some limitations and variations in the MongoDB Operators and Aggregation Pipeline Stages on your federated database that you can read about in our [documentation.\n\n## Conclusion\nBy following the steps outlined, you've learned how to set up Azure Blob storage, upload diverse data formats like JSON and CSV, and connect these with your MongoDB dataset using a federated database. \n\nThis tutorial highlights the potential of data federation in breaking down data silos, promoting data interoperability, and enhancing the overall data analysis experience. Whether you're a restaurant reviewer looking to share insights or a business seeking to unify disparate data sources, MongoDB's Data Federation along with Azure Blob storage provides a robust, scalable, and user-friendly platform to meet your data integration needs.\n\nAre you ready to start building with Atlas on Azure? Get started for free today with MongoDB Atlas on Azure Marketplace. If you found this tutorial useful, make sure to check out some more of our articles in Developer Center, like MongoDB Provider for EF Core Tutorial. Or pop over to our Community Forums to see what other people in the community are building!\n\n---\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8526ba2a8dccdc22/65df43a9747141e57e0a356f/image2.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt12c4748f967ddede/65df43a837599950d070b53f/image1.png", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Azure"], "pageDescription": "A tutorial to guide you through integrating your Azure storage with MongoDB using Data Federation", "contentType": "Tutorial"}, "title": "Atlas Data Federation with Azure Blob Storage", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-partitioning-strategies", "action": "created", "body": "# Realm Partitioning Strategies\n\nRealm partitioning can be used to control what data is synced to each mobile device, ensuring that your app is efficient, performant, and secure. This article will help you pick the right partitioning strategy for your app.\n\nMongoDB Realm Sync stores the superset of your application data in the cloud using MongoDB Atlas. The simplest strategy is that every instance of your mobile app contains the full database, but that quickly consumes a lot of space on the users' devices and makes the app slow to start while it syncs all of the data for the first time. Alternative strategies include partitioning by:\n\n- User\n- Group/team/store\n- Chanel/room/topic\n- Geographic region\n- Bucket of time\n- Any combination of these\n\nThis article covers:\n\n- Introduction to MongoDB Realm Sync Partitioning\n- Choosing the Right Strategy(ies) for Your App\n- Setting Up Partitions in the Backend Realm App\n- Accessing Realm Partitions from Your Mobile App (iOS or Android)\n- Resources\n- Summary\n\n## Prerequisites\n\nThe first part of the article has no prerequisites.\n\nThe second half shows how to set up partitioning and open Realms for a partition. 
If you want to try this in your own apps and you haven't worked with Realm before, then it would be helpful to try a tutorial for iOS or Android first.\n\n## Introduction to MongoDB Realm Sync Partitioning\n\nMongoDB Realm Sync lets a \"user\" access their application data from multiple mobile devices, whether they're online or disconnected from the internet. The data for all users is stored in MongoDB Atlas. When a user is logged into a device and has a network connection, the data they care about (what that means and how you control it is the subject of this article) is synchronized. When the device is offline, changes are stored locally and then synced when it's back online.\n\nThere may be cases where all data should be made available to all users, but I'd argue that it's rare that there isn't at least some data that shouldn't be universally shared. E.g., in a news app, the user may select which topics they want to follow, and set flags to indicate which articles they've already read\u2014that data shouldn't be seen by others.\n\n>In this article, I'm going to refer to \"users\", but for some apps, you could substitute in \"store,\" \"meeting room,\" \"device,\" \"location,\" ...\n\n**Why bother limiting what data is synced to a mobile app?** There are a couple of reasons:\n\n- Capacity: Why waste limited resources on a mobile device to store data that the user has no interest in?\n- Security: If a user isn't entitled to see a piece of data, it's safest not to store it on their device.\n\nThe easiest way to understand how partitions work in MongoDB Realm Sync is to look at an example.\n\n \n MongoDB Realm Sync Partitions\n\nThis example app works with shapes. The mobile app defines classes for circles, stars and triangles. In Atlas, each type of shape is stored in a distinct collection (`circles`, `stars` and `triangles`). Each of the shapes (regardless of which of the collections it's stored in) has a `color` attribute.\n\nWhen using the mobile app, the user is interested in working with a color. It could be that the user is only allowed to work with a single color, or it could be that the user can pick what color they currently want to work with. The backend Realm app gets to control which colors a given user is permitted to access.\n\nThe developer implements this by designating the `color` attribute as the partition key.\n\nA view in the mobile app can then open a synced Realm by specifying the color it wants to work with. The backend Realm app will then sync all shapes of that color to the mobile Realm, or it will reject the request if the user doesn't have permission to access that partition.\n\nThere are some constraints on the partition key:\n\n- The application must provide an exact match. It can specify that the Realm it's opening should contain the *blue* colored shapes, or that it should contain the *green* shapes. The app cannot open a synced Realm that contains both the *red* and *green* shapes.\n- The app must specify an exact match for the partition key. It cannot open a synced Realm for a range or pattern of partition key values. E.g. it can't specify \"all colors except *red*\" or \"all dates in the last week\".\n- Every collection must use the same partition key. In other words, you can't use `color` as the partition key for collections in the `shapes` database and `username` for collections in the `user` database. 
You'll see later that there's a technique to work around this.\n- You **can** change the value of the partition key (convert a `red` triangle into a `green` triangle), but it's inefficient as it results in the existing document being deleted and a new one being inserted.\n- The partition key must be one of these types:\n - `String`\n - `ObjectID`\n - `Int`\n - `Long`\n\nThe mobile app can ask to open a Realm using any value for the partition key, but it might be that the user isn't allowed access to that partition. For security, that check is performed in the backend Realm application. The developer can provide rules to decide if a user can access a partition, and the decision could be any one of:\n\n- No.\n- Yes, but only for reads.\n- Yes, for both reads and writes.\n\nThe permission rules can be anything from a simple expression that matches the partition key value, to a complex function that cross-references other collections.\n\nIn reality, the rules don't need to be based on the user. For example, the developer could decide that the \"happy hour\" chat room (partition) can only be opened on Fridays.\n\n## Choosing the Right Strategy(ies) for Your App\n\nThis section takes a deeper look at some of the partitioning strategies that you can adopt (or that may inspire you to create a bespoke approach). As you read through these strategies, remember that you can combine them within a single app. This is the meta-strategy we'll look at last.\n\n### Firehose\n\nThis is the simplest strategy. All of the documents/objects are synced to every instance of the app. This is a decision **not** to partition the data.\n\nYou might adopt this strategy for an NFL (National Football League) scores app where you want everyone to be able to view every result from every game in history\u2014even when the app is offline.\n\nConsider the two main reasons for partitioning:\n\n- **Capacity**: There have been less than 20,000 NFL games ever played, and the number is growing by less than 300 per year. The data for each game contains only the date, names of the two teams, and the score, and so the total volume of data is modest. It's reasonable to store all of this data on a mobile device.\n- **Security/Privacy**: There's nothing private in this data, and so it's safe to allow anyone on any mobile device to view it. We don't allow the mobile app to make any changes to the data. These are simple Realm Sync rules to define in the backend Realm app.\n\nEven though this strategy doesn't require partitioning, you must still designate a partition key when configuring Realm Sync. We want all of the documents/objects to be in the same partition and so we can add an attribute named `visible` and always set it to `true`.\n\n### User\n\nUser-based partitioning is a common strategy. Each user has a unique ID (that can be automatically created by MongoDB Realm). Each document contains an attribute that identifies the user that owns it. This could be a username, email address, or the `Id` generated by MongoDB Realm when the user registers. 
That attribute is used as the partitioning key.\n\nUse cases for this strategy include financial transactions, order history, game scores, and journal entries.\n\nConsider the two main drivers for partitioning:\n\n- **Capacity**: Only the data that's unique to the users is stored in the mobile app, which minimizes storage.\n- **Security/Privacy**: Users only have access to their own data.\n\nThere is often a subset of the user's data that should be made available to team members or to all users. In such cases, you may break the data into multiple collections, perhaps duplicating some data, and using different partition key values for the documents in those collections. You can see an example of this with the `User` and `Chatster` collections in the Rchat app.\n\n### Team\n\nThis strategy is used when you need to share data between a team of users. You can replace the term \"team\" with \"agency,\" \"store.\" or any other grouping of users or devices. Examples include all of the point-of-sale devices in a store or all of the employees in a department. The team's name or ID is used as the partitioning key and must be included in all documents in each synced collection.\n\nThe WildAid O-FISH App uses the agency name as the partition key. Each agency is the set of officers belonging to an organization responsible for enforcing regulations in one or more Marine Protected Areas. (You can think of an MPA as an ocean-based national park.) Every officer in an agency can create new reports and view all of the agency's existing reports. Agencies can customize the UI by controlling what options are offered when an officer creates a new report. E.g., an agency controlling the North Sea would include \"cod\" in the list of fish that could have been caught, but not \"clownfish\". The O-FISH menus are data-driven, with that data partitioned based on the agency.\n\n- **Capacity**: The \"team\" strategy consumes more space on the mobile device than the \"user\" partitioning strategy, but it's a good fit when all members of the team need to access the data (even when offline).\n- **Security/Privacy**: This strategy is used when all team members are allowed to view (and optionally modify) their team's data.\n\n### Channel\n\nWith this strategy, a user is typically entitled to open/sync Realms from a choice of channels. For example, a sports news app might have channels for soccer, baseball, etc., a chat app would offer multiple chat rooms, and an issue tracker might partition based on product. The channel name or ID should be used as the partitioning key.\n\n- **Capacity**: The mobile app can minimize storage use on the device by only opening a Realm for the partition representing the channel that the user is currently interacting with.\n- **Security/Privacy**: Realm Sync permissions can be added so that a user can only open a synced Realm for a partition if they're entitled to. For example, this might be handled by storing an array of allowed channels as part of the user's data.\n\n### Region\n\nThere are cases where you're only currently interested in data for a particular geographic area. Maps, cycle hire apps, and tourist guides are examples.\n\nIf you recall, when opening a Realm, the application must specify an exact match for the partition key, and that value needs to match the partition value in any document that is part of that partition. 
This restricts what you can do with location-based partitioning:\n\n- You **can** open a partition containing all documents where `location` is set to `\"London\"`.\n- You **can't** open a partition containing all documents where `location` is set to `\"either London or South East England\"`.\n- The partition key can't be an array.\n- You **can't** open a partition containing all documents where `location` is set to coordinates within a specified range.\n\nThe upshot of this is that you need to decide on geographic regions and assign them IDs or names. Each document can only belong to one of these regions. If you decided to use the state as your region, then the app can open a single synced Realm to access all of the data for Texas, but if the app wanted to be able to show data for all states in the US then it would need to open 50 synced Realms.\n\n- **Capacity**: Storage efficiency is dependent on how well your choice of regions matches how the application needs to work with the data. For example, if your app only ever lets the user work with data for a single state, then it would waste a lot of storage if you used countries as your regions.\n- **Security/Privacy**: In the cases that you want to control which users can access which region, Realm Sync permissions can be added.\n\nIn some cases, you may choose to duplicate some data in the backend (Atlas) database in order to optimise the frontend storage, where resources are more constrained. An analog is old-world (paper) travel guides. Lonely Planet produced a guide for Southeast Asia, in addition to individual guides for Vietnam, Thailand, Cambodia, etc. The guide for Cambodia contained 500 pages of information. Some of that same information (enough to fill 50 pages) was also printed in the Southeast Asia guide. The result was that the library of guides (think Atlas) contained duplicate information but it had plenty of space on its shelves. When I go on vacation, I could choose which region/partition I wanted to take with me in my small backpack (think mobile app). If I'm spending a month trekking around the whole of Southeast Asia, then I take that guide. If I'm spending the whole month in Vietnam, then I take that guide.\n\nIf you choose to duplicate data in multiple regions, then you can set up Atlas database triggers to automate the process.\n\n### Time Bucket\n\nAs with location, it doesn't make sense to use the exact time as the partition key as you typically would want to open a synced Realm for a range of times. The result is that you'd typically use discrete time ranges for your partition key values. A compatible set of partition values is \"Today,\" \"Earlier this week,\" \"This month (but not this week),\" \"Earlier this year (but not this month),\" \"2020,\" \"2000-2019,\" and \"Twentieth Century.\"\n\nYou can use Atlas scheduled and database triggers to automatically move documents between locations (e.g., at midnight, find all documents with `time == \"Today\"` and set `time = \"Earlier this week\"`. Note that changing the value of a partition key is expensive as it's implemented as a delete and insert.\n\n- **Capacity**: Storage efficiency is dependent on how well your choice of time buckets matches how the application needs to work with the data. That probably sounds familiar\u2014time bucket partitioning is analogous to region-based partitioning (with the exception that a city is unlikely to move from Florida to Alaska). 
As with regions, you may decide to duplicate some data\u2014perhaps having two documents for today's data one with `time == \"Today\"` and the other with `time == \"This week\"`.\n- **Security/Privacy**: In the cases that you want to control which users can access which time period, Realm Sync permissions can be added.\n\n>Note that slight variations on the Region and Time Bucket strategies can be used whenever you need to partition on ranges\u2014age, temperature, weight, exam score...\n\n### Combination/Hybrid\n\nFor many applications, no single partitioning strategy that we've looked at meets all of its use cases.\n\nConsider an eCommerce app. You might decide to have a single read-only partition for the entire product catalog. But, if the product catalog is very large, then you could choose to partition based on product categories (sporting good, electronics, etc.) to reduce storage size on the mobile device. When that user browses their order history, they shouldn't drag in orders for other users and so `user-id` would be a smart partitioning key. Unfortunately, the same key has to be used for every collection.\n\nThis can be solved by using `partition` as the partition key. `partition` is a `String` and its value is always made up of a key-value pair. In our eCommerce app, documents in the `productCatalog` collection could contain `partition: \"category=sports\"` and documents in the `orders` collection would include `partition: user=andrew@acme.com`.\n\nWhen the application opens a synced Realm, it provides a value such as `\"user=andrew@acme.com\"` as the partition. The Realm sync rules can parse the value of the partition key to determine if the user is allowed to open that partition by splitting the key to find the sub-key (`user`) and its value (`andrew@acme.com`). The rule knows that when `key == \"user\"`, it needs to check that the current user's email address matches the value.\n\n- **Capacity**: By using an optimal partitioning sub-strategy for each type of data, you can fine-tune what data is stored in the mobile app.\n- **Security/Privacy**: Your backend Realm app can apply custom rules based on the `key` component of the partition to decide whether the user is allowed to sync the requested partition.\n\nYou can see an example of how this is implemented for a chatroom app in Building a Mobile Chat App Using Realm \u2013 Data Architecture.\n\n## Setting Up Partitions in the Backend Realm App\n\nYou need to set up one backend Realm app, which can then be used by both your iOS and Android apps. You can also have multiple iOS and Android apps using the same back end.\n\n### Set Partition and Enable MongoDB Realm Sync\n\nFrom the Realm UI, select the \"Sync\" tab. From that view, you select whether you'd prefer to specify your schema through the back end or have it automatically derived from the Realm Objects that you define in your mobile app. If you don't already have data in your Atlas database, then I'd suggest the second option which turns on \"Dev Mode,\" which is the quickest way to get started:\n\n \n\nOn the next screen, select your key, specify the attribute to use as the partition key (in this case, a new string attribute named \"partition\"), and the database. Click \"Turn Dev Mode On\":\n\n \n\nClick on the \"REVIEW & DEPLOY\" button. 
You'll need to do this every time you change the Realm app, but this is the last time that I'll mention it:\n\n \n\nNow that Realm sync has been enabled, you should ensure that you set the `partition` attribute in all documents in any collections to be synced.\n\n### Sync Rules\n\nRealm Sync rules control whether the user/app is permitted to sync a partition or not.\n\n>A common misconception is that sync rules can control which documents within a partition will be synced. That isn't the case. They simply determine (true or false) whether the user is allowed to sync the entire partition.\n\nThe default behaviour is that the app can sync whichever partition it requests, and so you need to change the rules if you want to increase security/privacy\u2014which you probably do before going into production!\n\nTo see or change the rules, select the \"Configuration\" tab and then expand the \"Define Permissions\" section:\n\n \n\nBoth the read and write rules default to `true`.\n\nYou should click \"Pause Sync\" before editing the rules and then re-enable sync afterwards.\n\nThe rules are JSON expressions that have access to the user object (`%%user`) and the requested partition (`%%partition`). If you're using the user ID as your partitioning key, then this rule would ensure that a user can only sync the partition containing their documents: `{ \"%%user.id\": \"%%partition\" }`.\n\nFor more complex partitioning schemes (e.g., the combination strategy), you can provide a JSON expression that delegates the `true`/`false` decision to a Realm function:\n\n``` json\n{\n \"%%true\": {\n \"%function\": {\n \"arguments\": \n \"%%partition\"\n ],\n \"name\": \"canReadPartition\"\n }\n }\n}\n```\n\nIt's then your responsibility to create the `canReadPartition` function. 
Here's an example from the RChat app:

``` javascript
exports = function(partition) {
  console.log(`Checking if can sync a read for partition = ${partition}`);

  const db = context.services.get("mongodb-atlas").db("RChat");
  const chatsterCollection = db.collection("Chatster");
  const userCollection = db.collection("User");
  const chatCollection = db.collection("ChatMessage");
  const user = context.user;
  let partitionKey = "";
  let partitionValue = "";

  // Split the partition string (e.g., "user=1234") into its key and value
  const splitPartition = partition.split("=");
  if (splitPartition.length == 2) {
    partitionKey = splitPartition[0];
    partitionValue = splitPartition[1];
    console.log(`Partition key = ${partitionKey}; partition value = ${partitionValue}`);
  } else {
    console.log(`Couldn't extract the partition key/value from ${partition}`);
    return false;
  }

  switch (partitionKey) {
    case "user":
      console.log(`Checking if partitionValue(${partitionValue}) matches user.id(${user.id}) – ${partitionValue === user.id}`);
      return partitionValue === user.id;
    case "conversation":
      console.log(`Looking up User document for _id = ${user.id}`);
      return userCollection.findOne({ _id: user.id })
      .then (userDoc => {
        if (userDoc.conversations) {
          let foundMatch = false;
          userDoc.conversations.forEach( conversation => {
            console.log(`Checking if conversation.id (${conversation.id}) === ${partitionValue}`)
            if (conversation.id === partitionValue) {
              console.log(`Found matching conversation element for id = ${partitionValue}`);
              foundMatch = true;
            }
          });
          if (foundMatch) {
            console.log(`Found Match`);
            return true;
          } else {
            console.log(`Checked all of the user's conversations but found none with id == ${partitionValue}`);
            return false;
          }
        } else {
          console.log(`No conversations attribute in User doc`);
          return false;
        }
      }, error => {
        console.log(`Unable to read User document: ${error}`);
        return false;
      });
    case "all-users":
      console.log(`Any user can read all-users partitions`);
      return true;
    default:
      console.log(`Unexpected partition key: ${partitionKey}`);
      return false;
  }
};
```

The function splits the partition string, taking the key from the left of the `=` symbol and the value from the right side. It then runs a specific check based on the key:

- `user`: Checks that the value matches the current user's ID.
- `conversation`: This is used for the chat messages. Checks that the value matches one of the conversations stored in the user's document (i.e., that the current user is a member of the chat room).
- `all-users`: This is used for the `Chatster` collection, which provides a read-only view of a subset of each user's data, such as their name and presence state. This data is readable by anyone, and so the function always returns true.

RChat also has a `canWritePartition` function, which has a similar structure but applies different checks. You can view that function in the RChat repository.

### Triggers

MongoDB Realm provides three types of triggers:

- **Authentication**: Often used to create a user document when a new user registers.
- **Database**: Invoked when your nominated collection is updated. You can use database triggers to automate the duplication of data so that it can be shared through a different partition.
- **Scheduled**: Similar to a `cron` job, scheduled triggers run at a specified time or interval.
They can be used to move documents into different time buckets (e.g., from \"Today\" into \"Earlier this week\").\n\nIn the RChat app, only the owner is allowed to read or write their `User` document, but we want the user to be discoverable by anyone and for their presence state to be visible to others. We add a database trigger that mirrors a subset of the `User` document to a `Chatster` document which is in a publicly visible partition.\n\nThe first step is to create a database trigger by selecting \"Triggers\" and then clicking \"Add a Trigger\":\n\n \n\nFill in the details about the collection that invokes the new trigger, specify which operations we care about (all of them), and then indicate that we'll provide a new function to be executed when the trigger fires:\n\n \n\nAfter saving that definition, you're taken to the function editor to add the logic. This is the code for the trigger on the `User` collection:\n\n``` javascript\nexports = function(changeEvent) {\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n const chatster = db.collection(\"Chatster\");\n const userCollection = db.collection(\"User\");\n let eventCollection = context.services.get(\"mongodb-atlas\").db(\"RChat\").collection(\"Event\");\n const docId = changeEvent.documentKey._id;\n const user = changeEvent.fullDocument;\n let conversationsChanged = false;\n\n console.log(`Mirroring user for docId=${docId}. operationType = ${changeEvent.operationType}`);\n switch (changeEvent.operationType) {\n case \"insert\":\n case \"replace\":\n case \"update\":\n console.log(`Writing data for ${user.userName}`);\n let chatsterDoc = {\n _id: user._id,\n partition: \"all-users=all-the-users\",\n userName: user.userName,\n lastSeenAt: user.lastSeenAt,\n presence: user.presence\n };\n if (user.userPreferences) {\n const prefs = user.userPreferences;\n chatsterDoc.displayName = prefs.displayName;\n if (prefs.avatarImage && prefs.avatarImage._id) {\n console.log(`Copying avatarImage`);\n chatsterDoc.avatarImage = prefs.avatarImage;\n console.log(`id of avatarImage = ${prefs.avatarImage._id}`);\n }\n }\n chatster.replaceOne({ _id: user._id }, chatsterDoc, { upsert: true })\n .then (() => {\n console.log(`Wrote Chatster document for _id: ${docId}`);\n }, error => {\n console.log(`Failed to write Chatster document for _id=${docId}: ${error}`);\n });\n\n if (user.conversations && user.conversations.length > 0) {\n for (i = 0; i < user.conversations.length; i++) {\n let membersToAdd = ];\n if (user.conversations[i].members.length > 0) {\n for (j = 0; j < user.conversations[i].members.length; j++) {\n if (user.conversations[i].members[j].membershipStatus == \"User added, but invite pending\") {\n membersToAdd.push(user.conversations[i].members[j].userName);\n user.conversations[i].members[j].membershipStatus = \"Membership active\";\n conversationsChanged = true;\n }\n }\n } \n if (membersToAdd.length > 0) {\n userCollection.updateMany({userName: {$in: membersToAdd}}, {$push: {conversations: user.conversations[i]}})\n .then (result => {\n console.log(`Updated ${result.modifiedCount} other User documents`);\n }, error => {\n console.log(`Failed to copy new conversation to other users: ${error}`);\n });\n }\n }\n }\n if (conversationsChanged) {\n userCollection.updateOne({_id: user._id}, {$set: {conversations: user.conversations}});\n }\n break;\n case \"delete\":\n chatster.deleteOne({_id: docId})\n .then (() => {\n console.log(`Deleted Chatster document for _id: ${docId}`);\n }, error => {\n console.log(`Failed to 
delete Chatster document for _id=${docId}: ${error}`);\n });\n break;\n }\n};\n```\n\nNote that the `Chatster` document is created with `partition` set to `\"all-users=all-the-users\"`. This is what makes the document accessible by any user.\n\n## Accessing Realm Partitions from Your Mobile App (iOS or Android)\n\nIn this section, you'll learn how to request a partition when opening a Realm. If you want more of a primer on using Realm in a mobile app, then these are suitable resources:\n\n- [Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine (iOS, Swift, SwiftUI) A good intro, but there have been some enhancements to the Realm SDK since it was written.\n- Building a Mobile Chat App Using Realm \u2013 Data Architecture (iOS, Swift, SwiftUI) This series involves a more complex app, but it uses the latest SwiftUI features in the Realm SDK.\n- Building an Android Emoji Garden on Jetpacks! (Compose) with Realm (Android, Kotlin, Jetpack Compose)\n\nFirst of all, note that you don't need to include the partition key in your iOS or Android `Object` definitions. They are handled automatically by Realm.\n\nAll you need to do is specify the partition value when opening a synced Realm:\n\n::::tabs\n:::tab]{tabid=\"Swift\"}\n``` swift\nChatRoomBubblesView(conversation: conversation)\n .environment(\n \\.realmConfiguration,\n app.currentUser!.configuration(partitionValue: \"conversation=\\(conversation.id)\"))\n```\n:::\n:::tab[]{tabid=\"Kotlin\"}\n``` kotlin\n\nval config: SyncConfiguration = SyncConfiguration.defaultConfig(user, \"conversation=${conversation.id}\")\nsyncedRealm = Realm.getInstance(config)\n```\n:::\n::::\n\n## Summary\n\nAt this point, you've hopefully learned:\n\n- That MongoDB Realm Sync partitioning is a great way to control data privacy and storage requirements in your mobile app.\n- How Realm partitioning works.\n- A number of partitioning strategies.\n- How to combine strategies to build the optimal solution for your mobile app.\n- How to implement your partitioning strategy in your backend Realm app and in your iOS/Android mobile apps.\n\n## Resources\n\n- [Building a Mobile Chat App Using Realm \u2013 Data Architecture.\n- Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine.\n- Building an Android Emoji Garden on Jetpacks! (Compose) with Realm.\n- Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps.\n- MongoDB Realm Sync docs.\n- MongoDB Realm Sync partitioning docs.\n- Realm iOS SDK.\n- Realm Kotlin SDK.\n\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.", "format": "md", "metadata": {"tags": ["Realm"], "pageDescription": "How to use Realm partitions to make your app efficient, performant, and secure.", "contentType": "Tutorial"}, "title": "Realm Partitioning Strategies", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/gemma-mongodb-huggingface-rag", "action": "created", "body": "# Building a RAG System With Google's Gemma, Hugging Face and MongoDB\n\n## Introduction\n\nGoogle recently released a state-of-the-art open model into the AI community called Gemma. Specifically, Google released four variants of Gemma: Gemma 2B base model, Gemma 2B instruct model, Gemma 7B base model, and Gemma 7B instruct model. 
The Gemma open model and its variants utilise similar building blocks as Gemini, Google\u2019s most capable and efficient foundation model built with Mixture-of-Expert (MoE) architecture.\n\n**This article presents how to leverage Gemma as the foundation model in a retrieval-augmented generation** (**RAG) pipeline or system, with supporting models provided by Hugging Face, a repository for open-source models, datasets, and compute resources.** The AI stack presented in this article utilises the GTE large embedding models from Hugging Face and MongoDB as the vector database.\n\n**Here\u2019s what to expect from this article:**\n - Quick overview of a RAG system\n - Information on Google\u2019s latest open model, Gemma\n - Utilising Gemma in a RAG system as the base model\n - Building an end-to-end RAG system with an open-source base and\n embedding models from Hugging Face*\n\n, which has a notebook version of the RAG system presented in this article.\n\nThe shell command sequence below installs libraries for leveraging open-source large language models (LLMs), embedding models, and database interaction functionalities. These libraries simplify the development of a RAG system, reducing the complexity to a small amount of code:\n\n```\n!pip install datasets pandas pymongo sentence_transformers\n!pip install -U transformers\n# Install below if using GPU\n!pip install accelerate\n```\n\n - **PyMongo:** A Python library for interacting with MongoDB that enables functionalities to connect to a cluster and query data stored in collections and documents.\n - **Pandas**: Provides a data structure for efficient data processing and analysis using Python\n - **Hugging Face datasets:** Holds audio, vision, and text datasets\n - **Hugging Face Accelerate**: Abstracts the complexity of writing code that leverages hardware accelerators such as GPUs. Accelerate is leveraged in the implementation to utilise the Gemma model on GPU resources.\n - **Hugging Face Transformers**: Access to a vast collection of pre-trained models\n - **Hugging Face Sentence Transformers**: Provides access to sentence, text, and image embeddings.\n\n## Step 2: data sourcing and preparation\n\nThe data utilised in this tutorial is sourced from Hugging Face datasets, specifically the AIatMongoDB/embedded\\_movies dataset.\u00a0\n\nA datapoint within the movie dataset contains attributes specific to an individual movie entry; plot, genre, cast, runtime, and more are captured for each data point. After loading the dataset into the development environment, it is converted into a Pandas DataFrame object, which enables efficient data structure manipulation and analysis.\n\n```python\n# Load Dataset\nfrom datasets import load_dataset\nimport pandas as pd\n# https://huggingface.co/datasets/MongoDB/embedded_movies\ndataset = load_dataset(\"MongoDB/embedded_movies\")\n# Convert the dataset to a pandas DataFrame\ndataset_df = pd.DataFrame(dataset'train'])\n```\n\nThe operations within the following code snippet below focus on enforcing data integrity and quality.\u00a0\n\n1. The first process ensures that each data point's `fullplot` attribute is not empty, as this is the primary data we utilise in the embedding process.\u00a0\n2. 
This step also ensures we remove the `plot_embedding` attribute from all data points as this will be replaced by new embeddings created with a different embedding model, the `gte-large`.\n\n```python\n# Remove data point where plot column is missing\ndataset_df = dataset_df.dropna(subset=['fullplot'])\nprint(\"\\nNumber of missing values in each column after removal:\")\nprint(dataset_df.isnull().sum())\n\n# Remove the plot_embedding from each data point in the dataset as we are going to create new embeddings with an open-source embedding model from Hugging Face: gte-large\ndataset_df = dataset_df.drop(columns=['plot_embedding'])\n```\n\n## Step 3: generating embeddings\n\n**Embedding models convert high-dimensional data such as text, audio, and images into a lower-dimensional numerical representation that captures the input data's semantics and context.** This embedding representation of data can be used to conduct semantic searches based on the positions and proximity of embeddings to each other within a vector space.\n\nThe embedding model used in the RAG system is the Generate Text Embedding (GTE) model, based on the BERT model. The GTE embedding models come in three variants, mentioned below, and were trained and released by Alibaba DAMO Academy, a research institution.\n\n| | | |\n| ---------------------- | ------------- | --------------------------------------------------------------------------- |\n| **Model**\u00a0 | **Dimension** | **Massive Text Embedding Benchmark (MTEB) Leaderboard Retrieval (Average)** |\n| GTE-large | 1024 | 52.22 |\n| GTE-base | 768 | 51.14 |\n| GTE-small | 384 | 49.46 |\n| text-embedding-ada-002 | 1536 | 49.25 |\n| text-embedding-3-small | 256 | 51.08 |\n| text-embedding-3-large | 256 | 51.66 |\n\nIn the comparison between open-source embedding models GTE and embedding models provided by OpenAI, the GTE-large embedding model offers better performance on retrieval tasks but requires more storage for embedding vectors compared to the latest embedding models from OpenAI. Notably, the GTE embedding model can only be used on English texts.\n\nThe code snippet below demonstrates generating text embeddings based on the text in the \"fullplot\" attribute for each movie record in the DataFrame. Using the SentenceTransformers library, we get access to the \"thenlper/gte-large\" model hosted on Hugging Face. If your development environment has limited computational resources and cannot hold the embedding model in RAM, utilise other variants of the GTE embedding model: [gte-base or gte-small.\n\nThe steps in the code snippets are as follows:\n\n 1. Import the `SentenceTransformer` class to access the embedding models.\n 2. Load the embedding model using the `SentenceTransformer` constructor\n to instantiate the `gte-large` embedding model.\n 3. Define the `get_embedding function`, which takes a text string as\n input and returns a list of floats representing the embedding. The\n function first checks if the input text is not empty (after\n stripping whitespace). If the text is empty, it returns an empty\n list. Otherwise, it generates an embedding using the loaded model.\n 4. Generate embeddings by applying the `get_embedding` function to the\n \"fullplot\" column of the `dataset_df` DataFrame, generating\n embeddings for each movie's plot. 
The resulting list of embeddings is assigned to a new column named `embedding`.

```python
from sentence_transformers import SentenceTransformer

# https://huggingface.co/thenlper/gte-large
embedding_model = SentenceTransformer("thenlper/gte-large")

def get_embedding(text: str) -> list[float]:
    if not text.strip():
        print("Attempted to get embedding for empty text.")
        return []

    embedding = embedding_model.encode(text)

    return embedding.tolist()

dataset_df["embedding"] = dataset_df["fullplot"].apply(get_embedding)
```

After this section, we now have a complete dataset with embeddings that can be ingested into a vector database, like MongoDB, where vector search operations can be performed.

## Step 4: database setup and connection

Before moving forward, ensure the following prerequisites are met:
 - Database cluster set up on MongoDB Atlas
 - Obtained the URI to your cluster

For assistance with database cluster setup and obtaining the URI, refer to our guide for setting up a MongoDB cluster and getting your connection string. Alternatively, follow Step 5 of this article on using embeddings in a RAG system, which offers detailed instructions on configuring and setting up the database cluster. The connection itself is made with PyMongo, using the URI obtained from your cluster.

Once you have created a cluster, create the database and collection within the MongoDB Atlas cluster by clicking **+ Create Database**. The database will be named `movies`, and the collection will be named `movies_records`.

## Step 5: vector search index creation

In the creation of a vector search index using the JSON editor on MongoDB Atlas, ensure your vector search index is named **vector\_index** and the vector search index definition is as follows:

```
{
  "fields": [
    {
      "numDimensions": 1024,
      "path": "embedding",
      "similarity": "cosine",
      "type": "vector"
    }
  ]
}
```

The 1024 value of the `numDimensions` field corresponds to the dimension of the vector generated by the gte-large embedding model. If you use the `gte-base` or `gte-small` embedding models, the `numDimensions` value in the vector search index must be set to **768** and **384**, respectively.

## Step 6: data ingestion and vector search

Up to this point, we have successfully done the following:

 - Loaded data sourced from Hugging Face
 - Provided each data point with an embedding using the GTE-large embedding model from Hugging Face
 - Set up a MongoDB database designed to store vector embeddings
 - Established a connection to this database from our development environment
 - Defined a vector search index for efficient querying of vector embeddings

Ingesting data into a MongoDB collection from a pandas DataFrame is a straightforward process that can be efficiently accomplished by converting the DataFrame into dictionaries and then utilising the `insert_many` method on the collection to pass the converted dataset records.

```python
documents = dataset_df.to_dict('records')
collection.insert_many(documents)
print("Data ingestion into MongoDB completed")
```

The operations below are performed in the code snippet:

 1. Convert the dataset DataFrame to a dictionary using the `to_dict('records')` method on `dataset_df`. This method transforms the DataFrame into a list of dictionaries. The `records` parameter is crucial as it encapsulates each row as a single dictionary.
 2.
Ingest data into the MongoDB vector database by calling the `insert_many(documents)` function on the MongoDB collection, passing it the list of dictionaries. MongoDB's `insert_many` function ingests each dictionary from the list as an individual document within the collection.\n\nThe following step implements a function that returns a vector search result by generating a query embedding and defining a MongoDB aggregation pipeline.\u00a0\n\nThe pipeline, consisting of the `$vectorSearch` and `$project` stages, executes queries using the generated vector and formats the results to include only the required information, such as plot, title, and genres while incorporating a search score for each result.\n\n```python\ndef vector_search(user_query, collection):\n \"\"\"\n Perform a vector search in the MongoDB collection based on the user query.\n\n Args:\n user_query (str): The user's query string.\n collection (MongoCollection): The MongoDB collection to search.\n\n Returns:\n list: A list of matching documents.\n \"\"\"\n\n # Generate embedding for the user query\n query_embedding = get_embedding(user_query)\n\n if query_embedding is None:\n return \"Invalid query or embedding generation failed.\"\n\n # Define the vector search pipeline\n pipeline = [\n {\n \"$vectorSearch\": {\n \"index\": \"vector_index\",\n \"queryVector\": query_embedding,\n \"path\": \"embedding\",\n \"numCandidates\": 150, # Number of candidate matches to consider\n \"limit\": 4, # Return top 4 matches\n }\n },\n {\n \"$project\": {\n \"_id\": 0, # Exclude the _id field\n \"fullplot\": 1, # Include the plot field\n \"title\": 1, # Include the title field\n \"genres\": 1, # Include the genres field\n \"score\": {\"$meta\": \"vectorSearchScore\"}, # Include the search score\n }\n },\n ]\n\n # Execute the search\n results = collection.aggregate(pipeline)\n return list(results)\n\n```\n\nThe code snippet above conducts the following operations to allow semantic search for movies:\n\n 1. Define the `vector_search` function that takes a user's query string and a MongoDB collection as inputs and returns a list of documents that match the query based on vector similarity search.\n 2. Generate an embedding for the user's query by calling the previously defined function, `get_embedding`, which converts the query string into a vector representation.\n 3. Construct a pipeline for MongoDB's aggregate function, incorporating two main stages: `$vectorSearch` and `$project`.\n 4. The `$vectorSearch` stage performs the actual vector search. The`index` field specifies the vector index to utilise for the vector search, and this should correspond to the name entered in the vector search index definition in previous steps. The `queryVector` field takes the embedding representation of the use query. The `path` field corresponds to the document field containing the embeddings.\u00a0 The `numCandidates` specifies the number of candidate documents to consider and the limit on the number of results to return.\n 5. The `$project` stage formats the results to include only the required fields: plot, title, genres, and the search score. It explicitly excludes the `_id` field.\n 6. The `aggregate` executes the defined pipeline to obtain the vector search results. 
The final operation converts the returned cursor from the database into a list.\n\n## Step 7: handling user queries and loading Gemma\n\nThe code snippet defines the function `get_search_result`, a custom wrapper for performing the vector search using MongoDB and formatting the results to be passed to downstream stages in the RAG pipeline.\n\n```python\ndef get_search_result(query, collection):\n\n get_knowledge = vector_search(query, collection)\n\n search_result = \"\"\n for result in get_knowledge:\n search_result += f\"Title: {result.get('title', 'N/A')}, Plot: {result.get('fullplot', 'N/A')}\\n\"\n\n return search_result\n```\n\nThe formatting of the search results extracts the title and plot using the get method and provides default values (\"N/A\") if either field is missing. The returned results are formatted into a string that includes both the title and plot of each document, which is appended to `search_result`, with each document's details separated by a newline character.\n\nThe RAG system implemented in this use case is a query engine that conducts movie recommendations and provides a justification for its selection.\n```python\n# Conduct query with retrieval of sources\nquery = \"What is the best romantic movie to watch and why?\"\nsource_information = get_search_result(query, collection)\ncombined_information = f\"Query: {query}\\nContinue to answer the query by using the Search Results:\\n{source_information}.\"\nprint(combined_information)\n```\n\nA user query is defined in the code snippet above; this query is the target for semantic search against the movie embeddings in the database collection. The query and vector search results are combined into a single string to pass as a full context to the base model for the RAG system.\u00a0\n\nThe following steps below load the Gemma-2b instruction model (\u201cgoogle/gemma-2b-it\") into the development environment using the Hugging Face Transformer library. Specifically, the code snippet below loads a tokenizer and a model from the Transformers library by Hugging Face.\n\n```python\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\n\ntokenizer = AutoTokenizer.from_pretrained(\"google/gemma-2b-it\")\n# CPU Enabled uncomment below \ud83d\udc47\ud83c\udffd\n# model = AutoModelForCausalLM.from_pretrained(\"google/gemma-2b-it\")\n# GPU Enabled use below \ud83d\udc47\ud83c\udffd\nmodel = AutoModelForCausalLM.from_pretrained(\"google/gemma-2b-it\", device_map=\"auto\")\n```\n\n**Here are the steps to load the Gemma open model:**\n\n 1. Import `AutoTokenizer` and `AutoModelForCausalLM` classes from the transformers module.\n 2. Load the tokenizer using the `AutoTokenizer.from_pretrained` method to instantiate a tokenizer for the \"google/gemma-2b-it\" model. This tokenizer converts input text into a sequence of tokens that the model can process.\n 3. Load the model using the `AutoModelForCausalLM.from_pretrained`method. There are two options provided for model loading, and each one accommodates different computing environments.\n 4. CPU usage: For environments only utilising CPU for computations, the model can be loaded without specifying the `device_map` parameter.\n 5. 
GPU usage: The `device_map=\"auto\"` parameter is included for environments with GPU support to map the model's components automatically to available GPU compute resources.\n\n```python\n# Moving tensors to GPU\ninput_ids = tokenizer(combined_information, return_tensors=\"pt\").to(\"cuda\")\nresponse = model.generate(**input_ids, max_new_tokens=500)\nprint(tokenizer.decode(response[0]))\n```\n\n**The steps to process user inputs and Gemma\u2019s output are as follows:**\n\n 1. Tokenize the text input `combined_information` to obtain a sequence of numerical tokens as PyTorch tensors; the result of this operation is assigned to the variable `input_ids`.\n 2. The `input_ids` are moved to the available GPU resource using the \\`.to(\u201ccuda\u201d)\\` method; the aim is to speed up the model\u2019s computation.\n 3. Generate a response from the model by involving the`model.generate` function with the input\\_ids tensor. The max_new_tokens=500 parameter limits the length of the generated text, preventing the model from producing excessively long outputs.\n 4. Finally, decode the model\u2019s response using the `tokenizer.decode`method, which converts the generated tokens into a readable text string. The `response[0]` accesses the response tensor containing the generated tokens.\n\n| | |\n| ------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| **Query**\u00a0 | **Gemma\u2019s responses** |\n| What is the best romantic movie to watch and why? | Based on the search results, the best romantic movie to watch is \\*\\*Shut Up and Kiss Me!\\*\\* because it is a romantic comedy that explores the complexities of love and relationships. The movie is funny, heartwarming, and thought-provoking |\n\n***\n\n## Conclusion\n\nThe implementation of a RAG system in this article utilised entirely open datasets, models, and embedding models available via Hugging Face. Utilising Gemma, it\u2019s possible to build RAG systems with models that do not rely on the management and availability of models from closed-source model providers.\u00a0\n\nThe advantages of leveraging open models include transparency in the training details of models utilised, the opportunity to fine-tune base models for further niche task utilisation, and the ability to utilise private sensitive data with locally hosted models.\n\nTo better understand open vs. closed models and their application to a RAG system, we have an [article implements an end-to-end RAG system using the POLM stack, which leverages embedding models and LLMs provided by OpenAI.\n\nAll implementation steps can be accessed in the repository, which has a notebook version of the RAG system presented in this article.\n\n***\n\n## FAQs\n\n**1. What are the Gemma models?**\nGemma models are a family of lightweight, state-of-the-art open models for text generation, including question-answering, summarisation, and reasoning. Inspired by Google's Gemini, they are available in 2B and 7B sizes, with pre-trained and instruction-tuned variants.\n\n**2. How do Gemma models fit into a RAG system?**\n\nIn a RAG system, Gemma models are the base model for generating responses based on input queries and source information retrieved through vector search. 
Their efficiency and versatility in handling a wide range of text formats make them ideal for this purpose.\n\n**3. Why use MongoDB in a RAG system?**\n\nMongoDB is used for its robust management of vector embeddings, enabling efficient storage, retrieval, and querying of document vectors. MongoDB also serves as an operational database that enables traditional transactional database capabilities. MongoDB serves as both the operational and vector database for modern AI applications.\n\n**4. Can Gemma models run on limited resources?**\n\nDespite their advanced capabilities, Gemma models are designed to be deployable in environments with limited computational resources, such as laptops or desktops, making them accessible for a wide range of applications. Gemma models can also be deployed using deployment options enabled by Hugging Face, such as inference API, inference endpoints and deployment solutions via various cloud services.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfb7f68b3bf810100/65d77918421dd35b0bebcb33/Screenshot_2024-02-22_at_16.40.40.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7ef2d37427c35b06/65d78ef8745ebcf6d39d4b6b/GenAI_Stack_(7).png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI"], "pageDescription": "This article presents how to leverage Gemma as the foundation model in a Retrieval-Augmented Generation (RAG) pipeline or system, with supporting models provided by Hugging Face, a repository for open-source models, datasets and compute resources.", "contentType": "Tutorial"}, "title": "Building a RAG System With Google's Gemma, Hugging Face and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/optimize-atlas-performance-advisor-query-analyzer-more", "action": "created", "body": "# Optimize With MongoDB Atlas: Performance Advisor, Query Analyzer, and More\n\nOptimizing MongoDB performance involves understanding the intricacies of your database's schema and queries, and navigating this landscape might seem daunting. There can be a lot to keep in mind, but MongoDB Atlas provides several tools to help spot areas where how you interact with your data can be improved.\n\nIn this tutorial, we're going to go through what some of these tools are, where to find them, and how we can use what they tell us to get the most out of our database. Whether you're a DBA, developer, or just a MongoDB enthusiast, our goal is to empower you with the knowledge to harness the full potential of your data.\n\n## Identify schema anti-patterns\nAs your application grows and use cases evolve, potential problems can present themselves in what was once a well-designed schema. How can you spot these? Well, in Atlas, from the data explorer screen, select the collection you'd like to examine. 
Above the displayed documents, you'll see a tab called "Schema Anti-Patterns."

Now, in my collection, I have a board that describes the tasks necessary for our next sprint, so my documents look something like this:

```json
{
  "boardName": "Project Alpha",
  "boardId": "board123",
  "tasks": [
    {
      "taskId": "task001",
      "title": "Design Phase",
      "description": "Complete the initial design drafts.",
      "status": "In Progress",
      "assignedTo": ["user123", "user456"],
      "dueDate": "2024-02-15"
    },
    // 10,000 more tasks
  ]
}
```

While this worked fine when our project was small in scope, the list of tasks really grew out of control (relatable, I'm sure). Let's pop over to our Schema Anti-Patterns tab and see what it says.

![Collection schema anti-pattern page][1]

From here, you'll be provided with a list of anti-patterns detected in your database and some potential fixes. If we click the "Avoid using unbounded arrays in documents" item, we can learn a little more.

![Collection schema anti-pattern page, dropdown for more info.][2]

This collection has a few problems. Inside my documents, I have a substantial array. Large arrays can cause multiple issues, from exceeding the 16 MB document size limit to degrading the performance of indexes as the arrays grow. Now that I have identified this, I can click "Learn How to Fix This Issue" to be taken to the MongoDB documentation. In this case, the solution mentioned is referencing. This involves storing the tasks in a separate collection and having a field on each task to indicate which board it belongs to. This solves my issue of the unbounded array.

Now, every application is unique, and thus, how you use MongoDB to leverage your data will be equally unique. There is rarely one right answer for how to model your data with MongoDB, but with this tool, you are able to see what is slowing down your database and what you might consider changing, from unused indexes that increase your write operation times to over-reliance on the expensive `$lookup` operation when embedded documents would do.

## Performance Advisor
While you continue to use your MongoDB database, performance should always be at the back of your mind. Slow performance can hamper the user's experience with your application and can sometimes even make it unusable. With larger datasets and complex operations, slow operations become harder to avoid without conscious effort. The Performance Advisor provides a holistic view of your cluster and, as the name suggests, can help identify and solve performance issues.

The Performance Advisor is a tool available for M10+ clusters and serverless instances. It monitors queries that MongoDB considers slow, based on how long operations on your cluster typically take. When you open up your cluster in MongoDB Atlas, you'll see a tab called "Performance Advisor."

For this example, we have a database containing information on New York City taxi rides. A typical query on the application would look something like this:

```shell
db.yellow.find({ "dropoff_datetime": "2014-06-19 21:45:00",
                 "passenger_count": 1,
                 "trip_distance": { "$gt": 3 }
               })
```

With a large enough collection, queries that filter on specific fields like this can become slow operations if the collection isn't properly indexed.
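For a query shape like this (two equality filters plus a range filter), the usual remedy is a compound index on those fields, with the equality fields first and the range field last. The sketch below shows what creating and verifying such an index might look like in `mongosh`; the `yellow` collection and field names come from the example above, and you should adjust them to match your own namespace:

```javascript
// Compound index following the equality-first, range-last guideline
db.yellow.createIndex({ dropoff_datetime: 1, passenger_count: 1, trip_distance: 1 });

// Re-run the query with explain() to confirm an index scan (IXSCAN)
// is used instead of a full collection scan (COLLSCAN)
db.yellow.find({
  dropoff_datetime: "2014-06-19 21:45:00",
  passenger_count: 1,
  trip_distance: { $gt: 3 }
}).explain("executionStats");
```

You don't have to work this out by hand, though; spotting these opportunities is exactly what the Performance Advisor is for.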
If we look at suggested indexes, we're presented with this screen, displaying the indexes we may want to create.\n\n.\n\n or to our Developer Community Forums to see what other people are building.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta5008804d2f3ad0e/65b8cf0893cdf11de27cafc1/image3.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7fc9b26b8e7b5dcb/65b8cf077d4ae74bf4980919/image1.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcae2445ee8c4e5b1/65b8cf085f12eda542e220d7/image4.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3cddcbae41a303f4/65b8cf087d4ae7e2ee98091d/image5.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4ca5dd956a10c74c/65b8cf0830d47e0c7f5222f7/image7.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt07b872aa93e325e8/65b8cf0855a88a1fc1da7053/image6.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt64042035d566596c/65b8cf088fc5c08d430bcb76/image2.png", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how to get the most out of your MongoDB database using the tools provided to you by MongoDB Atlas.", "contentType": "Tutorial"}, "title": "Optimize With MongoDB Atlas: Performance Advisor, Query Analyzer, and More", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/swift/authentication-ios-apps-atlas-app-services", "action": "created", "body": "# Authentication for Your iOS Apps with Atlas App Services\n\nAuthentication is one of the most important features for any app these days, and there will be a point when your users might want to reset their password for different reasons.\n\nAtlas App Services can help implement this functionality in a clear and simple way. In this tutorial, we\u2019ll develop a simple app that you can follow along with and incorporate into your apps.\n\nIf you also want to follow along and check the code that I\u2019ll be explaining in this article, you can find it in the\u00a0Github repository.\u00a0\n\n## Context\n\nThe application consists of a login flow where the user will be able to create their own account by using a username/password. It will also allow them to reset the password by implementing the use of Atlas App Services for it and\u00a0Universal Links.\n\nThere are different options in order to implement this functionality.\n\n* You can configure an email provider to\u00a0send a password reset email. This option will send an email to the user with the MongoDB logo and a URL that contains the necessary parameters that will be needed in order to reset the password.\n* App Services can automatically run a password reset function. You can implement it guided by our\u00a0password reset documentation. App Services passes this function unique confirmation tokens and data about the user. Use these values to define custom logic to reset a user's password.\n* If you decide to use a custom password reset email from a specific domain by using an external service, when the email for the reset password is received, you will get a URL that will be valid for 30 minutes, and you will need to implement\u00a0Universal Links\u00a0for it so your app can detect the URL when the user taps on it and extract the tokens from it.\n* You can define a function for App Services to run when you callResetPasswordFunction() in the SDK. 
App Services passes this function with unique confirmation tokens.\n\nFor this tutorial, we are going to use the first option. When it gets triggered, it will send the user an email and a valid URL for 30 minutes. But please be aware that we do not recommend using this option in production. Confirmation emails are not currently customizable beyond the base URL and subject line. In particular, they always come from a mongodb.com email address. For production apps, we recommend using a confirmation function. You can check\u00a0how to run a confirmation function in our MongoDB documentation.\n\n## Configuring authentication\n\nFirst, you\u2019ll need to create your Atlas App Services App. I recommend\u00a0following our documentation\u00a0and this will provide you with the base to start configuring your app.\n\nAfter creating your app, go to the\u00a0**Atlas App Services**\u00a0tab, click on your app, and go to\u00a0**Data Access \u2192 Authentication**\u00a0on the sidebar.\n\nIn the Authentication Providers section, enable the provider\u00a0**Email/Password**. In the configuration window that will get displayed after, we will focus on the **Password Reset Method**\u00a0part.\n\nFor this example, the user confirmation will be done automatically. But make sure that the\u00a0**Send a password reset email**\u00a0option is enabled.\n\nOne important thing to note is that **you won\u2019t be able to save and deploy these changes unless the URL section is completed**. Therefore, we\u2019ll use a temporary URL and we\u2019ll change it later to the final one.\n\nClick on the Save Draft button and your changes will be deployed.\n\n### Implementing the reset password functionality\n\nBefore starting to write the related code, please make sure that you have followed this\u00a0quick start guide\u00a0to make sure that you can use our Swift SDK.\n\nThe logic of implementing reset password will be implemented in the `MainViewController.swift`\u00a0file. In it, we have an IBAction called\u00a0`resetPasswordButtonTapped`, and inside we are going to write the following code:\n\n``` swift\n \n @IBAction func resetPasswordButtonTapped(_ sender: Any) {\n let email = app.currentUser?.profile.email ?? \"\"\n let client = app.emailPasswordAuth\n \n client.sendResetPasswordEmail(email) { (error) in\n DispatchQueue.main.async {\n guard error == nil else {\n print(\"Reset password email not sent: \\(error!.localizedDescription)\")\n return\n }\n \n print(\"Password reset email sent to the following address: \\(email)\")\n \nlet alert = UIAlertController(title: \"Reset Password\", message: \"Please check your inbox to continue the process\", preferredStyle: UIAlertController.Style.alert)\n alert.addAction(UIAlertAction(title: \"OK\", style: UIAlertAction.Style.default, handler: nil))\n self.present(alert, animated: true, completion: nil)\n \n }\n }\n }\n```\n\nBy making a call to `client.sendResetPasswordEmail` with the user's email, App Services sends an email to the user that contains a unique URL. The user must visit this URL within 30 minutes to confirm the reset.\n\nNow we have the first part of the functionality implemented. But if we try to tap on the button, it won\u2019t work as expected. We must go back to our Atlas App Services App, to the Authentication configuration.\n\nThe URL that we define here will be the one that will be sent in the email to the user. You can use your own from your own website hosted on a different server but if you don\u2019t, don\u2019t worry! 
Atlas App Services provides\u00a0Static Hosting. You can use hosting to store individual pieces of content or to upload and serve your entire client application, but please note that in order to enable static hosting, **you must have a paid tier** (i.e M2 or higher).\n\n## Configuring hosting\n\nGo to the Hosting section of your Atlas App Services app and click on the Enable Hosting button. App Services will begin provisioning hosting for your application. It may take a few minutes for App Services to finish provisioning hosting for your application once you've enabled it.\n\nThe resource path that you see in the screenshot above is the URL that will be used to redirect the user to our website so they can continue the process of resetting their password.\n\nNow we have to go back to the Authentication section in your Atlas App Services app and tap on the Edit button for Email/Password. We will focus our attention on the lower area of the window.\n\nIn the Password Reset URL we are going to add our hosted URL. This will create the link between your back end and the email that gets sent to the user. \n\nThe base of the URL is included in every password reset email. App Services appends a unique `token` and `tokenId` to this URL. These serve as query parameters to create a unique link for every password reset. To reset the user's password, extract these query parameters from the user's unique URL.\n\nIn order to extract these query parameters and use them in our client application, we can use Universal Links.\n\n## Universal links\n\nAccording to Apple, when adding universal links support to your app, your users can tap a link to your website and get seamlessly redirected to your installed app without going through Safari. But if the app isn\u2019t installed, then tapping a link to your website will open it in Safari. \n\n**Note**: Be aware that in order to add the universal links entitlement to your Xcode project, you need to have an Apple Developer subscription. \n\n#1 Add the\u00a0**Associated Domains** entitlement to the\u00a0**Signing & Capabilities** section of your project on Xcode and add to the domains the URL from your hosted website following the syntax:\u00a0`>applinks:`\n\n#2 You now need to create an `apple-app-site-association` file that contains JSON data about the URL that the app will handle. In my case, this is the structure of my file. The value of the `appID` key is the team ID or app ID prefix, followed by the bundle ID.\n\n``` json\n{\n \"applinks\": {\n \"apps\": ],\n \"details\": [\n {\n \"appID\": \"QX5CR2FTN2.io.realm.marcabrera.aries\",\n \"paths\": [ \"*\" ]\n }\n ]\n }\n}\n```\n\n#3 Upload the file to your HTTPS web server. In my case, I\u2019ll update it to my Atlas App Services hosted website. Therefore, now I have two files including `index.html`.\n\n![hosting section, Atlas App Services\n\n### Code\n\nYou need to implement the code that will handle the functionality when your user taps on the link from the received email.\n\nGo to the `SceneDelegate.swift` file of your Xcode project, and on the continue() delegate method, add the following code:\n\n``` swift\n func scene(_ scene: UIScene, continue userActivity: NSUserActivity) {\n \n if let url = userActivity.webpageURL {\n handleUniversalLinks(url)\n }\n }\n```\n\n``` swift\n func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {\n \n guard let _ = (scene as? 
UIWindowScene) else { return }\n \n // UNIVERSAL LINKS HANDLING\n \n guard let userActivity = connectionOptions.userActivities.first, userActivity.activityType == NSUserActivityTypeBrowsingWeb,\n let incomingURL = userActivity.webpageURL else {\n // If we don't get a link (meaning it's not handling the reset password flow then we have to check if user is logged in)\n if let _ = app.currentUser {\n // We make sure that the session is being kept active for users that have previously logged in\n let storyboard = UIStoryboard(name: \"Main\", bundle: nil)\n let tabBarController = storyboard.instantiateViewController(identifier: \"TabBarController\")\n let navigationController = UINavigationController(rootViewController: tabBarController)\n \n }\n return\n }\n \n handleUniversalLinks(incomingURL)\n }\n```\n\n``` swift\n private func handleUniversalLinks(_ url: URL) {\n // We get the token and tokenId URL parameters, they're necessary in order to reset password\n let token = url.valueOf(\"token\")\n let tokenId = url.valueOf(\"tokenId\")\n \n let storyboard = UIStoryboard(name: \"Main\", bundle: nil)\n let resetPasswordViewController = storyboard.instantiateViewController(identifier: \"ResetPasswordViewController\") as! ResetPasswordViewController\n \n resetPasswordViewController.token = token\n resetPasswordViewController.tokenId = tokenId\n \n }\n```\n\nThe `handleUniversalLinks()` private method will extract the `token` and `tokenId` parameters that we need to use in order to reset the password. We will store them as properties on the `ResetPassword` view controller.\n\nAlso note that we use the function `url.valueOf(\u201ctoken\u201d)`, which is an extension that I have created in order to extract the query parameters that match the string that we pass as an argument and store its value in the `token` variable.\n\n``` swift\nextension URL {\n // Function that returns a specific query parameter from the URL\n func valueOf(_ queryParameterName: String) -> String? {\n guard let url = URLComponents(string: self.absoluteString) else { return nil }\n \n return url.queryItems?.first(where: {$0.name == queryParameterName})?.value\n }\n}\n```\n\n**Note**: This functionality won\u2019t work if the user decides to terminate the app and it\u2019s not in the foreground. For that, we need to implement similar functionality on the `willConnectTo()` delegate method.\n\n``` swift\n func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {\n // Use this method to optionally configure and attach the UIWindow `window` to the provided UIWindowScene `scene`.\n // If using a storyboard, the `window` property will automatically be initialized and attached to the scene.\n // This delegate does not imply the connecting scene or session are new (see `application:configurationForConnectingSceneSession` instead).\n \n guard let _ = (scene as? 
UIWindowScene) else { return }\n \n // UNIVERSAL LINKS HANDLING\n \n guard let userActivity = connectionOptions.userActivities.first, userActivity.activityType == NSUserActivityTypeBrowsingWeb,\n let incomingURL = userActivity.webpageURL else {\n // If we don't get a link (meaning it's not handling the reset password flow then we have to check if user is logged in)\n if let _ = app.currentUser {\n // We make sure that the session is being kept active for users that have previously logged in\n let storyboard = UIStoryboard(name: \"Main\", bundle: nil)\n let mainVC = storyboard.instantiateViewController(identifier: \"MainViewController\")\n \n window?.rootViewController = mainVC\n window?.makeKeyAndVisible()\n }\n return\n }\n \n handleUniversalLinks(incomingURL)\n }\n```\n\n## Reset password\n\nThis view controller contains a text field that will capture the new password that the user wants to set up, and when the Reset Password button is tapped, the `resetPassword` function will get triggered and it will make a call to the Client SDK\u2019s resetPassword() function. If there are no errors, a success alert will be displayed on the app. Otherwise, an error message will be displayed.\n\n``` swift\n private func resetPassword() {\n \n let password = confirmPasswordTextField.text ?? \"\"\n \n app.emailPasswordAuth.resetPassword(to: password, token: token ?? \"\", tokenId: tokenId ?? \"\") { (error) in\n DispatchQueue.main.async {\n self.confirmButton.hideLoading()\n guard error == nil else {\n print(\"Failed to reset password: \\(error!.localizedDescription)\")\n self.presentErrorAlert(message: \"There was an error resetting the password\")\n return\n }\n print(\"Successfully reset password\")\n self.presentSuccessAlert()\n }\n }\n }\n```\n\n## Repository\n\nThe code for this project can be found in the\u00a0Github repository.\u00a0\n\nI hope you found this tutorial useful and that it will solve any doubts you may have! I encourage you to explore our\u00a0Realm Swift SDK documentation\u00a0so you can check all the features and advantages that Realm can offer you while developing your iOS apps. 
We also have a lot of resources for you to dive in and learn how to implement them.", "format": "md", "metadata": {"tags": ["Swift", "Atlas", "iOS"], "pageDescription": "Learn how to easily implement reset password functionality thanks to Atlas App Services on your iOS apps.", "contentType": "Tutorial"}, "title": "Authentication for Your iOS Apps with Atlas App Services", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/whatsapp-business-api-data-api", "action": "created", "body": "# WhatsApp Business API Webhook Integration with Data API\n\nThis tutorial walks through integrating the WhatsApp Business API --- specifically, Cloud API ---\u00a0and webhook setup in front of the MongoDB Atlas Data API.\n\nThe most interesting thing is we are going to use MongoDB Custom HTTPS Endpoints and Atlas Functions.\n\nThe WhatsApp Business Cloud API is intended for people developing for themselves or their organization and is also similar for Business Solution Providers (BSPs).\n\nWebhook will trigger whenever a business phone number receives a message, updates the current state of the sent message, and more.\n\nWe will examine a way to set up webhooks to connect with WhatsApp, in addition to how to set up a function that sends/receives messages and stores them in the MongoDB database.\n\n## Prerequisites\n\nThe core requirement is a Meta Business account. If you don't have a business account, then you can also use the Test Business account that is provided by Meta. Refer to the article Create a WhatsApp Business Platform account for more information.\n\nWhatsApp Business Cloud API is a part of Meta's Graph API, so you need to set up a Meta Developer account and a Meta developer app. You can follow the instructions from the Get Started with Cloud API, hosted by Meta guide and complete all the steps explained in the docs to set everything up. When you create your application, make sure that you create an \"Enterprise\" application and that you add WhatsApp as a service. Once your application is created, find the following and store them somewhere.\n\n- Access token: You can use a temporary access token from your developer app > WhatsApp > Getting Started page, or you can generate a permanent access token.\n\n- Phone number ID: You can find it from your developer app > WhatsApp > Getting Started page. It has the label \"Phone number ID\", and can be found under the \"From\" section.\n\nNext, you'll need to set up a MongoDB Atlas account, which you can learn how to do using the MongoDB Getting Started with Atlas article. Once your cluster is ready, create a database called `WhatsApp` and a collection called `messages`. You can leave the collection empty for now.\n\nOnce you have set up your MongoDB Atlas cluster, refer to the article on how to create an App Services app to create your MongoDB App Services application. On the wizard screen asking you for the type of application to build, choose the\u00a0 \"Build your own App\" template.\n\n## Verification Requests endpoint for webhook\n\nThe first thing you need to configure in the WhatsApp application is a verification request endpoint. This endpoint will validate the key and provides security to your application so that not everyone can use your endpoints to send messages.\n\nWhen you configure a webhook in the WhatsApp Developer App Dashboard, it will send a GET request to the Verification Requests endpoint. 
Let's write the logic for this endpoint in a function and then create a custom HTTPS endpoint in Atlas.\n\nTo create a function in Atlas, use the \"App Services\" > \"Functions\" menu under the BUILD section. From that screen, click on the \"Create New Function\" button and it will show the Add Function page.\n\nHere, you will see two tabs: \"Settings\" and \"Function Editor.\" Start with the \"Settings\" tab and\u00a0 let's configure the required details:\n\n- Name: Set the Function Name to `webhook_get`.\n\n- Authentication: Select `System`. It will bypass the rule and authentication when our endpoint hits the function.\n\nTo write the code, we need to click on the \"Function Editor\" tab. You need to replace the code in your editor. Below is the brief of our code and how it works.\n\nYou need to set a secret value for `VERIFY_TOKEN`. You can pick any random value for this field, and you will need to add it to your WhatsApp webhook configuration later on.\n\nThe request receives three query parameters: `hub.mode`, `hub.verify_token`, and `hub.challenge`.\n\nWe need to check if `hub.mode` is `subscribe` and that the `hub.verify_token` value matches the `VERIFY_TOKEN`. If so, we return the `hub.challenge` value as a response. Otherwise, the response is forbidden.\n\n```javascript\n// this function Accepts GET requests at the /webhook endpoint. You need this URL to set up the webhook initially, refer to the guide https://developers.facebook.com/docs/graph-api/webhooks/getting-started#verification-requests\nexports = function({ query, headers, body }, response) {\n /**\n * UPDATE YOUR VERIFY TOKEN\n * This will be the Verify Token value when you set up the webhook\n **/\n const VERIFY_TOKEN = \"12345\";\n\n // Parse params from the webhook verification request\n let mode = query\"hub.mode\"],\n\u00a0\u00a0\u00a0\u00a0\u00a0token = query[\"hub.verify_token\"],\n\u00a0\u00a0\u00a0\u00a0\u00a0challenge = query[\"hub.challenge\"];\n // Check the mode and token values are correct\n if (mode == \"subscribe\" && token == VERIFY_TOKEN) {\n // Respond with 200 OK and challenge token from the request\n\u00a0\u00a0 response.setStatusCode(200);\n\u00a0\u00a0 response.setBody(challenge);\n } else {\n\u00a0 // Responds with '403 Forbidden' if verify tokens do not match\n\u00a0\u00a0 response.setStatusCode(403);\n }\n};\n\n```\n\nNow, we are all ready with the function. Click on the \"Save\" button above the tabs section, and use the \"Deploy\" button in the blue bar at the top to deploy your changes.\n\nNow, let's create a custom HTTPS endpoint to expose this function to the web. From the left navigation bar, follow the \"App Services\" > \"HTTPS Endpoints\" link, and then click on the \"Add an Endpoint\" button. It will show the Add Endpoint page.\n\nLet's configure the details step by step:\n\n1. Route: This is the name of your endpoint. Set it to `/webhook`.\n\n2. Operation Type under Endpoint Settings: This is the read-only callback URL for an HTTPS endpoint. Copy the URL and store it somewhere. The WhatsApp Webhook configuration will need it.\n\n3. HTTP Method under Endpoint Settings: Select the \"GET\" method from the dropdown.\n\n4. Respond With Result under Endpoint Settings: Set it to \"On\" because WhatsApp requires the response with the exact status code.\n\n5. Function: You will see the previously created function `webhook_get`. Select it.\n\nWe're all done. We just need to click on the \"Save\" button at the bottom, and deploy the application.\n\nWow, that was quick! 
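Before you wire it up in the Meta dashboard, you can optionally sanity-check the endpoint yourself. The snippet below is a minimal sketch, assuming Node 18+ (for the built-in `fetch`) and using a placeholder for the callback URL you copied earlier; it simulates the verification request WhatsApp will send.\n\n```javascript\n// Hypothetical sanity check: simulate the verification request WhatsApp will send.\n// Paste the callback URL you copied from the HTTPS endpoint settings.\nconst endpoint = \"<your-webhook-callback-url>\";\nconst params = new URLSearchParams({\n  \"hub.mode\": \"subscribe\",\n  \"hub.verify_token\": \"12345\",   // must match the VERIFY_TOKEN in the webhook_get function\n  \"hub.challenge\": \"1158201444\"\n});\n\nfetch(`${endpoint}?${params.toString()}`).then(async (res) => {\n  console.log(res.status, await res.text());\n});\n```\n\nIf the endpoint is wired up correctly, the response status is 200 and the body echoes the challenge value back.\n\n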
Now you can go to [WhatsApp > Configuration under Meta Developer App and set up the Callback URL that we have generated in the above custom endpoint creation's second point. Click Verify Token, and enter the value that you have specified in the `VERIFY_TOKEN` constant variable of the function you just created.\n\n## Event Notifications webhook endpoint\n\nThe Event Notifications endpoint is a POST request. Whenever new events occur, it will send a notification to the callback URL. We will cover two types of notifications: received messages and message status notifications if you have subscribed to the `messages` object under the WhatsApp Business Account product. First, we will design the schema and write the logic for this endpoint in a function and then create a custom HTTPS endpoint in Atlas.\n\nLet's design our sample database schema and see how we will store the sent/received messages in our MongoDB collection for future use. You can reply to the received messages and see whether the user has read the sent message.\n\n### Sent message document:\n\n```json\n{\n\u00a0\u00a0\u00a0\u00a0type: \"sent\", // this is we sent a message from our WhatsApp business account to the user\n\u00a0\u00a0\u00a0\u00a0messageId: \"\", // message id that is from sent message object\n\u00a0\u00a0\u00a0\u00a0contact: \"\", // user's phone number included country code\n\u00a0\u00a0\u00a0\u00a0businessPhoneId: \"\", // WhatsApp Business Phone ID\n\u00a0\u00a0\u00a0\u00a0message: {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0// message content whatever we sent\n\u00a0\u00a0\u00a0\u00a0},\n\u00a0\u00a0\u00a0\u00a0status: \"initiated | sent | received | delivered | read | failed\", // message read status by user\n\u00a0\u00a0\u00a0\u00a0createdAt: ISODate(), // created date\n\u00a0\u00a0\u00a0\u00a0updatedAt: ISODate() // updated date - whenever message status changes\n}\n```\n\n### Received message document:\n\n```json\n{\n\u00a0\u00a0\u00a0\u00a0type: \"received\", // this is we received a message from the user\n\u00a0\u00a0\u00a0\u00a0messageId: \"\", // message id that is from the received message object\n\u00a0\u00a0\u00a0\u00a0contact: \"\", // user's phone number included country code\n\u00a0\u00a0\u00a0\u00a0businessPhoneId: \"\", // WhatsApp Business Phone ID\n\u00a0\u00a0\u00a0\u00a0message: {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0// message content whatever we received from the user\n\u00a0\u00a0\u00a0\u00a0},\n\u00a0\u00a0\u00a0\u00a0status: \"ok | failed\", // is the message ok or has an error\n\u00a0\u00a0\u00a0\u00a0createdAt: ISODate() // created date\n}\n```\n\nLet's create another function in Atlas. As before, go to the functions screen, and click the\u00a0 \"Create New Function\" button. It will show the Add Function page. Use the following settings for this new function.\n\n- Name: Set the Function Name to `webhook_post`.\n\n- Authentication: Select `System`. It will bypass the rule and authentication when our endpoint hits the function.\n\nTo write code, we need to click on the \"Function Editor\" tab. You just need to replace the code in your editor. 
Below is the brief of our code and how it works.\n\nIn short, this function will do either an update operation if the notification is for a message status update, or an insert operation if a new message is received.\n\n```javascript\n// Accepts POST requests at the /webhook endpoint, and this will trigger when a new message is received or message status changes, refer to the guide https://developers.facebook.com/docs/graph-api/webhooks/getting-started#event-notifications\nexports = function({ query, headers, body }, response) {\n\u00a0\u00a0\u00a0\u00a0body = JSON.parse(body.text());\n\u00a0\u00a0\u00a0\u00a0if (body.object && body.entry) {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0// Find the name of the MongoDB service you want to use (see \"Linked Data Sources\" tab)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0const clusterName = \"mongodb-atlas\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0dbName = \"WhatsApp\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0collName = \"messages\";\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0body.entry.map(function(entry) {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0entry.changes.map(function(change) {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0// Message status notification\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0if (change.field == \"messages\" && change.value.statuses) {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0change.value.statuses.map(function(status) {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0// Update the status of a message\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0context.services.get(clusterName).db(dbName).collection(collName).updateOne(\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{ messageId: status.id },\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0$set: {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"status\": status.status,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"updatedAt\": new Date(parseInt(status.timestamp)*1000)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0);\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0});\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0// Received message 
notification\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0else if (change.field == \"messages\" && change.value.messages) {\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0change.value.messages.map(function(message) {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0let status = \"ok\";\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0// Any error\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0if (message.errors) {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0status = \"failed\";\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0// Insert the received message\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0context.services.get(clusterName).db(dbName).collection(collName).insertOne({\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"type\": \"received\", // this is we received a message from the user\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"messageId\": message.id, // message id that is from the received message object\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"contact\": message.from, // user's phone number included country code\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"businessPhoneId\": change.value.metadata.phone_number_id, // WhatsApp Business Phone ID\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"message\": message, // message content whatever we received from the user\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"status\": status, // is the message ok or has an error\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"createdAt\": new Date(parseInt(message.timestamp)*1000) // created date\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0});\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0});\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0});\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0});\n\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0response.setStatusCode(200);\u00a0\u00a0\n};\n\n```\n\nNow, we are all set with the function. Click on the \"Save\" button above the tabs section.\n\nJust like before, let's create a custom HTTPS endpoint in the \"HTTPS Endpoints\" tab. Click on the \"Add an Endpoint\" button, and it will show the Add Endpoint page.\n\nLet's configure the details step by step:\n\n1. Route: Set it to `/webhook`.\n\n2. 
HTTP Method under Endpoint Settings: Select the \"POST\" method from the dropdown.\n\n3. Respond With Result under Endpoint Settings: Set it to `On`.\n\n4. Function: You will see the previously created function `webhook_post`. Select it.\n\nWe're all done. We just need to click on the \"Save\" button at the bottom, and then deploy the application again.\n\nExcellent! We have just developed a webhook for sending and receiving messages and updating in the database, as well. So, you can list the conversation, see who replied to your message, and follow up.\n\n## Send Message endpoint\n\nSend Message Endpoint is a POST request, almost similar to the Send Messages of the WhatsApp Business API. The purpose of this endpoint is to send and store the message with `messageId` in the collection so the Event Notifications Webhook Endpoint can update the status of the message in the same document that we already developed in the previous point. We will write the logic for this endpoint in a function and then create a custom HTTPS endpoint in Atlas.\n\nLet's create a new function in Atlas with the following settings.\n\n- Name: Set the Function Name to `send_message`.\n\n- Authentication: Select \"System.\" It will bypass the rule and authentication when our endpoint hits the function.\n\nYou need to replace the code in your editor. Below is the brief of our code and how it works.\n\nThe request params should be:\n\n- body: The request body should be the same as the WhatsApp Send Message API.\n\n- headers: Pass Authorization Bearer token. You can use a temporary or permanent token. For more details, read the prerequisites section.\n\n- query: Pass the business phone ID in `businessPhoneId` property. For how to access it, read the prerequisites section.\n\nThis function uses the `https` node module to call the send message API of WhatsApp business. 
If the message is sent successfully, then insert a document in the collection with the messageId.\n\n```javascript\n// Accepts POST requests at the /send_message endpoint, and this will allow you to send messages the same as documentation https://developers.facebook.com/docs/whatsapp/cloud-api/guides/send-messages\nexports = function({ query, headers, body }, response) {\n\u00a0\u00a0\u00a0\u00a0response.setHeader(\"Content-Type\", \"application/json\");\n\u00a0\u00a0\u00a0\u00a0body = body.text();\n\u00a0\u00a0\u00a0\u00a0// Business phone ID is required\n\u00a0\u00a0\u00a0\u00a0if (!query.businessPhoneId) {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0response.setStatusCode(400);\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0response.setBody(JSON.stringify({\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0message: \"businessPhoneId is required, you can pass in query params!\"\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}));\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0return;\n\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0// Find the name of the MongoDB service you want to use (see \"Linked Data Sources\" tab)\n\u00a0\u00a0\u00a0\u00a0const clusterName = \"mongodb-atlas\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0dbName = \"WhatsApp\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0collName = \"messages\",\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0https = require(\"https\");\n\u00a0\u00a0\u00a0\u00a0// Prepare request options\n\u00a0\u00a0\u00a0\u00a0const options = {\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0hostname: \"graph.facebook.com\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0port: 443,\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0path: `/v15.0/${query.businessPhoneId}/messages`,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0method: \"POST\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0headers: {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"Content-Type\": \"application/json\",\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"Authorization\": headers.Authorization,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"Content-Length\": Buffer.byteLength(body)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0};\n\u00a0\u00a0\u00a0\u00a0const req = https.request(options, (res) => {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0response.setStatusCode(res.statusCode);\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0res.setEncoding('utf8');\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0let data = ];\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0res.on('data', (chunk) => {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0data.push(chunk);\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0});\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0res.on('end', () => {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0if (res.statusCode == 200) {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0let bodyJson = JSON.parse(body);\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0let stringData = JSON.parse(data[0]);\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0// Insert the message\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0context.services.get(clusterName).db(dbName).collection(collName).insertOne({\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"type\": \"sent\", // this is we sent a message from our WhatsApp business account to the user\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"messageId\": stringData.messages[0].id, // 
message id that is from the received message object\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"contact\": bodyJson.to, // user's phone number included country code\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"businessPhoneId\": query.businessPhoneId, // WhatsApp Business Phone ID\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"message\": bodyJson, // message content whatever we received from the user\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"status\": \"initiated\", // default status\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"createdAt\": new Date() // created date\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0});\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0response.setBody(data[0]);\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0});\u00a0\n\u00a0\u00a0\u00a0\u00a0});\n\u00a0\u00a0\u00a0\u00a0req.on('error', (e) => {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0response.setStatusCode(e.statusCode);\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0response.setBody(JSON.stringify(e));\n\u00a0\u00a0\u00a0\u00a0});\n\u00a0\u00a0\u00a0\u00a0// Write data to the request body\n\u00a0\u00a0\u00a0\u00a0req.write(body);\n\u00a0\u00a0\u00a0\u00a0req.end();\u00a0\u00a0\n};\n```\n\nNow, we are all ready with the function. Click on the \"Save\" button above the tabs section.\n\nLet's create a custom HTTPS endpoint for this function with the following settings.\n\n1. Route: Set it to `/send_message`.\n\n2. HTTP Method under Endpoint Settings: Select the \"POST\" method from the dropdown.\n\n3. Respond With Result under Endpoint Settings: Set it to \"On.\"\n\n4. Function: You will see the previously created function `send_message`. Select it.\n\nWe're all done. We just need to click on the \"Save\" button at the bottom.\n\nRefer to the below curl request example. This will send a default welcome template message to the users. You just need to replace your value inside the `<>` brackets.\n\n```bash\ncurl --location '?businessPhoneId=' \\\n --header 'Authorization: Bearer ' \\\n --header 'Content-Type: application/json' \\\n --data '{ \\\n\u00a0\u00a0\u00a0\u00a0 \"messaging_product\": \"whatsapp\",\u00a0\\\n\u00a0\u00a0\u00a0\u00a0 \"to\": \"\",\u00a0\\\n\u00a0\u00a0 \u00a0\u00a0\"type\": \"template\",\u00a0\\\n \u00a0\u00a0\u00a0\u00a0\"template\": {\u00a0\\\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0\"name\": \"hello_world\",\u00a0\\\n\u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0\"language\": { \"code\": \"en_US\" }\u00a0\\\n\u00a0\u00a0 \u00a0\u00a0} \\\n }'\n```\n\nGreat! We have just developed an endpoint that sends messages to the user's WhatsApp account from your business phone number.\n\n## Conclusion\n\nIn this tutorial, we developed three custom HTTPS endpoints and their functions in MongoDB Atlas. One is Verification Requests, which verifies the request from WhatsApp > Developer App's webhook configuration using Verify Token. The second is Event Notifications, which can read sent messages and status updates,\u00a0 receive messages, and store them in MongoDB's collection. The third is Send Message, which can send messages from your WhatsApp business phone number to the user's WhatsApp account.\n\nApart from these things, we have built a collection for messages. 
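For example, pulling up the conversation with a single contact is just a query on that collection. The mongosh sketch below assumes the schema shown earlier; the phone number is a placeholder.\n\n```shell\nuse WhatsApp\n\n// Conversation with one contact, oldest message first\ndb.messages.find({ \"contact\": \"<user-phone-number>\" }).sort({ \"createdAt\": 1 })\n```\n\n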
You can use it for many use cases, like designing a chat conversation page where you can see the conversation and reply back to the user. You can also build your own chatbot to reply to users.\n\nIf you have any questions or feedback, check out the [MongoDB Community Forums and let us know what you think.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "In this article, learn how to integrate the WhatsApp Business API with MongoDB Atlas functions.", "contentType": "Tutorial"}, "title": "WhatsApp Business API Webhook Integration with Data API", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/neurelo-getting-started", "action": "created", "body": "# Neurelo and MongoDB: Getting Started and Fun Extras\n\nReady to hit the ground running with less code, fewer database complexities, and easier platform integration? Then this tutorial on navigating the intersection between Neurelo and MongoDB Atlas is for you. \n\nNeurelo is a platform that utilizes AI, APIs, and the power of the cloud to help developers better interact with and manipulate their data that is stored in either MongoDB, PostgreSQL, or MySQL. This straightforward approach to data programming allows developers to work with their data from their applications, improving scalability and efficiency, while ensuring full transparency. The tutorial below will show readers how to properly set up a Neurelo account, how to connect it with their MongoDB Atlas account, how to use Neurelo\u2019s API Playground to manipulate a collection, and how to create complex queries using Neurelo\u2019s AI Assist feature. \n\nLet\u2019s get started! \n\n### Prerequisites for success\n\n - A MongoDB Atlas account\n - A MongoDB Atlas cluster\n - A Neurelo account\n\n## The set-up\n### Step 1: MongoDB Atlas Cluster\n\nOur first step is to make sure we have a MongoDB Atlas cluster ready \u2014 if needed, learn more about how to create a cluster. Please ensure you have a memorable username and password, and that you have the proper network permissions in place. To make things easier, you can use `0.0.0.0` as the IP address, but please note that it\u2019s not recommended for production or if you have sensitive information in your cluster. \n\nOnce the cluster is set up, load in the MongoDB sample data. This is important because we will be using the `sample_restaurants` database and the `restaurants` collection. Once the cluster is set up, let\u2019s create our Neurelo account if not already created. \n\n### Step 2: Neurelo account creation and project initialization\n\nAccess Neurelo\u2019s dashboard and follow the instructions to create an account. Once finished, you will see this home screen. \n\nInitialize a new project by clicking the orange \u201cNew\u201d button in the middle of the screen. Fill in each section of the pop-up. \n\nThe `Organization` name is automatically filled for you but please pick a unique name for your project, select the `Database Engine` to be used (we are using MongoDB), select the language necessary for your project (this is optional since we are not using a language for this tutorial), and then fill in a description for future you or team members to know what\u2019s going on (also an optional step).\n\nOnce you click the orange \u201cCreate\u201d button, you\u2019ll be shown the three options in the screenshot below. It\u2019s encouraged for new Neurelo users to click on the \u201cQuick Start\u201d option. 
The other two options are there for you to explore once you\u2019re no longer a novice. \n\nYou\u2019ll be taken to this quick start. Follow the steps through.\n\nClick on \u201cConnect Data Source.\u201d Please go to your MongoDB Atlas cluster and copy the connection string to your cluster. When putting in your Connection String to Neurelo, you will need to specify the database you want to use at the end of the string. There is no need to specify which collection. \n\nSince we are using our `sample_restaurants` database for this example, we want to ensure it\u2019s included in the Connection String. It\u2019ll look something like this:\n\n```\nmongodb+srv://mongodb:@cluster0.xh8qopq.mongodb.net/sample_restaurants\n```\n\n \nOnce you\u2019re done, click \u201cTest Connection.\u201d If you\u2019re unable to connect, please go into MongoDB Atlas\u2019 Network permissions and copy in the two IP addresses on the `New Data Source` screen as it might be a network error. Once \u201cTest Connection\u201d is successful, hit \u201cSubmit.\u201d \n\nNow, click on the orange \u201cNew Environment\u201d button. In Neurelo, environments are used so developers can run their APIs (auto-generated and using custom queries) against their data. Please fill in the fields.\n\n \nOnce your environment is successfully created, it\u2019ll turn green and you can continue on to creating your Access Token. Click the orange \u201cNew Access Token\u201d button. These tokens grant the users permission to access the APIs for a specific environment. \n\nStore your key somewhere safe \u2014 if you lose it, you\u2019ll need to generate a new one. \n\nThe last step is to activate the runners by clicking the button. \n\nAnd congratulations! You have successfully created a project in Neurelo. \n\n### Step 3: Filtering data using the Neurelo Playground\n\nNow we can play around with the documents in our MongoDB collection and actually filter through them using the Playground.\n\nIn your API Playground \u201cHeaders\u201d area, please include your Token Key in the `X-API-KEY` header. This makes it so you\u2019re properly connected to the correct environment. \n\nNow you can use Neurelo\u2019s API playground to access the documents located in your MongoDB database. \n\nLet\u2019s say we want to return multiple documents from our restaurant category. We want to return restaurants that are located in the borough of Brooklyn in New York and we want those restaurants that serve American cuisine. 
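As a point of reference, the same request expressed directly in mongosh against the `sample_restaurants` database would look roughly like the sketch below; next, we'll get the same result through Neurelo's generated API.\n\n```shell\ndb.restaurants.find(\n    { \"borough\": \"Brooklyn\", \"cuisine\": \"American\" },\n    { \"borough\": 1, \"cuisine\": 1, \"name\": 1 }\n).limit(5)\n```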
\n\nTo utilize Neurelo\u2019s API to find us five restaurants, we can click on the \u201cGET Find many restaurants\u201d tab in our \u201crestaurants\u201d collection in the sidebar, click on the `Parameters` header, and fill in our parameters as such:\n\n```\nselect: {\"id\": true, \"borough\": true, \"cuisine\": true, \"name\": true}\n```\n```\nfilter: {\"AND\": {\"borough\": {\"equals\": \"Brooklyn\"}, \"cuisine\": {\"equals\": \"American\"}}]}\n```\n```\ntake: 5\n```\n\nYour response should look something like this: \n\n```\n{\n \"data\": [\n {\n \"id\": \"5eb3d668b31de5d588f4292a\",\n \"borough\": \"Brooklyn\",\n \"cuisine\": \"American\",\n \"name\": \"Riviera Caterer\"\n },\n {\n \"id\": \"5eb3d668b31de5d588f42931\",\n \"borough\": \"Brooklyn\",\n \"cuisine\": \"American\",\n \"name\": \"Regina Caterers\"\n },\n {\n \"id\": \"5eb3d668b31de5d588f42934\",\n \"borough\": \"Brooklyn\",\n \"cuisine\": \"American\",\n \"name\": \"C & C Catering Service\"\n },\n {\n \"id\": \"5eb3d668b31de5d588f4293c\",\n \"borough\": \"Brooklyn\",\n \"cuisine\": \"American\",\n \"name\": \"The Movable Feast\"\n },\n {\n \"id\": \"5eb3d668b31de5d588f42949\",\n \"borough\": \"Brooklyn\",\n \"cuisine\": \"American\",\n \"name\": \"Mejlander & Mulgannon\"\n }\n ]\n}\n```\n\nAs you can see from our output, the `select` feature maps to our MongoDB `$project` operator. We are choosing which fields from our document to show in our output. The `filter` feature mimics our `$match` operator and the `take` feature mimics our `$limit` operator. This is just one simple example, but the opportunities truly are endless. Once you become familiar with these APIs, you can use these APIs to build your applications with MongoDB. \n\nNeurelo truly allows developers to easily and quickly set up API calls so they can access and interact with their data. \n\n### Step 4: Complex queries in Neurelo\n\nIf we have a use case where Neurelo\u2019s auto-generated endpoints do not give us the results we want, we can actually create complex queries very easily in Neurelo. We are able to create our own custom endpoints for more complex queries that are necessary to filter through the results we want. These queries can be aggregation queries, find queries, or any query that MongoDB supports depending on the use case. Let\u2019s run through an example together.\n\nAccess your Neurelo \u201cHome\u201d page and click on the project \u201cTest\u201d we created earlier. Then, click on \u201cDefinitions\u201d on the left-hand side of the screen and click on \u201cCustom Queries.\u201d \n\n![Custom queries in Neurelo\n \nClick on the orange \u201cNew\u201d button in the middle of the screen to add a new custom query endpoint and once the screen pops up, come up with a unique name for your query. Mine is just \u201ccomplexQuery.\u201d\n\nWith Neurelo, you can actually use their AI Assist feature to help come up with the query you\u2019re looking for. Built upon LLMs, AI Assist for complex queries can help you come up with the code you need. \n\nClick on the multicolored \u201cAI Assist\u201d button on the top right-hand corner to bring up the AI Assist tab. \n\nType in a prompt. Ours is:\n\u201cPlease give me all restaurants that are in Brooklyn and are American cuisine.\u201d\n\n \nYou can also update the prompt to include the projections to be returned. 
Changing the prompt to \n\u201cget me all restaurants that are in Brooklyn and serve the American cuisine and show me the name of the restaurant\u201d will come up with something like:\n\nAs you can see, AI Assist comes up with a valid complex query that we can build upon. This is incredibly helpful especially if we aren\u2019t familiar with syntax or if we just don\u2019t feel like scrolling through documentation. \n\nEdit the custom query to better help with your use case.\n\nClick on the \u201cUse This\u201d button to import the query into your Custom Query box. Using the same example as before, we want to ensure we are able to see the name of the restaurant, the borough, and the cuisine. Here\u2019s the updated version of this query:\n```\n{\n \"find\": \"restaurants\",\n \"filter\": {\n \"borough\": \"Brooklyn\",\n \"cuisine\": \"American\"\n },\n \"projection\": {\n \"_id\": 0,\n \"name\": 1,\n \"borough\": 1,\n \"cuisine\": 1\n }\n}\n```\nClick the orange \u201cTest Query\u201d button, put in your Access Token, click on the environment approval button, and click run!\n\nYour output will look like this: \n```\n{\n \"data\": {\n \"cursor\": {\n \"firstBatch\": \n {\n \"borough\": \"Brooklyn\",\n \"cuisine\": \"American\",\n \"name\": \"Regina Caterers\"\n },\n {\n \"borough\": \"Brooklyn\",\n \"cuisine\": \"American\",\n \"name\": \"The Movable Feast\"\n },\n {\n \"borough\": \"Brooklyn\",\n \"cuisine\": \"American\",\n \"name\": \"Reben Luncheonette\"\n },\n {\n \"borough\": \"Brooklyn\",\n \"cuisine\": \"American\",\n \"name\": \"Cody'S Ale House Grill\"\n },\n {\n \"borough\": \"Brooklyn\",\n \"cuisine\": \"American\",\n \"name\": \"Narrows Coffee Shop\"\n },\n\u2026\n\n```\nAs you can see, you\u2019ve successfully created a complex query that shows you the name of the restaurant, the borough, and the cuisine. You can now commit and deploy this as a custom endpoint in your Neurelo environment and call this API from your applications. Great job!\n\n## To sum things up...\nThis tutorial has successfully taken you through how to create a Neurelo account, connect your MongoDB Atlas database to Neurelo, explore Neurelo\u2019s API Playground, and even create complex queries using their AI Assistant function. Now that you\u2019re familiar with the basics, you can always take things a step further and incorporate the above learnings in a new application. \n\nFor help, Neurelo has tons of [documentation, getting started videos, and information on their APIs. \n\nTo learn more about why developers should use Neurelo, check out the hyper-linked resource, as well as this article produced by our very own Matt Asay.\n", "format": "md", "metadata": {"tags": ["MongoDB", "Neurelo"], "pageDescription": "New to Neurelo? Let\u2019s dive in together. Learn the power of this platform through our in-depth tutorial which will take you from novice to expert in no time. ", "contentType": "Tutorial"}, "title": "Neurelo and MongoDB: Getting Started and Fun Extras", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-case-insensitive-query-index", "action": "created", "body": "# Case-Insensitive Queries Without Case-Insensitive Indexes\n\nWe've reached the sixth and final (at least for now) MongoDB schema design anti-pattern. 
In the first five posts in this series, we've covered the following anti-patterns.\n\n- Massive arrays\n- Massive number of collections\n- Unnecessary indexes\n- Bloated documents\n- Separating data that is accessed together\n\nToday, we'll explore the wonderful world of case-insensitive indexes. Not having a case-insensitive index can create surprising query results and/or slow queries...and make you hate everything.\n\n \n\nOnce you know the details of how case-insensitive queries work, the implementation is fairly simple. Let's dive in!\n\n>\n>\n>:youtube]{vid=mHeP5IbozDU start=948}\n>\n>Check out the video above to see the case-insensitive queries and indexes in action.\n>\n>\n\n## Case-Insensitive Queries Without Case-Insensitive Indexes\n\nMongoDB supports three primary ways to run case-insensitive queries.\n\nFirst, you can run a case-insensitive query using [$regex with the `i` option. These queries will give you the expected case-insensitive results. However, queries that use `$regex` cannot efficiently utilize case-insensitive indexes, so these queries can be very slow depending on how much data is in your collection.\n\nSecond, you can run a case-insensitive query by creating a case-insensitive index (meaning it has a collation strength of `1` or `2`) and running a query with the same collation as the index. A collation defines the language-specific rules that MongoDB will use for string comparison. Indexes can optionally have a collation with a strength that ranges from 1 to 5. Collation strengths of `1` and `2` both give you case-insensitivity. For more information on the differences in collation strengths, see the MongoDB docs. A query that is run with the same collation as a case-insensitive index will return case-insensitive results. Since these queries are covered by indexes, they execute very quickly.\n\nThird, you can run a case-insensitive query by setting the default collation strength for queries and indexes to a strength of `1` or `2` when you create a collection. All queries and indexes in a collection automatically use the default collation unless you specify otherwise when you execute a query or create an index. Therefore, when you set the default collation to a strength of `1` or `2`, you'll get case-insensitive queries and indexes by default. See the `collation` option in the db.createCollection() section of the MongoDB Docs for more details.\n\n>\n>\n>Warning for queries that do not use `$regex`: Your index must have a collation strength of `1` or `2` and your query must use the same collation as the index in order for your query to be case-insensitive.\n>\n>\n\nYou can use MongoDB Compass (MongoDB's desktop GUI) or the MongoDB Shell (MongoDB's command-line tool) to test if a query is returning the results you'd expect, see its execution time, and determine if it's using an index.\n\n## Example\n\nLet's revisit the example we saw in the Unnecessary Indexes Anti-Pattern and the Bloated Documents Anti-Pattern posts. Leslie is creating a website that features inspirational women. She has created a database with information about 4,700+ inspirational women. 
Below are three documents in her `InspirationalWomen` collection.\n\n``` none\n{\n \"_id\": ObjectId(\"5ef20c5c7ff4160ed48d8f83\"),\n \"first_name\": \"Harriet\",\n \"last_name\": \"Tubman\",\n \"quote\": \"I was the conductor of the Underground Railroad for eight years, \n and I can say what most conductors can't say; I never ran my \n train off the track and I never lost a passenger\"\n},\n{\n \"_id\": ObjectId(\"5ef20c797ff4160ed48d90ea\"),\n \"first_name\": \"HARRIET\",\n \"middle_name\": \"BEECHER\",\n \"last_name\": \"STOWE\",\n \"quote\": \"When you get into a tight place and everything goes against you,\n till it seems as though you could not hang on a minute longer, \n never give up then, for that is just the place and time that \n the tide will turn.\"\n},\n{\n \"_id\": ObjectId(\"5ef20c937ff4160ed48d9201\"),\n \"first_name\": \"Bella\",\n \"last_name\": \"Abzug\",\n \"quote\": \"This woman's place is in the House\u2014the House of Representatives.\"\n}\n```\n\nLeslie decides to add a search feature to her website since the website is currently difficult to navigate. She begins implementing her search feature by creating an index on the `first_name` field. Then she starts testing a query that will search for women named \"Harriet.\"\n\nLeslie executes the following query in the MongoDB Shell:\n\n``` sh\ndb.InspirationalWomen.find({first_name: \"Harriet\"})\n```\n\nShe is surprised to only get one document returned since she has two Harriets in her database: Harriet Tubman and Harriet Beecher Stowe. She realizes that Harriet Beecher Stowe's name was input in all uppercase in her database. Her query is case-sensitive, because it is not using a case-insensitive index.\n\nLeslie runs the same query with .explain(\"executionStats\") to see what is happening.\n\n``` sh\ndb.InspirationalWomen.find({first_name: \"Harriet\"}).explain(\"executionStats\")\n```\n\nThe Shell returns the following output.\n\n``` javascript\n{\n \"queryPlanner\": {\n ...\n \"winningPlan\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"first_name\": 1\n },\n \"indexName\": \"first_name_1\",\n ...\n \"indexBounds\": {\n \"first_name\": \n \"[\\\"Harriet\\\", \\\"Harriet\\\"]\"\n ]\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 1,\n \"executionTimeMillis\": 0,\n \"totalKeysExamined\": 1,\n \"totalDocsExamined\": 1,\n \"executionStages\": {\n ...\n }\n }\n },\n ...\n}\n```\n\nShe can see that the `winningPlan` is using an `IXSCAN` (index scan) with her `first_name_1` index. In the `executionStats`, she can see that only one index key was examined (`executionStats.totalKeysExamined`) and only one document was examined (`executionStats.totalDocsExamined`). For more information on how to interpret the output from `.explain()`, see [Analyze Query Performance.\n\nLeslie opens Compass and sees similar results.\n\n \n\n MongoDB Compass shows that the query is examining only one index key, examining only one document, and returning only one document. It also shows that the query used the first_name_1 index.\n\nLeslie wants all Harriets\u2014regardless of what lettercase is used\u2014to be returned in her query. She updates her query to use `$regex` with option `i` to indicate the regular expression should be case-insensitive. 
She returns to the Shell and runs her new query:\n\n``` sh\ndb.InspirationalWomen.find({first_name: { $regex: /Harriet/i} })\n```\n\nThis time she gets the results she expects: documents for both Harriet Tubman and Harriet Beecher Stowe. Leslie is thrilled! She runs the query again with `.explain(\"executionStats\")` to get details on her query execution. Below is what the Shell returns:\n\n``` javascript\n{\n \"queryPlanner\": {\n ...\n \"winningPlan\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"filter\": {\n \"first_name\": {\n \"$regex\": \"Harriet\",\n \"$options\": \"i\"\n }\n },\n \"keyPattern\": {\n \"first_name\": 1\n },\n \"indexName\": \"first_name_1\",\n ...\n \"indexBounds\": {\n \"first_name\": \n \"[\\\"\\\", {})\",\n \"[/Harriet/i, /Harriet/i]\"\n ]\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 2,\n \"executionTimeMillis\": 3,\n \"totalKeysExamined\": 4704,\n \"totalDocsExamined\": 2,\n \"executionStages\": {\n ...\n }\n },\n ...\n}\n```\n\nShe can see that this query, like her previous one, uses an index (`IXSCAN`). However, since `$regex` queries cannot efficiently utilize case-insensitive indexes, she isn't getting the typical benefits of a query that is covered by an index. All 4,704 index keys (`executionStats.totalKeysExamined`) are being examined as part of this query, resulting in a slightly slower query (`executionStats.executionTimeMillis: 3`) than one that fully utilizes an index.\n\nShe runs the same query in Compass and sees similar results. The query is using her `first_name_1` index but examining every index key.\n\n \n\n MongoDB Compass shows that the query is returning two documents as expected. The $regex query is using the first_name_1 index but examining every index key.\n\nLeslie wants to ensure that her search feature runs as quickly as possible. She uses Compass to create a new case-insensitive index named `first_name-case_insensitive`. (She can easily create indexes using other tools as well like the Shell or [MongoDB Atlas or even programmatically.) Her index will be on the `first_name` field in ascending order and use a custom collation with a locale of `en` and a strength of `2`. Recall from the previous section that the collation strength must be set to `1` or `2` in order for the index to be case-insensitive.\n\n \n\n Creating a new index in MongoDB Compass with a custom collation that has a locale of en and a strength of 2.\n\nLeslie runs a query very similar to her original query in the Shell, but this time she specifies the collation that matches her newly-created index:\n\n``` sh\ndb.InspirationalWomen.find({first_name: \"Harriet\"}).collation( { locale: 'en', strength: 2 } )\n```\n\nThis time she gets both Harriet Tubman and Harriet Beecher Stowe. 
Success!\n\nShe runs the query with `.explain(\"executionStats\")` to double check that the query is using her index:\n\n``` sh\ndb.InspirationalWomen.find({first_name: \"Harriet\"}).collation( { locale: 'en', strength: 2 } ).explain(\"executionStats\")\n```\n\nThe Shell returns the following results.\n\n``` javascript\n{\n \"queryPlanner\": {\n ...\n \"collation\": {\n \"locale\": \"en\",\n ...\n \"strength\": 2,\n ...\n },\n \"winningPlan\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"first_name\": 1\n },\n \"indexName\": \"first_name-case_insensitive\",\n \"collation\": {\n \"locale\": \"en\",\n ...\n \"strength\": 2,\n ...\n },\n ...\n \"indexBounds\": {\n \"first_name\": \n \"[\\\"7)KK91O\\u0001\\u000b\\\", \\\"7)KK91O\\u0001\\u000b\\\"]\"\n ]\n }\n }\n },\n \"rejectedPlans\": []\n },\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 2,\n \"executionTimeMillis\": 0,\n \"totalKeysExamined\": 2,\n \"totalDocsExamined\": 2,\n \"executionStages\": {\n ...\n }\n }\n },\n ...\n}\n```\n\nLeslie can see that the winning plan is executing an `IXSCAN` (index scan) that uses the case-insensitive index she just created. Two index keys (`executionStats.totalKeysExamined`) are being examined, and two documents (`executionStats.totalDocsExamined`) are being examined. The query is executing in 0 ms (`executionStats.executionTimeMillis: 0`). Now that's fast!\n\nLeslie runs the same query in Compass and specifies the collation the query should use.\n\n \n\nShe can see that the query is using her case-insensitive index and the\nquery is executing in 0 ms. She's ready to implement her search feature.\nTime to celebrate!\n\n \n\n*Note:* Another option for Leslie would have been to set the default collation strength of her InspirationalWomen collection to `1` or `2` when she created her collection. Then all of her queries would have returned the expected, case-insensitive results, regardless of whether she had created an index or not. She would still want to create indexes to increase the performance of her queries.\n\n## Summary\n\nYou have three primary options when you want to run a case-insensitive query:\n\n1. Use `$regex` with the `i` option. Note that this option is not as performant because `$regex` cannot fully utilize case-insensitive indexes.\n2. Create a case-insensitive index with a collation strength of `1` or `2`, and specify that your query uses the same collation.\n3. Set the default collation strength of your collection to `1` or `2` when you create it, and do not specify a different collation in your queries and indexes.\n\nAlternatively, [MongoDB Atlas Search can be used for more complex text searches.\n\nThis post is the final anti-pattern we'll cover in this series. But, don't be too sad\u2014this is not the final post in this series. Be on the lookout for the next post where we'll summarize all of the anti-patterns and show you a brand new feature in MongoDB Atlas that will help you discover anti-patterns in your database. You won't want to miss it!\n\n>\n>\n>When you're ready to build a schema in MongoDB, check out MongoDB Atlas, MongoDB's fully managed database-as-a-service. 
Atlas is the easiest way to get started with MongoDB and has a generous, forever-free tier.\n>\n>\n\n## Related Links\n\nCheck out the following resources for more information:\n\n- MongoDB Docs: Improve Case-Insensitive Regex Queries\n- MongoDB Docs: Case-Insensitive Indexes\n- MongoDB Docs: $regex\n- MongoDB Docs: Collation\n- MongoDB Docs: db.collection.explain()\n- MongoDB Docs: Analyze Query Performance\n- MongoDB University M201: MongoDB Performance\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Don't fall into the trap of this MongoDB Schema Design Anti-Pattern: Case-Insensitive Queries Without Case-Insensitive Indexes", "contentType": "Article"}, "title": "Case-Insensitive Queries Without Case-Insensitive Indexes", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/getting-started-kmm-flexiable-sync", "action": "created", "body": "# Getting Started Guide for Kotlin Multiplatform Mobile (KMM) with Flexible Sync\n\n> This is an introductory article on how to build your first Kotlin Multiplatform Mobile using Atlas Device Sync.\n\n## Introduction\n\nMobile development has evolved a lot in recent years and in this tutorial, we are going discuss Kotlin Multiplatform Mobile (KMM), one such platform which disrupted the development communities by its approach and thoughts on how to build mobile apps.\n\nTraditional mobile apps, either built with a native or hybrid approach, have their tradeoffs from development time to performance. But with the Kotlin Multiplatform approach, we can have the best of both worlds.\n\n## What is Kotlin Multiplatform Mobile (KMM)?\n\nKotlin Multiplatform is all about code sharing within apps for different environments (iOS, Android). Some common use cases for shared code are getting data from the network, saving it into the device, filtering or manipulating data, etc. This is different from other cross-development frameworks as this enforces developers to share only business logic code rather than complete code which often makes things complicated, especially when it comes to building different complex custom UI for each platform.\n\n## Setting up your environment\n\nIf you are an Android developer, then you don't need to do much. The primary development of KMM apps is done using Android Studio. The only additional step for you is to install the KMM plugin via IDE plugin manager. One of the key benefits of this is it allows to you build and run the iOS app as well from Android Studio.\n\nTo enable iOS building and running via Android Studio, your system should have Xcode installed, which is development IDE for iOS development.\n\nTo verify all dependencies are installed correctly, we can use `kdoctor`, which can be installed using brew.\n\n```shell \nbrew install kdoctor\n```\n\n## Building Hello World!\n\nWith our setup complete, it's time to get our hands dirty and build our first Hello World application.\n\nCreating a KMM application is very easy. Open Android Studio and then select Kotlin Multiplatform App from the New Project template. 
Hit Next.\n\nOn the next screen, add the basic application details like the name of the application, location of the project, etc.\n\nFinally, select the dependency manager for the iOS app, which is recommended for `Regular framework`, and then hit finish.\n\nOnce gradle sync is complete, we can run both iOS and Android app using the run button from the toolbar.\n\nThat will start the Android emulator or iOS simulator, where our app will run.\n\n \n\n## Basics of the Kotlin Multiplatform\n\nNow it's time to understand what's happening under the hood to grasp the basic concepts of KMM.\n\n### Understanding project structure\n\nAny KMM project can be split into three logic folders \u2014 i.e., `androidApp`, `iosApp`, and `shared` \u2014 and each of these folders has a specific purpose.\n\nSince KMM is all about sharing business-/logic-related code, all the shared code is written under `shared` the folder. This code is then exposed as libs to `androidApp` and `iosApp` folders, allowing us to use shared logic by calling classes or functions and building a user interface on top of it.\n\n### Writing platform-specific code\n\nThere can be a few use cases where you like to use platform-specific APIs for writing business logic like in the `Hello World!` app where we wanted to know the platform type and version. To handle such use cases, KMM has introduced the concept of `actual` and `expect`, which can be thought of as KMM's way of `interface` or `Protocols`.\n\nIn this concept, we define `expect` for the functionality to be exposed, and then we write its implementation `actual` for the different environments. Something like this:\n\n```Kotlin \n\nexpect fun getPlatform(): String\n\n```\n\n```kotlin\nactual fun getPlatform(): String = \"Android ${android.os.Build.VERSION.SDK_INT}\"\n```\n\n```kotlin\nactual fun getPlatform(): String =\n UIDevice.currentDevice.systemName() + \" \" + UIDevice.currentDevice.systemVersion\n```\n\nIn the above example, you'll notice that we are using platform-specific APIs like `android.os` or `UIDevice` in `shared` folder. To keep this organised and readable, KMM has divided the `shared` folder into three subfolders: `commonMain`, `androidMain`, `iOSMain`.\n\nWith this, we covered the basics of KMM (and that small learning curve for KMM is especially for people coming from an `android` background) needed before building a complex and full-fledged real app.\n\n## Building a more complex app\nNow let's build our first real-world application, Querize, an app that helps you collect queries in real time during a session. Although this is a very simple app, it still covers all the basic use cases highlighting the benefits of the KMM app with a complex one, like accessing data in real time.\n\nThe tech stack for our app will be:\n\n1. JetPack Compose for UI building.\n2. Kotlin Multiplatform with Realm as a middle layer.\n3. Atlas Flexible Device Sync from MongoDB,\n serverless backend supporting our data sharing.\n4. MongoDB Atlas, our cloud database.\n\nWe will be following a top to bottom approach in building the app, so let's start building the UI using Jetpack compose with `ViewModel`.\n\n```kotlin\n\nclass MainActivity : ComponentActivity() {\n override fun onCreate(savedInstanceState: Bundle?) 
{\n super.onCreate(savedInstanceState)\n setContent {\n MaterialTheme {\n Container()\n }\n }\n }\n}\n\n@Preview\n@Composable\nfun Container() {\n val viewModel = viewModel()\n\n Scaffold(\n topBar = {\n CenterAlignedTopAppBar(\n title = {\n Text(\n text = \"Querize\",\n fontSize = 24.sp,\n modifier = Modifier.padding(horizontal = 8.dp)\n )\n },\n colors = TopAppBarDefaults.centerAlignedTopAppBarColors(MaterialTheme.colorScheme.primaryContainer),\n navigationIcon = {\n Icon(\n painterResource(id = R.drawable.ic_baseline_menu_24),\n contentDescription = \"\"\n )\n }\n )\n },\n containerColor = (Color(0xffF9F9F9))\n ) {\n Column(\n modifier = Modifier\n .fillMaxSize()\n .padding(it),\n ) {\n\n Row(modifier = Modifier.fillMaxWidth(), horizontalArrangement = Arrangement.Center) {\n Image(\n painter = painterResource(id = R.drawable.ic_realm_logo),\n contentScale = ContentScale.Fit,\n contentDescription = \"App Logo\",\n modifier = Modifier\n .width(200.dp)\n .defaultMinSize(minHeight = 200.dp)\n .padding(bottom = 20.dp),\n )\n }\n\n AddQuery(viewModel)\n\n Text(\n \"Queries\",\n modifier = Modifier\n .fillMaxWidth()\n .padding(bottom = 8.dp),\n textAlign = TextAlign.Center,\n fontSize = 24.sp\n )\n\n QueriesList(viewModel)\n }\n }\n}\n\n@Composable\nfun AddQuery(viewModel: MainViewModel) {\n\n val queryText = remember { mutableStateOf(\"\") }\n\n TextField(\n modifier = Modifier\n .fillMaxWidth()\n .padding(8.dp),\n placeholder = { Text(text = \"Enter your query here\") },\n trailingIcon = {\n Icon(\n painterResource(id = R.drawable.ic_baseline_send_24),\n contentDescription = \"\",\n modifier = Modifier.clickable {\n viewModel.saveQuery(queryText.value)\n queryText.value = \"\"\n })\n },\n value = queryText.value,\n onValueChange = {\n queryText.value = it\n })\n}\n\n@Composable\nfun QueriesList(viewModel: MainViewModel) {\n\n val queries = viewModel.queries.observeAsState(initial = emptyList()).value\n\n LazyColumn(\n verticalArrangement = Arrangement.spacedBy(12.dp),\n contentPadding = PaddingValues(8.dp),\n content = {\n items(items = queries, itemContent = { item: String ->\n QueryItem(query = item)\n })\n })\n}\n\n@Preview\n@Composable\nfun QueryPreview() {\n QueryItem(query = \"Sample text\")\n}\n\n@Composable\nfun QueryItem(query: String) {\n Row(\n modifier = Modifier\n .fillMaxWidth()\n .background(Color.White)\n .padding(8.dp)\n .clip(RoundedCornerShape(8.dp))\n ) {\n Text(text = query, modifier = Modifier.fillMaxWidth())\n }\n}\n\n```\n\n```kotlin\nclass MainViewModel : ViewModel() {\n\n private val repo = RealmRepo()\n val queries: LiveData> = liveData {\n emitSource(repo.getAllData().flowOn(Dispatchers.IO).asLiveData(Dispatchers.Main))\n }\n\n fun saveQuery(query: String) {\n viewModelScope.launch {\n repo.saveInfo(query)\n }\n }\n}\n```\n\nIn our viewModel, we have a method `saveQuery` to capture the user queries and share them with the speaker. This information is then passed on to our logic layer, `RealmRepo`, which is built using Kotlin Multiplatform for Mobile (KMM) as we would like to reuse this for code when building an iOS app.\n\n```kotlin\nclass RealmRepo {\n\n suspend fun saveInfo(query: String) {\n\n }\n}\n```\n\nNow, to save and share this information, we need to integrate it with Atlas Device Sync, which will automatically save and share it with our clients in real time. 
To connect with Device Sync, we need to add `Realm` SDK first to our project, which provides us integration with Device Sync out of the box.\n\nRealm is not just SDK for integration with Atlas Device Sync, but it's a very powerful object-oriented mobile database built using KMM. One of the key advantages of using this is it makes our app work offline without any effort.\n\n### Adding Realm SDK\n\nThis step is broken down further for ease of understanding. \n\n#### Adding Realm plugin\n\nOpen the `build.gradle` file under project root and add the `Realm` plugin.\n\nFrom\n\n```kotlin\nplugins {\n id(\"com.android.application\").version(\"7.3.1\").apply(false)\n id(\"com.android.library\").version(\"7.3.1\").apply(false)\n kotlin(\"android\").version(\"1.7.10\").apply(false)\n kotlin(\"multiplatform\").version(\"1.7.20\").apply(false)\n}\n```\n\nTo\n\n```kotlin\nplugins {\n id(\"com.android.application\").version(\"7.3.1\").apply(false)\n id(\"com.android.library\").version(\"7.3.1\").apply(false)\n kotlin(\"android\").version(\"1.7.10\").apply(false)\n kotlin(\"multiplatform\").version(\"1.7.20\").apply(false)\n // Added Realm plugin \n id(\"io.realm.kotlin\") version \"0.10.0\"\n}\n```\n\n#### Enabling Realm plugin\n\nNow let's enable the Realm plugin for our project. We should make corresponding changes to the `build.gradle` file under the `shared` module.\n\nFrom\n\n```kotlin\nplugins {\n kotlin(\"multiplatform\")\n kotlin(\"native.cocoapods\")\n id(\"com.android.library\")\n}\n```\n\nTo\n\n```kotlin\nplugins {\n kotlin(\"multiplatform\")\n kotlin(\"native.cocoapods\")\n id(\"com.android.library\")\n // Enabled Realm Plugin\n id(\"io.realm.kotlin\")\n}\n```\n\n#### Adding dependencies\n\nWith the last step done, we are just one step away from completing the Realm setup. In this step, we add the Realm dependency to our project.\n\nSince the `Realm` database will be shared across all platforms, we will be adding the Realm dependency to the common source `shared`. In the same `build.gradle` file, locate the `sourceSet` tag and update it to:\n\nFrom\n\n ```kotlin\n sourceSets {\n val commonMain by getting {\n dependencies {\n\n }\n }\n // Other config\n}\n ```\n\nTo\n\n ```kotlin\n sourceSets {\n val commonMain by getting {\n dependencies {\n implementation(\"io.realm.kotlin:library-sync:1.4.0\")\n }\n }\n}\n ```\n\nWith this, we have completed the `Realm` setup for our KMM project. If you would like to use any part of the SDK inside the Android module, you can add the dependency in Android Module `build.gradle` file.\n\n ```kotlin\ndependencies {\n compileOnly(\"io.realm.kotlin:library-sync:1.4.0\")\n}\n ```\n\nSince Realm is an object-oriented database, we can save objects directly without getting into the hassle of converting them into different formats. To save any object into the `Realm` database, it should be derived from `RealmObject` class.\n\n```kotlin\nclass QueryInfo : RealmObject {\n\n @PrimaryKey\n var _id: String = \"\"\n var queries: String = \"\"\n}\n```\n\nNow let's save our query into the local database, which will then be synced using Atlas Device Sync and saved into our cloud database, Atlas.\n\n```kotlin\nclass RealmRepo {\n\n suspend fun saveInfo(query: String) {\n val info = QueryInfo().apply {\n _id = RandomUUID().randomId\n queries = query\n }\n realm.write {\n copyToRealm(info)\n }\n }\n}\n```\n\nThe next step is to create a `Realm` instance, which we use to save the information. 
To create a `Realm`, an instance of `Configuration` is needed which in turn needs a list of classes that can be saved into the database.\n\n```kotlin\n\nval realm by lazy {\n val config = RealmConfiguration.create(setOf(QueryInfo::class))\n Realm.open(config)\n}\n\n```\n\nThis `Realm` instance is sufficient for saving data into the device but in our case, we need to integrate this with Atlas Device Sync to save and share our data into the cloud. To do this, we take four more steps:\n\n1. Create a free MongoDB account.\n2. Follow the setup wizard after signing up to create a free cluster.\n3. Create an App with App Service UI to enable Atlas Device Sync.\n4. Enable Atlas Device Sync using Flexible Sync. Select the App services tab and enable sync, as shown below. \n \n\nNow let's connect our Realm and Atlas Device Sync. To do this, we need to modify our `Realm` instance creation. Instead of using `RealmConfiguration`, we need to use `SyncConfiguration`.\n\n`SyncConfiguration` instance can be created using its builder, which needs a user instance and `initialSubscriptions` as additional information. Since our application doesn't have a user registration form, we can use anonymous sign-in provided by Atlas App Services to identify as user session. So our updated code looks like this:\n\n```kotlin\n\nprivate val appServiceInstance by lazy {\n val configuration =\n AppConfiguration.Builder(\"application-0-elgah\").log(LogLevel.ALL).build()\n App.create(configuration)\n}\n```\n\n```kotlin\nlateinit var realm: Realm\n\nprivate suspend fun setupRealmSync() {\n val user = appServiceInstance.login(Credentials.anonymous())\n val config = SyncConfiguration\n .Builder(user, setOf(QueryInfo::class))\n .initialSubscriptions { realm ->\n // information about the data that can be read or modified. \n add(\n query = realm.query(),\n name = \"subscription name\",\n updateExisting = true\n )\n }\n .build()\n realm = Realm.open(config)\n}\n```\n\n```kotlin\nsuspend fun saveInfo(query: String) {\n if (!this::realm.isInitialized) {\n setupRealmSync()\n }\n\n val info = QueryInfo().apply {\n _id = RandomUUID().randomId\n queries = query\n }\n realm.write {\n copyToRealm(info)\n }\n}\n```\n\nNow, the last step to complete our application is to write a read function to get all the queries and show it on UI.\n\n```kotlin\nsuspend fun getAllData(): CommonFlow> {\n if (!this::realm.isInitialized) {\n setupRealmSync()\n }\n return realm.query().asFlow().map {\n it.list.map { it.queries }\n }.asCommonFlow()\n}\n```\n\nAlso, you can view or modify the data received via the `saveInfo` function using the `Atlas` UI.\n\nWith this done, our application is ready to send and receive data in real time. Yes, in real time. No additional implementation is required.\n\n## Summary\n\nThank you for reading this article! I hope you find it informative. 
The complete source code of the app can be found on GitHub.\n\nIf you have any queries or comments, you can share them on\nthe MongoDB Realm forum or tweet me @codeWithMohit.", "format": "md", "metadata": {"tags": ["Realm", "Kotlin", "Android", "iOS"], "pageDescription": "This is an introductory article on how to build your first Kotlin Multiplatform Mobile using Atlas Device Sync.", "contentType": "Tutorial"}, "title": " Getting Started Guide for Kotlin Multiplatform Mobile (KMM) with Flexible Sync", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/media-management-integrating-nodejs-azure-blob-storage-mongodb", "action": "created", "body": "# Building a Scalable Media Management Back End: Integrating Node.js, Azure Blob Storage, and MongoDB\n\nIf your goal is to develop a multimedia platform, a robust content management system, or any type of application that requires storing substantial media files, the storage, retrieval, and management of these files are critical to delivering a seamless user experience. This is where a robust media management back end becomes an indispensable component of your tech stack. In this tutorial, we will guide you through the process of creating such a back end utilizing Node.js, Azure Blob Storage, and MongoDB.\n\nStoring media files like images or videos directly in your MongoDB database may not be the most efficient approach. MongoDB has a BSON document size limit of 16MB, which is designed to prevent any single document from consuming too much RAM or bandwidth during transmission. Given the size of many media files, this limitation could be easily exceeded, presenting a significant challenge for storing large files directly in the database.\n\nMongoDB's GridFS is a solution for storing large files beyond the BSON-document size limit by dividing them into chunks and storing these chunks across separate documents. While GridFS is a viable solution for certain scenarios, an efficient approach is to use a dedicated service for storing large media files. Azure Blob (**B**inary **L**arge **Ob**jects) Storage, for example, is optimized for the storage of substantial amounts of unstructured data, which includes binary data like media files. Unstructured data refers to data that does not adhere to a specific model or format. \n\nWe'll provide you with a blueprint to architect a backend system capable of handling large-scale media storage with ease, and show you how to post to it using cURL commands. By the end of this article, you'll have a clear understanding of how to leverage Azure Blob Storage for handling massive amounts of unstructured data and MongoDB for efficient data management, all orchestrated with a Node.js API that glues everything together.\n\n installed. Node.js is the runtime environment required to run your JavaScript code server-side. npm is used to manage the dependencies.\n - A MongoDB cluster deployed and configured. If you need help, check out our MongoDB Atlas tutorial on how to get started.\n - An Azure account with an active subscription.\n\n## Set up Azure Storage\n\nFor this tutorial, we will use the Microsoft Azure Portal to set up our Azure storage. Begin by logging into your Azure account and it will take you to the home page. Once there, use the search bar at the top of the page to search \"Storage accounts.\"\n\n.\n\nChoose your preferred subscription and resource group, then assign a name to your storage account. 
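If you prefer the command line, the same storage account could also be created with the Azure CLI instead of the Portal. This is a rough sketch with placeholder names; the tutorial itself sticks to the Portal:

```console
az storage account create \
  --name <your-storage-account-name> \
  --resource-group <your-resource-group> \
  --location <your-region> \
  --sku Standard_LRS
```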
While the selection of region, performance, and redundancy options will vary based on your application's requirements, the basic tiers will suffice for all the functionalities required in this tutorial.\n\nIn the networking section, opt to allow public access from all networks. While this setting is generally not recommended for production environments, it simplifies the process for this tutorial by eliminating the need to set up specific network access rules.\n\nFor the rest of the configuration settings, we can accept the default settings. Once your storage account is created, we\u2019re going to navigate to the resource. You can do this by clicking \u201cGo to resource,\u201d or return to the home page and it will be listed under your resources.\n\nNow, we'll proceed to create a container. Think of a container as akin to a directory in a file system, used for organizing blobs. You can have as many containers as you need in a storage account, and each container can hold numerous blobs. To do this, go to the left panel and click on the Containers tab, then choose the \u201cplus container\u201d option. This will open a dialog where you can name your container and, if necessary, alter the access level from the default private setting. Once that's done, you can go ahead and initiate your container.\n\nTo connect your application to Azure Storage, you'll need to create a `Shared Access Signature` (SAS). SAS provides detailed control over the ways your client can access data. From the menu on the left, select \u201cShared access signature\u201d and set it up to permit the services and resource types you need. For the purposes of this tutorial, choose \u201cObject\u201d under allowed resource types, which is suitable for blob-level APIs and enables operations on individual blobs, such as upload, download, or delete.\n\nYou can leave the other settings at their default values. However, if you're interested in understanding which configurations are ideal for your application, Microsoft\u2019s documentation offers comprehensive guidance. Once you've finalized your settings, click \u201cGenerate SAS and connection string.\u201d This action will produce your SAS, displayed below the button.\n\n and click connect. If you need help, check out our guide in the docs.\n\n and takes a request listener function as an argument. In this case, `handleImageUpload` is passed as the request listener, which means that this function will be called every time the server receives an HTTP request.\n\n```js\nconst server = http.createServer(handleImageUpload);\nconst port = 3000;\nserver.listen(port, () => {\n console.log(`Server listening on port ${port}`);\n});\n```\n\nThe `handleImageUpload` function is designed to process HTTP POST requests to the /api/upload endpoint, handling the uploading of an image and the storing of its associated metadata. It will call upon a couple of helper functions to achieve this. 
We\u2019ll break down how these work as well.\n\n```javascript\nasync function handleImageUpload(req, res) {\n res.setHeader('Content-Type', 'application/json');\n if (req.url === '/api/upload' && req.method === 'POST') {\n try {\n // Extract metadata from headers\n const {fileName, caption, fileType } = await extractMetadata(req.headers);\n\n // Upload the image as a to Azure Storage Blob as a stream\n const imageUrl = await uploadImageStreamed(fileName, req);\n\n // Store the metadata in MongoDB\n await storeMetadata(fileName, caption, fileType, imageUrl);\n\n res.writeHead(201);\n res.end(JSON.stringify({ message: 'Image uploaded and metadata stored successfully', imageUrl }));\n } catch (error) {\n console.error('Error:', error);\n res.writeHead(500);\n res.end(JSON.stringify({ error: 'Internal Server Error' }));\n }\n } else {\n res.writeHead(404);\n res.end(JSON.stringify({ error: 'Not Found' }));\n }\n}\n```\n\nIf the incoming request is a POST to the correct endpoint, it will call our `extractMetadata` method. This function takes in our header from the request and extracts the associated metadata. \n\n```javascript\nasync function extractMetadata(headers) {\n const contentType = headers'content-type'];\n const fileType = contentType.split('/')[1];\n const contentDisposition = headers['content-disposition'] || '';\n const caption = headers['x-image-caption'] || 'No caption provided';\n const matches = /filename=\"([^\"]+)\"/i.exec(contentDisposition);\n const fileName = matches?.[1] || `image-${Date.now()}.${fileType}`;\n return { fileName, caption, fileType };\n}\n```\nIt assumes that the 'content-type' header of the request will include the file type (like image/png or image/jpeg). It extracts this file type from the header. It then attempts to extract a filename from the content-disposition header, if provided. If no filename is given, it generates a default one using a timestamp.\n\nUsing the extracted or generated filename and file type, along with the rest of our metadata from the header, it calls `uploadImageStreamed`, which uploads the image as a stream directly from the request to Azure Blob Storage.\n\n```javascript\nasync function uploadImageStreamed(blobName, dataStream) {\n const blobClient = containerClient.getBlockBlobClient(blobName);\n await blobClient.uploadStream(dataStream);\n return blobClient.url;\n}\n```\n\nIn this method, we are creating our `blobClient`. The blobClient opens a connection to an Azure Storage blob and allows us to manipulate it. Here we upload our stream into our blob and finally return our blob URL to be stored in MongoDB. \n\nOnce we have our image stored in Azure Blob Storage, we are going to take the URL and store it in our database. The metadata you decide to store will depend on your application. In this example, I add a caption for the file, the name, and the URL, but you might also want information like who uploaded the image or when it was uploaded. This document is inserted into a MongoDB collection using the `storeMetadata` method.\n\n```javascript\nasync function storeMetadata(name, caption, fileType, imageUrl) {\n const collection = client.db(\"tutorial\").collection('metadata');\n await collection.insertOne({ name, caption, fileType, imageUrl });\n}\n```\n\nHere we create and connect to our MongoClient, and insert our document into the metadata collection in the tutorial. Don\u2019t worry if the database or collection don\u2019t exist yet. 
As soon as you try to insert data, MongoDB will create it.\n\nIf the upload and metadata storage are successful, it sends back an HTTP 201 status code and a JSON response confirming the successful upload.\n\nNow we have an API call to upload our image, along with some metadata for said image. Let's test what we built! Run your application by executing the `node app.mjs` command in a terminal that's open in your app's directory. If you\u2019re following along, you\u2019re going to want to substitute the path to the image below to your own path, and whatever you want the metadata to be.\n\n```console\ncurl -X POST \\\n -H \"Content-Type: image/png\" \\\n -H \"Content-Disposition: attachment; filename=\\\"mongodb-is-webscale.png\\\"\" \\\n -H \"X-Image-Caption: Your Image Caption Here\" \\\n --data-binary @\"/path/to/your/mongodb-is-webscale.png\" \\\n http://localhost:3000/api/upload\n```\n\nThere\u2019s a couple of steps to our cURL command. \n - `curl -X POST` initiates a curl request using the POST method, which is commonly used for submitting data to be processed to a specified resource.\n - `-H \"Content-Type: image/png\"` includes a header in the request that tells the server what the type of the content being sent is. In this case, it indicates that the file being uploaded is a PNG image.\n - `-H \"Content-Disposition: attachment; filename=\\\"mongodb-is-webscale.png\\\"\"` header is used to specify information about the file. It tells the server the file should be treated as an attachment, meaning it should be downloaded or saved rather than displayed. The filename parameter is used to suggest a default filename to be used if the content is saved to a file. (Otherwise, our application will auto-generate one.) \n - `-H \"X-Image-Caption: Your Image Caption Here\"` header is used to dictate our caption. Following the colon, include the message you wish to store in or MongoDB document.\n - `--data-binary @\"{Your-Path}/mongodb-is-webscale.png\"` tells cURL to read data from a file and to preserve the binary format of the file data. The @ symbol is used to specify that what follows is a file name from which to read the data. {Your-Path} should be replaced with the actual path to the image file you're uploading.\n - `http://localhost:3000/api/upload` is the URL where the request is being sent. It indicates that the server is running on localhost (the same machine from which the command is being run) on port 3000, and the specific API endpoint handling the upload is /api/upload.\n\nLet\u2019s see what this looks like in our storage. First, let's check our Azure Storage blob. You can view the `mongodb-is-webscale.png` image by accessing the container we created earlier. It confirms that the image has been successfully stored with the designated name.\n\n![Microsoft portal showing our container and the image we transferred in.][5]\n\nNow, how can we retrieve this image in our application? Let\u2019s check our MongoDB database. You can do this through the MongoDB Atlas UI. Select the cluster and the collection you uploaded your metadata to. Here you can view your document.\n\n![MongoDB Atlas showing our metadata document stored in the collection.][6]\n\nYou can see we\u2019ve successfully stored our metadata! If you follow the URL, you will be taken to the image you uploaded, stored in your blob. 
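Before wrapping up, it is worth sketching how the back end itself could hand that image back to a client by looking up the stored metadata and redirecting to the blob URL. The snippet below is a rough sketch rather than part of the tutorial's code: the `/api/images` route, its `name` query parameter, and the `handleImageGet` helper are all assumptions, and it reuses the same MongoDB `client` shown in `storeMetadata`.

```javascript
// Rough sketch (not part of the tutorial's code): look up an image's metadata by name
// and redirect the caller to the blob URL stored in MongoDB. Assumes the same `client`
// used in storeMetadata, and that the container's access level (or a SAS token)
// allows the blob URL to be read.
async function handleImageGet(req, res) {
  const url = new URL(req.url, `http://${req.headers.host}`);
  if (url.pathname === '/api/images' && req.method === 'GET') {
    const doc = await client.db('tutorial').collection('metadata')
      .findOne({ name: url.searchParams.get('name') });
    if (!doc) {
      res.writeHead(404, { 'Content-Type': 'application/json' });
      return res.end(JSON.stringify({ error: 'Image not found' }));
    }
    res.writeHead(302, { Location: doc.imageUrl });
    return res.end();
  }
  res.writeHead(404, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ error: 'Not Found' }));
}
```

A handler like this could be dispatched from the same `http.createServer` request listener that routes upload requests to `handleImageUpload`.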
\n\n## Conclusion\n\nIntegrating Azure Blob Storage with MongoDB provides an optimal solution for storing large media files, such as images and videos, and provides a solid backbone for building your multimedia applications. Azure Blob Storage, a cloud-based service from Microsoft, excels in handling large quantities of unstructured data. This, combined with the efficient database management of MongoDB, creates a robust system. It not only simplifies the file upload process but also effectively manages relevant metadata, offering a comprehensive solution for data storage needs.\n\nThrough this tutorial, we've provided you with the steps to set up a MongoDB Atlas cluster and configure Azure Storage, and we demonstrated how to construct a Node.js API to seamlessly interact with both platforms.\n\nIf your goal is to develop a multimedia platform, a robust content management system, or any type of application that requires storing substantial media files, this guide offers a clear pathway to embark on that journey. Utilizing the powerful capabilities of Azure Blob Storage and MongoDB, along with a Node.js API, developers have the tools to create applications that are not only scalable and proficient but also robust enough to meet the demands of today's dynamic web environment.\n\nWant to learn more about what you can do with Microsoft Azure and MongoDB? Check out some of our articles in [Developer Center, such as Building a Crypto News Website in C# Using the Microsoft Azure App Service and MongoDB Atlas, where you can learn how to build and deploy a website in just a few simple steps.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd9281432bdaca405/65797bf8177bfa1148f89ad7/image3.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc26fafc9dc5d0ee6/65797bf87cf4a95dedf5d9cf/image2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltedfcb0b696b631af/65797bf82a3de30dcad708d1/image4.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt85d7416dc4785d29/65797bf856ca8605bfd9c50e/image5.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1c7d6af67a124be6/65797bf97ed7db1ef5c7da2f/image6.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6b50b6db724830a1/65797bf812bfab1ac0bc3a31/image1.png", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Node.js", "Azure"], "pageDescription": "Learn to create your own media management backend, storing your media files in Azure Blob Storage, and associated metadata in MongoDB.", "contentType": "Tutorial"}, "title": "Building a Scalable Media Management Back End: Integrating Node.js, Azure Blob Storage, and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-search-java-server", "action": "created", "body": "# How to Build a Search Service in Java\n\nWe need to code our way from the search box to our search index. Performing a search and rendering the results in a presentable fashion, itself, is not a tricky endeavor: Send the user\u2019s query to the search server, and translate the response data into some user interface technology. 
However, there are some important issues that need to be addressed, such as security, error handling, performance, and other concerns that deserve isolation and control.\n\nA typical three-tier system has a presentation layer that sends user requests to a middle layer, or application server, which interfaces with backend data services. These tiers separate concerns so that each can focus on its own responsibilities.\n\n. \n\nThis project was built using:\n\n* Gradle 8.5\n* Java 21\n\nStandard Java and servlet APIs are used and should work as-is or port easily to later Java versions.\n\nIn order to run the examples provided here, the Atlas sample data needs to be loaded and a `movies_index`, as described below, created on the `sample_mflix.movies` collection. If you\u2019re new to Atlas Search, a good starting point is Using Atlas Search from Java.\n\n## Search service design\n\nThe front-end presentation layer provides a search box, renders search results, and supplies sorting, pagination, and filtering controls. A middle tier, via an HTTP request, validates and translates the search request parameters into an aggregation pipeline specification that is then sent to the data tier.\n\nA search service needs to be fast, scalable, and handle these basic parameters:\n\n* The query itself: This is what the user entered into the search box.\n* Number of results to return: Often, only 10 or so results are needed at a time.\n* Starting point of the search results: This allows the pagination of search results.\n\nAlso, a performant query should only search and return a small number of fields, though not necessarily the same fields searched need to be returned. For example, when searching movies, you might want to search the `fullplot` field but not return the potentially large text for presentation. Or, you may want to include the year the movie was released in the results but not search the `year` field.\n\nAdditionally, a search service must provide a way to constrain search results to, say, a specific category, genre, or cast member, without affecting the relevancy ordering of results. This filtering capability could also be used to enforce access control, and a service layer is an ideal place to add such constraints that the presentation tier can rely on rather than manage.\n\n## Search service interface\n\nLet\u2019s now concretely define the service interface based on the design. Our goal is to support a request, such as _find \u201cMusic\u201d genre movies for the query \u201cpurple rain\u201d against the `title` and `plot` fields_, returning only five results at a time that only include the fields title, genres, plot, and year. 
That request from our presentation layer\u2019s perspective is this HTTP GET request:\n\n```\nhttp://service_host:8080/search?q=purple%20rain&limit=5&skip=0&project=title,genres,plot,year&search=title,plot&filter=genres:Music\n```\n\nThese parameters, along with a `debug` parameter, are detailed in the following table:\n\n|parameter|description|\n|-----------|-----------|\n|`q`|This is a full-text query, typically the value entered by the user into a search box.|\n|`search`|This is a comma-separated list of fields to search across using the query (`q`) parameter.|\n|`limit`|Only return this maximum number of results, constrained to a maximum of 25 results.|\n|`skip`|Return the results starting after this number of results (up to the `limit` number of results), with a maximum of 100 results skipped.|\n|`project`|This is a comma-separated list of fields to return for each document. Add `_id` if that is needed. `_score` is a \u201cpseudo-field\u201d used to include the computed relevancy score.|\n|`filter`|`:` syntax; supports zero or more `filter` parameters.|\n|`debug`|If `true`, include the full aggregation pipeline .explain() output in the response as well.|\n\n### Returned results\n\nGiven the specified request, let\u2019s define the response JSON structure to return the requested (`project`) fields of the matching documents in a `docs` array. In addition, the search service returns a `request` section showing both the explicit and implicit parameters used to build the Atlas $search pipeline and a `meta` section that will return the total count of matching documents. This structure is entirely our design, not meant to be a direct pass-through of the aggregation pipeline response, allowing us to isolate, manipulate, and map the response as it best fits our presentation tier\u2019s needs.\n\n```\n{\n \"request\": {\n \"q\": \"purple rain\",\n \"skip\": 0,\n \"limit\": 5,\n \"search\": \"title,plot\",\n \"project\": \"title,genres,plot,year\",\n \"filter\": \n \"genres:Music\"\n ]\n },\n \"docs\": [\n {\n \"plot\": \"A young musician, tormented by an abusive situation at home, must contend with a rival singer, a burgeoning romance and his own dissatisfied band as his star begins to rise.\",\n \"genres\": [\n \"Drama\",\n \"Music\",\n \"Musical\"\n ],\n \"title\": \"Purple Rain\",\n \"year\": 1984\n },\n {\n \"plot\": \"Graffiti Bridge is the unofficial sequel to Purple Rain. In this movie, The Kid and Morris Day are still competitors and each runs a club of his own. They make a bet about who writes the ...\",\n \"genres\": [\n \"Drama\",\n \"Music\",\n \"Musical\"\n ],\n \"title\": \"Graffiti Bridge\",\n \"year\": 1990\n }\n ],\n \"meta\": [\n {\n \"count\": {\n \"total\": 2\n }\n }\n ]\n}\n```\n\n## Search service implementation\n\nCode! That\u2019s where it\u2019s at. Keeping things as straightforward as possible so that our implementation is useful for every front-end technology, we\u2019re implementing an HTTP service that works with standard GET request parameters and returns easily digestible JSON. And Java is our language of choice here, so let\u2019s get to it. Coding is an opinionated endeavor, so we acknowledge that there are various ways to do this in Java and other languages \u2014 here\u2019s one opinionated (and experienced) way to go about it.\n\nTo run with the configuration presented here, a good starting point is to get up and running with the examples from the article [Using Atlas Search from Java. 
Once you\u2019ve got that running, create a new index, called `movies_index`, with a custom index configuration as specified in the following JSON: \n\n```\n{\n \"analyzer\": \"lucene.english\",\n \"searchAnalyzer\": \"lucene.english\",\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"cast\": \n {\n \"type\": \"token\"\n },\n {\n \"type\": \"string\"\n }\n ],\n \"genres\": [\n {\n \"type\": \"token\"\n },\n {\n \"type\": \"string\"\n }\n ]\n }\n }\n}\n```\n\nHere\u2019s the skeleton of the implementation, a standard `doGet` servlet entry point, grabbing all the parameters we\u2019ve specified:\n\n```\npublic class SearchServlet extends HttpServlet {\n private MongoCollection collection;\n private String indexName;\n\n private Logger logger;\n\n // ...\n @Override\n protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {\n String q = request.getParameter(\"q\");\n String searchFieldsValue = request.getParameter(\"search\");\n String limitValue = request.getParameter(\"limit\");\n String skipValue = request.getParameter(\"skip\");\n String projectFieldsValue = request.getParameter(\"project\");\n String debugValue = request.getParameter(\"debug\");\n String[] filters = request.getParameterMap().get(\"filter\");\n\n // ...\n }\n}\n```\n[SearchServlet\n\nNotice that a few instance variables have been defined, which get initialized in the standard servlet `init` method from values specified in the `web.xml` deployment descriptor, as well as the `ATLAS_URI` environment variable:\n\n```\n @Override\n public void init(ServletConfig config) throws ServletException {\n super.init(config);\n\n logger = Logger.getLogger(config.getServletName());\n\n String uri = System.getenv(\"ATLAS_URI\");\n if (uri == null) {\n throw new ServletException(\"ATLAS_URI must be specified\");\n }\n\n String databaseName = config.getInitParameter(\"database\");\n String collectionName = config.getInitParameter(\"collection\");\n indexName = config.getInitParameter(\"index\");\n\n MongoClient mongo_client = MongoClients.create(uri);\n MongoDatabase database = mongo_client.getDatabase(databaseName);\n collection = database.getCollection(collectionName);\n\n logger.info(\"Servlet \" + config.getServletName() + \" initialized: \" + databaseName + \" / \" + collectionName + \" / \" + indexName);\n }\n```\nSearchServlet#init\n\nFor the best protection of our `ATLAS_URI` connection string, we define it in the environment so that it\u2019s not hard-coded nor visible within the application itself other than at initialization, whereas we specify the database, collection, and index names within the standard `web.xml` deployment descriptor which allows us to define end-points for each index that we want to support. Here\u2019s a basic web.xml definition:\n\n```\n\n \n SearchServlet\n com.mongodb.atlas.SearchServlet\n 1\n \n \n database\n sample_mflix\n \n \n collection\n movies\n \n \n index\n movies_index\n \n \n\n \n SearchServlet\n /search\n \n\n```\nweb.xml\n\n### GETting the search results\n\nRequesting search results is a stateless operation with no side effects to the database and works nicely as a straightforward HTTP GET request, as the query itself should not be a very long string. Our front-end tier can constrain the length appropriately. 
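As a quick sanity check, the example request from the service interface section can be issued as a plain GET with curl, assuming the service is running locally on port 8080 (for example, via the Gretty `jettyRun` task mentioned later):

```
curl "http://localhost:8080/search?q=purple%20rain&limit=5&skip=0&project=title,genres,plot,year&search=title,plot&filter=genres:Music"
```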
Larger requests could be supported by adjusting to POST/getPost, if needed.\n\n### Aggregation pipeline behind the scenes\n\nUltimately, to support the information we want returned (as shown above in the example response), the request example shown above gets transformed into this aggregation pipeline request:\n\n```\n\n {\n \"$search\": {\n \"compound\": {\n \"must\": [\n {\n \"text\": {\n \"query\": \"purple rain\",\n \"path\": [\n \"title\",\n \"plot\"\n ]\n }\n }\n ],\n \"filter\": [\n {\n \"equals\": {\n \"path\": \"genres\",\n \"value\": \"Music\"\n }\n }\n ]\n },\n \"index\": \"movies_index\",\n \"count\": {\n \"type\": \"total\"\n }\n }\n },\n {\n \"$facet\": {\n \"docs\": [\n {\n \"$skip\": 0\n },\n {\n \"$limit\": 5\n },\n {\n \"$project\": {\n \"title\": 1,\n \"genres\": 1,\n \"plot\": 1,\n \"year\": 1,\n \"_id\": 0,\n }\n }\n ],\n \"meta\": [\n {\n \"$replaceWith\": \"$$SEARCH_META\"\n },\n {\n \"$limit\": 1\n }\n ]\n }\n }\n]\n```\n\nThere are a few aspects to this generated aggregation pipeline worth explaining further:\n\n* The query (`q`) is translated into a [`text` operator over the specified `search` fields. Both of those parameters are required in this implementation.\n* `filter` parameters are translated into non-scoring `filter` clauses using the `equals` operator. The `equals` operator requires string fields to be indexed as a `token` type; this is why you see the `genres` and `cast` fields set up to be both `string` and `token` types. Those two fields can be searched full-text-wise (via the `text` or other string-type supporting operators) or used as exact match `equals` filters.\n* The count of matching documents is requested in $search, which is returned within the `$$SEARCH_META` aggregation variable. Since this metadata is not specific to a document, it needs special handling to be returned from the aggregation call to our search server. This is why the `$facet` stage is leveraged, so that this information is pulled into a `meta` section of our service\u2019s response.\n\nThe use of `$facet` is a bit of a tricky trick, which gives our aggregation pipeline response room for future expansion too.\n\n>`$facet` aggregation stage is confusingly named the same as the\n> Atlas Search `facet` collector. Search result facets give a group \n> label and count of that group within the matching search results. \n> For example, faceting on `genres` (which requires an index \n> configuration adjustment from the example here) would provide, in \n> addition to the documents matching the search criteria, a list of all \n>`genres` within those search results and the count of how many of \n> each. 
Adding the `facet` operator to this search service is on the \n> roadmap mentioned below.\n\n### $search in code\n\nGiven a query (`q`), a list of search fields (`search`), and filters (zero or more `filter` parameters), building the `$search` stage programmatically is straightforward using the Java driver\u2019s convenience methods:\n\n```\n // $search\n List searchPath = new ArrayList<>();\n for (String search_field : searchFields) {\n searchPath.add(SearchPath.fieldPath(search_field));\n }\n\n CompoundSearchOperator operator = SearchOperator.compound()\n .must(List.of(SearchOperator.text(searchPath, List.of(q))));\n if (filterOperators.size() > 0)\n operator = operator.filter(filterOperators);\n\n Bson searchStage = Aggregates.search(\n operator,\n SearchOptions.searchOptions()\n .option(\"scoreDetails\", debug)\n .index(indexName)\n .count(SearchCount.total())\n );\n```\n$search code\n\nWe\u2019ve added the `scoreDetails` feature of Atlas Search when `debug=true`, allowing us to introspect the gory Lucene scoring details only when desired; requesting score details is a slight performance hit and is generally only useful for troubleshooting.\n\n### Field projection\n\nThe last interesting bit of our service implementation entails field projection. Returning the `_id` field, or not, requires special handling. Our service code looks for the presence of `_id` in the `project` parameter and explicitly turns it off if not specified. We have also added a facility to include the document\u2019s computed relevancy score, if desired, by looking for a special `_score` pseudo-field specified in the `project` parameter. Programmatically building the projection stage looks like this: \n\n```\n List projectFields = new ArrayList<>();\n if (projectFieldsValue != null) {\n projectFields.addAll(List.of(projectFieldsValue.split(\",\")));\n }\n\n boolean include_id = false;\n if (projectFields.contains(\"_id\")) {\n include_id = true;\n projectFields.remove(\"_id\");\n }\n\n boolean includeScore = false;\n if (projectFields.contains(\"_score\")) {\n includeScore = true;\n projectFields.remove(\"_score\");\n }\n\n // ...\n\n // $project\n List projections = new ArrayList<>();\n if (projectFieldsValue != null) {\n // Don't add _id inclusion or exclusion if no `project` parameter specified\n projections.add(Projections.include(projectFields));\n if (include_id) {\n projections.add(Projections.include(\"_id\"));\n } else {\n projections.add(Projections.excludeId());\n }\n }\n if (debug) {\n projections.add(Projections.meta(\"_scoreDetails\", \"searchScoreDetails\"));\n }\n if (includeScore) {\n projections.add(Projections.metaSearchScore(\"_score\"));\n }\n```\n$project in code\n\n### Aggregating and responding\n\nPretty straightforward at the end of the parameter wrangling and stage building, we build the full pipeline, make our call to Atlas, build a JSON response, and return it to the calling client. The only unique thing here is adding the `.explain()` call when `debug=true` so that our client can see the full picture of what happened from the Atlas perspective:\n\n```\n AggregateIterable aggregationResults = collection.aggregate(List.of(\n searchStage,\n facetStage\n ));\n\n Document responseDoc = new Document();\n responseDoc.put(\"request\", new Document()\n .append(\"q\", q)\n .append(\"skip\", skip)\n .append(\"limit\", limit)\n .append(\"search\", searchFieldsValue)\n .append(\"project\", projectFieldsValue)\n .append(\"filter\", filters==null ? 
Collections.EMPTY_LIST : List.of(filters)));\n\n if (debug) {\n responseDoc.put(\"debug\", aggregationResults.explain().toBsonDocument());\n }\n\n // When using $facet stage, only one \"document\" is returned,\n // containing the keys specified above: \"docs\" and \"meta\"\n Document results = aggregationResults.first();\n if (results != null) {\n for (String s : results.keySet()) {\n responseDoc.put(s,results.get(s));\n }\n }\n\n response.setContentType(\"text/json\");\n PrintWriter writer = response.getWriter();\n writer.println(responseDoc.toJson());\n writer.close();\n\n logger.info(request.getServletPath() + \"?\" + request.getQueryString());\n```\nAggregate and return results code\n\n## Taking it to production\n\nThis is a standard Java servlet extension that is designed to run in Tomcat, Jetty, or other servlet API-compliant containers. The build runs Gretty, which smoothly allows a developer to either `jettyRun` or `tomcatRun` to start this example Java search service.\n\nIn order to build a distribution that can be deployed to a production environment, run:\n\n```\n./gradlew buildProduct\n```\n\n## Future roadmap\n\nOur search service, as is, is robust enough for basic search use cases, but there is room for improvement. Here are some ideas for the future evolution of the service:\n\n* Add negative filters. Currently, we support positive filters with the `filter=field:value` parameter. A negative filter could have a minus sign in front. For example, to exclude \u201cDrama\u201d movies, support for `filter=-genres:Drama` could be implemented.\n* Support highlighting, to return snippets of field values that match query terms.\n* Implement faceting.\n* And so on\u2026 see the issues list for additional ideas and to add your own.\n\nAnd with the service layer being a middle tier that can be independently deployed without necessarily having to make front-end or data-tier changes, some of these can be added without requiring changes in those layers.\n\n## Conclusion\n\nImplementing a middle-tier search service provides numerous benefits from security, to scalability, to being able to isolate changes and deployments independent of the presentation tier and other search clients. Additionally, a search service allows clients to easily leverage sophisticated search capabilities using standard HTTP and JSON techniques.\n\nFor the fundamentals of using Java with Atlas Search, check out Using Atlas Search from Java | MongoDB. 
As you begin leveraging Atlas Search, be sure to check out the Query Analytics feature to assist in improving your search results.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt749f7a8823712948/65ca9ba8bf8ac48b17c5b8e8/three-tier.png", "format": "md", "metadata": {"tags": ["Atlas", "Java"], "pageDescription": "In this article, we are going to detail an HTTP Java search service designed to be called from a presentation tier.", "contentType": "Article"}, "title": "How to Build a Search Service in Java", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/change-streams-in-java", "action": "created", "body": "# Using MongoDB Change Streams in Java\n\nMongoDB has come a long way from being a database engine developed at the internet company DoubleClick to now becoming this leading NoSQL data store that caters to huge clients from many domains.\n\nWith the growth of the database engine, MongoDB kept adding new features and improvements in its database product which makes it the go-to NoSQL database for new requirements and product developments.\n\nOne such feature added to the MongoDB tool kit is change streams, which was added with the MongoDB 3.6 release. Before version 3.6, keeping a tailable cursor open was used to perform similar functionality. Change streams are a feature that enables real-time streaming of data event changes on the database.\n\nThe event-driven streaming of data is a critical requirement in many use cases of product/feature developments implemented these days. Many applications developed today require that changes in data from one data source need to propagate to another source in real-time. They might also require the application to perform certain actions when a change happens in the data in the data source. Logging is one such use case where the application might need to collect, process, and transmit logs in real-time and thus would require a streaming tool or platform like change streams to implement it.\n\n## What are change streams in MongoDB?\n\nAs the word indicates, change streams are the MongoDB feature that captures \"change\" and \"streams\" it to the desired target data source.\n\nIt is an API that allows the user to subscribe their application to any change in collection, database, or even on the entire deployment. There is no middleware or data polling action to be initiated by the user to leverage this feature of event-driven, real-time data capture.\n\nMongoDB uses replication as the underlying technology for change streams by using the operation logs generated for the data replication between replica members.\n\nThe oplog is a special capped collection that records all operations that modify the data stored in the databases. The larger the oplog, the more operations can be recorded on it. Using the oplog for change stream guarantees that the change stream will be triggered in the same order as they were applied to the database.\n\nAs seen in the above flow, when there is a CRUD operation on the MongoDB database, the oplog captures it, and those oplog files are used by MongoDB to stream those changes into real-time applications/data receivers.\n\n## Kafka vs change streams\n\nIf we compare MongoDB and Kafka technologies, both would fall under completely separate buckets. MongoDB is classified as a NoSQL database, which can store JSON-like document structures. Kafka is an event streaming platform for real-time data feeds. 
It is primarily used as a publisher-subscriber model messaging service that provides a replicated message log system to stream data from one source to another.\n\nKafka helps to ingest huge data sets from desired data sources, filter/aggregate this data, and send it to the intended data source reliably and efficiently. Although MongoDB is a database system and its use case is miles apart from a messaging system like Kafka, the change streams feature does provide it with functionalities similar to those of Kafka.\n\nBasically, change streams act as a messaging service to stream real-time data of any collection from your MongoDB database. It helps you to aggregate/filter that data and store it back to your same MongoDB database data source. In short, if you have a narrow use case that does not require a generalized solution but is curtailed to your data source (MongoDB), then you could go ahead with change streams as your streaming solution. Still, if you want to involve different data sources outside of MongoDB and would like a generalized solution for messaging data sets, then Kafka would make more sense.\n\nBy using change streams, you do not need a separate license or server to host your messaging service. Unlike Kafka, you would get the best of both worlds, which is a great database and an efficient messaging system.\n\nMongoDB does provide Kafka connectors which could be used to read data in and out of Kafka topics in real-time, but if your use case is not big enough to invest in Kafka, change streams could be the perfect substitute for streaming your data.\n\nMoreover, the Kafka connectors use change streams under the hood, so you would have to build your Kafka setup by setting up connector services and start source and sink connectors for MongoDB. In the case of change streams, you would simply watch for changes in the collection you would want without any prerequisite setup.\n\n## How Change Streams works\n\nChange streams, once open for a collection, act as an event monitoring mechanism on your database/collection or, in some cases, documents within your database.\n\nThe core functionality lies in helping you \"watch\" for changes in an entity. The background work required for this mechanism of streaming changes is implemented by an already available functionality in MongoDB, which is the oplog.\n\nAlthough it comes with its overheads of blocking system resources, this event monitoring for your source collection has use cases in many business-critical scenarios, like capturing log inputs of application data or monitoring inventory changes for an e-commerce webshop, and so on. So, it's important to fit the change stream with the correct use case.\n\nAs the oplog is the driver of the entire change stream mechanism, a replicated environment of at least a single node is the first prerequisite to using change streams. You will also need the following:\n\n- Start change stream for the collection/database intended.\n\n- Have the necessary CPU resources for the cluster.\n\nInstead of setting up a self-hosted cluster for fulfilling the above checklist, there is always an option to use the cloud-based hosted solution, MongoDB Atlas. Using Atlas, you can get a ready-to-use setup with a few clicks. 
Since change streams are resource-intensive, the cost factor has to be kept in mind while firing an instance in Atlas for your data streaming.\n\n## Implementing change streams in your Java Spring application\n\nIn the current backend development world, streams are a burning topic as they help the developers to have a systematic pipeline in place to process the persisted data used in their application. The streaming of data helps to generate reports, have a notification mechanism for certain criteria, or, in some cases, alter some schema based on the events received through streams.\n\nHere, I will demonstrate how to implement a change stream for a Java Spring application.\n\nOnce the prerequisite to enable change streams is completed, the steps at the database level are almost done. You will now need to choose the collection on which you want to enable change streams.\n\nLet's consider that you have a Java Spring application for an e-commerce website, and you have a collection called `e_products`, which holds product information of the product being sold on the website.\n\nTo keep it simple, the fields of the collection can be:\n\n```json\n{\"_id\"\u00a0 , \"productName\", \"productDescription\" , \"price\" , \"colors\" , \"sizes\"}\n```\n\nNow, these fields are populated from your collection through your Java API to show the product information on your website when a product is searched for or clicked on.\n\nNow, say there exists another collection, `vendor_products`, which holds data from another source (e.g., another product vendor). In this case, it holds some of the products in your `e_products` but with more sizes and color options.\n\nYou want your application to be synced with the latest available size and color for each product. Change streams can help you do just that. They can be enabled on your `vendor_products` collection to watch for any new product inserted, and then for each of the insertion events, you could have some logic to add the colors/sizes to your `e_products` collection used by your application.\n\nYou could create a microservice application specifically for this use case. By using a dedicated microservice, you could allocate sufficient CPU/memory for the application to have a thread to watch on your `vendor_products` collection. 
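\n\nStripped of the Spring wiring shown next, the heart of such a watcher is a single `watch()` call on the collection. Here is a minimal, framework-free sketch using the plain MongoDB Java sync driver — the connection string and database name are placeholders, and the insert-only pipeline mirrors what the Spring configuration below sets up:\n\n```java\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.model.Aggregates;\nimport com.mongodb.client.model.Filters;\nimport org.bson.Document;\nimport org.bson.conversions.Bson;\n\nimport java.util.List;\n\npublic class VendorProductsWatcher {\n    public static void main(String[] args) {\n        // Placeholder URI; change streams require a replica set or an Atlas cluster.\n        try (MongoClient client = MongoClients.create(\"mongodb+srv://<user>:<password>@<cluster>/\")) {\n            MongoCollection<Document> vendorProducts = client\n                    .getDatabase(\"ecommerce\") // placeholder database name\n                    .getCollection(\"vendor_products\");\n\n            // Only react to newly inserted vendor products.\n            List<Bson> pipeline = List.of(Aggregates.match(Filters.eq(\"operationType\", \"insert\")));\n\n            // Blocks this thread and invokes the lambda for every matching change event.\n            vendorProducts.watch(pipeline).forEach(change ->\n                    System.out.println(\"New vendor product: \" + change.getDocumentKey()));\n        }\n    }\n}\n```\n\nThe Spring version that follows does the same thing, but resolves the collection through `mongoTemplate` and a POJO codec registry so the events deserialize into your model class.\n\n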
The configuration class in your Spring application would have the following code to start the watch:\n\n```java\n@Async\npublic void runChangeStreamConfig() throws InterruptedException {\n    CodecRegistry pojoCodecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(),\n            fromProviders(PojoCodecProvider.builder().automatic(true).build()));\n    MongoCollection<VendorProducts> vendorCollection = mongoTemplate.getDb().withCodecRegistry(pojoCodecRegistry).getCollection(\"vendor_products\", VendorProducts.class);\n    List<Bson> pipeline = singletonList(match(eq(\"operationType\", \"insert\")));\n    // Watch vendor_products for insert events and hand each new product's _id to the merge logic\n    vendorCollection.watch(pipeline).forEach(s ->\n            mergeFieldsVendorToProducts(s.getDocumentKey().get(\"_id\").asString().getValue())\n    );\n}\n```\n\nIn the above code snippet, you can see how the collection is selected to be watched and that the monitored operation type is \"insert.\" This will only check for new products added to this collection. If needed, we could also monitor \"update\" or \"delete\" operations.\n\nOnce this is in place, whenever a new product is added to `vendor_products`, this method would be invoked and the `_id` of that product would then be passed to the `mergeFieldsVendorToProducts()` method, where you can write your logic to merge the various properties from `vendor_products` into the `e_products` collection.\n\n```java\nforEach(s ->\n    {\n        // For each property to merge (e.g., colors or sizes), copy the vendor value onto the matching e_products document\n        Query query = new Query();\n        query.addCriteria(Criteria.where(\"_id\").is(s.get(\"_id\")));\n        Update update = new Update();\n        update.set(field, s.get(field));\n        mongoTemplate.updateFirst(query, update, EProducts.class);\n    })\n```\n\nThis is a small use case for change streams; there are many such examples where change streams can come in handy. It's about using this tool for the right use case.\n\n## Conclusion\n\nIn conclusion, change streams in MongoDB provide a powerful and flexible way to monitor changes to your database in real time. Whether you need to react to changes as they happen, synchronize data across multiple systems, or build custom event-driven workflows, change streams can help you achieve these goals with ease.\n\nBy leveraging the power of change streams, you can improve the responsiveness and efficiency of your applications, reduce the risk of data inconsistencies, and gain deeper insights into the behavior of your database.\n\nWhile there is a bit of a learning curve when working with change streams, MongoDB provides comprehensive documentation and a range of examples to help you get started. 
With a little practice, you can take advantage of the full potential of change streams and build more robust, scalable, and resilient applications.", "format": "md", "metadata": {"tags": ["Java", "MongoDB"], "pageDescription": "Change streams are an API that allows the user to subscribe to their application to any change in collection, database, or even on the entire deployment. There is no middleware or data polling action to be initiated by the user to leverage this event-driven, real-time data capture feature. Learn how to use change streams with Java in this article.\n", "contentType": "Article"}, "title": "Using MongoDB Change Streams in Java", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/developing-applications-mongodb-atlas-serverless-instances", "action": "created", "body": "# Developing Your Applications More Efficiently with MongoDB Atlas Serverless Instances\n\nIf you're a developer, worrying about your database is not necessarily something you want to do. You likely don't want to spend your time provisioning or sizing clusters as the demand of your application changes. You probably also don't want to worry about breaking the bank if you've scaled something incorrectly.\n\nWith MongoDB Atlas, you have a few deployment options to choose from when it comes to your database. While you could choose a pre-provisioned shared or dedicated cluster, you're still stuck having to size and estimate the database resources you will need and subsequently managing your cluster capacity to best fit demand. While a pre-provisioned cluster isn\u2019t necessarily a bad thing, it might not make sense if your development becomes idle or you\u2019re expecting frequent periods of growth or decline. Instead, you can opt for a serverless instance to help remove the capacity management burden and free up time to dedicate to writing code. Serverless instances provide an on-demand database endpoint for your application that will automatically scale up and down to zero with application demand and only charge you based on your usage.\u00a0\n\nIn this short and sweet tutorial, we'll see how easy it is to get started with a MongoDB Atlas serverless instance and how to begin to develop an application against it.\n## Deploy a MongoDB Atlas serverless instance\nWe're going to start by deploying a new MongoDB Atlas serverless instance. There are numerous ways to accomplish deploying MongoDB, but for this example, we'll stick to the web dashboard and some point and click.\n\nFrom the MongoDB Atlas dashboard, click the \"Create\" button.\n\nChoose \"Serverless\" as well as a cloud vendor where this instance should live.\n\nIf possible, choose a cloud vendor that matches where your application will live. This will give you the best possible latency between your database and your application.\n\nOnce you choose to click the \"Create Instance\" button, your instance is ready to go!\n\nYou're not in the clear yet though. You won't be able to use your Atlas serverless instance outside of the web dashboard until you create some database access and network access rules.\n\nWe'll start with a new database user.\n\nChoose the type of authentication that makes the most sense for you. 
To keep things simple for this tutorial, I recommend choosing the \"Password\" option.\n\nWhile you could use a \"Built-in Role\" when it comes to user privileges, your best bet for any application is to define \"Specific Privileges\" depending on what the user should be allowed to do. For this project, we'll be using an \"example\" database and a \"people\" collection, so it makes sense to give only that database and collection readWrite access.\n\nUse your best judgment when creating users and defining access.\n\nWith a user created, we can move onto the network access side of things. The final step before we can start developing against our database.\n\nIn the \"Network Access\" tab, add the IP addresses that should be allowed access. If you're developing and testing locally like I am, just add your local IP address. Just remember to add the IP range for your servers or cloud vendor when the time comes. You can also take advantage of private networking if needed.\n\nWith the database and network access out of the way, let's grab the URI string that we'll be using in the next step of the tutorial.\n\nFrom the Database tab, click the \"Connect\" button for your serverless instance.\n\nChoose the programming language you wish to use and make note of the URI.\n\nNeed more help getting started with serverless instances? Check out this video that can walk you through it.\n\n## Interacting with an Atlas serverless instance using a popular programming technology\n\nAt this point, you should have an Atlas serverless instance deployed. We're going to take a moment to connect to it from application code and do some interactions, such as basic CRUD.\n\nFor this particular example, we'll use JavaScript with the MongoDB Node.js driver, but the same rules and concepts apply, minus the language differences for the programming language that you wish to use.\n\nOn your local computer, create a project directory and navigate into it with your command line. You'll want to execute the following commands once it becomes your working directory:\n\n```bash\nnpm init -y\nnpm install mongodb\ntouch main.js\n```\n\nWith the above commands, we've initialized a Node.js project, installed the MongoDB Node.js driver, and created a **main.js** file to contain our code.\n\nOpen the **main.js** file and add the following JavaScript code:\n\n```javascript\nconst { MongoClient } = require(\"mongodb\");\n\nconst mongoClient = new MongoClient(\"MONGODB_URI_HERE\");\n\n(async () => {\n try {\n await mongoClient.connect();\n const database = mongoClient.db(\"example\");\n const collection = database.collection(\"people\");\n const inserted = await collection.insertOne({\n \"firstname\": \"Nic\",\n \"lastname\": \"Raboy\",\n \"location\": \"California, USA\"\n });\n const found = await collection.find({ \"lastname\": \"Raboy\" }).toArray();\n console.log(found);\n const deleted = await collection.deleteMany({ \"lastname\": \"Raboy\" });\n } catch (error) {\n console.error(error);\n } finally {\n mongoClient.close();\n }\n})();\n```\n\nSo, what's happening in the above code?\n\nFirst, we define our client with the URI string for our serverless instance. This is the same string that you took note of earlier in the tutorial and it should contain a username and password.\n\nWith the client, we can establish a connection and get a reference to a database and collection that we want to use. 
The database and collection does not need to exist prior to running your application.\n\nNext, we are doing three different operations with the MongoDB Query API. First, we are inserting a new document into our collection. After the insert is complete, assuming our try/catch block didn't find an error, we find all documents where the lastname matches. For this example, there should only ever be one document, but you never know what your code looks like. If a document was found, it will be printed to the console. Finally, we are deleting any document where the lastname matches.\n\nBy the end of this, no documents should exist in your collection, assuming you are following along with my example. However, a document did (at some point in time) exist in your collection \u2014 we just deleted it.\n\nAlright, so we have a basic example of how to build an application around an on-demand database, but it didn\u2019t really highlight the benefit of why you\u2019d want to. So, what can we do about that?\n\n## Pushing an Atlas serverless instance with a plausible application scenario\n\nWe know that pre-provisioned and serverless clusters work well and from a development perspective, you\u2019re going to end up with the same results using the same code.\n\nLet\u2019s come up with a scenario where a serverless instance in Atlas might lower your development costs and reduce the scaling burden to match demand. Let\u2019s say that you have an online store, but not just any kind of online store. This online store sees mild traffic most of the time and a 1000% spike in traffic every Friday between the hours of 9AM and 12PM because of a lightning type deal that you run.\n\nWe\u2019ll leave mild traffic up to your imagination, but a 1000% bump is nothing small and would likely require some kind of scaling intervention every Friday on a pre-provisioned cluster. 
That, or you\u2019d need to pay for a larger sized database.\n\nLet\u2019s visualize this example with the following Node.js code:\n\n```\nconst { MongoClient } = require(\"mongodb\");\nconst Express = require(\"express\");\nconst BodyParser = require(\"body-parser\");\n\nconst app = Express();\n\napp.use(BodyParser.json());\n\nconst mongoClient = new MongoClient(\"MONGODB_URI_HERE\");\nvar database, purchasesCollection, dealsCollection;\n\napp.get(\"/deal\", async (request, response) => {\n try {\n const deal = await dealsCollection.findOne({ \"date\": \"2022-10-07\" });\n response.send(deal || {});\n } catch (error) {\n response.status(500).send({ \"message\": error.message });\n }\n});\n\napp.post(\"/purchase\", async (request, response) => {\n try {\n if(!request.body) {\n throw { \"message\": \"The request body is missing!\" };\n }\n const receipt = await purchasesCollection.insertOne(\n { \n \"sku\": (request.body.sku || \"000000\"),\n \"product_name\": (request.body.product_name || \"Pokemon Scarlet\"),\n \"price\": (request.body.price || 59.99),\n \"customer_name\": (request.body.customer_name || \"Nic Raboy\"),\n \"purchase_date\": \"2022-10-07\"\n }\n );\n response.send(receipt || {});\n } catch (error) {\n response.status(500).send({ \"message\": error.message });\n }\n});\n\napp.listen(3000, async () => {\n try {\n await mongoClient.connect();\n database = mongoClient.db(\"example\");\n dealsCollection = database.collection(\"deals\");\n purchasesCollection = database.collection(\"receipts\");\n console.log(\"SERVING AT :3000...\");\n } catch (error) {\n console.error(error);\n }\n});\n```\n\nIn the above example, we have an Express Framework-powered web application with two endpoint functions. We have an endpoint for getting the deal and we have an endpoint for creating a purchase. The rest can be left up to your imagination.\n\nTo load test this application with bursts and simulate the potential value of a serverless instance, we can use a tool like Apache JMeter.\n\nWith JMeter, you can define the number of threads and iterations it uses when making HTTP requests.\n\nRemember, we\u2019re simulating a burst in this example. If you do decide to play around with JMeter and you go excessive on the burst, you could end up with an interesting bill. If you\u2019re interested to know how serverless is billed, check out the pricing page in the documentation.\n\nInside your JMeter Thread Group, you\u2019ll want to define what is happening for each thread or iteration. In this case, we\u2019re doing an HTTP request to our Node.js API.\n\nSince the API expects JSON, we can define the header information for the request.\n\nOnce you have the thread information, the HTTP request information, and the header information, you can run JMeter and you\u2019ll end up with a lot of activity against not only your web application, but also your database.\n\nAgain, a lot of this example has to be left to your imagination because to see the scaling benefits of a serverless instance, you\u2019re going to need a lot of burst traffic that isn\u2019t easily simulated during development. However, it should leave you with some ideas.\n\n## Conclusion\n\nYou just saw how quick it is to develop on MongoDB Atlas without having to burden yourself with sizing your own cluster. With a MongoDB Atlas serverless instance, your database will scale to meet the demand of your application and you'll be billed for that demand. This will protect you from paying for improperly sized clusters that are running non-stop. 
It will also save you the time you would have spent making size related adjustments to your cluster.\n\nThe code in this example works regardless if you are using an Atlas serverless instance or a pre-provisioned shared or dedicated cluster. \n\nGot a question regarding this example, or want to see more? Check out the MongoDB Community Forums to see what's happening.", "format": "md", "metadata": {"tags": ["Atlas", "Serverless"], "pageDescription": "In this short and sweet tutorial, we'll see how easy it is to get started with a MongoDB Atlas serverless instance and how to begin to develop an application against it.", "contentType": "Tutorial"}, "title": "Developing Your Applications More Efficiently with MongoDB Atlas Serverless Instances", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-app-services-aws-bedrock-rag", "action": "created", "body": "# MongoDB Atlas Vector Search and AWS Bedrock modules RAG tutorial\n\nWelcome to our in-depth tutorial on MongoDB Atlas Vector Search and AWS Bedrock modules, tailored for creating a versatile database assistant for product catalogs. This tutorial will guide you through building an application that simplifies product searches using diverse inputs such as individual products, lists, images, and even recipes. Imagine finding all the necessary ingredients for a recipe with just a simple search. Whether you're a developer or a product manager, this guide will equip you with the skills to create a powerful tool for navigating complex product databases.\nSome examples of what this application can do:\n\n### Single product search:\n\n**Search query**: `\"Organic Almonds\"` \n\n**Result**: displays the top-rated or most popular organic almond product in the catalog\n\n### List-based search:\n**Search query**: `\"Rice\", \"Black Beans\", \"Avocado\"]`\n\n**Result**: shows a list of products including rice, black beans, and avocados, along with their different brands and prices\n\n### Image-based search:\n**Search query**: [image of a whole wheat bread loaf]\n\n**Result**: identifies and shows the top-picked brand of whole wheat bread available in the catalog\n\n### Recipe-based search:\n**Search query**: `\"Chocolate Chip Cookie Recipe\"`\n\n**Result**: lists all ingredients needed for the recipe, like flour, chocolate chips, sugar, butter, etc., and suggests relevant products\n\n![Demo Application Search Functionality\n\nLet\u2019s start!\n\n## High-level architecture\n\n1\\. Frontend VUE js application implementing a chat application\n\n2\\. Trigger:\n\n - A trigger watching for inserted \u201cproduct\u201d documents and using a function logic sets vector embeddings on the product \u201ctitle,\u201d \u201cimg,\u201d or both.\n\n 3\\. App services to facilitate a backend hosting the endpoints to interact with the database and AI models\n - **getSearch** \u2014 the main search engine point that receives a search string or base64 image and outputs a summarized document\n - **getChats** \u2014 an endpoint to retrieve user chats array\n - **saveChats** \u2014 an endpoint to save the chats array\n\n 4\\. 
MongoDB Atlas database with a vector search index to retrieve relevant documents for RAG\nenter image description here\n\n## Deploy a free cluster\n\nBefore moving forward, ensure the following prerequisites are met:\n\n- Database cluster setup on MongoDB Atlas\n- Obtained the URI to your cluster\n\nFor assistance with database cluster setup and obtaining the URI, refer to our guide for setting up a MongoDB cluster, and our guide to get your connection string.\n\nPreferably the database location will be in the same AWS region as the Bedrock enabled modules.\n\nMongoDB Atlas has a rich set of application services that allow a developer to host an entire application logic (authentication, permissions, functions, triggers, etc.) with a generous free tier.\nWe will leverage this ability to streamline development and data integration in minutes of work.\n\n## Setup app services\n\n1\\. Start by navigating to the App Services tab.\n\n2\\. You\u2019ll be prompted to select a starter template. Let\u2019s go with the **Build your own App** option that\u2019s already selected. Click the **Next** button.\n\n3\\. Next, you need to configure your application.\n\n- Data Source: Since we have created a single cluster, Atlas already linked it to our application.\n- (Optional) Application Name: Let\u2019s give our application a meaningful name, such as bedrockDemo. (This option might be chosen for you automatically as \"Application-0\" for the first application.)\n- (Optional) App Deployment Model: Change the deployment to Single Region and select the region closest to your physical location.\n\n4\\. Click the **Create App Service** button to create your first App Services application!\n\n5\\. Once the application is created, we need to verify data sources are linked to our cluster. Visit the **Linked Data Sources** tab:\nOur Atlas cluster with a linked name of `mongodb-atlas`\n\n## Setup secrets and trigger\n\nWe will use the app services to create a Value and a Secret for AWS access and secret keys to access our Bedrock modules.\n\nNavigate to the **Values** tab and click **Create New Value** by following this configuration:\n| **Value Type** | **Name** | **Value** |\n| --- | --- | --- |\n| Secret | AWS_ACCESS_KEY| ``|\n| Secret | AWS_SECRET_KEY | ``|\n| Value | AWS_ACCESS_KEY| Link to SECRET: AWS_ACCESS_KEY|\n| Value | AWS_SECRET_KEY | Link to SECRET: AWS_SECRET_KEY|\n\nBy the end of this process you should have:\n\nOnce done, press **Review Draft & Deploy** and then **Deploy**.\n\n### Add aws sdk dependency\nThe AWS SDK Bedrock client is the easiest and most convenient way to interact with AWS bedrock models.\n\n 1\\. In your app services application, navigate to the **Functions** tab and click the **Dependencies** tab.\n\n 2\\. Click **Add Dependency** and add the following dependency:\n```\n@aws-sdk/client-bedrock-runtime\n```\n 3\\. Click **Add** and wait for it to be successfully added.\n\n 4\\. 
Once done, press **Review Draft & Deploy** and then **Deploy**.\n\n### Create a trigger\nNavigate to **Triggers** tab and create a new trigger:\n\n**Trigger Code**\n\nChoose **Function type** and in the dropdown, click **New Function.** Add a name like setEmbeddings under **Function Name**.\n\nCopy and paste the following code.\n```javascript\n// Header: MongoDB Atlas Function to Process Document Changes\n// Inputs: MongoDB changeEvent object\n// Outputs: Updates the MongoDB document with processing status and AWS model response\n\nexports = async function(changeEvent) {\n // Connect to MongoDB service\n var serviceName = \"mongodb-atlas\";\n var dbName = changeEvent.ns.db;\n var collName = changeEvent.ns.coll;\n\n try {\n var collection = context.services.get(serviceName).db(dbName).collection(collName);\n\n // Set document status to 'pending'\n await collection.updateOne({'_id' : changeEvent.fullDocument._id}, {$set : {processing : 'pending'}});\n\n // AWS SDK setup for invoking models\n const { BedrockRuntimeClient, InvokeModelCommand } = require(\"@aws-sdk/client-bedrock-runtime\");\n const client = new BedrockRuntimeClient({\n region: 'us-east-1',\n credentials: {\n accessKeyId: context.values.get('AWS_ACCESS_KEY'),\n secretAccessKey: context.values.get('AWS_SECRET_KEY')\n },\n model: \"amazon.titan-embed-text-v1\",\n });\n\n // Prepare embedding input from the change event\n let embedInput = {}\n if (changeEvent.fullDocument.title) {\n embedInput'inputText'] = changeEvent.fullDocument.title\n }\n if (changeEvent.fullDocument.imgUrl) {\n const imageResponse = await context.http.get({ url: changeEvent.fullDocument.imgUrl });\n const imageBase64 = imageResponse.body.toBase64();\n embedInput['inputImage'] = imageBase64\n }\n\n // AWS SDK call to process the embedding\n const input = {\n \"modelId\": \"amazon.titan-embed-image-v1\",\n \"contentType\": \"application/json\",\n \"accept\": \"*/*\",\n \"body\": JSON.stringify(embedInput)\n };\n\n console.log(`before model invoke ${JSON.stringify(input)}`);\n const command = new InvokeModelCommand(input);\n const response = await client.send(command);\n \n // Parse and update the document with the response\n const doc = JSON.parse(Buffer.from(response.body));\n doc.processing = 'completed';\n await collection.updateOne({'_id' : changeEvent.fullDocument._id}, {$set : doc});\n\n } catch(err) {\n // Handle any errors in the process\n console.error(err)\n }\n};\n```\nClick **Save** and **Review Draft & Deploy**.\n\nNow, we need to set the function setEmbeddings as a SYSTEM function. Click on the Functions tab and then click on the **setEmbeddings** function, **Settings** tab. Change the Authentication to **System** and click **Save**.\n\n![System setting on a function\n\nA trigger running successfully will produce a collection in our Atlas cluster. You can navigate to **Data Services > Database**. Click the **Browse Collections** button on the cluster view. The database name is Bedrock and the collection is `products`.\n\n> Please note that the trigger run will only happen when we insert data into the `bedrock.products` collection and might take a while the first time. Therefore, you can watch the Logs section on the App Services side.\n\n## Create an Atlas Vector Search index\n\nLet\u2019s move back to the **Data Services** and **Database** tabs.\n\n**Atlas search index creation **\n1. First, navigate to your cluster\u2019s \"Atlas Search\" section and press the Create Index button.\n\n1. Click **Create Search Index**.\n2. 
Choose the Atlas Vector Search index and click **Next**.\n3. Select the \"bedrock\" database and \"products\" collection.\n4. Paste the following index definition:\n```json\n{\n  \"fields\": [\n    {\n      \"type\": \"vector\",\n      \"path\": \"embedding\",\n      \"numDimensions\": 1024,\n      \"similarity\": \"dotProduct\"\n    }\n  ]\n}\n```\n1. Click **Create** and wait for the index to be created.\n2. The index is going to go through a build phase and will appear \"Active\" eventually.\nNow, you are ready to write $vectorSearch aggregation queries against the new index.\n\nThe HTTP endpoint getSearch implemented in Chapter 3 already includes a search query.\n\n```javascript\nconst items = await collection.aggregate([\n {\n \"$vectorSearch\": {\n \"queryVector\": doc.embedding,\n \"index\": \"vector_index\",\n \"path\": \"embedding\",\n \"numCandidates\": 15,\n \"limit\": 1\n }\n },\n {\"$project\": {\"embedding\": 0}}\n ]).toArray();\n```\nWith this code, we perform a vector search using whatever is placed in the `doc.embedding` variable against the \"embedding\" field. We consider 15 candidates and limit the result set to the single best match.\n\n## Set up the backend logic\n\nOur main functionality will rely on a user HTTP endpoint which will orchestrate the logic of the catalog search. The input from the user will be turned into a multimodal embedding via AWS Titan and will be passed to Atlas Vector Search to find the relevant document. The document will be returned to the user along with a prompt that will engineer a response from a Cohere LLM.\n\n> Cohere LLM `cohere.command-light-text-v14` is part of the AWS Bedrock base model suite.\n\n### Create application search HTTPS endpoint\n1. On the App Services application, navigate to the **HTTPS Endpoints** section.\n2. Create a new POST endpoint by clicking **Add An Endpoint** with a path of **/getSearch**.\n3. Important! Toggle the **Response With Result** to On.\n4. The logic of this endpoint will get a \"term\" from the query string and search for that term. If no term is provided, it will return the first 15 results.\n\n![getSearch endpoint\n5. 
Add under Function and New Function (name: getProducts) the following function logic:\n\n```javascript\n// Function Name : getProducts\n\nexports = async function({ body }, response) {\n\n // Import required SDKs and initialize AWS BedrockRuntimeClient\n const { BedrockRuntimeClient, InvokeModelCommand } = require(\"@aws-sdk/client-bedrock-runtime\");\n const client = new BedrockRuntimeClient({\n region: 'us-east-1',\n credentials: {\n accessKeyId: context.values.get('AWS_ACCESS_KEY'),\n secretAccessKey: context.values.get('AWS_SECRET_KEY')\n }\n });\n\n // MongoDB and AWS SDK service setup\n const serviceName = \"mongodb-atlas\";\n const dbName = \"bedrock\";\n const collName = \"products\";\n const collection = context.services.get(serviceName).db(dbName).collection(collName);\n\n // Function to run AWS model command\n async function runModel(command, body) {\n command.body = JSON.stringify(body);\n console.log(`before running ${command.modelId} and prompt ${body.prompt}`)\n const listCmd = new InvokeModelCommand(command);\n console.log(`after running ${command.modelId} and prompt ${body.prompt}`)\n const listResponse = await client.send(listCmd);\n console.log('model body ret', JSON.stringify(JSON.parse(Buffer.from(listResponse.body))))\n console.log('before return from runModel')\n return JSON.parse(Buffer.from(listResponse.body));\n }\n\n // Function to generate list query for text input\nfunction generateListQuery(text) {\n const listDescPrompt = `Please build a json only output start with: {productList : {\"product\" : \"\" , \"quantity\" : }]} stop output after json fully generated.\n The list for ${text}. Complete {productList : `;\n return {\n \"prompt\": listDescPrompt,\n \"temperature\": 0\n };\n}\n\n// Function to process list items\nasync function processListItems(productList, embedCmd) {\n let retDocuments = [];\n for (const product of productList) {\n console.log('product', JSON.stringify(product))\n const embedBody = { 'inputText': product.product };\n const resEmbedding = await runModel(embedCmd, embedBody);\n const items = await collection.aggregate([\n vectorSearchQuery(resEmbedding.embedding), {\"$project\" : {\"embedding\" : 0}}\n ]).toArray();\n retDocuments.push(items[0]);\n }\n return retDocuments;\n}\n\n// Function to process a single item\nasync function processSingleItem(doc) {\n const items = await collection.aggregate([\n vectorSearchQuery(doc.embedding), {\"$project\" : {\"embedding\" : 0}}]).toArray();\n return items;\n}\n\n// Function to create vector search query\nfunction vectorSearchQuery(embedding) {\n return {\n \"$vectorSearch\": {\n \"queryVector\": embedding,\n \"index\": \"vector_index\",\n \"path\": \"embedding\",\n \"numCandidates\": 15,\n \"limit\": 1\n }\n };\n}\n\n // Parsing input data\n const { image, text } = JSON.parse(body.text());\n\n try {\n let embedCmd = {\n \"modelId\": \"amazon.titan-embed-image-v1\",\n \"contentType\": \"application/json\",\n \"accept\": \"*/*\"\n };\n \n // Process text input\n if (text) {\n const genList = generateListQuery(text);\n const listResult = await runModel({ \"modelId\": \"cohere.command-light-text-v14\", \"contentType\": \"application/json\",\n \"accept\": \"*/*\" }, genList);\n const list = JSON.parse(listResult.generations[0].text);\n console.log('list', JSON.stringify(list));\n\n let retDocuments = await processListItems(list.productList, embedCmd);\n console.log('retDocuments', JSON.stringify(retDocuments));\n let prompt, success = true;\n prompt = `In one simple sentence explain how the retrieved 
docs: ${JSON.stringify(retDocuments)}\n and mention the searched ingridiants from list: ${JSON.stringify(list.productList)} `;\n\n // Generate text based on the prompt\n genQuery = {\n \"prompt\": prompt,\n \"temperature\": 0\n };\n \n textGenInput = {\n \"modelId\": \"cohere.command-light-text-v14\",\n \"contentType\": \"application/json\",\n \"accept\": \"*/*\"\n };\n \n const assistantResponse = await runModel(textGenInput, genQuery);\n console.log('assistant', JSON.stringify(assistantResponse));\n retDocuments[0].assistant = assistantResponse.generations[0].text;\n \n return retDocuments;\n }\n\n // Process image or other inputs\n if (image) {\n const doc = await runModel(embedCmd, { inputImage: image });\n return await processSingleItem(doc);\n }\n\n } catch (err) {\n console.error(\"Error: \", err);\n throw err;\n }\n};\n\n```\nClick **Save Draft** and follow the **Review Draft & Deploy** process.\nMake sure to keep the http callback URL as we will use it in our final chapter when consuming the data from the frontend application.\n> TIP:\n>\n> The URL will usually look something like: `https://us-east-1.aws.data.mongodb-api.com/app//endpoint/getSearch`\n\nMake sure that the function created (e.g., getProducts) is on \"SYSTEM\" privilege for this demo.\n\nThis page can be accessed by going to the **Functions** tab and looking at the **Settings** tab of the relevant function.\n\n### Import data into Atlas\nNow, we will import the data into Atlas from our [github repo.\n1. On the **Data Services** main tab, click your cluster name.\nClick the **Collections** tab.\n2. We will start by going into the \"bedrock\" database and importing the \"products\" collection.\n3. Click **Insert Document** or **Add My Own Data** (if present) and switch to the document view. Paste the content of the \"products.json\" file from the \"data\" folder in the repository.\n4. Click Insert and wait for the data to be imported. \n\n### Create an endpoint to save and retrieve chats\n1\\. `/getChats` - will save a chat to the database\nEndpoint\n- Name: getChats\n- Path: /getChats\n- Method: GET\n- Response with Result: Yes\n``` javascript\n// This function is the endpoint's request handler.\nexports = async function({ query, headers, body}, response) {\n // Data can be extracted from the request as follows:\n\n const {player } = query;\n\n // Querying a mongodb service:\n const doc = await context.services.get(\"mongodb-atlas\").db(\"bedrock\").collection(\"players\").findOne({\"player\" : player}, {messages : 1})\n\n return doc;\n\n};\n```\n2\\. `/saveChats` \u2014 will save a chat to the database\nEndpoint\n\n- Name: saveChats\n- Path: /saveChats\n- Method: POST\n- Response with Result: Yes\n```javascript\n// This function is the endpoint's request handler.\nexports = async function({ query, headers, body}, response) {\n // Data can be extracted from the request as follows:\n\n // Headers, e.g. 
{\"Content-Type\": \"application/json\"]}\n const contentTypes = headers[\"Content-Type\"];\n\n const {player , messages } = JSON.parse(body.text());\n\n // Querying a mongodb service:\n const doc = await context.services.get(\"mongodb-atlas\").db(\"bedrock\").collection(\"players\").findOneAndUpdate({player : player}, {$set : {messages : messages}}, {returnNewDocument : true});\n \n\n return doc;\n};\n```\nMake sure that all the functions created (e.g., registerUser) are on \"SYSTEM\" privilege for this demo.\n\n![System setting on a function\nThis page can be accessed by going to the Functions tab and looking at the Settings tab of the relevant function.\nFinally, click Save **Draft** and follow the **Review Draft & Deploy** process.\n\n## GitHub Codespaces frontend setup\n\nIt\u2019s time to test our back end and data services. We will use the created search HTTPS endpoint to show a simple search page on our data.\n\nYou will need to get the HTTPS Endpoint URL we created as part of the App Services setup.\n\n### Play with the front end\n\nWe will use the github repo to launch codespaces from: \n1. Open the repo in GitHub.\n2. Click the green **Code** button.\n3. Click the **Codespaces** tab and **+** to create a new codespace.\n\n### Configure the front end\n\n1. Create a file called .env in the root of the project.\n2. Add the following to the file:\n```\nVUE_APP_BASE_APP_SERVICE_URL=''\nVUE_APP_SEARCH_ENDPOINT='getSearch'\nVUE_APP_SAVE_CHATS_ENDPOINT='saveChats'\nVUE_APP_GET_CHATS_ENDPOINT='getChats'\n## Small chart to present possible products\nVUE_APP_SIDE_IFRAME='https://charts.mongodb.com/charts-fsidemo-ubsdv/embed/charts?id=65a67383-010f-4c3d-81b7-7cf19ca7000b&maxDataAge=3600&theme=light&autoRefresh=true'\n```\n### Install the front end\n```\nnpm install\n```\nInstall serve.\n```\nnpm install -g serve\n```\n### Build the front end\n```\nnpm run build\n```\n### Run the front end\n```\nserve -s dist/\n```\n### Test the front end\nOpen the browser to the URL provided by serve in a popup.\n\n## Summary \n\nIn summary, this tutorial has equipped you with the technical know-how to leverage MongoDB Atlas Vector Search and AWS Bedrock for building a cutting-edge database assistant for product catalogs. We've delved deep into creating a robust application capable of handling a variety of search inputs, from simple text queries to more complex image and recipe-based searches. As developers and product managers, the skills and techniques explored here are crucial for innovating and improving database search functionalities. \n\nThe combination of MongoDB Atlas and AWS Bedrock offers a powerful toolkit for efficiently navigating and managing complex product data. By integrating these technologies into your projects, you\u2019re set to significantly enhance the user experience and streamline the data retrieval process, making every search query more intelligent and results more relevant. Embrace this technology fusion to push the boundaries of what\u2019s possible in database search and management.\n\nIf you want to explore more about MongoDB and AI please refer to our main landing page. \n\nAdditionally, if you wish to communicate with our community please visit https://community.mongodb.com .\n\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "AI", "AWS", "Serverless"], "pageDescription": "Explore our comprehensive tutorial on MongoDB Atlas Vector Search and AWS Bedrock modules for creating a dynamic database assistant for product catalogs. 
This guide covers building an application for seamless product searching using various inputs such as single products, lists, images, and recipes. Learn to easily find ingredients for a recipe or the best organic almonds with a single search. Ideal for developers and product managers, this tutorial provides practical skills for navigating complex product databases with ease.", "contentType": "Tutorial"}, "title": "MongoDB Atlas Vector Search and AWS Bedrock modules RAG tutorial", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-dotnet-for-xamarin-best-practices-meetup", "action": "created", "body": "# Realm .NET for Xamarin (Best Practices and Roadmap) Meetup\n\nDidn't get a chance to attend the Realm .NET for Xamarin (best practices and roadmap) Meetup? Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up.\n\n>Realm .NET for Xamarin (best practices and roadmap)\n\n>:youtube]{vid=llW7MWlrZUA}\n\nIn this meet-up, Nikola Irinchev, the engineering lead for Realm's .NET team, and Ferdinando Papale, .NET engineer on the Realm team, will walk us through the .NET ecosystem as it relates to mobile with the Xamarin framework. We will discuss things to consider when using Xamarin, best practices to implement and gotcha's to avoid, and what's next for the .NET team at Realm.\n\nIn this meetup, Nikola & Ferdinando spend about 45 minutes on \n- Xamarin Overview & Benefits\n- Xamarin Key Concepts and Architecture\n- Realm Integration with Xamarin\n- Realm Best Practices / Tips&Tricks with Xamarin\n\nAnd then we have about 20 minutes of live Q&A with our Community. For those of you who prefer to read, below we have a full transcript of the meetup too. As this is verbatim, please excuse any typos or punctuation errors!\n\nThroughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our [Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.\n\nTo learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.\n\n## Transcript\n\n**Shane McAllister**: Welcome. It's good to have you all here. Sorry for the couple of minutes wait, we could see people entering. We just wanted to make sure everybody had enough time to get on board. So very welcome to what is our meetup today. We are looking forward to a great session and we're really delighted that you could join us. This is a new initiative that we have in MongoDB and Realm, and so far is we're trying to cater for all of the interested people who want to learn more about what we're building and how we're going about this.\n\n**Shane McAllister**: Essentially, I think this is our third this year and we have another three scheduled as well too, you'll see those at the end of the presentation. And really it's all about bringing together Realm developers and builders and trying to have an avenue whereby you're going to get an opportunity, as you'll see in a moment when I do the introductions, to talk to the people who built the SDKs that you're using. 
So we very much look forward to that.\n\n**Shane McAllister**: A couple of housekeeping things before I do the introductions. It is being recorded, we hope everybody here is happy with that. It's being recorded for those that can't attend, timezone might be work for them. And we will be putting it up. You will get a link to the recording probably within a day or two of the meetup finishing. It will go up on YouTube and we'll also share it in our developer hub.\n\n**Shane McAllister**: We will have an opportunity for Q&A at the end of the presentation as well too. But for those of you not familiar with this platform that we're on at the moment, it's very straightforward like any other video platform you might be on. We have the ability to chat up there. Everybody's been put in there, where they're from, and it's great to see so many people from around the world. I myself am in Limerick, in the west coast of Ireland. And Ferdinando and Nikola, who are presenting shortly, are in Copenhagen. So we'll go through that as well too.\n\n**Shane McAllister**: But as I said, we'll be doing Q&A, but if you want to, during the presentation, to put any questions into the chat, by all means, feel free to do so. I'll be manning that chat, I'll be looking through that. If I can answer you there and then I will. But what we've done in other meetups is we've opened out the mic and the cameras to all of our attendees at the end, for those that have asked questions in the chat. So we give you an opportunity to ask your own questions. There is no problem whatsoever if you're too shy to come on with an open mic and an open camera, I'll quite happily ask the question for you to both Ferdinando and Nikola.\n\n**Shane McAllister**: This is a meetup. Albeit that we're all stuck on the screen, we want to try and recreate a meetup. So I'm quite happy to open out your cameras and microphones for the questions at the end. The house rules are, would be, just be respectful of other people's time, and if you can get your question asked, then you can either turn off your camera or turn off your mic and you'll leave the platform, again, but still be part of the chat.\n\n**Shane McAllister**: So it's a kind of an interactive session towards the end. The presentation will be, hopefully, Nikola and Ferdinando, fingers crossed, 40 to 45 minutes or so, and then some Q&A. And what I'll be doing in the chat as well too, is I'll put a link during the presentation to a Google form for some Swag. We really do appreciate you attending and listening and we want you to share your thoughts with us on Realm and what you think, and in appreciation of your time we have some Swag goodies to share with you. The only thing that I would say with regard to that is that given COVID and postal and all of that, it's not going to be there very quick, you need to be a bit patient. A couple of weeks, maybe more, depending on where in the world that you are.\n\n**Shane McAllister**: So look, really delighted with what we have scheduled here for you shortly here now. So joining me today, I'm only the host, but the guys with the real brains behind this are Nikola and Ferdinando from the .NET team in Realm. And I really hope that you enjoy what we're going through today. I'll let both of you do your own introductions. Where you are, your background, how long you've been with Realm, et cetera. So Nikola, why don't we start with yourself?\n\n**Nikola Irinchev**: Sure. I'm Nikola. I'm hailing from sunny Denmark today, and usually for this time of the year. 
I've been with Realm for almost five years now, ever since before the MongoDB acquisition. Start a bit the dominant theme move to various different projects and I'm back to my favorite thing, which is the .NET one. I'm super excited to have all of you here today, and now I'm looking forward to the questions you ask us.\n\n**Ferdinando Papale**: Hello. Do you hear me?\n\n**Shane McAllister**: Yes.\n\n**Ferdinando Papale**: Okay. And I'm Ferdinando. And I've joined Realm only in October so I'm pretty new. I'm in the same team as Nikola. And before working at Realm I was a Xamarin developer. Yes. Shane, you're muted.\n\n**Shane McAllister**: Apologies. I'm talking head. Anyway, I'm very much looking forward to this. My background is iOS and Swift, so this is all relatively new as well to me. And I forgot to introduce myself properly at the beginning. I look after developer advocacy for Realm. So we have a team of developer advocates who in normal circumstances would be speaking at events and conferences. Today we're doing that but online and meetups such as this, but we also create a ton of content that we upload to our dev hub on developer.mongodb.com.\n\n**Shane McAllister**: We're also active on our forums there. And anywhere else, social in particular, I look after the @Realm Twitter a lot of the time as well too. So please if you're enjoying this meetup please do shout outs on @Realm at Twitter, we want to gather some more followers, et cetera, as well too. But without further ado, I will turn it over to Nikola and Ferdinando, and you can share screen and take it away.\n\n**Ferdinando Papale**: Yes. I will be the one starting the presentation. We already said who we were and now first let's take a look at the agenda. So this presentation will be made up of two parts. In the first part we'll talk about Xamarin. First some overview and benefits, and then some key concepts in architecture. And then in the second part, we're going to be more talk about Realm. How it integrates with Xamarin and then some tips, and then some final thoughts.\n\n**Ferdinando Papale**: Then let's go straight to the Xamarin part. That will be the first part of our presentation. First of all, if. Xamarin is an open source tool to build cross platform applications using C-sharp and .NET, and at the moment is developed by Microsoft. You can develop application with Xamarin for a lot of different platforms, but the main platforms are probably iOS and Android.\n\n**Ferdinando Papale**: You can actually also develop for MacOS, for Tizen, UWP, but probably iOS, Android are still the main targets of Xamarin. Why should you choose to develop your application using Xamarin? If we go to the next slide. Okay, yes. Probably the most important point of this is the code reuse. According to Microsoft, you can have up to 90% of the code shared between the platforms. This value actually really depends on the way that you structure of your application, how you decide to structure it, and if you decide, for example, to use Xamarin.Forms or not, but we'll discuss about it later.\n\n**Ferdinando Papale**: Another important point is that you are going to use C-sharp and .NET. So there is one language and one ecosystem. 
This means that you don't need to learn how to use Swift on iOS, you don't need to learn how to use Kotlin on Android, so it's a little bit more convenient, let's say.\n\n**Ferdinando Papale**: And then the final thing that needs to be known is the fact that in the end, the application that you develop with Xamarin feels native. I mean, a final user will not see any difference with a native app. Because whatever you can obtain natively you can also obtain with Xamarin from the UI point of view.\n\n**Ferdinando Papale**: Now, to talk a little bit more about the architecture of Xamarin. If you go to the next slide. Yes. In general, Xamarin works differently depending on the platform that we are targeting. But for both Android and iOS, the main targets, they both work with Mono. Mono is another implementation of them that is cross platform.\n\n**Ferdinando Papale**: And it's a little bit different the way that it works on Android and iOS. So on Android, the C-sharp code gets compiled to an intermediate language. And then when the application runs, this gets compiled with the just-in-time compiler. So this means that if you try to open the package developed with Xamarin, you will see that it's not the same as a completely native application.\n\n**Ferdinando Papale**: Instead, with iOS, it's not possible to have just-in-time compilation, and we have ahead-of-time compilation. This means that the C-sharp code gets directly compiled to assembly. And this was just to give a very brief introduction to the architecture.\n\n**Ferdinando Papale**: Now, if we want to talk more specifically about how to structure Xamarin application, there are essentially two ways to use Xamarin. On the left we have the, let's say, traditional way, also let's say Xamarin Native. In this case we have one project that contains the shared app logic, and this one will be common to all the platforms. And then on top of that, we have one project for each platform that we are targeting, in this case, Android and iOS. And this project contain the platform specific code, but from the practical point of view, this is mostly UI code.\n\n**Ferdinando Papale**: Then we have Xamarin.Forms. Xamarin.Forms is essentially a UI framework. If you have Xamarin.Forms, we still have these project with the shared app logic, but we also have another project with the shared UI. We still have the platform-specific projects but this contains almost nothing, and they are the entry point of the application.\n\n**Ferdinando Papale**: What happens in this case is that Xamarin.Forms has it's own UI paradigm that is different from Android and iOS. And then these gets... The controls that you use with Xamarin.Forms are the one that transform to native controls on all the platforms that are supported. Obviously, because this needs to support multiple platforms, you don't have a one to one correspondence between UI controls.\n\n**Ferdinando Papale**: Because with Xamarin.Forms, practically, you have these additional shared layer. Using Xamarin.Forms is the way that allows to have the most shared code between the two possibilities. And now we can talk a little bit more about some key concepts in forms. First of all, data binding and XAML.\n\n**Ferdinando Papale**: In Xamarin.Forms there are essentially two ways that you can define your UI. First, programmatically. So you define your UI in a C-sharp file. Or you can define your application in a XAML file. And XAML is just a language that is defined on top of XML. 
And the important thing is that it's human readable. On the left here you have an example of such a XAML file. And on the bottom you can see how it looks on an iOS and Android device.\n\n**Ferdinando Papale**: This application practically just contains a almost empty screen with the clock in the middle. If you look at the XAML file you will see it has a content page that is just Xamarin.Forms named for a screen. And then inside of that it contains a label that is centered horizontally and vertically. But that's not very important. Yes.\n\n**Ferdinando Papale**: And then the important thing here to notice is the fact that this label has a text that is not static, but is actually defined with bindings. You can see the binding time written in the XAML file. What this means here is the fact that if the bindings are done properly, whenever the time variable is changing our code, then it will also be updating the UI. This simple application it means that we have a functioning clock.\n\n**Ferdinando Papale**: The way that this is implemented, actually, you can see it on the right, we have an example of a ViewModel. In order for the ViewModel to notify the UI of these changes, it needs to implement I notify property changed, that is an interface that contains just one event, that is the property change event that you see almost at the top.\n\n**Ferdinando Papale**: Practically, the way that it works is that you can see how it works with the property time that is on the bottom. Every time we set the property time, we need to call property changed. And we need to pass also the name of the property that we're changing. Practically, let's say behind the curtains, what happens is that the view subscribes to this property change event and then gets notified when certain properties change, and so the UI gets updated accordingly.\n\n**Ferdinando Papale**: As you can see, it's not exactly straightforward to choose data binding, because you will need to do this for every property that needs to be bound in the UI. And the other thing to know is that this is just one simple way to use, data binding can be very complicated. It can be two way, one way in one direction, and so on.\n\n**Ferdinando Papale**: But data binding actually is extremely important, especially in the context of MVVM. MVVM is essentially the architectural pattern that Xamarin suggests for the Xamarin.Forms application. This is actually the interpretation that Microsoft has, obviously, of MVVM, because this really depends who you ask, everybody has his own views on this.\n\n**Ferdinando Papale**: In MVVM, essentially, the application is divided into three main blocks, the model, the view, and the ViewModel. The model contains the app data and the business logic, the view represents what is shown on the screen, so the UI of the application, and preferably should be in XAML, because it simplifies the things quite a lot. And then finally we have the ViewModel, that essentially is the glue between both the view and the model.\n\n**Ferdinando Papale**: The important thing to know here is that as you see on the graph, on the left, is that the view communicates with ViewModel through the data binding and commands, so the view knows about the ViewModel. Instead, the ViewModel actually doesn't know about the view. And the communication happens indirectly through notifications. 
Practically, the view subscribes to the PropertyChanged event on the ViewModel, and then gets notified when something has changed, and so the UI can be updated accordingly.\n\n**Ferdinando Papale**: This is really important. Because the ViewModel is independent from the view, this means that we can just swap the view for another one, we can change it without having to modify the ViewModel at all. And this independence also allows the code to be much more testable than if it wasn't there; that's why the data binding is so important.\n\n**Ferdinando Papale**: Then there is another thing that is really important in Xamarin.Forms, and those are custom renderers. As I said before, because Xamarin.Forms essentially needs to target multiple platforms, sometimes the translation between the Forms UI and the native UI is not what you expect or maybe what you want. And in this case, the way that you can get around it is to use custom renderers. Really, with custom renderers, you have the same control that you would have natively.\n\n**Ferdinando Papale**: What is on the screen is an example of how to create a custom renderer, practically. So on the left, we can see that first of all we need to create a custom class, in this case MyEntry. And it needs to derive from one of the Forms classes, in this case Entry, which is just a text view on the screen where the user can write some stuff.\n\n**Ferdinando Papale**: Obviously you also need to add this custom view to your XAML page. And then you need to go into the platform-specific projects, so iOS and Android, and define the renderer. The renderer obviously needs to derive from a certain class in Forms. And you also need to add the ExportRenderer attribute. This attribute practically says to which class this renderer should be linked.\n\n**Ferdinando Papale**: Once you use the renderer, obviously, you have full control over how the UI should look. One thing to know is that what you have on this screen is actually a little bit of a simplified example, because actually it's a little bit more complicated than this. And also, one needs to understand that it's true that it's possible to define as many custom renderers as needed, but the more custom renderers are created, probably the less code reuse you have, because you need to create these custom renderers in each of the platform-specific projects. So it starts to become... You have less and less shared code, so you should start asking yourself if Xamarin.Forms is exactly what you want. And also, they are not exactly the easiest thing to use, in my opinion.\n\n**Ferdinando Papale**: Finally, why should you decide to use Xamarin.Forms or Xamarin Native? Essentially, there are a couple of things to consider. If the development time and the budget are limited, Xamarin.Forms is a better option, because in this case you will need to create the UI just once and then it will run on both platforms; you don't need to do this development twice.\n\n**Ferdinando Papale**: Still, unfortunately, if your UI or UX needs to be polished, needs to be pixel perfect, you want to have exactly the specific UI, then probably you will need to use Xamarin Native. And this is because, as I've said before, if you want to have something that looks exactly as you want, you will probably need to use a lot of custom renderers. And more custom renderers means that Xamarin.Forms starts to be less and less important, or less advantageous, let's say.\n\n**Ferdinando Papale**: Another thing to consider is what kind of people you have in your team. 
If you have people in your team that only have C-sharp and .NET experience, then Xamarin.Forms can be a real advantage because you don't need to learn... Even if you use Xamarin Native you will still use C-sharp and .NET, but you will also need to have some native experience: you will need to know what the lifecycle of an iOS application or an Android application is, how the UI is built in both cases, and so on. So in this case, probably Xamarin.Forms will be a better option.\n\n**Ferdinando Papale**: And the final thing to consider is that generally Xamarin.Forms applications are bigger than Xamarin Native applications. So if this is a problem, then probably native is the way to go. And I think that this is the end of my half of the presentation, and now probably Nikola should continue with the rest.\n\n**Nikola Irinchev**: Sure. That's great. That hopefully gives people some idea of which route to take for the next project. And whichever route they take, whether they use Xamarin Native or Forms, they can use Realm. Let's talk about how it fits into all that.\n\n**Nikola Irinchev**: The first thing to understand about Realm is that it's an open source, standalone object database. It's not an ORM or an interface for accessing MongoDB. All the data lives locally on the device and is available regardless of whether the user has internet connectivity or not. Realm has also been meticulously optimized to work on devices with heavily constrained resources.\n\n**Nikola Irinchev**: Historically, these have been mobile devices, but recently we're seeing more and more IoT use cases. To achieve an extremely low memory footprint, Realm adopts a technique that is known as zero copy. When you fetch an object from Realm, you don't load the entire thing in memory; instead, you get some cleverly-organized metadata that tells us at which memory offsets the various properties are located.\n\n**Nikola Irinchev**: Only when you access a property do we go to the database and read the information stored there. This means that you won't need to do any select X, Y, Z's, and it also allows you to use the exact same object in your master view, where you only need one or two properties, as well as in the detail view, where you need to display the information about the entire entity.\n\n**Nikola Irinchev**: Similarly, collections are lazily loaded and data is never copied into memory. A collection of a million items is, again, a super lightweight wrapper around some metadata. And accessing an element just calculates the exact memory offset where the element is located and returns the data there. This, again, means you can get a collection of millions of items in fractions of a second, then drop it in a ListView with data binding, and as the user scrolls on the screen, new elements will be loaded on demand and the ones that scroll off are garbage collected. Meaning you never have to do pagination, limits, or add load-more buttons.\n\n**Nikola Irinchev**: To contribute to that seamless experience, the way you define models in Realm is nearly identical to the way you define your in-memory objects. You give it a name, you add some properties, and that's it. The only thing that you need to do to make sure it's compatible with Realm is to inherit from RealmObject.\n\n**Nikola Irinchev**: When you compile your project, Realm will use code weaving. It will replace the appropriate getters and setters with custom code that will read and write to the database directly. 
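\n\nAs a rough illustration of what that model definition looks like (hypothetical names, not the code from the demo), a Realm model is just a plain class that inherits from RealmObject:\n\n```csharp\nusing System;\nusing System.Collections.Generic;\nusing Realms;\n\n// A plain model class. Inheriting from RealmObject is all Realm needs to\n// weave database-backed getters and setters in at compile time.\npublic class Person : RealmObject\n{\n    public string Name { get; set; }\n\n    public DateTimeOffset Birthday { get; set; }\n\n    // A link to another Realm object.\n    public Dog FavoriteDog { get; set; }\n\n    // A collection of related objects.\n    public IList<Dog> Dogs { get; }\n}\n\npublic class Dog : RealmObject\n{\n    public string Name { get; set; }\n}\n```\n\n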
And we do support most built-in primitive types. You can use strings, various sizes of integers, floats, doubles, and so on.\n\n**Nikola Irinchev**: You can of course define links to other objects, as well as collections of items. For example, if you have a tweet model, you might want to have a list of strings that contains all the tags for the tweet, or if you have a person model, you might want to have a list of dogs that are owned by that person.\n\n**Nikola Irinchev**: The final piece of core Realm functionality that I want to touch on is one that is directly related to what Ferdinando was talking about with Xamarin.Forms and data binding. That thing that I mentioned about properties that hook up directly to the database, apart from being super efficient in performance, has the nice side effect that we're always working with up-to-date data.\n\n**Nikola Irinchev**: So if you have a background thread and you update a person's age there, the next time you access the person's age property on the main thread, you're going to get the new value. That in and of itself is cool, but it would be kind of useless if we didn't have a way to be notified when such a change has occurred. Luckily, we do, as all Realm objects implement INotifyPropertyChanged, and all Realm collections implement INotifyCollectionChanged.\n\n**Nikola Irinchev**: These are the interfaces that are the foundation of any data binding engine, and are of course supported and respected by Xamarin.Forms, WPF, and so on. This means that you can data bind to your database models directly, and the UI will update whenever a property changes, regardless of where the change originated from. And for people who want to have an extra level of control, or those working with Xamarin Native, we do have a callback that you can subscribe to, which gives you more detailed information than what the system interfaces expose.\n\n**Nikola Irinchev**: To see all these concepts in action, I've prepared a super simple app that lists some people and their dogs. Let me show that to you. All right. Let's start with the model definition. I have my person class. It has name, birthday, and favorite dog. And it looks like any POCO out there. The only difference, again, being that it inherits from RealmObject, which is a hint for the code weaver that we use to replace the getters and setters with some clever code that hooks into the native Realm API.\n\n**Nikola Irinchev**: All right. Then let's take a look at lazy loading. I cheated a little bit, and I already populated my Realm: I inserted a million people with their dogs and their names and so on. And I added a button in my view, which is called load, and it invokes the load million items command. What it does is it starts a stopwatch, gets all items from Realm, and alerts how many there are and how much time it took.\n\n**Nikola Irinchev**: If I go back to my simulator, if I click load, we can see that we loaded a million elements in zero milliseconds. Again, this is cheating, we're not really loading them all, we are creating a collection that has the necessary metadata to know where the items are. But for all intents and purposes, for you as a developer, they are there. If I set a breakpoint here, load the items again, I can just open the evaluator and I can pick any element of the collection, at any index, and it's there. The properties, the name, their dog, it's all there... You can access any element as if you were accessing any in-memory structure.\n\n**Nikola Irinchev**: All right. That's cool. 
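\n\nThe load command from the demo looks roughly like the following reconstruction (assumed names, not the actual demo code):\n\n```csharp\nusing System.Diagnostics;\nusing System.Linq;\nusing System.Windows.Input;\nusing Realms;\nusing Xamarin.Forms;\n\npublic class MainViewModel\n{\n    private readonly Realm _realm = Realm.GetInstance();\n\n    public ICommand LoadMillionItemsCommand { get; }\n\n    public MainViewModel()\n    {\n        LoadMillionItemsCommand = new Command(() =>\n        {\n            var stopwatch = Stopwatch.StartNew();\n\n            // No data is copied into memory here: the query result is a lazy\n            // wrapper around metadata pointing into the Realm file.\n            var people = _realm.All<Person>();\n            var count = people.Count();\n\n            stopwatch.Stop();\n            Debug.WriteLine($\"Loaded {count} people in {stopwatch.ElapsedMilliseconds} ms\");\n        });\n    }\n}\n```\n\n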
Let's display these million people. In my main page, I would have a ListView. Think of it like a UITableViewController, or just a collection of cells. And in my cell I have a text field which binds to the person's name, and I have a detail field which binds to FavoriteDog.Name. And the entire ListView is bound to the people collection.\n\n**Nikola Irinchev**: In my main view model, the people collection is just empty, but we can populate it with the data from Realm. I'm just passing all people there, which, as we saw, are on \[inaudible 00:29:55\]. And I'm going mute. What's going to happen now is Realm will feed this collection, and the data binding engine will start reading data from the collection to populate the UI. I can go back to my simulator. We can see that all the people are loaded in the ListView. And as I scroll the ListView, we can see that new people are being displayed.\n\n**Nikola Irinchev**: Judging by the fact that my scroller doesn't move far, we can guess that there are indeed a million people in there. And again, of course, we don't have a million items in memory, that would be ridiculous. The way Xamarin.Forms works is, it's only going to draw what's on screen, it's only going to ask Realm for the data that is being currently displayed. And as the user scrolls, old data is being garbage collected, new data is being picked up. So this allows you to have a very smooth user experience and a very simple developer experience, because you no longer have to think about pagination and figuring out what's the minimum set of properties that you need to load to drive that UI.\n\n**Nikola Irinchev**: Finally, to build on top of that example, I added a simple timer. I have a model called statistics which has a single property, which is an integer counting the total seconds the user has spent in the app. What I'm going to do is, in my app, when it starts, I'm going to run my update code in the background. And what that does is, it waits one second, very imprecise, we don't care about precision here, and opens a Realm and increments the number of total seconds.\n\n**Nikola Irinchev**: In my main page, I will data bind my title property to statistics.TotalSeconds, so to the TotalSeconds property. I have a nice string format there to write the elapsed time there. And in my view, I'll just populate my statistics instance with the first element from the statistics collection.\n\n**Nikola Irinchev**: I know that there's one. Okay. So when I run the app, what is going to happen is, every second, my app will increment this value on a background thread. In my main view model, the statistics instance, which points to the same object in the database, is going to be notified that there's a change to the TotalSeconds property, and is going to proxy that to the UI. And if we go to the UI, we can see that every second, the title is getting updated. And that required absolutely no synchronization or UI code on my end, apart from the data binding logic.\n\n**Nikola Irinchev**: Clearly, that is a super silly example, I don't expect any of you to ship that into production, but it's the exact same principle you can apply when fetching updates from your server or when doing some background processing while offline, converting images or generating documents. What you need to do is just store the results in Realm, and as long as you set up your data bindings properly, the UI will update itself regardless of where in the app the user is. All right. 
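\n\nThe background counter that drives that title is, in rough outline, something like the following sketch (assumed names, not the actual demo code). Note that the Realm instance is opened, written to, and disposed on each tick, with no await inside the using block:\n\n```csharp\nusing System;\nusing System.Linq;\nusing System.Threading.Tasks;\nusing Realms;\n\npublic class Statistics : RealmObject\n{\n    public int TotalSeconds { get; set; }\n}\n\npublic static class SecondsCounter\n{\n    // Runs on a background thread: every second, open the Realm, increment\n    // the counter in a write transaction, then dispose the instance. Any UI\n    // element data-bound to Statistics.TotalSeconds updates automatically.\n    public static async Task RunAsync()\n    {\n        while (true)\n        {\n            await Task.Delay(TimeSpan.FromSeconds(1));\n\n            using (var realm = Realm.GetInstance())\n            {\n                realm.Write(() =>\n                {\n                    var stats = realm.All<Statistics>().First();\n                    stats.TotalSeconds++;\n                });\n            }\n        }\n    }\n}\n```\n\n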
That was my little demo, and we can go back to the more boring part of the presentation and talk about some tips when starting out with Realm and Xamarin.\n\n**Nikola Irinchev**: The main thing that trips people up when they start using Realm is the threading model. Now, that definitely deserves a talk of its own. And I'm not going to go into too much detail here, but I'll give you the TLDR of it, and you should just trust me on that. We can probably have a different talk about threading.\n\n**Nikola Irinchev**: First off, on the main thread, it's perfectly fine and probably a good idea to keep a reference to the Realm in your ViewModel. You can either get a new instance with the Realm.GetInstance() call, or you can just use some singleton. As long as it's only accessible on the main thread, that is perfectly fine. And regardless of which approach you choose, the performance will be very similar. We do have native caching of main thread instances, so you won't be generating a lot of garbage if you take the GetInstance approach.\n\n**Nikola Irinchev**: And on the main thread, you don't have to worry about disposing those instances; it's perfectly fine to let them be garbage collected when your ViewModel gets garbage collected. That's just fine. On a background thread, though, it's quite the opposite. There you always want to wrap your GetInstance calls in using statements.\n\n**Nikola Irinchev**: The reason for that is, background threads will cause the file size to increase when data gets updated, even if we don't insert new objects. This space is eventually reclaimed when you dispose the instance or when the app restarts. But it's nevertheless problematic for devices with constrained resources.\n\n**Nikola Irinchev**: Similarly, it is strongly encouraged to keep background instances short-lived. If you need to do some slow data pre-processing, do it before you open the Realm file and just write the results once you open it. Or if you need to read some data from Realm, do the processing, and write back the results: open the Realm first, extract the data you need into memory, then close the Realm, start the slow job, then open the Realm again and write the results. As a rule of thumb, always wrap background Realm instances in using statements, and never have any awaits inside the using block.\n\n**Nikola Irinchev**: All right. Let's move to a topic that will inevitably be controversial. And that is: avoid the repository pattern. It's going to be a bit of a shock, especially for people coming from Java or back end backgrounds. But the benefit-to-complexity ratio of abstracting Realm usage is pretty low.\n\n**Nikola Irinchev**: The first argument is universal; it doesn't apply just to Realm but to mobile apps in general. You should really design your app for the database that you're going to use. Each database has strengths and weaknesses. And some things are easy with SQLite, others are easy with Realm. By abstracting away the database in a way that you can just swap it out with a different implementation, it means you're not taking advantage of any of the strong sides of the current database that you're using.\n\n**Nikola Irinchev**: And when the average active development time for a mobile app is between six and eight months, you'll likely spend more time preparing for a database switch than you'd save in case you actually have to go through with it.\n\n**Nikola Irinchev**: Speaking of strong sides, one of Realm's strong sides is that the data is live. Collections are lazily loaded. 
And abstracting data away in a generic repository pattern is going to be confusing for your consumers. You have two options. You can return the data as is: live collections, live objects. But in a general purpose repository, there'll be no way to communicate to the consumer that this data is live, so they might think that they will need to re-fetch it, or be confused as to why there are no pagination APIs. And if you do decide to materialize the results into memory, you're foregoing one of the main benefits of using Realm and taking a massive performance hit.\n\n**Nikola Irinchev**: Finally, having Realm behind the repository will inevitably complicate threading. As we've seen earlier, the recommendation is to use thread-confined instances on background threads. And if you have to go 'get repository, dispose repository' all the time, you might as well use Realm directly.\n\n**Nikola Irinchev**: None of that is to say that abstractions are bad and you should avoid using them at all costs. We've seen plenty of good abstractions built on top of Realm that work very well in the context of the apps that they're written for. But if you already have a SQLite-based app that uses the repository pattern and you think you can just swap out SQLite with Realm, you're probably going to have a bad time and not take full advantage of what Realm has to offer.\n\n**Nikola Irinchev**: Finally, something that many people miss about Realm is that you totally can have more than one database at play in the same app. This can unlock many interesting use cases, and we've seen people get very creative with it. One benefit of using multiple Realms is that you have a clear separation of information in your app.\n\n**Nikola Irinchev**: For example, in a music app, you might have one Realm that holds the app settings, a different one that holds the lyrics metadata, and a third one that holds the user's playlists. We've seen similar setups in modular apps, where different teams work on different components of the app and want to avoid having to always synchronize and align changes and migrations.\n\n**Nikola Irinchev**: Speaking of migrations, keeping data in different Realms can eliminate the need to do some migrations altogether. For example, if you have a Realm instance that holds mostly cached data, and your server-side models change significantly, it's probably cheaper to just use the new model and not deal with the cost of a migration. If that instance was also holding important user data, you wouldn't be able to do that, making it much more complicated to ship the new version.\n\n**Nikola Irinchev**: And finally, it can allow you to offer improved security without data duplication. In a multi-user application, like our earlier music app, you may wish to have the lyrics metadata Realm be unencrypted and shared between all users, while their personal playlists or user information can be encrypted with their user-specific key and accessible only to them.\n\n**Nikola Irinchev**: Obviously, you don't have to use multiple Realms. Most of the apps we've seen only use one. But it's something many folks just don't realize is an option, so I wanted to put it out there. And with that, I'm out of tips. I'm going to pass the virtual mic back to Ferdinando to give us a glimpse into the future of Xamarin.\n\n**Ferdinando Papale**: Yes. I'm going to talk just a little bit about what the future of Xamarin.Forms is, and the future of Xamarin.Forms is called MAUI. 
That stands for Multi-platform App UI, and it is essentially the newest evolution of Xamarin.Forms that will be included in .NET 6. .NET 6 is coming out at the end of the year, I think in November, if everything goes well.\n\n**Ferdinando Papale**: Apart from containing all the new and shiny features, the interesting thing that they did with .NET 6 is that they are trying to unify the ecosystem a little bit, because .NET has always been a little bit all over the place with .NET Standard, .NET Core, .NET Framework, .NET this, .NET that. And now they're trying to put everything under the same name. So Xamarin will not be Xamarin.iOS and Xamarin.Android anymore, but just .NET iOS and .NET Android. Also, Mono will be part of .NET and so on.\n\n**Ferdinando Papale**: Another important thing to know is that with MAUI it will still be possible to develop applications for iOS and Android, but there is also a bigger focus on macOS and Windows applications. So it will be much more complete.\n\n**Ferdinando Papale**: Then they're also going to work a lot on the design, to improve the customization that can be done, so that one needs to use far fewer custom renderers. But there is also the possibility of creating UI controls that, instead of feeling native on each platform, look almost the same on each platform, for greater UI consistency, let's say.\n\n**Ferdinando Papale**: And the final thing is the single-project experience, let's say, that they are going to push. At the moment with Xamarin.Forms, if you want to have an application that targets five platforms, you need to have at least five projects plus the common one. What they want to do is eliminate these platform-specific projects and have only the shared one. This means that in this case, you will have all the, let's say, platform-specific icons, and so on, in this single project. And this is something that they are really pushing on. And this is just the... It was just a brief look into the future of Xamarin.Forms.\n\n**Nikola Irinchev**: All right. Yeah, that's awesome. I for one am really looking forward to some healthy Electron competition which doesn't eat RAM for breakfast. So hopefully it's our dog that seats in MAUI, we'll deliver that. I guess the future of Realm is Realm. We don't have Polynesian deities in the pipeline, but we do have some pretty exciting plans for the rest.\n\n**Nikola Irinchev**: Soon, in the spring, we'll be shipping some new datatypes we've been actively working on over the past couple of months. These are dictionaries, sets, and GUIDs. We're also adding a type that can hold any \[inaudible 00:44:37\].\n\n**Nikola Irinchev**: At Realm we do like schema, so definitely don't expect Realm to become MongoDB anytime soon. But there are legitimate use cases for apps that need to have heterogeneous data sometimes. For example, a person class may hold a reference to a cat or a dog or a fish in their pet property, or an address that is just a string instead of an address structure. So we kind of want to give developers the flexibility to let them be in control of their own destiny.\n\n**Nikola Irinchev**: Moving forward, in the summer, we're turning our attention to mobile gaming and Unity. This has been the most highly requested feature on GitHub, so we hope to see what the gaming community will do with Realm. And as Ferdinando mentioned, we are expecting a brand new .NET release in the fall. 
We fully intend to offer first-class MAUI support as soon as it lands.\n\n**Nikola Irinchev**: And I didn't have any plans for the winter, but we're probably going to be opening Christmas presents making cocoa. With the current situation, it's very hard to make long term plans, so we'll take these goals. But we are pretty excited with what we have in the pipeline so far. And with that, I will pass the virtual mic back to Shane and see if we have any questions.\n\n**Shane McAllister**: Excellent. That was brilliant. Thank you very much, Nikola and Ferdinando. I learned a lot, and lots to look forward to with MAUI and Unity as well too. And there has been some questions in the chat. And we thought Sergio was going to win it by asking all the questions. And we did get some other brave volunteers. So we're going to do our best to try and get through these.\n\n**Shane McAllister**: Sergio, I know you said you wanted me to ask on your behalf, that's no problem, I'll go through some of those. And James and Parth and Nick, if you're happy to ask your own questions, just let me know, and I can your mic and your video to do that. But we'll jump back to Sergio's, and I hope \\[inaudible 00:46:44\\] for you now. I might not get to all of them, we might get to all of them, so we try and fly through them. I'm conscious of everybody's time. But we have 10, 15 minutes here for the questions.\n\n**Shane McAllister**: So Nikola, Ferdinando, Sergio starts off with, I can generate Realm-encrypt DB at the server and send this file to different platforms, Windows, iOS, Android, MacOS. The use case; I have a large database and not like to use sync at app deploy, only using sync to update the database. Can he do that?\n\n**Nikola Irinchev**: That is actually something that I'm just writing the specification for. It's not available right now but it's a very valid use case, we definitely want to support it. But it's a few months in the future outside, but definitely something that we have in the works. One caveat there, the way encryption in Realm works, is that it depends on the page file size of the platform is running on.\n\n**Nikola Irinchev**: So it's possible that... For my question iOS is the same, but I believe that there are differences between Windows and Android. So if you encrypt your database, it's not guaranteed that it's going to be opened on all platforms. What you want to do is you shift the database unencrypted, in your app, encrypted with the page file of the specific platform that their app is running on.\n\n**Shane McAllister**: Okay, that makes sense. It means Sergio's explained his use case was that they didn't want some user data to be stored at the server, but the user wanted to sync it between their devices, I suppose. And that was the reason that he was looking for this. And we'll move on.\n\n**Shane McAllister**: Another follow up from Sergio there is, does the central database have to be hosted in the public cloud or can he choose another public cloud or on premise as well too?\n\n**Nikola Irinchev**: Currently we don't have a platform on premise for sync. That is definitely something we're looking into but we don't have any timeline for when that might be available. In terms of where the central database is hosted, it's hosted in Atlas. That means that, because it's on Azure, AWS and Google Cloud, it's conforming to all the rules that Atlas has for where the database is stored.\n\n**Nikola Irinchev**: I believe that Atlas has support for golf cloud, so the government version of AWS. 
But it's really something that we can definitely follow up on that if he gives us more specific place where he wants to foster. But on premise definitely not an option at the moment.\n\n**Shane McAllister**: And indeed, Sergio, if you want more in-depth feedback, our forum is the place to go. Post the questions. I know our engineering team and our developer advocates are active on the forums there too. And you touched on this slightly by way of showing that if you have a Realm that you don't necessarily care about and you go and update the schema, you can dump that. Sergio asked if the database schema changes, how does the update process work in that regard?\n\n**Nikola Irinchev**: That depends on whether the database... If a lot of the questions Sergio has are sync related, we didn't touch too much on sync because I didn't want to blow the presentation up. There's a slight difference to how the local database and how sync handle schema updates. The local database, when you do schema update, you write the migration like you would do with any other database. In the migration you have access to the old data, the new data, and you can, for example, populate new properties from all properties, split them or manipulate.\n\n**Nikola Irinchev**: For example if you're changing a string column to an integer, parse the string values from right to zeros there. With sync, there are more restrictions about what schema changes are allowed. You can only make additive schema changes, which means that you can only add properties after losses, you cannot change the type of a property.\n\n**Nikola Irinchev**: This is precisely to preserve backwards compatibility and allow apps that are already out in the wild in the hands of users not to break in case the schema changes, because so you cannot ship back your code and handle the different schema there.\n\n**Shane McAllister**: Super. Great. And I'll jump to Sergio's last question because I kind of know the answer to this, about full text search. Where are you with that?\n\n**Nikola Irinchev**: Again, we are in the specification part. One thing to notice, that the Realm database is fully open source. Like the core database, RTS is open source. And if he goes to the Realm core repository, which is the core database, he can definitely see the pull request that has full text search. That's a very much in a POC phase. We're nowhere near ready to shift that to the production quality, but it's definitely something we're actively working on and I do hope to have interesting updates in the coming months.\n\n**Shane McAllister**: Super. Thank you, Nikola. And James, you have three questions there. Would you want to ask them yourself or let me ask on your behalf? If you want to just type into the chat there, James, because with three you can follow up on them. I'll happily open the video and the mic. So he's happy to try. Fair play, brave individual. Let me just find you here now, James. You will come in as a host, so you will now have mic and video controls down the bottom of your screen. Please turn them on. We will see you hopefully, we'll hear you, and then you can ask your own questions.\n\n**James**: I don't have a camera so you won't see me, you'll only hear me.\n\n**Shane McAllister**: It's always dangerous doing this stuff live.\n\n**James**: Can you hear me?\n\n**Shane McAllister**: Is that working for you? 
Yeah, he's getting there.\n\n**James**: Can you hear me?\n\n**Shane McAllister**: We'll see.\n\n**James**: No?\n\n**Shane McAllister**: James, I see your mic is on, James. No. Hold on one second. We'll just set you again. Apologies everyone else for the... Turn that back on and turn this on. Should be okay. Look, James, we'll give you a moment to see if you appear on screen. And in the meantime, Parth had a couple of questions. How does Realm play with backward compatibility?\n\n**Nikola Irinchev**: That's a difficult question. There are many facets of backwards compatibility and \\[inaudible 00:54:22\\]. Let's see if the other questions give any hints.\n\n**Shane McAllister**: A follow up from Parth, and I hope I've got the name correct, is there any use case where I should not use Realm and use the native ones? In every use case you should always use Realm, that's the answer to that.\n\n**Nikola Irinchev**: Now I know that there are cases where sycilite performs better than Realm. The main difference is sycilite gives you static data. You get something from a database and it never changes. That may be desirable in certain cases. That is certainly desirable if you want to pass that data to a lot of threads. Because data is static, you don't have to worry about, let's say, updating values, environment, suddenly, things change under your feet.\n\n**Nikola Irinchev**: That being said, we believe that Realm should fit all use cases. And we are working hard to make it fit all use cases, but there's certainly going to be cases where, for example, we've got that you can use the iOS synchronization with the Apple ID. That is absolutely valid case. That has its own synchronization but it works differently from what Apple offers. It's not as automatic.\n\n**Shane McAllister**: Sure. No, that makes sense. And you answered his third question during the presentation, which was about having multiple Realms in a single app. I think that was definitely covered. Nick has told me to go and ask his questions too to save time. I presume the minute or two we tried to get James on board it wasn't the best use of our time.\n\n**Shane McAllister**: Nick, you've requested number two here but I can't see question number one, so go ahead and... I have to read this myself. Is RealmObject still the direction? A few years ago there was talk of using generators with an interface which would make inheritance easier, particularly for NSObject requirements for iOS.\n\n**Nikola Irinchev**: Yes. Generators are definitely something that we are very interested in. This is very much up in the air, I'm not giving any promises. But generators shipped with .NET 5 in November, at least, the stable version of generators. We haven't gotten the time to really play with them properly, but are definitely interested. And especially for Unity, that is an option that we want to offer, because certain things there also have special inheritance requirements. So yeah, generators are in the hazy part of the roadmap, but definitely an area of interest. We would want to offer both options in the future.\n\n**Shane McAllister**: That makes sense. And Nick had a follow up question then was, performance recommendations for partial string search of a property, is it indexing the property?\n\n**Nikola Irinchev**: Yeah. Right now, indexing the property it will not make up performance differences searching for partial matches. 
Once the full text search effort is closer to completion, that will yield performance benefits for partial searches even for non-full text search properties. Right now indexing won't make a difference, in the future it will.\n\n**Shane McAllister**: Okay, perfect. So we have two from James here. Will Realm be getting cascading deletes?\n\n**Nikola Irinchev**: Another thing that we didn't touch on in this talk is the concept of embedded objects. If you're familiar with MongoDB and their embedded objects there, it's a very similar concept. You have a top-level object like a person, and they may have objects that are embedded in that object, say, a list of addresses associated with that person.\n\n**Nikola Irinchev**: Embedded objects implement cascading deletes in the sense that if you delete the person, then all their objects are going to be deleted. That is not supported currently for top-level objects. It is something that we are continuously evaluating how to support in the best possible way. The main challenge there, of course, in a distributed system where sync is involved, cascading deletes are very dangerous. You never know who might be linking to a particular object that has been offline, for example, and you haven't seen their changes. So we are evaluating cascading deletes for standalone objects, but embedded objects will fit like 90% of the use cases people could have for cascading deletes.\n\n**Shane McAllister**: Super. Perfect. Thank you, Nikola. And I think there's only one more. Again, from James. Will Realm be providing a database viewer without having to remove it from the device, was the question.\n\n**Nikola Irinchev**: That is an interesting question. Yeah, that's an interesting question and I don't know the answer to that, unfortunately.\n\n**Shane McAllister**: That's all right. We don't need to know all the answers, that's what the meetups are for, right? You get to go back to the engineering team now and say, \"Hey, I got asked a really interesting question in a meetup, what are we going to do with this?\"\n\n**Shane McAllister**: James had another one there that he just snuck in. He's quick at typing. Will Realm objects work in unit tests or do they only work when the Realm is running, for example, integration test.\n\n**Nikola Irinchev**: Realm objects, they behave exactly like in-memory objects when they're not associated with Realm. So you can create a standalone person, don't turn it to Realm, it will behave exactly like the person model that you find with the in-memory properties. So that's probably not going to be enough for a unit test, especially if you rely on the property change notification mechanism. Because an object that is not associated with Realm, it's not going to get notified if another instance of the same object changes, because they're not linked in any way whatsoever.\n\n**Nikola Irinchev**: But Realm does have the option to run in in-memory mode. So you don't have to create a file on disk, you can run it in memory. And that is what we've seen people typically use for unit tests. It's a stretch call unit test, it's an integration test, but it fits 90% of the expectations from a unit test. So that's something that James could try, give it a go.\n\n**Nikola Irinchev**: But we're definitely interested in seeing what obstacles people are saying when writing unit tests, so we'll be very happy to see if we fit the bill currently, or if there's a way to fit the bill by changing some of the API.\n\n**Shane McAllister**: Super. 
And another final one there from \\[Nishit 01:02:26\\]. UI designed by schema XML or something else?\n\n**Nikola Irinchev**: I'm not sure. It can mean two things the way I understand it. One is, if he's asking about the design of the schema of Realm, then it's all done by XML or anything, it's designed by just defining your models.\n\n**Shane McAllister**: I think it's more to do with the UI design in XML, in your app development. That's the way I \\[crosstalk 01:03:02\\] question.\n\n**Nikola Irinchev**: I don't know. For Xamarin.Forms, and we like to XAML, then yeah. XAML is a great way to design your UI, and it's XML based. But yeah. And Nishi, if you want to drop your question in the forum, I'd be happy to follow up on that.\n\n**Shane McAllister**: Yeah.\n\n**Nikola Irinchev**: Just a little bit more context there.\n\n**Shane McAllister**: Well, look, we're going over the hour there. I think this has been superb, I've certainly learned a lot. And thanks, everybody, for attending. Everybody seems to have filled out the Swag form, so that seems to have gone down well. As I said at the beginning, the shipping takes a little while so please be patient with us, it's certainly going to take maybe two, three weeks to hit some of you, depending on where you are in the world.\n\n**Shane McAllister**: We do really appreciate this. And so the couple of things that I would ask you to do for those that attended today, is to follow @Realm on Twitter. As Nikola and Ferdinando have said, we're active on our forums, please join our forums. So if you go to developer.mongodb.com you'll see our forums, but you'll also see our developer hub, and links to our meetup platform, live.mongodb.com.\n\n**Shane McAllister**: Perfect timing, thank you. There are the URLs. So please do that. But the other thing too, is that this, as I said, is the third this year, and we've got three more coming up. Now look, that they're all in different fields, but the dates are here. So up next, on the 18th of March, we have Jason and Realm SwiftUI, Property wrappers, and MVI architecture.\n\n**Shane McAllister**: And then we're back on the 24th of March with Realm Kotlin multi platform for modern mobile apps. And then into April, but we probably might slot another one in before then. We have Krane with Realm JS for React Native applications as well too. So if you join the global Realm user group on live.mongodb.com, any future events that we create, you will automatically get emailed about those and you simply RSVP, and you end up exactly how you did today. So we do appreciate it.\n\n**Shane McAllister**: For me, I appreciate Ferdinando and Nikola, all the work. I was just here as a talking head at the beginning, at the end, those two did all the heavy lifting. So I do appreciate that, thank you very much. We did record this, so if there's anything you want to go back over, there was a lot of information to take in, it will be available. You will get via the platform, the YouTube link for where it lives, and we'll also be probably posting that out on Twitter as well too. So that's all from me, unless Nikola, Ferdinando you've anything else further to add. We're good?\n\n**Nikola Irinchev**: Yeah.\n\n**Shane McAllister**: Thank you very much, everyone, for attending. Thank you for your time and have a good rest of your week. Take care.\n\n**Ferdinando Papale**: See you.", "format": "md", "metadata": {"tags": ["Realm", "C#", ".NET"], "pageDescription": "Missed Realm .NET for Xamarin (best practices and roadmap) meetup event? 
Don't worry, you can catch up here.", "contentType": "Article"}, "title": "Realm .NET for Xamarin (Best Practices and Roadmap) Meetup", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/taking-rag-to-production-documentation-ai-chatbot", "action": "created", "body": "# Taking RAG to Production with the MongoDB Documentation AI Chatbot\n\nAt MongoDB, we have a tagline: \"Love your developers.\" One way that we show love to our developers is by providing them with excellent technical documentation for our products. Given the rise of generative AI technologies like ChatGPT, we wanted to use generative AI to help developers learn about our products using natural language. This led us to create an AI chatbot that lets users talk directly to our documentation. With the documentation AI chatbot, users can ask questions and then get answers and related content more efficiently and intuitively than previously possible.\n\nYou can try out the chatbot at mongodb.com/docs.\n\nThis post provides a technical overview of how we built the documentation AI chatbot. It covers:\n\n- The chatbot\u2019s retrieval augmented generation (RAG) architecture.\n- The challenges in building a RAG chatbot for the MongoDB documentation.\n- How we built the chatbot to overcome these challenges.\n- How we used MongoDB Atlas in the application.\n- Next steps for building your own production RAG application using MongoDB Atlas.\n\n## The chatbot's RAG architecture\n\nWe built our chatbot using the retrieval augmented generation (RAG) architecture. RAG augments the knowledge of large language models (LLMs) by retrieving relevant information for users' queries and using that information in the LLM-generated response. We used MongoDB's public documentation as the information source for our chatbot's generated answers.\n\nTo retrieve relevant information based on user queries, we used MongoDB Atlas Vector Search. We used the Azure OpenAI ChatGPT API to generate answers in response to user questions based on the information returned from Atlas Vector Search. We used the Azure OpenAI embeddings API to convert MongoDB documentation and user queries into vector embeddings, which help us find the most relevant content for queries using Atlas Vector Search.\n\nHere's a high-level diagram of the chatbot's RAG architecture:\n\n.\n\n## Building a \"naive RAG\" MVP\n\nOver the past few months, a lot of tools and reference architectures have come out for building RAG applications. We decided it would make the most sense to start simple, and then iterate with our design once we had a functional minimal viable product (MVP). \n\nOur first iteration was what Jerry Liu, creator of RAG framework LlamaIndex, calls \"naive RAG\". This is the simplest form of RAG. Our naive RAG implementation had the following flow:\n\n- **Data ingestion**: Ingesting source data into MongoDB Atlas, breaking documents into smaller chunks, and storing each chunk with its vector embedding. Index the vector embeddings using MongoDB Atlas Vector Search.\n- **Chat**: Generating an answer by creating an embedding for the user's question, finding matching chunks with MongoDB Atlas Vector Search, and then summarizing an answer using these chunks.\n\nWe got a reasonably functional naive RAG prototype up and running with a small team in under two months. 
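\n\nIn code terms, the \"chat\" half of that naive flow boils down to something like the following sketch. The three dependencies are passed in as typed functions because the real implementations (the Azure OpenAI embeddings API, Atlas Vector Search, and the ChatGPT API) are outside the scope of this sketch; it illustrates the shape of naive RAG, not our actual server code.\n\n```ts\ntype Embedder = (text: string) => Promise<number[]>;\ntype Retriever = (embedding: number[]) => Promise<string[]>;\ntype Generator = (prompt: string) => Promise<string>;\n\nexport async function answerQuestion(\n  question: string,\n  embed: Embedder,\n  findRelevantChunks: Retriever,\n  generateAnswer: Generator\n): Promise<string> {\n  // 1. Create a vector embedding for the user's question.\n  const queryEmbedding = await embed(question);\n\n  // 2. Find the best-matching content chunks with vector search.\n  const chunks = await findRelevantChunks(queryEmbedding);\n\n  // 3. Have the LLM summarize an answer grounded in those chunks.\n  const prompt = `Answer the question using only this context:\\n\\n${chunks.join(\"\\n\\n\")}\\n\\nQuestion: ${question}`;\n  return generateAnswer(prompt);\n}\n```\n\n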
To assess the quality of generated responses and links, we had MongoDB employees volunteer to test out the chatbot in a red teaming exercise.\n\nTo learn more about the approach we took to red teaming, refer to the documentation from Microsoft.\n\n## Challenges in building a RAG application for MongoDB documentation\n\nThe red teaming exercise revealed that the naive RAG chatbot provided satisfactory answers roughly 60% of the time.\n\nFor the 40% of answers that were unsatisfactory, we noticed a few common themes: \n\n- The chatbot was not aware of previous messages in the conversation.\n\nFor example, the conversation might go like:\n\n```txt\nUser: how to create a cluster atlas\nChatbot: To create a MongoDB Atlas cluster using the CLI, you can use the `mongocli atlas quickstart` command...\nUser: how to with UI\nChatbot: I'm sorry, I don't know what you mean...\n```\n\n- The chatbot sometimes gave niche or overly specific solutions when a more general answer would have been useful. MongoDB has many products with overlapping functionality (database drivers in multiple languages, MongoDB on Atlas and self-hosted, etc.) and without a clear priority, it could seemingly choose one at random.\n \nFor example, the conversation might go like:\n\n```txt\nUser: how to connect to mongodb\nChatbot: To connect to MongoDB you can use the Java Driver...\n```\n\n- The chatbot\u2019s further reading links were not consistently relevant.\n\n For example the conversation might go like:\n \n```txt\nUser: write data to mongodb node.js\nChatbot: /* Reasonable generated answer */\n \nFurther reading:\n- https://www.mongodb.com/docs/drivers/node/current/usage-examples/insertOne/ (\ud83d\udc4d)\n- https://www.mongodb.com/developer/languages/javascript/node-connect-mongodb/ (\ud83d\udc4d)\n- https://www.mongodb.com/developer/products/realm/realm-meetup-javascript-react-native/ (\ud83e\udd37)\n```\n\nTo get the chatbot to a place where we felt comfortable putting it out into the world, we needed to address these limitations.\n\n## Refactoring the chatbot to be production ready\n\nThis section covers how we built the documentation AI chatbot to address the previously mentioned limitations of naive RAG to build a not-so-naive chatbot that better responds to user questions.\n\nUsing the approach described in this section, we got the chatbot to over 80% satisfactory responses in a subsequent red teaming exercise.\n\n### Data ingestion\n\nWe set up a CLI for data ingestion, pulling content from MongoDB's documentation and the Developer Center. A nightly cron job ensures the chatbot's information remains current.\n\nOur ingestion pipeline involves two primary stages:\n\n#### 1. Pull raw content\n\nWe created a `pages` CLI command that pulls raw content from data sources into Markdown for the chatbot to use. This stage handles varied content formats, including abstract syntax trees, HTML, and Markdown. We stored this raw data in a `pages` collection in MongoDB.\n\nExample `pages` command:\n\n```sh\ningest pages --source docs-atlas\n```\n\n#### 2. Chunk and Embed Content\n\nAn `embed` CLI command takes the data from the `pages` collection and transforms it into a form that the chatbot can use in addition to generating vector embeddings for the content. 
We stored the transformed content in the `embedded_content` collection, indexed using MongoDB Atlas Vector Search.\n\nExample `embed` command:\n\n```sh\ningest embed --source docs-atlas \\\n --since 2023-11-07 # only update documentation changed since this time\n```\n\nTo transform our `pages` documents into `embedded_content` documents, we used the following strategy:\nBreak each page into one or more chunks using the LangChain RecursiveCharacterTextSplitter. We used the RecursiveCharacterTextSplitter to split the text into logical chunks, such as by keeping page sections (as denoted by headers) and code examples together.\nAllow max chunk size of 650 tokens. This led to an average chunk size of 450 tokens, which aligns with emerging best practices.\nRemove all chunks that are less than 15 tokens in length. These would sometimes show up in vector search results because they'd closely match the user query even though they provided little value for informing the answer generated by the ChatGPT API.\nAdd metadata to the beginning of each chunk before creating the embedding. This gives the chunk greater semantic meaning to create the embedding with. See the following section for more information about how adding metadata greatly improved the quality of our vector search results. \n\n##### Add chunk metadata\n\nThe most important improvement that we made to the chunking and embedding was to **prepend chunks with metadata**. For example, say you have this chunk of text about using MongoDB Atlas Vector Search:\n\n```txt\n### Procedure\n\n#### Go to the Search Tester.\n\n- Click the cluster name to view the cluster details.\n\n- Click the Search tab.\n\n- Click the Query button to the right of the index to query.\n\n#### View and edit the query syntax.\n\nClick Edit $search Query to view a default query syntax sample in JSON (Javascript Object Notation) format.\n```\n\nThis chunk itself has relevant information about performing a semantic search on Atlas data, but it lacks context data that makes it more likely to be found in the search results. \n\nBefore creating the vector embedding for the content, we add metadata to the top of the chunk to change it to: \n\n```txt\n---\ntags:\n - atlas\n - docs\nproductName: MongoDB Atlas\nversion: null\npageTitle: How to Perform Semantic Search Against Data in Your Atlas Cluster\nhasCodeBlock: false\n---\n\n### Procedure\n\n#### Go to the Search Tester.\n\n- Click the cluster name to view the cluster details.\n\n- Click the Search tab.\n\n- Click the Query button to the right of the index to query.\n\n#### View and edit the query syntax.\n\nClick Edit $search Query to view a default query syntax sample in JSON (Javascript Object Notation) format.\n```\n\nAdding this metadata to the chunk greatly improved the quality of our search results, especially when combined with adding metadata to the user's query on the server before using it in vector search, as discussed in the \u201cChat Server\u201d section.\n\n#### Example document from `embedded_content` collection\n\nHere\u2019s an example document from the `embedded_content` collection. 
The `embedding` field is indexed with MongoDB Atlas Vector Search.\n\n```js\n{\n  _id: new ObjectId(\"65448eb04ef194092777bcf6\"),\n  chunkIndex: 4,\n  sourceName: \"docs-atlas\",\n  url: \"https://mongodb.com/docs/atlas/atlas-vector-search/vector-search-tutorial/\",\n  text: '---\\ntags:\\n - atlas\\n - docs\\nproductName: MongoDB Atlas\\nversion: null\\npageTitle: How to Perform Semantic Search Against Data in Your Atlas Cluster\\nhasCodeBlock: false\\n---\\n\\n### Procedure\\n\\n\\n\\n\\n\\n#### Go to the Search Tester.\\n\\n- Click the cluster name to view the cluster details.\\n\\n- Click the Search tab.\\n\\n- Click the Query button to the right of the index to query.\\n\\n#### View and edit the query syntax.\\n\\nClick Edit $search Query to view a default query syntax sample in JSON (Javascript Object Notation) format.',\n  tokenCount: 151,\n  metadata: {\n    tags: [\"atlas\", \"docs\"],\n    productName: \"MongoDB Atlas\",\n    version: null,\n    pageTitle: \"How to Perform Semantic Search Against Data in Your Atlas Cluster\",\n    hasCodeBlock: false,\n  },\n  embedding: [0.002525234, 0.038020607, 0.021626275 /* ... */],\n  updated: new Date()\n};\n```\n\n#### Data ingestion flow diagram\n\n![Ingest data flow diagram][2]\n\n### Chat server\n\nWe built an Express.js server to coordinate RAG between the user, the MongoDB documentation, and the ChatGPT API. We used MongoDB Atlas Vector Search to perform a vector search on the ingested content in the `embedded_content` collection. We persist conversation information, including user and chatbot messages, to a `conversations` collection in the same MongoDB database.\n\nThe Express.js server is a fairly straightforward RESTful API with three routes:\n\n- `POST /conversations`: Create a new conversation.\n- `POST /conversations/:conversationId/messages`: Add a user message to a conversation and get back a RAG response to the user message. This route has the optional parameter `stream` to stream back a response or send it as a JSON object.\n- `POST /conversations/:conversationId/messages/:messageId/rating`: Rate a message.\n\nMost of the complexity of the server was in the `POST /conversations/:conversationId/messages` route, as this handles the whole RAG flow.\n\nWe were able to make dramatic improvements over our initial naive RAG implementation by adding what we call a **query preprocessor**.\n\n#### The query preprocessor\n\nA query preprocessor mutates the original user query into something that is more conversationally relevant and gets better vector search results.\n\nFor example, say the user inputs the following query to the chatbot:\n\n```txt\n$filter\n```\n\nOn its own, this query has little inherent semantic meaning and doesn't present a clear question for the ChatGPT API to answer.\n\nHowever, using a query preprocessor, we transform this query into:\n\n```txt\n---\nprogrammingLanguages:\n - shell\nmongoDbProducts:\n - MongoDB Server\n - Aggregation Framework\n---\nWhat is the syntax for filtering data in MongoDB?\n```\n\nThe application server then sends this transformed query to MongoDB Atlas Vector Search. It yields *much* better search results than the original query. 
The search query has more semantic meaning itself and also aligns with the metadata that we prepend during content ingestion to create a higher degree of semantic similarity for vector search.\n\nAdding the `programmingLanguages` and `mongoDbProducts` information to the query focuses the vector search to create a response grounded in a specific subset of the total surface area of the MongoDB product suite. For example, here we **would not** want the chatbot to return results for using the PHP driver to perform `$filter` aggregations, but vector search would be more likely to return that if we didn't specify that we're looking for examples that use the shell.\n\nAlso, telling the ChatGPT API to answer the question \"What is the syntax for filtering data in MongoDB?\" provides a clearer answer than telling it to answer the original \"$filter\".\n\nTo create a preprocessor that transforms the query like this, we used the library TypeChat. TypeChat takes a string input and transforms it into a JSON object using the ChatGPT API. TypeChat uses TypeScript types to describe the shape of the output data.\n\nThe TypeScript type that we use in our application is as follows:\n\n```ts\n/**\n You are an AI-powered API that helps developers find answers to their MongoDB\n questions. You are a MongoDB expert. Process the user query in the context of\n the conversation into the following data type.\n */\nexport interface MongoDbUserQueryPreprocessorResponse {\n /**\n One or more programming languages present in the content ordered by\n relevancy. If no programming language is present and the user is asking for\n a code example, include \"shell\".\n @example [\"shell\", \"javascript\", \"typescript\", \"python\", \"java\", \"csharp\",\n \"cpp\", \"ruby\", \"kotlin\", \"c\", \"dart\", \"php\", \"rust\", \"scala\", \"swift\",\n ...other popular programming languages]\n */\n programmingLanguages: string[];\n\n /**\n One or more MongoDB products present in the content. Which MongoDB products\n is the user interested in? Order by relevancy. Include \"Driver\" if the user\n is asking about a programming language with a MongoDB driver.\n @example [\"MongoDB Atlas\", \"Atlas Charts\", \"Atlas Search\", \"Aggregation\n Framework\", \"MongoDB Server\", \"Compass\", \"MongoDB Connector for BI\", \"Realm\n SDK\", \"Driver\", \"Atlas App Services\", ...other MongoDB products]\n */\n mongoDbProducts: string[];\n\n /**\n Using your knowledge of MongoDB and the conversational context, rephrase the\n latest user query to make it more meaningful. Rephrase the query into a\n question if it's not already one. The query generated here is passed to\n semantic search. If you do not know how to rephrase the query, leave this\n field undefined.\n */\n query?: string;\n\n /**\n Set to true if and only if the query is hostile, offensive, or disparages\n MongoDB or its products.\n */\n rejectQuery: boolean;\n}\n```\n\nIn our app, TypeChat uses the `MongoDbUserQueryPreprocessorResponse` schema and description to create an object structured on this schema.\n\nThen, using a simple JavaScript function, we transform the `MongoDbUserQueryPreprocessorResponse` object into a query that we embed and send to MongoDB Atlas Vector Search.\n\nWe also have the `rejectQuery` field to flag if a query is inappropriate. 
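\n\nThat transformation might look roughly like the following sketch (an illustration of the idea, not the exact code from our server). It assumes the `MongoDbUserQueryPreprocessorResponse` interface defined above and mirrors the metadata front matter we prepend to chunks at ingest time:\n\n```ts\nexport function preprocessorResponseToSearchQuery(\n  originalQuery: string,\n  preprocessed: MongoDbUserQueryPreprocessorResponse\n): string | null {\n  // Inappropriate queries are handled outside the normal RAG flow.\n  if (preprocessed.rejectQuery) {\n    return null;\n  }\n  const frontMatter = [\n    \"---\",\n    \"programmingLanguages:\",\n    ...preprocessed.programmingLanguages.map((lang) => `  - ${lang}`),\n    \"mongoDbProducts:\",\n    ...preprocessed.mongoDbProducts.map((product) => `  - ${product}`),\n    \"---\",\n  ].join(\"\\n\");\n  // Fall back to the original text if the LLM could not rephrase the query.\n  return `${frontMatter}\\n${preprocessed.query ?? originalQuery}`;\n}\n```\n\n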
When the `rejectQuery: true`, the server returns a static response to the user, asking them to try a different query.\n\n#### Chat server flow diagram\n\n![Chat data flow diagram][3]\n\n### React component UI\n\nOur front end is a React component built with the [LeafyGreen Design System. The component regulates the interaction with the chat server's RESTful API. \n\nCurrently, the component is only on the MongoDB docs homepage, but we built it in a way that it could be extended to be used on other MongoDB properties. \n\nYou can actually download the UI from npm with the `mongodb-chatbot-ui` package.\n\nHere you can see what the chatbot looks like in action:\n\n to the `embedding` field of the `embedded_content` collection:\n\n```json\n{\n \"type\": \"vectorSearch,\n \"fields\": {\n \"path\": \"embedding\",\n \"dimensions\": 1536,\n \"similarity\": \"cosine\",\n \"type\": \"vector\"\n }]\n}\n```\n\nTo run queries using the MongoDB Atlas Vector Search index, it's a simple aggregation operation with the [`$vectorSearch` operator using the Node.js driver:\n\n```ts\nexport async function getVectorSearchResults(\n collection: Collection,\n vectorEmbedding: number],\n filterQuery: Filter\n) {\n return collection\n .aggregate>([\n {\n $vectorSearch: {\n index: \"default\",\n vector: vectorEmbedding,\n path: \"embedding\",\n filter: filterQuery,\n limit: 3,\n numCandidates: 30\n },\n },\n {\n $addFields: {\n score: {\n $meta: \"vectorSearchScore\",\n },\n },\n },\n { $match: { score: { $gte: 0.8 } } },\n ])\n .toArray();\n}\n```\n\nUsing MongoDB to store the `conversations` data simplified the development experience, as we did not have to think about using a data store for the embeddings that is separate from the rest of the application data.\n\nUsing MongoDB Atlas for vector search and as our application data store streamlined our application development process so that we were able to focus on the core RAG application logic, and not have to think very much about managing additional infrastructure or learning new domain-specific query languages. \n\n## What we learned building a production RAG application\n\nThe MongoDB documentation AI chatbot has now been live for over a month and works pretty well (try it out!). It's still under active development, and we're going to roll it to other locations in the MongoDB product suite over the coming months.\n\nHere are a couple of our key learnings from taking the chatbot to production:\n\n- Naive RAG is not enough. However, starting with a naive RAG prototype is a great way for you to figure out how you need to extend RAG to meet the needs of your use case.\n- Red teaming is incredibly useful for identifying issues. Red team early in the RAG application development process, and red team often.\n- Add metadata to the content before creating embeddings to improve search quality.\n- Preprocess user queries with an LLM (like the ChatGPT API and TypeChat) before sending them to vector search and having the LLM respond to the user. The preprocessor should:\n- Make the query more conversationally and semantically relevant.\n- Include metadata to use in vector search.\n- Catch any scenarios, like inappropriate queries, that you want to handle outside the normal RAG flow.\n- MongoDB Atlas is a great database for building production RAG apps. \n\n## Build your own production-ready RAG application with MongoDB\n\nWant to build your own RAG application? We've made our source code publicly available as a reference architecture. 
Check it out on [GitHub.\n\nWe're also working on releasing an open-source framework to simplify the creation of RAG applications using MongoDB. Stay tuned for more updates on this RAG framework.\n\nQuestions? Comments? Join us in the MongoDB Developer Community forum.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbd38c0363f44ac68/6552802f9984b8dc525a96e1/281539442-64de6f3a-9119-4b28-993a-9f8c67832e88.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2016e04b84663d9f/6552806b4d28595c45afa7e9/281065694-88b0de91-31ed-4a18-b060-3384ac514b6c.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt65a54cdc0d34806a/65528091c787a440a22aaa1f/281065692-052b15eb-cdbd-4cf8-a2a5-b0583a78b765.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt58e9afb62d43763f/655280b5ebd99719aa13be92/281156988-2c5adb94-e2f0-4d4b-98cb-ce585baa7ba1.gif", "format": "md", "metadata": {"tags": ["Atlas", "React", "Node.js"], "pageDescription": "Explore how MongoDB enhances developer support with its innovative AI chatbot, leveraging Retrieval Augmented Generation (RAG) technology. This article delves into the technical journey of creating an AI-driven documentation tool, discussing the RAG architecture, challenges, and solutions in implementing MongoDB Atlas for a more intuitive and efficient developer experience. Discover the future of RAG applications and MongoDB's pivotal role in this cutting-edge field.", "contentType": "Article"}, "title": "Taking RAG to Production with the MongoDB Documentation AI Chatbot", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/kubernetes-operator-application-deployment", "action": "created", "body": "# Application Deployment in Kubernetes with the MongoDB Atlas Operator\n\nKubernetes is now an industry-wide standard when it comes to all things containers, but when it comes to deploying a database, it can be a bit tricky! However, tasks like adding persistence, ensuring redundancy, and database maintenance can be easily handled with MongoDB Atlas. Fortunately, the MongoDB Atlas Operator gives you the full benefits of using MongoDB Atlas, while still managing everything from within your Kubernetes cluster. In this tutorial, we\u2019ll deploy a MERN stack application in Kubernetes, install the Atlas operator, and connect our back end to Atlas using a Kubernetes secret.\n\n## Pre-requisites\n* `kubectl`\n* `minikube`\n* `jq`\n\nYou can find the complete source code for this application on Github. It\u2019s a mini travel planner application using MongoDB, Express, React, and Node (MERN). While this tutorial should work for any Kubernetes cluster, we\u2019ll be using Minikube for simplicity and consistency.\n\n## Getting started\n\nWhen it comes to deploying a database on Kubernetes, there\u2019s no simple solution. Apart from persistence and redundancy challenges, you may need to move data to specific geolocated servers to ensure that you comply with GDPR policies. Thus, you\u2019ll need a reliable, scalable, and resilient database once you launch your application into production. \n\nMongoDB Atlas is a full developer data platform that includes the database you love, which takes care of many of the database complexities you\u2019re used to. But, there is a gap between MongoDB Atlas and your Kubernetes cluster. 
Let\u2019s take a look at the MongoDB Atlas Operator by deploying the example MERN application with a back end and front end.\n\nThis application uses a three-tier application architecture, which will have the following layout within our Kubernetes cluster:\n\nTo briefly overview this layout, we\u2019ve got a back end with a deployment that will ensure we have two pods running at any given time, and the same applies for our front end. Traffic is redirected and configured by our ingress, meaning `/api` requests route to our back end and everything else will go to the front end. The back end of our application is responsible for the connection to the database, where we\u2019re using MongoDB Atlas Operator to link to an Atlas instance. \n\n## Deploying the application on Kubernetes\n\nTo simplify the installation process of the application, we can use a single `kubectl` command to deploy our demo application on Kubernetes. The single file we\u2019ll use includes all of the deployments and services for the back end and front end of our application, and uses containers created with the Dockerfiles in the folder. \n\nFirst, start by cloning the repository that contains the starting source code.\n\n```\ngit clone https://github.com/mongodb-developer/mern-k8s.git\n\ncd mern-k8s\n```\n\nSecondly, as part of this tutorial, you\u2019ll need to run `minikube tunnel` to access our services at `localhost`.\n\n```\nminikube tunnel\n```\n\nNow, let\u2019s go ahead and deploy everything in our Kubernetes cluster by applying the following `application.yaml` file.\n\n```\nkubectl apply -f k8s/application.yaml\n```\n\nYou can take a look at what you now have running in your cluster by using the `kubectl get` command.\n\n```\nkubectl get all\n```\n\nYou should see multiple pods, services, and deployments for the back end and front end, as well as replicasets. At the moment, they are more likely in a ContainerCreating status. This is because Kubernetes needs to pull the images to its local registry. As soon as the images are ready, the pods will start.\n\nTo see the application in action, simply head to `localhost` in your web browser, and the application should be live!\n\nHowever, you\u2019ll notice there\u2019s no way to add entries to our application, and this is because we haven\u2019t provided a connection string yet for the back end to connect to a MongoDB instance. For example, if we happen to check the logs for one of the recently created backend pods, we can see that there\u2019s a placeholder for a connection string.\n\n```\nkubectl logs pod/mern-k8s-back-d566cc88f-hhghl\n\nConnecting to database using $ATLAS_CONNECTION_STRING\nServer started on port 3000\nMongoParseError: Invalid scheme, expected connection string to start with \"mongodb://\" or \"mongodb+srv://\"\n```\n\nWe\u2019ve ran into a slight issue, as this demo application is using a placeholder (`$ATLAS_CONNECTION_STRING`) for the MongoDB connection string, which needs to be replaced by a valid connection string from our Atlas cluster. This issue can be taken care of with the MongoDB Atlas Operator, which allows you to manage everything from within Kubernetes and gives you the full advantages of using MongoDB Atlas, including generating a connection string as a Kubernetes secret.\n\n## Using the MongoDB Atlas Operator for Kubernetes\n\nAs there\u2019s currently a gap between your Kubernetes cluster and MongoDB Atlas, let\u2019s use the Atlas Operator to remedy this issue. 
Through the operator, we\u2019ll be able to manage our Atlas projects and clusters from Kubernetes. Specifically, getting your connection string to fix the error we received previously can be done now through Kubernetes secrets, meaning we won\u2019t need to retrieve it from the Atlas UI or CLI.\n\n### Why use the Operator?\n\nThe Atlas Operator bridges the gap between Atlas, the MongoDB data platform, and your Kubernetes cluster. By using the operator, you can use `kubectl` and your familiar tooling to manage and set up your Atlas deployments. Particularly, it allows for most of the Atlas functionality and tooling to be performed without having to leave your Kubernetes cluster. Installing the Atlas operator creates the Custom Resource Definitions that will connect to the MongoDB Atlas servers.\n\n### Installing the Atlas Operator\n\nThe installation process for the Atlas Operator is as simple as running a `kubectl` command. All of the source code for the operator can be found on the Github repository.\n\n```\nkubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-atlas-kubernetes/main/deploy/all-in-one.yaml\n```\n\nThis will create new custom resources in your cluster that you can use to create or manage your existing Atlas projects and clusters.\n\n### Creating a MongoDB Atlas cluster \n\nIf you haven't already, head to the Atlas Registration page to create your free account. This account will let you create a database on a shared server, and you won't even need a credit card to use it.\n\n### Set up access\n\nIn order for the operator to be able to manage your cluster, you will need to provide it with an API key with the appropriate permissions. Firstly, let\u2019s retrieve the organization ID.\n\nIn the upper left part of the Atlas UI, you will see your organization name in a dropdown. Right next to the dropdown is a gear icon. Clicking on this icon will open up a page called _Organization Settings_. From this page, look for a box labeled _Organization ID_. \n\nSave that organization ID somewhere for future use. You can also save it in an environment variable.\n\n```\nexport ORG_ID=60c102....bd\n```\n\n>Note: If using Windows, use:\n\n```\nset ORG_ID=60c102....bd\n```\n\nNext, let\u2019s create an API key. From the same screen, look for the _Access Manager_ option in the left navigation menu. This will bring you to the _Organization Access_ screen. In this screen, follow the instructions to create a new API key.\n\nThe key will need the **Organization Project Creator** role in order to create new projects and clusters. If you want to manage existing clusters, you will need to provide it with the **Organization Owner** role. Save the API private and public keys. You can also add them to the environment.\n\n```\nexport ATLAS_PUBLIC_KEY=iwpd...i\nexport ATLAS_PRIVATE_KEY=e13debfb-4f35-4...cb\n```\n\n>Note: If using Windows, use:\n\n```\nset ATLAS_PUBLIC_KEY=iwpd...i\nset ATLAS_PRIVATE_KEY=e13debfb-4f35-4...cb\n```\n\n### Create the Kubernetes secrets\n\nNow that you have created the API key, you can specify those values to the MongoDB Atlas Operator. By creating this secret in our Kubernetes cluster, this will give the operator the necessary permissions to create and manage projects and clusters for our specific Atlas account. \n\nYou can create the secret with `kubectl`, and to keep it simple, let\u2019s name our secret `mongodb-atlas-operator-api-key`. 
For the operator to be able to find this secret, it needs to be within the namespace `mongodb-atlas-system`.\n\n```\nkubectl create secret generic mongodb-atlas-operator-api-key \\\n --from-literal=\"orgId=$ORG_ID\" \\\n --from-literal=\"publicApiKey=$ATLAS_PUBLIC_KEY\" \\\n --from-literal=\"privateApiKey=$ATLAS_PRIVATE_KEY\" \\\n -n mongodb-atlas-system\n```\n\nNext, we\u2019ll need to label this secret, which helps the Atlas operator in finding the credentials.\n\n```\nkubectl label secret mongodb-atlas-operator-api-key atlas.mongodb.com/type=credentials -n mongodb-atlas-system\n```\n\n### Create a user password\n\nWe\u2019ll need a password for our database user in order to access our databases, create new databases, etc. However, you won't want to hard code this password into your yaml files. It\u2019s safer to save it as a Kubernetes secret. Just like the API key, this secret will need to be labeled too.\n\n```\nkubectl create secret generic atlaspassword --from-literal=\"password=mernk8s\"\nkubectl label secret atlaspassword atlas.mongodb.com/type=credentials\n```\n\n## Create and manage an Atlas deployment\n\nCongrats! You are now ready to manage your Atlas projects and deployments from Kubernetes. This can be done with the three new CRDs that were added to your cluster. Those CRDs are `AtlasProject` to manage projects, `AtlasDeployment` to manage deployments, and `AtlasDatabaseUser` to manage database users within MongoDB Atlas.\n\n* Projects: Allows you to isolate different database environments (for instance, development/qa/prod environments) from each other, as well as users/teams.\n* Deployments: Instance of MongoDB running on a cloud provider.\n* Users: Database users that have access to MongoDB database deployments.\n\nThe process of creating a project, user, and deployment is demonstrated below, but feel free to skip down to simply apply these files by using the `/atlas` folder.\n### Create a project\n\nStart by creating a new project in which the new cluster will be deployed. In a new file called `/operator/project.yaml`, add the following:\n```\napiVersion: atlas.mongodb.com/v1\nkind: AtlasProject\nmetadata:\n name: mern-k8s-project\nspec:\n name: \"MERN K8s\"\n projectIpAccessList:\n - ipAddress: \"0.0.0.0/0\"\n comment: \"Allowing access to database from everywhere (only for Demo!)\"\n```\n\nThis will create a new project called \"MERN K8s\" in Atlas. Now, this project will be open to anyone on the web. It\u2019s best practice to only open it to known IP addresses as mentioned in the comment.\n\n### Create a new database user\n\nNow, in order for your application to connect to this database, you will need a database user. To create this user, open a new file called `/operator/user.yaml`, and add the following:\n\n```\napiVersion: atlas.mongodb.com/v1\nkind: AtlasDatabaseUser\nmetadata:\n name: atlas-user\nspec:\n roles:\n - roleName: \"readWriteAnyDatabase\"\n databaseName: \"admin\"\n projectRef:\n name: mern-k8s-project\n username: mernk8s\n passwordSecretRef:\n name: atlaspassword\n```\n\nYou can see how the password uses the secret we created earlier, `atlaspassword`, in the `mern-k8s-project` namespace.\n\n### Create a deployment\n\nFinally, as you have a project setup and user to connect to the database, you can create a new deployment inside this project. 
In a new file called `/operator/deployment.yaml`, add the following yaml.\n\n```\napiVersion: atlas.mongodb.com/v1\nkind: AtlasDeployment\nmetadata:\n name: mern-k8s-cluster\nspec:\n projectRef:\n name: mern-k8s-project\n deploymentSpec:\n name: \"Cluster0\"\n providerSettings:\n instanceSizeName: M0\n providerName: TENANT\n regionName: US_EAST_1\n backingProviderName: AWS\n```\n\nThis will create a new M0 (free) deployment on AWS, in the US_EAST_1 region. Here, we\u2019re referencing the `mern-k8s-project` in our Kubernetes namespace, and creating a cluster named `Cluster0`. You can use a similar syntax to deploy in any region on AWS, GCP, or Azure. To create a serverless instance, see the serverless instance example.\n\n### Apply the new files\n\nYou now have everything ready to create this new project and cluster. You can apply those new files to your cluster using:\n\n```\nkubectl apply -f ./operator\n```\n\nThis will take a couple of minutes. You can see the status of the cluster and project creation with `kubectl`.\n\n```\nkubectl get atlasprojects\nkubectl get atlasdeployments\n```\n\nIn the meantime, you can go to the Atlas UI. The project should already be created, and you should see that a cluster is in the process of being created.\n\n### Get your connection string\n\nGetting your connection string to that newly created database can now be done through Kubernetes. Once your new database has been created, you can use the following command that uses `jq` to view the connection strings, without using the Atlas UI, by converting to JSON from Base64. \n\n```\nkubectl get secret mern-k8s-cluster0-mernk8s -o json | jq -r '.data | with_entries(.value |= @base64d)'\n\n{\n\u2026\n \"connectionStringStandard\": \"\",\n \"connectionStringStandardSrv\": \"mongodb+srv://mernk8s:mernk8s@cluster0.fb4qw.mongodb.net\",\n \"password\": \"mernk8s\",\n \"username\": \"mernk8s\"\n}\n```\n\n## Configure the application back end using the Atlas operator\n\nNow that your project and cluster are created, you can access the various properties from your Atlas instance. You can now access the connection string, and even configure your backend service to use that connection string. We\u2019ll go ahead and connect our back end to our database without actually specifying the connection string, instead using the Kubernetes secret we just created.\n\n### Update the backend deployment\n\nNow that you can find your connection string from within Kubernetes, you can use that as part of your deployment to specify the connection string to your back end.\n\nIn your `/k8s/application.yaml` file, change the `env` section of the containers template to the following:\n\n```\n env: \n - name: PORT\n value: \"3000\"\n - name: \"CONN_STR\"\n valueFrom:\n secretKeyRef:\n name: mern-k8s-cluster0-mernk8s\n key: connectionStringStandardSrv\n```\n\nThis will use the same connection string you've just seen in your terminal.\n\nSince we\u2019ve changed our deployment, you can apply those changes to your cluster using `kubectl`:\n\n```\nkubectl apply -f k8s/application.yaml\n```\n\nNow, if you take a look at your current pods:\n\n```\nkubectl get pods\n```\n\nYou should see that your backend pods have been restarted. You should now be able to test the application with the back end connected to our newly created Atlas cluster. Now, just head to `localhost` to view the updated application once the deployment has restarted. You\u2019ll see the application fully running, using this newly created cluster. 
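If the app still can't reach the database, it's worth confirming that the secret-backed environment variable actually made it into the new pods. A quick check with standard `kubectl` commands — the `mern-k8s-back` deployment name below is inferred from the pod names we saw earlier, so adjust it if yours differs:

```
# Verify the connection string env var is set inside a backend pod
kubectl exec deployment/mern-k8s-back -- printenv CONN_STR

# Tail the backend logs; the earlier MongoParseError should no longer appear
kubectl logs deployment/mern-k8s-back --tail=20
```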
\n\nIn addition, as you add items or perhaps clear the entries of the travel planner, you\u2019ll notice the entries added and removed from the \u201cCollections\u201d tab of the `Cluster0` database within the Atlas UI. Let\u2019s take a look at our database using MongoDB Compass, with username `mernk8s` and password `mernk8s` as we set previously.\n\n### Delete project\n\nLet\u2019s finish off by using `kubectl` to delete the Atlas cluster and project and clean up our workspace. We can delete everything from the current namespace by using `kubectl delete`\n\n \n```\nkubectl delete atlasdeployment mern-k8s-cluster\nkubectl delete atlasproject mern-k8s-project\n```\n\n## Summary\n\nYou now know how to leverage the MongoDB Atlas Operator to create and manage clusters from Kubernetes. We\u2019ve only demonstrated a small bit of the functionality the operator provides, but feel free to head to the documentation to learn more.\n\nIf you are using MongoDB Enterprise instead of Atlas, there is also an Operator available, which works in very similar fashion.\n\nTo go through the full lab by Joel Lord, which includes this guide and much more, check out the self-guided Atlas Operator Workshop.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Kubernetes", "Docker"], "pageDescription": "Get started with application deployment into a Kubernetes cluster using the MongoDB Atlas Operator.", "contentType": "Tutorial"}, "title": "Application Deployment in Kubernetes with the MongoDB Atlas Operator", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/demystifying-stored-procedures-mongodb", "action": "created", "body": "# Demystifying Stored Procedures in MongoDB\n\nIf you have ever used a SQL database, you might have heard about stored procedures. Stored procedures represent pre-written SQL code designed for reuse. By storing frequently used SQL queries as procedures, you can execute them repeatedly. Additionally, these procedures can be parameterized, allowing them to operate on specified parameter values. Oftentimes, developers find themselves wondering:\n\n- Does MongoDB support stored procedures? \n- Where do you write the logic for stored procedures in MongoDB? \n- How can I run a query every midnight, like a CRON job?\n\nIn today\u2019s article, we are going to answer these questions and demystify stored procedures in MongoDB.\n\n## Does MongoDB support stored procedures?\n\nEssentially, a stored procedure consists of a set of SQL statements capable of accepting parameters, executing tasks, and optionally returning values. In the world of MongoDB, we can achieve this using an aggregation pipeline. \n\nAn aggregation pipeline, in a nutshell, is basically a series of stages where the output from a particular stage is an input for the next stage, and the last stage\u2019s output is the final result. \n\nNow, every stage performs some sort of processing to the input provided to it, like filtering, grouping, shaping, calculating, etc. You can even perform vector search and full-text search using MongoDB\u2019s unified developer data platform, Atlas.\n\nLet's see how MongoDB\u2019s aggregation pipeline, Atlas triggers, and change streams together can act as a super efficient, powerful, and flexible alternative to stored procedures.\n\n## What is MongoDB Atlas?\n\nMongoDB Atlas is a multi-cloud developer data platform focused on making it stunningly easy to work with data. 
It offers the optimal environment for running MongoDB, the leading non-relational database solution.\n\nMongoDB's document model facilitates rapid innovation by directly aligning with the objects in your code. This seamless integration makes data manipulation more intuitive and efficient. With MongoDB, you have the flexibility to store data of diverse structures and adapt your schema effortlessly as your application evolves with new functionalities.\n\nThe Atlas database is available in 100+ regions across AWS, Google Cloud, and Azure. You can even take advantage of multi-cloud and multi-region deployments, allowing you to target the providers and regions that best serve your users. It has best-in-class automation and proven practices that guarantee availability, scalability, and compliance with the most demanding data security and privacy standards.\n\n## What is an Atlas Trigger?\n\nDatabase triggers enable the execution of server-side logic whenever a document undergoes addition, modification, or deletion within a connected Atlas cluster.\n\nUnlike conventional SQL data triggers confined to the database server, Atlas Triggers operate on a serverless compute layer capable of scaling independently of it.\n\nTriggers seamlessly invoke Atlas Functions and can also forward events to external handlers via Amazon EventBridge.\n\n## How can Atlas Triggers be invoked?\n\nAn Atlas Trigger can fire on:\n\n- A specific operation type in a given collection, like insert, update, and delete.\n- An authentication event, such as user creation or deletion.\n- A scheduled time, like a CRON job.\n\n## Types of Atlas Triggers\n\nThere are three types of triggers in Atlas:\n\n- Database triggers are used in scenarios where you want to respond when a document is inserted, changed, or deleted.
\n- Authentication triggers can be used where you want to respond when a database user is created, logged in, or deleted.\n- Scheduled triggers acts like a CRON job and run on a predefined schedule.\n\nRefer to Configure Atlas Triggers for advanced options.\n\n## Atlas Triggers in action\n\nLet's compare how stored procedures can be implemented in SQL and MongoDB using triggers, functions, and aggregation pipelines.\n\n### The SQL way\n\nHere's an example of a stored procedure in MySQL that calculates the total revenue for the day every time a new order is inserted into an orders table:\n\n```\nDELIMITER $$\n\nCREATE PROCEDURE UpdateTotalRevenueForToday()\nBEGIN\n DECLARE today DATE;\n DECLARE total_revenue DECIMAL(10, 2);\n\n -- Get today's date\n SET today = CURDATE();\n\n -- Calculate total revenue for today\n SELECT SUM(total_price) INTO total_revenue\n FROM orders\n WHERE DATE(order_date) = today;\n\n -- Update total revenue for today in a separate table or perform any other necessary action\n -- Here, I'm assuming you have a separate table named 'daily_revenue' to store daily revenue\n -- If not, you can perform any other desired action with the calculated total revenue\n\n -- Update or insert the total revenue for today into the 'daily_revenue' table\n INSERT INTO daily_revenue (date, revenue)\n VALUES (today, total_revenue)\n ON DUPLICATE KEY UPDATE revenue = total_revenue;\nEND$$\n\nDELIMITER ;\n```\n\nIn this stored procedure:\n\n- We declare two variables: today to store today's date and total_revenue to store the calculated total revenue for today.\n- We use a SELECT statement to calculate the total revenue for today from the orders table where the order_date matches today's date.\n- We then update the daily_revenue table with today's date and the calculated total revenue. If there's already an entry for today's date, it updates the revenue. Otherwise, it inserts a new row for today's date.\n\nNow, we have to create a trigger to call this stored procedure every time a new order is inserted into the orders table. Here's an example of how to create such a trigger:\n\n```\nCREATE TRIGGER AfterInsertOrder\nAFTER INSERT ON orders\nFOR EACH ROW\nBEGIN\n CALL UpdateTotalRevenueForToday();\nEND;\n```\n\nThis trigger will call the UpdateTotalRevenueForToday() stored procedure every time a new row is inserted into the orders table.\n\n### The MongoDB way\n\nIf you don\u2019t have an existing MongoDB Database deployed on Atlas, start for free and get 500MBs of storage free forever.\n\nNow, all we have to do is create an Atlas Trigger and implement an Atlas Function in it.\n\nLet\u2019s start by creating an Atlas database trigger. \n\n.\n\n, are powerful alternatives to traditional stored procedures. MongoDB Atlas, the developer data platform, further enhances development flexibility with features like Atlas Functions and Triggers, enabling seamless integration of server-side logic within the database environment.\n\nThe migration from stored procedures to MongoDB is not just a technological shift; it represents a paradigm shift towards embracing a future-ready digital landscape. As organizations transition, they gain the ability to leverage MongoDB's innovative solutions, maintaining agility, enhancing performance, and adhering to contemporary development practices.\n\nSo, what are you waiting for? 
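Before you go, here is a minimal sketch of what the Atlas Function attached to such a database trigger might look like. The database, collection, and field names simply mirror the earlier SQL example, and `mongodb-atlas` is the default linked-cluster service name — treat all of them as assumptions to adapt to your own app:

```
exports = async function (changeEvent) {
  // Default name of the linked data source; change it if your app uses another name.
  const db = context.services.get("mongodb-atlas").db("sales");

  const startOfDay = new Date();
  startOfDay.setHours(0, 0, 0, 0);

  // Aggregation pipeline standing in for SELECT SUM(total_price) ... WHERE DATE(order_date) = today
  const [result] = await db
    .collection("orders")
    .aggregate([
      { $match: { order_date: { $gte: startOfDay } } },
      { $group: { _id: null, revenue: { $sum: "$total_price" } } },
    ])
    .toArray();

  // Upsert today's figure, mirroring INSERT ... ON DUPLICATE KEY UPDATE
  await db.collection("daily_revenue").updateOne(
    { date: startOfDay },
    { $set: { revenue: result ? result.revenue : 0 } },
    { upsert: true }
  );
};
```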
Sign up for Atlas today and experience the modern alternative to stored procedures in MongoDB.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcb5b2b2db6b3a2b6/65dce8447394e52da349971b/image1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5cbb9842024e79f1/65dce844ae62f722b74bdfe0/image2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt529e154503e12f56/65dce844aaeb364e19a817e3/image3.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta2a7b461deb6879d/65dce844aaeb36b5d5a817df/image4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5801d9543ac94f25/65dce8446c65d723e087ae99/image5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5ceb23e853d05d09/65dce845330e0069f27f5980/image6.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt626536829480d1be/65dce845375999f7bc70a71b/image7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1b1366584ee0766a/65dce8463b4c4f91f07ace17/image8.png", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Node.js"], "pageDescription": "Let's see how MongoDB\u2019s aggregation pipeline, Atlas triggers, and change streams together can act as a super efficient, powerful, and flexible alternative to stored procedures.", "contentType": "Tutorial"}, "title": "Demystifying Stored Procedures in MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/efficiently-managing-querying-visual-data-mongodb-atlas-vector-search-fiftyone", "action": "created", "body": "# Efficiently Managing and Querying Visual Data With MongoDB Atlas Vector Search and FiftyOne\n\n between FiftyOne and MongoDB Atlas enables the processing and analysis of visual data with unparalleled efficiency!\n\nIn this post, we will show you how to use FiftyOne and MongoDB Atlas Vector Search to streamline your data-centric workflows and interact with your visual data like never before.\n\n## What is FiftyOne?\n\n for the curation and visualization of unstructured data, built on top of MongoDB. It leverages the non-relational nature of MongoDB to provide an intuitive interface for working with datasets consisting of images, videos, point clouds, PDFs, and more.\n\nYou can install FiftyOne from PyPi:\n\n```\npip install fiftyone\n```\n\nThe core data structure in FiftyOne is the Dataset, which consists of samples \u2014 collections of labels, metadata, and other attributes associated with a media file. You can access, query, and run computations on this data either programmatically, with the FiftyOne Python software development kit, or visually via the FiftyOne App.\n\nAs an illustrative example, we\u2019ll be working with the Quickstart dataset, which we can load from the FiftyOne Dataset Zoo:\n\n```python\nimport fiftyone as fo\nimport fiftyone.zoo as foz\n\n## load dataset from zoo\ndataset = foz.load_zoo_dataset(\"quickstart\")\n\n## launch the app\nsession = fo.launch_app(dataset)\n```\n\n\ud83d\udca1It is also very easy to load in your data.\n\nOnce you have a `fiftyone.Dataset` instance, you can create a view into your dataset (`DatasetView`) by applying view stages. These view stages allow you to perform common operations like filtering, matching, sorting, and selecting by using arbitrary attributes on your samples. 
\n\nTo programmatically isolate all high-confidence predictions of an `airplane`, for instance, we could run:\n\n```python\nfrom fiftyone import ViewField as F\n\nview = dataset.filter_labels(\n \"predictions\",\n (F(\"label\") == \"airplane\") & (F(\"confidence\") > 0.8)\n)\n```\n\nNote that this achieves the same result as the UI-based filtering in the last GIF.\n\nThis querying functionality is incredibly powerful. For a full list of supported view stages, check out this View Stages cheat sheet. What\u2019s more, these operations readily scale to billions of samples. How? Simply put, they are built on MongoDB aggregation pipelines!\n\nWhen you print out the `DatasetView`, you can see a summary of the applied aggregation under \u201cView stages\u201d:\n\n```python\n# view the dataset and summary\nprint(view)\n```\n\n```\nDataset: quickstart\nMedia type: image\nNum samples: 14\nSample fields:\n id: fiftyone.core.fields.ObjectIdField\n filepath: fiftyone.core.fields.StringField\n tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)\n metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.ImageMetadata)\n ground_truth: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)\n uniqueness: fiftyone.core.fields.FloatField\n predictions: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)\nView stages:\n 1. FilterLabels(field='predictions', filter={'$and': {...}, {...}]}, only_matches=True, trajectories=False)\n```\n\nWe can explicitly obtain the MongoDB aggregation pipeline when we create directly with the `_pipeline()` method:\n\n```python\n## Inspect the MongoDB agg pipeline\nprint(view._pipeline())\n```\n\n```\n[{'$addFields': {'predictions.detections': {'$filter': {'input': '$predictions.detections',\n 'cond': {'$and': [{'$eq': ['$$this.label', 'airplane']},\n {'$gt': ['$$this.confidence', 0.8]}]}}}}},\n {'$match': {'$expr': {'$gt': [{'$size': {'$ifNull': ['$predictions.detections',\n []]}},\n 0]}}}]\n```\n\nYou can also inspect the underlying MongoDB document for a sample with the to_mongo() method.\n\nYou can even create a DatasetView by applying a MongoDB aggregation pipeline directly to your dataset using the Mongo view stage and the add_stage() method:\n\n```python\n# Sort by the number of objects in the `ground_truth` field\n\nstage = fo.Mongo([\n {\n \"$addFields\": {\n \"_sort_field\": {\n \"$size\": {\"$ifNull\": [\"$ground_truth.detections\", []]}\n }\n }\n },\n {\"$sort\": {\"_sort_field\": -1}},\n {\"$project\": {\"_sort_field\": False}},\n])\nview = dataset.add_stage(stage)\n```\n\n## Vector Search With FiftyOne and MongoDB Atlas\n\n![Searching images with text in the FiftyOne App using multimodal vector embeddings and a MongoDB Atlas Vector Search backend.][3]\n\nVector search is a technique for indexing unstructured data like text and images by representing them with high-dimensional numerical vectors called *embeddings*, generated from a machine learning model. This makes the unstructured data *searchable*, as inputs can be compared and assigned similarity scores based on the alignment between their embedding vectors. The indexing and searching of these vectors are efficiently performed by purpose-built vector databases like [MongoDB Atlas Vector Search.\n\nVector search is an essential ingredient in retrieval-augmented generation (RAG) pipelines for LLMs. 
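Concretely, "alignment" between embedding vectors is usually measured with a metric like cosine similarity. This toy snippet shows only the underlying math — it is not FiftyOne or Atlas code:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    ## 1.0 means the vectors point in the same direction, i.e., very similar inputs
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(np.array([0.2, 0.1, 0.9]), np.array([0.25, 0.05, 0.85])))
```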
Additionally, it enables a plethora of visual and multimodal applications in data understanding, like finding similar images, searching for objects within your images, and even semantically searching your visual data using natural language.\n\nNow, with the integration between FiftyOne and MongoDB Atlas, it is easier than ever to apply vector search to your visual data! When you use FiftyOne and MongoDB Atlas, your traditional queries and vector search queries are connected by the same underlying data infrastructure. This streamlines development, leaving you with fewer services to manage and less time spent on tedious ETL tasks. Just as importantly, when you mix and match traditional queries with vector search queries, MongoDB can optimize efficiency over the entire aggregation pipeline. \n\n### Connecting FiftyOne and MongoDB Atlas\n\nTo get started, first configure a MongoDB Atlas cluster:\n\n```\nexport FIFTYONE_DATABASE_NAME=fiftyone\nexport FIFTYONE_DATABASE_URI='mongodb+srv://$USERNAME:$PASSWORD@fiftyone.XXXXXX.mongodb.net/?retryWrites=true&w=majority'\n```\n\nThen, set MongoDB Atlas as your default vector search back end:\n\n```\nexport FIFTYONE_BRAIN_DEFAULT_SIMILARITY_BACKEND=mongodb\n```\n\n### Generating the similarity index\n\nYou can then create a similarity index on your dataset (or dataset view) by using the FiftyOne Brain\u2019s `compute_similarity()` method. To do so, you can provide any of the following:\n\n1. An array of embeddings for your samples\n2. The name of a field on your samples containing embeddings\n3. The name of a model from the FiftyOne Model Zoo (CLIP, OpenCLIP, DINOv2, etc.), to use to generate embeddings\n4. A `fiftyone.Model` instance to use to generate embeddings\n5. A Hugging Face `transformers` model to use to generate embeddings\n\nFor more information on these options, check out the documentation for compute_similarity().\n\n```python\nimport fiftyone.brain as fob\nfob.compute_similarity(\n dataset,\n model=\"clip-vit-base32-torch\", ### Use a CLIP model\n brain_key=\"your_key\",\n embeddings='clip_embeddings',\n)\n```\n\nWhen you generate the similarity index, you can also pass in configuration parameters for the MongoDB Atlas Vector Search index: the `index_name` and what `metric` to use to measure similarity between vectors.\n\n### Sorting by Similarity\n\nOnce you have run `compute_similarity()` to generate the index, you can sort by similarity using the MongoDB Atlas Vector Search engine with the `sort_by_similarity()` view stage. In Python, you can specify the sample (whose image) you want to find the most similar images to by passing in the ID of the sample:\n\n```python\n## get ID of third sample\nquery = dataset.skip(2).first().id\n\n## get 25 most similar images\nview = dataset.sort_by_similarity(query, k=25, brain_key=\"your_key\")\nsession = fo.launch_app(view)\n```\n\nIf you only have one similarity index on your dataset, you don\u2019t need to specify the `brain_key`. 
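Because the similarity index above was generated with a CLIP model, the same MongoDB Atlas-backed index can also be queried with free-form text. A small sketch, reusing the `your_key` index from earlier (the prompt string is just an example):

```python
## Natural-language search against the same similarity index
text_query = "airplanes on the runway"

view = dataset.sort_by_similarity(text_query, k=25, brain_key="your_key")
session = fo.launch_app(view)
```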
\n\nWe can achieve the same result with UI alone by selecting an image and then pressing the button with the image icon in the menu bar:\n\n!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb7504ea028d24cc7/65df8d81eef4e3804a1e6598/1.gif\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1a282069dd09ffbf/65df8d976c65d7a87487e309/2.gif\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt64eb99496c21ea9f/65df8db7c59852e860f6bb3a/3.gif\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5d0148a55738e9bf/65df8dd3eef4e382751e659f/4.gif\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt27b44a369441ecd8/65df8de5ffa94a72a33d40fb/5.gif", "format": "md", "metadata": {"tags": ["Python", "AI"], "pageDescription": "", "contentType": "Tutorial"}, "title": "Efficiently Managing and Querying Visual Data With MongoDB Atlas Vector Search and FiftyOne", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/atlas-device-sdks-with-dotnet-maui", "action": "created", "body": "# Online/Offline Data-Capable Cross-Platform Apps with MongoDB Atlas, Atlas Device SDKs and .NET MAUI\n\nIn a world of always-on, always-connected devices, it is more important than ever that apps function in a way that gives a user a good experience. But as well as the users, developers matter too. We want to be able to feel productive and focus on delivery and innovation, not solving common problems.\n\nIn this article, we will look at how you can mix .NET MAUI with MongoDB\u2019s Atlas App Services, including the Atlas Device SDKs mobile database, for online/offline-capable apps without the pain of coding for network handling and errors.\n\n## What are Atlas Device SDKs?\nAtlas Device SDKs, formerly Realm is an alternative to SQLite that takes advantage of MongoDB\u2019s document data model. It is a mobile-first database that has been designed for modern data-driven applications. Although the focus of this article is the mobile side of Atlas Device SDK, it actually also supports the building of web, desktop, and IoT apps.\n\nAtlas Device SDKs have some great features that save a lot of time as a developer. It uses an object-oriented data model so you can work directly with the native objects without needing any Object Relational Mappers (ORMs) or Data Access Objects (DAO). This also means it is simple to start working with and scales well.\n\nPlus, Atlas Device SDKs are part of the Atlas App Services suite of products that you get access to via the SDK. This means that Realm also has automatic access to a built-in, device-to-cloud sync feature. It uses a local database stored on the device to allow for always-on functionality. MongoDB also has Atlas, a document database as a service in the cloud, offering many benefits such as resilience, security, and scaling. The great thing about device sync with App Services is it links to a cloud-hosted MongoDB Atlas cluster, automatically taking care of syncing between them, including in the event of changes in network connectivity. By taking advantage of Atlas, you can share data between multiple devices, users, and the back ends using the same database cluster. \n\n## Can you use Atlas Device SDKs with .NET MAUI?\nIn short, yes! 
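If you want to follow along in your own project, the SDK ships as the `Realm` package on NuGet (package name correct at the time of writing), so adding it to a MAUI app is a single command:

```
dotnet add package Realm
```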
There is a .NET SDK available that supports .NET, MAUI (including Desktop), Universal Windows Platform (UWP), and even Unity.\n\nIn fact, Maddy Montaquila (Senior PM for MAUI at Microsoft) and I got talking about fun project ideas and came up with HouseMovingAssistant, an app built using .NET MAUI and Atlas Device SDKs, for tracking tasks related to moving house. \n\nIt takes advantage of all the great features of Atlas Device SDKs and App Services, including device sync, data partitioning based on the logged-in user, and authentication to handle the logging in and out.\n\nIt even uses another MongoDB feature, Charts, which allows for great visualizations of data in your Atlas cluster, without having to use any complex graphing libraries!\n\n## Code ##\nThe actual code for working with Atlas Device SDKs is very simple and straightforward. This article isn't a full tutorial, but we will use code snippets to show how simple it is. If you want to see the full code for the application, you can find it on GitHub.\n\n> Note that despite the product update name, the Realm name is still used in the library name and code for now so you will see references to Realm throughout the next sections.\n\n### Initialization\n```csharp\nRealmApp = Realms.Sync.App.Create(AppConfig.RealmAppId);\n```\nThis code creates your Realm Sync App and lives inside of App.Xaml.cs.\n\n```csharp\nPartitionSyncConfiguration config = new PartitionSyncConfiguration($\"{App.RealmApp.CurrentUser.Id}\", App.RealmApp.CurrentUser); return Realm.GetInstance(config);\n```\nThe code above is part of an initialization method and uses the RealmApp from earlier to create the connection to your app inside of App Services. This gives you access to features such as authentication (and more), as well as your Atlas data.\n\n### Log in/create an account ###\nWorking with authentication is equally as simple. Creating an account is as easy as picking an authentication type and passing the required credentials.\n\nThe most simple way is email and password auth using details entered in a form in your mobile app.\n\n```csharp\nawait App.RealmApp.EmailPasswordAuth.RegisterUserAsync(EmailText, PasswordText);\n```\n\nLogging in, too, is one call.\n\n```csharp\nvar user = await App.RealmApp.LogInAsync(Credentials.EmailPassword(EmailText, PasswordText));\n```\n\nOf course, you can add conditional handling around this, such as checking if there is already a user object available and combining that with navigation built into MAUI, such as Shell, to simply skip logging in if the user is already logged in:\n\n```csharp\nif (user != null)\n {\n await AppShell.Current.GoToAsync(\"///Main\");\n }\n```\n\n### Model\nAs mentioned earlier in the article, Atlas Device SDKs can work with simple C# objects with properties, and use those as fields in your document, handling mapping between object and document.\n\nOne example of this is the MovingTask object, which represents a moving task. 
Below is a snippet of part of the MovingTask.cs model object.\n\n```csharp\nPrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(\"owner\")]\n public string Owner { get; set; }\n\n [MapTo(\"name\")]\n [Required]\n public string Name { get; set; }\n\n [MapTo(\"_partition\")]\n [Required]\n public string Partition { get; set; }\n\n [MapTo(\"status\")]\n [Required]\n public string Status { get; set; }\n\n [MapTo(\"createdAt\")] \n public DateTimeOffset CreatedAt { get; set; }\n\n```\n\nIt uses standard properties, with some additional attributes from the [MongoDB driver, which mark fields as required and also say what fields they map to in the document. This is great for handling different upper and lower case naming conventions, differing data types, or even if you wanted to use a totally different name in your documents versus your code, for any reason.\n\nYou will notice that the last property uses the DateTimeOffset data type, which is part of C#. This isn\u2019t available as a data type in a MongoDB document, but the driver is able to handle converting this to and from a supported type without requiring any manual code, which is super powerful.\n\n## Do Atlas Device SDKs support MVVM?\nAbsolutely. It fully supports INotifyPropertyChanged events, meaning you don\u2019t have to worry about whether the data is up to date. You can trust that it is. This support for events means that you don\u2019t need to have an extra layer between your viewmodel and your database if you don\u2019t want to.\n\nAs of Realm 10.18.0 (as it was known at the time), there is even support for Source Generators, making it even easier to work with Atlas Device SDKs and MVVM applications.\n\nHouseMovingAssistant fully takes advantage of Source Generators. In fact, the MovingTask model that we saw earlier implements IRealmObject, which is what brings in source generation to your models.\n\nThe list of moving tasks visible on the page uses a standard IEnumerable type, fully supported by CollectionView in MAUI.\n\n```csharp\nObservableProperty]\n IEnumerable movingTasks;\n```\n Populating that list of tasks is then easy thanks to LINQ support.\n\n```chsarp\nMovingTasks = realm.All().OrderBy(task => task.CreatedAt);\n```\n## What else should I know?\nThere are a couple of extra things to know about working with Atlas Device SDKs from your .NET MAUI applications.\n\n### Services\nAlthough as discussed above, you can easily and safely talk directly to the database (via the SDK) from your viewmodel, it is good practice to have an additional service class. This could be in a different/shared project that is used by other applications that want to talk to Atlas, or within your application for an added abstraction.\n\nIn HouseMovingAssistant, there is a RealmDatabaseService.cs class which provides a method for fetching the Realm instance. This is because you only want one instance of your Realm at a time, so it is better to have this as a public method in the service.\n\n```csharp\npublic static Realm GetRealm()\n {\n PartitionSyncConfiguration config = new PartitionSyncConfiguration($\"{App.RealmApp.CurrentUser.Id}\", App.RealmApp.CurrentUser);\n return Realm.GetInstance(config);\n }\n\n```\n### Transactions\nBecause of the way Atlas Device SDKs work under the hood, any kind of operation to it \u2014 be it read, create, update, or delete \u2014 is done inside what is called a write transaction. 
The use of transactions means that actions are grouped together as one and if one of those fails, the whole thing fails. \n\nCarrying out a transaction inside the Realm .NET SDK is super easy. We use it in HouseMovingAssistant for many features, including creating a new task, updating an existing task, or deleting one.\n\n```csharp\nvar task =\n new MovingTask\n {\n Name = MovingTaskEntryText,\n Partition = App.RealmApp.CurrentUser.Id,\n Status = MovingTask.TaskStatus.Open.ToString(),\n Owner = App.RealmApp.CurrentUser.Profile.Email,\n CreatedAt = DateTimeOffset.UtcNow\n };\n\n realm.Write(() =>\n {\n realm.Add(task);\n });\n```\nThe code above creates a task using the model we saw earlier and then inside a write transaction, adds that object to the Realm database, which will in turn update the Atlas cluster it is connected to. This is a great example of how you don\u2019t need an ORM, as we create an object from our model class and can directly add it, without needing to do anything extra.\n## Summary\nIn this article, we have gone on a whistle stop tour of .NET MAUI with Atlas Device SDKs (formerly Realm), and how you can quickly get up and running with a data capable application, with online/offline support and no need for an ORM.\n\nThere is so much more you can do with Atlas Device SDKs, MongoDB Atlas, and the App Services platform. A great article to read next is on [advanced data modelling with Realm and .NET by the lead engineer for the Atlas Device SDKs .NET team, Nikola Irinchev.\n\nYou can get started today by signing up to an Atlas account and discovering the world of Realm, Atlas and Atlas App Services!", "format": "md", "metadata": {"tags": ["Realm", "C#", ".NET", "Mobile"], "pageDescription": "A tutorial showing how to get started with Atlas Device SDKs, MongoDB Atlas and .NET MAUI", "contentType": "Tutorial"}, "title": "Online/Offline Data-Capable Cross-Platform Apps with MongoDB Atlas, Atlas Device SDKs and .NET MAUI", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-swiftui-property-wrappers-mvi-meetup", "action": "created", "body": "# Realm SwiftUI Property wrappers and MVI architecture Meetup\n\nDidn't get a chance to attend the Realm SwiftUI Property wrappers and\nMVI architecture Meetup? Don't worry, we recorded the session and you\ncan now watch it at your leisure to get you caught up.\n\n>Realm SwiftUI Property wrappers and MVI architecture\n:youtube]{vid=j72YIxJw4Es}\n\nIn this second installment of our SwiftUI meetup series, Jason Flax, the lead for Realm's iOS team, returns to dive into more advanced app architectures using SwiftUI and Realm. We will dive into what property wrappers SwiftUI provides and how they integrate with Realm, navigation and how to pass state between views, and where to keep your business logic in a MVI architecture.\n\n> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. 
[Get started now by build: Deploy Sample for Free!\n\nNote - If you missed our first SwiftUI & Realm talk, you can review it here before the talk and get all your questions answered -\n.\n\nIn this meetup, Jason spends about 35 minutes on \n- StateObject, ObservableObject, EnvironmentObject\n- Navigating between Views with state\n- Business Logic and Model-View-Intent Best Practices\n\nAnd then we have a full 25 minutes of live Q&A with our Community. For those of you who prefer to read, below we have a full transcript of the meetup too. As this is verbatim, please excuse any typos or punctuation errors!\n\nThroughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.\n\nTo learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.\n\n## Transcript\n\n**Jason Flax**: Great. So, as I said, I'm Jason Flax. I'm the lead engineer of the Realm Cocoa team. Potentially soon to be named the Realm Swift team. But I will not go into that. It's raining outside, but it smells nice. So, let's begin the presentation. So, here's, today's agenda. First let's go over. What is an architecture? It's a very loaded word. It means a number of things, for developers of any level, it's an important term to have down pat. W hat are the common architectures? There's going to be a lot of abbreviations that you hear today. How does SwiftUI change the playing field? SwiftUI is two-way data-binding makes the previous architecture somewhat moot in certain cases. And I'm here to talk about that. And comparing the architectures and pretty much injecting into this, from my professional opinion what the most logical architecture to use with SwiftUI is. If there is time, I have some bonus slides on networking and testing using the various architectures.\n\n**Jason Flax**: But if there is not, I will defer to the Q&A where you all get to ask a bunch of questions that Shane had enumerated before. So, let us begin. What is an architecture? x86, PowerPC, ARM. No, it's not, this is not, we're not talking about hardware architecture here. Architecture is short for an architectural pattern. In my opinion, hardware is probably too strong of a word or architecture is too strong of a word. It's just a term to better contextualize how data is displayed and consumed it really helps you organize your code. In certain cases, it enhances testability. In certain cases, it actually makes you have to test more code. Basically the patterns provide guidelines and a unified or United vocabulary to better organize the software application.\n\n**Jason Flax**: If you just threw all of your code onto a view, that would be a giant mess of spaghetti code. And if you had a team of 20 people all working on that, it would be fairly measurable and a minimum highly disorganized. The images here, just MVC, MVVM, Viper, MBI. These are the main ones I'm going to talk about today. There are a number of architectures I won't really be touching on. 
I think the notable missing one from this talk will be CLEAN architecture, which I know is becoming somewhat big but I can address that later when we talk or in the Q&A.\n\n**Jason Flax**: Let's go over some of those common architectures. So, from the horse's mouth, the horse here being Apple the structure of UIKit apps is based on the Model-View-Controller design pattern, wherein objects are divided by their purpose. Model objects manage the app's data and business logic. View objects provide the visual representation of your data. Controller objects acts as a bridge between your model and view objects, moving data between them at appropriate times.\n\n**Jason Flax**: Going over this, the user uses the controller by interacting with the view, the view talks to the controller, generally controllers are going to be one-to-one with the view, the controller then manipulates the data model, which in this case generally speaking would actually just be your data structure/the data access layer, which could be core data or Realm. The model then updates the view through the controller is displayed on the view the user sees it, interacts with it goes in a big circle. This is a change from the original MVC model. I know there are a few people in this, attending right now that could definitely go into a lot more history than I can. But the original intent was basically all the data and logic was in the model.\n\n**Jason Flax**: The controller was just for capturing user input and passing it to the model. And the communication was strictly user to controller, to model, to view. With no data flowing the other way this was like the OG unidirectional data flow. But over time as the MVC model evolved controllers got heavier and heavier and heavier. And so what you ended up with is MVC evolving into these other frameworks, such as MVVM. MVVM, Viper, CLEAN. They didn't come about out of nowhere. People started having issues with MVC, their apps didn't scale well, their code didn't scale well. And so what came about from that was new architectures or architectural design patterns.\n\n**Jason Flax**: Let's go over Model-View-ViewModel. It's a bit of a mouthful. So, in MVVM the business logic is abstracting to an object called a ViewModel. The ViewModel is in charge of providing data to the view and updating the view when data changes, traditionally this is a good way to separate business logic from the view controller and offer a much cleaner way to test your code. So, generally here, the ViewModel is going to be one-to-one with the model as opposed to the view and what the ViewModel ends up being is this layer of business logic and presentation logic. And that's an important distinction from what the controller previously did as the controller was more associated with the view and less so the model. So, what you end up with, and this will be the pattern as I go through each of the architectures, you're going to end up with these smaller and smaller pieces of bite-sized code. So, in this case, maybe you have more models than views. So, MVVM makes more sense.\n\n**Jason Flax**: So ViewModel is responsible for persistence, networking and business logic. ViewModel is going to be your data access layer, which is awkward when it comes to something like Realm, since Realm is the data access layer. I will dig more into that. But with SwiftUI you end up with a few extra bits that don't really make much sense anymore. This is just a quick diagram showing the data flow with MVVM. 
The ViewModel binds to the view; the user inputs commands or intent or actions or whatever we want to call them. The commands go through to the ViewModel, which effectively filters and calculates what needs to be updated on the model; it then reads back from the model and updates the view, and does that in sort of a circular pattern.\n\n**Jason Flax**: Let's go over Viper. So, Viper makes the pieces even smaller. It is a design pattern used to separate logic for each specific module of your app. So, the View is your SwiftUI view. A View owns a Presenter and a Router. The Interactor is where your business logic for your module lives; the Interactor talks to your Entity and other services such as networking. I'll get back to this in a second. The Presenter owns the Interactor and is in charge of delivering updates to the view when there is new data to display or an event is triggered. So, in this case, the breakdown, if we're associating concepts here, is that the Presenter is associated with the view. It's closer to your view controller, and the Interactor is more associated with the model. So, it's closer to your ViewModel. So, now we're mixing all these concepts together, but breaking things and separating things into smaller parts.\n\n**Jason Flax**: In this case, the data flow is going to be a bit different. Your View is going to interact with the Presenter, which interacts with the Interactor, which interacts with the Entity. So you end up with this sort of onion that you're slowly peeling back. The Entity is your data model. I'm not entirely sure why they didn't call it the model; my guess is that Viper sounds a lot better than \\[inaudible 00:06:46\\], which doesn't really work. The Router handles the creation of a View for a particular destination. This is a weird one. I'll touch on it a couple of times in the talk.\n\n**Jason Flax**: Routers made more sense when the view flow was executed by storyboards and segues and nibs and all that kind of thing. Now with SwiftUI, because it's all programmatic, routers don't really make as much sense. That said, maybe in a complex enough application, I could be convinced that a router might elucidate the flow of views, but at the moment I haven't seen anything yet. But that is what the router is meant to do anyway. This is a brief diagram on how Viper works, sorry. So, again, the View owns and sends actions to the Presenter, the Presenter owns and asks for updates and sends updates to the Interactor, and the Interactor actually manipulates the data: it edits the entity, it contains the data access layer, it'll save things and load things and will update things.\n\n**Jason Flax**: And then the Interactor will notify the Presenter, which then notifies the View. As you can see this ... For anybody that's actually used SwiftUI, this is immediately not going to make much sense considering the way that you actually bind data to Views. MVI, this is kind of our end destination; this is where we want to end up at the end of our journey. MVI is fairly simple, to be honest; it's closer to an ideal, it's closer to a concept than an architecture. You have a user, the user has intent, that intent changes the model, that model changes the view, the user sees that, and they can keep acting on it. This has not really been possible previously; UIKit was fairly complex, and apps grow in complexity. 
Having such a simple thing would not have been enough in previous frameworks, especially uni-directional ones where a circular pattern like this doesn't really make sense.\n\n**Jason Flax**: But now with SwiftUI, there's so much abstracted away for us, especially with Realm, especially with live objects, especially with property wrappers that update the view automatically under the hood and update your Realm objects automatically under the hood. We can finally achieve this, which is what I'm going to be getting at in this talk. So, let's go over some of those common concepts. Throwing around terms like View and Presenter and Model is really easy to do if you're familiar, but just in case anybody isn't: The View is what the user sees and interacts with; all architectural patterns have a view. It is the monitor. It is your phone. It is your whatever. It is the thing that the user touches, that the user plays with.\n\n**Jason Flax**: The User is the person behind the screen. The user's actions can be defined as intent or interactions; actions trigger view updates, which can trigger business logic. And I do feel the need to explicitly say one thing that is also missing from this presentation: not all actions, not all intent, will be user-driven. There are things that can be triggered by timers, things that can be triggered by network requests, things that don't necessarily line up perfectly with the model. That said, I felt comfortable leaving that out of the talk because it actually doesn't affect your code that much, at least if you're using MVI.\n\n**Jason Flax**: The Model. So, this term gets a bit complicated. Basically, it's the central component of the pattern. It's the application's state and data structure, independent of the user interface; it manages the data, logic, and rules of the application. The reason that this gets a bit wonky when describing it is that oftentimes people just speak about the model as if it was the data structures themselves, as if it was my struct Foo with fields, vars, whatever. That is the model. It's also an object. It's also potentially an instance of an object. So, hopefully over this talk, I can better elaborate on what the model actually is.\n\n**Jason Flax**: The Presenter. So, this is the Presenter or the ViewModel, whatever you want to call it. It calculates and projects what data needs to actually be displayed on the view. It's more often referred to as the ViewModel than the Presenter. Again, frameworks with two-way data-binding obviate the need for this. It is awkward to go to a presenter when you don't necessarily need to. So, let's get to the meat of this: how does SwiftUI actually change the playing field? For starters, there is no controller. It does not really make sense to have a controller. It was very much a UIKit concept. Ironically or coincidentally, by eliminating the controller, this graphic actually ends up looking a lot like MVI. The user manipulates the model through intent, the model updates the view, the user sees the view, and it goes around in a big circle. I will touch on that a bit more later. But MVC really doesn't make as much sense anymore if you consider it as the new school MVC.\n\n**Jason Flax**: MVVM: I've added a few nice green arrows here. The ViewModel doesn't really make sense anymore. Again, this is the presentation layer. 
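\n\nAs a deliberately tiny sketch of what that two-way binding looks like with no ViewModel in the middle, assuming nothing more than a plain observable model class (illustrative names, not the talk's code):\n\n```swift\nimport SwiftUI\n\nfinal class Meeting: ObservableObject {\n    @Published var title = \"\"\n    @Published var lengthInMinutes = 30\n}\n\nstruct MeetingEditView: View {\n    @ObservedObject var meeting: Meeting\n\n    var body: some View {\n        Form {\n            // Two-way binding: edits flow straight into the model,\n            // and model changes flow straight back into the view.\n            TextField(\"Title\", text: $meeting.title)\n            Stepper(\"Length in minutes\", value: $meeting.lengthInMinutes, in: 5...60)\n        }\n    }\n}\n```\n\n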
When you can bind directly to the data, all you're doing by creating a ViewModel with SwiftUI in a two-way data-binding framework is shifting responsibility in a way that creates more code to test and doesn't do much else especially these days where you can just throw that business logic on the model. If you were doing traditional MVVM with SwiftUI, the green arrows would be going from, View to ViewModel and you could create the same relationship. It's just extra code it's boiler plate. Viper lot of confusion here. Again not really sure that the router makes a lot of sense. I can be convinced otherwise. Presentation view, the presenter doesn't really again makes sense it's basically your ViewModel. Interactor also doesn't make sense. It is again, because the view is directly interacting with the model itself or the entity. This piece is again, kind of like, eh, what are you doing here?\n\n**Jason Flax**: There's also an element here that as you keep building these blocks with Viper and it's both a strength and weakness of Viper. So, the cool thing about it you end up with all these cool little pieces to test with, but if the 10,000 line controller is spaghetti code. 10,000 lines of Viper code is ravioli code. Like these little pieces end up being overwhelming in themselves and a lot of them do nothing but control a lot at the same time. We'll get more into that when I show the actual code itself. And here's our golden MVI nothing changes. This is the beauty of it. This is the simplicity of it. User interacts, changes the model changes the view ad infinitum.\n\n**Jason Flax**: Now, let's actually compare these with code. So, the app that we will be looking at today, Apple came out with a Scrumdinger app to show offs with UI. It is basically an app that lets you create scrums. If you're not familiar with the concept of scrum it's a meeting, it's a stand-up where people briefly chat and update and so on and so forth. And I could go into more detail, but that's not what this talk is about. So, we took their app and we added Realm to it and we also then\nbasically wrote it three different times, one in Viper, one in MVVM and one in MVI. This will allow us to show you what works and what doesn't work. Obviously it's going to be slightly biased towards MVI. And of course I do feel the need to disclaim that this is a simple application. So, there's definitely going to be questions of like, \"How does this scale? How does MVI actually scale?\" I can address those if they are asked, if not, it should become pretty clear why I'm pushing MVI as the go-to thing for SwiftUI plus Realm plus the Realm Property Wrappers.\n\n**Jason Flax**: Let's compare the models. So, in Viper, this is your entity. So, it contains the scrums so the DailyScrum is going to be our like core data structure here. It's going to contain an ID. It's going to contain a title, it's going to contain attendees and things like that. But the main thing I'm trying to show with this slide is that the entity loads from the Realm, it loads the DailyScrum, which for those that have used Realm, you already know this is a bit awkward because everything with Realm is live. Those DailyScrum, that objects Realm.objects(DailyScrum.self).map. So, if you were to just save out those objects, those results are always updating, those results read and write, directly from the persistent store. So, loading from the database is already an awkward step.\n\n**Jason Flax**: Otherwise, you can push new scrums. You can update scrums. 
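\n\nA hedged sketch of the kind of Viper entity/data-access layer being described here, wrapping Realm even though Realm already is the data access layer (the DailyScrum stub and the method names are assumptions for illustration):\n\n```swift\nimport RealmSwift\n\n// Minimal stand-in for the talk's DailyScrum object.\nfinal class DailyScrum: Object {\n    @Persisted var title = \"\"\n}\n\nfinal class ScrumEntity {\n    private let realm = try! Realm() // error handling omitted in this sketch\n\n    // The awkward part: copying live, auto-updating results into a plain array.\n    func loadScrums() -> [DailyScrum] {\n        Array(realm.objects(DailyScrum.self))\n    }\n\n    func push(_ scrum: DailyScrum) throws {\n        try realm.write { realm.add(scrum) }\n    }\n\n    func update(_ changes: () -> Void) throws {\n        try realm.write(changes)\n    }\n}\n```\n\n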
Again, what this is doing is creating a data access layer for a database that is already the data access layer. Either way this is the idea of Viper. You are creating these abstractions to better organize how things are updated, pushed, loaded, et cetera. MVVM the model is still a more classically the data structure itself. I will show the actual ViewModel in another slide. This should look a bit more similar to probably what you'd be used to. Actually there's probably a mistake the color shouldn't be there because that should be in the ViewModel. But for the most part, these are your properties.\n\n**Jason Flax**: The big difference here between MVI will be the fact that again you're creating this access layer around the Realm where you're going to actually pass in the ViewModel to update the scrum itself and then right to the Realm. It's even possible that depending on your interpretation of MVVM, which again is something I should have actually said earlier in the talk, a lot of these architectures end up being up for interpretation. There is a level of subjectivity and when you're on a team of 20 people trying to all architect around the same concepts, you might end up with some wishy-washy ViewModels and models and things like that.\n\n**Jason Flax**: MVI, this is your model that is the Realm database, and I'm not actually being facetious here. If you consider Realm to be your data access layer, to be your persistent storage, to be the thing that actually syncs data, it is. It holds all your data, It maintains all of the state, which is kind of what the model is supposed to do to. It maintain state, it maintains the entire, flow and state of your application. That is what Realm can be if you use it as it's intended to be used. Let's go over what the View actually look like depending on your architecture. Spoilers, they look very similar. It's what's happening under the hood that actually really changes the game. So, in this case you have these ScrumsView.\n\n**Jason Flax**: The object that you have on the View, you'll notice this does not, even though this app uses Realm is not using the Realm property wrappers because you are presenting the Presenter kind of makes sense, I suppose. You're going to show the scrums from the presenter and you're going to pass around that presenter around this view, to be able to interact with the actual underlying model, which is the DailyScrum class. You'll also notice at the bottom, which is I suppose a feature of Viper, each view has a presenter and potentially each model has an interactor.\n\n**Jason Flax**: For the EditView. So, I have the scrum app, I want to edit the scrum I have, I want to change the title or the color of it or something like that. I want to add attendees. For Viper, you have to pass in a new presenter, you have to pass in a new interactor and these are going to be these bite-sized pieces that again interact with the View or model, depending on which thing you're talking about. How am I doing on time? Cool. So, this is the actual like EditView now that I'm talking about. So, the EditView has that presenter, that presenter is going to basically give the view all of the data. So, lengthInminutes title, color, attendees, things like that. They're all coming off the presenter. You can see over here where I'm circling, you would save off the presenter as well. 
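\n\nIn rough code, the arrangement just described, where the view reads everything off a presenter and saves through it, and the presenter talks to an interactor, might look like this (illustrative names only, not the talk's exact code):\n\n```swift\nimport SwiftUI\n\nprotocol EditScrumInteracting {\n    func save(title: String, lengthInMinutes: Int)\n}\n\nfinal class EditScrumPresenter: ObservableObject {\n    @Published var title = \"\"\n    @Published var lengthInMinutes = 15\n    private let interactor: EditScrumInteracting\n\n    init(interactor: EditScrumInteracting) {\n        self.interactor = interactor\n    }\n\n    func save() {\n        interactor.save(title: title, lengthInMinutes: lengthInMinutes)\n    }\n}\n\nstruct EditScrumView: View {\n    @ObservedObject var presenter: EditScrumPresenter\n\n    var body: some View {\n        Form {\n            TextField(\"Title\", text: $presenter.title)\n            Stepper(\"Length in minutes\", value: $presenter.lengthInMinutes, in: 5...60)\n            Button(\"Done\") { presenter.save() }\n        }\n    }\n}\n```\n\n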
So, when you're done editing this view, you save on the presenter, that presenter is actually going to then speak to the interactor, and that interactor is going to interact with the database and actually save that data.\n\n**Jason Flax**: Again, the reason that this is a bit awkward, when using Realm and SwiftUI at least, is that because you have live objects with Realm, having intermediary layers is unnecessary abstraction. So, this is MVVM, and now we actually have the video of the View as well on the right side. So, instead of a Presenter, you have a ViewModel; right now you're seeing all the terms come together. You're going to read the ViewModels off of the ViewModel for each view. So, for the detail view, you're going to pass in the detail ViewModel; for the EditView, you're going to pass in the EditViewModel, and that ViewModel is going to take a scrum and it's going to read and write the data into that scrum.\n\n**Jason Flax**: This is for MVI now. So, MVI is going to look a little different. The view code is slightly larger, but there are no abstractions beyond this. So, in this case, you have our own property wrappers: you have ObservedResults. This is going to be all of the DailyScrums in your Realm database. It is going to live update. You are not going to know that it's updating, but it will notify the view that it's updating. So, say a DailyScrum was added: you have Realm Sync, a DailyScrum is added from somebody else's phone, you just update. There is no other code you have to write for that. Below that you have a StateRealmObject, which is the new scrum data. So, in this case, this is a special case for forms, which is a very common use case; that scrum data is going to be passed into the EditView and it's going to be operated on directly.\n\n**Jason Flax**: So the main added code here, or the main difference in code, is this bit right here, where we actually add the scrum data to the observed results (there's a rough sketch of this just below). So, somebody following MVVM or Viper religiously might say that's terrible: \"Why are you doing business logic in a view like that? And why would that happen?\" This is a direct result of user action. A user hits the done button or the add button. This needs to happen afterwards. Technically, if you really wanted to, you could extract this out to the model itself. You could put this on an instance of the new scrum data and have it write itself to the Realm; that is totally valid. I've seen people do that with MVI, SwiftUI, and Realm. In this case, it's simple enough that those layers of abstraction don't actually add anything beneficial.\n\n**Jason Flax**: And for testing, you'd want to test this from a UI test anyway. And the reason for that is that we test that the scrum data is added to the Realm, we being Realm. I suppose there's a level of trust you have to have with Realm here that we actually are testing our code; we promise that we are. But the idea is that all of this appending data, all of this adding to the database, the data access layer, that's tested by us. You don't have to worry about that. So, yeah. Why is this view larger than the MVVM view? Because the interaction logic has been shifted to the ViewModel in MVVM, and there's no extra logic here; it's all there. But for MVVM again, it's all been pushed back; the responsibility has been shifted slightly. MVI: this is actually what the EditView would look like. There are no abstractions here. You have your StateRealmObject, which is the DailyScrum that's been passed in from the previous view. 
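\n\nHere is a rough sketch of that MVI list view, with the live @ObservedResults query and the single append that writes the new scrum to the Realm (an illustration with assumed property names; the EditView itself is discussed right after this):\n\n```swift\nimport SwiftUI\nimport RealmSwift\n\n// Minimal stand-in for the talk's DailyScrum object.\nfinal class DailyScrum: Object, ObjectKeyIdentifiable {\n    @Persisted(primaryKey: true) var id = UUID().uuidString\n    @Persisted var title = \"\"\n}\n\nstruct ScrumsView: View {\n    @ObservedResults(DailyScrum.self) var scrums   // live query; the view refreshes on its own\n    @StateRealmObject var newScrumData = DailyScrum()\n\n    var body: some View {\n        List {\n            ForEach(scrums) { scrum in\n                Text(scrum.title)\n            }\n        }\n        .toolbar {\n            // The one visible bit of logic: appending the draft writes it to the Realm.\n            Button(\"Add\") {\n                $scrums.append(newScrumData)\n            }\n        }\n    }\n}\n```\n\n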
And you bind the title directly to that.\n\n**Jason Flax**: So if you look at the right side video here, as I changed the Cocoa team to Swift team Scrum so mouthful that is updating to the Realm that is persisting and if you were using Realm sync, that would be syncing as well. But there is no other logic here that is just handled. Hopefully at this point you would be asking yourself why add the extra logic. I can't give you a good reason, which is the whole point of this talk. So, let's go over the ViewModel and Persistence or dig in a bit deeper. So, this is our actual Realm Object. This is the basic object that we have. It's the POJO, the POSO whatever you want to call it. It is your plain old Swift object.\n\n**Jason Flax**: In this case, it is also a Realm Object has an ID. That ID would largely be for syncing if you had say, if you weren't using Realm Sync and you just had a REST API, it would be your way of identifying the Scrums. It has a title, which is the name of it of course, a list of attendees, which in this case for this simple use case, it's just strings it's names it's whatever that would be length of the scrum in minutes and the color components, which depending on which thing you're using is actually pretty cool. And this is something that I probably won't have time to fully dig into, but you can use Realm to manage view state. You can use Realm to manage the app state if say you're scrolling in a view and you're at a certain point and then you present another view over that maybe it's a model or something, and the phone dies, that sucks.\n\n**Jason Flax**: You can open up the app when the phone turns back on, if they've charged it of course, and you can bring them back to that exact state, if it's persistent in the Realm. In the case of color components, the cool thing there is that you can have a computer variable, which I'll show after that will display directly to the view as a color. And with that binding, the view can also then change that color, the model can break it down into its components and then store that in the Realm. Let's actually skip the presenter in that case, because we were actually on the EditView, which I think is the more interesting view to talk about. So, this is the edit presenter for Viper.\n\n**Jason Flax**: This is your ViewModel, this is your presenter. And as you can see here, it owns the Interactor and it's going to modify each of these fields as they're modified. It's going to fetch them. It's going to modify them. It's going to send updates to the view because it can't take advantage of Realms update. It can't take advantage of Realms observe or the Property wrappers or anything like that because you are creating this, separation of layers. In here with colors it's going to grab everything. And when you actually add new attendees, it's going to have to do that as well. So, as you can see, it just breaks everything down.\n\n**Jason Flax**: And this is the Interactor that's actually now going to talk to the model. This is where your business logic is. This is where you could validate to make sure that say the title's not empty, that the attendees are not empty, that the length of time is not negative or something like that. And this is also where you'd save it. This would be the router, which again I didn't really know where to put this. It doesn't fit in with any other architecture but this is how you would present views with Viper\n\n**Jason Flax**: And for anybody that's used SwiftUI you might be able to see that this is a bit odd. 
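\n\nFor reference, the kind of Realm object Jason walks through a little earlier in this section might be sketched like this, using the newer @Persisted syntax (which postdates this talk); the exact property names and the read-only color convenience are assumptions for illustration:\n\n```swift\nimport SwiftUI\nimport RealmSwift\n\nfinal class DailyScrum: Object, ObjectKeyIdentifiable {\n    @Persisted(primaryKey: true) var id = UUID().uuidString\n    @Persisted var title = \"\"\n    @Persisted var attendees: RealmSwift.List<String>\n    @Persisted var lengthInMinutes = 15\n\n    // The Realm stores the raw components; the view works with a Color.\n    @Persisted var colorRed = 0.0\n    @Persisted var colorGreen = 0.0\n    @Persisted var colorBlue = 0.0\n\n    var color: Color {\n        // Kept read-only in this sketch; the talk binds it both ways by\n        // decomposing the Color back into components, which is platform-specific.\n        Color(red: colorRed, green: colorGreen, blue: colorBlue)\n    }\n}\n```\n\n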
So, this would be your top level ViewModel for MVVM. In this case, you actually can somewhat take advantage of Realm. If you wanted to. Again, it depends on how by the book you are approaching the architecture as, so you have all your scrums there, you have what is, and isn't presented. You have all your ViewModels there as well, which are probably going to be derived from the result of scrums. And it's going to manage the Realm. It's going to own the Realm. It's going to own a network service potentially. You're going to add scrums through here. You're going to fetch scrums through here. It controls everything. It is the layer that blocks the data layer. It is the data access layer.\n\n**Jason Flax**: And this is going to be your ViewModel for the actual DailyScrum. This is the presentation layer. This is where you're seeing. So you get the scrum title that you change the scrum title, and you get the scrum within minutes you change the scrum length in minutes. You validate it from here, you can add it from here. You can modify it from here. It also depends on the view. But to avoid repeating myself and this would be the EditView with the ViewModel. So, instead of having the Realm object here, as you saw with MBI, you'd have the ViewModel. The two-way data-binding is actually going to change the model. And then at the end you can update. So, things don't need to be live necessarily. And again the weird thing here is that with Realms live objects, why would you want to use the ViewModel when you have two-way data-binding?\n\n**Jason Flax**: And just to ... My laptop fan just got very loud. This is the path to persistence with MVVM as well. So, user intent, user interaction they modify the view in whatever way they do it. it goes through the presenter, which is the DailyScrum ViewModel. This is specifically coming from the EditView. It goes to the presenter. It changes the DailyScrum model, which then interacts and persists to the Realm. Given anybody that's used Realm again to repeat myself, this is a strange way to use Realm considering what Realm is supposed to be as your persistent storage for live objects.\n\n**Jason Flax**: MVI, what is your presentation layer? There's no VM here. There's no extra letters in here. So, what do we actually do? In MVI, the presentation layer is an instance of your data model. It is an instance of these simple structures. So in this case, this is the actual DailyScrum model. You can see on here, the color thing that I was talking about before. This color variable is going to bind directly to the view and when the view updates, it will update the model. It will persist to the Realm. It will sync to MongoDB Realm. It will then get the color back showed in the view, et cetera. And for business logic, that's going to be on the instance. This could be an extension. It could be in an extension. It could be in a different file. There's ways to organize that obviate the need for these previously needed abstractions in UIKit.\n\n**Jason Flax**: So, this is an actual implementation of it, which I showed earlier. You have your StateRealmObject, which is the auto updating magic property wrapper in SwiftUI. You have your DailyScrum model and instance has been passed in here. So, when we actually write the title down, type the title, I suppose. Because it's on the phone, it is going to update sync persist, et cetera. MVI is a much shorter path to persistence because we are binding directly to the view. 
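\n\nA sketch of that shorter path, reusing the DailyScrum sketched above: the form binds straight to the live object, so each edit persists (and syncs, if sync is enabled) with no ViewModel in between. The talk mentions @StateRealmObject; @ObservedRealmObject is the closely related Realm property wrapper for an object handed in from a parent view:\n\n```swift\nimport SwiftUI\nimport RealmSwift\n\nstruct EditView: View {\n    // The live object passed in from the previous view.\n    @ObservedRealmObject var scrum: DailyScrum\n\n    var body: some View {\n        Form {\n            // Writing into these bindings updates the Realm object directly.\n            TextField(\"Title\", text: $scrum.title)\n            Stepper(\"Length in minutes\", value: $scrum.lengthInMinutes, in: 5...60)\n        }\n    }\n}\n```\n\n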
The user makes an action, the action modifies the view, the view modifies the actual state, which modifies the Realm, modifies the DailyScrum, syncs, et cetera.\n\n**Jason Flax**: Why MVI is NOT scary. NOT is in all capital letters because I'm super serious, guys. So, MVI is lightweight. It's nearly a concept as opposed to a by-the-book architecture. There are standards and practices you should definitely follow: your business logic should be on the actual instance of the data model, and the two-way data-binding should be happening on the view itself. There's some wiggle room, but not really; the implication is that the View is entirely data-driven. It has zero state of its own, bar a few dangling exceptions, like things being presented, like views being presented or scroll position or things like that.\n\n**Jason Flax**: And all UI changes come from changes in the model, which again, leveraging Realm, auto-updates and auto-notifies you anyway. So, that is all done for you. SwiftUI, though imperfect, does come very close to this ideal. View state can even be stored and persisted within the guidelines of the architecture to perfectly restore user state in the application, which ties back to the case I gave of somebody's phone dying and wanting to reopen right to the exact page that they were at in the app. So, when considering the differences in SwiftUI's two-way data-binding versus UIKit's unidirectional data flow, we can rethink certain core concepts of at least MVVM and, to an extent, Viper.\n\n**Jason Flax**: And this is where rethinking the acronyms or abbreviations comes into play a bit. It's a light spin, but let's say you're on your team of 20 iOS developers and you go to the lead engineer and you're like, \"I really think we should start doing MVI.\" And they're like, \"Well, we've been doing MVVM for the past five years. So, you can take a walk.\" In this case, just rephrase MVVM. Rephrase Viper. In this case, your model becomes the Realm database. It is where you persist state. It is the data access layer. The View is still the View. That one doesn't change. The ViewModel, again, just becomes an instance of your Realm object.\n\n**Jason Flax**: You just don't need the old school ViewModel anymore. The business logic goes on that. The transformation goes on that. It is honestly a light shift in responsibility, but it prevents having to test so much extra boilerplate code. If the goal of MVVM was to make things easier to test in previous iterations of iOS development, it no longer applies here, because now you're actually just adding extra code. Viper concepts can be similarly rethought. Again, your View is your View, your Presenter and Interactor are the ViewModel, your Entity is the model, and your Router is an enigma to me. So, I'll leave that one to the Viper devs out there to figure out. It looks like we have enough time for the extra slides that I have here before the Q&A.\n\n**Jason Flax**: So, just a bit of networking code; this is really basic. It's not very good code either. So, in this case, we're just going to fetch the scrums from a third-party service. So, we're not using Realm Sync in this case. We have some third-party service or our own service or whatever that we call into. And if we want those to show on the View, we're going to have to notify the View. We're going to want to cache those, maybe. So, we're going to add those to the Realm. 
If they actually do have IDs, we want to make sure that our update policy does not overwrite the same scrums or anything like that for updating. And this is Viper, by the way. For updating similarly, we're going to pass the scrum to the interactor. That scrum is going to get sent up to the server. We're going to make sure that that scrum is then added to the Realm, depends on what we're updating.\n\n**Jason Flax**: If we've updated it properly and using Realm as Realm is intended to be used, you should not have to re-add it to the Realm. But if you are following Viper by the book, you need to go through all the steps of reloading your model, saving this appropriately and updating the View, which again is a lot of extra work. Not to mention here as well, that this does not account for anything like conflicts or things that would happen in a real-world scenario, but I will get to that in a later slide. So, for MVVM in this case, the networking is likely going to be on the ViewModel and go through again, some kind of service. It's very similar to Viper, except that it's on the ViewModel we're going to fetch.\n\n**Jason Flax**: We're going to add to the Realm of cache, cache and layer. And because we're not using the Realm property wrappers on the View, we're using the ViewModel, we have to update the View manually, which is the objectWillChange.send. So, for MVI, it's similar, but again, slightly different because the Realms are on the View this time, the main difference here is that we don't have to update anything. That results from before the observed results. That's going to automatically update the View. And for the update case, you shouldn't really have to do anything, which is the big difference between the other two architectures because you're using live objects, everything should just be live.\n\n**Jason Flax**: And because in MVI, the business logic is going to be on the data models themselves or the instances of the Realm objects themselves. These methods are going to be on that, you update using yourself which is here. And the cool thing, if you're using MongoDB Realm Sync and you're thinking about networking, you don't have to do anything. Again, not being facetious, that's handled for you. If you're using your persistence layer and thinking about sync, when you actually open up the Realm, those scrums are going to be pulled down for you, and they're going to hydrate the Realm.\n\n**Jason Flax**: If somebody on their phone updates one of the existing scrums, that's going to be automatically there for you. It is going to appear on your View live without you having to edit any extra code, any extra networking or whatever. Similar, removal. And of course, Realm sync also handles things like conflicts, where if I'm updating the scrum at the same time as somebody else, the algorithm on the backend will figure out how to handle that conflict for you. And your Realm, your persistence layer, your instances of your data models as well, which is another cool feature from because remember that they're live, they will be up-to-date.\n\n**Jason Flax**: They will sync to the Views, which will then have the most up-to-date information without you adding any extra code. I would love to go into this more. 
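\n\nTo make the MVI networking idea concrete, here is a hedged sketch that extends the DailyScrum sketched earlier: fetch from some service, upsert into the Realm so cached copies aren't blindly overwritten, and let @ObservedResults refresh the views on its own. The endpoint, the DTO shape, and the method name are made-up assumptions, and error handling is omitted:\n\n```swift\nimport Foundation\nimport RealmSwift\n\nprivate struct ScrumDTO: Decodable {\n    let id: String\n    let title: String\n}\n\nextension DailyScrum {\n    static func fetchAndCache(from url: URL) {\n        URLSession.shared.dataTask(with: url) { data, _, _ in\n            guard let data = data,\n                  let dtos = try? JSONDecoder().decode([ScrumDTO].self, from: data) else { return }\n            DispatchQueue.main.async {\n                let realm = try! Realm()\n                try! realm.write {\n                    for dto in dtos {\n                        // .modified upserts by primary key instead of replacing wholesale.\n                        realm.create(DailyScrum.self,\n                                     value: [\"id\": dto.id, \"title\": dto.title],\n                                     update: .modified)\n                    }\n                }\n            }\n        }.resume()\n    }\n}\n```\n\n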
So, for my next talk, I think the thing I want to do, and of course I'd like to hear from everybody if they'd be interested, but the thing I want to do is to show a more robust, mature, fully fledged application using MVI MongoDB Realm sync, SwiftUI Realm and the property wrappers, which we can talk about more in the Q&A, but that's my goal. I don't know when the talk will be, but hopefully sooner than later. And then finally, the last bit of slides here. Actually, testing your models. So, for MVVM you actually have to test the ViewModels. You're going to test that things are writing to the database and reading from database appropriately.\n\n**Jason Flax**: You're testing that the business logic validates correctly. You're testing that it calculates View data correctly. You're testing out all of these calculations that you don't necessarily have to test out with other architectures. Viper, it's going to be the same thing. You're just literally swapping out the ViewModel for the interactor and presenter. But for MVI, colors are a little messed up there. You're really just going to be testing the business logic on your models. You're going to create instances of those Realm objects and make sure that the business logic checks out. For all of these, I would also highly recommend that you write UI tests. Those are very important parts of testing UI applications. So, please write those as well. And that's it. Thank you, everyone. That is all for the presentation. And I would love to throw this back to Ian and Shane, so that we can start our Q&A.\n\n**Shane McAllister**: Excellent. Thank you. Thank you, Jason. That was great. I learned a lot in that as well, too. So, do appreciate that. I was watching the comments on the side and thank you for the likes of Jacob and Sebastian and Ian and Richard and Simon who've raised some questions. There's a couple that might come in there. But above all, they've been answered by Lee and also Alexander Stigsen. Who, for those of you who don't know, and he'll kill me for saying, is the founder of Realm and he's on the chat. So, if you question, drop it in there. He's going to kill me now. I'm dead. So, I think for anybody, as I said at the beginning, we can open and turn on your camera and microphone if you want to ask a question directly.\n\n**Shane McAllister**: There's no problem if you don't want to come on camera well you can throw it into the chat and I'll present it to essentially Jason and Ian and we'll discuss it. So, I think while we're seeing, if anybody comes in there, and for this scrum dinger example, Jason, are we going to put our Realm version up on a repo somewhere that people can play around with?\n\n**Jason Flax**: Yes, we will. It is not going to be available today, unfortunately. But we are going to do that in the next, hopefully few days. So, I will I guess we'll figure out a way to send out a link to when that time comes.\n\n**Shane McAllister**: Okay. So, we've a question from Jacob. \"And what thoughts do you have on using MVI for more mixed scenarios, for example, an app or some Views operate on the database while others use something like a RIA service?\"\n\n**Jason Flax**: Where is that question, by the way, \\[crosstalk\n00:38:31\\].\n\n**Shane McAllister**: On the chat title, and there's just a period in the chat there it'll give you some heads up. Most of the others were answered by Alexander and Lee, which is great. Really appreciate that. 
But so looking at the bottom of the chat there, Jason, if you want to see them come through.\n\n**Jason Flax**: I see, Jacob. Yeah. So, I hope I was able to touch on that a bit at the end. For Views that need to talk to some network service, I would recommend that that logic again, no different than MVVM or Viper, that logic, which I would consider business logic, even though it's talking to RIA service, it just goes back on the instance of the object itself. In certain cases, I think let's say you're fetching all of the daily scrums from the server, I would make that a static method on the instance of the data object, which is mainly for organizational purposes, to be honest. But I don't think that it needs to be specially considered beyond that. I'm sure in extremely complex cases, more questions could be asked, but I would probably need to see a more complex case to be able to-\n\n**Ian Ward**: I think one of the themes while you were presenting with the different architecture patterns, is that a lot of the argument here is that we are eliminating boilerplate code. We're eliminating a lot of the code that a developer would normally need to write in order to implement MVVM or there was a talk of MVC as Massive View Controller. And some of the questions around MVI were, \"Do we have the risk of also maybe inflating the model as well here?\" Some of that boilerplate code now go into the model. How would you talk to that a little bit of putting extra code into the model now to handle some of this?\n\n**Jason Flax**: As in like how to avoid this massive inflation of your model \\[crosstalk 00:40:33\\]?\n\n**Ian Ward**: Yeah. Exactly. Are we just moving the problem around or does some of this eliminate some of that boilerplate?\n\n**Jason Flax**: To be honest, each one of these \\[crosstalk 00:40:45\\].\n\n**Ian Ward**: That's fair. I guess that's why it's a contentious issue. You have your opinions and at some point it's, where do you want to put the code?\n\n**Jason Flax**: Right. Which is why, there is no best solution and there is no best answer to your question either. The reason that I'm positing MVI here is not necessarily about code organization, which is always going to be a problem and it's going to be unique to somebody's application. If you have a crazy amount of business logic on one of your Realm objects, you probably need to break up that Realm object. That would be my first thought. It might not be true for each case. I've seen applications where people have 40 different properties on their Realm object and a ton of logic associated with it. I personally would prefer to see that broken down a bit more.\n\n**Jason Flax**: You can then play devil's advocate and say, \"Well, okay,\" then you end up with the Ravioli Code that you were talking about from before. So it's all, it's this balancing act. The main reason I'm positing MVI as the go-to architecture is less about code organization and more about avoiding unnecessarily boilerplate and having to frankly test more than.\n\n**Ian Ward**: Right.That's a fair answer. And a couple of questions that are coming in here. There's one question at the beginning asking about the block pattern, which watch out sounds like we have a flutter developer in here. But the block pattern is very much about event streams and passing events back and forth, which although we have the property wrappers, we've done a lot of the work under the hood. And then there was another question on combined. 
So, maybe you could talk a little bit about our combined support and some of the work that we've done with property wrappers to integrate with that.\n\n**Jason Flax**: Sure. So, we've basically added extensions to each of our observable types which in the case of Realm is going to be objects, lists, backlinks, results which is basically a View of the table that the object is stored on, which can be queried as well. And then by effect through objects, you can also observe individual properties. We have added support to combine. So, you can do that through the flow of combine, to get those nice chains of observations, to be able to map data how you want it to sync it at the end and all that kind of thing. Under the hood of our property wrappers are hooking that observation logic into SwiftUI.\n\n**Jason Flax**: Those property wrappers themselves have information on that, so that when a change happens, it notifies the View. To be honest, some of that is not through combined, but it's just through standard observation. But I think the end mechanism where we actually tell the View, this thing needs to update that is through, I guess, one of the added combined features, which is the publisher for object changes. We notified the View, \"Hey, this thing is updated.\" So, yeah, there's full combine support for Realm, is the short answer as well.\n\n**Ian Ward**: Perfect.\n\n**Shane McAllister**: Cool. There was a question hiding away in the Q&A section as well too. \"Does at state Realm object sends Realm sync requests for each key stroke?\"\n\n**Jason Flax**: It would. But surprisingly enough, that is actually not as heavy of an action as you might think. We've had a lot of debate about this as well, because that is one of the first questions asked when people see the data being bound to a text field. It's really not that heavy. If you are worried about it or maybe this is just some application that you want to work in the Tundras of Antarctica, and maybe you don't want to have to worry about things like network connection or something, I would consider using a draft object, something that is not being persistent to the Realm. And then at the end, when you are ready to persist that you can persistent it. Classically, that would have been the ViewModel, but now you can just use an instance of a non-persistent Realm object, a non \\[crosstalk 00:44:51\\].\n\n**Ian Ward**: Yeah. That was another question as well. I believe Simon, you had a question regarding draft objects and having ... And so when you say draft objects, you're saying a copy of the Realm object in memory, is that correct? Or maybe you can go into that a little bit.\n\n**Jason Flax**: It could be a copy. That would be the way to handle an existing object that you want to modify, if you don't want to set it up on every keystroke for form Views in this case, let's say it's a form. Where it doesn't exist in the Realm, you can just do an on manage type and to answer Simon's second query there. Yeah, it could also be managed by a local Realm that is also perfectly valid, and that is another approach. And if I recall Simon, were you working on the workout app with that?\n\n**Ian Ward**: I believe he was.\n\n**Jason Flax**: I don't know. Yeah. Yeah. I played around with that. That is a good app example for having a lot of forms where maybe you don't want to persist on every keystroke. Maybe you even want something like specifically, and I believe this might've even been the advice that I gave you on the forums. 
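\n\nIn code, the draft-object approach being discussed might look roughly like this (a sketch that reuses the DailyScrum sketched earlier and assumes the object has a primary key):\n\n```swift\nimport RealmSwift\n\n// Edit an unmanaged copy so nothing hits the (possibly synced) Realm per keystroke.\nfunc makeDraft(of scrum: DailyScrum) -> DailyScrum {\n    DailyScrum(value: scrum)   // Object(value:) produces an unmanaged copy\n}\n\n// On save, upsert the draft back by primary key.\nfunc save(draft: DailyScrum, into realm: Realm) throws {\n    try realm.write {\n        realm.add(draft, update: .modified)\n    }\n}\n```\n\n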
Yes, store a draft object in a local Realm. It could be the exact same object. It could be a different model that is just called, let's say, you want to save your workout and it has sets and reps and whatever. You might have a workout object stored in the sync Realm, and then you might have a workout draft object stored in a local Realm, and you can handle it that way as well.\n\n**Shane McAllister**: Great. Does anybody want to come on screen with us, turn on the camera, turn on the mic, join us? If you do, just ping in the chat, I'll jump in. I'll turn that right on for you. Richard had a question further up and it was more advice, more so than a question per se, \"Jason, nice to show some examples of how you would blend MVI with wrapped View controllers.\" He's saying that rewrites are iterative and involve hybrid systems was the other point he made.\n\n**Jason Flax**: Right. Yeah. That would be a great concept for another talks because yeah, you're totally right. It's really easy for me to come in with a cricket bat or whatever, and just knock everything down and say, \"Use MVI.\" But in reality of course, yeah, you want to incrementally migrate to something you never want to do ever. Well, not never, but most of the time you don't want to do a total rewrite.\n\n**Ian Ward**: Yeah, a total rewrite would be a sticky wicket, I think. So, for cricket. So, we have another question here on Realm's auto-sync. And the question from Sebastian is, \"Can we force trigger from an API sync?\" And actually I can answer this one. So, yes, you can. There is a suspend and resume method for Realm sync. So, if you really want to be prescriptive about when Realm syncs and doesn't sync, you can control that in your code.\n\n**Jason Flax**: Perfect.\n\n**Shane McAllister**: And asks, \"Is there any learning path available to get started with Realm?\" Well, we've got a few. Obviously our docs is a good place to start, and if you go look in there, but the other thing too is come on who, and this is the plug, developer.mongodb.com. And from there, you can get to our developer hub for articles. You can get into our forums to ask the questions of the engineers who are on here and indeed our wider community as well too. But we're also very active where our developers are. So, in GitHub and Stack Overflow, et cetera, as well too, there's comments and questions whizzing around there. Jason, is there anywhere else to go and grab information on getting started with Realm?\n\n**Shane McAllister**: Yeah. Obviously this is the place to go as well too. I know we're kind of, we went in at a high level and a lot of this here and maybe it's not obviously the beginner stuff, but we intend to run these as often as we can. Certainly once or twice a month going forward, resources permitting and time permitting for everybody too. So, as Ian said, I think at the beginning, tell us what you want to hear in meetups like this as well too because we want to engage with our community, understand where you're at and help you resolve your problems with Realm as much as possible.\n\n**Ian Ward**: Absolutely\n\n**Shane McAllister**: Ian has another one in here, Ian. Thank you, Ian. \"And how to move a local Realm into sync? Just copy the items manually from one to the other or is there a switch you can throw to make the local one a synced one?\"\n\n**Ian Ward**: Yeah.\n\n**Jason Flax**: \\[crosstalk 00:49:49\\]. So, we do get this feature request. It is something that is on my list, like by list of product backlog. 
Definitely something I want to add, and we just need to put a product description together, another thing on my backlog. But yes, right now what you would do is open the local Realm, iterate through all the objects, and copy them over into a synced Realm. The issue here is that a synced Realm has to match the history of the MongoDB Realm sync server on the other side. So, the histories have to match, and the local Realm doesn't have that history. So, it breaks the semantics of conflict resolution. In the future, we would like to give a convenience API to do this very simply for the user. And so hopefully we can solve that use case for you.\n\n**Shane McAllister**: Good. Well, Ian has responded to say, \"That makes sense.\" And indeed it does, as always. Something else for your task list then. So, yeah, definitely.\n\n**Ian Ward**: Absolutely.\n\n**Shane McAllister**: I'm trying to scroll back through here to see, did we miss anybody? If we did miss anybody, do let me know. I noticed a comment further up from \\[Anov 00:51:01\\], which was great to see, which is, \"These sessions turn out to be the best use of my time.\" And that's what we're looking for, that validation from our community, that this is worth the time. Jason puts a ton of effort into getting this prepared, as does Ian in pulling it all together. Those examples don't write themselves. And indeed the wider team, the Cocoa team with Jason as well, had put effort into putting this together. So, it's great to see that these are very beneficial for our community. So, unless... is there anything else, any other questions? I suppose throwing it back out to you, Jason, what's next? What's on the roadmap? What's keeping you busy at the moment? Ian, what are we planning later on? You're not going to say you can't tell, right?\n\n**Ian Ward**: Yeah. For iOS specifically, I think, maybe Jason, we were talking about grouped results. I know we had a scope the other day to do that. We're also talking about key path filtering. These are developer improvements for some of the APIs that we have that are very iOS-specific. So, I don't know, Jason, if you want to talk to a couple of those things, that would be great.\n\n**Jason Flax**: Sure. Yeah. And I'll talk about some of the stuff that hopefully we'll get to next quarter as well. So, grouped results is something we actually have to figure out, and ironically it actually ties more to UIKit and basically how to better display Realm data on table Views. But we are still figuring out what that looks like. Key path filtering is nice. It just gives you granular observation for the properties that you do actually want to observe and listen to on objects. Some of the other things that we've begun prototyping, and I don't think it's ... I can't promise any dates. Also, by the way, Realm is open source. So, all of this stuff that we're talking about, go on our branches, you can see our pull requests. So, some of the stuff that we're prototyping right now: async writes, which is a pretty common use case, where we're writing data to Realm asynchronously.\n\n**Jason Flax**: We're toying with that. We're toying with another property wrapper called auto-open, which will hopefully simplify some of the logic around MongoDB Realm logging in and async opening the Realm. 
Basically, the goal of that project is so that, let's say you're downloading a synced Realm with a ton of data in it, as opposed to having to manually open the Realm, notify the View that it's done, et cetera, you'll again just use a property wrapper that, when the Realm is done downloading, will notify the View that that's occurred. We're also talking about updating our query syntax. That one I'm particularly excited about. Again, no dates promised. But basically, as opposed to having to use NSPredicate to query on your Realm objects, you would be able to instead use a type-safe, key-path-based query syntax, closer to what Swift natively uses.\n\n**Ian Ward**: Absolutely. We've got some new types coming down the pike as well. We have a dictionary type for more unstructured key values, as well as a set we're looking to introduce very shortly, and a mixed type as well, which I believe we have a different name for. Don't we, Jason?\n\n**Jason Flax**: Yes, it will follow-\n\n**Ian Ward**: Any Realm value. There you go.\n\n**Jason Flax**: ... what that does, yeah. Any Realm value -\n\n**Ian Ward**: Yeah, so we had a lot of feature requests for full-text search. And so if you have, let's say, an inventory application that has a name and then a description, two fields on an object, and those are string fields. We have just approved our product description for full-text search. So, you'll hopefully be able to tokenize, or we are working towards tokenizing, those string fields. And so then you can search that string field, search the actual words in that string field, to get a match at index-level speeds. So, hopefully that will help individuals, especially when they're offline, to search string fields.\n\n**Jason Flax**: That, and Richard's dictionary would be huge. Yeah. We're excited about that one. We're probably going to call it Map. So, yeah, that's an exciting one.\n\n**Shane McAllister**: Excellent. Ian's squeezing in a question there, a feature request actually: he'd like to open multiple sync Realms targeting multiple partition keys. Okay.\n\n**Ian Ward**: Yeah. So, we are actively working towards that. I don't know how many people are familiar with Legacy Realm. I recognize a couple of faces here, but we did have something called query-based sync. And we are looking to have a reimagination of that in query-based sync 2.0, or, as we're also calling it, flexible sync, which will have a very analogous usage where you'd be able to send queries to the server side, have those queries run, and return the result set down to the client. And this will remove the partition key requirement. And so yes, we are definitely working on that, and it's definitely needed for our sync users for sure.\n\n**Shane McAllister**: Excellent. That got a yay and a, what, cool emoji from Ian. Thank you, Ian, appreciate it. Excellent. I think that's probably it; look, we're just after the hour, or two hours for those of you that joined at the earlier start time that we decided we were going to do this at. For wrap-up from me, from an advocacy point of view, we love to reach out to the community. So, I'm going to plug again, developer.mongodb.com. Please come on board there to our forums and our developer hub, where we write about Realm content all the time. We want to grow this community. 
So, live.mongodb.com will lead you to the Realm global community, where if you sign-up, if you haven't already, and if you sign-up, you'll get instant notification of any of these future meetups that we're doing.\n\n**Shane McAllister**: So, they're not all Swift. We're covering all of our other SDKs as well too. And then we have general meetups. So, please sign-up there, share the word. And also on Twitter, the app Realm Twitter handle. If you enjoyed this, please share that on Twitter with everybody. We love to see that feedback come through and we want to be part of that community. We want to engage on Twitter as well too. So, our developer hub, our forums and Twitter. And then obviously as Jason mentioned, the round master case or open source, you can contribute on our repos if you like. We love to see the participation of the wider community as well, too. Ian, anything to add?\n\n**Ian Ward**: No, it's just it's really great to see so many people joining and giving great questions. And so thank you so much for coming and we love to see your feedback. So, please try out our new property wrappers, give us feedback. We want to hear from the community and thank you so much, Jason and team for putting this together. It's been a pleasure\n\n**Shane McAllister**: Indeed. Excellent. Thank you, everyone. Take care.\n\n**Jason Flax**: Thank you everyone for joining.\n\n**Ian Ward**: Thank you. Have a great week. Bye.\n", "format": "md", "metadata": {"tags": ["Realm", "Swift"], "pageDescription": "Missed Realm SwiftUI Property wrappers and MVI architecture meetup event? Don't worry, you can catch up here.", "contentType": "Article"}, "title": "Realm SwiftUI Property wrappers and MVI architecture Meetup", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/java-azure-spring-apps", "action": "created", "body": "# Getting Started With Azure Spring Apps and MongoDB Atlas: A Step-by-Step Guide\n\n## Introduction\n\nEmbrace the fusion of cloud computing and modern application development as we delve into the integration of Azure\nSpring Apps\nand MongoDB. In this tutorial, we'll guide you through the process of creating\nand deploying a Spring Boot\napplication in the Azure Cloud, leveraging the strengths of Azure's platform, Spring Boot's simplicity, and MongoDB's\ncapabilities.\n\nWhether you're a developer venturing into the cloud landscape or looking to refine your cloud-native skills, this\nstep-by-step guide provides a concise roadmap. By the end of this journey, you'll have a fully functional Spring Boot\napplication seamlessly running on Azure Spring\nApps, with MongoDB handling your data storage needs and a REST API ready\nfor interaction. Let's explore the synergy of these technologies and propel your cloud-native endeavors forward.\n\n## Prerequisites\n\n- Java 17\n- Maven 3.8.7\n- Git (or you can download the zip folder and unzip it locally)\n- MongoDB Atlas cluster (the M0 free tier is enough for this tutorial). If you don't have\n one, you can create one for free.\n- Access to your Azure account with enough permissions to start a new Spring App.\n- Install the Azure CLI to be\n able to deploy your Azure Spring App.\n\nI'm using Debian, so I just had to run a single command line to install the Azure CLI. 
Read the documentation for your\noperating system.\n\n```shell\ncurl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash\n```\n\nOnce it's installed, you should be able to run this command.\n\n```shell\naz --version \n```\n\nIt should return something like this.\n\n```\nazure-cli 2.56.0\n\ncore 2.56.0\ntelemetry 1.1.0\n\nExtensions:\nspring 1.19.2\n\nDependencies:\nmsal 1.24.0b2\nazure-mgmt-resource 23.1.0b2\n\nPython location '/opt/az/bin/python3'\nExtensions directory '/home/polux/.azure/cliextensions'\n\nPython (Linux) 3.11.5 (main, Jan 8 2024, 09:08:48) GCC 12.2.0]\n\nLegal docs and information: aka.ms/AzureCliLegal\n\nYour CLI is up-to-date.\n```\n\n> Note: It's normal if you don't have the Spring extension yet. We'll install it in a minute.\n\nYou can log into your Azure account using the following command.\n\n```shell\naz login\n```\n\nIt should open a web browser in which you can authenticate to Azure. Then, the command should print something like this.\n\n```json\n[\n {\n \"cloudName\": \"AzureCloud\",\n \"homeTenantId\": \"\",\n \"id\": \"\",\n \"isDefault\": true,\n \"managedByTenants\": [],\n \"name\": \"MDB-DevRel\",\n \"state\": \"Enabled\",\n \"tenantId\": \"\",\n \"user\": {\n \"name\": \"maxime.beugnet@mongodb.com\",\n \"type\": \"user\"\n }\n }\n]\n```\n\nOnce you are logged into your Azure account, you can type the following command to install the Spring extension.\n\n```shell\naz extension add -n spring\n```\n\n## Create a new Azure Spring App\n\nTo begin with, on the home page of Azure, click on `Create a resource`.\n\n![Create a resource][1]\n\nThen, select Azure Spring Apps in the marketplace.\n\n![Azure Spring Apps][2]\n\nCreate a new Azure Spring App.\n\n![Create a new Azure Spring App][3]\n\nNow, you can select your subscription and your resource group. Create a new one if necessary. You can also create a\nservice name and select the region.\n\n![Basics to create an Azure Spring App][4]\n\nFor the other options, you can use your best judgment depending on your situation but here is what I did for this\ntutorial, which isn't meant for production use...\n\n- Basics:\n - Hosting: \"Basic\" (not for production use, but it's fine for me)\n - Zone Redundant: Disable\n - Deploy sample project: No\n- Diagnostic settings:\n - Enable by default.\n- Application Insights:\n - Disable (You probably want to keep this in production)\n- Networking:\n - Deploy in your own virtual network: No\n- Tags:\n - I didn't add any\n\nHere is my `Review and create` summary:\n\n![Review and create][5]\n\nOnce you are happy, click on `Create` and wait a minute for your deployment to be ready to use.\n\n## Prepare our Spring application\n\nIn this tutorial, we are deploying\nthis [Java, Spring Boot, and MongoDB template,\navailable on GitHub. If you want to learn more about this template, you can read\nmy article, but in a few words:\nIt's a simple CRUD Spring application that manages\na `persons` collection, stored in MongoDB with a REST API.\n\n- Clone or download a zip of this repository.\n\n```shell\ngit clone git@github.com:mongodb-developer/java-spring-boot-mongodb-starter.git\n```\n\n- Package this project in a fat JAR.\n\n```shell\ncd java-spring-boot-mongodb-starter\nmvn clean package\n```\n\nIf everything went as planned, you should now have a JAR file available in your `target` folder\nnamed `java-spring-boot-mongodb-starter-1.0.0.jar`.\n\n## Create our microservice\n\nIn Azure, you can now click on `Go to resource` to access your new Azure Spring App.\n\n for\n the Java driver. 
It should look like this:\n\n```\nmongodb+srv://user:password@free.ab12c.mongodb.net/?retryWrites=true&w=majority\n```\n\n- Create a new environment variable in your configuration.\n\n, it's time\n> to create one and use the login and password in your connection string.\n\n## Atlas network access\n\nMongoDB Atlas clusters only accept TCP connections from known IP addresses.\n\nAs our Spring application will try to connect to our MongoDB cluster, we need to add the IP address of our microservice\nin the Atlas Network Access list.\n\n- Retrieve the outbound IP address in the `Networking` tab of our Azure Spring App.\n\n,\nyou can access the Swagger UI here:\n\n```\nhttps:///swagger-ui/index.html\n```\n\n and start exploring all the features\nMongoDB Atlas has to offer.\n\nGot questions or itching to share your success? Head over to\nthe MongoDB Community Forum \u2013 we're all ears and ready to help!\n\nCheers to your successful deployment, and here's to the exciting ventures ahead! Happy coding! \ud83d\ude80\n\n[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt85b83d544dd0ca8a/65b1e83a5cdaec024a3b7504/1_Azure_create_resource.png\n\n[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbb51a0462dcbee8f/65b1e83a60a275d0957fb596/2_Azure_marketplace.png\n\n[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf3dd72fc38c1ebb6/65b1e83a24ea49f803de48b9/3_Azure_create_spring_app.png\n\n[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltba32459974d4333e/65b1e83a5f12edbad7e207d2/4_Azure_create_spring_app_basics.png\n\n[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc13b9d359d2bea5d/65b1e83ad2067b1eef8c361a/5_Azure_review_create.png\n\n[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6c07487ae8a39b3a/65b1e83ae5c1f348ced943b7/6_Azure_go_to_resource.png\n\n[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt37a00fe46791bb41/65b1e83a292a0e1bf887c012/7_Azure_create_app.png\n\n[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt402b5ddcf552ae28/65b1e83ad2067bb08b8c361e/8_Azure_create_app_details.png\n\n[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf0d1d7e1a2d3aa85/65b1e83a7d4ae76ad397f177/9_Azure_access_new_microservice.png\n\n[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb238d1ab52de83f7/65b1e83a292a0e6c2c87c016/10_Azure_env_variable_mdb_uri.png\n\n[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt67bc373d0b45280e/65b1e83a5cdaec4f253b7508/11_Azure_networking_outbound.png\n\n[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt18a1a75423ce4fb4/65b1e83bc025eeec67b86d13/12_Azure_networking_Atlas.png\n\n[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdeb250a7e04c891b/65b1e83a41400c0b1b4571e0/13_Azure_deploy_app_tab.png\n\n[14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt390946eab49df898/65b1e83a7d4ae73fbb97f17b/14_Azure_deploy_app.png\n\n[15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7a33fb264f9fdc6f/65b1e83ac025ee08acb86d0f/15_Azure_app_deployed.png\n\n[16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2d899245c14fb006/65b1e83b92740682adeb573b/16_Azure_assign_endpoint.png\n\n[17]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc531bc65df82614b/65b1e83a450fa426730157f0/17_Azure_endpoint.png\n\n[18]: 
https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6bf26487ac872753/65b1e83a41400c1b014571e4/18_Azure_Atlas_doc.png\n\n[19]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt39ced6072046b686/65b1e83ad2067b0f2b8c3622/19_Azure_Swagger.png\n", "format": "md", "metadata": {"tags": ["Java", "Atlas", "Azure", "Spring"], "pageDescription": "Learn how to deploy your first Azure Spring Apps connected to MongoDB Atlas.", "contentType": "Tutorial"}, "title": "Getting Started With Azure Spring Apps and MongoDB Atlas: A Step-by-Step Guide", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/change-streams-with-kafka", "action": "created", "body": "# Migrating PostgreSQL to MongoDB Using Confluent Kafka\n\nIn today's data-driven world, businesses are continuously seeking innovative ways to harness the full potential of their data. One critical aspect of this journey involves data migration \u2013 the process of transferring data from one database system to another, often driven by evolving business needs, scalability requirements, or the desire to tap into new technologies.\n\nIn this era of digital transformation, where agility and scalability are paramount, organizations are increasingly turning to NoSQL databases like MongoDB for their ability to handle unstructured or semi-structured data at scale. On the other hand, relational databases like PostgreSQL have long been trusted for their robustness and support for structured data.\n\nAs businesses strive to strike the right balance between the structured and unstructured worlds of data, the question arises: How can you seamlessly migrate from a relational database like PostgreSQL to the flexible documented-oriented model of MongoDB while ensuring data integrity, minimal downtime, and efficient synchronization?\n\nThe answer lies in an approach that combines the power of Confluent Kafka, a distributed streaming platform, with the agility of MongoDB. In this article, we'll explore the art and science of migrating from PostgreSQL to MongoDB Atlas, leveraging Confluent Kafka as our data streaming bridge. We'll delve into the step-by-step tutorial that can make this transformation journey a success, unlocking new possibilities for your data-driven initiatives.\n\n## Kafka: a brief introduction\n\n### What is Apache Kafka?\nApache Kafka is an open-source distributed streaming platform developed by the Apache Software Foundation that is designed to handle real-time data streams.\n\nTo understand Kafka, imagine a busy postal system in a bustling city. In this city, there are countless businesses and individuals sending packages and letters to one another, and it's essential that these messages are delivered promptly and reliably.\n\nApache Kafka is like the central hub of this postal system, but it's not an ordinary hub; it's a super-efficient, high-speed hub with a memory that never forgets. When someone sends a message (data) to Kafka, it doesn't get delivered immediately. Instead, it's temporarily stored within Kafka's memory. Messages within Kafka are not just one-time deliveries. They can be read and processed by multiple parties. Imagine if every package or letter sent through the postal system had a copy available for anyone who wanted it. 
This is the core concept of Kafka: It's a distributed, highly scalable, and fault-tolerant message streaming platform.\n\nFrom maintaining real-time inventory information for e-commerce to supporting real-time patient monitoring, Kafka has varied business use cases in different industries and can be used for log aggregation and analysis, event sourcing, real-time analytics, data integration, etc.\n\n## Kafka Topics\nIn the same analogy of the postal system, the system collects and arranges its letters and packages into different sections and organizes them into compartments for each type of item. Kafka does the same. The messages it receives from the producer of data are arranged and organized into Kafka topics. Kafka topics are like different mailboxes where messages with a similar theme are placed, and various programs can send and receive these messages to exchange information. This helps keep data well-organized and ensures that the right people or systems can access the information they need from the relevant topic.\n\n## Kafka connectors\nKafka connectors are like special mailboxes that format and prepare letters (data) in a way that Kafka can understand, making it easier for data to flow between different systems. Say the sender (system) wants to send a letter (data) to the receiver (another system) using our postal system (Kafka). Instead of just dropping the letter in the regular mailbox, the sender places it in a special connector mailbox outside their house. This connector mailbox knows how to format the letter properly. So connectors basically act as a bridge that allows data to flow between Kafka and various other data systems.\n\n## Confluent Kafka\nConfluent is a company that builds tools and services. It has built tools and services for Apache Kafka to make it more robust and feature-rich. It is like working with a more advanced post office that not only receives and delivers letters but also offers additional services like certified mail, tracking, and package handling. The migration in this article is done using Confluent Kafka through its browser user interface.\n\n## Setting up a Confluent Kafka account\nTo begin with, you can set up an account on Confluent Kafka by registering on the Confluent Cloud website. You can sign up with your email account or using GitHub.\n\nOnce you log in, this is how the home page looks:\n\nThis free account comes with free credits worth $400 which you can use to utilize the resources in the Confluent Cloud. If your database size is small, your migration could also be completed within this free credit limit. If you go to the billing section, you can see the details regarding the credits.\n\nTo create a new cluster, topics, and connectors for your migration, click on the Environments tab from the side menu and create a new environment and cluster.\n\nYou can select the type of cluster. Select the type \u201cbasic\u201d which is the free tier with basic configuration. If you want to have a higher configuration for the cluster, you can select the \u201cstandard\u201d, \u201centerprise,\u201d or \u201cdedicated\u201d cluster types which have higher storage, partition, and uptime SLA respectively with hourly rates.\n\nNext, you can select the region/zone where your cluster has to be deployed along with the cloud provider you want for your cluster (AWS, GCP, or Azure ). 
The prerequisite for your data migration to work through Kafka connectors is that the Kafka cluster where you create your connectors should be in the same region as your MongoDB Atlas cluster to where you will migrate your PostgreSQL data.\n\nThen, you can provide your payment information and launch your cluster.\n\nOnce your cluster is launched, this is how the cluster menu looks with options to have a cluster overview and create topics and connectors, among other features.\n\nWith this, we are ready with the basic Kafka setup to migrate your data from PostgreSQL to MongoDB Atlas.\n\n## Setting up PostgreSQL test data\nFor this example walkthrough, if you do not have an existing PostgreSQL database that you would like to migrate to a MongoDB Atlas instance using Confluent Kafka, you can create a sample database in PostgreSQL by following the below steps and then continue with this tutorial.\n\n 1. Download PostgreSQL Database Server from the official website and start your instance locally.\n 2. Download the pgAdmin tool and connect to your local instance.\n 3. Create a database ```mytestdb``` and table ```users```, and put some sample data into the ```users``` table.\n```sql\n-- Create the database mytestdb\nCREATE DATABASE mytestdb;\n\n-- Connect to the mytestdb database\n\\c mytestdb;\n\n-- Create the users table\nCREATE TABLE users (\n id SERIAL PRIMARY KEY,\n firstname VARCHAR(50),\n lastname VARCHAR(50),\n age INT\n);\n\n-- Insert sample data into the 'users' table\nINSERT INTO users (firstname, lastname, age)\nVALUES\n ('John', 'Doe', 25),\n ('Jane', 'Smith', 30),\n ('Bob', 'Johnson', 22);\n```\nKeep in mind that the host where your PostgreSQL is running \u2014 in this case, your local machine \u2014 should have Confluent Kafka whitelisted in a firewall. Otherwise, the source connector will not be able to reach the PostgreSQL instance.\n\n## Steps for data migration using Confluent Kafka\nTo migrate the data from PostgreSQL to MongoDB Atlas, we have to configure a source connector to connect to PostgreSQL that will stream the data into the Confluent Cloud topic. Then, we will configure a sink connector for MongoDB Atlas to read the data from the created topic and write to the respective database in the MongoDB Atlas cluster.\n\n### Configuring the PostgreSQL source connector\nTo configure the PostgreSQL source connector, follow the below steps:\n\n 1. Click on the Connectors tab in your newly created cluster in Confluent. It will list popular plugins available in the Confluent Cloud. You can search for the \u201cpostgres source\u201d connector plugin and use that to create your custom connector to connect to your PostgreSQL database.\n\n 2. Next, you will be prompted for the topic prefix. Provide the name of the topic into which you want to stream your PostgreSQL data. If you leave it empty, the topic will be created with the table name for you.\n\n 3. You can then specify the access levels for the new connector you are creating. You can keep it global and also download the API credentials that you can use in your applications, if needed to connect to your cluster. For this migration activity, you will not need it \u2014 but you will need to create it to move to the next step.\n\n 4. Next, you will be prompted for connection details of PostgreSQL. You can provide the connection params, schema context, transaction isolation levels, poll intervals, etc. for the connection. \n\n 5. Select the output record type as JSON. MongoDB natively uses the JSON format. 
You will also have to provide the name of the table that you are trying to migrate.\n\n 6. In the next screen, you will be redirected to an overview page with all the configurations you provided in JSON format along with the cost for running this source connector per hour.\n\n 7. Once you create your source connector, you can see its status in the\n Connectors tab and if it is running or has failed. The source\n connector will start syncing the data to the Confluent Cloud topic\n immediately after starting up. You can check the number of messages\n processed by the connector by clicking on the new connector. If the\n connector has failed to start, you can check connector logs and\n rectify any issues by reconfiguring the connector settings.\n\n### Validating data in the new topic\nOnce your Postgres source connector is running, you can switch to the Topics tab to list all the topics in your cluster, and you will be able to view the new topic created by the source connector.\n\nIf you click on the newly created topic and navigate to the \u201cMessages\u201d tab, you will be able to view the processed messages. If you are not able to see any recent messages, you can check them by selecting the \u201cJump to time\u201d option, selecting the default partition 0, and providing a recent past time from the date picker. Here, my topic name is \u201cusers.\u201d\n\nBelow, you can see the messages processed into my \u201cusers\u201d topic from the users table in PostgreSQL.\n\n### Configuring the MongoDB Atlas sink connector\nNow that we have the data that you wanted to migrate (one table, in our example) in our Confluent Cloud topic, we can create a sink connector to stream that data into your MongoDB Atlas cluster. Follow the below steps to configure the data inflow:\n\n 1. Go to the Connectors tab and search for \u201cMongoDB Atlas Sink\u201d to find the MongoDB Atlas connector plugin that you will use to create your custom sink connector.\n\n 2. You will then be asked to select the topic for which you are creating this sink connector. Select the respective topic and click on \u201cContinue.\u201d\n\n 3. You can provide the access levels for the sink connector and also download the API credentials if needed, as in the case of the source connector.\n 4. In the next section, you will have to provide the connection details for your MongoDB Atlas cluster \u2014 including the hostname, username/password, database name, and collection name \u2014 into which you want to push the data. The connection string for Atlas will be in the format ```mongodb+srv://:@```, so you can get the details from this format. Remember that the Atlas cluster should be in the same region and hosted on the same cloud provider for the Kafka connector to be able to communicate with it. You have to add your Confluent cluster static IP address into the firewall\u2019s allowlist of MongoDB Atlas to allow the connections to your Altas cluster from Confluent Cloud. For non-prod environments, you can also add 0.0.0.0/0 to allow access from anywhere, but it is not recommended for a production environment as it is a security concern allowing any IP access.\n\n 5. You can select the Kafka input message type as JSON as in the case of the source connector and move to the final review page to view the configuration and cost for your new sink connector.\n\n 6. 
Once the connector has started, you can query the collection mentioned in your sink connector configuration and you would be able to see the data from your PostgreSQL table in the new collection of your MongoDB Atlas cluster.\n### Validating PostgreSQL to Atlas data migration\nThis data is synced in real-time from PostgreSQL to MongoDB Atlas using the source and sink connectors, so if you try adding a new record or updating/deleting existing records in PostgreSQL, you can see it reflect real-time in your MongoDB Atlas cluster collection, as well.\n\nIf your data set is huge, the connectors will catch up and process all the data in due time according to the data size. After completion of the data transfer, you can validate your MongoDB Atlas DB and stop the data flow by stopping the source and sink connectors directly from the Confluent Cloud Interface.\n\nUsing Kafka, not only can you sync the data using its event-driven architecture, but you can also transform the data in transfer in real-time while migrating it from PostgreSQL to MongoDB. For example, if you would like to rename a field or concat two fields into one for the new collection in Atlas, you can do that while configuring your MongoDB Atlas sink connector.\n\nLet\u2019s say PostgreSQL had the fields \u201cfirstname\u201d and \u201clastname\u201d for your \u201cusers\u201d table, and in MongoDB Atlas post-migration, you only want the \u201cname\u201d field which would be a concatenation of the two fields. This can be done using the \u201ctransform\u201d attribute in the sink connector configuration. This provides a list of transformations to apply to your data before writing it to the database. Below is an example configuration.\n```json\n{\n \"name\": \"mongodb-atlas-sink\",\n \"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSinkConnector\",\n \"tasks.max\": \"1\",\n \"topics\": \"your-topic-name\",\n \"connection.uri\": \"mongodb+srv://:@cluster.mongodb.net/test\",\n \"database\": \"your-database\",\n \"collection\": \"your-collection\",\n \"key.converter\": \"org.apache.kafka.connect.storage.StringConverter\",\n \"value.converter\": \"org.apache.kafka.connect.json.JsonConverter\",\n \"value.converter.schemas.enable\": \"false\",\n \"transforms\": \"addFields,unwrap\",\n \"transforms.addFields.type\": \"org.apache.kafka.connect.transforms.InsertField$Value\",\n \"transforms.addFields.static.field\": \"name\",\n \"transforms.addFields.static.value\": \"${r:firstname}-${r:lastname}\",\n \"transforms.unwrap.type\": \"io.debezium.transforms.UnwrapFromEnvelope\",\n \"transforms.unwrap.drop.tombstones\": \"false\",\n \"transforms.unwrap.delete.handling.mode\": \"none\"\n }\n}\n```\n\n## Relational Migrator: an intro\nAs we are discussing data migration from relational to MongoDB, it\u2019s worth mentioning the MongoDB Relational Migrator. This is a tool designed natively by MongoDB to simplify the process of moving data from relational databases into MongoDB. Relational Migrator analyzes your relational schema and gives recommendations for mapping to a new MongoDB schema.\n\nIts features \u2014 including schema analysis, data extraction, indexing, and validation \u2014 make it a valuable asset for organizations seeking to harness the benefits of MongoDB's NoSQL platform while preserving their existing relational data assets. 
Whether for application modernization, data warehousing, microservices, or big data analytics, this tool is a valuable asset for those looking to make the shift from relational to NoSQL databases. It helps to migrate from major relational database technologies including Oracle, SQL Server, MySQL, and PostgreSQL.\n\nGet more information and download and use relational migrator.\n\n## Conclusion\nIn the ever-evolving landscape of data management, MongoDB has emerged as a leading NoSQL database, known for its flexibility, scalability, and document-oriented structure. However, many organizations still rely on traditional relational databases to store their critical data. The challenge often lies in migrating data between these disparate systems efficiently and accurately.\n\nConfluent Kafka acts as a great leverage in this context with its event driven architecture and native support for major database engines including MongoDB Atlas.The source and sink connectors would have inbound and outbound data through Topics and acts as a platform for a transparent and hassle free data migration from relational to MongoDB Atlas cluster.\n", "format": "md", "metadata": {"tags": ["Atlas", "Java", "Kafka"], "pageDescription": "", "contentType": "Tutorial"}, "title": "Migrating PostgreSQL to MongoDB Using Confluent Kafka", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/vector-search-with-csharp-driver", "action": "created", "body": "# Adding MongoDB Atlas Vector Search to a .NET Blazor C# Application\n\nWhen was the last time you could remember the rough details of something but couldn\u2019t remember the name of it? That happens to quite a few people, so being able to search semantically instead of with exact text searches is really important.\n\nThis is where MongoDB Atlas Vector Search comes in useful. It allows you to perform semantic searches against vector embeddings in your documents stored inside MongoDB Atlas. Because the embeddings are stored inside Atlas, you can create the embeddings against any type of data, both structured and unstructured.\n\nIn this tutorial, you will learn how to add vector search with MongoDB Atlas Vector Search, using the MongoDB C# driver, to a .NET Blazor application. The Blazor application uses the sample_mflix database, available in the sample dataset anyone can load into their Atlas cluster. You will add support for searching semantically against the plot field, to find any movies that might fit the plot entered into the search box.\n\n## Prerequisites\nIn order to follow along with this tutorial, you will need a few things in place before you start:\n\n 1. .NET 8 SDK installed on your computer\n 2. An IDE or text editor that can support C# and Blazor for the most seamless development experience, such as Visual Studio, Visual Studio Code with the C# DevKit Extension installed, or JetBrains Rider\n 3. An Atlas M0 cluster, our free forever tier, perfect for development\n 4. Your cluster connection string\n 5. A local copy of the Hugging Face Dataset Upload tool\n 6. A fork and clone of the See Sharp Movies GitHub repo that we will be adding search to\n 7. 
An OpenAI account and a free API key generated \u2014 you will use the OpenAI API to create a vector embedding for our search term\n\n> Once you have forked and then cloned the repo and have it locally, you will need to add your connection string into ```appsettings.Development.json``` and ```appsettings.json``` in the placeholder section in order to connect to your cluster when running the project.\n\n> If you don\u2019t want to follow along, the repo has a branch called \u201cvector-search\u201d which has the final result implemented. However, you will need to ensure you have the embedded data in your Atlas cluster.\n\n## Getting our embedded data into Atlas\nThe first thing you need is some data stored in your cluster that has vector embeddings available as a field in your documents. MongoDB has already provided a version of the movies collection from sample_mflix, called embedded_movies, which has 1500 documents, using a subset of the main movies collection which has been uploaded as a dataset to Hugging Face that will be used in this tutorial.\n\nThis is where the Hugging Face Dataset Uploader downloaded as part of the prerequisites comes in. By running this tool using ```dotnet run``` at the root of the project, and passing your connection string into the console when asked, it will go ahead and download the dataset from Hugging Face and then upload that into an ```embedded_movies``` collection inside the ```sample_mflix``` database. If you haven\u2019t got the same dataset loaded so this database is missing, it will even just create it for you thanks to the C# driver!\n\nYou can generate vector embeddings for your own data using tools such as Hugging Face, OpenAI, LlamaIndex, and others. You can read more about generating embeddings using open-source models by reading a tutorial from Prakul Agarwal on Generative AI, Vector Search, and open-source models here on Developer Center.\n\n## Creating the Vector Search index\nNow you have a collection of movie documents with a ```plot_embedding``` field of vector embeddings for each document, it is time to create the Atlas Vector Search index. This is to enable vector search capabilities on the cluster and to let MongoDB know where to find the vector embeddings.\n\n 1. Inside Atlas, click \u201cBrowse Collections\u201d to open the data explorer to view your newly loaded sample_mflix database.\n 2. Select the \u201cAtlas Search\u201d tab at the top.\n 3. Click the green \u201cCreate Search Index\u201d button to load the index creation wizard.\n 4. Select JSON Editor under the Vector Search heading and then click \u201cNext.\u201d\n 5. Select the embedded_movies collection under sample_mflix from the left.\n 6. The name doesn\u2019t matter hugely here, as long as you remember it for later but for now, leave it as the default value of \u2018vector_index\u2019.\n 7. Copy and paste the following JSON in, replacing the current contents of the box in the wizard:\n\n```json\n{\n \"fields\": \n {\n \"type\": \"vector\",\n \"path\": \"plot_embedding\",\n \"numDimensions\": 1536,\n \"similarity\": \"dotProduct\"\n }\n ]\n}\n``` \nThis contains a few fields you might not have seen before.\n\n - path is the name of the field that contains the embeddings. 
In the case of the dataset from Hugging Face, this is plot_embedding.\n - numDimensions refers to the dimensions of the model used.\n - similarity refers to the type of function used to find similar results.\n\nCheck out the [Atlas Vector Search documentation to learn more about these configuration fields.\n\nClick \u201cNext\u201d and on the next page, click \u201cCreate Search Index.\u201d\n\nAfter a couple of minutes, the vector search index will be set up, you will be notified by email, and the application will be ready to have vector search added.\n\n## Adding the backend functionality\nYou have the data with plot embeddings and a vector search index created against that field, so it is time to start work on the application to add search, starting with the backend functionality.\n### Adding OpenAI API key to appsettings\nThe OpenAI API key will be used to request embeddings from the API for the search term entered since vector search understands numbers and not text. For this reason, the application needs your OpenAI API key to be stored for use later. \n\n 1. Add the following into the root of your ```appsettings.Development.json``` and ```appsettings.json```, after the MongoDB section, replacing the placeholder text with your own key:\n```json\n\"OpenAPIKey\": \"\"\n```\n 2. Inside ```program.cs```, after the creation of the var builder, add the following line of code to pull in the value from app config:\n\n```csharp\nvar openAPIKey = builder.Configuration.GetValue(\"OpenAPIKey\");\n```\n 3. Change the code that creates the MongoDBService instance to also pass in the ```openAPIKey variable```. You will change the constructor of the class later to make use of this.\n\n```csharp\nbuilder.Services.AddScoped(service => new MongoDBService(mongoDBSettings, openAPIKey));\n```\n\n### Adding a new method to IMongoDBService.cs\nYou will need to add a new method to the interface that supports search, taking in the term to be searched against and returning a list of movies that were found from the search.\n\nOpen ```IMongoDBService.cs``` and add the following code:\n\n```csharp\npublic IEnumerable MovieSearch(string textToSearch);\n```\n### Implementing the method in MongoDBService.cs\nNow to make the changes to the implementation class to support the search.\n\n 1. Open ```MongoDBService.cs``` and add the following using statements to the top of the file:\n```csharp\nusing System.Text; \nusing System.Text.Json;\n```\n 2. Add the following new local variables below the existing ones at the top of the class:\n```csharp\n private readonly string _openAPIKey;\n private readonly HttpClient _httpClient = new HttpClient();\n```\n 3. Update the constructor to take the new openAPIKey string parameter, as well as the MongoDBSettings parameter. It should look like this:\n```csharp\npublic MongoDBService(MongoDBSettings settings, string openAPIKey)\n```\n 4. Inside the constructor, add a new line to assign the value of openAPIKey to _openAPIKey.\n 5. 
Also inside the constructor, update the collection name from \u201cmovies\u201d to \u201cembedded_movies\u201d where it calls ```.GetCollection```.\n\nThe following is what the completed constructor should look like:\n```csharp\npublic MongoDBService(MongoDBSettings settings, string openAPIKey)\n{\n _client = new MongoClient(settings.AtlasURI);\n _mongoDatabase = _client.GetDatabase(settings.DatabaseName);\n _movies = _mongoDatabase.GetCollection(\"embedded_movies\");\n _openAPIKey = openAPIKey;\n}\n```\n### Updating the Movie model\nThe C# driver acts as an object document mapper (ODM), taking care of mapping between a plain old C# object (POCO) that is used in C# and the documents in your collection.\n\nHowever, the existing movie model fields need updating to match the documents inside your embedded_movies collection.\n\nReplace the contents of ```Models/Movie.cs``` with the following code:\n\n```csharp\nusing MongoDB.Bson;\nusing MongoDB.Bson.Serialization.Attributes;\n\nnamespace SeeSharpMovies.Models;\n\npublic class Movie\n{\n BsonId]\n [BsonElement(\"_id\")]\n public ObjectId Id { get; set; }\n \n [BsonElement(\"plot\")]\n public string Plot { get; set; }\n\n [BsonElement(\"genres\")] \n public string[] Genres { get; set; }\n\n [BsonElement(\"runtime\")]\n public int Runtime { get; set; }\n\n [BsonElement(\"cast\")]\n public string[] Cast { get; set; }\n\n [BsonElement(\"num_mflix_comments\")]\n public int NumMflixComments { get; set; }\n\n [BsonElement(\"poster\")]\n public string Poster { get; set; }\n\n [BsonElement(\"title\")]\n public string Title { get; set; }\n\n [BsonElement(\"fullplot\")]\n public string FullPlot { get; set; }\n\n [BsonElement(\"languages\")]\n public string[] Languages { get; set; }\n\n [BsonElement(\"directors\")]\n public string[] Directors { get; set; }\n\n [BsonElement(\"writers\")]\n public string[] Writers { get; set; }\n\n [BsonElement(\"awards\")]\n public Awards Awards { get; set; }\n \n [BsonElement(\"year\")]\n public string Year { get; set; }\n\n [BsonElement(\"imdb\")]\n public Imdb Imdb { get; set; }\n\n [BsonElement(\"countries\")]\n public string[] Countries { get; set; }\n\n [BsonElement(\"type\")]\n public string Type { get; set; }\n\n [BsonElement(\"plot_embedding\")]\n public float[] PlotEmbedding { get; set; }\n\n}\n\npublic class Awards\n{\n [BsonElement(\"wins\")]\n public int Wins { get; set; }\n \n [BsonElement(\"nominations\")]\n public int Nominations { get; set; }\n \n [BsonElement(\"text\")]\n public string Text { get; set; }\n}\n\npublic class Imdb\n{\n [BsonElement(\"rating\")]\n public float Rating { get; set; }\n \n [BsonElement(\"votes\")]\n public int Votes { get; set; }\n \n [BsonElement(\"id\")]\n public int Id { get; set; }\n}\n```\n\nThis contains properties for all the fields in the document, as well as classes and properties representing subdocuments found inside the movie document, such as \u201ccritic.\u201d You will also note the use of the BsonElement attribute, which tells the driver how to map between the field names and the property names due to their differing naming conventions.\n\n### Adding an EmbeddingResponse model\nIt is almost time to start implementing the search on the back end. When calling the OpenAI API\u2019s embedding endpoint, you will get back a lot of data, including the embeddings. 
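For reference, an abridged sketch of the response body looks roughly like this (the real `embedding` array returned by `text-embedding-ada-002` holds 1,536 floats, and the token counts will vary):\n\n```json\n{\n \"object\": \"list\",\n \"data\": [\n {\n \"object\": \"embedding\",\n \"index\": 0,\n \"embedding\": [0.0023, -0.0094, 0.0157]\n }\n ],\n \"model\": \"text-embedding-ada-002\",\n \"usage\": {\n \"prompt_tokens\": 4,\n \"total_tokens\": 4\n }\n}\n```\n\n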
The easiest way to handle this is to create an EmbeddingResponse.cs class that models this response for use later.\n\nAdd a new class called EmbeddingResponse inside the Model folder and replace the contents of the file with the following:\n\n```csharp\nnamespace SeeSharpMovies.Models\n{\n public class EmbeddingResponse\n {\n public string @object { get; set; }\n public List data { get; set; }\n public string model { get; set; }\n public Usage usage { get; set; }\n }\n\n public class Data\n {\n public string @object { get; set; }\n public int index { get; set; }\n public List embedding { get; set; }\n }\n\n public class Usage\n {\n public int prompt_tokens { get; set; }\n public int total_tokens { get; set; }\n }\n}\n```\n### Adding a method to request embeddings for the search term\nIt is time to make use of the API key for OpenAI and write functionality to create vector embeddings for the searched term by calling the [OpenAI API Embeddings endpoint.\n\nInside ```MongoDBService.cs```, add the following code:\n\n```csharp\nprivate async Task> GetEmbeddingsFromText(string text)\n{\n Dictionary body = new Dictionary\n {\n { \"model\", \"text-embedding-ada-002\" },\n { \"input\", text }\n };\n\n _httpClient.BaseAddress = new Uri(\"https://api.openai.com\");\n _httpClient.DefaultRequestHeaders.Add(\"Authorization\", $\"Bearer {_openAPIKey}\");\n\n string requestBody = JsonSerializer.Serialize(body);\n StringContent requestContent =\n new StringContent(requestBody, Encoding.UTF8, \"application/json\");\n\n var response = await _httpClient.PostAsync(\"/v1/embeddings\", requestContent)\n .ConfigureAwait(false);\n\n if (response.IsSuccessStatusCode)\n {\n string responseBody = await response.Content.ReadAsStringAsync();\n EmbeddingResponse embeddingResponse = JsonSerializer.Deserialize(responseBody);\n return embeddingResponse.data0].embedding;\n }\n\n return new List();\n}\n```\n\nThe body dictionary is needed by the API to know the model used and what the input is. The text-embedding-ada-002 model is the default text embedding model.\n\n### Implementing the SearchMovie function\nThe GetEmbeddingsFromText method returned the embeddings for the search term, so now it is available to be used by Atlas Vector Search and the C# driver.\n\nPaste the following code to implement the search:\n\n```csharp\npublic IEnumerable MovieSearch(string textToSearch)\n{\n\n var vector = GetEmbeddingsFromText(textToSearch).Result.ToArray();\n\n var vectorOptions = new VectorSearchOptions()\n {\n IndexName = \"vector_index\",\n NumberOfCandidates = 150\n };\n\n var movies = _movies.Aggregate()\n .VectorSearch(movie => movie.PlotEmbedding, vector, 150, vectorOptions)\n .Project(Builders.Projection\n .Include(m => m.Title)\n .Include(m => m.Plot)\n .Include(m => m.Poster)) \n .ToList();\n\n return movies;\n}\n```\n\n> If you chose a different name when creating the vector search index earlier, make sure to update this line inside vectorOptions.\n\nVector search is available inside the C# driver as part of the aggregation pipeline. 
It takes four arguments: the field name with the embeddings, the vector embeddings of the searched term, the number of results to return, and the vector options.\n\nFurther methods are then chained on to specify what fields to return from the resulting documents.\n\nBecause the movie document has changed slightly, the current code inside the ```GetMovieById``` method is no longer correct.\n\nReplace the current line that calls ```.Find``` with the following:\n\n```csharp\n var movie = _movies.Find(movie => movie.Id.ToString() == id).FirstOrDefault();\n```\n\nThe back end is now complete and it is time to move on to the front end, adding the ability to search on the UI and sending that search back to the code we just wrote.\n\n## Adding the frontend functionality\nThe frontend functionality will be split into two parts: the code in the front end for talking to the back end, and the search bar in HTML for typing into.\n### Adding the code to handle search\nAs this is an existing application, there is already code available for pulling down the movies and even pagination. This is where you will be adding the search functionality, and it can be found inside ```Home.razor``` in the ```Components/Pages``` folder.\n\n 1. Inside the ```@code``` block, add a new string variable for searchTerm:\n```csharp\n string searchTerm;\n```\n 2. Paste the following new method into the code block:\n```csharp\nprivate void SearchMovies()\n{\n if (string.IsNullOrWhiteSpace(searchTerm))\n {\n movies = MongoDBService.GetAllMovies();\n }\n else\n {\n movies = MongoDBService.MovieSearch(searchTerm);\n }\n}\n```\nThis is quite straightforward. If the searchTerm string is empty, then show everything. Otherwise, search on it.\n\n### Adding the search bar\nAdding the search bar is really simple. It will be added to the header component already present on the home page.\n\nReplace the existing header tag with the following HTML:\n\n```html\n\n See Sharp Movies\n \n \n Search\n \n \n```\n\nThis creates a search input with the value being bound to the searchTerm string and a button that, when clicked, calls the SearchMovies method you just called.\n\n### Making the search bar look nicer\nAt this point, the functionality is implemented. But if you ran it now, the search bar would be in a strange place in the header, so let\u2019s fix that, just for prettiness.\n\nInside ```wwwroot/app.css```, add the following code:\n\n```css\n.search-bar {\n padding: 5%;\n}\n\n.search-bar button {\n padding: 4px;\n}\n```\n\nThis just gives the search bar and the button a bit of padding to make it position more nicely within the header. Although it\u2019s not perfect, CSS is definitely not my strong suit. C# is my favorite language!\n\n## Testing the search\nWoohoo! We have the backend and frontend functionality implemented, so now it is time to run the application and see it in action!\n\nRun the application, enter a search term in the box, click the \u201cSearch\u201d button, and see what movies have plots semantically close to your search term.\n\n![Showing movie results with a plot similar to three young men and a sword\n\n## Summary\nAmazing! You now have a working Blazor application with the ability to search the plot by meaning instead of exact text. 
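If you want to inspect the query outside of C#, the driver\u2019s `VectorSearch` stage corresponds to the `$vectorSearch` aggregation stage. Here is a rough JSON sketch of the first stage the code above builds (a sketch rather than the exact command the driver sends; the placeholder string stands in for the real 1,536-float embedding of the search term, and the index name assumes the `vector_index` created earlier):\n\n```json\n{\n \"$vectorSearch\": {\n \"index\": \"vector_index\",\n \"path\": \"plot_embedding\",\n \"queryVector\": [\"<1,536-float embedding of the search term>\"],\n \"numCandidates\": 150,\n \"limit\": 150\n }\n}\n```\n\n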
This is also a great starting point for implementing more vector search capabilities into your application.\n\nIf you want to learn more about Atlas Vector Search, you can read our documentation.\nMongoDB also has a space on Hugging Face where you can see some further examples of what can be done and even play with it. Give it a go!\n\nThere is also an amazing article on using Vector Search for audio co-written by Lead Developer Advocate at MongoDB Pavel Duchovny.\n\nIf you have questions or feedback, join us in the Community Forums.\n", "format": "md", "metadata": {"tags": ["C#", ".NET"], "pageDescription": "Learn how to get started with Atlas Vector Search in a .NET Blazor application with the C# driver, including embeddings and adding search functionality.\n", "contentType": "Tutorial"}, "title": "Adding MongoDB Atlas Vector Search to a .NET Blazor C# Application", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/modernizing-rdbms-schemas-mongodb-document", "action": "created", "body": "# Modernizing RDBMS Schemas With a MongoDB Document Model\n\nWelcome to the exciting journey of transitioning from the traditional realm of relational databases to the dynamic world of MongoDB! This is the first entry in a series of tutorials helping you migrate from relational databases to MongoDB. Buckle up as we embark on a thrilling adventure filled with schema design, data modeling, and the wonders of the document model. Say goodbye to the rigid confines of tables and rows, and hello to the boundless possibilities of collections and documents. In this tutorial, we'll unravel the mysteries of MongoDB's schema design, exploring how to harness its flexibility to optimize your data storage like never before, using the Relational Migrator tool!\n\nThe migration from a relational database to MongoDB involves several stages. Once you've determined your database and application requirements, the initial step is schema design. This process involves multiple steps, all centered around how you intend to access your data. In MongoDB, data accessed together should be stored together. Let's delve into the schema design process.\n\n## Schema design\nThe most fundamental difference between the world of relational databases and MongoDB is how your data is modeled. There are some terminology changes to keep in mind when moving from relational databases to MongoDB:\n\n| RDBMS | MongoDB |\n|--------------------|--------------------------------------------------|\n| Database | Database |\n| Table | Collection |\n| Row | Document |\n| Column | Field |\n| Index | Index |\n| JOIN | Embedded document, document references, or $lookup to combine data from different collections |\n\nTransitioning from a relational database to MongoDB offers several advantages due to the flexibility of JSON (JavaScript Object Notation) documents. MongoDB's BSON (Binary JSON) encoding extends JSON's capabilities by including additional data types like int, decimal, dates, and more, making it more efficient for representing complex data structures.\n\nDocuments in MongoDB, with features such as sub-documents (embedded documents) and arrays, align well with the structure of application-level objects. 
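As a purely illustrative sketch (not taken from any sample dataset), a hypothetical customer document could embed the addresses and orders that a relational schema would normally spread across separate tables:\n\n```json\n{\n \"_id\": 1,\n \"name\": \"Jane Doe\",\n \"addresses\": [\n { \"type\": \"home\", \"city\": \"Chicago\", \"zip\": \"60601\" },\n { \"type\": \"work\", \"city\": \"Chicago\", \"zip\": \"60606\" }\n ],\n \"orders\": [\n { \"order_id\": 101, \"total\": 42.5, \"items\": [\"notebook\", \"pen\"] }\n ]\n}\n```\n\n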
This alignment simplifies data mapping for developers, as opposed to the complexities of mapping object representations to tabular structures in relational databases, which can slow down development, especially when using Object Relational Mappers (ORMs).\n\nWhen designing schemas for MongoDB, it's crucial to consider the application's requirements and leverage the document model's flexibility. While mirroring a relational database's flat schema in MongoDB might seem straightforward, it undermines the benefits of MongoDB's embedded data structures. For instance, MongoDB allows collapsing (embedding) data belonging to a parent-child relationship in relational databases into a single document, enhancing efficiency and performance. It's time to introduce a powerful tool that will streamline your transition from relational databases to MongoDB: the Relational Migrator. \n\n### Relational Migrator \nThe transition from a relational database to MongoDB is made significantly smoother with the help of the Relational Migrator. The first step in this process is a comprehensive analysis of your existing relational schema. The Relational Migrator examines your database, identifying tables, relationships, keys, and other elements that define the structure and integrity of your data. You can connect to a live database or load a .SQL file containing Data Defining Language (DDL) statements. For this tutorial, I\u2019m just going to use the sample schema available when you click **create new project**.\n\nThe first screen you\u2019ll see is a diagram of your relational database relationships. This lays the groundwork by providing a clear picture of your current data model, which is instrumental in devising an effective migration strategy. By understanding the intricacies of your relational schema, the Relational Migrator can make informed suggestions on how to best transition this structure into MongoDB's document model.\n\n.\n\nIn MongoDB, data that is accessed together should be stored together. This allows the avoidance of resource-intensive `$lookup` operations where not necessary. Evaluate whether to embed or reference data based on how it's accessed and updated. Remember, embedding can significantly speed up read operations but might complicate updates if the embedded data is voluminous or frequently changed. Use the Relational Migrator's suggestions as a starting point but remain flexible. Not every recommendation will be applicable, especially as you project data growth and access patterns into the future.\n\nYou may be stuck, staring at the daunting representation of your tables, wondering how to reduce this to a manageable number of collections that best meets your needs. Select any collection to see a list of suggestions for how to represent your data using embedded arrays or documents. Relational Migrator will show all the relationships in your database and how you can represent them in MongoDB, but they might not all be appropriate for application. In my example, I have selected the products collection.\n\n. Use the migrator\u2019s suggestions to iteratively refine your new schema, and understand the suggestions are useful, but not all will make sense for you.\n\n contains all the information you need.\n\n### Data modeling templates\nIt can be difficult to understand how best to store your data in your application, especially if you\u2019re new to MongoDB. MongoDB Atlas offers a variety of data modeling templates that are designed to demonstrate best practices for various use cases. 
To find them, go to your project overview and you'll see the \"Data Toolkit.\" Under this header, click the \"Data Modeling Templates.\" These templates are there to serve as a good starting point to demonstrate best practices depending on how you plan on interacting with your data. \n\n \u2014 or pop over to our community forums to see what others are doing with MongoDB.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7e10c8890fd54c23/65e8774877faff0e5a5a5cb8/image8.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt06b47b571be972b5/65e877485fd476466274f9ba/image5.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6119b9e0d49180e0/65e877478b9c628cfa46feed/image1.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5542e55fba7a3659/65e8774863ec424da25d87e0/image4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4e6f30cb08ace8c0/65e877478b9c62d66546fee9/image2.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt03ad163bd859e14c/65e877480395e457c2284cd8/image3.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1343d17dd36fd642/65e8774803e4602da8dc3870/image7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt68bfa90356269858/65e87747105b937781a86cca/image6.png", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Move from a relational database to MongoDB, and learn to use the document model.", "contentType": "Tutorial"}, "title": "Modernizing RDBMS Schemas With a MongoDB Document Model", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/query-analytics-part-2", "action": "created", "body": "# Query Analytics Part 2: Tuning the System\n\nIn Part 1: Know Your Queries][1], we demonstrated the importance of monitoring and tuning your search system and the dramatic effect it can have on your business. In this second part, we are going to delve into the technical techniques available to tune and adjust based on your query result analysis.\n\n> [Query Analytics][2] is available in public preview for all MongoDB Atlas clusters on an M10 or higher running MongoDB v5.0 or higher to view the analytics information for the tracked search terms in the Atlas UI. Atlas Search doesn't track search terms or display analytics for queries on free and shared-tier clusters.\n\n[Atlas Search Query Analytics][3] focuses entirely on the frequency and number of results returned from each $search call. There are also a number of [search metrics available for operational monitoring including CPU, memory, index size, and other useful data points.\n\n# Insightful actions\n\nThere are a few big categories of actions we can take based on search query analysis insights, which are not mutually exclusive and often work in synergy with one another.\n\n## User experience\n\nLet\u2019s start with the user experience itself, from the search box down to the results presentation. You\u2019re following up on a zero-results query: What did the user experience when this occurred? Are you only showing something like \u201cSorry, nothing found. Try again!\u201d? 
Consider showing documents the user has previously engaged with, providing links, or automatically searching for looser queries, perhaps removing some of the user's query terms and asking, \u201cDid you mean this?\u201d While the user was typing the query, are you providing autosuggest/typeahead so that typos get corrected in the full search query?\n\nFor queries that return results, is there enough information provided in the user interface to allow the user to refine the results? \n\nConsider these improvements:\n\n* Add suggestions as the user is typing, which can be facilitated by leveraging ngrams via the autocomplete operator or building a specialized autocomplete collection and index for this purpose.\n* Add faceted navigation, allowing the user to drill into specific categories and narrow the results shown.\n* Provide moreLikeThis queries to broaden results.\n\n## Query construction\n\nHow the queries are constructed is half the trick to getting great search results. (The other half is how your content is indexed.) The search terms the user entered are the key to the Query Analytics tracking, but behind the scenes, there\u2019s much more to the full search request.\n\nYour user interface provides the incoming search terms and likely, additional parameters. It\u2019s up to the application tier to construct the $search-using aggregation pipeline from those parameters. \n\nHere are some querying techniques that can influence the quality of the search results:\n\n* Incorporate synonyms, perhaps in a relevancy-weighted fashion where non-synonymed clauses are boosted higher than clauses with synonyms added.\n* Leverage compound.should clauses to allow the underlying relevancy computations to work their magic. Spreading query terms across multiple fields \u2014 with independent scoring boosts representing the importance, or weight, of each field \u2014\u00a0allows the best documents to bubble up higher in the results but still provides all matching documents to be returned. For example, a query of \u201cthe matrix\u201d in the movies collection would benefit from boosting `title` higher than `plot`.\n* Use multi-analysis querying. Take advantage of a field being analyzed in multiple ways. Boost exact matches highest, and have less exact and fuzzier matches weighted lower. See the \u201cIndex configuration\u201d section below.\n\n## Index configuration\n\nIndex configuration is the other half of great search results and relies on how the underlying search indexes are built from your documents. Here are some index configuration techniques to consider: \n\n* Multi-analysis: Configure your main content fields to be analyzed/tokenized in various ways, ranging from exact (`token` type) to near-exact (lowercased `token`, diacritics normalized) to standard tokenized (whitespace and special characters ignored) to language-specific analysis, down to fuzzy. \n* Language considerations: If you know the language of the content, use that to your advantage by using the appropriate language analyzer. Consider doing this in a multi-analysis way so that at query time, you can incorporate language-specific considerations into the relevancy computations.\n\nWe\u2019re going to highlight a few common Atlas Search-specific adjustments to consider.\n\n## Adding synonyms\n\nWhy didn\u2019t \u201cJacky Chan\u201d match any of the numerous movies that should have matched? 
First of all, his name is spelled \u201cJackie Chan,\u201d so the user made a spelling mistake and we have no exact match of the misspelled name. (This is where $match will always fail, and a fuzzier search option is needed.) It turns out our app was doing `phrase` queries. We loosened this by adding in some additional `compound.should` clauses using a fuzzy `text` operator, and also went ahead and added a \u201cjacky\u201d/\u201cjackie\u201d synonym equivalency for good measure. By making these changes, over time, we will see that the number of occurrences for \u201cJacky Chan'' in the \u201cTracked Queries with No Results\u201d will go down. \n\nThe `text` operator provides query-time synonym expansion. Synonyms can be bi-directional or unidirectional. Bi-directional synonyms are called `equivalent` in Atlas Search synonym mappings) \u2014 for example, \u201ccar,\u201d \u201cautomobile,\u201d and \u201cvehicle\u201d \u2014\u00a0so a query containing any one of those terms would match documents containing any of the other terms, as well. These words are \u201cequivalent\u201d because they can all be used interchangeably. Uni-directional synonyms are `explicit` mappings \u2014 say \u201canimal\u201d -> \u201cdog\u201d and \u201canimal\u201d -> \u201ccat\u201d \u2014\u00a0such that a query for \u201canimal\u201d will match documents with \u201ccat\u201d or \u201cdog,\u201d but a query for \u201cdog\u201d will only be for just that: \u201cdog.\u201d\n\n## Enhancing query construction\n\nUsing a single operator, like `text` over a wildcard path, facilitates findability (\u201crecall\u201d in information retrieval speak) but does not help with *relevancy* where the best matching documents bubble to the top of the results. An effective way to improve relevancy is to add variously boosted clauses to weight some fields higher than others.\n\nIt\u2019s generally a good idea to include a `text` operator within a `compound.should` to allow for synonyms to come into play (the `phrase` operator currently does not support synonym expansion) along with additional `phrase` clauses that more precisely match what the user typed. Add `fuzzy` to the `text` operator to match in spite of slight typos/variations of words. \n\nYou may note that Search Tester currently goes really *wild* with a wildcard `*` path to match across all textually analyzed fields; consider the field(s) that really make the most sense to be searched, and whether separate boosts should be assigned to them for fine-tuning relevancy. Using a `*` wildcard is not going to give you the best relevancy because each field has the same boost weight. It can cause objectively bad results to get higher relevancy than they should. Further, a wildcard\u2019s performance is impacted by how many fields you have across your entire collection, which may increase as you add documents. \n\nAs an example, let\u2019s suppose our search box powers movie search. 
Here\u2019s what a relevancy-educated first pass looks like for a query of \u201cpurple rain,\u201d generated from our application, first in prose: Consider query term (OR\u2019d) matches in `title`, `cast`, and `plot` fields, boosting matches in those fields in that order, and let\u2019s boost it all the way up to 11 when the query matches a phrase (the query terms in sequential order) of any of those fields.\n\nNow, in Atlas $search syntax, the main query operator becomes a `compound` of a handful of `should`s with varying boosts:\n\n```\n\"compound\": {\n \"should\": \n {\n \"text\": {\n \"query\": \"purple rain\",\n \"path\": \"title\",\n \"score\": {\n \"boost\": {\n \"value\": 3.0\n }\n }\n }\n },\n {\n \"text\": {\n \"query\": \"purple rain\",\n \"path\": \"cast\",\n \"score\": {\n \"boost\": {\n \"value\": 2.0\n }\n }\n }\n },\n {\n \"text\": {\n \"query\": \"purple rain\",\n \"path\": \"plot\",\n \"score\": {\n \"boost\": {\n \"value\": 1.0\n }\n }\n }\n },\n {\n \"phrase\": {\n \"query\": \"purple rain\",\n \"path\": [\n \"title\",\n \"phrase\",\n \"cast\"\n ],\n \"score\": {\n \"boost\": {\n \"value\": 11.0\n }\n }\n }\n }\n ]\n}\n```\n\nNote the duplication of the user\u2019s query in numerous places in that $search stage. This deserves a little bit of coding on your part, parameterizing values, providing easy, top-of-the code or config file adjustments to these boosting values, field names, and so on, to make creating these richer query clauses straightforward in your environment.\n\nThis kind of spreading a query across independently boosted fields is the first key to unlocking better relevancy in your searches. The next key is to query with different analyses, allowing various levels of exactness to fuzziness to have independent boosts, and again, these could be spread across differently weighted paths of fields. \n\nThe next section details creating multiple analyzers for fields; imagine plugging those into the `path`s of another bunch of `should` clauses! Yes, you can get carried away with this technique, though you should start simple. Often, boosting fields independently and appropriately for your domain is all one needs for Pretty Good Findability and Relevancy.\n\n## Field analysis configuration\n\nHow your data is indexed determines whether, and how, it can be matched with queries, and thus affects the results your users experience. Adjusting field index configuration could change a search request from finding no documents to matching as expected (or vice versa!). Your index configuration is always a work in progress, and Query Analytics can help track that progress. It will evolve as your querying needs change. \n\nIf you\u2019ve set up your index entirely with dynamic mappings, you\u2019re off to a great start! You\u2019ll be able to query your fields in data type-specific ways \u2014 numerically, by date ranges, filtering and matching, even regexing on string values. Most interesting is the query-ability of analyzed text. String field values are _analyzed_. By default, in dynamic mapping settings, each string field is analyzed using the `lucene.standard` analyzer. This analyzer does a generally decent job of splitting full-text strings into searchable terms (i.e., the \u201cwords\u201d of the text). This analyzer doesn\u2019t do any language-specific handling. 
So, for example, the words \u201cfind,\u201d \u201cfinding,\u201d and \u201cfinds\u201d are all indexed as unique terms with standard/default analysis but would be indexed as the same stemmed term when using `lucene.english`.\n\n### What\u2019s in a word?\n\nApplying some domain- and data-specific knowledge, we can fine-tune how terms are indexed and thus how easily findable and relevant they are to the documents. Knowing that our movie `plot` is in English, we can switch the analyzer to `lucene.english`, opening up the findability of movies with queries that come close to the English words in the actual `plot`. Atlas Search has over 40 [language-specific analyzers available.\n\n### Multi-analysis \n\nQuery Analytics will point you to underperforming queries, but it\u2019s up to you to make adjustments. To emphasize an important point that is being reiterated here in several ways, how your content is indexed affects how it can be queried, and the combination of both how content is indexed and how it is queried controls the order in which results are returned (also referred to as relevancy). One really useful technique available with Atlas Search is called Multi Analyzer, empowering each field to be indexed using any number of analyzer configurations. Each of these configurations is indexed independently (its own inverted index, term dictionary, and all that). \n\nFor example, we could index the title field for autocomplete purposes, and we could also index it as English text, then phonetically. We could also use our custom defined analyzer (see below) for term shingling, as well as our index-wide analyzer, defaulting to `lucene.standard` if not specified. \n\n```\n\"title\": \n {\n \"foldDiacritics\": false,\n \"maxGrams\": 7,\n \"minGrams\": 3,\n \"tokenization\": \"nGram\",\n \"type\": \"autocomplete\"\n },\n {\n \"multi\": {\n \"english\": {\n \"analyzer\": \"lucene.english\",\n \"type\": \"string\"\n },\n \"phonetic\": {\n \"analyzer\": \"custom.phonetic\",\n \"type\": \"string\"\n },\n \"shingles\": {\n \"analyzer\": \"custom.shingles\",\n \"type\": \"string\"\n }\n },\n \"type\": \"string\"\n}\n```\n\nAs they are indexed independently, they are also queryable independently. With this configuration, titles can be queried phonetically (\u201ckat in the hat\u201d), using English-aware stemming (\u201cfind nemo\u201d), or with shingles (such that \u201cthe purple rain\u201d queries can create \u201cpurple rain\u201d phrase queries).\n\nExplore the available built-in [analyzers and give multi-indexing and querying a try. Sometimes, a little bit of custom analysis can really do the trick, so keep that technique in mind for a potent way to improve findability and relevancy. Here are our `custom.shingles` and `custom.phonetic` analyzer definitions, but please don\u2019t blindly copy this. 
Make sure you\u2019re testing and understanding these adjustments as it relates to your data and types of queries:\n\n```\n\"analyzers\": \n {\n \"charFilters\": [],\n \"name\": \"standard.shingles\",\n \"tokenFilters\": [\n {\n \"type\": \"lowercase\"\n },\n {\n \"maxShingleSize\": 3,\n \"minShingleSize\": 2,\n \"type\": \"shingle\"\n }\n ],\n \"tokenizer\": {\n \"type\": \"standard\"\n }\n },\n {\n \"name\": \"phonetic\",\n \"tokenFilters\": [\n {\n \"originalTokens\": \"include\",\n \"type\": \"daitchMokotoffSoundex\"\n }\n ],\n \"tokenizer\": {\n \"type\": \"standard\"\n }\n }\n]\n```\n\nQuerying will naturally still query the inverted index set up as the default for a field, unless the path specifies a [\u201cmulti\u201d. \n\nA straightforward example to query specifically the `custom.phonetic` multi as we have defined it here looks like this:\n\n```\n$search: {\n \"text\": {\n \"query\": \"kat in the hat\",\n \"path\": { \"value\": \"title\", \"multi\": \"custom.phonetic\" }\n }\n}\n```\n\nNow, imagine combining this \u201cmulti\u201d analysis with variously boosted `compound.should` clauses to achieve fine-grained findability and relevancy controls that are as nuanced as your domain deserves.\n\nRelevancy tuning pro-tip: Use a few clauses, one per multi-analyzed field independently, to boost from most exact (best!) to less exact, down to as fuzzy matching as needed.\n\nAll of these various tricks \u2014 from language analysis, stemming words, and a fuzzy parameter to match words that are close but not quite right and broadcasting query terms across multiple fields \u2014 are useful tools. \n\n# Tracking Atlas Search queries\n\nHow do you go about incorporating Atlas Search Query Analytics into your application? It\u2019s a fairly straightforward process of adding a small \u201ctracking\u201d section to your $search stage.\n\nQueries containing the `tracking.searchTerms` structure are tracked (caveat to qualified cluster tier):\n\n```\n{\n $search: {\n \"tracking\": {\n \"searchTerms\": \"\"\n }\n }\n}\n```\n\nIn Java, the tracking SearchOptions are constructed like this:\n\n```\nSearchOptions opts = SearchOptions.searchOptions()\n .option(\"scoreDetails\", BsonBoolean.TRUE)\n .option(\"tracking\", new Document(\"searchTerms\", query_string));\n```\n\nIf you\u2019ve got a straightforward search box and that\u2019s the only input provided for a search query, that query string is the best fit for the `searchTerms` value. In some cases, the query to track is more complicated or deserves more context. In doing some homework for this article, we met with one of our early adopters of the Query Analytics feature who was using tracking codes for the `searchTerms` value, corresponding to another collection containing the full query context, such as a list of IP addresses being used for network intrusion detection.\n\nA simple addition of this tracking information opens the door to a greater understanding of the queries happening in your search system.\n\n# Conclusion\n\nThe specific adjustments that work best for your particular query challenges are where the art of this craft comes into play. There are many ways to improve a particular query\u2019s results. We\u2019ve shown several techniques to consider here. 
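For reference, here is a condensed sketch of how several of these techniques can sit together in a single $search stage: independently boosted field clauses, a fuzzy clause against a "multi"-analyzed path, and query tracking. The boost values and the `custom.phonetic` multi are carried over from the earlier examples purely for illustration; tune them for your own data.

```
{
  "$search": {
    "compound": {
      "should": [
        { "phrase": { "query": "purple rain", "path": ["title", "cast", "plot"],
                      "score": { "boost": { "value": 11.0 } } } },
        { "text": { "query": "purple rain", "path": "title",
                    "score": { "boost": { "value": 3.0 } } } },
        { "text": { "query": "purple rain",
                    "path": { "value": "title", "multi": "custom.phonetic" },
                    "fuzzy": { "maxEdits": 1 },
                    "score": { "boost": { "value": 1.0 } } } }
      ]
    },
    "tracking": { "searchTerms": "purple rain" }
  }
}
```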
The main takeaways:\n\n* Search is the gateway used to drive revenue, research, and engage users.\n* Know what your users are experiencing, and use that insight to iterate improvements.\n* Matching fuzzily and relevancy ranking results is both an art and science, and there are many options.\n\nAtlas Search Query Analytics is a good first step in the virtuous search query management process.\n\nWant to continue the conversation? Head over to the MongoDB Developer Community Forums!\n\n [1]: https://www.mongodb.com/developer/products/atlas/query-analytics-part-1/\n [2]: https://www.mongodb.com/docs/atlas/atlas-search/view-query-analytics/\n [3]: https://www.mongodb.com/docs/atlas/atlas-search/view-query-analytics/", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Techniques to tune and adjust search results based on Query Analytics", "contentType": "Article"}, "title": "Query Analytics Part 2: Tuning the System", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/php/laravel-mongodb-4-2-released-laravel-11-support", "action": "created", "body": "# Laravel MongoDB 4.2 Released, With Laravel 11 Support\n\nThe PHP team is happy to announce that version 4.2 of the Laravel MongoDB integration is now available!\n\n## Highlights\n\n**Laravel 11 support**\n\nThe MongoDB Laravel integration now supports Laravel 11, ensuring compatibility with the latest framework version and enabling developers to leverage its new features and enhancements. To apply transformation on model attributes, the new recommended way is to declare the Model::casts method.\n\n**Fixed transaction issue with firstOrCreate()**\n\nPreviously, using firstOrCreate() in a transaction would result in an error. This problem has been resolved by implementing the underlying Model::createOrFirst() method with the atomic operation findOneAndUpdate.\n\n**Support for whereAll and whereAny**\n\nThe library now supports the new methods whereAll and whereAny, introduced in Laravel 10.47.\n\n## Installation\n\nThis library may be installed or upgraded with:\n\n```\ncomposer require mongodb/laravel-mongodb:4.2.0\n```\n\n## Resources\n\nDocumentation and other resources to get you started with Laravel and MongoDB databases are shared below:\n\n- Laravel MongoDB documentation\n- Quick Start with MongoDB and Laravel\n- Release notes \n\nGive it a try today and let us know what you think! Please report any ideas, bugs, or feedback in the GitHub repository or the PHPORM Jira project, as we continue to improve and enhance the integration.", "format": "md", "metadata": {"tags": ["PHP"], "pageDescription": "", "contentType": "News & Announcements"}, "title": "Laravel MongoDB 4.2 Released, With Laravel 11 Support", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/java-spring-boot-vector-search", "action": "created", "body": "# Unlocking Semantic Search: Building a Java-Powered Movie Search Engine with Atlas Vector Search and Spring Boot\n\nIn the rapidly evolving world of technology, the quest to deliver more relevant, personalized, and intuitive search results has led to the rise in popularity of semantic search. 
\n\nMongoDB's Vector Search allows you to search your data related semantically, making it possible to search your data by meaning, not just keyword matching.\n\nIn this tutorial, we'll delve into how we can build a Spring Boot application that can perform a semantic search on a collection of movies by their plot descriptions.\n\n## What we'll need\n\nBefore you get started, there are a few things you'll need.\n\n- Java 11 or higher\n- Maven or Gradle, but this tutorial will reference Maven\n- Your own MongoDB Atlas account\n- An OpenAI account, to generate our embeddings\n\n## Set up your MongoDB cluster\n\nVisit the MongoDB Atlas dashboard and set up your cluster. In order to take advantage of the `$vectorSearch` operator in an aggregation pipeline, you need to run MongoDB Atlas 6.0.11 or higher.\n\nSelecting your MongoDB Atlas version is available at the bottom of the screen when configuring your cluster under \"Additional Settings.\"\n\n.\n\nFor this project, we're going to use the sample data MongoDB provides. When you first log into the dashboard, you will see an option to load sample data into your database. \n\n in your database to automatically embed your data.\n\n## Create a Vector Search Index\n\nIn order to use the `$vectorSearch` operator on our data, we need to set up an appropriate search index. Select the \"Search\" tab on your cluster and click the \"Create Search Index.\"\n\n for more information on these configuration settings.\n\n## Setting up a Spring Boot project\n\nTo set up our project, let's use the Spring Initializr. This will generate our **pom.xml** file which will contain our dependencies for our project.\n\nFor this project, you want to select the options in the screenshot below, and create a JAR:\n\n. Feel free to use a more up to date version in order to make use of some of the most up to date features, such as the `vectorSearch()` method. You will also notice that throughout this application we use the MongoDB Java Reactive Streams. This is because we are creating an asynchronous API. AI operations like generating embeddings can be compute-intensive and time-consuming. An asynchronous API allows these tasks to be processed in the background, freeing up the system to handle other requests or operations simultaneously. Now, let\u2019s get to coding!\n\nTo represent our document in Java, we will use Plain Old Java Objects (POJOs). The data we're going to handle are the documents from the sample data you just loaded into your cluster. For each document and subdocument, we need a POJO. MongoDB documents bear a lot of resemblance to POJOs already and are straightforward to set up using the MongoDB driver.\n\nIn the main document, we have three subdocuments: `Imdb`, `Tomatoes`, and `Viewer`. Thus, we will need four POJOs for our `Movie` document.\n\nWe first need to create a package called `com.example.mdbvectorsearch.model` and add our class `Movie.java`. 
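Before we fill in that class, here is roughly what the vector index definition from the earlier step can look like in the JSON editor. This is a sketch rather than the exact definition from the setup screens above: the index name used later in the code is `PlotVectorSearch`, the path matches the `plot_embedding` field in the sample data, and `numDimensions` must match your embedding model (1536 for OpenAI's `text-embedding-ada-002`).

```
{
  "fields": [
    {
      "type": "vector",
      "path": "plot_embedding",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```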
\n\nWe use the `@BsonProperty(\"_id\")` to assign our `_id` field in JSON to be mapped to our `Id` field in Java, so as to not violate Java naming conventions.\n\n```java\npublic class Movie {\n\n @BsonProperty(\"_id\")\n private ObjectId Id;\n private String title;\n private int year;\n private int runtime;\n private Date released;\n private String poster;\n private String plot;\n private String fullplot;\n private String lastupdated;\n private String type;\n private List directors;\n private Imdb imdb;\n private List cast;\n private List countries;\n private List genres;\n private Tomatoes tomatoes;\n private int num_mflix_comments;\n private String plot_embeddings;\n\n // Getters and setters for Movie fields\n\n}\n\n```\n\nAdd another class called `Imdb`.\n\n```java\npublic static class Imdb {\n\n private double rating;\n private int votes;\n private int id;\n\n // Getters and setters for Imdb fields\n\n}\n\n```\n\nYet another called `Tomatoes`.\n\n```java\npublic static class Tomatoes {\n\n private Viewer viewer;\n private Date lastUpdated;\n\n // Getters and setters for Tomatoes fields\n\n}\n```\n\nAnd finally, `Viewer`.\n\n```java\npublic static class Viewer {\n\n private double rating;\n private int numReviews; \n\n // Getters and setters for Viewer fields\n\n}\n```\n\n> Tip: For creating the getters and setters, many IDEs have shortcuts.\n\n### Connect to your database\n\nIn your main file, set up a package `com.example.mdbvectorsearch.config` and add a class, `MongodbConfig.java`. This is where we will connect to our database, and create and configure our client. If you're used to using Spring Data MongoDB, a lot of this is usually obfuscated. We are doing it this way to take advantage of some of the latest features of the MongoDB Java driver to support vectors.\n\nFrom the MongoDB Atlas interface, we'll get our connection string and add this to our `application.properties` file. We'll also specify the name of our database here.\n\n```\nmongodb.uri=mongodb+srv://:@.mongodb.net/\nmongodb.database=sample_mflix\n```\n\nNow, in your `MongodbConfig` class, import these values, and denote this as a configuration class with the annotation `@Configuration`.\n\n```java\n@Configuration\npublic class MongodbConfig {\n\n @Value(\"${mongodb.uri}\")\n private String MONGODB_URI;\n\n @Value(\"${mongodb.database}\")\n private String MONGODB_DATABASE;\n```\n\nNext, we need to create a Client and configure it to handle the translation to and from BSON for our POJOs. Here we configure a `CodecRegistry` to handle these conversions, and use a default codec as they are capable of handling the major Java data types. 
We then wrap these in a `MongoClientSettings` and create our `MongoClient`.\n\n```java\n @Bean\n public MongoClient mongoClient() {\n CodecRegistry pojoCodecRegistry = CodecRegistries.fromRegistries(\n MongoClientSettings.getDefaultCodecRegistry(),\n CodecRegistries.fromProviders(\n PojoCodecProvider.builder().automatic(true).build()\n )\n );\n\n MongoClientSettings settings = MongoClientSettings.builder()\n .applyConnectionString(new ConnectionString(MONGODB_URI))\n .codecRegistry(pojoCodecRegistry)\n .build();\n\n return MongoClients.create(settings);\n }\n```\n\nOur last step will then be to get our database, and we're done with this class.\n\n```java\n @Bean\n public MongoDatabase mongoDatabase(MongoClient mongoClient) {\n return mongoClient.getDatabase(MONGODB_DATABASE); \n }\n}\n```\n\n### Embed your data with the OpenAI API\n\nWe are going to send the prompt given from the user to the OpenAI API to be embedded.\nAn embedding is a series (vector) of floating point numbers. The distance between two vectors measures their relatedness. Small distances suggest high relatedness and large distances suggest low relatedness.\n\nThis will transform our natural language prompt, such as `\"Toys that come to life when no one is looking\"`, to a large array of floating point numbers that will look something like this `-0.012670076, -0.008900887, ..., 0.0060262447, -0.031987168]`.\n\nIn order to do this, we need to create a few files. All of our code to interact with OpenAI will be contained in our `OpenAIService.java` class and go to `com.example.mdbvectorsearch.service`. The `@Service` at the top of our class dictates to Spring Boot that this belongs to this service layer and contains business logic. \n\n```java\n@Service\npublic class OpenAIService {\n\nprivate static final String OPENAI_API_URL = \"https://api.openai.com\";\n\n@Value(\"${openai.api.key}\")\n\nprivate String OPENAI_API_KEY;\n\nprivate WebClient webClient;\n\n@PostConstruct\nvoid init() {\nthis.webClient = WebClient.builder()\n.clientConnector(new ReactorClientHttpConnector())\n.baseUrl(OPENAI_API_URL)\n.defaultHeader(\"Content-Type\", MediaType.APPLICATION_JSON_VALUE)\n.defaultHeader(\"Authorization\", \"Bearer \" + OPENAI_API_KEY)\n.build();\n}\n\npublic Mono> createEmbedding(String text) {\nMap body = Map.of(\n\"model\", \"text-embedding-ada-002\",\n\"input\", text\n);\n\nreturn webClient.post()\n.uri(\"/v1/embeddings\")\n.bodyValue(body)\n.retrieve()\n.bodyToMono(EmbeddingResponse.class)\n.map(EmbeddingResponse::getEmbedding);\n}\n}\n```\n\nWe use the Spring WebClient to make the calls to the OpenAI API. We then create the embeddings. To do this, we pass in our text and specify our embedding model (e.g., `text-embedding-ada-002`). You can read more about the OpenAI API parameter options [in their docs.\n\nTo pass in and receive the data from the Open AI API, we need to specify our models for the data being received. 
We're going to add two models to our `com.example.mdbvectorsearch.model` package, `EmbeddingData.java` and `EmbeddingResponse.java`.\n\n```java\npublic class EmbeddingData {\nprivate List embedding;\n\npublic List getEmbedding() {\nreturn embedding;\n}\n\npublic void setEmbedding(List embedding) {\nthis.embedding = embedding;\n}\n}\n```\n\n```java\npublic class EmbeddingResponse {\nprivate List data;\n\npublic List getEmbedding() {\nreturn data.get(0).getEmbedding();\n}\n\npublic List getData() {\nreturn data;\n}\n\npublic void setData(List data) {\nthis.data = data;\n}\n}\n```\n\n### Your vector search aggregation pipeline in Spring Boot\n\nWe have our database. We are able to embed our data. We are ready to send and receive our movie documents. How do we actually perform our semantic search?\n\nThe data access layer of our API implementation takes place in the repository. Create a package `com.example.mdbvectorsearch.repository` and add the interface `MovieRepository.java`.\n\n```java\npublic interface MovieRepository {\n Flux findMoviesByVector(List embedding);\n}\n```\n\nNow, we implement the logic for our `findMoviesByVector` method in the implementation of this interface. Add a class `MovieRepositoryImpl.java` to the package. This method implements the data logic for our application and takes the embedding of user's inputted text, embedded using the OpenAI API, then uses the `$vectorSearch` aggregation stage against our `embedded_movies` collection, using the index we set up earlier.\n\n```java\n@Repository\npublic class MovieRepositoryImpl implements MovieRepository {\n\n private final MongoDatabase mongoDatabase;\n\n public MovieRepositoryImpl(MongoDatabase mongoDatabase) {\n this.mongoDatabase = mongoDatabase;\n }\n\n private MongoCollection getMovieCollection() {\n return mongoDatabase.getCollection(\"embedded_movies\", Movie.class);\n }\n\n @Override\n public Flux findMoviesByVector(List embedding) {\n String indexName = \"PlotVectorSearch\";\n int numCandidates = 100;\n int limit = 5;\n\n List pipeline = asList(\n vectorSearch(\n fieldPath(\"plot_embedding\"),\n embedding,\n indexName,\n numCandidates,\n limit));\n\n return Flux.from(getMovieCollection().aggregate(pipeline, Movie.class));\n }\n}\n\n```\n\nFor the business logic of our application, we need to create a service class. Create a class called `MovieService.java` in our `service` package.\n\n```java\n@Service\npublic class MovieService {\n\n private final MovieRepository movieRepository;\n private final OpenAIService embedder;\n\n @Autowired\n public MovieService(MovieRepository movieRepository, OpenAIService embedder) {\n this.movieRepository = movieRepository;\n this.embedder = embedder;\n }\n\n public Mono> getMoviesSemanticSearch(String plotDescription) {\n return embedder.createEmbedding(plotDescription)\n .flatMapMany(movieRepository::findMoviesByVector)\n .collectList();\n }\n}\n```\n\nThe `getMoviesSemanticSearch` method will take in the user's natural language plot description, embed it using the OpenAI API, perform a vector search on our `embedded_movies` collection, and return the top five most similar results.\n\nThis service will take the user's inputted text, embed it using the OpenAI API, then use the `$vectorSearch` aggregation stage against our `embedded_movies` collection, using the index we set up earlier.\n\nThis returns a `Mono` wrapping our list of `Movie` objects. All that's left now is to actually pass in some data and call our function. \n\nWe\u2019ve got the logic in our application. 
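One configuration detail before wiring up the controller: the `@Value("${openai.api.key}")` annotation in `OpenAIService` expects the key to be supplied as a Spring property, for example alongside the MongoDB settings in `application.properties` (the value below is a placeholder; an environment variable works just as well):

```
openai.api.key=<your-openai-api-key>
```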
Now, let\u2019s make it an API! First, we need to set up our controller. This will allow us to take in the user input for our application. Let's set up an endpoint to take in the users plot description and return our semantic search results. Create a `com.example.mdbvectorsearch.service` package and add the class `MovieController.java`.\n\n```java\n@RestController\npublic class MovieController {\n\nprivate final MovieService movieService;\n\n@Autowired\npublic MovieController(MovieService movieService) {\nthis.movieService = movieService;\n}\n\n@GetMapping(\"/movies/semantic-search\")\npublic Mono> performSemanticSearch(@RequestParam(\"plotDescription\") String plotDescription) {\nreturn movieService.getMoviesSemanticSearch(plotDescription);\n}\n}\n```\nWe define an endpoint `/movies/semantic-search` that handles get requests, captures the `plotDescription` as a query parameter, and delegates the search operation to the `MovieService`.\n\nYou can use your favorite tool to test the API endpoints but I'm just going to send a cURL command. \n\n```console\n\ncurl -X GET \"http://localhost:8080/movies/semantic-search?plotDescription=A%20cop%20from%20china%20and%20cop%20from%20america%20save%20kidnapped%20girl\"\n\n```\n>Note: We use `%20` to indicate spaces in our URL.\n\nHere we call our API with the query, `\"A cop from China and a cop from America save a kidnapped girl\"`. There's no title in there but I think it's a fairly good description of a particular action/comedy movie starring Jackie Chan and Chris Tucker. Here's a slightly abbreviated version of my output. Let's check our results!\n\n```markdown\nMovie title: Rush Hour\nPlot: Two cops team up to get back a kidnapped daughter.\n\nMovie title: Police Story 3: Supercop\nPlot: A Hong Kong detective teams up with his female Red Chinese counterpart to stop a Chinese drug czar.\n \nMovie title: Fuk sing go jiu\nPlot: Two Hong-Kong cops are sent to Tokyo to catch an ex-cop who stole a large amount of money in diamonds. After one is captured by the Ninja-gang protecting the rogue cop, the other one gets ...\n \nMovie title: Motorway\nPlot: A rookie cop takes on a veteran escape driver in a death defying final showdown on the streets of Hong Kong.\n \nMovie title: The Corruptor\nPlot: With the aid from a NYC policeman, a top immigrant cop tries to stop drug-trafficking and corruption by immigrant Chinese Triads, but things complicate when the Triads try to bribe the policeman.\n```\n\nWe found *Rush Hour* to be our top match. Just what I had in mind! If its premise resonates with you, there are a few other films you might enjoy.\n\nYou can test this yourself by changing the `plotDescription` we have in the cURL command.\n\n## Conclusion\n\nThis tutorial walked through the comprehensive steps of creating a semantic search application using MongoDB Atlas, OpenAI, and Spring Boot. \n\nSemantic search offers a plethora of applications, ranging from sophisticated product queries on e-commerce sites to tailored movie recommendations. This guide is designed to equip you with the essentials, paving the way for your upcoming project. \n\nThinking about integrating vector search into your next project? 
Check out this article \u2014 How to Model Your Documents for Vector Search \u2014 to learn how to design your documents for vector search.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltda8a1bd484272d2c/656d98d6d28c5a166c3e1879/image2.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9570de3c5dcf3c0f/656d98d6ec7994571696ad1d/image6.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt531721a1672757f9/656d98d6d595490c07b6840b/image4.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt63feb14bcd48bc33/656d98d65af539247a5a12e5/image3.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt29c9e89933337056/656d98d6d28c5a4acb3e1875/image1.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfcb95ac6cfc4ef2b/656d98d68d1092ce5f56dd73/image5.png", "format": "md", "metadata": {"tags": ["Atlas", "Java"], "pageDescription": "", "contentType": "Tutorial"}, "title": "Unlocking Semantic Search: Building a Java-Powered Movie Search Engine with Atlas Vector Search and Spring Boot", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/getting-started-mongodb-atlas-azure-functions-nodejs", "action": "created", "body": "# Getting Started with MongoDB Atlas and Azure Functions using Node.js\n\n*This article was originally published on Microsoft's Tech Community.*\n\nSo you're building serverless applications with Microsoft Azure Functions, but you need to persist data to a database. What do you do about controlling the number of concurrent connections to your database from the function? What happens if the function currently connected to your database shuts down or a new instance comes online to scale with demand?\n\nThe concept of serverless in general, whether that be through a function or database, is great because it is designed for the modern application. Applications that scale on-demand reduce the maintenance overhead and applications that are pay as you go reduce unnecessary costs.\n\nIn this tutorial, we\u2019re going to see just how easy it is to interact with\u00a0MongoDB Atlas\u00a0using Azure functions. If you\u2019re not familiar with MongoDB, it offers a flexible document model that can be used to model your data for a variety of use cases and is easily integrated into most application development stacks. 
On top of the document model, MongoDB Atlas makes it just as easy to scale your database to meet demand as it does your Azure Function.\n\nThe language focus of this tutorial will be Node.js and as a result we will be using the MongoDB Node.js driver, but the same concepts can be carried between Azure Function runtimes.\n\n## Prerequisites\n\nYou will need to have a few of the prerequisites met prior to starting the tutorial:\n\n* A\u00a0MongoDB Atlas\u00a0database deployed and configured with appropriate network rules and user rules.\n* The\u00a0Azure CLI\u00a0installed and configured to use your Azure account.\n* The\u00a0Azure Functions Core Tools\u00a0installed and configured.\n* Node.js 14+ installed and configured to meet Azure Function requirements.\n\nFor this particular tutorial we'll be using a MongoDB Atlas\u00a0serverless instance\u00a0since our interactions with the database will be fairly lightweight and we want to maintain scaling flexibility at the database layer of our application, but any Atlas deployment type, including the free tier, will work just fine so we recommend you evaluate and choose the option best for your needs. It\u2019s worth noting that you can also configure scaling flexibility for our dedicated clusters with\u00a0auto-scaling\u00a0which allows you to select minimum and maximum scaling thresholds for your database.\n\nWe'll also be referencing the sample data sets that MongoDB offers, so if you'd like to follow along make sure you install them from the MongoDB Atlas dashboard.\n\nWhen defining your network rules for your MongoDB Atlas database, use the outbound IP addresses for the Azure data centers as defined in the\u00a0Microsoft Azure documentation.\n\n## Create an Azure Functions App with the CLI\n\nWhile we're going to be using the command line, most of what we see here can be done from the web portal as well.\n\nAssuming you have the Azure CLI installed and it is configured to use your Azure account, execute the following:\n\n```\naz group create --name --location \n```\n\nYou'll need to choose a name for your group as well as a supported Azure region. Your choice will not impact the rest of the tutorial as long as you're consistent throughout. It\u2019s a good idea to choose a region closest to you or your users so you get the best possible latency for your application.\n\nWith the group created, execute the following to create a storage account:\n\n```\naz storage account create --name --location --resource-group --sku Standard_LRS\n```\n\nThe above command should use the same region and group that you defined in the previous step. This command creates a new and unique storage account to use with your function. The storage account won't be used locally, but it will be used when we deploy our function to the cloud.\n\nWith the storage account created, we need to create a new Function application. Execute the following from the CLI:\n\n```\naz functionapp create --resource-group --consumption-plan-location --runtime node --functions-version 4 --name --storage-account \n```\n\nAssuming you were consistent and swapped out the placeholder items where necessary, you should have an Azure Function project ready to go in the cloud.\n\nThe commands used thus far can be found in the\u00a0Microsoft documentation. 
We just changed anything .NET related to Node.js instead, but as mentioned earlier MongoDB Atlas does support a variety of runtimes including .NET and this tutorial can be referenced for other languages.\n\nWith most of the cloud configuration out of the way, we can focus on the local project where we'll be writing all of our code. This will be done with the\u00a0Azure Functions Core Tools\u00a0application.\n\nExecute the following command from the CLI to create a new project:\n\n```\nfunc init MongoExample\n```\n\nWhen prompted, choose Node.js and JavaScript since that is what we'll be using for this example.\n\nNavigate into the project and create your first Azure Function with the following command:\n\n```\nfunc new --name GetMovies --template \"HTTP trigger\"\n```\n\nThe above command will create a Function titled \"GetMovies\" based off the \"HTTP trigger\" template. The goal of this function will be to retrieve several movies from our database. When the time comes, we'll add most of our code to the\u00a0*GetMovies/index.js*\u00a0file in the project.\n\nThere are a few more things that must be done before we begin writing code.\n\nOur local project and cloud account is configured, but we\u2019ve yet to link them together. We need to link them together,\u00a0so our function deploys to the correct place.\n\nWithin the project, execute the following from the CLI:\n\n```\nfunc azure functionapp fetch-app-settings \n```\n\nDon't forget to replace the placeholder value in the above command with your actual Azure Function name. The above command will download the configuration details from Azure and place them in your local project, particularly in the project's\u00a0*local.settings.json*\u00a0file.\n\nNext execute the following from the CLI:\n\n```\nfunc azure functionapp fetch-app-settings \n```\n\nThe above command will add the storage details to the project's\u00a0*local.settings.json*\u00a0file.\n\nFor more information on these two commands, check out the\u00a0Azure Functions documentation.\n\n## Install and Configure the MongoDB Driver for Node.js within the Azure Functions Project\n\nBecause we plan to use the MongoDB Node.js driver, we will need to add the driver to our project and configure it. Neither of these things will be complicated or time consuming to do.\n\nFrom the root of your local project, execute the following from the command line:\n\n```\nnpm install mongodb\n```\n\nThe above command will add MongoDB to our project and add it to our project's\u00a0*package.json*\u00a0file so that it can be added automatically when we deploy our project to the cloud.\n\nBy now you should have a \"GetMovies\" function if you're following along with this tutorial. Open the project's\u00a0*GetMovies/index.j*s file so we can configure it for MongoDB:\n\n```\nconst { MongoClient } = require(\"mongodb\");\nconst mongoClient = new MongoClient(process.env.MONGODB_ATLAS_URI);\nmodule.exports = async function (context, req) {\n\n // Function logic here ...\n\n}\n```\n\nIn the above snippet we are importing MongoDB and we are creating a new client to communicate with our cluster. We are making use of an environment variable to hold our connection information.\n\nTo find your URI, go to the MongoDB Atlas dashboard and click \"Connect\" for your cluster.\n\nBring this URI string into your project's\u00a0*local.settings.json*\u00a0file. 
Your file might look something like this:\n\n```\n{\n\n \"IsEncrypted\": false,\n\n \"Values\": {\n\n // Other fields here ...\n\n \"MONGODB_ATLAS_URI\": \"mongodb+srv://demo:@examples.mx9pd.mongodb.net/?retryWrites=true&w=majority\",\n\n \"MONGODB_ATLAS_CLUSTER\": \"examples\",\n\n \"MONGODB_ATLAS_DATABASE\": \"sample_mflix\",\n\n \"MONGODB_ATLAS_COLLECTION\": \"movies\"\n\n },\n\n \"ConnectionStrings\": {}\n\n}\n```\n\nThe values in the\u00a0*local.settings.json*\u00a0file will be accessible as environment variables in our local project. We'll be completing additional steps later in the tutorial to make them cloud compatible.\n\nThe first phase of our installation and configuration of MongoDB Atlas is complete!\n\n## Interact with Your Data using the Node.js Driver for MongoDB\n\nWe're going to continue in our projects\u00a0*GetMovies/index.js*\u00a0file, but this time we're going to focus on some basic MongoDB logic.\n\nIn the Azure Function code we should have the following as of now:\n\n```\nconst { MongoClient } = require(\"mongodb\");\nconst mongoClient = new MongoClient(process.env.MONGODB_ATLAS_URI);\nmodule.exports = async function (context, req) {\n\n // Function logic here ...\n\n}\n```\n\nWhen working with a serverless function you don't have control as to whether or not your function is available immediately. In other words you don't have control as to whether the function is ready to be consumed or if it has to be created. The point of serverless is that you're using it as needed.\n\nWe have to be cautious about how we use a serverless function with a database. All databases, not specific to MongoDB, can maintain a certain number of concurrent connections before calling it quits. In a traditional application you generally establish a single connection that lives on for as long as your application does. Not the case with an Azure Function. If you establish a new connection inside your function block, you run the risk of too many connections being established if your function is popular. Instead what we're doing is we are creating the MongoDB client outside of the function and we are using that same client within our function. This allows us to only create connections if connections don't exist.\n\nNow we can skip into the function logic:\n\n```\nmodule.exports = async function (context, req) {\n\n try {\n\n const database = await mongoClient.db(process.env.MONGODB_ATLAS_DATABASE);\n\n const collection = database.collection(process.env.MONGODB_ATLAS_COLLECTION);\n\n const results = await collection.find({}).limit(10).toArray();\n\n context.res = {\n\n \"headers\": {\n\n \"Content-Type\": \"application/json\"\n\n },\n\n \"body\": results\n\n }\n\n } catch (error) {\n\n context.res = {\n\n \"status\": 500,\n\n \"headers\": {\n\n \"Content-Type\": \"application/json\"\n\n },\n\n \"body\": {\n\n \"message\": error.toString()\n\n }\n\n }\n\n }\n\n}\n```\n\nWhen the function is executed, we make reference to the database and collection we plan to use. These are pulled from our\u00a0*local.settings.json*\u00a0file when working locally.\n\nNext we do a `find` operation against our collection with an empty match criteria. This will return all the documents in our collection so the next thing we do is limit it to ten (10) or less results.\n\nAny results that come back we use as a response. By default the response is plaintext, so by defining the header we can make sure the response is JSON. 
If at any point there was an exception, we catch it and return that instead.\n\nWant to see what we have in action?\n\nExecute the following command from the root of your project:\n\n```\nfunc start\n```\n\nWhen it completes, you'll likely be able to access your Azure Function at the following local endpoint:\u00a0http://localhost:7071/api/GetMovies\n\nRemember, we haven't deployed anything and we're just simulating everything locally.\n\nIf the local server starts successfully, but you cannot access your data when visiting the endpoint, double check that you have the correct network rules in MongoDB Atlas. Remember, you may have added the Azure Function network rules, but if you're testing locally, you may be forgetting your local IP in the list.\n\n## Deploy an Azure Function with MongoDB Support to the Cloud\n\nIf everything is performing as expected when you test your function locally, then you're ready to get it deployed to the Microsoft Azure cloud.\n\nWe need to ensure our local environment variables make it to the cloud. This can be done through the web dashboard in Azure or through the command line. We're going to do everything from the command line.\n\nFrom the CLI, execute the following commands, replacing the placeholder values with your own values:\n\n```\naz functionapp config appsettings set --name --resource-group --settings MONGODB_ATLAS_URI=\n\naz functionapp config appsettings set --name --resource-group --settings MONGODB_ATLAS_DATABASE=\n\naz functionapp config appsettings set --name --resource-group --settings MONGODB_ATLAS_COLLECTION=\n```\n\nThe above commands were taken almost exactly from the\u00a0Microsoft documentation.\n\nWith the environment variables in place, we can deploy the function using the following command from the CLI:\n\n```\nfunc azure functionapp publish \n```\n\nIt might take a few moments to deploy, but when it completes the CLI will provide you with a public URL for your functions.\n\nBefore you attempt to test them from cURL, Postman, or similar, make sure you obtain a \"host key\" from Azure to use in your HTTP requests.\n\n## Conclusion\nIn this tutorial we saw how to connect MongoDB Atlas with Azure Functions using the MongoDB Node.js driver to build scalable serverless applications. While we didn't see it in this tutorial, there are many things you can do with the Node.js driver for MongoDB such as complex queries with an aggregation pipeline as well as basic CRUD operations.\n\nTo see more of what you can accomplish with MongoDB and Node.js, check out the\u00a0MongoDB Developer Center.\n\nWith MongoDB Atlas on Microsoft Azure, developers receive access to the most comprehensive, secure, scalable, and cloud\u2013based developer data platform in the market. Now, with the availability of Atlas on the Azure Marketplace, it\u2019s never been easier for users to start building with Atlas while streamlining procurement and billing processes. 
Get started today through the\u00a0Atlas on Azure Marketplace\u00a0listing.", "format": "md", "metadata": {"tags": ["Atlas", "Azure", "Node.js"], "pageDescription": "In this tutorial, we\u2019re going to see just how easy it is to interact with MongoDB Atlas using Azure functions.", "contentType": "Tutorial"}, "title": "Getting Started with MongoDB Atlas and Azure Functions using Node.js", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/events/symfonylive-berlin-2024", "action": "created", "body": "# SymfonyLive Berlin 2024\n\nCome and meet our team at SymfonyLive Berlin!\n\n## Sessions\nDon't miss these talks by our team:\n\n|Date| Titre| Speaker|\n|---|---|---|\n|June 20th|From Pickles to Pie: Sweeten Your PHP Extension Installs|Andreas Braun|\n\n## Additional Resources\nDive deeper in your MongoDB exploration with the following resources:\n* Tutorial MongoDB + Symfony\n* Tutorial MongoDB + Doctrine", "format": "md", "metadata": {"tags": ["MongoDB", "PHP"], "pageDescription": "Join us at Symfony Live Berlin!", "contentType": "Event"}, "title": "SymfonyLive Berlin 2024", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/events/springio-2024", "action": "created", "body": "# Spring I/O 2024\n\nCome and meet our team at Spring I/O!\n\n## Sessions\nDon't miss these talks by our team:\n\n|Date| Titre| Speaker|\n|---|---|---|\n| May 30th | MongoDB Sprout: Where Data Meets Spring | Tim Kelly |\n\n## Additional Resources\nDive deeper in your MongoDB exploration with the following resources:\nCheck out how to add Vector Search to your Java Spring Boot application in this tutorial.\n\nIntegrating Spring Boot, Reactive, Spring Data, and MongoDB can be a challenge, especially if you are just starting out. Check out this code example to get started right away!\n\nNeed to deploy an application on K8s that connects to MongoDB Atlas? This tutorial will take you through the steps you need to get started in no time.", "format": "md", "metadata": {"tags": ["MongoDB", "Java"], "pageDescription": "Join us at Spring I/O!", "contentType": "Event"}, "title": "Spring I/O 2024", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/events/cppcon-2024", "action": "created", "body": "# CppCon 2024\n\nCome and meet our team at CppCon!\n\n## Sessions\nDon't miss these talks by our team:\n\n|Date| Titre| Speaker|\n|---|---|---|\n|September 21st and 22nd|Workshop: C++ Testing like a Ninja for Novice Testers|Jorge Ortiz & Rishabh Bisht|\n\n## Additional Resources\n\nWe will publish a repository with all of the code for the workshop, so remember to visit this page again and check if it is available.\n\nDive deeper in your MongoDB exploration with the following resources:\n- MongoDB Resources for Cpp developers", "format": "md", "metadata": {"tags": ["MongoDB", "C++"], "pageDescription": "Join us at CppCon!", "contentType": "Event"}, "title": "CppCon 2024", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/events/developer-day-melbourne", "action": "created", "body": "# Developer Day Melbourne\n\nWelcome to MongoDB Developer Day Melbourne! 
Below you can find all the resources you will need for the day.\n\n## Data Modeling and Design Patterns\n* Slides\n* Library application\n* System requirements\n\n## MongoDB Atlas Setup: Hands-on exercises setup and troubleshooting\n* Intro lab: hands-on exercises\n* Data import tool\n\n## Aggregation Pipelines Lab\n* Slides\n* Aggregations lab: hands-on exercises\n\n## Search Lab\n* Slides\n* Search lab: hands-on exercises\n\n## Additional resources\n* Library management system code\n* MongoDB data modeling book\n* Data Modeling course on MongoDB University\n* MongoDB for SQL Pros on MongoDB University\n* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search\n\n## How was it?\nLet us know what you liked about this day, and how we can improve (and get a cool \ud83e\udde6 gift \ud83e\udde6) by filling out this survey.\n\n## Join the Community\nStay connected, and join our community:\n* Join the Melbourne MongoDB User Group!\n* Sign up for the MongoDB Community Forums.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Join us for a full day of hands-on sessions about MongoDB. An event for developer by developers.", "contentType": "Event"}, "title": "Developer Day Melbourne", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/events/developer-day-sydney", "action": "created", "body": "# Developer Day Sydney\n\nWelcome to MongoDB Developer Day Sydney! Below you can find all the resources you will need for the day.\n\n## Data Modeling and Design Patterns\n* Slides\n* Library application\n* System requirements\n\n## MongoDB Atlas Setup: Hands-on exercises setup and troubleshooting\n* Intro lab: hands-on exercises\n* Data import tool\n\n## Aggregation Pipelines Lab\n* Slides\n* Aggregations lab: hands-on exercises\n\n## Search Lab\n* Slides\n* Search lab: hands-on exercises\n\n## Additional resources\n* Library management system code\n* MongoDB data modeling book\n* Data Modeling course on MongoDB University\n* MongoDB for SQL Pros on MongoDB University\n* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search\n\n## How was it?\nLet us know what you liked about this day, and how we can improve (and get a cool \ud83e\udde6 gift \ud83e\udde6) by filling out this survey.\n\n## Join the Community\nStay connected, and join our community:\n* Join the Sydney MongoDB User Group!\n* Sign up for the MongoDB Community Forums.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Join us for a full day of hands-on sessions about MongoDB. An event for developer by developers.", "contentType": "Event"}, "title": "Developer Day Sydney", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/events/developer-day-auckland", "action": "created", "body": "# Developer Day Auckland\n\nWelcome to MongoDB Developer Day Auckland! 
Below you can find all the resources you will need for the day.\n\n## Data Modeling and Design Patterns\n* Slides\n* Library application\n* System requirements\n\n## MongoDB Atlas Setup: Hands-on exercises setup and troubleshooting\n* Intro lab: hands-on exercises\n* Data import tool\n\n## Aggregation Pipelines Lab\n* Slides\n* Aggregations lab: hands-on exercises\n\n## Search Lab\n* Slides\n* Search lab: hands-on exercises\n\n## Additional resources\n* Library management system code\n* MongoDB data modeling book\n* Data Modeling course on MongoDB University\n* MongoDB for SQL Pros on MongoDB University\n* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search\n\n## How was it?\nLet us know what you liked about this day, and how we can improve by filling out this survey.\n\n## Join the Community\nStay connected, and join our community:\n* Join the Auckland MongoDB User Group!\n* Sign up for the MongoDB Community Forums.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Join us for a full day of hands-on sessions about MongoDB. An event for developer by developers.", "contentType": "Event"}, "title": "Developer Day Auckland", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/deprecating-mongodb-atlas-graphql-hosting-services", "action": "created", "body": "# Deprecating MongoDB Atlas GraphQL and Hosting Services\n\nAs part of MongoDB\u2019s ongoing commitment to innovation and providing the best possible developer experience, we have some important updates about our Atlas GraphQL and Atlas Hosting services. Our goal is always to offer developers the best services and tools on Atlas, whether built by MongoDB or delivered by our trusted partners so that builders can focus on providing the best application possible. In line with this vision, we strategically decided to deprecate the Atlas GraphQL API and Atlas Hosting services. \n\nThis blog post outlines what this means for users, the timeline for this transition, and how we plan to support you through this change.\n\n**What\u2019s Changing?**\n\nNew users cannot create apps with GraphQL / hosting enabled. Existing customers will have time to move off of the service and find an alternative solution by **March 12, 2025**.\n\n**Why Are We Making This Change?**\n\nThe decision to streamline our services reflects our commitment to natively offering best-in-class services while collaborating with leading partners to provide the most comprehensive developer data platform.\n\n**How We\u2019re Supporting You**\n\nWe recognize that challenges can come with change, so our team will continue to provide comprehensive assistance and guidance to ensure a smooth migration process. As part of our commitment to providing developers the best services and tools, we have identified several MongoDB partners who offer best-in-class solutions with similar functionality to our GraphQL and hosting services.\n\nWe\u2019ve collaborated with some of these partners to create official step by step migration guides in order to provide a seamless transition to our customers. 
We encourage you to explore these options here.\n\n- **Migration Assistance**: Learn more about the MongoDB partner integrations that make it easy to connect to your Atlas database:\n - **GraphQL Partners**: Apollo, Hasura, WunderGraph, Grafbase, AWS AppSync\n - **Hosting Partners**: Vercel, Netlify, Koyeb, Northflank, DigitalOcean\n- **Support and Guidance**: Our support team is available to assist you with any questions or concerns. We encourage you to reach out via the MongoDB Support Portal or contact your Account Executive for personalized assistance.\n\n**Looking Forward**\n\nWe\u2019re here to support you every step of the way as you explore and migrate to alternative solutions. Our team is working diligently to ensure this transition is as seamless as possible for all affected users. We\u2019re also excited about what the future holds for the MongoDB Atlas, the industry\u2019s leading developer data platform, and the new features we\u2019re developing to enhance your experience.", "format": "md", "metadata": {"tags": ["Atlas", "GraphQL"], "pageDescription": "Guidance and resources on how to migrate from MongoDB Atlas GraphQL and Hosting services.", "contentType": "News & Announcements"}, "title": "Deprecating MongoDB Atlas GraphQL and Hosting Services", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/go/http-server-persist-data", "action": "created", "body": "# HTTP Servers Persisting Data in MongoDB\n\n# HTTP Servers Persisting Data in MongoDB\n\nIn the previous article and the corresponding video, we wrote a basic HTTP server from scratch. We used Go 1.22's new capabilities to deal with different HTTP verbs and we deserialized data that was sent from an HTTP client.\n\nExchanging data is worthless if you forget it right away. We are going to persist that data using MongoDB. You will need a MongoDB Atlas cluster. The free one is more than enough. If you don't have an account, you can find guidance on how this is done on this workshop or YouTube. You don't have to do the whole lab, just the parts \"Create an Account\" and \"Create a Cluster\" in the \"MongoDB Atlas\" section. Call your cluster \"NoteKeeper\" in a **FREE** cluster. Create a username and password which you will use in a moment. Verify that your IP address is included. Verify that your server's IP address is allowed access. If you use the codespace, include the address 0.0.0.0 to indicate that access is allowed to any IP.\n\n## Connect to MongoDB Atlas from Go\n\n1. So far, we have used packages of the standard library, but we would like to use the MongoDB driver to connect to our Atlas cluster. This adds the MongoDB Go driver to the dependencies of our project, including entries in `go.mod` for it and all of its dependencies. It also keeps hashes of the dependencies in `go.sum` to ensure integrity and downloads all the code to be able to include it in the program.\n ```shell\n go get go.mongodb.org/mongo-driver/mongo\n ```\n2. MongoDB uses BSON to serialize and store the data. It is more efficient and supports more types than JSON (we are looking at you, dates, but also BinData). And we can use the same technique that we used for deserializing JSON for converting to BSON, but in this case, the conversion will be done by the driver. We are going to declare a global variable to hold the connection to MongoDB Atlas and use it from the handlers. That is **not** a best practice. 
Instead, we could define a type that holds the client and any other dependencies and provides methods \u2013which will have access to the dependencies\u2013 that can be used as HTTP handlers.\n ```go\n var mdbClient *mongo.Client\n ```\n3. If your editor has any issues importing the MongoDB driver packages, you need to have these two in your import block.\n ```go\n \"go.mongodb.org/mongo-driver/mongo\"\n \"go.mongodb.org/mongo-driver/mongo/options\"\n ```\n4. In the `main` function, we initialize the connection to Atlas. Notice that this function returns two things. For the first one, we are using a variable that has already been defined at the global scope. The second one, `err`, isn't defined in the current scope, so we could potentially use the short variable declaration here. However, if we do, it will ignore the global variable that we created for the client (`mdbClient`) and define a local one only for this scope. So let's use a regular assignment and we need `err` to be declared to be able to assign a value to it.\n ```go\n var err error\n mdbClient, err = mongo.Connect(ARG1, ARG2)\n ```\n5. The first argument of that `Connect()` call is a context that allows sharing data and cancellation requests between the main function and the client. Let's create one that is meant to do background work. You could add a cancellation timer to this context, among other things.\n ```go\n ctxBg := context.Background()\n ```\n6. The second argument is a struct that contains the options used to create the connection. The bare minimum is to have a URI to our Atlas MongoDB cluster. We get that URI from the cluster page by clicking on \"Get Connection String.\" We create a constant with that connection string. **Don't** use this one. It won't work. Get it from **your** cluster. Having the connection URI with user the and password as a constant isn't a best practice either. You should pass this data using an environment variable instead.\n ```go\n const connStr string = \"mongodb+srv://yourusername:yourpassword@notekeeper.xxxxxx.mongodb.net/?retryWrites=true&w=majority&appName=NoteKeeper\"\n ```\n7. We can now use that constant to create the second argument in place.\n ```go\n var err error\n mdbClient, err = mongo.Connect(ctxBg, options.Client().ApplyURI(connStr))\n ```\n8. If we cannot connect to Atlas, there is no point in continuing, so we log the error and exit. `log.Fatal()` takes care of both things.\n ```go\n if err != nil {\n log.Fatal(err)\n }\n ```\n9. If the connection has been successful, the first thing that we want to do is to ensure that it will be closed if we leave this function. We use `defer` for that. Everything that we defer will be executed when it exits that function scope, even if things go badly and a panic takes place. We enclose the work in an anonymous function and we call it because defer is a statement. This way, we can use the return value of the `Disconnect()` method and act accordingly.\n ```go\n defer func() {\n if err = mdbClient.Disconnect(ctxBg); err != nil {\n panic(err)\n }\n }()\n ```\n\n## Persist data in MongoDB Atlas from Go\n has all the code for this series and the next ones so you can follow along.\n\nStay curious. Hack your code. 
See you next time!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt10326f71fc7c76c8/6630dc2086ffea48da8e43cb/persistence.jpg", "format": "md", "metadata": {"tags": ["Go"], "pageDescription": "This tutorial explains how to persist data obtained from an HTTP endpoint into Atlas MongoDB.", "contentType": "Tutorial"}, "title": "HTTP Servers Persisting Data in MongoDB", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/events/developer-day-singapore", "action": "created", "body": "# Developer Day Singapore\n\nWelcome to MongoDB Developer Day! Below you can find all the resources you will need for the day.\n\n## Data Modeling and Design Patterns\n\n* Slides\n* Library application\n* System requirements\n\n### Hands-on exercises setup and troubleshooting\n* Self-paced content -- Atlas cluster creation and loading sample data.\n* Data import tool\n* If CodeSpaces doesn't work, try downloading the code.\n* Import tool not working? Try downloading the dataset, and ask an instructor for help on importing the data.\n\n### Additional resources\n* Library management system code\n* MongoDB data modeling book\n* Data Modeling course on MongoDB University\n* MongoDB for SQL Pros on MongoDB University\n\n## Aggregation Pipelines Lab\n* Aggregations hands-on exercises\n* Slides\n\n## Search Lab\n* Slides\n* Search lab hands-on content\n\n### Dive deeper\nDo you want to learn more about Atlas Search? Check these out.\n* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Join us for a full day of hands-on sessions about MongoDB. An event for developer by developers.", "contentType": "Event"}, "title": "Developer Day Singapore", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/events/mongodb-day-kementerian-kesehatan", "action": "created", "body": "# MongoDB Day with Kementerian Kesehatan\n\nWelcome to MongoDB Developer Day! Below you can find all the resources you will need for the day.\n\n## Data Modeling and Design Patterns\n\n* Slides\n* Library application\n* System requirements\n\n### Hands-on exercises setup and troubleshooting\n* Self-paced content -- Atlas cluster creation and loading sample data.\n* Data import tool\n* If CodeSpaces doesn't work, try downloading the code.\n* Import tool not working? Try downloading the dataset, and ask an instructor for help on importing the data.\n\n### Additional resources\n* Library management system code\n* MongoDB data modeling book\n* Data Modeling course on MongoDB University\n* MongoDB for SQL Pros on MongoDB University\n\n## Aggregation Pipelines Lab\n* Aggregations hands-on exercises\n* Slides\n\n## Search Lab\n* Slides\n* Search lab hands-on content\n\n### Dive deeper\nDo you want to learn more about Atlas Search? Check these out.\n* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Join us for a full day of hands-on sessions about MongoDB. 
An event for developer by developers.", "contentType": "Event"}, "title": "MongoDB Day with Kementerian Kesehatan", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/events/developer-day-jakarta", "action": "created", "body": "# Developer Day Jakarta\n\nWelcome to MongoDB Developer Day! Below you can find all the resources you will need for the day.\n\n## Data Modeling and Design Patterns\n\n* Slides\n* Library application\n* System requirements\n\n### Hands-on exercises setup and troubleshooting\n* Self-paced content -- Atlas cluster creation and loading sample data.\n* Data import tool\n* If CodeSpaces doesn't work, try downloading the code.\n* Import tool not working? Try downloading the dataset, and ask an instructor for help on importing the data.\n\n### Additional resources\n* Library management system code\n* MongoDB data modeling book\n* Data Modeling course on MongoDB University\n* MongoDB for SQL Pros on MongoDB University\n\n## Aggregation Pipelines Lab\n* Aggregations hands-on exercises\n* Slides\n\n## Search Lab\n* Slides\n* Search lab hands-on content\n\n### Dive deeper\nDo you want to learn more about Atlas Search? Check these out.\n* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Join us for a full day of hands-on sessions about MongoDB. An event for developer by developers.", "contentType": "Event"}, "title": "Developer Day Jakarta", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/events/developer-day-kl", "action": "created", "body": "# Developer Day Kuala Lumpur\n\nWelcome to MongoDB Developer Day! Below you can find all the resources you will need for the day.\n\n## Data Modeling and Design Patterns\n\n* Slides\n* Library application\n* System requirements\n\n### Hands-on exercises setup and troubleshooting\n* Self-paced content -- Atlas cluster creation and loading sample data.\n* Data import tool\n* If CodeSpaces doesn't work, try downloading the code.\n* Import tool not working? Try downloading the dataset, and ask an instructor for help on importing the data.\n\n### Additional resources\n* Library management system code\n* MongoDB data modeling book\n* Data Modeling course on MongoDB University\n* MongoDB for SQL Pros on MongoDB University\n\n## Aggregation Pipelines Lab\n* Aggregations hands-on exercises\n* Slides\n\n## Search Lab\n* Slides\n* Search lab hands-on content\n\n### Dive deeper\nDo you want to learn more about Atlas Search? Check these out.\n* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search\n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Join us for a full day of hands-on sessions about MongoDB. 
An event for developer by developers.", "contentType": "Event"}, "title": "Developer Day Kuala Lumpur", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/migration-411-50", "action": "created", "body": "# Java Driver: Migrating From 4.11 to 5.0\n\n## Introduction\n\nThe MongoDB Java driver 5.0.0 is now available!\n\nWhile this version doesn't include many new features, it's removing a lot of deprecated methods and is preparing for the\nfuture.\n\n## How to upgrade\n\n- Ensure your server version is compatible with Java Driver 5.0.\n- Compile against the 4.11 version of the driver with deprecation warnings enabled.\n- Remove deprecated classes and methods.\n\n### Maven\n\n```xml\n\n org.mongodb\n mongodb-driver-sync\n 5.0.0\n\n```\n\n### Gradle\n\n```\nimplementation group: 'org.mongodb', name: 'mongodb-driver-sync', version: '5.0.0'\n```\n\n## New features\n\nYou can\nread the full list of new features\nbut here is a summary.\n\n### getElapsedTime()\n\nThe behavior of the method `getElapsedTime()` was modified in the following classes:\n\n```text\ncom.mongodb.event.ConnectionReadyEvent\ncom.mongodb.event.ConnectionCheckedOutFailedEvent\ncom.mongodb.event.ConnectionCheckedOutEvent\n```\n\nIf you are using one of these methods, make sure to recompile and\nread the details.\n\n### authorizedCollection option\n\n5.0.0 adds support for the `authorizedCollection` option of the `listCollections` command.\n\n### Scala\n\nThe `org.mongodb.scala.Observable.completeWithUnit()` method is now marked deprecated.\n\n## Breaking changes\n\nOne of the best ways to identify if your code will require any changes following the upgrade to Java Driver 5.0 is to compile against 4.11.0 with deprecation warnings enabled and remove the use of any deprecated methods and classes.\n\nYou can read the full list of breaking changes but here is a summary.\n\n### StreamFactoryFactory and NettyStreamFactoryFactory\n\nThe following methods and classes have been removed in 5.0.0: \n\n- `streamFactoryFactory()` method from `MongoClientSettings.Builder`\n- `getStreamFactoryFactory()` method from `MongoClientSettings`\n- `NettyStreamFactoryFactory` class\n- `NettyStreamFactory` class\n- `AsynchronousSocketChannelStreamFactory` class\n- `AsynchronousSocketChannelStreamFactoryFactory` class\n- `BufferProvider` class\n- `SocketStreamFactory` class\n- `Stream` class\n- `StreamFactory` class\n- `StreamFactoryFactory` class\n- `TlsChannelStreamFactoryFactory` class\n\nIf you configure Netty using the `streamFactoryFactory()`, your code is probably like this: \n\n```java\nimport com.mongodb.connection.netty.NettyStreamFactoryFactory;\n// ...\nMongoClientSettings settings = MongoClientSettings.builder()\n .streamFactoryFactory(NettyStreamFactoryFactory.builder().build())\n .build();\n```\n\nNow, you should use the `TransportSettings.nettyBuilder()`:\n\n```java\nimport com.mongodb.connection.TransportSettings;\n// ...\nMongoClientSettings settings = MongoClientSettings.builder()\n .transportSettings(TransportSettings.nettyBuilder().build())\n .build();\n```\n\n### ConnectionId\n\nIn 4.11, the class `ConnectionId` was using integers.\n\n```java\n@Immutable\npublic final class ConnectionId {\n private static final AtomicInteger INCREMENTING_ID = new AtomicInteger();\n\n private final ServerId serverId;\n private final int localValue;\n private final Integer serverValue;\n private final String stringValue;\n // ...\n}\n```\n\n```java\n@Immutable\npublic final class 
ConnectionId {\n private static final AtomicLong INCREMENTING_ID = new AtomicLong();\n private final ServerId serverId;\n private final long localValue;\n @Nullable\n private final Long serverValue;\n private final String stringValue;\n// ...\n}\n```\n\nWhile this should have a very minor impact on your code, it's breaking binary and source compatibility. Make sure to\nrebuild your binary and you should be good to go.\n\n### Package update\n\nThree record annotations moved from:\n\n```text\norg.bson.codecs.record.annotations.BsonId\norg.bson.codecs.record.annotations.BsonProperty\norg.bson.codecs.record.annotations.BsonRepresentation\n```\n\nTo:\n\n```text\norg.bson.codecs.pojo.annotations.BsonId\norg.bson.codecs.pojo.annotations.BsonProperty\norg.bson.codecs.pojo.annotations.BsonRepresentation\n```\n\nSo if you are using these annotations, please make sure to update the imports and rebuild.\n\n### SocketSettings is now using long\n\nThe first parameters of the two following builder methods in `SocketSettings` are now using a long instead of an\ninteger.\n\n```java\npublic Builder connectTimeout(final long connectTimeout, final TimeUnit timeUnit) {/*...*/}\npublic Builder readTimeout(final long readTimeout, final TimeUnit timeUnit){/*...*/}\n```\n\nThis breaks binary compatibility but shouldn't require a code change in your code.\n\n### Filters.eqFull()\n\n`Filters.eqFull()` was only released in `Beta` for vector search. It's now deprecated. Use `Filters.eq()` instead when\ninstantiating a `VectorSearchOptions`.\n\n```java\nVectorSearchOptions opts = vectorSearchOptions().filter(eq(\"x\", 8));\n```\n\n### ClusterConnectionMode\n\nThe way the driver is computing the `ClusterConnectionMode` is now more consistent by using a specified replica set\nname, regardless of how it's configured.\n\nIn the following example, both the 4.11 and 5.0.0 drivers were returning the same\nthing: `ClusterConnectionMode.MULTIPLE`.\n\n```java\nClusterSettings.builder()\n .applyConnectionString(new ConnectionString(\"mongodb://127.0.0.1:27017/?replicaSet=replset\"))\n .build()\n .getMode();\n```\n\nBut in this example, the 4.11 driver was returning `ClusterConnectionMode.SINGLE` instead\nof `ClusterConnectionMode.MULTIPLE`.\n\n```java\nClusterSettings.builder()\n .hosts(Collections.singletonList(new ServerAddress(\"127.0.0.1\", 27017)))\n .requiredReplicaSetName(\"replset\")\n .build()\n .getMode();\n```\n\n### BsonDecimal128\n\nThe behaviour of `BsonDecimal128` is now more consistent with the behaviour of `Decimal128`.\n\n```java\nBsonDecimal128.isNumber(); // returns true\nBsonDecimal128.asNumber(); // returns the BsonNumber\n```\n\n## Conclusion\n\nWith the release of MongoDB Java Driver 5.0.0, it's evident that the focus has been on refining existing functionalities, removing deprecated methods, and ensuring compatibility for future enhancements. While the changes may necessitate some adjustments in your codebase, they pave the way for a more robust and efficient development experience.\n\nReady to upgrade? Dive into the latest version of the MongoDB Java drivers and start leveraging its enhanced capabilities today!\n\nTo finish with, don't forget to enable virtual threads in your Spring Boot 3.2.0+ projects! You just need to add this in your `application.properties` file:\n\n```properties\nspring.threads.virtual.enabled=true\n```\n\nGot questions or itching to share your success? 
Head over to the MongoDB Community Forum \u2013 we're all ears and ready to help!\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB"], "pageDescription": "Learn how to migrate smoothly your MongoDB Java project from 4.11 to 5.0.", "contentType": "Article"}, "title": "Java Driver: Migrating From 4.11 to 5.0", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/lambda-nodejs", "action": "created", "body": "# Using the Node.js MongoDB Driver with AWS Lambda\n\nJavaScript has come a long way since its modest debut in the 1990s. It has been the most popular language, according to the Stack Overflow Developer Survey, for 10 years in a row now. So it's no surprise that it has emerged as the most popular language for writing serverless functions.\n\nWriting a serverless function using JavaScript is straightforward and similar to writing a route handler in Express.js. The main difference is how the server will handle the code. As a developer, you only need to focus on the handler itself, and the cloud provider will maintain all the infrastructure required to run this function. This is why serverless is getting more and more traction. There is almost no overhead that comes with server management; you simply write your code and deploy it to the cloud provider of your choice.\n\nThis article will show you how to write an AWS Lambda serverless function that connects to\u00a0 MongoDB Atlas to query some data and how to avoid common pitfalls that would cause poor performance.\n\n## Prerequisites\n\nFor this article, you will need basic JavaScript knowledge. You will also need:\n\n- A MongoDB Atlas database loaded with sample data (a free tier is good).\nAlready have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\n- An AWS account.\n\n## Creating your first Lambda function\n\nTo get started, let's create a basic lambda function. This function will be used later on to connect to our MongoDB instance.\n\nIn AWS, go to the Lambda service. From there, you can click on the \"Create Function\" button. Fill in the form with a name for your function, and open the advanced settings.\n\nBecause you'll want to access this function from a browser, you will need to change these settings:\n\n- Check the \"Enable function URL\" option.\n\n- Under \"Auth Type,\" pick \"NONE.\"\n\n- Check the \"Configure cross-origin resource sharing (CORS)\" box.\n\nNow click \"Create Function\" and you're ready to go. You will then be presented with a screen similar to the following.\n\nYou can see a window with some code. This function will return a 200 (OK) status code, and the body of the request will be \"Hello from Lambda!\".\n\nYou can test this function by going to the \"Configuration\" above the code editor. Then choose \"Function URL\" from the left navigation menu. 
You will then see a link labeled \"Function URL.\" Clicking this link will open a new tab with the expected message.\n\nIf you change the code to return a different body, click \"Deploy\" at the top, and refresh that second tab, you will see your new message.\n\nYou've just created your first HTTPS endpoint that will serve the response generated from your function.\n\n## Common pitfalls with the Node.js driver for MongoDB\n\nWhile it can be trivial to write simple functions, there are some considerations that you'll want to keep in mind when dealing with AWS Lambda and MongoDB.\n\n### Storing environment variables\n\nYou can write your functions directly in the code editor provided by AWS Lambda, but chances are you will want to store your code in a repository to share with your team. When you push your code, you will want to be careful not to upload some of your secret keys. With your database, for example, you wouldn't want to push your connection string accidentally. You could use an environment variable for this.\n\nFrom the AWS Lambda screen, go into the \"Configuration\" tab at the top, and pick \"Environment Variables\" from the left navigation bar. Click \"Edit,\" and you will be presented with the option to add a new environment variable. Fill in the form with the following values:\n\n- Key: MONGODB_CONNECTION_STRING\n\n- Value: This is a connection string\n\nNow go back to the code editor, and use the `process.env` to return the newly created environment variable as the body of your request.\n\n```javascript\nexport const handler = async(event) => {\n\u00a0\u00a0\u00a0\u00a0const response = {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0statusCode: 200,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0body: process.env.MONGODB_CONNECTION_STRING,\n\u00a0\u00a0\u00a0\u00a0};\n\u00a0\u00a0\u00a0\u00a0return response;\n};\n```\n\nIf you refresh the tab you opened earlier, you will see the value of that environment variable. In the example below, you will change the value of that environment variable to connect to your MongoDB Atlas database.\n\n### Connection pool\n\nWhen you initialize a `MongoClient` with the Node.js driver, it will create a pool of connections that can be used by your application. The MongoClient ensures that those connections are closed after a while so you don't reach your limit.\n\nA common mistake when using MongoDB Atlas with AWS Lambda is creating a new connection pool every time your function gets a request. A poorly written function can lead to new connections being created every time, as displayed in the following diagram from the Atlas monitoring screen.\n\nThat sudden peak in connections comes from hitting a Lambda function every second for approximately two minutes.\n\nThe secret to fixing this is to move the creation of the MongoDB client outside the handler. This will be shown in the example below. Once the code has been fixed, you can see a significant improvement in the number of simultaneous connections.\n\nNow that you know the pitfalls to avoid, it's time to create a function that connects to MongoDB Atlas.\n\n## Using the MongoDB Node.js driver on AWS Lambda\n\nFor this example, you can use the same function you created earlier. Go to the \"Environment Variables\" settings, and put the connection string for your MongoDB database as the value for the \"MONGODB_CONNECTION_STRING\" environment variable. 
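For contrast with the corrected version shown further down, here is a sketch of the connection pool anti-pattern described in the previous section. The client is created inside the handler, so every invocation can open a brand-new connection pool, which is what causes the connection spikes in the monitoring graph. This is for illustration only.

```javascript
import { MongoClient } from "mongodb";

export const handler = async (event) => {
  // Anti-pattern: a new MongoClient (and a new connection pool) on every invocation.
  const client = new MongoClient(process.env.MONGODB_CONNECTION_STRING);
  const body = await client
    .db("sample_mflix")
    .collection("movies")
    .find()
    .limit(10)
    .toArray();
  return { statusCode: 200, body };
};
```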
You can find your connection string in the Atlas UI.\n\nBecause you'll need additional packages to run this function, you won't be able to use the code editor anymore.\n\nCreate a new folder on your machine, initialize a new Node.js project using `npm`, and install the `mongodb` package.\n\n```bash\nnpm init -y\nnpm install mongodb\n```\n\nCreate a new `index.mjs` file in this directory, and paste in the following code.\n\n```javascript\nimport { MongoClient } from \"mongodb\";\nconst client = new MongoClient(process.env.MONGODB_CONNECTION_STRING);\nexport const handler = async(event) => {\n\u00a0\u00a0\u00a0\u00a0const db = await client.db(\"sample_mflix\");\n\u00a0\u00a0\u00a0\u00a0const collection = await db.collection(\"movies\");\n\u00a0\u00a0\u00a0\u00a0const body = await collection.find().limit(10).toArray();\n\u00a0\u00a0\u00a0\u00a0const response = {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0statusCode: 200,\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0body\n\u00a0\u00a0\u00a0\u00a0};\n\u00a0\u00a0\u00a0\u00a0return response;\n};\n```\n\nThis code will start by creating a new MongoClient. Note how the client is declared *outside* the handler function. This is how you'll avoid problems with your connection pool. Also, notice how it uses the connection string provided in the Lambda configuration rather than a hard-coded value.\n\nInside the handler, the code connects to the `sample_mflix` database and the `movies` collection. It then finds the first 10 results and converts them into an array.\n\nThe 10 results are then returned as the body of the Lambda function.\n\nYour function is now ready to be deployed. This time, you will need to zip the content of this folder. To do so, you can use your favorite GUI or the following command if you have the `zip` utility installed.\n\n```bash\nzip -r output.zip .\n```\n\nGo back to the Lambda code editor, and look for the \"Upload from\" button in the upper right corner of the editor. Choose your newly created `output.zip` file, and click \"Save.\"\n\nNow go back to the tab with the result of the function, and hit refresh. You should see the first 10 documents from the `movies` collection.\n\n## Summary\n\nUsing AWS Lambda is a great way to write small functions that can run efficiently without worrying about configuring servers. It's also a very cost-effective way to host your application since you only pay per usage. You can find more details on how to build Lambda functions to connect to your MongoDB database in the documentation.\n\nIf you want a fully serverless solution, you can also run MongoDB as a serverless service. 
Like the Lambda functions, you will only pay for a serverless database instance based on usage.\n\nIf you want to learn more about how to use MongoDB, check out our Community Forums.", "format": "md", "metadata": {"tags": ["JavaScript", "Atlas", "AWS"], "pageDescription": "In this article, you will learn how to use the MongoDB Node.js driver in AWS Lambda functions.", "contentType": "Tutorial"}, "title": "Using the Node.js MongoDB Driver with AWS Lambda", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/events/react-summit", "action": "created", "body": "# React Summit\n\nCome and meet our team at React Summit!\n\n## Sessions\nDon't miss these talks by our team:\n\n|Date| Titre| Speaker|\n|---|---|---|\n\n## Additional Resources\n", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript"], "pageDescription": "Join us at React Summit!", "contentType": "Event"}, "title": "React Summit", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/insurance-data-model-relational-migrator-refactor", "action": "created", "body": "# Modernize your insurance data models with MongoDB Relational Migrator\n\nIn the 70s and 80s, there were few commercial off-the-shelf solutions available for many core insurance functions, so insurers had to build their own applications. Such applications are often host-based, meaning that they are mainframe technologies. These legacy platforms include software languages such as COBOL and CICS. Many insurers are still struggling to replace these legacy technologies due to a confluence of variables such as a lack of developers with programming skills in these older technologies and complicated insurance products. This results in high maintenance costs and difficulty in making changes. In brief, legacy systems are a barrier to progress in the insurance industry.\n\nWhether you\u2019re looking to maintain and improve existing applications or push new products and features to market, the data trapped inside those systems is a drag on innovation.\n\nThis is particularly true when we think about the data models that sit at the core of application systems (e.g., underwriting), defining entities and relationships between them.\n\nIn this tutorial, we will demonstrate:\n\n - Why the document model simplifies the data model of a standard insurance system.\n - How MongoDB's Relational Migrator effortlessly transforms an unwieldy 21-table schema into a lean five-collection MongoDB model.\n\nThis will ultimately prove that with MongoDB, insurers will launch new products faster, swiftly adapt to regulatory changes, and enhance customer experiences. \n\nTo do this, we will focus on the Object Management Group\u2019s Party Role model and how the model can be ported from a relational structure to MongoDB\u2019s document model.\n\n \n\nIn particular, we will describe the refactoring of Party in the context of Policy and Claim & Litigation. 
For each of them, a short description, a simplified Hackolade model (Entity Relationship Diagrams - ERD), and the document refactoring using Relational Migrator are provided.\n\nRelational Migrator is a tool that allows you to:\n\n - Design an effective MongoDB schema, derived from an existing relational schema.\n - Migrate data from Oracle, SQL Server, MySQL, PostgreSQL, or Sybase ASE to MongoDB, while transforming to the target schema.\n - Generate code artifacts to reduce the time required to update application code.\n\nAt the end of this tutorial, you will have learned how to use Relational Migrator to refactor the Party Role relational data model and migrate the data into MongoDB collections.\n\n## Connect to Postgres and set up Relational Migrator\n\n### Prerequisites\n\n - **MongoDB Relational Migrator** (version 1.4.3 or higher): MongoDB Relational Migrator is a powerful tool to help you migrate relational workloads to MongoDB. Download and install the latest version.\n - **PostgreSQL** (version 15 or higher): PostgreSQL is a relational database management system. It will serve as the source database for our migration task. Download the last version of PostgreSQL. \n - **MongoDB** (version 7.0 or higher): You will need access to a MongoDB instance with write permissions to create the new database to where we are going to migrate the data. You can install the latest version of the MongoDB Community Server or simply deploy a free MongoDB Atlas cluster in less than three minutes!\n\nIn this tutorial, we are going to use PostgreSQL as the RDBMS to hold the original tabular schema to be migrated to MongoDB. In order to follow it, you will need to have access to a PostgreSQL database server instance with permissions to create a new database and user. The instance may be in the cloud, on-prem, or in your local machine. You just need to know the URL, port, user, and password of the PostgreSQL instance of your choice. \n\nWe will also use two PostgreSQL Client Applications: psql and pg_restore. These terminal-based applications will allow us to interact with our PostgreSQL database server instance. The first application, `psql`, enables you to type in queries interactively, issue them to PostgreSQL, and see the query results. It will be useful to create the database and run queries to verify that the schema has been successfully replicated. On the other hand, we will use `pg_restore` to restore the PostgreSQL database from the archive file available in the GitHub repository. This archive file contains all the tables, relationships, and sample data from the Party Role model in a tabular format. It will serve as the starting point in our data migration journey.\n\nThe standard ready-to-use packages will already include both the server and these client tools. We recommend using version 15 or higher. You can download it from the official PostgreSQL Downloads site or, if you are a macOS user, just run the command below in your terminal.\n\n```\nbrew install postgresql@15\n```\n\n>Note: Verify that Postgres database tools have been successfully installed by running `psql --version` and `pg_restore --version`. 
If you see an error message, make sure the containing directory of the tools is added to your `PATH`.\n\n### Replicate the Party Role model in PostgreSQL\n\nFirst, we need to connect to the PostgreSQL database.\n\n```\npsql -h -p -U -d \n```\nIf it\u2019s a newly installed local instance with the default parameters, you can use `127.0.0.1` as your host, `5432` as the port, `postgres` as database, and type `whoami` in your terminal to get your default username if no other has been specified during the installation of the PostgreSQL database server.\n\nOnce you are connected, we need to create a database to load the data.\n\n```\nCREATE DATABASE mongodb_insurance_model;\n```\nThen, we will create the user that will have access to the new database, so we don\u2019t need to use the root user in the relational migrator. Please remember to change the password in the command below. \n\n```\nCREATE USER istadmin WITH PASSWORD '';\nALTER DATABASE mongodb_insurance_model OWNER TO istadmin;\n```\n\nFinally, we will populate the database with the Party Role model, a standard widely used in the insurance industry to define how people, organizations, and groups are involved in agreements, policies, claims, insurable objects, and other major entities. This will not only replicate the table structure, relationships, and ownership, but it will also load some sample data.\n\n 1. First, download the .tar file that contains the backup of the database. \n 2. Navigate to the folder where the file is downloaded using your terminal. \n 3. Run the command below in your terminal to load the data. Please remember to change the host, port, and user before executing the command. \n\n```\npg_restore -h -p -U -d mongodb_insurance_model mongodb_insurance_model.tar\n```\n\nAfter a few seconds, our new database will be ready to use. Verify the successful restore by running the command below:\n\n```\npsql -h -p -U -d mongodb_insurance_model -c \"SELECT * FROM pg_catalog.pg_tables WHERE schemaname='omg';\"\n```\n\nYou should see a list of 21 tables similar to the one in the figure below. \n\nIf all looks good, you are ready to connect your data to MongoDB Relational Migrator.\n\n### Connect to Relational Migrator\n\nOpen the Relational Migrator app and click on the \u201cNew Project\u201d button. We will start a new project from scratch by connecting to the database we just created. Click on \u201cConnect database,\u201d select \u201cPostgreSQL\u201d as the database type, and fill in the connection details. Test the connection before proceeding and if the connection test is successful, click \u201cConnect.\u201d If a \u201cno encryption\u201d error is thrown, click on SSL \u2192 enable SSL.\n\nIn the next screen, select all 21 tables from the OMG schema and click \u201cNext.\u201d On this new screen, you will need to define your initial schema. We will start with a MongoDB schema that matches your relational schema. Leave the other options as default. Next, give the project a name and click \u201cDone.\u201d \n\nThis will generate a schema that matches the original one. That is, we will have one collection per table in the original schema. This is a good starting point, but as we have seen, one of the advantages of the document model is that it is able to reduce this initial complexity. To do so, we will take an object-modeling approach. 
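To picture what that initial table-per-collection schema means in practice, a single party is still scattered across several documents that reference each other by identifier, roughly like the sketch below. Apart from `partyIdentifier`, the field names and values here are purely illustrative.

```javascript
// "party" collection — one document per row of the original party table
{ "partyIdentifier": "1001", "partyTypeCode": "PERSON" }

// "person" collection — person-specific attributes live in a separate document
{ "partyIdentifier": "1001", "firstName": "Jane", "lastName": "Doe" }

// "party_location_address" collection — a join document only needed to link party and address
{ "partyIdentifier": "1001", "locationAddressIdentifier": "2001" }
```

The embedding steps below collapse these related documents into a single party document.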
We will focus on four top-level objects that will serve as the starting point to define the entire schema: Party, Policy, Claim, and Litigation.\n\nBy default, you will see a horizontal split view of the Relational (upper part) and MongoDB (lower part) model. You can change the view model from the bottom left corner \u201cView\u201d menu. Please note that all the following steps in the tutorial will be done in the MongoDB view (MDB). Feel free to change the view mode to \u201cMDB\u201d for a more spacious working view. \n\n## Party domain\n\nThe Party Subject Area (Figure 3.) shows that all persons, organizations, and groups can be represented as \u201cparties\u201d and parties can then be related to other major objects with specified roles. The Party design also provides a common approach to describing communication identifiers, relationships between parties, and legal identifiers.\n\nTo illustrate the process in a simpler and clearer way, we reduced the number of objects and built a new ERD in Relational Migrator (Figure 4). Such models are most often implemented in run-time transactional systems. Their impact and dependencies can be found across multiple systems and domains. Additionally, they can result in very large physical database objects, and centralized storage and access patterns can be bottlenecks.\n\nThe key Party entities are:\n\nParty represents people, organizations, and groups. In the original schema, this is represented through one-to-one relationships. Party holds the common attributes for all parties, while each of the other three tables stores the particularities of each party class. These differences result in distinct fields for each class, which forces tabular schemas to create new tables. The inherent flexibility of the document model allows embedding this information in a single document. To do this, follow the steps below: \n\n - Select the \"party\" collection in the MDB view of Relational Migrator. At the moment, this collection has the same fields as the original matched table. \n - On the right-hand side, you will see the mappings menu (Figure 5). Click on the \u201cAdd\u201d button, select \u201cEmbedded documents,\u201d and choose \"person\" in the \u201cSource table\u201d dropdown menu. Click \u201cSave and close\u201d and repeat this process for the \"organization\" and \"grouping\" tables.\n - After this, you can remove the \"person,\" \"organization,\" and \"grouping\" collections. Right-click on them, select \u201cRemove Entity,\u201d and confirm \u201cRemove from the MongoDB model.\u201d You have already simplified your original model by three tables, and we\u2019re just getting started. \n\nLooking at Figure 4, we can see that there is another entity that could be easily embedded in the party collection: location addresses. In this case, this table has a many-to-many relationship facilitated by the \"party_location_address\" table. As a party can have many location addresses, instead of an embedded document, we will use an embedded array. You can do it in the following way:\n\n - Select the collection \"party\" again, click the \u201cAdd\u201d button, select \u201cEmbedded array,\u201d and choose \"party_location_address\" in the \u201cSource table\u201d dropdown. Under the \u201cAll fields\u201d checkbox, uncheck the `partyIdentifier` field. We are not going to need it. Addresses will be contained in the \u201cparty\u201d document anyway. Leave the other fields as default and click the \u201cSave and close\u201d button. 
\n - We have now established the relationship, but we want to have the address details too. From the \u201cparty\u201d mapping menu, click the \u201cAdd\u201d button again. Then, select \u201cEmbedded documents,\u201d choose \u201clocation_address,\u201d and in the \u201cRoot path\u201d section, check the box that says \u201cMerge fields into the parent.\u201d This will ensure that we don\u2019t have more nested fields than necessary. Click \u201cSave and close.\u201d\n - You can now delete the \u201cparty_location_address\u201d collection, but don\u2019t delete \u201clocation_address\u201d as it still has an existing relationship with \u201cinsurable_object.\u201d\n\nYou are done. The \u201cparty\u201d entity is ready to go. We have not only reduced six tables to just one, but the \u201cperson,\u201d \u201corganization,\u201d and \u201cgrouping\u201d embedded documents will only show up if that party is indeed a person, organization, or grouping. One collection can contain documents with different schemas for each of these classes.\n\nAt the beginning of the section, we also spoke about the \u201cparty role\u201d entity. It represents the role a party plays in a specific context such as policy, claim, or litigation. In the original schema, this many-to-many relationship is facilitated via intermediate tables like \u201cpolicy_party_role,\u201d \u201cclaim_party_role,\u201d and \u201clitigation_party_role\u201d respectively. These intermediate tables will be embedded in other collections, but the \u201cparty_role\u201d table can be left out as a reference collection on its own. In this way, we avoid having to update one by one all policy, claim, and litigation documents if one of the attributes of \u201cparty role\u201d changes.\n\nLet\u2019s see next how we can model the \u201cpolicy\u201d entity.\n\n## Policy Domain\n\nThe key entities of Policy are:\n\nFrom a top-level perspective, we can observe that the \u201cpolicy\u201d entity is composed of policy coverage parts and the agreements of each of the parties involved with their respective roles. A policy can have both several parts to cover and several parties agreements involved. Therefore, similarly to what happened with party location addresses, they will be matched to array embeddings. \n\nLet\u2019s start with the party agreements. A policy may have many parties involved, and each party may be part of many policies. This results in a many-to-many relationship facilitated by the \u201cpolicy_party_role\u201d table. This table also covers the relationships between roles and agreements, as each party will play a role and will have an agreement in a specific policy. \n\n - From the MDB view, select the \u201cpolicy\u201d collection. Click on the \u201cAdd\u201d button, select \u201cembedded array,\u201d and choose \u201cpolicy_party_role\u201d in the source table dropdown. Uncheck the `policyIdentifier` field, leave the other fields as default, and click \u201cSave and close.\u201d\n - We will leave the party as a referenced object to the \u201cparty\u201d collection we created earlier, so we don\u2019t need to take any further action on this. The relationship remains in the new model through the `partyIdentifier` field acting as a foreign key. However, we need to include the agreements. 
From the \u201cpolicy\u201d mapping menu, click \u201cAdd,\u201d select \u201cEmbedded document,\u201d pick \u201cagreement\u201d as the source table, leave the other options as default, and click \u201cSave and close.\u201d \n - At this point, we can remove the collections \u201cpolicy_party_role\u201d and \u201cagreement.\u201d Remember that we have defined \u201cparty_role\u201d as a separate reference collection, so just having `partyRoleCode` as an identifier in the destination table will be enough. \n\nNext, we will include the policy coverage parts. \n\n - From the \u201cpolicy\u201d mapping menu, click \u201cAdd,\u201d select \u201cEmbedded array,\u201d pick \u201cpolicy_coverage_part\u201d as the source table, uncheck the `policyIdentifier` field, leave the other options as default, and click \u201cSave and close.\u201d\n - Each coverage part has details included in the \u201cpolicy_coverage_detail\u201d. We will add this as an embedded array inside of each coverage part. In the \u201cpolicy\u201d mapping menu, click \u201cAdd,\u201d select \u201cEmbedded array,\u201d pick \u201cpolicy_coverage_detail,\u201d and make sure that the prefix selected in the \u201cRoot path\u201d section is `policyCoverageParts`. Remove `policyIdentifier` and `coveragePartCode` fields and click \u201cSave and close.\u201d\n - Coverage details include \u201climits,\u201d \u201cdeductibles,\u201d and \u201cinsurableObjects.\u201d Let\u2019s add them in! Click \u201cAdd\u201d in the \u201cpolicy\u201d mapping menu, \u201cEmbedded Array,\u201d pick \u201cpolicy_limit,\u201d remove the `policyCoverageDetailIdentifier`, and click \u201cSave and close.\u201d Repeat the process for \u201cpolicy_deductible.\u201d For \u201cinsurable_object,\u201d repeat the process but select \u201cEmbedded document\u201d instead of \u201cEmbedded array.\u201d\n - As you can see in Figure 8, insurable objects have additional relationships to specify the address and roles played by the different parties. To add them, we just need to embed them in the same fashion we have done so far. Click \u201cAdd\u201d in the \u201cpolicy\u201d mapping menu, select \u201cEmbedded array,\u201d and pick \u201cinsurable_object_party_role.\u201d This is the table used to facilitate the many-to-many relationship between insurable objects and party roles. Uncheck `insurableObjectIdentifier` and click \u201cSave and close.\u201d Party will be referenced by the `partyIdentifier` field. For the sake of simplicity, we won\u2019t embed address details here, but remember in a production environment, you would need to add it in a similar way as we did before in the \u201cparty\u201d collection. \n - After this, we can safely remove the collections \u201cpolicy_coverage_part,\u201d \u201cpolicy_coverage_detail,\u201d \u201cpolicy_deductible,\u201d and \u201cpolicy_limit.\u201d\n\nBy now, we should have a collection similar to the one below and five fewer tables from our original model.\n\n## Claim & Litigation Domain\n\nThe key entities of Claim and Litigation are:\n\nIn this domain, we have already identified the two main entities: claim and litigation. We will use them as top-level documents to refactor the relationships shown in Figure 10 in a more intuitive way. Let\u2019s see how you can model claims first. \n\n - We\u2019ll begin embedding the parties involved in a claim with their respective roles. 
Select \u201cclaim\u201d collection, click \u201cAdd\u201d in the mapping menu, select \u201cEmbedded array,\u201d and pick \u201cclaim_party_role\u201d as the source table. You can uncheck `claimIdentifier` from the field list. Last, click the \u201cSave and close\u201d button.\n - Next, we will integrate the insurable object that is part of the claim. Repeat the previous step but choose \u201cEmbedded documents\u201d as the table migration option and \u201cinsurable_object\u201d as the source table. Again, we will not embed the \u201clocation_address\u201d entity to keep it simple.\n - Within `insurableObject`, we will include the policy coverage details establishing the link between claims and policies. Add a new mapping, select \u201cEmbedded array,\u201d choose \u201cpolicy_coverage_detail\u201d as the source table, and uncheck the field `insurableObjectIdentifier`. Leave the other options as default. \n - Lastly, we will recreate the many-to-many relationship between litigation and claim. As we will have a separate litigation entity, we just need to reference that entity from the claims document, which means that just having an array of litigation identifiers will be enough. Repeat the previous step by selecting \u201cEmbedded array,\u201d \u201clitigation_party_role,\u201d and unchecking all fields except `litigationIdentifier` in the field list. \n\nThe claim model is ready to go. We can now remove the collection \u201cclaimPartyRole.\u201d \n\nLet\u2019s continue with the litigation entity. Litigations may have several parties involved, each playing a specific role and with a particular associated claim. This relationship is facilitated through the \u201clitigation_party_role\u201d collection. We will represent it using an embedded array. Additionally, we will include some fields in the claim domain apart from its identifier. This is necessary so we can have a snapshot of the claim details at the time the litigation was made, so even if the claim details change, we won\u2019t lose the original claim data associated with the litigation. To do so, follow the steps below:\n\n - From the \u201clitigation\u201d mapping menu, click on the \u201cAdd\u201d button, select \u201cEmbedded array,\u201d and pick \u201clitigation_party_role\u201d as the source table. Remove `litigationIdentifier` from the field list and click \u201cSave and Close.\u201d \n - In a similar way, add claim details by adding \u201cclaim\u201d as an \u201cEmbedded document.\u201d \n - Repeat the process again but choose \u201cinsurable_object\u201d as the source table for the embedded document. Make sure the root path prefix is set to `litigationPartyRoles.claim`.\n - Finally, add \u201cinsurable_object_party_role\u201d as an \u201cEmbedded array.\u201d The root path prefix should be `litigationPartyRoles.claim.insurableObject`.\n\nAnd that\u2019s it. We have modeled the entire relationship schema in just five collections: \u201cparty,\u201d \u201cpartyRole,\u201d \u201cpolicy,\u201d \u201cclaim,\u201d and \u201clitigation.\u201d You can remove the rest of the collections and compare the original tabular schema composed of 21 tables to the resulting five collections. \n\n## Migrate your data to MongoDB\n\nNow that our model is complete, we just need to migrate the data to our MongoDB instance. First, verify that you have \u201cdbAdmin\u201d permissions in the destination OMG database. You can check and update permissions from the Atlas left-side security menu in the \u201cDatabase Access\u201d section. 
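As a quick sanity check before running the sync job, this is roughly the shape of the party documents the mappings above should produce, using the same illustrative party as before. The exact field names for the embedded documents and arrays depend on the mapping options you chose, and only parties of the person class will carry a `person` sub-document.

```javascript
{
  "partyIdentifier": "1001",
  "person": { "firstName": "Jane", "lastName": "Doe" },
  "partyLocationAddresses": [
    { "locationAddressIdentifier": "2001", "city": "Madrid", "postalCode": "28001" }
  ]
}
```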
\n\nOnce this is done, navigate to the \u201cData Migration\u201d tab in the top navigation bar and click \u201cCreate sync job.\u201d You will be prompted to add the source and destination database details. In our case, these are PostgreSQL and MongoDB respectively. Fill in the details and click \u201cConnect\u201d in both steps until you get to the \u201cMigration Options\u201d step. In this menu, we will leave all options as default. This will migrate our data in a snapshot mode, which means it will load all our data at once. Feel free to check our documentation for more sync job alternatives. \n\nFinally, click the \u201cStart\u201d button and wait until the migration is complete. This can take a couple of minutes. Once ready, you will see the \u201cCompleted\u201d tag in the snapshot state card. You can now connect to your database in MongoDB Atlas or Compass and check how all your data is now loaded in MongoDB ready to leverage all the advantages of the document model. \n\n## Additional resources\n\nCongratulations, you\u2019ve just completed your data migration! We've not just simplified the data model of a standard insurance system; we've significantly modernized how information flows in the industry.\n\nOn the technical side, MongoDB's Relational Migrator truly is a game-changer, effortlessly transforming an unwieldy 21-table schema into a lean five-collection MongoDB model. This translates to quicker, more efficient data operations, making it a dream for developers and administrators alike.\n\nOn the business side, imagine the agility gained \u2014 faster time-to-market for new insurance products, swift adaptation to regulatory changes, and enhanced customer experiences. \n\nThe bottom line? MongoDB's document model and Relational Migrator aren't just tools; they're the catalysts for a future-ready, nimble insurance landscape.\n\nIf you want to learn how MongoDB can help you modernize, move to any cloud, and embrace the AI-driven future of insurance, check the resources below. What will you build next?\n\n - MongoDB for Insurance\n - Relational Migrator: Migrate to MongoDB with confidence\n - From RDBMS to NoSQL at Enterprise Scale\n\n>Access our GitHub repository for DDL scripts, Hackolade models, and more! \n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "This tutorial walks you through the refactoring of the OMG Party Role data model, a widely used insurance standard. With the help of MongoDB Relational Migrator you\u2019ll be able to refactor your relational tables into MongoDB collections and reap all the document model benefits.", "contentType": "Tutorial"}, "title": "Modernize your insurance data models with MongoDB Relational Migrator", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/beyond-basics-enhancing-kotlin-ktor-api-vector-search", "action": "created", "body": "# Beyond Basics: Enhancing Kotlin Ktor API With Vector Search\n\nIn this article, we will delve into advanced MongoDB techniques in conjunction with the Kotlin Ktor API, building upon the foundation established in our previous article, Mastering Kotlin: Creating an API With Ktor and MongoDB Atlas. Our focus will be on integrating robust features such as Hugging Face, Vector Search, and MongoDB Atlas triggers/functions to augment the functionality and performance of our API.\n\nWe will start by providing an overview of these advanced MongoDB techniques and their critical role in contemporary API development. 
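To make the triggers-and-functions piece concrete from the start, here is a sketch of the Atlas trigger function described in the demonstration below: it sends each newly inserted exercise description to a Hugging Face feature-extraction endpoint and stores the returned array on the document as `descEmbedding`. The model, secret name, database name, and `description` field are assumptions for illustration only.

```javascript
exports = async function (changeEvent) {
  const exercise = changeEvent.fullDocument;

  // Request an embedding for the exercise description from Hugging Face
  // (model and secret name are illustrative assumptions).
  const response = await context.http.post({
    url: "https://api-inference.huggingface.co/pipeline/feature-extraction/sentence-transformers/all-MiniLM-L6-v2",
    headers: {
      Authorization: [`Bearer ${context.values.get("HuggingFaceAPIKey")}`],
      "Content-Type": ["application/json"],
    },
    body: JSON.stringify({ inputs: exercise.description }),
  });
  const descEmbedding = JSON.parse(response.body.text());

  // Store the embedding back on the exercise document.
  const exercises = context.services
    .get("mongodb-atlas")
    .db("exercisesdb")
    .collection("exercises");
  await exercises.updateOne({ _id: exercise._id }, { $set: { descEmbedding } });
};
```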
Subsequently, we will delve into practical implementations, showcasing how you can seamlessly integrate Hugging Face for natural language processing, leverage Vector Search for rapid data retrieval, and automate database processes using triggers and functions.\n\n## Prerequisites\n\n- MongoDB Atlas account\n - Note: Get started with MongoDB Atlas for free! If you don\u2019t already have an account, MongoDB offers a free-forever Atlas cluster.\n- Hugging Face account\n- Source code from the previous article\n- MongoDB Tools\n\n## Demonstration\n\nWe'll begin by importing a dataset of fitness exercises into MongoDB Atlas as documents. Then, we'll create a trigger that activates upon insertion. For each document in the dataset, a function will be invoked to request Hugging Face's API. This function will send the exercise description for conversion into an embedded array, which will be saved into the exercises collection as *descEmbedding*:\n\n to create your key:\n\n to import the exercises.json file via the command line. After installing MongoDB Tools, simply paste the \"exercises.json\" file into the \"bin\" folder and execute the command, as shown in the image below:\n\n. Our objective is to create an endpoint **/processRequest** to send an input to HuggingFace, such as:\n\n*\"**I need an exercise for my shoulders and to lose my belly fat**.\"*\n\n.\n\nIf you have any questions or want to discuss further implementations, feel free to reach out to the MongoDB Developer Community forum for support and guidance.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt634af19fb7ed14c5/65fc4b9c73d0bc30f7f3de73/1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd30af9afe5366352/65fc4bb7f2a29205cfbf725b/2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdf42f1df193dbd24/65fc4bd6e55fcb1058237447/3.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta874d80fcc78b8bd/65fc4bf5d467d22d530bd73a/4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7547c6fcc6e1f2d2/65fc4c0fd95760d277508123/5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt96197cd32df66580/65fc4c38d4e0c0250b2947b4/6.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt38c3724da63f3c95/65fc4c56fc863105d7d732c1/7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt49218ab4f7a3cb91/65fc4c8ca1e8152dccd5da77/8.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9cae67683fad5f9c/65fc4ca3f2a2920d57bf7268/9.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5fc6270e5e2f8665/65fc4cbb5fa1c6c4db4bfb01/10.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf428bc700f44f2b5/65fc4cd6f4a4cf171d150bb2/11.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltad71144e071e11af/65fc4cf0d467d2595d0bd74a/12.png\n [13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9009a43a7cd07975/65fc4d6d039fddd047339cbe/13.png\n [14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd016d8390bd80397/65fc4d83d957609ea9508134/14.png\n [15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcb767717bc6af497/65fc4da49b2cda321e9404bd/15.png\n [16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc2e0005d6df9a273/65fc4db80780b933c761f14f/16.png\n [17]: 
https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt17850b744335f8f7/65fc4dce39973e99456eab16/17.png\n [18]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6915b7c63ea2bf5d/65fc4de754369a8839696baf/18.png\n [19]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt75993bebed24f8ff/65fc4df9a93acb7b58313f7d/19.png\n [20]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc68892874fb5cafc/65fc4e0f55464dd4470e2097/20.png\n [21]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcdbd1c93b61b7535/65fc4e347a44b0822854bc61/21.png\n [22]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt01ebdac5cf78243d/65fc4e4a54369ac59e696bbe/22.png\n [23]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte13c91279d8805ef/65fc4e5dfc8631011ed732e7/23.png\n [24]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb17a754e566be42b/65fc4e7054369ac0c5696bc2/24.png\n [25]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt03f0581b399701e8/65fc4e8bd4e0c0e18c2947e2/25.png\n [26]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt56c5fae14d1a2b6a/65fc4ea0d95760693a508145/26.png", "format": "md", "metadata": {"tags": ["Atlas", "Kotlin", "AI"], "pageDescription": "Learn how to integrate Vector Search into your Kotlin with Ktor application using MongoDB.", "contentType": "Tutorial"}, "title": "Beyond Basics: Enhancing Kotlin Ktor API With Vector Search", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/build-inventory-management-system-using-mongodb-atlas", "action": "created", "body": "# Build an Inventory Management System Using MongoDB Atlas\n\nIn the competitive retail landscape, having the right stock in the right place at the right time is crucial. Too little inventory when and where it\u2019s needed can create unhappy customers. However, a large inventory can increase costs and risks associated with its storage. Companies of all sizes struggle with inventory management. Solutions such as a single view of inventory, real-time analytics, and event-driven architectures can help your businesses overcome these challenges and take your inventory management to the next level. By the end of this guide, you'll have inventory management up and running, capable of all the solutions mentioned above. \n\nWe will walk you through the process of configuring and using MongoDB Atlas as your back end for your Next.js app, a powerful framework for building modern web applications with React.\n\nThe architecture we're about to set up is depicted in the diagram below:\n\n Let's get started!\n\n## Prerequisites\n\nBefore you begin working with this project, ensure that you have the following prerequisites set up in your development environment:\n\n- **Git** (version 2.39 or higher): This project utilizes Git for version control. Make sure you have Git installed on your system. You can download and install the latest version of Git from the official website: Git Downloads.\n- **Node.js** (version 20 or higher) and **npm** (version 9.6 or higher): The project relies on the Node.js runtime environment and npm (Node Package Manager) to manage dependencies and run scripts. You need to have them both installed on your machine. You can download Node.js from the official website: Node.js Downloads. 
After installing Node.js, npm will be available by default.\n- **jq** (version 1.6 or higher): jq is a lightweight and flexible command-line JSON processor. We will use it to filter and format some command outputs to better identify the values we are interested in. Visit the official Download jq page to get the latest version.\n- **mongorestore** (version 100.9.4 or higher): The mongorestore tool loads data from a binary database dump. The dump directory in the GitHub repository includes a demo database with preloaded collections, views, and indexes, to get you up and running in no time. This tool is part of the MongoDB Database Tools package. Follow the Database Tools Installation Guide to install mongorestore. When you are done with the installation, run mongorestore --version in your terminal to verify the tool is ready to use.\n- **App Services CLI** (version 1.3.1 or higher): The Atlas App Services Command Line Interface (appservices) allows you to programmatically manage your applications. We will use it to speed up the app backend setup by using the provided template in the app_services directory in the GitHub repository. App Services CLI is available on npm. To install the CLI on your system, ensure that you have Node.js installed and then run the following command in your shell: npm install -g atlas-app-services-cli.\n- **MongoDB Atlas cluster** (M0 or higher): This project uses a MongoDB Atlas cluster to manage the database. You should have a MongoDB Atlas account and a minimum free tier cluster set up. If you don't have an account, you can sign up for free at MongoDB Atlas. Once you have an account, follow these steps to set up a minimum free tier cluster or follow the Getting Started guide:\n - Log into your MongoDB Atlas account.\n - Create a new project or use an existing one, and then click \u201cCreate a new database.\u201d\n - Choose the free tier option (M0).\n - You can choose the cloud provider of your choice but we recommend using the same provider and region both for the cluster and the app hosting in order to improve performance.\n - Configure the cluster settings according to your preferences and then click \u201cfinish and close\u201d on the bottom right.\n\n## Initial configuration\n\n### Obtain your connection string\n\nOnce the MongoDB Atlas cluster is set up, locate your newly created cluster, click the \"Connect\" button, and select the \"Compass\" section. Copy the provided connection string. It should resemble something like this:\n\n```\nmongodb+srv://:@cluster-name.xxxxx.mongodb.net/\n```\n\n> Note: You will need the connection string to set up your environment variables later (`MONGODB_URI`).\n\n### Cloning the GitHub repository\n\nNow, it's time to clone the demo app source code from GitHub to your local machine:\n\n1. Open your terminal or command prompt.\n\n2. Navigate to your preferred directory where you want to store the project using the cd command. For example:\n\n ```\n cd /path/to/your/desired/directory\n ```\n\n3. Once you're in the desired directory, use the `git clone` command to clone the repository. Copy the repository URL from the GitHub repository's main page:\n\n ```\n git clone git@github.com:mongodb-industry-solutions/Inventory_mgmt.git\n ```\n\n4. After running the `git clone` command, a new directory with the repository's name will be created in your chosen directory. 
To navigate into the cloned repository, use the cd command:\n\n ```\n cd Inventory_mgmt\n ```\n\n## MongoDB Atlas configuration\n\n### Replicate the sample database\n\nThe database contains:\n\n- Five collections\n - **Products**: The sample database contains 17 products corresponding to T-shirts of different colors. Each product has five variants that represent five different sizes, from XS to XL. These variants are stored as an embedded array inside the product. Each variant will have a different SKU and therefore, its own stock level. Stock is stored both at item (`items.stock`) and product level (`total_stock_sum`).\n - **Transactions**: This collection will be empty initially. Transactions will be generated using the app, and they can be of inbound or outbound type. Outbound transactions result in a decrease in the product stock such as a sale. On the other hand, inbound transactions result in a product stock increase, such as a replenishment order.\n - **Locations**: This collection stores details of each of the locations where we want to keep track of the product stock. For the sake of this guide, we will just have two stores to demonstrate a multi-store scenario, but this could be scaled to thousands of locations. Warehouses and other intermediate locations could be also included. In this case, we assume a single warehouse, and therefore, we don\u2019t need to include a location record for it.\n - **Users**: Our app will have three users: two store managers and one area manager. Store managers will be in charge of the inventory for each of the store locations. Both stores are part of the same area, and the area manager will have an overview of the inventory in all stores assigned to the area.\n - **Counters**: This support collection will keep track of the number of documents in the transactions collection so an auto-increment number can be assigned to each transaction. In this way, apart from the default _id field, we can have a human-readable transaction identifier.\n- One view:\n - Product area view: This view is used by the area manager to have an overview of the inventory in the area. Using the aggregation pipeline, the product and item stock levels are grouped for all the locations in the same area.\n- One index:\n - The number of transactions can grow quickly as we use the app. To improve performance, it is a good practice to set indexes that can be leveraged by common queries. In this case, the latest transactions are usually more relevant and therefore, they are displayed first. We also tend to filter them by type \u2014 inbound/outbound \u2014 and product. These three fields \u2014 `placement_timestamp`, type, and `product.name` \u2014 are part of a compound index that will help us to improve transaction retrieval time.\n\nTo replicate the sample database on your MongoDB Atlas cluster, run the following command in your terminal:\n\n```\n mongorestore --uri dump/\n```\n\nMake sure to replace `` with your MongoDB Atlas connection string. If you've already followed the initial configuration steps, you should have obtained this connection string. 
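For example, with the placeholder values from the connection string format shown earlier filled in, the full command looks something like this (substitute your own username, password, and cluster host):

```
mongorestore --uri "mongodb+srv://yourusername:yourpassword@cluster-name.xxxxx.mongodb.net" dump/
```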
Ensure that the URI includes the username, password, and cluster details.\n\nAfter executing these commands, you can verify the successful restoration of the demo database by checking the last line of the command output, which should display \"22 document(s) restored successfully.\" These correspond to the 17 products, three users, and two locations mentioned earlier.\n\n are fully managed backend services and APIs that help you build apps, integrate services, and connect to your Atlas data faster. \n\nAtlas\u2019s built-in device-to-cloud-synchronization service \u2014 Device Sync \u2014 will enable real-time low-stock alerts. Triggers and functions can execute serverless application and database logic in response to these events to automatically issue replenishment orders. And by using the Data API and Custom HTTPS Endpoints, we ensure a seamless and secure integration with the rest of the components in our inventory management solution.\n\nCheck how the stock is automatically replenished when a low-stock event occurs. \n\n pair to authenticate your CLI calls. Navigate to MongoDB Cloud Access Manager, click the \"Create API Key\" button, and select the `Project Owner` permission level. For an extra layer of security, you can add your current IP address to the Access List Entry.\n\n3. Authenticate your CLI user by running the command below in your terminal. Make sure you replace the public and private API keys with the ones we just generated in the previous step. \n\n ```\n appservices login --api-key=\"\" --private-api-key=\"\"\n ```\n\n4. Import the app by running the following command. Remember to replace `` by your preferred name. \n\n ```\n appservices push --local ./app_services/ --remote \n ```\n\n You will be prompted to configure the app options. Set them according to your needs. If you are unsure which options to choose, the default ones are usually a good way to start! For example, this is the configuration I've used.\n\n ```\n ? Do you wish to create a new app? Yes\n ? App Name inventory-management-demo\n ? App Deployment Model LOCAL\n ? Cloud Provider aws\n ? App Region aws-eu-west-1\n ? App Environment testing\n ? Please confirm the new app details shown above Yes\n ```\n\n Once the app is successfully created, you will be asked to confirm some changes. These changes will load the functions, triggers, HTTP endpoints, and other configuration parameters our inventory management system will use. \n\n After a few seconds, you will see a success message like \u201cSuccessfully pushed app up: ``\u201d. Take note of the obtained app ID.\n\n5. In addition to the app ID, our front end will also need the base URL to send HTTP requests to the back end. Run the command below in your terminal to obtain it. Remember to replace `` with your own value. The jq tool will help us to get the appropriate field and format. Take note of the obtained URI.\n\n ```\n appservices apps describe --app -f json | jq -r '.doc.http_endpoints0].url | split(\"/\") | (.[0] + \"//\" + .[2])'\n ```\n\n6. Finally, our calls to the back end will need to be authenticated. For this reason, we will create an API key that will be used by the server side of our inventory management system to generate an access token. 
It is only this access token that will be passed to the client side of the system to authenticate the calls to the back end.\n\n> Important: This API key is not the same as the key used to log into the `appservices` CLI.\n\nAgain, before running the command, remember to replace the placeholder ``.\n\n```\nappservices users create --type=api-key --app= --name=tutorial-key\n```\n\nAfter a few seconds, you should see the message \u201cSuccessfully created API Key,\u201d followed by a JSON object. Copy the content of the field `key` and store it in a secure place. Remember that if you lose this key, you will need to create a new one.\n\n> Note: You will need the app ID, base App Services URI, and API key to set up your environment variables later (`REALM_APP_ID`, `APP_SERVICES_URI`, `API_KEY`).\n\n### Set up Atlas Search and filter facets\n\nFollow these steps to configure search indexes for full-text search and filter facets:\n\n1. Navigate to the \"Data Services\" section within Atlas. Select your cluster and click on \"Atlas Search\" located next to \"Collections.\"\n\n2. If you are in the M0 tier, you can create two search indexes for the products collection. This will allow you to merely search across the products collection. However, if you have a tier above M0, you can create additional search indexes. This will come in handy if you want to search and filter not only across your product catalog but also your transaction records, such as sales and replenishment orders.\n\n3. Let's begin with creating the indexes for full-text search:\n\n 1. Click \"Create Search Index.\"\n\n 2. You can choose to use either the visual or JSON editor. Select \"JSON Editor\" and click \"Next.\"\n\n 3. Leave the index name as `default`.\n\n 4. Select your newly created database and choose the **products** collection. We will leave the default index definition, which should look like the one below.\n\n ```\n {\n \"mappings\": {\n \"dynamic\": true\n }\n }\n ```\n\n 5. Click \"Next\" and on the next screen, confirm by clicking \"Create Search Index.\"\n 6. After a few moments, your index will be ready for use. While you wait, you can create the other search index for the **transactions** collection. You need to repeat the same process but change the selected collection in the \"Database and Collection\" menu next to the JSON Editor.\n\n> Important: The name of the index (default) must be the same in order for the application to be able to work properly.\n\n4. Now, let's proceed to create the indexes required for the filter facets. Note that this process is slightly different from creating default search indexes:\n 1. Click \"Create Index\" again, select the JSON Editor, and click \"Next.\"\n 2. Name this index `facets`.\n 3. Select your database and the **products** collection. For the index definition, paste the code below.\n\n**Facets index definition for products**\n\n```javascript\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"items\": {\n \"fields\": {\n \"name\": {\n \"type\": \"stringFacet\"\n }\n },\n \"type\": \"document\"\n },\n \"name\": {\n \"type\": \"stringFacet\"\n }\n }\n }\n}\n```\n\nClick \"Next\" and confirm by clicking \"Create Search Index.\" The indexing process will take some time. You can create the **transactions** index while waiting for the indexing to complete. 
In order to do that, just repeat the process but change the selected collection and the index definition by the one below:\n\n**Facets index definition for transactions**\n\n```javascript\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"items\": {\n \"fields\": {\n \"name\": {\n \"type\": \"stringFacet\"\n },\n \"product\": {\n \"fields\": {\n \"name\": {\n \"type\": \"stringFacet\"\n }\n },\n \"type\": \"document\"\n }\n },\n \"type\": \"document\"\n }\n }\n }\n}\n```\n\n> Important: The name of the index (`facets`) must be the same in order for the application to be able to work properly.\n\nBy setting up these search indexes and filter facets, your application will gain powerful search and filtering capabilities, making it more user-friendly and efficient in managing inventory data.\n\n### Set up Atlas Charts\n\nEnhance your application's visualization and analytics capabilities with Atlas Charts. Follow these steps to set up two dashboards \u2014 one for product information and another for general analytics:\n\n1. Navigate to the \"Charts\" section located next to \"App Services.\"\n2. Let's begin by creating the product dashboard:\n 1. If this is your first time using Atlas Charts, click on \u201cChart builder.\u201d Then, select the relevant project, the database, and the collection. \n 2. If you\u2019ve already used Atlas Charts (i.e., you\u2019re not a first-time user), then click on \"Add Dashboard\" in the top right corner. Give the dashboard a name and an optional description. Choose a name that clearly reflects the purpose of the dashboard. You don't need to worry about the charts in the dashboard for now. You'll configure them after the app is ready to use. \n3. Return to the Dashboards menu, click on the three dots in the top right corner of the newly created dashboard, and select \"Embed.\"\n4. Check the \"Enable unauthenticated access\" option. In the \"Allowed filter fields\" section, edit the fields and select \"Allow all fields in the data sources used in this dashboard.\" Choose the embedding method through the JavaScript SDK, and copy both the \"Base URL\" and the \"Dashboard ID.\" Click \u201cClose.\u201d\n5. Repeat the same process for the general dashboard. Select products again, as we will update this once the app has generated data. Note that the \"Base URL\" will be the same for both dashboards but the \u201cdashboard ID\u201d will be different so please take note of it.\n\n> Note: You will need the base URL and dashboard IDs to set up your environment variables later (`CHARTS_EMBED_SDK_BASEURL`, `DASHBOARD_ID_PRODUCT`, `DASHBOARD_ID_GENERAL`).\n\nSetting up Atlas Charts will provide you with visually appealing and insightful dashboards to monitor product information and overall analytics, enhancing your decision-making process and improving the efficiency of your inventory management system.\n\n## Frontend configuration\n\n### Set up environment variables\n\nCopy the `env.local.example` file in this directory to `.env.local` (which will be ignored by Git), as seen below:\n\n```\ncp .env.local.example .env.local\n```\n\nNow, open this file in your preferred text editor and update each variable on .env.local.\n\nRemember all of the notes you took earlier? Grab them because you\u2019ll use them now! Also, remember to remove any spaces after the equal sign. \n\n- `MONGODB_URI` \u2014 This is your MongoDB connection string to [MongoDB Atlas. You can find this by clicking the \"Connect\" button for your cluster. 
Note that you will have to input your Atlas password into the connection string.\n- `MONGODB_DATABASE_NAME` \u2014 This is your MongoDB database name for inventory management.\n- `REALM_APP_ID` \u2014 This variable should contain the app ID of the MongoDB Atlas App Services app you've created for the purpose of this project.\n- `APP_SERVICES_URI` \u2014 This is the base URL for your MongoDB App Services. It typically follows the format `https://..data.mongodb-api.com`.\n- `API_KEY` \u2014 This is your API key for authenticating calls using the MongoDB Data API.\n- `CHARTS_EMBED_SDK_BASEURL` \u2014 This variable should hold the URL of the charts you want to embed in your application.\n- `DASHBOARD_ID_PRODUCT` \u2014 This variable should store the Atlas Charts dashboard ID for product information.\n- `DASHBOARD_ID_GENERAL` \u2014 This variable should store the Atlas Charts dashboard ID for the general analytics tab.\n\n> Note: You may observe that some environment variables in the .env.local.example file are commented out. Don\u2019t worry about them for now. These variables will be used in the second part of the inventory management tutorial series.\n\nPlease remember to save the updated file. \n\n### Run locally\n\nExecute the following commands to run your app locally: \n\n```\nnpm ci\nnpm run dev\n```\n\nYour app should be up and running on http://localhost:3000! If it doesn't work, ensure that you have provided the correct environment variables.\n\nAlso, make sure your local IP is in the Access List of your project. If it\u2019s not, just click the \u201cAdd IP address\u201d button in the top right corner. This will display a popup menu. Within the menu, select \u201cAdd current IP address,\u201d and click \u201cConfirm.\u201d\n\n.\n\n### Enable real-time analytics\n\n1. To create a general analytics dashboard based on sales, we will need to generate sales data. Navigate to the control panel in your app by clicking http://localhost:3000/control.\n\n2. Then, click the \u201cstart selling\u201d button. When you start selling, remember to not close this window as selling will only work when the window is open. This will simulate a sale every five seconds, so we recommend letting it run for a couple of minutes. \n\n3. In the meantime, navigate back to Atlas Charts to create a general analytics dashboard. For example, you can create a line graph that displays sales over the last hour, minute by minute. Now, you\u2019ll see live data coming in, offering you real-time insights!\n\n To achieve this, from the general dashboard, click \u201cAdd Chart\u201d and select `transactions` as the data source. Select \u201cDiscrete Line\u201d in the chart type dropdown menu. Then, you will need to add `timestamp` in the X axis and `quantity` in the Y axis. \n\n of this guide to learn how to enable offline inventory management with Atlas Edge Server.\n\nCurious for more? Learn how MongoDB is helping retailers to build modern consumer experiences. 
Check the additional resources below:\n\n- MongoDB for Retail Innovation\n- How to Enhance Inventory Management With Real-Time Data Strategies\n- Radial Powers Retail Sales With 10x Higher Performance on MongoDB Atlas\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfbb152b55b18d55f/66213da1ac4b003831c3fdee/1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5c9cf9349644772a/66213dc851b16f3fd6c4b39d/2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0346d4fd163ccf8f/66213de5c9de46299bd456c3/3.gif\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcbb13b4ab1197d82/66213e0aa02ad73b34ee6aa5/4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt70eebbb5ceb633ca/66213e29b054413a7e99b163/5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf4880b64b44d0e59/66213e45a02ad7144fee6aaa/6.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7501da1b41ea0574/66213e6233301d04c488fb0f/7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd493259a179fefa3/66213eaf210d902e8c3a2157/8.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt902ff936d063d450/66213ecea02ad76743ee6ab9/9.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd99614dbd04fd894/66213ee545f9898396cf295c/10.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt993d7e3579fb930e/66213efffb977c3ce2368432/11.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9a5075ae0de6f94b/6621579d81c884e44937d10f/12-fixed.gif", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "This tutorial takes you through the process of building a web app capable of efficiently navigating through your product catalog, receiving alerts, and automating restock workflows, all while maintaining control of your inventory through real-time analytics.", "contentType": "Tutorial"}, "title": "Build an Inventory Management System Using MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/csharp-crud-tutorial", "action": "created", "body": "# MongoDB & C Sharp: CRUD Operations Tutorial\n\n \n\nIn this Quick Start post, I'll show how to set up connections between C# and MongoDB. Then I'll walk through the database Create, Read, Update, and Delete (CRUD) operations. As you already know, C# is a general-purpose language and MongoDB is a general-purpose data platform. Together, C# and MongoDB are a powerful combination.\n\n## Series Tools & Versions\n\nThe tools and versions I'm using for this series are:\n\n- MongoDB Atlas with an M0 free cluster,\n- MongoDB Sample Dataset loaded, specifically the `sample_training` and `grades` dataset,\n- Windows 10,\n- Visual Studio Community 2019,\n- NuGet packages,\n- MongoDB C# Driver: version 2.9.1,\n- MongoDB BSON Library: version 2.9.1.\n\n>C# is a popular language when using the .NET framework. If you're going to be developing in .NET and using MongoDB as your data layer, the C# driver makes it easy to do so.\n\n## Setup\n\nTo follow along, I'll be using Visual Studio 2019 on Windows 10 and connecting to a MongoDB Atlas cluster. If you're using a different OS, IDE, or text editor, the walkthrough might be slightly different, but the code itself should be fairly similar. 
Let's jump in and take a look at how nicely C# and MongoDB work together.\n\n>Get started with an M0 cluster on MongoDB Atlas today. It's free forever and you'll be able to work alongside this blog series.\n\nFor this demonstration, I've chosen a Console App (.NET Core), and I've named it `MongoDBConnectionDemo`. Next, we need to install the MongoDB Driver for C#/.NET for a Solution. We can do that quite easily with NuGet. Inside Visual Studio for Windows, by going to *Tools* -> *NuGet Package Manager* -> Manage NuGet Packages for Solution... We can browse for *MongoDB.Driver*. Then click on our Project and select the driver version we want. In this case, the latest stable version is 2.9.1. Then click on *Install*. Accept any license agreements that pop up and head into `Program.cs` to get started.\n\n### Putting the Driver to Work\n\nTo use the `MongoDB.Driver` we need to add a directive.\n\n``` csp\nusing MongoDB.Driver;\n```\n\nInside the `Main()` method we'll establish a connection to MongoDB Atlas with a connection string and to test the connection we'll print out a list of the databases on the server. The Atlas cluster to which we'll be connecting has the MongoDB Atlas Sample Dataset installed, so we'll be able to see a nice database list.\n\nThe first step is to pass in the MongoDB Atlas connection string into a MongoClient object, then we can get the list of databases and print them out.\n\n``` csp\nMongoClient dbClient = new MongoClient(<>);\n\nvar dbList = dbClient.ListDatabases().ToList();\n\nConsole.WriteLine(\"The list of databases on this server is: \");\nforeach (var db in dbList)\n{\n Console.WriteLine(db);\n}\n```\n\nWhen we run the program, we get the following out showing the list of databases:\n\n``` bash\nThe list of databases on this server is:\n{ \"name\" : \"sample_airbnb\", \"sizeOnDisk\" : 57466880.0, \"empty\" : false }\n{ \"name\" : \"sample_geospatial\", \"sizeOnDisk\" : 1384448.0, \"empty\" : false }\n{ \"name\" : \"sample_mflix\", \"sizeOnDisk\" : 45084672.0, \"empty\" : false }\n{ \"name\" : \"sample_supplies\", \"sizeOnDisk\" : 1347584.0, \"empty\" : false }\n{ \"name\" : \"sample_training\", \"sizeOnDisk\" : 73191424.0, \"empty\" : false }\n{ \"name\" : \"sample_weatherdata\", \"sizeOnDisk\" : 4427776.0, \"empty\" : false }\n{ \"name\" : \"admin\", \"sizeOnDisk\" : 245760.0, \"empty\" : false }\n{ \"name\" : \"local\", \"sizeOnDisk\" : 1919799296.0, \"empty\" : false }\n```\n\nThe whole program thus far comes in at just over 20 lines of code:\n\n``` csp\nusing System;\nusing MongoDB.Driver;\n\nnamespace test\n{\n class Program\n {\n static void Main(string] args)\n {\n MongoClient dbClient = new MongoClient(<>);\n\n var dbList = dbClient.ListDatabases().ToList();\n\n Console.WriteLine(\"The list of databases on this server is: \");\n foreach (var db in dbList)\n {\n Console.WriteLine(db);\n }\n }\n }\n}\n```\n\nWith a connection in place, let's move on and start doing CRUD operations inside the MongoDB Atlas database. The first step there is to *Create* some data.\n\n## Create\n\n### Data\n\nMongoDB stores data in JSON Documents. Actually, they are stored as Binary JSON (BSON) objects on disk, but that's another blog post. In our sample dataset, there is a `sample_training` with a `grades` collection. 
Here's what a sample document in that collection looks like:\n\n``` json\n{\n \"_id\":{\"$oid\":\"56d5f7eb604eb380b0d8d8ce\"},\n \"student_id\":{\"$numberDouble\":\"0\"},\n \"scores\":[\n {\"type\":\"exam\",\"score\":{\"$numberDouble\":\"78.40446309504266\"}},\n {\"type\":\"quiz\",\"score\":{\"$numberDouble\":\"73.36224783231339\"}},\n {\"type\":\"homework\",\"score\":{\"$numberDouble\":\"46.980982486720535\"}},\n {\"type\":\"homework\",\"score\":{\"$numberDouble\":\"76.67556138656222\"}}\n ],\n \"class_id\":{\"$numberDouble\":\"339\"}\n}\n```\n\n### Connecting to a Specific Collection\n\nThere are 10,000 students in this collection, 0-9,999. Let's add one more by using C#. To do this, we'll need to use another package from NuGet, `MongoDB.Bson`. I'll start a new Solution in Visual Studio and call it `MongoDBCRUDExample`. I'll install the `MongoDB.Bson` and `MongoDB.Driver` packages and use the connection string provided from MongoDB Atlas. Next, I'll access our specific database and collection, `sample_training` and `grades`, respectively.\n\n``` csp\nusing System;\nusing MongoDB.Bson;\nusing MongoDB.Driver;\n\nnamespace MongoDBCRUDExample\n{\n class Program\n {\n static void Main(string[] args)\n {\n MongoClient dbClient = new MongoClient(<>);\n\n var database = dbClient.GetDatabase(\"sample_training\");\n var collection = database.GetCollection(\"grades\");\n\n }\n }\n}\n```\n\n#### Creating a BSON Document\n\nThe `collection` variable is now our key reference point to our data. Since we are using a `BsonDocument` when assigning our `collection` variable, I've indicated that I'm not going to be using a pre-defined schema. This utilizes the power and flexibility of MongoDB's document model. I could define a plain-old-C#-object (POCO) to more strictly define a schema. I'll take a look at that option in a future post. For now, I'll create a new `BsonDocument` to insert into the database.\n\n``` csp\nvar document = new BsonDocument\n {\n { \"student_id\", 10000 },\n { \"scores\", new BsonArray\n {\n new BsonDocument{ {\"type\", \"exam\"}, {\"score\", 88.12334193287023 } },\n new BsonDocument{ {\"type\", \"quiz\"}, {\"score\", 74.92381029342834 } },\n new BsonDocument{ {\"type\", \"homework\"}, {\"score\", 89.97929384290324 } },\n new BsonDocument{ {\"type\", \"homework\"}, {\"score\", 82.12931030513218 } }\n }\n },\n { \"class_id\", 480}\n };\n```\n\n### Create Operation\n\nThen to *Create* the document in the `sample_training.grades` collection, we can do an insert operation.\n\n``` csp\ncollection.InsertOne(document);\n```\n\nIf you need to do that insert asynchronously, the MongoDB C# driver is fully async compatible. The same operation could be done with:\n\n``` csp\nawait collection.InsertOneAsync(document);\n```\n\nIf you have a need to insert multiple documents at the same time, MongoDB has you covered there as well with the `InsertMany` or `InsertManyAsync` methods.\n\nWe've seen how to structure a BSON Document in C# and then *Create* it inside a MongoDB database. The MongoDB C# Driver makes it easy to do with the `InsertOne()`, `InsertOneAsync()`, `InsertMany()`, or `InsertManyAsync()` methods. Now that we have *Created* data, we'll want to *Read* it.\n\n## Read\n\nTo *Read* documents in MongoDB, we use the [Find() method. This method allows us to chain a variety of methods to it, some of which I'll explore in this post. 
To get the first document in the collection, we can use the `FirstOrDefault` or `FirstOrDefaultAsync` method, and print the result to the console.\n\n``` csp\nvar firstDocument = collection.Find(new BsonDocument()).FirstOrDefault();\nConsole.WriteLine(firstDocument.ToString());\n```\n\nreturns...\n\n``` json\n{ \"_id\" : ObjectId(\"56d5f7eb604eb380b0d8d8ce\"),\n\"student_id\" : 0.0,\n\"scores\" : \n{ \"type\" : \"exam\", \"score\" : 78.404463095042658 },\n{ \"type\" : \"quiz\", \"score\" : 73.362247832313386 },\n{ \"type\" : \"homework\", \"score\" : 46.980982486720535 },\n{ \"type\" : \"homework\", \"score\" : 76.675561386562222 }\n],\n\"class_id\" : 339.0 }\n```\n\nYou may wonder why we aren't using `Single` as that returns one document too. Well, that has to also ensure the returned document is the only document like that in the collection and that means scanning the whole collection.\n\n### Reading with a Filter\n\nLet's find the [document we created and print it out to the console. The first step is to create a filter to query for our specific document.\n\n``` csp\nvar filter = Builders.Filter.Eq(\"student_id\", 10000);\n```\n\nHere we're setting a filter to look for a document where the `student_id` is equal to `10000`. We can pass the filter into the `Find()` method to get the first document that matches the query.\n\n``` csp\nvar studentDocument = collection.Find(filter).FirstOrDefault();\nConsole.WriteLine(studentDocument.ToString());\n```\n\nreturns...\n\n``` json\n{ \"_id\" : ObjectId(\"5d88f88cec6103751b8a0d7f\"),\n\"student_id\" : 10000,\n\"scores\" : \n{ \"type\" : \"exam\", \"score\" : 88.123341932870233 },\n{ \"type\" : \"quiz\", \"score\" : 74.923810293428346 },\n{ \"type\" : \"homework\", \"score\" : 89.979293842903246 },\n{ \"type\" : \"homework\", \"score\" : 82.129310305132179 }\n],\n\"class_id\" : 480 }\n```\n\nIf a document isn't found that matches the query, the `Find()` method returns null. Finding the first document in a collection, or with a query is a frequent task. However, what about situations when all documents need to be returned, either in a collection or from a query?\n\n### Reading All Documents\n\nFor situations in which the expected result set is small, the `ToList()` or `ToListAsync()` methods can be used to retrieve all documents from a query or in a collection.\n\n``` csp\nvar documents = collection.Find(new BsonDocument()).ToList();\n```\n\nFilters can be passed in here as well, for example, to get documents with exam scores equal or above 95. The filter here looks slightly more complicated, but thanks to the MongoDB driver syntax, it is relatively easy to follow. We're filtering on documents in which inside the `scores` array there is an `exam` subdocument with a `score` value greater than or equal to 95.\n\n``` csp\nvar highExamScoreFilter = Builders.Filter.ElemMatch(\n\"scores\", new BsonDocument { { \"type\", \"exam\" },\n{ \"score\", new BsonDocument { { \"$gte\", 95 } } }\n});\nvar highExamScores = collection.Find(highExamScoreFilter).ToList();\n```\n\nFor situations where it's necessary to iterate over the documents that are returned there are a couple of ways to accomplish that as well. In a synchronous situation, a C# `foreach` statement can be used with the `ToEnumerable` adapter method. 
In this situation, instead of using the `ToList()` method, we'll use the `ToCursor()` method.\n\n``` csp\nvar cursor = collection.Find(highExamScoreFilter).ToCursor();\nforeach (var document in cursor.ToEnumerable())\n{\n Console.WriteLine(document);\n}\n```\n\nThis can be accomplished in an asynchronous fashion with the `ForEachAsync` method as well:\n\n``` csp\nawait collection.Find(highExamScoreFilter).ForEachAsync(document => Console.WriteLine(document));\n```\n\n### Sorting\n\nWith many documents coming back in the result set, it is often helpful to sort the results. We can use the [Sort() method to accomplish this to see which student had the highest exam score.\n\n``` csp\nvar sort = Builders.Sort.Descending(\"student_id\");\n\nvar highestScores = collection.Find(highExamScoreFilter).Sort(sort);\n```\n\nAnd we can append the `First()` method to that to just get the top student.\n\n``` csp\nvar highestScore = collection.Find(highExamScoreFilter).Sort(sort).First();\n\nConsole.WriteLine(highestScore);\n```\n\nBased on the Atlas Sample Data Set, the document with a `student_id` of 9997 should be returned with an exam score of 95.441609472871946.\n\nYou can see the full code for both the *Create* and *Read* operations I've shown in the gist here.\n\nThe C# Driver for MongoDB provides many ways to *Read* data from the database and supports both synchronous and asynchronous methods for querying the data. By passing a filter into the `Find()` method, we are able to query for specific records. The syntax to build filters and query the database is straightforward and easy to read, making this step of CRUD operations in C# and MongoDB simple to use.\n\nWith the data created and being able to be read, let's take a look at how we can perform *Update* operations.\n\n## Update\n\nSo far in this C# Quick Start for MongoDB CRUD operations, we have explored how to *Create* and *Read* data into a MongoDB database using C#. We saw how to add filters to our query and how to sort the data. This section is about the *Update* operation and how C# and MongoDB work together to accomplish this important task.\n\nRecall that we've been working with this `BsonDocument` version of a student record:\n\n``` csp\nvar document = new BsonDocument\n {\n { \"student_id\", 10000 },\n { \"scores\", new BsonArray\n {\n new BsonDocument{ {\"type\", \"exam\"}, {\"score\", 88.12334193287023 } },\n new BsonDocument{ {\"type\", \"quiz\"}, {\"score\", 74.92381029342834 } },\n new BsonDocument{ {\"type\", \"homework\"}, {\"score\", 89.97929384290324 } },\n new BsonDocument{ {\"type\", \"homework\"}, {\"score\", 82.12931030513218 } }\n }\n },\n { \"class_id\", 480}\n };\n```\n\nAfter getting part way through the grading term, our sample student's instructor notices that he's been attending the wrong class section. Due to this error the school administration has to change, or *update*, the `class_id` associated with his record. He'll be moving into section 483.\n\n### Updating Data\n\nTo update a document we need two bits to pass into an `Update` command. We need a filter to determine *which* documents will be updated. Second, we need what we're wanting to update.\n\n### Update Filter\n\nFor our example, we want to filter based on the document with `student_id` equaling 10000.\n\n``` csp\nvar filter = Builders.Filter.Eq(\"student_id\", 10000)\n```\n\n### Data to be Changed\n\nNext, we want to make the change to the `class_id`. 
We can do that with `Set()` on the `Update()` method.\n\n``` csp\nvar update = Builders.Update.Set(\"class_id\", 483);\n```\n\nThen we use the `UpdateOne()` method to make the changes. Note here that MongoDB will update at most one document using the `UpdateOne()` method. If no documents match the filter, no documents will be updated.\n\n``` csp\ncollection.UpdateOne(filter, update);\n```\n\n### Array Changes\n\nNot all changes are as simple as changing a single field. Let's use a different filter, one that selects a document with a particular score type for quizes:\n\n``` csp\nvar arrayFilter = Builders.Filter.Eq(\"student_id\", 10000) & Builders\n .Filter.Eq(\"scores.type\", \"quiz\");\n```\n\nNow if we want to make the change to the quiz score we can do that with `Set()` too, but to identify which particular element should be changed is a little different. We can use the positional $ operator to access the quiz `score` in the array. The $ operator on its own says \"change the array element that we matched within the query\" - the filter matches with `scores.type` equal to `quiz` and that's the element will get updated with the set.\n\n``` csp\nvar arrayUpdate = Builders.Update.Set(\"scores.$.score\", 84.92381029342834);\n```\n\nAnd again we use the `UpdateOne()` method to make the changes.\n\n``` csp\ncollection.UpdateOne(arrayFilter , arrayUpdate);\n```\n\n### Additional Update Methods\n\nIf you've been reading along in this blog series I've mentioned that the C# driver supports both sync and async interactions with MongoDB. Performing data *Updates* is no different. There is also an `UpdateOneAsync()` method available. Additionally, for those cases in which multiple documents need to be updated at once, there are `UpdateMany()` or `UpdateManyAsync()` options. The `UpdateMany()` and `UpdateManyAsync()` methods match the documents in the `Filter` and will update *all* documents that match the filter requirements.\n\n`Update` is an important operator in the CRUD world. Not being able to update things as they change would make programming incredibly difficult. Fortunately, C# and MongoDB continue to work well together to make the operations possible and easy to use. Whether it's updating a student's grade or updating a user's address, *Update* is here to handle the changes. The code for the *Create*, *Read*, and *Update* operations can be found in this gist.\n\nWe're winding down this MongoDB C# Quick Start CRUD operation series with only one operation left to explore, *Delete*.\n\n>Remember, you can get started with an M0 cluster on MongoDB Atlas today. It's free forever and you'll be able to work alongside this blog series.\n\n## Delete\n\nTo continue along with the student story, let's take a look at how what would happen if the student dropped the course and had to have their grades deleted. Once again, the MongoDB driver for C# makes it a breeze. And, it provides both sync and async options for the operations.\n\n### Deleting Data\n\nThe first step in the deletion process is to create a filter for the document(s) that need to be deleted. In the example for this series, I've been using a document with a `student_id` value of `10000` to work with. Since I'll only be deleting that single record, I'll use the `DeleteOne()` method (for async situations the `DeleteOneAsync()` method is available). 
However, when a filter matches more than a single document and all of them need to be deleted, the `DeleteMany()` or `DeleteManyAsync` method can be used.\n\nHere's the record I want to delete.\n\n``` json\n{\n { \"student_id\", 10000 },\n { \"scores\", new BsonArray\n {\n new BsonDocument{ {\"type\", \"exam\"}, {\"score\", 88.12334193287023 } },\n new BsonDocument{ {\"type\", \"quiz\"}, {\"score\", 84.92381029342834 } },\n new BsonDocument{ {\"type\", \"homework\"}, {\"score\", 89.97929384290324 } },\n new BsonDocument{ {\"type\", \"homework\"}, {\"score\", 82.12931030513218 } }\n }\n },\n { \"class_id\", 483}\n};\n```\n\nI'll define the filter to match the `student_id` equal to `10000` document:\n\n``` csp\nvar deleteFilter = Builders.Filter.Eq(\"student_id\", 10000);\n```\n\nAssuming that we have a `collection` variable assigned to for the `grades` collection, we next pass the filter into the `DeleteOne()` method.\n\n``` csp\ncollection.DeleteOne(deleteFilter);\n```\n\nIf that command is run on the `grades` collection, the document with `student_id` equal to `10000` would be gone. Note here that `DeleteOne()` will delete the first document in the collection that matches the filter. In our example dataset, since there is only a single student with a `student_id` equal to `10000`, we get the desired results.\n\nFor the sake of argument, let's imagine that the rules for the educational institution are incredibly strict. If you get below a score of 60 on the first exam, you are automatically dropped from the course. We could use a `for` loop with `DeleteOne()` to loop through the entire collection, find a single document that matches an exam score of less than 60, delete it, and repeat. Recall that `DeleteOne()` only deletes the first document it finds that matches the filter. While this could work, it isn't very efficient as multiple calls to the database are made. How do we handle situations that require deleting multiple records then? We can use `DeleteMany()`.\n\n### Multiple Deletes\n\nLet's define a new filter to match the exam score being less than 60:\n\n``` csp\nvar deleteLowExamFilter = Builders.Filter.ElemMatch(\"scores\",\n new BsonDocument { { \"type\", \"exam\" }, {\"score\", new BsonDocument { { \"$lt\", 60 }}}\n});\n```\n\nWith the filter defined, we pass it into the `DeleteMany()` method:\n\n``` csp\ncollection.DeleteMany(deleteLowExamFilter);\n```\n\nWith that command being run, all of the student record documents with low exam scores would be deleted from the collection.\n\nCheck out the gist for all of the CRUD commands wrapped into a single file.\n\n## Wrap Up\n\nThis C# Quick Start series has covered the various CRUD Operations (Create, Read, Update, and Delete) operations in MongoDB using basic BSON Documents. We've seen how to use filters to match specific documents that we want to read, update, or delete. This series has, thus far, been a gentle introduction to C Sharp and MongoDB.\n\nBSON Documents are not, however, the only way to be able to use MongoDB with C Sharp. In our applications, we often have classes defining objects. We can map our classes to BSON Documents to work with data as we would in code. 
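As a quick preview, here is a minimal sketch of what such a mapped class could look like for the `grades` documents used throughout this post (the class and property names are illustrative):\n\n``` csp\nusing System.Collections.Generic;\nusing MongoDB.Bson;\nusing MongoDB.Bson.Serialization.Attributes;\n\npublic class Grade\n{\n    [BsonId]\n    public ObjectId Id { get; set; }\n\n    [BsonElement(\"student_id\")]\n    public double StudentId { get; set; }\n\n    [BsonElement(\"scores\")]\n    public List<Score> Scores { get; set; }\n\n    [BsonElement(\"class_id\")]\n    public double ClassId { get; set; }\n}\n\npublic class Score\n{\n    [BsonElement(\"type\")]\n    public string Type { get; set; }\n\n    [BsonElement(\"score\")]\n    public double Value { get; set; }\n}\n\n// With classes like these in place, the collection can be typed to Grade instead of BsonDocument:\n// var collection = database.GetCollection<Grade>(\"grades\");\n```\n\n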
I'll take a look at mapping in a future post.", "format": "md", "metadata": {"tags": ["C#"], "pageDescription": "Learn how to perform CRUD operations using C Sharp for MongoDB databases.", "contentType": "Quickstart"}, "title": "MongoDB & C Sharp: CRUD Operations Tutorial", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/harnessing-natural-language-mongodb-queries-google-gemini", "action": "created", "body": "# Harnessing Natural Language for MongoDB Queries With Google Gemini\n\nIn the digital age, leveraging natural language for database queries represents a leap toward more intuitive data management. Vertex AI Extensions, currently in **private preview**, help in interacting with MongoDB using natural language. This tutorial introduces an approach that combines Google Gemini's advanced natural language processing with MongoDB, facilitated by Vertex AI Extensions. These extensions address key limitations of large language models (LLMs) by enabling real-time data querying and modification, which LLMs cannot do due to their static knowledge base post-training. By integrating MongoDB Atlas with Vertex AI Extensions, we offer a solution that enhances the accessibility and usability of the database. \n\nMongoDB's dynamic schema, scalability, and comprehensive querying capabilities render it exemplary for Generative AI applications. It is adept at handling the versatile and unpredictable nature of data that these applications generate and use. From personalized content generation, where user data shapes content in real time, to sophisticated, AI-driven recommendation systems leveraging up-to-the-minute data for tailored suggestions, MongoDB stands out. Furthermore, it excels in complex data analysis, allowing AI tools to interact with vast and varied datasets to extract meaningful insights, showcasing its pivotal role in enhancing the efficiency and effectiveness of Generative AI applications.\n\n## Natural language to MongoDB queries\n\nNatural language querying represents a paradigm shift in data interaction, allowing users to retrieve information without the need for custom query languages. By integrating MongoDB with a system capable of understanding and processing natural language, we streamline database operations, making them more accessible to non-technical users.\n\n### Solution blueprint\n\nThe solution involves a synergy of several components, including MongoDB, the Google Vertex AI SDK, Google Secrets Manager, and OpenAPI 3 specifications. Together, these elements create a robust framework that translates natural language queries into MongoDB Data API calls. In this solution, we have explored basic CRUD operations with Vertex AI Extensions. We are closely working with Google to enable vector search aggregations in the near future.\n\n### Components involved\n\n1. **MongoDB**: A versatile, document-oriented database that stores data in JSON-like formats, making it highly adaptable to various data types and structures\n2. **Google Vertex AI SDK**: Facilitates the creation and management of AI and machine learning models, including the custom extension for Google Vertex AI \n3. **Vertex AI Extensions:** Enhance LLMs by allowing them to interact with external systems in real-time, extending their capabilities beyond static knowledge \n4. **Google Secrets Manager**: Securely stores sensitive information, such as MongoDB API keys, ensuring the solution's security and integrity\n5. 
**OpenAPI 3 Specification for MongoDB Data API**: Defines a standard, language-agnostic interface to MongoDB that allows for both easy integration and clear documentation of the API's capabilities\n\n### Description of the solution\n\nThe solution operates by converting natural language queries into parameters that the MongoDB Data API can understand. This conversion is facilitated by a custom extension developed using the Google Vertex AI extension SDK, which is then integrated with Gemini 1.0 Pro. The extension leverages OpenAPI 3 specifications to interact with MongoDB, retrieving data based on the user's natural language input. Google Secrets Manager plays a critical role in securely managing API keys required for MongoDB access, ensuring the solution's security.\n\n or to create a new project.\n2. If you are new to MongoDB Atlas, you can sign up to MongoDB either through the Google Cloud Marketplace or with the Atlas registration page.\n3. Vertex AI Extensions are not publicly available. Please sign up for the Extensions Trusted Tester Program.\n4. Basic knowledge of OpenAPI specifications and how to create them for APIs will be helpful.\n5. You\u2019ll need a Google Cloud Storage bucket for storing the OpenAPI specifications.\n\nBefore we begin, also make sure you:\n\n**Enable MongoDB Data API**: To enable the Data API from the Atlas console landing page, open the Data API section from the side pane, enable the Data API, and copy the URL Endpoint as shown below.\n\n). To create a new secret on the Google Cloud Console, navigate to Secrets Manager, and click on **CREATE SECRET**. Paste the secret created from MongoDB to the secret value field and click on **Create**.\n\n. This specification outlines how natural language queries will be translated into MongoDB operations.\n\n## Create Vertex AI extensions\n\nThis tutorial uses the MongoDB default dataset from the **sample_mflix** database, **movies** collection. We will run all the code below in a Colab Enterprise notebook.\n\n1. Vertex AI Extensions is a platform for creating and managing extensions that connect large language models to external systems via APIs. These external systems can provide LLMs with real-time data and perform data processing actions on their behalf.\n\n```python\nfrom google.colab import auth\nauth.authenticate_user(\"GCP project id\")\n!gcloud config set project {\"GCP project id\"}\n```\n\n2. Install the required Python dependencies.\n\n```python\n!gsutil cp gs://vertex_sdk_private_releases/llm_extension/google_cloud_aiplatform-1.44.dev20240315+llm.extension-py2.py3-none-any.whl .\n!pip install --force-reinstall --quiet google_cloud_aiplatform-1.44.dev20240315+llm.extension-py2.py3-none-any.whl[extension]\n!pip install --upgrade --quiet google-cloud-resource-manager\n!pip install --force-reinstall --quiet langchain==0.0.298\n!pip install pytube\n!pip install --upgrade google-auth\n!pip install bigframes==0.26.0\n```\n\n3. Once the dependencies are installed, restart the kernel.\n\n```python\nimport IPython\napp = IPython.Application.instance()\napp.kernel.do_shutdown(True) # Re-run the Env variable cell again after Kernel restart\n```\n\n4. 
Initialize the environment variables.\n\n```python\nimport os\n## These are just sample values; please replace them according to your project\n# Setting up the GCP project\nos.environ['PROJECT_ID'] = 'gcp project id' # GCP Project ID\nos.environ['REGION'] = \"us-central1\" # Project Region\n## GCS Bucket location\nos.environ['STAGING_BUCKET'] = \"gs://vertexai_extensions\"\n## Extension Config\nos.environ['EXTENSION_DISPLAY_HOME'] = \"MongoDb Vertex API Interpreter\"\nos.environ['EXTENSION_DESCRIPTION'] = \"This extension makes api call to mongodb to do all crud operations\"\n\n## OpenAPI spec config\nos.environ['MANIFEST_NAME'] = \"mdb_crud_interpreter\"\nos.environ['MANIFEST_DESCRIPTION'] = \"This extension makes api call to mongodb to do all crud operations\"\nos.environ['OPENAPI_GCS_URI'] = \"gs://vertexai_extensions/mongodbopenapispec.yaml\"\n\n## API KEY secret location\nos.environ['API_SECRET_LOCATION'] = \"projects/787220387490/secrets/mdbapikey/versions/1\"\n\n## LLM config\nos.environ['LLM_MODEL'] = \"gemini-1.0-pro\"\n```\n\n5. Download the OpenAPI specification from GitHub and upload the YAML file to the Google Cloud Storage bucket. \n\n```python\nfrom google.cloud import aiplatform\nfrom google.cloud.aiplatform.private_preview import llm_extension\n\nPROJECT_ID = os.environ['PROJECT_ID']\nREGION = os.environ['REGION']\nSTAGING_BUCKET = os.environ['STAGING_BUCKET']\n\naiplatform.init(\n    project=PROJECT_ID,\n    location=REGION,\n    staging_bucket=STAGING_BUCKET,\n)\n```\n\n6. To create the Vertex AI extension, run the script below. The manifest here is a structured JSON object containing several key components:\n\n```python\nmdb_crud = llm_extension.Extension.create(\n    display_name = os.environ['EXTENSION_DISPLAY_HOME'],\n    description = os.environ['EXTENSION_DESCRIPTION'], # Optional\n    manifest = {\n        \"name\": os.environ['MANIFEST_NAME'],\n        \"description\": os.environ['MANIFEST_DESCRIPTION'],\n        \"api_spec\": {\n            \"open_api_gcs_uri\": os.environ['OPENAPI_GCS_URI'],\n        },\n        \"auth_config\": {\n            \"apiKeyConfig\": {\n                \"name\": \"api-key\",\n                \"apiKeySecret\": os.environ['API_SECRET_LOCATION'],\n                \"httpElementLocation\": \"HTTP_IN_HEADER\"\n            },\n            \"authType\": \"API_KEY_AUTH\"\n        },\n    },\n)\n```\n\n7. Validate the created extension, and print the operation schema and parameters.\n\n```python\nprint(\"Name:\", mdb_crud.gca_resource.name)\nprint(\"Display Name:\", mdb_crud.gca_resource.display_name)\nprint(\"Description:\", mdb_crud.gca_resource.description)\nimport pprint\npprint.pprint(mdb_crud.operation_schemas())\n```\n\n## Extension in action\n\nOnce the extension is created, navigate to the Vertex AI UI and then to Vertex UI Extension on the left pane. \n\n with MongoDB Atlas on Google Cloud.\n2. 
Connect models to APIs by using Vertex AI extensions.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5f6e6aa6cea13ba1/661471b70c47840e25a3437a/1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltae31621998903a57/661471cd4180c1c4ede408cb/2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt58474a722f262f1a/661471e40d99455ada032667/3.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9cd6a0e4c6b2ed4c/661471f5da0c3a5c7ff77441/4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3ac5e7c88ed9d678/661472114180c1f08ee408d1/5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt39e9b0f8b7040dab/661472241a0e49338babc9e1/6.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfc26acb17bfca16d/6614723b2b98e9f356100e6b/7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7bdaa1e8a1cf5a51/661472517cacdc0fbad4a075/8.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt272144a86fea7776/661472632b98e9562f100e6f/9.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt62198b1ba0785a55/66147270be36f54af2d96927/10.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4a7e5371abe658e1/66147281be36f5ed61d9692b/11.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI", "Google Cloud"], "pageDescription": "By integrating MongoDB Atlas with Vertex AI Extensions, we offer a solution that enhances the accessibility and usability of the database.", "contentType": "Article"}, "title": "Harnessing Natural Language for MongoDB Queries With Google Gemini", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/stream-data-aws-glue", "action": "created", "body": "# Stream Data Into MongoDB Atlas Using AWS Glue\n\nIn this tutorial, you'll find a tangible showcase of how AWS Glue, Amazon Kinesis, and MongoDB Atlas seamlessly integrate, creating a streamlined data streaming solution alongside extract, transform, and load (ETL) capabilities. This repository also harnesses the power of AWS CDK to automate deployment across diverse environments, enhancing the efficiency of the entire process.\n\nTo follow along with this tutorial, you should have intermediate proficiency with AWS and MongoDB services.\n\n## Architecture diagram\n\n installed and configured\n - NVM/NPM installed and configured\n - AWS CDK installed and configured\n - MongoDB Atlas account, with the Organization set up\n - Python packages\n - Python3 - `yum install -y python3`\n - Python Pip - `yum install -y python-pip`\n - Virtualenv - `pip3 install virtualenv`\n\n>This repo is developed taking us-east-1 as the default region. Please update the scripts to your specific region (if required). This repo will create a MongoDB Atlas project and a free-tier database cluster automatically. No need to create a database cluster manually. This repo is created for a demo purpose and IP access is not restricted (0.0.0.0/0). Ensure you strengthen the security by updating the relevant IP address (if required).\n\n### Setting up the environment\n\n#### Get the application code\n\n`git clone https://github.com/mongodb-partners/Stream_Data_into_MongoDB_AWS_Glue\ncd kinesis-glue-aws-cdk`\n\n#### Prepare the dev environment to run AWS CDK\n\na. 
Set up the AWS Environment variable AWS Access Key ID, AWS Secret Access Key, and optionally, the AWS Session Token.\n\n```\nexport AWS_ACCESS_KEY_ID = <\"your AWS access key\">\n export AWS_SECRET_ACCESS_KEY =<\"your AWS secret access key\">\n export AWS_SESSION_TOKEN = <\"your AWS session token\">\n```\nb. We will use CDK to make our deployments easier. \n\nYou should have npm pre-installed.\nIf you don\u2019t have CDK installed:\n`npm install -g aws-cdk`\n\nMake sure you\u2019re in the root directory.\n`python3 -m venv .venv`\n`source .venv/bin/activate`\n`pip3 install -r requirements.txt`\n\n> For development setup, use requirements-dev.txt.\n\nc. Bootstrap the application with the AWS account.\n\n`cdk bootstrap`\n\nd. Set the ORG_ID as an environment variable in the .env file. All other parameters are set to default in global_args.py in the kinesis-glue-aws-cdk folder. MONGODB_USER and MONGODB_PASSWORD parameters are set directly in mongodb_atlas_stack.py and glue_job_stack.py\n\nThe below screenshot shows the location to get the Organization ID from MongoDB Atlas.\n\n to create a new CloudFormation stack to create the execution role.\n\n to create a new CloudFormation stack for the default profile that all resources will attempt to use unless a different override is specified.\n\n#### Profile secret stack\n\n to resolve some common issues encountered when using AWS CloudFormation/CDK with MongoDB Atlas Resources.\n\n## Useful commands\n\n`cdk ls` lists all stacks in the app.\n`cdk synth` emits the synthesized CloudFormation template.\n`cdk deploy` deploys this stack to your default AWS account/region.\n`cdk diff` compares the deployed stack with the current state.\n`cdk docs` opens CDK documentation.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt95d66e4812fd56ed/661e9e36e9e603e1aa392ef0/architecture-diagram.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt401e52a6c6ca2f6f/661ea008f5bcd1bf540c99bd/organization-settings.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltffa00dde10848a90/661ea1fe190a257fcfbc5b4e/cloudformation-stack.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcc9e58d51f80206d/661ea245ad926e2701a4985b/registry-public-extensions.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc5a089f4695e6897/661ea2c60d5626cbb29ccdfb/cluster-organization-settings.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1a0bed34c4fbaa4f/661ea2ed243a4fa958838c90/edit-api-key.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt697616247b2b7ae9/661ea35cdf48e744da7ea2bd/aws-cloud-formation-stack.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcba40f2c7e837849/661ea396f19ed856a2255c19/output-cloudformation-stack.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1beb900395ae5533/661ea3d6a7375b6a462d7ca2/creation-mongodb-atlas-cluster.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt298896d4b3f0ecbb/661ea427e9e6030914392f35/output-cloudformation.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt880cc957dbb069b9/661ea458a7375b89a52d7cb8/kinesis-stream.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4fd8a062813bee88/661ea4c0a3e622865f4be23e/output.png\n [13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt355c545ee15e9222/661ea50645b6a80f09390845/s3-buckets-created.png\n [14]: 
https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb4c1cdd506b90172/661ea54ba7375b40172d7cc5/output-2.png\n [15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta91c3d8a8bd9d857/661ea57a243a4fe9d4838caa/aws-glue-studio.png\n [16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1993ccc1ca70cd57/661ea5bc061bb15fd5421300/aws-glue-parameters.png", "format": "md", "metadata": {"tags": ["Atlas", "AWS"], "pageDescription": "In this tutorial, find a tangible showcase of how AWS Glue, Amazon Kinesis, and MongoDB Atlas seamlessly integrate.", "contentType": "Tutorial"}, "title": "Stream Data Into MongoDB Atlas Using AWS Glue", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/rag_with_claude_opus_mongodb", "action": "created", "body": "# How to Build a RAG System Using Claude 3 Opus And MongoDB\n\n# Introduction\nAnthropic, a provider of large language models (LLMs), recently introduced three state-of-the-art models classified under the Claude 3 model family. This tutorial utilises one of the Claude 3 models within a retrieval-augmented generation (RAG) system powered by the MongoDB vector database. Before diving into the implementation of the retrieval-augmented generation system, here's an overview of the latest Anthropic release:\n\n**Introduction of the Claude 3 model family:**\n\n- **Models**: The family comprises Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus, each designed to cater to different needs and applications.\n- **Benchmarks**: The Claude 3 models have established new standards in AI cognition, excelling in complex tasks, comprehension, and reasoning.\n\n**Capabilities and features:**\n\n- **Multilingual and multimodal support**: Claude 3 models can generate code and text in a non-English language. The models are also multimodal, with the ability to understand images. \n- **Long context window**: The Claude 3 model initially has a 200K token context window, with the ability to extend up to one million tokens for specific use cases.\n- **Near-perfect recall**: The models demonstrate exceptional recall capabilities when analyzing extensive amounts of text.\n\n**Design considerations:**\n\n- **Balanced attributes**: The development of the Claude 3 models was guided by three main factors \u2014 speed, intelligence, and cost-effectiveness. This gives consumers a variety of models to leverage for different use cases requiring a tradeoff on one of the factors for an increase in another.\n\nThat\u2019s a quick update on the latest Anthropic release. Although the Claude 3 model has a large context window, a substantial cost is still associated with every call that reaches the upper thresholds of the context window provided. RAG is a design pattern that leverages a knowledge source to provide additional information to LLMs by semantically matching the query input with data points within the knowledge store.\n\nThis tutorial implements a chatbot prompted to take on the role of a venture capital tech analyst. 
The chatbot is a naive RAG system with a collection of tech news articles acting as its knowledge source.\n\n**What to expect from this tutorial:**\n\n- Gain insights into constructing a retrieval-augmented generation system by integrating Claude 3 models with MongoDB to enhance query response accuracy.\n- Follow a comprehensive tutorial on setting up your development environment, from installing necessary libraries to configuring a MongoDB database.\n- Learn efficient data handling methods, including creating vector search indexes and preparing data for ingestion and query processing.\n- Understand how to employ Claude 3 models within the RAG system for generating precise responses based on contextual information retrieved from the database.\n\n**All implementation code presented in this tutorial is located in this GitHub repository**\n\n-----\n## Step 1: Library installation, data loading, and preparation\nThis section covers the steps taken to prepare the development environment source and clean the data utilised as the knowledge base for the venture capital tech analyst chatbot.\n\nThe following code installs all the required libraries:\n\n```pip install pymongo datasets pandas anthropic openai```\n\n**Below are brief explanations of the tools and libraries utilised within the implementation code:**\n\n- **anthropic:** This is the official Python library for Anthropic that enables access to state-of-the-art language models. This library provides access to the Claude 3 family models, which can understand text and images.\n- **datasets**: This library is part of the Hugging Face ecosystem. By installing datasets, we gain access to several pre-processed and ready-to-use datasets, which are essential for training and fine-tuning machine learning models or benchmarking their performance.\n- **pandas**: This data science library provides robust data structures and methods for data manipulation, processing, and analysis.\n- **openai**: This is the official Python client library for accessing OpenAI's embedding models.\n- **pymongo**: PyMongo is a Python toolkit for MongoDB. It enables interactions with a MongoDB database.\n\nTools like Pyenv and Conda can create isolated development environments to separate package versions and dependencies across your projects. In these environments, you can install specific versions of libraries, ensuring that each project operates with its own set of dependencies without interference. The implementation code presentation in this tutorial is best executed within a Colab or Notebook environment.\n\nAfter importing the necessary libraries, the subsequent steps in this section involve loading the dataset that serves as the foundational knowledge base for the RAG system and chatbot. This dataset contains a curated collection of tech news articles from HackerNoon, supplemented with an additional column of embeddings. These embeddings were created by processing the descriptions of each article in the dataset. The embeddings for this dataset were generated using OpenAI\u2019s embedding model \"text-embedding-3-small,\" with an embedding dimension of 256. This information on the embedding model and dimension is crucial when handling and embedding user queries in later processes.\n\nThe tech-news-embedding dataset contains more than one million data points, mirroring the scale of data typically encountered in a production setting. 
However, for this particular application, only 228,012 data points are utilized.

```
import os
import requests
from io import BytesIO
import pandas as pd
from google.colab import userdata

def download_and_combine_parquet_files(parquet_file_urls, hf_token):
    """
    Downloads Parquet files from the provided URLs using the given Hugging Face token,
    and returns a combined DataFrame.

    Parameters:
    - parquet_file_urls: List of strings, URLs to the Parquet files.
    - hf_token: String, Hugging Face authorization token.

    Returns:
    - combined_df: A pandas DataFrame containing the combined data from all Parquet files.
    """
    headers = {"Authorization": f"Bearer {hf_token}"}
    all_dataframes = []

    for parquet_file_url in parquet_file_urls:
        response = requests.get(parquet_file_url, headers=headers)
        if response.status_code == 200:
            parquet_bytes = BytesIO(response.content)
            df = pd.read_parquet(parquet_bytes)
            all_dataframes.append(df)
        else:
            print(f"Failed to download Parquet file from {parquet_file_url}: {response.status_code}")

    if all_dataframes:
        combined_df = pd.concat(all_dataframes, ignore_index=True)
        return combined_df
    else:
        print("No dataframes to concatenate.")
        return None
```

The code snippet above executes the following steps:

**Import necessary libraries**:
- `os` for interacting with the operating system
- `requests` for making HTTP requests
- `BytesIO` from the io module to handle bytes objects like files in memory
- `pandas` (as pd) for data manipulation and analysis
- `userdata` from google.colab to enable access to environment variables stored in Google Colab secrets

**Function definition**: The `download_and_combine_parquet_files` function is defined with two parameters:
- `parquet_file_urls`: a list of URLs as strings, each pointing to a Parquet file that contains a sub-collection of the tech-news-embedding dataset
- `hf_token`: a string representing a Hugging Face authorization token; access tokens can be created or copied from the Hugging Face platform

**Download and read Parquet files**: The function iterates over each URL in `parquet_file_urls`. For each URL, it:
- Makes a GET request using the `requests.get` method, passing the URL and the headers for authorization.
- Checks if the response status code is 200 (OK), indicating the request was successful.
- Reads (if successful) the content of the response into a BytesIO object (to handle it as a file in memory), then uses `pandas.read_parquet` to read the Parquet file from this object into a Pandas DataFrame.
- Appends the DataFrame to the list `all_dataframes`.

**Combine DataFrames**: After downloading and reading all Parquet files into DataFrames, there's a check to ensure that `all_dataframes` is not empty. If there are DataFrames to work with, all of them are concatenated into a single DataFrame using `pd.concat`, with `ignore_index=True` to reindex the new combined DataFrame. This combined DataFrame is the overall output of the `download_and_combine_parquet_files` function.

Below is a list of the Parquet files required for this tutorial. The complete list of all files is located on Hugging Face.
Each Parquet file represents approximately 45,000 data points.

```
# Other Parquet files below are commented out to reduce the amount of data ingested.
# One Parquet file holds approximately 45,000 data points.
parquet_files = [
    "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0000.parquet",
    # "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0001.parquet",
    # "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0002.parquet",
    # "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0003.parquet",
    # "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0004.parquet",
    # "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0005.parquet",
]

hf_token = userdata.get("HF_TOKEN")
combined_df = download_and_combine_parquet_files(parquet_files, hf_token)
```

In the code snippet above, a subset of the tech-news-embeddings dataset is grouped into a single DataFrame, which is then assigned to the variable `combined_df`.

As a final phase of data preparation, the code snippet below shows the step to remove the `_id` column from the grouped dataset, as it is unnecessary for subsequent steps in this tutorial. Additionally, the data within the embedding column for each data point is converted from a numpy array to a Python list to prevent errors related to incompatible data types during data ingestion.

```
# Remove the _id column from the initial dataset
combined_df = combined_df.drop(columns=['_id'])

# Convert each numpy array in the 'embedding' column to a normal Python list
combined_df['embedding'] = combined_df['embedding'].apply(lambda x: x.tolist())
```

## Step 2: Database and collection creation
An approach to composing an AI stack focused on handling large data volumes and reducing data silos is to utilise the same database provider for your operational and vector data. MongoDB acts as both an operational and a vector database. It offers a database solution that efficiently stores, queries, and retrieves vector embeddings.

**To create a new MongoDB database, set up a database cluster:**

1. Register for a free MongoDB Atlas account, or existing users can sign into MongoDB Atlas.
1. Select the "Database" option on the left-hand pane, which will navigate to the Database Deployment page with a deployment specification of any existing cluster. Create a new database cluster by clicking on the **+Create** button.
1. For assistance with database cluster setup and obtaining the unique resource identifier (URI), refer to our guide for setting up a MongoDB cluster and getting your connection string.

***Note: Don't forget to whitelist the IP for the Python host or 0.0.0.0/0 for any IP when creating proof of concepts.***

At this point, you have created a database cluster, obtained a connection string to the database, and placed a reference to the connection string within the development environment. The next step is to create a database and collection through the MongoDB Atlas user interface.

Once you have created a cluster, navigate to the cluster page and create a database and collection within the MongoDB Atlas cluster by clicking **+ Create Database**.
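If you would rather script this step than click through the Atlas UI, the database and collection described in the next paragraph can also be created with PyMongo. This is only a sketch; it assumes your cluster's connection string is already available in a `MONGO_URI` environment variable (the same value used in the ingestion step later on).

```python
# Optional: create the tutorial's database and collection programmatically.
# Assumes MONGO_URI holds the Atlas cluster's connection string.
import os
import pymongo

client = pymongo.MongoClient(os.environ["MONGO_URI"])
db = client["tech_news"]

# Explicit creation; inserting documents later would also create the collection implicitly.
if "hacker_noon_tech_news" not in db.list_collection_names():
    db.create_collection("hacker_noon_tech_news")
```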
The database will be named `tech_news` and the collection will be named `hacker_noon_tech_news`.

## Step 3: Vector search index creation
In the creation of a vector search index using the JSON editor on MongoDB Atlas, ensure your vector search index is named **vector_index** and the vector search index definition is as follows:

```
{
  "fields": [
    {
      "numDimensions": 256,
      "path": "embedding",
      "similarity": "cosine",
      "type": "vector"
    }
  ]
}
```

## Step 4: Data ingestion
To ingest data into the MongoDB database created in the previous steps, the following operations have to be carried out:

- Connect to the database and collection.
- Clear out the collection of any existing records.
- Convert the Pandas DataFrame of the dataset into dictionaries before ingestion.
- Ingest dictionaries into MongoDB using a batch operation.

This tutorial requires the cluster's URI. Grab the URI and copy it into the Google Colab Secrets environment in a variable named `MONGO_URI`, or place it in a .env file or equivalent.

```
import pymongo
from google.colab import userdata

def get_mongo_client(mongo_uri):
    """Establish connection to the MongoDB."""
    try:
        client = pymongo.MongoClient(mongo_uri)
        print("Connection to MongoDB successful")
        return client
    except pymongo.errors.ConnectionFailure as e:
        print(f"Connection failed: {e}")
        return None

mongo_uri = userdata.get('MONGO_URI')
if not mongo_uri:
    print("MONGO_URI not set in environment variables")

mongo_client = get_mongo_client(mongo_uri)

DB_NAME = "tech_news"
COLLECTION_NAME = "hacker_noon_tech_news"

db = mongo_client[DB_NAME]
collection = db[COLLECTION_NAME]
```

The code snippet above uses PyMongo to create a MongoDB client object, representing the connection to the cluster and enabling access to its databases and collections. The variables `DB_NAME` and `COLLECTION_NAME` are given the names set for the database and collection in the previous step. If you've chosen different database and collection names, ensure they are reflected in the implementation code.

The code snippet below guarantees that the current database collection is empty by executing the `delete_many()` operation on the collection.

```
# To ensure we are working with a fresh collection,
# delete any existing records in the collection
collection.delete_many({})
```

Ingesting data into a MongoDB collection from a pandas DataFrame is a straightforward process that can be efficiently accomplished by converting the DataFrame into dictionaries and then utilising the `insert_many` method on the collection to pass the converted dataset records.

```
# Data ingestion
combined_df_json = combined_df.to_dict(orient='records')
collection.insert_many(combined_df_json)
```

The data ingestion process should take less than a minute, and when it is completed, the IDs of the ingested documents are returned.

## Step 5: Vector search
This section showcases the creation of a custom vector search function that accepts a user query, which corresponds to entries to the chatbot. The function also takes a second parameter, `collection`, which points to the database collection containing the records against which the vector search operation should be conducted.

The `vector_search` function produces a vector search result derived from a series of operations outlined in a MongoDB aggregation pipeline.
This pipeline includes the `$vectorSearch` and `$project` stages and performs queries based on the vector embeddings of user queries. It then formats the results, omitting any record attributes unnecessary for subsequent processes.\n\n```\ndef vector_search(user_query, collection):\n \"\"\"\n Perform a vector search in the MongoDB collection based on the user query.\n\n Args:\n user_query (str): The user's query string.\n collection (MongoCollection): The MongoDB collection to search.\n\n Returns:\n list: A list of matching documents.\n \"\"\"\n\n # Generate embedding for the user query\n query_embedding = get_embedding(user_query)\n\n if query_embedding is None:\n return \"Invalid query or embedding generation failed.\"\n\n # Define the vector search pipeline\n pipeline = [\n {\n \"$vectorSearch\": {\n \"index\": \"vector_index\",\n \"queryVector\": query_embedding,\n \"path\": \"embedding\",\n \"numCandidates\": 150, # Number of candidate matches to consider\n \"limit\": 5 # Return top 5 matches\n }\n },\n {\n \"$project\": {\n \"_id\": 0, # Exclude the _id field\n \"embedding\": 0, # Exclude the embedding field\n \"score\": {\n \"$meta\": \"vectorSearchScore\" # Include the search score\n }\n }\n }\n ]\n\n # Execute the search\n results = collection.aggregate(pipeline)\n return list(results)\n```\n\nThe code snippet above conducts the following operations to allow semantic search for tech news articles:\n\n1. Define the `vector_search` function that takes a user's query string and a MongoDB collection as inputs and returns a list of documents that match the query based on vector similarity search.\n1. Generate an embedding for the user's query by calling the previously defined function, `get_embedding`, which converts the query string into a vector representation.\n1. Construct a pipeline for MongoDB's aggregate function, incorporating two main stages: `$vectorSearch` and `$project`.\n1. The `$vectorSearch` stage performs the actual vector search. The index field specifies the vector index to utilise for the vector search, and this should correspond to the name entered in the vector search index definition in previous steps. The queryVector field takes the embedding representation of the use query. The path field corresponds to the document field containing the embeddings. The numCandidates specifies the number of candidate documents to consider and the limit on the number of results to return.\n1. The `$project` stage formats the results to exclude the `_id` and the `embedding` field.\n1. The aggregate executes the defined pipeline to obtain the vector search results. The final operation converts the returned cursor from the database into a list.\n\n## Step 6: Handling user queries with Claude 3 models\nThe final section of the tutorial outlines the sequence of operations performed as follows:\n\n- Accept a user query in the form of a string.\n- Utilize the OpenAI embedding model to generate embeddings for the user query.\n- Load the Anthropic Claude 3\u2014 specifically, the \u2018claude-3-opus-20240229\u2019 model \u2014 to serve as the base model, which is the large language model for the RAG system.\n- Execute a vector search using the embeddings of the user query to fetch relevant information from the knowledge base, which provides additional context for the base model.\n- Submit both the user query and the gathered additional information to the base model to generate a response.\n\nThe code snippet below focuses on generating new embeddings using OpenAI's embedding model. 
An OpenAI API key is required to ensure the successful completion of this step. More details on OpenAI's embedding models can be found on the official site.

An important note is that the dimensions of the user query embedding must match the dimensions set in the vector search index definition on MongoDB Atlas.

```
import openai
from google.colab import userdata

openai.api_key = userdata.get("OPENAI_API_KEY")

EMBEDDING_MODEL = "text-embedding-3-small"

def get_embedding(text):
    """Generate an embedding for the given text using OpenAI's API."""

    # Check for valid input
    if not text or not isinstance(text, str):
        return None

    try:
        # Call OpenAI API to get the embedding
        embedding = openai.embeddings.create(input=text, model=EMBEDDING_MODEL, dimensions=256).data[0].embedding
        return embedding
    except Exception as e:
        print(f"Error in get_embedding: {e}")
        return None
```

The next step in this section is to import the Anthropic library and load the client to access Anthropic's methods for handling messages and accessing Claude models. Ensure you obtain an Anthropic API key, located within the settings page on the official Anthropic website.

```
import anthropic
client = anthropic.Client(api_key=userdata.get("ANTHROPIC_API_KEY"))
```

The following code snippet introduces the function `handle_user_query`, which serves two primary purposes: It leverages a previously defined custom vector search function to query and retrieve relevant information from a MongoDB database, and it utilizes the Anthropic API via a client object to use one of the Claude 3 models for query response generation.

```
def handle_user_query(query, collection):

    get_knowledge = vector_search(query, collection)

    search_result = ''
    for result in get_knowledge:
        search_result += (
            f"Title: {result.get('title', 'N/A')}, "
            f"Company Name: {result.get('companyName', 'N/A')}, "
            f"Company URL: {result.get('companyUrl', 'N/A')}, "
            f"Date Published: {result.get('published_at', 'N/A')}, "
            f"Article URL: {result.get('url', 'N/A')}, "
            f"Description: {result.get('description', 'N/A')}, \n"
        )

    response = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        system="You are a Venture Capital Tech Analyst with access to some tech company articles and information. Use the information you are given to provide advice.",
        messages=[
            {"role": "user", "content": "Answer this user query: " + query + " with the following context: " + search_result}
        ]
    )

    return (response.content[0].text), search_result
```

This function begins by executing the vector search against the specified MongoDB collection based on the user's input query. It then proceeds to format the retrieved information for further processing. Subsequently, the function invokes the Anthropic API, directing the request to a specific Claude 3 model.

Below is a more detailed description of the operations in the code snippet above:

1. **Vector search execution**: The function begins by calling `vector_search` with the user's query and a specified collection as arguments. This performs a search within the collection, leveraging vector embeddings to find relevant information related to the query.
1. **Compile search results**: `search_result` is initialized as an empty string to aggregate information from the search.
The search results are compiled by iterating over the results returned by the `vector_search` function and formates each item's details (title, company name, URL, publication date, article URL, and description) into a human-readable string, appending this information to search_result with a newline character \\n at the end of each entry.\n1. **Generate response using Anthropic client**: The function then constructs a request to the Anthropic API (through a client object, presumably an instance of the Anthropic client class created earlier). It specifies:\n\n The model to use (\"claude-3-opus-20240229\"), which indicates a specific version of the Claude 3 model.\n\n The maximum token limit for the generated response (max_tokens=1024).\n\n A system description guides the model to behave as a \"Venture Capital Tech Analyst\" with access to tech company articles and information, using this as context to advise.\n\n The actual message for the model to process, which combines the user query with the aggregated search results as context.\n1. **Return the generated response and search results**: It extracts and returns the response text from the first item in the response's content alongside the compiled search results.\n\n```\n# Conduct query with retrieval of sources\nquery = \"Give me the best tech stock to invest in and tell me why\"\nresponse, source_information = handle_user_query(query, collection)\n\nprint(f\"Response: {response}\")\nprint(f\"Source Information: \\\\n{source_information}\")\n```\n\nThe final step in this tutorial is to initialize the query, pass it into the `handle_user_query` function, and print the response returned.\n\n1. **Initialise query**: The variable `query` is assigned a string value containing the user's request: \"Give me the best tech stock to invest in and tell me why.\" This serves as the input for the `handle_user_query` function.\n1. **Execute `handle_user_query` function**: The function takes two parameters \u2014 the user's query and a reference to the collection from which information will be retrieved. It performs a vector search to find relevant documents within the collection and formats the results for further use. It then queries the Anthropic Claude 3 model, providing it with the query and the formatted search results as context to generate an informed response.\n1. **Retrieve response and source information**: The function returns two pieces of data: response and source_information. The response contains the model-generated answer to the user's query, while source_information includes detailed data from the collection used to inform the response.\n1. **Display results**: Finally, the code prints the response from the Claude 3 model, along with the source information that contributed to this response. \n\n![Response from Claude 3 Opus][2]\n\nClaude 3 models possess what seems like impressive reasoning capabilities. From the response in the screenshot, it is able to consider expressive language as a factor in its decision-making and also provide a structured approach to its response. \n\nMore impressively, it gives a reason as to why other options in the search results are not candidates for the final selection. And if you notice, it factored the date into its selection as well. \n\nObviously, this is not going to replace any human tech analyst soon, but with a more extensive knowledge base and real-time data, this could very quickly become a co-pilot system for VC analysts. 
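One small, optional refinement: because the `$project` stage surfaces the `vectorSearchScore` for every match, weak matches can be dropped before they are ever sent to Claude. The 0.75 threshold below is purely illustrative and not a value from this tutorial.

```python
# Optional: filter out weaker vector search matches before building the context
# string passed to Claude. The 0.75 cut-off is an arbitrary, illustrative value.
def filter_by_score(results, min_score=0.75):
    return [r for r in results if r.get("score", 0) >= min_score]

# Possible usage inside handle_user_query:
# get_knowledge = filter_by_score(vector_search(query, collection))
```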
\n\n**Please remember that Opus's response is not financial advice and is only shown for illustrative purposes**.\n\n----------\n\n# Conclusion\nThis tutorial has presented the essential steps of setting up your development environment, preparing your dataset, and integrating state-of-the-art language models with a powerful database system. \n\nBy leveraging the unique strengths of Claude 3 models and MongoDB, we've demonstrated how to create a RAG system that not only responds accurately to user queries but does so by understanding the context in depth. The impressive performance of the RAG system is a result of Opus parametric knowledge and the semantic matching capabilities facilitated by vector search.\n\nBuilding a RAG system with the latest Claude 3 models and MongoDB sets up an efficient AI infrastructure. It offers cost savings and low latency by combining operational and vector databases into one solution. The functionalities of the naive RAG system presented in this tutorial can be extended to do the following:\n\n- Get real-time news on the company returned from the search results.\n- Get additional information by extracting text from the URLs provided in accompanying search results.\n- Store additional metadata before data ingestion for each data point.\n\nSome of the proposed functionality extensions can be achieved by utilising Anthropic function calling capabilities or leveraging search APIs. The key takeaway is that whether you aim to develop a chatbot, a recommendation system, or any application requiring nuanced AI responses, the principles and techniques outlined here will serve as a valuable starting point.\n\nWant to leverage another state-of-the-art model for your RAG system? Check out our article that uses [Google\u2019s Gemma alongside open-source embedding models provided by Hugging Face.\n\n----------\n\n# FAQs\n**1. What are the Claude 3 models, and how do they enhance a RAG system?**\n\nThe Claude 3 models (Haiku, Sonnet, Opus) are state-of-the-art large language models developed by Anthropic. They offer advanced features like multilingual support, multimodality, and long context windows up to one million tokens. These models are integrated into RAG systems to leverage their ability to understand and generate text, enhancing the system's response accuracy and comprehension.\n\n**2. Why is MongoDB chosen for a RAG system powered by Claude 3?**\n\nMongoDB is utilized for its dual capabilities as an operational and a vector database. It efficiently stores, queries, and retrieves vector embeddings, making it ideal for managing the extensive data volumes and real-time processing demands of AI applications like a RAG system.\n\n**3. How does the vector search function work within the RAG system?**\n\n The vector search function in the RAG system conducts a semantic search against a MongoDB collection using the vector embeddings of user queries. It relies on a MongoDB aggregation pipeline, including the $vectorSearch and $project stages, to find and format the most relevant documents based on query similarity.\n\n**4. What is the significance of data embeddings in the RAG system?**\n\n Data embeddings are crucial for matching the semantic content of user queries with the knowledge stored in the database. They transform text into a vector space, enabling the RAG system to perform vector searches and retrieve contextually relevant information to inform the model's responses.\n\n**5. 
How does the RAG system handle user queries with Claude 3 models?**\n\n The RAG system processes user queries by generating embeddings using an embedding model (e.g., OpenAI's \"text-embedding-3-small\") and conducting a vector search to fetch relevant information. This information and the user query are passed to a Claude 3 model, which generates a detailed and informed response based on the combined context.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt793687aeea00c719/65e8ff7f08a892d1c1d52824/Creation_of_database_and_collections.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6cc891ae6c3fdbc1/65e90287a8b0116c485c79ce/Screenshot_2024-03-06_at_23.55.28.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI", "Pandas"], "pageDescription": "This guide details creating a Retrieval-Augmented Generation (RAG) system using Anthropic's Claude 3 models and MongoDB. It covers environment setup, data preparation, and chatbot implementation as a tech analyst. Key steps include database creation, vector search index setup, data ingestion, and query handling with Claude 3 models, emphasizing accurate, context-aware responses.\n\n\n\n\n\n", "contentType": "Tutorial"}, "title": "How to Build a RAG System Using Claude 3 Opus And MongoDB", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/mongodb-performance-over-rdbms", "action": "created", "body": "# MongoDB's Performance over RDBMS\n\nSomeone somewhere might be wondering why we get superior performance with MongoDB over RDBMS databases. What is the secret behind it? I too had this question until I learned about the internal workings of MongoDB, especially data modeling, advanced index methods, and finally, how the WiredTiger storage engine works.\n\nI wanted to share my learnings and experiences to reveal the secret of it so that it might be helpful to you, too.\n\n## Data modeling: embedded structure (no JOINs)\n\nMongoDB uses a document-oriented data model, storing data in JSON-like BSON documents. This allows for efficient storage and retrieval of complex data structures. \n\nMongoDB's model can lead to simpler and more performant queries compared to the normalization requirements of RDBMS.\n\nThe initial phase of enhancing performance involves comprehending the query behaviors of your application. This understanding enables you to tailor your data model and choose suitable indexes to align with these patterns effectively.\n\nAlways remember MongoDB's optimized document size (which is 16 MB) so you can avoid embedding images, audio, and video files in the same collection, as depicted in the image below. \n\nCustomizing your data model to match the query patterns of your application leads to streamlined queries, heightened throughput for insert and update operations, and better workload distribution across a sharded cluster.\n\nWhile MongoDB offers a flexible schema, overlooking schema design is not advisable. Although you can adjust your schema as needed, adhering to schema design best practices from the outset of your project can prevent the need for extensive refactoring down the line.\n\nA major advantage of BSON documents is that you have the flexibility to model your data any way your application needs. The inclusion of arrays and subdocuments within documents provides significant versatility in modeling intricate data relationships. 
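As a concrete illustration of that flexibility, the PyMongo sketch below stores a customer together with its addresses and recent orders in a single document, so one read returns what an RDBMS would typically assemble with JOINs across three tables. The collection and field names here are invented for the example.

```python
# Illustrative only: one document embeds what an RDBMS would normally split
# across customers, addresses, and orders tables (names are invented).
import pymongo

customers = pymongo.MongoClient("mongodb://localhost:27017")["shop"]["customers"]

customers.insert_one({
    "name": "Ada Lovelace",
    "addresses": [
        {"type": "home", "city": "London"},
        {"type": "work", "city": "Cambridge"},
    ],
    "orders": [
        {"order_id": 1001, "total": 42.50},
        {"order_id": 1002, "total": 99.99},
    ],
})

# A single round trip retrieves the customer and all embedded data, with no JOINs.
doc = customers.find_one({"name": "Ada Lovelace"})
```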
But you can also model flat, tabular, and columnar structures, simple key-value pairs, text, geospatial and time-series data, or the nodes and edges of connected graph data structures. The ideal schema design for your application will depend on its specific query patterns.\n\n### How is embedding within collections in MongoDB different from storing in multiple tables in RDBMS?\n\nAn example of a best practice for an address/contact book involves separating groups and portraits information in a different collection because as they can go big due to n-n relations and image size, respectively. They may hit a 16 MB optimized document size. \n\nEmbedding data in a single collection in MongoDB (or minimizing the number of collections, at least) versus storing in multiple tables in RDBMS offers huge performance improvements due to the data locality which will reduce the data seeks, as shown in the picture below. \n\nData locality is the major reason why MongoDB data seeks are faster. \n\n**Difference: tabular vs document** \n| | Tabular | MongoDB |\n| --------------------------- | ----------------------------- | --------------- |\n| Steps to create the model | 1 - define schema. 2 - develop app and queries | 1 - identifying the queries 2- define schema |\n| Initial schema | 3rd normal form. One possible solution | Many possible solutions |\n| Final schema | Likely denormalized | Few changes |\n| Schema evolution | Difficult and not optimal. Likely downtime | Easy. No downtime |\n| Performance | Mediocre | Optimized |\n\n## WiredTiger\u2019s cache and compression\nWiredTiger is an open-source, high-performance storage engine for MongoDB. WiredTiger provides features such as document-level concurrency control, compression, and support for both in-memory and on-disk storage.\n\n**Cache:**\n\nWiredTiger cache architecture: WiredTiger utilizes a sophisticated caching mechanism to efficiently manage data in memory. The cache is used to store frequently accessed data, reducing the need to read from disk and improving overall performance.\n\nMemory management: The cache dynamically manages memory usage based on the workload. It employs techniques such as eviction (removing less frequently used data from the cache) and promotion (moving frequently used data to the cache) to optimize memory utilization.\n\nConfiguration: WiredTiger allows users to configure the size of the cache based on their system's available memory and workload characteristics. Properly sizing the cache is crucial for achieving optimal performance.\n\nDurability: WiredTiger ensures durability by flushing modified data from the cache to disk. This process helps maintain data consistency in case of a system failure.\n\n**Compression**:\n\nData compression: WiredTiger supports data compression to reduce the amount of storage space required. Compressing data can lead to significant disk space savings and improved I/O performance.\n\nConfigurable compression: Users can configure compression options based on their requirements. WiredTiger supports different compression algorithms, allowing users to choose the one that best suits their workload and performance goals.\n\nTrade-offs: While compression reduces storage costs and can improve read/write performance, it may introduce additional CPU overhead during compression and decompression processes. 
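For example, the block compressor can be chosen per collection at creation time. The PyMongo sketch below assumes MongoDB 4.2 or later (for zstd support) and default WiredTiger settings otherwise; snappy remains the default, while zlib and zstd trade extra CPU for better compression ratios.

```python
# Sketch: opt a collection into a specific WiredTiger block compressor.
# Assumes MongoDB 4.2+ for zstd; the collection name is invented.
import pymongo

db = pymongo.MongoClient("mongodb://localhost:27017")["appdata"]
db.create_collection(
    "activity_logs",
    storageEngine={"wiredTiger": {"configString": "block_compressor=zstd"}},
)
```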
Users need to carefully consider the trade-offs and select compression settings that align with their application's needs.\n\nCompatibility: WiredTiger's compression features are transparent to applications and don't require any changes to the application code. The engine handles compression and decompression internally.\n\nOverall, WiredTiger's cache and compression features contribute to its efficiency and performance characteristics. By optimizing memory usage and providing configurable compression options, WiredTiger aims to meet the diverse needs of MongoDB users in terms of both speed and storage efficiency.\n\nFew RDBMS systems also employ caching, but the performance benefits may vary based on the database system and configuration. \n\n### Advanced indexing capabilities \n\nMongoDB, being a NoSQL database, offers advanced indexing capabilities to optimize query performance and support efficient data retrieval. Here are some of MongoDB's advanced indexing features:\n\n**Compound indexes**\n\nMongoDB allows you to create compound indexes on multiple fields. A compound index is an index on multiple fields in a specific order. This can be useful for queries that involve multiple criteria.\n\nThe order of fields in a compound index is crucial. MongoDB can use the index efficiently for queries that match the index fields from left to right.\n\n**Multikey indexes**\n\nMongoDB supports indexing on arrays. When you index an array field, MongoDB creates separate index entries for each element of the array.\n\nMultikey indexes are helpful when working with documents that contain arrays, and you need to query based on elements within those arrays.\n\n**Text indexes**\n\nMongoDB provides text indexes to support full-text search. Text indexes tokenize and stem words, allowing for more flexible and language-aware text searches.\n\nText indexes are suitable for scenarios where users need to perform text search operations on large amounts of textual data.\n\n**Geospatial indexes**\n\nMongoDB supports geospatial indexes to optimize queries that involve geospatial data. These indexes can efficiently handle queries related to location-based information.\n\nGeospatial indexes support 2D and 3D indexing, allowing for the representation of both flat and spherical geometries.\n\n**Wildcard indexes**\n\nMongoDB supports wildcard indexes, enabling you to create indexes that cover only a subset of fields in a document. This can be useful when you have specific query patterns and want to optimize for those patterns without indexing every field.\n\n**Partial indexes**\n\nPartial indexes allow you to index only the documents that satisfy a specified filter expression. This can be beneficial when you have a large collection but want to create an index for a subset of documents that meet specific criteria.\n\n**Hashed indexes**\n\nHashed indexes are useful for sharding scenarios. MongoDB automatically hashes the indexed field's values and distributes the data across the shards, providing a more even distribution of data and queries.\n\n**TTL (time-to-live) indexes**\n\nTTL indexes allow you to automatically expire documents from a collection after a certain amount of time. This is helpful for managing data that has a natural expiration, such as session information or log entries.\n\nThese advanced indexing capabilities in MongoDB provide developers with powerful tools to optimize query performance for a wide range of scenarios and data structures. 
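To make a few of these index types concrete, here is how they can be declared with PyMongo; the collection and field names are invented for the example.

```python
# Illustrative index definitions with PyMongo (collection/field names are invented).
import pymongo
from pymongo import ASCENDING, DESCENDING, TEXT

orders = pymongo.MongoClient("mongodb://localhost:27017")["shop"]["orders"]

# Compound index: field order matters for which queries it can serve.
orders.create_index([("customer_id", ASCENDING), ("created_at", DESCENDING)])

# Text index for full-text search on a string field.
orders.create_index([("notes", TEXT)])

# Partial index: only documents matching the filter expression are indexed.
orders.create_index(
    [("status", ASCENDING)],
    partialFilterExpression={"status": {"$eq": "open"}},
)

# TTL index: expire session-like documents 24 hours after `last_seen`.
orders.create_index([("last_seen", ASCENDING)], expireAfterSeconds=86400)
```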
Properly leveraging these features can significantly enhance the efficiency and responsiveness of MongoDB databases.\n\nIn conclusion, the superior performance of MongoDB over traditional RDBMS databases stems from its adept handling of data modeling, advanced indexing methods, and the efficiency of the WiredTiger storage engine. By tailoring your data model to match application query patterns, leveraging MongoDB's optimized document structure, and harnessing advanced indexing capabilities, you can achieve enhanced throughput and more effective workload distribution.\n\nRemember, while MongoDB offers flexibility in schema design, it's crucial not to overlook the importance of schema design best practices from the outset of your project. This proactive approach can save you from potential refactoring efforts down the line.\n\nFor further exploration and discussion on MongoDB and database optimization strategies, consider joining our Developer Community. There, you can engage with fellow developers, share insights, and stay updated on the latest developments in database technology.\n\nKeep optimizing and innovating with MongoDB to unlock the full potential of your applications. \n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Guest Author Srinivas Mutyala discusses the reasons for MongoDB's improved performance over traditional RDMBS.", "contentType": "Article"}, "title": "MongoDB's Performance over RDBMS", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/cpp/adventures-iot-project-intro", "action": "created", "body": "# Plans and Hardware Selection for a Hands-on Implementation of IoT with MCUs and MongoDB\n\nDo you have a cool idea for a device that you may consider producing and selling? Would you want to have some new functionality implemented for your smart home? Do you want to understand how IoT works with a real example that we can work with from beginning to end? Are you a microcontroller aficionado and want to find some implementation alternatives? If the answer is yes to any of these questions, welcome to this series of articles and videos.\n\n# Table of Contents\n\n1. The idea and the challenge\n2. The vision, the mission and the product\n3. Rules of engagement\n4. The plan\n5. Hardware selection\n 1. Raspberry Pi Pico W\n 2. Micro:bit\n 3. Adafruit Circuit Playground Bluefruit\n 4. Adafruit Feather nRF52840 Sense\n 5. Espressif ESP32-C6-DevKitC-1\n6. Recap and future content\n\n# The idea and the challenge\n\nFor a while, I have been implementing many automations at home using third-party hardware and software products. This brings me a lot of joy and, in most cases, improves the environment my family and I live in. In the past, this used to be harder, but nowadays, the tools and their compatibility have improved greatly. You can start with something as trivial but useful as turning on the garden lights right after sunset and off at the time that you usually go to bed. But you can easily go much further.\n\nFor example, I have a door sensor that is installed in my garage door that triggers a timer when the door is opened and turns a light red after six minutes. 
This simple application of domotics has helped me to avoid leaving the door open countless times.\n\nAll the fun, and sometimes even frustration, that I have experienced implementing these functionalities, together with the crazy ideas that I sometimes have for creating and building things, have made me take a step forward and accept a new challenge in this area. So I did some thinking and came up with a project that combined different requirements that made it suitable to be used as a proof of concept and something that I could share with you.\n\nLet me describe the main characteristics of this project:\n\n- It should be something that a startup could do (or, at least, close enough.) So, I will share the vision and the mission of that wannabe startup. But most importantly, I will introduce the concept for our first product. You don't have to *buy* the idea, nor will I spend time trying to demonstrate that there is a suitable business need for that, in other words, this is a BPNI (business plan not included) project.\n- The idea should involve something beyond just a plain microcontroller (MCU). I would like to have some of those, maybe even in different rooms, and have their data collected in some way.\n- The data will be collected wirelessly. Having several sensors in different places, the wired option isn't very appealing. I will opt for communications implemented over radio frequencies: Bluetooth and WiFi. I might consider using ZigBee, Thread, or something similar in the future if there is enough interest. Please be vocal in your comments on this article.\n- I will use a computer to collect all the sensor measurements locally and send them to the cloud.\n- The data is going to be ingested into MongoDB Atlas and we will use some of its IoT capabilities, such as time series collections and real-time analytics.\n- Finally, I'm going to use some programming languages that are on the edge or even out of my comfort zone, just to prove that they shouldn't be the limitation.\n\n# The vision, the mission and the product\n\n**Vision**: we need work environments that enhance our productivity.\nConsider that technology, and IoT in particular, can be helpful for that.\n\n**Mission**: We are going to create, sell, and support IoT products that will help our users to be more productive and feel more comfortable in their work environments.\n\nThe first product in the pipeline is going to help our customers to measure and control noise levels in the workspace.\n\nHopefully, by now you are relieved that this isn't going to be another temperature sensor tutorial. Yippee-ki-yay!\n\nLet's use an implementation diagram that we will refine in the future. In the diagram, I have included an *undetermined* number of sensors (actually, 5) to measure the noise levels in different places, hence the ear shape used for them. In my initial implementation, I will only use a few (two or three) with the sole purpose of verifying that the collecting station can work with more than one at any given time. My first choice for the collecting station, which is represented by the inbox icon, is to use a Raspberry Pi (RPi) that has built-in support for Bluetooth and WiFi. Finally, on the top of the diagram, we have a MongoDB Atlas cluster that we will use to store and use the sensor data.\n\n videos in the past. Please forget my mistakes when using it.\n\nFinally, there are some things that I won't be covering in this project, both for the sake of brevity and for my lack of knowledge of them. 
The most obvious ones are creating custom PCBs with the components and 3D printing a case for the resulting device. But most importantly, I won't be implementing firmware for all of the devkits that I will select and even less in different languages. Just some of the boards in some of the languages. As we lazy people like to say, this is left as an exercise to the reader.\n\n# The plan\n\nComing back to the goal of this project, it is to mimic what one would do when one wants to create a new device from scratch. I will start, then, by selecting some microcontroller devkits that are available on the market. That is the first step and it is included in this article.\n\nOne of the main features of the hardware that I plan to use is to have some way of working wirelessly. I plan to have some sensors, and if they require a wired connection to the collecting station, it would be a very strong limitation. Thus, my next step is to implement this communication. I have considered two alternatives for the communication. The first one is Bluetooth Low Energy (BLE) and the second one is MQTT over WiFi. I will give a more detailed explanation when we get to them. From the perspective of power consumption, the first option seems to be better, and consuming less power means batteries that last longer and happier users.\n\nBut, there seems to be less (complete) documentation on how to implement it. For example, I could find neither good documentation for the BLE library that comes with MicroPython nor anything on how to use BLE with Bluez and DBus. Also, if I successfully implement both sides of the BLE communication, I need to confirm that I can make it work concurrently with more than one sensor.\n\nMy second and third steps will be to implement the peripheral role of the BLE communication on the microcontroller devkits and then the central role on the RPi.\n\nI will continue with the implementation of the WiFi counterparts. Step 4 is going to be making the sensors publish their measurements via MQTT over WiFi, and Step 5 will be to have the Raspberry Pi subscribe to the MQTT service and receive the data.\n\nEventually, in Step 6, I will use the MongoDB C++ driver to upload the data to a MongoDB Atlas cluster. Once the data is ingested by the MongoDB Atlas cluster, we will be able to enjoy the advantages it offers in terms of storing and archiving the data, querying it, and using real-time analytics and visualization.\n\nSo, this is the list of steps of the plan:\n\n1. Project intro (you are here)\n2. BLE peripheral firmware\n3. BLE central for Raspberry Pi OS\n4. MQTT publisher firmware\n5. MQTT subscriber for Raspberry Pi OS\n6. Upload data from Raspberry Pi OS to MongoDB Atlas clusters\n7. Work with the data using MongoDB\n\nI have a couple of ideas that I may add at the end of this series, but for now, this matches my goals and what I wanted to share with you. Keep in mind that it is also possible that I will need to include intermediate steps to refine some code or include some required functionality. I am open to suggestions for topics that can be added and enhancements to this content. Send them my way while the project is still in progress.\n\n# Hardware selection\n\nI will start this hands-on part by defining the features that I will be using and then come up with some popular and affordable devkit boards that implement those features or, at least, can be made to do so. I will end up with a list of devkit boards. 
It will be nothing like the \"top devkit boards\" of this year, but rather a list of suggested boards that can be used for a project like this one.\n\nLet's start with the features:\n\n- They have to implement at least one of the two radio-frequency communication standards: WiFi and/or Bluetooth.\n- They have to have a microphone or some pins that allow me to connect one.\n- Having another sensor on board is appreciated but not required. Reading the temperature is extremely simple, so I will start by using that instead of getting audio. I will focus on the audio part later when the communications are implemented and working.\n- I plan to have independent sensors, so it would be nice if I could plug a battery instead of using the USB power. Again, a nice feature, but not a must-have.\n- Last, but not least, having documentation available, examples, and a vibrant community will make our lives easier.\n\n## Raspberry Pi Pico W\n\n is produced by the same company that sells the well-known Raspberry Pi single-board computers, but it is a microcontroller board with its own RP-2040 chip. The RP-2040 is a dual-core Arm Cortex-M0+ processor. The W model includes a fully certified module that provides 802.11n WiFi and Bluetooth 5.2. It doesn't contain a microphone in the devkit board, but there are examples and code available for connecting an electret microphone. It does have a temperature sensor, though. It also doesn't have a battery socket so we will have to use our spare USB chargers.\n\nFinally, in terms of creating code for this board, we can use:\n\n- MicroPython, which is an implementation of Python3 for microcontrollers. It is efficient and offers the niceties of the Python language: easy to learn, mature ecosystem with many libraries, and even REPL.\n- C/C++ that provide a lower-level interface to extract every bit of juice from the board.\n- JavaScript as I have learned very recently. The concept is similar to the one in the MicroPython environment but less mature (for now).\n- There are some Rust crates for this processor and the board, but it may require extra effort to use BLE or WiFi using the embassy crate.\n\n## Micro:bit\n\n is a board created for learning purposes. It comes with several built-in sensors, including a microphone, and LEDs that we can use to get feedback on the noise levels. It uses a Nordic nRF52833 that features an ARM Cortex-M4 processor with a full Bluetooth Low Energy stack, but no WiFi. It has a battery socket and it can be bought with a case for AA batteries.\n\nThe educational goal is also present when we search for options to write code. These are the main options:\n\n- Microsoft MakeCode which is a free platform to learn programming online using a graphical interface to operate with different blocks of code.\n- Python using MicroPython or its own web interface.\n- C/C++ with the Arduino IDE.\n- Rust, because the introductory guide for embedded Rust uses the microbit as the reference board. So, no better board to learn how to use Rust with embedded devices. BLE is not in the guide, but we could also use the embassy nrf-softdevice crate to implement it.\n\n## Adafruit Circuit Playground Bluefruit\n\n is also aimed at people who want to have their first contact with electronics. It comes with a bunch of sensors, including temperature one and a microphone, and it also has some very nice RGB LEDs. Its main chip is a Nordic nRF52840 Cortex M4 processor with Bluetooth Low Energy support. 
As was the case with the micro:bit board, there's no WiFi support on this board. It has a JST PH connector for a lipo battery or an AAA battery pack.\n\nIt can be used with Microsoft MakeCode, but its preferred programming environment is CircuitPython. CircuitPython is a fork of MicroPython with some specific and slightly more refined libraries for Adafruit products, such as this board. If you want to use Rust, there is a crate for an older version of this board, without BLE support. But then again, we could use the embassy crates for that purpose.\n\n## Adafruit Feather nRF52840 Sense\n\n is also based on the Nordic nRF52840 Cortex M4 and offers Bluetooth Low Energy but no WiFi. It comes with many on-board sensors, including microphone and temperature. It also features an RGB LED and a JST PH connector for a battery that can be charged using the USB connector.\n\nWhile this board can also be used to learn, I would argue that it's more aimed at prototyping and the programming options are:\n\n- CircuitPython as with all the Adafruit boards.\n- C/C++ with the Arduino IDE.\n- Rust, using the previously mentioned crates.\n\n## Espressif ESP32-C6-DevKitC-1\n\n features a RISC-V single-core processor and a WROOM module that provides not only WiFi and Bluetooth connectivity but also Zigbee and Thread (both are network protocols specifically designed for IoT). It has no sensors on-board, but it does have an LED and two USB-C ports, one for UART communications and the other one for USB Type-C serial communications.\n\nEspressif boards have traditionally been programmed in C/C++, but during the last year, they have been promoting Rust as a supported environment. It even has an introductory book that explains the basics for their boards.\n\n# Recap and future content\n\n:youtube]{vid=FW8n8IcEwTM}\n\nIn this article, we have introduced the project that I will be developing. It will be a series of sensors that gather noise data that will be collected by a bespoke implementation of a collecting station. I will explore two mechanisms for the communication between the sensors and the collecting station: BLE and MQTT over WiFi. Once the data is in the collecting station, I will send it to a MongoDB Atlas cluster on the Cloud using the C++ driver and we will finish the project by showing some potential uses of the data in the Cloud.\n\nI have presented you with a list of requirements for the development boards and some alternatives that match those requirements, and you can use it for this or similar projects. 
In our next episode, I will try to implement the BLE peripheral role in one or more of the boards.\n\nIf you have any questions or feedback, head to the [MongoDB Developer Community forum.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt28e47a9dd6c27329/65533c1b9f2b99ec15bc9579/Adventures_in_IoT.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf77df78f4a2fdad2/65536b9e647c28790d4e8033/devices.jpeg\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt38747f873267cbc5/655365a46053f868fac92221/rp2.jpeg\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0d38d6409869d9a1/655365b64d285956b1afabf2/microbit.jpeg\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfa0bf312ed01d222/655365cc2e0ea10531178104/circuit-playground.jpeg\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5ed758c5d26c0382/655365da9984b880675a9ace/feather.jpeg\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1ad337c149332b61/655365e9e9c23ce2e927441b/esp32-c6.jpeg", "format": "md", "metadata": {"tags": ["C++", "Python"], "pageDescription": "In the first article of this series, you can learn about the hands-on IoT project that we will be delivering. It discusses the architecture that will be implemented and the step-by-step approach that will be followed to implement it. There is a discussion about the rules of engagement for the project and the tools that will be used. The last section covers a a selection of MCU devkit boards that would be suitable for the project.", "contentType": "Tutorial"}, "title": "Plans and Hardware Selection for a Hands-on Implementation of IoT with MCUs and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/quarkus-rest-crud", "action": "created", "body": "# Creating a REST API for CRUD Operations With Quarkus and MongoDB\n\n## What is Quarkus?\n\nWhen we write a traditional Java application, our Java source code is compiled and transformed into Java bytecode.\nThis bytecode can then be executed by a Java virtual machine (JVM) specific to the operating system you are\nrunning. This is why we can say that Java is a portable language. You compile once, and you can run it everywhere,\nas long as you have the right JVM on the right machine.\n\nThis is a great mechanism, but it comes at a cost. Starting a program is slow because the JVM and the entire context\nneed to be loaded first before running anything. It's not memory-efficient because we need to load hundreds of classes that might not be used at all in the end as the classpath scanning only occurs after.\n\nThis was perfectly fine in the old monolithic realm, but this is totally unacceptable in the new world made of lambda\nfunctions, cloud, containers, and Kubernetes. In this context, a low memory footprint and a lightning-fast startup time\nare absolutely mandatory.\n\nThis is where Quarkus comes in. Quarkus is a Kubernetes-native Java framework tailored\nfor GraalVM and HotSpot).\n\nWith Quarkus, you can build native binaries that can boot and send their first response in 0.042 seconds versus 9.5\nseconds for a traditional Java application.\n\nIn this tutorial, we are going to build a Quarkus application that can manage a `persons` collection in MongoDB. 
The\ngoal is to perform four simple CRUD operations with a REST API using a native application.\n\n## Prerequisites\n\nFor this tutorial, you'll need:\n\n- cURL.\n- Docker.\n- GraalVM.\n- A MongoDB Atlas cluster or a local instance. I'll use a Docker container in\n this tutorial.\n\nIf you don't want to code along and prefer to check out directly the final code:\n\n```bash\ngit clone git@github.com:mongodb-developer/quarkus-mongodb-crud.git\n```\n\n## How to set up Quarkus with MongoDB\n\n**TL;DR**:\nUse this link\nand click on `generate your application` or clone\nthe GitHub repository.\n\nThe easiest way to get your project up and running with Quarkus and all the dependencies you need is to\nuse https://code.quarkus.io/.\n\nSimilar to Spring initializr, the Quarkus project starter website will help you\nselect your dependencies and build your Maven or Gradle configuration file. Some dependencies will also include a\nstarter code to assist you in your first steps.\n\nFor our project, we are going to need:\n\n- MongoDB client quarkus-mongodb-client].\n- SmallRye OpenAPI [quarkus-smallrye-openapi].\n- REST [quarkus-rest].\n- REST Jackson [quarkus-rest-jackson].\n\nFeel free to use the `group` and `artifact` of your choice. Make sure the Java version matches the version of your\nGraalVM version, and we are ready to go.\n\nDownload the zip file and unzip it in your favorite project folder. Once it's done, take some time to read the README.md\nfile provided.\n\nFinally, we need a MongoDB cluster. Two solutions:\n\n- Create a new cluster on [MongoDB Atlas and retrieve the connection string, or\n- Create an ephemeral single-node replica set with Docker.\n\n```bash\ndocker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:latest --replSet=RS && sleep 5 && docker exec mongo mongosh --quiet --eval \"rs.initiate();\"\n```\n\nEither way, the next step is to set up your connection string in the `application.properties` file.\n\n```properties\nquarkus.mongodb.connection-string=mongodb://localhost:27017\n```\n\n## CRUD operations in Quarkus with MongoDB\n\nNow that our Quarkus project is ready, we can start developing.\n\nFirst, we can start the developer mode which includes live coding (automatic refresh) without the need to restart the\nprogram.\n\n```bash\n./mvnw compile quarkus:dev\n```\n\nThe developer mode comes with two handy features:\n\n- Swagger UI\n- Quarkus Dev UI\n\nFeel free to take some time to explore both these UIs and see the capabilities they offer.\n\nAlso, as your service is now running, you should be able to receive your first HTTP communication. Open a new terminal and execute the following query:\n\n```bash\ncurl http://localhost:8080/hello\n```\n\n> Note: If you cloned the repo, then it\u2019s `/api/hello`. We are changing this below in a minute.\n\nResult:\n\n```\nHello from Quarkus REST\n```\n\nThis works because your project currently contains a single class `GreetingResource.java` with the following code.\n\n```java\npackage com.mongodb;\n\nimport jakarta.ws.rs.GET;\nimport jakarta.ws.rs.Path;\nimport jakarta.ws.rs.Produces;\nimport jakarta.ws.rs.core.MediaType;\n\n@Path(\"/hello\")\npublic class GreetingResource {\n\n @GET\n @Produces(MediaType.TEXT_PLAIN)\n public String hello() {\n return \"Hello from Quarkus REST\";\n }\n}\n```\n\n### PersonEntity\n\n\"Hello from Quarkus REST\" is nice, but it's not our goal! We want to manipulate data from a `persons` collection in\nMongoDB.\n\nLet's create a classic `PersonEntity.java` POJO class. 
I created\nit in the default `com.mongodb` package which is my `group` from earlier. Feel free to change it.\n\n```java\npackage com.mongodb;\n\nimport com.fasterxml.jackson.databind.annotation.JsonSerialize;\nimport com.fasterxml.jackson.databind.ser.std.ToStringSerializer;\nimport org.bson.types.ObjectId;\n\nimport java.util.Objects;\n\npublic class PersonEntity {\n\n @JsonSerialize(using = ToStringSerializer.class)\n public ObjectId id;\n public String name;\n public Integer age;\n\n public PersonEntity() {\n }\n\n public PersonEntity(ObjectId id, String name, Integer age) {\n this.id = id;\n this.name = name;\n this.age = age;\n }\n\n @Override\n public int hashCode() {\n int result = id != null ? id.hashCode() : 0;\n result = 31 * result + (name != null ? name.hashCode() : 0);\n result = 31 * result + (age != null ? age.hashCode() : 0);\n return result;\n }\n\n @Override\n public boolean equals(Object o) {\n if (this == o) return true;\n if (o == null || getClass() != o.getClass()) return false;\n\n PersonEntity that = (PersonEntity) o;\n\n if (!Objects.equals(id, that.id)) return false;\n if (!Objects.equals(name, that.name)) return false;\n return Objects.equals(age, that.age);\n }\n\n public ObjectId getId() {\n return id;\n }\n\n public void setId(ObjectId id) {\n this.id = id;\n }\n\n public String getName() {\n return name;\n }\n\n public void setName(String name) {\n this.name = name;\n }\n\n public Integer getAge() {\n return age;\n }\n\n public void setAge(Integer age) {\n this.age = age;\n }\n}\n```\n\nWe now have a class to map our MongoDB documents to using Jackson.\n\n### PersonRepository\n\nNow that we have a `PersonEntity`, we can create a `PersonRepository` template, ready to welcome our CRUD queries.\n\nCreate a `PersonRepository.java` class next to the `PersonEntity.java` one.\n\n```java\npackage com.mongodb;\n\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoCollection;\nimport jakarta.enterprise.context.ApplicationScoped;\n\n@ApplicationScoped\npublic class PersonRepository {\n\n private final MongoClient mongoClient;\n private final MongoCollection coll;\n\n public PersonRepository(MongoClient mongoClient) {\n this.mongoClient = mongoClient;\n this.coll = mongoClient.getDatabase(\"test\").getCollection(\"persons\", PersonEntity.class);\n }\n\n // CRUD methods will go here\n\n}\n```\n\n### PersonResource\n\nWe are now almost ready to create our first CRUD method. Let's update the default `GreetingResource.java` class to match\nour goal.\n\n1. Rename the file `GreetingResource.java` to `PersonResource.java`.\n2. In the `test` folder, also rename the default test files to `PersonResourceIT.java` and `PersonResourceTest.java`.\n3. 
Update `PersonResource.java` like this:\n\n```java\npackage com.mongodb;\n\nimport jakarta.inject.Inject;\nimport jakarta.ws.rs.*;\nimport jakarta.ws.rs.core.MediaType;\n\n@Path(\"/api\")\n@Consumes(MediaType.APPLICATION_JSON)\n@Produces(MediaType.APPLICATION_JSON)\npublic class PersonResource {\n\n @Inject\n PersonRepository personRepository;\n\n @GET\n @Path(\"/hello\")\n public String hello() {\n return \"Hello from Quarkus REST\";\n }\n\n // CRUD routes will go here\n\n}\n```\n\n> Note that with the `@Path(\"/api\")` annotation, the URL of our `/hello` service is now `/api/hello`.\n\nAs a consequence, update `PersonResourceTest.java` so our test keeps working.\n\n```java\npackage com.mongodb;\n\nimport io.quarkus.test.junit.QuarkusTest;\nimport org.junit.jupiter.api.Test;\n\nimport static io.restassured.RestAssured.given;\nimport static org.hamcrest.CoreMatchers.is;\n\n@QuarkusTest\nclass PersonResourceTest {\n @Test\n void testHelloEndpoint() {\n given().when().get(\"/api/hello\").then().statusCode(200).body(is(\"Hello from Quarkus REST\"));\n }\n}\n```\n\n### Create a person\n\nAll the code blocks are now in place. We can create our first route to be able to create a new person.\n\nIn\nthe repository,\nadd the following method that inserts a `PersonEntity` and returns the inserted document's `ObjectId` in `String`\nformat.\n\n```java\npublic String add(PersonEntity person) {\n return coll.insertOne(person).getInsertedId().asObjectId().getValue().toHexString();\n}\n```\n\nIn\nthe resource\nfile, we can create the corresponding route:\n\n```java\n@POST\n@Path(\"/person\")\npublic String createPerson(PersonEntity person) {\n return personRepository.add(person);\n}\n```\n\nWithout restarting the project (remember the dev mode?), you should be able to test this route.\n\n```bash\ncurl -X POST http://localhost:8080/api/person \\\n -H 'Content-Type: application/json' \\\n -d '{\"name\": \"John Doe\", \"age\": 30}'\n```\n\nThis should return the `ObjectId` of the new `person` document.\n\n```\n661dccf785cd323349ca42f7\n```\n\nIf you connect to the MongoDB instance with mongosh, you can confirm that\nthe document made it:\n\n```\nRS direct: primary] test> db.persons.find()\n[\n {\n _id: ObjectId('661dccf785cd323349ca42f7'),\n age: 30,\n name: 'John Doe'\n }\n]\n```\n\n### Read persons\n\nNow, we can read all the persons in the database, for example.\n\nIn\nthe [repository,\nadd:\n\n```java\npublic List getPersons() {\n return coll.find().into(new ArrayList<>());\n}\n```\n\nIn\nthe resource,\nadd:\n\n```java\n@GET\n@Path(\"/persons\")\npublic List getPersons() {\n return personRepository.getPersons();\n}\n```\n\nNow, we can retrieve all the persons in our database:\n\n```bash\ncurl http://localhost:8080/api/persons\n```\n\nThis returns a list of persons:\n\n```json\n\n {\n \"id\": \"661dccf785cd323349ca42f7\",\n \"name\": \"John Doe\",\n \"age\": 30\n }\n]\n```\n\n### Update person\n\nIt's John Doe's anniversary! 
Let's increment his age by one.\n\nIn\nthe [repository,\nadd:\n\n```java\npublic long anniversaryPerson(String id) {\n Bson filter = eq(\"_id\", new ObjectId(id));\n Bson update = inc(\"age\", 1);\n return coll.updateOne(filter, update).getModifiedCount();\n}\n```\n\nIn\nthe resource,\nadd:\n\n```java\n@PUT\n@Path(\"/person/{id}\")\npublic long anniversaryPerson(@PathParam(\"id\") String id) {\n return personRepository.anniversaryPerson(id);\n}\n```\n\nTime to test this party:\n\n```bash\ncurl -X PUT http://localhost:8080/api/person/661dccf785cd323349ca42f7\n```\n\nThis returns `1` which is the number of modified document(s). If the provided `ObjectId` doesn't match a person's id,\nthen it returns `0` and MongoDB doesn't perform any update.\n\n### Delete person\n\nFinally, it's time to delete John Doe...\n\nIn\nthe repository,\nadd:\n\n```java\npublic long deletePerson(String id) {\n Bson filter = eq(\"_id\", new ObjectId(id));\n return coll.deleteOne(filter).getDeletedCount();\n}\n```\n\nIn\nthe resource,\nadd:\n\n```java\n@DELETE\n@Path(\"/person/{id}\")\npublic long deletePerson(@PathParam(\"id\") String id) {\n return personRepository.deletePerson(id);\n}\n```\n\nLet's test:\n\n```bash\ncurl -X DELETE http://localhost:8080/api/person/661dccf785cd323349ca42f7\n```\n\nAgain, it returns `1` which is the number of deleted document(s).\n\nNow that we have a working Quarkus application with a MongoDB CRUD service, it's time to experience the full\npower of Quarkus.\n\n## Quarkus native build\n\nQuit the developer mode by simply hitting the `q` key in the relevant terminal.\n\nIt's time to build\nthe native executable\nthat we can use in production with GraalVM and experience the *insanely* fast start-up time.\n\nUse this command line to build directly with your local GraalVM and other dependencies.\n\n```bash\n./mvnw package -Dnative\n```\n\nOr use the Docker image that contains everything you need:\n\n```bash\n./mvnw package -Dnative -Dquarkus.native.container-build=true\n```\n\nThe final result is a native application, ready to be launched, in your `target` folder.\n\n```bash\n./target/quarkus-mongodb-crud-1.0.0-SNAPSHOT-runner\n```\n\nOn my laptop, it starts in **just 0.019s**! Remember how much time Spring Boot needs to start an application and respond\nto queries for the first time?!\n\nYou can read more about how Quarkus makes this miracle a reality in\nthe container first documentation.\n\n## Conclusion\n\nIn this tutorial, we've explored how Quarkus and MongoDB can team up to create a lightning-fast RESTful API with CRUD\ncapabilities.\n\nNow equipped with these insights, you're ready to build blazing-fast APIs with Quarkus, GraalVM, and MongoDB. Dive into\nthe\nprovided GitHub repository for more details.\n\n> If you have questions, please head to our Developer Community website where the\n> MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB", "Quarkus", "Docker"], "pageDescription": "Explore the seamless integration of Quarkus, GraalVM, and MongoDB for lightning-fast CRUD RESTful APIs. 
Harness Quarkus' rapid startup time and Kubernetes compatibility for streamlined deployment.", "contentType": "Quickstart"}, "title": "Creating a REST API for CRUD Operations With Quarkus and MongoDB", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/unlock-value-data-mongodb-atlas-intelligent-analytics-microsoft-fabric", "action": "created", "body": "# Unlock the Value of Data in MongoDB Atlas with the Intelligent Analytics of Microsoft Fabric\n\nTo win in this competitive digital economy, enterprises are striving to create smarter intelligent apps. These apps provide a superior customer experience and can derive insights and predictions in real-time. \n\nSmarter apps use data \u2014 in fact, lots of data, AI and analytics together. MongoDB Atlas stores valuable operational data and has capabilities to support operational analytics and AI based applications. This blog details MongoDB Atlas\u2019 seamless integration with Microsoft Fabric to run large scale AI/ML and varied analytics and BI reports across the enterprise data estate, reshaping how teams work with data by bringing everyone together on a single, AI-powered platform built for the era of AI. Customers can leverage MongoDB Atlas with Microsoft Fabric as the foundation to build smart and intelligent applications.\n\n## Better together \n\nMongoDB was showcased as a key partner at Microsoft Ignite, highlighting the collaboration to build seamless integrations and joint solutions complementing capabilities to address diverse use cases. \n\n, Satya Nadella, Chairman and Chief Executive Officer of Microsoft, announced that Microsoft Fabric is now generally available for purchase. Satya addressed the strategic plan to enable MongoDB Atlas mirroring in Microsoft Fabric to enable our customers to use mirroring to access their data in OneLake. \n\nMongoDB Atlas\u2019 flexible data model, versatile query engine, integration with LLM frameworks, and inbuilt Vector Search, analytical nodes, aggregation framework, Atlas Data Lake, Atlas Data Federation, Charts, etc. enables operational analytics and application-driven intelligence from the source of the data itself. However, the analytics and AI needs of an enterprise span across their data estate and require them to combine multiple data sources and run multiple types of analytics like big data, Spark, SQL, or KQL-based ones at a large-scale. They bring data from sources like MongoDB Atlas to one uniform format in OneLake in Microsoft Fabric to enable them to run Batch Spark analytics and AI/ML of petabyte scale and use data warehousing abilities, big data analytics, and real-time analytics across the delta tables populated from disparate sources. \n\nis a Microsoft-certified connector which can be accessed from the \u201cDataflow Gen2\u201d feature from \u201cData Factory\u201d in Microsoft Fabric.\n\nDataflow Gen2 selection takes us to the familiar Power Query interface of Microsoft Power BI. To bring data from MongoDB Atlas collections, search the MongoDB Atlas SQL connector from the \u201cGet Data\u201d option on the menu.\n\n or set up an Atlas federated database and get a connection string for the same. Also, note that the connector needs a Gateway set up to communicate from Fabric and schedule refreshes. 
Get more details on Gateway setup.\n\nOnce data is retrieved from MongoDB Atlas into Power Query, the magic of Power Query can be used to transform the data, including flattening object data into separate columns, unwinding array data into separate rows, or changing data types. These are typically required when converting MongoDB data in JSON format to the relational format in Power BI. Additionally, the blank query option can be used for a quick query execution. Below is a sample query to start with:\n\n```\nlet\n Source = MongoDBAtlasODBC.Query(\"\", \u201c\", \"select * from \", null)\nin\n Source\n```\n\n#### MongoDB Data Pipeline connector (preview)\n\nThe announcement at Microsoft Ignite of the Data Pipeline connector being released for MongoDB Atlas in Microsoft Fabric is definitely good news for MongoDB customers. The connector provides a quick and similar experience as the MongoDB connector in Data Factory and Synapse Pipelines. \n\nThe connector is accessed from the \u201cData Pipelines\u201d feature from \u201cData Factory\u201d in Fabric. Choose the \u201cCopy data\u201d activity to use the MongoDB connector to get data from MongoDB or to push data to MongoDB. To get data from MongoDB, add MongoDB in Source. Select the MongoDB connector and create a linked service by providing the **connection string** and the **database** to connect to in MongoDB Atlas.\n\n to capture the change events in a MongoDB collection and using an Atlas function to trigger an Azure function. The Azure function can directly write to the Lake House in Microsoft Fabric or to ADLS Gen2 storage using ADLS Gen2 APIs. ADLS Gen2 storage accounts can be referenced in Microsoft Fabric using shortcuts, eliminating the need for an ETL process to move data from ADLS Gen2 to OneLake. Data in Microsoft Fabric can be accessed using the existing ADLS Gen2 APIs but there are some changes and constraints which can be referred to in the Microsoft Fabric documentation. \n\n provides streaming capabilities which allows structured streaming of changes from MongoDB or to MongoDB in both continuous and micro-batch modes. Using the connector, we just need a simple code that reads a stream of changes from the MongoDB collection and writes the stream to the Lakehouse in Microsoft Fabric or to ADLS Gen2 storage which can be referenced in Microsoft Fabric using shortcuts. MongoDB Atlas can be set up as a source for structured streaming by referring to the MongoDB documentation. Refer to the Microsoft Fabric documentation on setting up Lakehouse as Sink for structured streaming. \n\n and get started for free today on Azure Marketplace. 
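For reference, the structured-streaming approach described above can be sketched in a few lines of PySpark. This is only an illustrative outline, not code from the article: the connection string, database, collection, schema, checkpoint location, and target table name are all placeholders, and the option keys assume version 10.x of the MongoDB Connector for Spark.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("mongodb-to-fabric-stream").getOrCreate()

# Streaming reads cannot infer a schema, so declare the fields you care about
read_schema = (StructType()
               .add("_id", StringType())
               .add("item", StringType())
               .add("price", DoubleType())
               .add("saleDate", TimestampType()))

# Read the change stream from a MongoDB Atlas collection
stream_df = (spark.readStream
             .format("mongodb")
             .option("spark.mongodb.connection.uri", "<your-atlas-connection-string>")
             .option("spark.mongodb.database", "<database>")
             .option("spark.mongodb.collection", "<collection>")
             .option("spark.mongodb.change.stream.publish.full.document.only", "true")
             .schema(read_schema)
             .load())

# Continuously append the incoming changes to a Lakehouse (Delta) table
streaming_query = (stream_df.writeStream
                   .format("delta")
                   .option("checkpointLocation", "Files/checkpoints/mongodb_stream")
                   .outputMode("append")
                   .toTable("mongodb_changes"))

streaming_query.awaitTermination()
```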
\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc783c7ca51ffc321/655678560e64b945e26edeb7/Fabric_Keynote.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt11079cade4dbe467/6553ef5253e8ec0e05c46baa/image2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt133326ec100a6ccd/6553ef7a9984b8c9045a9fc6/image5.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd2c2a8c4741c1849/6553efa09984b8a1685a9fca/image6.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc1faa0c6b3e2a93d/6553f00253e8ecacacc46bb4/image3.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4789acc89ff7d1ef/6553f021647c28121d4e84f6/image7.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt58b3b258cfeb5614/6553f0410e64b9dbad6ece06/image1.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltba161d80a4442dd0/6553f06ac2479d218b7822e0/image4.png", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn how you can use Microsoft Fabric with MongoDB Atlas for intelligent analytics for your data. ", "contentType": "News & Announcements"}, "title": "Unlock the Value of Data in MongoDB Atlas with the Intelligent Analytics of Microsoft Fabric", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/semantic-search-made-easy-langchain-mongodb", "action": "created", "body": "# Semantic Search Made Easy With LangChain and MongoDB\n\nEnabling semantic search on user-specific data is a multi-step process that includes loading, transforming, embedding, and storing data before it can be queried. \n\n, whose goal is to provide a set of utilities to greatly simplify this process. \n\nIn this tutorial, we'll walk through each of these steps, using MongoDB Atlas as our Store. Specifically, we'll use the AT&T Wikipedia page as our data source. We'll then use libraries from LangChain to load, transform, embed, and store: \n\n (Free tier is fine)\n* Open AI API key\n\n## Quick start steps\n1. Get the code:\n```zsh\ngit clone https://github.com/mongodb-developer/atlas-langchain.git\n```\n2. Update params.py with your MongoDB connection string and Open AI API key.\n3. Create a new Python environment\n```zsh\npython3 -m venv env\n```\n4. Activate the new Python environment\n```zsh\nsource env/bin/activate\n```\n\n5. Install the requirements\n```zsh\npip3 install -r requirements.txt\n```\n6. Load, transform, embed, and store\n```zsh\npython3 vectorize.py\n```\n\n7. Retrieve\n```zsh\npython3 query.py -q \"Who started AT&T?\"\n```\n\n## The details\n### Load -> Transform -> Embed -> Store \n#### Step 1: Load\nThere's no lack of sources of data \u2014 Slack, YouTube, Git, Excel, Reddit, Twitter, etc. \u2014 and LangChain provides a growing list of integrations that includes this list and many more.\n\nFor this exercise, we're going to use the WebBaseLoader to load the Wikipedia page for AT&T. \n\n```python\nfrom langchain.document_loaders import WebBaseLoader\nloader = WebBaseLoader(\"https://en.wikipedia.org/wiki/AT%26T\")\ndata = loader.load()\n```\n\n #### Step 2: Transform (Split)\nNow that we have a bunch of text loaded, it needs to be split into smaller chunks so we can tease out the relevant portion based on our search query. For this example, we'll use the recommended RecursiveCharacterTextSplitter. 
As I have it configured, it attempts to split on paragraphs (`\"\\n\\n\"`), then sentences(`\"(?<=\\. )\"`), and then words (`\" \"`) using a chunk size of 1,000 characters. So if a paragraph doesn't fit into 1,000 characters, it will truncate at the next word it can fit to keep the chunk size under 1,000 characters. You can tune the `chunk_size` to your liking. Smaller numbers will lead to more documents, and vice-versa.\n\n```python\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\ntext_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0, separators=\n \"\\n\\n\", \"\\n\", \"(?<=\\. )\", \" \"], length_function=len)\ndocs = text_splitter.split_documents(data)\n```\n\n#### Step 3: Embed\n[Embedding is where you associate your text with an LLM to create a vector representation of that text. There are many options to choose from, such as OpenAI and Hugging Face, and LangChang provides a standard interface for interacting with all of them. \n\nFor this exercise, we're going to use the popular OpenAI embedding. Before proceeding, you'll need an API key for the OpenAI platform, which you will set in params.py.\n\nWe're simply going to load the embedder in this step. The real power comes when we store the embeddings in Step 4. \n\n```python\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nembeddings = OpenAIEmbeddings(openai_api_key=params.openai_api_key)\n```\n\n#### Step 4: Store\nYou'll need a vector database to store the embeddings, and lucky for you MongoDB fits that bill. Even luckier for you, the folks at LangChain have a MongoDB Atlas module that will do all the heavy lifting for you! Don't forget to add your MongoDB Atlas connection string to params.py.\n\n```python\nfrom pymongo import MongoClient\nfrom langchain.vectorstores import MongoDBAtlasVectorSearch\n\nclient = MongoClient(params.mongodb_conn_string)\ncollection = clientparams.db_name][params.collection_name]\n\n# Insert the documents in MongoDB Atlas with their embedding\ndocsearch = MongoDBAtlasVectorSearch.from_documents(\n docs, embeddings, collection=collection, index_name=index_name\n)\n```\n\nYou'll find the complete script in [vectorize.py, which needs to be run once per data source (and you could easily modify the code to iterate over multiple data sources).\n\n```zsh\npython3 vectorize.py\n```\n\n#### Step 5: Index the vector embeddings\nThe final step before we can query the data is to create a search index on the stored embeddings. \n\nIn the Atlas console and using the JSON editor, create a Search Index named `vsearch_index` with the following definition: \n```JSON\n{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"embedding\": {\n \"dimensions\": 1536,\n \"similarity\": \"cosine\",\n \"type\": \"knnVector\"\n }\n }\n }\n}\n```\n\n or max_marginal_relevance_search. That would return the relevant slice of data, which in our case would be an entire paragraph. However, we can continue to harness the power of the LLM to contextually compress the response so that it more directly tries to answer our question. 
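To make the contrast concrete, here is a minimal sketch of what a plain semantic search looks like before adding contextual compression (the compression version follows below). It assumes the same hypothetical `params` module and search index name used throughout the tutorial:

```python
from pymongo import MongoClient
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.embeddings.openai import OpenAIEmbeddings
import params

client = MongoClient(params.mongodb_conn_string)
collection = client[params.db_name][params.collection_name]

vectorStore = MongoDBAtlasVectorSearch(
    collection, OpenAIEmbeddings(openai_api_key=params.openai_api_key), index_name=params.index_name
)

# Plain similarity search returns the k most relevant chunks (whole paragraphs here)
docs = vectorStore.similarity_search("Who started AT&T?", k=3)
for doc in docs:
    print(doc.page_content[:200], "\n---")
```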
\n\n```python\nfrom pymongo import MongoClient\nfrom langchain.vectorstores import MongoDBAtlasVectorSearch\nfrom langchain.embeddings.openai import OpenAIEmbeddings\nfrom langchain.llms import OpenAI\nfrom langchain.retrievers import ContextualCompressionRetriever\nfrom langchain.retrievers.document_compressors import LLMChainExtractor\n\nclient = MongoClient(params.mongodb_conn_string)\ncollection = clientparams.db_name][params.collection_name]\n\nvectorStore = MongoDBAtlasVectorSearch(\n collection, OpenAIEmbeddings(openai_api_key=params.openai_api_key), index_name=params.index_name\n)\n\nllm = OpenAI(openai_api_key=params.openai_api_key, temperature=0)\ncompressor = LLMChainExtractor.from_llm(llm)\n\ncompression_retriever = ContextualCompressionRetriever(\n base_compressor=compressor,\n base_retriever=vectorStore.as_retriever()\n)\n```\n\n```zsh\npython3 query.py -q \"Who started AT&T?\"\n\nYour question:\n-------------\nWho started AT&T?\n\nAI Response:\n-----------\nAT&T - Wikipedia\n\"AT&T was founded as Bell Telephone Company by Alexander Graham Bell, Thomas Watson and Gardiner Greene Hubbard after Bell's patenting of the telephone in 1875.\"[25] \"On December 30, 1899, AT&T acquired the assets of its parent American Bell Telephone, becoming the new parent company.\"[28]\n```\n\n## Resources\n* [MongoDB Atlas\n* Open AI API key\n* LangChain\n * WebBaseLoader\n * RecursiveCharacterTextSplitter\n * MongoDB Atlas module \n * Contextual Compression \n * MongoDBAtlasVectorSearch API\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt60cb0020b79c0f26/6568d2ba867c0b46e538aff4/semantic-search-made-easy-langchain-mongodb-1.jpg\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7d06677422184347/6568d2edf415044ec2127397/semantic-search-made-easy-langchain-mongodb-2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd8e8fb5c5fdfbed8/6568d30e81b93e1e25a1bf8e/semantic-search-made-easy-langchain-mongodb-3.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7b65b6cb87008f2a/6568d337867c0b1e0238b000/semantic-search-made-easy-langchain-mongodb-4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt567ad5934c0f7a34/6568d34a7e63e37d4e110d3d/semantic-search-made-easy-langchain-mongodb-5.png", "format": "md", "metadata": {"tags": ["Python", "Atlas", "AI"], "pageDescription": "Discover the power of semantic search with our comprehensive tutorial on integrating LangChain and MongoDB. This step-by-step guide simplifies the complex process of loading, transforming, embedding, and storing data for enhanced search capabilities. Using MongoDB Atlas and the AT&T Wikipedia page as a case study, we demonstrate how to effectively utilize LangChain libraries to streamline semantic search in your projects. Ideal for developers with a MongoDB Atlas subscription and OpenAI API key, this tutorial covers everything from setting up your environment to querying embedded data. Dive into the world of semantic search with our easy-to-follow instructions and expert insights.", "contentType": "Tutorial"}, "title": "Semantic Search Made Easy With LangChain and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/no-connectivity-no-problem-enable-offline-inventory-atlas-edge-server", "action": "created", "body": "# No Connectivity? No Problem! 
Enable Offline Inventory with Atlas Edge Server\n\n> If you haven\u2019t yet followed our guide on how to build an inventory management system using MongoDB Atlas, we strongly advise doing so now. This article builds on top of the previous one to bring powerful new capabilities for real-time sync, conflict resolution, and disconnection tolerance!\n\nIn the relentless world of retail logistics, where products are always on the move, effective inventory management is crucial. Fast-moving operations can\u2019t afford to pause when technical systems go offline. That's why it's essential for inventory management processes to remain functional, even without connectivity. To address this challenge, supply chains turn to Atlas Edge Server to enable offline inventory management in a reliable and cost-effective way. In this guide, we will demonstrate how you can easily incorporate Edge Server into your existing solution.\n\n, we explored how MongoDB Atlas enables event-driven architectures to enhance inventory management with real-time data strategies. Now, we are taking that same architecture a step further to ensure our store operations run seamlessly even in the face of connectivity issues. Our multi-store setup remains the same: We\u2019ll have three users \u2014 two store managers and one area manager \u2014 overviewing the inventory of their stores and areas respectively. We'll deploy identical systems in both individual stores and the public cloud to serve the out-of-store staff. The only distinction will be that the store apps will be linked to Edge Server, whereas the area manager's app will remain connected to MongoDB Atlas. Just like that, our stores will be able to handle client checkouts, issue replenishment orders, and access the product catalog with no interruptions and minimal latency. This is how Atlas Edge Server bridges the gap between connected retail stores and the cloud.\n\nWithout further ado, let's dive in and get started!\n\n## Prerequisites\n\nFor this next phase, we'll need to ensure we have all the prerequisites from Part 1 in place, as well as some additional requirements related to integrating Edge Server. Here are the extra tools you'll need:\n\n- **Docker** (version 24 or higher): Docker allows us to package our application into containers, making it easy to deploy and manage across different environments. Since Edge Server is a containerized product, Docker is essential to run it. You can choose to install Docker Engine alone if you're using one of the supported platforms or as part of the Docker Desktop package for other platforms.\n- **Docker Compose** (version 2.24 or higher): Docker Compose is a tool for defining and running multi-container Docker applications. The Edge Server package deploys a group of containers that need to be orchestrated effectively. If you have installed Docker Desktop in the previous step, Docker Compose will be available by default. For Linux users, you can install Docker Compose manually from this page: Install the Docker Compose plugin.\n- **edgectl** (version 0.23.2 or higher): edgectl is the CLI tool for Edge Server, allowing you to manage and interact with Edge Server instances. 
To install this tool, you can visit the official documentation on how to configure Edge Server or simply run the following command in your terminal: `curl https://services.cloud.mongodb.com/edge/install.sh | bash`.\n\nWith these additional tools in place, we'll be ready to take our inventory management system to the next level.\n\n## A quick recap\n\nAlright, let's do a quick recap of what we should have in place already:\n\n- **Sample database**: We created a sample database with a variety of collections, each serving a specific purpose in our inventory management system. From tracking products and transactions to managing user roles, our database laid the groundwork for a single view of inventory.\n- **App Services back end**: Leveraging Atlas App Services, we configured our app back end with triggers, functions, HTTPS endpoints, and the Data API. This setup enabled seamless communication between our application and the database, facilitating real-time responses to events.\n- **Search Indexes**: We enhanced our system's search capabilities by setting up Search Indexes. This allows for efficient full-text search and filtering, improving the user experience and query performance.\n- **Atlas Charts**: We integrated Atlas Charts to visualize product information and analytics through intuitive dashboards. With visually appealing insights, we can make informed decisions and optimize our inventory management strategy.\n\n documentation.\n\nFollow these instructions to set up and run Edge Server on your own device:\n\nWe will configure Edge Server using the command-line tool edgectl. By default, this tool will be installed at `.mongodb-edge` in your home directory. You can reference the entire path to use this tool, `~/.mongodb-edge/bin/edgectl`, or simply add it to your `PATH` by running the command below: \n\n```\nexport PATH=\"~/.mongodb-edge/bin/:$PATH\"\n```\n\nThe next command will generate a docker-compose file in your current directory with all the necessary steps to deploy and manage your Edge Server instance. Replace `` with the value obtained in the first part of this tutorial series, and `` with the token generated in the previous section.\n\n```\nedgectl init --platform compose --app-id --registration-token --insecure-disable-auth\n```\n\n> Note: To learn more about each of the config flags, visit our documentation on how to install and configure Edge Server.\n\nThis application is able to simulate offline scenarios by setting the edge server connectivity off. 
In order to enable this feature in Edge Server, run the command below.\n\n```\nedgectl offline-demo setup\n```\n\n- Atlas Edge Server\n- How Atlas Edge Server Bridges the Gap Between Connected Retail Stores and the Cloud\n- Grainger Innovates at the Edge With MongoDB Atlas Device Sync and Machine Learning\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8ea62e50ad7e7a88/66293b9985518c840a558497/1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt43b8b288aa9fe608/66293bb6cac8480a1228e08b/2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf2995665e6146367/66293bccb054417d969a04b5/3.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5fad48387c6d5355/66293bdb33301d293a892dd1/4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt480ffa6b4b77dbc0/66293bed33301d8bcb892dd5/5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc5f671f8c40b9e2e/66293c0458ce881776c309ed/6.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd31d0f728d5ec83c/66293c16b8b5ce162edc25d0/7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3c061edca6899a69/66293c2cb0ec77e21cd6e8e4/8.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2d7fd9a68a6fa499/66293c4281c884eb36380366/9.gif", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "", "contentType": "Tutorial"}, "title": "No Connectivity? No Problem! Enable Offline Inventory with Atlas Edge Server", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/migrate-from-rdbms-mongodb-help-ai-introduction-query-converter", "action": "created", "body": "# Migrate From an RDBMS to MongoDB With the Help of AI: An Introduction to Query Converter\n\nMigrating your applications between databases and programming languages can often feel like a chore. You have to export and import your data, transfer your schemas, and make potential application logic changes to accommodate the new programming language or database syntax. With MongoDB and the Relational Migrator tool, these activities no longer need to feel like a chore and instead can become more automated and streamlined.\n\n tool as it contains sample schemas that will work for experimentation. However, if you want to play around with your own data, you can connect to one of the popular relational database management systems (RDBMS).\n\n## Generate MongoDB queries with the help of AI\n\nOpen Relational Migrator and choose to create a new project. For the sake of this article, we'll click \"Use a sample schema\" to play around. Running queries and looking at data is not important here. 
We only want to know our schema, our SQL queries, and what we'll end our adventure with query-wise.\n\n.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4df0d0f28d4b9b30/66294a9458ce883a7ec30a80/query-converter-animated.gif\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte5216ae48b0c15c1/66294aad210d90a3c53a53dd/relational-migrator-new-project.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte7b77b8c7e1ca080/66294ac4fb977c24fa36b921/relational-migrator-erd-model.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc0f9b2fa7b1434d7/66294adffb977c441436b929/relational-migrator-query-converter-tab.png", "format": "md", "metadata": {"tags": ["Atlas", "SQL"], "pageDescription": "Learn how to quickly and easily migrate your SQL queries from a relational database to MongoDB queries and aggregation pipelines using the AI features of Relational Migrator and Query Converter.", "contentType": "Tutorial"}, "title": "Migrate From an RDBMS to MongoDB With the Help of AI: An Introduction to Query Converter", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/javascript/scale-up-office-music", "action": "created", "body": "# Listen Along at Scale Up with Atlas Application Services\n\nHere at Scale Up, we value music a lot. We have a Google Home speaker at our office that gets a lot of use. Music gets us going and helps us express ourselves individually as well as an organization. With how important music is in our lives, an idea came to our minds: We wanted to share what we like to listen to. We made a Scale Up Spotify playlist that we share on our website and listen to quite often, but we wanted to take it one step further. We wanted a way for others to be able to see what we're currently listening to in the office, and to host that, we turned to Atlas Application Services. \n\nSources of music and ways to connect to the speaker are varied. Some people listen on YouTube, others on Spotify, some like to connect via a Cast feature that Google Home provides, and others just use Bluetooth connection to play the tunes. We sometimes use the voice features of Google Home and politely ask the speaker to put some music on.\n\nAll of this means that there's no easily available \"one source of truth\" for what's currently playing on the speaker. We could try to somehow connect to Spotify or Google Home's APIs to see what's being cast, but that doesn\u2019t cover all the aforementioned cases\u2014connecting via Bluetooth or streaming from YouTube. The only real source of truth is what our ears can actually hear. \n\nThat's what we ultimately landed on\u2014trying to figure out what song is playing by actually listening to soundwaves coming out of the speaker. Thankfully, there are a lot of public APIs that can recognize songs based on a short audio sample. We decided to pick one that's pretty accurate when it comes to Polish music. In the end, it\u2019s a big part of what we're listening to.\n\nAll of this has to run somewhere. The first thing that came to mind was to build this \"listening device\" by getting a Raspberry Pi with a microphone, but after going through my \"old tech drawer\"\u2014let's face it, all of us techies have one\u2014I found an old Nexus 5. After playing with some custom ROMs, I managed to run node.js applications there. If you think about it, it really is a perfect device for this use case. 
It has more than enough computing power, a built-in microphone, and a screen just in case you need to do a quick debug. I ended up writing a small program that takes a short audio sample every couple of minutes between 7:00 am and 5:00 pm and uses the API mentioned above to recognize the song.\n\nThe piece of information about what we're currently listening to is a good starting point, but in order to embed it on our website, we need to store it somewhere first. Here's where MongoDB's and Mongo Atlas' powers come into play. Setting up a cloud database was very easy. It took me less than five minutes. The free tier is more than enough for prototyping and simple use cases like this one, and if you end up needing more, you can always switch to a higher tier. I connected my application to a MongoDB Atlas instance using the MongoDB Node Driver.\n\nNow that we have information about what's currently playing captured and safely stored in the MongoDB Atlas instance, there's only one piece of the puzzle missing: a way to retrieve the latest song from the database. Usually, this would require a separate application that we would have to develop, manage in the cloud, or provide a bare metal to run on, but here's the kicker: MongoDB has a way to do this easily with MongoDB Application Services. Application Services allows writing custom HTTP endpoints to retrieve or manipulate database data.\n\nTo create an endpoint like that, log in to your MongoDB Atlas Account. After creating a project, go to App Services at the top and then Create a New App. Name your app, click on Create App Service, and then on the left, you\u2019ll see the HTTP Endpoints entry. After clicking Add Endpoint, select all the relevant settings. \n\nThe fetchsong function is a small JavaScript function that returns the latest song if the latest song had been played in the last 15 minutes and connected it to an HTTPS endpoint. Here it is in full glory: \n\n```Javascript\nexports = async function (request, response) {\n const filter = {date: {$gt: new Date(new Date().getTime() - 15 * 60000)}};\n const projection = {artist: 1, title: 1, _id: 0};\n\n const songsCollection = context.services.get(\"mongodb-atlas\")\n .db(\"scaleup\")\n .collection(\"songs\");\n const docs = await songsCollection\n .find(filter, projection)\n .sort({date: -1})\n .limit(1).toArray();\n\n const latestSong] = docs;\n response.setBody(latestSong);\n};\n```\n\nAnd voil\u00e0! After embedding a JavaScript snippet on our website to read song data here\u2019s the final outcome:\n\n![Currently played song on Spotify\n\nTo see the results for yourself, visit https://scaleup.com.pl/en/#music. If you don't see anything, don\u2019t worry\u2014we work in the Central European Time Zone, so the office might be currently empty. :) Also, if you need to hire IT specialists here in Poland, don't hesitate to drop us a message. ;) \n\nHuge thanks to John Page for being an inspiration to play with MongoDB's products and to write this article. The source code for the whole project is available on GitHub. 
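The article mentions embedding a JavaScript snippet on the website to read the song data, but the snippet itself isn't shown. A minimal sketch could look like the following; the endpoint URL and the `current-song` element ID are placeholders you would replace with your own, and the empty-body check assumes the function returns nothing when no song has played in the last 15 minutes.

```javascript
// Placeholder URL: use the URL App Services displays for your fetchsong HTTPS endpoint
const SONG_ENDPOINT = "https://data.mongodb-api.com/app/<your-app-id>/endpoint/fetchsong";

async function showCurrentSong() {
  const target = document.getElementById("current-song");
  try {
    const response = await fetch(SONG_ENDPOINT);
    const text = await response.text();
    const song = text ? JSON.parse(text) : null;
    target.textContent = song && song.title
      ? `Now playing: ${song.artist} - ${song.title}`
      : "Nothing playing right now";
  } catch (err) {
    target.textContent = "Could not load the current song";
  }
}

showCurrentSong();
```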
:) ", "format": "md", "metadata": {"tags": ["JavaScript", "Atlas", "Node.js"], "pageDescription": "Learn how Scale Up publishes the title of the song currently playing in their office regardless of the musical source using an old cellphone and MongoDB Atlas Application Services.", "contentType": "Article"}, "title": "Listen Along at Scale Up with Atlas Application Services", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/connectors/bigquery-spark-stored-procedure", "action": "created", "body": "# Spark Up Your MongoDB and BigQuery using BigQuery Spark Stored Procedures\n\nTo empower enterprises that strive to transform their data into insights, BigQuery has emerged as a powerful, scalable, cloud-based data warehouse solution offered by Google Cloud Platform (GCP). Its cloud-based approach allows efficient data management and manipulation, making BigQuery a game-changer for businesses seeking advanced data insights. Notably, one of BigQuery\u2019s standout features is its seamless integration with Spark-based data processing that enables users to further enhance their queries. Now, leveraging BigQuery APIs, users can create and execute Spark stored procedures, which are reusable code modules that can encapsulate complex business logic and data transformations. This feature helps data engineers, data scientists, and data analysts take advantage of BigQuery\u2019s advanced capabilities and Spark\u2019s robust data processing capabilities.\n\nMongoDB, a developer data platform, is a popular choice for storing and managing operational data for its scalability, performance, flexible schema, and real-time capabilities (change streams and aggregation). By combining the capabilities of BigQuery with the versatility of Apache Spark and the flexibility of MongoDB, you can unlock a powerful data processing pipeline.\n\nApache Spark is a powerful open-source distributed computing framework that excels at processing large amounts of data quickly and efficiently. It supports a wide range of data formats, including structured, semi-structured, and unstructured data, making it an ideal choice for integrating data from various sources, such as MongoDB.\n\nBigQuery Spark stored procedures are routines that are executed within the BigQuery environment. These procedures can perform various tasks, such as data manipulation, complex calculations, and even external data integration. They provide a way to modularize and reuse code, making it easier to maintain and optimize data processing workflows. Spark stored procedures use the serverless Spark engine that enables serverless, autoscaling Spark. However, you don\u2019t need to enable Dataproc APIs or be charged for Dataproc when you leverage this new capability. \n\nLet's explore how to extend BigQuery\u2019s data processing to Apache Spark, and integrate MongoDB with BigQuery to effectively facilitate data movement between the two platforms.\n\n## Connecting them together\n\n the MongoDB Spark connector JAR file to Google Cloud Storage to connect and read from MongoDB Atlas. Copy and save the gsutil URI for the JAR file that will be used in upcoming steps.\n\n a MongoDB Atlas cluster with sample data loaded to it. \n2. Navigate to the BigQuery page on the Google Cloud console.\n3. Create a **BigQuery dataset** with the name **spark_run**.\n4. You will type the PySpark code directly into the query editor. 
To create a PySpark stored procedure, click on **Create Pyspark Procedure**, and then select **Create PySpark Procedure**.\n\n BigQuery Storage Admin, Secret Manager Secret Accessor, and Storage Object Admin access to this service account from IAM. \n\n into Google Cloud Secret Manager, or you can hardcode it in the MongoDB URI string itself. \n8. Copy the below Python script in the PySpark procedure editor and click on **RUN**. The snippet takes around two to three minutes to complete. The below script will create a new table under dataset **spark_run** with the name **sample_mflix_comments**.\n\n```python\nfrom pyspark.sql import SparkSession\nfrom google.cloud import secretmanager\n\ndef access_secret_version(secret_id, project_id):\n client = secretmanager.SecretManagerServiceClient()\n name = f\"projects/{project_id}/secrets/{secret_id}/versions/1\"\n response = client.access_secret_version(request={\"name\": name})\n payload = response.payload.data.decode(\"UTF-8\")\n return payload\n# Update project_number, username_secret_id and password_secret_id, comment them out if you did not create the secrets earlier \n\nproject_id = \"\"\nusername_secret_id = \"\"\npassword_secret_id = \"\"\n\nusername = access_secret_version(username_secret_id, project_id)\npassword = access_secret_version(password_secret_id, project_id)\n\n # Update the mongodb_uri directly if with your username and password if you did not create a secret from Step 7, update the hostname with your hostname\nmongodb_uri = \"mongodb+srv://\"+username+\":\"+password+\"@/sample_mflix.comments\"\n\nmy_spark = SparkSession \\\n .builder \\\n .appName(\"myApp\") \\\n .config(\"spark.mongodb.read.connection.uri\", mongodb_uri) \\\n .config(\"spark.mongodb.write.connection.uri\", mongodb_uri) \\\n .getOrCreate()\n\ndataFrame = my_spark.read.format(\"mongodb\").option(\"database\", \"sample_mflix\").option(\"collection\", \"comments\").load()\n\ndataFrame.show()\n\n# Saving the data to BigQuery\ndataFrame.write.format(\"bigquery\") \\\n .option(\"writeMethod\", \"direct\") \\\n .save(\"spark_run.sample_mflix_comments\")\n```\n\n or bq command line with connection type as CLOUD_RESOURCE.\n\n```\n!bq mk \\\n --connection \\\n --location=US \\\n --project_id= \\\n --connection_type=CLOUD_RESOURCE gentext-conn \n```\n\n11. To grant IAM permissions to access Vertex AI from BigQuery, navigate to **External connections** > Find the **gettext-conn** connection > Copy the **Service account id**. Grant the **Vertex AI User** access to this service account from **IAM**.\n12. Create a model using the CREATE MODEL command.\n\n```\nCREATE OR REPLACE MODEL `gcp-pov.spark_run.llm_model`\nREMOTE WITH CONNECTION `us.gentext-conn`\nOPTIONS (ENDPOINT = 'gemini-pro');\n```\n\n13. Run the SQL command against the BigQuery table. This query allows the user to extract the host name from the email leveraging the Gemini Pro model. The resulting output includes the response and safety attributes.\n\n```\nSELECT prompt,ml_generate_text_result\nFROM\nML.GENERATE_TEXT( MODEL `gcp-pov.spark_run.llm_model`,\n (\n SELECT CONCAT('Extract the host name from the email: ', email) AS prompt,\n * FROM `gcp-pov.spark_run.sample_mflix_comments`\n LIMIT 5),\n STRUCT(\n 0.9 AS temperature,\n 100 AS max_output_tokens\n )\n );\n```\n\n14. Here is the sample output showing the prompt as well as the response. The prompt parameter provides the text for the model to analyze. Prompt design can strongly affect the responses returned by the LLM.\n\n by using GoogleSQL queries. \n3. 
BigQuery ML also lets you access LLMs and Cloud AI APIs to perform artificial intelligence (AI) tasks like text generation and machine translation. \n\n## Conclusion\n\nBy combining the power of BigQuery, Spark stored procedures, and MongoDB, you can create a robust and scalable data processing pipeline that leverages the strengths of each technology. BigQuery provides a reliable and scalable data warehouse for storing and analyzing structured data, while Spark allows you to process and transform data from various sources, including semi-structured and unstructured data from MongoDB. Spark stored procedures enable you to encapsulate and reuse this logic, making it easier to maintain and optimize your data processing workflows.\n\n### Further reading\n\n1. Get started with MongoDB Atlas on Google Cloud.\n2. Work with stored procedures for Apache Spark.\n3. Create machine learning models in BigQuery ML.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3cc99ff9b6ad9cec/66155da90c478454e8a349f1/1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdd635dea2d750e73/66155dc254d7c1521e8eea3a/2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta48182269e87f9e7/66155dd2cbc2fbae6d8175ea/3.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb399b53e83efffb9/66155de5be36f52825d96ea5/4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6efef490b3d34cf0/66155dfd2b98e91579101401/5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt56b91d83e11dea5f/66155e0f7cacdc153bd4a78b/6.png", "format": "md", "metadata": {"tags": ["Connectors", "Python", "Spark"], "pageDescription": "", "contentType": "Tutorial"}, "title": "Spark Up Your MongoDB and BigQuery using BigQuery Spark Stored Procedures", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/events/developer-day-gdg-philadelphia", "action": "created", "body": "# Google Developer Day Philadelphia\n\nWelcome to Google Developer Day Philadelphia! Below you can find all the resources you will need for the day.\n\n## Michael's Slide Deck\n* Slides\n\n## Search Lab\n* Slides\n* Intro Lab\n* Search lab: hands-on exercises\n* Survey\n\n## Resources\n* Try Atlas\n* Try Compass\n* Try Relational Migrator\n* Try Vector Search\n\n## Full Developer Day Content\n### Data Modeling and Design Patterns\n* Slides\n* Library application\n* System requirements\n\n### MongoDB Atlas Setup: Hands-on exercises setup and troubleshooting\n* Intro lab: hands-on exercises\n* Data import tool\n\n### Aggregation Pipelines Lab\n* Slides\n* Aggregations lab: hands-on exercises\n\n### Search Lab\n* Slides\n* Search lab: hands-on exercises\n\n### Additional resources\n* Library management system code\n* MongoDB data modeling book\n* Data Modeling course on MongoDB University\n* MongoDB for SQL Pros on MongoDB University\n* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search\n\n## Join the Community\nStay connected, and join our community:\n* Join the New York MongoDB User Group!\n* Sign up for the MongoDB Community Forums.", "format": "md", "metadata": {"tags": ["Atlas", "Google Cloud"], "pageDescription": "Experience the future of technology with GDG Philadelphia at our Build with AI event series & Google I/O Extended! Join us for a half-day event showcasing the latest technologies from Google, including AI, Cloud, and Web development. 
Connect with experts and enthusiasts for learning and networking. Your ticket gives you access to in-person event venues.", "contentType": "Event"}, "title": "Google Developer Day Philadelphia", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/socks5-proxy", "action": "created", "body": "# Connection to MongoDB With Java And SOCKS5 Proxy\n\n## Introduction\n\nSOCKS5 is a standardized protocol for communicating with network services through a proxy server. It offers several\nadvantages like allowing the users to change their virtual location or hide their IP address from the online services.\n\nSOCKS5 also offers an authentication layer that can be used to enhance security.\n\nIn our case, the network service is MongoDB. Let's see how we can connect to MongoDB through a SOCKS5 proxy with Java.\n\n## SOCKS5 with vanilla Java\n\nAuthentication is optional for SOCKS5 proxies. So to be able to connect to a SOCKS5 proxy, you need:\n\n- **proxyHost**: IPv4, IPv6, or hostname of the proxy\n- **proxyPort**: TCP port number (default 1080)\n\nIf authentication is activated, then you'll also need a username and password. Both need to be provided, or it won't\nwork.\n\n- **proxyUsername**: the proxy username (not null or empty)\n- **proxyPassword**: the proxy password (not null or empty)\n\n### Using connection string parameters\n\nThe first method to connect to MongoDB through a SOCKS5 proxy is to simply provide the above parameters directly in the\nMongoDB connection string.\n\n```java\npublic MongoClient connectToMongoDBSock5WithConnectionString() {\n String connectionString = \"mongodb+srv://myDatabaseUser:myPassword@example.org/\" +\n \"?proxyHost=\" +\n \"&proxyPort=\" +\n \"&proxyUsername=\" +\n \"&proxyPassword=\";\n return MongoClients.create(connectionString);\n}\n```\n\n### Using MongoClientSettings\n\nThe second method involves passing these parameters into a MongoClientSettings class, which is then used to create the\nconnection to the MongoDB cluster.\n\n```java\npublic MongoClient connectToMongoDBSocks5WithMongoClientSettings() {\n String URI = \"mongodb+srv://myDatabaseUser:myPassword@example.org/\";\n ConnectionString connectionString = new ConnectionString(URI);\n Block socketSettings = builder -> builder.applyToProxySettings(\n proxyBuilder -> proxyBuilder.host(\"\")\n .port(1080)\n .username(\"\")\n .password(\"\"));\n MongoClientSettings settings = MongoClientSettings.builder()\n .applyConnectionString(connectionString)\n .applyToSocketSettings(socketSettings)\n .build();\n return MongoClients.create(settings);\n}\n```\n\n## Connection with Spring Boot\n\n### Using connection string parameters\n\nIf you are using Spring Boot or Spring Data MongoDB, you can connect like so if you are passing the SOCKS5 parameters in\nthe connection string.\n\nMost of the time, if you are using Spring Boot or Spring Data, you'll need the codec registry to\nsupport the POJO mappings. 
So I included this as well.\n\n```java\npackage com.mongodb.starter;\n\nimport com.mongodb.ConnectionString;\nimport com.mongodb.MongoClientSettings;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport org.bson.codecs.configuration.CodecRegistry;\nimport org.bson.codecs.pojo.PojoCodecProvider;\nimport org.springframework.beans.factory.annotation.Value;\nimport org.springframework.context.annotation.Bean;\nimport org.springframework.context.annotation.Configuration;\n\nimport static org.bson.codecs.configuration.CodecRegistries.fromProviders;\nimport static org.bson.codecs.configuration.CodecRegistries.fromRegistries;\n\n@Configuration\npublic class MongoDBConfiguration {\n\n @Value(\"${spring.data.mongodb.uri}\")\n private String connectionString;\n\n @Bean\n public MongoClient mongoClient() {\n CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());\n CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);\n return MongoClients.create(MongoClientSettings.builder()\n .applyConnectionString(new ConnectionString(connectionString))\n .codecRegistry(codecRegistry)\n .build());\n }\n\n}\n```\n\nIn this case, all the SOCKS5 action is actually happening in the `application.properties` file of your Spring Boot\nproject.\n\n```properties\nspring.data.mongodb.uri=${MONGODB_URI:\"mongodb+srv://myDatabaseUser:myPassword@example.org/?proxyHost=&proxyPort=&proxyUsername=&proxyPassword=\"}\n```\n\n### Using MongoClientSettings\n\nIf you prefer to use the MongoClientSettings, then you can just pass a classic MongoDB URI and handle the different\nSOCKS5 parameters directly in the `SocketSettings.Builder`.\n\n```java\npackage com.mongodb.starter;\n\nimport com.mongodb.Block;\nimport com.mongodb.ConnectionString;\nimport com.mongodb.MongoClientSettings;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\nimport com.mongodb.connection.SocketSettings;\nimport org.bson.codecs.configuration.CodecRegistry;\nimport org.bson.codecs.pojo.PojoCodecProvider;\nimport org.springframework.beans.factory.annotation.Value;\nimport org.springframework.context.annotation.Bean;\nimport org.springframework.context.annotation.Configuration;\n\nimport static org.bson.codecs.configuration.CodecRegistries.fromProviders;\nimport static org.bson.codecs.configuration.CodecRegistries.fromRegistries;\n\n@Configuration\npublic class MongoDBConfigurationSocks5 {\n\n @Value(\"${spring.data.mongodb.uri}\")\n private String connectionString;\n\n @Bean\n public MongoClient mongoClient() {\n CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());\n CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);\n Block socketSettings = builder -> builder.applyToProxySettings(\n proxyBuilder -> proxyBuilder.host(\"\")\n .port(1080)\n .username(\"\")\n .password(\"\"));\n return MongoClients.create(MongoClientSettings.builder()\n .applyConnectionString(new ConnectionString(connectionString))\n .applyToSocketSettings(socketSettings)\n .codecRegistry(codecRegistry)\n .build());\n }\n\n}\n```\n\n## Conclusion\n\nLeveraging a SOCKS5 proxy for connecting to MongoDB in Java offers enhanced security and flexibility. 
Whether through connection string parameters or MongoClientSettings, integrating SOCKS5 functionality is straightforward.\n\nIf you want to read more details, you can check out the SOCKS5 documentation online.\n\nIf you have questions, please head to our Developer Community website where\nthe MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB", "Spring"], "pageDescription": "In this post, we explain the different methods you can use to connect to a MongoDB cluster through a SOCKS5 proxy with vanilla Java and Spring Boot.", "contentType": "Tutorial"}, "title": "Connection to MongoDB With Java And SOCKS5 Proxy", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/neurelo-series-two-lambda", "action": "created", "body": "# Building a Restaurant Locator Using Atlas, Neurelo, and AWS Lambda\n\nReady to build a robust and efficient application that can quickly process real-time data, is capable of adapting to changing environments, and is fully customizable with seamless integration? \n\nThe developer dream trifecta of MongoDB Atlas, Neurelo, and AWS Lambda will propel your cloud-based applications in ways you didn\u2019t know were possible! With this lethal combination, you can build a huge variety of applications, like the restaurant locator we will build in this tutorial. \n\nThis combination of platforms can help developers build scalable, cost-efficient, and performant serverless functions. Some huge benefits are that the Lambda functions used still remain stateless \u2014 data operations are now stateless API calls and there are no stateful connections opened with every Lambda invocation when Neurelo is incorporated in the application. We also are enabling higher performance and lower costs as no execution (and billing) time is spent setting up or tearing down established connections. This also enables significantly higher concurrency of Lambda invocations, as we can leverage built-in connection pooling through Neurelo which allows for you to open fewer connections on your MongoDB instance. \n\nWe will be going over how to properly set up the integration infrastructure to ensure you\u2019re set up for success, and then we will dive into actually building our application. At the end, we will have a restaurant locator that we can use to search for restaurants that fit our desired criteria. Let\u2019s get started! \n\n## Pre-reqs\n\n - MongoDB Atlas account\n - Neurelo account\n - AWS account; Lambda access is necessary \n\n## Setting up our MongoDB cluster \n\nOur first step is to spin up a free MongoDB cluster and download the sample dataset. For help on how to do this, please refer to our tutorial. \n\nFor this tutorial, we will be using the `sample_restaurants` collection that is located inside the sample dataset. Please ensure you have included the correct IP address access for this tutorial, along with a secure username and password as you will need these throughout. \n\nOnce your cluster is up and running, we can start setting up our Neurelo project.\n\n## Setting up our Neurelo project\n\nOnce we have our MongoDB cluster created, we need to create a project in Neurelo. For help on this step, please refer to our first tutorial in this series, Neurelo and MongoDB: Getting Started and Fun Extras. \n\nSave your API key someplace safe. Otherwise, you will need to create a new key if it gets lost. 
Additionally, please ensure your Neurelo project is connected to your MongoDB cluster. For help on grabbing a MongoDB connection string, we have directions to guide you through it. Now, we can move on to setting up our AWS Lambda function. \n\n## Creating our AWS Lambda function\n\nLog into your AWS account and access Lambda either through the search bar or in the \u201cServices\u201d section. Click on the orange \u201cCreate function\u201d button and make sure to press the \u201cAuthor from scratch\u201d option on the screen that pops up. Select a name for your function \u2014 we are using \u201cConnectTest\u201d to keep things simple \u2014 and then, choose \u201cPython 3.12\u201d for your runtime, since this is a Python tutorial! Your Lambda function should look like this prior to hitting \u201cCreate function.\u201d\n\nOnce you\u2019re taken to the \u201cFunction overview\u201d page, we can start writing our code to perfectly integrate MongoDB Atlas, Neurelo, and AWS Lambda. Let\u2019s dive into it.\n\n## Part 1: The integration\n\nLuckily, we don\u2019t need to import any requirements for this Lambda function tutorial and we can write our code directly into the function we just created. \n\nThe first step is to import the packages `urllib3` and `json` with the line:\n\n```\nimport urllib3, json\n```\nThese two packages hold everything we need to deal with our connection strings and make it so we don\u2019t need to write our code in a separate IDE. \n\nOnce we have our imports in, we can configure our API key to our Neurelo environment. We are using a placeholder `API_KEY`, and for ease in this tutorial, you can put your key directly in. But it\u2019s not good practice to ever hardcode your keys in code, and in a production environment, it should never be done. \n\n```\n# Put in your API Key to connect to your Neurelo environment\nNEURELO_API_KEY = \u2018API_KEY\u2019\n```\nOnce you\u2019ve set up your API key connection, we can set up our headers for the REST API call. For this, we can take the auto-generated `lambda_function` function and edit it to better suit our needs:\n\n```\ndef lambda_handler(event, context):\n \n # Setup the headers\n headers = {\n 'X-API-KEY': NEURELO_API_KEY\n }\n \n # Creating a PoolManager instance for sending HTTP requests\n http = urllib3.PoolManager()\n```\nHere, we are creating a dictionary named `headers` to set the value of our API key. This step is necessary so Neurelo can authenticate our API request and we can return our necessary documents. We are then utilizing the `PoolManager` class to manage our server connections. This is an efficient way to ensure we are reusing connections with Lambda instead of creating a new connection with each individual call. For this tutorial, we are only using one connection, but if you have a more complex Lambda or a project with the need for multiple connections, you will be able to see the magic of the `PoolManager` class a bit more. \n\nNow, we are ready to set up our first API call! Please recall that in this first step, we are connecting to our \u201crestaurants\u201d collection within our `sample_restaurants` database and we are returning our necessary documents. \n\nWe have decided that we want to retrieve a list of restaurants from this collection that fit specific criteria: These restaurants are located in the borough of Brooklyn, New York, and serve American cuisine. Prior to writing the code below, we suggest you take a second to look through the sample database to view the fields inside our documents. 
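For orientation, a single restaurant document — trimmed down to the fields this tutorial touches — has roughly the following shape (the values shown here are illustrative, not copied from the dataset):

```
{
  "_id": "...",
  "name": "Example Grill",
  "borough": "Brooklyn",
  "cuisine": "American",
  "address": { "building": "123", "coord": [ -73.98, 40.68 ], "street": "Main St", "zipcode": "11201" },
  "grades": [ { "date": "2014-03-03T00:00:00Z", "grade": "A", "score": 11 }, ... ],
  "restaurant_id": "41234567"
}
```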
\n\nSo now that we\u2019ve defined the query parameters we are interested in, let\u2019s translate it into a query request. We are going to be using three parameters for our query: \u201cfilter,\u201d \u201ctake,\u201d and \u201cselect.\u201d These are the same parameter keys from our first article in this series, so please refer back to it if you need help. We are using the \u201cfilter\u201d parameter to ensure we are receiving restaurants that fit our criteria of being in Brooklyn and that are American, the \u201ctake\u201d parameter is so we only return five documents instead of thousands (our collection has over 25,000 documents!), and the \u201cselect\u201d parameter is so that only our specific fields are being returned in our output. \n\nOur query request will look like this:\n```\n# Define the query parameters\n params1 = {\n 'filter': '{\"AND\": {\"borough\": {\"equals\": \"Brooklyn\"}, \"cuisine\": {\"equals\": \"American\"}}}',\n 'take': '5',\n 'select': '{\"id\": false, \"name\": true, \"borough\": true, \"cuisine\": true}',\n }\n```\nDon\u2019t forget to send a GET request with our necessary parameters, and set up some print statements so we can see if our request was successful. Once completed, the whole code block for our Part 1 should look something like this: \n```\nimport urllib3, json\n\n# Configure the API Key for our Neurelo environment\nNEURELO_API_KEY = 'API_KEY'\n\ndef lambda_handler(event, context):\n \n # Setup the headers\n headers = {\n 'X-API-KEY': NEURELO_API_KEY\n }\n \n # Creating a PoolManager instance for sending HTTP requests\n http = urllib3.PoolManager()\n\n # Choose the \"restaurants\" collection from our Neurelo environment connected to 'sample_restaurants'\n api1 = 'https://us-east-2.aws.neurelo.com/rest/restaurants'\n \n # Define the query parameters\n params1 = {\n 'filter': '{\"AND\": {\"borough\": {\"equals\": \"Brooklyn\"}, \"cuisine\": {\"equals\": \"American\"}}}',\n 'take': '5',\n 'select': '{\"id\": false, \"name\": true, \"borough\": true, \"cuisine\": true}',\n }\n \n # Send a GET request with URL parameters\n response = http.request(\"GET\", api1, headers=headers, fields=params1)\n \n # Print results if the request was successful\n if response.status == 200:\n # Print the JSON content of the response\n print ('Restaurants Endpoint: ' + json.dumps(json.loads(response.data), indent=4))\n```\nAnd our output will look like this:\n\nCongratulations! As you can see, we have successfully returned five American cuisine restaurants located in Brooklyn, and we have successfully integrated our MongoDB cluster with our Neurelo project and have used AWS Lambda to access our data. \n\nNow that we\u2019ve set everything up, let\u2019s move on to the second part of our tutorial, where we will filter our results with a custom API endpoint for the best restaurants possible. \n\n## Part 2: Filtering our results further with a custom API endpoint\n\nBefore we can call our custom endpoint to filter for our desired results, we need to create one. While Neurelo has a large list of auto-generated endpoints available for your project, sometimes we need an endpoint that we can customize with a complex query to return information that is nuanced. From the sample database in our cluster, we can see that there is a `grades` field where the grade and score received by each restaurant exist. \n\nSo, what if we want to return documents based on their scores? Let\u2019s say we want to expand our search and find restaurants that are really good restaurants. 
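Before building the endpoint, it helps to peek at the `grades` field directly (in mongosh or Compass) to see exactly what we will be filtering on — the custom query below treats `grades[0]` as the most recent inspection. A quick projection such as this, run against the cluster rather than through Neurelo, shows the two latest grade entries per restaurant:

```
use sample_restaurants
db.restaurants.findOne({ borough: "Brooklyn", cuisine: "American" }, { name: 1, grades: { $slice: 2 } })
```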
\n\nHead over to Neurelo and access the \u201cDefinitions\u201d tab on the left-hand side of the screen. Go to the \u201cCustom Queries\u201d tab and create a complex query named \u201cgetGoodRestaurants.\u201d For more help on this section, please refer to the first article in this series for a more detailed explanation. \n\nWe want to filter restaurants where the most recent grades are either \u201cA\u201d or \u201cB,\u201d and the latest grade score is greater than 10. Then, we want to aggregate the restaurants by cuisine and borough and list the restaurant name, so we can know where to go!\n\nOur custom query will look like this:\n\n```\n{\n \"aggregate\": \"restaurants\",\n \"pipeline\": \n {\n \"$match\": {\n \"borough\": \"Brooklyn\",\n \"cuisine\": \"American\",\n \"grades.0.grade\": {\n \"$in\": [\n \"A\",\n \"B\"\n ]\n },\n \"grades.1.grade\": {\n \"$in\": [\n \"A\",\n \"B\"\n ]\n },\n \"grades.0.score\": {\n \"$gt\": 10\n }\n }\n },\n {\n \"$limit\": 5\n },\n {\n \"$group\": {\n \"_id\": {\n \"cuisine\": \"$cuisine\",\n \"borough\": \"$borough\"\n },\n \"restaurants_info\": {\n \"$push\": {\n \"name\": \"$name\" \n}\n }\n }\n }\n ],\n \"cursor\": {}\n}\n\n```\nGreat! Now that we have our custom query in place, hit the \u201cCommit\u201d button at the top of the screen, add a commit message, and make sure that the \u201cDeploy to environment\u201d option is selected. This is a crucial step that will ensure that we are committing our custom query into the definitions repo for the project and deploying the changes to our environment. \n\nNow, we can head back to Lambda and incorporate our second endpoint to return restaurants that have high scores serving our desired food in our desired location. \n\nAdd this code to the bottom of the previous code we had written. \n\n```\n# Choose the custom-query endpoint from our Neurelo environment connected to 'sample_restaurants'\n api2 = 'https://us-east-2.aws.neurelo.com/custom/getGoodRestaurants'\n\n # Send a GET request with URL parameters\n response = http.request(\"GET\", api2, headers=headers)\n \n if response.status == 200:\n # Print the JSON content of the response\n print ('Custom Query Endpoint: ' + json.dumps(json.loads(response.data), indent=4))\n```\nHere, we are choosing our custom endpoint, `getGoodRestaurants`, and then sending a GET request to acquire the necessary information. \n\nPlease deploy the changes in Lambda and hit the \u201cTest\u201d button. \n\nYour output will look like this:\n![[Fig 4: custom complex query endpoint output in Lambda]\n\nAs you can see from the results above, we have received a sample size of five American cuisine, Brooklyn borough restaurants that meet our criteria and are considered good restaurants! \n\n## Conclusion\n\nIn this tutorial, we have covered how to properly integrate a MongoDB Atlas cluster with our Neurelo project and return our desired results by using AWS Lambda. We have shown the full process of utilizing our Neurelo project automated API endpoints and even how to use unique and fully customizable endpoints as well! \n\nFor more help with using MongoDB Atlas, Neurelo, and AWS Lambda, please visit the hyperlinked documentation. \n\n> This tutorial is the second in our series. 
Please check out the first tutorial: Neurelo and MongoDB: Getting Started and Fun Extras.", "format": "md", "metadata": {"tags": ["Atlas", "Python", "Neurelo"], "pageDescription": "Follow along with this in-depth tutorial covering the integration of MongoDB Atlas, Neurelo, and AWS Lambda to build a restaurant locator.", "contentType": "Tutorial"}, "title": "Building a Restaurant Locator Using Atlas, Neurelo, and AWS Lambda", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/getting-started-atlas-stream-processing-security", "action": "created", "body": "# Getting Started With Atlas Stream Processing Security\n\nSecurity is paramount in the realm of databases, and the safeguarding of streaming data is no exception. Stream processing services like Atlas Stream Processing handle sensitive data from a variety of sources, making them prime targets for malicious activities. Robust security measures, including encryption, access controls, and authentication mechanisms, are essential to mitigating risks and upholding the trustworthiness of the information flowing through streaming data pipelines. \n\nIn addition, regulatory compliance may impose comprehensive security protocols and configurations such as enforcing auditing and separation of duties. In this article, we will cover the security capabilities of Atlas Stream Processing, including access control, and how to configure your environment to support least privilege access. Auditing and activity monitoring will be covered in a future article.\n\n## A primer on Atlas security\n\nRecall that in MongoDB Atlas, organizations, projects, and clusters are hierarchical components that facilitate the organization and management of MongoDB resources. An organization is a top-level entity representing an independent deployment of MongoDB Atlas, and it contains one or more projects. \n\nA project is a logical container within an organization, grouping related resources and serving as a unit for access control and billing. Within a project, MongoDB clusters are deployed. Clusters are instances of MongoDB databases, each with its own configurations, performance characteristics, and data. Clusters can span multiple cloud regions and availability zones for high availability and disaster recovery. \n\nThis hierarchy allows for the efficient management of MongoDB deployments, access control, and resource isolation within MongoDB Atlas.\n\n authenticate with Atlas UI, API, or CLI only (a.k.a the control plane). Authorization includes access to an Atlas organization and the Atlas projects within the organization. \n\n or via a MongoDB driver like the MongoDB Java driver. If you have previously used a self-hosted MongoDB server, Atlas database users are the equivalent of the MongoDB user. MongoDB Atlas supports a variety of authentication methods such as SCRAM (username and password), LDAP Proxy Authentication, OpenID Connect, Kerberos, and x.509 Certificates. While clients use any one of these methods to authenticate, Atlas services, such as Atlas Data Federation, access other Atlas services like Atlas clusters via temporary x.509 certificates. 
This same concept is used within Atlas Stream Processing and will be discussed later in this post.\n\n**Note:** Unless otherwise specified, a \u201cuser\u201d in this article refers to an Atlas database user.\n\n.\n\nAuthentication to SPIs operates similarly to Atlas clusters, where only users defined within the Atlas data plane (e.g., Atlas database users) are allowed to connect to and create SPIs. It's crucial to grasp this concept because SPIs and Atlas clusters are distinct entities within an Atlas project, yet they share the same authentication process via Atlas database users.\n\nBy default, **only Atlas users who are Project Owners or Project Stream Processing Owners can create Stream Processing Instances.** These users also have the ability to create, update, and delete connection registry connections associated with SPIs. \n\n### Connecting to the Stream Processing Instance\n\nOnce the SPI is created, Atlas database users can connect to it just as they would with an Atlas cluster through a client tool such as mongosh. Any Atlas database user with the built-in \u201creadWriteAnyDatabase\u201d or \u201catlasAdmin\u201d can connect to any SPIs within the project.\n\nFor users without one of these built-in permissions, or for scenarios where administrators want to follow the principle of least privilege, administrators can create a custom database role made up of specific actions.\n\n#### Custom actions\n\nAtlas Stream Processing introduces a number of custom actions that can be assigned to a custom database role. For example, if administrators wanted to create an operations-level role that could only start, stop, and view stream statistics, they could create a database user role, \u201cASPOps,\u201d and add the startStreamProcessor, stopStreamProcessor, and listStreamProcessors actions. The administrator would then grant this role to the user. \n\nThe following is a list of Atlas Stream Processing actions:\n\n- createStreamProcessor\n- processStreamProcessor\n- startStreamProcessor\n- stopStreamProcessor\n- dropStreamProcessor\n- sampleStreamProcessor\n- listStreamProcessors\n- listConnections\n- streamProcessorStats\n\nOne issue you might realize is if a database user with the built-in \u201creadWriteAnyDatabase\u201d has all these actions granted by default, or if a custom role has these actions, they have these action permissions for all Stream Processing Instances within the Atlas project! 
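To make the distinction concrete: once connected to an SPI with mongosh, a user holding only the ops-style custom role described above could run commands like the ones below, while anything needing `createStreamProcessor` or `dropStreamProcessor` would be rejected. The processor name here is purely illustrative:

```javascript
// Allowed with listStreamProcessors
sp.listStreamProcessors()

// Allowed with startStreamProcessor, streamProcessorStats, and stopStreamProcessor
sp.solarDemo.start()
sp.solarDemo.stats()
sp.solarDemo.stop()

// Not allowed for this role, since createStreamProcessor was never granted:
// sp.createStreamProcessor("newProcessor", [{ $source: { connectionName: "sample_stream_solar" } }])
```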
If your organization wants to lock this down and restrict access to specific SPIs, they can do this by navigating to the \u201cRestrict Access\u201d section and selecting the desired SPIs.\n\n or read more about MongoDB Atlas Stream Processing in our documentation.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt493531d0261fd667/6629225351b16f1ecac4e6cd/1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5a6ba7eaf44c67a2/662922674da2a996e6ff2ea8/2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcdd67839c2a52fa2/6629227fb0ec7701ffd6e743/3.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5188a9aabfeae08c/66292291b0ec775eb8d6e747/4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt586319eaf4e5422b/662922ab45f9893914cf6a93/5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7cfd6608d0aca8b0/662922c3b054410cfd9a038c/6.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb01ee1b6f7f4b89c/662922d9c9de46ee62d4944f/7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc251c6f17b584861/662922edb0ec77eee0d6e750/8.png", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Take a deep dive into Atlas Stream Processing security. Learn how Atlas Stream Processing achieves a principle of least privilege.", "contentType": "Tutorial"}, "title": "Getting Started With Atlas Stream Processing Security", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/ef-core-ga-updates", "action": "created", "body": "# What's New in the MongoDB Provider for EF Core?\n\nExciting news! As announced at .local NYC, the MongoDB Provider for Entity Framework (EF) has gone into General Availability (GA) with the release of version 8.0 in NuGet. The major version numbers are set to align with the version number of .NET and EF so the release of 8.0 means the provider now officially supports .NET 8 and EF 8! \ud83c\udf89\n\nIn this article, we will take a look at four highlights of what\u2019s new and how you can add the features to your EF projects today.\n\n## Prerequisites\n\nThis will be an article with code snippets, so it is assumed you have some knowledge of not just MongoDB but also EF. If you want to see an example application in Blazor that uses the new provider and how to get started implementing the CRUD operations, there is a previous tutorial I wrote on getting started that will take you from a blank application all the way to a working system.\n\n> Should you wish to see the application from which the code snippets in this article are taken, you can find it on GitHub.\n## Support for embedded documents\nWhile the provider was in preview, it wasn\u2019t possible to handle lists or embedded documents. But this has changed in GA. 
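To ground the snippets that follow, the entity classes behind them might look roughly like this. The property names mirror the sample_restaurants documents, but these classes are an assumption for illustration rather than the exact code from the example repository (a string `Id` presumes the value-converter setup discussed later in this article):

```csharp
using System;
using System.Collections.Generic;

public class Restaurant
{
    public string Id { get; set; }                 // maps to _id (kept as a string via a value converter)
    public string Name { get; set; }
    public string Borough { get; set; }
    public string Cuisine { get; set; }
    public Address Address { get; set; }           // embedded document
    public List<Grade> Grades { get; set; }        // array of embedded documents
    public string RestaurantId { get; set; }
    public bool IsTestData { get; set; }
}

public class Address
{
    public string Building { get; set; }
    public double[] Coord { get; set; }
    public string Street { get; set; }
    public string Zipcode { get; set; }
}

public class Grade
{
    public DateTime Date { get; set; }
    public string GradeLetter { get; set; }
    public int Score { get; set; }
}
```

A `DbContext` subclass then only needs to expose a `DbSet<Restaurant>` property, which is what the `dbContext.Restaurants` queries below operate on.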
It is now just as easy as with the MongoDB C# driver to handle embedded documents and arrays in your application.\n\nIn MongoDB\u2019s sample restaurants collection from the sample dataset, the documents have an address field, which is an embedded document, and a grades field which contains an array of embedded documents representing each time the restaurant was graded.\n\nJust like before, you can have a C# model class that represents your restaurant documents and classes for each embedded document, and the provider will take care of mapping to and from the documents from the database to those classes and making those available inside your DbSet.\n\nWe can then access the properties on that class to display data retrieved from MongoDB using the provider.\n\n```csharp\nvar restaurants = dbContext.Restaurants.AsNoTracking().Take(numOfDocsToReturn).AsEnumerable();\n\n foreach (var restaurant in restaurants)\n {\n Console.WriteLine($\"{restaurant.Id.ToLower()}: {restaurant.Name} - {restaurant.Borough}, {restaurant.Address.Zipcode}\");\n foreach (var grade in restaurant.Grades)\n {\n Console.WriteLine($\"Grade: {grade.GradeLetter}, Score: {grade.Score}\");\n }\n Console.WriteLine(\"--------------------\");\n }\n\n```\n\nThis code is pretty straightforward. It creates an IEnumerable of restaurants from querying the Dbset, only selecting (using the Take method) the number of requested restaurant documents. It then loops through each returned restaurant and prints out data from it, including the zip code from the embedded address document.\n\nBecause grades is an array of grade documents, there is also an additional loop to access data from each document in the array.\n\nCreating documents is also able to support embedded documents. As expected, you can create a new Restaurant object and new versions of both the Address and Grade objects to populate those fields too.\n\n```csharp\nnew Restaurant()\n{\n Id = \"5678\",\n Name = \"My Awesome Restaurant\",\n Borough = \"Brooklyn\",\n Cuisine = \"American\",\n Address = new Address()\n {\n Building = \"123\",\n Coord = new double] { 0, 0 },\n Street = \"Main St\",\n Zipcode = \"11201\"\n },\n Grades = new List()\n {\n new Grade()\n {\n Date = DateTime.Now,\n GradeLetter = \"A\",\n Score = 100\n }\n },\n IsTestData = true,\n RestaurantId = \"123456\"\n\n```\n\nThen, just like with any EF code, you can call Add on the db context, passing in the object to insert and call save changes to sync the db context with your chosen storage \u2014 in this case, MongoDB.\n\n```csharp\ndbContext.Add(newResturant);\n\nawait dbContext.SaveChangesAsync();\n\n```\n## Detailed logging and view of queries\nAnother exciting new feature available in the GA is the ability to get more detailed information, to your logging provider of choice, about what is going on under the hood.\n\nYou can achieve this using the LogTo and EnableSensitiveLogging methods, available from the DbContextOptionsBuilder in EF. 
For example, you can log to your own logger, logging factory, or even Console.Write.\n\n```csharp\npublic static RestaurantDbContext Create(IMongoDatabase database) =>\n new(new DbContextOptionsBuilder()\n .LogTo(Console.WriteLine)\n .EnableSensitiveDataLogging()\n .UseMongoDB(database.Client, database.DatabaseNamespace.DatabaseName)\n .Options);\n\n```\n\nOne of the reasons you might choose to do this, and the reason why it is so powerful, is that it will show you what the underlying query was that was used to carry out your requested LINQ.\n\n![Logging showing an aggregation query to match on an object id and limit to 1 result\n\nThis can be helpful for debugging purposes, but also for learning more about MongoDB as well as seeing what fields are used most in queries and might benefit from being indexed, if not already an index.\n## BSON attributes\nAnother feature that has been added that is really useful is support for the BSON attributes. One of the most common use cases for these is to allow for the use of different field names in your document versus the property name in your class.\n\nOne of the most often seen differences between MongoDB documents and C# properties is in the capitalization. MongoDB documents, including fields in the restaurant documents, use lowercase. But in C#, it is common to use camel casing. We have a set of naming convention packs you can use in your code to apply class-wide handling of that, so you can specify once that you will be using that convention, such as camel case in your code, and it will automatically handle the conversion. But sometimes, that alone isn\u2019t enough.\n\nFor example, in the restaurant data, there is a field called \u201crestaurant_id\u201d and the most common naming convention in C# would be to call the class property \u201cRestaurantId.\u201d As you can see, the difference is more than just the capitalization. In these instances, you can use attributes from the underlying MongoDB driver to specify what the element in the document would be.\n\n```csharp\nBsonElement(\"restaurant_id\")]\npublic string RestaurantId { get; set; }\n```\n\nOther useful attributes include the ```[BsonId]``` attribute, to specify which property is to be used to represent your _id field, and ```[BsonRequired]```, which states that a field is required.\n\nThere are other BSON attributes as well, already in the C# driver, that will be available in the provider in future releases, such as ```[BsonDiscriminator]``` and ```[BsonGuideRepresentation]```.\n\n## Value converters\nLastly, we have value converters. These allow you to convert the type of data as it goes to/from storage.\n\nThe one I use the most is a string as the type for the Id property instead of the ObjectId data type, as this can be more beneficial when using web frameworks such as Blazor, where the front end will utilize that property. 
Before GA, you would have to set your Id property to ObjectId, such as: \n```csharp\n public ObjectId Id { get; set; }\n```\n\nHowever, you might prefer to use string because of the string-related methods available or for other reasons, so now you can use:\n```csharp\n public string Id { get; set; }\n```\n\nTo enable the provider to handle mapping an incoming _id value to the string type, you use HasConversion on the entity type.\n\n```csharp\nmodelBuilder.Entity ()\n .Property(r => r.Id)\n .HasConversion();\n```\n\nIt means if you want to, you can then manipulate the value, such as converting it to lowercase more easily.\n\n```csharp\nConsole.WriteLine(restaurant.Id.ToLower());\n```\n\nThere is one thing, though, to take note of and that is when creating documents/entities. Although MongoDB can support not specifying an _id \u2014 because if it is missing, one will be automatically generated \u2014 EF requires that a key not be null. Since the _id field is the primary key in MongoDB documents, EF will error when creating a document if you don\u2019t provide an id.\n\nThis can easily be solved by creating a new ObjectId and casting to a string when creating a new document, such as a new restaurant.\n\n```csharp\nId = new ObjectId().ToString()\n```\n## Summary and roadmap\nToday is a big milestone in the journey for the official MongoDB Provider for EF Core, but it is by no means the end of the journey. Work is only just beginning!\n\nYou have read today about some of the highlights of the release, including value converters, support for embedded documents, and detailed logging to see how a query was generated and used under the hood. But there is not only more in this release, thanks to the hard work of engineers in both MongoDB and Microsoft, but more to come.\n\nThe code for the provider is all open source so you can see how it works. But even better, the [Readme contains the roadmap, showing you what is available now, what is to come, and what is out of scope.\n\nPlus, it has a link to where you can submit issues or more excitingly, feature requests!\n\nSo get started today, taking advantage of your existing EF knowledge and application code, while enjoying the benefits of MongoDB!", "format": "md", "metadata": {"tags": ["C#"], "pageDescription": "Learn more about the new features in the GA release of the MongoDB Provider for EF Core.\n", "contentType": "Article"}, "title": "What's New in the MongoDB Provider for EF Core?", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/mongodb-php-symfony-rental-workshop", "action": "created", "body": "# Symfony and MongoDB Workshop: Building a Rental Listing Application\n\n## Introduction\n\nWe are pleased to release our MongoDB and Symfony workshop to help PHP developers build better applications with MongoDB.\n\nThe workshop guides participants through developing a rental listing application using the Symfony framework and MongoDB. In this article, we will focus on creating a \"Rental\" main page feature, showcasing the integration between Symfony and MongoDB.\n\nThis project uses MongoDB Doctrine ODM, which is an object-document mapper (ODM) for MongoDB and PHP. It provides a way to work with MongoDB in Symfony, using the same principles as Doctrine ORM for SQL databases. 
Its main features include:\n- Mapping of PHP objects to MongoDB documents.\n- Querying MongoDB using an expressive API.\n- Integration with Symfony's dependency injection and configuration system.\n\n## Prerequisites\n\n- Basic understanding of PHP and Symfony\n- Familiarity with MongoDB and its query language\n- PHP 7.4 or higher installed\n- Symfony 5.2 or higher installed\n- MongoDB Atlas cluster\n- Composer for managing PHP dependencies\n\nEnsure you have the MongoDB PHP driver installed and configured with Symfony. For installation instructions, visit MongoDB PHP Driver Installation.\n\n## What you will learn\n\n- Setting up a MongoDB database for use with Symfony\n- Creating a document schema using Doctrine MongoDB ODM\n- Developing a controller in Symfony to fetch data from MongoDB\n- Displaying data in a Twig template\n- Best practices for integrating Symfony with MongoDB\n\n## Workshop content\n\n### Step 1: Setting up your project\n\nFollow the guide to set the needed prerequisites. \n\nThose steps cover how to install the needed PHP tools and set up your MongoDB Atlas project and cluster.\n\n### Step 2: Configuring the Symfony project and connecting the database to the ODM\n\nFollow the Quick Start section to connect MongoDB Atlas and build the first project files to connect the ODM classes to the database collections.\n\n### Step 3: Building and testing the application \n\nIn this section, you will create the controllers, views, and business logic to list, search, and book rentals:\n- Building the application\n- Testing the application\n\n### Cloud deployment\n\nA very neat and handy ability is a chapter allowing users to seamlessly deploy their applications using MongoDB Atlas and Symfony to the platform.sh cloud.\n\n## Conclusion\n\nThis workshop provides hands-on experience in integrating MongoDB with Symfony to build a rental listing application. Participants will learn how to set up their MongoDB environment, define document schemas, interact with the database using Symfony's controllers, and display data using Twig templates.\nFor further exploration, check out the official Symfony documentation, Doctrine MongoDB guide and MongoDB manual.\n\nStart building with Atlas today! If you have questions or want to discuss things further, visit our community.\n\n ## Frequently asked questions (FAQ)\n\n**Q: Who should attend the Symfony and MongoDB rental workshop?**\n\n**A**: This workshop is designed for PHP developers who want to enhance their skills in building web applications using Symfony and MongoDB. A basic understanding of PHP, Symfony, and MongoDB is recommended to get the most out of the workshop.\n\n**Q: What are the prerequisites for the workshop?**\n\n**A**: Participants should have a basic understanding of PHP and Symfony, familiarity with MongoDB and its query language, PHP 7.4 or higher, Symfony 5.2 or higher, a MongoDB Atlas cluster, and Composer installed on their machine.\n\n**Q: What will I learn in the workshop?**\n\n**A**: You will learn how to set up a MongoDB database with Symfony, create a document schema using Doctrine MongoDB ODM, develop a Symfony controller to fetch data from MongoDB, display data in a Twig template, and understand best practices for integrating Symfony with MongoDB.\n\n**Q: How long is the workshop?**\n\n**A**: The duration of the workshop can vary based on the pace of the participants. 
However, it's designed to be comprehensive yet concise enough to be completed in a few sessions.\n\n**Q: Do I need to install anything before the workshop?**\n\n**A**: Yes, you should have PHP, Symfony, MongoDB Atlas, and Composer installed on your computer. Also, ensure the MongoDB PHP driver is installed and configured with Symfony. Detailed installation instructions are provided in the prerequisites section.\n\n**Q: Is there any support available during the workshop?**\n\n**A**: Yes, support will be available through various channels including the workshop forums, direct messaging with instructors, and the MongoDB community forums.\n\n**Q: Can I access the workshop materials after completion?**\n\n**A**: Yes, all participants will have access to the workshop materials, including code samples and documentation, even after the workshop concludes.\n\n**Q: How does this workshop integrate with MongoDB Atlas?**\n\n**A**: The workshop includes a module on setting up and connecting your application with a MongoDB Atlas cluster, allowing you to experience a real-world scenario of deploying a Symfony application backed by a managed MongoDB service.\n\n**Q: What is Doctrine MongoDB ODM?**\n\n**A**: Doctrine MongoDB ODM (Object-Document Mapper) is a library that provides a way to work with MongoDB in Symfony using the same principles as Doctrine ORM for SQL databases. It offers features like the mapping of PHP objects to MongoDB documents and querying MongoDB with an expressive API.\n\n**Q: Can I deploy the application built during the workshop?**\n\n**A**: Yes, the workshop includes a section on cloud deployment, with instructions on deploying your application using MongoDB Atlas and Symfony to a cloud platform, such as Platform.sh.\n\n**Q: Where can I find more resources to learn about the Symfony and MongoDB integration?**\n\n**A**: For further exploration, check out the official Symfony documentation, Doctrine MongoDB ODM guide, and MongoDB manual. Links to these resources are provided in the conclusion section of the workshop.\n", "format": "md", "metadata": {"tags": ["MongoDB", "PHP"], "pageDescription": "", "contentType": "Tutorial"}, "title": "Symfony and MongoDB Workshop: Building a Rental Listing Application", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/go/go-concurrency-graceful-shutdown", "action": "created", "body": "# Concurrency and Gracefully Closing the MDB Client\n\n# Concurrency and Gracefully Closing the MDB Client\n\nIn the previous article and the corresponding video, we learned to persist the data that was exchanged with our HTTP server using MongoDB. We used the MongoDB driver for Go to access a **free** MongoDB Atlas cluster and use instances of our data directly with it.\n\nIn this article, we are going to focus on a more advanced topic that often gets ignored: how to properly shut down our server. This can be used with the `WaitGroups` provided by the `sync` package, but I decided to do it using goroutines and channels for the sake of getting to cover them in a more realistic but understandable use case.\n\nIn the latest version of the code of this program, we had set a way to properly close the connection to the database. However, we had no way of gracefully stopping the web server. Using Control+C closed the server immediately and that code was never executed.\n\n## Use custom multiplexer\n\n1. 
Before we are able to customize the way our HTTP server shuts down, we need to organize the way it is built. First, the routes we created are added to the `DefaultServeMux`. We can create our own router instead, and add the routes to it (instead of the old ones).\n \n ```go\n router := http.newservemux()\n router.handlefunc(\"get /\", func(w http.responsewriter, r *http.request) {\n w.write(]byte(\"HTTP caracola\"))\n })\n router.handlefunc(\"post /notes\", createNote)\n ```\n2. The router that we have just created, together with other configuration parameters, can be used to create an `http.Server`. Other parameters can also be set: Read the [documentation for this one.\n \n ```go\n server := http.Server{\n Addr: serverAddr,\n Handler: router,\n }\n ```\n3. Use this server to listen to connections, instead of the default one. Here, we don't need parameters in the function because they are provided with the `server` instance, and we are invoking one of its methods.\n \n ```go\n log.Fatal(server.ListenAndServe())\n ```\n4. If you compile and run this version, it should behave exactly the same as before.\n5. The `ListenAndServe()` function returns a specific error when the server is closed with a `Shutdown()`. Let's handle it separately.\n \n ```go\n if err := server.ListenAndServe(); !errors.Is(err, http.ErrServerClosed) {\n log.Fatalf(\"HTTP server error %v\\n\", err)\n }\n ```\n\n## Use shutdown function on signal interrupt\n has all the code for this series so you can follow along. The topics covered in it are the foundations that you need to know to produce full-featured REST APIs, back-end servers, or even microservices written in Go. The road is in front of you and we are looking forward to learning what you will create with this knowledge.\n\nStay curious. Hack your code. See you next time!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6825b2a270c9bd36/664b2922428432eba2198f28/signal.jpg", "format": "md", "metadata": {"tags": ["Go"], "pageDescription": "A practical explanation on how to use goroutines and channels to achieve a graceful shutdown of the server and get the most out of it.", "contentType": "Tutorial"}, "title": "Concurrency and Gracefully Closing the MDB Client", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-search-local-unit-testing", "action": "created", "body": "# How to Enable Local and Automatic Testing of Atlas Search-Based Features\n\n## Introduction\n\nAtlas Search enables you to perform full-text queries on your MongoDB database. In this post, I want to show how you can\nuse test containers to write integration tests for Atlas Search-based queries, so that you can run them locally and in\nyour CI/CD pipeline without the need to connect to an actual MongoDB Atlas instance.\n\nTL;DR: All the source code explained in this post is available on GitHub:\n\n```bash\ngit clone git@github.com:mongodb-developer/atlas-search-local-testing.git\n```\n\nMongoDB Atlas Search is a powerful combination of a document-oriented database and full-text search capabilities. This\nis not only valuable for use cases where you want to perform full-text queries on your data. With Atlas Search, it is\npossible to easily enable use cases that would be hard to implement in standard MongoDB due to certain limitations.\n\nSome of these limitations hit us in a recent project in which we developed a webshop. 
The rather obvious requirement\nfor this shop included that customers should be able to filter products and that the filters should show how many items\nare available in each category. Over the course of the project, we kept increasing the number of filters in the\napplication. This led to two problems:\n\n- We wanted customers to be able to arbitrarily choose filters. Since every filter needs an index to run efficiently,\n and since indexes can\u2018t be combined (intersected), this leads to a proliferation of indexes that are hard to\n maintain (in addition MongoDB allows only 64 indexes, adding another complexity level).\n\n- With an increasing number of filters, the calculation of the facets for indicating the number of available items in\n each category also gets more complex and more expensive.\n\nAs the developer effort to handle this complexity with standard MongoDB tools grew larger over time, we decided to give\nAtlas Search a try. We knew that Atlas Search is an embedded full-text search in MongoDB Atlas based on Apache Lucene\nand that Lucene is a mighty tool for text search, but we were actually surprised at how well it supports our filtering use\ncase.\n\nWith Atlas Search, you can create one or more so-called search indexes that contain your documents as a whole or just\nparts of them. Therefore, you can use just one index for all of your queries without the need to maintain additional\nindexes, e.g., for the most used filter combinations. Plus, you can also use the search index to calculate the facets\nneeded to show item availability without writing complex queries that are not 100% backed up by an index.\n\nThe downside of this approach is that Atlas Search makes it harder to write unit or integration tests. When you're using\nstandard MongoDB, you'll easily find some plug-ins for your testing framework that provide an in-memory MongoDB to run\nyour tests against, or you use some kind of test container to set the stage for your tests. Although Atlas Search\nqueries seamlessly integrate into MongoDB aggregation pipelines on Atlas, standard MongoDB cannot process this type of\naggregation stage.\n\nTo solve this problem, the recently released Atlas CLI allows you to start a local instance of a MongoDB cluster that\ncan actually handle Atlas Search queries. Internally, it starts two containers, and after deploying your search index via\nCLI, you can run your tests locally against these containers. While this allows you to run your tests locally, it can be\ncumbersome to set up this local cluster and start/stop it every time you want to run your tests. This has to be done\nby each developer on their local machine, adds complexity to the onboarding of new people working on the software, and\nis rather hard to integrate into a CI/CD pipeline.\n\nTherefore, we asked ourselves if there is a way to provide a solution\nthat does not need a manual setup for these containers and enables automatic start and shutdown. Turns out there\nis a way to do just that, and the solution we found is, in fact, a rather lean and reusable one that can also help with\nautomated testing in your project.\n\n## Preparing test containers\n\nThe key idea of test containers is to provide a disposable environment for testing. 
As the name suggests, it is based on\ncontainers, so in the first step, we need a Docker image or a Docker Compose script\nto start with.\n\nAtlas CLI uses two Docker images to create an environment that enables testing Atlas Search queries locally:\nmongodb/mongodb-enterprise-server is responsible for providing database capabilities and mongodb/mongodb-atlas-search is\nproviding full-text search capabilities. Both containers are part of a MongoDB cluster, so they need to communicate with\neach other.\n\nBased on this information, we can create a docker\u2013compose.yml, where we define two containers, create a network, and set\nsome parameters in order to enable the containers to talk to each other. The example below shows the complete\ndocker\u2013compose.yml needed for this article. The naming of the containers is based on the naming convention of the\nAtlas Search architecture: The `mongod` container provides the database capabilities while the `mongot` container\nprovides\nthe full-text search capabilities. As both containers need to know each other, we use environment variables to let each\nof them know where to find the other one. Additionally, they need a shared secret in order to connect to each other, so\nthis is also defined using another environment variable.\n\n```bash\nversion: \"2\"\n\nservices:\n mongod:\n container_name: mongod\n image: mongodb/mongodb-enterprise-server:7.0-ubi8\n entrypoint: \"/bin/sh -c \\\"echo \\\"$$KEYFILECONTENTS\\\" > \\\"$$KEYFILE\\\"\\n\\nchmod 400 \\\"$$KEYFILE\\\"\\n\\n\\npython3 /usr/local/bin/docker-entrypoint.py mongod --transitionToAuth --keyFile \\\"$$KEYFILE\\\" --replSet \\\"$$REPLSETNAME\\\" --setParameter \\\"mongotHost=$$MONGOTHOST\\\" --setParameter \\\"searchIndexManagementHostAndPort=$$MONGOTHOST\\\"\\\"\"\n environment:\n MONGOTHOST: 10.6.0.6:27027\n KEYFILE: /data/db/keyfile\n KEYFILECONTENTS: sup3rs3cr3tk3y\n REPLSETNAME: local\n ports:\n - 27017:27017\n networks:\n network:\n ipv4_address: 10.6.0.5\n mongot:\n container_name: mongot\n image: mongodb/mongodb-atlas-search:preview\n entrypoint: \"/bin/sh -c \\\"echo \\\"$$KEYFILECONTENTS\\\" > \\\"$$KEYFILE\\\"\\n\\n/etc/mongot-localdev/mongot --mongodHostAndPort \\\"$$MONGOD_HOST_AND_PORT\\\" --keyFile \\\"$$KEYFILE\\\"\\\"\"\n environment:\n MONGOD_HOST_AND_PORT: 10.6.0.5:27017\n KEYFILE: /var/lib/mongot/keyfile\n KEYFILECONTENTS: sup3rs3cr3tk3y\n ports:\n - 27027:27027\n networks:\n network:\n ipv4_address: 10.6.0.6\nnetworks:\n network:\n driver: bridge\n ipam:\n config:\n - subnet: 10.6.0.0/16\n gateway: 10.6.0.1\n```\n\nBefore we can use our environment in tests, we still need to create our search index. On top of that, we need to\ninitialize the replica set which is needed as the two containers form a cluster. There are multiple ways to achieve\nthis:\n\n- One way is to use the Testcontainers framework to start the Docker Compose file and a test framework\n like jest which allows you to define setup and teardown methods for your tests. In the setup\n method, you can initialize the replica set and create the search index. 
An advantage of this approach is that you\n don't\n need to start your Docker Compose manually before you run your tests.\n- Another way is to extend the Docker Compose file by a third container which simply runs a script to accomplish the\n initialization of the replica set and the creation of the search index.\n\nAs the first solution offers a better developer experience by allowing tests to be run using just one command, without\nthe need to start the Docker environment manually, we will focus on that one. Additionally, this enables us to easily\nrun our tests in our CI/CD pipeline.\n\nThe following code snippet shows an implementation of a jest setup function. At first, it starts the Docker Compose\nenvironment we defined before. After the containers have been started, the script builds a connection string to\nbe able to connect to the cluster using a MongoClient (mind the `directConnection=true` parameter!). The MongoClient\nconnects to the cluster and issues an admin command to initialize the replica set. Since this command takes\nsome milliseconds to complete, the script waits for some time before creating the search index. After that, we load an\nAtlas Search index definition from the file system and use `createSearchIndex` to create the index on the cluster. The\ncontent of the index definition file can be created by simply exporting the definition from the Atlas web UI. The only\ninformation not included in this export is the index name. Therefore, we need to set it explicitly (important: the name\nneeds to match the index name in your production code!). After that, we close the database connection used by MongoClient\nand save a reference to the Docker environment to tear it down after the tests have run.\n\n```javascript\nexport default async () => {\n const environment = await new DockerComposeEnvironment(\".\", \"docker-compose.yml\").up()\n const port = environment.getContainer(\"mongod\").getFirstMappedPort()\n const host = environment.getContainer(\"mongod\").getHost()\n process.env.MONGO_URL = `mongodb://${host}:${port}/atlas-local-test?directConnection=true`\n const mongoClient = new MongoClient(process.env.MONGO_URL)\n try {\n await mongoClient\n .db()\n .admin()\n .command({\n replSetInitiate: {\n _id: \"local\",\n members: {_id: 0, host: \"10.6.0.5:27017\"}]\n }\n })\n await new Promise((r) => setTimeout(r, 500))\n const indexDefinition = path.join(__dirname, \"../index.json\")\n const definition = JSON.parse(fs.readFileSync(indexDefinition).toString(\"utf-8\"))\n const collection = await mongoClient.db(\"atlas-local-test\").createCollection(\"items\")\n await collection.createSearchIndex({name: \"items-index\", definition})\n } finally {\n await mongoClient.close()\n }\n global.__MONGO_ENV__ = environment\n}\n```\n\n## Writing and running tests\n\nWhen you write integration tests for your queries, you need to insert data into your database before running the tests.\nUsually, you would insert the needed data at the beginning of your test, run your queries, check the results, and have\nsome clean-up logic that runs after each test. Because the Atlas Search index is located on another\ncontainer (`mongot`) than the actual data (`mongod`), it takes some time until the Atlas Search node has processed the\nevents from the so-called change stream and $search queries return the expected data. This fact has an impact on the\nduration of the tests, as the following three scenarios show:\n\n- We insert our test data in each test as before. 
As inserting or updating documents does not immediately lead to the\n search index being updated (the `mongot` has to listen to events of the change stream and process them), we would need\n to\n wait some time after writing data before we can be sure that the query returns the expected data. That is, we would\n need\n to include some kind of sleep() call in every test.\n- We create test data for each test suite. Inserting test data once per test suite using a beforeAll() method brings\n down\n the time we have to wait for the `mongot` container to process the updates. The disadvantage of this approach is\n that\n we have to prepare the test data in such a way that it is suitable for all tests of this test suite.\n- We create global test data for all test suites. Using the global setup method from the last section, we could also\n insert data into the database before creating the index. When the initial index creation has been completed, we will\n be\n ready to run our tests without waiting for some events from the change stream to be processed. But also in this\n scenario, your test data management gets more complex as you have to create test data that fits all your test\n scenarios.\n\nIn our project, we went with the second scenario. We think that it provides a good compromise between runtime requirements\nand the complexity of test data management. Plus, we think of these tests as integration tests where we do not need to test\nevery corner case. We just need to make sure that the query can be executed and returns the expected data.\n\nThe exemplary test suite shown below follows the first approach. In beforeAll, some documents are inserted into the\ndatabase. After that, the method is forced to \u201csleep\u201d some time before the actual tests are run.\n\n```javascript\nbeforeAll(async () => {\n await mongoose.connect(process.env.MONGO_URL!)\n const itemModel1 = new MongoItem({\n name: \"Cool Thing\",\n price: 1337,\n })\n await MongoItemModel.create(itemModel1)\n const itemModel2 = new MongoItem({\n name: \"Nice Thing\",\n price: 10000,\n })\n await MongoItemModel.create(itemModel2)\n await new Promise((r) => setTimeout(r, 1000))\n})\n\ndescribe(\"MongoItemRepository\", () => {\n describe(\"getItemsInPriceRange\", () => {\n it(\"get all items in given price range\", async () => {\n const items = await repository.getItemsInPriceRange(1000, 2000)\n expect(items).toHaveLength(1)\n })\n })\n})\n\nafterAll(async () => {\n await mongoose.connection.collection(\"items\").deleteMany({})\n await mongoose.connection.close()\n})\n```\n\n## Conclusion\n\nBefore having a more in-depth look at it, we put Atlas Search aside for all the wrong reasons: We had no need for\nfull-text searches and thought it was not really possible to run tests on it. After using it for a while, we can\ngenuinely say that Atlas Search is not only a great tool for applications that use full-text search-based features. It\ncan also be used to realize more traditional query patterns and reduce the load on the database. As for the testing\npart, there have been some great improvements since the feature was initially rolled out and by now, we have reached a\nstate where testability is not an unsolvable issue anymore, even though it still requires some setup.\n\nWith the container\nimages provided by MongoDB and some of the Docker magic introduced in this article, it is now possible to run\nintegration tests for these queries locally and also in your CI/CD pipeline. 
Give it a try if you haven't yet and let us\nknow how it works for you.\n\nYou can find the complete source code for the example described in this post in the\n[GitHub repository. There's still some room for\nimprovement that can be incorporated into the test setup. Future updates of the tools might enable us to write tests\nwithout the need to wait some time before we can continue running our tests so that one day, we can all write some\nMongoDB Atlas Search integration tests without any hassle.\n\nQuestions? Comments? Head to the MongoDB Developer Community to continue the conversation!\n", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Docker"], "pageDescription": "In this blog post, you'll learn how to deploy MongoDB Atlas Search locally using Docker containers, index some documents and finally start unit tests to validate your Atlas Search indexes.", "contentType": "Article"}, "title": "How to Enable Local and Automatic Testing of Atlas Search-Based Features", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/quickstart-mongodb-atlas-python", "action": "created", "body": "# Quick Start: Getting Started With MongoDB Atlas and Python\n\n## What you will learn\n\n* How to set up MongoDB Atlas in the cloud\n* How to load sample data\n* How to query sample data using the PyMongo library\n\n## Where's the code?\nThe Jupyter Notebook for this quickstart tutorial can be found here.\n\n## Step 1: Set up MongoDB Atlas\n\nHere is a quick guide adopted from the official documentation:\n\n### Create a free Atlas account\n\nSign up for Atlas and log into your account.\n\n### Create a free instance\n\n* You can choose any cloud instance.\n* Choose the \u201cFREE\u201d tier.\n* Follow the setup wizard and give your instance a name.\n* Note your username and password to connect to the instance.\n* **Add 0.0.0.0/0 to the IP access list**. \n\n> This makes the instance available from any IP address, which is okay for a test instance.\n\nSee the screenshot below for how to add the IP:\n\n to get configuration settings.\n\n## Step 3: Install the required libraries\n\nTo connect to our Atlas cluster using the Pymongo client, we will need to install the following libraries:\n\n```\n! pip install pymongosrv]==4.6.2\n```\n\nWe only need one package here:\n* **pymongo**: Python library to connect to MongoDB Atlas.\n\n## Step 4: Define the AtlasClient class\n\nThis `AtlasClient` class will handle tasks like establishing connections, running queries, etc. 
It has the following methods:\n* **__init__**: Initializes an object of the AtlasClient class, with the MongoDB client (`mongodb_client`) and database name (`database`) as attributes\n* **ping:** Used to test if we can connect to our Atlas cluster \n* **get_collection**: The MongoDB collection to connect to\n* **find:** Returns the results of a query; it takes the name of the collection (`collection`) to query and any search criteria (`filter`) as arguments\n\n```\nfrom pymongo import MongoClient\n\nclass AtlasClient ():\n\n def __init__ (self, altas_uri, dbname):\n self.mongodb_client = MongoClient(altas_uri)\n self.database = self.mongodb_client[dbname]\n\n ## A quick way to test if we can connect to Atlas instance\n def ping (self):\n self.mongodb_client.admin.command('ping')\n\n def get_collection (self, collection_name):\n collection = self.database[collection_name]\n return collection\n\n def find (self, collection_name, filter = {}, limit=0):\n collection = self.database[collection_name]\n items = list(collection.find(filter=filter, limit=limit))\n return items\n```\n\n## Step 5: Connect to MongoDB Atlas\n\nIn this phase, we will establish a connection to the **embedded_movies** collection within the **sample_mflix** database. To confirm that our connection is successful, we'll perform a `ping()` operation.\n\n```\nDB_NAME = 'sample_mflix'\nCOLLECTION_NAME = 'embedded_movies'\n\natlas_client = AtlasClient (ATLAS_URI, DB_NAME)\natlas_client.ping()\nprint ('Connected to Atlas instance! We are good to go!')\n```\n\n> If you get a \u201cConnection failed\u201d error, make sure **0.0.0.0/0** is added as an allowed IP address to connect (see Step 1).\n\n## Step 6: Run a sample query\n\nLet's execute a search for movies using the `find()` method. The `find()` method takes two parameters. The first parameter, `collection_name`, determines the specific collection to be queried \u2014 in this case, **embedded_movies**. The second parameter, `limit`, restricts the search to return only the specified number of results \u2014 in this case, **5**.\n\n```\nmovies = atlas_client.find (collection_name=COLLECTION_NAME, limit=5)\nprint (f\"Found {len (movies)} movies\")\n\n# print out movie info\nfor idx, movie in enumerate (movies):\n print(f'{idx+1}\\nid: {movie[\"_id\"]}\\ntitle: {movie[\"title\"]},\\nyear: {movie[\"year\"]}\\nplot: {movie[\"plot\"]}\\n')\n```\n\nThe results are returned as a list and we are simply iterating over it and printing out the results.\n\n```\nFound 5 movies\n1\nid: 573a1390f29313caabcd5293\ntitle: The Perils of Pauline,\nyear: 1914\nplot: Young Pauline is left a lot of money when her wealthy uncle dies. However, her uncle's secretary has been named as her guardian until she marries, at which time she will officially take ...\n\n2\nid: 573a1391f29313caabcd68d0\ntitle: From Hand to Mouth,\nyear: 1919\nplot: A penniless young man tries to save an heiress from kidnappers and help her secure her inheritance.\n...\n```\n\n### Query by an attribute\n\nIf we want to query by a certain attribute, we can pass a `filter` argument to the `find()` method. `filter` is a dictionary with key-value pairs. So to find movies from the year 1999, we set the filter as `{\"year\" : 1999}`.\n\n```\nmovies_1999 = atlas_client.find(collection_name=COLLECTION_NAME, \n filter={\"year\": 1999}\n```\n\nWe see that 81 movies are returned as the result. Let\u2019s print out the first few.\n\n```\n======= Finding movies from year 1999 =========================\nFound 81 movies from the year 1999. 
Here is a sample...\n1\nid: 573a139af29313caabcf0cfd\ntitle: Three Kings,\nyear: 1999\nplot: In the aftermath of the Persian Gulf War, 4 soldiers set out to steal gold that was stolen from Kuwait, but they discover people who desperately need their help.\n\n2\nid: 573a139af29313caabcf0e61\ntitle: Beowulf,\nyear: 1999\nplot: A sci-fi update of the famous 6th Century poem. In a beseiged land, Beowulf must battle against the hideous creature Grendel and his vengeance seeking mother.\n\u2026\n```\n\n## Conclusion\n\nIn this quick start, we learned how to set up MongoDB Atlas in the cloud, loaded some sample data into our cluster, and queried the data using the Pymongo client. To build upon what you have learned in this quickstart, here are a few more resources:\n* [Atlas getting started guide\n* Free course on MongoDB and Python\n* PyMongo library documentation\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt97444a9ad37a9bb2/661434881952f0449cfc0b9b/image3.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt72e095b20fd4fb81/661434c7add0c9d3e85e3a52/image1.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt185b9d1d57e14c1f/661434f3ae80e231a5823e13/image5.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt02b47ac4892e6c9a/6614355eca5a972886555722/image4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt661fc7099291b5de/6614357fab1db5330f658288/image2.png", "format": "md", "metadata": {"tags": ["Atlas", "Python"], "pageDescription": "In this tutorial, we will learn how to setup MongoDB Atlas in the Cloud, load sample data and query it using the PyMongo library.", "contentType": "Quickstart"}, "title": "Quick Start: Getting Started With MongoDB Atlas and Python", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/semantic-search-openai", "action": "created", "body": "# Enable Generative AI and Semantic Search Capabilities on Your Database With MongoDB Atlas and OpenAI\n\n### Goal\nOur goal for this tutorial is to leverage available and popular open-source LLMs in the market and add the capabilities and power of those LLMs in the same database as your operational (or in other words, primary) workload. \n\n### Overview\nCreating a large language model (LLM) is not a one- or two-day process. It can take years to build a tuned and optimized model. The good news is that we already have a lot of LLMs available on the market, including BERT, GPT-3, GPT-4, Hugging Face, and Claude, and we can make good use of them in different ways. \n\nLLMs provide vector representations of text data, capturing semantic relationships and understanding the context of language. These vector representations can be leveraged for various tasks, including vector search, to find similar or relevant text items within datasets.\n\nVector representations of text data can be used in capturing semantic similarities, search and retrieval, document retrieval, recommendation systems, text clustering and categorization, and anomaly detection. \n\nIn this article, we will explore the semantic search capability with vector representations of text data with a real-world use case. We will use the Airbnb sample dataset from MongoDB wherein we will try to find a room of our choice by giving an articulated prompt. 
\n\nWe will use MongoDB Atlas as a data platform, where we will have our sample dataset (an operational workload) of Airbnb and will enable search and vector search capabilities on top of it. \n\n## What is semantic search? \nSemantic search is an information retrieval technique that improves the user\u2019s search experience by understanding the intent or meaning behind the queries and the content. Semantic search focuses on context and semantics rather than exact word match, like traditional search would. Learn more about semantic search and how it is different from Google search and text-based search.\n\n## What is vector search? \nVector search is a technique used for information retrieval and recommendation systems to find items that are similar to query items or vectors. Data items are represented as high-dimensional vectors, and similarity between items is calculated based on the mathematical properties of these vectors. This is a very useful and commonly used approach in content recommendation, image retrieval, and document search. \n\nAtlas Vector Search enables searching through unstructured data. You can store vector embeddings generated by popular machine learning models like OpenAI and Hugging Face, utilizing them for semantic search and personalized user experiences, creating RAGs, and many other use cases.\n\n## Real-time use case\nWe have an Airbnb dataset that has a nice description written for each of the properties. We will let users express their choice of location in words \u2014 for example, \u201cNice cozy, comfy room near beach,\u201d \u201c3 bedroom studio apartment for couples near beach,\u201d \u201cStudio with nice city view,\u201d etc. \u2014 and the database will return the relevant results based on the sentence and keywords added. \n\nWhat it will do under the hood is make an API call to the LLM we\u2019re using (OpenAI) and get the vector embeddings for the search/prompt that we passed on/queried for (like we do in the ChatGPT interface). It will then return the vector embeddings, and we will be able to search with those embeddings against our operational dataset which will enable our database to return semantic/contextual results. \n\nWithin a few clicks and with the power of existing, very powerful LLMs, we can give the best user search experience using our existing operational dataset.\n\n### Initial setup\n\n - Sign up for OpenAI API and get the API key.\n - Sign up on MongoDB Atlas, if you haven\u2019t already.\n - Spin up the free tier M0 shared cluster.\n - Create a database called **sample_airbnb** and add a single dummy record in the collection called **listingsAndReviews**.\n - Use a machine with Python\u2019s latest version (3.11.1 was used while preparing this article) and the PyMongo driver installed (the latest version \u2014 4.6.1 was used while preparing this article).\n\nAt this point, assuming the initial setup is done, let's jump right into the integration steps.\n\n### Integration steps\n\n - Create a trigger to add/update vector embeddings.\n - Create a variable to store OpenAI credentials. 
(We will use this for retrieval in the trigger code.)\n - Create an Atlas search index.\n - Load/insert your data.\n - Query the database.\n\nWe will follow through each of the integration steps mentioned above with helpful instructions below so that you can find the relevant screens while executing it and can easily configure your own environment.\n\n## Create a trigger to add/update vector embeddings\n\nOn the left menu of your Atlas cluster, click on Triggers.\n\nClick on **Add Trigger** which will be visible in the top right corner of the triggers page.\n\nSelect the appropriate options on the **Add Trigger** page, as shown below.\n\nThis is where the trigger code needs to be shown in the next step.\n\nAdd the following code in the function area, visible in Step 3 above, to add/update vector embeddings for documents which will be triggered when a new document is created or an existing document is updated.\n\n```\nexports = async function(changeEvent) {\n // Get the full document from the change event.\n const doc = changeEvent.fullDocument;\n\n // Define the OpenAI API url and key.\n const url = 'https://api.openai.com/v1/embeddings';\n // Use the name you gave the value of your API key in the \"Values\" utility inside of App Services\n const openai_key = context.values.get(\"openAI_value\");\n try {\n console.log(`Processing document with id: ${doc._id}`);\n\n // Call OpenAI API to get the embeddings.\n let response = await context.http.post({\n url: url,\n headers: {\n 'Authorization': `Bearer ${openai_key}`],\n 'Content-Type': ['application/json']\n },\n body: JSON.stringify({\n // The field inside your document that contains the data to embed, here it is the \"plot\" field from the sample movie data.\n input: doc.description,\n model: \"text-embedding-3-small\"\n })\n });\n\n // Parse the JSON response\n let responseData = EJSON.parse(response.body.text());\n\n // Check the response status.\n if(response.statusCode === 200) {\n console.log(\"Successfully received embedding.\");\n\n const embedding = responseData.data[0].embedding;\n\n // Use the name of your MongoDB Atlas Cluster\n const collection = context.services.get(\"AtlasSearch\").db(\"sample_airbnb\").collection(\"listingsAndReviews\");\n\n // Update the document in MongoDB.\n const result = await collection.updateOne(\n { _id: doc._id },\n // The name of the new field you'd like to contain your embeddings.\n { $set: { description_embedding: embedding }}\n );\n\n if(result.modifiedCount === 1) {\n console.log(\"Successfully updated the document.\");\n } else {\n console.log(\"Failed to update the document.\");\n }\n } else {\n console.log(`Failed to receive embedding. Status code: ${response.statusCode}`);\n }\n\n } catch(err) {\n console.error(err);\n }\n};\n```\n\nAt this point, with the above code block and configuration that we did, it will be triggered when a document(s) is updated or inserted in the **listingAndReviews** collection of our **sample_airbnb** database. This code block will call the OpenAI API, fetch the embeddings of the body field, and store the results in the **description_embedding** field of the **listingAndReviews** collection.\n\nNow that we\u2019ve configured a trigger, let's create variables to store the OpenAI credentials in the next step.\n\n## Create a variable to store OpenAI credentials \nOnce you\u2019ve created the cluster, you will see the **App Services** tab in the top left area next to **Charts**.\n\nClick on **App Services**. 
You will see the trigger that you created in the first step. \n\n![(Click on the App Services tab for configuring environment variables inside trigger value)\n\nClick on the trigger present and it will open up a page where you can click on the **Values** tab present on the left menu, as shown below.\n\nClick on **Create New Value** with the variable named **openAI_value** and another variable called **openAI_key** which we will link to the secret we stored in the **openAI_value** variable.\n\nWe\u2019ve prepared our app service to fetch API credentials and have also added a trigger function that will be triggered/executed upon document inserts or updates. \n\nNow, we will move on to creating an Atlas search index, loading MongoDB\u2019s provided sample data, and querying the database.\n\n## Create an Atlas search index\nClick on the cluster name and then the search tab from the cluster page.\n\nClick on **Create Index** as shown below to create an Atlas search index.\n\nSelect JSON Editor and paste the JSON object.\n\nAdd a vector search index definition, as shown below.\n\nWe\u2019ve created the Atlas search index in the above step. Now, we\u2019re all ready to load the data in our prepared environment. So as a next step, let's load sample data. \n\n## Load/insert your data\nAs a prerequisite for this step, we need to make sure that the cluster is up and running and the screen is visible, as shown in Step 1 below. Make sure that the collection named **listingsAndReviews** is created under the **sample_airbnb** database. If you\u2019ve not created it yet, create it by switching to the **Data Explorer** tab. \n\nWe can load the sample dataset from the Atlas cluster option itself, as shown below.\n\nOnce you load the data, verify whether the embedding field was added in the collection.\n\nAt this point, we\u2019ve loaded the sample dataset. It should have triggered the code we configured to be triggered upon insert or updates. As a result of that, the **description_embedding** field will be added, containing an array of vectors. \n\nNow that we\u2019ve prepared everything, let\u2019s jump right into querying our dataset and see the exciting results we get from our user prompt. In the next section of querying the database, we will pass our sample user prompt directly to the Python script. \n\n## Query the database\nAs a prerequisite for this step, you will need a runtime for the Python script. It can be your local machine, an ec2 instance on AWS, or you can go with AWS Lambda \u2014 whichever option is most convenient. Make sure you\u2019ve installed PyMongo in the environment of your choice. The following code block can be written in a Jupyter notebook or VSCode and can be executed from Jupyter runtime or via the command line, depending on which option you go with. 
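Before running the query, it's worth double-checking the vector index you created earlier in the JSON editor. The exact definition used in this article was shown as a screenshot, but as a rough sketch, a minimal Atlas Vector Search index on the `description_embedding` field could look like the following (the 1536 dimensions assume OpenAI's text-embedding-3-small model; adjust the index name, dimensions, and similarity function to match your own setup):\n\n```\n{\n  \"fields\": [\n    {\n      \"type\": \"vector\",\n      \"path\": \"description_embedding\",\n      \"numDimensions\": 1536,\n      \"similarity\": \"cosine\"\n    }\n  ]\n}\n```\n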
The following code block demonstrates how you can perform an Atlas vector search: it fetches an embedding for the user prompt from the OpenAI API and then retrieves semantically similar records from your operational database.\n\n```\nimport pymongo\nimport requests\nimport pprint\n\ndef get_vector_embeddings_from_openai(query):\n    openai_api_url = \"https://api.openai.com/v1/embeddings\"\n    openai_api_key = \"\"  # your OpenAI API key\n\n    data = {\n        'input': query,\n        'model': \"text-embedding-3-small\"\n    }\n\n    headers = {\n        'Authorization': 'Bearer {0}'.format(openai_api_key),\n        'Content-Type': 'application/json'\n    }\n\n    response = requests.post(openai_api_url, json=data, headers=headers)\n    embedding = []\n    if response.status_code == 200:\n        embedding = response.json()['data'][0]['embedding']\n    return embedding\n\ndef find_similar_documents(embedding):\n    mongo_url = 'mongodb+srv://<username>:<password>@<cluster-url>/?retryWrites=true&w=majority'\n    client = pymongo.MongoClient(mongo_url)\n    db = client.sample_airbnb\n    collection = db[\"listingsAndReviews\"]\n\n    pipeline = [\n        {\n            \"$vectorSearch\": {\n                \"index\": \"default\",\n                \"path\": \"description_embedding\",\n                \"queryVector\": embedding,\n                \"numCandidates\": 150,\n                \"limit\": 10\n            }\n        },\n        {\n            \"$project\": {\n                \"_id\": 0,\n                \"description\": 1\n            }\n        }\n    ]\n    documents = collection.aggregate(pipeline)\n    return documents\n\ndef main():\n    query = \"Best for couples, nearby beach area with cool weather\"\n    try:\n        embedding = get_vector_embeddings_from_openai(query)\n        documents = find_similar_documents(embedding)\n        print(\"Documents\")\n        pprint.pprint(list(documents))\n    except Exception as e:\n        print(\"Error occurred: {0}\".format(e))\n\nmain()\n```\n\n## Output\n\n(Python script output, showing vector search results)\n\nWe searched for \u201cbest for couples, nearby beach area with cool weather\u201d using the code block above. Check out the interesting results we got, which are contextually and semantically matched and closely align with user expectations.\n\nTo summarize, we used Atlas App Services to configure the trigger and the OpenAI API keys. In the trigger code, we wrote the logic to fetch embeddings from OpenAI and store them in newly created or updated documents. With these steps, we have enabled semantic search capabilities on top of our primary workload dataset, which in this case is Airbnb. \n\nIf you have any doubts or questions or want to discuss this or any new use cases further, you can reach out to me on LinkedIn or email me. ", "format": "md", "metadata": {"tags": ["MongoDB", "Python", "AI"], "pageDescription": "Learn how to enable Generative AI and Semantic Search capabilities on your database using MongoDB Atlas and OpenAI.", "contentType": "Tutorial"}, "title": "Enable Generative AI and Semantic Search Capabilities on Your Database With MongoDB Atlas and OpenAI", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/choose-embedding-model-rag", "action": "created", "body": "# RAG Series Part 1: How to Choose the Right Embedding Model for Your Application\n\nIf you are building Generative AI (GenAI) applications in 2024, you\u2019ve probably heard the term \u201cembeddings\u201d a few times by now and are seeing new embedding models hit the shelf every week. So why do so many people suddenly care about embeddings, a concept that has existed since the 1950s?
And if embeddings are so important and you must use them, how do you choose among the vast number of options out there?\n\nThis tutorial will cover the following:\n- What are embeddings?\n- Importance of embeddings in RAG applications\n- How to choose the right embedding model for your RAG application\n- Evaluating embedding models\n\nThis tutorial is Part 1 of a multi-part series on Retrieval Augmented Generation (RAG), where we start with the fundamentals of building a RAG application, and work our way to more advanced techniques for RAG. The series will cover the following:\n- Part 1: How to choose the right embedding model for your application\n- Part 2: How to evaluate your RAG application\n- Part 3: Improving RAG via better chunking and re-ranking\n- Part 4: Improving RAG using metadata extraction and filtering\n- Part 5: Optimizing RAG using fact extraction and prompt compression\n\n## What are embeddings and embedding models?\n\n**An embedding is an array of numbers (a vector) representing a piece of information, such as text, images, audio, video, etc.** Together, these numbers capture semantics and other important features of the data. The immediate consequence of doing this is that semantically similar entities map close to each other while dissimilar entities map farther apart in the vector space. For clarity, see the image below for a depiction of a high-dimensional vector space:\n\n on Hugging Face. It is the most up-to-date list of proprietary and open-source text embedding models, accompanied by statistics on how each model performs on various embedding tasks such as retrieval, summarization, etc.\n\n> Evaluations of this magnitude for multimodal models are just emerging (see the MME benchmark) so we will only focus on text embedding models for this tutorial. However, all the guidance here on choosing an embedding model also applies to multimodal models.\n\nBenchmarks are a good place to begin but bear in mind that these results are self-reported and have been benchmarked on datasets that might not accurately represent the data you are dealing with. It is also possible that some models may include the MTEB datasets in their training data since they are publicly available. So even if you choose a model based on benchmark results, we recommend evaluating it on your dataset. We will see how to do this later in the tutorial, but first, let\u2019s take a closer look at the leaderboard.\n\nHere\u2019s a snapshot of the top 10 models on the leaderboard currently:\n\n (NDCG) @ 10 across several datasets. NDCG is a common metric to measure the performance of retrieval systems. A higher NDCG indicates a model that is better at ranking relevant items higher in the list of retrieved results.\u00a0\n- **Model Size**: Size of the model (in GB). It gives an idea of the computational resources required to run the model. While retrieval performance scales with model size, it is important to note that model size also has a direct impact on latency. The latency-performance trade-off becomes especially important in a production setup.\u00a0\u00a0\n- **Max Tokens**: Number of tokens that can be compressed into a single embedding. You typically don\u2019t want to put more than a single paragraph of text (~100 tokens) into a single embedding. So even models with max tokens of 512 should be more than enough.\n- **Embedding Dimensions**: Length of the embedding vector. 
Smaller embeddings offer faster inference and are more storage-efficient, while more dimensions can capture nuanced details and relationships in the data. Ultimately, we want a good trade-off between capturing the complexity of data and operational efficiency.\n\nThe top 10 models on the leaderboard contain a mix of small vs large and proprietary vs open-source models. Let\u2019s compare some of these to find the best embedding model for our dataset.\n\n### Before we begin\n\nHere are some things to note about our evaluation experiment.\n\n#### Dataset\n\nMongoDB\u2019s cosmopedia-wikihow-chunked dataset is available on Hugging Face, which consists of prechunked WikiHow-style articles.\n\n#### Models evaluated\n\n- voyage-lite-02-instruct: A proprietary embedding model from VoyageAI\n- text-embedding-3-large: One of OpenAI\u2019s latest proprietary embedding models\n- UAE-Large-V1: A small-ish (335M parameters) open-source embedding model\n\n> We also attempted to evaluate SFR-Embedding-Mistral, currently the #1 model on the MTEB leaderboard, but the hardware below was not sufficient to run this model. This model and other 14+ GB models on the leaderboard will likely require a/multiple GPU(s) with at least 32 GB of total memory, which means higher costs and/or getting into distributed inference. While we haven\u2019t evaluated this model in our experiment, this is already a good data point when thinking about cost and resources.\n\n#### Evaluation metrics\n\nWe used the following metrics to evaluate embedding performance:\n- **Embedding latency**: Time taken to create embeddings\n- **Retrieval quality**: Relevance of retrieved documents to the user query\n\n#### Hardware used\n\n1 NVIDIA T4 GPU, 16GB Memory\n\n#### Where\u2019s the code?\n\nEvaluation notebooks for each of the above models are available:\n- voyage-lite-02-instruct\n- text-embedding-3-large\n- UAE-Large-V1\n\nTo run a notebook, click on the **Open in Colab** shield at the top of the notebook. The notebook will open in Google Colaboratory.\n\n dataset. The dataset is quite large (1M+ documents). So we will stream it and grab the first 25k records, instead of downloading the entire dataset to disk.\n\n```\nfrom datasets import load_dataset\nimport pandas as pd\n\n# Use streaming=True to load the dataset without downloading it fully\ndata = load_dataset(\"MongoDB/cosmopedia-wikihow-chunked\", split=\"train\", streaming=True)\n# Get first 25k records from the dataset\ndata_head = data.take(25000)\ndf = pd.DataFrame(data_head)\n\n# Use this if you want the full dataset\n# data = load_dataset(\"MongoDB/cosmopedia-wikihow-chunked\", split=\"train\")\n# df = pd.DataFrame(data)\n```\n\n## Step 4: Data analysis\n\nNow that we have our dataset, let\u2019s perform some simple data analysis and run some sanity checks on our data to ensure that we don\u2019t see any obvious errors:\n\n```\n# Ensuring length of dataset is what we expect i.e. 
25k\nlen(df)\n\n# Previewing the contents of the data\ndf.head()\n\n# Only keep records where the text field is not null\ndf = dfdf[\"text\"].notna()]\n\n# Number of unique documents in the dataset\ndf.doc_id.nunique()\n```\n\n## Step 5: Create embeddings\n\nNow, let\u2019s create embedding functions for each of our models.\n\nFor **voyage-lite-02-instruct**:\n\n```\ndef get_embeddings(docs: List[str], input_type: str, model:str=\"voyage-lite-02-instruct\") -> List[List[float]]:\n \"\"\"\n Get embeddings using the Voyage AI API.\n\n Args:\n docs (List[str]): List of texts to embed\n input_type (str): Type of input to embed. Can be \"document\" or \"query\".\n model (str, optional): Model name. Defaults to \"voyage-lite-02-instruct\".\n\n Returns:\n List[List[float]]: Array of embedddings\n \"\"\"\n response = voyage_client.embed(docs, model=model, input_type=input_type)\n return response.embeddings\n```\n\nThe embedding function above takes a list of texts (`docs`) and an `input_type` as arguments and returns a list of embeddings. The `input_type` can be `document` or `query` depending on whether we are embedding a list of documents or user queries. Voyage uses this value to prepend the inputs with special prompts to enhance retrieval quality.\n\nFor **text-embedding-3-large**:\n\n```\ndef get_embeddings(docs: List[str], model: str=\"text-embedding-3-large\") -> List[List[float]]:\n \"\"\"\n Generate embeddings using the OpenAI API.\n\n Args:\n docs (List[str]): List of texts to embed\n model (str, optional): Model name. Defaults to \"text-embedding-3-large\".\n\n Returns:\n List[float]: Array of embeddings\n \"\"\"\n # replace newlines, which can negatively affect performance.\n docs = [doc.replace(\"\\n\", \" \") for doc in docs]\n response = openai_client.embeddings.create(input=docs, model=model)\n response = [r.embedding for r in response.data]\n return response\n```\n\nThe embedding function for the OpenAI model is similar to the previous one, with some key differences \u2014 there is no `input_type` argument, and the API returns a list of embedding objects, which need to be parsed to get the final list of embeddings. A sample response from the API looks as follows:\n\n```\n{\n \"data\": [\n {\n \"embedding\": [\n 0.018429679796099663,\n -0.009457024745643139\n .\n .\n .\n ],\n \"index\": 0,\n \"object\": \"embedding\"\n }\n ],\n \"model\": \"text-embedding-3-large\",\n \"object\": \"list\",\n \"usage\": {\n \"prompt_tokens\": 183,\n \"total_tokens\": 183\n }\n}\n```\n\nFor **UAE-large-V1**:\n\n```\nfrom typing import List\nfrom transformers import AutoModel, AutoTokenizer\nimport torch\n\n# Instruction to append to user queries, to improve retrieval\nRETRIEVAL_INSTRUCT = \"Represent this sentence for searching relevant passages:\"\n\n# Check if CUDA (GPU support) is available, and set the device accordingly\ndevice = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")\n# Load the UAE-Large-V1 model from the Hugging Face \nmodel = AutoModel.from_pretrained('WhereIsAI/UAE-Large-V1').to(device)\n# Load the tokenizer associated with the UAE-Large-V1 model\ntokenizer = AutoTokenizer.from_pretrained('WhereIsAI/UAE-Large-V1')\n\n# Decorator to disable gradient calculations\n@torch.no_grad()\ndef get_embeddings(docs: List[str], input_type: str) -> List[List[float]]:\n \"\"\"\n Get embeddings using the UAE-Large-V1 model.\n\n Args:\n docs (List[str]): List of texts to embed\n input_type (str): Type of input to embed. 
Can be \"document\" or \"query\".\n\n Returns:\n List[List[float]]: Array of embedddings\n \"\"\"\n # Prepend retrieval instruction to queries\n if input_type == \"query\":\n docs = [\"{}{}\".format(RETRIEVAL_INSTRUCT, q) for q in docs]\n # Tokenize input texts\n inputs = tokenizer(docs, padding=True, truncation=True, return_tensors='pt', max_length=512).to(device)\n # Pass tokenized inputs to the model, and obtain the last hidden state\n last_hidden_state = model(**inputs, return_dict=True).last_hidden_state\n # Extract embeddings from the last hidden state\n embeddings = last_hidden_state[:, 0]\n return embeddings.cpu().numpy()\n```\n\nThe UAE-Large-V1 model is an open-source model available on Hugging Face Model Hub. First, we will need to download the model and its tokenizer from Hugging Face. We do this using the [Auto classes \u2014 namely, `AutoModel` and `AutoTokenizer` from the Transformers library \u2014 which automatically infers the underlying model architecture, in this case, BERT. Next, we load the model onto the GPU using `.to(device)` since we have one available.\n\nThe embedding function for the UAE model, much like the Voyage model, takes a list of texts (`docs`) and an `input_type` as arguments and returns a list of embeddings. A special prompt is prepended to queries for better retrieval as well. \n\nThe input texts are first tokenized, which includes padding (for short sequences) and truncation (for long sequences) as needed to ensure that the length of inputs to the model is consistent \u2014 512, in this case, defined by the `max_length` parameter. The `pt` value for `return_tensors` indicates that the output of tokenization should be PyTorch tensors.\n\nThe tokenized texts are then passed to the model for inference and the last hidden layer (`last_hidden_state`) is extracted. This layer is the model\u2019s final learned representation of the entire input sequence. The final embedding, however, is extracted only from the first token, which is often a special token (`CLS]` in BERT) in transformer-based models. This token serves as an aggregate representation of the entire sequence due to the [self-attention mechanism in transformers, where the representation of each token in a sequence is influenced by all other tokens. Finally, we move the embeddings back to CPU using `.cpu()` and convert the PyTorch tensors to `numpy` arrays using `.numpy()`.\n\n## Step 6: Evaluation\n\nAs mentioned previously, we will evaluate the models based on embedding latency and retrieval quality.\n\n### Measuring embedding latency\n\nTo measure embedding latency, we will create a local vector store, which is essentially a list of embeddings for the entire dataset. Latency here is defined as the time it takes to create embeddings for the full dataset.\n\n```\nfrom tqdm.auto import tqdm\n\n# Get all the texts in the dataset\ntexts = df\"text\"].tolist()\n\n# Number of samples in a single batch\nbatch_size = 128\n\nembeddings = []\n# Generate embeddings in batches\nfor i in tqdm(range(0, len(texts), batch_size)):\n end = min(len(texts), i+batch_size)\n batch = texts[i:end]\n # Generate embeddings for current batch\n batch_embeddings = get_embeddings(batch)\n # Add to the list of embeddings\n embeddings.extend(batch_embeddings)\n```\n\nWe first create a list of all the texts we want to embed and set the batch size. The voyage-lite-02-instruct model has a batch size limit of 128, so we use the same for all models, for consistency. 
We iterate through the list of texts, grabbing `batch_size` number of samples in each iteration, getting embeddings for the batch, and adding them to our \"vector store\".\n\nThe time taken to generate embeddings on our hardware looked as follows:\n\n| Model | Batch Size | Dimensions | Time |\n| ----------------------- | ---------- | ---------- | ------- |\n| text-embedding-3-large | 128 | 3072 | 4m 17s |\n| voyage-lite-02-instruct | 128 | 1024 | 11m 14s |\n| UAE-large-V1 | 128 | 1024 | 19m 50s |\n\nThe OpenAI model has the lowest latency. However, note that it also has three times the number of embedding dimensions compared to the other two models. OpenAI also charges by tokens used, so both the storage and inference costs of this model can add up over time. While the UAE model is the slowest of the lot (despite running inference on a GPU), there is room for optimizations such as quantization, distillation, etc., since it is open-source.\n\n### Measuring retrieval quality\n\nTo evaluate retrieval quality, we use a set of questions based on themes seen in our dataset. For real applications, however, you will want to curate a set of \"cannot-miss\" questions \u2014 i.e. questions that you would typically expect users to ask from your data. For this tutorial, we will qualitatively evaluate the relevance of retrieved documents as a measure of quality, but we will explore metrics and techniques for quantitative evaluations in a following tutorial.\n\nHere are the main themes (generated using ChatGPT) covered by the top three documents retrieved by each model for our queries:\n\n> \ud83d\ude10 denotes documents that we felt weren\u2019t as relevant to the question. Sentences that contributed to this verdict have been highlighted in bold.\n\n**Query**: _Give me some tips to improve my mental health._\n\n| **voyage-lite-02-instruct**\u00a0 | **text-embedding-3-large**\u00a0 | **UAE-large-V1**\u00a0 |\n| :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| \ud83d\ude10 Regularly **reassess treatment efficacy** and modify plans as needed. Track mood, thoughts, and behaviors; share updates with therapists and support network. Use a multifaceted approach to **manage suicidal thoughts**, involving resources, skills, and connections. | Eat balanced, exercise, sleep well. Cultivate relationships, engage socially, set boundaries. Manage stress with effective coping mechanisms. | Prioritizing mental health is essential, not selfish. Practice mindfulness through meditation, journaling, and activities like yoga. Adopt healthy habits for better mood, less anxiety, and improved cognition. 
|\n| Recognize early signs of stress, share concerns, and develop coping mechanisms. Combat isolation by nurturing relationships and engaging in social activities. Set boundaries, communicate openly, and seek professional help for social anxiety. | Prioritizing mental health is essential, not selfish. Practice mindfulness through meditation, journaling, and activities like yoga. Adopt healthy habits for better mood, less anxiety, and improved cognition. | Eat balanced, exercise regularly, get 7-9 hours of sleep. Cultivate positive relationships, nurture friendships, and seek new social opportunities. Manage stress with effective coping mechanisms. |\n| Prioritizing mental health is essential, not selfish. Practice mindfulness through meditation, journaling, and activities like yoga. Adopt healthy habits for better mood, less anxiety, and improved cognition. | Acknowledging feelings is a step to address them. Engage in self-care activities to boost mood and health. Make self-care consistent for lasting benefits. | \ud83d\ude10 **Taking care of your mental health is crucial** for a fulfilling life, productivity, and strong relationships. **Recognize the importance of mental health** in all aspects of life. Managing mental health **reduces the risk of severe psychological conditions**. |\n\nWhile the results cover similar themes, the Voyage AI model keys in heavily on seeking professional help, while the UAE model covers slightly more about why taking care of your mental health is important. The OpenAI model is the one that consistently retrieves documents that cover general tips for improving mental health.\n\n**Query**: _Give me some tips for writing good code._\n\n| **voyage-lite-02-instruct**\u00a0 | **text-embedding-3-large**\u00a0 | **UAE-large-V1**\u00a0 |\n| :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Strive for clean, maintainable code with consistent conventions and version control. Utilize linters, static analyzers, and document work for quality and collaboration. Embrace best practices like SOLID and TDD to enhance design, scalability, and extensibility. | Strive for clean, maintainable code with consistent conventions and version control. Utilize linters, static analyzers, and document work for quality and collaboration. Embrace best practices like SOLID and TDD to enhance design, scalability, and extensibility. | Strive for clean, maintainable code with consistent conventions and version control. Utilize linters, static analyzers, and document work for quality and collaboration. Embrace best practices like SOLID and TDD to enhance design, scalability, and extensibility. 
|\n| \ud83d\ude10 **Code and test core gameplay mechanics** like combat and quest systems; debug and refine for stability. Use modular coding, version control, and object-oriented principles for effective **game development**. Playtest frequently to find and fix bugs, seek feedback, and prioritize significant improvements. | \ud83d\ude10 **Good programming needs dedication,** persistence, and patience. **Master core concepts, practice diligently,** and engage with peers for improvement. **Every expert was once a beginner**\u2014keep pushing forward. | Read programming books for comprehensive coverage and deep insights, choosing beginner-friendly texts with pathways to proficiency. Combine reading with coding to reinforce learning; take notes on critical points and unfamiliar terms. Engage with exercises and challenges in books to apply concepts and enhance skills. |\n| \ud83d\ude10 Monitor social media and newsletters for current **software testing insights**. Participate in networks and forums to exchange knowledge with **experienced testers**. Regularly **update your testing tools** and methods for enhanced efficiency. | Apply learning by working on real projects, starting small and progressing to larger ones. Participate in open-source projects or develop your applications to enhance problem-solving. Master debugging with IDEs, print statements, and understanding common errors for productivity. | \ud83d\ude10 **Programming is key in various industries**, offering diverse opportunities. **This guide covers programming fundamentals**, best practices, and improvement strategies. **Choose a programming language based on interests, goals, and resources.** |\n\nAll the models seem to struggle a bit with this question. They all retrieve at least one document that is not as relevant to the question. However, it is interesting to note that all the models retrieve the same document as their number one.\n\n**Query**: _What are some environment-friendly practices I can incorporate in everyday life?_\n\n| **voyage-lite-02-instruct**\u00a0 | **text-embedding-3-large**\u00a0 | **UAE-large-V1**\u00a0 |\n| :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| \ud83d\ude10 Conserve resources by reducing waste, reusing, and recycling, **reflecting Jawa culture's values** due to their planet's limited resources. Monitor consumption (e.g., water, electricity), repair goods, and join local environmental efforts. Eco-friendly practices **enhance personal and global well-being,** **aligning with Jawa values.** | Carry reusable bags for shopping, keeping extras in your car or bag. Choose sustainable alternatives like reusable water bottles and eco-friendly cutlery. 
Support businesses that minimize packaging and use biodegradable materials. | Educate others on eco-friendly practices; lead by example. Host workshops or discussion groups on sustainable living.Embody respect for the planet; every effort counts towards improvement. |\n| Learn and follow local recycling rules, rinse containers, and educate others on proper recycling. Opt for green transportation like walking, cycling, or electric vehicles, and check for incentives. Upgrade to energy-efficient options like LED lights, seal drafts, and consider renewable energy sources. | Opt for sustainable transportation, energy-efficient appliances, solar panels, and eat less meat to reduce emissions. Conserve water by fixing leaks, taking shorter showers, and using low-flow fixtures. Water conservation protects ecosystems, ensures food security, and reduces infrastructure stress. | Carry reusable bags for shopping, keeping extras in your car or bag. Choose sustainable alternatives like reusable water bottles and eco-friendly cutlery. Support businesses that minimize packaging and use biodegradable materials. |\n| \ud83d\ude10 **Consistently implement these steps**. **Actively contribute to a cleaner, greener world**. **Support resilience for future generations.** | Conserve water with low-flow fixtures, fix leaks, and use rainwater for gardening. Compost kitchen scraps to reduce waste and enrich soil, avoid meat and dairy. Shop locally at farmers markets and CSAs to lower emissions and support local economies. | Join local tree-planting events and volunteer at community gardens or restoration projects. Integrate native plants into landscaping to support pollinators and remove invasive species. Adopt eco-friendly transportation methods to decrease fossil fuel consumption. |\n\nWe see a similar trend with this query as with the previous two examples \u2014 the OpenAI model consistently retrieves documents that provide the most actionable tips, followed by the UAE model. The Voyage model provides more high-level advice.\n\nOverall, based on our preliminary evaluation, OpenAI\u2019s text-embedding-3-large model comes out on top. When working with real-world systems, however, a more rigorous evaluation of a larger dataset is recommended. Also, operational costs become an important consideration. More on evaluation coming in Part 2 of this series!\n\n## Conclusion\n\nIn this tutorial, we looked into how to choose the right model to embed data for RAG. The MTEB leaderboard is a good place to start, especially for text embedding models, but evaluating them on your data is important to find the best one for your RAG application. Storage and inference costs, embedding latency, and retrieval quality are all important parameters to consider while evaluating embedding models. 
The best model is typically one that offers the best trade-off across these dimensions.\n\nNow that you have a good understanding of embedding models, here are some resources to get started with building RAG applications using MongoDB:\n- [Using Latest OpenAI Embeddings in a RAG System With MongoDB\n- Building a RAG System With Google\u2019s Gemma, Hugging Face, and MongoDB\n- How to Build a RAG System With LlamaIndex, OpenAI, and MongoDB\n\nFollow along with these by creating a free MongoDB Atlas cluster and reach out to us in our Generative AI community forums if you have any questions.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt43ad2104f781d7fa/65eb303db5a879179e81a129/embeddings.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf5d51d2ee907cbc2/65eb329c2d59d4804e828e21/rag.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2f97b4a5ed1afa1a/65eb340799cd92ca89c0c0b5/top-10-mteb.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt46d3deb05ed920f8/65eb360e56de68aa49aa1f54/open-in-colab-github.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8049cc17064bda0b/65eb364e3eefeabfd3a5c969/connect-to-runtime-colab.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI"], "pageDescription": "In this tutorial, we will see why embeddings are important for RAG, and how to choose the right embedding model for your RAG application.", "contentType": "Tutorial"}, "title": "RAG Series Part 1: How to Choose the Right Embedding Model for Your Application", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/code-examples/java/spring-boot-reactive", "action": "created", "body": "# Reactive Java Spring Boot with MongoDB\n\n## Introduction\nSpring Boot +\nReactive +\nSpring Data +\nMongoDB. Putting these four technologies together can be a challenge, especially if you are just starting out.\nWithout getting into details of each of these technologies, this tutorial aims to help you get a jump start on a working code base based on this technology stack.\nThis tutorial features:\n- Interacting with MongoDB using ReactiveMongoRepositories.\n- Interacting with MongoDB using ReactiveMongoTemplate.\n- Wrapping queries in a multi-document ACID transaction.\n\nThis simplified cash balance application allows you to make REST API calls to:\n- Create or fetch an account.\n- Perform transactions on one account or between two accounts.\n\n## GitHub repository\nAccess the repository README for more details on the functional specifications.\nThe README also contains setup, API usage, and testing instructions. 
To clone the repository:\n\n```shell\ngit clone git@github.com:mongodb-developer/mdb-spring-boot-reactive.git\n```\n\n## Code walkthrough\nLet's do a logical walkthrough of how the code works.\nI would include code snippets, but to reduce verbosity, I will exclude lines of code that are not key to our understanding of how the code works.\n\n### Creating or fetching an account\nThis section showcases how you can perform Create and Read operations with `ReactiveMongoRepository`.\n\nThe API endpoints to create or fetch an account can be found \nin AccountController.java:\n\n```java\n@RestController\npublic class AccountController {\n //...\n @PostMapping(\"/account\")\n public Mono createAccount(@RequestBody Account account) {\n return accountRepository.save(account);\n }\n\n @GetMapping(\"/account/{accountNum}\")\n public Mono getAccount(@PathVariable String accountNum) {\n return accountRepository.findByAccountNum(accountNum).switchIfEmpty(Mono.error(new AccountNotFoundException()));\n }\n //...\n}\n```\nThis snippet shows two endpoints:\n- A POST method endpoint that creates an account\n- A GET method endpoint that retrieves an account but throws an exception if it cannot be found\n\nThey both simply return a `Mono` from AccountRepository.java, \na `ReactiveMongoRespository` interface which acts as an abstraction from the underlying\nReactive Streams Driver.\n- `.save(...)` method creates a new document in the accounts collection in our MongoDB database.\n- `.findByAccountNum()` method fetches a document that matches the `accountNum`.\n\n```java\npublic interface AccountRepository extends ReactiveMongoRepository {\n \n @Query(\"{accountNum:'?0'}\")\n Mono findByAccountNum(String accountNum);\n //...\n}\n```\n\nThe @Query annotation\nallows you to specify a MongoDB query with placeholders so that it can be dynamically substituted with values from method arguments.\n`?0` would be substituted by the value of the first method argument and `?1` would be substituted by the second, and so on and so forth.\n\nThe built-in query builder mechanism\ncan actually determine the intended query based on the method's name.\nIn this case, we could actually exclude the @Query annotation\nbut I left it there for better clarity and to illustrate the previous point.\n\nNotice that there is no need to declare a `save(...)` method even though we are actually using `accountRepository.save()` \nin AccountController.java.\nThe `save(...)` method, and many other base methods, are already declared by interfaces up in the inheritance chain of `ReactiveMongoRepository`.\n\n### Debit, credit, and transfer\nThis section showcases:\n- Update operations with `ReactiveMongoRepository`.\n- Create, Read, and Update operations with `ReactiveMongoTemplate`.\n\nBack to `AccountController.java`:\n```java\n@RestController\npublic class AccountController {\n //...\n @PostMapping(\"/account/{accountNum}/debit\")\n public Mono debitAccount(@PathVariable String accountNum, @RequestBody Map requestBody) {\n //...\n txn.addEntry(new TxnEntry(accountNum, amount));\n return txnService.saveTransaction(txn).flatMap(txnService::executeTxn);\n }\n\n @PostMapping(\"/account/{accountNum}/credit\")\n public Mono creditAccount(@PathVariable String accountNum, @RequestBody Map requestBody) {\n //...\n txn.addEntry(new TxnEntry(accountNum, -amount));\n return txnService.saveTransaction(txn).flatMap(txnService::executeTxn);\n }\n\n @PostMapping(\"/account/{from}/transfer\")\n public Mono transfer(@PathVariable String from, @RequestBody 
TransferRequest transferRequest) {\n //...\n txn.addEntry(new TxnEntry(from, -amount));\n txn.addEntry(new TxnEntry(to, amount));\n //save pending transaction then execute\n return txnService.saveTransaction(txn).flatMap(txnService::executeTxn);\n }\n //...\n}\n```\nThis snippet shows three endpoints:\n- A `.../debit` endpoint that adds to an account balance\n- A `.../credit` endpoint that subtracts from an account balance\n- A `.../transfer` endpoint that performs a transfer from one account to another\n\nNotice that all three methods look really similar. The main idea is:\n- A `Txn` can consist of one to many `TxnEntry`.\n- A `TxnEntry` is a reflection of a change we are about to make to a single account.\n- A debit or credit `Txn` will only have one `TxnEntry`.\n- A transfer `Txn` will have two `TxnEntry`.\n- In all three operations, we first save one record of the `Txn` we are about to perform, \nand then make the intended changes to the target accounts using the TxnService.java.\n\n```java\n@Service\npublic class TxnService {\n //...\n public Mono saveTransaction(Txn txn) {\n return txnTemplate.save(txn);\n }\n\n public Mono executeTxn(Txn txn) {\n return updateBalances(txn)\n .onErrorResume(DataIntegrityViolationException.class\n /*lambda expression to handle error*/)\n .onErrorResume(AccountNotFoundException.class\n /*lambda expression to handle error*/)\n .then(txnTemplate.findAndUpdateStatusById(txn.getId(), TxnStatus.SUCCESS));\n }\n\n public Flux updateBalances(Txn txn) {\n //read entries to update balances, concatMap maintains the sequence\n Flux updatedCounts = Flux.fromIterable(txn.getEntries()).concatMap(\n entry -> accountRepository.findAndIncrementBalanceByAccountNum(entry.getAccountNum(), entry.getAmount())\n );\n return updatedCounts.handle(/*...*/);\n }\n}\n```\nThe `updateBalances(...)` method is responsible for iterating through each `TxnEntry` and making the corresponding updates to each account.\nThis is done by calling the `findAndIncrementBalanceByAccountNum(...)` method \nin AccountRespository.java.\n\n```java\npublic interface AccountRepository extends ReactiveMongoRepository {\n //...\n @Update(\"{'$inc':{'balance': ?1}}\")\n Mono findAndIncrementBalanceByAccountNum(String accountNum, double increment);\n}\n```\nSimilar to declaring `find` methods, you can also declare Data Manipulation Methods\nin the `ReactiveMongoRepository`, such as `update` methods.\nOnce again, the query builder mechanism\nis able to determine that we are interested in querying by `accountNum` based on the naming of the method, and we define the action of an update using the `@Update` annotation.\nIn this case, the action is an `$inc` and notice that we used `?1` as a placeholder because we want to substitute it with the value of the second argument of the method.\n\nMoving on, in `TxnService` we also have:\n- A `saveTransaction` method that saves a `Txn` document into `transactions` collection.\n- A `executeTxn` method that calls `updateBalances(...)` and then updates the transaction status in the `Txn` document created.\n\nBoth utilize the `TxnTemplate` that contains a `ReactiveMongoTemplate`.\n\n```java\n@Service\npublic class TxnTemplate {\n //...\n public Mono save(Txn txn) {\n return template.save(txn);\n }\n\n public Mono findAndUpdateStatusById(String id, TxnStatus status) {\n Query query = query(where(\"_id\").is(id));\n Update update = update(\"status\", status);\n FindAndModifyOptions options = FindAndModifyOptions.options().returnNew(true);\n return 
template.findAndModify(query, update, options, Txn.class);\n }\n //...\n}\n```\nThe `ReactiveMongoTemplate` provides us with more customizable ways to interact with MongoDB and is a thinner layer of abstraction compared to `ReactiveMongoRepository`.\n\nIn the `findAndUpdateStatusById(...)` method, we are pretty much defining the query logic by code, but we are also able to specify that the update should return the newly updated document.\n\n### Multi-document ACID transactions\nThe transfer feature in this application is a perfect use case for multi-document transactions because the updates across two accounts need to be atomic.\n\nIn order for the application to gain access to Spring's transaction support, we first need to add a `ReactiveMongoTransactionManager` bean to our configuration as such:\n\n```java\n@Configuration\npublic class ReactiveMongoConfig extends AbstractReactiveMongoConfiguration {\n //...\n @Bean\n ReactiveMongoTransactionManager transactionManager(ReactiveMongoDatabaseFactory dbFactory) {\n return new ReactiveMongoTransactionManager(dbFactory);\n }\n}\n```\nWith this, we can proceed to define the scope of our transactions. We will showcase two methods:\n\n**1. Using _TransactionalOperator_**\n\nThe `ReactiveMongoTransactionManager` provides us with a `TransactionOperator`.\n\nWe can then define the scope of a transaction by appending `.as(transactionalOperator::transactional)` to the method call.\n```java\n@Service\npublic class TxnService {\n //In the actual code we are using constructor injection instead of @Autowired\n //Using @Autowired here to keep code snippet concise\n @Autowired\n private TransactionalOperator transactionalOperator;\n //...\n public Mono executeTxn(Txn txn) {\n return updateBalances(txn)\n .onErrorResume(DataIntegrityViolationException.class\n /*lambda expression to handle error*/)\n .onErrorResume(AccountNotFoundException.class\n /*lambda expression to handle error*/)\n .then(txnTemplate.findAndUpdateStatusById(txn.getId(), TxnStatus.SUCCESS))\n .as(transactionalOperator::transactional);\n }\n //...\n}\n```\n\n**2. Using _@Transactional_ annotation**\n\nWe can also simply define the scope of our transaction by annotating the method with the `@Transactional` annotation.\n```java\npublic class TxnService {\n //...\n @Transactional\n public Mono executeTxn(Txn txn) {\n return updateBalances(txn)\n .onErrorResume(DataIntegrityViolationException.class\n /*lambda expression to handle error*/)\n .onErrorResume(AccountNotFoundException.class\n /*lambda expression to handle error*/)\n .then(txnTemplate.findAndUpdateStatusById(txn.getId(), TxnStatus.SUCCESS));\n }\n //...\n}\n```\nRead more about transactions and sessions in Spring Data MongoDB for more information.\n\n## Conclusion\nWe are done! I hope this post was helpful for you in one way or another. 
If you have any questions, visit the MongoDB Community, where MongoDB engineers and the community can help you with your next big idea!\n\nOnce again, you may access the code from the GitHub repository,\nand if you are just getting started, it may be worth bookmarking Spring Data MongoDB.\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB", "Spring"], "pageDescription": "Quick start to Reactive Java Spring Boot and Spring Data MongoDB with an example application which includes implementations ofReactiveMongoRepository and ReactiveMongoTemplate and multi-document ACID transactions", "contentType": "Code Example"}, "title": "Reactive Java Spring Boot with MongoDB", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/aggregation-framework-springboot-jdk-coretto", "action": "created", "body": "# MongoDB Advanced Aggregations With Spring Boot and Amazon Corretto\n\n# Introduction\n\nIn this tutorial, we'll get into the understanding of aggregations and explore how to construct aggregation pipelines within your Spring Boot applications.\n\nIf you're new to Spring Boot, it's advisable to understand the fundamentals by acquainting yourself with the example template provided for performing Create, Read, Update, Delete (CRUD) operations with Spring Boot and MongoDB before delving into advanced aggregation concepts.\n\nThis tutorial serves as a complement to the example code template accessible in the GitHub repository. The code utilises sample data, which will be introduced later in the tutorial.\n\nAs indicated in the tutorial title, we'll compile the Java code using Amazon Corretto.\n\nWe recommend following the tutorial meticulously, progressing through each stage of the aggregation pipeline creation process.\n\nLet's dive in!\n\n# Prerequisites\n\nThis tutorial follows a few specifications mentioned below. Before you start practicing it, please make sure you have all the necessary downloads and uploads in your environment.\n\n1. Amazon Corretto 21 JDK.\n2. A free Atlas tier, also known as an M0 cluster.\n3. Sample Data loaded in the cluster.\n4. Spring Data Version 4.2.2.\n5. MongoDB version 6.0.3.\n6. MongoDB Java Driver version 4.11.1.\n\nLet\u2019s understand each of these in detail.\n\n# Understanding and installing Corretto\n\nCorretto comes with the ability to be a no-cost, multiplatform, production-ready open JDK. It also provides the ability to work across multiple distributions of Linux, Windows, and macOS.\n\nYou can read more about Amazon Corretto in Introduction to Amazon Corretto: A No-Cost Distribution of OpenJDK.\n\nWe will begin the tutorial with the first step of installing the Amazon Corretto 21 JDK and setting up your IDE with the correct JDK.\n\nStep 1: Install Amazon Corretto 21 from the official website based on the operating system specifications.\n\nStep 2: If you are on macOS, you will need to set the JAVA_HOME variable with the path for the Corretto. 
To do this, go to the system terminal and set the variable JAVA_HOME as:\n\n```\nexport JAVA_HOME=/Library/Java/JavaVirtualMachines/amazon-corretto-21.jdk/Contents/Home \n```\n\nOnce the variable is set, you should check if the installation is done correctly using:\n\n```\njava --version\nopenjdk 21.0.2 2024-01-16 LTS\nOpenJDK Runtime Environment Corretto-21.0.2.13.1 (build 21.0.2+13-LTS)\nOpenJDK 64-Bit Server VM Corretto-21.0.2.13.1 (build 21.0.2+13-LTS, mixed mode, sharing)\n```\n\nFor any other operating system, you will need to follow the steps mentioned in the official documentation from Java on how to set or change the PATH system variable and check if the version has been set.\n\nOnce the JDK is installed on the system, you can set up your IDE of choice to use Amazon Corretto to compile the code.\n\nAt this point, you have all the necessary environment components ready to kickstart your application.\n\n# Creating the Spring Boot application\n\nIn this part of the tutorial, we're going to explore how to write aggregation queries for a Spring Boot application.\n\nAggregations in MongoDB are like super-powered tools for doing complex calculations on your data and getting meaningful results back. They work by applying different operations to your data and then giving you the results in a structured way.\n\nBut before we get into the details, let's first understand what an aggregation pipeline is and how it operates in MongoDB.\n\nThink of an aggregation pipeline as a series of steps or stages that MongoDB follows to process your data. Each stage in the pipeline performs a specific task, like filtering or grouping your data in a certain way. And just like a real pipeline, data flows through each stage, with the output of one stage becoming the input for the next. This allows you to build up complex operations step by step to get the results you need.\n\nBy now, you should have the sample data loaded in your Atlas cluster. In this tutorial, we will be using the `sample_supplies.sales` collection for our aggregation queries.\n\nThe next step is cloning the repository from the link to test the aggregations. You can start by cloning the repository using the below command:\n\n```\ngit clone https://github.com/mongodb-developer/mongodb-springboot-aggregations\n```\n\nOnce the above step is complete, upon forking and cloning the repository to your local environment, it's essential to update the connection string in the designated placeholder within the `application.properties` file. 
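For reference, the updated entry will look something like the following. The property keys shown here are the standard Spring Data MongoDB ones and are only an assumption for illustration; use whatever placeholder the repository's `application.properties` actually defines:

```
# Assumed property names for a standard Spring Data MongoDB setup -- adjust to the repository's placeholder
spring.data.mongodb.uri=mongodb+srv://<username>:<password>@<cluster-url>/?retryWrites=true&w=majority
spring.data.mongodb.database=sample_supplies
```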
This modification enables seamless connectivity to your cluster during project execution.\n\n# README\n\nAfter cloning the repository and changing the URI in the environment variables, you can try running the REST APIs in your Postman application.\n\nAll the extra information and commands you need to get this project going are in the README.md file which you can read on GitHub.\n\n# Writing aggregation queries in Spring\n\nThe Aggregation Framework support in Spring Data MongoDB is based on the following key abstractions:\n\n- Aggregation\n- AggregationDefinition\n- AggregationResults\n\nThe Aggregation Framework support in Spring Data MongoDB is based on the following key abstractions: Aggregation, AggregationDefinition, and AggregationResults.\n\nWhile writing the aggregation queries, the first step is to generate the pipelines to perform the computations using the operations supported.\n\nThe documentation on spring.io explains each step clearly and gives simple examples to help you understand.\n\nFor the tutorial, we have the REST APIs defined in the SalesController.java class, and the methods have been mentioned in the SalesRepository.java class.\n\nThe first aggregation makes use of a simple $match operation to find all the documents where the `storeLocation` has been specified as the match value.\n\n```\ndb.sales.aggregate({ $match: { \"storeLocation\": \"London\"}}])\n```\n\nAnd now when we convert the aggregation to the spring boot function, it would look like this:\n\n```\n@Override\npublic List matchOp(String matchValue) {\nMatchOperation matchStage = match(new Criteria(\"storeLocation\").is(matchValue));\nAggregation aggregation = newAggregation(matchStage);\nAggregationResults results = mongoTemplate.aggregate(aggregation, \"sales\", SalesDTO.class);\nreturn results.getMappedResults();\n}\n```\nIn this Spring Boot method, we utilise the `MatchOperation` to filter documents based on the specified criteria, which in this case is the `storeLocation` matching the provided value. The aggregation is then executed using the `mongoTemplate` to aggregate data from the `sales` collection into `SalesDTO` objects, returning the mapped results.\n\nThe REST API can be tested using the curl command in the terminal which shows all documents where `storeLocation` is `London`.\n\nThe next aggregation pipeline that we have defined with the rest API is to group all documents according to `storeLocation` and then calculate the total sales and the average satisfaction based on the `matchValue`. This stage makes use of the `GroupOperation` to perform the evaluation.\n\n```\n@Override\npublic List groupOp(String matchValue) {\nMatchOperation matchStage = match(new Criteria(\"storeLocation\").is(matchValue));\nGroupOperation groupStage = group(\"storeLocation\").count()\n .as(\"totalSales\")\n .avg(\"customer.satisfaction\")\n .as(\"averageSatisfaction\");\nProjectionOperation projectStage = project(\"storeLocation\", \"totalSales\", \"averageSatisfaction\");\nAggregation aggregation = newAggregation(matchStage, groupStage, projectStage);\nAggregationResults results = mongoTemplate.aggregate(aggregation, \"sales\", GroupDTO.class);\nreturn results.getMappedResults();\n}\n```\n\nThe REST API call would look like below:\n\n```bash\ncurl http://localhost:8080/api/sales/aggregation/groupStage/Denver | jq\n```\n\n![Total sales and the average satisfaction for storeLocation as \"Denver\"\n\nThe next REST API is an extension that will streamline the above aggregation. 
In this case, we will be calculating the total sales for each store location. Therefore, you do not need to specify the store location and directly get the value for all the locations.\n```\n@Override\npublic List TotalSales() {\nGroupOperation groupStage = group(\"storeLocation\").count().as(\"totalSales\");\nSkipOperation skipStage = skip(0);\nLimitOperation limitStage = limit(10);\nAggregation aggregation = newAggregation(groupStage, skipStage, limitStage);\nAggregationResults results = mongoTemplate.aggregate(aggregation, \"sales\", TotalSalesDTO.class);\nreturn results.getMappedResults();\n}\n```\n\nAnd the REST API calls look like below:\n\n```bash\ncurl http://localhost:8080/api/sales/aggregation/TotalSales | jq\n```\n\nThe next API makes use of $sort and $limit operations to calculate the top 5 items sold in each category.\n\n```\n@Override\npublic List findPopularItems() {\nUnwindOperation unwindStage = unwind(\"items\");\nGroupOperation groupStage = group(\"$items.name\").sum(\"items.quantity\").as(\"totalQuantity\");\nSortOperation sortStage = sort(Sort.Direction.DESC, \"totalQuantity\");\nLimitOperation limitStage = limit(5);\nAggregation aggregation = newAggregation(unwindStage,groupStage, sortStage, limitStage);\nreturn mongoTemplate.aggregate(aggregation, \"sales\", PopularDTO.class).getMappedResults();\n}\n```\n\n```bash\ncurl http://localhost:8080/api/sales/aggregation/PopularItem | jq\n```\n\nThe last API mentioned makes use of the $bucket to create buckets and then calculates the count and total amount spent within each bucket.\n\n```\n@Override\npublic List findTotalSpend(){\nProjectionOperation projectStage = project()\n .and(ArrayOperators.Size.lengthOfArray(\"items\")).as(\"numItems\")\n .and(ArithmeticOperators.Multiply.valueOf(\"price\")\n .multiplyBy(\"quantity\")).as(\"totalAmount\");\n\nBucketOperation bucketStage = bucket(\"numItems\")\n .withBoundaries(0, 3, 6, 9)\n .withDefaultBucket(\"Other\")\n .andOutputCount().as(\"count\")\n .andOutput(\"totalAmount\").sum().as(\"totalAmount\");\n\nAggregation aggregation = newAggregation(projectStage, bucketStage);\nreturn mongoTemplate.aggregate(aggregation, \"sales\", BucketsDTO.class).getMappedResults();\n}\n```\n\n```bash\ncurl http://localhost:8080/api/sales/aggregation/buckets | jq\n```\n\n# Conclusion\n\nThis tutorial provides a comprehensive overview of aggregations in MongoDB and how to implement them in a Spring Boot application. We have learned about the significance of aggregation queries for performing complex calculations on data sets, leveraging MongoDB's aggregation pipeline to streamline this process effectively.\n\nAs you continue to experiment and apply these concepts in your applications, feel free to reach out on our MongoDB community forums. 
Remember to explore further resources in the MongoDB Developer Center and documentation to deepen your understanding and refine your skills in working with MongoDB aggregations.", "format": "md", "metadata": {"tags": ["Java", "MongoDB", "Spring"], "pageDescription": "This tutorial will help you create MongoDB aggregation pipelines using Spring Boot applications.", "contentType": "Tutorial"}, "title": "MongoDB Advanced Aggregations With Spring Boot and Amazon Corretto", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/azure-kubernetes-services-java-microservices", "action": "created", "body": "# Using Azure Kubernetes Services for Java Spring Boot Microservices\n\n## Introduction\nIn the early days of software development, application development consisted of monolithic codebases. With challenges in scaling, singular points of failure, and inefficiencies in updating, a solution was proposed. A modular approach. A symphony of applications managing their respective domains in harmony. This is achieved using microservices.\n\nMicroservices are an architectural approach that promotes the division of applications into smaller, loosely coupled services. This allows application code to be delivered in manageable pieces, independent of each other. These services operate independently, addressing a lot of the concerns of monolithic applications mentioned above.\n\nWhile each application has its own needs, microservices have proven themselves as a viable solution time and time again, as you can see in the success of the likes of Netflix.\n\nIn this tutorial, we are going to deploy a simple Java Spring Boot microservice application, hosted on the Azure Kubernetes Service (AKS). AKS simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. We'll explore containerizing our application and setting up communication between our APIs, a MongoDB database, and the external world. You can access the full code here:\n\n```bash\ngit clone https://github.com/mongodb-developer/simple-movie-microservice.git\n```\n\nThough we won't dive into the most advanced microservice best practices and design patterns, this application gives a simplistic approach that will allow you to write reviews for the movies in the MongoDB sample data, by first communicating with the review API and that service, verifying that the user and the movie both exist. The architecture will look like this.\n\n, we simply send a request to `http://user-management-service/users/`. In this demo application, communication is done with RESTful HTTP/S requests, using RestTemplate. \n\n## Prerequisites\nBefore you begin, you'll need a few prerequisites to follow along with this tutorial, including:\n- A MongoDB Atlas account, if you don't have one already, with a cluster ready with the MongoDB sample data.\n- A Microsoft Azure account with an active subscription.\n- Azure CLI, or you can install Azure PowerShell, but this tutorial uses Azure CLI. 
Sign in and configure your command line tool following the steps in the documentation for Azure CLI and Azure PowerShell.\n- Docker for creating container images of our microservices.\n- Java 17.\n- Maven 3.9.6.\n\n## Set up an Azure Kubernetes Service cluster\nStarting from the very beginning, set up an Azure Kubernetes Service (AKS) cluster.\n\n### Install kubectl and create an AKS cluster\nInstall `kubectl`, the Kubernetes command-line tool, via the Azure CLI with the following command (you might need to sudo this command), or you can download the binaries from:\n```bash\naz aks install-cli\n```\n\nLog into your Azure account using the Azure CLI:\n```bash\naz login\n```\n\nCreate an Azure Resource Group:\n```bash\naz group create --name myResourceGroup --location northeurope\n```\n\nCreate an AKS cluster: Replace `myAKSCluster` with your desired cluster name. (This can take a couple of minutes.)\n```bash\naz aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --enable-addons monitoring --generate-ssh-keys\n```\n\n### Configure kubectl to use your AKS cluster\nAfter successfully creating your AKS cluster, you can proceed to configure `kubectl` to use your new AKS cluster. Retrieve the credentials for your AKS cluster and configure `kubectl`:\n```bash\naz aks get-credentials --resource-group myResourceGroup --name myAKSCluster\n```\n\n### Create an Azure Container Registry (ACR)\nCreate an ACR to store and manage container images across all types of Azure deployments:\n```bash\naz acr create --resource-group --name --sku Basic\n```\n> Note: Save the app service id here. We\u2019ll need it later when we are creating a service principal.\n\nLog into ACR:\n```bash\naz acr login --name \n```\n## Containerize your microservices application\nEach of your applications (User Management, Movie Catalogue, Reviews) has a `Dockerfile`. Create a .jar by running the command `mvn package` for each application, in the location of the pom.xml file. Depending on your platform, the following steps are slightly different.\n\nFor those wielding an M1 Mac, a bit of tweaking is in order due to our image's architecture. As it stands, Azure Container Apps can only jive with linux/amd64 container images. However, the M1 Mac creates images as `arm` by default. To navigate this hiccup, we'll be leveraging Buildx, a handy Docker plugin. Buildx allows us to build and push images tailored for a variety of platforms and architectures, ensuring our images align with Azure's requirements.\n\n### Build the Docker image (not M1 Mac)\nTo build your image, make sure you run the following command in the same location as the `Dockerfile`. Repeat for each application.\n```bash\ndocker build -t movie-catalogue-service .\n```\n**Or** you can run the following command from the simple-movie-microservice folder to loop through all three repositories.\n\n```bash\nfor i in movie-catalogue reviews user-management; do cd $i; ./mvnw clean package; docker build -t $i-service .; cd -; done\n```\n### Build the Docker image (M1 Mac)\nIf you are using an M1 Mac, use the following commands to use Buildx to create your images:\n```bash\ndocker buildx install\n```\n\nNext, enable Buildx to use the Docker CLI:\n```bash\ndocker buildx create --use\n```\n\nOpen a terminal and navigate to the root directory of the microservice where the `Dockerfile` is located. 
Run the following command to build the Docker image, replacing `movie-catalogue-service` with the appropriate name for each service.\n```bash\ndocker buildx build --platform linux/amd64 -t movie-catalogue-service:latest --output type=docker .\n```\n\n### Tag and push\nNow, we're ready to tag and push your images. Replace `` with your actual ACR name. Repeat these two commands for each microservice. \n```bash\ndocker tag movie-catalogue-service .azurecr.io/movie-catalogue-service:latest\ndocker push .azurecr.io/movie-catalogue-service:latest\n```\n**Or** run this script in the terminal, like before:\n\n```bash\nACR_NAME=\".azurecr.io\"\n\nfor i in movie-catalogue reviews user-management; do \n # Tag the Docker image for Azure Container Registry\n docker tag $i-service $ACR_NAME/$i-service:latest\n # Push the Docker image to Azure Container Registry\n docker push $ACR_NAME/$i-service:latest\ndone\n```\n\n## Deploy your microservices to AKS\nNow that we have our images ready, we need to create Kubernetes deployment and service YAML files for each microservice. We are going to create one *mono-file* to create the Kubernetes objects for our deployment and services. We also need one to store our MongoDB details. It is good practice to use secrets for sensitive data like the MongoDB URI.\n\n### Create a Kubernetes secret for MongoDB URI\nFirst, you'll need to create a secret to securely pass the MongoDB connection string to your microservices. In Kubernetes, the data within a secret object is stored as base64-encoded strings. This encoding is used because it allows you to store binary data in a format that can be safely represented and transmitted as plain text. It's not a form of encryption or meant to secure the data, but it ensures compatibility with systems that may not handle raw binary data well.\n\nCreate a Kubernetes secret that contains the MongoDB URI and database name. You will encode these values in Base64 format, but Kubernetes will handle them as plain text when injecting them into your pods. You can encode them with the bash command, and copy them into the YAML file, next to the appropriate data keys:\n\n```bash\necho -n 'your-mongodb-uri' | base64\necho -n 'your-database-name' | base64\n```\n\nThis is the mongodb-secret.yaml.\n```yaml\napiVersion: v1\nkind: Secret\nmetadata:\n name: mongodb-secret\ntype: Opaque\ndata:\n MONGODB_URI: \n MONGODB_DATABASE: \n\n```\n\nRun the following command to apply your secrets:\n```bash\nkubectl apply -f mongodb-secret.yaml\n```\n\nSo, while base64 encoding doesn't secure the data, it formats it in a way that's safe to store in the Kubernetes API and easy to consume from your applications running in pods.\n\n### Authorize access to the ACR\nIf your ACR is private, you'll need to ensure that your Kubernetes cluster has the necessary credentials to access it. You can achieve this by creating a Kubernetes secret with your registry credentials and then using that secret in your deployments. \n\nThe next step is to create a service principal or use an existing one that has access to your ACR. This service principal needs the `AcrPull` role assigned to be able to pull images from the ACR. 
Replace ``, ``, ``, and `` with your own values.\n: This can be any unique identifier you want to give this service principal.\n: You can get the id for the subscription you\u2019re using with `az account show --query id --output tsv`.\n: Use the same resource group you have your AKS set up in.\n: This is the Azure Container Registry you have your images stored in.\n\n```bash\naz ad sp create-for-rbac --name --role acrPull --scopes /subscriptions//resourceGroups//providers/Microsoft.ContainerRegistry/registries/\n```\n\nThis command will output JSON that looks something like this:\n```bash\n{ \n\"appId\": \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \n\"displayName\": \"\", \n\"password\": \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\", \n\"tenant\": \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\" \n}\n```\n\n- `appId` is your ``.\n- `password` is your ``.\n\n**Note:** It's important to note that the `password` is only displayed once at the creation time. Make sure to copy and secure it.\n\n**Create a Kubernetes secret with the service principal's credentials.** You can do this with the following command:\n\n```bash\nkubectl create secret docker-registry acr-auth \\\n --namespace default \\\n --docker-server=.azurecr.io \\\n --docker-username= \\\n --docker-password= \\\n --docker-email=\n```\n\n### Create Kubernetes deployment and service YAML files\nThere are a couple of points to note in the YAML file for this tutorial, but these points are not exhaustive of everything happening in this file. If you want to learn more about configuring your YAML for Kubernetes, check out the documentation for configuring Kubernetes objects.\n- We will have our APIs exposed externally. This means you will be able to access the endpoints from the addresses we'll receive when we have everything running. Setting the `type: LoadBalancer` triggers the cloud provider's load balancer to be provisioned automatically. 
The external load balancer will be configured to route traffic to the Kubernetes service, which in turn routes traffic to the appropriate pods based on the service's selector.\n- The `containers:` section defines a single container named `movie-catalogue-service`, using an image specified by `/movie-catalogue-service:latest`.\n- `containerPort: 8080` exposes port 8080 inside the container for network communication.\n- Environment variables `MONGODB_URI` and `MONGODB_DATABASE` are set using values from secrets (`mongodb-secret`), enhancing security by not hardcoding sensitive information.\n- `imagePullSecrets: - name: acr-auth` allows Kubernetes to authenticate to a private container registry to pull the specified image, using the secret we just created.\n\n```yaml\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: movie-catalogue-service-deployment\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: movie-catalogue-service\n template:\n metadata:\n labels:\n app: movie-catalogue-service\n spec:\n containers:\n - name: movie-catalogue-service\n image: /movie-catalogue-service:latest\n ports:\n - containerPort: 8080\n env:\n - name: MONGODB_URI\n valueFrom:\n secretKeyRef:\n name: mongodb-secret\n key: MONGODB_URI\n - name: MONGODB_DATABASE\n valueFrom:\n secretKeyRef:\n name: mongodb-secret\n key: MONGODB_DATABASE\n imagePullSecrets:\n - name: acr-auth\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: movie-catalogue-service\nspec:\n selector:\n app: movie-catalogue-service\n ports:\n - protocol: TCP\n port: 80\n targetPort: 8080\n type: LoadBalancer\n---\n```\n\nRemember, before applying your Kubernetes YAML files, make sure your Kubernetes cluster has access to your ACR. You can configure this by granting AKS the ACRPull role on your ACR:\n\n```bash\naz aks update -n -g --attach-acr \n```\n\nReplace ``, ``, and `` with your AKS cluster name, Azure resource group name, and ACR name, respectively.\n\n### Apply the YAML file\nApply the YAML file with `kubectl`:\n```bash\nkubectl apply -f all-microservices.yaml\n```\n## Access your services\nOnce deployed, it may take a few minutes for the LoadBalancer to be provisioned and for the external IP addresses to be assigned. You can check the status of your services with:\n```bash \nkubectl get services\n```\n\nLook for the external IP addresses for your services and use them to access your microservices.\n\nAfter deploying, ensure your services are running:\n```bash\nkubectl get pods\n```\nAccess your services based on the type of Kubernetes service you've defined (e.g., LoadBalancer in our case) and perform your tests.\n\nYou can test if the endpoint is running with the CURL command:\n```bash\ncurl -X POST http:///reviews \\\n -H \"Content-Type: application/json\" \\\n -d '{\"movieId\": \"573a1391f29313caabcd68d0\", \"userId\": \"59b99db5cfa9a34dcd7885b8\", \"rating\": 4}'\n```\n\nAnd this review should now appear in your database. You can check with a simple:\n```bash\ncurl -X GET http:///reviews\n```\n\nHooray!\n\n## Conclusion\nAs we wrap up this tutorial, it's clear that embracing microservices architecture, especially when paired with the power of Kubernetes and Azure Kubernetes Service (AKS), can significantly enhance the scalability, maintainability, and deployment flexibility of applications. 
Through the practical deployment of a simple microservice application using Java Spring Boot on AKS, we've demonstrated the steps and considerations involved in bringing a microservice architecture to life in the cloud.\n\nKey takeaways:\n- **Modular approach**: The transition from monolithic to microservices architecture facilitates a modular approach to application development, enabling independent development, deployment, and scaling of services.\n- **Simplified Kubernetes deployment**: AKS abstracts away much of the complexity involved in managing a Kubernetes cluster, offering a streamlined path to deploying microservices at scale.\n- **Inter-service communication**: Utilizing Kubernetes' internal DNS for service discovery simplifies the communication between services within a cluster, making microservice interactions more efficient and reliable.\n- **Security and configuration best practices**: The tutorial underscored the importance of using Kubernetes secrets for sensitive configurations and the Azure Container Registry for securely managing and deploying container images.\n- **Exposing services externally**: By setting services to `type: LoadBalancer`, we've seen how to expose microservices externally, allowing for easy access and integration with other applications and services.\n\nThe simplicity and robustness of Kubernetes, combined with the scalability of AKS and the modularity of microservices, equip developers with the tools necessary to build complex applications that are both resilient and adaptable. If you found this tutorial useful, find out more about what you can do with MongoDB and Azure on our Developer Center.\n\nAre you ready to start building with Atlas on Azure? Get started for free today with MongoDB Atlas on Azure Marketplace.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7bcbee53fc14653b/661808089070f0c0a8d50771/AKS_microservices.png", "format": "md", "metadata": {"tags": ["Java", "Azure", "Spring", "Kubernetes"], "pageDescription": "Learn how to deploy your Java Spring Boot microservice to Azure Kubernetes Services.", "contentType": "Tutorial"}, "title": "Using Azure Kubernetes Services for Java Spring Boot Microservices", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/ai-shop-mongodb-atlas-langchain-openai", "action": "created", "body": "# AI Shop: The Power of LangChain, OpenAI, and MongoDB Atlas Working Together\n\nBuilding AI applications in the last few months has made my mind run into different places, mostly inspired by ideas and new ways of interacting with sources of information. After eight years at MongoDB, I can clearly see the potential of MongoDB when it comes to powering AI applications. Surprisingly, it's the same main fundamental reason users chose MongoDB and MongoDB Atlas up until the generative AI era, and it's the document model flexibility. \n\nUsing unstructured data is not always easy. The data produced by GenAI models is considered highly unstructured. It can come in different wording formats as well as sound, images, and even videos. Applications are efficient and built correctly when the application can govern and safely predict data structures and inputs. 
Therefore, in order to build successful AI applications, we need a method to turn unstructured data into what we call *semi-structured* or flexible documents.\n\nOnce we can fit our data stream into a flexible pattern, we are in power of efficiently utilizing this data and providing great features for our users. \n\n## RAG as a fundamental approach to building AI applications\n\nIn light of this, retrieval-augmented generation (RAG) emerges as a pivotal methodology in the realm of AI development. This approach synergizes the retrieval of information and generative processes to refine the quality and relevance of AI outputs. By leveraging the document model flexibility inherent to MongoDB and MongoDB Atlas, RAG can dynamically incorporate a vast array of unstructured data, transforming it into a more manageable semi-structured format. This is particularly advantageous when dealing with the varied and often unpredictable data produced by AI models, such as textual outputs, auditory clips, visual content, and video sequences. \n\nMongoDB's prowess lies in its ability to act as a robust backbone for RAG processes, ensuring that AI applications can not only accommodate but also thrive on the diversity of generative AI data streams. The integration of MongoDB Atlas with features like vector search and the linguistic capabilities of LangChain, detailed in RAG with Atlas Vector Search, LangChain, and OpenAI, exemplifies the cutting-edge potential of MongoDB in harnessing the full spectrum of AI-generated content. This seamless alignment between data structuring and AI innovation positions MongoDB as an indispensable asset in the GenAI era, unlocking new horizons for developers and users alike\n\nOnce we can fit our data stream into a flexible pattern we are in power of efficiently utilising this data and provide great features for our users. \n\n## Instruct to struct unstructured AI structures\n\nTo demonstrate the ability of Gen AI models like Open AI chat/image generation I decided to build a small grocery store app that provides a catalog of products to the user. Searching for online grocery stores is now a major portion of world wide shopping habits and I bet almost all readers have used those.\n\nHowever, I wanted to take the user experience to another level by providing a chatbot which anticipate users' grocery requirements. Whether it's from predefined lists, casual text exchanges, or specific recipe inquiries like \"I need to cook a lasagne, what should I buy?\". \n\nThe stack I decided to use is: \n* A MongoDB Atlas cluster to store products, categories, and orders.\n* Atlas search indexes to power vector search (semantic search based on meaning).\n* Express + LangChain to orchestrate my AI tasks.\n* OpenAI platform API - GPT4, GPT3.5 as my AI engine.\n\nI quickly realized that in any application I will build with AI, I want to control the way my inputs are passed and produced by the AI, at least their template structure.\n\nSo in the store query, I want the user to provide a request and the AI to produce a list of potential groceries. \n\nAs I don\u2019t know how many ingredients there are or what their categories and types are, I need the template to be flexible enough to describe the list in a way my application can safely traverse it further down the search pipeline. 
\n\nThe structured I decided to use is: \n```javascript\nconst schema = z.object({\n\"shopping_list\": z.array(z.object({\n\"product\": z.string().describe(\"The name of the product\"),\n\"quantity\": z.number().describe(\"The quantity of the product\"),\n\"unit\": z.string().optional(),\n\"category\": z.string().optional(),\n})),\n}).deepPartial();\n```\n\nI have used a `zod` package which is recommended by LangChain in order to describe the expected schema. Since the shopping_list is an array of objects, it can host N entries filled by the AI, However, their structure is strictly predictable.\n\nAdditionally, I don\u2019t want the AI engine to provide me with ingredients or products that are far from the categories I\u2019m selling in my shop. For example, if a user requests a bicycle from a grocery store, the AI model should have context that it's not reasonable to have something for the user. Therefore, the relevant categories that are stored in the database have to be provided as context to the model. \n\n```javascript\n // Initialize OpenAI instance\n const llm = new OpenAI({ \n openAIApiKey: process.env.OPEN_AI_KEY,\n modelName: \"gpt-4\",\n temperature: 0\n });\n \n // Create a structured output parser using the Zod schema\n const outputParser = StructuredOutputParser.fromZodSchema(schema);\n const formatInstructions = outputParser.getFormatInstructions();\n \n // Create a prompt template\n const prompt = new PromptTemplate({\n template: \"Build a user grocery list in English as best as possible, if all the products does not fit the categories output empty list, however if some does add only those. \\n{format_instructions}\\n possible category {categories}\\n{query}. Don't output the schema just the json of the list\",\n inputVariables: \"query\", \"categories\"],\n partialVariables: { format_instructions: formatInstructions },\n });\n```\nWe take advantage of the LangChain library to turn the schema into a set of instructions and produce an engineering prompt consisting of the category documents we fetched from our database and the extraction instructions.\n\nThe user query has a flexible requirement to be built by an understandable schema by our application. 
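To make this concrete, a request such as "I need to cook a lasagne, what should I buy?" is expected to come back from the model as a document matching the schema above. The values below are purely illustrative (not actual model output), but they show the shape the rest of the code relies on:

```javascript
// Illustrative only: the kind of parsed output the application expects from the LLM
const exampleShoppingList = {
  shopping_list: [
    { product: "lasagne sheets", quantity: 1, unit: "box", category: "Pasta" },
    { product: "tomato sauce", quantity: 2, unit: "can", category: "Sauces" },
    { product: "parmesan", quantity: 1, unit: "piece", category: "Cheese" }
  ]
};
```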
The rest of the code only needs to validate and access the well formatted lists of products provided by the LLM.\n```javascript\n // Fetch all categories from the database\n const categories = await db.collection('categories').find({}, { \"_id\": 0 }).toArray();\n const docs = categories.map((category) => category.categoryName);\n \n // Format the input prompt\n const input = await prompt.format({\n query: query,\n categories: docs\n });\n\n // Call the OpenAI model\n const response = await llm.call(input);\n const responseDoc = await outputParser.parse(response);\n \n let shoppingList = responseDoc.shopping_list;\n // Embed the shopping list\n shoppingList = await placeEmbeddings(shoppingList);\n```\n\nHere is an example of how this list might look like: \n![Document with Embeddings\n\n## LLM to embeddings\nA structured flexible list like this will allow me to create embeddings for each of those terms found by the LLM as relevant to the user input and the categories my shop has.\n\nFor simplicity reasons, I am going to only embed the product name.\n```javascript\nconst placeEmbeddings = async (documents) => {\n\n const embeddedDocuments = documents.map(async (document) => {\n const embeddedDocument = await embeddings.embedQuery(document.product);\n document.embeddings = embeddedDocument;\n return document;\n });\n return Promise.all(embeddedDocuments);\n};\n```\nBut in real life applications, we can provide the attributes to quantity or unit inventory search filtering.\n\nFrom this point, coding and aggregation that will fetch three candidates for each product is straightforward. \n\nIt will be a vector search for each item connected in a union with the next item until the end of the list. \n\n## Embeddings to aggregation\n```javascript\n {$vectorSearch: // product 1 (vector 3 alternatives)},\n { $unionWith : { $search : //product 2...},\n { $unionWith : { $search : //product 3...}]\n```\nFinally, I will reshape the data so each term will have an array of its three candidates to make the frontend coding simpler.\n```\n[ { searchTerm : \"parmesan\" ,\n Products : [ //parmesan 1, //parmesan 2, // Mascarpone ]},\n ...\n]\n```\nHere\u2019s my NodeJS server-side code to building the vector search:\n``` javascript\nconst aggregationQuery = [\n { \"$vectorSearch\": {\n \"index\": \"default\",\n \"queryVector\": shoppingList[0].embeddings,\n \"path\": \"embeddings\",\n \"numCandidates\": 20,\n \"limit\": 3\n }\n },\n { $addFields: { \"searchTerm\": shoppingList[0].product } },\n ...shoppingList.slice(1).map((item) => ({\n $unionWith: {\n coll: \"products\",\n pipeline: [\n {\n \"$search\": {\n \"index\": \"default\",\n \"knnBeta\": {\n \"vector\": item.embeddings,\n \"path\": \"embeddings\",\n \"k\": 20\n }\n }\n },\n {$limit: 3},\n { $addFields: { \"searchTerm\": item.product } }\n ]\n }\n })),\n { $group: { _id: \"$searchTerm\", products: { $push: \"$$ROOT\" } } },\n { $project: { \"_id\": 0, \"category\": \"$_id\", \"products.title\": 1, \"products.description\": 1,\"products.emoji\" : 1, \"products.imageUrl\" : 1,\"products.price\": 1 } }\n ]\n```\n## The process\nThe process we presented here can be applied to a massive amount of use cases. Let\u2019s reiterate it according to the chart below.\n![RAG-AI-Diagram\nIn this context, we have enriched our product catalog with embeddings on the title/description of the products. We\u2019ve also provided the categories and structuring instructions as context to engineer our prompt. 
Finally, we pipped the prompt through the LLM which creates a manageable list that can be transformed to answers and follow-up questions. \n\nEmbedding LLM results can create a chain of semantic searches whose results can be pipped back to LLMs or manipulated smartly by the robust aggregation framework.\n\nEventually, data becomes clay we can shape and morph using powerful LLMs and combining with aggregation pipelines to add relevance and compute power to our applications.\n\nFor the full example and step-by-step tutorial to set up the demo grocery store, use the GitHub project.\n\n## Summary\nIn conclusion, the journey of integrating AI with MongoDB showcases the transformative impact of combining generative AI capabilities with MongoDB's dynamic data model. The flexibility of MongoDB's document model has proven to be the cornerstone for managing the unpredictable nature of AI-generated data, paving the way for innovative applications that were previously inconceivable. Through the use of structured schemas, vector searches, and the powerful aggregation framework, developers can now craft AI-powered applications that not only understand and predict user intent but also offer unprecedented levels of personalization and efficiency.\n\nThe case study of the grocery store app exemplifies the practical application of these concepts, illustrating how a well-structured data approach can lead to more intelligent and responsive AI interactions. MongoDB stands out as an ideal partner for AI application development, enabling developers to structure, enrich, and leverage unstructured data in ways that unlock new possibilities.\n\nAs we continue to explore the synergy between MongoDB and AI, it is evident that the future of application development lies in our ability to evolve data management techniques that can keep pace with the rapid advancements in AI technology. MongoDB's role in this evolution is indispensable, as it provides the agility and power needed to turn the challenges of unstructured data into opportunities for innovation and growth in the GenAI era.\n\nWant to continue the conversation? Meet us over in the MongoDB Developer Community.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Node.js", "AI"], "pageDescription": "Explore the synergy of MongoDB Atlas, LangChain, and OpenAI GPT-4 in our cutting-edge AI Shop application. Discover how flexible document models and advanced AI predictions revolutionize online shopping, providing personalized grocery lists from simple recipe requests. Dive into the future of retail with our innovative AI-powered solutions.", "contentType": "Article"}, "title": "AI Shop: The Power of LangChain, OpenAI, and MongoDB Atlas Working Together", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-stream-processing-development-guide", "action": "created", "body": "# Introduction to Atlas Stream Processing Development\n\nWelcome to this MongoDB Stream Processing tutorial! In this guide, we will quickly set up a coding workflow and have you write and run your first Stream Processing Instance in no time. 
In a very short time, we'll learn how to create a new stream processor instance, conveniently code and execute stream processors from Visual Studio Code, and simply aggregate stream data, thus opening the door to a whole new field of the MongoDB Atlas developer data platform.\n\nWhat we'll cover\n----------------\n\n- Prerequisites\n- Setup\n- Create a stream processor instance\n- Set up Visual Studio Code\n- The anatomy of a stream processor\n- Let's execute a stream processor!\n- Hard-coded data in $source declaration\n- Simplest stream processor\n- Stream processing aggregation\n- Add time stamps to the data\n\nPrerequisites\n-------------\n\n- Basic knowledge of the MongoDB Aggregation Pipeline and Query API\n\n- Ideally, read the official high-level Atlas Stream Processing overview\u00a0\n\n- A live MongoDB Atlas cluster that supports stream processing\n\n- Visual Studio Code + MongoDB for VS Code extension\n\n## Setup\n### Create an Atlas stream processing instance\n\nWe need to have an Atlas Stream Processing Instance (SPI) ready. Follow the steps in the tutorial Get Started with Atlas Stream Processing: Creating Your First Stream Processor until we have our connection string and username/password, then come back here.\n\nDon't forget to add your IP address to the Atlas Network Access to allow the client to access the instance.\n\n### Set up Visual Studio Code for MongoDB Atlas Stream Processing\n\nThanks to the MongoDB for VS Code extension, we can rapidly develop stream processing (SP) aggregation pipelines and run them directly from inside a VS Code MongoDB playground. This provides a much better developer experience. In the rest of this article, we'll be using VS Code.\n\nSuch a playground is a NodeJS environment where we can execute JS code interacting with a live stream processor on MongoDB Atlas. To get started, install VS Code and the MongoDB for VS Code extension.\n\nBelow is a great tutorial about installing the extension. It also lists some shell commands we'll need later.\n\n- **Tutorial**: Introducing Atlas Stream Processing Support Within the MongoDB for VS Code Extension\n\n- **Goal**: If everything works, we should see our live SP connection in the MongoDB Extension tab.\n\n. It is described by an array of processing stages. However, there are some differences. The most basic SP can be created using only its data source (we'll have executable examples next).\n\n```\n// our array of stages\n// source is defined earlier\nsp_aggregation_pipeline = source]\nsp.createStreamProcessor(\"SP_NAME\", sp_aggregation_pipeline, )\n```\n\nA more realistic stream processor would contain at least one aggregation stage, and there can be a large number of stages performing various operations to the incoming data stream. There's a generous limit of 16MB for the total processor size.\n\n```\nsp_aggregation_pipeline = [source, stage_1, stage_2...]\nsp.createStreamProcessor(\"SP_NAME\", sp_aggregation_pipeline, )\n```\n\nTo increase the development loop velocity, there's an sp.process() function which starts an ephemeral stream processor that won't persist in your stream processing instance.\n\nLet's execute a stream processor!\n---------------------------------\n\nLet's create basic stream processors and build our way up. First, we need to have some data! Atlas Stream Processing supports [several data sources for incoming streaming events. 
These sources include:\n\n- Hard-coded data declaration in $source.\n- Kafka streams.\n- MongoDB Atlas databases.\n\n### Hard-coded data in $source declaration\n\nFor quick testing or self-contained examples, having a small set of hard-coded data is a very convenient way to produce events. We can declare an array of events. Here's an extremely simple example, and note that we'll make some tweaks later to cover different use cases.\n\n### Simplest stream processor\n\nIn VS Code, we run an ephemeral stream processor with sp.process(). This way, we don't have to use sp.createStreamProcessor() and sp..drop() constantly as we would for SPs meant to be saved permanently in the instance.\n\n```\nsrc_hard_coded = {\n\u00a0\u00a0$source: {\n\u00a0\u00a0\u00a0\u00a0// our hard-coded dataset\n\u00a0\u00a0\u00a0\u00a0documents: \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_1', 'value': 1},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_1', 'value': 3},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_2', 'value': 7},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_1', 'value': 4},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_2', 'value': 1}\n\u00a0\u00a0\u00a0\u00a0\u00a0]\n\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0}\nsp.process( [src_hard_coded] );\n```\n\nUpon running this playground, we should see data coming out in the VS Code \"OUTPUT\" tab (CTRL+SHIFT+U to make it appear)\n\n**Note**: It can take a few seconds for the SP to be uploaded and executed, so don't expect an immediate output.\n\n```\n{\n\u00a0\u00a0id: 'entity_1',\n\u00a0\u00a0value: 1,\n\u00a0\u00a0_ts: 2024-02-14T18:52:33.704Z,\n\u00a0\u00a0_stream_meta: { timestamp: 2024-02-14T18:52:33.704Z }\n}\n{\n\u00a0\u00a0id: 'entity_1',\n\u00a0\u00a0value: 3,\n\u00a0\u00a0_ts: 2024-02-14T18:52:33.704Z,\n\u00a0\u00a0_stream_meta: { timestamp: 2024-02-14T18:52:33.704Z }\n}\n...\n```\n\nThis simple SP can be used to ensure that data is coming into the SP and there are no problems upstream with our source. Timestamps data was generated at ingestion time.\n\nStream processing aggregation\n-----------------------------\n\nBuilding on what we have, adding a simple aggregation pipeline to our SP is easy. Below, we're adding a $group stage to aggregate/accumulate incoming messages' \"value\" field into an array for the requested interval.\n\nNote that the \"w\" stage (w stands for \"Window\") of the SP pipeline contains an aggregation pipeline inside. With Stream Processing, we have aggregation pipelines in the stream processing pipeline.\n\nThis stage features a [$tumblingWindow which defines the time length the aggregation will be running against. Remember that streams are supposed to be continuous, so a window is similar to a buffer.\n\ninterval defines the time length of a window. Since the window is a continuous data stream, we can only aggregate on a slice at a time.\n\nidleTimeout defines how long the $source can remain idle before closing the window. 
This is useful if the stream is not sustained.\n\n```\nsrc_hard_coded = {\n\u00a0\u00a0$source: {\n\u00a0\u00a0\u00a0\u00a0// our hard-coded dataset\n\u00a0\u00a0\u00a0\u00a0documents: \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_1', 'value': 1},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_1', 'value': 3},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_2', 'value': 7},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_1', 'value': 4},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_2', 'value': 1}\n\u00a0\u00a0\u00a0\u00a0\u00a0]\n\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0}\n\nw = {\n\u00a0\u00a0$tumblingWindow: {\n\u00a0\u00a0\u00a0\u00a0// This is the slice of time we want to look at every iteration\n\u00a0\u00a0\u00a0\u00a0interval: {size: NumberInt(2), unit: \"second\"},\n\u00a0\u00a0\u00a0\u00a0// If no additional data is coming in, idleTimeout defines when the window is forced to close\n\u00a0\u00a0\u00a0\u00a0idleTimeout : {size: NumberInt(2), unit: \"second\"},\n\u00a0\u00a0\u00a0\u00a0\"pipeline\": [\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'$group': {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'_id': '$id',\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0'values': { '$push': \"$value\" }\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0]\n\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0}\nsp_pipeline = [src_hard_coded, w];\nsp.process( sp_pipeline );\n```\n\nLet it run for a few seconds, and we should get an output similar to the following. $group will create one document per incoming \"id\" field and aggregate the relevant values into a new array field, \"values.\"\n\n```\n{\n\u00a0\u00a0_id: 'entity_2',\n\u00a0\u00a0values: [ 7, 1 ],\n\u00a0\u00a0_stream_meta: {\n\u00a0\u00a0\u00a0\u00a0windowStartTimestamp: 2024-02-14T19:29:46.000Z,\n\u00a0\u00a0\u00a0\u00a0windowEndTimestamp: 2024-02-14T19:29:48.000Z\n\u00a0\u00a0}\n}\n{\n\u00a0\u00a0_id: 'entity_1',\n\u00a0\u00a0values: [ 1, 3, 4 ],\n\u00a0\u00a0_stream_meta: {\n\u00a0\u00a0\u00a0\u00a0windowStartTimestamp: 2024-02-14T19:29:46.000Z,\n\u00a0\u00a0\u00a0\u00a0windowEndTimestamp: 2024-02-14T19:29:48.000Z\n\u00a0\u00a0}\n}\n```\nDepending on the $tumblingWindow settings, the aggregation will output several documents that match the timestamps. 
For example, these settings...\n\n```\n...\n$tumblingWindow: {\n\u00a0\u00a0\u00a0\u00a0interval: {size: NumberInt(10), unit: \"second\"},\n\u00a0\u00a0\u00a0\u00a0idleTimeout : {size: NumberInt(10), unit: \"second\"},\n...\n``` \n\n...will yield the following aggregation output:\n```\n{\n\u00a0\u00a0_id: 'entity_1',\n\u00a0\u00a0values: [ 1 ],\n\u00a0\u00a0_stream_meta: {\n\u00a0\u00a0\u00a0\u00a0windowStartTimestamp: 2024-02-13T14:51:30.000Z,\n\u00a0\u00a0\u00a0\u00a0windowEndTimestamp: 2024-02-13T14:51:40.000Z\n\u00a0\u00a0}\n}\n{\n\u00a0\u00a0_id: 'entity_1',\n\u00a0\u00a0values: [ 3, 4 ],\n\u00a0\u00a0_stream_meta: {\n\u00a0\u00a0\u00a0\u00a0windowStartTimestamp: 2024-02-13T14:51:40.000Z,\n\u00a0\u00a0\u00a0\u00a0windowEndTimestamp: 2024-02-13T14:51:50.000Z\n\u00a0\u00a0}\n}\n{\n\u00a0\u00a0_id: 'entity_2',\n\u00a0\u00a0values: [ 7, 1 ],\n\u00a0\u00a0_stream_meta: {\n\u00a0\u00a0\u00a0\u00a0windowStartTimestamp: 2024-02-13T14:51:40.000Z,\n\u00a0\u00a0\u00a0\u00a0windowEndTimestamp: 2024-02-13T14:51:50.000Z\n\u00a0\u00a0}\n}\n```\n\nSee how the windowStartTimestamp and windowEndTimestamp fields show the 10-second intervals as requested (14:51:30 to 14:51:40 etc.).\n\n### Additional learning resources: building aggregations\n\nAtlas Stream Processing uses the MongoDB Query API. You can learn more about the MongoDB Query API with the [official Query API documentation, free] [interactive course, and tutorial.\n\nImportant: Stream Processing aggregation pipelines do not support all database aggregation operations and have additional operators specific to streaming, like $tumblingWindow. Check the official Stream Processing aggregation documentation.\n\n### Add timestamps to the data\n\nEven when we hard-code data, there's an opportunity to provide a timestamp in case we want to perform $sort operations and better mimic a real use case. This would be the equivalent of an event-time timestamp embedded in the message.\n\nThere are many other types of timestamps if we use a live Kafka stream (producer-assigned, server-side, ingestion-time, and more). Add a timestamp to our messages and use the document's\u00a0 \"timeField\" property to make it the authoritative stream timestamp.\n\n```\nsrc_hard_coded = {\n\u00a0\u00a0$source: {\n\u00a0\u00a0\u00a0\u00a0// define our event \"timestamp_gps\" as the _ts\n\u00a0\u00a0\u00a0\u00a0timeField: { '$dateFromString': { dateString: '$timestamp_msg' } },\n\u00a0\u00a0\u00a0\u00a0// our hard-coded dataset\n\u00a0\u00a0\u00a0\u00a0documents: \n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_1', 'value': 1, 'timestamp_msg': '2024-02-13T14:51:39.402336'},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_1', 'value': 3, 'timestamp_msg': '2024-02-13T14:51:41.402674'},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_2', 'value': 7, 'timestamp_msg': '2024-02-13T14:51:43.402933'},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_1', 'value': 4, 'timestamp_msg': '2024-02-13T14:51:45.403352'},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{'id': 'entity_2', 'value': 1, 'timestamp_msg': '2024-02-13T14:51:47.403752'}\n\u00a0\u00a0\u00a0\u00a0\u00a0]\n\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0}\n```\n\nAt this point, we have everything we need to test new pipelines and create proofs of concept in a convenient and self-contained form. 
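Putting the pieces together, the timestamped source can be run through the same `$tumblingWindow` stage defined earlier. This is just the earlier playground reassembled, so the windows are now computed against each document's `timestamp_msg` rather than the ingestion time:

```
// Reuses the timestamped src_hard_coded above and the window stage `w` defined earlier
sp_pipeline = [src_hard_coded, w];
sp.process( sp_pipeline );
```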
In a subsequent article, we will demonstrate how to connect to various streaming sources.\n\n## Tip and tricks\n\nAt the time of publishing, Atlas Stream Processing is in public preview and there are a number of [known Stream Processing limitations that you should be aware of, such as regional data center availability, connectivity with other Atlas projects, and user privileges.\n\nWhen running an ephemeral stream processor via sp.process(), many errors (JSON serialization issue, late data, divide by zero, $validate errors) that might have gone to a dead letter queue (DLQ) are sent to the default output to help you debug.\n\nFor SPs created with sp.createStreamProcessor(), you'll have to configure your DLQ manually. Consult the documentation for this. On the \"Manage Stream Processor\" documentation page, search for \"Define a DLQ.\"\n\nAfter merging data into an Atlas database, it is possible to use existing pipeline aggregation building tools in the Atlas GUI's builder or MongoDB Compass to create and debug pipelines. Since these tools are meant for the core database API, remember that some operators are not supported by stream processors, and streaming features like windowing are not currently available.\n\n## Conclusion\n\nWith that, you should have everything you need to get your first stream processor up and running. In a future post, we will dive deeper into connecting to different sources of data for your stream processors.\n\nIf you have any questions, share them in our community forum, meet us during local MongoDB User Groups (MUGs), or come check out one of our MongoDB .local events.\n\n## References\n\n- MongoDB Atlas Stream Processing Documentation\n\n- Introducing Atlas Stream Processing - Simplifying the Path to Reactive, Responsive, Event-Driven Apps\n\n- The Challenges and Opportunities of Processing Streaming Data\n\n- Atlas Stream Processing is Now in Public Preview (Feb 13, 2024)\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9fc619823204a23c/65fcd88eba94f0ad8e7d1460/atlas-stream-processor-connected-visual-studio-code.png", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn to set up and run your first MongoDB Atlas stream processor with our straightforward tutorial. Discover how to create instances, code in Visual Studio Code, and aggregate stream data effectively.", "contentType": "Quickstart"}, "title": "Introduction to Atlas Stream Processing Development", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/virtual-threads-reactive-programming", "action": "created", "body": "# Optimizing Java Performance With Virtual Threads, Reactive Programming, and MongoDB\n\n## Introduction\n\nWhen I first heard about Project Loom and virtual threads, my first thought was that this was a death sentence for\nreactive programming. 
It wasn't bad news at first because reactive programming comes with its additional layer of\ncomplexity and using imperative programming without wasting resources was music to my ears.\n\nBut I was actually wrong and a bit more reading and learning helped me understand why thinking this was a mistake.\n\nIn this post, we'll explore virtual threads and reactive programming, their differences, and how we can leverage both in\nthe same project to achieve peak concurrency performance in Java.\n\nLearn more about virtual threads support with MongoDB in my previous post on this topic.\n\n## Virtual threads\n\n### Traditional thread model in Java\n\nIn traditional Java concurrency, threads are heavyweight entities managed by the operating system. Each OS\nthread is wrapped by a platform thread which is managed by the Java Virtual Machine (JVM) that executes the Java code.\n\nEach thread requires significant system resources, leading to limitations in scalability when dealing with a\nlarge number of concurrent tasks. Context switching between threads is also resource-intensive and can deteriorate the\nperformance.\n\n### Introducing virtual threads\n\nVirtual threads, introduced by Project Loom in JEP 444, are lightweight by\ndesign and aim to overcome the limitations of traditional threads and create high-throughput concurrent applications.\nThey implement `java.lang.Thread` and they are managed by the JVM. Several of them can\nrun on the same platform thread, making them more efficient to work with a large number of small concurrent tasks.\n\n### Benefits of virtual threads\n\nVirtual threads allow the Java developer to use the system resources more efficiently and non-blocking I/O.\n\nBut with the closely related JEP 453: Structured Concurrency and JEP 446: Scoped Values,\nvirtual threads also support structured concurrency to treat a group of related tasks as a single unit of work and\ndivide a task into smaller independent subtasks to improve response time and throughput.\n\n### Example\n\nHere is a basic Java example.\n\n```java\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\n\npublic class VirtualThreadsExample {\n\n public static void main(String] args) {\n try (ExecutorService virtualExecutor = Executors.newVirtualThreadPerTaskExecutor()) {\n for (int i = 0; i < 10; i++) {\n int taskNumber = i + 1;\n Runnable task = () -> taskRunner(taskNumber);\n virtualExecutor.submit(task);\n }\n }\n }\n\n private static void taskRunner(int number) {\n System.out.println(\"Task \" + number + \" executed by virtual thread: \" + Thread.currentThread());\n }\n}\n```\n\nOutput of this program:\n\n```\nTask 6 executed by virtual thread: VirtualThread[#35]/runnable@ForkJoinPool-1-worker-6\nTask 2 executed by virtual thread: VirtualThread[#31]/runnable@ForkJoinPool-1-worker-2\nTask 10 executed by virtual thread: VirtualThread[#39]/runnable@ForkJoinPool-1-worker-10\nTask 1 executed by virtual thread: VirtualThread[#29]/runnable@ForkJoinPool-1-worker-1\nTask 5 executed by virtual thread: VirtualThread[#34]/runnable@ForkJoinPool-1-worker-5\nTask 7 executed by virtual thread: VirtualThread[#36]/runnable@ForkJoinPool-1-worker-7\nTask 4 executed by virtual thread: VirtualThread[#33]/runnable@ForkJoinPool-1-worker-4\nTask 3 executed by virtual thread: VirtualThread[#32]/runnable@ForkJoinPool-1-worker-3\nTask 8 executed by virtual thread: VirtualThread[#37]/runnable@ForkJoinPool-1-worker-8\nTask 9 executed by virtual thread: 
VirtualThread[#38]/runnable@ForkJoinPool-1-worker-9\n```\n\nWe can see that the tasks ran in parallel \u2014 each in a different virtual thread, managed by a single `ForkJoinPool` and\nits associated workers.\n\n## Reactive programming\n\nFirst of all, reactive programming is a programming paradigm whereas virtual threads\nare \"just\" a technical solution. Reactive programming revolves around asynchronous and event-driven programming\nprinciples, offering solutions to manage streams of data and asynchronous operations efficiently.\n\nIn Java, reactive programming is traditionally implemented with\nthe observer pattern.\n\nThe pillars of reactive programming are:\n\n- Non-blocking I/O.\n- Stream-based asynchronous communication.\n- Back-pressure handling to prevent overwhelming downstream components with more data than they can handle.\n\nThe only common point of interest with virtual threads is the first one: non-blocking I/O.\n\n### Reactive programming frameworks\n\nThe main frameworks in Java that follow the reactive programming principles are:\n\n- Reactive Streams: provides a standard for asynchronous stream processing with\n non-blocking back pressure.\n- RxJava: JVM implementation of Reactive Extensions.\n- Project Reactor: foundation of the reactive stack in the Spring ecosystem.\n\n### Example\n\nMongoDB also offers an implementation of the Reactive Streams API:\nthe MongoDB Reactive Streams Driver.\n\nHere is an example where I insert a document in MongoDB and then retrieve it.\n\n```java\nimport com.mongodb.client.result.InsertOneResult;\nimport com.mongodb.quickstart.SubscriberHelpers.OperationSubscriber;\nimport com.mongodb.quickstart.SubscriberHelpers.PrintDocumentSubscriber;\nimport com.mongodb.reactivestreams.client.MongoClient;\nimport com.mongodb.reactivestreams.client.MongoClients;\nimport com.mongodb.reactivestreams.client.MongoCollection;\nimport org.bson.Document;\n\npublic class MongoDBReactiveExample {\n\n public static void main(String[] args) {\n try (MongoClient mongoClient = MongoClients.create(\"mongodb://localhost\")) {\n MongoCollection<Document> coll = mongoClient.getDatabase(\"test\").getCollection(\"testCollection\");\n\n Document doc = new Document(\"reactive\", \"programming\");\n\n var insertOneSubscriber = new OperationSubscriber<InsertOneResult>();\n coll.insertOne(doc).subscribe(insertOneSubscriber);\n insertOneSubscriber.await();\n\n var printDocumentSubscriber = new PrintDocumentSubscriber();\n coll.find().first().subscribe(printDocumentSubscriber);\n printDocumentSubscriber.await();\n }\n }\n}\n```\n\n> Note: The `SubscriberHelpers.OperationSubscriber` and `SubscriberHelpers.PrintDocumentSubscriber` classes come from\n> the Reactive Streams Quick Start Primer.\n> You can find\n> the SubscriberHelpers.java\n> in the MongoDB Java Driver repository code examples.\n\n## Virtual threads and reactive programming working together\n\nAs you might have understood, virtual threads and reactive programming aren't competing against each other, and they\ncertainly agree on one thing: Blocking I/O operations are evil!\n\nWho said that we had to make a choice? Why not use them both to achieve peak performance and prevent blocking I/Os once\nand for all?\n\nGood news: The `reactor-core`\nlibrary added virtual threads support in 3.6.0. 
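As a small, hypothetical illustration (not taken from the article or its repository), one way to hand blocking work to virtual threads from a Reactor pipeline is to wrap a virtual-thread-per-task executor in a `Scheduler`:\n\n```java\nimport java.util.concurrent.Executors;\n\nimport reactor.core.publisher.Mono;\nimport reactor.core.scheduler.Scheduler;\nimport reactor.core.scheduler.Schedulers;\n\npublic class VirtualThreadSchedulerSketch {\n\n public static void main(String[] args) {\n // Sketch: wrap a virtual-thread-per-task executor in a Reactor Scheduler.\n // In a real application, you would dispose the scheduler once you are done with it.\n Scheduler virtualScheduler = Schedulers.fromExecutorService(Executors.newVirtualThreadPerTaskExecutor());\n\n Mono.fromCallable(() -> \"Executed on \" + Thread.currentThread()) // blocking work would go here\n .subscribeOn(virtualScheduler) // the callable runs on a virtual thread\n .doOnNext(System.out::println)\n .block();\n }\n}\n```\n\n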
Project Reactor\nis the library that provides a rich and functional implementation of `Reactive Streams APIs`\nin Spring Boot\nand WebFlux.\n\nThis means that we can use virtual threads in a Spring Boot project that is using MongoDB Reactive Streams Driver and\nWebflux.\n\nThere are a few conditions though:\n\n- Use Tomcat because \u2014 as I'm writing this post \u2014 Netty (used by default by Webflux)\n doesn't support virtual threads. See GitHub issues 12848\n and 39425 for more details.\n- Activate virtual threads: `spring.threads.virtual.enabled=true` in `application.properties`.\n\n### Let's test\n\nIn the repository, my colleague Wen Jie Teo and I\nupdated the `pom.xml` and `application.properties` so we could use virtual threads in this reactive project.\n\nYou can run the following commands to get this project running quickly and test that it's running with virtual threads\ncorrectly. You can get more details in the\nREADME.md file but here is the gist.\n\nHere are the instructions in English:\n\n- Clone the repository and access the folder.\n- Update the log level in `application.properties` to `info`.\n- Start a local MongoDB single node replica set instance or use MongoDB Atlas.\n- Run the `setup.js` script to initialize the `accounts` collection.\n- Start the Java application.\n- Test one of the APIs available.\n\nHere are the instructions translated into Bash.\n\nFirst terminal:\n\n```shell\ngit clone git@github.com:mongodb-developer/mdb-spring-boot-reactive.git\ncd mdb-spring-boot-reactive/\nsed -i 's/warn/info/g' src/main/resources/application.properties\ndocker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:latest --replSet=RS && sleep 5 && docker exec mongo mongosh --quiet --eval \"rs.initiate();\"\nmongosh --file setup.js\nmvn spring-boot:run\n```\n\n> Note: On macOS, you may have to use `sed -i '' 's/warn/info/g' src/main/resources/application.properties` if you are not using `gnu-sed`, or you can just edit the file manually.\n\nSecond terminal\n\n```shell\ncurl 'localhost:8080/account' -H 'Content-Type: application/json' -d '{\"accountNum\": \"1\"}'\n```\n\nIf everything worked as planned, you should see this line in the first terminal (where you are running Spring).\n\n```\nStack trace's last line: java.base/java.lang.VirtualThread.run(VirtualThread.java:309) from POST /account\n```\n\nThis is the last line in the stack trace that we are logging. It proves that we are using virtual threads to handle\nour query.\n\nIf we disable the virtual threads in the `application.properties` file and try again, we'll read instead:\n\n```\nStack trace's last line: java.base/java.lang.Thread.run(Thread.java:1583) from POST /account\n```\n\nThis time, we are using a classic `java.lang.Thread` instance to handle our query.\n\n## Conclusion\n\nVirtual threads and reactive programming are not mortal enemies. The truth is actually far from that.\n\nThe combination of virtual threads\u2019 advantages over standard platform threads with the best practices of reactive\nprogramming opens up new frontiers of scalability, responsiveness, and efficient resource utilization for your\napplications. Be gone, blocking I/Os!\n\nMongoDB Reactive Streams Driver is fully equipped to\nbenefit from both virtual threads optimizations with Java 21, and \u2014 as always \u2014 benefit from the reactive programming\nprinciples and best practices.\n\nI hope this post motivated you to give it a try. 
Deploy your cluster on\nMongoDB Atlas and give the\nrepository a spin.\n\nFor further guidance and support, and to engage with a vibrant community of developers, head over to the\nMongoDB Forum where you can find help, share insights, and ask those\nburning questions. Let's continue pushing the boundaries of Java development together!\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB", "Spring"], "pageDescription": "Join us as we delve into the dynamic world of Java concurrency with Virtual Threads and Reactive Programming, complemented by MongoDB's seamless integration. Elevate your app's performance with practical tips and real-world examples in this comprehensive guide.", "contentType": "Article"}, "title": "Optimizing Java Performance With Virtual Threads, Reactive Programming, and MongoDB", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/cpp/me-and-the-devil-bluez-1", "action": "created", "body": "# Me and the Devil BlueZ: Implementing a BLE Central in Linux - Part 1\n\nIn my last article, I covered the basic Bluetooth Low Energy concepts required to implement a BLE peripheral in an MCU board. We used a Raspberry Pi Pico board and MicroPython for our implementation. We ended up with a prototype firmware that used the on-board LED, read from the on-board temperature sensor, and implemented a BLE peripheral with two services and several characteristics \u2013 one that depended on measured data and could push notifications to its client.\n\nIn this article, we will be focusing on the other side of the BLE communication: the BLE central, rather than the BLE peripheral. Our collecting station is going to gather the data from the sensors and it is a Raspberry Pi 3A+ with a Linux distribution, namely, Raspberry Pi OS wormbook which is a Debian derivative commonly used in this platform.\n, replacing the previously available OpenBT.\n\nInitially, all the tools were command-line based and the libraries used raw sockets to access the Host Controller Interface offered by hardware. But since the early beginning of its adoption, there was interest to integrate it into the different desktop alternatives, mainly Gnome and KDE. Sharing the Bluetooth interface across the different desktop applications required a different approach: a daemon that took care of all the Bluetooth tasks that take place outside of the Linux Kernel, and an interface that would allow sharing access to that daemon. D-Bus had been designed as a common initiative for interoperability among free-software desktop environments, managed by FreeDesktop, and had already been adopted by the major Linux desktops, so it became the preferred option for that interface.\n\n### D-Bus\n\nD-Bus, short for desktop bus, is an interprocess communication mechanism that uses a message bus. The bus is responsible for taking the messages sent by any process connected to it and delivering them to other processes in the same bus.\n and `hcitool` were the blessed tools to work with Bluetooth, but they used raw sockets and were deprecated around 2017. Nowadays, the recommended tools are `bluetoothctl` and `btmgmt`, although I believe that the old tools have been changed under their skin and are available without using raw sockets.\n\nEnabling the Bluetooth radio was usually done with `sudo hciconfig hci0 up`. 
Nowadays, we can use `bluetoothctl` instead:\n\n```sh\nbluetoothctl\nbluetooth]# show\nController XX:XX:XX:XX:XX:XX (public)\n Name: ...\n Alias: ...\n Powered: no\n ...\n[bluetooth]# power on\nChanging power on succeeded\n[CHG] Controller XX:XX:XX:XX:XX:XX Powered: yes\n[bluetooth]# show\nController XX:XX:XX:XX:XX:XX (public)\n Name: ...\n Alias: ...\n Powered: yes\n ...\n```\n\nWith the radio on, we can start scanning for BLE devices:\n\n```sh\nbluetoothctl\n[bluetooth]# menu scan\n[bluetooth]# transport le\n[bluetooth]# back\n[bluetooth]# scan on\n[bluetooth]# devices\n```\n\nThis shows several devices and my RP2 here:\n\n> Device XX:XX:XX:XX:XX:XX RP2-SENSOR\n\nNow that we know the MAC address/name pairs, we can use the former piece of data to connect to it:\n\n```sh\n [bluetooth]# connect XX:XX:XX:XX:XX:XX\n Attempting to connect to XX:XX:XX:XX:XX:XX\n Connection successful\n [NEW] Primary Service (Handle 0x2224)\n /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0004\n 00001801-0000-1000-8000-00805f9b34fb\n Generic Attribute Profile\n[NEW] Characteristic (Handle 0x7558)\n /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0004/char0005\n 00002a05-0000-1000-8000-00805f9b34fb\n Service Changed\n[NEW] Primary Service (Handle 0x78c4)\n /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0007\n 0000180a-0000-1000-8000-00805f9b34fb\n Device Information\n[NEW] Characteristic (Handle 0x7558)\n /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0007/char0008\n 00002a29-0000-1000-8000-00805f9b34fb\n Manufacturer Name String\n[NEW] Characteristic (Handle 0x7558)\n /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0007/char000a\n 00002a24-0000-1000-8000-00805f9b34fb\n Model Number String\n[NEW] Characteristic (Handle 0x7558)\n /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0007/char000c\n 00002a25-0000-1000-8000-00805f9b34fb\n Serial Number String\n[NEW] Characteristic (Handle 0x7558)\n /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0007/char000e\n 00002a26-0000-1000-8000-00805f9b34fb\n Firmware Revision String\n[NEW] Characteristic (Handle 0x7558)\n /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0007/char0010\n 00002a27-0000-1000-8000-00805f9b34fb\n Hardware Revision String\n[NEW] Primary Service (Handle 0xb324)\n /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012\n 0000181a-0000-1000-8000-00805f9b34fb\n Environmental Sensing\n[NEW] Characteristic (Handle 0x7558)\n /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013\n 00002a1c-0000-1000-8000-00805f9b34fb\n Temperature Measurement\n[NEW] Descriptor (Handle 0x75a0)\n /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013/desc0015\n 00002902-0000-1000-8000-00805f9b34fb\n Client Characteristic Configuration\n[NEW] Descriptor (Handle 0x75a0)\n /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013/desc0016\n 0000290d-0000-1000-8000-00805f9b34fb\n Environmental Sensing Trigger Setting\n[RP2-SENSOR]# scan off\n```\n\nNow we can use the General Attribute Profile (GATT) to send commands to the device, including listing the attributes, reading a characteristic, and receiving notifications.\n\n```sh\n[RP2-SENSOR]# menu gatt\n[RP2-SENSOR]# list-attributes\n...\nCharacteristic (Handle 0x0001)\n /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013\n 00002a1c-0000-1000-8000-00805f9b34fb\n Temperature Measurement\n...\n[RP2-SENSOR]# select-attribute /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013\n[MPY BTSTACK:/service0012/char0013]# read\nAttempting to read /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013\n[CHG] Attribute 
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013 Value:\n 00 0c 10 00 fe .....\n 00 0c 10 00 fe .....\n[MPY BTSTACK:/service0012/char0013]# notify on\n[CHG] Attribute /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013 Notifying: yes\nNotify started\n[CHG] Attribute /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013 Value:\n 00 3b 10 00 fe .;...\n[CHG] Attribute /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013 Value:\n 00 6a 10 00 fe .j...\n[MPY BTSTACK:/service0012/char0013]# notify off\n```\n\nAnd we leave it in its original state:\n\n```sh\n[MPY BTSTACK:/service0012/char0013]# back\n[MPY BTSTACK:/service0012/char0013]# disconnect\nAttempting to disconnect from 28:CD:C1:0F:4B:AE\n[CHG] Device 28:CD:C1:0F:4B:AE ServicesResolved: no\nSuccessful disconnected\n[CHG] Device 28:CD:C1:0F:4B:AE Connected: no\n[bluetooth]# power off\nChanging power off succeeded\n[CHG] Controller B8:27:EB:4D:70:A6 Powered: no\n[CHG] Controller B8:27:EB:4D:70:A6 Discovering: no\n[bluetooth]# exit\n```\n\n### Query the services in the system bus\n\n`dbus-send` comes with D-Bus.\n\nWe are going to send a message to the system bus. The message is addressed to \"org.freedesktop.DBus\" which is the service implemented by D-Bus itself. We use the single D-Bus instance, \"/org/freedesktop/DBus\". And we use the \"Introspect\" method of the \"org.freedesktop.DBus.Introspectable\". Hence, it is a method call. Finally, it is important to highlight that we must request that the reply gets printed, with \"\u2013print-reply\" if we want to be able to watch it.\n\n```sh\ndbus-send --system --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.Introspectable.Introspect | less\n```\n\nThis method call has a long reply, but let me highlight some interesting parts. Right after the header, we get the description of the interface \"org.freedesktop.DBus\":\n\n```xml\n\n \n \n \n \n \n ...\n \n \n ...\n```\n\nThese are the methods, properties and signals related to handling connections to the bus and information about it. Methods may have parameters (args with direction \"in\") and results (args with direction \"out\") and both define the type of the expected data. Signals also declare the arguments, but they are broadcasted and no response is expected, so there is no need to use \"direction.\"\n\nThen we have an interface to expose the D-Bus properties:\n\n```xml\n\n...\n```\n\nAnd a description of the \"org.freedesktop.DBus.Introspectable\" interface that we have already used to obtain all the interfaces. Inception? Maybe.\n\n```xml\n\n \n \n \n\n```\n\nFinally, we find three other interfaces:\n\n```xml\n \n ...\n \n \n ...\n \n \n ...\n \n\n```\n\nLet's use the method of the first interface that tells us what is connected to the bus. 
In my case, I get:\n\n```sh\ndbus-send --system --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames\nmethod return time=1698320750.822056 sender=org.freedesktop.DBus -> destination=:1.50 serial=3 reply_serial=2\n array [\n string \"org.freedesktop.DBus\"\n string \":1.7\"\n string \"org.freedesktop.login1\"\n string \"org.freedesktop.timesync1\"\n string \":1.50\"\n string \"org.freedesktop.systemd1\"\n string \"org.freedesktop.Avahi\"\n string \"org.freedesktop.PolicyKit1\"\n string \":1.43\"\n string \"org.bluez\"\n string \"org.freedesktop.ModemManager1\"\n string \":1.0\"\n string \":1.1\"\n string \":1.2\"\n string \":1.3\"\n string \":1.4\"\n string \"fi.w1.wpa_supplicant1\"\n string \":1.5\"\n string \":1.6\"\n ]\n```\n\nThe \"org.bluez\" is the service that we want to use. We can use introspect with it:\n\n```xml\ndbus-send --system --print-reply=literal --dest=org.bluez /org/bluez org.freedesktop.DBus.Introspectable.Introspect |\nxmllint --format - | less\n```\n\n> xmllint can be installed with `sudo apt-get install libxml2-utils`.\n\nAfter the header, I get the following interfaces:\n\n```xml\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n```\n\nHave you noticed the node that represents the child object for the HCI0? We could also have learned about it using `busctl tree org.bluez`. And we can query that child object too. We will now obtain the information about HCI0 using introspection but send the message to BlueZ and refer to the HCI0 instance.\n\n```sh\ndbus-send --system --print-reply=literal --dest=org.bluez /org/bluez/hci0 org.freedesktop.DBus.Introspectable.Introspect | xmllint --format - | less\n```\n\n```xml\n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n```\n\nLet's check the status of the Bluetooth radio using D-Bus messages to query the corresponding property:\n\n```sh\ndbus-send --system --type=method_call --print-reply --dest=org.bluez /org/bluez/hci0 org.freedesktop.DBus.Properties.Get string:org.bluez.Adapter1 string:Powered\n```\n\nWe can then switch the radio on, setting the same property:\n\n```sh\ndbus-send --system --type=method_call --print-reply --dest=org.bluez /org/bluez/hci0 org.freedesktop.DBus.Properties.Set string:org.bluez.Adapter1 string:Powered variant:boolean:true\n```\n\nAnd check the status of the radio again to verify the change:\n\n```sh\ndbus-send --system --type=method_call --print-reply --dest=org.bluez /org/bluez/hci0 org.freedesktop.DBus.Properties.Get string:org.bluez.Adapter1 string:Powered\n```\n\nThe next step is to start scanning, and it seems that we should use this command:\n\n```sh\ndbus-send --system --type=method_call --print-reply --dest=org.bluez /org/bluez/hci0 org.bluez.Adapter1.StartDiscovery\n```\n\nBut this doesn't work because `dbus-send` exits almost immediately and BlueZ keeps track of the D-Bus clients that request the discovery.\n\n### Capture the messages produced by `bluetoothctl`\n\nInstead, we are going to use the command line utility `bluetoothctl` and monitor the messages that go through the system bus.\n\nWe start `dbus-monitor` for the system bus and redirect the output to a file. 
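A minimal way to start that capture could look like this (the log file name is just an example):\n\n```sh\n# Watch the system bus and keep the traffic in a file while we drive bluetoothctl from another terminal\ndbus-monitor --system > dbus-system.log 2>&1 &\n```\n\n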
We launch `bluetoothctl` and inspect the log. This connects to the D-Bus with a \"Hello\" method. It invokes AddMatch to show interest in BlueZ. It does `GetManagedObjects` to find the objects that are managed by BlueZ.\n\nWe then select Low Energy (`menu scan`, `transport le`, `back`). This doesn't produce messages because it just configures the tool.\n\nWe start scanning (`scan on`), connect to the device (`connect XX:XX:XX:XX:XX:XX`), and stop scanning (`scan off`). In the log, the second message is a method call to start scanning (`StartDiscovery`), preceded by a call (to `SetDiscoveryFilter`) with LE as a parameter. Then, we find signals \u2013one per device that is discoverable\u2013 with all the metadata of the device, including its MAC address, its name (if available), and the transmission power that is normally used to estimate how close a device is, among other properties. The app shows its interest in the devices it has found with an `AddMatch` method call, and we can see signals with properties updates.\n\nThen, a call to the method `Connect` of the `org.bluez.Device1` interface is invoked with the path pointing to the desired device. Finally, when we stop scanning, we can find an immediate call to `StopDiscovery`, and the app declares that it is no longer interested in updates of the previously discovered devices with calls to the `RemoveMatch` method. A little later, an announcement signal tells us that the \"connected\" property of that device has changed, and then there's a signal letting us know that `InterfacesAdded` implemented `org.bluez.GattService1`, `org.bluez.GattCharacteristic1` for each of the services and characteristics. We get a signal with a \"ServicesResolved\" property stating that the present services are Generic Access Service, Generic Attribute Service, Device Information Service, and Environmental Sensing Service (0x1800, 0x1801, 0x180A, and 0x181A). In the process, the app uses `AddMatch` to show interest in the different services and characteristics.\n\nWe select the attribute for the temperature characteristic (`select-attribute /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013`), which doesn't produce any D-Bus messages. Then, we `read` the characteristic that generates a method call to `ReadValue` of the `org.bluez.GattCharacteristic1` interface with the path that we have previously selected. Right after, we receive a method return message with the five bytes of that characteristic.\n\nAs for notifications, when we enable them (`notify on`), a method call to `StartNotify` is issued with the same parameters as the `ReadValue` one. The notification comes as a `PropertiesChanged` signal that contains the new value and then we send the `StopNotify` command. Both changes to the notification state produce signals that share the new state.\n\n## Recap and future content\n\nIn this article, I have explained all the steps required to interact with the BLE peripheral from the command line. Then, I did some reverse engineering to understand how those steps translated into D-Bus messages. 
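For example, the `ReadValue` call that we captured could also be issued directly from the shell with `busctl`, passing an empty `a{sv}` options dictionary (a sketch, reusing the characteristic path we discovered above):\n\n```sh\n# Hypothetical example: read the temperature characteristic through the org.bluez.GattCharacteristic1 interface\nbusctl call org.bluez /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013 org.bluez.GattCharacteristic1 ReadValue \"a{sv}\" 0\n```\n\n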
Find the [resources for this article and links to others.\n\nIn the next article, I will try to use the information that we have gathered about the D-Bus messages to interact with the Bluetooth stack using C++.\n\nIf you have questions or feedback, join me in the MongoDB Developer Community!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb57cbfa9d1521fb5/657704f0529e1390f6b953bc/Debian.jpg\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt177196690c5045e5/65770264529e137250b953a8/Bus.jpg", "format": "md", "metadata": {"tags": ["C++", "RaspberryPi"], "pageDescription": "In this new article, we will be focusing on the client side of the Bluetooth Low Energy communication: the BLE central.", "contentType": "Tutorial"}, "title": "Me and the Devil BlueZ: Implementing a BLE Central in Linux - Part 1", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/ecommerce-search-openai", "action": "created", "body": "# Build an E-commerce Search Using MongoDB Vector Search and OpenAI\n\n## Introduction\n\nIn this article, we will build a product search system using MongoDB Vector Search and OpenAI APIs. We will build a search API endpoint that receives natural language queries and delivers relevant products as results in JSON format. In this article, we will see how to generate vector embeddings using the OpenAI embedding model, store them in MongoDB, and query the same using Vector Search. We will also see how to use the OpenAI text generation model to classify user search inputs and build our DB query.\n\nThe API server is built using Node.js and Express. We will be building API endpoints for creating, updating, and searching. Also note that this guide focuses only on the back end and to facilitate testing, we will be using Postman. Relevant screenshots will be provided in the respective sections for clarity. The below GIF shows a glimpse of what we will be building.\n\n. \n\n```\ngit clone https://github.com/ashiqsultan/mongodb-vector-openai.git\n```\n\n2. Create a `.env` file in the root directory of the project.\n\n```\ntouch .env\n```\n\n3. Create two variables in your `.env` file: **MONGODB_URI** and **OPENAI_API_KEY**.\n\nYou can follow the steps provided in the OpenAI docs to get the API key.\n\n```\necho \"MONGODB_URI=your_mongodb_uri\" >> .env\necho \"OPENAI_API_KEY=your_openai_api_key\" >> .env\n```\n\n4. Install node modules.\n\n```\nnpm install # (or) yarn install\n```\n\n5. Run `yarn run dev` or `npm run dev` to start the server.\n\n```\nnpm run dev # (or) yarn run dev\n```\n\nIf the `MONGODB_URI` is correct, it should connect without any error and start the server at port 5000. For the OpenAI API key, you need to create a new account.\n\n. Once you have the connection string, just paste it in the `.env` file as `MONGODB_URI`. In our codebase, we have created a separate dbclient.ts file which exports a singleton function to connect with MongoDB. Now, we can call this function at the entry point file of our application like below. \n\n```\n// server.ts\nimport dbClient from './dbClient';\nserver.listen(app.get('port'), async () => {\n try {\n await dbClient();\n } catch (error) {\n console.error(error);\n }\n}); \n```\n\n## Collection schema overview\n\nYou can refer to the schema model file in the codebase. We will keep the collection schema simple. Each product item will maintain the interface shown below. 
\n\n```\ninterface IProducts {\n name: string;\n category: string;\n description: string;\n price: number;\n embedding: number[];\n}\n```\n\nThis interface is self-explanatory, with properties such as name, category, description, and price, representing typical attributes of a product. The unique addition is the embedding property, which will be explained in subsequent sections. This straightforward schema provides a foundation for organizing and storing product data efficiently.\n\n## Setting up vector index for collection\n\nTo enable semantic search in our MongoDB collection, we need to set up vector indexes. If that sounds fancy, in simpler terms, this allows us to query the collection using natural language.\n\nFollow the step-by-step procedure outlined in the documentation to create a vector index from the Atlas UI.\n\nBelow is the config we need to provide in the JSON editor when creating the vector index. \n\n``` \n{\n \"mappings\": {\n \"dynamic\": true,\n \"fields\": {\n \"embedding\": {\n \"dimensions\": 1536,\n \"similarity\": \"euclidean\",\n \"type\": \"knnVector\"\n }\n }\n }\n}\n```\n\nFor those who prefer visual guides, watch our video explaining the process.\n\nThe key variables in the index configuration are the field name in the collection to be indexed (here, it's called **embedding**) and the dimensions value (here, set to **1536**). The significance of this value will be discussed in the next section.\n\n## Generating embedding using OpenAI\n\nWe have created a reusable util function in our codebase which will take a string as an input and return a vector embedding as output. This function can be used in places where we need to call the OpenAI embedding model.\n\n``` \nasync function generateEmbedding(inputText: string): Promise<number[] | null> {\n try {\n const vectorEmbedding = await openai.embeddings.create({\n input: inputText,\n model: 'text-embedding-ada-002',\n });\n const embedding = vectorEmbedding.data[0].embedding;\n return embedding;\n } catch (error) {\n console.error('Error generating embedding:', error);\n return null;\n }\n}\n```\n\nThe function is fairly straightforward. The specific model employed in our example is `text-embedding-ada-002`. However, you have the flexibility to choose other embedding models, but it's crucial to ensure that the output dimensions of the selected model match the dimensions we have set when initially creating the vector index.\n\n## What should we embed for Vector Search?\n\nNow that we know what an embedding is, let's discuss what to embed. For semantic search, you should embed all the fields that you intend to query. This includes any relevant information or features that you want to use as search criteria. In our product example, we will be embedding **the name of the product, its category, and its description**.\n\n## Embed on create\n\nTo create a new product item, we need to make a POST call to \u201clocalhost:5000/product/\u201d with the required properties **{name, category, description, price}**. This will call the createOne service which handles the creation of a new product item. 
\n\n```\n// Example Product item\n// product = {\n// name: 'foo phone',\n// category: Electronics,\n// description: 'This phone has good camera',\n// price: 150,\n// };\n\nconst toEmbed = {\n name: product.name,\n category: product.category,\n description: product.description,\n};\n\n// Generate Embedding\nconst embedding = await generateEmbedding(JSON.stringify(toEmbed));\nconst documentToInsert = {\n\u2026product,\nembedding,\n}\n\nawait productCollection.insertOne(documentToInsert);\n```\n\nIn the code snippet above, we first create an object named `toEmbed` containing the fields intended for embedding. This object is then converted to a stringified JSON and passed to the `generateEmbedding` function. As discussed in the previous section, generateEmbedding will call the OpenAPI embedding model and return us the required embedding array. Once we have the embedding, the new product document is created using the `insertOne` function. The below screenshot shows the create request and its response.\n\n\u201d where id is the MongoDB document id. This will call the updateOne.ts service.\n\nLet's make a PATCH request to update the name of the phone from \u201cfoo phone\u201d to \u201cSuper Phone.\u201d\n\n```\n// updateObj contains the extracted request body with updated data \nconst updateObj = {\n name: \u201cSuper Phone\"\n};\n\nconst product = await collection.findOne({ _id });\n\nconst objToEmbed = {\n name: updateObj.name || product.name,\n category: updateObj.category || product.category,\n description: updateObj.description || product.description,\n};\n\nconst embedding = await generateEmbedding(JSON.stringify(objToEmbed));\n\nupdateObj.embedding = embedding;\n\nconst updatedDoc = await collection.findOneAndUpdate(\n { _id },\n { $set: updateObj },\n {\n returnDocument: 'after',\n projection: { embedding: 0 },\n }\n);\n```\n\nIn the above code, the variable `updateObj` contains the PATCH request body data. Here, we are only updating the name. Then, we use `findOne` to get the existing product item. The `objToEmbed` object is constructed to determine which fields to embed in the document. It incorporates both the new values from `updateObj` and the existing values from the `product` document, ensuring that any unchanged fields are retained.\n\nIn simple terms, we are re-generating the embedding array with the updated data with the same set of fields we used on the creation of the document. This is important to ensure that our search function works correctly and that the updated document stays relevant to its context.\n\n. Let\u2019s look at the search product function step by step. \n\n``` \nconst searchProducts = async (searchText: string): Promise<IProductDocument]> => {\n try {\n const embedding = await generateEmbedding(searchText); // Generate Embedding\n const gptResponse = (await searchAssistant(searchText)) as IGptResponse;\n \u2026\n```\n\nIn the first line, we are creating embedding using the same `generateEmbedding` function we used for create and update. Let\u2019s park this for now and focus on the second function, `searchAssistant`. \n\n### Search assistant function\n\nThis is a reusable function that is responsible for calling the OpenAI completion model. You can find the [searchAssistant file on GitHub. It's here we have described the prompt for the generative model with output instructions. 
\n\n```\nasync function main(userMessage: string): Promise<any> {\n const completion = await openai.chat.completions.create({\n messages: [\n {\n role: 'system',\n content: `You are an e-commerce search assistant. Follow the below list of instructions for generating the response.\n - You should only output JSON strictly following the Output Format Instructions.\n - List of Categories: Books, Clothing, Electronics, Home & Kitchen, Sports & Outdoors.\n - Identify whether user message matches any category from the List of Categories else it should be empty string. Do not invent category outside the provided list.\n - Identify price range from user message. minPrice and maxPrice must only be number or null.\n - Output Format Instructions for JSON: { category: 'Only one category', minPrice: 'Minimum price if applicable else null', maxPrice: 'Maximum Price if applicable else null' }\n `,\n\n },\n { role: 'user', content: userMessage },\n ],\n model: 'gpt-3.5-turbo-1106',\n response_format: { type: 'json_object' },\n });\n\n const outputJson = JSON.parse(completion.choices[0].message.content);\n\n return outputJson;\n}\n```\n\n### Prompt explanation \n\nYou can refer to the OpenAI Chat Completion docs to understand the function definition. Here, we will explain the system prompt. This is the place where we give some context to the model.\n\n* First, we tell the model about its role and instruct it to follow the set of rules we are about to define.\n* We explicitly instruct it to output only JSON following the \u201cOutput Instruction\u201d we have provided within the prompt.\n* Next, we provide a list of categories to classify the user request. This is hardcoded here but in a real-time scenario, we might generate a category list from DB.\n* Next, we are instructing it to identify if users have mentioned any price so that we can use that in our aggregation query.\n\nLet\u2019s add some console logs before the return statement and test the function. \n\n```\n// \u2026 Existing code\nconst outputJson = JSON.parse(completion.choices[0].message.content);\nconsole.log({ userMessage });\nconsole.log({ outputJson });\nreturn outputJson;\n```\n\nWith the console logs in place, make a GET request to /product with the search query param. Example: \n\n``` \n// Request \nhttp://localhost:5000/product?search=phones with good camera under 160 dollars\n\n// Console logs from terminal\n{ userMessage: 'phones with good camera under 160 dollars' }\n{ outputJson: { category: 'Electronics', minPrice: null, maxPrice: 160 } }\n```\n\nFrom the OpenAI response above, we can see that the model has classified the user message under the \u201cElectronics\u201d category and identified the price range. It has followed our output instructions, as well, and returned the JSON we desired. Now, let\u2019s use this output and structure our aggregation pipeline.\n\n### Aggregation pipeline\n\nIn our searchProducts file, right after we get the `gptResponse`, we are calling a function called `constructMatch`. 
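Before looking at what it does, here is a simplified, hypothetical sketch of such a helper (the actual implementation in the repository may differ; it relies on the `IGptResponse` shape shown in the console logs above):\n\n```\n// Hypothetical sketch of constructMatch: builds the $match stage from the GPT output\nconst constructMatch = (gptResponse: IGptResponse) => {\n const query: Record<string, any> = {};\n if (gptResponse.category) {\n query.category = gptResponse.category;\n }\n if (gptResponse.minPrice !== null || gptResponse.maxPrice !== null) {\n query.price = {};\n if (gptResponse.minPrice !== null) {\n query.price.$gte = gptResponse.minPrice;\n }\n if (gptResponse.maxPrice !== null) {\n query.price.$lte = gptResponse.maxPrice;\n }\n }\n return { $match: query }; // used as the matchStage in the aggregation pipeline below\n};\n```\n\n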
The purpose of this function is to construct the $match stage query object using the output we received from the GPT model \u2014 i.e., it will extract the category and min and max prices from the GPT response to generate the query.\n\n**Example**\n\nLet\u2019s do a search that includes a price range: **\u201c?search=show me some good programming books between 100 to 150 dollars\u201d**.\n\n.\n\n``` \nconst aggCursor = collection.aggregate<IProductDocument>(\n {\n $vectorSearch: {\n index: VECTOR_INDEX_NAME,\n path: 'embedding',\n queryVector: embedding,\n numCandidates: 150,\n limit: 10,\n },\n },\n matchStage,\n {\n $project: {\n _id: 1,\n name: 1,\n category: 1,\n description: 1,\n price: 1,\n score: { $meta: 'vectorSearchScore' },\n },\n },\n ]);\n```\n\n**The first stage** in our pipeline is the [$vector-search-stage. \n\n* **index:** refers to the vector index name we provided when initially creating the index under the section **Setting up vector index for collection (mynewvectorindex). **\n* **path:** the field name in our document that holds the vector values \u2014 in our case, the field name itself is **embedding. **\n* **queryVector: **the embedded format of the search text. We have generated the embedding for the user\u2019s search text using the same `generateEmebdding` function, and its value is added here.\n* **numCandidates: **Number of nearest neighbors to use during the search. The value must be less than or equal to (<=) 10000. You can't specify a number less than the number of documents to return (limit).\n* **Limit: **number of docs to return in the result.\n\nPlease refer to the vector search fields docs for more information regarding these fields. You can adjust the numCandidates and limit based on requirements.\n\n**The second stage** is the match stage which just contains the query object we generated using the constructMatch function, as explained previously.\n\nThe third stage is the $project stage which only deals with what to show and how to show it. Here, you can omit the fields you don\u2019t wish to return.\n\n## Demonstration\n\nLet\u2019s see our search functionality in action. To do this, we will create a new product and make a search with related keywords. Later, we will update the same product and do a search with keywords matching the updated document.\n\n### Create and search\n\nWe can create a new book using our POST request. \n\n**Book 01**\n\n```\n{\"name\": \"JavaScript 101\",\n \"category\": \"Books\",\n \"description\": \"This is a good book for learning JavaScript for beginners. It covers fundamental concepts such as variables, data types, operators, control flow, functions, and more.\",\n \"price\": 60\n}\n```\n\nThe below GIF shows how we can create a book from Postman and view the created book in MongoDB Atlas UI by filtering the category with Books.\n\n.\n\nIf you wonder why we see all the books in our response, this is due to our limited sample data of three books. However, in real-world scenarios, if more relevant items are available in DB, then based on the search term, they will have higher scores and be prioritized.\n\n### Update and search\n\nLet\u2019s update something in our books using our PATCH request. Here, we will update our JavaScript 101 book to a Python book using its document _id.\n\n for more details. 
Thanks for reading.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt898d49694c91fd80/65eed985f2a292a194bf2b7f/01_Gif_demonstration_of_a_search_request_with_natural_language_as_input_returns_relevant_products_as_output..gif\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt83a03bb92d0eb2b4/65eed981a1e8159facd59535/02_high-level_design_for_create_operation.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt55947d21d7d10d84/65eed982a7eab4edfa913e0b/03_high-level_design_for_search_operation.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt39fcf72a5e07a264/65eed9808330b3377402c8eb/04_terminal_output_if_server_starts_successfully.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt32becefb4321630c/65eed9850c744dfb937bea1a/05_Creating_vector_index_from_atlas_ui_for_product_collection.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltde5e43178358d742/65eed9818330b3583802c8ef/06_Postman_screenshot_of_create_request_with_response.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf563211a378d0fad/65eed98068d57e89d7450abd/07_screenshot_of_created_data_from_MongoDB_Atlas.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd141bce8542392ab/65eed9806119723d59643b7e/08_screenshot_of_update_request_and_response_from_Postman.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbf7ec7769a8a9690/65eed98054369a30ca692466/09_screenshot_from_MongoDB_Atlas_of_the_updated_document.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt69a6410fde97fd0f/65eed981a7eab43012913e07/10_screenshot_of_product_search_request.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt613b325989282dd0/65eed9815a287d4b75f2c10d/11_console_logs_of_GPT_response_and_match_query.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt71e639731bd48d2c/65eed98468d57e1f28450ac1/12_GIF_showing_creation_of_book_from_postman_and_viewing_the_same_in_MongoDB_Atlas.gif\n [13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2713fa0ec7f0778d/65eed981ba94f03cbe7cc9dd/13_List_of_inserted_books_in_MongoDB_Atlas_UI.png\n [14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt251e5131b377a535/65eed981039fddbb7333555a/14_Search_API_call_with_search_text_I_want_to_learn_Javascript.png\n [15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4d94489e9bb93f6e/65eed9817a44b0024754744f/15_Search_API_call_with_search_text_I%E2%80%99m_preparing_for_coding_interview.png\n [16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9d9d221d6c4424ba/65eed980f4a4cf78b814c437/16_Patch_request_to_update_the_JavaScript_book_to_Python_book_with_response.png\n [17]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd5b7817f0d21d3b4/65eed981e55fcb16fe232eb8/17_Book_list_in_Atlas_UI_showing_Javascript_book_has_been_renamed_to_python_book.png\n [18]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0a8e206b86f55176/65eed9810780b947d861ab80/18_Search_API_call_with_search_text_Python_for_beginners.png", "format": "md", "metadata": {"tags": ["Atlas", "AI"], "pageDescription": "Create an e-commerce semantic search utilizing MongoDB Vector Search and OpenAI models", "contentType": "Article"}, "title": "Build an E-commerce Search Using MongoDB Vector Search and OpenAI", "updated": "2024-05-20T17:32:23.502Z"} 
{"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/secure-api-spring-microsoft-entraid", "action": "created", "body": "# Secure your API with Spring Data MongoDB and Microsoft EntraID\n\n## Introduction\n\nWelcome to our hands-on tutorial, where you'll learn how to build a RESTful API with Spring Data MongoDB, fortified by the security of Microsoft Entra ID and OAuth2. On this journey, we'll lead you through the creation of a streamlined to-do list API, showcasing not just how to set it up, but also how to secure it effectively. \n\nThis guide is designed to provide you with the tools and knowledge needed to implement a secure, functional API from the ground up. Let's dive in and start building something great together!\n\n## Prerequisites\n- A MongoDB account and cluster set up\n- An Azure subscription (Get started for free)\n- Java Development Kit (JDK) version 17 or higher\n- Apache Maven\n- A Spring Boot application \u2014 you can create a **Maven project** with the Spring Initializr; there are a couple of dependencies you will need:\n- Spring Web\n- OAuth2 Resource Server\n- Azure Active Directory \n- Select Java version 17 or higher and generate a **JAR**\n\nYou can follow along with this tutorial and build your project as you read or you can clone the repository directly:\n\n```bash\ngit clone git@github.com:mongodb-developer/java-spring-boot-secure-todo-app.git\n```\n\n## Create our API with Spring Data MongoDB\nOnce these prerequisites are in place, we're ready to start setting up our Spring Boot secure RESTful API. Our first step will be to lay the foundation with `application.properties`.\n\n```properties\nspring.application.name=todo\nspring.cloud.azure.active-directory.enabled=true\nspring.cloud.azure.active-directory.profile.tenant-id=\nspring.cloud.azure.active-directory.credential.client-id=\nspring.security.oauth2.client.registration.azure.client-authentication-method=none\nspring.security.oauth2.resourceserver.jwt.issuer-uri=https://login.microsoftonline.com//swagger-ui/oauth2-redirect.html\nspring.data.mongodb.uri=\nspring.data.mongodb.database=\n```\n- `spring.application.name=todo`: Defines the name of your Spring Boot application\n- `spring.cloud.azure.active-directory...`: Integrates your application with Azure AD for authentication and authorization\n- `spring.security.oauth2.client.registration.azure.client-authentication-method=none`: Specifies the authentication method for the OAuth2 client; setting it to `none` is used for public clients, where a client secret is not applicable\n- `spring.security.oauth2.resourceserver.jwt.issuer-uri=https://login.microsoftonline.com/ {\n}\n\n```\n\nNext, create a `service` package and a class TodoService. 
This will contain our business logic for our application.\n```java\npackage com.example.todo.service;\n\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.stereotype.Service;\n\nimport com.example.todo.model.Todo;\nimport com.example.todo.model.repository.TodoRepository;\n\nimport java.util.List;\nimport java.util.Optional;\n\n@Service\npublic class TodoService {\n\nprivate final TodoRepository todoRepository;\n\n public TodoService(TodoRepository todoRepository) {\n this.todoRepository = todoRepository;\n }\n\n public List findAll() {\n return todoRepository.findAll();\n }\n\n public Optional findById(String id) {\n return todoRepository.findById(id);\n }\n\n public Todo save(Todo todo) {\n return todoRepository.save(todo);\n }\n\n public void deleteById(String id) {\n todoRepository.deleteById(id);\n }\n}\n```\nTo establish your API endpoints, create a `controller` package and a TodoController class. There are a couple things going on here. For each of the API endpoints we want to restrict access to, we use `@PreAuthorize(\"hasAuthority('SCOPE_Todo.')\")` where `` corresponds to the scopes we will define in Microsoft Entra ID.\n\nWe have also disabled CORS here. In a production application, you will want to specify who can access this and probably not just allow all, but this is fine for this tutorial.\n```java\npackage com.example.todo.controller;\n\nimport com.example.todo.model.Todo;\nimport com.example.todo.sevice.TodoService;\n\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.security.access.prepost.PreAuthorize;\nimport org.springframework.security.core.Authentication;\nimport org.springframework.web.bind.annotation.*;\nimport java.util.List;\n\n@CrossOrigin(origins = \"*\")\n@RestController\n@RequestMapping(\"/api/todos\")\npublic class TodoController {\n\n public TodoController(TodoService todoService) {\n this.todoService = todoService;\n }\n\n @GetMapping\n public List getAllTodos() {\n return todoService.findAll();\n }\n\n @GetMapping(\"/{id}\")\n public Todo getTodoById(@PathVariable String id) {\n return todoService.findById(id).orElse(null);\n }\n\n @PostMapping\n @PreAuthorize(\"hasAuthority('SCOPE_Todo.User')\")\n public Todo createTodo(@RequestBody Todo todo, Authentication authentication) {\n return todoService.save(todo);\n }\n\n @PutMapping(\"/{id}\")\n @PreAuthorize(\"hasAuthority('SCOPE_Todo.User')\")\n public Todo updateTodo(@PathVariable String id, @RequestBody Todo todo) {\n return todoService.save(todo);\n }\n\n @DeleteMapping(\"/{id}\")\n @PreAuthorize(\"hasAuthority('SCOPE_Todo.Admin')\")\n public void deleteTodo(@PathVariable String id) {\n todoService.deleteById(id);\n }\n}\n```\n\nNow, we need to configure our Swagger UI for our app. Create a `config` package and an OpenApiConfiguration class. A lot of this is boilerplate, based on the demo applications provided by springdoc.org. We're setting up an authorization flow and specifying the scopes available in our application. We'll create these in a later part of this application, but pay attention to the API name when setting scopes (`.addString(\"api://todo/Todo.User\", \"Access todo as a user\")`. 
You have an option to configure this later but it needs to be the same in the application and on Microsoft Entra ID.\n\n```java\npackage com.example.todo.config;\n\nimport io.swagger.v3.oas.models.Components;\nimport io.swagger.v3.oas.models.OpenAPI;\nimport io.swagger.v3.oas.models.info.Info;\nimport io.swagger.v3.oas.models.security.OAuthFlow;\nimport io.swagger.v3.oas.models.security.OAuthFlows;\nimport io.swagger.v3.oas.models.security.Scopes;\nimport io.swagger.v3.oas.models.security.SecurityScheme;\n\nimport org.springframework.beans.factory.annotation.Value;\nimport org.springframework.context.annotation.Bean;\nimport org.springframework.context.annotation.Configuration;\n\n@Configuration\nclass OpenApiConfiguration {\n\n@Value(\"${spring.cloud.azure.active-directory.profile.tenant-id}\")\n private String tenantId;\n\n @Bean\n OpenAPI customOpenAPI() {\n OAuthFlow authorizationCodeFlow = new OAuthFlow();\n authorizationCodeFlow.setAuthorizationUrl(String.format(\"https://login.microsoftonline.com/%s/oauth2/v2.0/authorize\", tenantId));\n authorizationCodeFlow.setRefreshUrl(String.format(\"https://login.microsoftonline.com/%s/oauth2/v2.0/token\", tenantId));\n authorizationCodeFlow.setTokenUrl(String.format(\"https://login.microsoftonline.com/%s/oauth2/v2.0/token\", tenantId));\n authorizationCodeFlow.setScopes(new Scopes()\n \n.addString(\"api://todo/Todo.User\", \"Access todo as a user\")\n \n.addString(\"api://todo/Todo.Admin\", \"Access todo as an admin\"));\n OAuthFlows oauthFlows = new OAuthFlows();\n oauthFlows.authorizationCode(authorizationCodeFlow);\n SecurityScheme securityScheme = new SecurityScheme();\n securityScheme.setType(SecurityScheme.Type.OAUTH2);\n securityScheme.setFlows(oauthFlows);\n return new OpenAPI()\n .info(new Info().title(\"RESTful APIs for Todo\"))\n .components(new Components().addSecuritySchemes(\"Microsoft Entra ID\", securityScheme));\n }\n}\n```\nThe last thing we need to do is create a WebConfig class in our `config` package. Here, we just need to disable Cross-Site Request Forgery (CSRF).\n\n```java\npackage com.example.todo.config;\n\nimport org.springframework.context.annotation.Bean;\nimport org.springframework.context.annotation.Configuration;\nimport org.springframework.security.config.annotation.web.builders.HttpSecurity;\nimport org.springframework.security.web.SecurityFilterChain;\nimport org.springframework.security.config.annotation.web.configurers.AbstractHttpConfigurer;\n\n@Configuration\npublic class WebConfig {\n \n@Bean\npublic SecurityFilterChain filterChain(HttpSecurity http) throws Exception {\n http.csrf(AbstractHttpConfigurer::disable);\n return http.build();\n}\n}\n\n```\nWhen using OAuth for authentication in a web application, the necessity of CSRF tokens depends on the specific context of your application and how OAuth is being implemented.\n\nIn our application, we are using a single-page application (SPAs) for interacting with our API. OAuth is often used with tokens (such as JWTs) obtained via the OAuth Authorization Code Flow with PKCE so the CSRF is not necessary. If your application still uses cookies (traditional web access) for maintaining session state post-OAuth flow, implement CSRF tokens to protect against CSRF attacks. 
For API serving an SPA, we will rely on bearer tokens.\n\n## Expose your RESTful APIs in Microsoft Entra ID\nIt is time to register a new application with Microsoft Entra ID (formerly known as Azure Active Directory) and get everything ready to secure our RESTful API with OAuth2 authentication and authorization. Microsoft Entra ID is a comprehensive identity and access management (IAM) solution provided by Microsoft. It encompasses various services designed to help manage and secure access to applications, services, and resources across the cloud and on-premises environments.\n1. Sign in to the Azure portal. If you have access to multiple tenants, select the tenant in which you want to register an application.\n2. Search for and select the **Microsoft Entra ID** service.\n- If you don't already have one, create one here.\n1. From the left side menu, under **Manage**, select **App registrations** and **New registration**.\n2. Enter a name for your application in the **Name** field. For this tutorial, we are going to stick with the classic CRUD example, a to-do list API, so we'll call it `TodoAPI`. \n3. For **Supported account types**, select **Accounts in any organizational directory (Any Microsoft Entra directory - Multitenant) and personal Microsoft accounts**. This will allow the widest set of Microsoft entities. \n4. Select **Register** to create the application.\n5. On the app **Overview** page, look for the **Application (client) ID** value, and then record it for later use. You need it to configure the `application.properties` file for this app.\n6. Navigate to **Manage** and click on **Expose an API**. Locate the **Application ID URI** at the top of the page and click **Add**.\n7. On the **Edit application ID URI** screen, it's necessary to generate a distinctive Application ID URI. Opt for the provided default `api://{client ID}` or choose a descriptive name like `api://todo` before hitting **Save**.\n8. Go to **Manage**, click on **Expose an API**, then **Add a scope**, and provide the specified details:\n - For **Scope name**, enter _ToDo.User_.\n - For **Who can consent**, select **Admins and Users**.\n - For **Admin consent display name**, enter _Create and edit ToDo data_.\n - For **Admin consent description**, enter _Allows authenticated users to create and edit the ToDo data._\n - For **State**, keep it enabled.\n - Select **Add scope**.\n9. Repeat the previous steps to add the other scopes: _ToDo.Admin_, which will grant the authenticated user permission to delete.\nNow that we have our application created and our EntraID configured, we will look at how to request our access token. At this point, you can upload your API to Azure Spring Apps, following our tutorial, Getting Started With Azure Spring Apps and MongoDB Atlas, but we'll keep everything running local for this tutorial. \n\n## Grant access to our client with Swagger\nThe RESTful APIs serve as a resource server, safeguarded by Microsoft Entra ID. To obtain an access token, you are required to register a different application within Microsoft Entra ID and assign permissions to the client application.\n\n### Register the client application\nWe are going to register a second app in Microsoft Entra ID.\n1. Repeat steps 1 through 6 above, but this time, name your application `TodoClient`.\n2. On the app **Overview** page, look for the **Application (client) ID** value. Record it for later use. You need it to acquire an access token.\n3. Select **API permissions** and **Add a permission**. \n4. 
Under **My APIs**, select the `TodoAPI` application that you registered earlier.\nChoose the permissions your client application needs to operate correctly. In this case, select both **ToDo.Admin** and **ToDo.User** permissions.\nConfirm your selection by clicking on **Add permissions** to apply these to your `TodoClient` application.\n\n5. Select **Grant admin consent for ``** to grant admin consent for the permissions you added.\n\n### Add a user\nNow that we have the API created and the client app registered, it is time to create our user to grant permission to. We are going to make a member in our Microsoft Entra tenant to interact with our `TodoAPI`.\n1. Navigate to your Microsoft Entra ID and under **Manage**, choose **Users**.\n2. Click on **New user** and then on **Create new user**.\n3. In the **Create new user** section, fill in **User principal name**, **Display name**, and **Password**. The user will need to change this after their first sign-in.\n4. Click **Review + create** to examine your entries. Press **Create** to finalize the creation of the user.\n\n### Update the OAuth2 configuration for Swagger UI authorization\nTo connect our application for this tutorial, we will use Swagger. We need to refresh the OAuth2 settings for authorizing users in Swagger UI, allowing them to get access tokens via the `TodoClient` application.\n1. Access your Microsoft Entra ID tenant, and navigate to the `TodoClient` app you've registered.\n2. Click on **Manage**, then **Authentication**, choose **Add a platform**, and select **Single-page application**. For implicit grant and hybrid flows, choose both access tokens and ID tokens.\n3. In the **Redirect URIs** section, input your application's URL or endpoint followed by `/swagger-ui/oauth2-redirect.html` as the OAuth2 redirect URL, and then click on **Configure**.\n\n## Log into your application\nNavigate to the app's published URL, then click on **Authorize** to initiate the OAuth2 authentication process. In the **Available authorizations** dialog, input the `TodoClient` app's client ID in the **client_id** box, check all the options under the **Scopes** field, leave the **client_secret** box empty, and then click **Authorize** to proceed to the Microsoft Entra sign-in page. After signing in with the previously mentioned user, you will be taken back to the **Available authorizations** dialog. Voila! You should be greeted with your successful login screen. \n\n, or read more about securing your data with How to Implement Client-Side Field Level Encryption (CSFLE) in Java with Spring Data MongoDB. \n\nAre you ready to start building with Atlas on Azure? 
Get started for free today with MongoDB Atlas on Azure Marketplace\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltacdf6418ee4a5504/66016e4fdf6972781d39cc8d/image2.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt594d8d4172aae733/66016e4f1741ea64ba650c9e/image3.png", "format": "md", "metadata": {"tags": ["Java", "MongoDB", "Azure", "Spring"], "pageDescription": "Using Microsoft Entra ID, Spring Boot Security, and Spring Data MongoDB, make a secure rest API.", "contentType": "Tutorial"}, "title": "Secure your API with Spring Data MongoDB and Microsoft EntraID", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/cdc-kafka-relational-migrator", "action": "created", "body": "# The Great Continuous Migration: CDC Jobs With Kafka and Relational Migrator\n\nAre you ready to *finally* move your relational data over to MongoDB while ensuring every change to your database is properly handled? While this process can be jarring, MongoDB\u2019s Relational Migrator is here to help simplify things. In this tutorial, we will go through in-depth how to conduct change data captures from your relational data from MySQL to MongoDB Atlas using Confluent Cloud and Relational Migrator. \n\n## What are CDC jobs? \nChange data capture or CDC jobs are specific processes that track any and all changes in a database! Even if there is a small update to one row (or 100), a change data capture job will ensure that this change is accurately reflected. This is very important in a world where people want accurate results immediately \u2014 data needs to be updated constantly. From basic CRUD (create, read, update, delete) instances to more complex data changes, CDC jobs are incredibly important when dealing with data. \n\n## What is MongoDB Relational Migrator?\nMongoDB Relational Migrator is our tool to help developers migrate their relational databases to MongoDB. The great part about it is that Relational Migrator will actually help you to write new code or edit existing code to ensure your migration process works as smoothly as possible, as well as automate the conversion process of your database's schema design. This means there\u2019s less complexity and downtime and fewer errors than if tasked with dealing with this manually. \n\n## What is Confluent Cloud and why are we using it?\nConfluent Cloud is a Kafka service used to handle real-time data streaming. We are using it to deal with streaming real-time changes from our relational database to our MongoDB Atlas cluster. The great thing about Confluent Cloud is it\u2019s simple to set up and integrates seamlessly with a number of other platforms and connectors. Also, you don\u2019t need Kafka to run production migrations as the embedded mode is sufficient for the majority of migrations. \n\nWe also recommend that users start off with the embedded version even if they are planning to use Relational Migrator in the future for a quick start since it has all of the same features, except for the additional resilience in long-running jobs. \n\nKafka can be relatively complex, so it\u2019s best added to your migration job as a specific step to ensure there is limited confusion with the process. We recommend working immediately on your migration plan and schema design and then adding Kafka when planning your production cutover. 
\nLet\u2019s get started.\n\n## Pre-requisites for success \n\n - MongoDB Atlas account\n - Amazon RDS account\n - Confluent Cloud account\n - MongoDB Relational Migrator \u2014 this tutorial uses version 1.5.\n - MySQL\n - MySQL Workbench \u2014 this tutorial uses version 8.0.36. Workbench is so you can visually interact with your MySQL database, so it is optional, but if you\u2019d like to follow the tutorial exactly, please download it onto your machine.\n\n## Download MongoDB Relational Migrator \nPlease make sure you download Relational Migrator on your machine. The version we are using for this tutorial is version 1.5.0. Make sure it works and you can see it in your browser before moving on. \n\n## Create your sink database\nWhile our relational database is our source database, where our data ends up is called our sink database. In this tutorial, we want our data and all our changes to end up in MongoDB, so let\u2019s create a MongoDB Atlas cluster to ensure that happens. \nIf you need help creating a cluster, please refer to the documentation. \nPlease keep note of the region you\u2019re creating your cluster in and ensure you are choosing to host your cluster in AWS. Keep your username and password somewhere safe since you\u2019ll need them later on in this tutorial, and please make sure you\u2019ve allowed access from anywhere (0.0.0.0/0) in your \u201cNetwork Access\u201d tab. If you do not have the proper network access in place, you will not be able to connect to any of the other necessary platforms. Note that \u201cAccess from Anywhere\u201d is not recommended for production and is used for this tutorial for ease of reference. \nGrab your cluster\u2019s connection string and save it in a safe place. We will need it later. \n\n## Get your relational database ready \nFor this tutorial, I created a relational database using MySQL Workbench. The data used is taken from Kaggle in the form of a `.csv` file, if you want to use the same one: World Happiness Index: 2019. \nOnce your dataset has been properly downloaded into your MySQL database, let\u2019s configure our relational database to our Amazon RDS account. For this tutorial, please make sure you\u2019ve downloaded your `.csv` file into your MySQL database either by using the terminal commands or by using MySQL Workbench. \nWe\u2019re configuring our relational database to our Amazon RDS account so that instead of hosting our database locally, we can host it in the cloud, and then connect it to Confluent Cloud and ensure any changes to our database are accurately reflected when we eventually sync our data over to MongoDB Atlas. \n\n## Create a database in Amazon RDS\nAs of right now, Confluent Cloud\u2019s Custom Connector only supports Amazon instances, so please ensure you\u2019re using Amazon RDS for your relational databases since other cloud providers will not work at the moment. Since it\u2019s important to keep everything secure, you will need to ensure networking access, with the possibility of requiring AWS Privatelink. \nSign in to your Amazon account and head over to \u201cAmazon RDS.\u201d You can find it in the search bar at the top of the screen. \n\nClick on \u201cDatabases\u201d on the left-hand side of the screen. If you don\u2019t have a database ready to use (specifically in your Amazon account), please create one by clicking the orange button. \nYou\u2019ll be taken to this page. 
Please select the MySQL option:\n\nAfter selecting this, scroll down and change the MySQL version to the version compatible with your version of Workbench. For the tutorial, we are using version `8.0.36`.\nThen, please fill out the Settings area. For your `DB cluster identifier`, choose a name for your database cluster. Choose a `Master username`, hit the `Self managed` credentials toggle, and fill in a password. Please do not forget this username and password, you will need it throughout the tutorial to successfully set up your various connections. \nFor the rest of this database set-up process, you can keep everything `default` except please press the toggle to ensure the database allows Public Access. This is crucial! Follow the rest of the steps to complete and create your database. \n\nWhen you see the green \u201cAvailable\u201d status button, that means your database is ready to go. \n### Create a parameter group \nNow that our database is set up, we need to create a parameter group and modify some things to ensure we can do CDC jobs. We need to make sure this part works in order to successfully handle our CDC jobs. \nOn the left-hand side of your Amazon RDS homepage, you\u2019ll see the \u201cParameter groups\u201d button. Please press that and create a new parameter group. \nUnder the dropdown \u201cParameter group family,\u201d please pick `mysql8.0` since that is the version we are running for this tutorial. If you\u2019re using something different, please feel free to use a different version. Give the parameter group a name and a description and hit the orange \u201ccreate\u201d button. \nOnce it\u2019s created, click on the parameter name, hit the \u201cEdit\u201d button, search for `binlog_format`, and change the \u201cValue\u201d column from \u201cMIXED\u201d to \u201cROW.\u201d \nThis is important to do because changing this setting allows for recording any database changes at a \u201crow\u201d level. This means each and every little change to your database will be accurately recorded. Without making this change, you won\u2019t be able to properly conduct any CDC jobs. \nNow, let\u2019s associate our database with this new parameter group. \nClick on \u201cDatabases,\u201d choose the one we just created, and hit \u201cModify.\u201d Scroll all the way down to \u201cDB Parameter Group.\u201d Click on the drop-down and associate it with the group you just created. As an example, here is mine:\n\nModify the instance and click \u201cSave.\u201d Once you\u2019re done, go in and \u201cReboot\u201d your database to ensure these changes are properly saved. Please keep in mind that you\u2019re unable to reboot while the database is being modified and need to wait until it\u2019s in the \u201cAvailable\u201d state.\nHead over to the \u201cConnectivity & security\u201d tab in your database and copy your \u201cEndpoint\u201d under where it says \u201cEndpoint & port.\u201d \nNow, we\u2019re going to connect our Amazon RDS database to our MySQL Workbench! \n\n## Connect Amazon RDS to relational database\nLaunch MySQL Workbench and click the \u201c+\u201d button to establish a new connection. \nYour endpoint that was copied above will go into your \u201cHostname.\u201d Keep the port the same. (It should be 3306.) Your username and password are from when you created your cluster. It should look something like this:\n\nClick on \u201cTest Connection\u201d and you should see a successful connection. 
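If you'd like to double-check the connection outside of Workbench, the standard `mysql` command-line client can reach the same endpoint. This is an optional sanity check; the hostname and username below are placeholders for the values you recorded when creating the RDS database:

```
mysql -h <your-rds-endpoint> -P 3306 -u <your-master-username> -p
```

Once connected, running `SHOW DATABASES;` should list the schema that holds your World Happiness data.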
\n\n> If you\u2019re unable to connect when you click on \u201cTest Connection,\u201d go into your Amazon RDS database, click on the VPC security group, click on \u201cEdit inbound rules,\u201d click on \u201cAdd rule,\u201d select \u201cAll traffic\u201d under \u201cType,\u201d select \u201cAnywhere-IPv4,\u201d and save it. Try again and it will work. \n\nNow, run a simple SQL command in Workbench to test and see if you can interact with your database and see the logs in Amazon RDS. I\u2019m just running a simple update statement:\n```\nUPDATE world_happiness_report\nSET Score = 7.800\nWHERE `Country or region` = 'Finland'\nLIMIT 1;\n\n```\nThis is just changing the original score of Finland from 7.769 to 7.8. \n\nIt\u2019s been successfully changed and if we keep an eye on Amazon RDS, we don\u2019t see any issues. \n\nNow, let\u2019s configure our Confluent Cloud account! \n\n## Configure Confluent Cloud account \n\nOur first step is to create a new environment. We can use a free account here as well:\n\nOn the cluster page, please choose the \u201cBasic\u201d tier. This tier is free as well. Please make sure you have configured your zones and your region for where you are. These need to match up with both your MongoDB Atlas cluster region and your Amazon RDS database region. \n\nOnce your cluster is configured, we need to take note of a number of keys and IDs in order to properly connect to Relational Migrator. We need to take note of the:\n\n - Cluster ID.\n - Environment ID.\n - Bootstrap server.\n - REST endpoint.\n - Cloud API key and secret.\n - Kafka API key and secret. \n\nYou can find most of these from your \u201cCluster Settings,\u201d and the Environment ID can be found on the right-hand side of your environment page in Confluent. \n\nFor Cloud API keys, click on the three lines on the right-hand side of Confluent\u2019s homepage.\n\nClick on \u201cCloud API keys\u201d and grab the \u201ckey\u201d and \u201csecret\u201d if you\u2019ve already created them, or create them if necessary. \n\nFor the Kafka API keys, head over to your Cluster Overview, and on the left-hand side, click \u201cAPI Keys\u201d to create them. Once again, save your \u201ckey\u201d and \u201csecret.\u201d \n\nAll of this information is crucial since you\u2019re going to need it to insert into your `user.properties` folder to configure the connection between Confluent Cloud and MongoDB\u2019s Relational Migrator. \n\nAs you can see from the documentation linked above, your Cloud API keys will be saved in your `user.properties` file as: \n\n - migrator.confluent.cloud-credentials.api-key \n - migrator.confluent.cloud-credentials.api-secret \n\nAnd your Kafka API keys as:\n\n - migrator.confluent.kafka-credentials.api-key \n - migrator.confluent.kafka-credentials.api-secret \n\nNow that we have our Confluent Cloud configured and all our necessary information saved, let\u2019s configure our connection to MongoDB Relational Migrator. \n\n## Connect Confluent Cloud to MongoDB Relational Migrator\n\nPrior to this step, please ensure you have successfully downloaded Relational Migrator locally. \n\nWe are going to use our terminal to access our `user.properties` file located inside our Relational Migrator download and edit it accordingly to ensure a smooth connection takes place. 
\n\nUse the following commands to find the file in your terminal window:\n\n```\ncd ~/Library/Application\\ Support/MongoDB/Relational\\ Migrator/\nls\n```\nOnce you see your `user.properties` file, open it with:\n\n```\nnano user.properties\n```\n\nOnce the file is open, we need to make some edits. At the very top of the file, uncomment the line that says:\n\n```\nspring.profiles.active: confluent\n```\nBe sure to comment out anything else in this section that is uncommented. We only want the Confluent profile active. Immediately under this section, add all of the keys you saved earlier, like so:\n\n```\nmigrator.confluent.environment.environment-id: \nmigrator.confluent.environment.cluster-id: \nmigrator.confluent.environment.bootstrap-server: \nmigrator.confluent.environment.rest-endpoint: \n\nmigrator.confluent.cloud-credentials.api-key: \nmigrator.confluent.cloud-credentials.api-secret: \n\nmigrator.confluent.kafka-credentials.api-key: \nmigrator.confluent.kafka-credentials.api-secret: \n\n```\n\nThere is no need to edit anything else in this file. Just make sure you\u2019re using the correct server port: 8278. \n\nOnce the file is properly edited, write it out using Ctrl + O, press Enter, and exit the file using Ctrl + X. \n\nNow, once the file is saved, let\u2019s run MongoDB Relational Migrator. \n\n## Running MongoDB Relational Migrator \n\nWe can get it up and running straight from our terminal. Use the commands shown below to do so:\n\n```\ncd "/Applications/MongoDB Relational Migrator.app/Contents/app"\njava -jar application-1.5.0.jar\n```\nThis starts the Spring application and opens Relational Migrator in your browser:\n\nOnce Relational Migrator is running in your browser, connect it to your MySQL database:\n\nEnter your hostname (the Amazon RDS endpoint we used to connect to MySQL Workbench earlier), the database that holds your data (mine is called amazonTest, but yours will be different), and then your username and password. Hit the \u201cTest connection\u201d button to ensure the connection is successful. You\u2019ll see a green bar at the bottom if it is. \n\nNow, we want to select the tables to use. We are just going to click our database:\n\nThen, define your initial schema. We are just going to start with a recommended MongoDB schema because it\u2019s a little easier to work with.\n\nOnce this is done, you\u2019ll see what your relational schema will look like once it\u2019s migrated as documents in MongoDB Atlas!\n\nNow, click on the \u201cData Migration\u201d tab at the top of the screen. Remember the MongoDB cluster we created at the beginning of this tutorial for our sink data? We need all of that connection information. \n\nFirst, re-enter the AWS RDS information we loaded earlier. That is our source database, and now we are setting up our destination, or sink, database. \n\nEnter the MongoDB connection string for your cluster, and make sure you are using the correct username and password. \n\nThen, hit \u201cTest connection\u201d to make sure you can properly connect to your Atlas database. \n\nWhen you first specify that you want a continuous migration, you will get a message saying you need to generate a script to do so. Click the button and a script will download; you will then run it in MySQL Workbench. 
The script looks like this:\n\n```\n/*\n* Relational Migrator needs source database to allow change data capture.\n* The following scripts must be executed on MySQL source database before starting migration.\n* For more details, please see https://debezium.io/documentation/reference/stable/connectors/mysql.html#setting-up-mysql\n*/\n\n/*\n* Before initiating migration job, the MySQL user is required to be able to connect to the source database.\n* This MySQL user must have appropriate permissions on all databases for which the Relational Migrator is supposed to capture changes.\n*\n* Connect to Amazon RDS Mysql instance, follow the below link for instructions:\n* https://dev.mysql.com/doc/mysql-cluster-excerpt/8.0/en/mysql-cluster-replication-schema.html\n*\n* Grant the required permissions to the user\n*/\n\nGRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'anaiya'@'%' ;\n\n/* Finalize the user\u2019s permissions: */\nFLUSH PRIVILEGES;\n\n/* Furthermore, binary logging must be enabled for MySQL replication on AWS RDS instance. Please see the below for instructions:\n* https://aws.amazon.com/premiumsupport/knowledge-center/enable-binary-logging-aurora/\n*\n* If the instance is using the default parameter group, you will need to create a new one before you can make any changes.\n* For MySQL RDS instances, create a Parameter Group for your chosen MySQL version.\n* For Aurora MySQL clusters, create a DB Cluster Parameter Group for your chosen MySQL version.\n* Edit the group and set the \"binlog_format\" parameter to \"ROW\".\n* Make sure your database or cluster is configured to use the new Parameter Group.\n*\n* Please note that you must reboot the database cluster or instance to apply changes, follow below for instructions:\n* https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_RebootCluster.html\n*/\n```\nRun this script in MySQL Workbench by hitting the lightning button. You\u2019ll know it was successful if you don\u2019t see any error messages in Workbench. In Relational Migrator, the \u201cGenerate Script\u201d message will also disappear, telling you that you can now use the continuous snapshot. \n\nStart it and it\u2019ll run! Your snapshot stage will finish first, and then your continuous stage will run:\n\nWhile the continuous snapshot is running, make a change in your database. I am changing the happiness score for Finland from 7.8 to 5.8:\n\n```\nUPDATE world_happiness_report\nSET Score = 5.800\nWHERE `Country or region` = 'Finland'\nLIMIT 1;\n```\n\nOnce you run your change in MySQL Workbench, click on the \u201cComplete CDC\u201d button in Relational Migrator. \n\nNow, let\u2019s check out our MongoDB Atlas cluster to confirm that the data has loaded with the correct schema and that our change has been streamed:\n\nAs you can see, all of the information from your original MySQL database has been migrated to MongoDB Atlas, and you\u2019re able to stream in any changes to your database! \n\n## Conclusion\n\nIn this tutorial, we successfully migrated MySQL data and set up continuous change data capture to MongoDB Atlas using Confluent Cloud and MongoDB Relational Migrator. This matters because every change to your relational database is now mirrored in your MongoDB Atlas database in real time. 
\n\nFor more information and help, please use the following resources:\n\n - MongoDB Relational Migrator\n - Confluent Cloud\n\n", "format": "md", "metadata": {"tags": ["MongoDB", "AWS", "Kafka", "SQL"], "pageDescription": "This tutorial explains how to configure CDC jobs on your relational data from MySQL Workbench to MongoDB Atlas using MongoDB Relational Migrator and Confluent Cloud.", "contentType": "Tutorial"}, "title": "The Great Continuous Migration: CDC Jobs With Kafka and Relational Migrator", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/agent-fireworksai-mongodb-langchain", "action": "created", "body": "# Building an AI Agent With Memory Using MongoDB, Fireworks AI, and LangChain\n\nThis tutorial provides a step-by-step guide on building an AI research assistant agent that uses MongoDB as the memory provider, Fireworks AI for function calling, and LangChain for integrating and managing conversational components.\n\nThis agent can assist researchers by allowing them to search for research papers with semantic similarity and vector search, using MongoDB as a structured knowledge base and a data store for conversational history.\n\nThis repository contains all the steps to implement the agent in this tutorial, including code snippets and explanations for setting up the agent's memory, integrating tools, and configuring the language model to interact effectively with humans and other systems.\n\n**What to expect in this tutorial:**\n- Definitions and foundational concepts of an agent\n- Detailed understanding of the agent's components\n- Step-by-step implementation guide for building a research assistance agent\n- Insights into equipping agents with effective memory systems and knowledge management\n\n----------\n\n# What is an agent?\n**An agent is an artificial computational entity with an awareness of its environment. It is equipped with faculties that enable perception through input, action through tool use, and cognitive abilities through foundation models backed by long-term and short-term memory.** Within AI, agents are artificial entities that can make intelligent decisions followed by actions based on environmental perception, enabled by large language models.\n\n.\n- Obtain a Fireworks AI key.\n- Get instructions on how to obtain a MongoDB URI connection string, which is provided right after creating a MongoDB database.\n\n```\n import os\n\n # Be sure to have all the API keys in your local environment as shown below\n # Do not publish environment keys in production\n # os.environ\"OPENAI_API_KEY\"] = \"sk\"\n # os.environ[\"FIREWORKS_API_KEY\"] = \"\"\n # os.environ[\"MONGO_URI\"] = \"\"\n\n FIREWORKS_API_KEY = os.environ.get(\"FIREWORKS_API_KEY\")\n OPENAI_API_KEY = os.environ.get(\"OPENAI_API_KEY\")\n MONGO_URI = os.environ.get(\"MONGO_URI\")\n```\n\nThe code snippet above does the following:\n1. Retrieving the environment variables: `os.environ.get()` enables retrieving the value assigned to an environment variable by name reference.\n\n## Step 3: data ingestion into MongoDB vector database\n\nThis tutorial uses a [specialized subset of the arXiv dataset hosted on MongoDB, derived from the extensive original collection on the Hugging Face platform. This subset version encompasses over 50,000 scientific articles sourced directly from arXiv. 
Each record in the subset dataset has an embedding field, which encapsulates a 256-dimensional representation of the text derived by combining the authors' names, the abstracts, and the title of each paper. \n\nThese embeddings are generated using OpenAI's `text-embedding-3-small model`, which was selected primarily due to its minimal dimension size that takes less storage space. Read the tutorial, which explores ways to select appropriate embedding models for various use cases.\n\nThis dataset will act as the agent's knowledge base. The aim is that before using any internet search tools, the agent will initially attempt to answer a question using its knowledge base or long-term memory, which, in this case, are the arXiv records stored in the MongoDB vector database.\n\nThe following step in this section loads the dataset, creates a connection to the database, and ingests the records into the database.\n\nThe code below is the implementation step to obtain the subset of the arXiv dataset using the `datasets` library from Hugging Face. Before executing the code snippet below, ensure that an `HF_TOKEN` is present in your development environment; this is the user access token required for authorized access to resources from Hugging Face. Follow the instructions to get the token associated with your account.\n\n```\n import pandas as pd\n from datasets import load_dataset\n\n data = load_dataset(\"MongoDB/subset_arxiv_papers_with_embeddings\")\n dataset_df = pd.DataFrame(data\"train\"])\n```\n\n1. Import the pandas library using the namespace `pd` for referencing the library and accessing functionalities.\n2. Import the datasets library to use the `load_dataset` method, which enables access to datasets hosted on the Hugging Face platform by referencing their path.\n3. Assign the loaded dataset to the variable data.\n4. Convert the training subset of the dataset to a pandas DataFrame and assign the result to the variable `dataset_df`.\n\nBefore executing the operations in the following code block below, ensure that you have created a MongoDB database with a collection and have obtained the URI string for the MongoDB database cluster. Creating a database and collection within MongoDB is made simple with MongoDB Atlas. [Register a free Atlas account or sign in to your existing Atlas account. Follow the instructions (select Atlas UI as the procedure) to deploy your first cluster.\n\nThe database for this tutorial is called `agent_demo` and the collection that will hold the records of the arXiv scientific papers metadata and their embeddings is called `knowledge`.\n\nTo enable MongoDB's vector search capabilities, a vector index definition must be defined for the field holding the embeddings. Follow the instructions here to create a vector search index. 
Ensure the name of your vector search index is `vector_index`.\n\nYour vector search index definition should look something like what is shown below:\n```\n \u00a0\u00a0\u00a0\u00a0{\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"fields\": \n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0{\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"numDimensions\": 256,\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"path\": \"embedding\",\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"similarity\": \"cosine\",\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"type\": \"vector\"\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0]\n \u00a0\u00a0\u00a0\u00a0}\n```\n\nOnce your database, collection, and vector search index are fully configured, connect to your database and execute data ingestion tasks with just a few lines of code with PyMongo.\n\n```\n from pymongo import MongoClient\n\n # Initialize MongoDB python client\n client = MongoClient(MONGO_URI)\n\n DB_NAME = \"agent_demo\"\n COLLECTION_NAME = \"knowledge\"\n ATLAS_VECTOR_SEARCH_INDEX_NAME = \"vector_index\"\n collection = client.get_database(DB_NAME).get_collection(COLLECTION_NAME)\n```\n1. Import the `MongoClient` class from the PyMongo library to enable MongoDB connections in your Python application.\n2. Utilize the MongoClient with your `MONGO_URI` to establish a connection to your MongoDB database. Replace `MONGO_URI` with your actual connection string.\n3. Set your database name to `agent_demo` by assigning it to the variable `DB_NAME`.\n4. Set your collection name to `knowledge` by assigning it to the variable `COLLECTION_NAME`.\n5. Access the knowledge collection within the `agent_demo` database by using `client.get_database(DB_NAME).get_collection(COLLECTION_NAME)` and assigning it to a variable for easy reference.\n6. Define the vector search index name as `vector_index` by assigning it to the variable `ATLAS_VECTOR_SEARCH_INDEX_NAME`, preparing for potential vector-based search operations within your collection.\n\nThe code snippet below outlines the ingestion process. First, the collection is emptied to ensure the tutorial is completed with a clean collection. The next step is to convert the pandas DataFrame into a list of dictionaries, and finally, the ingestion process is executed using the `insert_many()` method available on the PyMongo collection object.\n\n```\n # Delete any existing records in the collection\n collection.delete_many({})\n\n # Data Ingestion\n records = dataset_df.to_dict('records')\n collection.insert_many(records)\n\n print(\"Data ingestion into MongoDB completed\")\n```\n\n## Step 4: create LangChain retriever with MongoDB\n\nThe LangChain open-source library has an interface implementation that communicates between the user query and a data store. This interface is called a retriever.\n\nA retriever is a simple, lightweight interface within the LangChain ecosystem that takes a query string as input and returns a list of documents or records that matches the query based on some similarity measure and score threshold.\n\nThe data store for the back end of the retriever for this tutorial will be a vector store enabled by the MongoDB database. The code snippet below shows the implementation required to initialize a MongoDB vector store using the MongoDB connection string and specifying other arguments. 
The final operation uses the vector store instance as a retriever.\n\n```\n from langchain_openai import OpenAIEmbeddings\n from langchain_mongodb import MongoDBAtlasVectorSearch\n\n embedding_model = OpenAIEmbeddings(model=\"text-embedding-3-small\", dimensions=256)\n\n # Vector Store Creation\n vector_store = MongoDBAtlasVectorSearch.from_connection_string(\n \u00a0\u00a0\u00a0\u00a0connection_string=MONGO_URI,\n \u00a0\u00a0\u00a0\u00a0namespace=DB_NAME + \".\" + COLLECTION_NAME,\n \u00a0\u00a0\u00a0\u00a0embedding= embedding_model,\n \u00a0\u00a0\u00a0\u00a0index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,\n \u00a0\u00a0\u00a0\u00a0text_key=\"abstract\"\n )\n\n`retriever` = vector_store.as_retriever(search_type=\"similarity\", search_kwargs={\"k\": 5})\n\n```\n\n1. Start by importing `OpenAIEmbeddings` from langchain_openai and `MongoDBAtlasVectorSearch` from langchain_mongodb. These imports will enable you to generate text embeddings and interface with MongoDB Atlas for vector search operations.\n2. Instantiate an `OpenAIEmbeddings` object by specifying the model parameter as \"text-embedding-3-small\" and the dimensions as 256. This step prepares the model for generating 256-dimensional vector embeddings from the query passed to the retriever.\n3. Use the `MongoDBAtlasVectorSearch.from_connection_string` method to configure the connection to your MongoDB Atlas database. The parameters for this function are as follows:\n - `connection_string`: This is the actual MongoDB connection string.\n - `namespace`: Concatenate your database name (DB_NAME) and collection name (COLLECTION_NAME) to form the namespace where the records are stored.\n - `embedding`: Pass the previously initialized embedding_model as the embedding parameter. Ensure the embedding model specified in this parameter is the same one used to encode the embedding field within the database collection records.\n - `index_name`: Indicate the name of your vector search index. This index facilitates efficient search operations within the database.\n - `text_key`: Specify \"abstract\" as the text_key parameter. This indicates that the abstract field in your documents will be the focus for generating and searching embeddings.\n4. Create a `retriever` from your vector_store using the `as_retriever` method, tailored for semantic similarity searches. This setup enables the retrieval of the top five documents most closely matching the user's query based on vector similarity, using MongoDB's vector search capabilities for efficient document retrieval from your collection.\n\n## Step 5: configure LLM using Fireworks AI\n\nThe agent for this tutorial requires an LLM as its reasoning and parametric knowledge provider. The agent's model provider is Fireworks AI. More specifically, the [FireFunction V1 model, which is Fireworks AI's function-calling model, has a context window of 32,768 tokens.\n\n**What is function calling?**\n\n**Function calling refers to the ability of large language models (LLMs) to select and use available tools to complete specific tasks**. First, the LLM chooses a tool by a name reference, which, in this context, is a function. It then constructs the appropriate structured input for this function, typically in the JSON schema that contains fields and values corresponding to expected function arguments and their values. This process involves invoking a selected function or an API with the input prepared by the LLM. 
The result of this function invocation can then be used as input for further processing by the LLM.\u00a0\n\nFunction calling transforms LLMs' conditional probabilistic nature into a predictable and explainable model, mainly because the functions accessible by LLMs are constructed, deterministic, and implemented with input and output constraints.\n\nFireworks AI's firefunction model is based on Mixtral and is open-source. It integrates with the LangChain library, which abstracts some of the implementation details for function calling with LLMs with tool-calling capabilities. The LangChain library provides an easy interface to integrate and interact with the Fireworks AI function calling model.\n\nThe code snippet below initializes the language model with function-calling capabilities. The `Fireworks` class is instantiated with a specific model, \"accounts/fireworks/models/firefunction-v1,\" and configured to use a maximum of 256 tokens.\n\n```\n import os\n from langchain_fireworks import Fireworks\n\n llm = Fireworks(\n \u00a0\u00a0\u00a0\u00a0model=\"accounts/fireworks/models/firefunction-v1\",\n \u00a0\u00a0\u00a0\u00a0max_tokens=256)\n```\nThat is all there is to configure an LLM for the LangChain agent using Fireworks AI. The agent will be able to select a function from a list of provided functions to complete a task. It generates function input as a structured JSON schema, which can be invoked and the output processed.\n\n## Step 6: create tools for the agent\n\nAt this point, we\u2019ve done the following:\n- Ingested data into our knowledge base, which is held in a MongoDB vector database\n- Created a retriever object to interface between queries and the vector database\n- Configured the LLM for the agent\n\nThis step focuses on specifying the tools that the agent can use when attempting to execute operations to achieve its specified objective. The LangChain library has multiple methods of specifying and configuring tools for an agent. In this tutorial, two methods are used:\n\n1. Custom tool definition with the `@tool` decorator\n2. LangChain built-in tool creator using the `Tool` interface\n\nLangChain has a collection of Integrated tools to provide your agents with. An agent can leverage multiple tools that are specified during its implementation. When implementing tools for agents using LangChain, it\u2019s essential to configure the model's name and description. The name and description of the tool enable the LLM to know when and how to leverage the tool. Another important note is that LangChain tools generally expect single-string input.\n\nThe code snippet below imports the classes and methods required for tool configuration from various LangChain framework modules.\n\n```\n from langchain.agents import tool\n from langchain.tools.retriever import create_retriever_tool\n from langchain_community.document_loaders import ArxivLoader\n```\n\n- Import the `tool` decorator from `langchain.agents`. These are used to define and instantiate custom tools within the LangChain framework, which allows the creation of modular and reusable tool components.\n- Lastly, `create_retriever_tool` from `langchain.tools.retriever` is imported. This method provides the capability of using configured retrievers as tools for an agent.\u00a0\n- Import `ArxivLoader` from `langchain_community.document_loaders`. 
This class provides a document loader specifically designed to fetch and load documents from the arXiv repository.\n\nOnce all the classes and methods required to create a tool are imported into the development environment, the next step is to create the tools.\n\nThe code snippet below outlines the creation of a tool using the LangChain tool decorator. The main purpose of this tool is to take a query from the user, which can be a search term or, for our specific use case, a term for the basis of research exploration, and then use the `ArxivLoader` to extract at least 10 documents that correspond to arXiv papers that match the search query.\n\n\u00a0\n\nThe `get_metadata_information_from_arxiv` returns a list containing the metadata of each document returned by the search. The metadata includes enough information for the LLM to start research exploration or utilize further tools for a more in-depth exploration of a particular paper.\n\n```\n @tool\n def get_metadata_information_from_arxiv(word: str) -> list:\n \u00a0\u00a0\"\"\"\n \u00a0\u00a0Fetches and returns metadata for a maximum of ten documents from arXiv matching the given query word.\n\n \u00a0\u00a0Args:\n \u00a0\u00a0\u00a0\u00a0word (str): The search query to find relevant documents on arXiv.\n\n \u00a0\u00a0Returns:\n \u00a0\u00a0\u00a0\u00a0list: Metadata about the documents matching the query.\n \u00a0\u00a0\"\"\"\n \u00a0\u00a0docs = ArxivLoader(query=word, load_max_docs=10).load()\n \u00a0\u00a0# Extract just the metadata from each document\n \u00a0\u00a0metadata_list = doc.metadata for doc in docs]\n \u00a0\u00a0return metadata_list\n```\n\nTo get more information about a specific paper, the `get_information_from_arxiv` tool created using the `tool` decorator returns the full document of a single paper by using the ID of the paper, entered as the input to the tool as the query for the `ArxivLoader` document loader. The code snippet below provides the implementation steps to create the `get_information_from_arxiv` tool.\n\n```\n @tool\n def get_information_from_arxiv(word: str) -> list:\n \u00a0\u00a0\"\"\"\n \u00a0\u00a0Fetches and returns metadata for a single research paper from arXiv matching the given query word, which is the ID of the paper, for example: 704.0001.\n\n \u00a0\u00a0Args:\n \u00a0\u00a0\u00a0\u00a0word (str): The search query to find the relevant paper on arXiv using the ID.\n\n \u00a0\u00a0Returns:\n \u00a0\u00a0\u00a0\u00a0list: Data about the paper matching the query.\n \u00a0\u00a0\"\"\"\n \u00a0\u00a0doc = ArxivLoader(query=word, load_max_docs=1).load()\n \u00a0\u00a0return doc\n```\n\nThe final tool for the agent in this tutorial is the retriever tool. This tool encapsulates the agent's ability to use some form of knowledge base to answer queries initially. This is analogous to humans using previously gained information to answer queries before conducting some search via the internet or alternate information sources.\n\nThe `create_retriever_tool` takes in three arguments:\n\n- retriever: This argument should be an instance of a class derived from BaseRetriever, responsible for the logic behind retrieving documents. In this use case, this is the previously configured retriever that uses MongoDB\u2019s vector database feature.\n- name: This is a unique and descriptive name given to the retriever tool. The LLM uses this name to identify the tool, which also indicates its use in searching a knowledge base.\n- description: The third parameter provides a detailed description of the tool's purpose. 
For this tutorial and our use case, the tool acts as the foundational knowledge source for the agent and contains records of research papers from arXiv.\n\n```\n retriever_tool = create_retriever_tool(\n \u00a0\u00a0\u00a0\u00a0retriever=retriever,\n \u00a0\u00a0\u00a0\u00a0name=\"knowledge_base\",\n \u00a0\u00a0\u00a0\u00a0description=\"This serves as the base knowledge source of the agent and contains some records of research papers from Arxiv. This tool is used as the first step for exploration and research efforts.\"\u00a0\n )\n```\n\nLangChain agents require the specification of tools available for use as a Python list. The code snippet below creates a list named `tools` that consists of the three tools created in previous implementation steps.\n\n```\ntools = [get_metadata_information_from_arxiv, get_information_from_arxiv, retriever_tool]\n```\n\n## Step 7: prompting the agent\n\nThis step in the tutorial specifies the instruction taken to instruct the agent using defined prompts. The content passed into the prompt establishes the agent's execution flow and objective, making prompting the agent a crucial step in ensuring the agent's behaviour and output are as expected.\n\nConstructing prompts for conditioning LLMs and chat models is genuinely an art form. Several prompt methods have emerged in recent years, such as ReAct and chain-of-thought prompt structuring, to amplify LLMs' ability to decompose a problem and act accordingly. The LangChain library turns what could be a troublesome exploration process of prompt engineering into a systematic and programmatic process.\n\nLangChain offers the `ChatPromptTemplate.from_message()` class method to construct basic prompts with predefined roles such as \"system,\" \"human,\" and \"ai.\" Each role corresponds to a different speaker type in the chat, allowing for structured dialogues. Placeholders in the message templates (like `{name}` or `{user_input}`) are replaced with actual values passed to the `invoke()` method, which takes a dictionary of variables to be substituted in the template.\n\nThe prompt template includes a variable to reference the chat history or previous conversation the agent has with other entities, either humans or systems. The `MessagesPlaceholder` class provides a flexible way to add and manage historical or contextual chat messages within structured chat prompts.\n\nFor this tutorial, the \"system\" role scopes the chat model into the specified role of a helpful research assistant; the chat model, in this case, is FireFunction V1 from Fireworks AI. The code snippet below outlines the steps to implement a structured prompt template with defined roles and variables for user inputs and some form of conversational history record.\n\n```\n from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n agent_purpose = \"You are a helpful research assistant\"\n prompt = ChatPromptTemplate.from_messages(\n \u00a0\u00a0\u00a0\u00a0[\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0(\"system\", agent_purpose),\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0(\"human\", \"{input}\"),\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0MessagesPlaceholder(\"agent_scratchpad\")\n \u00a0\u00a0\u00a0\u00a0]\n )\n```\nThe `{agent_scratchpad}` represents the short-term memory mechanism of the agent. This is an essential agent component specified in the prompt template. 
The agent scratchpad is responsible for appending the intermediate steps of the agent operations, thoughts, and actions to the thought component of the prompt. The advantage of this short-term memory mechanism is the maintenance of context and coherence throughout an interaction, including the ability to revisit and revise decisions based on new information.\n\n## Step 8: create the agent\u2019s long-term memory using MongoDB\n\nThe LangChain and MongoDB integration makes incorporating long-term memory for agents a straightforward implementation process. The code snippet below demonstrates how MongoDB can store and retrieve chat history in an agent system.\n\nLangChain provides the `ConversationBufferMemory` interface to store interactions between an LLM and the user within a specified data store, MongoDB, which is used for this tutorial. This interface also provides methods to extract previous interactions and format the stored conversation as a list of messages. The `ConversationBufferMemory` is the long-term memory component of the agent.\n\nThe main advantage of long-term memory within an agentic system is to have some form of persistent storage that acts as a state, enhancing the relevance of responses and task execution by using previous interactions. Although using an agent\u2019s scratchpad, which acts as a short-term memory mechanism, is helpful, this temporary state is removed once the conversation ends or another session is started with the agent.\u00a0\n\nA long-term memory mechanism provides an extensive record of interaction that can be retrieved across multiple interactions occurring at various times. Therefore, whenever the agent is invoked to execute a task, it\u2019s also provided with a recollection of previous interactions.\n\n```\n from langchain_mongodb.chat_message_histories import MongoDBChatMessageHistory\n from langchain.memory import ConversationBufferMemory\n\n def get_session_history(session_id: str) -> MongoDBChatMessageHistory:\n \u00a0\u00a0\u00a0\u00a0return MongoDBChatMessageHistory(MONGO_URI, session_id, database_name=DB_NAME, collection_name=\"history\")\n\n memory = ConversationBufferMemory(\n \u00a0\u00a0\u00a0\u00a0memory_key=\"chat_history\",\u00a0\n \u00a0\u00a0\u00a0\u00a0chat_memory=get_session_history(\"my-session\")\n )\n```\n\n- The function `get_session_history` takes a `session_id` as input and returns an instance of `MongoDBChatMessageHistory`. This instance is configured with a MongoDB URI (MONGO_URI), the session ID, the database name (DB_NAME), and the collection name (history).\n- A `ConversationBufferMemory` instance is created and assigned to the variable memory. This instance is specifically designed to keep track of the chat_history.\n- The chat_memory parameter of ConversationBufferMemory is set using the `get_session_history` function, which means the chat history is loaded from MongoDB based on the specified session ID (\"my-session\").\n\nThis setup allows for the dynamic retrieval of chat history for a given session, using MongoDB as the agent\u2019s vector store back end.\n\n## Step 9: agent creation\n\nThis is a crucial implementation step in this tutorial. This step covers the creation of your agent and configuring its brain, which is the LLM, the tools available for task execution, and the objective prompt that targets the agents for the completion of a specific task or objective. 
This section also covers the initialization of a LangChain runtime interface, `AgentExecutor`, that enables the execution of the agents with configured properties such as memory and error handling.\n\n```\n from langchain.agents import AgentExecutor, create_tool_calling_agent\n agent = create_tool_calling_agent(llm, tools, prompt)\n\n agent_executor = AgentExecutor(\n \u00a0\u00a0\u00a0\u00a0agent=agent,\n \u00a0\u00a0\u00a0\u00a0tools=tools,\n \u00a0\u00a0\u00a0\u00a0verbose=True,\n \u00a0\u00a0\u00a0\u00a0handle_parsing_errors=True,\n \u00a0\u00a0\u00a0\u00a0memory=memory,\n )\n```\n- The `create_tool_calling_agent` function initializes an agent by specifying a language model (llm), a set of tools (tools), and a prompt template (prompt). This agent is designed to interact based on the structured prompt and leverage external tools within their operational framework.\n- An `AgentExecutor` instance is created with the Tool Calling agent. The `AgentExecutor` class is responsible for managing the agent's execution, facilitating interaction with inputs, and intermediary steps such as error handling and logging. The `AgentExecutor` is also responsible for creating a recursive environment for the agent to be executed, and it passes the output of a previous iteration as input to the next iteration of the agent's execution.\n - agent: The Tool Calling agent\n - tools: A sequence of tools that the agent can use. These tools are predefined abilities or integrations that augment the agent's capabilities.\n - handle_parsing_errors: Ensure the agent handles parsing errors gracefully. This enhances the agent's robustness by allowing it to recover from or ignore errors in parsing inputs or outputs.\n - memory: Specifies the memory mechanism the agent uses to remember past interactions or data. This integration provides the agent additional context or historical interaction to ensure ongoing interactions are relevant and grounded in relative truth.\n\n## Step 10: agent execution\n\nThe previous steps created the agent, prompted it, and initiated a runtime interface for its execution. This final implementation step covers the method to start the agent's execution and its processes.\n\nIn the LangChain framework, native objects such as models, retrievers, and prompt templates inherit the `Runnable` protocol. This protocol endows the LangChain native components with the capability to perform their internal operations. Objects implementing the Runnable protocol are recognized as runnable and introduce additional methods for initiating their process execution through a `.invoke()` method, modifying their behaviour, logging their internal configuration, and more.\n\nThe agent executor developed in this tutorial exemplifies a Runnable object. We use the `.invoke()` method on the `AgentExecutor` object to call the agent. The agent executor initialized it with a string input in the example code provided. 
This input is used as the `{input}` in the question component of the template or the agent's prompt.\n\n```\nagent_chain.invoke({\"input\": \"Get me a list of research papers on the topic Prompt Compression\"})\n```\n\nIn the first initial invocation of the agent, the ideal steps would be as follows:\n- The agent uses the retriever tool to access its inherent knowledge base and check for research papers that are semantically similar to the user input/instruction using vector search enabled by MongoDB Atlas.\n- If the agent retrieves research papers from its knowledge base, it will provide it as its response.\n- If the agent doesn\u2019t find research papers from its knowledge base, it should use the `get_metadata_information_from_arxiv()` tool to retrieve a list of documents that match the term in the user input and return it as its response.\n\n```\n agent_executor.invoke({\"input\":\"Get me the abstract of the first paper on the list\"})\n```\n\nThis next agent invocation demonstrates the agent's ability to reference conversational history, which is retrieved from the MongoDB database from the `chat_history` collection and used as input into the model.\n\nIn the second invocation of the agent, the ideal outcome would be as follows:\n- The agent references research papers in its history or short-term memory and recalls the details of the first paper on the list.\n- The agent uses the details of the first research paper on the list as input to the `get_information_from_arxiv()` tool to extract the abstract of the query paper.\n\n----------\n\n# Conclusion\n\nThis tutorial has guided you through building an AI research assistant agent, leveraging tools such as MongoDB, Fireworks AI, and LangChain. It\u2019s shown how these technologies combine to create a sophisticated agent capable of assisting researchers by effectively managing and retrieving information from an extensive database of research papers.\n\nIf you have any questions regarding this training, head to the [forums.\n\nIf you want to explore more RAG and Agents examples, visit the GenAI Showcase repository.\n\nOr, if you simply want to get a well-rounded understanding of the AI Stack in the GenAI era, read this piece.\n\n----------\n\n# FAQs\n\n1. **What is an Agent?**\nAn agent is an artificial computational entity with an awareness of its environment. It is equipped with faculties that enable perception through input, action through tool use, and cognitive abilities through foundation models backed by long-term and short-term memory. Within AI, agents are artificial entities that can make intelligent decisions followed by actions based on environmental perception, enabled by large language models.\n\n1. **What is the primary function of MongoDB in the AI agent?**\nMongoDB serves as the memory provider for the agent, storing conversational history, vector embedding data, and operational data. It supports information retrieval through its vector database capabilities, enabling semantic searches between user queries and stored data.\u00a0\n\n2. **How does Fireworks AI enhance the functionality of the agent?**\nFireworks AI, through its FireFunction V1 model, enables the agent to generate responses to user queries and decide when to use specific tools by providing a structured input for the available tools.\n\n3. **What are some key characteristics of AI agents?**\nAgents are autonomous, introspective, proactive, reactive, and interactive. 
They can independently plan and reason, respond to stimuli with advanced methodologies, and interact dynamically within their environments.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc72cddd5357a7d9a/6627c077528fc1247055ab24/Screenshot_2024-04-23_at_15.06.25.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf09001ac434120f7/6627c10e33301d39a8891e2e/Perception_(3).png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI", "Pandas"], "pageDescription": "Creating your own AI agent equipped with a sophisticated memory system. This guide provides a detailed walkthrough on leveraging the capabilities of Fireworks AI, MongoDB, and LangChain to construct an AI agent that not only responds intelligently but also remembers past interactions.", "contentType": "Tutorial"}, "title": "Building an AI Agent With Memory Using MongoDB, Fireworks AI, and LangChain", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/spring-application-on-k8s", "action": "created", "body": "# MongoDB Orchestration With Spring & Atlas Kubernetes Operator\n\nIn this tutorial, we'll delve into containerization concepts, focusing on Docker, and explore deploying your Spring Boot application from a previous tutorial. By the tutorial's conclusion, you'll grasp Docker and Kubernetes concepts and gain hands-on experience deploying your application within a cloud infrastructure.\n\nThis tutorial is an extension of the previous tutorial where we explained how to write advanced aggregation queries in MongoDB using the Spring Boot framework. We will use the same GitHub repository to create this tutorial's deployment files.\n\nWe'll start by learning about containers, like digital packages that hold software. Then, we'll dive into Kubernetes, a system for managing those containers. Finally, we'll use Kubernetes to set up MongoDB and our Spring application, seeing how they work together.\n\n## Prerequisites\n\n1. A Spring Boot application running on your local machine\n2. Elastic Kubernetes Service deployed on AWS using eksctl\n3. A MongoDB Atlas account\n\n## Understanding containerization\n\nOften as a software developer, one comes across an issue where the features of the application work perfectly on the local machine, and many features seem to be broken on the client machine. This is where the concept of containers would come in.\n\nIn simple words, a container is just a simple, portable computing environment that contains everything an application needs to run. The process of creating containers for the application to run in any environment is known as containerization.\n\nContainerization is a form of virtualization where an application, along with all its components, is packaged into a single container image. These containers operate in their isolated environment within the shared operating system, allowing for efficient and consistent deployment across different environments.\n\n### Advantages of containerizing the application\n\n1. **Portability**: The idea of \u201cwrite once and run anywhere\u201d encapsulates the essence of containers, enabling applications to seamlessly transition across diverse environments, thereby enhancing their portability and flexibility.\n2. 
**Efficiency**: When configured properly, containers utilize the available resources, and also, isolated containers can perform their operations without interfering with other containers, allowing a single host to perform many functions. This makes the containerized application work efficiently and effectively.\n3. **Better security**: Because containers are isolated from one another, you can be confident that your applications are running in their self-contained environment. That means that even if the security of one container is compromised, other containers on the same host remain secure.\n\n### Comparing containerization and traditional virtualization methods\n\n| | | |\n|----------------------|-------------------------|--------------------------------------|\n| **Aspect** | **Containers** | **Virtual Machines** |\n| Abstraction Level | OS level virtualization | Hardware-level virtualization |\n| Resource Overhead | Minimal | Higher |\n| Isolation | Process Level | Stronger |\n| Portability | Highly Portable | Less Portable |\n| Deployment Speed | Fast | Slower |\n| Footprint | Lightweight | Heavier |\n| Startup Time | Almost instant | Longer |\n| Resource Utilisation | Efficient | Less Efficient |\n| Scalability | Easily Scalable | Scalable, but with resource overhead |\n\n## Understanding Docker\n\nDocker application provides the platform to develop, ship, and run containers. This separates the application from the infrastructure and makes it portable. It packages the application into lightweight containers that can run across without worrying about underlying infrastructures.\n\nDocker containers have minimal overhead compared to traditional virtual machines, as they share the host OS kernel and only include necessary dependencies. Docker facilitates DevOps practices by enabling developers to build, test, and deploy applications in a consistent and automated manner. You can read more about Docker containers and the steps to install them on your local machine from their official documentation.\n\n## Understanding Kubernetes\n\nKubernetes, often called K8s, is an open-source orchestration platform that automates containerized applications' deployment, scaling, and management. It abstracts away the underlying infrastructure complexity, allowing developers to focus on building and running their applications efficiently.\n\nIt simplifies the deployment and management of containerized applications at scale. Its architecture, components, and core concepts form the foundation for building resilient, scalable, and efficient cloud-native systems. The Kubernetes architectures have been helpful in typical use cases like microservices architecture, hybrid and multi-cloud deployments, and DevOps where continuous deployments are done.\n\nLet's understand a few components related to Kubernetes:\n\nThe K8s environment works in the controller-worker node architecture and therefore, two nodes manage the communication. 
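\n(Once you have a cluster available, you can see this split for yourself with plain kubectl; note that on managed offerings such as EKS or GKE the control plane is operated by the cloud provider, so the command below typically lists only the worker nodes.)\n\n```bash\n# List the nodes in the cluster along with their status, roles, and versions.\n# On managed Kubernetes services only the worker nodes usually appear here.\nkubectl get nodes -o wide\n```\n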
The Master Node is responsible for controlling the cluster and making decisions for the cluster whereas the Worker node(s) is responsible for running the application receiving instructions from the Master Node and resorting back to the status.\n\nThe other components of the Kubernetes cluster are:\n\n**Pods**: The basic building block of Kubernetes, representing one or more containers deployed together on the same host\n\n**ReplicaSets**: Ensures that a specified number of pod replicas are running at any given time, allowing for scaling and self-healing\n\n**Services**: Provide networking and load balancing for pods, enabling communication between different parts of the application\n\n**Volumes**: Persist data in Kubernetes, allowing containers to share and store data independently of the container lifecycle\n\n**Namespaces**: Virtual clusters within a physical cluster, enabling multiple users, teams, or projects to share a Kubernetes cluster securely\n\nThe below diagrams give a detailed description of the Kubernetes architecture.\n\n## Atlas Kubernetes Operator\n\nConsider a use case where a Spring application running locally is connected to a database deployed on the Atlas cluster. Later, your organization introduces you to the Kubernetes environment and plans to deploy all the applications in the cloud infrastructure.\n\nThe question of how you will connect your Kubernetes application to the Atlas cluster running on a different environment will arise. This is when the Atlas Kubernetes Operator will come into the picture.\n\nThis operator allows you to manage the Atlas resources in the Kubernetes infrastructure.\n\nFor this tutorial, we will deploy the operator on the Elastic Kubernetes Service on the AWS infrastructure.\n\nStep 1: Deploy an EKS cluster using _eksctl_. Follow the documentation, Getting Started with Amazon EKS - eksctl, to deploy the cluster. This step will take some time to deploy the cluster in the AWS.\n\nI created the cluster using the command:\n\n```bash\neksctl create cluster \\\n--name MongoDB-Atlas-Kubernetes-Operator \\\n--version 1.29 \\\n--region ap-south-1 \\\n--nodegroup-name linux-nodes \\\n--node-type t2.2xlarge \\\n--nodes 2\n```\n\nStep 2: Once the EKS cluster is deployed, run the command:\n\n```bash\nkubectl get ns\n```\n\nAnd you should see an output similar to this.\n\n```bash\nNAME STATUS AGE\ndefault Active 18h\nkube-node-lease Active 18h\nkube-public Active 18h\nkube-system Active 18h\n```\n\nStep 3: Register a new Atlas account or log in to your Atlas account.\n\nStep 4: As the quick start tutorial mentioned, you need the API key for the project in your Atlas cluster. You can follow the documentation page if you don\u2019t already have an API key.\n\nStep 5: All files that are being discussed in the following sub-steps are available in the GitHub repository.\n\nIf you are following the above tutorials, the first step is to create the API keys. You need to make sure that while creating the API key for the project, you add the public IPs of the EC2 instances created using the command in Step 1 to the access list.\n\nThis is how the access list should look like:\n\nFigure showing the addition of the Public IPs address to the API key access list.\n\nThe first step mentioned in the Atlas Kubernetes Operator documentation is to apply all the YAML file configurations to all the namespaces created in the Kubernetes environment. 
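\n(Going back to the access list for a moment: if you still need the public IPs of the worker nodes, one quick way to print them is the kubectl query below. It assumes the nodes were given external IPs, which is the case for the eksctl command used in Step 1.)\n\n```bash\n# Print the external (public) IP of every node in the cluster.\n# These are the addresses to add to the Atlas API key access list.\nkubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"ExternalIP\")].address}'\n```\n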
Before applying the YAML files, make sure to export the below variables using:\n\n```bash\nexport VERSION=v2.2.0\nexport ORG_ID=\nexport PUBLIC_API_KEY=\nexport PRIVATE_API_KEY=\n```\n\nThen, apply the command below:\n\n```bash\nkubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-atlas-kubernetes/$VERSION/deploy/all-in-one.yaml\n```\n\nTo let the Kubernetes Operator create the project in Atlas, you must have certain permissions using the API key at the organizational level in the Atlas UI.\n\nYou can create the API key using the Get Started with the Atlas Administration API documentation.\n\nOnce the API key is created, create the secret with the credentials using the below command:\n\n```bash\nkubectl create secret generic mongodb-atlas-operator-api-key \\\n --from-literal=\"orgId=$ORG_ID\" \\\n --from-literal=\"publicApiKey=$PUBLIC_API_KEY\" \\\n --from-literal=\"privateApiKey=$PRIVATE_API_KEY\" \\\n -n mongodb-atlas-system \n```\n\nLabel the secrets created using the below command:\n\n```bash\nkubectl label secret mongodb-atlas-operator-api-key atlas.mongodb.com/type=credentials -n mongodb-atlas-system\n```\n\nThe next step is to create the YAML file to create the project and deployment using the project and deployment YAML files respectively.\n\nPlease ensure the deployment files mention the zone, instance, and region correctly.\n\nThe files are available in the Git repository in the atlas-kubernetes-operator folder.\n\nIn the initial **project.yaml** file, the specified content initiates the creation of a project within your Atlas deployment, naming it as indicated. With the provided YAML configuration, a project named \"atlas-kubernetes-operator\" is established, permitting access from all IP addresses (0.0.0.0/0) within the Access List.\n\nproject.yaml: \n\n```bash\napiVersion: atlas.mongodb.com/v1\nkind: AtlasProject\nmetadata:\n name: project-ako\nspec:\n name: atlas-kubernetes-operator\n projectIpAccessList:\n - cidrBlock: \"0.0.0.0/0\"\n comment: \"Allowing access to database from everywhere (only for Demo!)\"\n```\n\n> **Please note that 0.0.0.0 is not recommended in the production environment. This is just for test purposes.**\n\nThe next file named, **deployment.yaml** would create a new deployment in the project created above with the name specified as cluster0. The YAML also specifies the instance type as M10 in the AP_SOUTH_1 region. Please make sure you use the region close to you.\n\ndeployment.yaml: \n\n```bash\napiVersion: atlas.mongodb.com/v1\nkind: AtlasDeployment\nmetadata:\n name: my-atlas-cluster\nspec:\n projectRef:\n name: project-ako\n deploymentSpec:\n clusterType: REPLICASET\n name: \"cluster0\"\n replicationSpecs:\n - zoneName: AP-Zone\n regionConfigs:\n - electableSpecs:\n instanceSize: M10\n nodeCount: 3\n providerName: AWS\n regionName: AP_SOUTH_1\n priority: 7\n```\n\nThe **user.yaml** file will create the user for your project. 
Before creating the user YAML file, create the secret with the password of your choice for the project.\n\n```bash\nkubectl create secret generic the-user-password --from-literal=\"password=\"\nkubectl label secret the-user-password atlas.mongodb.com/type=credentials\n```\n\nuser.yaml\n\n```bash\napiVersion: atlas.mongodb.com/v1\nkind: AtlasDatabaseUser\nmetadata:\n name: my-database-user\nspec:\n roles:\n - roleName: \"readWriteAnyDatabase\"\n databaseName: \"admin\"\n projectRef:\n name: project-ako\n username: theuser\n passwordSecretRef:\n name: the-user-password\n```\n\nOnce all the YAML are created, apply these YAML files to the default namespace.\n\n```bash\nkubectl apply -f project.yaml\nkubectl apply -f deployment.yaml \nkubectl apply -f user.yaml \n```\n\nAfter this step, you should be able to see the deployment and user created for the project in your Atlas cluster.\n\n## Deploying the Spring Boot application in the cluster\n\nIn this tutorial, we'll be building upon our existing guide found on Developer Center, MongoDB Advanced Aggregations With Spring Boot, and Amazon Corretto.\n\nWe'll utilize the same GitHub repository to create a DockerFile. If you're new to this, we highly recommend following the tutorial first before diving into containerizing the application.\n\nThere are certain steps to be followed to containerize the application.\n\nStep 1: Create a JAR file for the application. This executable JAR will be needed to create the Docker image.\n\nTo create the JAR, do:\n\n```bash\nmvn clean package\n```\n\nand the jar would be stored in the target/ folder.\n\nStep 2: The second step is to create the Dockerfile for the application. A Dockerfile is a text file that contains the information to create the Docker image of the application.\n\nCreate a file named Dockerfile with the following content. This file describes what will run into this container.\n\nStep 3: Build the Docker image. The `docker build` command will read the specifications from the Dockerfile created above.\n\n```bash\n docker build -t mongodb_spring_tutorial:docker_image . \u2013load\n```\n\nStep 4: Once the image is built, you will need to push it to a registry. In this example, we are using Docker Hub. You can create your account by following the documentation.\n\n```bash\ndocker tag mongodb_spring_tutorial:docker_image /mongodb_spring_tutorial\ndocker push /mongodb_spring_tutorial\n```\n\nOnce the Docker image has been pushed into the repo, the last step is to connect your application with the database running on the Atlas Kubernetes Operator.\n\n### Connecting the application with the Atlas Kubernetes Operator\n\nTo make the connection, we need Deployment and Service files. While Deployments manage the lifecycle of pods, ensuring a desired state, Services provide a way for other components to access and communicate with those pods. Together, they form the backbone for managing and deploying applications in Kubernetes.\n\nA Deployment in Kubernetes is a resource object that defines the desired state for your application. It allows you to declaratively manage a set of identical pods. Essentially, it ensures that a specified number of pod replicas are running at any given time.\n\nA deployment file will have the following information. In the above app-deployment.yaml file, the following details are mentioned:\n\n1. **apiVersion**: Specifies the Kubernetes API version\n2. **kind**: Specifies that it is a type of Kubernetes resource, Deployment\n3. 
**metadata**: Contains metadata about the Deployment, including its name\n\nIn the spec section:\n\nThe **replicas** specify the number of instances of the application. The name and image refer to the application image created in the above step and the name of the container that would run the image.\n\nIn the last section, we will specify the environment variable for SPRING_DATA_MONGODB_URI which will pick the value from the connectionStringStandardSrv of the Atlas Kubernetes Operator.\n\nCreate the deployment.yaml file:\n\n```bash\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: spring-app\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: springboot-application\n template:\n metadata:\n labels:\n app: springboot-application\n spec:\n containers:\n - name: spring-app\n image: /mongodb_spring_tutorial\n ports:\n - containerPort: 8080\n env:\n - name: SPRING_DATA_MONGODB_URI\n valueFrom:\n secretKeyRef:\n name: atlas-kubernetes-operator-cluster0-theuser\n key: connectionStringStandardSrv\n - name: SPRING_DATA_MONGODB_DATABASE\n value: sample_supplies\n - name: LOGGING_LEVEL_ORG_SPRINGFRAMEWORK\n value: INFO\n - name: LOGGING_LEVEL_ORG_SPRINGFRAMEWORK_WEB\n value: DEBUG\n```\n\nA Service in Kubernetes is an abstraction that defines a logical set of pods and a policy by which to access them. It enables other components within or outside the Kubernetes cluster to communicate with your application running on pods.\n\n```bash\napiVersion: v1\nkind: Service\nmetadata:\n name: spring-app-service\nspec:\n selector:\n app: spring-app\n ports:\n - protocol: TCP\n port: 8080\n targetPort: 8080\n type: LoadBalancer\n```\n\nYou can then apply those two files to your cluster, and Kubernetes will create all the pods and start the application.\n\n```bash\nkubectl apply -f ./*.yaml\n```\n\nNow, when you do\u2026\n\n```bash\nkubectl get svc\n```\n\n\u2026it will give you the output as below with an external IP link created. This link will be used with the default port to access the RESTful calls.\n\n>In an ideal scenario, the service file is applied with type: ClusterIP but since we need test the application with the API calls, we would be specifying the type as LoadBalancer.\n\nYou can use the external IP allocated with port 8080 and test the APIs.\n\nOr use the following command to store the external address to the `EXTERNAL_IP` variable.\n\n```bash\nEXTERNAL_IP=$(kubectl get svc|grep spring-app-service|awk '{print $4}')\n\necho $EXTERNAL_IP\n```\n\nIt should give you the response as\n\n```bash\na4874d92d36fe4d2cab1ccc679b5fca7-1654035108.ap-south-1.elb.amazonaws.com\n```\n\nBy this time, you should be able to deploy Atlas in the Kubernetes environment and connect with the front-end and back-end applications deployed in the same environment.\n\nLet us test a few REST APIs using the external IP created in the next section.\n\n## Tests\n\nNow that your application is deployed, running in Kubernetes, and exposed to the outside world, you can test it with the following curl commands.\n\n1. Finding sales in London\n2. Finding total sales:\n3. Finding the total quantity of each item\n\nAs we conclude our exploration of containerization in Spring applications, we're poised to delve into Kubernetes and Docker troubleshooting. 
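\n(Before moving on, here is a rough sketch of what those three test calls can look like from a terminal. The endpoint paths are placeholders rather than the exact routes from the aggregation tutorial, so substitute the paths exposed by your own controller.)\n\n```bash\n# Base URL built from the LoadBalancer address captured earlier.\nBASE_URL=\"http://$EXTERNAL_IP:8080\"\n\n# 1. Sales in London (illustrative path)\ncurl \"$BASE_URL/sales/london\"\n\n# 2. Total sales (illustrative path)\ncurl \"$BASE_URL/sales/total\"\n\n# 3. Total quantity of each item (illustrative path)\ncurl \"$BASE_URL/sales/quantity-per-item\"\n```\n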
Let us move into the next section as we uncover common challenges and effective solutions for a smoother deployment experience.\n\n## Common troubleshooting errors in Kubernetes\n\nIn a containerized environment, the path to a successful deployment can sometimes involve multiple factors. To navigate any hiccups along the way, it's wise to turn to certain commands for insights:\n\n- Examine pod status:\n```bash\nkubectl describe pods -n \n\nkubectl get pods -n \n````\n\n- Check node status:\n\n```bash\nkubectl get nodes\n```\n- Dive into pod logs:\n```bash\nkubectl get logs -f -n \n```\n\n- Explore service details:\n```bash\nkubectl get describe svc -n \n```\n\nDuring troubleshooting, encountering errors is not uncommon. Here are a few examples where you might seek additional information:\n\n1. **Image Not Found**: This error occurs when attempting to execute a container with an image that cannot be located. It typically happens if the image hasn't been pulled successfully or isn't available in the specified Docker registry. It's crucial to ensure that the correct image name and tag are used, and if necessary, try pulling the image from the registry locally before running the container to ensure it\u2019s there.\n\n2. **Permission Denied:** Docker containers often operate with restricted privileges, especially for security purposes. If your application requires access to specific resources or directories within the container, it's essential to set appropriate file permissions and configure user/group settings accordingly. Failure to do so can result in permission-denied errors when trying to access these resources.\n\n3. **Port Conflicts**:Running multiple containers on the same host machine, each attempting to use the same host port, can lead to port conflicts. This issue arises when the ports specified in the `docker run` command overlap with ports already in use by other containers or services on the host. To avoid conflicts, ensure that the ports assigned to each container are unique and not already occupied by other processes.\n\n4. **Out of Disk Space**: Docker relies on disk space to store images, containers, and log files. Over time, these files can accumulate and consume a significant amount of disk space, potentially leading to disk space exhaustion. To prevent this, it's advisable to periodically clean up unused images and containers using the `docker system prune` command, which removes dangling images, unused containers, and other disk space-consuming artifacts.\n\n5. **Container Crashes**: Containers may crash due to various reasons, including misconfigurations, application errors, or resource constraints. When a container crashes, it's essential to examine its logs using the `kubectl logs -f ` -n `` command. These logs often contain valuable error messages and diagnostic information that can help identify the underlying cause of the crash and facilitate troubleshooting and resolution.\n\n6. **Docker Build Failures**: Building Docker images can fail due to various reasons, such as syntax errors in the Dockerfile, missing files or dependencies, or network issues during package downloads. It's essential to carefully review the Dockerfile for any syntax errors, ensure that all required files and dependencies are present, and troubleshoot any network connectivity issues that may arise during the build process.\n\n7. **Networking Problems**: Docker containers may rely on network connectivity to communicate with other containers or external services. 
Networking issues, such as incorrect network configuration, firewall rules blocking required ports, or DNS misconfigurations, can cause connectivity problems. It's crucial to verify that the container is attached to the correct network, review firewall settings to ensure they allow necessary traffic, and confirm that DNS settings are correctly configured.\n\n8. **Resource Constraints**: Docker containers may require specific CPU and memory resources to function correctly. Failure to allocate adequate resources can result in performance issues or application failures. When running containers, it's essential to specify resource limits using the `--cpu` and `--memory` flags to ensure that containers have sufficient resources to operate efficiently without overloading the host system.\n\nYou can specify in the resource section of the YAML file as:\n\n```bash\ndocker_container:\n name: my_container\n resources:\n cpu: 2\n memory: 4G\n```\n\n## Conclusion\n\nThroughout this tutorial, we've covered essential aspects of modern application deployment, focusing on containerization, Kubernetes orchestration, and MongoDB management with Atlas Kubernetes Operator. Beginning with the fundamentals of containerization and Docker, we proceeded to understand Kubernetes' role in automating application deployment and management. By deploying Atlas Operator on AWS's EKS, we seamlessly integrated MongoDB into our Kubernetes infrastructure. Additionally, we containerized a Spring Boot application, connecting it to Atlas for database management. Lastly, we addressed common Kubernetes troubleshooting scenarios, equipping you with the skills needed to navigate challenges in cloud-native environments. With this knowledge, you're well-prepared to architect and manage sophisticated cloud-native applications effectively.\n\nTo learn more, please visit the resource, What is Container Orchestration? and reach out with any specific questions.\n\nAs you delve deeper into your exploration and implementation of these concepts within your projects, we encourage you to actively engage with our vibrant MongoDB community forums. Be sure to leverage the wealth of resources available on the MongoDB Developer Center and documentation to enhance your proficiency and finesse your abilities in harnessing the power of MongoDB and its features.\n", "format": "md", "metadata": {"tags": ["MongoDB", "Java", "AWS"], "pageDescription": "Learn how to use Spring application in production using Atlas Kubernetes Operator", "contentType": "Article"}, "title": "MongoDB Orchestration With Spring & Atlas Kubernetes Operator", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/amazon-sagemaker-and-mongodb-vector-search-part-2", "action": "created", "body": "# Part #2: Create Your Model Endpoint With Amazon SageMaker, AWS Lambda, and AWS API Gateway\n\nWelcome to Part 2 of the `Amazon SageMaker + Atlas Vector Search` series. In Part 1, I showed you how to set up an architecture that uses both tools to create embeddings for your data and how to use those to then semantically search through your data.\n\nIn this part of the series, we will look into the actual doing. No more theory! Part 2 will show you how to create the REST service described in the architecture.\n\nThe REST endpoint will serve as the encoder that creates embeddings (vectors) that will then be used in the next part of this series to search through your data semantically. 
The deployment of the model will be handled by Amazon SageMaker, AWS's all-in-one ML service. We will expose this endpoint using AWS Lambda and AWS API Gateway later on to make it available to the server app.\n\n## Amazon SageMaker\n\nAmazon SageMaker is a cloud-based, machine-learning platform that enables developers to build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.\n\n## Getting Started With Amazon SageMaker\n\nAmazon SageMaker JumpStart helps you quickly and easily get started with machine learning. The solutions are fully customizable and support one-click deployment and fine-tuning of more than 150 popular open-source models, such as natural language processing, object detection, and image classification models.\n\nIt includes a number of popular solutions:\n- Extract and analyze data: Automatically extract, process, and analyze documents for more accurate investigation and faster decision-making.\n- Fraud detection: Automate detection of suspicious transactions faster and alert your customers to reduce potential financial loss.\n- Churn prediction: Predict the likelihood of customer churn and improve retention by honing in on likely abandoners and taking remedial actions such as promotional offers.\n- Personalized recommendations: Deliver customized, unique experiences to customers to improve customer satisfaction and grow your business rapidly.\n\n## Let's set up a playground for you to try it out!\n\n> Before we start, make sure you choose a region that is supported for `RStudio` (more on that later) and `JumpStart`. You can check both on the Amazon SageMaker pricing page by checking if your desired region appears in the `On-Demand Pricing` list.\n\nOn the main page of Amazon SageMaker, you'll find the option to `Set up for a single user`. This will set up a domain and a quick-start user.\n\nA QuickSetupDomain is basically just a default configuration so that you can get started deploying models and trying out SageMaker. You can customize it later to your needs.\n\nThe initial setup only has to be done once, but it might take several minutes. When finished, Amazon SageMaker will notify you that the new domain is ready.\n\nAmazon SageMaker Domain supports Amazon SageMaker machine learning (ML) environments and contains the following:\n\n- The domain itself, which holds an AWS EC2 that models will be deployed onto. This inherently contains a list of authorized users and a variety of security, application, policy, and Amazon Virtual Private Cloud (Amazon VPC) configurations.\n- The `UserProfile`, which represents a single user within a domain that you will be working with.\n- A `shared space`, which consists of a shared JupyterServer application and shared directory. All users within the domain have access to the same shared space.\n- An `App`, which represents an application that supports the reading and execution experience of the user\u2019s notebooks, terminals, and consoles.\n\nAfter the creation of the domain and the user, you can launch the SageMaker Studio, which will be your platform to interact with SageMaker, your models, and deployments for this user.\n\nAmazon SageMaker Studio is a web-based, integrated development environment (IDE) for machine learning that lets you build, train, debug, deploy, and monitor your machine learning models.\n\nHere, we\u2019ll go ahead and start with a new JumpStart solution.\n\nAll you need to do to set up your JumpStart solution is to choose a model. 
For this tutorial, we will be using an embedding model called `All MiniLM L6 v2` by Hugging Face.\n\nWhen choosing the model, click on `Deploy` and SageMaker will get everything ready for you.\n\nYou can adjust the endpoint to your needs but for this tutorial, you can totally go with the defaults.\n\nAs soon as the model shows its status as `In service`, everything is ready to be used.\n\nNote that the endpoint name here is `jumpstart-dft-hf-textembedding-all-20240117-062453`. Note down your endpoint name \u2014 you will need it in the next step.\n\n## Using the model to create embeddings\n\nNow that the model is set up and the endpoint ready to be used, we can expose it for our server application.\n\nWe won\u2019t be exposing the SageMaker endpoint directly. Instead, we will be using AWS API Gateway and AWS Lambda.\n\nLet\u2019s first start by creating the lambda function that uses the endpoint to create embeddings.\n\nAWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of Amazon Web Services. It is designed to enable developers to run code without provisioning or managing servers. It executes code in response to events and automatically manages the computing resources required by that code.\n\nIn the main AWS Console, go to `AWS Lambda` and click `Create function`.\n\nChoose to `Author from scratch`, give your function a name (`sageMakerLambda`, for example), and choose the runtime. For this example, we\u2019ll be running on Python.\n\nWhen everything is set correctly, create the function.\n\nThe following code snippet assumes that the lambda function and the Amazon SageMaker endpoint are deployed in the same AWS account. All you have to do is replace `` with your actual endpoint name from the previous section.\n\nNote that the `lambda_handler` returns a status code and a body. It\u2019s ready to be exposed as an endpoint, for using AWS API Gateway.\n\n```\nimport json\nimport boto3\n\nsagemaker_runtime_client = boto3.client(\"sagemaker-runtime\")\n\ndef lambda_handler(event, context):\n try:\n # Extract the query parameter 'query' from the event\n query_param = event.get('queryStringParameters', {}).get('query', '')\n\n if query_param:\n embedding = get_embedding(query_param)\n return {\n 'statusCode': 200,\n 'body': json.dumps({'embedding': embedding})\n }\n else:\n return {\n 'statusCode': 400,\n 'body': json.dumps({'error': 'No query parameter provided'})\n }\n\n except Exception as e:\n return {\n 'statusCode': 500,\n 'body': json.dumps({'error': str(e)})\n }\n\ndef get_embedding(synopsis):\n input_data = {\"text_inputs\": synopsis}\n response = sagemaker_runtime_client.invoke_endpoint(\n EndpointName=\"\",\n Body=json.dumps(input_data),\n ContentType=\"application/json\"\n )\n result = json.loads(response\"Body\"].read().decode())\n embedding = result[\"embedding\"][0]\n return embedding\n```\n\nDon\u2019t forget to click `Deploy`!\n\n![Lambda code editor\n\nOne last thing we need to do before we can use this lambda function is to make sure it actually has permission to execute the SageMaker endpoint. Head to the `Configuration` part of your Lambda function and then to `Permissions`. 
You can just click on the `Role Name` link to get to the associated role in AWS Identity and Access Management (IAM).\n\nIn IAM, you want to choose `Add permissions`.\n\nYou can choose `Attach policies` to attach pre-created policies from the IAM policy list.\n\nFor now, let\u2019s use the `AmazonSageMakerFullAccess`, but keep in mind to select only those permissions that you need for your specific application.\n\n## Exposing your lambda function via AWS API Gateway\n\nNow, let\u2019s head to AWS API Gateway, click `Create API`, and then `Build` on the `REST API`.\n\nChoose to create a new API and name it. In this example, we\u2019re calling it `sageMakerApi`.\n\nThat\u2019s all you have to do for now. The API endpoint type can stay on regional, assuming you created the lambda function in the same region. Hit `Create API`.\n\nFirst, we need to create a new resource.\n\nThe resource path will be `/`. Pick a name like `sageMakerResource`.\n\nNext, you'll get back to your API overview. This time, click `Create method`. We need a GET method that integrates with a lambda function.\n\nCheck the `Lambda proxy integration` and choose the lambda function that you created in the previous section. Then, create the method.\n\nFinally, don\u2019t forget to deploy the API.\n\nChoose a stage. This will influence the URL that we need to use (API Gateway will show you the full URL in a moment). Since we\u2019re still testing, `TEST` might be a good choice.\n\nThis is only a test for a tutorial, but before deploying to production, please also add security layers like API keys. When everything is ready, the `Resources` tab should look something like this.\n\nWhen sending requests to the API Gateway, we will receive the query as a URL query string parameter. The next step is to configure API Gateway and tell it so, and also tell it what to do with it.\nGo to your `Resources`, click on `GET` again, and head to the `Method request` tab. Click `Edit`.\n\nIn the `URL query string parameters` section, you want to add a new query string by giving it a name. We chose `query` here. Set it to `Required` but not cached and save it.\n\nThe new endpoint is created. At this point, we can grab the URL and test it via cURL to see if that part worked fine. You can find the full URL (including stage and endpoint) in the `Stages` tab by opening the stage and endpoint and clicking on `GET`. For this example, it\u2019s `https://4ug2td0e44.execute-api.ap-northeast-2.amazonaws.com/TEST/sageMakerResource`. Your URL should look similar.\n\nUsing the Amazon Cloud Shell or any other terminal, try to execute a cURL request:\n\n```\ncurl -X GET 'https://4ug2td0e44.execute-api.ap-northeast-2.amazonaws.com/TEST/sageMakerResource?query=foo'\n```\n\nIf everything was set up correctly, you should get a result that looks like this (the array contains 384 entries in total):\n\n```\n{\"embedding\": 0.01623343490064144, -0.007662375457584858, 0.01860642433166504, 0.031969036906957626,................... -0.031003709882497787, 0.008777940645813942]}\n```\n\nYour embeddings REST service is ready. Congratulations! Now you can convert your data into a vector with 384 dimensions!\n\nIn the next and final part of the tutorial, we will be looking into using this endpoint to prepare vectors and execute a vector search using MongoDB Atlas.\n\n\u2705 [Sign-up for a free cluster.\n\n\u2705 Already have an AWS account? 
Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\n\u2705 Get help on our Community Forums.\n", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI", "AWS", "Serverless"], "pageDescription": "In this series, we look at how to use Amazon SageMaker and MongoDB Atlas Vector Search to semantically search your data.", "contentType": "Tutorial"}, "title": "Part #2: Create Your Model Endpoint With Amazon SageMaker, AWS Lambda, and AWS API Gateway", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/connectors/deploying-kubernetes-operator", "action": "created", "body": "# Deploying the MongoDB Enterprise Kubernetes Operator on Google Cloud\n\nThis article is part of a three-parts series on deploying MongoDB across multiple Kubernetes clusters using the operators.\n\n- Deploying the MongoDB Enterprise Kubernetes Operator on Google Cloud\n\n- Mastering MongoDB Ops Manager\n\n- Deploying MongoDB Across Multiple Kubernetes Clusters With MongoDBMulti\n\nDeploying and managing MongoDB on Kubernetes can be a daunting task. It requires creating and configuring various Kubernetes resources, such as persistent volumes, services, and deployments, which can be time-consuming and require a deep understanding of both Kubernetes and MongoDB products. Furthermore, tasks such as scaling, backups, and upgrades must be handled manually, which can be complex and error-prone. This can impact the reliability and availability of your MongoDB deployment and may require frequent manual intervention to keep it running smoothly. Additionally, it can be hard to ensure that your MongoDB deployment is running in the desired state and is able to recover automatically from failures.\n\nFortunately, MongoDB offers operators, which are software extensions to the Kubernetes API that use custom resources to manage applications and their components. The MongoDB Operator translates human knowledge of creating a MongoDB instance into a scalable, repeatable, and standardized method, and leverages Kubernetes features to operate MongoDB for you. This makes it easier to deploy and manage MongoDB on Kubernetes, providing advanced features and functionality for running MongoDB in cloud-native environments.\n\nThere are three main Kubernetes operators available for deploying and managing MongoDB smoothly and efficiently in Kubernetes environments:\n\n- The MongoDB Community Kubernetes Operator is an open-source operator that is available for free and can be used to deploy and manage MongoDB Replica Set on any Kubernetes cluster. It provides basic functionality for deploying and managing MongoDB but does not include some of the more advanced features available in the Enterprise and Atlas operators.\n\n- The MongoDB Enterprise Kubernetes Operator is a commercial Kubernetes operator included with the MongoDB Enterprise subscription. It allows you to easily deploy and manage any type of MongoDB deployment (standalone, replica set, sharded cluster) on Kubernetes, providing advanced features and functionality for deploying and managing MongoDB in cloud-native environments.\n\n- The MongoDB Atlas Kubernetes Operator is an operator that is available as part of the Atlas service. 
It allows you to quickly deploy and manage MongoDB on the Atlas cloud platform, providing features such as automatic provisioning and scaling of MongoDB clusters, integration with Atlas features and services, and automatic backups and restores. You can learn more about this operator in our blog post on application deployment in Kubernetes.\n\nThis article will focus on the Enterprise Operator. The MongoDB Enterprise Kubernetes Operator seamlessly integrates with other MongoDB Enterprise features and services, such as MongoDB Ops Manager (which can also run on Kubernetes) and MongoDB Cloud Manager. This allows you to easily monitor, back up, upgrade, and manage your MongoDB deployments from a single, centralized location, and provides access to a range of tools and services for managing, securing, and optimizing your deployment.\n\n## MongoDB Enterprise Kubernetes Operator\n\nThe MongoDB Enterprise Kubernetes Operator automates the process of creating and managing MongoDB instances in a scalable, repeatable, and standardized manner. It uses the Kubernetes API and tools to handle the lifecycle events of a MongoDB cluster, including provisioning storage and computing resources, configuring network connections, setting up users, and making changes to these settings as needed. This helps to ease the burden of manually configuring and managing stateful applications, such as databases, within the Kubernetes environment.\n\n## Kubernetes Custom Resource Definitions\n\nKubernetes CRDs (Custom Resource Definitions) is a feature in Kubernetes that allows users to create and manage custom resources in their Kubernetes clusters. Custom resources are extensions of the Kubernetes API that allow users to define their own object types and associated behaviors. With CRDs, you can create custom resources that behave like built-in Kubernetes resources, such as StatefulSets, Deployments, Pods, and Services, and manage them using the same tools and interfaces. This allows you to extend the functionality of Kubernetes and tailor it to their specific needs and requirements.\n\nThe MongoDB Enterprise Operator currently provides the following custom resources for deploying MongoDB on Kubernetes:\n\n- MongoDBOpsManager Custom Resource\n\n- MongoDB Custom Resource\n\n - Standalone\n\n - ReplicaSet\n\n - ShardedCluster\n\n- MongoDBUser Custom Resource\n\n- MongoDBMulti\n\nExample of Ops Manager and MongoDB Custom Resources on Kubernetes\n\n## Installing and configuring Enterprise Kubernetes Operator\n\nFor this tutorial, we will need the following tools:\u00a0\n\n- gcloud\u00a0\n\n- gke-cloud-auth-plugin\n\n- Helm\n\n- kubectl\n\n- kubectx\n\n- Git\n\n## GKE Kubernetes cluster creation\u00a0\n\nTo start, let's create a Kubernetes cluster in a new project. We will be using GKE Kubernetes. I use this script to create the cluster. 
The cluster will have four worker nodes and act as Ops Manager and MongoDB Enterprise Operators Kubernetes Cluster.\n\n```bash\nCLUSTER_NAME=master-operator\nZONE=us-south1-a\nK8S_VERSION=1.23\nMACHINE=n2-standard-2\ngcloud container clusters create \"${CLUSTER_NAME}\" \\\n --zone \"${ZONE}\" \\\n --machine-type \"${MACHINE}\" --cluster-version=\"${K8S_VERSION}\" \\\n --disk-type=pd-standard --num-nodes 4\n```\n\nNow that the cluster has been created, we need to obtain the credentials.\n\n```bash\ngcloud container clusters get-credentials \"${CLUSTER_NAME}\" \\\n --zone \"${ZONE}\"\n```\n\nDisplay the newly created cluster.\n\n```bash\ngcloud container clusters list\n\nNAME\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 LOCATION \u00a0 \u00a0 \u00a0 MASTER_VERSION\u00a0 \u00a0 NUM_NODES\u00a0 STATUS\nmaster-operator \u00a0 \u00a0 us-south1-a\u00a0 \u00a0 1.23.14-gke.1800\u00a0 \u00a0 \u00a0 4\u00a0 \u00a0 \u00a0 RUNNING\n```\n\nWe can also display Kubernetes full cluster name using `kubectx`.\n\n```bash\nkubectx\n```\n\nYou should see your cluster listed here. Make sure your context is set to master cluster.\n\n```bash\nkubectx $(kubectx | grep \"master-operator\" | awk '{print $1}')\n```\n\nWe are able to start MongoDB Kubernetes Operator installation on our newly created Kubernetes cluster!\u00a0\n\n## Enterprise Kubernetes Operator\u00a0\n\nWe can install the MongoDB Enterprise Operator with a single line Helm command. The first step is to\u00a0 add the MongoDB Helm Charts for Kubernetes repository to Helm.\n\n```bash\nhelm repo add mongodb https://mongodb.github.io/helm-charts\n```\n\nI want to create the operator in a separate, dedicated Kubernetes namespace (the operator uses `default` namespace by default). This will allow me to isolate the operator and any resources it creates from other resources in my cluster. The following command will install the CRDs and the Enterprise Operator in the `mongodb-operator`namespace. The operator will be watching only the `mongodb-operator` namespace. You can read more about setting up the operator to watch more namespaces in the official MongoDB documentation.\n\nStart by creating the `mongodb-operator`namespace.\n\n```bash\nNAMESPACE=mongodb-operator\nkubectl create ns \"${NAMESPACE}\"\n```\n\nInstall the MongoDB Kubernetes Operator and set it to watch only the `mongodb-operator` namespace.\n\n```bash\nHELM_CHART_VERSION=1.16.3\nhelm install enterprise-operator mongodb/enterprise-operator \\\n --namespace \"${NAMESPACE}\" \\\n --version=\"${HELM_CHART_VERSION}\" \\\n --set operator.watchNamespace=\"${NAMESPACE}\"\n```\n\nThe namespace has been created and the operator is running! 
You can see this by listing the pods in the newly created namespace.\n\n```bash\nkubectl get ns\n\nNAME \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 STATUS \u00a0 AGE\ndefault\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Active \u00a0 4m9s\nkube-node-lease\u00a0 \u00a0 Active \u00a0 4m11s\nkube-public\u00a0 \u00a0 \u00a0 \u00a0 Active \u00a0 4m12s\nkube-system\u00a0 \u00a0 \u00a0 \u00a0 Active \u00a0 4m12s\nmongodb-operator \u00a0 Active \u00a0 75s\n```\n\n```bash\nkubectl get po -n \"${NAMESPACE}\"\n\nNAME\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 READY \u00a0 STATUS \u00a0 RESTARTS \u00a0 AGE\nmongodb-enterprise-operator-649bbdddf5 \u00a0 1/1\u00a0 \u00a0 Running \u00a0 0 \u00a0 \u00a0 \u00a0 \u00a0 7m9s\n```\n\nYou can see that the helm chart is running with this command.\n\n```bash\nhelm list --namespace \"${NAMESPACE}\"\n\nNAME\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 NAMESPACE \u00a0 \u00a0 REVISION \u00a0 \u00a0 \u00a0 VERSION\nenterprise-operator mongodb-operator 1 deployed enterprise-operator-1.17.2\n```\n\n### Verify the installation\n\nYou can verify that the installation was successful and is currently running with the following command.\n\n```bash\nhelm get manifest enterprise-operator --namespace \"${NAMESPACE}\"\n```\n\nLet's display Custom Resource Definitions installed in the step above in the watched namespace.\n\n```bash\nkubectl -n \"${NAMESPACE}\" get crd | grep -E '^(mongo|ops)'\n\nmongodb.mongodb.com\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 2022-12-30T16:17:07Z\nmongodbmulti.mongodb.com \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 2022-12-30T16:17:08Z\nmongodbusers.mongodb.com \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 2022-12-30T16:17:09Z\nopsmanagers.mongodb.com\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 2022-12-30T16:17:09Z\n```\n\nAll required service accounts has been created in watched namespace.\n\n```bash\nkubectl -n \"${NAMESPACE}\" get sa | grep -E '^(mongo)'\n\nmongodb-enterprise-appdb \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 1 \u00a0 \u00a0 \u00a0 \u00a0 36s\nmongodb-enterprise-database-pods \u00a0 1 \u00a0 \u00a0 \u00a0 \u00a0 36s\nmongodb-enterprise-operator\u00a0 \u00a0 \u00a0 \u00a0 1 \u00a0 \u00a0 \u00a0 \u00a0 36s\nmongodb-enterprise-ops-manager \u00a0 \u00a0 1 \u00a0 \u00a0 \u00a0 \u00a0 36s\n```\n\nValidate if the Kubernetes Operator was installed correctly by running the following command and verify the output.\n\n```bash\nkubectl describe deployments mongodb-enterprise-operator -n \\\n \"${NAMESPACE}\"\n```\n\nFinally, double-check watched namespaces.\n\n```bash\nkubectl describe deploy mongodb-enterprise-operator -n \"${NAMESPACE}\" | grep WATCH\n\n\u00a0 \u00a0WATCH_NAMESPACE: \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 mongodb-operator\n```\n\nThe MongoDB Enterprise Operator is now running in your GKE cluster.\n\n## MongoDB Atlas Kubernetes Operator\n\nIt's worth mentioning another operator here --- a new service that integrates Atlas resources with your Kubernetes cluster. Atlas can be deployed in multi-cloud environments including Google Cloud. 
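\n(If you would like to try it alongside the Enterprise Operator installed above, the Atlas operator is typically installed from the same MongoDB Helm repository. The chart and namespace names below are the commonly published ones, so double-check them against the current documentation before running the command.)\n\n```bash\n# Install the Atlas Kubernetes Operator from the MongoDB Helm repository\n# added earlier in this tutorial; the chart name may change between releases.\nhelm install atlas-operator mongodb/mongodb-atlas-operator \\\n  --namespace atlas-operator \\\n  --create-namespace\n```\n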
The Atlas Kubernetes Operator allows you to deploy and manage cloud-native applications that require data services in a single control plane with secure enterprise platform integration.\n\nThis operator is responsible for managing resources in Atlas using Kubernetes custom resources, ensuring that the configurations of projects, database deployments, and database users in Atlas are consistent with each other. The Atlas Kubernetes Operator uses the `AtlasProject`, `AtlasDeployment`, and `AtlasDatabaseUser` Custom Resources that you create in your Kubernetes cluster to manage resources in Atlas.\n\nThese custom resources allow you to define and configure the desired state of your projects, database deployments, and database users in Atlas. To learn more, head over to our blog post on application deployment in Kubernetes with the MongoDB Atlas Operator.\n\n## Conclusion\n\nUpon the successful installation of the Kubernetes Operator, we are able to use the capabilities of the MongoDB Enterprise Kubernetes Operator to run MongoDB objects on our Kubernetes cluster. The Operator enables easy deploy of the following applications into Kubernetes clusters:\n\n- MongoDB --- replica sets, sharded clusters, and standalones --- with authentication, TLS, and many more options.\n\n- Ops Manager --- enterprise management, monitoring, and backup platform for MongoDB. The Operator can install and manage Ops Manager in Kubernetes for you. Ops Manager can manage MongoDB instances both inside and outside Kubernetes. Installing Ops Manager is covered in the second article of the series.\n\n- MongoMulti --- Multi-Kubernetes-cluster deployments allow you to add MongoDB instances in global clusters that span multiple geographic regions for increased availability and global distribution of data. This is covered in the final part of this series.\n\nWant to see the MongoDB Enterprise Kubernetes Operator in action and discover all the benefits it can bring to your Kubernetes deployment? Continue reading the next blog of this series and we'll show you how to best utilize the Operator for your needs", "format": "md", "metadata": {"tags": ["Connectors", "Kubernetes"], "pageDescription": "Learn how to deploy the MongoDB Enterprise Kubernetes Operator in this tutorial.", "contentType": "Tutorial"}, "title": "Deploying the MongoDB Enterprise Kubernetes Operator on Google Cloud", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/mongodb-atlas-terraform-database-users-vault", "action": "created", "body": "# MongoDB Atlas With Terraform: Database Users and Vault\n\nIn this tutorial, I will show how to create a user for the MongoDB database in Atlas using Terraform and how to store this credential securely in HashiCorp Vault. We saw in the previous article, MongoDB Atlas With Terraform - Cluster and Backup Policies, how to create a cluster with configured backup policies. Now, we will go ahead and create our first user. If you haven't seen the previous articles, I suggest you look to understand how to get started.\n\nThis article is for anyone who intends to use or already uses infrastructure as code (IaC) on the MongoDB Atlas platform or wants to learn more about it.\n\nEverything we do here is contained in the provider/resource documentation: \n\n - mongodbatlas_database_user\n- vault_kv_secret_v2\n\n> Note: We will not use a backend file. 
However, for productive implementations, it is extremely important and safer to store the state file in a remote location such as S3, GCS, Azurerm, etc.\n\n## Creating a User\nAt this point, we will create our first user using Terraform in MongoDB Atlas and store the URI to connect to my cluster in HashiCorp Vault. For those unfamiliar, HashiCorp Vault is a secrets management tool that allows you to securely store, access, and manage sensitive credentials such as passwords, API keys, certificates, and more. It is designed to help organizations protect their data and infrastructure in complex, distributed IT environments. In it, we will store the connection URI of the user that will be created with the cluster we created in the last article.\n\nBefore we begin, make sure that all the prerequisites mentioned in the previous article are properly configured: Install Terraform, create an API key in MongoDB Atlas, and set up a project and a cluster in Atlas. These steps are essential to ensure the success of creating your database user.\n\n### Configuring HashiCorp Vault to run on Docker\nThe first step is to run HashiCorp Vault so that we can test our module. It is possible to run Vault on Docker Local. If you don't have Docker installed, you can download it. After downloading Docker, we will download the image we want to run \u2014 in this case, from Vault. To do this, we will execute a command in the terminal `docker pull vault:1.13.3` or download using Docker Desktop.\n\n## Creating the Terraform version file\nThe version file continues to have the same purpose, as mentioned in other articles, but we will add the version of the Vault provider as something new.\n\n```\nterraform {\n required_version = \">= 0.12\"\n required_providers {\n mongodbatlas = {\n source = \"mongodb/mongodbatlas\"\n version = \"1.14.0\"\n }\n vault = {\n source = \"hashicorp/vault\"\n version = \"4.0.0\"\n }\n }\n}\n```\n### Defining the database user and vault resource\nAfter configuring the version file and establishing the Terraform and provider versions, the next step is to define the user resource in MongoDB Atlas. This is done by creating a .tf file \u2014 for example, main.tf \u2014 where we will create our module. 
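\n(One quick aside before the module itself: pulling the image, as described earlier, does not start Vault. A minimal local dev-mode container can be started as shown below; the root token value and port mapping are only illustrative choices for local testing and should never be used in production.)\n\n```bash\n# Start Vault locally in dev mode (in-memory storage, automatically unsealed).\ndocker run -d --name vault --cap-add=IPC_LOCK \\\n  -p 8200:8200 \\\n  -e 'VAULT_DEV_ROOT_TOKEN_ID=dev-only-token' \\\n  vault:1.13.3\n\n# Point the Vault CLI and the Terraform provider at the local instance.\nexport VAULT_ADDR='http://127.0.0.1:8200'\nexport VAULT_TOKEN='dev-only-token'\n```\n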
As we are going to make a module that will be reusable, we will use variables and default values so that other calls can create users with different permissions, without having to write a new module.\n\n```\n# ------------------------------------------------------------------------------\n# RANDOM PASSWORD\n# ------------------------------------------------------------------------------\nresource \"random_password\" \"default\" {\n length = var.password_length\n special = false\n}\n\n# ------------------------------------------------------------------------------\n# DATABASE USER\n# ------------------------------------------------------------------------------\nresource \"mongodbatlas_database_user\" \"default\" {\n project_id = data.mongodbatlas_project.default.id\n username = var.username\n password = random_password.default.result\n auth_database_name = var.auth_database_name\n\n dynamic \"roles\" {\n for_each = var.roles\n content {\n role_name = try(roles.value\"role_name\"], null)\n database_name = try(roles.value[\"database_name\"], null)\n collection_name = try(roles.value[\"collection_name\"], null)\n }\n }\n\n dynamic \"scopes\" {\n for_each = var.scope\n content {\n name = scopes.value[\"name\"]\n type = scopes.value[\"type\"]\n }\n }\n\n dynamic \"labels\" {\n for_each = local.tags\n content {\n key = labels.key\n value = labels.value\n }\n }\n}\n\nresource \"vault_kv_secret_v2\" \"default\" {\n mount = var.vault_mount\n name = var.secret_name\n data_json = jsonencode(local.secret)\n}\n```\n\nAt the beginning of the file, we have the random_password resource that is used to generate a random password for our user. In the mongodbatlas_database_user resource, we will specify our user details. We are placing some values as variables as done in other articles, such as name and auth_database_name with a default value of admin. Below, we create three dynamic blocks: roles, scopes, and labels. For roles, it is a list of maps that can contain the name of the role (read, readWrite, or some other), the database_name, and the collection_name. These values can be optional if you create a user with atlasAdmin permission, as in this case, it does not. It is necessary to specify a database or collection, or if you wanted, to specify only the database and not a specific collection. We will do an example. For the scopes block, the type is a DATA_LAKE or a CLUSTER. In our case, we will specify a cluster, which is the name of our created cluster, the demo cluster. And the labels serve as tags for our user.\n\nFinally, we define the vault_kv_secret_v2 resource that will create a secret in our Vault. It receives the mount where it will be created and the name of the secret. The data_json is the value of the secret; we are creating it in the locals.tf file that we will evaluate below. 
It is a JSON value \u2014 that is why we are encoding it.\n\nIn the variable.tf file, we create variables with default values:\n```\nvariable \"project_name\" {\n description = \"The name of the Atlas project\"\n type = string\n}\n\nvariable \"cluster_name\" {\n description = \"The name of the Atlas cluster\"\n type = string\n}\n\nvariable \"password_length\" {\n description = \"The length of the password\"\n type = number\n default = 20\n}\n\nvariable \"username\" {\n description = \"The username of the database user\"\n type = string\n}\n\nvariable \"auth_database_name\" {\n description = \"The name of the database in which the user is created\"\n type = string\n default = \"admin\"\n}\n\nvariable \"roles\" {\n description = < Note: Remember to export the environment variables with the public and private key.\n\n```terraform\nexport MONGODB_ATLAS_PUBLIC_KEY=\"your_public_key\"\nexport MONGODB_ATLAS_PRIVATE_KEY=your_private_key\"\n```\n\nNow, we run init and then plan, as in previous articles.\n\nWe assess that our plan is exactly what we expect and run the apply to create it.\n\nWhen running the `terraform apply` command, you will be prompted for approval with `yes` or `no`. Type `yes`.\n\nNow, let's look in Atlas to see if the user was created successfully...\n\n![User displayed in database access][6]\n\n![Access permissions displayed][7]\n\nLet's also look in the Vault to see if our secret was created.\n\n![MongoDB secret URI][8]\n\nIt was created successfully! Now, let's test if the URI is working perfectly.\n\nThis is the format of the URI that is generated:\n`mongosh \"mongodb+srv://usr_myapp:@/admin?retryWrites=true&majority&readPreference=secondaryPreferred\"`\n\n![Mongosh login ][9]\n\nWe connect and will make an insertion to evaluate whether the permissions are adequate \u2014 initially, in db1 in collection1.\n\n![Command to insert to db and acknowledged][10]\n\nSuccess! Now, in db3, make sure it will not have permission in another database.\n\n![Access denied to unauthroized collection][11]\nExcellent \u2014 permission denied, as expected.\n\nWe have reached the end of this series of articles about MongoDB. 
I hope they were enlightening and useful for you!\n\nTo learn more about MongoDB and various tools, I invite you to visit the [Developer Center to read the other articles.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3adf134a1cc654f8/661cefe94c473591d2ee4ca7/image2.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt01b4534800d306c0/661cefe912f2752a7aeff578/image8.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltabb003cbf7efb6fa/661cefe936f462858244ec50/image1.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2c34530c41490c28/661cefe90aca6b12ed3273b3/image7.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt867d41655e363848/661cefe931ff3a1d35a41344/image9.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcdaf7406e85f79d5/661cefe936f462543444ec54/image3.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbb0d0b37cd3e7e23/661cefe91c390d5d3c98ec3d/image10.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0dc4c9ad575c4118/661cefe9ba18470cf69b8c14/image6.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc6c61799f656701f/661cf85d4c4735186bee4ce7/image5.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdd01eaae2a3d9d24/661cefe936f462254644ec58/image11.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt05fe248cb479b18a/661cf85d4c47359b89ee4ce5/image4.png", "format": "md", "metadata": {"tags": ["Atlas", "Terraform"], "pageDescription": "Learn how to create a user for MongoDB and secure their credentials securely in Hashicorp Vault.", "contentType": "Tutorial"}, "title": "MongoDB Atlas With Terraform: Database Users and Vault", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-one-click-deployment-integ", "action": "created", "body": "# Single Click to Success: Deploying on Netlify, Vercel, Heroku, and Render with Atlas\n\nMongoDB One-Click Starters are pre-configured project templates tailored for specific development stacks, designed to be deployed with just a few clicks. The primary purpose of these starters is to streamline the process of setting up new projects by providing a battle-tested structure that includes MongoDB Atlas as the database.\n\nBy utilizing MongoDB One-Click Starters, developers can significantly speed up project setup, reduce configuration errors, and promote best practices in using MongoDB. These starters eliminate the need to start from scratch or spend time configuring the database, allowing developers to focus more on the core features of their applications.\n\nIn this document, we will cover detailed insights into four specific MongoDB One-Click Starters:\n\n1. Netlify MongoDB Starter\n1. Vercel MongoDB Next.js FastAPI Starter\n1. Heroku MERN Atlas Starter\n1. Render MERN Atlas Starter\n\nFor each starter, we will provide a single-click deploy button as well as information on how to deploy and effectively use that starter to kickstart your projects efficiently.\n\n## Netlify MongoDB Starter\n\n \n\n--------------------------------------------------------------------------------\n\nThe Netlify MongoDB Starter is a template specifically designed for projects that intend to utilize MongoDB paired with Netlify, particularly focusing on JAMstack applications. 
This starter template comes equipped with key features that streamline the development process and enhance the functionality of applications built on this stack.\n\n**Frameworks**:\n- Next.js\n- React\n\n**Key features**:\n**Pre-configured environment for serverless functions**: The starter provides a seamless environment setup for serverless functions, enabling developers to create dynamic functionalities without the hassle of server management.\n\n**Integrated MongoDB connection**: With an integrated MongoDB connection, developers can easily leverage the powerful features of MongoDB for storing and managing data within their applications.\n\n**Ideal use cases**:\n\nThe Netlify MongoDB Starter is ideal for the following scenarios:\n\n**Rapid prototyping**: Developers looking to quickly prototype web applications that require a backend database can benefit from the pre-configured setup of this starter template.\n\n**Full-fledged applications with minimal server management**: For projects aiming to build comprehensive applications with minimal server management overhead, the Netlify MongoDB Starter offers a robust foundation.\n\n### Deployment guide\n\nTo deploy the Netlify MongoDB Starter, follow these steps:\n\n**Clone the GitHub repository**:\nClick the \u201cDeploy to Netlify\u201d button or clone the repository from Netlify MongoDB Starter GitHub repository to your local machine using Git.\n\n**Setting up environment variables for MongoDB connection**:\nWithin the cloned repository, set up the necessary environment variables to establish a connection with your MongoDB database.\n\n### Exploring and customizing the Starter:\nTo explore and modify the Netlify MongoDB Starter for custom use, consider the following tips:\n\n**Directory structure**: Familiarize yourself with the directory structure of the starter to understand the organization of files and components.\n\n**Netlify functions**: Explore the pre-configured serverless functions and customize them to suit your application's requirements.\n\n## Vercel MongoDB Next FastAPI Starter\n\n \n \n\n--------------------------------------------------------------------------------\n\nThe Vercel MongoDB Next.js FastAPI Starter is a unique combination designed for developers who seek a powerful setup to effectively utilize MongoDB in applications requiring both Next.js for frontend development and FastAPI for backend API services, all while being hosted on Vercel. This starter kit offers a seamless integration between Next.js and FastAPI, enabling developers to build web applications with a dynamic front end and a robust backend API.\n\n**Frameworks**:\n- Next.js\n- React\n- FastAPI\n\n**Key features**:\n\n**Integration**: The starter provides a smooth integration between Next.js and FastAPI, allowing developers to work on the front end and back end seamlessly.\n\n**Database**: It leverages MongoDB Atlas as the database solution, offering a reliable and scalable option for storing application data.\n\n**Deployment**: Easy deployment on Vercel provides developers with a hassle-free process to host their applications and make them accessible on the web.\n\n**Ideal Use Cases**:\n\nThe Vercel MongoDB Next FastAPI Starter is ideal for developers looking to build modern web applications that require a dynamic front end powered by Next.js and a powerful backend API using FastAPI. 
Use cases include building AI applications, e-commerce platforms, content management systems, or any application requiring real-time data updates and user interactions.\n\n### Step-by-step deployment guide\n\n**Use starter kit**: Click \u201cDeploy\u201d or clone or download the starter kit from the GitHub repository\n\nConfiguration:\nConfigure MongoDB Atlas: Set up a database cluster on MongoDB Atlas and obtain the connection string.\nVercel setup: Create an account on Vercel and install the Vercel CLI for deployment.\n\n**Environment setup**:\nCreate a `.env` file in the project root to store environment variables like the MongoDB connection string.\nConfigure the necessary environment variables in the `.env` file.\n\n**Deployment**:\nUse the Vercel CLI to deploy the project to Vercel by running the command after authentication.\nFollow the prompts to deploy the application on Vercel.\n\n**Customizations**:\nFor specific application needs, developers can customize the starter kit by:\n- Adding additional features to the front end using Next.js components and libraries.\n- Extending the backend API functionality by adding more endpoints and services in FastAPI.\n- Integrating other third-party services or databases to suit the project requirements.\n\nBy leveraging the flexibility and capabilities of the Vercel MongoDB Next FastAPI Starter, developers can efficiently create and deploy modern web applications with a well-integrated frontend and backend system that utilizes MongoDB for data management.\n\n## Heroku MERN Atlas Starter\n\n \n\n--------------------------------------------------------------------------------\n\nThe Heroku MERN Atlas Starter is meticulously designed for developers looking to effortlessly deploy MERN stack applications, which combine MongoDB, Express.js, React, and Node.js, on the Heroku platform. This starter kit boasts key features that simplify the deployment process, including seamless Heroku integration, pre-configured connectivity to MongoDB Atlas, and a structured scaffolding for implementing CRUD (Create, Read, Update, Delete) operations.\n\nIdeal for projects requiring a robust and versatile technology stack spanning both client-side and server-side components, the Heroku MERN Atlas Starter is best suited for building scalable web applications. By leveraging the functionalities provided within this starter kit, developers can expedite the development process and focus on crafting innovative solutions rather than getting bogged down by deployment complexities.\n\n### Deployment Guide\nTo begin utilizing the Heroku MERN Atlas Starter, developers can click the \u201cDeploy to Heroku\u201d button or first clone the project repository from GitHub using the Heroku MERN Atlas starter repository.Subsequently, configuring Heroku and MongoDB details is a straightforward process, enabling developers to seamlessly set up their deployment environment.\n\nUpon completion of the setup steps, deploying and running the application on Heroku becomes a breeze. Developers can follow a structured deployment guide provided within the starter kit to ensure a smooth transition from development to the production environment. 
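If you prefer the command line to the one-click button, the typical Heroku CLI flow looks roughly like the sketch below. The app name and the `MONGODB_URI` config variable are placeholders for illustration; check the starter's README for the exact variable name the project expects.

```bash
# Clone the starter you want to deploy, then create a Heroku app for it.
git clone <your-fork-of-the-heroku-mern-atlas-starter>
cd <starter-directory>
heroku create my-mern-atlas-app

# Point the app at your Atlas cluster (the variable name may differ per starter).
heroku config:set MONGODB_URI="your_atlas_connection_string"

# Deploy and open the running app.
git push heroku main
heroku open
```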
It is recommended that readers explore the source code of the Heroku MERN Atlas Starter to foster a deeper understanding of the implementation details and to tailor the starter kit to their specific project requirements.\n\nEmbark on your journey with the Heroku MERN Atlas Starter today to experience a streamlined deployment process and unleash the full potential of MERN stack applications.\n\n## Render MERN Atlas Starter\n\n \n\n--------------------------------------------------------------------------------\n\nRender MERN Atlas Starter is a specialized variant tailored for developers who prefer leveraging Render's platform for hosting MERN stack applications. This starter pack is designed to simplify and streamline the process of setting up a full-stack application on Render, with integrated support for MongoDB Atlas, a popular database service offering flexibility and scalability.\n\n**Key Features**:\n**Automatic deployments**: It facilitates seamless deployments directly from GitHub repositories, ensuring efficient workflow automation.\n**Free SSL certificates**: It comes with built-in support for SSL certificates, guaranteeing secure communication between the application and the users.\n**Easy scaling options**: Render.com provides hassle-free scalability options, allowing applications to adapt to varying levels of demand effortlessly.\n\n**Use cases**:\n\nRender MERN Atlas Starter is especially beneficial for projects that require straightforward deployment and easy scaling capabilities. It is ideal for applications where rapid development cycles and quick scaling are essential, such as prototyping new ideas, building MVPs, or deploying small- to medium-sized web applications.\n\n## Deployment guide\n\nTo deploy the Render MERN Atlas Starter on Render, follow these steps:\n\n**Setting up MongoDB Atlas Database**: Create a MongoDB Atlas account and configure a new database instance according to your application's requirements.\n\n**Linking project to Render from GitHub**: Click \u201cDeploy to Render\u201d or share the GitHub repository link containing your MERN stack application code with Render. This enables Render to automatically fetch code updates for deployments.\n\n**Configuring deployment settings**: On Render, specify the deployment settings, including the environment variables, build commands, and other configurations relevant to your application.\n\nFeel free to use the repository link for the Render MERN Atlas Starter.\n\nWe encourage developers to experiment with the Render MERN Atlas Starter to explore its architecture and customization possibilities fully. By leveraging this starter pack, developers can quickly launch robust MERN stack applications on Render and harness the benefits of its deployment and scaling features.\n\n## Conclusion\n\nIn summary, the MongoDB One-Click Starters provide an efficient pathway for developers to rapidly deploy and integrate MongoDB into various application environments. Whether you\u2019re working with Netlify, Vercel, Heroku, or Render, these starters offer a streamlined setup process, pre-configured features, and seamless MongoDB Atlas integration. By leveraging these starters, developers can focus more on building robust applications rather than the intricacies of deployment and configuration. Embrace these one-click solutions to enhance your development workflow and bring your MongoDB projects to life with ease.\n\nReady to elevate your development experience? 
Dive into the world of MongoDB One-Click Starters today and unleash the full potential of your projects, register to Atlas and start building today! \n\nHave questions or want to engage with our community, visit MongoDB community.\n", "format": "md", "metadata": {"tags": ["Atlas", "Python", "JavaScript", "Next.js", "Vercel", "Netlify"], "pageDescription": "Explore the 'MongoDB One-Click Starters: A Comprehensive Guide' for an in-depth look at deploying MongoDB with Netlify, Vercel, Heroku, and Render. This guide covers essential features, ideal use cases, and step-by-step deployment instructions to kickstart your MongoDB projects.", "contentType": "Quickstart"}, "title": "Single Click to Success: Deploying on Netlify, Vercel, Heroku, and Render with Atlas", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/go/interact-aws-lambda-function-go", "action": "created", "body": "# Interact with MongoDB in an AWS Lambda Function Using Go\n\nIf you're a Go developer and you're looking to go serverless, AWS Lambda is a solid choice that will get you up and running in no time. But what happens when you need to connect to your database? With serverless functions, also known as functions as a service (FaaS), you can never be sure about the uptime of your function or how it has chosen to scale automatically with demand. For this reason, concurrent connections to your database, which aren't infinite, happen a little differently. In other words, we want to be efficient in how connections and interactions to the database are made.\n\nIn this tutorial, we'll see how to create a serverless function using the Go programming language and that function will connect to and query MongoDB Atlas in an efficient manner.\n\n## The prerequisites\n\nTo narrow the scope of this particular tutorial, there are a few prerequisites that must be met prior to starting:\n\n- A MongoDB Atlas cluster with network access and user roles already configured.\nAlready have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n- The sample MongoDB Atlas dataset loaded.\n- Knowledge of the Go programming language.\n- An Amazon Web Services (AWS) account with a basic understanding of AWS Lambda.\n\nWe won't go through the process of deploying a MongoDB Atlas cluster in this tutorial, including the configuration of network allow lists or users. As long as AWS has access through a VPC or global IP allow and a user that can read from the sample databases, you'll be fine.\n\nIf you need help getting started with MongoDB Atlas, check out this tutorial on the subject.\n\nThe point of this tutorial is not to explore the ins and outs of AWS Lambda, but instead see how to include MongoDB in our workflow. For this reason, you should have some knowledge of AWS Lambda and how to use it prior to proceeding.\n\n## Build an AWS Lambda function with Golang and MongoDB\n\nTo kick things off, we need to create a new Go project on our local computer. 
Execute the following commands from your command line:

```bash
mkdir lambdaexample
cd lambdaexample
go mod init lambdaexample
```

The above commands will create a new project directory and initialize the use of Go Modules for our AWS Lambda and MongoDB dependencies.

Next, execute the following commands from within your project:

```bash
go get go.mongodb.org/mongo-driver/mongo
go get github.com/aws/aws-lambda-go/lambda
```

The above commands will download the Go driver for MongoDB and the AWS Lambda SDK.

Finally, create a **main.go** file in your project. The **main.go** file will be where we add all our project code.

Within the **main.go** file, add the following code:

```go
package main

import (
    "context"
    "os"

    "github.com/aws/aws-lambda-go/lambda"
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/bson/primitive"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

type EventInput struct {
    Limit int64 `json:"limit"`
}

type Movie struct {
    ID    primitive.ObjectID `bson:"_id" json:"_id"`
    Title string             `bson:"title" json:"title"`
    Year  int32              `bson:"year" json:"year"`
}

// The client is created once, outside the handler, so it can be reused across invocations.
var client, err = mongo.Connect(context.Background(), options.Client().ApplyURI(os.Getenv("ATLAS_URI")))

func HandleRequest(ctx context.Context, input EventInput) ([]Movie, error) {
    // If the connection attempt above failed, surface that error now.
    if err != nil {
        return nil, err
    }

    collection := client.Database("sample_mflix").Collection("movies")

    opts := options.Find()

    if input.Limit != 0 {
        opts = opts.SetLimit(input.Limit)
    }
    cursor, err := collection.Find(context.Background(), bson.M{}, opts)
    if err != nil {
        return nil, err
    }
    var movies []Movie
    if err = cursor.All(context.Background(), &movies); err != nil {
        return nil, err
    }

    return movies, nil
}

func main() {
    lambda.Start(HandleRequest)
}
```

Don't worry, we're going to break down what the above code does and how it relates to your serverless function.

First, you'll notice the following two data structures:

```go
type EventInput struct {
    Limit int64 `json:"limit"`
}

type Movie struct {
    ID    primitive.ObjectID `bson:"_id" json:"_id"`
    Title string             `bson:"title" json:"title"`
    Year  int32              `bson:"year" json:"year"`
}
```

In this example, `EventInput` represents any input that can be sent to our AWS Lambda function. The `Limit` field will represent how many documents the user wants to return with their request. The data structure can include whatever other fields you think would be helpful.

The `Movie` data structure represents the data that we plan to return back to the user. It has both BSON and JSON annotations on each of the fields. The BSON annotation maps the MongoDB document fields to the local variable, and the JSON annotation maps the local field to data that AWS Lambda can understand.

We will be using the **sample_mflix** database in this example, and that database has a **movies** collection. Our `Movie` data structure is meant to map to documents in that collection. You can include as many or as few fields as you want, but only the fields included will be returned to the user.

Next, we want to handle a connection to the database:

```go
var client, err = mongo.Connect(context.Background(), options.Client().ApplyURI(os.Getenv("ATLAS_URI")))
```

The above line creates a database client for our application. It uses an `ATLAS_URI` environment variable with the connection information.
We'll set that later in AWS Lambda.\n\nWe don't want to establish a database connection every time the function is executed. We only want to connect when the function starts. We don't have control over when a function starts, so the correct solution is to connect outside of the `HandleRequest` function and outside of the `main` function.\n\nMost of our magic happens in the `HandleRequest` function:\n\n```go\nfunc HandleRequest(ctx context.Context, input EventInput) ([]Movie, error) {\nif err != nil {\nreturn nil, err\n}\n\ncollection := client.Database(\"sample_mflix\").Collection(\"movies\")\n\nopts := options.Find()\n\nif input.Limit != 0 {\nopts = opts.SetLimit(input.Limit)\n}\ncursor, err := collection.Find(context.Background(), bson.M{}, opts)\nif err != nil {\nreturn nil, err\n}\nvar movies []Movie\nif err = cursor.All(context.Background(), &movies); err != nil {\nreturn nil, err\n}\n\nreturn movies, nil\n}\n```\n\nNotice in the declaration of the function we are accepting the `EventInput` and we're returning a slice of `Movie` to the user.\n\nWhen we first enter the function, we check to see if there was an error. Remember, the connection to the database could have failed, so we're catching it here.\n\nOnce again, for this example we're using the **sample_mflix** database and the **movies** collection. We're storing a reference to this in our `collection` variable.\n\nSince we've chosen to accept user input and this input happens to be related to how queries are done, we are creating an options variable. One of our many possible options is the limit, so if we provide a limit, we should probably set it. Using the options, we execute a `Find` operation on the collection. To keep this example simple, our filter criteria is an empty map which will result in all documents from the collection being returned \u2014 of course, the maximum being whatever the limit was set to.\n\nRather than iterating through a cursor of the results in our function, we're choosing to do the `All` method to load the results into our `movies` slice.\n\nAssuming there were no errors along the way, we return the result and AWS Lambda should present it as JSON.\n\nWe haven't uploaded our function yet!\n\n## Building and packaging the AWS Lambda function with Golang\n\nSince Go is a compiled programming language, you need to create a binary before uploading it to AWS Lambda. There are certain requirements that come with this job.\n\nFirst, we need to worry about the compilation operating system and CPU architecture. AWS Lambda expects Linux and AMD64, so if you're using something else, you need to make use of the Go cross compiler.\n\nFor best results, execute the following command:\n\n```bash\nenv GOOS=linux GOARCH=amd64 go build\n```\n\nThe above command will build the project for the correct operating system and architecture regardless of what computer you're using.\n\nDon't forget to add your binary file to a ZIP archive after it builds. In our example, the binary file should have a **lambdaexample** name unless you specify otherwise.\n\n![AWS Lambda MongoDB Go Project\n\nWithin the AWS Lambda dashboard, upload your project and confirm that the handler and architecture are correct.\n\nBefore testing the function, don't forget to update your environment variables within AWS Lambda.\n\nYou can get your URI string from the MongoDB Atlas dashboard.\n\nOnce done, you can test everything using the \"Test\" tab of the AWS Lambda dashboard. 
Provide an optional \"limit\" for the \"Event JSON\" and check the results for your movies!\n\n## Conclusion\n\nYou just saw how to use MongoDB with AWS Lambda and the Go runtime! AWS makes it very easy to use Go for serverless functions and the Go driver for MongoDB makes it even easier to use with MongoDB.\n\nAs a further reading exercise, it is worth checking out the MongoDB Go Quick Start as well as some documentation around connection pooling in serverless functions.", "format": "md", "metadata": {"tags": ["Go", "AWS", "Serverless"], "pageDescription": "In this tutorial, we'll see how to create a serverless function using the Go programming language and that function will connect to and query MongoDB Atlas in an efficient manner.", "contentType": "Tutorial"}, "title": "Interact with MongoDB in an AWS Lambda Function Using Go", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/vector-search-hashicorp", "action": "created", "body": "# Leveraging Atlas Vector Search With HashiCorp Terraform: Empowering Semantic Search in Modern Applications\n\nLast year, MongoDB announced the general availability of Atlas Vector Search, a new capability in Atlas that allows developers to search across data stored in MongoDB based on its semantic meaning using high dimensional vectors (i.e., \u201cembeddings\u201d) created by machine learning models. \n\nThis allows developers to build intelligent applications that can understand and process human language in a way traditional, text-based search methods cannot since they will only produce an exact match for the query. \n\nFor example, searching for \u201cwarm winter jackets\u201d on an e-commerce website that only supports text-based search might return products with the exact match keywords \"warm,\" \"winter,\" and \"jackets.\" Vector search, on the other hand, understands the semantic meaning of \"warm winter jackets'' as apparel designed for cold temperatures. It retrieves items that are not only labeled as \"winter jackets\u201d but are specifically designed for warmth, including products that might be described with related terms like \"insulated,\" giving users more helpful search results. \n\nIntegrating Atlas Vector Search with infrastructure-as-code (IaC) tools like HashiCorp Terraform can then streamline and optimize your development workflows, ensuring that sophisticated search capabilities are built directly into the infrastructure deployment process. \n\nThis guide will walk you through how to get started with Atlas Vector Search through our HashiCorp Terraform Atlas provider. Let\u2019s get started! \n\n### Pre-requisites\n\n - Create a MongoDB Atlas account. \n - Install HashiCorp Terraform on your terminal or sign up for a free Terraform Cloud account.\n - Create MongoDB Atlas programmatic API keys and associate them with Terraform. \n - Select an IDE of your choice. For this tutorial, we will be using VS Code.\n\n## Step 1: Deploy Atlas dedicated cluster with Atlas Search Nodes\n\nFirst, we need to deploy basic Atlas resources to get started. This includes an Atlas project, an M10 dedicated Atlas cluster (which is pay-as-you-go, great for development and low-traffic applications), a database user, and an IP Access List Entry. \n\n**Note**: When configuring your MongoDB Atlas cluster with Terraform, it's important to restrict IP access to only the IP address from which the Terraform script will be deployed. This minimizes the risk of unauthorized access. 
\n\nIn addition, as part of this tutorial, we will be using Atlas Search Nodes (optional). These provide dedicated infrastructure for Atlas Search and Vector Search workloads, allowing you to fully scale search independent of database needs. Incorporating Search Nodes into your Atlas deployment allows for better performance at scale and delivers workload isolation, higher availability, and the ability to optimize resource usage. \n\nLastly, when using Terraform to manage infrastructure, it is recommended to maintain organized file management practices. Typically, your Terraform configurations/scripts will be written in files with the `.tf` extension, such as `main.tf`. This file, which we are using in this tutorial, contains the primary configuration details for deploying resources and should be located ideally in a dedicated project directory on your local machine or on Terraform Cloud. \n\nSee the below Terraform script as part of our `main.tf` file: \n\n```\nterraform {\n required_providers {\n mongodbatlas = {\n source = \"mongodb/mongodbatlas\"\n }\n }\n required_version = \">= 0.13\"\n}\n\nresource \"mongodbatlas_project\" \"exampleProject\" {\n name = \"exampleProject\"\n org_id = \"63234d3234ec0946eedcd7da\"\n}\n\nresource \"mongodbatlas_advanced_cluster\" \"exampleCluster\" {\n project_id = mongodbatlas_project.exampleProject.id\n name = \"ClusterExample\"\n cluster_type = \"REPLICASET\"\n\n replication_specs {\n region_configs {\n electable_specs {\n instance_size = \"M10\"\n node_count = 3\n }\n provider_name = \"AWS\"\n priority = 7\n region_name = \"US_EAST_1\"\n }\n }\n}\n\nresource \"mongodbatlas_search_deployment\" \"exampleSearchNode\" {\n project_id = mongodbatlas_project.exampleProject.id\n cluster_name = mongodbatlas_advanced_cluster.exampleCluster.name\n specs = \n {\n instance_size = \"S20_HIGHCPU_NVME\"\n node_count = 2\n }\n ]\n}\n\nresource \"mongodbatlas_database_user\" \"testUser\" {\n username = \"username123\"\n password = \"password-test123\"\n project_id = mongodbatlas_project.exampleProject.id\n auth_database_name = \"admin\"\n\n roles {\n role_name = \"readWrite\"\n database_name = \"dbforApp\"\n }\n}\n\nresource \"mongodbatlas_project_ip_access_list\" \"test\" {\n project_id = mongodbatlas_project.exampleProject.id\n ip_address = \"174.218.210.1\"\n}\n```\n\n**Note**: Before deploying, be sure to store your MongoDB Atlas programmatic API keys created as part of the prerequisites as [environment variables. To deploy, you can use the below commands from the terminal: \n\n```\nterraform init \nterraform plan\nterraform apply \n```\n\n## Step 2: Create your collections with vector data \n\nFor this tutorial, you can create your own collection of vectorized data if you have data to use. \n\nAlternatively, you can use our sample data. This is great for testing purposes. The collection you can use is the \"sample_mflix.embedded_movies\" which already has embeddings generated by Open AI. \n\nTo use sample data, from the Atlas UI, go into the Atlas cluster Overview page and select \u201cAtlas Search\u201d at the top of the menu presented. 
\n\nThen, click \u201cLoad a Sample Dataset.\u201d\n\n## Step 3: Add vector search index in Terraform configuration \n\nNow, head back over to Terraform and create an Atlas Search index with type \u201cvectorSearch.\u201d If you are using the sample data, also include a reference to the database \u201csample_mflix\u201d and the collection \u201cembedded_movies.\u201d \n\nLastly, you will need to set the \u201cfields\u201d parameter as per our example below. See our documentation to learn more about how to index fields for vector search and the associated required parameters. \n\n```\nresource \"mongodbatlas_search_index\" \"test-basic-search-vector\" {\n name = \"test-basic-search-index\" \n project_id = mongodbatlas_project.exampleProject.id\n cluster_name = mongodbatlas_advanced_cluster.exampleCluster.name\n type = \"vectorSearch\"\n database = \"sample_mflix\"\n collection_name = \"embedded_movies\"\n fields = <<-EOF\n {\n \"type\": \"vector\",\n \"path\": \"plot_embedding\",\n \"numDimensions\": 1536,\n \"similarity\": \"euclidean\"\n }]\n EOF\n}\n```\n\nTo deploy again, you can use the below commands from the terminal: \n```\nterraform init \nterraform plan\nterraform apply \n```\n\nIf your deployment was successful, you should be greeted with \u201cApply complete!\u201d \n![(Terraform in terminal showcasing deployment)\n\nTo confirm, you should be able to see your newly created Atlas Search index resource in the Atlas UI with Index Type \u201cvectorSearch\u201d and Status as \u201cACTIVE.\u201d \n\n## Step 4: Get connection string and connect to the MongoDB Shell to begin Atlas Vector Search queries \n\nWhile still in the Atlas UI, go back to the homepage, click \u201cConnect\u201d on your Atlas cluster, and select \u201cShell.\u201d \n\nThis will generate your connection string which you can use in the MongoDB Shell to connect to your Atlas cluster. \n\n### All done\n\nCongratulations! You have everything that you need now to run your first Vector Search queries.\n\nWith the above steps, teams can leverage Atlas Vector Search indexes and dedicated Search Nodes for the Terraform MongoDB Atlas provider to build a retrieval-augmented generation, semantic search, or recommendation system with ease. \n\nThe HashiCorp Terraform Atlas provider is open-sourced under the Mozilla Public License v2.0 and we welcome community contributions. To learn more, see our contributing guidelines.\n\nThe fastest way to get started is to create a MongoDB Atlas account from the AWS Marketplace or Google Cloud Marketplace. To learn more about the Terraform provider, check out the documentation, solution brief, and tutorials, or get started today. \n\nGo build with MongoDB Atlas and the HashiCorp Terraform Atlas provider today! \n\n", "format": "md", "metadata": {"tags": ["MongoDB", "Terraform"], "pageDescription": "Learn how to leverage Atlas Vector Search with HashiCorp Terraform in this tutorial.", "contentType": "Tutorial"}, "title": "Leveraging Atlas Vector Search With HashiCorp Terraform: Empowering Semantic Search in Modern Applications", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/quickstart-vectorsearch-mongodb-python", "action": "created", "body": "# Quick Start 2: Vector Search With MongoDB and OpenAI\n\nThis quick start will guide you through how to perform vector search using MongoDB Atlas and OpenAI API. 
\n**Code (Python notebook)**: View on Github or Open in Colab\n\n### What you will learn\n - Creating a vector index on Atlas\n - Performing vector search using OpenAI embeddings\n\n### Pre-requisites\n - A free Atlas account \u2014 create one now!\n - A Python Jupyter notebook environment \u2014 we recommend Google Colab. It is a free, cloud-based environment and very easy to get up and running. \n\n### Suggested\nYou may find this quick start helpful in getting Atlas and a Python client running:\nGetting Started with MongoDB Atlas and Python.\n\n### Vector search: beyond keyword matching\nIn the realm of information retrieval, keyword search has long been the standard. This method involves matching exact words within texts to find relevant information. For instance, if you're trying to locate a film but can only recall that its title includes the word \"battle,\" a keyword search enables you to filter through content to find matches.\n\nHowever, what if your memory of a movie is vague, limited to a general plot or theme rather than specific titles or keywords? This is where vector search steps in, revolutionizing how we find information. Unlike keyword search, vector search delves into the realm of **semantics**, allowing for the retrieval of content **based on the meanings behind the words**.\n\nConsider you're trying to find a movie again, but this time, all you remember is a broad plot description like \"humans fight aliens.\" Traditional search methods might leave you combing through endless irrelevant results. Vector search, however, uses advanced algorithms to understand the contextual meaning of your query, capable of guiding you to movies that align with your description \u2014 such as \"Terminator\" \u2014 even if the exact words aren't used in your search terms.\n\n## Big picture\nLet's understand how all the pieces fit together.\n\nWe are going to use the **embedded_movies** collection in the Atlas sample data. This one **already has embeddings calculated** for plots, making our lives easier.\n\nHere is how it all works. When a semantic search query is issued (e.g., \"fatalistic sci-fi movies\"):\n\n - Steps 1 and 2: **We call the OpenAI API to get embeddings** for the query text.\n - Step 3: Send the **embedding to Atlas** to perform a vector search.\n - Step 4: **Atlas returns relevant search results using Vector Search**.\n\nHere is a visual:\n\n## Understanding embeddings\nEmbeddings are an interesting way of transforming different types of data \u2014 whether it's text, images, audio, or video \u2014 into a numerical format, specifically, into an array known as a \u201cvector.\u201d This conversion allows the data to be processed and understood by machines.\n\nTake text data as an example: Words can be converted into numbers, with each unique word assigned its own distinct numerical value. These numerical representations can vary in size, ranging anywhere from 128 to 4096 elements.\n\nHowever, what sets embeddings apart is their ability to capture more than just random sequences of numbers. They actually preserve some of the inherent meaning of the original data. For instance, words that share similar meanings tend to have embeddings that are closer together in the numerical space.\n\nTo illustrate, consider a simplified scenario where we plot the embeddings of several words on a two-dimensional graph for easier visualization. Though in practice, embeddings can span many dimensions (from 128 to 4096), this example helps clarify the concept. 
On the graph, you'll notice that items with similar contexts or meanings \u2014 like different types of fruits or various pets \u2014 are positioned closer together. This clustering is a key strength of embeddings, highlighting their ability to capture and reflect the nuances of meaning and similarity within the data.\n\n## How to create embeddings\nSo, how do we go about creating these useful embeddings? Thankfully, there's a variety of embedding models out there designed to transform your text, audio, or video data into meaningful numerical representations.\n\nSome of these models are **proprietary**, meaning they are owned by certain companies and accessible **mainly through their APIs**. OpenAI is a notable example of a provider offering such models.\n\nThere are also **open-source models** available. These can be freely downloaded and operated on your own computer. Whether you opt for a proprietary model or an open-source option depends on your specific needs and resources.\n\nHugging Face's embedding model leaderboard is a great place to start looking for embedding models. They periodically test available embedding models and rank them according to various criteria.\n\nYou can read more about embeddings:\n\n - Explore some of the embedding choices: RAG Series Part 1: How to Choose the Right Embedding Model for Your Application, by Apoorva Joshi\n - The Beginner\u2019s Guide to Text Embeddings\n - Getting Started With Embeddings\n\n## Step 1: Setting up Atlas in the cloud\nHere is a quick guide adopted from the official documentation. Refer to the documentation for full details. \n\n### Create a free Atlas account\nSign up for Atlas and log into your account.\n\n### Create a free instance\n\n - You can choose any cloud instance.\n - Choose the \u201cFREE\u201d tier, so you won't incur any costs.\n - Follow the setup wizard and give your instance a name.\n - Note your username and password to connect to the instance.\n - Configuring IP access: Add 0.0.0.0/0 to the IP access list. This makes it available to connect from Google Colab. (Note: This makes the instance available from any IP address, which is okay for a test instance). See the screenshot below for how to add the IP:\n\n### Load sample data\nNext, we'll load the default sample datasets in Atlas, which may take a few minutes.\n\n### View sample data\nIn the Atlas UI, explore the **embedded_movies** collection within the **sample_mflix** database to view document details like title, year, and plot.\n\n### Inspect embeddings\nFortunately, the **sample_mflix.embedded_movies** dataset already includes vector embeddings for plots, generated with OpenAI's **text-embedding-ada-002** model. By inspecting the **plot_embedding** attribute in the Atlas UI, as shown in the screenshot below, you'll find it comprises an array of 1536 numbers.\n\nCongrats! You now have an Atlas cluster, with some sample data. \ud83d\udc4f\n\n## Step 2: Create Atlas index\nBefore we can run a vector search, we need to create a vector index. Creating an index allows Atlas to execute queries faster. Here is how to create a vector index.\n\n### Navigate to the Atlas Vector Search UI\n\n### Choose \u201cCreate a Vector Search Index\u201d\n\n### Create a vector index as follows\nLet's define a vector index as below. 
Here is what the parameters mean.\n\n - **\"type\": \"vector\"** \u2014 This indicates we are defining a vector index.\n - **\"path\": \"plot_embedding\"** \u2014 This is the attribute we are indexing \u2014 in our case, the embedding data of plot.\n - **\"numDimensions\": 1536** \u2014 This indicates the dimension of the embedding field. This has to match the embedding model we have used (in our case, the OpenAI model).\n - **\"similarity\": \"dotProduct\"** \u2014 Finally, we are defining the matching algorithm to be used by the vector index. The choices are **euclidean**, **cosine**, and **dotProduct**. You can read more about these choices in How to Index Fields for Vector Search.\n\nIndex name: **idx_plot_embedding**\n\nIndex definition\n\n```\n{\n \"fields\": \n {\n \"type\": \"vector\",\n \"path\": \"plot_embedding\",\n \"numDimensions\": 1536,\n \"similarity\": \"dotProduct\"\n }\n ]\n}\n```\n![Figure 11: Creating a vector index\n\nWait until the index is ready to be used\n\n## Step 3: Configuration\nWe will start by setting the following configuration parameters:\n\n - Atlas connection credentials \u2014 see below for a step-by-step guide.\n - OpenAI API key \u2014 get it from the OpenAI dashboard.\n\nHere is how you get the **ATLAS_URI** setting.\n\n - Navigate to the Atlas UI.\n - Select your database.\n - Choose the \u201cConnect\u201d option to proceed.\n - Within the connect section, click on \u201cDrivers\u201d to view connection details.\n - Finally, copy the displayed ATLAS_URI value for use in your application's configuration.\n\nSee these screenshots as guidance.\n\n## On to code\nNow, let's look at the code. We will walk through and execute the code step by step. You can also access the fully functional Python notebook at the beginning of this guide.\n\nStart by setting up configurations for **ATLAS_URI** and **OPENAI_API_KEY**. \n\n(Run this code block in your Google Colab under Step 3.)\n\n```\n# We will keep all global variables in an object to not pollute the global namespace.\nclass MyConfig(object):\n pass\n\nMY_CONFIG = MyConfig()\n\nMY_CONFIG.ATLAS_URI = \"Enter your Atlas URI value here\" ## TODO\nMY_CONFIG.OPENAI_API_KEY = \"Enter your OpenAI API Key here\" ## TODO\n```\n\nPro tip \ud83d\udca1\nWe will keep all global variables in an object called **MY_CONFIG** so as not to pollute the global namespace. **MyConfig** is just a placeholder class to hold our variables and settings.\n\n## Step 4: Install dependencies\nLet's install the dependencies required. We are installing two packages:\n\n - **pymongo**: Python library to connect to MongoDB Atlas instances \n - **openai**: For calling the OpenAI library\n\n(Run this code block in your Google Colab under Step 4.)\n```\n!pip install openai==1.13.3 pymongo==4.6.2\n```\n\nPro tip \ud83d\udca1\nYou will notice that we are specifying a version (openai==1.13.3) for packages we are installing. This ensures the versions we are installing are compatible with our code. 
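If you later move this notebook code into a regular Python project, the same pins can live in a `requirements.txt` file (same versions as above, installed with `pip install -r requirements.txt`):

```
openai==1.13.3
pymongo==4.6.2
```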
Pinning versions like this is a good practice and is called **version pinning** or **freezing**.

## Step 5: AtlasClient and OpenAIClient
### AtlasClient
This class handles establishing connections, running queries, and performing a vector search on MongoDB Atlas.

(Run this code block in your Google Colab under Step 5.)

```
from pymongo import MongoClient

class AtlasClient ():

    def __init__ (self, atlas_uri, dbname):
        self.mongodb_client = MongoClient(atlas_uri)
        self.database = self.mongodb_client[dbname]

    ## A quick way to test if we can connect to Atlas instance
    def ping (self):
        self.mongodb_client.admin.command('ping')

    def get_collection (self, collection_name):
        collection = self.database[collection_name]
        return collection

    def find (self, collection_name, filter = {}, limit=10):
        collection = self.database[collection_name]
        items = list(collection.find(filter=filter, limit=limit))
        return items

    # https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-stage/
    def vector_search(self, collection_name, index_name, attr_name, embedding_vector, limit=5):
        collection = self.database[collection_name]
        results = collection.aggregate([
            {
                '$vectorSearch': {
                    "index": index_name,
                    "path": attr_name,
                    "queryVector": embedding_vector,
                    "numCandidates": 50,
                    "limit": limit,
                }
            },
            ## We are extracting 'vectorSearchScore' here
            ## columns with 1 are included, columns with 0 are excluded
            {
                "$project": {
                    '_id' : 1,
                    'title' : 1,
                    'plot' : 1,
                    'year' : 1,
                    "search_score": { "$meta": "vectorSearchScore" }
                }
            }
        ])
        return list(results)

    def close_connection(self):
        self.mongodb_client.close()
```

**Initializing class**:
The constructor (`__init__`) function takes two arguments:
- The Atlas URI (that we obtained from settings)
- The database to connect to

**ping**:
This is a handy method to test if we can connect to Atlas.

**find**:
This is the “search” function. We specify the collection to search and any search criteria using filters.

**vector_search**:
This is a key function that performs vector search on MongoDB Atlas. It takes the following parameters:

 - collection_name: **embedded_movies**
 - index_name: **idx_plot_embedding**
 - attr_name: **"plot_embedding"**
 - embedding_vector: Embeddings returned from the OpenAI API call
 - limit: How many results to return

The **$project** section extracts the attributes we want to return as search results.

(This code block is for review purposes. No need to execute.)
```
        results = collection.aggregate([
            {
                '$vectorSearch': {
                    "index": index_name,
                    "path": attr_name,
                    "queryVector": embedding_vector,
                    "numCandidates": 50,
                    "limit": limit,
                }
            },
            ## We are extracting 'vectorSearchScore' here
            ## columns with 1 are included, columns with 0 are excluded
            {
                "$project": {
                    '_id' : 1,
                    'title' : 1,
                    'plot' : 1,
                    'year' : 1,
                    "search_score": { "$meta": "vectorSearchScore" }
                }
            }
        ])
```
Also, note this line:
```
        "search_score": { "$meta": "vectorSearchScore" }
```
This particular line extracts the search score of the vector search. The search score ranges from 0.0 to 1.0.
Scores close to 1.0 are a great match.\n\n### OpenAI client\nThis is a handy class for OpenAI interaction.\n\n(Run this code block in your Google Colab under Step 5.)\n```\nfrom openai import OpenAI\n\nclass OpenAIClient():\n def __init__(self, api_key) -> None:\n self.client = OpenAI(\n api_key= api_key, # defaults to os.environ.get(\"OPENAI_API_KEY\")\n )\n # print (\"OpenAI Client initialized!\")\n\n def get_embedding(self, text: str, model=\"text-embedding-ada-002\") -> list[float]:\n text = text.replace(\"\\n\", \" \")\n resp = self.client.embeddings.create (\n input=[text],\n model=model )\n\n return resp.data[0].embedding\n```\n\n**Initializing class**:\nThis class is initialized with the OpenAI API key.\n\n**get_embedding method**:\n - **text**: This is the text we are trying to get embeddings for. \n - **model**: This is the embedding model. Here we are specifying the model **text-embedding-ada-002** because this is the model that is used to create embeddings in our sample data. So we want to use the same model to encode our query string.\n\n## Step 6: Connect to Atlas\nInitialize the Atlas client and do a quick connectivity test. We are connecting to the **sample_mflix** database and the **embedded_movies** collection. This dataset is loaded as part of the setup (Step 1).\n\nIf everything goes well, the connection will succeed. \n\n(Run this code block in your Google Colab under Step 6.)\n```\nMY_CONFIG.DB_NAME = 'sample_mflix'\nMY_CONFIG.COLLECTION_NAME = 'embedded_movies'\nMY_CONFIG.INDEX_NAME = 'idx_plot_embedding'\n\natlas_client = AtlasClient (MY_CONFIG.ATLAS_URI, MY_CONFIG.DB_NAME)\natlas_client.ping()\nprint ('Connected to Atlas instance! We are good to go!')\n```\n\n***Troubleshooting***\nIf you get a \u201cconnection failed\u201d error, make sure **0.0.0.0/0** is added as an allowed IP address to connect (see Step 1).\n\n## Step 7: Initialize the OpenAI client\nInitialize the OpenAI client with the OpenAI API key.\n\n(Run this code block in your Google Colab under Step 7.)\n```\nopenAI_client = OpenAIClient (api_key=MY_CONFIG.OPENAI_API_KEY)\nprint (\"OpenAI client initialized\")\n```\n\n## Step 8: Let's do a vector search!\nNow that we have everything set up, let's do a vector search! We are going to query movie plots, not just based on keywords but also meaning. For example, we will search for movies where the plot is \"humans fighting aliens.\"\n\nThis function takes one argument: **query** string.\n1. We convert the **query into embeddings**. We do this by calling the OpenAI API. We also time the API call (t1b - t1a) so we understand the network latencies.\n2. We send the embeddings (we just got back from OpenAI) to Atlas to **perform a vector search** and get the results.\n3. 
We are printing out the results returned by the vector search.\n\n(Run this code block in your Google Colab under Step 8.)\n```\nimport time\n\n# Handy function\ndef do_vector_search (query:str) -> None:\n query = query.lower().strip() # cleanup query string\n print ('query: ', query)\n\n # call openAI API to convert text into embedding\n t1a = time.perf_counter()\n embedding = openAI_client.get_embedding(query)\n t1b = time.perf_counter()\n print (f\"Getting embeddings from OpenAI took {(t1b-t1a)*1000:,.0f} ms\")\n\n # perform a vector search on Atlas\n # using embeddings (returned from OpenAI above) \n t2a = time.perf_counter()\n movies = atlas_client.vector_search(collection_name=MY_CONFIG.COLLECTION_NAME, index_name=MY_CONFIG.INDEX_NAME, attr_name='plot_embedding', embedding_vector=embedding,limit=10 )\n t2b = time.perf_counter()\n\n # and printing out the results\n print (f\"Altas query returned {len (movies)} movies in {(t2b-t2a)*1000:,.0f} ms\")\n print()\n\n for idx, movie in enumerate (movies):\n print(f'{idx+1}\\nid: {movie[\"_id\"]}\\ntitle: {movie[\"title\"]},\\nyear: {movie[\"year\"]}' +\n f'\\nsearch_score(meta):{movie[\"search_score\"]}\\nplot: {movie[\"plot\"]}\\n')\n```\n### First query\nHere is our first query. We want to find movies where the plot is about \"humans fighting aliens.\"\n\n(Run this code block in your Google Colab under Step 8.)\n```\nquery=\"humans fighting aliens\"\ndo_vector_search (query=query)\n```\nWe will see search results like this: \n\n```\nquery: humans fighting aliens\nusing cached embeddings\nAltas query returned 10 movies in 138 ms\n\n1\nid: 573a1398f29313caabce8f83\ntitle: V: The Final Battle,\nyear: 1984\nsearch_score(meta):0.9573556184768677\nplot: A small group of human resistance fighters fight a desperate guerilla war against the genocidal extra-terrestrials who dominate Earth.\n\n2\nid: 573a13c7f29313caabd75324\ntitle: Falling Skies,\nyear: 2011\u00e8\nsearch_score(meta):0.9550596475601196\nplot: Survivors of an alien attack on earth gather together to fight for their lives and fight back.\n\n3\nid: 573a139af29313caabcf0cff\ntitle: Starship Troopers,\nyear: 1997\nsearch_score(meta):0.9523435831069946\nplot: Humans in a fascistic, militaristic future do battle with giant alien bugs in a fight for survival.\n\n...\nyear: 2002\nsearch_score(meta):0.9372057914733887\nplot: A young woman from the future forces a local gunman to help her stop an impending alien invasion which will wipe out the human race.\n```\n\n***Note the score***\nIn addition to movie attributes (title, year, plot, etc.), we are also displaying search_score. This is a meta attribute \u2014 not really part of the movies collection but generated as a result of the vector search.\nThis is a number between 0 and 1. Values closer to 1 represent a better match. The results are sorted from best match down (closer to 1 first). 
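Because `search_score` comes back as a regular field on each result, you can also post-filter on it in Python. Here is a small sketch that builds on the clients and config defined earlier; the 0.95 cut-off is an arbitrary example, not a recommended threshold:

```
query = "humans fighting aliens"
embedding = openAI_client.get_embedding(query)

movies = atlas_client.vector_search(
    collection_name=MY_CONFIG.COLLECTION_NAME,
    index_name=MY_CONFIG.INDEX_NAME,
    attr_name='plot_embedding',
    embedding_vector=embedding,
    limit=10,
)

# Keep only the results whose vector search score clears the threshold.
strong_matches = [m for m in movies if m["search_score"] >= 0.95]
print(f"{len(strong_matches)} of {len(movies)} results scored 0.95 or higher")
```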
[Read more about search score.\n\n***Troubleshooting***\nNo search results?\nMake sure the vector search index is defined and active (Step 2)!\n\n### Sample Query 2\n(Run this code block in your Google Colab under Step 8.)\n```\nquery=\"relationship drama between two good friends\"\ndo_vector_search (query=query)\n```\nSample results will look like the following:\n```\nquery: relationship drama between two good friends\nusing cached embeddings\nAltas query returned 10 movies in 71 ms\n\n1\nid: 573a13a3f29313caabd0dfe2\ntitle: Dark Blue World,\nyear: 2001\nsearch_score(meta):0.9380425214767456\nplot: The friendship of two men becomes tested when they both fall for the same woman.\n\n2\nid: 573a13a3f29313caabd0e14b\ntitle: Dark Blue World,\nyear: 2001\nsearch_score(meta):0.9380425214767456\nplot: The friendship of two men becomes tested when they both fall for the same woman.\n\n3\nid: 573a1399f29313caabcec488\ntitle: Once a Thief,\nyear: 1991\nsearch_score(meta):0.9260045289993286\nplot: A romantic and action packed story of three best friends, a group of high end art thieves, who come into trouble when a love-triangle forms between them.\n\n...\nyear: 1987\nsearch_score(meta):0.9181452989578247\nplot: A modern day Romeo & Juliet story is told in New York when an Italian boy and a Chinese girl become lovers, causing a tragic conflict between ethnic gangs.\n```\n\n## Conclusion\nThere we go! We have successfully performed a vector search combining Atlas and the OpenAI API.\n\nTo summarize, in this quick start, we have accomplished the following:\n\n - Set up Atlas in the cloud\n - Loaded sample data into our Atlas cluster\n - Set up a vector search index\n - Performed a vector search using OpenAI embeddings and Atlas\n\nAs we can see, **vector search** is very powerful as it can fetch results based on the semantic meaning of search terms instead of just keyword matching. Vector search allows us to build more powerful applications.\n\n## Next steps\nHere are some suggested resources for you to explore:\n - Atlas Vector Search Explained in 3 Minutes\n - Audio Find - Atlas Vector Search for Audio\n - The MongoDB community forums \u2014a great place to ask questions and get help from fellow developers!\n\n", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI"], "pageDescription": "This quick start will guide you through how to perform vector search using MongoDB Atlas and OpenAI API. ", "contentType": "Quickstart"}, "title": "Quick Start 2: Vector Search With MongoDB and OpenAI", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/realm-flex-sync-tutorial", "action": "created", "body": "# Using Realm Flexible Sync in Your App\u2014an iOS Tutorial\n\n## Introduction\n\nIn January 2022, we announced the release of the Realm Flexible Sync preview\u2014an opportunity for developers to take it for a spin and give us feedback. Flexible Sync is now Generally Available as part of MongoDB Atlas Device Sync. That article provided an overview of the benefits of flexible sync and how it works. TL;DR: You typically don't want to sync the entire backend database to every device\u2014whether for capacity or security concerns. Flexible Sync lets the developer provide queries to control exactly what the mobile app asks to sync, together with backend rules to ensure users can only access the data that they're entitled to.\n\nThis post builds on that introduction by showing how to add flexible sync to the RChat mobile app. 
I'll show how to configure the backend Atlas app, and then what code needs adding to the mobile app.\n\nEverything you see in this tutorial can be found in the flex-sync branch of the RChat repo.\n\n## Prerequisites\n\n- Xcode 13.2+\n- iOS 15+\n- Realm-Swift 10.32.0+\n- MongoDB 5.0+\n\n## The RChat App\n\nRChat is a messaging app. Users can add other users to a chat room and then share messages, images, and location with each other.\n\nAll of the user and chat message data is shared between instances of the app via Atlas Device Sync.\n\nThere's a common Atlas backend app. There are frontend apps for iOS and Android. This post focuses on the backend and the iOS app.\n\n## Configuring the Realm Backend App\n\nThe backend app contains a lot of functionality that isn't connected to the sync functionality, and so I won't cover that here. If you're interested, then check out the original RChat series.\n\nAs a starting point, you can install the app. I'll then explain the parts connected to Atlas Device Sync.\n\n### Import the Backend Atlas App\n\n1. If you don't already have one, create a MongoDB Atlas Cluster, keeping the default name of `Cluster0`. The Atlas cluster must be running MongoDB 5.0 or later.\n2. Install the Realm CLI and create an API key pair.\n3. Download the repo and install the Atlas app:\n\n```bash\ngit clone https://github.com/ClusterDB/RChat.git\ngit checkout flex-sync\ncd RChat/RChat-Realm/RChat\nrealm-cli login --api-key --private-api-key \nrealm-cli import # Then answer prompts, naming the app RChat\n\n```\n\n4. From the Atlas UI, click on the \"App Services\" tab and you will see the RChat app. Open it and copy the App Id. You'll need to use this before building the iOS app.\n\n### How Flexible Sync is Enabled in the Back End\n#### Schema\n\nThe schema represents how the data will be stored in MongoDB Atlas **and*- what the Swift (and Kotlin) model classes must contain. \n\nEach collection/class requires a schema. If you enable the \"Developer Mode\" option, then Atlas will automatically define the schema based on your Swift or Kotlin model classes. In this case, your imported `App` includes the schemas, and so developer mode isn't needed. 
You can view the schemas by browsing to the \"Schema\" section in the Atlas UI:\n\nYou can find more details about the schema/model in Building a Mobile Chat App Using Realm \u2013 Data Architecture, but note that for flexible sync (as opposed to the original partition-based sync), the `partition` field has been removed.\n\nWe're interested in the schema for three collections/model-classes:\n\n**User:**\n\n```json\n{\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"conversations\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"displayName\": {\n \"bsonType\": \"string\"\n },\n \"id\": {\n \"bsonType\": \"string\"\n },\n \"members\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"membershipStatus\": {\n \"bsonType\": \"string\"\n },\n \"userName\": {\n \"bsonType\": \"string\"\n }\n },\n \"required\": \n \"membershipStatus\",\n \"userName\"\n ],\n \"title\": \"Member\"\n }\n },\n \"unreadCount\": {\n \"bsonType\": \"long\"\n }\n },\n \"required\": [\n \"unreadCount\",\n \"id\",\n \"displayName\"\n ],\n \"title\": \"Conversation\"\n }\n },\n \"lastSeenAt\": {\n \"bsonType\": \"date\"\n },\n \"presence\": {\n \"bsonType\": \"string\"\n },\n \"userName\": {\n \"bsonType\": \"string\"\n },\n \"userPreferences\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"avatarImage\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"date\": {\n \"bsonType\": \"date\"\n },\n \"picture\": {\n \"bsonType\": \"binData\"\n },\n \"thumbNail\": {\n \"bsonType\": \"binData\"\n }\n },\n \"required\": [\n \"_id\",\n \"date\"\n ],\n \"title\": \"Photo\"\n },\n \"displayName\": {\n \"bsonType\": \"string\"\n }\n },\n \"required\": [],\n \"title\": \"UserPreferences\"\n }\n },\n \"required\": [\n \"_id\",\n \"userName\",\n \"presence\"\n ],\n \"title\": \"User\"\n}\n```\n\n`User` documents/objects represent users of the app.\n\n**Chatster:**\n```json\n{\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"avatarImage\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"date\": {\n \"bsonType\": \"date\"\n },\n \"picture\": {\n \"bsonType\": \"binData\"\n },\n \"thumbNail\": {\n \"bsonType\": \"binData\"\n }\n },\n \"required\": [\n \"_id\",\n \"date\"\n ],\n \"title\": \"Photo\"\n },\n \"displayName\": {\n \"bsonType\": \"string\"\n },\n \"lastSeenAt\": {\n \"bsonType\": \"date\"\n },\n \"presence\": {\n \"bsonType\": \"string\"\n },\n \"userName\": {\n \"bsonType\": \"string\"\n }\n },\n \"required\": [\n \"_id\",\n \"presence\",\n \"userName\"\n ],\n \"title\": \"Chatster\"\n}\n```\n\n`Chatster` documents/objects represent a read-only subset of instances of `User` documents. `Chatster` is needed because there's a subset of `User` data that we want to make accessible to all users. E.g., I want everyone to be able to see my username, presence status, and avatar image, but I don't want them to see which chat rooms I'm a member of. \n\nDevice Sync lets you control which users can sync which documents. When this article was first published, you couldn't sync just a subset of a document's fields. That's why `Chatster` was needed. 
At some point, I can remove `Chatster` from the app.\n\n**ChatMessage:**\n```json\n{\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"author\": {\n \"bsonType\": \"string\"\n },\n \"authorID\": {\n \"bsonType\": \"string\"\n },\n \"conversationID\": {\n \"bsonType\": \"string\"\n },\n \"image\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"date\": {\n \"bsonType\": \"date\"\n },\n \"picture\": {\n \"bsonType\": \"binData\"\n },\n \"thumbNail\": {\n \"bsonType\": \"binData\"\n }\n },\n \"required\": [\n \"_id\",\n \"date\"\n ],\n \"title\": \"Photo\"\n },\n \"location\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"double\"\n }\n },\n \"text\": {\n \"bsonType\": \"string\"\n },\n \"timestamp\": {\n \"bsonType\": \"date\"\n }\n },\n \"required\": [\n \"_id\",\n \"authorID\",\n \"conversationID\",\n \"text\",\n \"timestamp\"\n ],\n \"title\": \"ChatMessage\"\n}\n```\nThere's a `ChatMessage` document object for every message sent to any chat room.\n\n#### Flexible Sync Configuration\nYou can view and edit the sync configuration by browsing to the \"Sync\" section of the Atlas UI:\n\n![Enabling Atlas Flexible Device Sync in the Atlas UI\n\nFor this deployment, I've selected the Atlas cluster to use. **That cluster must be running MongoDB 5.0 or later**.\n\nYou must specify which fields the mobile app can use in its sync filter queries. Without this, you can't refer to those fields in your sync queries or permissions. You are currently limited to 10 fields.\n\nScrolling down, you can see the sync permissions:\n\nThe UI has flattened the permissions JSON document; here's a version that's easier to read:\n\n```json\n{\n \"rules\": {\n \"User\": \n {\n \"name\": \"anyone\",\n \"applyWhen\": {},\n \"read\": {\n \"_id\": \"%%user.id\"\n },\n \"write\": {\n \"_id\": \"%%user.id\"\n }\n }\n ],\n \"Chatster\": [\n {\n \"name\": \"anyone\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": false\n }\n ],\n \"ChatMessage\": [\n {\n \"name\": \"anyone\",\n \"applyWhen\": {},\n \"read\": true,\n \"write\": {\n \"authorID\": \"%%user.id\"\n }\n }\n ]\n },\n \"defaultRoles\": [\n {\n \"name\": \"all\",\n \"applyWhen\": {},\n \"read\": {},\n \"write\": {}\n }\n ]\n}\n```\n\nThe `rules` component contains a sub-document for each of our collections. Each of those sub-documents contain an array of roles. Each role contains:\n\n- The `name` of the role, this should be something that helps other developers understand the purpose of the role (e.g., \"admin,\" \"owner,\" \"guest\").\n- `applyWhen`, which defines whether the requesting user matches the role or not. Each of our collections have a single role, and so `applyWhen` is set to `{}`, which always evaluates to true.\n- A read rule\u2014how to decide whether this user can view a given document. This is where our three collections impose different rules:\n - A user can read and write to their own `User` object. No one else can read or write to it.\n - Anyone can read any `Chatster` document, but no one can write to them. Note that these documents are maintained by database triggers to keep them consistent with their associated `User` document.\n - The author of a `ChatMessage` is allowed to write to it. Anyone can read any `ChatMessage`. 
Ideally, we'd restrict it to just members of the chat room, but permissions don't currently support arrays\u2014this is another feature that I'm keen to see added.\n\n## Adding Flexible Sync to the iOS App\n\nAs with the back end, the iOS app is too big to cover in its entirety in this post. I'll explain how to build and run the app and then go through the components relevant to Flexible Sync.\n\n### Configure, Build, and Run the RChat iOS App\n\nYou've already downloaded the repo containing the iOS app, but you need to change directory before opening and running the app:\n\n```bash\ncd ../../RChat-iOS\nopen RChat.xcodeproj\n```\n\nUpdate `RChatApp.swift` with your App Id (you copied that from the Atlas UI when configuring your backend app). In Xcode, select your device or simulator before building and running the app (\u2318R). Select a second device or simulator and run the app a second time (\u2318R).\n\nOn each device, provide a username and password and select the \"Register new user\" checkbox:\n![iOS screenshot of registering a new user through the RChat app\n\nOnce registered and logged in on both devices, you can create a new chat room, invite your second user, and start sharing messages and photos. To share location, you first need to enable it in the app's settings.\n\n### Key Pieces of the iOS App Code\n#### The Model\n\nYou've seen the schemas that were defined for the \"User,\" \"Chatster,\" and \"ChatMessage\" collections in the back end Atlas app. Each of those collections has an associated Realm `Object` class in the iOS app. Sub-documents map to embedded objects that conform to `RealmEmbeddedObject`:\n\nLet's take a close look at each of these classes:\n\n**User Class**\n\n``` swift\nclass User: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id = UUID().uuidString\n @Persisted var userName = \"\"\n @Persisted var userPreferences: UserPreferences?\n @Persisted var lastSeenAt: Date?\n @Persisted var conversations = List()\n @Persisted var presence = \"On-Line\"\n}\n\nclass UserPreferences: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var displayName: String?\n @Persisted var avatarImage: Photo?\n}\n\nclass Photo: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var _id = UUID().uuidString\n @Persisted var thumbNail: Data?\n @Persisted var picture: Data?\n @Persisted var date = Date()\n}\n\nclass Conversation: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var id = UUID().uuidString\n @Persisted var displayName = \"\"\n @Persisted var unreadCount = 0\n @Persisted var members = List()\n}\n\nclass Member: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var userName = \"\"\n @Persisted var membershipStatus = \"User added, but invite pending\"\n}\n```\n\n**Chatster Class**\n\n```swift\nclass Chatster: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id = UUID().uuidString // This will match the _id of the associated User\n @Persisted var userName = \"\"\n @Persisted var displayName: String?\n @Persisted var avatarImage: Photo?\n @Persisted var lastSeenAt: Date?\n @Persisted var presence = \"Off-Line\"\n}\n\nclass Photo: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var _id = UUID().uuidString\n @Persisted var thumbNail: Data?\n @Persisted var picture: Data?\n @Persisted var date = Date()\n}\n```\n\n**ChatMessage Class**\n\n```swift\nclass ChatMessage: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id = UUID().uuidString\n @Persisted var conversationID = \"\"\n @Persisted var author: 
String? // username\n @Persisted var authorID: String\n @Persisted var text = \"\"\n @Persisted var image: Photo?\n @Persisted var location = List()\n @Persisted var timestamp = Date()\n}\n\nclass Photo: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var _id = UUID().uuidString\n @Persisted var thumbNail: Data?\n @Persisted var picture: Data?\n @Persisted var date = Date()\n}\n```\n\n#### Accessing Synced Realm Data\n\nAny iOS app that wants to sync Realm data needs to create a Realm `App` instance, providing the Realm App ID so that the Realm SDK can connect to the backend Realm app:\n\n```swift\nlet app = RealmSwift.App(id: \"rchat-xxxxx\") // TODO: Set the Realm application ID\n```\n\nWhen a SwiftUI view (in this case, `LoggedInView`) needs to access synced data, the parent view must flag that flexible sync will be used. It does this by passing the Realm configuration through the SwiftUI environment:\n\n```swift\nLoggedInView(userID: $userID)\n .environment(\\.realmConfiguration,\n app.currentUser!.flexibleSyncConfiguration())\n```\n\n`LoggedInView` can then access two variables from the SwiftUI environment:\n\n```swift\nstruct LoggedInView: View {\n ...\n @Environment(\\.realm) var realm\n @ObservedResults(User.self) var users\n```\n\nThe users variable is a live query containing all synced `User` objects in the Realm. But at this point, no `User` documents have been synced because we haven't subscribed to anything.\n\nThat's easy to fix. We create a new function (`setSubscription`) that's invoked when the view is opened:\n\n```swift\nstruct LoggedInView: View {\n ...\n @Binding var userID: String?\n ...\n var body: some View {\n ZStack {\n ...\n }\n .onAppear(perform: setSubscription)\n }\n\n private func setSubscription() {\n let subscriptions = realm.subscriptions\n subscriptions.update {\n if let currentSubscription = subscriptions.first(named: \"user_id\") {\n print(\"Replacing subscription for user_id\")\n currentSubscription.updateQuery(toType: User.self) { user in\n user._id == userID!\n }\n } else {\n print(\"Appending subscription for user_id\")\n subscriptions.append(QuerySubscription(name: \"user_id\") { user in\n user._id == userID!\n })\n }\n }\n }\n}\n```\n\nSubscriptions are given a name to make them easier to work with. I named this one `user_id`. \n\nThe function checks whether there's already a subscription named `user_id`. If there is, then the function replaces it. If not, then it adds the new subscription. In either case, the subscription is defined by passing in a query that finds any `User` documents/objects where the `_id` field matches the current user's ID.\n\nThe subscription should sync exactly one `User` object to the realm, and so the code for the view's body can work with the `first` object in the results:\n\n```swift\nstruct LoggedInView: View {\n ...\n @ObservedResults(User.self) var users\n @Binding var userID: String?\n ...\n var body: some View {\n ZStack {\n if let user = users.first {\n ...\n ConversationListView(user: user)\n ...\n }\n }\n .navigationBarTitle(\"Chats\", displayMode: .inline)\n .onAppear(perform: setSubscription)\n }\n}\n```\n\nOther views work with different model classes and sync queries. 
For example, when the user clicks on a chat room, a new view is opened that displays all of the `ChatMessage`s for that conversation:\n\n```swift\nstruct ChatRoomBubblesView: View {\n ...\n @ObservedResults(ChatMessage.self, sortDescriptor: SortDescriptor(keyPath: \"timestamp\", ascending: true)) var chats\n @Environment(\\.realm) var realm\n ...\n var conversation: Conversation?\n ...\n var body: some View {\n VStack {\n ...\n }\n .onAppear { loadChatRoom() }\n }\n\n private func loadChatRoom() {\n ...\n setSubscription()\n ...\n }\n\n private func setSubscription() {\n let subscriptions = realm.subscriptions\n subscriptions.update {\n if let conversation = conversation {\n if let currentSubscription = subscriptions.first(named: \"conversation\") {\n currentSubscription.updateQuery(toType: ChatMessage.self) { chatMessage in\n chatMessage.conversationID == conversation.id\n }\n } else {\n subscriptions.append(QuerySubscription(name: \"conversation\") { chatMessage in\n chatMessage.conversationID == conversation.id\n })\n }\n }\n }\n }\n}\n```\n\nIn this case, the query syncs all `ChatMessage` objects where the `conversationID` matches the `id` of the `Conversation` object passed to the view.\n\nThe view's body can then iterate over all of the matching, synced objects:\n\n```swift\nstruct ChatRoomBubblesView: View {\n...\n @ObservedResults(ChatMessage.self,\n sortDescriptor: SortDescriptor(keyPath: \"timestamp\", ascending: true)) var chats\n ...\n var body: some View {\n ...\n ForEach(chats) { chatMessage in\n ChatBubbleView(chatMessage: chatMessage,\n authorName: chatMessage.author != user.userName ? chatMessage.author : nil,\n isPreview: isPreview)\n }\n ...\n }\n}\n```\n\nAs it stands, there's some annoying behavior. If you open conversation A, go back, and then open conversation B, you'll initially see all of the messages from conversation A. The reason is that it takes a short time for the updated subscription to replace the `ChatMessage` objects in the synced Realm. I solve that by explicitly removing the subscription (which purges the synced objects) when closing the view:\n\n```swift\nstruct ChatRoomBubblesView: View {\n ...\n @Environment(\\.realm) var realm\n ...\n var body: some View {\n VStack {\n ...\n }\n .onDisappear { closeChatRoom() }\n }\n\n private func closeChatRoom() {\n clearSubscription()\n ...\n }\n\n private func clearSunscription() {\n print(\"Leaving room, clearing subscription\")\n let subscriptions = realm.subscriptions\n subscriptions.update {\n subscriptions.remove(named: \"conversation\")\n }\n }\n}\n```\n\nI made a design decision that I'd use the same name (\"conversation\") for this view, regardless of which conversation/chat room it's working with. An alternative would be to create a unique subscription whenever a new chat room is opened (including the ID of the conversation in the name). I could then avoid removing the subscription when navigating away from a chat room. This second approach would come with two advantages: \n\n1. The app should be more responsive when navigating between chat rooms (if you'd previously visited the chat room that you're opening).\n2. You can switch between chat rooms even when the device isn't connected to the internet.\n\nThe disadvantages of this approach would be:\n\n1. The app could end up with a lot of subscriptions (and there's a cost to them).\n2. The app continues to store all of the messages from any chat room that you've ever visited from this device. 
That consumes extra device storage and network bandwidth as messages from all of those rooms continue to be synced to the app.\n\nA third approach would be to stick with a single subscription (named \"conversations\") that matches every `ChatMessage` object. The view would then need to apply a filter on the resulting `ChatMessage` objects so it only displayed those for the open chat room. This has the same advantages as the second approach, but can consume even more storage as the device will contain messages from all chat rooms\u2014including those that the user has never visited.\n\nNote that a different user can log into the app from the same device. You don't want that user to be greeted with someone else's data. To avoid that, the app removes all subscriptions when a user logs out:\n\n```swift\nstruct LogoutButton: View {\n ...\n @Environment(\\.realm) var realm\n\n var body: some View {\n Button(\"Log Out\") { isConfirming = true }\n .confirmationDialog(\"Are you that you want to logout\",\n isPresented: $isConfirming) {\n Button(\"Confirm Logout\", role: .destructive, action: logout)\n Button(\"Cancel\", role: .cancel) {}\n }\n .disabled(state.shouldIndicateActivity)\n }\n\n private func logout() {\n ...\n clearSubscriptions()\n ...\n }\n\n private func clearSubscriptions() {\n let subscriptions = realm.subscriptions\n subscriptions.update {\n subscriptions.removeAll()\n }\n }\n}\n```\n## Conclusion\n\nIn this article, you've seen how to include Flexible Sync in your mobile app. I've shown the code for Swift, but the approach would be the same when building apps with Kotlin, Javascript, or .NET.\n\nSince this post was initially released, Flexible Sync has evolved to include more query and permission operators. For example, array operators (that would allow me to add tighter restrictions on who can ask to read which chat messages). \n\nYou can now limit which fields from a document get synced to a given user. This could allow the removal of the `Chatster` collection, as it's only there to provide a read-only view of a subset of `User` fields to other users.\n\nWant to suggest an enhancement or up-vote an existing request? The most effective way is through our feedback portal.\n\nGot questions? Ask them in our Community forum.\n", "format": "md", "metadata": {"tags": ["Realm", "iOS"], "pageDescription": "How to use Realm Flexible Sync in your app. Worked example of an iOS chat app.", "contentType": "Tutorial"}, "title": "Using Realm Flexible Sync in Your App\u2014an iOS Tutorial", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/advanced-rag-langchain-mongodb", "action": "created", "body": "# Adding Semantic Caching and Memory to Your RAG Application Using MongoDB and LangChain\n\n# Introduction\n\nRetrieval-augmented generation (RAG) is an architectural design pattern prevalent in modern AI applications that provides generative AI functionalities. RAG has gained adoption within generative applications due to its additional benefit of grounding the responses and outputs of large language models (LLMs) with some relevant, factual, and updated information. The key contribution of RAG is the supplementing of non-parametric knowledge with the parametric knowledge of the LLM to generate adequate responses to user queries.\n\nModern AI applications that leverage LLMs and generative AI require more than effective response capabilities. 
AI engineers and developers should consider two other functionalities before moving RAG applications to production. Semantic caching and memory are two important capabilities for generative AI applications that extend the usefulness of modern AI applications by reducing infrastructure costs, response latency, and conversation storage.\n\n**Semantic caching is a process that utilizes a data store to keep a record of the queries and their results based on the semantics or context within the queries themselves.** \n\nThis means that, as opposed to a traditional cache that caches data based on exact matches of data requests or specific identifiers, a semantic cache understands and leverages the meaning and relationships inherent in the data. Within an LLM or RAG application, this means that user queries that are both exact matches and contextually similar to any queries that have been previously cached will benefit from an efficient information retrieval process. \n\nTake, for example, an e-commerce platform's customer support chatbot; integrating semantic caching enables the system to respond to inquiries by understanding the context behind user queries. So, whether a customer asks about the \"best smartphone for night photography\" or \"a phone for night photos,\" the chatbot can leverage its semantic cache to pull relevant, previously stored responses, improving both the efficiency and relevance of its answers.\n\nLLM-powered chatbot interfaces are now prevalent in generative AI applications. Still, the conversations held between LLM and application users must be stored and retrieved to create a coherent and contextually relevant interaction history. The benefits of having a reference of interaction history lie in providing additional context to LLMs, understanding previously held conversations, improving the personalization of GenAI applications, and enabling the chatbot to provide more accurate responses to queries.\n\nMongoDB Atlas vector search capabilities enable the creation of a semantic cache, and the new LangChain-MongoDB integration makes integrating this cache in RAG applications easier. The LangChain-MongoDB integration also makes implementing a conversation store for interactions with RAG applications easier.\n\n**Here's what\u2019s covered in this tutorial:**\n- How to implement memory and storage of conversation history using LangChain and MongoDB\n- How to implement semantic cache using LangChain and MongoDB\n- Overview of semantic cache and memory utilization within RAG applications\n\nThe following GitHub repository contains all implementations presented in this tutorial, along with other use cases and examples of RAG implementations.\n\n----------\n\n# Step 1: Installing required libraries\n\nThis section guides you through the installation process of the essential libraries needed to implement the RAG application, complete with memory and history capabilities, within your current development environment. Here is the list of required libraries:\n\n- **datasets**: Python library to get access to datasets available on Hugging Face Hub\n- **langchain**: Python toolkit for LangChain\n- **langchain-mongodb**: Python package to use MongoDB as a vector store, semantic cache, chat history store, etc., in LangChain\n- **langchain-openai**: Python package to use OpenAI models with LangChain\n- **pymongo**: Python toolkit for MongoDB\n- **pandas**: Python library for data analysis, exploration, and manipulation\n\n```\n! 
pip install -qU datasets langchain langchain-mongodb langchain-openai pymongo pandas\n```\n\nDo note that this tutorial utilizes OpenAI embedding and base models. To access the models, ensure you have an\u00a0 OpenAI API key.\n\nIn your development environment, create a reference to the OpenAI API key.\n\n```\nimport getpass\nOPENAI_API_KEY = getpass.getpass(\"Enter your OpenAI API key:\")\n```\n\n----------\n\n# Step 2: Database setup\n\nTo handle the requirements for equipping the RAG application with the capabilities of storing interaction or conversation history and a semantic cache, two new collections must be created alongside the collection that will hold the main application data.\n\nCreating a database and collection within MongoDB is made simple with MongoDB Atlas.\n\n1. Register a free Atlas account or sign in to your existing Atlas account.\n2. Follow the instructions (select Atlas UI as the procedure)\u00a0 to deploy your first cluster.\u00a0\n3. Create the database: \\`langchain\\_chatbot\\`.\n4. Within the database\\` langchain\\_chatbot\\`, create the following collections:\u00a0\n - `data` : Hold all data that acts as a knowledge source for the chatbot.\n - `history` : Hold all conversations held between the chatbot and the application user.\n - `semantic_cache` : Hold all queries made to the chatbot along with their LLM responses.\n5. Create a vector search index named `vector_index` for the `data` collection. This index enables the RAG application to retrieve records as additional context to supplement user queries via vector search. Below is the JSON definition of the `data` collection vector search index.\u00a0\n\n```\n {\n \u00a0\u00a0\"fields\": \n \u00a0\u00a0\u00a0\u00a0{\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"numDimensions\": 1536,\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"path\": \"embedding\",\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"similarity\": \"cosine\",\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"type\": \"vector\"\n \u00a0\u00a0\u00a0\u00a0}\n \u00a0\u00a0]\n }\n```\n\n6\\. Create a [vector search index with a text filter named `vector_index` for the `semantic_cache` collection. This index enables the RAG application to retrieve responses to queries semantically similar to a current query asked by the application user. Below is the JSON definition of the `semantic_cache` collection vector search index.\n\n```\n {\n \u00a0\u00a0\"fields\": \n \u00a0\u00a0\u00a0\u00a0{\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"numDimensions\": 1536,\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"path\": \"embedding\",\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"similarity\": \"cosine\",\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"type\": \"vector\"\n \u00a0\u00a0\u00a0\u00a0},\n \u00a0\u00a0\u00a0\u00a0{\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"path\": \"llm_string\",\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"type\": \"filter\"\n \u00a0\u00a0\u00a0\u00a0}\n \u00a0\u00a0]\n }\n```\n\nBy the end of this step, you should have a database with three collections and two defined vector search indexes. The final step of this section is to obtain the connection URI string to the created Atlas cluster to establish a connection between the databases and the current development environment. 
Follow the steps to [get the connection string from the Atlas UI.\u00a0\n\nIn your development environment, create a reference to the MongoDB URI string.\n\n```\nMONGODB_URI = getpass.getpass(\"Enter your MongoDB connection string:\")\n```\n\n----------\n\n# Step 3: Download and prepare the dataset\n\nThis tutorial uses MongoDB\u2019s embedded_movies dataset. A datapoint within the movie dataset contains information corresponding to a particular movie; plot, genre, cast, runtime, and more are captured for each data point. After loading the dataset into the development environment, it is converted into a Pandas data frame object, which enables data structure manipulation and analysis with relative ease.\n\n```\nfrom datasets import load_dataset\nimport pandas as pd\n\ndata = load_dataset(\"MongoDB/embedded_movies\")\ndf = pd.DataFrame(data\"train\"])\n\n# Only keep records where the fullplot field is not null\ndf = df[df[\"fullplot\"].notna()]\n\n# Renaming the embedding field to \"embedding\" -- required by LangChain\ndf.rename(columns={\"plot_embedding\": \"embedding\"}, inplace=True)\n```\n\n**The code above executes the following operations:**\n\n - Import the `load_dataset` module from the `datasets` library, which enables the appropriate dataset to be loaded for this tutorial by specifying the path. The full dataset is loaded environment and referenced by the variable `data`.\n - Only the dataset's train partition is required to be utilized; the variable `df` holds a reference to the dataset training partition as a Pandas DataFrame.\n - The DataFrame is filtered to only keep records where the `fullplot` field is not null. This step ensures that any subsequent operations or analyses that rely on the `fullplot` field, such as the embedding process, will not be hindered by missing data. The filtering process uses pandas' notna() method to check for non-null entries in the `fullplot` column.\n - The column `plot_embedding` in the DataFrame is renamed to `embedding`. This step is necessary for compatibility with LangChain, which requires an input field named embedding.\n\nBy the end of the operations in this section, we have a full dataset that acts as a knowledge source for the chatbot and is ready to be ingested into the `data` collection in the `langchain_chatbot` database.\n\n----------\n\n# Step 4: Create a naive RAG chain with MongoDB Vector Store\u00a0\n\nBefore adding chat history and caching, let\u2019s first see how to create a simple RAG chain using LangChain, with MongoDB as the vector store. Here\u2019s what the workflow looks like:\n\n![Naive RAG workflow][1]\n\nThe user question is embedded, and relevant documents are retrieved from the MongoDB vector store. The retrieved documents, along with the user query, are passed as a prompt to the LLM, which generates an answer to the question.\n\nLet\u2019s first ingest data into a MongoDB collection. We will use this collection as the vector store for our RAG chain.\n\n```\nfrom pymongo import MongoClient\n\n# Initialize MongoDB python client\nclient = MongoClient(MONGODB_URI)\n\nDB_NAME = \"langchain_chatbot\"\nCOLLECTION_NAME = \"data\"\nATLAS_VECTOR_SEARCH_INDEX_NAME = \"vector_index\"\ncollection = client[DB_NAME][COLLECTION_NAME]\n```\n\nThe code above creates a MongoDB client and defines the database `langchain_chatbot` and collection `data` where we will store our data. 
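If you set up the `history` and `semantic_cache` collections through the Atlas UI in Step 2, nothing more is needed here. As an optional alternative, here is a minimal sketch (reusing the `client` and `DB_NAME` defined above, with the collection names from Step 2) that confirms the connection works and creates those collections from code:

```python
# Optional: verify the Atlas connection and create the supporting collections in code.
# Collections are also created implicitly on first insert, so this step is not required.
client.admin.command("ping")  # Raises an exception if the connection string is invalid

db = client[DB_NAME]
for name in ["history", "semantic_cache"]:
    if name not in db.list_collection_names():
        db.create_collection(name)
```
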
Remember, you will also need to create a vector search index to efficiently retrieve data from the MongoDB vector store, as documented in Step 2 of this tutorial. To do this, refer to our official [vector search index creation guide.\n\nWhile creating the vector search index for the `data` collection, ensure that it is named `vector_index` and that the index definition looks as follows:\n```\n {\n \u00a0\u00a0\"fields\": \n \u00a0\u00a0\u00a0\u00a0{\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"numDimensions\": 1536,\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"path\": \"embedding\",\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"similarity\": \"cosine\",\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"type\": \"vector\"\n \u00a0\u00a0\u00a0\u00a0}\n \u00a0\u00a0]\n }\n```\n\n> *NOTE*: We set `numDimensions`\u00a0 to `1536`\u00a0 because we use OpenAI\u2019s `text-embedding-ada-002` model to create embeddings.\n\nNext, we delete any existing documents from the \\`data\\` collection and ingest our data into it:\n\n```\n# Delete any existing records in the collection\ncollection.delete_many({})\n\n# Data Ingestion\nrecords = df.to_dict('records')\ncollection.insert_many(records)\n\nprint(\"Data ingestion into MongoDB completed\")\n```\n\nIngesting data into a MongoDB collection from a pandas DataFrame is a straightforward process. We first convert the DataFrame to a list of dictionaries and then utilize the `insert_many` method to bulk ingest documents into the collection.\n\nWith our data in MongoDB, let\u2019s use it to construct a vector store for our RAG chain:\n\n```\nfrom langchain_openai import OpenAIEmbeddings\nfrom langchain_mongodb import MongoDBAtlasVectorSearch\n\n# Using the text-embedding-ada-002 since that's what was used to create embeddings in the movies dataset\nembeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY, model=\"text-embedding-ada-002\")\n\n# Vector Store Creation\nvector_store = MongoDBAtlasVectorSearch.from_connection_string(\n connection_string=MONGODB_URI,\n namespace=DB_NAME + \".\" + COLLECTION_NAME,\n embedding= embeddings,\n index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,\n text_key=\"fullplot\"\n)\n\n```\n\nWe use the `from_connection_string` method of the `MongoDBAtlasVectorSearch` class from the `langchain_mongodb` integration to create a MongoDB vector store from a MongoDB connection URI. The `get_connection_string`\u00a0method takes the following arguments:\n\n- **connection_string**: MongoDB connection URI\n- **namespace**: A valid MongoDB namespace (database and collection)\n- **embedding**: Embedding model to use to generate embeddings for a vector search\n- **index_name**: MongoDB Atlas vector search index name\n- **text_key**: Field in the ingested documents that contain the text\n\nThe next step is to use the MongoDB vector store as a retriever in our RAG chain. In LangChain, a retriever is an interface that returns documents given a query. You can use a vector store as a retriever by using the `as_retriever` method:\n\n```\nretriever = vector_store.as_retriever(search_type=\"similarity\", search_kwargs={\"k\": 5})\n```\n`as_retriever` can take arguments such as `search_type` \u2014 i.e., what metric to use to retrieve documents. Here, we choose `similarity` since we want to retrieve the most similar documents to a given query. We can also specify additional search arguments such as\u00a0 `k` \u2014 i.e., the number of documents to retrieve. 
In our example, we set it to 5, which means the 5 most similar documents will be retrieved for a given query.\n\nThe final step is to put all of these pieces together to create a RAG chain.\u00a0\n\n> NOTE: Chains in LangChain are a sequence of calls either to an LLM, a\n> tool, or a data processing step. The recommended way to compose chains\n> in LangChain is using the [LangChain Expression\n> Language\n> (LCEL). Each component in a chain is referred to as a `Runnable` and\n> can be invoked, streamed, etc., independently of other components in\n> the chain.\n\n```python\n\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_core.runnables import RunnablePassthrough\nfrom langchain_core.output_parsers import StrOutputParser\n\n# Generate context using the retriever, and pass the user question through\nretrieve = {\"context\": retriever | (lambda docs: \"\\n\\n\".join(d.page_content for d in docs])), \"question\": RunnablePassthrough()}\ntemplate = \"\"\"Answer the question based only on the following context: \\\n{context}\n\nQuestion: {question}\n\"\"\"\n# Defining the chat prompt\nprompt = ChatPromptTemplate.from_template(template)\n# Defining the model to be used for chat completion\nmodel = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)\n# Parse output as a string\nparse_output = StrOutputParser()\n\n# Naive RAG chain \nnaive_rag_chain = (\n retrieve\n | prompt\n | model\n | parse_output\n)\n```\n\nThe code snippet above does the following:\n\n - Defines the `retrieve` component: It takes the user input (a question)\u00a0 and sends it to the `retriever` to obtain similar documents. It also formats the output to match the input format expected by the next Runnable, which in this case is a dictionary with `context` and `question` as keys. The `RunnablePassthrough()` call for the `question` key indicates that the user input is simply passed through to the next stage under the `question` key.\n - Defines the `prompt` component: It crafts a prompt by populating a prompt template with the `context` and `question` from the `retrieve` stage.\n - Defines the `model` component: This specifies the chat model to use. We use OpenAI \u2014 unless specified otherwise, the `gpt-3.5-turbo` model is used by default.\n - Defines the `parse_output` component: A simple output parser parses the result from the LLM into a string.\n - Defines a `naive_rag_chain`: It uses LCEL pipe ( | ) notation to chain together the above components.\n\nLet\u2019s test out our chain by asking a question. We do this using the \\`invoke()\\` method, which is used to call a chain on an input:\n\n```\nnaive_rag_chain.invoke(\"What is the best movie to watch when sad?\")\nOutput: Once a Thief\n```\n\n> NOTE: With complex chains, it can be hard to tell whether or not\n> information is flowing through them as expected. We highly recommend\n> using [LangSmith for debugging and\n> monitoring in such cases. Simply grab an API\n> key and add the following lines\n> to your code to view\n> traces\n> in the LangSmith UI:\n\n```\n export LANGCHAIN_TRACING_V2=true\n export LANGCHAIN_API_KEY=\n```\n\n----------\n\n# Step 5: Create a RAG chain with chat history\n\nNow that we have seen how to create a simple RAG chain, let\u2019s see how to add chat message history to it and persist it in MongoDB. The workflow for this chain looks something like this:\n\n.\n\n----------\n\n# FAQs\n\n1\\. 
**What is retrieval-augmented generation (RAG)?**\nRAG is a design pattern in AI applications that enhances the capabilities of large language models (LLMs) by grounding their responses with relevant, factual, and up-to-date information. This is achieved by supplementing LLMs' parametric knowledge with non-parametric knowledge, enabling the generation of more accurate and contextually relevant responses.\n\n2\\. **How does integrating memory and chat history enhance RAG applications?**\nIntegrating memory and chat history into RAG applications allows for the retention and retrieval of past interactions between the large language model (LLM) and users. This functionality enriches the model's context awareness, enabling it to generate responses that are relevant to the immediate query and reflect the continuity and nuances of ongoing conversations. By maintaining a coherent and contextually relevant interaction history, RAG applications can offer more personalized and accurate responses, significantly enhancing the user experience and the application's overall effectiveness.\n\n3\\. **Why is semantic caching important in RAG applications?**\nSemantic caching stores the results of user queries and their associated responses based on the query's semantics. This approach allows for efficient information retrieval when semantically similar queries are made in the future, reducing API calls to LLM providers and lowering both latency and operational costs.\n\n4\\. **How does MongoDB Atlas support RAG applications?**\nMongoDB Atlas offers vector search capabilities, making it easier to implement semantic caches and conversation stores within RAG applications. This integration facilitates the efficient retrieval of semantically similar queries and the storage of interaction histories, enhancing the application's overall performance and user experience.\n\n5\\. **How can semantic caching reduce query execution times in RAG applications?**\nRAG applications can quickly retrieve cached answers for semantically similar queries without recomputing them by caching responses to queries based on their semantic content. This significantly reduces the time to generate responses, as demonstrated by the decreased query execution times upon subsequent similar queries.\n\n6\\. **What benefits does the LangChain-MongoDB integration offer?**\nThis integration simplifies the process of adding semantic caching and memory capabilities to RAG applications. It enables the efficient management of conversation histories and the implementation of semantic caches using MongoDB's powerful vector search features, leading to improved application performance and user experience.\n\n7\\. **How does one measure the impact of semantic caching on a RAG application?**\nBy monitoring query execution times before and after implementing semantic caching, developers can observe the efficiency gains the cache provides. 
A noticeable reduction in execution times for semantically similar queries indicates the cache's effectiveness in improving response speeds and reducing operational costs.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2f4885e0ad80cf6c/65fb18fda1e8151092d5d332/Screenshot_2024-03-20_at_17.12.00.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6653138666384116/65fb2b7996251beeef7212b8/Screenshot_2024-03-20_at_18.31.05.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd29de502dea33ac5/65fb1de1f4a4cf95f4150473/Screenshot_2024-03-20_at_16.39.13.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI", "Pandas"], "pageDescription": "This guide outlines how to enhance Retrieval-Augmented Generation (RAG) applications with semantic caching and memory using MongoDB and LangChain. It explains integrating semantic caching to improve response efficiency and relevance by storing query results based on semantics. Additionally, it describes adding memory for maintaining conversation history, enabling context-aware interactions. \n\nThe tutorial includes steps for setting up MongoDB, implementing semantic caching, and incorporating these features into RAG applications with LangChain, leading to improved response times and enriched user interactions through efficient data retrieval and personalized experiences.", "contentType": "Tutorial"}, "title": "Adding Semantic Caching and Memory to Your RAG Application Using MongoDB and LangChain", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/how-use-cohere-embeddings-rerank-modules-mongodb-atlas", "action": "created", "body": "# How to Use Cohere Embeddings and Rerank Modules with MongoDB Atlas\n\nThe daunting task that developers currently face while developing solutions powered by the retrieval augmented generation (RAG) framework is the choice of retrieval mechanism. Augmenting the large language model (LLM) prompt with relevant and exhaustive information creates better responses from such systems.. One is tasked with choosing the most appropriate embedding model in the case of semantic similarity search. Alternatively, in the case of full-text search implementation, you have to be thorough about your implementation to achieve a precise recall and high accuracy in your results. Sometimes, the solutions require a combined implementation that benefits from both retrieval mechanisms.\n\nIf your current full-text search scoring workflow is leaving things to be desired, or if you find yourself spending too much time writing numerous lines of code to get semantic search functionality working within your applications, then Cohere and MongoDB can help. To prevent these issues from holding you back from leveraging powerful AI search functionality or machine learning within your application, Cohere and MongoDB offer easy-to-use and fully managed solutions.\n\nCohere is an AI company specializing in large language models.\n\n1. With a powerful tool for embedding natural language in their projects, it can help you represent more accurate, relevant, and engaging content as embeddings. The Cohere language model also offers a simple and intuitive API that allows you to easily integrate it with your existing workflows and platforms. \n2. The Cohere Rerank module is a component of the Cohere natural language processing system that helps to select the best output from a set of candidates. 
The module uses a neural network to score each candidate based on its relevance, semantic similarity, theme, and style. The module then ranks the candidates according to their scores and returns the top N as the final output.\n\nMongoDB Atlas is a fully managed developer data platform service that provides scalable, secure, and reliable data storage and access for your applications. One of the key features of MongoDB Atlas is the ability to perform vector search and full-text search on your data, which can enhance the capabilities of your AI/ML-driven applications. MongoDB Atlas can help you build powerful and flexible AI/ML-powered applications that can leverage both structured and unstructured data. You can easily create and manage search indexes, perform queries, and analyze results using MongoDB Atlas's intuitive interface, APIs, and drivers. MongoDB Atlas Vector Search provides a unique feature \u2014 pre-filtering and post-filtering on vector search queries \u2014 that helps users control the behavior of their vector search results, thereby improving the accuracy and retrieval performance, and saving money at the same time.\n\nTherefore, with Cohere and MongoDB Atlas, we can demonstrate techniques where we can easily power a semantic search capability on your private dataset with very few lines of code. Additionally, you can enhance the existing ranking of your full-text search retrieval systems using the Cohere Rerank module. Both techniques are highly beneficial for building more complex GenAI applications, such as RAG- or LLM-powered summarization or data augmentation.\n\n## What will we do in this tutorial?\n\n### Store embeddings and prepare the index\n\n1. Use the Cohere Embed Jobs to generate vector embeddings for the first time on large datasets in an asynchronous and scheduled manner.\n2. Add vector embeddings into MongoDB Atlas, which can store and index these vector embeddings alongside your other operational/metadata. \n3. Finally, prepare the indexes for both vector embeddings and full-text search on our private dataset.\n\n### Search with vector embeddings\n\n1. Write a simple Python function to accept search terms/phrases and pass it through the Cohere embed API again to get a query vector.\n2. Take these resultant query vector embeddings and perform a vector search query using the $vectorsearch operator in the MongoDB Aggregation Pipeline.\n3. Pre-filter documents using meta information to narrow the search across your dataset, thereby speeding up the performance of vector search results while retaining accuracy.\n4. The retrieved semantically similar documents can be post-filtered (relevancy score) to demonstrate a higher degree of control over the semantic search behaviour.\n\n### Search with text and Rerank with Cohere\n\n1. Write a simple Python function to accept search terms/phrases and prepare a query using the $search operator and MongoDB Aggregation Pipeline.\n2. Take these resultant documents and perform a reranking operation of the retrieved documents to achieve higher accuracy with full-text search results using the Cohere rerank module.\n\n- Cohere CLI tool\n\nAlso, if you have not created a MongoDB Atlas instance for yourself, you can follow the tutorial to create one. This will provide you with your `MONGODB_CONNECTION_STR`. 
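Before initializing any credentials, make sure the Python packages used throughout this tutorial are available in your notebook environment. The package list below is inferred from the imports that appear later in this tutorial, so adjust it to your setup:

```python
# Install the packages imported later in this tutorial (Cohere SDK, MongoDB driver,
# pandas for the DataFrame, and s3fs for reading the sample dataset from S3).
! pip install -qU cohere pymongo pandas s3fs
```
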
\n\nRun the following lines of code in Jupyter Notebook to initialize the Cohere secret or API key and MongoDB Atlas connection string.\n\n```python\nimport os\nimport getpass\n# cohere api key\ntry:\n cohere_api_key = os.environ\"COHERE_API_KEY\"]\nexcept KeyError:\n cohere_api_key = getpass.getpass(\"Please enter your COHERE API KEY (hit enter): \")\n\n# MongoDB connection string\ntry:\n MONGO_CONN_STR = os.environ[\"MONGODB_CONNECTION_STR\"]\nexcept KeyError:\n MONGO_CONN = getpass.getpass(\"Please enter your MongoDB Atlas Connection String (hit enter): \")\n```\n\n### Load dataset from the S3 bucket\n\nRun the following lines of code in Jupyter Notebook to read data from an AWS S3 bucket directly to a pandas dataframe.\n\n```python\nimport pandas as pd\nimport s3fs\ndf = pd.read_json(\"s3://ashwin-partner-bucket/cohere/movies_sample_dataset.jsonl\", orient=\"records\", lines=True)\ndf.to_json(\"./movies_sample_dataset.jsonl\", orient=\"records\", lines=True)\ndf[:3]\n```\n\n![Loaded AWS S3 Dataset][2]\n\n### Initialize and schedule the Cohere embeddings job to embed the \"sample_movies\" dataset\n\nHere we will create a movies dataset in Cohere by uploading our sample movies dataset that we fetched from the S3 bucket and have stored locally. Once we have created a dataset, we can use the Cohere embed jobs API to schedule a batch job to embed all the entire dataset.\n\nYou can run the following lines of code in your Jupyter Notebook to upload your dataset to Cohere and schedule an embedding job.\n\n```python\nimport cohere \nco_client = cohere.Client(cohere_api_key, client_name='mongodb')\n# create a dataset in Cohere Platform\ndataset = co_client.create_dataset(name='movies',\n data=open(\"./movies_sample_dataset.jsonl\",'r'),\n keep_fields=[\"overview\",\"title\",\"year\"],\n dataset_type=\"embed-input\").wait()\ndataset.wait()\ndataset\n\ndataset.wait()\n# Schedule an Embedding job to run on the entire movies dataset\nembed_job = co_client.create_embed_job(dataset_id=dataset.id, \n input_type='search_document',\n model='embed-english-v3.0', \n truncate='END')\nembed_job.wait()\noutput_dataset = co_client.get_dataset(embed_job.output.id)\nresults = list(map(lambda x:{\"text\":x[\"text\"], \"embedding\": x[\"embeddings\"][\"float\"]},output_dataset))\nlen(results)\n```\n\n### How to initialize MongoDB Atlas and insert data to a MongoDB collection\n\nNow that we have created the vector embeddings for our sample movies dataset, we can initialize the MongoDB client and insert the documents into our collection of choice by running the following lines of code in the Jupyter Notebook.\n\n```python\nfrom pymongo import MongoClient\nmongo_client = MongoClient(MONGO_CONN_STR)\n# Upload documents along with vector embeddings to MongoDB Atlas Collection\noutput_collection = mongo_client[\"sample_mflix\"][\"cohere_embed_movies\"]\nif output_collection.count_documents({})>0:\n output_collection.delete_many({})\ne = output_collection.insert_many(results)\n```\n\n### Programmatically create vector search and full-text search index\n\nWith the latest update to the **Pymongo** Python package, you can now create your vector search index as well as full-text search indexes from the Python client itself. 
You can also create vector indexes using the MongoDB Atlas UI or `mongosh`.\n\nRun the following lines of code in your Jupyter Notebook to create search and vector search indexes on your new collection.\n\n```\noutput_collection.create_search_index({\"definition\":\n {\"mappings\":\n {\"dynamic\": true,\n \"fields\": {\n \"embedding\" : {\n \"dimensions\": 1024,\n \"similarity\": \"cosine\",\n \"type\": \"vector\"\n },\n \"fullplot\":\n }}},\n \"name\": \"default\"\n }\n)\n```\n\n### Query MongoDB vector index using $vectorSearch\n\nMongoDB Atlas brings the flexibility of using vector search alongside full-text search filters. Additionally, you can apply range, string, and numeric filters using the aggregation pipeline. This allows the end user to control the behavior of the semantic search response from the search engine. The below lines of code will demonstrate how you can perform vector search along with pre-filtering on the **year** field to get movies earlier than **1990.** Plus, you have better control over the relevance of returned results, so you can perform post-filtering on the response using the MongoDB Query API. In this demo, we are filtering on the **score** field generated as a result of performing the vector similarity between the query and respective documents, using a heuristic to retain only the accurate results.\n\nRun the below lines of code in Jupyter Notebook to initialize a function that can help you achieve **vector search + pre-filter + post-filter**.\n\n```python\ndef query_vector_search(q, prefilter = {}, postfilter = {},path=\"embedding\",topK=2):\n ele = co_client.embed(model=\"embed-english-v3.0\",input_type=\"search_query\",texts=[q])\n query_embedding = ele.embeddings[0]\n vs_query = {\n \"index\": \"default\",\n \"path\": path,\n \"queryVector\": query_embedding,\n \"numCandidates\": 10,\n \"limit\": topK,\n }\n if len(prefilter)>0:\n vs_query[\"filter\"] = prefilter\n new_search_query = {\"$vectorSearch\": vs_query}\n project = {\"$project\": {\"score\": {\"$meta\": \"vectorSearchScore\"},\"_id\": 0,\"title\": 1, \"release_date\": 1, \"overview\": 1,\"year\": 1}}\n if len(postfilter.keys())>0:\n postFilter = {\"$match\":postfilter}\n res = list(output_collection.aggregate([new_search_query, project, postFilter]))\n else:\n res = list(output_collection.aggregate([new_search_query, project]))\n return res\n```\n\n#### Vector search query example\n\nRun the below lines of code in Jupyter Notebook cell and you can see the following results.\n\n```python\nquery_vector_search(\"romantic comedy movies\", topK=5)\n```\n\n![Vector Search Query Example Results][3]\n\n#### Vector search query example with prefilter\n\n```python\nquery_vector_search(\"romantic comedy movies\", prefilter={\"year\":{\"$lt\": 1990}}, topK=5)\n```\n\n![Vector Search with Prefilter Example Results][4]\n\n#### Vector search query example with prefilter and postfilter to control the semantic search relevance and behaviour\n\n```python\nquery_vector_search(\"romantic comedy movies\", prefilter={\"year\":{\"$lt\": 1990}}, postfilter={\"score\": {\"$gt\":0.76}},topK=5)\n```\n\n![Vector Search with Prefilter and Postfilter Example Results][5]\n\n### Leverage MongoDB Atlas full-text search with Cohere Rerank module\n\n[Cohere Rerank is a module in the Cohere suite of offerings that enhances the quality of search results by leveraging semantic search. This helps elevate the traditional search engine performance, which relies solely on keywords. 
Rerank goes a step further by ranking results retrieved from the search engine based on their semantic relevance to the input query. This pass of re-ranking search results helps achieve more appropriate and contextually similar search results.\n\nTo demonstrate how the Rerank module can be leveraged with MongoDB Atlas full-text search, we can follow along by running the following line of code in your Jupyter Notebook.\n\n```python\n# sample search query using $search operator in aggregation pipeline\ndef query_fulltext_search(q,topK=25):\n v = {\"$search\": {\n \"text\": {\n \"query\": q,\n \"path\":\"overview\"\n }\n }}\n project = {\"$project\": {\"score\": {\"$meta\": \"searchScore\"},\"_id\": 0,\"title\": 1, \"release-date\": 1, \"overview\": 1}}\n docs = list(output_collection.aggregate(v,project, {\"$limit\":topK}]))\n return docs\n# results before re ranking\ndocs = query_fulltext_search(\"romantic comedy movies\", topK=10)\ndocs\n```\n\n![Cohere Rerank Model Sample Results][6]\n\n```python\n# After passing the search results through the Cohere rerank module\nq = \"romantic comedy movies\"\ndocs = query_fulltext_search(q)\nresults = co_client.rerank(query=q, documents=list(map(lambda x:x[\"overview\"], docs)), top_n=5, model='rerank-english-v2.0') # Change top_n to change the number of results returned. If top_n is not passed, all results will be returned.\nfor idx, r in enumerate(results):\n print(f\"Document Rank: {idx + 1}, Document Index: {r.index}\")\n print(f\"Document Title: {docs[r.index]['title']}\")\n print(f\"Document: {r.document['text']}\")\n print(f\"Relevance Score: {r.relevance_score:.2f}\")\n print(\"\\n\")\n```\n\nOutput post reranking the full-text search results:\n\n```\nDocument Rank: 1, Document Index: 22\nDocument Title: Love Finds Andy Hardy\nDocument: A 1938 romantic comedy film which tells the story of a teenage boy who becomes entangled with three different girls all at the same time.\nRelevance Score: 0.99\n\nDocument Rank: 2, Document Index: 12\nDocument Title: Seventh Heaven\nDocument: Seventh Heaven or De zevende zemel is a 1993 Dutch romantic comedy film directed by Jean-Paul Lilienfeld.\nRelevance Score: 0.99\n\nDocument Rank: 3, Document Index: 19\nDocument Title: Shared Rooms\nDocument: A new romantic comedy feature film that brings together three interrelated tales of gay men seeking family, love and sex during the holiday season.\nRelevance Score: 0.97\n\nDocument Rank: 4, Document Index: 3\nDocument Title: Too Many Husbands\nDocument: Romantic comedy adapted from a Somerset Maugham play.\nRelevance Score: 0.97\n\nDocument Rank: 5, Document Index: 20\nDocument Title: Walking the Streets of Moscow\nDocument: \"I Am Walking Along Moscow\" aka \"Ya Shagayu Po Moskve\" (1963) is a charming lyrical comedy directed by Georgi Daneliya in 1963 that was nominated for Golden Palm at Cannes Film Festival. Daneliya proved that it is possible to create a masterpiece in the most difficult genre of romantic comedy. Made by the team of young and incredibly talented artists that besides Daneliya included writer/poet Gennady Shpalikov, composer Andrei Petrov, and cinematographer Vadim Yusov (who had made four films with Andrei Tarkovski), and the dream cast of the talented actors even in the smaller cameos, \"I Am Walking Along Moscow\" keeps walking victoriously through the decades remaining deservingly one of the best and most beloved Russian comedies and simply one of the best Russian movies ever made. 
Funny and gentle, dreamy and humorous, romantic and realistic, the film is blessed with the eternal youth and will always take to the walk on the streets of Moscow new generations of the grateful viewers.\nRelevance Score: 0.96\n```\n\n## Summary\n\nIn this tutorial, we were able to demonstrate the following:\n\n1. Using the Cohere embedding along with MongoDB Vector Search, we were able to show how easy it is to achieve semantic search functionality alongside your operational data functions.\n2. With Cohere Rerank, we were able to search results using full-text search capabilities in MongoDB and then rank them by semantic relevance, thereby delivering richer, more relevant results without replacing your existing search architecture setup.\n3. The implementations were achieved with minimal lines of code and showcasing ease of use.\n4. Leveraging Cohere Embeddings and Rerank does not need a team of ML experts to develop and maintain. So the monthly costs of maintenance were kept to a minimum.\n5. Both solutions are cloud-agnostic and, hence, can be set up on any cloud platform.\n\nThe same can be found on a [notebook which will help reduce the time and effort following the steps in this blog.\n\n## What's next?\n\nTo learn more about how MongoDB Atlas is helping build application-side ML integration in real-world applications, you can visit the MongoDB for AI page.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte8f8f2d8681106dd/660c5dfcdd5b9e752ba8949a/1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt11b31c83a7a30a85/660c5e236c4a398354e46705/2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf09db7ce89c89f05/660c5e4a3110d0a96d069608/3.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5707998b8d57764c/660c5e75c3bc8bfdfbdd1fc1/4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt533d00bfde1ec48f/660c5e94c3bc8b26dedd1fcd/5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc67a9ac477d5029e/660c5eb0df43aaed1cf11e70/6.png", "format": "md", "metadata": {"tags": ["Atlas", "Python"], "pageDescription": "", "contentType": "Tutorial"}, "title": "How to Use Cohere Embeddings and Rerank Modules with MongoDB Atlas", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/add-memory-to-javascript-rag-application-mongodb-langchain", "action": "created", "body": "# Add Memory to Your JavaScript RAG Application Using MongoDB and LangChain\n\n## Introduction\n\nAI applications with generative AI capabilities, such as text and image generation, require more than just the base large language models (LLMs). This is because LLMs are limited to their parametric knowledge, which can be outdated and not context-specific to a user query. The retrieval-augmented generation (RAG) design pattern solves the problem experienced with naive LLM systems by adding relevant information and context retrieved from an information source, such as a database, to the user's query before obtaining a response from the base LLM. The RAG architecture design pattern for AI applications has seen wide adoption due to its ease of implementation and effectiveness in grounding LLM systems with up-to-date and relevant data.\n\nFor developers creating new AI projects that use LLMs and this kind of advanced AI, it's important to think about more than just giving smart answers. 
Before they share their RAG-based projects with the world, they need to add features like memory. Adding memory to your AI systems can help by lowering costs, making them faster, and handling conversations in a smarter way.\n\nChatbots that use LLMs are now a regular feature in many online platforms, from customer service to personal assistants. However, one of the keys to making these chatbots more effective lies in their ability to recall and utilize previous conversations. By maintaining a detailed record of interactions, AI systems can significantly improve their understanding of the user's needs, preferences, and context. This historical insight allows the chatbot to offer responses that are not only relevant but also tailored to the individual user, enhancing the overall user experience.\n\nConsider, for example, a customer who contacts an online bookstore's chatbot over several days, asking about different science fiction novels and authors. On the first day, the customer asks for book recommendations based on classic science fiction themes. The next day, they return to ask about books from specific authors in that genre. If the chatbot keeps a record of these interactions, it can connect the dots between the customer's various interests. By the third interaction, the chatbot could suggest new releases that align with the customer's demonstrated preference for classic science fiction, even recommending special deals or related genres the customer might not have explored yet.\n\nThis ability goes beyond simple question-and-answer dynamics; it creates a conversational memory for the chatbot, making each interaction more personal and engaging. Users feel understood and valued, leading to increased satisfaction and loyalty. In essence, by keeping track of conversations, chatbots powered by LLMs transform from impersonal answering machines into dynamic conversational partners capable of providing highly personalized and meaningful engagements.\n\nMongoDB Atlas Vector Search and the new LangChain-MongoDB integration make adding these advanced data handling features to RAG projects easier.\n\nWhat\u2019s covered in this article:\n\n* How to add memory and save records of chats using LangChain and MongoDB\n* How adding memory helps in RAG projects\n\nFor more information, including step-by-step guides and examples, check out the GitHub repository.\n\n> This article outlines how to add memory to a JavaScript-based RAG application. See how it\u2019s done in Python and even add semantic caching!\n\n## Step 1: Set up the environment\n\nYou may be used to notebooks that use Python, but you may have noticed that the notebook linked above uses JavaScript, specifically Deno. \n\nTo run this notebook, you will need to install Deno and set up the Deno Jupyter kernel. You can also follow the instructions.\n\nBecause Deno does not require any packages to be \u201cinstalled,\u201d it\u2019s not necessary to install anything with npm. \n\nHere is a breakdown of the dependencies for this project:\n\n* mongodb: official Node.js driver from MongoDB\n* nodejs-polars: JavaScript library for data analysis, exploration, and manipulation\n* @langchain: JavaScript toolkit for LangChain\n* @langchain/openai: JavaScript library to use OpenAI with LangChain\n* @langchain/mongodb: JavaScript library to use MongoDB as a vector store and chat history store with LangChain\n\nYou\u2019ll also need an OpenAI API key since we\u2019ll be utilizing OpenAI for embedding and base models. 
Save your API key as an environment variable.\n\n## Step 2: Set up the database\n\nFor this tutorial, we\u2019ll use a free tier cluster on Atlas. If you don\u2019t already have an account, register, then follow the instructions to deploy your first cluster.\n\nGet your database connection string from the Atlas UI and save it as an environment variable.\n\n## Step 3: Download and prepare the dataset\n\nWe\u2019re going to use MongoDB\u2019s sample dataset called embedded_movies. This dataset contains a wide variety of movie details such as plot, genre, cast, and runtime. Embeddings on the full_plot field have already been created using OpenAI\u2019s `text-embedding-ada-002` model and can be found in the plot_embedding field.\n\nAfter loading the dataset, we\u2019ll use Polars to convert it into a DataFrame, which will allow us to manipulate and analyze it easily.\n\nThe code above executes the following operations:\n\n* Import the nodejs-polars library for data management.\n* fetch the sample_mflix.embedded_movies.json file directly from HuggingFace.\n* The df variable parses the JSON into a DataFrame.\n* The DataFrame is cleaned up to keep only the records that have information in the fullplot field. This guarantees that future steps or analyses depending on the fullplot field, like the embedding procedure, are not disrupted by any absence of data.\n* Additionally, the plot_embedding column within the DataFrame is renamed to embedding. This step is necessary since LangChain requires an input field named \u201cembedding.\u201d\n\nAfter finishing the steps in this part, we end up with a complete dataset that serves as the information base for the chatbot. Next, we\u2019ll add the data into our MongoDB database and set up our first RAG chain using it.\n\n## Step 4: Create a naive RAG chain with a MongoDB vector store\n\nWe\u2019ll start by creating a simple RAG chain using LangChain, with MongoDB as the vector store. Once we get this set up, we\u2019ll add chat history to optimize it even further.\n\n in MongoDB Atlas. This is what enables our RAG application to query semantically similar records to use as additional context in our LLM prompts.\n\nBe sure to create your vector search index on the `data` collection and name it `vector_index`. Here is the index definition you\u2019ll need:\n\n> **NOTE**: We set `numDimensions` to `1536` because we use OpenAI\u2019s `text-embedding-ada-002` model to create embeddings.\n\nNow, we can start constructing the vector store for our RAG chain.\n\nWe\u2019ll use `OpenAIEmbeddings` from LangChain and define the model used. Again, it\u2019s the `text-embedding-ada-002` model, which was used in the original embeddings of this dataset.\n\nNext, we define our configuration by identifying the collection, index name, text key (full-text field of the embedding), and embedding key (which field contains the embeddings).\n\nThen, pass everything into our `MongoDBAtlasVectorSearch()` method to create our vector store.\n\nNow, we can \u201cdo stuff\u201d with our vector store. We need a way to return the documents that get returned from our vector search. For that, we can use a retriever. (Not the golden kind.)\n\nWe\u2019ll use the retriever method on our vector store and identify the search type and the number of documents to retrieve represented by k. \n\nThis will return the five most similar documents that match our vector search query. 
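If it helps to see this step spelled out, a minimal sketch of the vector store and retriever setup described above might look like the following. It is an assumption-laden outline rather than the notebook's exact code: the `MONGODB_ATLAS_URI` environment variable and the `rag_db` database name are placeholders, and option names can vary slightly between versions of `@langchain/openai` and `@langchain/mongodb`.

```javascript
// Minimal sketch (not the notebook's exact code). The npm: specifiers are used
// because the notebook runs on Deno; adjust them if you are running in Node.js.
import { MongoClient } from "npm:mongodb";
import { OpenAIEmbeddings } from "npm:@langchain/openai";
import { MongoDBAtlasVectorSearch } from "npm:@langchain/mongodb";

// Assumed environment variable and database name -- change these to match your setup.
const client = new MongoClient(Deno.env.get("MONGODB_ATLAS_URI"));
const collection = client.db("rag_db").collection("data");

// Same embedding model that was used to generate the dataset's embeddings.
const embeddings = new OpenAIEmbeddings({
  openAIApiKey: Deno.env.get("OPENAI_API_KEY"),
  modelName: "text-embedding-ada-002",
});

// Point LangChain at the collection, the Atlas Vector Search index,
// the text field, and the field holding the embeddings.
const vectorStore = new MongoDBAtlasVectorSearch(embeddings, {
  collection,
  indexName: "vector_index", // the index created on the data collection
  textKey: "fullplot",       // full-text field the embeddings were generated from
  embeddingKey: "embedding", // field containing the vectors
});

// Retriever that returns the five most similar documents for a query.
const retriever = vectorStore.asRetriever({ searchType: "similarity", k: 5 });
```

From here, the retriever plugs into the RAG chain assembled in the next step.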
\n\nThe final step is to assemble everything into a RAG chain.\n\n> **KNOWLEDGE**: In LangChain, the concept of chains refers to a sequence that may include interactions with an LLM, utilization of a specific tool, or a step related to processing data. To effectively construct these chains, it is advised to employ the LangChain Expression Language (LCEL). Within this structure, each part of a chain is called a Runnable, allowing for independent operation or streaming, separate from the chain's other components.\n\nHere\u2019s the breakdown of the code above:\n\n1. retrieve: Utilizes the user's input to retrieve similar documents using the retriever. The input (question) also gets passed through using a RunnablePassthrough().\n2. prompt: ChatPromptTemplate allows us to construct a prompt with specific instructions for our AI bot or system, passing two variables: context and question. These variables are populated from the retrieve stage above.\n3. model: Here, we can specify which model we want to use to answer the question. The default is currently gpt-3.5-turbo if unspecified. \n4. naiveRagChain: Using a RunnableSequence, we pass each stage in order: retrieve, prompt, model, and finally, we parse the output from the LLM into a string using StringOutputParser().\n\nIt\u2019s time to test! Let\u2019s ask it a question. We\u2019ll use the invoke() method to do this.\n\n## Step 5: Implement chat history into a RAG chain\n\nThat was a simple, everyday RAG chain. Next, let\u2019s take it up a notch and implement persistent chat message history. Here is what that could look like.\n\n. \n\n## FAQs\n\n1. **What is retrieval-augmented generation (RAG)?**\n\n RAG is a way of making big computer brain models (like LLMs) smarter by giving them the latest and most correct information. This is done by mixing in extra details from outside the model's built-in knowledge, helping it give better and more right answers.\n\n2. **How does integrating memory and chat history enhance RAG applications?**\n\n Adding memory and conversation history to RAG apps lets them keep and look back at past messages between the large language model (LLM) and people. This feature makes the model more aware of the context, helping it give answers that fit the current question and match the ongoing conversations flow. By keeping track of a chat history, RAG apps can give more personal and correct answers, greatly making the experience better for the user and improving how well the app works overall.\n\n3. **How does MongoDB Atlas support RAG applications?**\n\n MongoDB's vector search capabilities enable RAG applications to become smarter and provide more relevant responses. It enhances memory functions, streamlining the storage and recall of conversations. This boosts context awareness and personalizes user interactions. The result is a significant improvement in both application performance and user experience, making AI interactions more dynamic and user-centric.\n\n4. **What benefits does the LangChain-MongoDB integration offer?**\n\n This setup makes it easier to include meaning-based memory in RAG apps. 
It allows for the easy handling of past conversation records through MongoDB's strong vector search tools, leading to a better running app and a nicer experience for the user.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt892d50c61236c4b6/660b015018980fc9cf2025ab/js-rag-history-2.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt712f63e36913f018/660b0150071375f3acc420e1/js-rag-history-3.png", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "AI"], "pageDescription": "Unlock the full potential of your JavaScript RAG application with MongoDB and LangChain. This guide dives into enhancing AI systems with a conversational memory, improving response relevance and user interaction by integrating MongoDB's Atlas Vector Search and LangChain-MongoDB. Discover how to setup your environment, manage chat histories, and construct advanced RAG chains for smarter, context-aware applications. Perfect for developers looking to elevate AI projects with real-time, personalized user engagement.", "contentType": "Tutorial"}, "title": "Add Memory to Your JavaScript RAG Application Using MongoDB and LangChain", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/build-newsletter-website-mongodb-data-platform", "action": "created", "body": "# Build a Newsletter Website With the MongoDB Data Platform\n\n>\n>\n>Please note: This article discusses Stitch. Stitch is now MongoDB Realm. All the same features and functionality, now with a new name. Learn more here. We will be updating this article in due course.\n>\n>\n\n\"This'll be simple,\" I thought. \"How hard can it be?\" I said to myself, unwisely.\n\n*record scratch*\n\n*freeze frame*\n\nYup, that's me. You're probably wondering how I ended up in this\nsituation.\n\nOnce upon a time, there was a small company, and that small company had an internal newsletter to let people know what was going on. Because the company was small and everyone was busy, the absolute simplest and most minimal approach was chosen, i.e. a Google Doc that anyone in the Marketing team could update when there was relevant news. This system worked well.\n\nAs the company grew, one Google Doc became many Google Docs, and an automated email was added that went out once a week to remind people to look at the docs. Now, things were not so simple. Maybe the docs got updated, and maybe they didn't, because it was not always clear who owned what. The people receiving the email just saw links to the docs, with no indication of whether there was anything new or good in there, and after a while, they stopped clicking through, or only did so occasionally. The person who had been sending the emails got a new job and asked for someone to take over the running of the newsletter.\n\nThis is where I come in. Yes, I failed to hide when the boss came asking for volunteers.\n\nI took one look at the existing system, and knew it could not continue as it was \u2014 so of course, I also started looking for suckers er I mean volunteers. Unfortunately, I could not find anyone who wanted to take over whitewashing this particular fence, so I set about trying to figure out how hard it could be to roll my own automated fence-whitewashing system to run the newsletter back end.\n\nPretty quickly I had my minimum viable product, thanks to MongoDB Atlas and Stitch. And the best part? The whole thing fits into the free tier of both. 
You can get your own free-forever instance here, just by supplying your email address. And if you ask me nicely, I might even throw some free credits your way to try out some of the paid features too.\n\n>\n>\n>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.\n>\n>\n\n## Modelling Data: The Document\n\nThe first hurdle of this project was unlearning bad relational habits. In the relational database world, a newsletter like this would probably use several JOINs:\n\n- A table of issues\n- Containing references to a table of news items\n- Containing references to further tables of topics, authors\n\nIn the document-oriented world, we don't do it that way. Instead, I defined a simple document format:\n\n``` javascript\n{\n _id: 5e715b2099e27fa8539274ea,\n section: \"events\",\n itemTitle: \"Webinar] Building FHIR Applications with MongoDB, April 14th\",\n itemText: \"MongoDB and FHIR both natively support the JSON format, the standard e...\",\n itemLink: \"https://www.mongodb.com/webinar/building-fhir-applications-with-mongod...\",\n tags: [\"fhir\", \"healthcare\", \"webinar\"],\n createdDate: 2020-03-17T23:01:20.038+00:00\n submitter: \"marketing.genius@mongodb.com\",\n updates: [],\n published: \"true\",\n publishedDate: 2020-03-30T07:10:06.955+00:00\n email: \"true\"\n}\n```\n\nThis structure should be fairly self-explanatory. Each news item has:\n\n- A title\n- Some descriptive text\n- A link to more information\n- One or more topic tags\n- Plus some utility fields to do things like tracking edits\n\nEach item is part of a section and can be published simply to the web, or also to email. I don't want to spam readers with everything, so the email is curated; only items with `email: true` go to email, while everything else just shows up on the website but not in readers' inboxes.\n\nOne item to point out is the updates array, which is empty in this particular example. This field was a later addition to the format, as I realised when I built the edit functionality that it would be good to track who made edits and when. The flexibility of the document model meant that I could simply add that field without causing any cascading changes elsewhere in the code, or even to documents that had already been created in the database.\n\nSo much for the database end. Now we need something to read the documents and do something useful with them.\n\nI went with [Stitch, which together with the Atlas database is another part of the MongoDB Cloud platform. In keeping with the general direction of the project, Stitch makes my life super-easy by taking care of things like authentication, access rules, MongoDB queries, services, and functions. It's a lot more than just a convenient place to store files; using Stitch let me write the code in JavaScript, gave me somewhere easy to host the application logic, and connects to the MongoDB Atlas database with a single line of code:\n\n``` javascript\nclient = stitch.Stitch.initializeDefaultAppClient(APP_ID);\n```\n\n`APP_ID` is, of course, my private application ID, which I'm not going\nto include here! All of the code for the app can be found in my personal Github repository; almost all the functionality (and all of the code from the examples below) is in a single Javascript file.\n\n## Reading Documents\n\nThe newsletter goes out in HTML email, and it has a companion website, so my Stitch app assembles DOM sections in Javascript to display the\nnewsletter. 
I won't go through the whole thing, but each step looks\nsomething like this:\n\n``` javascript\nlet itemTitleContainer = document.createElement(\"div\");\nitemTitleContainer.setAttribute(\"class\", \"news-item-title\");\nitemContainer.append(itemTitleContainer);\n\nlet itemTitle = document.createElement(\"p\");\nitemTitle.textContent = currentNewsItem.itemTitle;\nitemTitleContainer.append(itemTitle);\n```\n\nThis logic showcases the benefit of the document object model in MongoDB. `currentNewsItem` is an object in JavaScript which maps exactly to the document in MongoDB, and I can access the fields of the document simply by name, as in `currentNewsItem.itemTitle`. I don't have to create a whole separate object representation in my code and laboriously populate that with relational queries among many different tables of a database; I have the exact same object representation in the code as in the database.\n\nIn the same way, inputting a new item is simple because I can build up a JSON object from fields in a web form:\n\n``` javascript\nworkingJSONe.name] = e.value;\n```\n\nAnd then I can write that directly into the database:\n\n``` javascript\nsubmitJSON.createdDate = today;\nif ( submitJSON.section == null ) { submitJSON.section = \"news\"; }\nsubmitJSON.submitter = userEmail;\ndb.collection('atf').insertOne(submitJSON)\n .then(returnResponse => {\n console.log(\"Return Response: \", returnResponse);\n window.alert(\"Submission recorded, thank you!\");\n })\n.catch(errorFromInsert => {\n console.log(\"Error from insert: \", errorFromInsert);\n window.alert(\"Submission failed, sorry!\");\n});\n```\n\nThere's a little bit more verbose feedback and error handling on this one than in some other parts of the code since people other than me use this part of the application!\n\n## Aggregating An Issue\n\nSo much for inserting news items into the database. What about when someone wants to, y'know, read an issue of the newsletter? The first thing I need to do is to talk to the MongoDB Atlas database and figure out what is the most recent issue, where an issue is defined as the set of all the news items with the same published date. MongoDB has a feature called the [aggregation pipeline, which works a bit like piping data from one command to another in a UNIX shell. An aggregation pipeline has multiple stages, each one of which makes a transformation to the input data and passes it on to the next stage. It's a great way of doing more complex queries like grouping documents, manipulating arrays, reshaping documents into different models, and so on, while keeping each individual step easy to reason about and debug.\n\nIn my case, I used a very simple aggregation pipeline to retrieve the most recent publication dates in the database, with three stages. In the first stage, using $group, I get all the publication dates. In the second stage, I use $match to remove any null dates, which correspond to items without a publication date \u2014 that is, unpublished items. 
Finally, I sort the dates, using \u2014 you guessed it \u2014 $sort to get the most recent ones.\n\n``` javascript\nlet latestIssueDate = db.collection('atf').aggregate( \n { $match : { _id: {$ne: null }}},\n { $group : { _id : \"$publishedDate\" } },\n { $sort: { _id: -1 }}\n]).asArray().then(latestIssueDate => {\n thisIssueDate = latestIssueDate[0]._id;\n prevIssueDate = latestIssueDate[1]._id;\n ATFmakeIssueNav(thisIssueDate, prevIssueDate);\ntheIssue = { published: \"true\", publishedDate: thisIssueDate };\ndb.collection('atf').find(theIssue).asArray().then(dbItems => {\n orderSections(dbItems); })\n .catch(err => { console.error(err) });\n}).catch(err => { console.error(err) });\n```\n\nAs long as I have a list of all the publication dates, I can use the next most recent date for the navigation controls that let readers look at previous issues of the newsletter. The most important usage, though, is to retrieve the current issue, namely the list of all items with that most recent publication date. That's what the `find()` command does, and it takes as its argument a simple document:\n\n``` javascript\n{ published: \"true\", publishedDate: thisIssueDate }\n```\n\nIn other words, I want all the documents which are published (not the drafts that are sitting in the queue waiting to be published), and where the published date is the most recent date that I found with the aggregation pipeline above.\n\nThat reference to `orderSections` is a utility function that makes sure that the sections of the newsletter come out in the right order. I can also catch any errors that occur, either in the aggregation pipeline or in the find operation itself.\n\n## Putting It All Together\n\nAt this point publishing a newsletter is a question of selecting which items go into the issue and updating the published date for all those items:\n\n``` javascript\nconst toPublish = { _id: { '$in': itemsToPublish } };\nlet today = new Date();\nconst update = { '$set': { publishedDate: today, published: \"true\" } };\nconst options = {};\ndb.collection('atf').updateMany(toPublish, update, options)\n .then(returnResponse => {console.log(\"Return Response: \", returnResponse);})\n .catch(errorFromUpdate => {console.log(\"Error from update: \", errorFromUpdate);});\n```\n\nThe [updateMany() command has three documents as its arguments.\n\n- The first, the filter, specifies which documents to update, which here means all the ones with an ID in the `itemsToPublish` array.\n- The second is the actual update we are going to make, which is to set the `publishedDate` to today's date and mark them as published.\n- The third, optional argument, is actually empty in my case because I don't need to specify any options.\n\n## Moving The Mail\n\nNow I could send emails myself from Stitch, but we already use an external specialist service that has a nice REST API. I used a Stitch Function to assemble the HTTP calls and talk to that external service. Stitch Functions are a super-easy way to run simple JavaScript functions in the Stitch serverless platform, making it easy to implement application logic, securely integrate with cloud services and microservices, and build APIs \u2014 exactly my use case!\n\nI set up a simple HTTP service, which I can then access easily like this:\n\n``` javascript\nconst http = context.services.get(\"mcPublish\");\n```\n\nAs is common, the REST API I want to use requires an API key. I generated the key on their website, but I don't want to leave that lying around. 
Luckily, Stitch also lets me define a secret, so I don't need that API key in plaintext:\n\n``` javascript\nlet mcAPIkey = context.values.get(\"MCsecret\");\n```\n\nAnd that (apart from 1200 more lines of special cases, admin functions, workarounds, and miscellanea) is that. But I wanted a bit more visibility on which topics were popular, who was using the service and so on. How to do that?\n\n## Charting Made Super Easy\n\nFortunately, there's an obvious answer to my prayers in the shape of Charts, yet another part of the MongoDB Cloud platform, which let me very quickly build a visualisation of activity on the back-end.\n\nHere's how simple that is: I have my database, imaginatively named \"newsletter\", and the collection, named \"atf\" for Above the Fold, the name of the newsletter I inherited. I can see all of the fields from my document, so I can take the `_id` field for my X-axis, and then the `createdDate` for the Y-axis, binning by month, to create a real-time chart of the number of news items submitted each month.\n\nIt really is that easy to create visualizations in Charts, including much more complicated ones than this, using all MongoDB's rich data types. Take a look at some of the more advanced options and give it a go with your own data, or with the sample data in a free instance of MongoDB Atlas.\n\nIt was a great learning experience to build this thing, and the whole exercise gave me a renewed appreciation for the power of MongoDB, the document model, and the extended MongoDB Cloud platform - both the Atlas database and the correlated services like Stitch and Charts. There's also room for expansion; one of the next features I want to build is search, using MongoDB Atlas' Text Search feature.\n\n## Over To You\n\nAs I mentioned at the beginning, one of the nice things about this project is that the whole thing fits in the free tier of MongoDB Atlas, Stitch, and Charts. You can sign up for your own free-forever instance and start building today, no credit card required, and no expiry date either. There's a helpful onboarding wizard that will walk you through loading some sample data and performing some basic tasks, and when you're ready to go further, the MongoDB docs are top-notch, with plenty of worked examples. Once you get into it and want to learn more, the best place to turn is MongoDB University, which gives you the opportunity to learn MongoDB at your own pace. You can also get certified on MongoDB, which will get you listed on our public list of certified MongoDB professionals.", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "How I ended up building a whole CMS for a newsletter \u2014 when it wasn't even my job", "contentType": "Article"}, "title": "Build a Newsletter Website With the MongoDB Data Platform", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/go/http-basics-with-go", "action": "created", "body": "# HTTP basics With Go 1.22\n\n# HTTP basics with Go 1.22\n\nGo is a wonderful programming language \u2013very productive with many capabilities. This series of articles is designed to offer you a feature walkthrough of the language, while we build a realistic program from scratch.\n\nIn order for this to work, there are some things we must agree upon:\n\n- This is not a comprehensive explanation of the Go syntax. 
I will only explain the bits strictly needed to write the code of these articles.\n- Typing the code is better than just copying and pasting, but do as you wish.\n- Materials are available to try by yourself at your own pace, but it is recommended to play along if you do this in a live session.\n- If you are a Golang newbie, type and believe. If you have written some Go, ask any questions. If you have Golang experience, there are comments about best practices \u2013let's discuss those. In summary: Ask about the syntax, ask about the magic, talk about the advanced topics, or go to the bar.\n- Finally, although we are only going to cover the essential parts, the product of this series is the seed for a note-keeping back end where we will deal with the notes and its metadata. I hope you like it.\n\n## Hello world\n\n1. Let's start by creating a directory for our project and initializing the project. Create a directory and get into it. Initialize the project as a Go module with the identifier \u201cgithub.com/jdortiz/go-intro,\u201d which you should change to something that is unique and owned by you.\n ```shell\n go mod init github.com/jdortiz/go-intro\n ```\n2. In the file explorer of VSCode, add a new file called `main.go` with the following content:\n ```go\n package main\n \n import \"fmt\"\n \n func main() {\n fmt.Println(\"Hola Caracola\")\n }\n ```\n3. Let's go together through the contents of that file to understand what we are doing here.\n 1. Every source file must belong to a `package`. All the files in a directory must belong to the same package. Package `main` is where you should create your `main` function.\n 2. `func` is the keyword for declaring functions and `main` is where your program starts to run at.\n 3. `fmt.Println()` is a function of the standard library (stdlib) to print some text to the standard output. It belongs to the `fmt` package.\n 4. Having the `import` statement allows us to use the `fmt` package in the code, as we are doing with the `fmt.Println()` function.\n4. The environment is configured so we can run the program from VS Code. Use \"Run and Debug\" on the left bar and execute the program. The message \"Hola caracola\" will show up on the debug console.\n5. You can also run the program from the embedded terminal by using\n ```sh\n go run main.go\n ```\n\n## Simplest web server\n\n1. Go's standard library includes all the pieces needed to create a full-fledged HTTP server. Until version 1.22, using third-party packages for additional functionality, such as easily routing requests based on the HTTP verb, was very common. Go 1.22 has added most of the features of those packages in a backward compatible way.\n2. Webservers listen to requests done to a given IP address and port. Let's define that in a constant inside of the main function:\n ```go\n const serverAddr string = \"127.0.0.1:8081\"\n ```\n3. If we want to reply to requests sent to the root directory of our web server, we must tell it that we are interested in that URL path and what we want to happen when a request is received. We do this by using `http.HandleFunc()` at the bottom of the main function, with two parameters: a pattern and a function. The pattern indicates the path that we are interested in (like in `\"/\"` or `\"/customers\"` ) but, since Go 1.22, the pattern can also be used to specify the HTTP verb, restrict to a given host name, and/or extract parameters from the URL. We will use `\"GET /\"`, meaning that we are interested in GET requests to the root. 
The function takes two parameters: an `http.ResponseWriter`, used to produce the response, and an `http.Request` that holds the request data. We will be using an anonymous function (a.k.a. lambda) that initially doesn't do anything. You will need to import the \"net/http\" package, and VS Code can do it automatically using its *quick fix* features.\n ```go\n http.HandleFunc(\"GET /\", func(w http.ResponseWriter, r *http.Request) {\n })\n ```\n4. Inside of our lambda, we can use the response writer to add a message to our response. We use the `Write()` method of the response writer that takes a slice of bytes (i.e., a \"view\" of an array), so we need to convert the string. HTML could be added here.\n ```go\n w.Write(]byte(\"HTTP Caracola\"))\n ```\n5. Tell the server to accept connections to the IP address and port with the functionality that we have just set up. Do it after the whole invocation to `http.HandleFunc()`.\n ```go\n http.ListenAndServe(serverAddr, nil)\n ```\n6. `http.ListenAndServe()` returns an error when it finishes. It is a good idea to wrap it with another function that will log the message when that happens. `log` also needs to be imported: Do it yourself if VSCode didn't take care of it.\n ```go\n log.Fatal(http.ListenAndServe(serverAddr, nil))\n ```\n7. Compile and run. The codespace will offer to use a browser or open the port. You can ignore this for now.\n8. If you run the program from the terminal, open a second terminal using the \"~~\" on the right of your zsh shell. Make a request from the terminal to get our web server to respond. If you have chosen to use your own environment, this won't work unless you are using Go 1.22~~.\n ```shell\n curl -i localhost:8081/\n ```\n\n## (De)Serialization\n![Unloading and deserializing task][1]\n1. HTTP handlers can also be implemented as regular functions \u2013i.e., non-anonymous\u2013 and are actually easier to maintain. Let's define one for an endpoint that can be used to create a note after the `main` function.\n ```go\n func createNote(w http.ResponseWriter, r *http.Request) {\n }\n ```\n2. Before we can implement that handler, we need to define a type that will hold the data for a note. The simplest note could have a title and text. We will put this code before the `main` function.\n ```go\n type Note struct {\n Title string\n Text string\n }\n ```\n3. But we can have some more data, like a list of categories, that in Go is represented as a slice of strings (`[]string`), or a field that uses another type that defines the scope of this note as a combination of a project and an area. The complete definition of these types would be:\n ```go\n type Scope struct {\n Project string\n Area string\n }\n \n type Note struct {\n Title string\n Tags []string\n Text string\n Scope Scope\n }\n ```\n4. Notice that both the names of the types and the names of the fields start with a capital letter. That is the way to say in Go that something is exported and it would also apply to function names. It is similar to using a `public` attribute in other programming languages.\n5. Also, notice that field declarations have the name of the field first and its type later. The latest field is called \"Scope,\" because it is exported, and its type, defined a few lines above, is also called Scope. No problem here \u2013Go will understand the difference based on the position.\n6. Inside of our `createNote()` handler, we can now define a variable for that type. The order is also variable name first, type second. 
`note` is a valid variable from here on, but at the moment all the fields are empty.\n ```go\n var note Note\n ```\n7. Data is exchanged between HTTP servers and clients using some serialization format. One of the most common ones nowadays is JSON. After the previous line, let's create a decoder that can convert bytes from the HTTP request stream into an actual object. The `encoding/json` package of the standard library provides what we need. Notice that I hadn't declared the `decoder` variable. I use the \"short variable declaration\" (`:=`), which declares and assigns value to the variable. In this case, Go is also doing type inference.\n ```go\n decoder := json.NewDecoder(r.Body)\n ```\n8. This decoder can now be used in the next line to deserialize the data in the HTTP request. That method returns an error, which will be `nil` (no value) if everything went well, or some (error) value otherwise. Notice that we use `&` to pass a reference to the variable, so the method can change its value.\n ```go\n err := decoder.Decode(\u00ace)\n ```\n9. The expression can be wrapped to be used as the condition in an if statement. It is perfectly fine in Go to obtain some value and then compare in an expression after a semicolon. There are no parentheses surrounding the conditional expression.\n ```go\n if err := decoder.Decode(\u00ace); err != nil {\n }\n ```\n10. If anything goes wrong, we want to inform the HTTP client that there is a problem and exit the function. This early exit is very common when you handle errors in Go. `http.Error()` is provided by the `net/http` package, writes to the response writer the provided error message, and sets the HTTP status.\n ```go\n http.Error(w, err.Error(), http.StatusBadRequest)\n return\n ```\n11. If all goes well, we just print the value of the note that was sent by the client. Here, we use another function of the `fmt` package that writes to a Writer the given data, using a format string. Format strings are similar to the ones used in C but with some extra options and more safety. `\"%+v\"` means print the value in a default format and include the field names (% to denote this is a format specifier, v for printing the value, the + for including the field names).\n ```go\n fmt.Fprintf(w, \"Note: %+v\", note)\n ```\n12. Let's add this handler to our server. It will be used when a POST request is sent to the `/notes` path.\n ```go\n http.HandleFunc(\"POST /notes\", createNote)\n ```\n13. Run this new version.\n14. Let's first test what happens when it cannot deserialize the data. We should get a 400 status code and the error message in the body.\n ```shell\n curl -iX POST localhost:8081/notes\n ```\n15. Finally, let's see what happens when we pass some good data. The deserialized data will be printed to the standard output of the program.\n ```shell\n curl -iX POST -d '{ \"title\": \"Master plan\", \"tags\": [\"ai\",\"users\"], \"text\": \"ubiquitous AI\", \"scope\": {\"project\": \"world domination\", \"area\":\"strategy\"} }' localhost:8081/notes\n ```\n\n## Conclusion\n\nIn this article, we have learned:\n\n- How to start and initialize a Go project.\n- How to write a basic HTTP server from scratch using just Go standard library functionality.\n- How to add endpoints to our HTTP server that provide different requests for different HTTP verbs in the client request.\n- How to deserialize JSON data from the request and use it in our program.\n\nDeveloping this kind of program in Go is quite easy and requires no external packages or, at least, not many. 
If this has been your first step into the world of Go programming, I hope that you have enjoyed it and that if you had some prior experience with Go, there was something of value for you.\n\nIn the next article of this series, we will go a step further and persist the data that we have exchanged with the HTTP client. [This repository with all the code for this article and the next ones so you can follow along.\n\nStay curious. Hack your code. See you next time!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt76b3a1b7e9e5be0f/661f8900394ea4203a75b196/unloading-serialization.jpg", "format": "md", "metadata": {"tags": ["Go"], "pageDescription": "This tutorial explains how to create a basic HTTP server with a couple of endpoints to backend developers with no prior experience on Go. It uses only the standard library functionality, but takes advantages of the new features introduced in Go 1.22.", "contentType": "Tutorial"}, "title": "HTTP basics With Go 1.22", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/practical-exercise-atlas-device-sdk-web-sync", "action": "created", "body": "# A Practical Exercise of Atlas Device SDK for Web With Sync (Preview)\n\n**Table of contents**\u00a0\n* Atlas Device SDK for web with Device Sync and its real-world usage\u00a0\n* Architecture\u00a0\n* Basic components\n* Building your own React web app\u00a0\n - Step 1: Setting up the back end\n - Step 2: Creating an App Services app\u00a0\n - Step 3: Getting ready for Device Sync\u00a0\n - Step 4: Atlas Device SDK\n - Step 5: Let the data flow (start using sync)!\n* Implementation of the coffee app\n* A comparison between Device Sync and the Web SDK without Sync\n - What will our web app look like without Device Sync?\n - Which one should we choose?\n* Conclusions\n* Appendix\u00a0\n\n## Atlas Device SDK for web with Sync and its real-world usage\n\nThe Device Sync feature of the Web SDK is a powerful tool designed to bring real-time data synchronization and automatic conflict resolution capabilities to cross-platform applications, seamlessly bridging the gap between users\u2019 back ends and client-side data. It facilitates the creation of dynamic user experiences by ensuring eventual data consistency across client apps with syncing and conflict resolution.\n\nIn the real-world environment, certain client apps benefit from a high level of automation, therefore bringing users an intuitive interaction with the client app. For example, a coffee consumption counter web app that allows the user to keep track of the cups of coffee he/she consumes from different devices and dynamically calculates daily coffee intake will create a ubiquitous user experience.\n\nIn this tutorial, I will demonstrate how a developer can easily enable Device Sync to the above-mentioned coffee consumption web app. By the end of this article, you will be able to build a React web app that first syncs the cups of coffee you consumed during the day with MongoDB Atlas, then syncs the same data to different devices running the app. However, our journey doesn\u2019t stop here. I will also showcase the difference between the aforementioned Web SDK with Sync (preview) and the Web SDK without automatic syncing when our app needs to sync data with the MongoDB back end. 
Hopefully, this article will also help you to make a choice between these two options.\n\n## Architecture\n\nIn this tutorial, we will create two web apps with the same theme: a coffee consumption calculator. The web app that benefits from Device Sync will be named `Coffee app Device Sync` while the one following the traditional MongoDB client will be named `Coffee app`. \n\nThe coffee app with Device Sync utilizes Atlas Device Sync to synchronize data between the client app and backend server in real time whilst our coffee app without Device Sync relies on the MongoDB driver. \n\nData synchronization relies on the components below.\n\n 1. _App Services_: App Services and its Atlas Device SDKs are a suite of development tools optimized for cross-platform devices such as mobile, IoT, and edge, to manage data via MongoDB\u2019s edge database called Realm, which in turn leverages Device Sync. With various SDKs designed for different platforms, Realm enables the possibility of building data-driven applications for multiple mobile devices. The Web SDK we are going to explore in this article is one of the handy tools that help developers build intuitive web app experiences. \n 2. _User authentication_: Before setting up Device Sync, we will need to authenticate a user. In our particular case, for the sake of simplicity, the `anonymous` method is being used. This procedure allows the sync from the client app to be associated with a specific user account on your backend App Services app.\n 3. _Schema_: You can see schema as the description of how your data looks, or a data model. Schema exists both in your client app\u2019s code and App Services\u2019 UI. You will need to provide the name of it within the configuration. \n 4. _Sync configuration_: It is mandatory to provide the authenticated user, sync mode (flexible: true), and `initialSubscriptions` which defines a function that sets up initial subscriptions when Realm is opened.\n 5. _Opening a synced realm_: As you will see, we use `Realm.open(config);` to open a realm that is synchronized with Atlas. The whole process between your client app and back end, as you may have guessed, is bridged by Device Sync. \n\nOnce Realm is opened with the configuration we discussed above, any changes to the coffee objects in the local realm are _automatically_ synchronized with Atlas. Likewise, changes made in Atlas are synchronized back to the local realm, keeping the data up-to-date across devices and the server. What\u2019s even better is that the process of data synchronization happens seamlessly in the background without any user action involved.\n\n)\n\nBack end:\n* MongoDB Atlas, as the cloud storage\n* Data (in this article, we will use dummy data)\n* MongoDB App Services app, as the web app\u2019s business logic\n\nThese components briefly describe the building blocks of a web app powered by MongoDB App Services. The coffee app is just an example to showcase how Device Sync works and the possibilities for developers to build more complicated apps. \n\n## Building your own React web app\n\nIn this section, I will provide step-by-step instructions on how to build your own copy of Coffee App. By the end, you will be able to interact with Realm and Device Sync on your own. \n\n### Step 1. Setting up the back end\n\nMongoDB Atlas is used as the backend server of the web app. Essentially, Atlas has different tiers, from M0 to M700, which represent the difference in storage size, cloud server performance, and limitations from low to high. 
For more details on this topic, kindly refer to our documentation.\n\nIn this tutorial, we will use the free tier (M0), as it is just powerful enough for learning purposes. \n\nTo set up an M0 cluster, you will first need to create an account with MongoDB. \n\nOnce the account is ready, we can proceed to \u201cCreate a Project.\u201d \n\n as this will not be in the scope of this article.\n\n### Step 2. Creating an App Services app\n\nApp Services (previously named Realm) is a suite of cloud-based tools (i.e., serverless functions, Device Sync, user management, rules) designed to streamline app development with Atlas. In other words, Atlas works as the datasource for App Services. \n\nOur coffee app will utilize App Services in such a way that the back end will provide data sync among client apps. \n\nFor this tutorial, we just need to create an empty app. You can easily do so by skipping any template recommendations. \n\n gives a very good explanation of why schema is a mandatory and important component of Device Sync: \n\n_To use Atlas Device Sync, you must define your data model in two formats:_\n\n* _**App Services schema**: This is a server-side schema that defines your data in BSON. Device Sync uses the App Services schema to convert your data to MongoDB documents, enforce validation, and synchronize data between client devices and Atlas._\n* _**Realm object schema**: This is client-side schema of data defined using the Realm SDKs. Each Realm SDK defines the Realm object schema in its own language-specific way. The Realm SDKs use this schema to store data in the Realm database and synchronize data with Device Sync._\n\n> Note: As you can see, Development Mode allows your client app to define a schema and have it automatically reflected server-side. (Simply speaking, schema on your server will be modified by the client app.) \n\nAs you probably already guessed, this has the potential to mess with your app\u2019s schema and cause serious issues (i.e., stopping Device Sync) in the production environment. \n\nWe only use Development Mode for learning purposes and a development environment, hence the name.\n\nBy now, we have created an App Services app and configured it to be ready for our coffee app project.\n\n### Step 3. Getting ready for Device Sync\n\nWe are now ready to implement Device Sync in the coffee app. Sync happens when the following requirements are satisfied.\n\n* Client devices are connected to the network and have an established connection to the server.\n* The client has data to sync with the server and it initiates a sync session. \n* The client sends IDENT messages to the server. *You can see IDENT messages as an identifier that the client uses to tell the server exactly what Realm file it needs to sync and the status of the client realm (i.e., if the current version is the client realm\u2019s most recently synced server version). \n\nThe roadmap below shows the workflow of a web app with the Device Sync feature. \n\n and MongoDB Atlas Device SDK for the coffee app in this article. 
\n \nDespite the differences in programming languages and functionalities, SDKs share the following common points:\n\nDespite the differences in programming languages and functionalities, SDKs share the following common points:\n* Providing a core database API for creating and working with local databases\n* Providing an API that you need to connect to an Atlas App Services server, and therefore, server-side features like Device Sync, functions, triggers, and authentication will be available at your disposal \n\nWe will be using Atlas Device SDK for web later. \n\n### Step 5. Let the data flow\n\n**Implementation**:\n\nWithout further ado, I will walk you through the process of creating the coffee app.\n\nOur work here is concentrated on the following parts:\n\n* App.css \u2014 adjusts everything about UI style, color\n* App.js \u2014\u00a0authentication, data model, business logic, and Sync\n* Footer.js. \u2014 add optional information about the developer\n* index.css. \u2014\u00a0add fonts and web page styling\n\nAs mentioned previously, React will be used as the library for our web app. Below are some options you can follow to create the project. \n\nAs mentioned previously, React will be used as the library for our web app. Below are some options you can follow to create the project. \n\n**Option 1 (the old-fashioned way)**: Create React App (CRA) has always been an \u201cofficial\u201d way to start a React project. However, it is no longer recommended in the React documents. The coffee app was originally developed using CRA. However, if you are coming from the older set-up or just wish to see how Device Sync is implemented within a React app, this will still be fine to follow. \n\n**Option 2**: Vite addresses a major issue with CRA, the cumbersome dependency size, by introducing dependency pre-bundling. It provides lightning-fast, cold-starting performance. \n\nIf you already have your project built using CRA, there is also a fast way to make it Vite-compatible by using the code below. \n\n`npx nx@latest init`\n\nThe line above will automatically detect your project\u2019s dependency and structure and make it compatible with Vite. Your application will therefore also enjoy the high performance brought by Vite. \n\nOur simple example app has most of its functionality within the `App.js` file. Therefore, let\u2019s focus on this one and dive into the details. \n\n(1)\nDependency-wise, below are the necessary `imports`. \n\n```\n import React, { useEffect, useState } from 'react';\nimport Realm, { App } from 'realm';\nimport './App.css';\nimport Footer from './Footer';\n```\n\nNotice `realm` is being imported above as we need to do this to the source files where we need to interact with the database. \n\n(Consider using the `@realm/react` package to leverage hooks and providers for opening realms and managing the data. Refer to MongoDB\u2019s other Web Sync Preview example app for how to integrate @realm/react.)\n\n(2)\n\n```\n const REALM_APP_ID = 'mycoffeemenu-hflty'; // Input APP ID here.\nconst app = new App({ id: REALM_APP_ID });\n```\n\nTo link your client app to an App Services app, you will need to supply the App ID within the code. The App ID is a unique identifier for each App Services app, and it will be needed as a reference while working with many MongoDB products. \n\nNote: The client app refers to your actual web app whilst the App Services app refers to the app we create on the cloud, which resides on the Atlas App Services server. 
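Putting the pieces described above together (anonymous authentication, a client-side schema, the sync configuration, and `Realm.open`), a minimal sketch of the client code could look like this. The `Coffee` schema fields below are assumptions made for illustration; the real app's data model may differ.

```javascript
import Realm, { App } from 'realm';

const app = new App({ id: 'mycoffeemenu-hflty' }); // your App ID here

// Client-side Realm object schema -- assumed fields for illustration only.
const CoffeeSchema = {
  name: 'Coffee',
  primaryKey: '_id',
  properties: {
    _id: 'objectId',
    user_id: 'string',
    drink: 'string',   // e.g., "espresso", "latte"
    consumed: 'int',   // cups consumed today
  },
};

async function openSyncedRealm() {
  // 1. Authenticate (anonymous, for simplicity).
  const user = await app.logIn(Realm.Credentials.anonymous());

  // 2. Configure flexible Device Sync with an initial subscription.
  const config = {
    schema: [CoffeeSchema],
    sync: {
      user,
      flexible: true,
      initialSubscriptions: {
        update: (subs, realm) => {
          subs.add(realm.objects('Coffee'));
        },
      },
    },
  };

  // 3. Open the realm; changes now sync with Atlas in the background.
  return Realm.open(config);
}
```

Once the realm is open, writes to `Coffee` objects inside `realm.write()` blocks are synced with Atlas in the background, and changes made elsewhere flow back automatically.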
\n\nYou can easily copy your App ID from the App Services UI.\n\n.\n* Sync `config`: Within the `sync` block, we supply the information shown below. \n\n`user`: Passing in the user\u2019s login credentials\n`flexible`: Defining what Sync mode the app will use \n`initialSubscriptions`: Defining the queries for the data that needs to be synced; the two parameters `subs` and `realm` refer to the sync\u2019s subscriptions and local database instance. \n\nWe now have built a crucial part that manages the data model used for Sync, authentication, sync mode, and subscription. This part customizes the initial data sync process and tailors it to fit the business logic. \n\n(5)\nOur coffee app calculates the cups of coffee we consume during the day. The simple app relies on inputs from the user. In this case, the data flowing in and out of the app is the number of different coffees the user consumes.\n\n, as shown by the code snippet below. \n\n```\n await coffeeCollection.updateOne( \n { user_id: user.id },\n { $set: { consumed: total } },\n { upsert: true }\n );\n```\n\nHere, we use `upsert` to update and insert the changed values of specific coffee drinks. As you can see, this code snippet works directly with documents stored in the back end. Instead of opening up a realm with the Device Sync feature, the coffee app without Device Sync still uses Web SDK. \n\nHowever, the above-described method is also known as \u201cMongoDB Atlas Client.\u201d The name itself is quite self-explanatory as it allows the client application to take advantage of Atlas features and access data from your app directly.\n\n2: Which one should we choose?\n\nEssentially, whether you should use the Device Sync feature from the Web SDK or follow the more traditional Atlas Client depends on your use cases, working environments, and existing codebase. We talked about two different ways to keep data updated between the client apps and the back end. Although both sample apps don\u2019t look very different due to their simple functionality, this will be quite different in more complicated applications. \n \nLook at the UI of both implementations of the web apps: \n\n, functions) we can keep a heavy workload on the App Services server while making sure our web app remains responsive.\n\n* No encryption at rest: You can understand this limitation as Realm JS Web SDK only encrypts data in transit between the browser and server over HTTPS. Anything that\u2019s saved in the device\u2019s memory will be stored in a non-encrypted format. \n\nHowever, there\u2019s no need to panic. As previously mentioned, Device Sync uses roles and rules to strictly control users\u2019 access permissions to different data. \n\nA limitation of Atlas Client is the way data is updated/downloaded between the client and server. Compared to Device Sync, Atlas Client does not have the ability to keep data synced automatically. This can also be seen as a feature, in some use cases, where data should only be synced manually.\n\n## Conclusion\n\nIn this article, we: \n\n* Talked about the usage of the App Services Web SDK in a React web app. \n* Compared Web SDK\u2019s Device Sync feature against Atlas Client.\n* Discussed which method we should choose.\n\nThe completed code examples are available in the appendix below. You can use them as live examples of MongoDB\u2019s App Services Web SDK. As previously mentioned, the coffee apps are designed to be simple and straightforward when it comes to demoing the basic functionality of the Web SDK and its sync feature. 
It is also easy to add extra features and tailor the app\u2019s source code according to your specific needs. For example:\n\n 1. Instead of anonymous authentication, further configure `credentials` to use other more secure auth methods, such as email/password.\n 2. Modify the data model to fit your app\u2019s theme. For now, our coffee app keeps track of coffee consumption. However, the app can be quickly rebuilt into a recipe app or something similar without complicated modifications and refactoring. \n\nAlternatively, the example apps can also serve as starting points for your own web app project. \n\nApp Services\u2019 Web SDK is MongoDB\u2019s answer to developing modern web apps that take advantage of Realm (a local database) and Atlas (a cloud storage solution). Web SDK now supports Device Sync (in preview) whilst before the preview release, Atlas Client allowed web apps to modify and manipulate data on the server. Both of the solutions have the use cases where they are the best fit, and there is no \u201cright answer\u201d that you need to follow. \n\nAs a developer, a better choice can be made by first figuring out the purpose of the app and how you would like it to interact with users. If you already have been working on an existing project, it is beneficial to check whether you indeed need the background auto-syncing feature (Device Sync), compared to using queries to perform CRUD operations (Atlas Client). Take a look at our example app and notice the `App.js` file contains the basic components that are needed for Device Sync to work. Therefore, you will be able to decide whether it is a good idea to integrate Device Sync into your project.\n\n### Appendix (Useful links)\n\n* App Services\n* Atlas Device SDK for the web\n* Realm Web and Atlas Device Sync (preview)\n* Realm SDK references\n* The coffee apps source code: \n - Coffee app with Device Sync\n - Coffee app without Device Sync\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6af95a5a3f41ac6d/664500a901b7992a8fd19134/device-sync-between-client-device-atlas.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltaca56a39c1fbdbd2/6645015652b746f9042818d7/create-project.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcc5cf094e67020f1/66450181acadaf4f23726805/deploy-database.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltebeb83b68b03a525/664501d366b81d2b3033f241/database-deployments.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcc5e7b6718067804/6645021499f5a835bfc369c4/create-app.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbe5631204b1c717c/664502448c5cd134d503a6e6/app-id-code.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8b6e7bbe222c51bc/66450296a3f9dfd191c0eeb5/define-schema.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt763b14b569b2b372/664502be5c24836146bc18f2/configure-schema.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3c56ca7b13ca10fd/6645033fefc97a60764befe9/device-sync-roadmap.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt56fd820a250bd117/664503915c2483382cbc1901/configure-access.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte63e871b8c2ece1d/664503b699f5a89764c369dc/server-side-schema.png\n [12]: 
https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6cf1ae1ff25656f0/6645057a4df3f52f6aee7df4/development-mode-switch.png\n [13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt25990d0bdcb9dccd/6645059da0104b10b7c6459d/auto-generated-data-model.png\n [14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf415f5c37901fc1d/6645154ba0104bde23c6465c/side-panel.png\n [15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt478c0ad63be16e4b/664515915c24835ebebc19b0/auto-generated-data-model.png\n [16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte1c7f47382325f02/664515bb8c5cd1758403a7ac/switching-on-development-mode.png\n [17]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltda38b0c77bf82622/664516a466b81d234f33f33e/coffee-drinks-quantity-tracker-UI.png\n [18]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc0a269aa5d954891/66451902a3f9df91c3c0efe0/coffee-drinks-quantity-tracker-UI.png\n [19]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5dc1e20f474f4c6b/6645191abfbef587de5f695f/web-app-features-atlas-client.png", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "In this tutorial, we demonstrate how a developer can easily enable Device Sync to a coffee consumption web app./practical-exercise-atlas-device-sdk-web-sync", "contentType": "Tutorial"}, "title": "A Practical Exercise of Atlas Device SDK for Web With Sync (Preview)", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/iot-mongodb-powering-time-series-analysis-household-power-consumption", "action": "created", "body": "# IoT and MongoDB: Powering Time Series Analysis of Household Power Consumption\n\nIoT (Internet of Things) systems are increasingly becoming a part of our daily lives, offering smart solutions for homes and businesses. 
\n\nThis article will explore a practical case study on household power consumption, showcasing how MongoDB's time series collections can be leveraged to store, manage, and analyze data generated by IoT devices efficiently.\n\n## Time series collections\n\nTime series collections in MongoDB effectively store time series data \u2014 a sequence of data points analyzed to observe changes over time.\n\nTime series collections provide the following benefits:\n\n- Reduced complexity for working with time series data\n- Improved query efficiency\n- Reduced disk usage\n- Reduced I/O for read operations\n- Increased WiredTiger cache usage\n\nGenerally, time series data is composed of the following elements:\n\n- The timestamp of each data point\n- Metadata (also known as the source), which is a label or tag that uniquely identifies a series and rarely changes\n- Measurements (also known as metrics or values), representing the data points tracked at increments in time \u2014 generally key-value pairs that change over time\n\n## Case study: household electric power consumption\n\nThis case study focuses on analyzing the data set with over two million data points of household electric power consumption, with a one-minute sampling rate over almost four years.\n\nThe dataset includes the following information:\n\n- **date**: Date in format dd/mm/yyyy \n- **time**: Time in format hh:mm:ss \n- **global_active_power**: Household global minute-averaged active power (in kilowatt) \n- **global_reactive_power**: Household global minute-averaged reactive power (in kilowatt) \n- **voltage**: Minute-averaged voltage (in volt) \n- **global_intensity**: Household global minute-averaged current intensity (in ampere) \n- **sub_metering_1**: Energy sub-metering No. 1 (in watt-hour of active energy); corresponds to the kitchen, containing mainly a dishwasher, an oven, and a microwave (hot plates are not electric but gas-powered) \n- **sub_metering_2**: Energy sub-metering No. 2 (in watt-hour of active energy); corresponds to the laundry room, containing a washing machine, a tumble drier, a refrigerator, and a light. \n- **sub_metering_3**: Energy sub-metering No. 3 (in watt-hour of active energy); corresponds to an electric water heater and an air conditioner\n\n## Schema modeling\n\nTo define and model our time series collection, we will use the Mongoose library. Mongoose, an Object Data Modeling (ODM) library for MongoDB, is widely used in the Node.js ecosystem for its ability to provide a straightforward way to model our application data.\n\nThe schema will include:\n\n- **timestamp:** A combination of the \u201cdate\u201d and \u201ctime\u201d fields from the dataset.\n- **global_active_power**: A numerical representation from the dataset.\n- **global_reactive_power**: A numerical representation from the dataset. \n- **voltage**: A numerical representation from the dataset. \n- **global_intensity**: A numerical representation from the dataset.\n- **sub_metering_1**: A numerical representation from the dataset. \n- **sub_metering_2**: A numerical representation from the dataset.\n- **sub_metering_3**: A numerical representation from the dataset.\n\nTo configure the collection as a time series collection, an additional \u201c**timeseries**\u201d configuration with \u201c**timeField**\u201d and \u201c**granularity**\u201d properties is necessary. 
The \u201c**timeField**\u201d will use our schema\u2019s \u201c**timestamp**\u201d property, and \u201c**granularity**\u201d will be set to \u201cminutes\u201d to match the dataset's sampling rate.\n\nAdditionally, an index on the \u201ctimestamp\u201d field will be created to enhance query performance \u2014 note that you can query a time series collection the same way you query a standard MongoDB collection.\n\nThe resulting schema is structured as follows:\n\n```javascript\nconst { Schema, model } = require('mongoose');\n\nconst powerConsumptionSchema = new Schema(\n {\n timestamp: { type: Date, index: true },\n global_active_power: { type: Number },\n global_reactive_power: { type: Number },\n voltage: { type: Number },\n global_intensity: { type: Number },\n sub_metering_1: { type: Number },\n sub_metering_2: { type: Number },\n sub_metering_3: { type: Number },\n },\n {\n timeseries: {\n timeField: 'timestamp',\n granularity: 'minutes',\n },\n }\n);\n\nconst PowerConsumptions = model('PowerConsumptions', powerConsumptionSchema);\n\nmodule.exports = PowerConsumptions;\n```\n\nFor further details on creating time series collections, refer to MongoDB's official time series documentation.\n\n## Inserting data to MongoDB\n\nThe dataset is provided as a .txt file, which is not directly usable with MongoDB. To import this data into our MongoDB database, we need to preprocess it so that it aligns with our database schema design.\n\nThis can be accomplished by performing the following steps:\n\n1. Connect to MongoDB.\n2. Load data from the .txt file.\n3. Normalize the data and split the content into lines.\n4. Parse the lines into structured objects.\n5. Transform the data to match our MongoDB schema model.\n6. Filter out invalid data.\n7. Insert the final data into MongoDB in chunks.\n\nHere is the Node.js script that automates these steps:\n\n```javascript\n// Load environment variables from .env file\nrequire('dotenv').config();\n\n// Import required modules\nconst fs = require('fs');\nconst mongoose = require('mongoose');\nconst PowerConsumptions = require('./models/power-consumption');\n\n// Connect to MongoDB and process the data file\nconst processData = async () => {\n try {\n // Connect to MongoDB using the connection string from environment variables\n await mongoose.connect(process.env.MONGODB_CONNECTION_STRING);\n\n // Define the file path for the data source\n const filePath = 'Household_Power_Consumption.txt';\n\n // Read data file\n const rawFileContent = fs.readFileSync(filePath, 'utf8');\n\n // Normalize line endings and split the content into lines\n const lines = rawFileContent.replace(/\\r\\n/g, '\\n').replace(/\\r/g, '\\n').trim().split('\\n');\n\n // Extract column headers\n const headers = lines0].split(';').map((header) => header.trim());\n\n // Parse the lines into structured objects\n const parsedRecords = lines.slice(1).map((line) => {\n const values = line.split(';').map((value) => value.trim());\n return headers.reduce((object, header, index) => {\n object[header] = values[index];\n return object;\n }, {});\n });\n\n // Transform and prepare data for insertion\n const transformedRecords = parsedRecords.map((item) => {\n const [day, month, year] = item.Date.split('/').map((num) => parseInt(num, 10));\n const [hour, minute, second] = item.Time.split(':').map((num) => parseInt(num, 10));\n const dateObject = new Date(year, month - 1, day, hour, minute, second);\n\n return {\n timestamp: dateObject.toISOString(),\n global_active_power: 
parseFloat(item.Global_active_power),\n global_reactive_power: parseFloat(item.Global_reactive_power),\n voltage: parseFloat(item.Voltage),\n global_intensity: parseFloat(item.Global_intensity),\n sub_metering_1: parseFloat(item.Sub_metering_1),\n sub_metering_2: parseFloat(item.Sub_metering_2),\n sub_metering_3: parseFloat(item.Sub_metering_3),\n };\n });\n\n // Filter out invalid data\n const finalData = transformedRecords.filter(\n (item) =>\n item.timestamp !== 'Invalid Date' &&\n !isNaN(item.global_active_power) &&\n !isNaN(item.global_reactive_power) &&\n !isNaN(item.voltage) &&\n !isNaN(item.global_intensity) &&\n !isNaN(item.sub_metering_1) &&\n !isNaN(item.sub_metering_2) &&\n !isNaN(item.sub_metering_3)\n );\n\n // Insert final data into the database in chunks of 1000\n const chunkSize = 1000;\n for (let i = 0; i < finalData.length; i += chunkSize) {\n const chunk = finalData.slice(i, i + chunkSize);\n await PowerConsumptions.insertMany(chunk);\n }\n\n console.log('Data processing and insertion completed.');\n } catch (error) {\n console.error('An error occurred:', error);\n }\n};\n\n// Call the processData function\nprocessData();\n```\n\nBefore you start the script, you need to make sure that your environment variables are set up correctly. To do this, create a file named \u201c.env\u201d in the root folder, and add a line for \u201cMONGODB_CONNECTION_STRING\u201d, which is your link to the MongoDB database. \n\nThe content of the .env file should look like this:\n\n```javascript\nMONGODB_CONNECTION_STRING = 'mongodb+srv://{{username}}:{{password}}@{{your_cluster_url}}/{{your_database}}?retryWrites=true&w=majority'\n```\n\nFor more details on constructing your connection string, refer to the [official MongoDB documentation.\n\n## Visualization with MongoDB Atlas Charts\n\nOnce the data has been inserted into our MongoDB time series collection, MongoDB Atlas Charts can be used to effortlessly connect to and visualize the data.\n\nIn order to connect and use MongoDB Atlas Charts, we should:\n\n1. Establish a connection to the time series collection as a data source.\n2. Associate the desired fields with the appropriate X and Y axes.\n3. Implement filters as necessary to refine the data displayed.\n4. Explore the visualizations provided by Atlas Charts to gain insights.\n\n to share your experiences, ask questions, and collaborate with fellow enthusiasts. Whether you are seeking advice, sharing your latest project, or exploring innovative uses of MongoDB, the community is a great place to continue the conversation.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt50c50df158186ba2/65f8b9fad467d26c5f0bbf14/image1.png", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript"], "pageDescription": "", "contentType": "Tutorial"}, "title": "IoT and MongoDB: Powering Time Series Analysis of Household Power Consumption", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/quarkus-eclipse-jnosql", "action": "created", "body": "# Create a Java REST API with Quarkus and Eclipse JNoSQL for MongoDB\n\n## Introduction\n\nIn this tutorial, you will learn how to create a RESTful API using Quarkus, a framework for building Java applications,\nand integrate it with Eclipse JNoSQL to work with MongoDB. 
We will create a simple API to manage developer records.\n\nCombining Quarkus with Eclipse JNoSQL allows you to work with NoSQL databases using a unified API, making switching\nbetween different NoSQL database systems easier.\n\n## Prerequisites\n\nFor this tutorial, you\u2019ll need:\n\n- Java 17.\n- Maven.\n- A MongoDB cluster.\n - Docker (Option 1)\n - MongoDB Atlas (Option 2)\n\nYou can use the following Docker command to start a standalone MongoDB instance:\n\n```shell\ndocker run --rm -d --name mongodb-instance -p 27017:27017 mongo\n```\n\nOr you can use MongoDB Atlas and try the M0 free tier to deploy your cluster.\n\n## Create a Quarkus project\n\n- Visit the Quarkus Code Generator.\n- Configure your project by selecting the desired options, such as the group and artifact ID.\n- Add the necessary dependencies to your project. For this tutorial, we will add:\n - JNoSQL Document MongoDB quarkus-jnosql-document-mongodb]\n - RESTEasy Reactive [quarkus-resteasy-reactive]\n - RESTEasy Reactive Jackson [quarkus-resteasy-reactive-jackson]\n - OpenAPI [quarkus-smallrye-openapi]\n- Generate the project, download the ZIP file, and extract it to your preferred location. Remember that the file\n structure may vary with different Quarkus versions, but this should be fine for the tutorial. The core focus will be\n modifying the `pom.xml` file and source code, which remains relatively consistent across versions. Any minor\n structural differences should be good for your progress, and you can refer to version-specific documentation if needed\n for a seamless learning experience.\n\nAt this point, your `pom.xml` file should look like this:\n\n```xml\n\n \n io.quarkus\n quarkus-resteasy-reactive-jackson\n \n \n io.quarkiverse.jnosql\n quarkus-jnosql-document-mongodb\n 1.0.5\n \n \n io.quarkus\n quarkus-smallrye-openapi\n \n \n io.quarkus\n quarkus-resteasy-reactive\n \n \n io.quarkus\n quarkus-arc\n \n \n io.quarkus\n quarkus-junit5\n test\n \n \n io.rest-assured\n rest-assured\n test\n \n\n```\n\nBy default, [quarkus-jnosql-document-mongodb\nis in version `1.0.5`, but the latest release is `3.2.2.1`. You should update your `pom.xml` to use the latest version:\n\n```xml\n\n io.quarkiverse.jnosql\n quarkus-jnosql-document-mongodb\n 3.2.2.1\n\n```\n\n## Database configuration\n\nBefore you dive into the implementation, it\u2019s essential to configure your MongoDB database properly. In MongoDB, you\nmust often set up credentials and specific configurations to connect to your database instance. Eclipse JNoSQL provides\na flexible configuration mechanism that allows you to manage these settings efficiently.\n\nYou can find detailed configurations and setups for various databases, including MongoDB, in the Eclipse JNoSQL GitHub\nrepository.\n\nTo run your application locally, you can configure the database name and properties in your application\u2019s\n`application.properties` file. Open this file and add the following line to set the database name:\n\n```properties\nquarkus.mongodb.connection-string=mongodb://localhost:27017\njnosql.document.database=school\n```\n\nThis configuration will enable your application to:\n- Use the \u201cschool\u201d database.\n- Connect to the MongoDB cluster available at the provided connection string.\n\nIn production, make sure to enable access control and enforce authentication. 
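If you point the application at a MongoDB Atlas cluster instead of the local Docker instance, the same two properties would reference your cluster's SRV connection string with credentials, for example (placeholders only; keep real credentials out of source control):

```properties
quarkus.mongodb.connection-string=mongodb+srv://<username>:<password>@<your-cluster-url>/?retryWrites=true&w=majority
jnosql.document.database=school
```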
See the security checklist for more\ndetails.\n\nIt\u2019s worth mentioning that Eclipse JNoSQL leverages Eclipse MicroProfile Configuration, which is designed to facilitate\nthe implementation of twelve-factor applications, especially in configuration management. It means you can override\nproperties through environment variables, allowing you to switch between different configurations for development,\ntesting, and production without modifying your code. This flexibility is a valuable aspect of building robust and easily\ndeployable applications.\n\nNow that your database is configured, you can proceed with the tutorial and create your RESTful API with Quarkus and\nEclipse JNoSQL for MongoDB.\n\n## Create a developer entity\n\nIn this step, we will create a simple `Developer` entity using Java records. Create a new record in the `src/main/java`\ndirectory named `Developer`.\n\n```java\nimport jakarta.nosql.Column;\nimport jakarta.nosql.Entity;\nimport jakarta.nosql.Id;\n\nimport java.time.LocalDate;\nimport java.util.Objects;\nimport java.util.UUID;\n\n@Entity\npublic record Developer(\n@Id String id,\n@Column String name,\n@Column LocalDate birthday\n) {\n\n public static Developer newDeveloper(String name, LocalDate birthday) {\n Objects.requireNonNull(name, \"name is required\");\n Objects.requireNonNull(birthday, \"birthday is required\");\n return new Developer(\n UUID.randomUUID().toString(),\n name,\n birthday);\n }\n\n public Developer update(String name, LocalDate birthday) {\n Objects.requireNonNull(name, \"name is required\");\n Objects.requireNonNull(birthday, \"birthday is required\");\n return new Developer(\n this.id(),\n name,\n birthday);\n }\n}\n```\n\n## Create a REST API\n\nNow, let\u2019s create a RESTful API to manage developer records. 
Create a new class in `src/main/java`\nnamed `DevelopersResource`.\n\n```java\nimport jakarta.inject.Inject;\nimport jakarta.nosql.document.DocumentTemplate;\nimport jakarta.ws.rs.*;\nimport jakarta.ws.rs.core.MediaType;\nimport jakarta.ws.rs.core.Response;\n\nimport java.time.LocalDate;\nimport java.util.List;\n\n@Path(\"developers\")\n@Consumes({MediaType.APPLICATION_JSON})\n@Produces({MediaType.APPLICATION_JSON})\npublic class DevelopersResource {\n\n @Inject\n DocumentTemplate template;\n\n @GET\n public List listAll(@QueryParam(\"name\") String name) {\n if (name == null) {\n return template.select(Developer.class).result();\n }\n\n return template.select(Developer.class)\n .where(\"name\")\n .like(name)\n .result();\n }\n\n public record NewDeveloperRequest(String name, LocalDate birthday) {\n }\n\n @POST\n public Developer add(NewDeveloperRequest request) {\n var newDeveloper = Developer.newDeveloper(request.name(), request.birthday());\n return template.insert(newDeveloper);\n }\n\n @Path(\"{id}\")\n @GET\n public Developer get(@PathParam(\"id\") String id) {\n return template.find(Developer.class, id)\n .orElseThrow(() -> new WebApplicationException(Response.Status.NOT_FOUND));\n }\n\n public record UpdateDeveloperRequest(String name, LocalDate birthday) {\n }\n\n @Path(\"{id}\")\n @PUT\n public Developer update(@PathParam(\"id\") String id, UpdateDeveloperRequest request) {\n var developer = template.find(Developer.class, id)\n .orElseThrow(() -> new WebApplicationException(Response.Status.NOT_FOUND));\n var updatedDeveloper = developer.update(request.name(), request.birthday());\n return template.update(updatedDeveloper);\n\n }\n\n @Path(\"{id}\")\n @DELETE\n public void delete(@PathParam(\"id\") String id) {\n template.delete(Developer.class, id);\n }\n}\n```\n\n## Test the REST API\n\nNow that we've created our RESTful API for managing developer records, it's time to put it to the test. We'll\ndemonstrate how to interact with the API using various HTTP requests and command-line tools.\n\n### Start the project:\n\n```shell\n./mvnw compile quarkus:dev\n```\n\n### Create a new developer with POST\n\nYou can use the `POST` request to create a new developer record. We'll use `curl` for this demonstration:\n\n```shell\ncurl -X POST \"http://localhost:8080/developers\" -H 'Content-Type: application/json' -d '{\"name\": \"Max\", \"birthday\": \"\n2022-05-01\"}'\n```\n\nThis `POST` request sends a JSON payload with the developer\u2019s name and birthday to the API endpoint. 
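The returned document has roughly the following shape (illustrative only; the `id` is a UUID generated by `Developer.newDeveloper`, so your value will differ):

```json
{
  "id": "a6905449-4523-48b6-bcd8-426128014582",
  "name": "Max",
  "birthday": "2022-05-01"
}
```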
You\u2019ll receive a\nresponse with the details of the newly created developer.\n\n### Read the developers with GET\n\nTo retrieve a list of developers, you can use the `GET` request:\n\n```shell\ncurl http://localhost:8080/developers\n```\n\nThis `GET` request returns a list of all developers stored in the database.\nTo fetch details of a specific developer, provide their unique id in the URL:\n\n```shell\ncurl http://localhost:8080/developers/a6905449-4523-48b6-bcd8-426128014582\n```\n\nThis request will return the developer\u2019s information associated with the provided id.\n\n### Update a developer with PUT\n\nYou can update a developer\u2019s information using the `PUT` request:\n\n```shell\ncurl -X PUT \"http://localhost:8080/developers/a6905449-4523-48b6-bcd8-426128014582\" -H 'Content-Type: application/json'\n-d '{\"name\": \"Owen\", \"birthday\": \"2022-05-01\"}'\n```\n\nIn this example, we update the developer with the given id by providing a new name and birthday in the JSON payload.\n\n### Delete a developer with DELETE\n\nFinally, to delete a developer record, use the DELETE request:\n\n```shell\ncurl -X DELETE \"http://localhost:8080/developers/a6905449-4523-48b6-bcd8-426128014582\"\n```\n\nThis request removes the developer entry associated with the provided `id` from the database.\n\nFollowing these simple steps, you can interact with your RESTful API to manage developer records effectively. These HTTP\nrequests allow you to create, read, update, and delete developer entries, providing full control and functionality for\nyour API.\n\nExplore and adapt these commands to suit your specific use cases and requirements.\n\n## Using OpenAPI to test and explore your API\n\nOpenAPI is a powerful tool that allows you to test and explore your API visually. You can access the OpenAPI\ndocumentation for your Quarkus project at the following URL:\n\n```html\nhttp://localhost:8080/q/swagger-ui/\n```\n\nOpenAPI provides a user-friendly interface that displays all the available endpoints and their descriptions and allows\nyou to make API requests directly from the browser. It\u2019s an essential tool for API development because it:\n1. Facilitates API testing: You can send requests and receive responses directly from the OpenAPI interface, making it easy\nto verify the functionality of your API.\n2. Generates documentation: This is crucial for developers who need to understand how to use your API effectively.\n3. Allows for exploration: You can explore all the available endpoints, their input parameters, and expected responses,\nwhich helps you understand the API\u2019s capabilities.\n4. Assists in debugging: It shows request and response details, making identifying and resolving issues easier.\n\nIn conclusion, using OpenAPI alongside your RESTful API simplifies the testing and exploration process, improves\ndocumentation, and enhances the overall developer experience when working with your API. It\u2019s an essential tool in\nmodern API development practices.\n\n## Conclusion\n\nIn this tutorial, you\u2019ve gained valuable insights into building a REST API using Quarkus and seamlessly integrating it\nwith Eclipse JNoSQL for MongoDB. You now can efficiently manage developer records through a unified API, streamlining\nyour NoSQL database operations. 
However, to take your MongoDB experience even further and leverage the full power of\nMongoDB Atlas, consider migrating your application to MongoDB Atlas.\n\nMongoDB Atlas offers a powerful document model, enabling you to store data as JSON-like objects that closely resemble\nyour application code. With MongoDB Atlas, you can harness your preferred tools and programming languages. Whether you\nmanage your clusters through the MongoDB CLI for Atlas or embrace infrastructure-as-code (IaC) tools like Terraform or\nCloudformation, MongoDB Atlas provides a seamless and scalable solution for your database needs.\n\nReady to explore the benefits of MongoDB Atlas? Get started now by trying MongoDB Atlas.\n\nAccess the source code used in this tutorial.\n\nAny questions? Come chat with us in the MongoDB Community Forum.\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB", "Quarkus"], "pageDescription": "Learn to create a REST API with Quarkus and Eclipse JNoSQL for MongoDB", "contentType": "Tutorial"}, "title": "Create a Java REST API with Quarkus and Eclipse JNoSQL for MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/cluster-to-cluster", "action": "created", "body": "# Efficient Sync Solutions: Cluster-to-Cluster Sync and Live Migration to Atlas\n\nThe challenges that are raised in modern business contexts are increasingly complex. These challenges range from the ability to minimize downtime during migrations to adopting efficient tools for transitioning from relational to non-relational databases, and from implementing resilient architectures that ensure high availability to the ability to scale horizontally, allowing large amounts of data to be efficiently managed and queried.\n\nTwo of the main challenges, which will be covered in this article, are:\n\n- The need to create resilient IT infrastructures that can ensure business continuity or minimal downtime even in critical situations, such as the loss of a data center.\n\n- Conducting migrations from one infrastructure to another without compromising operations.\n\nIt is in this context that MongoDB stands out by offering innovative solutions such as MongoSync and live migrate.\n\nEnsuring business continuity with MongoSync: an approach to disaster recovery\n-----------------------------------------------------------------------------\n\nMongoDB Atlas, with its capabilities and remarkable flexibility, offers two distinct approaches to implementing business continuity strategies. These two strategies are:\n\n- Creating a cluster with a geographic distribution of nodes.\n\n- The implementation of two clusters in different regions synchronized via MongoSync.\n\nIn this section, we will explore the second point (i.e., the implementation of two clusters in different regions synchronized via MongoSync) in more detail.\n\nWhat exactly is MongoSync? For a correct definition, we can refer to the official documentation:\n\n\"The `mongosync` binary is the primary process used in Cluster-to-Cluster Sync. `mongosync` migrates data from one cluster to another and can keep the clusters in continuous sync.\"\n\nThis tool performs the following operations:\n\n- It migrates data from one cluster to another.\n\n- It keeps the clusters in continuous sync.\n\nLet's make this more concrete with an example:\n\n- Initially, the situation looks like this for the production cluster and the disaster recovery cluster:\n\n. 
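Before configuring `mongosync`, it is worth confirming that both clusters are reachable from the server that will run it. A quick check with `mongosh` might look like this (the connection strings are placeholders that mirror the ones used in the configuration file below):

```
mongosh "mongodb+srv://test_u:test_p@cluster0.*****.mongodb.net/" --eval "db.runCommand({ ping: 1 })"
mongosh "mongodb+srv://test_u:test_p@cluster1.*****.mongodb.net/" --eval "db.runCommand({ ping: 1 })"
```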
The commands described below have been tested in the CentOS 7 operating system.\n\nLet's proceed with the configuration of `mongosync` by defining a configuration file and a service:\n\n```\nvi /etc/mongosync.conf\n```\n\nYou can copy and paste the current configuration into this file using the appropriate connection strings. You can also test with two Atlas clusters, which must be M10 level or higher. For more details on how to get the connection strings from your Atlas cluster, you can consult the documentation.\n\n```\ncluster0: \"mongodb+srv://test_u:test_p@cluster0.*****.mongodb.net/?retryWrites=true&w=majority\"\ncluster1: \"mongodb+srv://test_u:test_p@cluster1.*****.mongodb.net/?retryWrites=true&w=majority\"\nlogPath: \"/data/log/mongosync\"\nverbosity: \"INFO\"\n```\n\n>Generally, this step is performed on a Linux machine by system administrators. Although the step is optional, it is recommended to implement it in a production environment.\n\nNext, you will be able to create a service named mongosync.service.\n\n```\nvi /usr/lib/systemd/system/mongosync.service\n```\n\nThis is what your service file should look like.\n\n```\n\u00a0Unit]\nDescription=Cluster-to-Cluster Sync\nDocumentation=https://www.mongodb.com/docs/cluster-to-cluster-sync/\n[Service]\nUser=root\nGroup=root\nExecStart=/usr/local/bin/mongosync --config /etc/mongosync.conf\n[Install]\nWantedBy=multi-user.target\n```\n\nReload all unit files:\n\n```\nsystemctl daemon-reload\n```\n\nNow, we can start the service:\u00a0\n\n```\nsystemctl start mongosync\n```\n\nWe can also check whether the service has been started correctly:\n\n```\nsystemctl status mongosync\n```\n\nOutput:\n\n```\nmongosync.service - Cluster-to-Cluster Sync\n\u00a0 \u00a0 Loaded: loaded (/usr/lib/systemd/system/mongosync.service; disabled; vendor preset: disabled)\nActive: active (running) since dom 2024-04-14 21:45:45 CEST; 4s ago\n Docs: https://www.mongodb.com/docs/cluster-to-cluster-sync/\nMain PID: 1573 (mongosync)\n\u00a0 \u00a0 CGroup: /system.slice/mongosync.service\n \u2514\u25001573 /usr/local/bin/mongosync --config /etc/mongosync.conf\n\napr 14 21:45:45 mongosync.mongodb.int systemd[1]: Started Cluster-to-Cluster Sync.\n```\n\n> If a service is not created and executed, in a more general way, you can start the process in the following way: \n> `mongosync --config mongosync.conf `\n\nAfter starting the service, verify that it is in the idle state:\n\n```\ncurl localhost:27182/api/v1/progress -XGET | jq\n```\n\nOutput:\n\n```\n\u00a0 % Total\u00a0 \u00a0 % Received % Xferd\u00a0 Average Speed \u00a0 Time\u00a0 \u00a0 Time \u00a0 \u00a0 Time\u00a0 Current\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Dload\u00a0 Upload \u00a0 Total \u00a0 Spent\u00a0 \u00a0 Left\u00a0 Speed\n100 191 100 191 0 0 14384 0 --:--:-- --:--:-- --:--:-- 14692\n{\n \"progress\": {\n \"state\": \"IDLE\",\n \"canCommit\": false,\n \"canWrite\": false,\n \"info\": null,\n \"lagTimeSeconds\": null,\n \"collectionCopy\": null,\n \"directionMapping\": null,\n \"mongosyncID\": \"coordinator\",\n \"coordinatorID\": \"\"\n\u00a0 }\n}\n```\nWe can run the synchronization:\n\n```\ncurl localhost:27182/api/v1/start -XPOST \\\n--data '\n\u00a0 \u00a0 {\n \"source\": \"cluster0\",\n \"destination\": \"cluster1\",\n \"reversible\": true,\n \"enableUserWriteBlocking\": true\n\u00a0 \u00a0 } '\n```\n\nOutput:\n\n```\n{\"success\":true}\n```\n\nWe can also keep track of the synchronization status:\n\n```\ncurl 
localhost:27182/api/v1/progress -XGET | jq\n```\n\nOutput:\n\n```\n\u00a0 % Total\u00a0 \u00a0 % Received % Xferd\u00a0 Average Speed \u00a0 Time\u00a0 \u00a0 Time \u00a0 \u00a0 Time\u00a0 Current\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Dload\u00a0 Upload \u00a0 Total \u00a0 Spent\u00a0 \u00a0 Left\u00a0 Speed\n100 502 100 502 0 0 36001 0 --:--:-- --:--:-- --:--:-- 38615\n{\n \"progress\": {\n \"state\": \"RUNNING\",\n \"canCommit\": false,\n \"canWrite\": false,\n \"info\": \"collection copy\",\n \"lagTimeSeconds\": 54,\n \"collectionCopy\": {\n \"estimatedTotalBytes\": 390696597,\n \"estimatedCopiedBytes\": 390696597\n\u00a0 \u00a0 },\n \"directionMapping\": {\n \"Source\": \"cluster0: cluster0.*****.mongodb.net\",\n \"Destination\": \"cluster1: cluster1.*****.mongodb.net\"\n\u00a0 \u00a0 },\n \"mongosyncID\": \"coordinator\",\n \"coordinatorID\": \"coordinator\"\n\u00a0 }\n}\n\n\u00a0 % Total\u00a0 \u00a0 % Received % Xferd\u00a0 Average Speed \u00a0 Time\u00a0 \u00a0 Time \u00a0 \u00a0 Time\u00a0 Current\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 Dload\u00a0 Upload \u00a0 Total \u00a0 Spent\u00a0 \u00a0 Left\u00a0 Speed\n100 510 100 510 0 0 44270 0 --:--:-- --:--:-- --:--:-- 46363\n{\n \"progress\": {\n \"state\": \"RUNNING\",\n \"canCommit\": true,\n \"canWrite\": false,\n \"info\": \"change event application\",\n \"lagTimeSeconds\": 64,\n \"collectionCopy\": {\n \"estimatedTotalBytes\": 390696597,\n \"estimatedCopiedBytes\": 390696597\n\u00a0 \u00a0 },\n \"directionMapping\": {\n \"Source\": \"cluster0: cluster0.*****.mongodb.net\",\n \"Destination\": \"cluster1: cluster1.*****.mongodb.net\"\n\u00a0 \u00a0 },\n \"mongosyncID\": \"coordinator\",\n \"coordinatorID\": \"coordinator\"\n\u00a0 }\n}\n```\n\nAt this time, the DR environment is aligned with the production environment and will also maintain synchronization for the next operations:\u00a0\n\n![Image of two clusters located in different datacenters, aligned and remained synchronized via mongosync. 
Mongosync runs on an on-premises server.][2]

```
Atlas atlas-qsd40w-shard-0 [primary] test> show dbs
admin                140.00 KiB
config               276.00 KiB
local                524.00 KiB
sample_airbnb         52.09 MiB
sample_analytics       9.44 MiB
sample_geospatial      1.02 MiB
sample_guides         40.00 KiB
sample_mflix         109.01 MiB
sample_restaurants     5.73 MiB
sample_supplies      976.00 KiB
sample_training       41.20 MiB
sample_weatherdata     2.39 MiB
```

And our second cluster is now in sync with the following data.

```
Atlas atlas-lcu71y-shard-0 [primary] test> show dbs
admin                                 172.00 KiB
config                                380.00 KiB
local                                 427.22 MiB
mongosync_reserved_for_internal_use   420.00 KiB
sample_airbnb                          53.06 MiB
sample_analytics                        9.55 MiB
sample_geospatial                       1.40 MiB
sample_guides                          40.00 KiB
sample_mflix                          128.38 MiB
sample_restaurants                      6.47 MiB
sample_supplies                         1.03 MiB
sample_training                        47.21 MiB
sample_weatherdata                      2.61 MiB
```

Armed with what we've discussed so far, we might ask one last question:

*Is it possible to take advantage of the disaster recovery environment in some way, or should we just let it synchronize?*

By making the appropriate `mongosync` configurations --- for example, by setting the "buildIndexes" option to false and omitting the "enableUserWriteBlocking" parameter (which is set to false by default) --- we can take advantage of the limitation regarding non-synchronization of users and roles to create read-only users. 
We do this in such a way that no entries can be entered, thereby ensuring consistency between the origin and destination clusters and allowing us to use the disaster recovery environment to create the appropriate indexes that will go into optimizing slow queries identified in the production environment.\n\nLive migrate to Atlas: minimizing downtime\n------------------------------------------\n\nLive migrate is a tool that allows users to perform migrations to MongoDB Atlas and more specifically, as mentioned by the official documentation, is a process that uses `mongosync` as the underlying data migration tool, enabling faster live migrations with less downtime if both the source and destination clusters are running MongoDB 6.0.8 or later.\n\nSo, what is the added value of this tool compared to `mongosync`?\n\nIt brings two advantages:\n\n- You can avoid the need to provision and configure a server to host `mongosync`.\n\n- You have the ability to migrate from previous versions, as indicated in the migration path.\n\n!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf8bd745448a43713/663e5cdea2616e0474ff1789/Screenshot_2024-05-10_at_1.40.54_PM.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd0c6cb1fbf15ed87/663e5d0fa2616e5e82ff178f/Screenshot_2024-05-10_at_1.41.09_PM.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9ace7dcef1c2e7e9/663e67322ff97d34907049ac/Screenshot_2024-05-10_at_2.24.24_PM.png", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Learn about how to enable cluster-to-cluster sync", "contentType": "Tutorial"}, "title": "Efficient Sync Solutions: Cluster-to-Cluster Sync and Live Migration to Atlas", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/quarkus-pagination", "action": "created", "body": "# Introduction to Data Pagination With Quarkus and MongoDB: A Comprehensive Tutorial\n\n## Introduction\n\nIn modern web development, managing large datasets efficiently through APIs is crucial for enhancing application\nperformance and user experience. This tutorial explores pagination techniques using Quarkus and MongoDB, a robust\ncombination for scalable data delivery. Through a live coding session, we'll delve into different pagination methods and\ndemonstrate how to implement these in a Quarkus-connected MongoDB environment. This guide empowers developers to\noptimize REST APIs for effective data handling.\n\nYou can find all the code presented in this tutorial in\nthe GitHub repository:\n\n```bash\ngit clone git@github.com:mongodb-developer/quarkus-pagination-sample.git\n```\n\n## Prerequisites\n\nFor this tutorial, you'll need:\n\n- Java 21.\n- Maven.\n- A MongoDB cluster.\n - MongoDB Atlas (Option 1)\n - Docker (Option 2)\n\nYou can use the following Docker command to start a standalone MongoDB instance:\n\n```bash\ndocker run --rm -d --name mongodb-instance -p 27017:27017 mongo\n```\n\nOr you can use MongoDB Atlas and try the M0 free tier to deploy your cluster.\n\n## Create a Quarkus project\n\n- Visit the Quarkus Code Generator.\n- Configure your project by selecting the desired options, such as the group and artifact ID.\n- Add the necessary dependencies to your project. 
For this tutorial, we will add:\n - JNoSQL Document MongoDB quarkus-jnosql-document-mongodb].\n - RESTEasy Reactive [quarkus-resteasy-reactive].\n - RESTEasy Reactive Jackson [quarkus-resteasy-reactive-jackson].\n - OpenAPI [quarkus-smallrye-openapi].\n\n> Note: If you cannot find some dependencies, you can add them manually in the `pom.xml`. See the file below.\n\n- Generate the project, download the ZIP file, and extract it to your preferred location. Remember that the file\n structure\n may vary with different Quarkus versions, but this should be fine for the tutorial. The core focus will be modifying\n the `pom.xml` file and source code, which remains relatively consistent across versions. Any minor structural\n differences should be good for your progress, and you can refer to version-specific documentation if needed for a\n seamless learning experience.\n\nAt this point, your pom.xml file should look like this:\n\n```xml\n\n \n io.quarkus\n quarkus-smallrye-openapi\n \n \n io.quarkiverse.jnosql\n quarkus-jnosql-document-mongodb\n 3.3.0\n \n \n io.quarkus\n quarkus-resteasy\n \n \n io.quarkus\n quarkus-resteasy-jackson\n \n \n io.quarkus\n quarkus-arc\n \n \n io.quarkus\n quarkus-junit5\n test\n \n \n io.rest-assured\n rest-assured\n test\n \n\n```\n\nWe will work with the latest version of Quarkus alongside Eclipse JNoSQL Lite, a streamlined integration that notably\ndoes not rely on reflection. This approach enhances performance and simplifies the configuration process, making it an\noptimal choice for developers looking to maximize efficiency in their applications.\n\n## Database configuration\n\nBefore you dive into the implementation, it's essential to configure your MongoDB database properly. In MongoDB, you\nmust often set up credentials and specific configurations to connect to your database instance. Eclipse JNoSQL provides\na flexible configuration mechanism that allows you to manage these settings efficiently.\n\nYou can find detailed configurations and setups for various databases, including MongoDB, in the [Eclipse JNoSQL GitHub\nrepository.\n\nTo run your application locally, you can configure the database name and properties in your application's\n`application.properties` file. Open this file and add the following line to set the database name:\n\n```properties\nquarkus.mongodb.connection-string = mongodb://localhost\njnosql.document.database = fruits\n```\n\nThis configuration will enable your application to:\n- Use the \"fruits\" database.\n- Connect to the MongoDB cluster available at the provided connection string.\n\nIn production, make sure to enable access control and enforce authentication. See the security checklist for more\ndetails.\n\nIt's worth mentioning that Eclipse JNoSQL leverages Eclipse MicroProfile Configuration, which is designed to facilitate\nthe implementation of twelve-factor applications, especially in configuration management. It means you can override\nproperties through environment variables, allowing you to switch between different configurations for development,\ntesting, and production without modifying your code. This flexibility is a valuable aspect of building robust and easily\ndeployable applications.\n\nNow that your database is configured, you can proceed with the tutorial and create your RESTful API with Quarkus and\nEclipse JNoSQL for MongoDB.\n\n## Create a fruit entity\n\nIn this step, we will create a simple `Fruit` entity using Java records. 
Create a new class in the `src/main/java`\ndirectory named `Fruit`.\n\n```java\nimport jakarta.nosql.Column;\nimport jakarta.nosql.Convert;\nimport jakarta.nosql.Entity;\nimport jakarta.nosql.Id;\nimport org.eclipse.jnosql.databases.mongodb.mapping.ObjectIdConverter;\n\n@Entity\npublic class Fruit {\n\n @Id\n @Convert(ObjectIdConverter.class)\n private String id;\n\n @Column\n private String name;\n\n public String getId() {\n return id;\n }\n\n public void setId(String id) {\n this.id = id;\n }\n\n public String getName() {\n return name;\n }\n\n public void setName(String name) {\n this.name = name;\n }\n\n @Override\n public String toString() {\n return \"Fruit{\" +\n \"id='\" + id + '\\'' +\n \", name='\" + name + '\\'' +\n '}';\n }\n\n public static Fruit of(String name) {\n Fruit fruit = new Fruit();\n fruit.setName(name);\n return fruit;\n }\n\n}\n```\n\n## Create a fruit repository\n\nWe will simplify the integration between Java and MongoDB using the Jakarta Data repository by creating an interface\nthat extends NoSQLRepository. The framework automatically implements this interface, enabling us to define methods for\ndata retrieval that integrate seamlessly with MongoDB. We will focus on implementing two types of pagination: offset\npagination represented by `Page` and keyset (cursor) pagination represented by `CursoredPage`.\n\nHere's how we define the FruitRepository interface to include methods for both pagination strategies:\n\n```java\nimport jakarta.data.Sort;\nimport jakarta.data.page.CursoredPage;\nimport jakarta.data.page.Page;\nimport jakarta.data.page.PageRequest;\nimport jakarta.data.repository.BasicRepository;\nimport jakarta.data.repository.Find;\nimport jakarta.data.repository.OrderBy;\nimport jakarta.data.repository.Repository;\n\n@Repository\npublic interface FruitRepository extends BasicRepository {\n\n @Find\n CursoredPage cursor(PageRequest pageRequest, Sort order);\n\n @Find\n @OrderBy(\"name\")\n Page offSet(PageRequest pageRequest);\n\n long countBy();\n\n}\n```\n\n## Create setup\n\nWe'll demonstrate how to populate and manage the MongoDB database with a collection of fruit entries at the start of the\napplication using Quarkus. We'll ensure our database is initialized with predefined data, and we'll also handle cleanup\non application shutdown. 
Here's how we can structure the SetupDatabase class:\n\n```java\nimport jakarta.enterprise.context.ApplicationScoped;\n\nimport jakarta.enterprise.event.Observes;\n\nimport io.quarkus.runtime.ShutdownEvent;\nimport io.quarkus.runtime.StartupEvent;\nimport org.jboss.logging.Logger;\n\nimport java.util.List;\n\n@ApplicationScoped\npublic class SetupDatabase {\n\n private static final Logger LOGGER = Logger.getLogger(SetupDatabase.class.getName());\n\n private final FruitRepository fruitRepository;\n\n public SetupDatabase(FruitRepository fruitRepository) {\n this.fruitRepository = fruitRepository;\n }\n\n void onStart(@Observes StartupEvent ev) {\n LOGGER.info(\"The application is starting...\");\n long count = fruitRepository.countBy();\n if (count > 0) {\n LOGGER.info(\"Database already populated\");\n return;\n }\n List fruits = List.of(\n Fruit.of(\"apple\"),\n Fruit.of(\"banana\"),\n Fruit.of(\"cherry\"),\n Fruit.of(\"date\"),\n Fruit.of(\"elderberry\"),\n Fruit.of(\"fig\"),\n Fruit.of(\"grape\"),\n Fruit.of(\"honeydew\"),\n Fruit.of(\"kiwi\"),\n Fruit.of(\"lemon\")\n );\n fruitRepository.saveAll(fruits);\n }\n\n void onStop(@Observes ShutdownEvent ev) {\n LOGGER.info(\"The application is stopping...\");\n fruitRepository.deleteAll(fruitRepository.findAll().toList());\n }\n\n}\n```\n\n## Create a REST API\n\nNow, let's create a RESTful API to manage developer records. Create a new class in `src/main/java`\nnamed `FruitResource`.\n\n```java\nimport jakarta.data.Sort;\nimport jakarta.data.page.PageRequest;\nimport jakarta.ws.rs.DefaultValue;\nimport jakarta.ws.rs.GET;\nimport jakarta.ws.rs.Path;\nimport jakarta.ws.rs.Produces;\nimport jakarta.ws.rs.QueryParam;\nimport jakarta.ws.rs.core.MediaType;\n\n@Path(\"/fruits\")\npublic class FruitResource {\n\n private final FruitRepository fruitRepository;\n\n private static final Sort ASC = Sort.asc(\"name\");\n private static final Sort DESC = Sort.asc(\"name\");\n\n public FruitResource(FruitRepository fruitRepository) {\n this.fruitRepository = fruitRepository;\n }\n\n @Path(\"/offset\")\n @GET\n @Produces(MediaType.APPLICATION_JSON)\n public Iterable hello(@QueryParam(\"page\") @DefaultValue(\"1\") long page,\n @QueryParam(\"size\") @DefaultValue(\"2\") int size) {\n var pageRequest = PageRequest.ofPage(page).size(size);\n return fruitRepository.offSet(pageRequest).content();\n }\n\n @Path(\"/cursor\")\n @GET\n @Produces(MediaType.APPLICATION_JSON)\n public Iterable cursor(@QueryParam(\"after\") @DefaultValue(\"\") String after,\n @QueryParam(\"before\") @DefaultValue(\"\") String before,\n @QueryParam(\"size\") @DefaultValue(\"2\") int size) {\n if (!after.isBlank()) {\n var pageRequest = PageRequest.ofSize(size).afterCursor(PageRequest.Cursor.forKey(after));\n return fruitRepository.cursor(pageRequest, ASC).content();\n } else if (!before.isBlank()) {\n var pageRequest = PageRequest.ofSize(size).beforeCursor(PageRequest.Cursor.forKey(before));\n return fruitRepository.cursor(pageRequest, DESC).stream().toList();\n }\n var pageRequest = PageRequest.ofSize(size);\n return fruitRepository.cursor(pageRequest, ASC).content();\n }\n\n}\n```\n\n## Test the REST API\n\nNow that we've created our RESTful API for managing developer records, it's time to put it to the test. 
We'll\ndemonstrate how to interact with the API using various HTTP requests and command-line tools.\n\n### Start the project\n\n```bash\n./mvnw compile quarkus:dev\n```\n\n### Exploring pagination with offset\n\nWe will use `curl` to learn more about pagination using the URLs provided. It is a command-line tool that is often used\nto send HTTP requests. The URLs you have been given are used to access a REST API endpoint fetching fruit pages using\noffset pagination. Each URL requests a different page, enabling us to observe how pagination functions via the API.\nBelow is how you can interact with these endpoints using the `curl` tool.\n\n#### Fetching the first page\n\nThis command requests the first page of fruits from the server.\n\n```bash\ncurl --location http://localhost:8080/fruits/offset?page=1\n```\n\n#### Fetching the second page\n\nThis command gets the next set of fruits, which is the second page.\n\n```bash\ncurl --location http://localhost:8080/fruits/offset?page=2\n```\n\n#### Fetching the fifth page\n\nBy requesting the fifth page, you can see how the API responds when you request a page that might be beyond the range of\nexisting data.\n\n```bash\ncurl --location http://localhost:8080/fruits/offset?page=5\n```\n\n### Exploring pagination with a cursor\n\nTo continue exploring cursor-based pagination with your API, using both `after` and `before` parameters provides a way\nto navigate through your dataset forward and backward respectively. This method allows for flexible data retrieval,\nwhich can be particularly useful for interfaces that allow users to move to the next or previous set of results. Here's\nhow you can structure your `curl` commands to use these parameters effectively:\n\n#### Fetching the initial set of fruits\n\nThis command gets the first batch of fruits without specifying a cursor, starting from the beginning.\n\n```bash\ncurl --location http://localhost:8080/fruits/cursor\n```\n\n#### Fetching fruits after \"banana\"\n\nThis command fetches the list of fruits that appear after \"banana\" in your dataset. This is useful for moving forward in\nthe list.\n\n```bash\ncurl --location http://localhost:8080/fruits/cursor?after=banana\n```\n\n#### Fetching fruits before \"date\"\n\nThis command is used to go back to the set of fruits that precede \"date\" in the dataset. This is particularly useful for\nimplementing \"Previous\" page functionality.\n\n```bash\ncurl --location http://localhost:8080/fruits/cursor?before=date\n```\n\n## Conclusion\n\nThis tutorial explored the fundamentals and implementation of pagination using Quarkus and MongoDB, demonstrating how to\nmanage large datasets in web applications effectively. By integrating the Jakarta Data repository with Quarkus, we\ndesigned interfaces that streamline the interaction between Java and MongoDB, supporting offset and cursor-based\npagination techniques. We started by setting up a basic Quarkus application and configuring MongoDB connections. Then,\nwe demonstrated how to populate the database with initial data and ensure clean shutdown behavior.\n\nThroughout this tutorial, we've engaged in live coding sessions, implementing and testing various pagination methods.\nWe've used the `curl` command to interact with the API, fetching data with no parameters, and using `after` and `before`\nparameters to navigate through the dataset forward and backward. 
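In code, the difference between the two styles comes down to how the `PageRequest` is built. Here is a short sketch that reuses the `FruitRepository` methods defined earlier (page sizes and the cursor key are arbitrary examples):

```java
// Offset pagination: jump to an absolute page number.
var offsetRequest = PageRequest.ofPage(2).size(2);
Page<Fruit> secondPage = fruitRepository.offSet(offsetRequest);

// Keyset (cursor) pagination: continue after the last key returned ("banana" here).
var cursorRequest = PageRequest.ofSize(2).afterCursor(PageRequest.Cursor.forKey("banana"));
CursoredPage<Fruit> nextSlice = fruitRepository.cursor(cursorRequest, Sort.asc("name"));
```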
The use of cursor-based pagination, in particular,\nhas showcased its benefits in scenarios where datasets are frequently updated or when precise data retrieval control is\nneeded. This approach not only boosts performance by avoiding the common issues of offset pagination but also provides a\nuser-friendly way to navigate through data.\n\nReady to explore the benefits of MongoDB Atlas? Get started now by trying MongoDB Atlas.\n\nAccess the source code used in this tutorial.\n\nAny questions? Come chat with us in the MongoDB Community Forum.\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB", "Quarkus"], "pageDescription": "In this blog post, you'll learn how to create a RESTful API with Quarkus that supports MongoDB queries with pagination.", "contentType": "Tutorial"}, "title": "Introduction to Data Pagination With Quarkus and MongoDB: A Comprehensive Tutorial", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/getting-started-mongodb-atlas-serverless-aws-cdk-serverless-computing", "action": "created", "body": "# Getting Started With MongoDB Atlas Serverless, AWS CDK, and AWS Serverless Computing\n\nServerless development is a cloud computing execution model where cloud and SaaS providers dynamically manage the allocation and provisioning of servers on your behalf, dropping all the way to $0 cost when not in use. This approach allows developers to build and run applications and services without worrying about the underlying infrastructure, focusing primarily on writing code for their core product and associated business logic. Developers opt for serverless architectures to benefit from reduced operational overhead, cost efficiency through pay-per-use billing, and the ability to easily scale applications in response to real-time demand without manual intervention. \n\nMongoDB Atlas serverless instances eliminate the cognitive load of sizing infrastructure and allow you to get started with minimal configuration, so you can focus on building your app. Simply choose a cloud region and then start building with documents that map directly to objects in your code. Your serverless database will automatically scale with your app's growth, charging only for the resources utilized. Whether you\u2019re just getting started or already have users all over the world, Atlas provides the capabilities to power today's most innovative applications while meeting the most demanding requirements for resilience, scale, and data privacy.\n\nIn this tutorial, we will walk you through getting started to build and deploy a simple serverless app that aggregates sales data stored in a MongoDB Atlas serverless instance using AWS Lambda as our compute engine and Amazon API Gateway as our fully managed service to create a RESTful API interface. Lastly, we will show you how easy this is using our recently published AWS CDK Level 3 constructs to better incorporate infrastructure as code (IaC) and DevOps best practices into your software development life cycle (SDLC). \n\nIn this step-by-step guide, we will walk you through the entire process. We will be starting from an empty directory in an Ubuntu 20.04 LTS environment, but feel free to follow along in any supported OS that you prefer.\n\nLet's get started!\n\n## Setup\n\n1. Create a MongoDB Atlas account. Already have an AWS account? 
Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via the AWS Marketplace.\n2. Create a MongoDB Atlas programmatic API key (PAK)\n3. Install and configure the AWS CLI and Atlas CLI in your terminal if you don\u2019t have them already. \n4. Install the latest versions of Node.js and npm. \n5. Lastly, for the playground code running on Lambda function, we will be using Python so will also require Python3 and pip installed on your terminal. \n\n## Step 1: install AWS CDK, Bootstrap, and Initialize \n\nThe AWS CDK is an open-source framework that lets you define and provision cloud infrastructure using code via AWS CloudFormation. It offers preconfigured components for easy cloud application development without the need for expertise. For more details, see the AWS CDK Getting Started guide. \n\nYou can install CDK using npm:\n\n```\nsudo npm install -g aws-cdk\n```\n\nNext, we need to \u201cbootstrap\u201d our AWS environment to create the necessary resources to manage the CDK apps (see AWS docs for full details). Bootstrapping is the process of preparing an environment for deployment. Bootstrapping is a one-time action that you must perform for every environment that you deploy resources into.\n\nThe `cdk bootstrap` command creates an Amazon S3 bucket for storing files, AWS IAM roles, and a CloudFormation stack to manage these scaffolding resources: \n\n```\ncdk bootstrap aws://ACCOUNT_NUMBER/REGION\n```\n\nNow, we can initialize a new CDK app using TypeScript. This is done using the cdk init command:\n\n```\ncdk init -l typescript \n```\n\nThis command initializes a new CDK app in TypeScript language. It creates a new directory with the necessary files and directories for a CDK app. When you initialize a new AWS CDK app, the CDK CLI sets up a project structure that organizes your application's code into a conventional layout. This layout includes bin and lib directories, among others, each serving a specific purpose in the context of a CDK app. Here's what each of these directories is for:\n\n- The **bin directory** contains the entry point of your CDK application. It's where you define which stacks from your application should be synthesized and deployed. Typically, this directory will have a .ts file (with the same name as your project or another meaningful name you choose) that imports stacks from the lib directory and initializes them.\n\n The bin directory's script is the starting point that the CDK CLI executes to synthesize CloudFormation templates from your definitions. It acts as the orchestrator, telling the CDK which stacks to include in the synthesis process.\n\n- The **lib directory** is where the core of your application's cloud infrastructure code lives. It's intended for defining CDK stacks and constructs, which are the building blocks of your AWS infrastructure. Typically, this directory will have a .ts file (with the same name as your project or another meaningful name you choose). \n\n The lib directory contains the actual definitions of those stacks \u2014 what resources they include, how those resources are configured, and how they interact. You can define multiple stacks in the lib directory and selectively instantiate them in the bin directory as needed.\n\n## Step 2: create and deploy the MongoDB Atlas Bootstrap Stack \n\nThe `atlas-cdk-bootstrap` CDK construct was designed to facilitate the smooth configuration and setup of the MongoDB Atlas CDK framework. 
This construct simplifies the process of preparing your environment to run the Atlas CDK by automating essential configurations and resource provisioning.\n\nKey features:\n\n- User provisioning: The atlas-cdk-bootstrap construct creates a dedicated execution role within AWS Identity and Access Management (IAM) for executing CloudFormation Extension resources. This helps maintain security and isolation for Atlas CDK operations.\n\n- Programmatic API key management: It sets up an AWS Secrets Manager to securely store and manage programmatic API Keys required for interacting with the Atlas services. This ensures sensitive credentials are protected and can be easily rotated.\n\n- CloudFormation Extensions activation: This construct streamlines the activation of CloudFormation public extensions essential for the MongoDB Atlas CDK. It provides a seamless interface for users to specify the specific CloudFormation resources that need to be deployed and configured.\n\nWith `atlas-cdk-bootstrap`, you can accelerate the onboarding process for Atlas CDK and reduce the complexity of environment setup. By automating user provisioning, credential management, and resource activation, this CDK construct empowers developers to focus on building and deploying applications using the MongoDB Atlas CDK without getting bogged down by manual configuration tasks.\n\nTo use the atlas-cdk-bootstrap, we will first need a specific CDK package called `awscdk-resources-mongodbatlas` (see more details on this package on our \n\nConstruct Hub page). Let's install it:\n\n```\nnpm install awscdk-resources-mongodbatlas\n```\n\nTo confirm that this package was installed correctly and to find its version number, see the package.json file. \n\nNext, in the .ts file in the **bin directory** (typically the same name as your project, i.e., `cloudshell-user.ts`), delete the entire contents and update with: \n\n```javascript\n#!/usr/bin/env node\nimport 'source-map-support/register';\nimport * as cdk from 'aws-cdk-lib';\nimport { AtlasBootstrapExample } from '../lib/cloudshell-user-stack'; //replace \"cloudshell-user\" with name of the .ts file in the lib directory\n\nconst app = new cdk.App();\nconst env = { region: process.env.CDK_DEFAULT_REGION, account: process.env.CDK_DEFAULT_ACCOUNT };\n\nnew AtlasBootstrapExample(app, 'mongodb-atlas-bootstrap-stack', { env });\n```\n\nNext, in the .ts file in the **lib directory** (typically the same name as your project concatenated with \u201c-stack\u201d, i.e., `cloudshell-user-stack.ts`), delete the entire contents and update with: \n\n```javascript\nimport * as cdk from 'aws-cdk-lib'\nimport { Construct } from 'constructs'\nimport {\n MongoAtlasBootstrap,\n MongoAtlasBootstrapProps,\n AtlasBasicResources\n} from 'awscdk-resources-mongodbatlas'\n\nexport class AtlasBootstrapExample extends cdk.Stack {\n constructor (scope: Construct, id: string, props?: cdk.StackProps) {\n super(scope, id, props)\n\n const roleName = 'MongoDB-Atlas-CDK-Excecution'\n const mongoDBProfile = 'development' \n\n const bootstrapProperties: MongoAtlasBootstrapProps = {\n roleName, secretProfile: mongoDBProfile,\n typesToActivate: 'ServerlessInstance', ...AtlasBasicResources]\n }\n\n new MongoAtlasBootstrap(this, 'mongodb-atlas-bootstrap', bootstrapProperties)\n }\n}\n```\n\nLastly, you can check and deploy the atlas-cdk-bootstrap CDK construct with: \n\n```\nnpx cdk diff mongodb-atlas-bootstrap-stack\nnpx cdk deploy mongodb-atlas-bootstrap-stack\n```\n\n## Step 3: store MongoDB Atlas PAK as env variables and 
update AWS Secrets Manager\n\nNow that the atlas-cdk-bootstrap CDK construct has been provisioned, we then store our previously created [MongoDB Atlas programmatic API keys in AWS Secrets Manager. For more information on how to create MongoDB Atas PAK, refer to Step 2 from our prerequisites setup. \n\nThis will allow the CloudFormation Extension execution role to provision key components including: MongoDB Atlas serverless instance, Atlas project, Atlas project IP access list, and database user. \n\nFirst, we must store these secrets as environment variables: \n\n```\nexport MONGO_ATLAS_PUBLIC_KEY=\u2019INPUT_YOUR_PUBLIC_KEY'\nexport MONGO_ATLAS_PRIVATE_KEY=\u2019INPUT_YOUR_PRIVATE_KEY'\n```\n\nThen, we can update AWS Secrets Manager with the following AWS CLI command: \n\n```\naws secretsmanager update-secret --secret-id cfn/atlas/profile/development --secret-string \"{\\\"PublicKey\\\":\\\"${MONGO_ATLAS_PUBLIC_KEY}\\\",\\\"PrivateKey\\\":\\\"${MONGO_ATLAS_PRIVATE_KEY}\\\"}\"\n```\n\n## Step 4: create and deploy the atlas-serverless-basic resource CDK L3 construct\n\nThe AWS CDK Level 3 (L3) constructs are high-level abstractions that encapsulate a set of related AWS resources and configuration logic into reusable components, allowing developers to define cloud infrastructure using familiar programming languages with less code. Developers use L3 constructs to streamline the process of setting up complex AWS and MongoDB Atlas services, ensuring best practices, reducing boilerplate code, and enhancing productivity through simplified syntax.\n\nThe MongoDB Atlas AWS CDK L3 construct for Atlas Serverless Basic provides developers with an easy and idiomatic way to deploy MongoDB Atlas serverless instances within AWS environments. Under the hood, this construct abstracts away the intricacies of configuring and deploying MongoDB Atlas serverless instances and related infrastructure on your behalf. \n\nNext, we then update our .ts file in the **bin directory** to: \n\n- Add the AtlasServerlessBasicStack to the import statement. \n- Add the Atlas Organization ID.\n- Add the IP address of NAT gateway which we suggest to be the only IP address on your Atlas serverless instance access whitelist. \n\n```javascript\n#!/usr/bin/env node\nimport 'source-map-support/register';\nimport * as cdk from 'aws-cdk-lib';\nimport { AtlasBootstrapExample, AtlasServerlessBasicStack } from '../lib/cloudshell-user-stack'; //update \"cloudshell-user\" with your stack name \n\nconst app = new cdk.App();\nconst env = { region: process.env.CDK_DEFAULT_REGION, account: process.env.CDK_DEFAULT_ACCOUNT };\n\n// the bootstrap stack\nnew AtlasBootstrapExample(app, 'mongodb-atlas-bootstrap-stack', { env });\n\ntype AccountConfig = {\n readonly orgId: string;\n readonly projectId?: string;\n}\n\nconst MyAccount: AccountConfig = {\n orgId: '63234d3234ec0946eedcd7da', //update with your Atlas Org ID \n};\n\nconst MONGODB_PROFILE_NAME = 'development';\n\n// the serverless stack with mongodb atlas serverless instance\nconst serverlessStack = new AtlasServerlessBasicStack(app, 'atlas-serverless-basic-stack', {\n env,\n ipAccessList: '46.137.146.59', //input your static IP Address from NAT Gateway\n profile: MONGODB_PROFILE_NAME,\n ...MyAccount,\n});\n```\n\nTo leverage this, we can update our .ts file in the **lib directory** to:\n\n- Update import blocks for newly used resources. \n- Activate underlying CloudFormation resources on the third-party CloudFormation registry. 
\n- Create a database username and password and store them in AWS Secrets Manager. \n- Update output blocks to display the Atlas serverless instance connection string and project name. \n\n```javascript\nimport * as path from 'path';\nimport {\n App, Stack, StackProps,\n Duration,\n CfnOutput,\n SecretValue,\n aws_secretsmanager as secretsmanager,\n} from 'aws-cdk-lib';\nimport * as cdk from 'aws-cdk-lib';\nimport { SubnetType } from 'aws-cdk-lib/aws-ec2';\nimport {\n MongoAtlasBootstrap,\n MongoAtlasBootstrapProps,\n AtlasBasicResources,\n AtlasServerlessBasic,\n ServerlessInstanceProviderSettingsProviderName,\n} from 'awscdk-resources-mongodbatlas';\nimport { Construct } from 'constructs';\n\nexport class AtlasBootstrapExample extends cdk.Stack {\n constructor (scope: Construct, id: string, props?: cdk.StackProps) {\n super(scope, id, props)\n\n const roleName = 'MongoDB-Atlas-CDK-Excecution'\n const mongoDBProfile = 'development' \n\n const bootstrapProperties: MongoAtlasBootstrapProps = {\n roleName: roleName,\n secretProfile: mongoDBProfile,\n typesToActivate: 'ServerlessInstance', ...AtlasBasicResources]\n }\n\n new MongoAtlasBootstrap(this, 'mongodb-atlascdk-bootstrap', bootstrapProperties)\n }\n}\n\nexport interface AtlasServerlessBasicStackProps extends StackProps {\n readonly profile: string;\n readonly orgId: string;\n readonly ipAccessList: string;\n}\nexport class AtlasServerlessBasicStack extends Stack {\n readonly dbUserSecret: secretsmanager.ISecret;\n readonly connectionString: string;\n constructor(scope: Construct, id: string, props: AtlasServerlessBasicStackProps) {\n super(scope, id, props);\n\n const stack = Stack.of(this);\n const projectName = `${stack.stackName}-proj`;\n\n const dbuserSecret = new secretsmanager.Secret(this, 'DatabaseUserSecret', {\n generateSecretString: {\n secretStringTemplate: JSON.stringify({ username: 'serverless-user' }),\n generateStringKey: 'password',\n excludeCharacters: '%+~`#$&*()|[]{}:;<>?!\\'/@\"\\\\=-.,',\n },\n });\n\n this.dbUserSecret = dbuserSecret;\n const ipAccessList = props.ipAccessList;\n\n // see https://github.com/mongodb/awscdk-resources-mongodbatlas/blob/main/examples/l3-resources/atlas-serverless-basic.ts#L22\n const basic = new AtlasServerlessBasic(this, 'serverless-basic', {\n serverlessProps: {\n profile: props.profile,\n providerSettings: {\n providerName: ServerlessInstanceProviderSettingsProviderName.SERVERLESS,\n regionName: 'EU_WEST_1',\n },\n },\n projectProps: {\n orgId: props.orgId,\n name: projectName,\n },\n dbUserProps: {\n username: 'serverless-user',\n },\n ipAccessListProps: {\n accessList: [\n { ipAddress: ipAccessList, comment: 'My first IP address' },\n ],\n },\n profile: props.profile,\n });\n\n this.connectionString = basic.mserverless.getAtt('ConnectionStrings.StandardSrv').toString();\n\n new CfnOutput(this, 'ProjectName', { value: projectName });\n new CfnOutput(this, 'ConnectionString', { value: this.connectionString });\n }\n}\n```\n\nLastly, you can check and deploy the atlas-serverless-basic CDK construct with: \n\n```\nnpx cdk diff atlas-serverless-basic-stack\nnpx cdk deploy atlas-serverless-basic-stack\n```\n\nVerify in the Atlas UI, as well as the AWS Management Console, that all underlying MongoDB Atlas resources have been created. Note the database username and password is stored as a new secret in AWS Secrets Manager (as specified in above AWS region of your choosing). 
\n\n## Step 5: copy the auto-generated database username and password created in AWS Secrets Manager secret into Atlas \n\nWhen we initially created the Atlas database user credentials, we created a random password, and we can\u2019t simply copy that into AWS Secrets Manager because this would expose our database password in our CloudFormation template. \n\nTo avoid this, we need to manually update the MongoDB Atlas database user password from the secret stored in AWS Secrets Manager so they will be in sync. The AWS Lambda function will then pick this password from AWS Secrets Manager to successfully authenticate to the Atlas serverless instance.\n\nWe can do this programmatically via the [Atlas CLI. To get started, we first need to make sure we have configured with the correct PAK that we created as part of our initial setup: \n\n```\natlas config init\n```\n\nWe then input the correct PAK and select the correct project ID. For example: \n\n for an AWS Lambda function that interacts with the MongoDB Atlas serverless instance via a public endpoint. It fetches database credentials from AWS Secrets Manager, constructs a MongoDB Atlas connection string using these credentials, and connects to the MongoDB Atlas serverless instance. \n\nThe function then generates and inserts 20 sample sales records with random data into a sales collection within the database. It also aggregates sales data for the year 2023, counting the number of sales and summing the total sales amount by item. Finally, it prints the count of sales in 2023 and the aggregation results, returning this information as a JSON response.\n\nHence, we populate the Lambda/playground/index.py with: \n\n```python\nfrom datetime import datetime, timedelta\nfrom pymongo.mongo_client import MongoClient\nfrom pymongo.server_api import ServerApi\nimport random, json, os, re, boto3\n\n# Function to generate a random datetime between two dates\ndef random_date(start_date, end_date):\n time_delta = end_date - start_date\n random_days = random.randint(0, time_delta.days)\n return start_date + timedelta(days=random_days)\n\ndef get_private_endpoint_srv(mongodb_uri, username, password):\n \"\"\"\n Get the private endpoint SRV address from the given MongoDB URI.\n e.g. `mongodb+srv://my-cluster.mzvjf.mongodb.net` will be converted to \n `mongodb+srv://:@my-cluster-pl-0.mzvjf.mongodb.net/?retryWrites=true&w=majority`\n \"\"\"\n match = re.match(r\"mongodb\\+srv://(.+)\\.(.+).mongodb.net\", mongodb_uri)\n if match:\n return \"mongodb+srv://{}:{}@{}-pl-0.{}.mongodb.net/?retryWrites=true&w=majority\".format(username, password, match.group(1), match.group(2))\n else:\n raise ValueError(\"Invalid MongoDB URI: {}\".format(mongodb_uri))\n\ndef get_public_endpoint_srv(mongodb_uri, username, password):\n \"\"\"\n Get the private endpoint SRV address from the given MongoDB URI.\n e.g. 
`mongodb+srv://my-cluster.mzvjf.mongodb.net` will be converted to \n `mongodb+srv://:@my-cluster.mzvjf.mongodb.net/?retryWrites=true&w=majority`\n \"\"\"\n match = re.match(r\"mongodb\\+srv://(.+)\\.(.+).mongodb.net\", mongodb_uri)\n if match:\n return \"mongodb+srv://{}:{}@{}.{}.mongodb.net/?retryWrites=true&w=majority\".format(username, password, match.group(1), match.group(2))\n else:\n raise ValueError(\"Invalid MongoDB URI: {}\".format(mongodb_uri))\n\n client = boto3.client('secretsmanager')\n conn_string_srv = os.environ.get('CONN_STRING_STANDARD')\n secretId = os.environ.get('DB_USER_SECRET_ARN')\n json_secret = json.loads(client.get_secret_value(SecretId=secretId).get('SecretString'))\n username = json_secret.get('username')\n password = json_secret.get('password')\n\ndef handler(event, context):\n# conn_string_private = get_private_endpoint_srv(conn_string_srv, username, password)\n conn_string = get_public_endpoint_srv(conn_string_srv, username, password)\n print('conn_string=', conn_string)\n\n client = MongoClient(conn_string, server_api=ServerApi('1'))\n\n # Select the database to use.\n db = client'mongodbVSCodePlaygroundDB']\n\n # Create 20 sample entries with dates spread between 2021 and 2023.\n entries = []\n\n for _ in range(20):\n item = random.choice(['abc', 'jkl', 'xyz', 'def'])\n price = random.randint(5, 30)\n quantity = random.randint(1, 20)\n date = random_date(datetime(2021, 1, 1), datetime(2023, 12, 31))\n entries.append({\n 'item': item,\n 'price': price,\n 'quantity': quantity,\n 'date': date\n })\n\n # Insert a few documents into the sales collection.\n sales_collection = db['sales']\n sales_collection.insert_many(entries)\n\n # Run a find command to view items sold in 2023.\n sales_2023 = sales_collection.count_documents({\n 'date': {\n '$gte': datetime(2023, 1, 1),\n '$lt': datetime(2024, 1, 1)\n }\n })\n\n # Print a message to the output window.\n print(f\"{sales_2023} sales occurred in 2023.\")\n\n pipeline = [\n # Find all of the sales that occurred in 2023.\n { '$match': { 'date': { '$gte': datetime(2023, 1, 1), '$lt': datetime(2024, 1, 1) } } },\n # Group the total sales for each product.\n { '$group': { '_id': '$item', 'totalSaleAmount': { '$sum': { '$multiply': [ '$price', '$quantity' ] } } } }\n ]\n\n cursor = sales_collection.aggregate(pipeline)\n results = list(cursor)\n print(results)\n response = {\n 'statusCode': 200,\n 'headers': {\n 'Content-Type': 'application/json'\n },\n 'body': json.dumps({\n 'sales_2023': sales_2023,\n 'results': results\n })\n }\n\n return response\n```\n\nLastly, we need to create one last file that will store our requirements for the Python playground application with: \n\n```\ntouch lambda/playground/requirements.txt \n```\n\nIn this file, we populate with: \n\n```\npymongo\nrequests\nboto3\ntestresources\nurllib3==1.26\n```\n\nTo then install these dependencies used in requirements.txt: \n\n```\ncd lambda/playground \npip install -r requirements.txt -t .\n```\n\nThis installs all required Python packages in the playground directory and AWS CDK would bundle into a zip file which we can see from AWS Lambda console after deployment.\n\n## Step 7: create suggested AWS networking infrastructure\n\nAWS Lambda functions placed in public subnets do not automatically have internet access because Lambda functions do not have public IP addresses, and a public subnet routes traffic through an internet gateway (IGW). 
To access the internet, a Lambda function can be associated with a private subnet with a route to a NAT gateway.\n\nFirst, ensure that you have NAT gateway created in your public subnet. Then, create a route from a private subnet (where your AWS Lambda resource will live) to the NAT gateway and route the public subnet to IGW. The benefits of this networking approach is that we can associate a static IP to our NAT gateway so this will be our one and only Atlas project IP access list entry. This means that all traffic is still going to the public internet through the NAT gateway and is TLS encrypted. The whitelist only allows the NAT gateway static public IP and nothing else. \n\nAlternatively, you can choose to build with [AWS PrivateLink which does carry additional costs but will dramatically simplify networking management by directly connecting AWS Lambda to a MongoDB Atlas severless instance without the need to maintain subnets, IGWs, or NAT gateways. Also, AWS PrivateLink creates a private connection to AWS services, reducing the risk of exposing data to the public internet. \n\nSelect whichever networking approach best suits your organization\u2019s needs. \n\n and walkthrough on a recent episode of MongoDB TV Cloud Connect (aired 15 Feb 2024). Also, see the GitHub repo with the full open-source code of materials used in this demo serverless application. \n\nThe MongoDB Atlas CDK resources are open-sourced under the Apache-2.0 license and we welcome community contributions. To learn more, see our contributing guidelines.\n\nGet started quickly by creating a MongoDB Atlas account through the AWS Marketplace and start building with MongoDB Atlas and the AWS CDK today! \n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7e6c094f4e095c73/65e61b2572b3874d4222d572/1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd0fe3f8f42f0b0ef/65e61b4a51368b8d36844989/2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt262d82355ecaefdd/65e61b6caca1713e9fa00cbb/3.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6c9d68ba0093af02/65e61badffa94a03503d58ca/4.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt76d88352feb37251/65e61bcf0f1d3518c7ca6612/5.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0c090ebd5066daf8/65e61beceef4e3c3891e7f5f/6.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte6c19ea4fdea4edf/65e61c10c7f05b2df68697fd/7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt904c5fdc6ec9b544/65e61c3111cd1d29f1a1b696/8.png", "format": "md", "metadata": {"tags": ["Atlas", "JavaScript", "Python", "Serverless", "AWS"], "pageDescription": "", "contentType": "Tutorial"}, "title": "Getting Started With MongoDB Atlas Serverless, AWS CDK, and AWS Serverless Computing", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/polymorphism-with-mongodb-csharp", "action": "created", "body": "# Using Polymorphism with MongoDB and C#\n\nIn comparison to relational database management systems (RDBMS), MongoDB's flexible schema is a huge step forward when handling object-oriented data. These structures often make use of polymorphism where common base classes contain the shared fields that are available for all classes in the hierarchy; derived classes add the fields that are relevant only to the specific objects. 
An example might be to have several types of vehicles, like cars and motorcycles, that have some fields in common, but each type also adds some fields that make only sense if used for a type: \n\nFor RDBMS, storing an object hierarchy is a challenge. One way is to store the data in a table that contains all fields of all classes, though for each row, only a subset of fields is needed. Another approach is to create a table for the base class that contains the shared fields and add a table for each derived class that stores the columns for the specific type and references the base table. Neither of these approaches is optimal in terms of storage and when it comes to querying the data. \n\nHowever, with MongoDB's flexible schema, one can easily store documents in the same collection that do only share some but not all fields. This article shows how the MongoDB C# driver makes it easy to use this for storing class hierarchies in a very natural way. \n\nExample use cases include storing metadata for various types of documents, e.g., offers, invoices, or other documents related to business partners in a collection. Common fields could be a document title, a summary, the date, a vector embedding, and the reference to the business partner, whereas an invoice would add fields for the line items and totals but would not add the fields for a project report. \n\nAnother possible use case is to serve both an overview and a detail view from the same collection. We will have a closer look at how to implement this in the summary of this article. \n\n# Basics\n\nWhen accessing a collection from C#, we use an object that implements `IMongoCollection` interface. This object can be created like this: \n\n```csharp\nvar vehiclesColl = db.CreateCollection(\"vehicles\");\n```\n\nWhen serializing or deserializing documents, the type parameter `T` and the actual type of the object provide the MongoDB C# driver with a hint on how to map the BSON representation to a C# class and vice versa. If only documents of the same type reside in the collection, the driver uses the class map of the type. \n\nHowever, to be able to handle class hierarchies correctly, the driver needs more information. This is where the *type discriminator* comes in. When storing a document of a derived type in the collection, the driver adds a field named `_t` to the document that contains the name of the class, e.g.:\n\n```csharp\nawait vehiclesColl.InsertOneAsync(new Car());\n```\n\nleads to the following document structure: \n\n```JSON\n{\n \"_id\": ObjectId(\"660d7d43e042f8f6f2726f6a\"),\n \"_t\": \"Car\",\n // ... fields for vehicle \n // ... fields specific to car\n}\n```\n\nWhen deserializing the document, the value of the `_t` field is used to identify the type of the object that is to be created. \n\nThough this works out of the box without specific configuration, it is advised to support the driver by specifying the class hierarchy explicitly by using the `BsonKnownTypes` attribute, if you are using declarative mapping: \n\n```csharp\nBsonKnownTypes(typeof(Car), typeof(Motorcycle))]\npublic abstract class Vehicle\n{\n // ...\n}\n```\n\nIf you configure the class maps imperatively, just add a class map for each type in the hierarchy to reach the same effect. \n\nBy default, only the name of the class is used as value for the type discriminator. 
Especially if the hierarchy spans several levels and you want to query for any level in the hierarchy, you should store the hierarchy as an array in the type discriminator by using the `BsonDiscriminator` attribute: \n\n```csharp\n[BsonDiscriminator(RootClass = true)]\n[BsonKnownTypes(typeof(Car), typeof(Motorcycle))]\npublic abstract class Vehicle\n{\n // ...\n}\n```\n\nThis applies a different discriminator convention to the documents and stores the hierarchy as an array:\n\n```JSON\n{\n \"_id\": ObjectId(\"660d81e5825f1c064024a591\"),\n \"_t\": [\n \"Vehicle\",\n \"Car\"\n ],\n // ...\n}\n```\n\nFor additional details on how to configure the class maps for polymorphic objects, see the [documentation of the driver. \n\n# Querying collections with polymorphic documents\n\nWhen reading objects from a collection, the MongoDB C# driver uses the type discriminator to identify the matching type and creates a C# object of the corresponding class. The following query might yield both `Car` and `Motorcycle` objects: \n\n```csharp\nvar vehiclesColl = db.GetCollection(\"vehicles\");\nvar vehicles = (await vehiclesColl.FindAsync(FilterDefinition.Empty))\n .ToEnumerable();\n```\n\nIf you are only interested in documents of a specific type, you can create another instance of `IMongoCollection` that returns only these: \n\n```csharp\nvar carsColl = vehiclesColl.OfType();\nvar cars = (await carsColl.FindAsync(FilterDefinition.Empty))\n .ToEnumerable();\n```\n\nThis new collection instance respects the corresponding type discriminator whenever an operation is performed. The following statement removes only `Car` documents from the collection but keeps the `Motorcycle` documents as they are: \n\n```csharp\nawait carsColl.DeleteManyAsync(FilterDefinition.Empty);\n```\n\nIf you are using the LINQ provider brought by the MongoDB C# driver, you can also use the LINQ `OfType` extension method to only retrieve the `Car` objects: \n\n```csharp\nvar cars = vehiclesColl.AsQueryable().OfType();\n```\n\n# Serving multiple views from a single collection\n\nAs promised before, we now take a closer look at a use case for polymorphism: Let's suppose we are building a system that supports monitoring sensors that are distributed over several sites. The system should provide an overview that lists all sites with their name and the last value that was reported for the site along with a timestamp. When selecting a site, the system shows detailed information for the site that consists of all the data on the overview and also lists the sensors that are located at the specific site with their last value and its timestamp. \n\nThis can be depicted by creating a base class for the documents that contains the id of the site, a name to identify the document, and the last measurement, if available. A derived class for the site overview adds the site address; another one for the sensor detail contains the location of the sensor: \n\n```csharp\nusing MongoDB.Bson;\nusing MongoDB.Bson.Serialization.Attributes;\n\npublic abstract class BaseDocument\n{\n BsonRepresentation(BsonType.ObjectId)]\n public string Id { get; set; } = ObjectId.GenerateNewId().ToString();\n\n [BsonRepresentation(BsonType.ObjectId)]\n public string SiteId { get; set; } = ObjectId.GenerateNewId().ToString();\n\n public string Name { get; set; } = string.Empty;\n\n public Measurement? 
Last { get; set; }\n}\n\npublic class Measurement\n{\n public int Value { get; set; }\n\n public DateTime Timestamp { get; set; }\n}\n\npublic class Address\n{\n // ...\n}\n\npublic class SiteOverview : BaseDocument\n{\n public Address Address { get; set; } = new();\n}\n\npublic class SensorDetail : BaseDocument\n{\n public string Location { get; set; } = string.Empty;\n}\n```\n\nWhen ingesting new measurements, both the site overview and the sensor detail are updated (for simplicity, we do not use a multi-document transaction): \n\n```csharp\nasync Task IngestMeasurementAsync(\n IMongoCollection overviewsColl,\n string sensorId,\n int value)\n{\n var measurement = new Measurement()\n {\n Value = value,\n Timestamp = DateTime.UtcNow\n };\n var sensorUpdate = Builders\n .Update\n .Set(x => x.Last, measurement);\n var sensorDetail = await overviewsColl\n .OfType()\n .FindOneAndUpdateAsync(\n x => x.Id == sensorId,\n sensorUpdate,\n new() { ReturnDocument = ReturnDocument.After });\n if (sensorDetail != null)\n {\n var siteUpdate = Builders\n .Update\n .Set(x => x.Last, measurement);\n var siteId = sensorDetail.SiteId;\n await overviewsColl\n .OfType()\n .UpdateOneAsync(x => x.SiteId == siteId, siteUpdate);\n }\n}\n```\n\nAbove sample uses `FindAndUpdateAsync` to both update the sensor detail document and also retrieve the resulting document so that the site id can be determined. If the site id is known beforehand, a simple update can also be used. \n\nWhen retrieving the documents for the site overview, the following code returns all the relevant documents: \n\n```csharp\nvar siteOverviews = (await overviewsColl\n .OfType()\n .FindAsync(FilterDefinition.Empty))\n .ToEnumerable();\n```\n\nWhen displaying detailed data for a specific site, the following query retrieves all documents for the site by its id in a single request: \n\n```csharp \nvar siteDetails = await (await overviewsColl\n .FindAsync(x => x.SiteId == siteId))\n .ToListAsync();\n```\n\nThe result of the query can contain objects of different types; you can use the LINQ `OfType` extension method on the list to discern between the types, e.g., when building a view model. \n\nThis approach allows for efficient querying from different perspectives so that central views of the application can be served with minimum load on the server. \n\n# Summary\n\nPolymorphism is an important feature of object-oriented languages and there is a wide range of use cases for it. As you can see, the MongoDB C# driver provides a solid bridge between object orientation and the MongoDB flexible document schema. If you want to dig deeper into the subject from a data modeling perspective, be sure to check out the [polymorphic pattern part of the excellent series \"Building With Patterns\" on the MongoDB Developer Center. ", "format": "md", "metadata": {"tags": ["C#"], "pageDescription": "An article discussing when and how to use polymorphism in a C# application using the MongoDB C# Driver.", "contentType": "Tutorial"}, "title": "Using Polymorphism with MongoDB and C#", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/cpp/me-and-the-devil-bluez-2", "action": "created", "body": "# Me and the Devil BlueZ: Reading BLE sensors from C++\n\n# Me and the Devil Bluez: Reading BLE sensors from C++\n\nIn our last article, I shared how to interact with Bluetooth Low Energy devices from a Raspberry Pi with Linux, using DBus and BlueZ. 
I did a step-by-step walkthrough on how to talk to a BLE device using a command line tool, so we had a clear picture of the sequence of operations that had to be performed to interact with the device. Then, I repeated the process but focused on the DBus messages that have to be exchanged to achieve that interaction.\n\nNow, it is time to put that knowledge into practice and implement an application that connects to the RP2 BLE sensor that we created in our second article and reads the value of the\u2026 temperature. (Yep, we will switch to noise sometime soon. Please, bear with me.)\n\nReady to start? Let's get cracking!\n\n## Setup\n\nThe application that we will be developing in this article is going to run on a Raspberry Pi 4B, our collecting station. You can use most other models, but I strongly recommend you connect it to your network using an ethernet cable and disable your WiFi. Otherwise, it might interfere with the Bluetooth communications.\n\nI will do all my development using Visual Studio Code on my MacBook Pro and connect via SSH to the Raspberry Pi (RPi). The whole project will be held in the RPi, and I will compile it and run it there. You will need the Remote - SSH extension installed in Visual Studio Code for this to work, and the first time you connect to the RPi, it will take some time to set it up. If you use Emacs, TRAMP is available out of the box.\n\nWe also need some software installed on the RPi. At the very least, we will need `git` and `CMake`, because that is the build system that I will be using for the project. The C++ compiler (g++) is installed by default in Raspberry Pi OS, but you can install `Clang` if you prefer to use LLVM.\n\n```sh\nsudo apt-get install git git-flow cmake\n```\n\nIn any case, we will need to install `sdbus-c++`. That is the library that allows us to interact with DBus using C++ bindings. There are several alternatives, but sdbus-c++ is properly maintained and has good documentation.\n\n```sh\nsudo apt-get install libsdbus-c++-{bin,dev,doc}\n```\n\n## Initial project\n\nI am going to write this project from scratch, so I want to be sure that you and I start with the same set of files. I am going to begin with a trivial `main.cpp` file, and then I will create the seed for the build instructions that we will use to produce the executable throughout this episode.\n\n### Initial main.cpp\n\nOur initial `main.cpp` file is just going to print a message:\n\n```cpp\n#include \n\nint main(int argc, char *argv])\n{\n std::cout << \"Noise Collector BLE\" << std::endl;\n\n return 0;\n}\n```\n\n### Basic project\n\nAnd now we should create a `CMakeLists.txt` file with the minimal build instructions for this project:\n\n```cmake\ncmake_minimum_required(VERSION 3.5)\nproject(NoiseCollectorBLE CXX)\nadd_executable(${PROJECT_NAME} main.cpp)\n```\n\nBefore we move forward, we are going to check that it all works fine:\n\n```sh\nmkdir build\ncmake -S . -B build\ncmake --build build\n./build/NoiseCollectorBLE\n```\n\n## Talk to DBus from C++\n\n### Send the first message\n\nNow that we have set the foundations of the project, we can send our first message to DBus. A good one to start with is the one we use to query if the Bluetooth radio is on or off.\n\n1. Let's start by adding the library to the project using CMake's `find_package` command:\n \n ```cmake\n find_package(sdbus-c++ REQUIRED)\n ```\n2. The library must be linked to our binary:\n \n ```cmake\n target_link_libraries(${PROJECT_NAME} PRIVATE SDBusCpp::sdbus-c++)\n ```\n3. 
And we enforce the usage of the C++17 standard because it is required by the library:\n \n ```cmake\n set(CMAKE_CXX_STANDARD 17)\n set(CMAKE_CXX_STANDARD_REQUIRED ON)\n ```\n4. With the library in place, let's create the skeleton to implement our BLE sensor. We first create the `BleSensor.h` file:\n \n ```cpp\n #ifndef BLE_SENSOR_H\n #define BLE_SENSOR_H\n \n class BleSensor\n {\n };\n \n #endif // BLE_SENSOR_H\n ```\n5. We add a constructor and a method that will take care of all the steps required to scan for and connect to the sensor:\n \n ```cpp\n public:\n BleSensor();\n void scanAndConnect();\n ```\n6. In order to talk to BlueZ, we should create a proxy object. A proxy is a local object that allows us to interact with the remote DBus object. Creating the proxy instance without passing a connection to it means that the proxy will create its own connection automatically, and it will be a system bus connection.\n \n ```cpp\n private:\n std::unique_ptr bluezProxy;\n ```\n7. And we need to include the library:\n \n ```cpp\n #include \n ```\n8. Let's create a `BleSensor.cpp` file for the implementation and include the header file that we have just created:\n \n ```cpp\n #include \"BleSensor.h\"\n ```\n9. That proxy requires the name of the service and a path to the instance that we want to talk to, so let's define both as constants inside of the constructor:\n \n ```cpp\n BleSensor::BleSensor()\n {\n const std::string SERVICE_BLUEZ { \"org.bluez\" };\n const std::string OBJECT_PATH { \"/org/bluez/hci0\" };\n \n bluezProxy = sdbus::createProxy(SERVICE_BLUEZ, OBJECT_PATH);\n }\n ```\n10. Let's add the first step to our scanAndConnect method using a private function that we declare in the header:\n \n ```cpp\n bool getBluetoothStatus();\n ```\n11. Following this, we write the implementation, where we use the proxy that we created before to send a message. We define a message to a method on an interface using the required parameters, which we learned using the introspectable interface and the DBus traces. The result is a *variant* that can be casted to the proper type using the overloaded `operator()`:\n \n ```cpp\n bool BleSensor::getBluetoothStatus()\n {\n const std::string METHOD_GET { \"Get\" };\n const std::string INTERFACE_PROPERTIES { \"org.freedesktop.DBus.Properties\" };\n const std::string INTERFACE_ADAPTER { \"org.bluez.Adapter1\" };\n const std::string PROPERTY_POWERED { \"Powered\" };\n sdbus::Variant variant;\n \n // Invoke a method that gets a property as a variant\n bluezProxy->callMethod(METHOD_GET)\n .onInterface(INTERFACE_PROPERTIES)\n .withArguments(INTERFACE_ADAPTER, PROPERTY_POWERED)\n .storeResultsTo(variant);\n \n return (bool)variant;\n }\n ```\n12. We use this private method from our public one:\n \n ```cpp\n void BleSensor::scanAndConnect()\n {\n try\n {\n // Enable Bluetooth if not yet enabled\n if (getBluetoothStatus())\n {\n std::cout << \"Bluetooth powered ON\\n\";\n } else\n {\n std::cout << \"Powering bluetooth ON\\n\";\n }\n }\n catch(sdbus::Error& error)\n {\n std::cerr << \"ERR: on scanAndConnect(): \" << error.getName() << \" with message \" << error.getMessage() << std::endl;\n }\n }\n ```\n13. And include the iostream header:\n \n ```cpp\n #include \n ```\n14. We need to add the source files to the project:\n \n ```cmake\n file(GLOB SOURCES \"*.cpp\")\n add_executable(${PROJECT_NAME} ${SOURCES})\n ```\n15. 
Finally, we import the header that we have defined in the `main.cpp`, create an instance of the object, and invoke the method:\n \n ```cpp\n #include \"BleSensor.h\"\n \n int main(int argc, char *argv[])\n {\n std::cout << \"Noise Collector BLE\" << std::endl;\n BleSensor bleSensor;\n bleSensor.scanAndConnect();\n ```\n16. We compile it with CMake and run it.\n\n### Send a second message\n\nOur first message queried the status of a property. We can also change things using messages, like the status of the Bluetooth radio:\n\n1. We declare a second private method in the header:\n \n ```cpp\n void setBluetoothStatus(bool enable);\n ```\n2. And we also add it to the implementation file \u2013in this case, only the message without the constants:\n \n ```cpp\n void BleSensor::setBluetoothStatus(bool enable)\n {\n // Invoke a method that sets a property as a variant\n bluezProxy->callMethod(METHOD_SET)\n .onInterface(INTERFACE_PROPERTIES)\n .withArguments(INTERFACE_ADAPTER, PROPERTY_POWERED, sdbus::Variant(enable))\n // .dontExpectReply();\n .storeResultsTo();\n }\n ```\n3. As you can see, the calls to create and send the message use most of the same constants. The only new one is the `METHOD_SET`, used instead of `METHOD_GET`. We set that one inside of the method:\n \n ```cpp\n const std::string METHOD_SET { \"Set\" };\n ```\n4. And we make the other three static constants of the class. Prior to C++17, we would have had to declare them in the header and initialize them in the implementation, but since then, we can use `inline` to initialize them in place. That helps readability:\n \n ```cpp\n static const std::string INTERFACE_ADAPTER { \"org.bluez.Adapter1\" };\n static const std::string PROPERTY_POWERED { \"Powered\" };\n static const std::string INTERFACE_PROPERTIES { \"org.freedesktop.DBus.Properties\" };\n ```\n5. With the private method complete, we use it from the public one:\n \n ```cpp\n if (getBluetoothStatus())\n {\n std::cout << \"Bluetooth powered ON\\n\";\n } else\n {\n std::cout << \"Powering bluetooth ON\\n\";\n setBluetoothStatus(true);\n }\n ```\n6. The second message is ready and we can build and run the program. You can verify its effects using `bluetoothctl`.\n\n## Deal with signals\n\nThe next thing we would like to do is to enable scanning for BLE devices, find the sensor that we care about, connect to it, and disable scanning. Obviously, when we start scanning, we don't get to know the available BLE devices right away. Some reply almost instantaneously, and some will answer a little later. DBus will send signals, asynchronous messages that are pushed to a given object, that we will listen to.\n\n### Use messages that have a delayed response\n\n1. We are going to use a private method to enable and disable the scanning. The first thing to do is to have it declared in our header:\n \n ```cpp\n void enableScanning(bool enable);\n ```\n2. In the implementation file, the method is going to be similar to the ones we have defined before. Here, we don't have to worry about the reply because we have to wait for our sensor to show up:\n \n ```cpp\n void BleSensor::enableScanning(bool enable)\n {\n const std::string METHOD_START_DISCOVERY { \"StartDiscovery\" };\n const std::string METHOD_STOP_DISCOVERY { \"StopDiscovery\" };\n \n std::cout << (enable?\"Start\":\"Stop\") << \" scanning\\n\";\n bluezProxy->callMethod(enable?METHOD_START_DISCOVERY:METHOD_STOP_DISCOVERY)\n .onInterface(INTERFACE_ADAPTER)\n .dontExpectReply();\n }\n ```\n3. 
We can then use that method in our public one to enable and disable scanning:\n \n ```cpp\n enableScanning(true);\n // Wait to be connected to the sensor\n enableScanning(false);\n ```\n4. We need to wait for the devices to answer, so let's add some delay between both calls:\n \n ```cpp\n // Wait to be connected to the sensor\n std::this_thread::sleep_for(std::chrono::seconds(10))\n ```\n5. And we add the headers for this new code:\n \n ```cpp\n #include \n #include \n ```\n6. If we build and run, we will see no errors but no results of our scanning, either. Yet.\n\n### Subscribe to signals\n\nIn order to get the data of the devices that scanning for devices produces, we need to be listening to the signals sent that are broadcasted through the bus.\n\n1. We need to interact with a different DBus object so we need another proxy. Let's declare it in the header:\n \n ```cpp\n std::unique_ptr rootProxy;\n ```\n2. And instantiate it in the constructor:\n \n ```cpp\n rootProxy = sdbus::createProxy(SERVICE_BLUEZ, \"/\");\n ```\n3. Next, we define the private method that will take care of the subscription:\n \n ```cpp\n void subscribeToInterfacesAdded();\n ```\n4. The implementation is simple: We provide a closure to be called on a different thread every time we receive a signal that matches our parameters:\n \n ```cpp\n void BleSensor::subscribeToInterfacesAdded()\n {\n const std::string INTERFACE_OBJ_MGR { \"org.freedesktop.DBus.ObjectManager\" };\n const std::string MEMBER_IFACE_ADDED { \"InterfacesAdded\" };\n \n // Let's subscribe for the interfaces added signals (AddMatch)\n rootProxy->uponSignal(MEMBER_IFACE_ADDED).onInterface(INTERFACE_OBJ_MGR).call(interfaceAddedCallback);\n rootProxy->finishRegistration();\n }\n ```\n5. The closure has to take as arguments the data that comes with a signal: a string for the path that points to an object in DBus and a dictionary of key/values, where the keys are strings and the values are dictionaries of strings and values:\n \n ```cpp\n auto interfaceAddedCallback = [this\n {\n };\n ```\n6. We will be doing more with the data later, but right now, displaying the thread id, the object path, and the device name, if it exists, will suffice. We use a regular expression to restrict our attention to the Bluetooth devices:\n \n ```cpp\n const std::regex DEVICE_INSTANCE_RE{\"^/org/bluez/hci0-9]/dev(_[0-9A-F]{2}){6}$\"};\n std::smatch match;\n std::cout << \"(TID: \" << std::this_thread::get_id() << \") \";\n if (std::regex_match(path, match, DEVICE_INSTANCE_RE)) {\n std::cout << \"Device iface \";\n \n if (dictionary[\"org.bluez.Device1\"].count(\"Name\") == 1)\n {\n auto name = (std::string)(dictionary[\"org.bluez.Device1\"].at(\"Name\"));\n std::cout << name << \" @ \" << path << std::endl;\n } else\n {\n std::cout << \" @ \" << path << std::endl;\n }\n } else {\n std::cout << \"*** UNEXPECTED SIGNAL ***\";\n }\n ```\n7. And we add the header for regular expressions:\n \n ```cpp\n #include \n ```\n8. We use the private method **before** we start scanning:\n \n ```cpp\n subscribeToInterfacesAdded();\n ```\n9. And we print the thread id in that same method:\n \n ```cpp\n std::cout << \"(TID: \" << std::this_thread::get_id() << \") \";\n ```\n10. If you build and run this code, it should display information about the BLE devices that you have around you. 
You can show it to your friends and tell them that you are searching for spy microphones.\n\n## Communicate with the sensor\n\nWell, that looks like progress to me, but we are still missing the most important features: connecting to the BLE device and reading values from it.\n\nWe should connect to the device, if we find it, from the closure that we use in `subscribeToInterfacesAdded()`, and then, we should stop scanning. However, that closure and the method `scanAndConnect()` are running in different threads concurrently. When the closure connects to the device, it should *inform* the main thread, so it stops scanning. We are going to use a mutex to protect concurrent access to the data that is shared between those two threads and a conditional variable to let the other thread know when it has changed.\n\n### Connect to the BLE device\n\n1. First, we are going to declare a private method to connect to a device by name:\n \n ```cpp\n void connectToDevice(sdbus::ObjectPath path);\n ```\n2. We will obtain that object path from the signals that tell us about the devices discovered while scanning. We will compare the name in the dictionary of properties of the signal with the name of the sensor that we are looking for. We'll receive that name through the constructor, so we need to change its declaration:\n \n ```cpp\n BleSensor(const std::string &sensor_name);\n ```\n3. And declare a field that will be used to hold the value:\n \n ```cpp\n const std::string deviceName;\n ```\n4. If we find the device, we will create a proxy to the object that represents it:\n \n ```cpp\n std::unique_ptr deviceProxy;\n ```\n5. We move to the implementation and start by adapting the constructor to initialize the new values using the preamble:\n \n ```cpp\n BleSensor::BleSensor(const std::string &sensor_name)\n : deviceProxy{nullptr}, deviceName{sensor_name}\n ```\n6. We then create the method:\n \n ```cpp\n void BleSensor::connectToDevice(sdbus::ObjectPath path)\n {\n }\n ```\n7. We create a proxy for the device that we have selected using the name:\n \n ```cpp\n deviceProxy = sdbus::createProxy(SERVICE_BLUEZ, path);\n ```\n8. And move the declaration of the service constant, which is now used in two places, to the header:\n \n ```cpp\n inline static const std::string SERVICE_BLUEZ{\"org.bluez\"};\n ```\n9. And send a message to connect to it:\n \n ```cpp\n deviceProxy->callMethodAsync(METHOD_CONNECT).onInterface(INTERFACE_DEVICE).uponReplyInvoke(connectionCallback);\n std::cout << \"Connection method started\" << std::endl;\n ```\n10. We define the constants that we are using:\n \n ```cpp\n const std::string INTERFACE_DEVICE{\"org.bluez.Device1\"};\n const std::string METHOD_CONNECT{\"Connect\"};\n ```\n11. And the closure that will be invoked. The use of `this` in the capture specification allows access to the object instance. The code in the closure will be added below.\n \n ```cpp\n auto connectionCallback = [this\n {\n };\n ```\n12. The private method can now be used to connect from the method `BleSensor::subscribeToInterfacesAdded()`. We were already extracting the name of the device, so now we use it to connect to it:\n \n ```cpp\n if (name == deviceName)\n {\n std::cout << \"Connecting to \" << name << std::endl;\n connectToDevice(path);\n }\n ```\n13. We would like to stop scanning once we are connected to the device. This happens in two different threads, so we are going to use the producer-consumer concurrency design pattern to achieve the expected behavior. 
We define a few new fields \u2013one for the mutex, one for the conditional variable, and one for a boolean flag:\n \n ```cpp\n std::mutex mtx;\n std::condition_variable cv;\n bool connected;\n ```\n14. And we include the required headers:\n \n ```cpp\n #include \n ```\n15. They are initialized in the constructor preamble:\n \n ```cpp\n BleSensor::BleSensor(const std::string &sensor_name)\n : deviceProxy{nullptr}, deviceName{sensor_name},\n cv{}, mtx{}, connected{false}\n ```\n16. We can then use these new fields in the `BleSensor::scanAndConnect()` method. First, we get a unique lock on the mutex before subscribing to notifications:\n \n ```cpp\n std::unique_lock lock(mtx);\n ```\n17. Then, between the start and the stop of the scanning process, we wait for the conditional variable to be signaled. This is a more robust and reliable implementation than using the delay:\n \n ```cpp\n enableScanning(true);\n // Wait to be connected to the sensor\n cv.wait(lock, this\n { return connected; });\n enableScanning(false);\n ```\n18. In the `connectionCallback`, we first deal with errors, in case they happen:\n \n ```cpp\n if (error != nullptr)\n {\n std::cerr << \"Got connection error \"\n << error->getName() << \" with message \"\n << error->getMessage() << std::endl;\n return;\n }\n ```\n19. Then, we get a lock on the same mutex, change the flag, release the lock, and signal the other thread through the connection variable:\n \n ```cpp\n std::unique_lock lock(mtx);\n std::cout << \"Connected!!!\" << std::endl;\n connected = true;\n lock.unlock();\n cv.notify_one();\n std::cout << \"Finished connection method call\" << std::endl;\n ```\n20. Finally, we change the initialization of the BleSensor in the main file to pass the sensor name:\n \n ```cpp\n BleSensor bleSensor { \"RP2-SENSOR\" };\n ```\n21. If we compile and run what we have so far, we should be able to connect to the sensor. But if the sensor isn't there, it will wait indefinitely. If you have problems connecting to your device and get \"le-connection-abort-by-local,\" use an ethernet cable instead of WiFi and disable it with `sudo ip link set wlan0 down`.\n\n### Read from the sensor\n\nNow that we have a connection to the BLE device, we will receive signals about other interfaces added. These are going to be the services, characteristics, and descriptors. If we want to read data from a characteristic, we have to find it \u2013using its UUID for example\u2013 and use DBus's \"Read\" method to get its value. We already have a closure that is invoked every time a signal is received because an interface is added, but in this closure, we verify that the object path corresponds to a device, instead of to a Bluetooth attribute.\n\n1. We want to match the object path against the structure of a BLE attribute, but we want to do that only when the device is already connected. So, we surround the existing regular expression match:\n \n ```cpp\n if (!connected)\n {\n // Current code with regex goes here.\n }\n else\n {\n }\n ```\n2. In the *else* part, we add a different match:\n \n ```cpp\n if (std::regex_match(path, match, DEVICE_ATTRS_RE))\n {\n }\n else\n {\n std::cout << \"Not a characteristic\" << std::endl;\n }\n ```\n3. That code requires the regular expression declared in the method:\n \n ```cpp\n const std::regex DEVICE_ATTRS_RE{\"^/org/bluez/hci\\\\d/dev(_0-9A-F]{2}){6}/service\\\\d{4}/char\\\\d{4}\"};\n ```\n4. 
If the path matches the expression, we check if it has the UUID of the characteristic that we want to read:\n \n ```cpp\n std::cout << \"Characteristic \" << path << std::endl;\n if ((dictionary.count(\"org.bluez.GattCharacteristic1\") == 1) &&\n (dictionary[\"org.bluez.GattCharacteristic1\"].count(\"UUID\") == 1))\n {\n auto name = (std::string)(dictionary[\"org.bluez.GattCharacteristic1\"].at(\"UUID\"));\n if (name == \"00002a1c-0000-1000-8000-00805f9b34fb\")\n {\n }\n }\n ```\n5. When we find the desired characteristic, we need to create (yes, you guessed it) a proxy to send messages to it.\n \n ```cpp\n tempAttrProxy = sdbus::createProxy(SERVICE_BLUEZ, path);\n std::cout << \"<<>> \" << path << std::endl;\n ```\n6. That proxy is stored in a field that we haven't declared yet. Let's do so in the header file:\n \n ```cpp\n std::unique_ptr tempAttrProxy;\n ```\n7. And we do an explicit initialization in the constructor preamble:\n \n ```cpp\n BleSensor::BleSensor(const std::string &sensor_name)\n : deviceProxy{nullptr}, tempAttrProxy{nullptr},\n cv{}, mtx{}, connected{false}, deviceName{sensor_name}\n ```\n8. Everything is ready to read, so let's declare a public method to do the reading:\n \n ```cpp\n void getValue();\n ```\n9. And a private method to send the DBus messages:\n \n ```cpp\n void readTemperature();\n ```\n10. We implement the public method, just using the private method:\n \n ```cpp\n void BleSensor::getValue()\n {\n readTemperature();\n }\n ```\n11. And we do the implementation on the private method:\n \n ```cpp\n void BleSensor::readTemperature()\n {\n tempAttrProxy->callMethod(METHOD_READ)\n .onInterface(INTERFACE_CHAR)\n .withArguments(args)\n .storeResultsTo(result);\n }\n ```\n12. We define the constants that we used:\n \n ```cpp\n const std::string INTERFACE_CHAR{\"org.bluez.GattCharacteristic1\"};\n const std::string METHOD_READ{\"ReadValue\"};\n ```\n13. And the variable that will be used to qualify the query to have a zero offset as well as the one to store the response of the method:\n \n ```cpp\n std::map args{{{\"offset\", sdbus::Variant{std::uint16_t{0}}}}};\n std::vector result;\n ```\n14. The temperature starts on the second byte of the result (offset 1) and ends on the fifth, which in this case is the last one of the array of bytes. We can extract it:\n \n ```cpp\n std::cout << \"READ: \";\n for (auto value : result)\n {\n std::cout << +value << \" \";\n }\n std::vector number(result.begin() + 1, result.end());\n ```\n15. Those bytes in ieee11073 format have to be transformed into a regular float, and we use a private method for that:\n \n ```cpp\n float valueFromIeee11073(std::vector binary);\n ```\n16. That method is implemented by reversing the transformation that we did on [the second article of this series:\n \n ```cpp\n float BleSensor::valueFromIeee11073(std::vector binary)\n {\n float value = static_cast(binary0]) + static_cast(binary[1]) * 256.f + static_cast(binary[2]) * 256.f * 256.f;\n float exponent;\n if (binary[3] > 127)\n {\n exponent = static_cast(binary[3]) - 256.f;\n }\n else\n {\n exponent = static_cast(binary[3]);\n }\n return value * pow(10, exponent);\n }\n ```\n17. That implementation requires including the math declaration:\n \n ```cpp\n #include \n ```\n18. We use the transformation after reading the value:\n \n ```cpp\n std::cout << \"\\nTemp: \" << valueFromIeee11073(number);\n std::cout << std::endl;\n ```\n19. And we use the public method in the main function. 
We should use the producer-consumer pattern here again to know when the proxy to the temperature characteristic is ready, but I have cut corners again for this initial implementation using a couple of delays to ensure that everything works fine.\n \n ```cpp\n std::this_thread::sleep_for(std::chrono::seconds(5));\n bleSensor.getValue();\n std::this_thread::sleep_for(std::chrono::seconds(5));\n ```\n20. In order for this to work, the thread header must be included:\n \n ```cpp\n #include \n ```\n21. We build and run to check that a value can be read.\n\n### Disconnect from the BLE sensor\n\nFinally, we should disconnect from this device to leave things as we found them. If we don't, re-running the program won't work because the sensor will still be connected and busy.\n\n1. We declare a public method in the header to handle disconnections:\n \n ```cpp\n void disconnect();\n ```\n2. And a private one to send the corresponding DBus message:\n \n ```cpp\n void disconnectFromDevice();\n ```\n3. In the implementation, the private method sends the required message and creates a closure that gets invoked when the device gets disconnected:\n \n ```cpp\n void BleSensor::disconnectFromDevice()\n {\n const std::string INTERFACE_DEVICE{\"org.bluez.Device1\"};\n const std::string METHOD_DISCONNECT{\"Disconnect\"};\n \n auto disconnectionCallback = [this\n {\n };\n \n {\n deviceProxy->callMethodAsync(METHOD_DISCONNECT).onInterface(INTERFACE_DEVICE).uponReplyInvoke(disconnectionCallback);\n std::cout << \"Disconnection method started\" << std::endl;\n }\n }\n ```\n4. And that closure has to change the connected flag using exclusive access:\n \n ```cpp\n if (error != nullptr)\n {\n std::cerr << \"Got disconnection error \" << error->getName() << \" with message \" << error->getMessage() << std::endl;\n return;\n }\n std::unique_lock lock(mtx);\n std::cout << \"Disconnected!!!\" << std::endl;\n connected = false;\n deviceProxy = nullptr;\n lock.unlock();\n std::cout << \"Finished connection method call\" << std::endl;\n ```\n5. The private method is used from the public method:\n \n ```cpp\n void BleSensor::disconnect()\n {\n std::cout << \"Disconnecting from device\" << std::endl;\n disconnectFromDevice();\n }\n ```\n6. And the public method is used from the main function:\n \n ```cpp\n bleSensor.disconnect();\n ```\n7. Build and run to see the final result.\n\n## Recap and future work\n\nIn this article, I have used C++ to write an application that reads data from a Bluetooth Low Energy sensor. I have realized that writing C++ is **not** like riding a bike. Many things have changed since I wrote my last C++ code that went into production, but I hope I did a decent job at using it for this task.\n,\" caused by a \"Connection Failed to be Established (0x3e),\" when attempting to connect to the Bluetooth sensor. It happened often but not always. In the beginning, I didn't know if it was my code to blame, the library, or what. After catching exceptions everywhere, printing every message, capturing Bluetooth traces with `btmon`, and not finding much (although I did learn a few new things from Unix & Linux StackExchange, Stack Overflow and the Raspberry Pi forums), I suddenly realized that the culprit was the Raspberry Pi WiFi/Bluetooth chip. The symptom was an unreliable Bluetooth connection, but my sensor and the RPi were very close to each other and without any relevant interference from the environment. 
The root cause was sharing the radio frequency (RF) in the same chip (Broadcom BCM43438) with a relatively small antenna. I switched from the RPi3A+ to an RPi4B with an ethernet cable and WiFi disabled and, all of a sudden, things started to work.\n\nEven though the implementation wasn't too complex and the proof of concept was passed, the hardware issue raised some concerns. It would only get worse if I talked to several sensors instead of just one. And that is exactly what we will do in future episodes to collect the data from the sensor and send it to a MongoDB Cluster with time series. I could still use a USB Bluetooth dongle and ignore the internal hardware. But before I take that road, I would like to work on the MQTT alternative and make a better informed decision. And that will be our next episode.\n\nStay curious, hack your code, and see you next time!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdc6a10ecba495cf2/658195252f46f765e48223bf/yogendra-singh-BxHnbYyNfTg-unsplash.jpg", "format": "md", "metadata": {"tags": ["C++", "RaspberryPi"], "pageDescription": "This article is a step-by-step description of the process of writing a C++ application from scratch that reads from a Bluetooth Low Energy sensor using DBus and BlueZ. The resulting app will run in a Raspberry Pi and might be the seed for the collecting station that will upload data to a MongoDB cluster in the Cloud.", "contentType": "Tutorial"}, "title": "Me and the Devil BlueZ: Reading BLE sensors from C++", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/rag-workflow-with-atlas-amazon-bedrock", "action": "created", "body": "# Launch a Fully Managed RAG Workflow With MongoDB Atlas and Amazon Bedrock\n\n## Introduction\n\nMongoDB Atlas is now natively integrated with Amazon Bedrock Knowledge Base, making it even easier to build generative AI applications backed by enterprise data. \n\nAmazon Bedrock, Amazon Web Services\u2019 (AWS) managed cloud service for generative AI, empowers developers to build applications on top of powerful foundation models like Anthropic's Claude, Cohere Embed, and Amazon Titan. By integrating with Atlas Vector Search, Amazon Bedrock enables customers to leverage the vector database capabilities of Atlas to bring up-to-date context to Foundational Model outputs using proprietary data. \n\nWith the click of a button (see below), Amazon Bedrock now integrates MongoDB Atlas as a vector database into its fully managed, end-to-end retrieval-augmented generation (RAG) workflow, negating the need to build custom integrations to data sources or manage data flows. \n\nCompanies using MongoDB Atlas and Amazon Bedrock can now rapidly deploy and scale generative AI apps grounded in the latest up-to-date and accurate enterprise data. For enterprises with the most demanding privacy requirements, this capability is also available via AWS PrivateLink (more details at the bottom of this article).\n\n## What is retrieval-augmented generation?\n\nOne of the biggest challenges when working with generative AI is trying to avoid hallucinations, or erroneous results returned by the foundation model (FM) being used. 
The FMs are trained on public information that gets outdated quickly and the models cannot take advantage of the proprietary information that enterprises possess.\n\nOne way to tackle hallucinating FMs is to supplement a query with your own data using a workflow known as retrieval-augmented generation, or RAG. In a RAG workflow, the FM will seek specific data \u2014 for instance, a customer's previous purchase history \u2014 from a designated database that acts as a \u201csource of truth\u201d to augment the results returned by the FM. For a generative AI FM to search for, locate, and augment its responses, the relevant data needs to be turned into a vector and stored in a vector database.\n\n## How does the Knowledge Base integration work?\n\nWithin Amazon Bedrock, developers can now \u201cclick to add\u201d MongoDB Atlas as a knowledge base for their vector data store to power RAG.\n\nIn the workflow, a customer chooses two different models: an embedding model and a generative model. These models are then orchestrated and used by Bedrock Agents during the interaction with the knowledge base \u2014 in this case, MongoDB Atlas.\n\nBedrock reads your text data from an S3 bucket, chunks the data, and then uses the embedding model chosen by the user to create the vector embeddings, storing these text chunks, embeddings, and related metadata in MongoDB Atlas\u2019 vector database. An Atlas vector search index is also created as part of the setup for querying the vector embeddings.\n\n combines operational, vector, and metadata in a single platform, making it an ideal knowledge base for Amazon Bedrock users who want to augment their generative AI experiences while also simplifying their generative AI stack.\n\nIn addition, MongoDB Atlas gives developers the ability to set up dedicated infrastructure for search and vector search workloads, optimizing compute resources to scale search and database independently. \n\n## Solution architecture\n\n to populate our knowledge base. Please download the PDF (by clicking on \u201cRead Whitepaper\u201d or \u201cEmail me the PDF\u201d). Alternatively, you can download it from the GitHub repository. Once you have the PDF, upload it into an S3 bucket for hosting. (Note the bucket name as we will use it later in the article.) \n\n## Prerequisites\n\n* MongoDB Atlas account\n* AWS account \n\n## Implementation steps\n\n### Atlas Cluster and Database Setup \n\n* Login or Signup][3] to MongoDB Atlas \n* [Setup][4] the MongoDB Atlas cluster with a M10 or greater configuration. *Note M0 or free cluster will not support this setup.*\n* Setup the [database user][5] and [Network access][6].\n* Copy the [connection string][7].\n* [Create][8] a database and collection\n\n![The screenshot shows the navigation of creating a database in MongoDB Atlas.][9]\n\n### Atlas Vector Search index\n\nBefore we create an Amazon Bedrock knowledge base (using MongoDB Atlas), we need to create an Atlas Vector Search index.\n\n* In the MongoDB Atlas Console, navigate to your cluster and select the _Atlas Search_ tab. 
\n\n![Atlas console navigation to create the search index][10]\n\n* Select _Create Search Index_, select _Atlas Vector Search_, and select _Next_.\n\n![The screenshot shows the MongoDB Atlas Search Index navigation.][11]\n\n* Select the database and the collection where the embeddings are stored.\n\n![MongoDB Atlas Search Index navigation][12] \n\n* Supply the following JSON in the index definition and click _Next_, confirming and creating the index on the next page.\n\n ```\n {\n \"fields\": [\n {\n \"numDimensions\": 1536,\n \"path\": \"bedrock_embedding\",\n \"similarity\": \"cosine\",\n \"type\": \"vector\"\n },\n {\n \"path\": \"bedrock_metadata\",\n \"type\": \"filter\"\n },\n {\n \"path\": \"bedrock_text_chunk\",\n \"type\": \"filter\"\n }\n ]\n }\n ```\n![The screenshot shows the MongoDB Atlas Search Index navigation][13]\n\nNote: The fields in the JSON are customizable but should match the fields we configure in the Amazon Bedrock AWS console. If your source content contains [filter metadata, the fields need to be included in the JSON array above in the same format: `{\"path\": \"<attribute_name>\",\"type\":\"filter\"}`.\n\n### Amazon Bedrock Knowledge Base \n\n* In the AWS console, navigate to Amazon Bedrock, and then click _Get started_.\n\n orchestrate interactions between foundation models, data sources, software applications, and user conversations. In addition, agents automatically call APIs to take actions and invoke knowledge bases to supplement information for these actions\n\n* In the AWS Bedrock console, create an Agent. \n\n* AWS docs about MongoDB Bedrock integration\n* MongoDB Vector Search\n* Bedrock User Guide\n* MongoDB Atlas on AWS Marketplace\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt511ed709f8f6d72c/66323fa5ba17b0c937cb77a0/1_image.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt33b9187a37c516d8/66323fa65319a05c071f59a6/2_image.png\n [3]: https://www.mongodb.com/docs/guides/atlas/account/\n [4]: https://www.mongodb.com/docs/guides/atlas/cluster/\n [5]: https://www.mongodb.com/docs/guides/atlas/db-user/\n [6]: https://www.mongodb.com/docs/guides/atlas/network-connections/\n [7]: https://www.mongodb.com/docs/guides/atlas/connection-string/\n [8]: https://www.mongodb.com/basics/create-database#using-the-mongodb-atlas-ui\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6fe6d13267c2898e/663bb48445868a5510839ee6/27_bedrock.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt81882c0004f72351/66323fa5368bfca5faff012c/3_image.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7f47b6bde3a5c6e0/66323fa6714a1b552cb74ee7/4_image12.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2a5dcee8c6491baf/66323fa63c98e044b720dd9f/5_image.png\n [13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9fec5112de72ed56/663bb5552ff97d53f17030ad/28_bedrock.png\n [14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1fd932610875a23b/66323fa65b8ef39b7025bd85/7_image.png\n [15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbc8d532d6dcd3cc5/66323fa6ba17b06b7ecb77a8/8_image.png\n [16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0a24ae8725a51d07/66323fa6e664765138d445ee/9_image.png\n [17]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt88d2e77e437da43f/66323fa6e664767b99d445ea/10_image.png\n [18]: 
https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltad5b66cb713d14f7/66323fa63c98e0b22420dd97/11_image.png\n [19]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0da5374bed3cdfc4/66323fa6e66476284ed445e6/12_image.png\n [20]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1a5f530f0d77cb25/66323fa686ffea3e4a8e4d1e/13_image.png\n [21]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3c5a3bbfd93fc797/66323fa6ba17b003bccb77a4/14_image.png\n [22]: https://github.com/mongodb-partners/mongodb_atlas_as_aws_bedrock_knowledge_base\n [23]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt17ef52eaa92859b7/66323fa6f5bf2dff3c36e840/15_image.png\n [24]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt039df3ec479a51b3/66323fa6dafc457afab1d9ca/16_image18.png\n [25]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltff47aa531f588800/66323fa65319a08f491f59aa/17_image.png\n [26]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb886edc28380eaad/66323fa63c98e0f4ca20dda1/18_image.png\n [27]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt94659b32f1eedb41/66323fa657623318c954d39d/19_image.png\n [28]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltce5243d13db956ac/66323fa6599d112fcc850538/20_image.png\n [29]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5cac340ac53e5630/66323fa6d63d2215d9b8ce1e/21_image.png\n [30]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltecb09d22b3b99731/66323fa586ffea4e788e4d1a/22_image.png\n [31]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcc59300cd4b46845/66323fa6deafa962708fcb0c/23_image.png\n [32]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt381ef4a7a68c7b40/66323fa54124a57222a6c45d/24_image.png\n [33]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte3d2b90f86472bf1/66323fa5ba17b0c29fcb779c/25_image.png", "format": "md", "metadata": {"tags": ["Atlas", "AWS"], "pageDescription": "Atlas Vector Search and Amazon Bedrock enable the vector database capabilities of Atlas to bring up-to-date context to Foundational Model outputs.", "contentType": "Tutorial"}, "title": "Launch a Fully Managed RAG Workflow With MongoDB Atlas and Amazon Bedrock", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-online-archival", "action": "created", "body": "# Atlas Online Archive: Efficiently Manage the Data Lifecycle\n\n## Problem statement\n\nIn the production environment, in a MongoDB Atlas database, a collection contains massive amounts of data stored, including aged and current data. However, aged data is not frequently accessed through applications, and the data piles up daily in the collection, leading to performance degradation and cost consumption. This results in needing to upgrade the cluster tier size to maintain sufficient resources according to workload, as it would be difficult to continue with the existing tier size.\n\nOverall, this negatively impacts application performance and equates to higher resource utilization and increased costs for business.\n\n## Resolution\n\nTo avoid overpaying, you can offload aged data to a cheaper storage area based on the date criteria, which is called _archival storage_ in MongoDB. Later, you can access those infrequently archived data by using MongoDB federated databases. 
Hence, cluster size, performance, and resource utilization are optimized.\n\nTo better manage data in the Atlas cluster, MongoDB introduced the Online Archive feature from MongoDB Atlas 4.4 version onward.\n\n### Advantages\n\n* It archives data based on the date criteria in the archival rule, and the job runs every five minutes by default.\n* Query the data through a federated database connection, which is available in the Data Federation tab.\n* Infrequent data access through federated connections apart from the main cluster improves performance and reduces traffic on the main cluster.\n* Archived data can be queried by downstream environments and consumed in read-only mode.\n\n### Limitations\n\n* Archived data is available for reading purposes, but it does not support writing or modification.\n* Capped collections do not support online archival.\n* Atlas serverless clusters do not support online archival.\n* Separate federated connection strings connect archived data.\n\n### Pre-requisites \n\n* Online Archive is supported by cluster tier M10 and above.\n* Indexes offer better performance during archival.\n* To create or delete an online archive, you must have one of the following roles:\n\nProject Data Access Admin, Project Cluster Manager, or Project Owner.\n\n## Online archival configuration setup\n\nThe cluster DemoCluster has a collection called movies in the database sample_mflix. As per the business rule, you are storing aged and the latest data in the main cluster, but day by day, data keeps piling up, as expected. Therefore, right-sizing your cluster resources by upgrading tier size leads to increased costs.\n\nTo overcome this issue and maintain the cluster efficiently, you have to offload the infrequent or aged data to lower cost storage by the online archive feature and access it through a federated database connection. You can manage online archival at any point in time as per business requirements through managing archives.\n\nIn your case, you have loaded a sample dataset from the MongoDB Atlas cluster setup \u2014 one of the databases is sample_mflix \u2014 and there is a collection called movies that has aged, plus the latest data itself. As per the business requirement, the last 10 years of data have been frequently used by customers. Therefore, plan to implement archived data after 10 years from the collection based on the date field.\n\nTo implement the Online Archive feature, you need a basic M10 cluster or above:\n\n### Define archiving rules \n\nOnce business requirements are finalized, define the rules on which data fields will be archived based on criteria like age, size, and other conditions. We can set up Online Archive rules through the Atlas UI or using the Atlas API.\n\nThe movies collection in the sample_mflix database has a date field called released. To make online archival perform better, you need to create an index on the released field using the below command.\n\n use sample_mflix\n db.movies.createIndex({\"released\":1})\n\nAfter creating the index, you can choose this field as a date-based archive and move the data that is older than 10 years (3652 days) to cold storage. 
This means the cluster will store documents less than 10 years old, and all other documents move to archival storage which is cheaper to maintain.\n\nBefore implementing the archival rule, the movies collection's total document count was 21,349, as seen in the below image.\n\n## Implementation steps\n\nStep 1: Go to Browse Collections on Cluster Overview and select the Online Archive tab.\n\nStep 2: You have to supply a namespace for the collection, storage region, date match field, and age limit to archive. In your case:\n\n* Namespace: sample_mflix.movies\n* Chosen Region: AWS / Mumbai (cloud providers AWS, Azure, GCP)\n* Date Field: released (Indexed field required)\n* Age Limit: 3652 days (10 years from the date)\n\nFor instance, today is February 28, 2024, so that means that 3652 days before today would be Feb 28, 2014.\n\nStep 3: Here are a couple of features you can add as optional.\n\nDelete age limit: This allows the purging of data from archival storage based on the required criteria. It's an optional feature you can use as per your organization's decision.\n\nIn this example, we are not purging any data as per business rules.\n\nSchedule archiving window: This feature enables you to customize schedules. For example, you can run archive jobs during non-business hours or downtime windows to make sure it has a low impact on applications.\n\n\")\n\nStep 4: You can add any further partition fields required.\n\nStep 5: Once the rule configuration is completed, the wizard prompts a detailed review of your archival rule. You can observe Namespace, service provider (AWS), Storage Region (Mumbai), Archive Field, Age Limit, etc.\n\nStep 6: Once the steps are reviewed, click on BeginArchiving to create data federation instances in the DataFederation tab. Then, it will start archiving data based on the validation rule and move to AWS S3 storage. One of the best features is you can modify, pause, and delete online archival rules any time around the clock. For instance, your archival criteria can change at any time.\n\nStep 7: Once the Online Archive is set, there will be an archive job run every five minutes by default. This validates criteria based on the date field and moves the data to archival storage. Apart from that, you can set up this job as per your custom range instead of the default schedule. You can view this archival job in the cluster main section as seen in the below image, with the actual status Archiving/IDLE.\n\nThe Atlas Online Archive feature will create two federated database instances in the Data Federation tab for the cluster to access data apart from the regular connection string:\n\n* A federated database instance to query data on your archive only\n* A federated database instance to query both your cluster and archived data\n\nWhen the archival job runs as per the schedule, it moves documents to archival storage. As a result, the document count of the collection in the main cluster will be reduced by maintaining the latest data or hot data.\n\nTherefore, as per the above scenario, the movies collection now contains fresh/the latest data.\n\nMovies collection document count: 2186 (it excludes documents more than 10 years old).\n\nEvery day, it validates 3652 days later to find documents to move to archival storage.\n\nYou can observe the collection document count in the below image:\n\n## How to connect and access\n\nYou can access archived or read-only data through the Data Federation wizard. 
Simply connect with connection strings for both:\n\n* Archived only (specific database collection for which we set up archive rule)\n* Cluster archive (all the databases in it)\n\n** You can point these connection strings to downstream environments to read the data or consume it via end-user applications._\n\n## Atlas Data Federation \n\nData Federation provides the capability to federate queries across data stored in various supported storage formats, including Atlas clusters, Atlas online archives, Data Lake datasets, AWS S3 buckets, and HTTP stores. You can derive insights or move data between any of the supported storage formats of the service.\n\n2. DemoCluster archive: This is a federated database instance for your archive that allows you to query data on your archive only. By connecting with this string, you will see only archived collections, as shown in the below screen. For more details check, visit the docs.\n\nHere, the cluster name DemoCluster has archived collection data that you can retrieve only by using the below connection string, as shown in the image.\n\nConnection string: \"mongodb://Username:Password@archived-atlas-online-archive-65df00164668c44159eb65c8-abcd6.a.query.mongodb.net/?ssl=true&authSource=admin\"\n\nAs shown in the image, you can view only those archived collections data in the form of READ-ONLY mode, which means you cannot modify these documents in the future.\n\n2. DemoCluster cluster archive:\n\nThis federated database instance for your cluster and archive allows you to query both your cluster and archived data. Here, you can access all the databases in the cluster, including non-archived collections, as shown in the below image.\n\nConnection string:\n```bash\nmongodb://Username:Password@atlas-online-archive-65df00164668c44159eb65c8-abcd6.a.query.mongodb.net/?ssl=true&authSource=admin\n```\n\nNote: Using this connection string, you can view all the databases inside the cluster and the archived collection\u2019s total document count. It also allows READ-ONLY mode.\n\n## Project cluster overview\n\nAs discussed earlier, the main cluster DemoCluster contains the latest data as per the business requirements \u2014 i.e., frequently consumed data. You can access data and perform read and write operations at any time by pointing to live application changes.\n\nNote: In your case, the latest data refers to anything less than 10 years old.\n\nConnection string:\n```bash\nmongodb+srv://Username:Password@democluster.abcd6.mongodb.net/\n```\n\nIn this scenario, after archiving aged data, you can see only 2186 documents for the movies collection with data less than 10 years old.\n\nYou can use MongoShell, an application, or any third-party tools (like MongoCompass) to access the archived data and main cluster data.\n\nAlternatively, with all three of these connection strings, you can fetch from the below wizard in cluster connect.\n\n1. Connect to cluster and Online Archive (read-only archived instance connection string)\n2. Connect to cluster (direct cluster connection to perform CRUD operations)\n3. 
Connect to Online Archive (read-only specific to an archived database connection string)\n\nMongoShell prompt: To connect both archived data from the Data Federation tab, you can view the difference between both archived data in the form of READ-ONLY mode.\n\nMongoShell prompt: Here in the main cluster, you can view a list of databases where you can access, read, and write frequent data through a cluster connection string.\n\n## Conclusion\n\nOverall, MongoDB Atlas's online archival feature empowers organizations to optimize storage costs, enhance performance, adhere to data retention policies by securely storing data for long-term retention periods, and effectively manage data and storage efficiency throughout its lifecycle.\n\nWe\u2019d love to hear your thoughts on everything you\u2019ve learned! Join us in the Developer Community to continue the conversation and see what other people are building with MongoDB.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "This article explains the MongoDB's Online Archival feature and its advantages.", "contentType": "Article"}, "title": "Atlas Online Archive: Efficiently Manage the Data Lifecycle", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/jina-ai-semantic-search", "action": "created", "body": "# Semantic search with Jina Embeddings v2 and MongoDB Atlas\n\nSemantic search is a great ally for AI embeddings.\n\nUsing vectors to identify and rank matches has been a part of search for longer than AI has. The venerable tf/idf algorithm, which dates back to the 1960s, uses the counts of words, and sometimes parts of words and short combinations of words, to create representative vectors for text documents. It then uses the distance between vectors to find and rank potential query matches and compare documents to each other. It forms the basis of many information retrieval systems.\n\nWe call this \u201csemantic search\u201d because these vectors already have information about the meaning of documents built into them. Searching with semantic embeddings works the same way, but instead, the vectors come from AI models that do a much better job of making sense of the documents.\n\nBecause vector-based retrieval is a time-honored technique for retrieval, there are database platforms that already have all the mechanics to do it. All you have to do is plug in your AI embeddings model.\n\nThis article will show you how to enhance MongoDB Atlas \u2014 an out-of-the-box, cloud-based solution for document retrieval \u2014 with Jina Embeddings\u2019 top-of-the-line AI to produce your own killer search solution.\n\n### Setting up\nYou will first need a MongoDB Atlas account. Register for a new account or sign in using your Google account directly on the website.\n\n### Create a project\nOnce logged in, you should see your **Projects** page. If not, use the navigation menu on the left to get to it.\n\nCreate a new project by clicking the **New Project** button on the right.\n\nYou can add new members as you like, but you shouldn\u2019t need to for this tutorial.\n\n### Create a deployment\nThis should return you to the **Overview** page where you can now create a deployment. Click the **+Create** button to do so.\n\nSelect the **M0 Free** tier for this project and the provider of your choice, and then click the **Create** button at the bottom of the screen.\n\n On the next screen, you will need to create a user with a username and secure password for this deployment. 
Do not lose this password and username! They are the only way you will be able to access your work.\n\nThen, select access options. We recommend for this tutorial selecting **My Local Environment**, and clicking the **Add My Current IP Address** button.\n\nIf you have a VPN or a more complex security topology, you may have to consult your system administrator to find out what IP number you should insert here instead of your current one.\n\nAfter that, click **Finish and Deploy** at the bottom of the page. After a brief pause, you will now have an empty MongoDB database deployed on Atlas for you to use.\n\nNote: If you have difficulty accessing your database from outside, you can get rid of the IP Access List and accept connections from all IP addresses. Normally, this would be very poor security practice, but because this is a tutorial that uses publicly available sample data, there is little real risk.\n\nTo do this, click the **Network Access** tab under **Security** on the left side of the page:\n\nThen, click **ADD IP ADDRESS** from the right side of the page:\n\nYou will get a modal window. Click the button marked **ALLOW ACCESS FROM ANYWHERE**, and then click **Confirm**.\n\nYour Network Access tab should now have an entry labeled `0.0.0.0/0`.\n\nThis will allow any IP address to access your database if it has the right username and password.\n\n## Adding Data\n\nIn this tutorial, we will be using a sample database of Airbnb reviews. You can add this to your database from the Database tab under Deployments in the menu on the left side of the screen. Once you are on the \u201cDatabase Deployments\u201d page, find your cluster (on the free tier, you are only allowed one, so it should be easy). Then, click the \u201cthree dots\u201d button and choose **Load Sample Data**. It may take several minutes to load the data.\n\nThis will add a collection of free data sources to your MongoDB instance for you to experiment with, including a database of Airbnb reviews.\n\n## Using PyMongo to access your data\nFor the rest of this tutorial, we will use Python and PyMongo to access your new MongoDB Atlas database.\n\nMake sure PyMongo is installed in your Python environment. You can do this with the following command:\n```\npip install pymongo\n```\n\nYou will also need to know:\n\n 1. The username and password you set when you set up the database.\n 2. The URL to access your database deployment.\n\nIf you have lost your username and password, click on the **Database Access** tab under **Security** on the left side of the page. That page will enable you to reset your password.\n\nTo get the URL to access your database, return to the **Database** tab under **Deployment** on the left side of the screen. Find your cluster, and look for the button labeled **Connect**. Click it.\n\nYou will see a modal pop-up window like the one below:\n\nClick **Drivers** under **Connect to your application**. You will see a modal window like the one below. Under number three, you will see the URL you need but without your password. You will need to add your password when using this URL.\n\n## Connecting to your database\n\nCreate a file for a new Python script. 
You can call it `test_mongo_connection.py`.\n\nWrite into this file the following code, which uses PyMongo to create a client connection to your database:\n```\nfrom pymongo.mongo_client import MongoClient\n\nclient = MongoClient(\"\")\n```\n\nRemember to insert the URL to connect to your database, including the correct username and password.\n\nNext, add code to connect to the Airbnb review dataset that was installed as sample data:\n```\ndb = client.sample_airbnb\ncollection = db.listingsAndReviews\n```\n\nThe variable `collection` is an iterable that will return the entire dataset item by item. To test that it works, add the following line and run `test_mongo_connection.py`:\n```\nprint(collection.find_one())\n```\n\nThis will print JSON formatted text that contains the information in one database entry, whichever one it happened to find first. It should look something like this:\n```\n{'_id': '10006546',\n 'listing_url': 'https://www.airbnb.com/rooms/10006546',\n 'name': 'Ribeira Charming Duplex',\n 'summary': 'Fantastic duplex apartment with three bedrooms, located in the historic \narea of Porto, Ribeira (Cube) - UNESCO World Heritage Site. Centenary \nbuilding fully rehabilitated, without losing their original character.',\n 'space': 'Privileged views of the Douro River and Ribeira square, our apartment offers \nthe perfect conditions to discover the history and the charm of Porto. \nApartment comfortable, charming, romantic and cozy in the heart of Ribeira. \nWithin walking distance of all the most emblematic places of the city of Porto. \nThe apartment is fully equipped to host 8 people, with cooker, oven, washing \nmachine, dishwasher, microwave, coffee machine (Nespresso) and kettle. The \napartment is located in a very typical area of the city that allows to cross \nwith the most picturesque population of the city, welcoming, genuine and happy \npeople that fills the streets with his outspoken speech and contagious with \nyour sincere generosity, wrapped in a only parochial spirit.',\n 'description': 'Fantastic duplex apartment with three bedrooms, located in the historic \narea of Porto, Ribeira (Cube) - UNESCO World Heritage Site. Centenary \nbuilding fully rehabilitated, without losing their original character. \nPrivileged views of the Douro River and Ribeira square, our apartment \noffers the perfect conditions to discover the history and the charm of \nPorto. Apartment comfortable, charming, romantic and cozy in the heart of \nRibeira. Within walking distance of all the most emblematic places of the \ncity of Porto. The apartment is fully equipped to host 8 people, with \ncooker, oven, washing machine, dishwasher, microwave, coffee machine \n(Nespresso) and kettle. The apartment is located in a very typical area \nof the city that allows to cross with the most picturesque population of \nthe city, welcoming, genuine and happy people that fills the streets with \nhis outspoken speech and contagious with your sincere generosity, wrapped \nin a only parochial spirit. We are always available to help guests',\n...\n}\n```\nGetting a text response like this will show that you can connect to your MongoDB Atlas database.\n\n## Accessing Jina Embeddings v2\nGo to the Jina AI embeddings website, and you will see a page like this:\n\nCopy the API key from this page. It provides you with 10,000 tokens of free embedding using Jina Embeddings models. 
Due to this limitation on the number of tokens allowed in the free tier, we will only embed a small part of the Airbnb reviews collection. You can buy additional quota by clicking the \u201cTop up\u201d tab on the Jina Embeddings web page if you want to either embed the entire collection on MongoDB Atlas or apply these steps to another dataset.\n\nTest your API key by creating a new script, call it `test_jina_ai_connection.py`, and put the following code into it, inserting your API key where marked:\n```\nimport requests\n\nurl = 'https://api.jina.ai/v1/embeddings'\n\nheaders = {\n    'Content-Type': 'application/json',\n    'Authorization': 'Bearer <your Jina AI API key>'\n}\n\ndata = {\n    'input': [\"Your text string goes here\"],\n    'model': 'jina-embeddings-v2-base-en'\n}\n\nresponse = requests.post(url, headers=headers, json=data)\n\nprint(response.content)\n```\n\nRun the script `test_jina_ai_connection.py`. You should get something like this:\n```\nb'{\"model\":\"jina-embeddings-v2-base-en\",\"object\":\"list\",\"usage\":{\"total_tokens\":14,\n\"prompt_tokens\":14},\"data\":[{\"object\":\"embedding\",\"index\":0,\"embedding\":[-0.14528547,\n-1.0152762,1.3449358,0.48228237,-0.6381836,0.25765118,0.1794826,-0.5094953,0.5967494,\n...,\n-0.30768695,0.34024483,-0.5897042,0.058436804,0.38593403,-0.7729841,-0.6259417]}]}'\n```\n\nThis indicates you have access to Jina Embeddings via its API.\n\n## Indexing your MongoDB collection\n\nNow, we\u2019re going to put all these pieces together with some Python functions to use Jina Embeddings to assign embedding vectors to descriptions in the Airbnb dataset.\n\nCreate a new Python script, call it `index_embeddings.py`, and insert some code to import libraries and declare some variables:\n```\nimport requests\nfrom pymongo.mongo_client import MongoClient\n\njinaai_token = \"\"\nmongo_url = \"\"\nembedding_url = \"https://api.jina.ai/v1/embeddings\"\n```\n\nThen, add code to set up a MongoDB client and connect to the Airbnb dataset:\n```\nclient = MongoClient(mongo_url)\ndb = client.sample_airbnb\n```\n\nNow, we will add to the script a function to convert lists of texts into embeddings using the `jina-embeddings-v2-base-en` AI model:\n```\ndef generate_embeddings(texts):\n    payload = {\"input\": texts,\n               \"model\": \"jina-embeddings-v2-base-en\"}\n    try:\n        response = requests.post(\n            embedding_url,\n            headers={\"Authorization\": f\"Bearer {jinaai_token}\"},\n            json=payload\n        )\n    except Exception as e:\n        raise ValueError(f\"Error in calling embedding API: {e}\\nInput: {texts}\")\n    if response.status_code != 200:\n        raise ValueError(f\"Error in embedding service {response.status_code}: {response.text}, {texts}\")\n    embeddings = [d[\"embedding\"] for d in response.json()[\"data\"]]\n    return embeddings\n```\n\nAnd we will create a function that iterates over up to 30 documents in the listings collection, creating embeddings for the descriptions and summaries, and adding them to each entry in the database:\n```\ndef index():\n    collection = db.listingsAndReviews\n    docs_to_encode = collection.find({\"embedding_summary\": {\"$exists\": False}}).limit(30)\n    for i, doc in enumerate(docs_to_encode):\n        if i and i % 5 == 0:\n            print(\"Finished embedding\", i, \"documents\")\n        try:\n            embedding_summary, embedding_description = generate_embeddings([doc[\"summary\"], doc[\"description\"]])\n        except Exception as e:\n            print(\"Error in embedding\", doc[\"_id\"], e)\n            continue\n        doc[\"embedding_summary\"] = embedding_summary\n        doc[\"embedding_description\"] = embedding_description\n        collection.replace_one({'_id': doc['_id']}, doc)\n```\n\nWith this in place, we can now index the collection:\n```\nindex()\n```\n\nRun the script `index_embeddings.py`. This may take several minutes. When this finishes, we will have added embeddings to 30 of the Airbnb items.\n
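\nBefore moving on, you can optionally sanity-check how many documents actually received embeddings. This is a minimal sketch that reuses the `db` connection defined in `index_embeddings.py` above:\n```\n# Count the listings that now carry an embedding field.\nembedded_count = db.listingsAndReviews.count_documents(\n    {\"embedding_summary\": {\"$exists\": True}}\n)\nprint(\"Documents with embeddings:\", embedded_count)\n```\n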
\n## Create the embedding index in MongoDB Atlas\n\nReturn to the MongoDB website, and click on **Database** under **Deployment** on the left side of the screen.\n\n*Creating an index on MongoDB Atlas from the \u201cDatabase Deployments\u201d page*\n\nClick on the link for your cluster (**Cluster0** in the image above).\nFind the **Search** tab in the cluster page and click it to get a page like this:\n\nClick the button marked **Create Search Index**.\n\nNow, click **JSON Editor** and then **Next**.\n\nThen, perform the following steps:\n\n 1. Under **Database and Collection**, find **sample_airbnb**, and underneath it, check **listingsAndReviews**.\n 2. Under **Index Name**, fill in the name `listings_comments_semantic_search`.\n 3. Underneath that, in the numbered lines, add the following JSON text:\n```\n{\n  \"mappings\": {\n    \"dynamic\": true,\n    \"fields\": {\n      \"embedding_description\": {\n        \"dimensions\": 768,\n        \"similarity\": \"dotProduct\",\n        \"type\": \"knnVector\"\n      },\n      \"embedding_summary\": {\n        \"dimensions\": 768,\n        \"similarity\": \"dotProduct\",\n        \"type\": \"knnVector\"\n      }\n    }\n  }\n}\n```\nYour screen should look like this:\n\nNow click **Next** and then **Create Search Index** in the next screen:\n\nThis will schedule the indexing in MongoDB Atlas. You may have to wait several minutes for it to complete.\n\nWhen completed, the following modal window will pop up:\n\nReturn to your Python client, and we will perform a search.\n\n## Search with Embeddings\n\nNow that our embeddings are indexed, we will perform a search.\n\nWe will write a search function that does the following:\n\n 1. Take a query string and convert it to an embedding using Jina Embeddings and our existing `generate_embeddings` function.\n 2. Query the index on MongoDB Atlas using the client connection we already set up.\n 3. Print names, summaries, and descriptions of the matches.\n\nDefine the search function as follows:\n```\ndef search(query):\n    query_embedding = generate_embeddings([query])[0]\n    results = db.listingsAndReviews.aggregate([\n        {\n            '$search': {\n                \"index\": \"listings_comments_semantic_search\",\n                \"knnBeta\": {\n                    \"vector\": query_embedding,\n                    \"k\": 3,\n                    \"path\": [\"embedding_summary\", \"embedding_description\"]\n                }\n            }\n        }\n    ])\n    for document in results:\n        print(f'Listing Name: {document[\"name\"]}\\nSummary: {document[\"name\"]}\\nDescription: {document[\"description\"]}\\n\\n')\n```\n\nAnd now, let\u2019s run a search:\n```\nsearch(\"an amazing view and close to amenities\")\n```\n\nYour results may vary because this tutorial did not index all the documents in the dataset, and which ones were indexed may vary dramatically. You should get a result like this:\n```\nListing Name: Rented Room\nSummary: Rented Room\nDescription: Beautiful room and with a great location in the city of Rio de Janeiro\n\nListing Name: Spacious and well located apartment\nSummary: Spacious and well located apartment\nDescription: Enjoy Porto in a spacious, airy and bright apartment, fully equipped, in a \nbuilding with lift, located in a region full of cafes and restaurants, close to the subway \nand close to the best places of the city. 
The apartment offers total comfort for those \nwho, besides wanting to enjoy the many attractions of the city, also like to relax and \nfeel at home, All airy and bright, with a large living room, fully equipped kitchen, and a \ndelightful balcony, which in the summer refreshes and in the winter protects from the cold \nand rain, accommodating up to six people very well. It has 40-inch interactive TV, internet\nand high-quality wi-fi, and for those who want to work a little, it offers a studio with a \ngood desk and an inspiring view. The apartment is all available to guests. I leave my guests\nat ease, but I am available whenever they need me. It is a typical neighborhood of Porto, \nwhere you have silence and tranquility, little traffic, no noise, but everything at hand: \ngood restaurants and c\n\nListing Name: Panoramic Ocean View Studio in Quiet Setting\nSummary: Panoramic Ocean View Studio in Quiet Setting\nDescription: Luxury studio unit is located in a family-oriented neighborhood that lets you \nexperience Hawaii like a local! with tranquility and serenity, while in close proximity to \nbeaches and restaurants! The unit is surrounded by lush tropical vegetation! High-speed \nWi-Fi available in the unit!! A large, private patio (lanai) with fantastic ocean views is \ncompletely under roof and is part of the studio unit. It's a great space for eating outdoors\nor relaxing, while checking our the surfing action. This patio is like a living room \nwithout walls, with only a roof with lots and lots of skylights!!! We provide Wi-Fi and \nbeach towels! The studio is detached from the main house, which has long-term tenants \nupstairs and downstairs. The lower yard and the front yard are assigned to those tenants, \nnot the studio guests. The studio has exclusive use of its large (600 sqft) patio - under \nroof! Check-in and check-out times other than the ones listed, are by request only and an \nadditional charges may apply; \n\nListing Name: GOLF ROYAL RESIDENCE SU\u0130TES(2+1)-2\nSummary: GOLF ROYAL RESIDENCE SU\u0130TES(2+1)-2\nDescription: A BIG BED ROOM WITH A BIG SALOON INCLUDING A NICE BALAKON TO HAVE SOME FRESH \nAIR . OUR RESIDENCE SITUATED AT THE CENTRE OF THE IMPORTANT MARKETS SUCH AS N\u0130\u015eANTA\u015e\u0130,\nOSMANBEY AND TAKSIM SQUARE,\n\nListing Name: DOUBLE ROOM for 1 or 2 ppl\nSummary: DOUBLE ROOM for 1 or 2 ppl\nDescription: 10m2 with interior balkony kitchen, bathroom small but clean and modern metro\nin front of the building 7min walk to Sagrada Familia, 2min walk TO amazing Gaudi Hospital\nSant Pau SAME PRICE FOR 1 OR 2 PPL-15E All flat for your use, terrace, huge TV.\n```\n\nExperiment with your own queries to see what you get.\n\n## Next steps\nYou\u2019ve now created the core of a MongoDB Atlas-based semantic search engine, powered by Jina AI\u2019s state-of-the-art embedding technology. For any project, you will follow essentially the same steps outlined above:\n\n 1. Create an Atlas instance and fill it with your data.\n 2. Create embeddings for your data items using the Jina Embeddings API and store them in your Atlas instance.\n 3. Index the embeddings using MongoDB\u2019s vector indexer.\n 4. 
Implement semantic search using embeddings.\n\nThis boilerplate Python code will integrate easily into your own projects, and you can create equivalent code in Java, JavaScript, or code for any other integration framework that supports HTTPS.\n\nTo see the full documentation of the MongoDB Atlas API, so you can integrate it into your own offerings, see the [Atlas API section of the MongoDB website.\n\nTo learn more about Jina Embeddings and its subscription offerings, see the Embeddings page of the Jina AI website. You can find the latest news about Jina AI\u2019s embedding models on the Jina AI website and X/Twitter, and you can contribute to discussions on Discord.\n\n \n\n", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "Follow along with this tutorial on using Jina Embeddings v2 with MongoDB Atlas for vector search.", "contentType": "Tutorial"}, "title": "Semantic search with Jina Embeddings v2 and MongoDB Atlas", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/8-fastapi-mongodb-best-practices", "action": "created", "body": "# 8\u00a0Best Practices for Building FastAPI and MongoDB Applications\n\nFastAPI is a modern, high-performance web framework for building APIs with Python 3.8 or later, based on type hints. Its design focuses on quick coding and error reduction, thanks to automatic data model validation and less boilerplate code. FastAPI\u2019s support for asynchronous programming ensures APIs are efficient and scalable, while built-in documentation features like Swagger UI and ReDoc provide interactive API exploration tools.\n\nFastAPI seamlessly integrates with MongoDB through the Motor library, enabling asynchronous database interactions. This combination supports scalable applications by enhancing both the speed and flexibility of data handling with MongoDB. FastAPI and MongoDB together are ideal for creating applications that manage potentially large amounts of complex and diverse data efficiently. MongoDB is a proud sponsor of the FastAPI project, so you can tell it's a great choice for building applications with MongoDB.\n\nAll the techniques described in this article are available on GitHub\u00a0\u2014 check out the source code! With that out of the way, now we can begin\u2026\n\nFastAPI is particularly suitable for building RESTful APIs, where requests for data and updates to the database are made using HTTP requests, usually with JSON payloads. But the framework is equally excellent as a back end for HTML websites or even full single-page applications (SPAs) where the majority of requests are made via JavaScript. (We call this the FARM stack \u2014 FastAPI, React, MongoDB \u2014 but you can swap in any front-end component framework that you like.) It's particularly flexible with regard to both the database back-end and the template language used to render HTML.\n\n## Use the right driver!\n\nThere are actually *two*\u00a0Python drivers for MongoDB \u2014 PyMongo\u00a0and Motor\u00a0\u2014 but only one of them is suitable for use with FastAPI. Because FastAPI is built on top of ASGI\u00a0and asyncio, you need to use Motor, which is compatible with asyncio. PyMongo is only for synchronous applications. 
Fortunately, just like PyMongo, Motor is developed and fully supported by MongoDB, so you can rely on it in production, just as you would with PyMongo.\n\nYou can install it by running the following command in your terminal (I recommend configuring a Python virtual environment first!):\n\n```\npip install motor[srv]\n```\n\nThe `srv` extra includes some extra dependencies that are necessary for connecting with MongoDB Atlas connection strings.\n\nOnce installed, you'll need to use the `AsyncIOMotorClient` in the `motor.motor_asyncio` package.\n\n```python\nimport os\n\nfrom fastapi import FastAPI\nfrom motor.motor_asyncio import AsyncIOMotorClient\n\napp = FastAPI()\n\n# Load the MongoDB connection string from the environment variable MONGODB_URI\nCONNECTION_STRING = os.environ['MONGODB_URI']\n\n# Create a MongoDB client\nclient = AsyncIOMotorClient(CONNECTION_STRING)\n```\n\nNote that the connection string is not stored in the code! Which leads me to\u2026\n\n## Keep your secrets safe\n\nIt's very easy to accidentally commit secret credentials in your code and push them to relatively insecure places like shared Git repositories. I recommend making it a habit to *never* put any secret in your code.\n\nWhen working on code, I keep my secrets in a file called `.envrc`, whose contents get loaded into environment variables by a tool called direnv. Other tools for keeping sensitive credentials out of your code include envdir, libraries like python-dotenv, and process managers like Honcho and Foreman. You should use whichever tool makes the most sense to you. Whether the file that keeps your secrets is called `.env` or `.envrc` or something else, you should add that filename to your global gitignore file so that it never gets added to any repository.\n\nIn production, you should use a KMS (key management system) such as Vault, or perhaps the cloud-native KMS of whichever cloud you may be using to host your application. Some people even use a KMS to manage their secrets in development.\n
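\nIf you go the `.env` file route during development, the wiring is small. Here's a minimal sketch (assuming you pick python-dotenv; the `MONGODB_URI` variable matches the one used earlier in this article):\n\n```python\n# Requires: pip install python-dotenv\nimport os\n\nfrom dotenv import load_dotenv\n\n# Read key=value pairs from a local .env file (which is listed in .gitignore)\n# into the process environment. Existing environment variables are not overridden.\nload_dotenv()\n\nCONNECTION_STRING = os.environ[\"MONGODB_URI\"]  # fails loudly if the secret is missing\n```\n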
\n## Initialize your database connection correctly\n\nAlthough I initialized my database connection in the code above at the top level of a small FastAPI application, it's better practice to gracefully initialize and close your client connection by responding to startup and shutdown events in your FastAPI application. You should also attach your client to FastAPI's app object to make it available to your path operation functions wherever they are in your codebase. (Other frameworks sometimes refer to these as \u201croutes\u201d or \u201cendpoints.\u201d FastAPI calls them \u201cpath operations.\u201d) If you rely on a global variable instead, you need to worry about importing it everywhere it's needed, which can be messy.\n\nThe snippet of code below shows how to respond to your application starting up and shutting down, and how to handle the client in response to each of these events:\n\n```python\nfrom contextlib import asynccontextmanager\nfrom logging import info\n\n\n@asynccontextmanager\nasync def db_lifespan(app: FastAPI):\n    # Startup\n    app.mongodb_client = AsyncIOMotorClient(CONNECTION_STRING)\n    app.database = app.mongodb_client.get_default_database()\n    ping_response = await app.database.command(\"ping\")\n    if int(ping_response[\"ok\"]) != 1:\n        raise Exception(\"Problem connecting to database cluster.\")\n    else:\n        info(\"Connected to database cluster.\")\n\n    yield\n\n    # Shutdown\n    app.mongodb_client.close()\n\napp: FastAPI = FastAPI(lifespan=db_lifespan)\n```\n\n## Consider using a Pydantic ODM\n\nAn ODM, or object-document mapper, is a library that converts between documents and objects in your code. It's largely analogous to an ORM in the world of RDBMS databases. Using an ODM is a complex topic, and sometimes ODMs can obscure important things, such as the way data is stored and updated in the database, or even some advanced MongoDB features that you may want to take advantage of. Whichever ODM you choose, you should vet it thoroughly to make sure that it's going to do what you want and grow with you.\n\nIf you're choosing an ODM for your FastAPI application, definitely consider using a Pydantic-based ODM, such as ODMantic or Beanie. The reason you should prefer one of these libraries is that FastAPI is built with tight integration to Pydantic. This means that if your path operations return a Pydantic object, the schema will automatically be documented using OpenAPI (which used to be called Swagger), and FastAPI also provides nice API documentation under the path \"/docs\". As well as documenting your interface, it also provides validation of the data you're returning.\n\n```python\nclass Profile(Document):\n    \"\"\"\n    A profile for a single user as a Beanie Document.\n\n    Contains some useful information about a person.\n    \"\"\"\n\n    # Use a string for _id, instead of ObjectID:\n    id: Optional[str] = Field(default=None, description=\"MongoDB document ObjectID\")\n    username: str\n    birthdate: datetime\n    website: List[str]\n\n    class Settings:\n        # The name of the collection to store these objects.\n        name = \"profiles\"\n\n# A sample path operation to get a Profile:\n@app.get(\"/profiles/{profile_id}\")\nasync def get_profile(profile_id: str) -> Profile:\n    \"\"\"\n    Look up a single profile by ID.\n    \"\"\"\n    # This path operation uses Beanie's Document.get() to look up a single\n    # profile by ID.\n    profile = await Profile.get(profile_id)\n    if profile is not None:\n        return profile\n    else:\n        raise HTTPException(\n            status_code=404, detail=f\"No profile with id '{profile_id}'\"\n        )\n```\n\nThe profile object above is automatically documented at the \"/docs\" path:\n\n![A screenshot of the auto-generated documentation][1]\n\n### You can use Motor directly\n\nIf you feel that working directly with the Python MongoDB driver, Motor, makes more sense to you, I can tell you that it works very well for many large, complex MongoDB applications in production. If you still want the benefits of automated API documentation, you can document your schema in your code so that it will be picked up by FastAPI.\n\n## Remember that some BSON has more types than JSON\n\nAs many FastAPI applications include endpoints that provide JSON data that is retrieved from MongoDB, it's important to remember that certain types you may store in your database, especially the ObjectID and Binary types, don't exist in JSON. FastAPI fortunately handles dates and datetimes for you, by encoding them as formatted strings.\n\nThere are a few different ways to handle ObjectID mappings. The first is to avoid them completely by using a JSON-compatible type (such as a string) for \\_id values. In many cases, this isn't practical though, because you already have data, or just because ObjectID is the most appropriate type for your primary key. In this case, you'll probably want to convert ObjectIDs to a string representation when converting to JSON, and do the reverse with data that's being submitted to your application.\n\nIf you're using Beanie, it automatically assumes that the type of your \\_id is an ObjectID, and so will set the field type to PydanticObjectId, which will automatically handle this serialization mapping for you. You won't even need to declare the id in your model!\n
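\nIf you're using Motor with plain Pydantic models rather than an ODM, one common approach is to declare an annotated string type that coerces incoming `ObjectId` values to `str` during validation. Here's a minimal sketch (assuming Pydantic v2; the model and field names are only illustrative):\n\n```python\nfrom typing import Annotated, Optional\n\nfrom bson import ObjectId\nfrom pydantic import BaseModel, BeforeValidator, Field\n\n# Any value assigned to this type (including a bson.ObjectId) is passed\n# through str() before normal string validation runs.\nPyObjectId = Annotated[str, BeforeValidator(str)]\n\nclass ProfileSummary(BaseModel):\n    id: Optional[PyObjectId] = Field(default=None, alias=\"_id\")\n    username: str\n\ndoc = {\"_id\": ObjectId(), \"username\": \"sample_user\"}\nprofile = ProfileSummary.model_validate(doc)\nprint(profile.model_dump_json(by_alias=True))  # \"_id\" is now a plain string\n```\n\nGoing in the other direction, a string submitted by a client can be turned back into an `ObjectId` with `ObjectId(profile_id)` at the point where you build your MongoDB query.\n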
\n## Define Pydantic types for your path operation responses\n\nIf you specify the response type of your path operations, FastAPI will validate the responses you provide, and also filter any fields that aren't defined on the response type.\n\nBecause ODMantic and Beanie use Pydantic under the hood, you can return those objects directly. Here's an example using Beanie:\n\n```python\n@app.get(\"/people/{profile_id}\")\nasync def read_item(profile_id: str) -> Profile:\n    \"\"\"Use Beanie to look up a Profile.\"\"\"\n    profile = await Profile.get(profile_id)\n    return profile\n```\n\nIf you're using Motor, you can still get the benefits of documentation, conversion, validation, and filtering by returning the document data directly and providing the Pydantic model to the decorator:\n\n```python\n@app.get(\n    \"/people/{profile_id}\",\n    response_model=Profile,\n)\nasync def read_item(profile_id: str) -> Mapping[str, Any]:\n    # This API endpoint demonstrates using Motor directly to look up a single\n    # profile by ID.\n    #\n    # It uses response_model (above) to tell FastAPI the schema of the data\n    # being returned, but it returns a dict directly, so that conversion and\n    # validation is done by FastAPI, meaning you don't have to copy values\n    # manually into a Profile before returning it.\n    profile = await app.profiles.find_one({\"_id\": profile_id})\n    if profile is not None:\n        return profile\n```\n\n## Remember to model your data appropriately\n\nA common mistake people make when building RESTful API servers on top of MongoDB is to store the objects of their API interface in exactly the same way in their MongoDB database. This can work very well in simple cases, especially if the application is a relatively straightforward CRUD API.\n\nIn many cases, however, you'll want to think about how best to model your data for efficient updates and retrieval, and to help maintain referential integrity and reasonably sized indexes. This is a topic all of its own, so definitely check out the series of design pattern articles on the MongoDB website, and maybe consider doing the free Advanced Schema Design Patterns online course at MongoDB University. 
(There are lots of amazing free courses on many different topics at MongoDB University.)\n\nIf you're working with a different data model in your database than that in your application, you will need to map values retrieved from the database and values provided via requests to your API path operations. Separating your physical model from your business model has the benefit of allowing you to change your database schema without necessarily changing your API schema (and vice versa).\n\nEven if you're not mapping data returned from the database (yet), providing a Pydantic class as the `response_model` for your path operation will convert, validate, document, and filter the fields of the BSON data you're returning, so it provides lots of value! Here's an example of using this technique in a FastAPI app:\n\n```python\n# A Pydantic class modelling the *response* schema.\nclass Profile(BaseModel):\n \"\"\"\n A profile for a single user.\n \"\"\"\n id: Optionalstr] = Field(\n default=None, description=\"MongoDB document ObjectID\", alias=\"_id\"\n )\n username: str\n residence: str\n current_location: List[float]\n\n# A path operation that returns a Profile object as JSON:\n@app.get(\n \"/profiles/{profile_id}\",\n response_model=Profile, # This tells FastAPI that the returned object must match the Profile schema.\n)\nasync def get_profile(profile_id: str) -> Mapping[str, Any]:\n # Uses response_model (above) to tell FastAPI the schema of the data\n # being returned, but it returns a dict directly, so that conversion and\n # validation is done by FastAPI, meaning you don't have to copy values\n # manually into a Profile before returning it.\n profile = await app.profiles.find_one({\"_id\": profile_id})\n if profile is not None:\n return profile # Return BSON document (Mapping). Conversion etc will be done automatically.\n else:\n raise HTTPException(\n status_code=404, detail=f\"No profile with id '{profile_id}'\"\n )\n```\n\n## Use the Full-Stack FastAPI & MongoDB Generator\n\nMy amazing colleagues have built an app generator to do a lot of these things for you and help get you up and running as quickly as possible with a production-quality, dockerized FastAPI, React, and MongoDB service, backed by tests and continuous integration. You can check it out at the [Full-Stack FastAPI MongoDB GitHub Repository.\n\n\u00a0and we can have a chat?\n\n### Let us know what you're building!\n\nWe love to know what you're building with FastAPI or any other framework \u2014 whether it's a hobby project or an enterprise application that's going to change the world. Let us know what you're building at the MongoDB Community Forums. 
It's also a great place to stop by if you're having problems \u2014 someone on the forums can probably help you out!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte01fce6841e52bee/662787c4fb977c9af836a50e/image1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1525a6cbfadb8ae7/662787f651b16f7315c4d48d/image2.png", "format": "md", "metadata": {"tags": ["MongoDB", "Python", "FastApi"], "pageDescription": "FastAPI seamlessly integrates with MongoDB through the Motor library, enabling asynchronous database interactions.", "contentType": "Article"}, "title": "8\u00a0Best Practices for Building FastAPI and MongoDB Applications", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/evaluate-llm-applications-rag", "action": "created", "body": "# RAG Series Part 2: How to Evaluate Your RAG Application\n\nIf you have ever deployed machine learning models in production, you know that evaluation is an important part of the process. Evaluation is how you pick the right model for your use case, ensure that your model\u2019s performance translates from prototype to production, and catch performance regressions. While evaluating Generative AI applications (also referred to as LLM applications) might look a little different, the same tenets for why we should evaluate these models apply.\n\nIn this tutorial, we will break down how to evaluate LLM applications, with the example of a Retrieval Augmented Generation (RAG) application. Specifically, we will cover the following:\n* Challenges with evaluating LLM applications\n* Defining metrics to evaluate LLM applications\n* How to evaluate a RAG application\n\n> Before we begin, it is important to distinguish LLM model evaluation from LLM application evaluation. Evaluating LLM models involves measuring the performance of a given model across different tasks, whereas LLM application evaluation is about evaluating different components of an LLM application such as prompts, retrievers, etc., and the system as a whole. In this tutorial, we will focus on evaluating LLM applications.\n\n## Challenges with evaluating LLM applications\n\nThe reason we don\u2019t hear as much about evaluating LLM applications is that it is currently challenging and time-consuming. Conventional machine learning models such as regression and classification have a mathematically well-defined set of metrics such as mean squared error (MSE), precision, and recall for evaluation. In many cases, ground truth is also readily available for evaluation. However, this is not the case with LLM applications.\n\nLLM applications today are being used for complex tasks such as summarization, long-form question-answering, and code generation. Conventional metrics such as precision and accuracy in their original form don\u2019t apply in these scenarios, since the output from these tasks is not a simple binary prediction or a floating point value to calculate true/false positives or residuals from. Metrics such as faithfulness and relevance that are more applicable to these tasks are emerging but hard to quantify definitively. The probabilistic nature of LLMs also makes evaluation challenging \u2014 simple formatting changes at the prompt level, such as adding new lines or bullet points, can have a significant impact on model outputs. 
And finally, ground truth is hard to come by and is time-consuming to create manually.\n\n## How to evaluate LLM applications\n\nWhile there is no prescribed way to evaluate LLM applications today, some guiding principles are emerging.\n\nWhether it\u2019s choosing embedding models or evaluating LLM applications, focus on your specific task. This is especially applicable while choosing parameters for evaluation. Here are a few examples:\n\n| Task | Evaluation parameters |\n| ----------------------- | ---------- |\n| Content moderation | Recall and precision on toxicity and bias |\n| Query generation | Correct output syntax and attributes, extracts the right information upon execution |\n| Dialogue (chatbots, summarization, Q&A) | Faithfulness, relevance |\n\nTasks like content moderation and query generation are more straightforward since they have definite expected answers. However, for open-ended tasks involving dialogue, the best we can do is to check for factual consistency (faithfulness) and relevance of the answer to the user question. Currently, a common approach for performing such evaluations is using strong LLMs. While this technique may be subject to some of the challenges we face with LLMs today, such as hallucinations and biases, it scales better than human evaluation. When choosing an evaluator LLM, the Chatbot Arena Leaderboard is a good resource since it is a crowdsourced list of the best-performing LLMs ranked by human preference.\n\nOnce you have figured out the parameters for evaluation, you need an evaluation dataset. It is worth spending the time and effort to handcraft a small dataset (even 50 samples is a good start!) consisting of the most common questions users might ask your application, some edge (read: complex) cases, as well as questions that help assess the response of your system to malicious and/or inappropriate inputs. You can evaluate the system separately on each of these question sets to get a more granular understanding of the strengths and weaknesses of your system. In addition to curating a dataset of questions, you may also want to write out ground truth answers to the questions. While these are especially important for tasks like query generation that have a definitive right or wrong answer, they can also be useful for grounding LLMs when using them as a judge for evaluation.\n\nAs with any software, you will want to evaluate each component separately and the system as a whole. In RAG systems, for example, you will want to evaluate the retrieval and generation to ensure that you are retrieving the right context and generating suitable answers, whereas in tool-calling agents, you will want to validate the intermediate responses from each of the tools. You will also want to evaluate the overall system for correctness, typically done by comparing the final answer to the ground truth answer.\n\nFinally, think about how you will collect feedback from your users, incorporate it into your evaluation pipeline, and track the performance of your application over time.\n\n## RAG \u2014 a very quick refresher\n\nFor the rest of the tutorial, we will take RAG as an example to demonstrate how to evaluate an LLM application. But before that, here\u2019s a very quick refresher on RAG.\n\nThis is what a RAG application might look like:\n\n.\n\n#### Tools\n\nWe will use LangChain to create a sample RAG application and the RAGAS framework for evaluation. 
RAGAS is open-source, has out-of-the-box support for all the above metrics, supports custom evaluation prompts, and has integrations with frameworks such as LangChain and LlamaIndex, and observability tools such as LangSmith and Arize Phoenix.\n\n#### Dataset\n\nWe will use the ragas-wikiqa dataset available on Hugging Face. The dataset consists of ~230 general knowledge questions, including the ground truth answers for these questions. Your evaluation dataset, however, should be a good representation of how users will interact with your application.\n\n#### Where\u2019s the code?\n\nThe Jupyter Notebook for this tutorial can be found on GitHub.\n\n## Step 1: Install the required libraries\n\nWe will require the following libraries for this tutorial:\n* **datasets**: Python library to access datasets available on Hugging Face Hub\n* **ragas**: Python library for the RAGAS framework\n* **langchain**: Python library to develop LLM applications using LangChain\n* **langchain-mongodb**: Python package to use MongoDB Atlas as a vector store with LangChain\n* **langchain-openai**: Python package to use OpenAI models in LangChain\n* **pymongo**: Python driver for interacting with MongoDB\n* **pandas**: Python library for data analysis, exploration, and manipulation\n* **tqdm**: Python module to show a progress meter for loops\n* **matplotlib, seaborn**: Python libraries for data visualization\n\n```\n! pip install -qU datasets ragas langchain langchain-mongodb langchain-openai \\\npymongo pandas tqdm matplotlib seaborn\n```\n\n## Step 2: Set up prerequisites\n\nIn this tutorial, we will use MongoDB Atlas Vector Search as a vector store and retriever. But first, you will need a MongoDB Atlas account with a database cluster, as well as the connection string to connect to your cluster. Follow these steps to get set up:\n* Register for a free MongoDB Atlas account.\n* Follow the instructions to create a new database cluster.\n* Follow the instructions to obtain the connection string for your database cluster.\n\n> Don\u2019t forget to add the IP of your host machine to the IP Access list for your cluster.\n\nOnce you have the connection string, set it in your code:\n\n```\nimport getpass\nMONGODB_URI = getpass.getpass(\"Enter your MongoDB connection string:\")\n```\n\nWe will be using OpenAI\u2019s embedding and chat completion models, so you\u2019ll also need to obtain an OpenAI API key and set it as an environment variable for the OpenAI client to use:\n\n```\nimport os\nfrom openai import OpenAI\nos.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your OpenAI API Key:\")\nopenai_client = OpenAI()\n```\n\n## Step 3: Download the evaluation dataset\n\nAs mentioned previously, we will use the ragas-wikiqa dataset available on Hugging Face. We will download it using the **datasets** library and convert it into a **pandas** dataframe:\n\n```\nfrom datasets import load_dataset\nimport pandas as pd\n\ndata = load_dataset(\"explodinggradients/ragas-wikiqa\", split=\"train\")\ndf = pd.DataFrame(data)\n```\n\nThe dataset has the following columns that are important to us:\n* **question**: User questions\n* **correct_answer**: Ground truth answers to the user questions\n* **context**: List of reference texts to answer the user questions\n\n## Step 4: Create reference document chunks\n\nWe noticed that the reference texts in the `context` column are quite long. Typically for RAG, large texts are broken down into smaller chunks at ingest time.
Given a user query, only the most relevant chunks are retrieved and passed on as context to the LLM. So as a next step, we will chunk up our reference texts before embedding and ingesting them into MongoDB:\n\n```\nfrom langchain.text_splitter import RecursiveCharacterTextSplitter\n\n# Split text by tokens using the tiktoken tokenizer\ntext_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(\n    encoding_name=\"cl100k_base\", keep_separator=False, chunk_size=200, chunk_overlap=30\n)\n\ndef split_texts(texts):\n    chunked_texts = []\n    for text in texts:\n        chunks = text_splitter.create_documents([text])\n        chunked_texts.extend([chunk.page_content for chunk in chunks])\n    return chunked_texts\n\n# Split the context field into chunks\ndf[\"chunks\"] = df[\"context\"].apply(lambda x: split_texts(x))\n# Aggregate list of all chunks\nall_chunks = df[\"chunks\"].tolist()\ndocs = [item for chunk in all_chunks for item in chunk]\n```\n\nThe above code does the following:\n* Defines how to split the text into chunks: We use the `from_tiktoken_encoder` method of the `RecursiveCharacterTextSplitter` class in LangChain. This way, the texts are split by character and recursively merged as long as the resulting chunk length, measured in tokens by the tokenizer, stays below the specified `chunk_size`. Some overlap between chunks has been shown to improve retrieval, so we set an overlap of 30 tokens via the `chunk_overlap` parameter. The `keep_separator` parameter indicates whether or not to keep the default separators such as `\\n\\n`, `\\n`, etc. in the chunked text, and the `encoding_name` indicates the tokenizer encoding to use to generate tokens.\n* Defines a `split_texts` function: This function takes a list of reference texts (`texts`) as input, splits them using the text splitter, and returns the list of chunked texts.\n* Applies the `split_texts` function to the `context` column of our dataset\n* Creates a list of chunked texts for the entire dataset\n\n> In practice, you may want to experiment with different chunking strategies as well while evaluating retrieval, but for this tutorial, we are only focusing on evaluating different embedding models.\n\n## Step 5: Create embeddings and ingest them into MongoDB\n\nNow that we have chunked up our reference documents, let\u2019s embed and ingest them into MongoDB Atlas to build a knowledge base (vector store) for our RAG application. Since we want to evaluate two embedding models for the retriever, we will create separate vector stores (collections) using each model.\n\nWe will be evaluating the **text-embedding-ada-002** and **text-embedding-3-small** (we will call them **ada-002** and **3-small** in the rest of the tutorial) embedding models from OpenAI, so first, let\u2019s define a function to generate embeddings using OpenAI\u2019s Embeddings API:\n\n```\nfrom typing import List\n\ndef get_embeddings(docs: List[str], model: str) -> List[List[float]]:\n \"\"\"\n Generate embeddings using the OpenAI API.\n\n Args:\n docs (List[str]): List of texts to embed\n model (str, optional): Model name.
Defaults to \"text-embedding-3-large\".\n\n Returns:\n List[float]: Array of embeddings\n \"\"\"\n # replace newlines, which can negatively affect performance.\n docs = [doc.replace(\"\\n\", \" \") for doc in docs]\n response = openai_client.embeddings.create(input=docs, model=model)\n response = [r.embedding for r in response.data]\n return response\n```\n\nThe embedding function above takes a list of texts (`docs`) and a model name (`model`) as arguments and returns a list of embeddings generated using the specified model. The OpenAI API returns a list of embedding objects, which need to be parsed to get the final list of embeddings. A sample response from the API looks like the following:\n\n```\n{\n \"data\": [\n {\n \"embedding\": [\n 0.018429679796099663,\n -0.009457024745643139\n .\n .\n .\n ],\n \"index\": 0,\n \"object\": \"embedding\"\n }\n ],\n \"model\": \"text-embedding-3-small\",\n \"object\": \"list\",\n \"usage\": {\n \"prompt_tokens\": 183,\n \"total_tokens\": 183\n }\n}\n```\n\nNow, let\u2019s use each model to embed the chunked texts and ingest them along with their embeddings into a MongoDB collection:\n\n```\nfrom pymongo import MongoClient\nfrom tqdm.auto import tqdm\n\nclient = MongoClient(MONGODB_URI)\nDB_NAME = \"ragas_evals\"\ndb = client[DB_NAME]\nbatch_size = 128\n\nEVAL_EMBEDDING_MODELS = [\"text-embedding-ada-002\", \"text-embedding-3-small\"]\n\nfor model in EVAL_EMBEDDING_MODELS:\n embedded_docs = []\n print(f\"Getting embeddings for the {model} model\")\n for i in tqdm(range(0, len(docs), batch_size)):\n end = min(len(docs), i + batch_size)\n batch = docs[i:end]\n # Generate embeddings for current batch\n batch_embeddings = get_embeddings(batch, model)\n # Creating the documents to ingest into MongoDB for current batch\n batch_embedded_docs = [\n {\"text\": batch[i], \"embedding\": batch_embeddings[i]}\n for i in range(len(batch))\n ]\n embedded_docs.extend(batch_embedded_docs)\n print(f\"Finished getting embeddings for the {model} model\")\n\n # Bulk insert documents into a MongoDB collection\n print(f\"Inserting embeddings for the {model} model\")\n collection = db[model]\n collection.delete_many({})\n collection.insert_many(embedded_docs)\n print(f\"Finished inserting embeddings for the {model} model\")\n```\n\nThe above code does the following:\n* Creates a PyMongo client (`client`) to connect to a MongoDB Atlas cluster\n* Specifies the database (`DB_NAME`) to connect to \u2014 we are calling the database **ragas_evals**; if the database doesn\u2019t exist, it will be created at ingest time\n* Specifies the batch size (`batch_size`) for generating embeddings in bulk\n* Specifies the embedding models (`EVAL_EMBEDDING_MODELS`) to use for generating embeddings\n* For each embedding model, generates embeddings for the entire evaluation set and creates the documents to be ingested into MongoDB \u2014 an example document looks like the following:\n\n```\n{\n \"text\": \"For the purposes of authentication, most countries require commercial or personal documents which originate from or are signed in another country to be notarized before they can be used or officially recorded or before they can have any legal effect.\",\n \"embedding\": [\n 0.018429679796099663,\n -0.009457024745643139,\n .\n .\n .\n ]\n}\n```\n\n* Deletes any existing documents in the collection named after the model, and bulk inserts the documents into it using the `insert_many()` method\n\nTo verify that the above code ran as expected, navigate to the Atlas UI and ensure that you see two 
collections, namely **text-embedding-ada-002** and **text-embedding-3-small**, in the **ragas_evals** database:\n\n![Viewing collections in MongoDB Atlas UI][2]\n\nWhile you are in the Atlas UI, create vector indexes for **both** collections. The vector index definition specifies the path to the embedding field, the number of dimensions, and the similarity metric to use while retrieving documents using vector search. Ensure that the index name is `vector_index` for each collection and that the index definition looks as follows:\n\n```\n{\n  \"fields\": [\n    {\n      \"numDimensions\": 1536,\n      \"path\": \"embedding\",\n      \"similarity\": \"cosine\",\n      \"type\": \"vector\"\n    }\n  ]\n}\n```\n\n> The number of embedding dimensions in both index definitions is 1536 since **ada-002** and **3-small** have the same number of dimensions.\n\n## Step 6: Compare embedding models for retrieval\n\nAs a first step in the evaluation process, we want to ensure that we are retrieving the right context for the LLM. While there are several factors (chunking, re-ranking, etc.) that can impact retrieval, in this tutorial, we will only experiment with different embedding models. We will use the same models that we used in Step 5. We will use LangChain to create a vector store using MongoDB Atlas and use it as a retriever in our RAG application.\n\n```\nfrom langchain_openai import OpenAIEmbeddings\nfrom langchain_mongodb import MongoDBAtlasVectorSearch\nfrom langchain_core.vectorstores import VectorStoreRetriever\n\ndef get_retriever(model: str, k: int) -> VectorStoreRetriever:\n    \"\"\"\n    Given an embedding model and top k, get a vector store retriever object\n\n    Args:\n        model (str): Embedding model to use\n        k (int): Number of results to retrieve\n\n    Returns:\n        VectorStoreRetriever: A vector store retriever object\n    \"\"\"\n    embeddings = OpenAIEmbeddings(model=model)\n\n    vector_store = MongoDBAtlasVectorSearch.from_connection_string(\n        connection_string=MONGODB_URI,\n        namespace=f\"{DB_NAME}.{model}\",\n        embedding=embeddings,\n        index_name=\"vector_index\",\n        text_key=\"text\",\n    )\n\n    retriever = vector_store.as_retriever(\n        search_type=\"similarity\", search_kwargs={\"k\": k}\n    )\n    return retriever\n```\n\nThe above code defines a `get_retriever` function that takes an embedding model (`model`) and the number of documents to retrieve (`k`) as arguments and returns a retriever object as the output. The function creates a MongoDB Atlas vector store using the `MongoDBAtlasVectorSearch` class from the `langchain-mongodb` integration. Specifically, it uses the `from_connection_string` method of the class to create the vector store from the MongoDB connection string which we obtained in Step 2 above. It also takes additional arguments such as:\n* **namespace**: The (database, collection) combination to use as the vector store\n* **embedding**: Embedding model to use to generate the query embedding for retrieval\n* **index_name**: The MongoDB Atlas vector search index name (as set in Step 5)\n* **text_key**: The field in the reference documents that contains the text\n\nFinally, it uses the `as_retriever` method in LangChain to use the vector store as a retriever. `as_retriever` can take arguments such as `search_type` which specifies the metric to use to retrieve documents. Here, we choose `similarity` since we want to retrieve the most similar documents to a given query.
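As a quick sanity check, once the collections and indexes above are in place, you can look at what a retriever returns for the first question in our dataset. Here is a minimal sketch (it simply prints the beginning of each retrieved chunk):\n\n```\nretriever = get_retriever(\"text-embedding-ada-002\", 2)\nfor doc in retriever.get_relevant_documents(df[\"question\"].iloc[0]):\n    print(doc.page_content[:200])\n```\n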
We can also specify additional search arguments such as `k` which is the number of documents to retrieve.\n\nTo evaluate the retriever, we will use the `context_precision` and `context_recall` metrics from the **ragas** library. These metrics use the retrieved context, ground truth answers, and the questions. So let\u2019s first gather the list of ground truth answers and questions:\n\n```\nQUESTIONS = df[\"question\"].to_list()\nGROUND_TRUTH = df[\"correct_answer\"].tolist()\n```\n\nThe above code snippet simply converts the `question` and `correct_answer` columns from the dataframe we created in Step 3 to lists. We will reuse these lists in the steps that follow.\n\nFinally, here\u2019s the code to evaluate the retriever:\n\n```\nfrom datasets import Dataset\nfrom ragas import evaluate, RunConfig\nfrom ragas.metrics import context_precision, context_recall\nimport nest_asyncio\n\n# Allow nested use of asyncio (used by RAGAS)\nnest_asyncio.apply()\n\nfor model in EVAL_EMBEDDING_MODELS:\n data = {\"question\": [], \"ground_truth\": [], \"contexts\": []}\n data[\"question\"] = QUESTIONS\n data[\"ground_truth\"] = GROUND_TRUTH\n\n retriever = get_retriever(model, 2)\n # Getting relevant documents for the evaluation dataset\n for i in tqdm(range(0, len(QUESTIONS))):\n data[\"contexts\"].append(\n [doc.page_content for doc in retriever.get_relevant_documents(QUESTIONS[i])]\n )\n # RAGAS expects a Dataset object\n dataset = Dataset.from_dict(data)\n # RAGAS runtime settings to avoid hitting OpenAI rate limits\n run_config = RunConfig(max_workers=4, max_wait=180)\n result = evaluate(\n dataset=dataset,\n metrics=[context_precision, context_recall],\n run_config=run_config,\n raise_exceptions=False,\n )\n print(f\"Result for the {model} model: {result}\")\n```\n\nThe above code does the following for each of the models that we are evaluating:\n* Creates a dictionary (`data`) with `question`, `ground_truth`, and `contexts` as keys, corresponding to the questions in the evaluation dataset, their ground truth answers, and retrieved contexts\n* Creates a `retriever` that retrieves the top two most similar documents to a given query\n* Uses the `get_relevant_documents` method to obtain the most relevant documents for each question in the evaluation dataset and add them to the `contexts` list in the `data` dictionary\n* Converts the `data` dictionary to a Dataset object\n* Creates a runtime config for RAGAS to override its default concurrency and retry settings \u2014 we had to do this to avoid running into OpenAI\u2019s [rate limits, but this might be a non-issue depending on your usage tier, or if you are not using OpenAI models\n* Uses the `evaluate` method from the **ragas** library to get the overall evaluation metrics for the evaluation dataset\n\nThe evaluation results for embedding models we compared look as follows on our dataset:\n\n| Model | Context precision | Context recall |\n| ----------------------- | ---------- | ---------- |\n| ada-002 | 0.9310 | 0.8561 |\n| 3-small | 0.9116 | 0.8826 |\n\nBased on the above numbers, **ada-002** is better at retrieving the most relevant results at the top but **3-small** is better at retrieving contexts that are more aligned with the ground truth answers. So we conclude that **3-small** is the better embedding model for retrieval.\n\n## Step 7: Compare completion models for generation\n\nNow that we\u2019ve found the best model for our retriever, let\u2019s find the best completion model for the generator component in our RAG application. 
\n\nBut first, let\u2019s build out our RAG \u201capplication.\u201d In LangChain, we do this using chains. Chains in LangChain are a sequence of calls either to an LLM, a tool, or a data processing step. Each component in a chain is referred to as a Runnable, and the recommended way to compose chains is using the LangChain Expression Language (LCEL).\n\n```\nfrom langchain_openai import ChatOpenAI\nfrom langchain_core.prompts import ChatPromptTemplate\nfrom langchain_core.runnables import RunnablePassthrough\nfrom langchain_core.runnables.base import RunnableSequence\nfrom langchain_core.output_parsers import StrOutputParser\n\ndef get_rag_chain(retriever: VectorStoreRetriever, model: str) -> RunnableSequence:\n    \"\"\"\n    Create a basic RAG chain\n\n    Args:\n        retriever (VectorStoreRetriever): Vector store retriever object\n        model (str): Chat completion model to use\n\n    Returns:\n        RunnableSequence: A RAG chain\n    \"\"\"\n    # Generate context using the retriever, and pass the user question through\n    retrieve = {\n        \"context\": retriever\n        | (lambda docs: \"\\n\\n\".join(d.page_content for d in docs)),\n        \"question\": RunnablePassthrough(),\n    }\n    template = \"\"\"Answer the question based only on the following context: \\\n    {context}\n\n    Question: {question}\n    \"\"\"\n    # Defining the chat prompt\n    prompt = ChatPromptTemplate.from_template(template)\n    # Defining the model to be used for chat completion\n    llm = ChatOpenAI(temperature=0, model=model)\n    # Parse output as a string\n    parse_output = StrOutputParser()\n\n    # Naive RAG chain\n    rag_chain = retrieve | prompt | llm | parse_output\n    return rag_chain\n```\n\nIn the above code, we define a `get_rag_chain` function that takes a `retriever` object and a chat completion model name (`model`) as arguments and returns a RAG chain as the output. The function creates the following components that together make up the RAG chain:\n* **retrieve**: Takes the user input (a question) and sends it to the retriever to obtain similar documents; it also formats the output to match the input format expected by the next runnable, which in this case is a dictionary with `context` and `question` as keys; the `RunnablePassthrough()` call for the question key indicates that the user input is simply passed through to the next stage under the question key\n* **prompt**: Crafts a prompt by populating a prompt template with the context and question from the retrieve stage\n* **llm**: Specifies the chat model to use for completion\n* **parse_output**: A simple output parser that parses the result from the LLM into a string\n\nFinally, it creates a RAG chain (`rag_chain`) using the LCEL pipe ( | ) notation to chain together the above components.\n\nFor completion models, we will be evaluating the latest version of **gpt-3.5-turbo** and an older version of GPT-3.5 Turbo, i.e., **gpt-3.5-turbo-1106**.
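Before running the full evaluation loop, it can help to sanity-check the chain end-to-end on a single question from the dataset. A minimal sketch using the helpers defined above:\n\n```\nretriever = get_retriever(\"text-embedding-3-small\", 2)\nrag_chain = get_rag_chain(retriever, \"gpt-3.5-turbo\")\n# Invoke the chain on the first evaluation question and inspect the generated answer\nprint(rag_chain.invoke(QUESTIONS[0]))\n```\n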
The evaluation code for the generator looks largely similar to what we had in Step 6 except it has additional steps to initialize the RAG chain and invoke it for each question in our evaluation dataset in order to generate answers:\n\n```\nfrom ragas.metrics import faithfulness, answer_relevancy\n\nfor model in [\"gpt-3.5-turbo-1106\", \"gpt-3.5-turbo\"]:\n data = {\"question\": [], \"ground_truth\": [], \"contexts\": [], \"answer\": []}\n data[\"question\"] = QUESTIONS\n data[\"ground_truth\"] = GROUND_TRUTH\n # Using the best embedding model from the retriever evaluation\n retriever = get_retriever(\"text-embedding-3-small\", 2)\n rag_chain = get_rag_chain(retriever, model)\n for i in tqdm(range(0, len(QUESTIONS))):\n question = QUESTIONS[i]\n data[\"answer\"].append(rag_chain.invoke(question))\n data[\"contexts\"].append(\n [doc.page_content for doc in retriever.get_relevant_documents(question)]\n )\n # RAGAS expects a Dataset object\n dataset = Dataset.from_dict(data)\n # RAGAS runtime settings to avoid hitting OpenAI rate limits\n run_config = RunConfig(max_workers=4, max_wait=180)\n result = evaluate(\n dataset=dataset,\n metrics=[faithfulness, answer_relevancy],\n run_config=run_config,\n raise_exceptions=False,\n )\n print(f\"Result for the {model} model: {result}\")\n```\n\nA few changes to note in the above code:\n* The `data` dictionary has an additional `answer` key to accumulate answers to the questions in our evaluation dataset.\n* We use the **text-embedding-3-small** for the retriever since we determined this to be the better embedding model in Step 6.\n* We are using the metrics `faithfulness` and `answer_relevancy` to evaluate the generator.\n\nThe evaluation results for the completion models we compared look as follows on our dataset:\n\n| Model | Faithfulness | Answer relevance |\n| ----------------------- | ---------- | ---------- |\n| gpt-3.5-turbo | 0.9714 | 0.9087 |\n| gpt-3.5-turbo-1106 | 0.9671 | 0.9105 |\n\nBased on the above numbers, the latest version of **gpt-3.5-turbo** produces more factually consistent results than its predecessor, while the older version produces answers that are more pertinent to the given prompt. 
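If a single number per model is easier to compare, one option is to combine the two scores after the fact with a simple weighted sum. The sketch below does this for the results above; the weights are arbitrary and should reflect what matters most for your use case:\n\n```\n# Combine the RAGAS scores reported above into a single number (illustrative weights)\nWEIGHTS = {\"faithfulness\": 0.6, \"answer_relevancy\": 0.4}\n\ndef combined_score(scores: dict) -> float:\n    return sum(WEIGHTS[metric] * value for metric, value in scores.items())\n\nprint(combined_score({\"faithfulness\": 0.9714, \"answer_relevancy\": 0.9087}))  # gpt-3.5-turbo\nprint(combined_score({\"faithfulness\": 0.9671, \"answer_relevancy\": 0.9105}))  # gpt-3.5-turbo-1106\n```\n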
Let\u2019s say we want to go with the more \u201cfaithful\u201d model.\n\n> If you don\u2019t want to choose between metrics, consider creating consolidated metrics using a weighted summation after the fact (as sketched above), or customize the prompts used for evaluation.\n\n## Step 8: Measure the overall performance of the RAG application\n\nFinally, let\u2019s evaluate the overall performance of the system using the best-performing models:\n\n```\nfrom ragas.metrics import answer_similarity, answer_correctness\n\ndata = {\"question\": [], \"ground_truth\": [], \"answer\": []}\ndata[\"question\"] = QUESTIONS\ndata[\"ground_truth\"] = GROUND_TRUTH\n# Using the best embedding model from the retriever evaluation\nretriever = get_retriever(\"text-embedding-3-small\", 2)\n# Using the best completion model from the generator evaluation\nrag_chain = get_rag_chain(retriever, \"gpt-3.5-turbo\")\nfor question in tqdm(QUESTIONS):\n    data[\"answer\"].append(rag_chain.invoke(question))\n\ndataset = Dataset.from_dict(data)\nrun_config = RunConfig(max_workers=4, max_wait=180)\nresult = evaluate(\n    dataset=dataset,\n    metrics=[answer_similarity, answer_correctness],\n    run_config=run_config,\n    raise_exceptions=False,\n)\nprint(f\"Overall metrics: {result}\")\n```\n\nIn the above code, we use the **text-embedding-3-small** model for the retriever and the **gpt-3.5-turbo** model for the generator, to generate answers to questions in our evaluation dataset. We use the `answer_similarity` and `answer_correctness` metrics to measure the overall performance of the RAG chain.\n\nThe evaluation shows that the RAG chain produces an answer similarity of **0.8873** and an answer correctness of **0.5922** on our dataset.\n\nThe correctness seems a bit low, so let\u2019s investigate further. You can convert the results from RAGAS to a pandas dataframe to perform further analysis:\n\n```\nresult_df = result.to_pandas()\nresult_df[result_df[\"answer_correctness\"] < 0.7]\n```\n\nFor a more visual analysis, you can also create a heatmap of questions vs. metrics:\n\n```\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(10, 8))\nsns.heatmap(\n    result_df[1:10].set_index(\"question\")[[\"answer_similarity\", \"answer_correctness\"]],\n    annot=True,\n    cmap=\"flare\",\n)\nplt.show()\n```\n\n![Heatmap visualizing the performance of a RAG application][3]\n\nUpon manually investigating some of the low-scoring results, we observed the following:\n* Some ground-truth answers in the evaluation dataset were in fact incorrect. So although the answer generated by the LLM was right, it didn\u2019t match the ground truth answer, resulting in a low score.\n* Some ground-truth answers were full sentences whereas the LLM-generated answer, although factually correct, was a single word, number, etc.\n\nThe above findings emphasize the importance of spot-checking LLM evaluations and curating accurate, representative evaluation datasets, and they highlight yet another challenge with using LLMs for evaluation.\n\n## Step 9: Track performance over time\n\nEvaluation should not be a one-time event. Each time you want to change a component in the system, you should evaluate the changes against existing settings to assess how they will impact performance. Then, once the application is deployed in production, you should also have a way to monitor performance in real time and detect changes therein.\n\nIn this tutorial, we used MongoDB Atlas as the vector database for our RAG application.
You can also use Atlas to monitor the performance of your LLM application via Atlas Charts. All you need to do is write evaluation results and any feedback metrics (e.g., number of thumbs up, thumbs down, response regenerations, etc.) that you want to track to a MongoDB collection:\n\n```\nfrom datetime import datetime\n\nresult[\"timestamp\"] = datetime.now()\ncollection = db[\"metrics\"]\ncollection.insert_one(result)\n```\n\nIn the above code snippet, we add a `timestamp` field containing the current timestamp to the final evaluation result (`result`) from Step 8, and write it to a collection called **metrics** in the **ragas_evals** database using PyMongo\u2019s `insert_one` method. The `result` dictionary inserted into MongoDB looks like this:\n\n```\n{\n  \"answer_similarity\": 0.8873,\n  \"answer_correctness\": 0.5922,\n  \"timestamp\": 2024-04-07T23:27:30.655+00:00\n}\n```\n\nWe can now create a dashboard in Atlas Charts to visualize the data in the **metrics** collection:\n\n![Creating a dashboard in Atlas Charts][4]\n\nOnce the dashboard is created, click the **Add Chart** button and select the **metrics** collection as the data source for the chart. Drag and drop fields to include, choose a chart type, add a title and description for the chart, and save it to the dashboard:\n\n![Creating a chart in Atlas Charts][5]\n\nHere\u2019s what our sample dashboard looks like:\n\n![Sample dashboard created using Atlas Charts][6]\n\nSimilarly, once your application is in production, you can create a dashboard for any feedback metrics you collect.\n\n## Conclusion\n\nIn this tutorial, we looked into some of the challenges with evaluating LLM applications, followed by a detailed, step-by-step workflow for evaluating an LLM application, including persisting and tracking evaluation results over time. While we used RAG as our example for evaluation, the concepts and techniques shown in this tutorial can be extended to other LLM applications, including agents.\n\nNow that you have a good foundation on how to evaluate RAG applications, you can take it up as a challenge to evaluate RAG systems from some of our other tutorials:\n* Building a RAG System With Google\u2019s Gemma, Hugging Face, and MongoDB\n* Building a RAG System Using Claude Opus and MongoDB\n\nIf you have further questions about LLM evaluations, please reach out to us in our Generative AI community forums and stay tuned for the next tutorial in the RAG series. Previous tutorials from the series can be found below:\n* Part 1: How to Choose the Right Embedding Model for Your Application\n\n## References\n\nIf you would like to learn more about evaluating LLM applications, check out the following references:\n* https://docs.ragas.io/en/latest/getstarted/index.html\n* Yan, Ziyou. (Oct 2023). AI Engineer Summit - Building Blocks for LLM Systems & Products. eugeneyan.com. https://eugeneyan.com/speaking/ai-eng-summit/\n* Yan, Ziyou. (Mar 2024). LLM Task-Specific Evals that Do & Don't Work. eugeneyan.com. https://eugeneyan.com/writing/evals/\n* Yan, Ziyou. (Jul 2023). Patterns for Building LLM-based Systems & Products. eugeneyan.com.
https://eugeneyan.com/writing/llm-patterns/\n* https://aiconference.com/speakers/jerry-liu/\n* https://www.databricks.com/blog/LLM-auto-eval-best-practices-RAG\n* https://huggingface.co/learn/cookbook/en/rag_evaluation\n* Llamaindex evals framework\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt50b123e3b95ecbdf/661ad2da36c04ae24dcf9306/image1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc8c83b7525024bd3/661ad53e16c12012c35dbf4c/image2.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt234eee7c71ffb9c8/661ad86a3c817d17d9e889a0/image5.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcfead54751777066/661ad95120797a9792b05cca/image3.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta1cc527a0f40d9a7/661ad981905fc97e5fec3611/image6.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfbcdc080c23ce55a/661ad99a12f2756e37eff236/image4.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI"], "pageDescription": "In this tutorial, we will see how to evaluate LLM applications using the RAGAS framework, taking a RAG system as an example.", "contentType": "Tutorial"}, "title": "RAG Series Part 2: How to Evaluate Your RAG Application", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-terraform-cluster-backup-policies", "action": "created", "body": "# MongoDB Atlas With Terraform - Cluster and Backup Policies\n\nIn this tutorial, I will show you how to create a MongoDB cluster in Atlas using Terraform. We saw in a previous article how to create an API key to start using Terraform and create our first project module. Now, we will go ahead and create our first cluster. If you don't have an API key and a project, I recommend you look at the previous article.\n\nThis article is for anyone who intends to use or already uses infrastructure as code (IaC) on the MongoDB Atlas platform or wants to learn more about it.\n\nEverything we do here is contained in the provider/resource documentation: mongodbatlas_advanced_cluster | Resources | mongodb/mongodbatlas | Terraform\n\n> Note: We will not use a backend file. However, for productive implementations, it is extremely important and safer to store the state file in a remote location such as S3, GCS, Azurerm, etc.\n\n \n## Creating a cluster\nAt this point, we will create our first replica set cluster using Terraform in MongoDB Atlas. As discussed in the previous article, Terraform is a powerful infrastructure-as-code tool that allows you to manage and provision IT resources in an efficient and predictable way. By using it in conjunction with MongoDB Atlas, you can automate the creation and management of database resources in the cloud, ensuring a consistent and reliable infrastructure.\n\nBefore we begin, make sure that all the prerequisites mentioned in the previous article are properly configured: Install Terraform, create an API key in MongoDB Atlas, and set up a project in Atlas. These steps are essential to ensure the success of creating your replica set cluster.\n### Terraform provider configuration for MongoDB Atlas\nThe first step is to configure the Terraform provider for MongoDB Atlas. This will allow Terraform to communicate with the MongoDB Atlas API and manage resources within your account. 
Add the following block of code to your provider.tf file:\u00a0\n\n```\nprovider \"mongodbatlas\" {}\n```\n\nIn the previous article, we configured the Terraform provider by directly entering our public and private keys. Now, in order to adopt more professional practices, we have chosen to use environment variables for authentication. The MongoDB Atlas provider, like many others, supports several authentication methodologies. The safest and most recommended option is to use environment variables. This implies only defining the provider in our Terraform code and exporting the relevant environment variables where Terraform will be executed, whether in the terminal, as a secret in Kubernetes, or a secret in GitHub Actions, among other possible contexts. There are other forms of authentication, such as using MongoDB CLI, AWS Secrets Manager, directly through variables in Terraform, or even specifying the keys in the code. However, to ensure security and avoid exposing our keys in accessible locations, we opt for the safer approaches mentioned.\n\n### Creating the Terraform version file\nInside the versions.tf file, you will start by specifying the version of Terraform that your project requires. This is important to ensure that all users and CI/CD environments use the same version of Terraform, avoiding possible incompatibilities or execution errors. In addition to defining the Terraform version, it is equally important to specify the versions of the providers used in your project. This ensures that resources are managed consistently. For example, to set the MongoDB Atlas provider version, you would add a `required_providers` block inside the Terraform block, as shown below:\n\n```terraform\nterraform {\n\u00a0\u00a0required_version = \">= 0.12\"\n\u00a0\u00a0required_providers {\n\u00a0\u00a0\u00a0\u00a0mongodbatlas = {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0source = \"mongodb/mongodbatlas\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0version = \"1.14.0\"\n\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0}\n}\n```\n\n### Defining the cluster resource\nAfter configuring the version file and establishing the Terraform and provider versions, the next step is to define the cluster resource in MongoDB Atlas. This is done by creating a .tf file, for example main.tf, where you will specify the properties of the desired cluster. 
As we are going to make a module that will be reusable, we will use variables and default values so that other calls can create clusters with different architectures or sizes, without having to write a new module.\n\nI will look at some attributes and parameters to make this clear.\n```terraform\n# ------------------------------------------------------------------------------\n# MONGODB CLUSTER\n# ------------------------------------------------------------------------------\nresource \"mongodbatlas_advanced_cluster\" \"default\" {\n\u00a0\u00a0project_id = data.mongodbatlas_project.default.id\n\u00a0\u00a0name = var.name\n\u00a0\u00a0cluster_type = var.cluster_type\n\u00a0\u00a0backup_enabled = var.backup_enabled\n\u00a0\u00a0pit_enabled = var.pit_enabled\n\u00a0\u00a0mongo_db_major_version = var.mongo_db_major_version\n\u00a0\u00a0disk_size_gb = var.disk_size_gb\n\u00a0\u00a0\n``` \nIn this first block, we are specifying the name of our cluster through the name parameter, its type (which can be a `REPLICASET`, `SHARDED`, or `GEOSHARDED`), and if we have backup and point in time activated, in addition to the database version and the amount of storage for the cluster.\n\n```terraform\n\u00a0\u00a0advanced_configuration {\n\u00a0\u00a0\u00a0\u00a0fail_index_key_too_long = var.fail_index_key_too_long\n\u00a0\u00a0\u00a0\u00a0javascript_enabled = var.javascript_enabled\n\u00a0\u00a0\u00a0\u00a0minimum_enabled_tls_protocol = var.minimum_enabled_tls_protocol\n\u00a0\u00a0\u00a0\u00a0no_table_scan = var.no_table_scan\n\u00a0\u00a0\u00a0\u00a0oplog_size_mb = var.oplog_size_mb\n\u00a0\u00a0\u00a0\u00a0default_read_concern = var.default_read_concern\n\u00a0\u00a0\u00a0\u00a0default_write_concern = var.default_write_concern\n\u00a0\u00a0\u00a0\u00a0oplog_min_retention_hours = var.oplog_min_retention_hours\n\u00a0\u00a0\u00a0\u00a0transaction_lifetime_limit_seconds = var.transaction_lifetime_limit_seconds\n\u00a0\u00a0\u00a0\u00a0sample_size_bi_connector = var.sample_size_bi_connector\n\u00a0\u00a0\u00a0\u00a0sample_refresh_interval_bi_connector = var.sample_refresh_interval_bi_connector\n}\n```\n\nHere, we are specifying some advanced settings. Many of these values will not be specified in the .tfvars as they have default values in the variables.tf file.\n\nParameters include the type of read/write concern, oplog size in MB, TLS protocol, whether JavaScript will be enabled in MongoDB, and transaction lifetime limit in seconds. no_table_scan is for when the cluster disables the execution of any query that requires a collection scan to return results, when true. There are more parameters that you can look at in the documentation, if you have questions.\n\n```terraform\n\u00a0\u00a0replication_specs {\n\u00a0\u00a0\u00a0\u00a0num_shards = var.cluster_type == \"REPLICASET\" ? 
null : var.num_shards\n\u00a0\u00a0\u00a0\u00a0\n\u00a0\u00a0\u00a0\u00a0dynamic \"region_configs\" {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0for_each = var.region_configs\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0content {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0provider_name = region_configs.value.provider_name\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0priority = region_configs.value.priority\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0region_name = region_configs.value.region_name\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0electable_specs {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0instance_size = region_configs.value.electable_specs.instance_size\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0node_count = region_configs.value.electable_specs.node_count\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0disk_iops = region_configs.value.electable_specs.instance_size == \"M10\" || region_configs.value.electable_specs.instance_size == \"M20\" ? null :\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 region_configs.value.electable_specs.disk_iops\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ebs_volume_type = region_configs.value.electable_specs.ebs_volume_type\n}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0auto_scaling {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0disk_gb_enabled = region_configs.value.auto_scaling.disk_gb_enabled\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0compute_enabled = region_configs.value.auto_scaling.compute_enabled\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0compute_scale_down_enabled = region_configs.value.auto_scaling.compute_scale_down_enabled\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0compute_min_instance_size = region_configs.value.auto_scaling.compute_min_instance_size\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0compute_max_instance_size = region_configs.value.auto_scaling.compute_max_instance_size\n}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0analytics_specs {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0instance_size = try(region_configs.value.analytics_specs.instance_size, \"M10\")\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0node_count = try(region_configs.value.analytics_specs.node_count, 0)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0disk_iops = try(region_configs.value.analytics_specs.disk_iops, null)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ebs_volume_type = try(region_configs.value.analytics_specs.ebs_volume_type, \"STANDARD\")\n}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0analytics_auto_scaling {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0disk_gb_enabled = try(region_configs.value.analytics_auto_scaling.disk_gb_enabled, null)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0compute_enabled = try(region_configs.value.analytics_auto_scaling.compute_enabled, null)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0compute_scale_down_enabled = try(region_configs.value.analytics_auto_scaling.compute_scale_down_enabled, null)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0compute_min_instance_size = try(region_configs.value.analytics_auto_scaling.compute_min_instance_size, null)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0compute_max_instance_size = try(region_configs.value.analytics_auto_scaling.compute_max_instance_size, 
null)\n}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0read_only_specs {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0instance_size = try(region_configs.value.read_only_specs.instance_size, \"M10\")\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0node_count = try(region_configs.value.read_only_specs.node_count, 0)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0disk_iops = try(region_configs.value.read_only_specs.disk_iops, null)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0ebs_volume_type = try(region_configs.value.read_only_specs.ebs_volume_type, \"STANDARD\")\n}\n}\n}\n}\n\n```\n\nAt this moment, we are placing the number of shards we want, in case our cluster is not a REPLICASET. In addition, we specify the configuration of the cluster, region, cloud, priority for failover, autoscaling, electable, analytics, and read-only node configurations, in addition to its autoscaling configurations.\n\n```terraform\n\u00a0\u00a0dynamic \"tags\" {\n\u00a0\u00a0\u00a0\u00a0for_each = local.tags\n\u00a0\u00a0\u00a0\u00a0content {\n\u00a0\u00a0\u00a0\u00a0\u00a0key = tags.key\n\u00a0\u00a0\u00a0\u00a0\u00a0value = tags.value\n}\n}\n\n\u00a0\u00a0bi_connector_config {\n\u00a0\u00a0\u00a0\u00a0enabled = var.bi_connector_enabled\n\u00a0\u00a0\u00a0\u00a0read_preference = var.bi_connector_read_preference\n}\n\n\u00a0\u00a0lifecycle {\n\u00a0\u00a0\u00a0\u00a0ignore_changes = \n\u00a0\u00a0\u00a0\u00a0disk_size_gb,\n\u00a0\u00a0\u00a0\u00a0]\n}\n}\n```\n\nNext, we create a dynamic block to loop for each tag variable we include. In addition, we specify the BI connector, if desired, and the lifecycle block. Here, we are only specifying `disk_size_gb` for an example, but it is recommended to read the documentation that has important warnings about this block, such as including `instance_size`, as autoscaling can change and you don't want to accidentally retire an instance during peak times.\n\n```\n# ------------------------------------------------------------------------------\n# MONGODB BACKUP SCHEDULE\n# ------------------------------------------------------------------------------\nresource \"mongodbatlas_cloud_backup_schedule\" \"default\" {\nproject_id = data.mongodbatlas_project.default.id\ncluster_name = mongodbatlas_advanced_cluster.default.name\nupdate_snapshots = var.update_snapshots\nreference_hour_of_day = var.reference_hour_of_day\nreference_minute_of_hour = var.reference_minute_of_hour\nrestore_window_days = var.restore_window_days\n\npolicy_item_hourly {\nfrequency_interval = var.policy_item_hourly_frequency_interval\nretention_unit = var.policy_item_hourly_retention_unit\nretention_value = var.policy_item_hourly_retention_value\n}\n\npolicy_item_daily {\nfrequency_interval = var.policy_item_daily_frequency_interval\nretention_unit = var.policy_item_daily_retention_unit\nretention_value = var.policy_item_daily_retention_value\n}\n\npolicy_item_weekly {\nfrequency_interval = var.policy_item_weekly_frequency_interval\nretention_unit = var.policy_item_weekly_retention_unit\nretention_value = var.policy_item_weekly_retention_value\n}\n\npolicy_item_monthly {\nfrequency_interval = var.policy_item_monthly_frequency_interval\nretention_unit = var.policy_item_monthly_retention_unit\nretention_value = var.policy_item_monthly_retention_value\n}\n}\n```\n \nFinally, we create the backup block, which contains the policies and settings regarding the backup of our cluster.\n\nThis module, while detailed, encapsulates the full functionality offered by the 
`mongodbatlas_advanced_cluster` and `mongodbatlas_cloud_backup_schedule` resources, providing a comprehensive approach to creating and managing clusters in MongoDB Atlas. It supports the configuration of replica set, sharded, and geosharded clusters, meeting a variety of scalability and geographic distribution needs.\n\nOne of the strengths of this module is its flexibility in configuring backup policies, allowing fine adjustments that precisely align with the requirements of each database. This is essential to ensure resilience and effective data recovery in any scenario. Additionally, the module comes with vertical scaling enabled by default, in addition to offering advanced storage auto-scaling capabilities, ensuring that the cluster dynamically adjusts to the data volume and workload.\n\nTo complement the robustness of the configuration, the module allows the inclusion of analytical nodes and read-only nodes, expanding the possibilities of using the cluster for scenarios that require in-depth analysis or intensive read operations without impacting overall performance.\n\nThe default configuration includes smart preset values, such as the MongoDB version, which is set to \"7.0\" to take advantage of the latest features while maintaining the option to adjust to specific versions as needed. This \u201cbest practices\u201d approach ensures a solid starting point for most projects, reducing the need for manual adjustments and simplifying the deployment process.\n\nAdditionally, the ability to deploy clusters in any region and cloud provider \u2014 such as AWS, Azure, or GCP \u2014 offers unmatched flexibility, allowing teams to choose the best solution based on their cost, performance, and compliance preferences.\n\nIn summary, this module not only facilitates the configuration and management of MongoDB Atlas clusters with an extensive range of options and adjustments but also promotes secure and efficient configuration practices, making it a valuable tool for developers and database administrators in implementing scalable and reliable data solutions in the cloud.\n\nThe use of the lifecycle directive with the `ignore_changes` option in the Terraform code was specifically implemented to accommodate manual upscale situations of the MongoDB Atlas cluster, which should not be automatically reversed by Terraform in subsequent executions. This approach ensures that, after a manual increase in storage capacity (`disk_size_gb`) or other specific replication configurations (`replication_specs`), Terraform does not attempt to undo these changes to align the resource state with the original definition in the code. 
Essentially, it allows configuration adjustments made outside of Terraform, such as an upscale to optimize performance or meet growing demands, to remain intact without being overwritten by future Terraform executions, ensuring operational flexibility while maintaining infrastructure management as code.\n\nIn the variable.tf file, we create variables with default values:\n\n```terraform\nvariable \"name\" {\ndescription = \"The name of the cluster.\"\ntype = string\n}\n\nvariable \"cluster_type\" {\ndescription = < Note: Remember to export the environment variables with the public and private keys.\n\n```\nexport MONGODB_ATLAS_PUBLIC_KEY=\"public\"\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\nexport MONGODB_ATLAS_PRIVATE_KEY=\"private\"\n```\n\nNow, we run `terraform init`.\n\n```\n(base) samuelmolling@Samuels-MacBook-Pro cluster % terraform init\n\nInitializing the backend...\n\nInitializing provider plugins...\n- Finding mongodb/mongodbatlas versions matching \"1.14.0\"...\n- Installing mongodb/mongodbatlas v1.14.0...\n- Installed mongodb/mongodbatlas v1.14.0 (signed by a HashiCorp partner, key ID 2A32ED1F3AD25ABF)\n\nPartner and community providers are signed by their developers.\nIf you'd like to know more about provider signing, you can read about it here:\nhttps://www.terraform.io/docs/cli/plugins/signing.html\n\nTerraform has created a lock file .terraform.lock.hcl to record the provider selections it made above. Include this file in your version control repository so that Terraform can guarantee to make the same selections by default when you run `terraform init` in the future.\n\nTerraform has been successfully initialized!\n\nYou may now begin working with Terraform. Try running `terraform plan` to see any changes that are required for your infrastructure. All Terraform commands should now work.\n\nIf you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.\n\n```\n\nNow that init has worked, let's run `terraform plan` and evaluate what will happen:\n\n```\n(base) samuelmolling@Samuels-MacBook-Pro cluster % terraform plan\ndata.mongodbatlas_project.default: Reading...\ndata.mongodbatlas_project.default: Read complete after 2s [id=65bfd71a08b61c36ca4d8eaa]\n\nTerraform used the selected providers to generate the following execution plan. 
Resource actions are indicated with the following symbols:\n\u00a0\u00a0+ create\n\nTerraform will perform the following actions:\n\n\u00a0\u00a0# mongodbatlas_advanced_cluster.default will be created\n\u00a0\u00a0+ resource \"mongodbatlas_advanced_cluster\" \"default\" {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ advanced_configuration \u00a0 \u00a0 \u00a0 \u00a0 = [\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ default_read_concern \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"local\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ default_write_concern\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"majority\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ fail_index_key_too_long\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = false\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ javascript_enabled \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = true\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ minimum_enabled_tls_protocol \u00a0 \u00a0 \u00a0 \u00a0 = \"TLS1_2\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ no_table_scan\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = false\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ oplog_size_mb\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ sample_refresh_interval_bi_connector = 300\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ sample_size_bi_connector \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = 100\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ transaction_lifetime_limit_seconds \u00a0 = 60\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0]\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ backup_enabled \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = true\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ cluster_id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ cluster_type \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"REPLICASET\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ connection_strings \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ create_date\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ disk_size_gb \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = 10\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ encryption_at_rest_provider\u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ mongo_db_major_version \u00a0 \u00a0 \u00a0 \u00a0 = \"7.0\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ mongo_db_version \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ name \u00a0 
\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"cluster-demo\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ paused \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ pit_enabled\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = true\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ project_id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"65bfd71a08b61c36ca4d8eaa\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ root_cert_type \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ state_name \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ termination_protection_enabled = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ version_release_system \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ bi_connector_config {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ enabled \u00a0 \u00a0 \u00a0 \u00a0 = false\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ read_preference = \"secondary\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ replication_specs {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ container_id = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ num_shards \u00a0 = 1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ zone_name\u00a0 \u00a0 = \"ZoneName managed by Terraform\"\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ region_configs {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ priority\u00a0 \u00a0 \u00a0 = 7\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ provider_name = \"AWS\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ region_name \u00a0 = \"US_EAST_1\"\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ analytics_auto_scaling {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_enabled\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_max_instance_size\u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_min_instance_size\u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_scale_down_enabled = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ disk_gb_enabled\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ analytics_specs {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ 
disk_iops \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ ebs_volume_type = \"STANDARD\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ instance_size \u00a0 = \"M10\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ node_count\u00a0 \u00a0 \u00a0 = 0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ auto_scaling {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_enabled\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = true\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_max_instance_size\u00a0 = \"M30\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_min_instance_size\u00a0 = \"M10\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_scale_down_enabled = true\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ disk_gb_enabled\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = true\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ electable_specs {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ disk_iops \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ ebs_volume_type = \"STANDARD\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ instance_size \u00a0 = \"M10\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ node_count\u00a0 \u00a0 \u00a0 = 3\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ read_only_specs {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ disk_iops \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ ebs_volume_type = \"STANDARD\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ instance_size \u00a0 = \"M10\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ node_count\u00a0 \u00a0 \u00a0 = 0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ tags {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ key \u00a0 = \"environment\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ value = 
\"dev\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ tags {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ key \u00a0 = \"name\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ value = \"teste-cluster\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\n\u00a0\u00a0# mongodbatlas_cloud_backup_schedule.default will be created\n\u00a0\u00a0+ resource \"mongodbatlas_cloud_backup_schedule\" \"default\" {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ auto_export_enabled\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ cluster_id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ cluster_name \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"cluster-demo\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id_policy\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ next_snapshot\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ project_id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"65bfd71a08b61c36ca4d8eaa\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ reference_hour_of_day\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = 3\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ reference_minute_of_hour \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = 30\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ restore_window_days\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = 3\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ update_snapshots \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = false\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ use_org_and_group_names_in_export_prefix = (known after apply)\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ policy_item_daily {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_interval = 1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_type \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_unit \u00a0 \u00a0 = \"days\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_value\u00a0 \u00a0 = 7\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ policy_item_hourly {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_interval = 12\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_type \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ 
retention_unit \u00a0 \u00a0 = \"days\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_value\u00a0 \u00a0 = 3\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ policy_item_monthly {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_interval = 1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_type \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_unit \u00a0 \u00a0 = \"months\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_value\u00a0 \u00a0 = 12\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ policy_item_weekly {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_interval = 1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_type \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_unit \u00a0 \u00a0 = \"weeks\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_value\u00a0 \u00a0 = 4\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0}\n\nPlan: 2 to add, 0 to change, 0 to destroy.\n\n\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n\nNote: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run `terraform apply` now.\n\n```\n\nShow! It was exactly the output we expected to see, the creation of a cluster resource with the backup policies. 
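\n\nBy the way, the note at the end of the plan output just means the plan was not saved to a file. If you want `terraform apply` to execute exactly the plan you reviewed, you can optionally save it with `-out` and then apply the saved plan; here's a minimal sketch (the `tfplan` file name is just an example):\n\n```\nterraform plan -out=tfplan\nterraform apply tfplan\n```\n\nFor this tutorial, running `terraform apply` directly and reviewing the plan again interactively works just as well.\n\n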
Let's apply this!\n\nWhen running the `terraform apply` command, you will be prompted for approval with `yes` or `no`. Type `yes`.\n\n```\n(base) samuelmolling@Samuels-MacBook-Pro cluster % terraform apply\u00a0\n\ndata.mongodbatlas_project.default: Reading...\n\ndata.mongodbatlas_project.default: Read complete after 2s [id=65bfd71a08b61c36ca4d8eaa]\n\nTerraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:\n\u00a0\u00a0+ create\n \nTerraform will perform the following actions:\n\n\u00a0\u00a0# mongodbatlas_advanced_cluster.default will be created\n\u00a0\u00a0+ resource \"mongodbatlas_advanced_cluster\" \"default\" {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ advanced_configuration \u00a0 \u00a0 \u00a0 \u00a0 = [\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ default_read_concern \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"local\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ default_write_concern\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"majority\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ fail_index_key_too_long\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = false\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ javascript_enabled \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = true\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ minimum_enabled_tls_protocol \u00a0 \u00a0 \u00a0 \u00a0 = \"TLS1_2\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ no_table_scan\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = false\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ oplog_size_mb\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ sample_refresh_interval_bi_connector = 300\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ sample_size_bi_connector \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = 100\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ transaction_lifetime_limit_seconds \u00a0 = 60\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0},\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0]\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ backup_enabled \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = true\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ cluster_id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ cluster_type \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"REPLICASET\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ connection_strings \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ create_date\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ disk_size_gb \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = 10\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ encryption_at_rest_provider\u00a0 \u00a0 = (known after 
apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ mongo_db_major_version \u00a0 \u00a0 \u00a0 \u00a0 = \"7.0\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ mongo_db_version \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ name \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"cluster-demo\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ paused \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ pit_enabled\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = true\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ project_id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"65bfd71a08b61c36ca4d8eaa\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ root_cert_type \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ state_name \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ termination_protection_enabled = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ version_release_system \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ bi_connector_config {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ enabled \u00a0 \u00a0 \u00a0 \u00a0 = false\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ read_preference = \"secondary\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ replication_specs {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ container_id = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ num_shards \u00a0 = 1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ zone_name\u00a0 \u00a0 = \"ZoneName managed by Terraform\"\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ region_configs {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ priority\u00a0 \u00a0 \u00a0 = 7\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ provider_name = \"AWS\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ region_name \u00a0 = \"US_EAST_1\"\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ analytics_auto_scaling {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_enabled\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_max_instance_size\u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_min_instance_size\u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_scale_down_enabled = (known after 
apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ disk_gb_enabled\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ analytics_specs {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ disk_iops \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ ebs_volume_type = \"STANDARD\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ instance_size \u00a0 = \"M10\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ node_count\u00a0 \u00a0 \u00a0 = 0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ auto_scaling {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_enabled\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = true\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_max_instance_size\u00a0 = \"M30\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_min_instance_size\u00a0 = \"M10\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ compute_scale_down_enabled = true\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ disk_gb_enabled\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = true\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ electable_specs {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ disk_iops \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ ebs_volume_type = \"STANDARD\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ instance_size \u00a0 = \"M10\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ node_count\u00a0 \u00a0 \u00a0 = 3\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ read_only_specs {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ disk_iops \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ ebs_volume_type = 
\"STANDARD\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ instance_size \u00a0 = \"M10\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ node_count\u00a0 \u00a0 \u00a0 = 0\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ tags {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ key \u00a0 = \"environment\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ value = \"dev\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ tags {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ key \u00a0 = \"name\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ value = \"teste-cluster\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0# mongodbatlas_cloud_backup_schedule.default will be created\n\u00a0\u00a0+ resource \"mongodbatlas_cloud_backup_schedule\" \"default\" {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ auto_export_enabled\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ cluster_id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ cluster_name \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"cluster-demo\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id_policy\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ next_snapshot\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ project_id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = \"65bfd71a08b61c36ca4d8eaa\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ reference_hour_of_day\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = 3\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ reference_minute_of_hour \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = 30\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ restore_window_days\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = 3\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ update_snapshots \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = false\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ use_org_and_group_names_in_export_prefix = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ policy_item_daily {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_interval = 1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_type \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after 
apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_unit \u00a0 \u00a0 = \"days\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_value\u00a0 \u00a0 = 7\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ policy_item_hourly {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_interval = 12\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_type \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_unit \u00a0 \u00a0 = \"days\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_value\u00a0 \u00a0 = 3\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ policy_item_monthly {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_interval = 1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_type \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_unit \u00a0 \u00a0 = \"months\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_value\u00a0 \u00a0 = 12\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ policy_item_weekly {\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_interval = 1\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ frequency_type \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ id \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 = (known after apply)\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_unit \u00a0 \u00a0 = \"weeks\"\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0+ retention_value\u00a0 \u00a0 = 4\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0}\n\u00a0\u00a0\u00a0\u00a0}\n\nPlan: 2 to add, 0 to change, 0 to destroy.\n\nDo you want to perform these actions?\n\u00a0\u00a0Terraform will perform the actions described above.\n\u00a0\u00a0Only 'yes' will be accepted to approve.\n\n\u00a0\u00a0Enter a value: yes\u00a0\n\nmongodbatlas_advanced_cluster.default: Creating...\nmongodbatlas_advanced_cluster.default: Still creating... [10s elapsed]\nmongodbatlas_advanced_cluster.default: Still creating... [8m40s elapsed]\nmongodbatlas_advanced_cluster.default: Creation complete after 8m46s [id=Y2x1c3Rlcl9pZA==:NjViZmRmYzczMTBiN2Y2ZDFhYmIxMmQ0-Y2x1c3Rlcl9uYW1l:Y2x1c3Rlci1kZW1v-cHJvamVjdF9pZA==:NjViZmQ3MWEwOGI2MWMzNmNhNGQ4ZWFh]\nmongodbatlas_cloud_backup_schedule.default: Creating...\nmongodbatlas_cloud_backup_schedule.default: Creation complete after 2s [id=Y2x1c3Rlcl9uYW1l:Y2x1c3Rlci1kZW1v-cHJvamVjdF9pZA==:NjViZmQ3MWEwOGI2MWMzNmNhNGQ4ZWFh]\n\nApply complete! Resources: 2 added, 0 changed, 0 destroyed.\n```\n\nThis process took eight minutes and 40 seconds to execute. 
I shortened the log output, but don't worry if this step takes time.\n\nNow, let\u2019s look in Atlas to see if the cluster was created successfully\u2026\n\n![Atlas Cluster overview][1]\n![Atlas cluster Backup information screen][2]\n\nWe were able to create our first replica set with a standard backup policy with PITR and scheduled snapshots.\n\nIn this tutorial, we saw how to create the first cluster in our project created in the last article. We created a module that also includes a backup policy. In an upcoming article, we will look at how to create an API key and user using Terraform and Atlas.\n\nTo learn more about MongoDB and various tools, I invite you to visit the [Developer Center to read the other articles.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltef08af8a99b7af22/65e0d4dbeef4e3792e1e6ddf/image1.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte24ff6c1fea2a907/65e0d4db31aca16b3e7efa80/image2.png", "format": "md", "metadata": {"tags": ["Atlas", "Terraform"], "pageDescription": "Learn to manage cluster and backup policies using terraform", "contentType": "Tutorial"}, "title": "MongoDB Atlas With Terraform - Cluster and Backup Policies", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/use-union-all-aggregation-pipeline-stage", "action": "created", "body": "# How to Use the Union All Aggregation Pipeline Stage in MongoDB 4.4\n\nWith the release of MongoDB 4.4 comes a new aggregation\npipeline\nstage called `$unionWith`. This stage lets you combine multiple\ncollections into a single result set!\n\nHere's how you'd use it:\n\n**Simplified syntax, with no additional processing on the specified\ncollection**\n\n``` \ndb.collection.aggregate(\n { $unionWith: \"\" }\n])\n```\n\n**Extended syntax, using optional pipeline field**\n\n``` \ndb.collection.aggregate([\n { $unionWith: { coll: \"\", pipeline: [ , etc. ] } }\n])\n```\n\n>\n>\n>\u26a0 If you use the pipeline field to process your collection before\n>combining, keep in mind that stages that write data, like `$out` and\n>`$merge`, can't be used!\n>\n>\n\nYour resulting documents will merge your current collection's (or\npipeline's) stream of documents with the documents from the\ncollection/pipeline you specify. Keep in mind that this can include\nduplicates!\n\n## This sounds kinda familiar..\n\nIf you've used the `UNION ALL` operation in SQL before, the `$unionWith`\nstage's functionality may sound familiar to you, and you wouldn't be\nwrong! Both combine the result sets from multiple queries and return the\nmerged rows, some of which may be duplicates. However, that's where the\nsimilarities end. Unlike MongoDB's `$unionWith` stage, you have to\nfollow [a few\nrules\nin order to run a valid `UNION ALL` operation in SQL:\n\n- Make sure your two queries have the *same number of columns*\n- Make sure the *order of columns* are the same\n- Make sure the *matching columns are compatible data types*.\n\nIt'd look something like this in SQL:\n\n``` \nSELECT column1, expression1, column2\nFROM table1\nUNION ALL\nSELECT column1, expression1, column2\nFROM table2\nWHERE conditions]\n```\n\nWith the `$unionWith` stage in MongoDB, you don't have to worry about\nthese stringent constraints.\n\n## So how is MongoDB's `$unionWith` stage different?\n\nThe most convenient difference between the `$unionWith` stage and other\nUNION operations is that there's no matching schema restriction. 
This\nflexible schema support means you can combine documents that may not\nhave the same type or number of fields. This is common in certain\nscenarios, where the data we need to use comes from different sources:\n\n- TimeSeries data that's stored by month/quarter/some other unit of\n time\n- IoT device data, per fleet or version\n- Archival and Recent data, stored in a Data Lake\n- Regional data\n\nWith MongoDB's `$unionWith` stage, combining these data sources is\npossible.\n\nReady to try the new `$unionWith` stage? Follow along by completing a\nfew setup steps first. Or, you can [skip to the code\nsamples. \ud83d\ude09\n\n## Prerequisites\n\nFirst, a general understanding of what the aggregation\nframework\nis and how to use it will be important for the rest of this tutorial. If\nyou are unfamiliar with the aggregation framework, check out this great\nIntroduction to the MongoDB Aggregation\nFramework,\nwritten by fellow dev advocate Ken Alger!\n\nNext, based on your situation, you may already have a few prerequisites\nsetup or need to start from scratch. Either way, choose your scenario to\nconfigure the things you need so that you can follow the rest of this\ntutorial!\n\nChoose your scenario:\n\n**I don't have an Atlas cluster set up yet**:\n\n1. You'll need an Atlas account to play around with MongoDB Atlas!\n Create\n one\n if you haven't already done so. Otherwise, log into your Atlas\n account.\n2. Setup a free Atlas\n cluster\n (no credit card needed!). Be sure to select **MongoDB 4.4** (may be\n Beta, which is OK) as your version in Additional Settings!\n\n >\n >\n >\ud83d\udca1 **If you don't see the prompt to create a cluster**: You may be\n >prompted to create a project *first* before you see the prompt to create\n >your first cluster. In this case, go ahead and create a project first\n >(leaving all the default settings). Then continue with the instructions\n >to deploy your first free cluster!\n >\n >\n\n3. Once your cluster is set up, add your IP\n address\n to your cluster's connection settings. This tells your cluster who's\n allowed to connect to it.\n4. Finally, create a database\n user\n for your cluster. Atlas requires anyone or anything accessing its\n clusters to authenticate as MongoDB database users for security\n purposes! Keep these credentials handy as you'll need them later on.\n5. Continue with the steps in Connecting to your cluster.\n\n**I have an Atlas cluster set up**:\n\nGreat! You can skip ahead to Connecting to your cluster.\n\n**Connecting to your cluster**\n\nTo connect to your cluster, we'll use the MongoDB for Visual Studio Code\nextension (VS Code for short \ud83d\ude0a). You can view your data directly,\ninteract with your collections, and much more with this helpful\nextension! Using this also consolidates our workspace into a single\nwindow, removing the need for us to jump back and forth between our code\nand MongoDB Atlas!\n\n>\n>\n>\ud83d\udca1 Though we'll be using the VS Code Extension and VS Code for the rest\n>of this tutorial, it's not a requirement to use the `$unionWith`\n>pipeline stage! You can also use the\n>CLI, language-specific\n>drivers, or\n>Compass if you prefer!\n>\n>\n\n1. Install the MongoDB for VS Code extension (or install VS Code first, if you don't already have it \ud83d\ude09).\n\n2. To connect to your cluster, you'll need a connection string. You can get this connection string from your cluster connection settings. Go to your cluster and select the \"Connect\" option:\n\n \n\n3. 
Select the \"Connect using MongoDB Compass\" option. This will give us a connection string in the DNS Seedlist Connection format that we can use with the MongoDB extension.\n\n \n\n >\n >\n >\ud83d\udca1 The MongoDB for VS Code extension also supports the standard connection string format. Using the DNS seedlist connection format is purely preference.\n >\n >\n\n4. Skip to the second step and copy the connection string (don't worry about the other settings, you won't need them):\n\n \n\n5. Switch back to VS Code. Press `Ctrl` + `Shift` + `P` (on Windows) or `Shift` + `Command` + `P` (on Mac) to bring up the command palette. This shows a list of all VS Code commands.\n\n \n\n6. Start typing \"MongoDB\" until you see the MongoDB extension's list of available commands. Select the \"MongoDB: Connect with Connection String\" option.\n\n \n\n7. Paste in your copied connection string. \ud83d\udca1 Don't forget! You have to replace the placeholder password with your actual password!\n\n \n\n8. Press enter to connect! You'll know the connection was successful if you see a confirmation message on the bottom right. You'll also see your cluster listed when you expand the MongoDB extension pane.\n\nWith the MongoDB extension installed and your cluster connected, you can now use MongoDB Playgrounds to test out the `$unionWith` examples! MongoDB Playgrounds give us a nice sandbox to easily write and test Mongo queries. I love using it when prototying or trying something new because it has query auto-completion and syntax highlighting, something that you don't get in most terminals.\n\nLet's finally dive into some examples!\n\n## Examples\n\nTo follow along, you can use these MongoDB Playground\nfiles I\nhave created to accompany this blog post or create your\nown!\n\n>\n>\n>\ud83d\udca1 If you create your own playground, remember to change the database\n>name and delete the default template's code first!\n>\n>\n\n### `$unionWith` using a pipeline\n\n>\n>\n>\ud83d\udcc3 Use\n>this\n>playground if you'd like follow along with pre-written code for this\n>example.\n>\n>\n\nRight at the top, specify the database you'll be using. In this example,\nI'm using a database also called `union-walkthrough`:\n\n``` \nuse('union-walkthrough');\n```\n\n>\n>\n>\ud83d\udca1 I haven't actually created a database called `union-walkthrough` in\n>Atlas yet, but that's no problem! When the playground runs, it will see\n>that it does not yet exist and create a database of the specified name!\n>\n>\n\nNext, we need data! Particularly about some planets. And particularly\nabout planets in a certain movie series. \ud83d\ude09\n\nUsing the awesome SWAPI API, I've collected such\ninformation on a few planets. Let's add them into two collections,\nseparated by popularity.\n\nAny planets that appear in at least 2 or more films are considered\npopular. 
Otherwise, we'll add them into the `lonely_planets` collection:\n\n``` \n// Insert a few documents into the lonely_planets collection.\ndb.lonely_planets.insertMany(\n {\n \"name\": \"Endor\",\n \"rotation_period\": \"18\",\n \"orbital_period\": \"402\",\n \"diameter\": \"4900\",\n \"climate\": \"temperate\",\n \"gravity\": \"0.85 standard\",\n \"terrain\": \"forests, mountains, lakes\",\n \"surface_water\": \"8\",\n \"population\": \"30000000\",\n \"residents\": [\n \"http://swapi.dev/api/people/30/\"\n ],\n \"films\": [\n \"http://swapi.dev/api/films/3/\"\n ],\n \"created\": \"2014-12-10T11:50:29.349000Z\",\n \"edited\": \"2014-12-20T20:58:18.429000Z\",\n \"url\": \"http://swapi.dev/api/planets/7/\"\n },\n {\n \"name\": \"Kamino\",\n \"rotation_period\": \"27\",\n \"orbital_period\": \"463\",\n \"diameter\": \"19720\",\n \"climate\": \"temperate\",\n \"gravity\": \"1 standard\",\n \"terrain\": \"ocean\",\n \"surface_water\": \"100\",\n \"population\": \"1000000000\",\n \"residents\": [\n \"http://swapi.dev/api/people/22/\",\n \"http://swapi.dev/api/people/72/\",\n \"http://swapi.dev/api/people/73/\"\n ],\n \"films\": [\n \"http://swapi.dev/api/films/5/\"\n ],\n \"created\": \"2014-12-10T12:45:06.577000Z\",\n \"edited\": \"2014-12-20T20:58:18.434000Z\",\n \"url\": \"http://swapi.dev/api/planets/10/\"\n },\n {\n \"name\": \"Yavin IV\",\n \"rotation_period\": \"24\",\n \"orbital_period\": \"4818\",\n \"diameter\": \"10200\",\n \"climate\": \"temperate, tropical\",\n \"gravity\": \"1 standard\",\n \"terrain\": \"jungle, rainforests\",\n \"surface_water\": \"8\",\n \"population\": \"1000\",\n \"residents\": [],\n \"films\": [\n \"http://swapi.dev/api/films/1/\"\n ],\n \"created\": \"2014-12-10T11:37:19.144000Z\",\n \"edited\": \"2014-12-20T20:58:18.421000Z\",\n \"url\": \"http://swapi.dev/api/planets/3/\"\n },\n {\n \"name\": \"Hoth\",\n \"rotation_period\": \"23\",\n \"orbital_period\": \"549\",\n \"diameter\": \"7200\",\n \"climate\": \"frozen\",\n \"gravity\": \"1.1 standard\",\n \"terrain\": \"tundra, ice caves, mountain ranges\",\n \"surface_water\": \"100\",\n \"population\": \"unknown\",\n \"residents\": [],\n \"films\": [\n \"http://swapi.dev/api/films/2/\"\n ],\n \"created\": \"2014-12-10T11:39:13.934000Z\",\n \"edited\": \"2014-12-20T20:58:18.423000Z\",\n \"url\": \"http://swapi.dev/api/planets/4/\"\n },\n {\n \"name\": \"Bespin\",\n \"rotation_period\": \"12\",\n \"orbital_period\": \"5110\",\n \"diameter\": \"118000\",\n \"climate\": \"temperate\",\n \"gravity\": \"1.5 (surface), 1 standard (Cloud City)\",\n \"terrain\": \"gas giant\",\n \"surface_water\": \"0\",\n \"population\": \"6000000\",\n \"residents\": [\n \"http://swapi.dev/api/people/26/\"\n ],\n \"films\": [\n \"http://swapi.dev/api/films/2/\"\n ],\n \"created\": \"2014-12-10T11:43:55.240000Z\",\n \"edited\": \"2014-12-20T20:58:18.427000Z\",\n \"url\": \"http://swapi.dev/api/planets/6/\"\n }\n]);\n\n// Insert a few documents into the popular_planets collection.\ndb.popular_planets.insertMany([\n {\n \"name\": \"Tatooine\",\n \"rotation_period\": \"23\",\n \"orbital_period\": \"304\",\n \"diameter\": \"10465\",\n \"climate\": \"arid\",\n \"gravity\": \"1 standard\",\n \"terrain\": \"desert\",\n \"surface_water\": \"1\",\n \"population\": \"200000\",\n \"residents\": [\n \"http://swapi.dev/api/people/1/\",\n \"http://swapi.dev/api/people/2/\",\n \"http://swapi.dev/api/people/4/\",\n \"http://swapi.dev/api/people/6/\",\n \"http://swapi.dev/api/people/7/\",\n \"http://swapi.dev/api/people/8/\",\n 
\"http://swapi.dev/api/people/9/\",\n \"http://swapi.dev/api/people/11/\",\n \"http://swapi.dev/api/people/43/\",\n \"http://swapi.dev/api/people/62/\"\n ],\n \"films\": [\n \"http://swapi.dev/api/films/1/\",\n \"http://swapi.dev/api/films/3/\",\n \"http://swapi.dev/api/films/4/\",\n \"http://swapi.dev/api/films/5/\",\n \"http://swapi.dev/api/films/6/\"\n ],\n \"created\": \"2014-12-09T13:50:49.641000Z\",\n \"edited\": \"2014-12-20T20:58:18.411000Z\",\n \"url\": \"http://swapi.dev/api/planets/1/\"\n },\n {\n \"name\": \"Alderaan\",\n \"rotation_period\": \"24\",\n \"orbital_period\": \"364\",\n \"diameter\": \"12500\",\n \"climate\": \"temperate\",\n \"gravity\": \"1 standard\",\n \"terrain\": \"grasslands, mountains\",\n \"surface_water\": \"40\",\n \"population\": \"2000000000\",\n \"residents\": [\n \"http://swapi.dev/api/people/5/\",\n \"http://swapi.dev/api/people/68/\",\n \"http://swapi.dev/api/people/81/\"\n ],\n \"films\": [\n \"http://swapi.dev/api/films/1/\",\n \"http://swapi.dev/api/films/6/\"\n ],\n \"created\": \"2014-12-10T11:35:48.479000Z\",\n \"edited\": \"2014-12-20T20:58:18.420000Z\",\n \"url\": \"http://swapi.dev/api/planets/2/\"\n },\n {\n \"name\": \"Naboo\",\n \"rotation_period\": \"26\",\n \"orbital_period\": \"312\",\n \"diameter\": \"12120\",\n \"climate\": \"temperate\",\n \"gravity\": \"1 standard\",\n \"terrain\": \"grassy hills, swamps, forests, mountains\",\n \"surface_water\": \"12\",\n \"population\": \"4500000000\",\n \"residents\": [\n \"http://swapi.dev/api/people/3/\",\n \"http://swapi.dev/api/people/21/\",\n \"http://swapi.dev/api/people/35/\",\n \"http://swapi.dev/api/people/36/\",\n \"http://swapi.dev/api/people/37/\",\n \"http://swapi.dev/api/people/38/\",\n \"http://swapi.dev/api/people/39/\",\n \"http://swapi.dev/api/people/42/\",\n \"http://swapi.dev/api/people/60/\",\n \"http://swapi.dev/api/people/61/\",\n \"http://swapi.dev/api/people/66/\"\n ],\n \"films\": [\n \"http://swapi.dev/api/films/3/\",\n \"http://swapi.dev/api/films/4/\",\n \"http://swapi.dev/api/films/5/\",\n \"http://swapi.dev/api/films/6/\"\n ],\n \"created\": \"2014-12-10T11:52:31.066000Z\",\n \"edited\": \"2014-12-20T20:58:18.430000Z\",\n \"url\": \"http://swapi.dev/api/planets/8/\"\n },\n {\n \"name\": \"Coruscant\",\n \"rotation_period\": \"24\",\n \"orbital_period\": \"368\",\n \"diameter\": \"12240\",\n \"climate\": \"temperate\",\n \"gravity\": \"1 standard\",\n \"terrain\": \"cityscape, mountains\",\n \"surface_water\": \"unknown\",\n \"population\": \"1000000000000\",\n \"residents\": [\n \"http://swapi.dev/api/people/34/\",\n \"http://swapi.dev/api/people/55/\",\n \"http://swapi.dev/api/people/74/\"\n ],\n \"films\": [\n \"http://swapi.dev/api/films/3/\",\n \"http://swapi.dev/api/films/4/\",\n \"http://swapi.dev/api/films/5/\",\n \"http://swapi.dev/api/films/6/\"\n ],\n \"created\": \"2014-12-10T11:54:13.921000Z\",\n \"edited\": \"2014-12-20T20:58:18.432000Z\",\n \"url\": \"http://swapi.dev/api/planets/9/\"\n },\n {\n \"name\": \"Dagobah\",\n \"rotation_period\": \"23\",\n \"orbital_period\": \"341\",\n \"diameter\": \"8900\",\n \"climate\": \"murky\",\n \"gravity\": \"N/A\",\n \"terrain\": \"swamp, jungles\",\n \"surface_water\": \"8\",\n \"population\": \"unknown\",\n \"residents\": [],\n \"films\": [\n \"http://swapi.dev/api/films/2/\",\n \"http://swapi.dev/api/films/3/\",\n \"http://swapi.dev/api/films/6/\"\n ],\n \"created\": \"2014-12-10T11:42:22.590000Z\",\n \"edited\": \"2014-12-20T20:58:18.425000Z\",\n \"url\": \"http://swapi.dev/api/planets/5/\"\n 
}\n]);\n```\n\nThis separation is indicative of how our data may be grouped. Despite\nthe separation, we can use the `$unionWith` stage to combine these two\ncollections if we ever needed to analyze them as a single result set!\n\nLet's say that we needed to find out the total population of planets,\ngrouped by climate. Additionally, we'd like to leave out any planets\nthat don't have population data from our calculation. We can do this\nusing an aggregation:\n\n``` \n// Run an aggregation to view total planet populations, grouped by climate type.\nuse('union-walkthrough');\n\ndb.lonely_planets.aggregate([\n {\n $match: {\n population: { $ne: 'unknown' }\n }\n },\n { \n $unionWith: { \n coll: 'popular_planets',\n pipeline: [{\n $match: {\n population: { $ne: 'unknown' }\n }\n }] \n } \n },\n {\n $group: {\n _id: '$climate', totalPopulation: { $sum: { $toLong: '$population' } }\n }\n }\n]);\n```\n\nIf you've followed along in your own MongoDB playground and have copied\nthe code so far, try running the aggregation!\n\nAnd if you're using the provided MongoDB playground I created, highlight\nlines 264 - 290 and then run the selected code.\n\n>\n>\n>\ud83d\udca1 You'll notice in the code snippet above that I've added another\n>`use('union-walkthrough');` method right above the aggregation code. I\n>do this to make the selection of relevant code within the playground\n>easier. It's also required so that the aggregation code can run against\n>the correct database. However, the same thing can be achieved by\n>selecting multiple lines, namely the original `use('union-walkthrough')`\n>line at the top and whatever additional example you'd like to run!\n>\n>\n\nYou should see the results like so:\n\n``` \n[\n {\n _id: 'arid',\n totalPopulation: 200000\n },\n {\n _id: 'temperate',\n totalPopulation: 1007536000000\n },\n {\n _id: 'temperate, tropical',\n totalPopulation: 1000\n }\n]\n```\n\nUnsurprisingly, planets with \"temperate\" climates seem to have more\ninhabitants. Something about that cool 75 F / 23.8 C, I guess \ud83c\udf1e\n\nLet's break down this aggregation:\n\nThe first object we pass into our aggregation is also our first stage,\nused here as our filter criteria. Specifically, we use the\n[$match\npipeline stage:\n\n``` \n{\n $match: {\n population: { $ne: 'unknown' }\n }\n},\n```\n\nIn this example, we filter out any documents that have `unknown` as\ntheir `population` value using the\n$ne (not\nequal) operator.\n\nThe next object (and next stage) in our aggregation is our `$unionWith`\nstage. Here, we specifiy what collection we'd like to perform a union\nwith (including any duplicates). We also make use of the pipeline field\nto similarly filter out any documents in our `popular_planets`\ncollection that have an unknown population:\n\n``` \n{ \n $unionWith: { \n coll: 'popular_planets',\n pipeline: \n {\n $match: {\n population: { $ne: 'unknown' }\n }\n }\n ] \n } \n},\n```\n\nFinally, we have our last stage in our aggregation. 
After combining our\n`lonely_planets` and `popular_planets` collections (both filtering out\ndocuments with no population data), we group the resulting documents\nusing a\n$group\nstage:\n\n``` \n{\n  $group: {\n    _id: '$climate',\n    totalPopulation: { $sum: { $toLong: '$population' } }\n  }\n}\n```\n\nSince we want to know the total population per climate type, we first\nspecify `_id` to be the `$climate` field from our combined result set.\nThen, we calculate a new field called `totalPopulation` by using a\n$sum\noperator to add each matching document's population values together.\nYou'll also notice that based on the data we have, we needed to use a\n$toLong\noperator to first convert our `$population` field into a calculable\nvalue!\n\n### `$unionWith` without a pipeline\n\n>\n>\n>\ud83d\udcc3 Use\n>this\n>playground if you'd like to follow along with pre-written code for this\n>example.\n>\n>\n\nNow, if you *don't* need to run some additional processing on the\ncollection you're combining with, you don't have to! The `pipeline`\nfield is optional and is only there if you need it.\n\nSo, if you just need to work with the planet data as a unified set, you\ncan do that too:\n\n``` \n// Run an aggregation with no pipeline\nuse('union-walkthrough');\n\ndb.lonely_planets.aggregate([\n  { $unionWith: 'popular_planets' }\n]);\n```\n\nCopy this aggregation into your own playground and run it!\nAlternatively, select and run lines 293 - 297 if using the provided\nMongoDB playground!\n\nTada! Now you can use this unified dataset for analysis or further\nprocessing.\n\n### Different Schemas\n\nCombining the same schemas is great, but we can do that in regular SQL\ntoo! The real convenience of the `$unionWith` pipeline stage is that it\ncan also combine collections with different schemas. Let's take a look!\n\n### `$unionWith` using collections with different schemas\n\n>\n>\n>\ud83d\udcc3 Use\n>this\n>playground if you'd like to follow along with pre-written code for this\n>example.\n>\n>\n\nAs before, we'll specify the database we want to use:\n\n``` \nuse('union-walkthrough');\n```\n\nThis time, we'll use some acquired information about certain starships\nand vehicles that are used in this same movie series.\n
Let's add them to\ntheir respective collections:\n\n``` \n// Insert a few documents into the starships collection\ndb.starships.insertMany(\n {\n \"name\": \"Death Star\",\n \"model\": \"DS-1 Orbital Battle Station\",\n \"manufacturer\": \"Imperial Department of Military Research, Sienar Fleet Systems\",\n \"cost_in_credits\": \"1000000000000\",\n \"length\": \"120000\",\n \"max_atmosphering_speed\": \"n/a\",\n \"crew\": 342953,\n \"passengers\": 843342,\n \"cargo_capacity\": \"1000000000000\",\n \"consumables\": \"3 years\",\n \"hyperdrive_rating\": 4.0,\n \"MGLT\": 10,\n \"starship_class\": \"Deep Space Mobile Battlestation\",\n \"pilots\": []\n },\n {\n \"name\": \"Millennium Falcon\",\n \"model\": \"YT-1300 light freighter\",\n \"manufacturer\": \"Corellian Engineering Corporation\",\n \"cost_in_credits\": \"100000\",\n \"length\": \"34.37\",\n \"max_atmosphering_speed\": \"1050\",\n \"crew\": 4,\n \"passengers\": 6,\n \"cargo_capacity\": 100000,\n \"consumables\": \"2 months\",\n \"hyperdrive_rating\": 0.5,\n \"MGLT\": 75,\n \"starship_class\": \"Light freighter\",\n \"pilots\": [\n \"http://swapi.dev/api/people/13/\",\n \"http://swapi.dev/api/people/14/\",\n \"http://swapi.dev/api/people/25/\",\n \"http://swapi.dev/api/people/31/\"\n ]\n },\n {\n \"name\": \"Y-wing\",\n \"model\": \"BTL Y-wing\",\n \"manufacturer\": \"Koensayr Manufacturing\",\n \"cost_in_credits\": \"134999\",\n \"length\": \"14\",\n \"max_atmosphering_speed\": \"1000km\",\n \"crew\": 2,\n \"passengers\": 0,\n \"cargo_capacity\": 110,\n \"consumables\": \"1 week\",\n \"hyperdrive_rating\": 1.0,\n \"MGLT\": 80,\n \"starship_class\": \"assault starfighter\",\n \"pilots\": []\n },\n {\n \"name\": \"X-wing\",\n \"model\": \"T-65 X-wing\",\n \"manufacturer\": \"Incom Corporation\",\n \"cost_in_credits\": \"149999\",\n \"length\": \"12.5\",\n \"max_atmosphering_speed\": \"1050\",\n \"crew\": 1,\n \"passengers\": 0,\n \"cargo_capacity\": 110,\n \"consumables\": \"1 week\",\n \"hyperdrive_rating\": 1.0,\n \"MGLT\": 100,\n \"starship_class\": \"Starfighter\",\n \"pilots\": [\n \"http://swapi.dev/api/people/1/\",\n \"http://swapi.dev/api/people/9/\",\n \"http://swapi.dev/api/people/18/\",\n \"http://swapi.dev/api/people/19/\"\n ]\n },\n]);\n\n// Insert a few documents into the vehicles collection\ndb.vehicles.insertMany([\n {\n \"name\": \"Sand Crawler\",\n \"model\": \"Digger Crawler\",\n \"manufacturer\": \"Corellia Mining Corporation\",\n \"cost_in_credits\": \"150000\",\n \"length\": \"36.8 \",\n \"max_atmosphering_speed\": 30,\n \"crew\": 46,\n \"passengers\": 30,\n \"cargo_capacity\": 50000,\n \"consumables\": \"2 months\",\n \"vehicle_class\": \"wheeled\",\n \"pilots\": []\n },\n {\n \"name\": \"X-34 landspeeder\",\n \"model\": \"X-34 landspeeder\",\n \"manufacturer\": \"SoroSuub Corporation\",\n \"cost_in_credits\": \"10550\",\n \"length\": \"3.4 \",\n \"max_atmosphering_speed\": 250,\n \"crew\": 1,\n \"passengers\": 1,\n \"cargo_capacity\": 5,\n \"consumables\": \"unknown\",\n \"vehicle_class\": \"repulsorcraft\",\n \"pilots\": [],\n },\n {\n \"name\": \"AT-AT\",\n \"model\": \"All Terrain Armored Transport\",\n \"manufacturer\": \"Kuat Drive Yards, Imperial Department of Military Research\",\n \"cost_in_credits\": \"unknown\",\n \"length\": \"20\",\n \"max_atmosphering_speed\": 60,\n \"crew\": 5,\n \"passengers\": 40,\n \"cargo_capacity\": 1000,\n \"consumables\": \"unknown\",\n \"vehicle_class\": \"assault walker\",\n \"pilots\": [],\n \"films\": [\n \"http://swapi.dev/api/films/2/\",\n 
\"http://swapi.dev/api/films/3/\"\n ],\n \"created\": \"2014-12-15T12:38:25.937000Z\",\n \"edited\": \"2014-12-20T21:30:21.677000Z\",\n \"url\": \"http://swapi.dev/api/vehicles/18/\"\n },\n {\n \"name\": \"AT-ST\",\n \"model\": \"All Terrain Scout Transport\",\n \"manufacturer\": \"Kuat Drive Yards, Imperial Department of Military Research\",\n \"cost_in_credits\": \"unknown\",\n \"length\": \"2\",\n \"max_atmosphering_speed\": 90,\n \"crew\": 2,\n \"passengers\": 0,\n \"cargo_capacity\": 200,\n \"consumables\": \"none\",\n \"vehicle_class\": \"walker\",\n \"pilots\": [\n \"http://swapi.dev/api/people/13/\"\n ]\n },\n {\n \"name\": \"Storm IV Twin-Pod cloud car\",\n \"model\": \"Storm IV Twin-Pod\",\n \"manufacturer\": \"Bespin Motors\",\n \"cost_in_credits\": \"75000\",\n \"length\": \"7\",\n \"max_atmosphering_speed\": 1500,\n \"crew\": 2,\n \"passengers\": 0,\n \"cargo_capacity\": 10,\n \"consumables\": \"1 day\",\n \"vehicle_class\": \"repulsorcraft\",\n \"pilots\": [],\n }\n]);\n```\n\nYou may be thinking (as I first did), what's the difference between\nstarships and vehicles? You'll be pleased to know that starships are\ndefined as any \"single transport craft that has hyperdrive capability\".\nAny other single transport craft that **does not have** hyperdrive\ncapability is considered a vehicle. The more you know! \ud83d\ude2e\n\nIf you look at the two collections, you'll see that they have two key\ndifferences:\n\n- The `max_atmosphering_speed` field is present in both collections,\n but is a `string` in the `starships` collection and an `int` in the\n `vehicles` collection.\n- The `starships` collection has two fields (`hyperdrive_rating`,\n `MGLT`) that are not present in the `vehicles` collection, as it\n only relates to starships.\n\nBut you know what? That's not a problem for the `$unionWith` stage! You\ncan combine them just as before:\n\n``` \n// Run an aggregation with no pipeline and differing schemas\nuse('union-walkthrough');\n\ndb.starships.aggregate([\n { $unionWith: 'vehicles' }\n]);\n```\n\nTry running the aggregation in your playground! Or if you're following\nalong in the MongoDB playground I've provided, select and run lines\n185 - 189! You should get the following combined result set as your\noutput:\n\n``` \n[\n {\n _id: 5f306ddca3ee8339643f137e,\n name: 'Death Star',\n model: 'DS-1 Orbital Battle Station',\n manufacturer: 'Imperial Department of Military Research, Sienar Fleet Systems',\n cost_in_credits: '1000000000000',\n length: '120000',\n max_atmosphering_speed: 'n/a',\n crew: 342953,\n passengers: 843342,\n cargo_capacity: '1000000000000',\n consumables: '3 years',\n hyperdrive_rating: 4,\n MGLT: 10,\n starship_class: 'Deep Space Mobile Battlestation',\n pilots: []\n },\n {\n _id: 5f306ddca3ee8339643f137f,\n name: 'Millennium Falcon',\n model: 'YT-1300 light freighter',\n manufacturer: 'Corellian Engineering Corporation',\n cost_in_credits: '100000',\n length: '34.37',\n max_atmosphering_speed: '1050',\n crew: 4,\n passengers: 6,\n cargo_capacity: 100000,\n consumables: '2 months',\n hyperdrive_rating: 0.5,\n MGLT: 75,\n starship_class: 'Light freighter',\n pilots: [\n 'http://swapi.dev/api/people/13/',\n 'http://swapi.dev/api/people/14/',\n 'http://swapi.dev/api/people/25/',\n 'http://swapi.dev/api/people/31/'\n ]\n },\n // + 7 other results, omitted for brevity\n]\n```\n\nCan you imagine doing that in SQL? Hint: You can't! 
That kind of schema\nrestriction is something you don't need to worry about with MongoDB,\nthough!\n\n### $unionWith using collections with different schemas and a pipeline\n\n>\n>\n>\ud83d\udcc3 Use\n>[this\n>playground if you'd like follow along with pre-written code for this\n>example.\n>\n>\n\nSo we can combine different schemas no problem. What if we need to do a\nlittle extra work on our collection before combining it? That's where\nthe `pipeline` field comes in!\n\nLet's say that there's some classified information in our data about the\nvehicles. Namely, any vehicles manufactured by Kuat Drive Yards (AKA a\ndivision of the Imperial Department of Military Research).\n\nBy direct orders, you are instructed not to give out this information\nunder any circumstances. In fact, you need to intercept any requests for\nvehicle information and remove these classified vehicles from the list!\n\nWe can do that like so:\n\n``` \nuse('union-walkthrough');\n\ndb.starships.aggregate(\n { \n $unionWith: {\n coll: 'vehicles',\n pipeline: [\n { \n $redact: {\n $cond: {\n if: { $eq: [ \"$manufacturer\", \"Kuat Drive Yards, Imperial Department of Military Research\"] },\n then: \"$$PRUNE\",\n else: \"$$DESCEND\"\n }\n }\n }\n ]\n }\n }\n]);\n```\n\nIn this example, we're combining the `starships` and `vehicles`\ncollections as before, using the `$unionWith` pipeline stage. We also\nprocess the `vehicle` data a bit more, using the `$unionWith`'s optional\n`pipeline` field:\n\n``` \n// Pipeline used with the vehicle collection\n{ \n $redact: {\n $cond: {\n if: { $eq: [ \"$manufacturer\", \"Kuat Drive Yards, Imperial Department of Military Research\"] },\n then: \"$$PRUNE\",\n else: \"$$DESCEND\"\n }\n }\n}\n```\n\nInside the `$unionWith`'s pipeline, we use a\n[$redact\nstage to restrict the contents of our documents based on a condition.\nThe condition is specified using the\n$cond\noperator, which acts like an `if/else` statement.\n\nIn our case, we are evaluating whether or not the `manufacturer` field\nholds a value of \"Kuat Drive Yards, Imperial Department of Military\nResearch\". If it does (uh oh, that's classified!), we use a system\nvariable called\n$$PRUNE,\nwhich lets us exclude all fields at the current document/embedded\ndocument level. If it doesn't, we use another system variable called\n$$DESCEND,\nwhich will return all fields at the current document level, except for\nany embedded documents.\n\nThis works perfectly for our use case. Try running the aggregation\n(lines 192 - 211, if using the provided MongoDB Playground). 
You should\nsee a combined result set, minus any Imperial manufactured vehicles:\n\n``` \n[\n {\n _id: 5f306ddca3ee8339643f137e,\n name: 'Death Star',\n model: 'DS-1 Orbital Battle Station',\n manufacturer: 'Imperial Department of Military Research, Sienar Fleet Systems',\n cost_in_credits: '1000000000000',\n length: '120000',\n max_atmosphering_speed: 'n/a',\n crew: 342953,\n passengers: 843342,\n cargo_capacity: '1000000000000',\n consumables: '3 years',\n hyperdrive_rating: 4,\n MGLT: 10,\n starship_class: 'Deep Space Mobile Battlestation',\n pilots: []\n },\n {\n _id: 5f306ddda3ee8339643f1383,\n name: 'X-34 landspeeder',\n model: 'X-34 landspeeder',\n manufacturer: 'SoroSuub Corporation',\n cost_in_credits: '10550',\n length: '3.4 ',\n max_atmosphering_speed: 250,\n crew: 1,\n passengers: 1,\n cargo_capacity: 5,\n consumables: 'unknown',\n vehicle_class: 'repulsorcraft',\n pilots: []\n },\n // + 5 more non-Imperial manufactured results, omitted for brevity\n]\n```\n\nWe did our part to restrict classified information! \ud83c\udfb6 *Hums Imperial\nMarch* \ud83c\udfb6\n\n## Restrictions for UNION ALL\n\nNow that we know how the `$unionWith` stage works, it's important to\ndiscuss its limits and restrictions.\n\n### Duplicates\n\nWe've mentioned it already, but it's important to reiterate: using the\n`$unionWith` stage will give you a combined result set which may include\nduplicates! This is equivalent to how the `UNION ALL` operator works in\n`SQL` as well. As a workaround, using a `$group` stage at the end of\nyour pipeline to remove duplicates is advised, but only when possible\nand if the resulting data does not get inaccurately skewed.\n\nThere are plans to add similar functionality to `UNION` (which combines\nresult sets but *removes* duplicates), but that may be in a future\nrelease.\n\n### Sharded Collections\n\nIf you use a `$unionWith` stage as part of a\n$lookup\npipeline, the collection you specify for the `$unionWith` cannot be\nsharded. As an example, take a look at this aggregation:\n\n``` \n// Invalid aggregation (tried to use sharded collection with $unionWith)\ndb.lonely_planets.aggregate([\n {\n $lookup: {\n from: \"extinct_planets\",\n let: { last_known_population: \"$population\", years_extinct: \"$time_extinct\" },\n pipeline: [\n // Filter criteria\n { $unionWith: { coll: \"questionable_planets\", pipeline: [ { pipeline } ] } },\n // Other pipeline stages\n ],\n as: \"planetdata\"\n }\n }\n])\n```\n\nThe collection `questionable_planets` (located within the `$unionWith` stage)\ncannot be sharded. This is enforced to prevent a significant decrease in\nperformance due to the shuffling of data around the cluster as it\ndetermines the best execution plan.\n\n### Transactions\n\nAggregation pipelines can't use the `$unionWith` stage inside\ntransactions because a rare but possible 3-thread deadlock can occur in\nvery niche scenarios. Additionally, in MongoDB 4.4, there is a\nfirst-time definition of a view that would restrict its reading from\nwithin a transaction.\n\n### `$out` and `$merge`\n\nThe\n$out\nand\n$merge\nstages cannot be used in a `$unionWith` pipeline. Since both `$out` and\n`$merge` are stages that *write* data to a collection, they need to be\nthe *last* stage in a pipeline. 
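For example, a pipeline along these lines is rejected. This is only a\nsketch to illustrate the restriction, reusing the planet collections\nfrom earlier (the `all_planets` target collection is made up for this\nexample):\n\n``` \n// Invalid aggregation ($merge cannot appear inside a $unionWith pipeline)\ndb.lonely_planets.aggregate([\n {\n $unionWith: {\n coll: \"popular_planets\",\n pipeline: [\n // \"all_planets\" is a hypothetical output collection for this sketch\n { $merge: { into: \"all_planets\" } }\n ]\n }\n }\n])\n```\n\n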
This conflicts with the usage of the\n`$unionWith` stage as it outputs its combined result set onto the next\nstage, which can be used at any point in an aggregation pipeline.\n\n### Collations\n\nIf your aggregation includes a\ncollation,\nthat collation is used for the operation, ignoring any other collations.\n\nHowever, if your aggregation doesn't include a collation, it will use\nthe collation for the top-level collection/view on which the aggregation\nis run:\n\n- If the `$unionWith` coll is a collection, its collation is ignored.\n- If the `$unionWith` coll is a view, then its collation must match\n that of the top-level collection/view. Otherwise, the operation\n errors.\n\n## You've made it to the end!\n\nWe've discussed what the `$unionWith` pipeline stage is and how you can\nuse it in your aggregations to combine data from multiple collections.\nThough similar to SQL's `UNION ALL` operation, MongoDB's `$unionWith`\nstage distinguishes itself through some convenient and much-needed\ncharacteristics. Most notable is the ability to combine collections with\ndifferent schemas! And as a much needed improvement, using a\n`$unionWith` stage eliminates the need to write additional code, code\nthat was required because we had no other way to combine our data!\n\nIf you have any questions about the `$unionWith` pipeline stage or this\nblog post, head over to the MongoDB Community\nforums or Tweet\nme!\n\n", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "Learn how to use the Union All ($unionWith) aggregation pipeline stage, newly released in MongoDB 4.4.", "contentType": "Tutorial"}, "title": "How to Use the Union All Aggregation Pipeline Stage in MongoDB 4.4", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/beanie-odm-fastapi-cocktails", "action": "created", "body": "# Build a Cocktail API with Beanie and MongoDB\n\nI have a MongoDB collection containing cocktail recipes that I've made during lockdown.\n\nRecently, I've been trying to build an API over it, using some technologies I know well. I wasn't very happy with the results. Writing code to transform the BSON that comes out of MongoDB into suitable JSON is relatively fiddly. I felt I wanted something more declarative, but my most recent attempt\u2014a mash-up of Flask, MongoEngine, and Marshmallow\u2014just felt clunky and repetitive. I was about to start experimenting with building my own declarative framework, and then I stumbled upon an introduction to a brand new MongoDB ODM called Beanie. It looked like exactly what I was looking for.\n\nThe code used in this post borrows heavily from the Beanie post linked above. I've customized it to my needs, and added an extra endpoint that makes use of MongoDB Atlas Search, to provide autocompletion, for a GUI I'm planning to build in the future.\n\nYou can find all the code on GitHub.\n\n>\n>\n>**Note**: The code here was written for Beanie 0.2.3. It's a new library, and things are moving fast! 
Check out the Beanie Changelog to see what things have changed between this version and the latest version of Beanie.\n>\n>\n\nI have a collection of documents that looks a bit like this:\n\n``` json\n{\n \"_id\": \"5f7daa158ec9dfb536781b0a\",\n \"name\": \"Hunter's Moon\",\n \"ingredients\": \n {\n \"name\": \"Vermouth\",\n \"quantity\": {\n \"quantity\": \"25\",\n \"unit\": \"ml\"\n }\n },\n {\n \"name\": \"Maraschino Cherry\",\n \"quantity\": {\n \"quantity\": \"15\",\n \"unit\": \"ml\"\n }\n },\n {\n \"name\": \"Sugar Syrup\",\n \"quantity\": {\n \"quantity\": \"10\",\n \"unit\": \"ml\"\n }\n },\n {\n \"name\": \"Lemonade\",\n \"quantity\": {\n \"quantity\": \"100\",\n \"unit\": \"ml\"\n }\n },\n {\n \"name\": \"Blackberries\",\n \"quantity\": {\n \"quantity\": \"2\",\n \"unit\": null\n }\n }\n ]\n}\n```\n\nThe promise of Beanie and FastAPI\u2014to just build a model for this data and have it automatically translate the tricky field types, like `ObjectId` and `Date` between BSON and JSON representation\u2014was very appealing, so I fired up a new Python project, and defined my schema in a [models submodule like so:\n\n``` python\nclass Cocktail(Document):\n class DocumentMeta:\n collection_name = \"recipes\"\n\n name: str\n ingredients: List\"Ingredient\"]\n instructions: List[str]\n\nclass Ingredient(BaseModel):\n name: str\n quantity: Optional[\"IngredientQuantity\"]\n\nclass IngredientQuantity(BaseModel):\n quantity: Optional[str]\n unit: Optional[str]\n\nCocktail.update_forward_refs()\nIngredient.update_forward_refs()\n```\n\nI was pleased to see that I could define a `DocumentMeta` inner class and override the collection name. It was a feature that I thought *should* be there, but wasn't totally sure it would be.\n\nThe other thing that was a little bit tricky was to get `Cocktail` to refer to `Ingredient`, which hasn't been defined at that point. Fortunately,\n[Pydantic's `update_forward_refs` method can be used later to glue together the references. I could have just re-ordered the class definitions, but I preferred this approach.\n\nThe beaniecocktails package, defined in the `__init__.py` file, contains mostly boilerplate code for initializing FastAPI, Motor, and Beanie:\n\n``` python\n# ... some code skipped\n\n@app.on_event(\"startup\")\nasync def app_init():\n client = motor.motor_asyncio.AsyncIOMotorClient(Settings().mongodb_url)\n init_beanie(client.get_default_database(), document_models=Cocktail])\n app.include_router(cocktail_router, prefix=\"/v1\")\n```\n\nThe code above defines an event handler for the FastAPI app startup. It connects to MongoDB, configures Beanie with the database connection, and provides the `Cocktail` model I'll be using to Beanie.\n\nThe last line adds the `cocktail_router` to Beanie. It's an `APIRouter` that's defined in the [routes submodule.\n\nSo now it's time to show you the routes file\u2014this is where I spent most of my time. I was *amazed* by how quickly I could get API endpoints developed.\n\n``` python\n# ... imports skipped\n\ncocktail_router = APIRouter()\n```\n\nThe `cocktail_router` is responsible for routing URL paths to different function handlers which will provide data to be rendered as JSON. 
The simplest handler is probably:\n\n``` python\n@cocktail_router.get(\"/cocktails/\", response_model=ListCocktail])\nasync def list_cocktails():\n return await Cocktail.find_all().to_list()\n```\n\nThis handler takes full advantage of these facts: FastAPI will automatically render Pydantic instances as JSON; and Beanie `Document` models are defined using Pydantic. `Cocktail.find_all()` returns an iterator over all the `Cocktail` documents in the `recipes` collection. FastAPI can't deal with these iterators directly, so the sequence is converted to a list using the `to_list()` method.\n\nIf you have the [Just task runner installed, you can run the server with:\n\n``` bash\njust run\n```\n\nIf not, you can run it directly by running:\n\n``` bash\nuvicorn beaniecocktails:app --reload --debug\n```\n\nAnd then you can test the endpoint by pointing your browser at\n\"\".\n\nA similar endpoint for just a single cocktail is neatly encapsulated by two methods: one to look up a document by `_id` and raise a \"404 Not Found\" error if it doesn't exist, and a handler to route the HTTP request. The two are neatly glued together using the `Depends` declaration that converts the provided `cocktail_id` into a loaded `Cocktail` instance.\n\n``` python\nasync def get_cocktail(cocktail_id: PydanticObjectId) -> Cocktail:\n \"\"\" Helper function to look up a cocktail by id \"\"\"\n\n cocktail = await Cocktail.get(cocktail_id)\n if cocktail is None:\n raise HTTPException(status_code=404, detail=\"Cocktail not found\")\n return cocktail\n\n@cocktail_router.get(\"/cocktails/{cocktail_id}\", response_model=Cocktail)\nasync def get_cocktail_by_id(cocktail: Cocktail = Depends(get_cocktail)):\n return cocktail\n```\n\n*Now* for the thing that I really like about Beanie: its integration with MongoDB's Aggregation Framework. Aggregation pipelines can reshape documents through projection or grouping, and Beanie allows the resulting documents to be mapped to a Pydantic `BaseModel` subclass.\n\nUsing this technique, an endpoint can be added that provides an index of all of the ingredients and the number of cocktails each appears in:\n\n``` python\n# models.py:\n\nclass IngredientAggregation(BaseModel):\n \"\"\" A model for an ingredient count. \"\"\"\n\n id: str = Field(None, alias=\"_id\")\n total: int\n\n# routes.py:\n\n@cocktail_router.get(\"/ingredients\", response_model=ListIngredientAggregation])\nasync def list_ingredients():\n \"\"\" Group on each ingredient name and return a list of `IngredientAggregation`s. 
\"\"\"\n\n return await Cocktail.aggregate(\n aggregation_query=[\n {\"$unwind\": \"$ingredients\"},\n {\"$group\": {\"_id\": \"$ingredients.name\", \"total\": {\"$sum\": 1}}},\n {\"$sort\": {\"_id\": 1}},\n ],\n item_model=IngredientAggregation,\n ).to_list()\n```\n\nThe results, at \"\", look a bit like this:\n\n``` json\n[\n {\"_id\":\"7-Up\",\"total\":1},\n {\"_id\":\"Amaretto\",\"total\":2},\n {\"_id\":\"Angostura Bitters\",\"total\":1},\n {\"_id\":\"Apple schnapps\",\"total\":1},\n {\"_id\":\"Applejack\",\"total\":1},\n {\"_id\":\"Apricot brandy\",\"total\":1},\n {\"_id\":\"Bailey\",\"total\":1},\n {\"_id\":\"Baileys irish cream\",\"total\":1},\n {\"_id\":\"Bitters\",\"total\":3},\n {\"_id\":\"Blackberries\",\"total\":1},\n {\"_id\":\"Blended whiskey\",\"total\":1},\n {\"_id\":\"Bourbon\",\"total\":1},\n {\"_id\":\"Bourbon Whiskey\",\"total\":1},\n {\"_id\":\"Brandy\",\"total\":7},\n {\"_id\":\"Butterscotch schnapps\",\"total\":1},\n]\n```\n\nI loved this feature so much, I decided to use it along with [MongoDB Atlas Search, which provides free text search over MongoDB collections, to implement an autocomplete endpoint.\n\nThe first step was to add a search index on the `recipes` collection, in the MongoDB Atlas web interface:\n\nI had to add the `name` field as an \"autocomplete\" field type.\n\nI waited for the index to finish building, which didn't take very long, because it's not a very big collection. Then I was ready to write my autocomplete endpoint:\n\n``` python\n@cocktail_router.get(\"/cocktail_autocomplete\", response_model=Liststr])\nasync def cocktail_autocomplete(fragment: str):\n \"\"\" Return an array of cocktail names matched from a string fragment. \"\"\"\n\n return [\n c[\"name\"]\n for c in await Cocktail.aggregate(\n aggregation_query=[\n {\n \"$search\": {\n \"autocomplete\": {\n \"query\": fragment,\n \"path\": \"name\",\n }\n }\n }\n ]\n ).to_list()\n ]\n```\n\nThe `$search` aggregation stage specifically uses a search index. In this case, I'm using the `autocomplete` type, to match the type of the index I created on the `name` field. Because I wanted the response to be as lightweight as possible, I'm taking over the serialization to JSON myself, extracting the name from each `Cocktail` instance and just returning a list of strings.\n\nThe results are great!\n\nPointing my browser at\n\"\" gives me `[\"Imperial Fizz\",\"Vodka Fizz\"]`, and\n\"\" gives me `[\"Manhattan\",\"Espresso Martini\"]`.\n\nThe next step is to build myself a React front end, so that I can truly call this a [FARM Stack app.\n\n## Wrap-Up\n\nI was really impressed with how quickly I could get all of this up and running. Handling of `ObjectId` instances was totally invisible, thanks to Beanie's `PydanticObjectId` type, and I've seen other sample code that shows how BSON `Date` values are equally well-handled.\n\nI need to see how I can build some HATEOAS functionality into the endpoints, with entities linking to their canonical URLs. Pagination is also something that will be important as my collection grows, but I think I already know how to handle that.\n\nI hope you enjoyed this quick run-through of my first experience using Beanie. The next time you're building an API on top of MongoDB, I recommend you give it a try!\n\nIf this was your first exposure to the Aggregation Framework, I really recommend you read our documentation on this powerful feature of MongoDB. 
Or if you really want to get your hands dirty, why not check out our free MongoDB University course?\n\n>\n>\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n>\n>\n\n", "format": "md", "metadata": {"tags": ["Python", "Atlas", "Flask"], "pageDescription": "This new Beanie ODM is very good.", "contentType": "Tutorial"}, "title": "Build a Cocktail API with Beanie and MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/cpp/noise-sensor-mqtt-client", "action": "created", "body": "# Red Mosquitto: Implement a noise sensor with an MQTT client in an ESP32\n\nWelcome to another article of the \"Adventures in IoT\" series. So far, we have defined an end-to-end project, written the firmware for a Raspberry Pi Pico MCU board to measure the temperature and send the value via Bluetooth Low Energy, learned how to use Bluez and D-Bus, and implemented a collecting station that was able to read the BLE data. If you haven't had the time yet, you can read them or watch the videos.\n\nIn this article, we are going to write the firmware for a different board: an ESP32-C6-DevKitC-1. ESP32 boards are very popular among the DIY community and for IoT in general. The creator of these boards, Espressif, is putting a good amount of effort into supporting Rust as a first-class developer language for them. I am thankful for that and I will take advantage of the tools they have created for us.\n\nWe can write code for the ESP32 that talks to the bare metal, a.k.a. core, or use an operating system that allows us to take full advantage of the capabilities provided by std library. ESP-IDF \u2013i.e., ESPressif IoT Development Framework\u2013 is created to simplify that development and is not only available in C/C++ but also in Rust, which we will be using for the rest of this article. By using ESP-IDF through the corresponding crates, we can use threads, mutexes, and other synchronization primitives, collections, random number generation, sockets, etc.\n. It provides an abstraction to create drivers that are independent from the MCU. This is very useful for us developers because it allows us to develop and maintain the driver once and use it for the many different MCU boards that honor that abstraction.\n\nThis development board kit has a neopixel LED \u2013i.e., an RGB LED controlled by a WS2812\u2013 which we will use for our \"Hello World!\" iteration and then to inform the user about the state of the device. The WS2812 requires sending sequences of high and low voltages that use the duration of those high and low values to specify the bits that define the RGB color components of the LED. The ESP32 has a Remote Control Transceiver (RMT) that was conceived as an infrared transceiver but can be repurposed to generate the signals required for the single-line serial protocol used by the WS1812. Neither the RMT nor the timers are available in the just released version of the `embedded-hal`, but the ESP-IDF provided by Expressif does implement the full `embedded-hal` abstraction, and the WS2812 driver uses the available abstractions.\n\n## Setup\n\n### The tools\n\nThere are some tools that you will need to have installed in your computer to be able to follow along and compile and install the firmware on your board. 
I have installed them on my computer, but before spending time on this setup, consider using the container provided by Espressif if you prefer that choice.\n\nThe first thing that might be different for you is that we need the bleeding edge version of the Rust toolchain. We will be using the nightly version of it:\n\n```shell\nrustup toolchain install nightly --component rust-src\n```\n\nAs for the tools, you may already have some of these tools on your computer, but double-check that you have installed all of them:\n\n- Git (in macOS installed with Code)\n- Some tools to assist on the building process (`brew install cmake ninja dfu-util python3` \u2013This works on macOS, but if you use a different OS, please check the list here)\n- A tool to forward linker arguments to the actual linker (`cargo install ldproxy`)\n- A utility to write the firmware to the board (`cargo install espflash`)\n- A tool that is used to produce a new project from a template (`cargo install cargo-generate`)\n\n### Project creation using a template\n\nWe can then create a project using the template for `stdlib` projects (`esp-idf-template`):\n\n```sh\ncargo generate esp-rs/esp-idf-template cargo\n```\n\nAnd we fill in this data:\n\n- **Project name:** mosquitto-bzzz\n- **MCU to target:** esp32c6\n- **Configure advanced template options:** false\n\n`cargo b` produces the build. Target is `riscv32imac-esp-espidf` (RISC-V architecture with support for atomics), so the binary is generated in `target/riscv32imac-esp-espidf/debug/mosquitto-bzzz`. And it can be run on the device using this command:\n\n```sh\nespflash flash target/riscv32imac-esp-espidf/debug/mosquitto-bzzz --monitor\n```\n\nAnd at the end of the output log, you can find these lines:\n\n```\nI (358) app_start: Starting scheduler on CPU0\nI (362) main_task: Started on CPU0\nI (362) main_task: Calling app_main()\nI (362) mosquitto_bzzz: Hello, world!\nI (372) main_task: Returned from app_main()\n```\n\nLet's understand the project that has been created so we can take advantage of all the pieces:\n\n- **Cargo.toml:** It is main the configuration file for the project. Besides what a regular `cargo new` would do, we will see that:\n - It defines some features available that modify the configuration of some of the dependencies.\n - It includes a couple of dependencies: one for the logging API and another for using the ESP-IDF.\n - It adds a build dependency that provides utilities for building applications for embedded systems.\n - It adjusts the profile settings that modify some compiler options, optimization level, and debug symbols, for debug and release.\n- **build.rs:** A build script that doesn't belong to the application but is executed as part of the build process.\n- **rust-toolchain.toml:** A configuration file to enforce the usage of the nightly toolchain as well as a local copy of the Rust standard library source code.\n- **sdkconfig.defaults:** A file with some configuration parameters for the esp-idf.\n- **.cargo/config.toml:** A configuration file for Cargo itself, where we have the architecture, the tools, and the unstable flags of the compiler used in the build process, and the environment variables used in the process.\n- **src/main.rs:** The seed for our code with the minimal skeleton.\n\n## Foundations of our firmware\n\nThe idea is to create firmware similar to the one we wrote for the Raspberry Pi Pico but exposing the sensor data using MQTT instead of Bluetooth Low Energy. 
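To set expectations, here is a deliberately simplified sketch of the loop we are building toward; the two stub functions are just placeholders for the real ADC sampling and the esp-idf-svc MQTT client that we will write through the rest of this article:\n\n```rust\n// Bird's-eye sketch only: plain Rust with stubs standing in for the hardware\n// and networking pieces that the rest of the article implements for real.\nuse std::{thread, time::Duration};\n\n// Placeholder for sampling the sound sensor and computing a noise level.\nfn read_noise_level_stub() -> f32 {\n    0.0\n}\n\n// Placeholder for publishing a value to the MQTT broker.\nfn publish_stub(topic: &str, value: f32) {\n    println!(\"{topic}: {value}\");\n}\n\nfn main() {\n    loop {\n        let noise = read_noise_level_stub();\n        publish_stub(\"home/noise sensor/01\", noise);\n        thread::sleep(Duration::from_millis(50));\n    }\n}\n```\n\nEverything below fills in those stubs, but the flow stays the same: measure, then publish the value over MQTT instead of notifying it over BLE. 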
That means that we have to connect to the WiFi, then to the MQTT broker, and start publishing data. We will use the RGB LED to show the status of our sensor and use a sound sensor to obtain the desired data.\n\n### Control the LED\n\nMaking an LED blink is considered the *hello world* of embedded programming. We can take it a little bit further and use colors rather than just blink.\n\n1. According to the documentation of the board, the LED is controlled by the GPIO8 pin. We can get access to that pin using the `Peripherals` module of the esp-idf-svc, which exposes the hal adding `use esp_idf_svc::hal::peripherals::Peripherals;`:\n \n ```rust\n let peripherals = Peripherals::take().expect(\"Unable to access device peripherals\");\n let led_pin = peripherals.pins.gpio8;\n ```\n2. Also using the Peripherals singleton, we can access the RMT channel that will produce the desired waveform signal required to set each of the three color components of the LED:\n \n ```rust\n let rmt_channel = peripherals.rmt.channel0;\n ```\n3. We could do the RGB color encoding manually, but there is a crate that will help us talk to the built-in WS2812 (neopixel) controller that drives the RGB LED. The create `smart-leds` could be used on top of it if we had several LEDs, but we don't need it for this board.\n \n ```sh\n cargo add ws2812-esp32-rmt-driver\n ```\n4. We create an instance that talks to the WS2812 in pin 8 and uses the Remote Control Transceiver \u2013 a.k.a. RMT \u2013 peripheral in channel 0. We add the symbol `use ws2812_esp32_rmt_driver::Ws2812Esp32RmtDriver;` and:\n \n ```rust\n let mut neopixel =\n Ws2812Esp32RmtDriver::new(rmt_channel, led_pin).expect(\"Unable to talk to ws2812\");\n ```\n5. Then, we define the data for a pixel and write it with the instance of the driver so it gets used in the LED. It is important to not only import the type for the 24bi pixel color but also get the trait with `use ws2812_esp32_rmt_driver::driver::color::{LedPixelColor,LedPixelColorGrb24};`:\n \n ```rust\n let color_1 = LedPixelColorGrb24::new_with_rgb(255, 255, 0);\n neopixel\n .write_blocking(color_1.as_ref().iter().cloned())\n .expect(\"Error writing to neopixel\");\n ```\n6. At this moment, you can run it with `cargo r` and expect the LED to be on with a yellow color.\n7. Let's add a loop and some changes to complete our \"hello world.\" First, we define a second color:\n \n ```rust\n let color_2 = LedPixelColorGrb24::new_with_rgb(255, 0, 255);\n ```\n8. Then, we add a loop at the end where we switch back and forth between these two colors:\n \n ```rust\n loop {\n neopixel\n .write_blocking(color_1.as_ref().iter().cloned())\n .expect(\"Error writing to neopixel\");\n neopixel\n .write_blocking(color_2.as_ref().iter().cloned())\n .expect(\"Error writing to neopixel\");\n }\n ```\n9. If we don't introduce any delays, we won't be able to perceive the colors changing, so we add `use std::{time::Duration, thread};` and wait for half a second before every change:\n \n ```rust\n neopixel\n .write_blocking(color_1.as_ref().iter().cloned())\n .expect(\"Error writing to neopixel\");\n thread::sleep(Duration::from_millis(500));\n neopixel\n .write_blocking(color_2.as_ref().iter().cloned())\n .expect(\"Error writing to neopixel\");\n thread::sleep(Duration::from_millis(500));\n ```\n10. We run and watch the LED changing color from purple to yellow and back every half a second.\n\n### Use the LED to communicate with the user\n\nWe are going to encapsulate the usage of the LED in its own thread. 
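As a stepping stone, here is a sketch of that structure: the blink loop from the \"hello world\" moved into a scoped thread. It only uses the pieces shown in the steps above (the boilerplate that the template generates at the top of `main` is omitted):\n\n```rust\nuse std::{thread, time::Duration};\n\nuse esp_idf_svc::hal::peripherals::Peripherals;\nuse ws2812_esp32_rmt_driver::driver::color::{LedPixelColor, LedPixelColorGrb24};\nuse ws2812_esp32_rmt_driver::Ws2812Esp32RmtDriver;\n\nfn main() {\n    // Same peripherals as in the \"hello world\" above.\n    let peripherals = Peripherals::take().expect(\"Unable to access device peripherals\");\n    let led_pin = peripherals.pins.gpio8;\n    let rmt_channel = peripherals.rmt.channel0;\n\n    // The LED handling moves into its own scoped thread; `thread::scope` joins the\n    // thread for us when the scope ends (here it never does, as the loop runs forever).\n    thread::scope(|scope| {\n        scope.spawn(|| {\n            let mut neopixel = Ws2812Esp32RmtDriver::new(rmt_channel, led_pin)\n                .expect(\"Unable to talk to ws2812\");\n            let color_1 = LedPixelColorGrb24::new_with_rgb(255, 255, 0);\n            let color_2 = LedPixelColorGrb24::new_with_rgb(255, 0, 255);\n            loop {\n                neopixel\n                    .write_blocking(color_1.as_ref().iter().cloned())\n                    .expect(\"Error writing to neopixel\");\n                thread::sleep(Duration::from_millis(500));\n                neopixel\n                    .write_blocking(color_2.as_ref().iter().cloned())\n                    .expect(\"Error writing to neopixel\");\n                thread::sleep(Duration::from_millis(500));\n            }\n        });\n    });\n}\n```\n\n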
That thread needs to be aware of any changes in the status of the device and use the current one to decide how to use the LED accordingly.\n\n1. First, we are going to need an enum with all of the possible states. Initially, it will contain one variant for no error, one variant for WiFi error, and another one for MQTT error:\n \n ```rust\n enum DeviceStatus {\n Ok,\n WifiError,\n MqttError,\n }\n ```\n2. And we can add an implementation to convert from eight-bit unsigned integers into a variant of this enum:\n \n ```rust\n impl TryFrom for DeviceStatus {\n type Error = &'static str;\n \n fn try_from(value: u8) -> Result {\n match value {\n 0u8 => Ok(DeviceStatus::Ok),\n 1u8 => Ok(DeviceStatus::WifiError),\n 2u8 => Ok(DeviceStatus::MqttError),\n _ => Err(\"Unknown status\"),\n }\n }\n }\n ```\n3. We would like to use the `DeviceStatus` variants by name where a number is required. We achieve the inverse conversion by adding an annotation to the enum:\n \n ```rust\n #repr(u8)]\n enum DeviceStatus {\n ```\n4. Next, I am going to do something that will be considered na\u00efve by anybody that has developed anything in Rust, beyond the simplest \"hello world!\" However, I want to highlight one of the advantages of using Rust, instead of most other languages, to write firmware (and software in general). I am going to define a variable in the main function that will hold the current status of the device and share it among the threads.\n \n ```rust\n let mut status = DeviceStatus::Ok as u8;\n ```\n5. We are going to define two threads. The first one is meant for reporting back to the user the status of the device. The second one is just needed for testing purposes, and we will replace it with some real functionality in a short while. We will be using sequences of colors in the LED to report the status of the sensor. So, let's start by defining each of the steps in those color sequences:\n \n ```rust\n struct ColorStep {\n red: u8,\n green: u8,\n blue: u8,\n duration: u64,\n }\n ```\n6. We also define a constructor as an associated function for our own convenience:\n \n ```rust\n impl ColorStep {\n fn new(red: u8, green: u8, blue: u8, duration: u64) -> Self {\n ColorStep {\n red,\n green,\n blue,\n duration,\n }\n }\n }\n ```\n7. We can then use those steps to transform each status into a different sequence that we can display in the LED:\n \n ```rust\n impl DeviceStatus {\n fn light_sequence(&self) -> Vec {\n match self {\n DeviceStatus::Ok => vec![ColorStep::new(0, 255, 0, 500), ColorStep::new(0, 0, 0, 500)],\n DeviceStatus::WifiError => {\n vec![ColorStep::new(255, 0, 0, 200), ColorStep::new(0, 0, 0, 100)]\n }\n DeviceStatus::MqttError => vec![\n ColorStep::new(255, 0, 255, 100),\n ColorStep::new(0, 0, 0, 300),\n ],\n }\n }\n }\n ```\n8. We start the thread by initializing the WS2812 that controls the LED:\n \n ```rust\n use esp_idf_svc::hal::{\n gpio::OutputPin,\n peripheral::Peripheral,\n rmt::RmtChannel,\n };\n \n fn report_status(\n status: &u8,\n rmt_channel: impl Peripheral,\n led_pin: impl Peripheral,\n ) -> ! {\n let mut neopixel =\n Ws2812Esp32RmtDriver::new(rmt_channel, led_pin).expect(\"Unable to talk to ws2812\");\n loop {}\n }\n ```\n9. We can keep track of the previous status and the current sequence, so we don't have to regenerate it after displaying it once. This is not required, but it is more efficient:\n \n ```rust\n let mut prev_status = DeviceStatus::WifiError; // Anything but Ok\n let mut sequence: Vec = vec![];\n ```\n10. 
We then get into an infinite loop, in which we update the status, if it has changed, and the sequence accordingly. In any case, we use each of the steps of the sequence to display it in the LED:\n \n ```rust\n loop {\n if let Ok(status) = DeviceStatus::try_from(*status) {\n if status != prev_status {\n prev_status = status;\n sequence = status.light_sequence();\n }\n for step in sequence.iter() {\n let color = LedPixelColorGrb24::new_with_rgb(step.red, step.green, step.blue);\n neopixel\n .write_blocking(color.as_ref().iter().cloned())\n .expect(\"Error writing to neopixel\");\n thread::sleep(Duration::from_millis(step.duration));\n }\n }\n }\n ```\n11. Notice that the status cannot be compared until we implement `PartialEq`, and assigning it requires Clone and Copy, so we derive them:\n \n ```rust\n #[derive(Clone, Copy, PartialEq)]\n enum DeviceStatus {\n ```\n12. Now, we are going to implement the function that is run in the other thread. This function will change the status every 10 seconds. Since this is for the sake of testing the reporting capability, we won't be doing anything fancy to change the status, just moving from one status to the next and back to the beginning:\n \n ```rust\n fn change_status(status: &mut u8) -> ! {\n loop {\n thread::sleep(Duration::from_secs(10));\n if let Ok(current) = DeviceStatus::try_from(*status) {\n match current {\n DeviceStatus::Ok => *status = DeviceStatus::WifiError as u8,\n DeviceStatus::WifiError => *status = DeviceStatus::MqttError as u8,\n DeviceStatus::MqttError => *status = DeviceStatus::Ok as u8,\n }\n }\n }\n }\n ```\n13. With the two functions in place, we just need to spawn two threads, one with each one of them. We will use a thread scope that will take care of joining the threads that we spawn:\n \n ```rust\n thread::scope(|scope| {\n scope.spawn(|| report_status(&status, rmt_channel, led_pin));\n scope.spawn(|| change_status(&mut status));\n });\n ```\n14. Compiling this code will result in errors. It is the blessing/curse of the borrow checker, which is capable of figuring out that we are sharing memory in an unsafe way. The status can be changed in one thread while being read by the other. We could use a mutex, as we did in the previous C++ code, and wrap it in an `Arc` to be able to use a reference in each thread, but there is an easier way to achieve the same goal: We can use an atomic type. (`use std::sync::atomic::AtomicU8;`)\n \n ```rust\n let status = &AtomicU8::new(0u8);\n ```\n15. We modify `report_status()` to use the reference to the atomic type and add `use std::sync::atomic::Ordering::Relaxed;`:\n \n ```rust\n fn report_status(\n status: &AtomicU8,\n rmt_channel: impl Peripheral,\n led_pin: impl Peripheral,\n ) -> ! {\n let mut neopixel =\n Ws2812Esp32RmtDriver::new(rmt_channel, led_pin).expect(\"Unable to talk to ws2812\");\n let mut prev_status = DeviceStatus::WifiError; // Anything but Ok\n let mut sequence: Vec = vec![];\n loop {\n if let Ok(status) = DeviceStatus::try_from(status.load(Relaxed)) {\n ```\n16. And `change_status()`. Notice that in this case, thanks to the interior mutability, we don't need a mutable reference but a regular one. Also, we need to specify the guaranties in terms of how multiple operations will be ordered. Since we don't have any other atomic operations in the code, we can go with the weakest level \u2013 i.e., `Relaxed`:\n \n ```rust\n fn change_status(status: &AtomicU8) -> ! 
{\n loop {\n thread::sleep(Duration::from_secs(10));\n if let Ok(current) = DeviceStatus::try_from(status.load(Relaxed)) {\n match current {\n DeviceStatus::Ok => status.store(DeviceStatus::WifiError as u8, Relaxed),\n DeviceStatus::WifiError => status.store(DeviceStatus::MqttError as u8, Relaxed),\n DeviceStatus::MqttError => status.store(DeviceStatus::Ok as u8, Relaxed),\n }\n }\n }\n }\n ```\n17. Finally, we have to change the lines in which we spawn the threads to reflect the changes that we have introduced:\n \n ```rust\n scope.spawn(|| report_status(status, rmt_channel, led_pin));\n scope.spawn(|| change_status(status));\n ```\n18. You can use `cargo r` to compile the code and run it on your board. The lights should be displaying the sequences, which should change every 10 seconds.\n\n## Getting the noise level\n\nIt is time to interact with a temperature sensor\u2026 Just kidding. This time, we are going to use a sound sensor. No more temperature measurements in this project. Promise.\n\nThe sensor I am going to use is an OSEPP Sound-01 that claims to be \"the perfect sensor to detect environmental variations in noise.\" It supports an input voltage from 3V to 5V and provides an analog signal. We are going to connect the signal to pin 0 of the GPIO, which is also the pin for the first channel of the analog-to-digital converter (ADC1_CH0). The other two pins are connected to 5V and GND (+ and -, respectively).\n![enter image description here][2]\nYou don't have to use this particular sensor. There are many other options on the market. Some of them have pins for digital output, instead of just an analog one as in this one. Some sensors also have a potentiometer that allows you to adjust the sensitivity of the microphone.\n\n### Read from the sensor\n\n1. We are going to perform this task in a new function:\n \n ```rust\n fn read_noise_level() -> ! {\n }\n ```\n2. We want to use the ADC on the pin that we have connected the signal. We can get access to the ADC1 using the `peripherals` singleton in the main function.\n \n ```rust\n let adc = peripherals.adc1;\n ```\n3. And also to the pin that will receive the signal from the sensor:\n \n ```rust\n let adc_pin = peripherals.pins.gpio0;\n ```\n4. We modify the signature of our new function to accept the parameters we need:\n \n ```rust\n fn read_noise_level(adc1: ADC1, adc1_pin: GPIO) -> !\n where\n GPIO: ADCPin,\n ```\n5. Now, we use those two parameters to attach a driver that can be used to read from the ADC. Notice that the `AdcDriver` needs a configuration, which we create with the default value. Also, `AdcChannelDriver` requires a [generic const parameter that is used to define the attenuation level. I am going to go with maximum attenuation initially to have more sensibility in the mic, but we can change it later if needed. We add `use esp_idf_svc::hal::adc::{attenuation, AdcChannelDriver};`:\n \n ```rust\n let mut adc =\n AdcDriver::new(adc1, &adc::config::Config::default()).expect(\"Unable to initialze ADC1\");\n let mut adc_channel_drv: AdcChannelDriver<{ attenuation::DB_11 }, _> =\n AdcChannelDriver::new(adc1_pin).expect(\"Unable to access ADC1 channel 0\");\n ```\n6. With the required pieces in place, we can use the `adc_channel` to sample in an infinite loop. A delay of 10ms means that we will be sampling at ~100Hz:\n \n ```rust\n loop {\n thread::sleep(Duration::from_millis(10));\n println!(\"ADC value: {:?}\", adc.read(&mut adc_channel));\n }\n ```\n7. 
Lastly, we spawn a thread with this function in the same scope that we were using before:\n \n ```rust\n scope.spawn(|| read_noise_level(adc, adc_pin));\n ```\n\n### Compute noise levels (Sorta!)\n\nIn order to get an estimation of the noise level, I am going to compute the Root Mean Square (RMS) of a buffer of 50ms, i.e., five samples at our current sampling rate. Yes, I know this isn't exactly how decibels are measured, but it will be good enough for us and the data that we want to gather.\n\n1. Let's start by creating that buffer where we will be putting the samples:\n \n ```rust\n const LEN: usize = 5;\n let mut sample_buffer = [0u16; LEN];\n ```\n2. Inside the infinite loop, we are going to have a for-loop that goes through the buffer:\n \n ```rust\n for i in 0..LEN {\n }\n ```\n3. We modify the sampling that we were doing before, so a zero value is used if the ADC fails to get a sample:\n \n ```rust\n thread::sleep(Duration::from_millis(10));\n if let Ok(sample) = adc.read(&mut adc_pin) {\n sample_buffer[i] = sample;\n } else {\n sample_buffer[i] = 0u16;\n }\n ```\n4. Before starting with the iterations of the for loop, we are going to define a variable to hold the addition of the squares of the samples:\n \n ```rust\n let mut sum = 0.0f32;\n ```\n5. And each sample is squared and added to the sum. We could do the conversion into floats after the square, but then, the square value might not fit into a u16:\n \n ```rust\n sum += (sample as f32) * (sample as f32);\n ```\n6. And we compute the decibels (or something close enough to that) after the for loop:\n \n ```rust\n let d_b = 20.0f32 * (sum / LEN as f32).sqrt().log10();\n println!(\n \"ADC values: {:?}, sum: {}, and dB: {} \",\n sample_buffer, sum, d_b\n );\n ```\n7. We compile and run with `cargo r` and should get some output similar to:\n \n ```\n ADC values: [0, 0, 0, 0, 0], sum: 0, and dB: -inf\n ADC values: [0, 0, 0, 3, 0], sum: 9, and dB: 2.5527248\n ADC values: [0, 0, 0, 11, 0], sum: 121, and dB: 13.838154\n ADC values: [8, 0, 38, 0, 102], sum: 11912, and dB: 33.770145\n ADC values: [64, 23, 0, 8, 26], sum: 5365, and dB: 30.305998\n ADC values: [0, 8, 41, 0, 87], sum: 9314, and dB: 32.70166\n ADC values: [137, 0, 79, 673, 0], sum: 477939, and dB: 49.804024\n ADC values: [747, 0, 747, 504, 26], sum: 1370710, and dB: 54.379753\n ADC values: [240, 0, 111, 55, 26], sum: 73622, and dB: 41.680374\n ADC values: [8, 26, 26, 58, 96], sum: 13996, and dB: 34.470337\n ```\n\n## MQTT\n\n### Concepts\n\nWhen we wrote our previous firmware, we used Bluetooth Low Energy to make the data from the sensor available to the rest of the world. That was an interesting experiment, but it had some limitations. Some of those limitations were introduced by the hardware we were using, like the fact that we were getting some interferences in the Bluetooth signal from the WiFi communications in the Raspberry Pi. But others are inherent to the Bluetooth technology, like the maximum distance from the sensor to the collecting station.\n\nFor this firmware, we have decided to take a different approach. We will be using WiFi for the communications from the sensors to the collecting station. WiFi will allow us to spread the sensors through a much greater area, especially if we have several access points. However, it comes with a price: The sensors will consume more energy and their batteries will last less.\n\nUsing WiFi practically implies that our communications will be TCP/IP-based. 
And that opens a wide range of possibilities, which we can summarize with this list in increasing order of likelihood:\n\n- Implement a custom TCP or UDP protocol.\n- Use an existing protocol that is commonly used for writing APIs. There are other options, but HTTP is the main one here.\n- Use an existing protocol that is more tailored for the purpose of sending event data that contains values.\n\nCreating a custom protocol is expensive, time-consuming, and error-prone, especially without previous experience. It''s probably the worst idea for a proof of concept unless you have a very specific requirement that cannot be accomplished otherwise.\n\nHTTP comes to mind as an excellent solution to exchange data. REST APIs are an example of that. However, it has some limitations, like the unidirectional flow of data, the overhead \u2013both in terms of the protocol itself and on using a new connection for every new request\u2013 and even the lack of provision to notify selected clients when the data they are interested in changes.\n\nIf we want to go with a protocol that was designed for this, MQTT is the natural choice. Besides overcoming the limitations of HTTP for this type of communication, it has been tested in the field with many sensors that change very often and out of the box, can do fancy things like storing the last known good value or having specific client commands that allow them to receive updates on specific values or a set of them. MQTT is designed as a protocol for publish/subscribe (pub/sub) in the scenarios that are common for IoT. The server that controls all the communications is commonly referred to as a *broker*, and our sensors will be its clients.\n\n### Connect to the WiFi\n\nNow that we have a better understanding of why we are using MQTT, we are going to connect to our broker and send the data that we obtain from our sensor so it gets published there.\n\nHowever, before being able to do that, we need to connect to the WiFi.\n\nIt is important to keep in mind that the board we are using has support for WiFi but only on the 2.4GHz band. It won't be able to connect to your router using the 5GHz band, no matter how kindly you ask it to do it.\n\nAlso, unless you are a wealthy millionaire and you've got yourself a nice island to focus on following along with this content, it would be wise to use a fairly strong password to keep unauthorized users out of your network.\n\n1. We are going to begin by setting some structure for holding the authentication data to access the network:\n \n ```rust\n struct Configuration {\n wifi_ssid: &'static str,\n wifi_password: &'static str,\n }\n ```\n2. We could set the values in the code, but I like better the approach suggested by Ferrous Systems. We will be using the `toml_cfg` crate. We will have default values (useless in this case other than to get an error) that we will be overriding by using a toml file with the desired values. First things first: Let's add the crate:\n \n ```shell\n cargo add toml-cfg\n ```\n3. Let's now annotate the struct with some macros:\n \n ```rust\n #[toml_cfg::toml_config]\n struct Configuration {\n #[default(\"NotMyWifi\")]\n wifi_ssid: &'static str,\n #[default(\"NotMyPassword\")]\n wifi_password: &'static str,\n }\n ```\n4. We can now add a `cfg.toml` file with the **actual** values of these parameters.\n \n ```\n [mosquitto-bzzz]\n wifi_ssid = \"ThisAintEither\"\n wifi_password = \"NorIsThisMyPassword\"\n ```\n\n5. 
Please, remember to add that filename to the `.gitignore` configuration, so it doesn't end up in our repository with our dearest secrets:\n \n ```shell\n echo \"cfg.toml\" >> .gitignore\n ```\n6. The code for connecting to the WiFi is a little bit tedious. It makes sense to do it in a different function:\n \n ```rust\n fn connect_to_wifi(ssid: &str, passwd: &str) {}\n ```\n7. This function should have a way to let us know if there has been a problem, but we want to simplify error handling, so we add the `anyhow` crate:\n \n ```rust\n cargo add anyhow\n ```\n8. We can now use the `Result` type provided by anyhow (`import anyhow::Result;`). This way, we don't need to be bored with creating and using a custom error type.\n \n ```rust\n fn connect_to_wifi(ssid: &str, passwd: &str) -> Result<()> {\n Ok(())\n }\n ```\n9. If the function doesn't get an SSID, it won't be able to connect to the WiFi, so it's better to stop here and return an error (`import anyhow::bail;`):\n \n ```rust\n if ssid.is_empty() {\n bail!(\"No SSID defined\");\n }\n ```\n10. If the function gets a password, we will assume that authentication uses WPA2. Otherwise, no authentication will be used (`use esp_idf_svc::wifi::AuthMethod;`):\n \n ```rust\n let auth_method = if passwd.is_empty() {\n AuthMethod::None\n } else {\n AuthMethod::WPA2Personal\n };\n ```\n11. We will need an instance of the system loop to maintain the connection to the WiFi alive and kicking, so we access the system event loop singleton (`use esp_idf_svc::eventloop::EspSystemEventLoop;` and `use anyhow::Context`).\n \n ```rust\n let sys_loop = EspSystemEventLoop::take().context(\"Unable to access system event loop.\")?;\n ```\n12. Although it is not required, the esp32 stores some data from previous network connections in the non-volatile storage, so getting access to it will simplify and accelerate the connection process (`use esp_idf_svc::nvs::EspDefaultNvsPartition;`).\n \n ```rust\n let nvs = EspDefaultNvsPartition::take().context(\"Unable to access default NVS partition\")?;\n ```\n13. The connection to the WiFi is done through the modem, which can be accessed via the peripherals of the board. We pass the peripherals, obtain the modem, and use it to first wrap it with a WiFi driver and then get an instance that we will use to manage the WiFi connection (`use esp_idf_svc::wifi::{EspWifi, BlockingWifi};`):\n \n ```rust\n fn connect_to_wifi(ssid: &str, passwd: &str,\n modem: impl Peripheral + 'static,\n ) -> Result<()> {\n // Auth checks here and sys_loop ...\n let mut esp_wifi = EspWifi::new(modem, sys_loop.clone(), Some(nvs))?;\n let mut wifi = BlockingWifi::wrap(&mut esp_wifi, sys_loop)?;\n ```\n14. Then, we add a configuration to the WiFi (`use esp_idf_svc::wifi;`):\n \n ```rust\n wifi.set_configuration(&mut wifi::Configuration::Client(\n wifi::ClientConfiguration {\n ssid: ssid\n .try_into()\n .map_err(|_| anyhow::Error::msg(\"Unable to use SSID\"))?,\n password: passwd\n .try_into()\n .map_err(|_| anyhow::Error::msg(\"Unable to use Password\"))?,\n auth_method,\n ..Default::default()\n },\n ))?;\n ```\n15. With the configuration in place, we start the WiFi radio, connect to the WiFi network, and wait to have the connection completed. Any errors will bubble up:\n \n ```rust\n wifi.start()?;\n wifi.connect()?;\n wifi.wait_netif_up()?;\n ```\n16. It is useful at this point to display the data of the connection.\n \n ```rust\n let ip_info = wifi.wifi().sta_netif().get_ip_info()?;\n log::info!(\"DHCP info: {:?}\", ip_info);\n ```\n17. 
We also want to return the variable that holds the connection. Otherwise, the connection will be closed when it goes out of scope at the end of this function. We change the signature to be able to do it:\n \n ```rust\n ) -> Result>> {\n ```\n18. And return that value:\n \n ```rust\n Ok(Box::new(wifi_driver))\n ```\n19. We are going to initialize the connection to the WiFi from our function to read the noise, so let's add the modem as a parameter:\n \n ```rust\n fn read_noise_level(\n adc1: ADC1,\n adc1_pin: GPIO,\n modem: impl Peripheral + 'static,\n ) -> !\n ```\n20. This new parameter has to be initialized in the main function:\n \n ```rust\n let modem = peripherals.modem;\n ```\n21. And passed it onto the function when we spawn the thread:\n \n ```rust\n scope.spawn(|| read_noise_level(adc, adc_pin, modem));\n ```\n22. Inside the function where we plan to use these parameters, we retrieve the configuration. The `CONFIGURATION` constant is generated automatically by the `cfg-toml` crate using the type of the struct:\n \n ```rust\n let app_config = CONFIGURATION;\n ```\n23. Next, we try to connect to the WiFi using those parameters:\n \n ```rust\n let _wifi = match connect_to_wifi(app_config.wifi_ssid, app_config.wifi_password, modem) {\n Ok(wifi) => wifi,\n Err(err) => {\n \n }\n };\n ```\n24. And, when dealing with the error case, we change the value of the status:\n \n ```rust\n log::error!(\"Connect to WiFi: {}\", err);\n status.store(DeviceStatus::WifiError as u8, Relaxed);\n ```\n25. This function doesn't take the state as an argument, so we add it to its signature:\n \n ```rust\n fn read_noise_level(\n status: &AtomicU8,\n ```\n26. That argument is provided when the thread is spawned:\n \n ```rust\n scope.spawn(|| read_noise_level(status, adc, adc_pin, modem));\n ```\n27. We don't want the status to be changed sequentially anymore, so we remove that thread and the function that was implementing that change.\n28. We run this code with `cargo r` to verify that we can connect to the network. However, this version is going to crash. \ud83d\ude31 Our function is going to exceed the default stack size for a thread, which, by default, is 4Kbytes.\n29. We can use a thread builder, instead of the `spawn` function, to change the stack size:\n \n ```rust\n thread::Builder::new()\n .stack_size(6144)\n .spawn_scoped(scope, || read_noise_level(status, adc, adc_pin, modem))\n .unwrap();\n ```\n30. After performing this change, we run it again `cargo r` and it should work as expected.\n\n### Set up the MQTT broker\n\nThe next step after connecting to the WiFi is to connect to the MQTT broker as a client, but we don't have an MQTT broker yet. In this section, I will show you how to install Mosquitto, which is an open-source project of the Eclipse Foundation.\n\n1. For this section, we need to have an MQTT broker. In my case, I will be installing Mosquitto, which implements versions 3.1.1 and 5.0 of the MQTT protocol. It will run in the same Raspberry Pi that I am using as a collecting station.\n \n ```shell\n sudo apt-get update && sudo apt-get upgrade\n sudo apt-get install -y {mosquitto,mosquitto-clients,mosquitto-dev}\n sudo systemctl enable mosquitto.service\n ```\n2. We modify the Mosquitto configuration to enable clients to connect from outside of the localhost. 
We need some credentials and a configuration that enforces authentication:\n \n ```shell\n sudo mosquitto_passwd -c -b /etc/mosquitto/passwd soundsensor \"Zap\\!Pow\\!Bam\\!Kapow\\!\"\n sudo sh -c 'echo \"listener 1883\\nallow_anonymous false\\npassword_file /etc/mosquitto/passwd\" > /etc/mosquitto/conf.d/remote_access.conf'\n sudo systemctl restart mosquitto\n ```\n3. Let's test that we can subscribe and publish to a topic. The naming convention tends to use lowercase letters, numbers, and dashes only and reserves dashes for separating topics hierarchically. On one terminal, subscribe to the `testTopic`:\n \n ```rust\n mosquitto_sub -t test/topic -u soundsensor -P \"Zap\\!Pow\\!Bam\\!Kapow\\!\"\n ```\n4. And on another terminal, publish something to it:\n \n ```rust\n mosquitto_pub -d -t test/topic -m \"Hola caracola\" -u soundsensor -P \"Zap\\!Pow\\!Bam\\!Kapow\\!\"\n ```\n5. You should see the message that we wrote on the second terminal appear on the first one. This means that Mosquitto is running as expected.\n\n### Publish to MQTT from the sensor\n\nWith the MQTT broker installed and ready, we can write the code to connect our sensor to it as an MQTT client and publish its data.\n\n1. We are going to need the credentials that we have just created to publish data to the MQTT broker, so we add them to the `Configuration` structure:\n \n ```rust\n #[toml_cfg::toml_config]\n struct Configuration {\n #[default(\"NotMyWifi\")]\n wifi_ssid: &'static str,\n #[default(\"NotMyPassword\")]\n wifi_password: &'static str,\n #[default(\"mqttserver\")]\n mqtt_host: &'static str,\n #[default(\"\")]\n mqtt_user: &'static str,\n #[default(\"\")]\n mqtt_password: &'static str,\n }\n ```\n2. You have to remember to add the values that make sense to the `cfg.toml` file for your environment. Don't expect to get them from my repo, because we have asked Git to ignore this file. At the very least, you need the hostname or IP address of your MQTT broker. Copy the user name and password that we created previously:\n \n ```\n [mosquitto-bzzz]\n wifi_ssid = \"ThisAintEither\"\n wifi_password = \"NorIsThisMyPassword\"\n mqtt_host = \"mqttsystem\"\n mqtt_user = \"soundsensor\"\n mqtt_password = \"Zap!Pow!Bam!Kapow!\"\n ```\n3. Coming back to the function that we have created to read the noise sensor, we can now initialize an MQTT client after connecting to the WiFi (`use mqtt::client::{EspMqttClient, MqttClientConfiguration, QoS},`):\n \n ```rust\n let mut mqtt_client =\n EspMqttClient::new()\n .expect(\"Unable to initialize MQTT client\");\n ```\n4. The first parameter is a URL to the MQTT server that will include the user and password, if defined:\n \n ```rust\n let mqtt_url = if app_config.mqtt_user.is_empty() || app_config.mqtt_password.is_empty() {\n format!(\"mqtt://{}/\", app_config.mqtt_host)\n } else {\n format!(\n \"mqtt://{}:{}@{}/\",\n app_config.mqtt_user, app_config.mqtt_password, app_config.mqtt_host\n )\n };\n ```\n5. The second parameter is the configuration. Let's add them to the creation of the MQTT client:\n \n ```rust\n EspMqttClient::new(&mqtt_url, &MqttClientConfiguration::default(), |_| {\n log::info!(\"MQTT client callback\")\n })\n ```\n6. In order to publish, we need to define the topic:\n \n ```rust\n const TOPIC: &str = \"home/noise sensor/01\";\n ```\n7. And a variable that will be used to contain the message that we will publish:\n \n ```rust\n let mut mqtt_msg: String;\n ```\n8. 
Inside the loop, we will format the noise value because it is sent as a string:\n \n ```rust\n mqtt_msg = format!(\"{}\", d_b);\n ```\n9. We publish this value using the MQTT client:\n \n ```rust\n if let Ok(msg_id) = mqtt_client.publish(TOPIC, QoS::AtMostOnce, false, mqtt_msg.as_bytes())\n {\n println!(\n \"MSG ID: {}, ADC values: {:?}, sum: {}, and dB: {} \",\n msg_id, sample_buffer, sum, d_b\n );\n } else {\n println!(\"Unable to send MQTT msg\");\n }\n ```\n10. As we did when we were publishing from the command line, we need to subscribe, in an independent terminal, to the topic that we plan to publish to. In this case, we are going to start with `home/noise sensor/01`. Notice that we represent a hierarchy, i.e., there are noise sensors at home and each of the sensors has an identifier. Also, notice that levels of the hierarchy are separated by slashes and can include spaces in their names.\n \n ```shell\n mosquitto_sub -t \"home/noise sensor/01\" -u soundsensor -P \"Zap\\!Pow\\!Bam\\!Kapow\\!\"\n ```\n11. Finally, we compile and run the firmware with `cargo r` and will be able to see those values appearing on the terminal that is subscribed to the topic.\n\n### Use a unique ID for each sensor\n\nI would like to finish this firmware solving a problem that won't show up until we have two sensors or more. Our firmware uses a constant topic. That means that two sensors with the same firmware will use the same topic and we won't have a way to know which value corresponds to which sensor. A better option is to use a unique identifier that will be different for every ESP32-C6 board. We can use the MAC address for that.\n\n1. Let's start by creating a function that returns that identifier:\n \n ```rust\n fn get_sensor_id() -> String {\n }\n ```\n2. Our function is going to use an unsafe function from ESP-IDF, and format the result as a `String` (`use esp_idf_svc::sys::{esp_base_mac_addr_get, ESP_OK};` and `use std::fmt::Write`). The function that returns the MAC address uses a pointer and, having been written in C++, couldn't care less about the safety rules that Rust code must obey. That function is considered unsafe and, as such, Rust requires us to use it within an `unsafe` scope. It is their way to tell us, \"Here be dragons\u2026 and you know about it\":\n \n ```rust\n let mut mac_addr = [0u8; 8];\n unsafe {\n match esp_base_mac_addr_get(mac_addr.as_mut_ptr()) {\n ESP_OK => {\n let sensor_id = mac_addr.iter().fold(String::new(), |mut output, b| {\n let _ = write!(output, \"{b:02x}\");\n output\n });\n log::info!(\"Id: {:?}\", sensor_id);\n sensor_id\n }\n _ => {\n log::error!(\"Unable to get id.\");\n String::from(\"BADCAFE00BADBEEF\")\n }\n }\n }\n ```\n3. Then, we use the function before defining the topic and use its result with it:\n \n ```rust\n let sensor_id = get_sensor_id();\n let topic = format!(\"home/noise sensor/{sensor_id}\");\n ```\n4. And we slightly change the way we publish the data to use the topic:\n \n ```rust\n if let Ok(msg_id) = mqtt_client.publish(&topic, QoS::AtMostOnce, false, mqtt_msg.as_bytes())\n ```\n5. We also need to change the subscription so we listen to all the topics that start with `home/sensor/` and have one more level:\n \n ```shell\n mosquitto_sub -t \"home/noise sensor/+\" -u soundsensor -P \"Zap\\!Pow\\!Bam\\!Kapow\\!\"\n ```\n6. 
We compile and run with `cargo r` and the values start showing up on the terminal where the subscription was initiated.\n\n## Recap and future work\n\nIn this article, we have used Rust to write the firmware for an ESP32-C6-DevKitC-1 board from beginning to end. Although we can agree that Python was an easier approach for our first firmware, I believe that Rust is a more robust, approachable, and useful language for this purpose.\n\nThe firmware that we have created can inform the user of any problems using an RGB LED, measure noise in something close enough to deciBels, connect our board to the WiFi and then to our MQTT broker as a client, and publish the measurements of our noise sensor. Not bad for a single tutorial.\n\nWe have even gotten ahead of ourselves and added some code to ensure that different sensors with the same firmware publish their values to different topics. And to do so, we have done a very brief incursion in the universe of *unsafe Rust* and survived the wilderness. Now you can go to a bar and tell your friends, \"I wrote unsafe Rust.\" Well done!\n\nIn our next article, we will be writing C++ code again to collect the data from the MQTT broker and then send it to our instance of MongoDB Atlas in the Cloud. So get ready!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt567d405088cd0cc8/65f858c6a1e8150c7bd5bf74/ESP32-C6_B.jpeg\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc51e9f705b7af11c/65f858c66405528ee97b0a83/ESP32-C6_A.jpeg", "format": "md", "metadata": {"tags": ["C++", "Rust", "RaspberryPi"], "pageDescription": "We write in Rust from scratch the firmware of a noise sensor implemented with an ESP32. We use the neopixel to inform the user about the status of the device. And we make that sensor expose the measurements through MQTT.", "contentType": "Tutorial"}, "title": "Red Mosquitto: Implement a noise sensor with an MQTT client in an ESP32", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/python-subsets-and-joins", "action": "created", "body": "# Coding With Mark: Abstracting Joins & Subsets in Python\n\nThis tutorial will talk about MongoDB design patterns \u2014 specifically, the Subset Pattern \u2014 and show how you can build an abstraction in your Python data model that hides how data is actually modeled within your database.\n\nThis is the third tutorial in a series! Feel free to check out the first tutorial\u00a0or second tutorial\u00a0if you like, but it's not necessary if you want to just read on.\n\n## Coding with Mark?\n\nThis tutorial is loosely based on some episodes of a livestream I host, called \"Coding with Mark.\" I'm streaming on Wednesdays at 2 p.m. GMT (that's 9 a.m. ET or 6 a.m. PT, if you're an early riser!). If that time doesn't work for you, you can always catch up by watching the recordings!\n\nCurrently, I'm building an experimental data access layer library that should provide a toolkit for abstracting complex document models from the business logic layer of the application that's using them.\n\nYou can check out the code in the project's GitHub repository!\n\n## Setting the scene\n\nThe purpose of docbridge, my Object-Document Mapper, is to abstract the data model used within MongoDB from the data model used by a Python program. 
With a codebase of any size, you *need*\u00a0something like this because otherwise, every time you change your data model (in your database), you need to change the object model (in your code). By having an abstraction layer, you localize all of this mapping into a single area of your codebase, and that's then the only part that needs to change when you change your data model. This ability to change your data model really allows you to take advantage of the flexibility of MongoDB's document model.\n\nIn the first tutorial, I showed a very simple abstraction, the FallbackField, that would try various different field names in a document until it found one that existed, and then would return that value. This was a very simple implementation of the Schema Versioning pattern.\n\nIn this tutorial, I'm going to abstract something more complex: the Subset Pattern.\n\n## The Subset Pattern\n\nMongoDB allows you to store arrays in your documents, natively. The values in those arrays can be primitive types, like numbers, strings, dates, or even subdocuments. But sometimes, those arrays can get too big, and the Subset Pattern\u00a0describes a technique where the most important subset of the array (often just the *first*\u00a0few items) is stored directly in the embedded array, and any overflow items are stored in other documents and looked up only when necessary.\n\nThis solves two design problems: First, we recommend that you don't store more than 200 items in an array, as the more items you have, the slower the database is at traversing the fields in each document.\u00a0Second, the subset pattern also answers a question that I've seen many times when we've been teaching data modeling: \"How do I stop my array from growing so big that the document becomes bigger than the 16MB limit?\" While we're on the subject, do avoid your documents getting this big \u2014 it usually implies that you could improve your data model, for example, by separating out data into separate documents, or if you're storing lots of binary data, you could keep it outside your database, in an object store.\n\n## Implementing the SequenceField type\n\nBefore delving into how to abstract a lookup for the extra array items that aren't embedded in the source document, I'll first implement a wrapper type for a BSON array. This can be used to declare array fields on a `Document`\u00a0class, instead of the `Field`\u00a0type that I implemented in previous articles.\n\nI'm going to define a `SequenceField`\u00a0to map a document's array into my access layer's object model. The core functionality of a SequenceField is you can specify a type for the array's items, and then when you iterate through the sequence, it will return you objects of that type, instead of just yielding the type that's stored in the document.\n\nA concrete example would be a social media API's UserProfile class, which would store a list of Follower objects. I've created some sample documents with a Python script using Faker. A sample document looks like this:\n\n```python\n{\n\u00a0 \"_id\": { \"$oid\": \"657072b56731c9e580e9dd70\" },\n\u00a0 \"user_id\": \"4\",\n\u00a0 \"user_name\": \"@tanya15\",\n\u00a0 \"full_name\": \"Deborah White\",\n\u00a0 \"birth_date\": { \"$date\": { \"$numberLong\": \"931219200000\" } },\n\u00a0 \"email\": \"deanjacob@yahoo.com\",\n\u00a0 \"bio\": \"Music conference able doctor degree debate. 
Participant usually above relate.\",\n\u00a0 \"follower_count\": { \"$numberInt\": \"59\" },\n\u00a0 \"followers\": \n\u00a0 \u00a0 {\n\u00a0 \u00a0 \u00a0 \"_id\": { \"$oid\": \"657072b66731c9e580e9dda6\" },\n\u00a0 \u00a0 \u00a0 \"user_id\": \"58\",\n\u00a0 \u00a0 \u00a0 \"user_name\": \"@rduncan\",\n\u00a0 \u00a0 \u00a0 \"bio\": \"Rich beautiful color life. Relationship instead win join enough board successful.\"\n\u00a0 \u00a0 },\n\u00a0 \u00a0 {\n\u00a0 \u00a0 \u00a0 \"_id\": { \"$oid\": \"657072b66731c9e580e9dd99\" },\n\u00a0 \u00a0 \u00a0 \"user_id\": \"45\",\n\u00a0 \u00a0 \u00a0 \"user_name\": \"@paynericky\",\n\u00a0 \u00a0 \u00a0 \"bio\": \"Picture day couple democratic morning. Environment manage opportunity option star food she. Occur imagine population single avoid.\"\n },\n # ... other followers\n ]\n}\n```\n\nI can model this data using two classes \u2014 one for the top-level Profile data, and one for the summary data for that profile's followers (embedded in the array).\n\n```python\nclass Follower(Document):\n\u00a0 \u00a0 _id = Field(transform=str)\n\u00a0 \u00a0 user_name = Field()\n\nclass Profile(Document):\n\u00a0 \u00a0 _id = Field(transform=str)\n\u00a0 \u00a0 followers = SequenceField(type=Follower)\n```\n\nIf I want to loop through all the followers of a profile instance, each item should be a `Follower`\u00a0instance:\n\n```python\nprofile = Profile(SOME_BSON_DATA)\nfor follower in profile.followers:\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0assert isinstance(follower, Follower)\n```\n\nThis behavior can be implemented in a similar way to the `Field`\u00a0class, by implementing it as a descriptor, with a `__get__`\u00a0method that, in this case, yields a `Follower`\u00a0constructed for each item in the underlying BSON array.\u00a0The code looks a little like this:\n\n```python\nclass SequenceField:\n\u00a0 \u00a0 \"\"\"\n\u00a0 \u00a0 Allows an underlying array to have its elements wrapped in\n\u00a0 \u00a0 Document instances.\n\u00a0 \u00a0 \"\"\"\n\n\u00a0 \u00a0 def __init__(\n\u00a0 \u00a0 \u00a0 \u00a0 self,\n\u00a0 \u00a0 \u00a0 \u00a0 type,\n\u00a0 \u00a0 \u00a0 \u00a0 field_name=None,\n\u00a0 \u00a0 ):\n\u00a0 \u00a0 \u00a0 \u00a0 self._type = type\n\u00a0 \u00a0 \u00a0 \u00a0 self.field_name = field_name\n\n\u00a0 \u00a0 def __set_name__(self, owner, name):\n\u00a0 \u00a0 \u00a0 \u00a0 \"\"\"\n\u00a0 \u00a0 \u00a0 \u00a0 Called when the enclosing Document subclass (owner) is defined.\n\u00a0 \u00a0 \u00a0 \u00a0 \"\"\"\n\u00a0 \u00a0 \u00a0 \u00a0 self.name = name \u00a0# Store the attribute name.\n\n\u00a0 \u00a0 \u00a0 \u00a0 # If a field-name mapping hasn't been provided,\n\u00a0 \u00a0 \u00a0 \u00a0 # the BSON field will have the same name as the attribute name.\n\u00a0 \u00a0 \u00a0 \u00a0 if self.field_name is None:\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 self.field_name = name\n\n\u00a0 \u00a0 def __get__(self, ob, cls):\n\u00a0 \u00a0 \u00a0 \u00a0 \"\"\"\n\u00a0 \u00a0 \u00a0 \u00a0 Called when the SequenceField attribute is accessed on the enclosed\n\u00a0 \u00a0 \u00a0 \u00a0 Document subclass.\n\u00a0 \u00a0 \u00a0 \u00a0 \"\"\"\n\u00a0 \u00a0 \u00a0 \u00a0 try:\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 # Lookup the field in the BSON, and return an array where each item\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 # is wrapped by the class defined as type in __init__:\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 return [\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 self._type(item, ob._db)\n\u00a0 \u00a0 \u00a0 \u00a0 
\u00a0 \u00a0 \u00a0 \u00a0 for item in ob._doc[self.field_name]\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 ]\n\u00a0 \u00a0 \u00a0 \u00a0 except KeyError as ke:\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 raise ValueError(\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 f\"Attribute {self.name!r} is mapped to missing document property {self.field_name!r}.\"\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 ) from ke\n```\n\nThat's a lot of code, but quite a lot of it is duplicated from `Field`\u00a0-\u00a0I'll fix that with some inheritance at some point. The most important part is near the end:\n\n```python\nreturn [\n\n\u00a0 \u00a0 self._type(item, ob._db)\n\n\u00a0 \u00a0 for item in ob._doc[self.field_name]\n]\n```\n\nIn the concrete example above, this would resolve to something like this fictional code:\n\n```python\nreturn [\n\u00a0 \u00a0Follower(item, db=None) for item in profile._doc[\"followers\"]\n]\n```\n\n## Adding in the extra followers\n\nThe dataset I've created for working with this only stores the first 20 followers within a profile document. The rest are stored in a \"followers\" collection, and they're bucketed to store up to 20 followers per document, in a field called \"followers.\" The \"user_id\" field says who the followers belong to.\u00a0A single document in the \"followers\" collection looks like this:\n\n![A document containing a \"followers\" field that contains some more followers for the user with a \"user_id\" of \"4\"][1]\n\n[The Bucket Pattern\u00a0is a technique for putting lots of small subdocuments together in a bucket document, which can make it more efficient to retrieve documents that are usually retrieved together, and it can keep index sizes down. The downside is that it makes updating individual subdocuments slightly slower and more complex.\n\n### How to query documents in buckets\n\nI have a collection where each document contains an array of followers \u2014 a \"bucket\" of followers. But what I *want*\u00a0is a query that returns individual follower documents. Let's break down how this query will work:\n\n1. I want to look up all the documents for a particular user_id.\n1. For each item in followers \u2014 each item is a follower \u2014 I want to yield a single document for that follower.\n1. I want to restructure each document so that it *only*\u00a0contains the follower information, not the bucket information.\n\nThis is what I love about aggregation pipelines \u2014 once I've come up with those steps, I can often convert each step into an aggregation pipeline stage.\n\n**Step 1**: Look up all the documents for a particular user:\n\n```python\n\u00a0{\"$match\": {\"user_id\": \"4\"}}\n```\n\nNote that this stage has hard-coded the value \"4\" for the \"user_id\" field. I'll explain later how dynamic values can be inserted into these queries. This outputs a single document, a bucket, containing many followers, in a field called \"followers\":\n\n```json\n{\n\u00a0 \"user_name\": \"@tanya15\",\n\u00a0 \"full_name\": \"Deborah White\",\n\u00a0 \"birth_date\": {\n\u00a0 \u00a0 \"$date\": \"1999-07-06T00:00:00.000Z\"\n\u00a0 },\n\u00a0 \"email\": \"deanjacob@yahoo.com\",\n\u00a0 \"bio\": \"Music conference able doctor degree debate. 
Participant usually above relate.\",\n\u00a0 \"user_id\": \"4\",\n\u00a0 \"follower_count\": 59,\n\u00a0 \"followers\": \n\u00a0 \u00a0 {\n\u00a0 \u00a0 \u00a0 \"_id\": {\n\u00a0 \u00a0 \u00a0 \u00a0 \"$oid\": \"657072b66731c9e580e9dda6\"\n\u00a0 \u00a0 \u00a0 },\n\u00a0 \u00a0 \u00a0 \"user_id\": \"58\",\n\u00a0 \u00a0 \u00a0 \"user_name\": \"@rduncan\",\n\u00a0 \u00a0 \u00a0 \"bio\": \"Rich beautiful color life. Relationship instead win join enough board successful.\"\n\u00a0 \u00a0 },\n\u00a0 \u00a0 {\n\u00a0 \u00a0 \u00a0 \"bio\": \"Picture day couple democratic morning. Environment manage opportunity option star food she. Occur imagine population single avoid.\",\n\u00a0 \u00a0 \u00a0 \"_id\": {\n\u00a0 \u00a0 \u00a0 \u00a0 \"$oid\": \"657072b66731c9e580e9dd99\"\n\u00a0 \u00a0 \u00a0 },\n\u00a0 \u00a0 \u00a0 \"user_id\": \"45\",\n\u00a0 \u00a0 \u00a0 \"user_name\": \"@paynericky\"\n\u00a0 \u00a0 },\n\u00a0 \u00a0 {\n\u00a0 \u00a0 \u00a0 \"_id\": {\n\u00a0 \u00a0 \u00a0 \u00a0 \"$oid\": \"657072b76731c9e580e9ddba\"\n\u00a0 \u00a0 \u00a0 },\n\u00a0 \u00a0 \u00a0 \"user_id\": \"78\",\n\u00a0 \u00a0 \u00a0 \"user_name\": \"@tiffanyhicks\",\n\u00a0 \u00a0 \u00a0 \"bio\": \"Sign writer win. Look television official information laugh. Lay plan effect break expert message during firm.\"\n\u00a0 \u00a0 },\n\u00a0 \u00a0. . .\n\u00a0 ],\n\u00a0 \"_id\": {\n\u00a0 \u00a0 \"$oid\": \"657072b56731c9e580e9dd70\"\n\u00a0 }\n}\n```\n\n**Step 2**: Yield a document for each follower \u2014 the $unwind stage can do exactly this:\n\n```python\n{\"$unwind\": \"$followers\"}\n```\n\nThis instructs MongoDB to return one document for each item in the \"followers\" array. All of the document contents will be included, but the followers *array*\u00a0will be replaced with the single follower *subdocument*\u00a0each time. This outputs several documents, each containing a single follower in the \"followers\" field:\n\n```python\n# First document:\n{\n\u00a0 \"bio\": \"Music conference able doctor degree debate. Participant usually above relate.\",\n\u00a0 \"follower_count\": 59,\n\u00a0 \"followers\": {\n\u00a0 \u00a0 \"_id\": {\n\u00a0 \u00a0 \u00a0 \"$oid\": \"657072b66731c9e580e9dda6\"\n\u00a0 \u00a0 },\n\u00a0 \u00a0 \"user_id\": \"58\",\n\u00a0 \u00a0 \"user_name\": \"@rduncan\",\n\u00a0 \u00a0 \"bio\": \"Rich beautiful color life. Relationship instead win join enough board successful.\"\n\u00a0 },\n\u00a0 \"user_id\": \"4\",\n\u00a0 \"user_name\": \"@tanya15\",\n\u00a0 \"full_name\": \"Deborah White\",\n\u00a0 \"birth_date\": {\n\u00a0 \u00a0 \"$date\": \"1999-07-06T00:00:00.000Z\"\n\u00a0 },\n\u00a0 \"email\": \"deanjacob@yahoo.com\",\n\u00a0 \"_id\": {\n\u00a0 \u00a0 \"$oid\": \"657072b56731c9e580e9dd70\"\n\u00a0 }\n}\n\n# Second document\n{\n\u00a0 \"_id\": {\n\u00a0 \u00a0 \"$oid\": \"657072b56731c9e580e9dd70\"\n\u00a0 },\n\u00a0 \"full_name\": \"Deborah White\",\n\u00a0 \"email\": \"deanjacob@yahoo.com\",\n\u00a0 \"bio\": \"Music conference able doctor degree debate. Participant usually above relate.\",\n\u00a0 \"follower_count\": 59,\n\u00a0 \"user_id\": \"4\",\n\u00a0 \"user_name\": \"@tanya15\",\n\u00a0 \"birth_date\": {\n\u00a0 \u00a0 \"$date\": \"1999-07-06T00:00:00.000Z\"\n\u00a0 },\n\u00a0 \"followers\": {\n\u00a0 \u00a0 \"_id\": {\n\u00a0 \u00a0 \u00a0 \"$oid\": \"657072b66731c9e580e9dd99\"\n\u00a0 \u00a0 },\n\u00a0 \u00a0 \"user_id\": \"45\",\n\u00a0 \u00a0 \"user_name\": \"@paynericky\",\n\u00a0 \u00a0 \"bio\": \"Picture day couple democratic morning. 
Environment manage opportunity option star food she. Occur imagine population single avoid.\"\n\u00a0 }\n\n# . . . More documents follow\n\n```\n\n**Step 3**: Restructure the document, pulling the \"follower\" value up to the top-level of the document. There's a special stage for doing this \u2014 $replaceRoot:\n\n```python\n{\"$replaceRoot\": {\"newRoot\": \"$followers\"}},\n```\n\nAdding the stage above results in each document containing a single follower, at the top level:\n\n```python\n# Document 1:\n{\n\u00a0 \"_id\": {\n\u00a0 \u00a0 \"$oid\": \"657072b66731c9e580e9dda6\"\n\u00a0 },\n\u00a0 \"user_id\": \"58\",\n\u00a0 \"user_name\": \"@rduncan\",\n\u00a0 \"bio\": \"Rich beautiful color life. Relationship instead win join enough board successful.\"\n}\n\n# Document 2\n{\n\u00a0 \"_id\": {\n\u00a0 \u00a0 \"$oid\": \"657072b66731c9e580e9dd99\"\n\u00a0 },\n\u00a0 \"user_id\": \"45\",\n\u00a0 \"user_name\": \"@paynericky\",\n\u00a0 \"bio\": \"Picture day couple democratic morning. Environment manage opportunity option star food she. Occur imagine population single avoid.\"\n}\n} # . . . More documents follow\n```\n\nPutting it all together, the query looks like this:\n\n```python\n[\n\u00a0 \u00a0 {\"$match\": {\"user_id\": \"4\"}},\n\u00a0 \u00a0 {\"$unwind\": \"$followers\"},\n\u00a0 \u00a0 {\"$replaceRoot\": {\"newRoot\": \"$followers\"}},\n]\n```\n\nI've explained the query that I want to be run each time I iterate through the followers field in my data abstraction library. Now, I'll show you how to hide this query (or whatever query is required) away in the SequenceField implementation.\n\n### Abstracting out the Lookup\n\nNow, I would like to change the behavior of the SequenceField so that it does the following:\n\n- Iterate through the embedded subdocuments and yield each one, wrapped by type\u00a0(the callable that wraps each subdocument.)\n- If the user gets to the end of the embedded array, make a query to look up the rest of the followers and yield them one by one, also wrapped by type.\n\nFirst, I'll change the `__init__`\u00a0method so that the user can provide two extra parameters:\n\n- The collection that contains the extra documents, superset_collection\n- The query to run against that collection to return individual documents, superset_query\n\nThe result looks like this:\n\n```python\nclass Field: \n\u00a0 \u00a0 def __init__(\n\u00a0 \u00a0 \u00a0 \u00a0 self,\n\u00a0 \u00a0 \u00a0 \u00a0 type,\n\u00a0 \u00a0 \u00a0 \u00a0 field_name=None,\n\u00a0 \u00a0 \u00a0 \u00a0 superset_collection=None,\n\u00a0 \u00a0 \u00a0 \u00a0 superset_query: Callable = None,\n\u00a0 \u00a0 ):\n\u00a0 \u00a0 \u00a0 \u00a0 self._type = type\n\u00a0 \u00a0 \u00a0 \u00a0 self.field_name = field_name\n\u00a0 \u00a0 \u00a0 \u00a0 self.superset_collection = superset_collection\n\u00a0 \u00a0 \u00a0 \u00a0 self.superset_query = superset_query\n```\n\nThe query will have to be provided as a callable, i.e., a function, lambda expression, or method. The reason for that is that generating the query will usually need access to some of the state of the document (in this case, the `user_id`, to construct the query to look up the correct follower documents.) 
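For example, the callable for the followers lookup can be a plain function that receives the wrapping profile document and returns the aggregation pipeline built above. This is a preview of the `extra_followers_query` function shown in full near the end of this article:

```python
def extra_followers_query(profile):
    # The wrapping document supplies the user_id needed to build the query.
    return [
        {"$match": {"user_id": profile.user_id}},
        {"$unwind": "$followers"},
        {"$replaceRoot": {"newRoot": "$followers"}},
    ]
```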
The callable is stored in the Field instance, and then when the lookup is needed, it calls the callable, passing it the Document that contains the Field, so the callable can look up the user \"\\_id\" in the wrapped `_doc`\u00a0dictionary.\n\nNow that the user can provide enough information to look up the extra followers (the superset), I changed the `__get__`\u00a0method to perform the lookup when it runs out of embedded followers. To make this simpler to write, I took advantage of *laziness*. Twice! Here's how:\n\n**Laziness Part 1**: When you execute a query by calling `find`\u00a0or `aggregate`, the query is not executed immediately. Instead, the method immediately returns a cursor. Cursors are lazy \u2014 which means they don't do anything until you start to use them, by iterating over their contents. As soon as you start to iterate, or loop, over the cursor, it *then*\u00a0queries the database and starts to yield results.\n\n**Laziness Part 2**: Most of the functions in the core Python `itertools`\u00a0module are also lazy, including the `chain`\u00a0function. Chain is called with one or more iterables as arguments and then *only*\u00a0starts to loop through the later arguments when the earlier iterables are exhausted (meaning the code has looped through all of the contents of the iterable.)\n\nThese can be combined to create a single iterable that will never request any extra followers from the database, *unless*\u00a0the code specifically requests more items after looping through the embedded items:\n\n```python\nembedded_followers = self._doc[\"followers\"] # a list\ncursor = followers.find({\"user_id\": \"4\"}) \u00a0 # a lazy database cursor\n\n# Looping through all_followers will only make a database call if you have \n# looped through all of the contents of embedded_followers:\nall_followers = itertools.chain(embedded_followers, cursor)\n```\n\nThe real code is a bit more flexible, because it supports both find and aggregate queries. 
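As a rough illustration, these are the two shapes the callable can return (the values are just examples):

```python
find_style_query = {"user_id": "4"}        # a dict, passed to collection.find()
aggregate_style_query = [                  # a list of stages, passed to collection.aggregate()
    {"$match": {"user_id": "4"}},
    {"$unwind": "$followers"},
    {"$replaceRoot": {"newRoot": "$followers"}},
]
```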
It recognises the type because find queries are provided as dicts, and aggregate queries are lists.\n\n```python\ndef __get__(self, ob, cls):\n\u00a0 \u00a0 if self.superset_query is None:\n\u00a0 \u00a0 \u00a0 \u00a0 # Use an empty sequence if there are no extra items.\n\u00a0 \u00a0 \u00a0 \u00a0 # It's still iterable, like a cursor, but immediately exits.\n\u00a0 \u00a0 \u00a0 \u00a0 superset = []\n\u00a0 \u00a0 else:\n\u00a0 \u00a0 \u00a0 \u00a0 # Call the superset_query callable to obtain the generated query:\n\u00a0 \u00a0 \u00a0 \u00a0 query = self.superset_query(ob)\n\n\u00a0 \u00a0 \u00a0 \u00a0 # If the query is a mapping, it's a find query, otherwise it's an\n\u00a0 \u00a0 \u00a0 \u00a0 # aggregation pipeline.\n\u00a0 \u00a0 \u00a0 \u00a0 if isinstance(query, Mapping):\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 superset = ob._db.get_collection(self.superset_collection).find(query)\n\u00a0 \u00a0 \u00a0 \u00a0 elif isinstance(query, Iterable):\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 superset = ob._db.get_collection(self.superset_collection).aggregate(\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 query\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 )\n\u00a0 \u00a0 \u00a0 \u00a0 else:\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 raise Exception(\"Returned was not a mapping or iterable.\")\n\n\u00a0 \u00a0 try:\n\u00a0 \u00a0 \u00a0 \u00a0 # Return an iterable that first yields all the embedded items, and\n\n\u00a0 \u00a0 \u00a0 \u00a0 return chain(\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 [self._type(item, ob._db) for item in ob._doc[self.field_name]],\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 (self._type(item, ob._db) for item in superset),\n\u00a0 \u00a0 \u00a0 \u00a0 )\n\u00a0 \u00a0 except KeyError as ke:\n\u00a0 \u00a0 \u00a0 \u00a0 raise ValueError(\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 f\"Attribute {self.name!r} is mapped to missing document property {self.field_name!r}.\"\n\u00a0 \u00a0 \u00a0 \u00a0 ) from ke\n```\n\nI've added quite a few comments to the code above, so hopefully you can see the relationship between the simplified code above it and the real code here.\n\n## Using the SequenceField to declare relationships\n\nImplementing `Profile`\u00a0and `Follower`\u00a0is now a matter of providing the query (wrapped in a lambda expression) and the collection that should be queried.\n\n```python\n# This is the same as it was originally\nclass Follower(Document):\n\u00a0 \u00a0 _id = Field(transform=str)\n\u00a0 \u00a0 user_name = Field()\n\ndef extra_followers_query(profile):\n\u00a0 \u00a0 return [\n\u00a0 \u00a0 \u00a0 \u00a0 {\n\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \"$match\": {\"user_id\": profile.user_id},\n\u00a0 \u00a0 \u00a0 \u00a0 },\n\u00a0 \u00a0 \u00a0 \u00a0 {\"$unwind\": \"$followers\"},\n\u00a0 \u00a0 \u00a0 \u00a0 {\"$replaceRoot\": {\"newRoot\": \"$followers\"}},\n\u00a0 \u00a0 ]\n\u00a0 \u00a0 \nclass Profile(Document):\n\u00a0 \u00a0 _id = Field(transform=str)\n\u00a0 \u00a0 followers = SequenceField(\n\u00a0 \u00a0 \u00a0 \u00a0 type=Follower,\n\u00a0 \u00a0 \u00a0 \u00a0 superset_collection=\"followers\",\n\u00a0 \u00a0 \u00a0 \u00a0 superset_query=lambda ob: extra_followers_query,\n\u00a0 \u00a0 )\n```\n\nAn application that used the above `Profile`\u00a0definition could look up the `Profile`\u00a0with \"user_id\" of \"4\" and then print out the user names of all their followers with some code like this:\n\n```python\nfor follower in profile.followers:\n\u00a0 \u00a0 print(follower.user_name)\n```\n\nSee how the extra query is 
now part of the type's mapping definition and not the code dealing with the data? That's the kind of abstraction I wanted to provide when I started building this experimental library. I have more plans, so stick with me! But before I implement more data abstractions, I first need to implement updates \u2014 that's something I'll describe in my next tutorial.\n\n### Conclusion\n\nThis is now the third tutorial in my Python data abstraction series, and I'll admit that this was the code I envisioned when I first came up with the idea of the docbridge library. It's been super satisfying to get to this point, and because I've been developing the whole thing with test-driven development practices, there's already good code coverage.\n\nIf you're looking for more information on aggregation pipelines, you should have a look at [Practical MongoDB Aggregations\u00a0\u2014 or now, you can buy an expanded version of the book\u00a0in paperback.\n\nIf you're interested in the abstraction topics and Python code architecture in general, you can buy the Architecture Patterns with Python\u00a0book, or read it online at CosmicPython.com\n\nI livestream most weeks, usually at 2 p.m. UTC on Wednesdays. If that sounds interesting, check out the MongoDB YouTube channel. I look forward to seeing you there!\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt582eb5d324589b37/65f9711af4a4cf479114f828/image1.png", "format": "md", "metadata": {"tags": ["MongoDB", "Python"], "pageDescription": "Learn how to use advanced Python to abstract subsets and joins in MongoDB data models.", "contentType": "Tutorial"}, "title": "Coding With Mark: Abstracting Joins & Subsets in Python", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/building-real-time-dynamic-seller-dashboard", "action": "created", "body": "# Building a Real-Time, Dynamic Seller Dashboard on MongoDB\n\nOne of the key aspects of being a successful merchant is knowing your market. Understanding your top-selling products, trending SKUs, and top customer locations helps you plan, market, and sell effectively. As a marketplace, providing this visibility and insights for your sellers is crucial. For example, SHOPLINE has helped over 350,000 merchants reach more than 680 million customers via e-commerce, social commerce, and offline point-of-sale (POS) transactions. With key features such as inventory and sales management tools, data analytics, etc. merchants have everything they need to build a successful online store.\n\nIn this article, we are going to look at how a single query on MongoDB can power a real-time view of top selling products, and a deep-dive into the top selling regions.\n\n## Status Quo: stale data\n\nIn the relational world, such a dashboard would require multiple joins across at least four distinct tables: seller details, product details, channel details, and transaction details. \n\nThis increases complexity, data latency, and costs for providing insights on real-time, operational data. Often, organizations pre-compute these tables with up to a 24-hour lag to ensure a better user experience. \n\n## How can MongoDB help deliver real-time insights?\n\nWith MongoDB, using the Query API, we could deliver such dashboards in near real-time, working directly on operational data. The required information for each sales transaction can be stored in a single collection. 
\n\nEach document would look as follows:\n\n```\n{\n \"_id\": { \"$oid\": \"5bd761dcae323e45a93ccfed\" },\n \"saleDate\": { \"$date\": {...} },\n \"items\": \n { \"name\": \"binder\",\n \"tags\": [\n \"school\",\n \"general\"],\n \"price\": { \"$numberDecimal\": \"13.44\" },\n \"quantity\": 8\n },\n { \"name\": \"binder\",\n \"tags\": [\n \"general\",\n \"organization\"\n ],\n \"price\": { \"$numberDecimal\": \"16.66\" },\n \"quantity\": 10\n }\n ],\n \"storeLocation\": \"London\",\n \"customer\": {\n \"gender\": \"M\",\n \"age\": 44,\n \"email\": \"owtar@pu.cd\",\n \"satisfaction\": 2\n },\n \"couponUsed\": false,\n \"purchaseMethod\": \"In store\"\n}\n```\nThis specific document is from the *\u201csales\u201d* collection within the *\u201csample_supplies\u201d* database, available as sample data when you create an Atlas Cluster. [Start free on Atlas and try out this exercise yourself. MongoDB allows for flexible schema and versioning which makes updating this document with a \u201cseller\u201d field, similar to the customer field, and managing it in your application, very simple. From a data modeling perspective, the polymorphic pattern is ideal for our current use case.\n\n## Desired output\n\nIn order to build a dashboard showcasing the top five products sold over a specific period, we would want to transform the documents into the following sorted array: \n\n```\n\n {\n \"total_volume\": 1897,\n \"item\": \"envelopes\"\n },\n {\n \"total_volume\": 1844,\n \"item\": \"binder\"\n },\n {\n \"total_volume\": 1788,\n \"item\": \"notepad\"\n },\n {\n \"total_volume\": 1018,\n \"item\": \"pens\"\n },\n {\n \"total_volume\": 830,\n \"item\": \"printer paper\"\n }\n]\n```\nWith just the \u201c_id\u201d and \u201ctotal_volume\u201d fields, we can build a chart of the top five products. If we wanted to deliver an improved seller experience, we could build a deep-dive chart with the same single query that provides the top five locations and the quantity sold for each. \n\nThe output for each item would look like this:\n\n```\n{\n \"_id\": \"binder\",\n \"totalQuantity\": 100,\n \"topFiveRegionsByQuantity\": {\n \"Seattle\": 41,\n \"Denver\": 26,\n \"New York\": 14,\n \"Austin\": 10,\n \"London\": 9\n }\n}\n```\nWith the Query API, this transformation can be done in real-time in the database with a single query. In this example, we go a bit further to build another transformation on top which can improve user experience. In fact, on our Atlas developer data platform, this becomes significantly easier when you leverage [Atlas Charts.\n\n## Getting started\n\n1. Set up your Atlas Cluster and load sample data \u201csample_supplies.\u201d\n2. Connect to your Atlas cluster through Compass or open the Data Explorer tab on Atlas.\n\nIn this example, we can use the aggregation builder in Compass to build the following pipeline.\n\n(Tip: Click \u201cCreate new pipeline from text\u201d to copy the code below and easily play with the pipeline.) \n\n## Aggregations with the query API\n\nKeep scrolling to see the following code examples in Python, Java, and JavaScript. 
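Because the pipeline below starts by filtering on `saleDate`, it is worth creating an index on that field so the initial `$match` stage only has to scan the relevant documents. A minimal example in mongosh, assuming the `sales` collection from the sample dataset:

```javascript
db.sales.createIndex({ saleDate: 1 })
```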
\n\n```\n{\n $match: {\n saleDate: {\n $gte: ISODate('2017-12-25T05:00:00.000Z'),\n $lt: ISODate('2017-12-30T05:00:00.000Z')\n }\n }\n}, {\n $unwind: {\n path: '$items'\n }\n}, {\n $group: {\n _id: {\n item: '$items.name',\n region: '$storeLocation'\n },\n quantity: {\n $sum: '$items.quantity'\n }\n }\n}, {\n $addFields: {\n '_id.quantity': '$quantity'\n }\n}, {\n $replaceRoot: {\n newRoot: '$_id'\n }\n}, {\n $group: {\n _id: '$item',\n totalQuantity: {\n $sum: '$quantity'\n },\n topFiveRegionsByQuantity: {\n $topN: {\n output: {\n k: '$region',\n v: '$quantity'\n },\n sortBy: {\n quantity: -1\n },\n n: 5\n }\n }\n }\n}, {\n $sort: {\n totalQuantity: -1\n }\n}, {\n $limit: 5\n}, {\n $set: {\n topFiveRegionsByQuantity: {\n $arrayToObject: '$topFiveRegionsByQuantity'\n }\n }\n}]\n```\nThis short but powerful pipeline processes our data through the following stages: \n\n* First, it filters our data to the specific subset we need. In this case, sale transactions are from the specified dates. It\u2019s worth noting here that you can parametrize inputs to the [$match stage to dynamically filter based on user choices.\n\nNote: Beginning our pipeline with this filter stage significantly improves processing times. With the right index, this entire operation can be extremely fast and reduce the number of documents to be processed in subsequent stages.\n\n* To fully leverage the polymorphic pattern and the document model, we store items bought in each order as an embedded array. The second stage unwinds this so our pipeline can look into each array. We then group the unwound documents by item and region and use $sum to calculate the total quantity sold. \n* Ideally, at this stage we would want our documents to have three data points: the item, the region, and the quantity sold. However, at the end of the previous stage, the item and region are in an embedded object, while quantity is a separate field. We use $addFields to move quantity within the embedded object, and then use $replaceRoot to use this embedded _id document as the source document for further stages. This quick maneuver gives us the transformed data we need as a single document. \n* Next, we group the items as per the view we want on our dashboard. In this example, we want the total volume of each product sold, and to make our dashboard more insightful, we could also get the top five regions for each of these products. We use $group for this with two operators within it: \n * $sum to calculate the total quantity sold.\n * $topN to create a new array of the top five regions for each product and the quantity sold at each location.\n* Now that we have the data transformed the way we want, we use a $sort and $limit to find the top five items. \n* Finally, we use $set to convert the array of the top five regions per item to an embedded document with the format {region: quantity}, making it easier to work with objects in code. This is an optional step.\n\nNote: The $topN operator was introduced in MongoDB 5.2. To test this pipeline on Atlas, you would require an M10 cluster. By downloading MongoDB community version, you can test through Compass on your local machine.\n\n## What would you build?\n\nWhile adding visibility on the top five products and the top-selling regions is one part of the dashboard, by leveraging MongoDB and the Query API, we deliver near real-time visibility into live operational data. \n\nIn this article, we saw how to build a single query which can power multiple charts on a seller dashboard. 
What would you build into your dashboard views? Join our vibrant community forums, to discuss more. \n\n*For reference, here\u2019s what the code blocks look like in other languages.*\n\n*Python*\n\n```python\n# Import the necessary packages\nfrom pymongo import MongoClient\nfrom bson.son import SON\n\n# Connect to the MongoDB server\nclient = MongoClient(URI)\n\n# Get a reference to the sample_supplies collection\ndb = client.\nsupplies = db.sample_supplies\n\n# Build the pipeline stages\nmatch_stage = {\n \"$match\": {\n \"saleDate\": {\n \"$gte\": \"ISODate('2017-12-25T05:00:00.000Z')\",\n \"$lt\": \"ISODate('2017-12-30T05:00:00.000Z')\"\n }\n }\n}\n\nunwind_stage = {\n \"$unwind\": {\n \"path\": \"$items\"\n }\n}\n\ngroup_stage = {\n \"$group\": {\n \"_id\": {\n \"item\": \"$items.name\",\n \"region\": \"$storeLocation\"\n },\n \"quantity\": {\n \"$sum\": \"$items.quantity\"\n }\n }\n}\n\naddfields_stage = {\n $addFields: {\n '_id.quantity': '$quantity'\n }\n}\n\nreplaceRoot_stage = {\n $replaceRoot: {\n newRoot: '$_id'\n }\n}\n\ngroup2_stage = {\n $group: {\n _id: '$item',\n totalQuantity: {\n $sum: '$quantity'\n },\n topFiveRegionsByQuantity: {\n $topN: {\n output: {\n k: '$region',\n v: '$quantity'\n },\n sortBy: {\n quantity: -1\n },\n n: 5\n }\n }\n }\n}\n\nsort_stage = {\n $sort: {\n totalQuantity: -1\n }\n}\n\nlimit_stage = {\n $limit: 5\n}\n\nset_stage = {\n $set: {\n topFiveRegionsByQuantity: {\n $arrayToObject: '$topFiveRegionsByQuantity'\n }\n }\n}\n\npipeline = [match_stage, unwind_stage, group_stage, \n addfields_stage, replaceroot_stage, group2_stage,\n sort_stage, limit_stage, set_stage]\n\n# Execute the aggregation pipeline\nresults = supplies.aggregate(pipeline)\n```\n\n*Java*\n```java\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.model.Aggregates;\nimport org.bson.Document;\n\nimport java.util.Arrays;\n\n// Connect to MongoDB and get the collection\nMongoClient mongoClient = new MongoClient(URI);\nMongoDatabase database = mongoClient.getDatabase();\nMongoCollection collection = database.getCollection(\"sample_supplies\");\n\n// Create the pipeline stages\nBson matchStage = Aggregates.match(Filters.and(\n Filters.gte(\"saleDate\", new Date(\"2017-12-25T05:00:00.000Z\")),\n Filters.lt(\"saleDate\", new Date(\"2017-12-30T05:00:00.000Z\"))\n));\n\nBson unwindStage = Aggregates.unwind(\"$items\");\n\nBson groupStage = Aggregates.group(\"$items.name\",\n Accumulators.sum(\"quantity\", \"$items.quantity\")\n);\n\nBson addFieldsStage = Aggregates.addFields(new Field(\"_id.quantity\", \"$quantity\"));\n\nBson replaceRootStage = Aggregates.replaceRoot(\"_id\");\n\nBson group2Stage = Aggregates.group(\"$item\",\n Accumulators.sum(\"totalQuantity\", \"$quantity\"),\n Accumulators.top(\"topFiveRegionsByQuantity\", 5, new TopOptions()\n .output(new Document(\"k\", \"$region\").append(\"v\", \"$quantity\"))\n .sortBy(new Document(\"quantity\", -1))\n )\n);\n\nBson sortStage = Aggregates.sort(new Document(\"totalQuantity\", -1));\n\nBson limitStage = Aggregates.limit(5);\n\nBson setStage = Aggregates.set(\"topFiveRegionsByQuantity\", new Document(\"$arrayToObject\", \"$topFiveRegionsByQuantity\"));\n\n// Execute the pipeline\nList results = collection.aggregate(Arrays.asList(matchStage, unwindStage, groupStage, addFieldsStage, replaceRootStage, group2Stage, sortStage, limitStage, setStage)).into(new ArrayList<>());\n```\n\n*JavaScript*\n\n```javascript\nconst MongoClient = require('mongodb').MongoClient;\nconst assert = require('assert');\n\n// Connection 
URL\nconst url = 'URI';\n\n// Database Name\nconst db = 'database_name';\n\n// Use connect method to connect to the server\nMongoClient.connect(url, function(err, client) {\n assert.equal(null, err);\n console.log(\"Connected successfully to server\");\n\n const db = client.db(dbName);\n\n // Create the pipeline stages\n const matchStage = {\n $match: {\n saleDate: {\n $gte: new Date('2017-12-25T05:00:00.000Z'),\n $lt: new Date('2017-12-30T05:00:00.000Z')\n }\n }\n };\n\nconst unwindStage = {\n $unwind: {\n path: '$items'\n }\n};\n\n const groupStage = {\n $group: {\n _id: {\n item: '$items.name',\n region: '$storeLocation'\n },\n quantity: {\n $sum: '$items.quantity'\n }\n }\n };\n\nconst addFieldsStage = {\n $addFields: {\n '_id.quantity': '$quantity'\n }\n};\n\nconst replaceRootStage = {\n $replaceRoot: {\n newRoot: '$_id'\n }\n};\n\nconst groupStage = {\n $group: {\n _id: '$item',\n totalQuantity: {\n $sum: '$quantity'\n },\n topFiveRegionsByQuantity: {\n $topN: {\n output: {\n k: '$region',\n v: '$quantity'\n },\n sortBy: {\n quantity: -1\n },\n n: 5\n }\n }\n }\n};\n\nconst sortStage = {\n $sort: {\n totalQuantity: -1\n }\n};\n\nconst limitStage = {\n $limit: 5\n};\n\nconst setStage = {\n $set: {\n topFiveRegionsByQuantity: {\n $arrayToObject: '$topFiveRegionsByQuantity'\n }\n }\n};\n\nconst pipeline = [matchStage, unwindStage, groupStage,\n addFieldsStage, replaceRootStage, group2Stage, \n sortStage, limitStage, setStage]\n\n // Execute the pipeline\n db.collection('sample_supplies')\n .aggregate(pipeline)\n .toArray((err, results) => {\n assert.equal(null, err);\n console.log(results);\n\n client.close();\n });\n});\n```", "format": "md", "metadata": {"tags": ["Atlas", "Python", "Java", "JavaScript"], "pageDescription": "In this article, we're looking at how a single query on MongoDB can power a real-time view of top-selling products, and deep-dive into the top-selling regions.", "contentType": "Tutorial"}, "title": "Building a Real-Time, Dynamic Seller Dashboard on MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/java/microservices-architecture-spring-mongodb", "action": "created", "body": "# Microservices Architecture With Java, Spring, and MongoDB\n\n## Introduction\n\n\"Microservices are awesome and monolithic applications are evil.\"\n\nIf you are reading this article, you have already read that a million times, and I'm not the one who's going to tell\nyou otherwise!\n\nIn this post, we are going to create a microservices architecture using MongoDB.\n\n## TL;DR\n\nThe source code is available in these two repositories.\n\nThe README.md files will\nhelp you start everything.\n\n```bash\ngit clone git@github.com:mongodb-developer/microservices-architecture-mongodb.git\ngit clone git@github.com:mongodb-developer/microservices-architecture-mongodb-config-repo.git\n```\n\n## Microservices architecture\n\nWe are going to use Spring Boot and Spring Cloud dependencies to build our architecture.\n\nHere is what a microservices architecture looks like, according to Spring:\n\n file and start the service related to each section.\n\n### Config server\n\nThe first service that we need is a configuration server.\n\nThis service allows us to store all the configuration files of our microservices in a single repository so our\nconfigurations are easy to version and store.\n\nThe configuration of our config server is simple and straight to the 
point:\n\n```properties\nspring.application.name=config-server\nserver.port=8888\nspring.cloud.config.server.git.uri=${HOME}/Work/microservices-architecture-mongodb-config-repo\nspring.cloud.config.label=main\n```\n\nIt allows us to locate the git repository that stores our microservices configuration and the branch that should be\nused.\n\n> Note that the only \"trick\" you need in your Spring Boot project to start a config server is the `@EnableConfigServer`\n> annotation.\n\n```java\npackage com.mongodb.configserver;\n\nimport org.springframework.boot.SpringApplication;\nimport org.springframework.boot.autoconfigure.SpringBootApplication;\nimport org.springframework.cloud.config.server.EnableConfigServer;\n\n@EnableConfigServer\n@SpringBootApplication\npublic class ConfigServerApplication {\n public static void main(String] args) {\n SpringApplication.run(ConfigServerApplication.class, args);\n }\n}\n```\n\n### Service registry\n\nA service registry is like a phone book for microservices. It keeps track of which microservices are running and where\nthey are located (IP address and port). Other services can look up this information to find and communicate with the\nmicroservices they need.\n\nA service registry is useful because it enables client-side load balancing and decouples service providers from\nconsumers without the need for DNS.\n\nAgain, you don't need much to be able to start a Spring Boot service registry. The `@EnableEurekaServer` annotation\nmakes all the magic happen.\n\n```java\npackage com.mongodb.serviceregistry;\n\nimport org.springframework.boot.SpringApplication;\nimport org.springframework.boot.autoconfigure.SpringBootApplication;\nimport org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;\n\n@SpringBootApplication\n@EnableEurekaServer\npublic class ServiceRegistryApplication {\n public static void main(String[] args) {\n SpringApplication.run(ServiceRegistryApplication.class, args);\n }\n}\n```\n\nThe configuration is also to the point:\n\n```properties\nspring.application.name=service-registry\nserver.port=8761\neureka.client.register-with-eureka=false\neureka.client.fetch-registry=false\n```\n\n> The last two lines prevent the service registry from registering to itself and retrieving the registry from itself.\n\n### API gateway\n\nThe API gateway service allows us to have a single point of entry to access all our microservices. Of course, you should\nhave more than one in production, but all of them will be able to communicate with all the microservices and distribute\nthe workload evenly by load-balancing the queries across your pool of microservices.\n\nAlso, an API gateway is useful to address cross-cutting concerns like security, monitoring, metrics gathering, and\nresiliency.\n\nWhen our microservices start, they register themselves to the service registry. 
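They do this through their Eureka client settings, the same three properties you will find in each microservice's configuration file later in this article:

```properties
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.client.service-url.defaultZone=http://localhost:8761/eureka/
```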
The API gateway can use this registry to\nlocate the microservices and distribute the queries according to its routing configuration.\n\n```Shell\nserver:\n port: 8080\n\nspring:\n application:\n name: api-gateway\n cloud:\n gateway:\n routes:\n - id: company-service\n uri: lb://company-service\n predicates:\n - Path=/api/company/**,/api/companies\n - id: employee-service\n uri: lb://employee-service\n predicates:\n - Path=/api/employee/**,/api/employees\n\neureka:\n client:\n register-with-eureka: true\n fetch-registry: true\n service-url:\n defaultZone: http://localhost:8761/eureka/\n instance:\n hostname: localhost\n```\n\n> Note that our API gateway runs on port 8080.\n\n### MongoDB microservices\n\nFinally, we have our MongoDB microservices.\n\nMicroservices are supposed to be independent of each other. For this reason, we need two MongoDB instances: one for\neach microservice.\n\nCheck out the [README.md\nfile to run everything.\n\n> Note that in\n> the configuration files for the\n> company and employee services, they are respectively running on ports 8081 and 8082.\n\ncompany-service.properties\n\n```properties\nspring.data.mongodb.uri=${MONGODB_URI_1:mongodb://localhost:27017}\nspring.threads.virtual.enabled=true\nmanagement.endpoints.web.exposure.include=*\nmanagement.info.env.enabled=true\ninfo.app.name=Company Microservice\ninfo.app.java.version=21\ninfo.app.type=Spring Boot\nserver.port=8081\neureka.client.register-with-eureka=true\neureka.client.fetch-registry=true\neureka.client.service-url.defaultZone=http://localhost:8761/eureka/\neureka.instance.hostname=localhost\n```\n\nemployee-service.properties\n\n```properties\nspring.data.mongodb.uri=${MONGODB_URI_2:mongodb://localhost:27018}\nspring.threads.virtual.enabled=true\nmanagement.endpoints.web.exposure.include=*\nmanagement.info.env.enabled=true\ninfo.app.name=Employee Microservice\ninfo.app.java.version=21\ninfo.app.type=Spring Boot\nserver.port=8082\neureka.client.register-with-eureka=true\neureka.client.fetch-registry=true\neureka.client.service-url.defaultZone=http://localhost:8761/eureka/\neureka.instance.hostname=localhost\n```\n\n> Note that the two microservices are connected to two different MongoDB clusters to keep their independence. The\n> company service is using the MongoDB node on port 27017 and the employee service is on port 27018.\n\nOf course, this is only if you are running everything locally. In production, I would recommend to use two clusters on\nMongoDB Atlas. 
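Because both properties files read their connection strings from the `MONGODB_URI_1` and `MONGODB_URI_2` placeholders, pointing the services at Atlas is just a matter of exporting those variables before starting them, for example (with placeholder connection strings):

```bash
export MONGODB_URI_1="mongodb+srv://<user>:<password>@company-cluster.example.mongodb.net"
export MONGODB_URI_2="mongodb+srv://<user>:<password>@employee-cluster.example.mongodb.net"
```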
You can overwrite the MongoDB URI with the environment variables (see README.md).\n\n## Test the REST APIs\n\nAt this point, you should have five services running:\n\n- A config-server on port 8888\n- A service-registry on port 8761\n- An api-gateway on port 8080\n- Two microservices:\n - company-service on port 8081\n - employee-service on port 8082\n\nAnd two MongoDB nodes on ports 27017 and 27018 or two MongoDB clusters on MongoDB Atlas.\n\nIf you start the\nscript 2_api-tests.sh,\nyou should get an output like this.\n\n```\nDELETE Companies\n2\nDELETE Employees\n2\n\nPOST Company 'MongoDB'\nPOST Company 'Google'\n\nGET Company 'MongoDB' by 'id'\n{\n \"id\": \"661aac7904e1bf066ee8e214\",\n \"name\": \"MongoDB\",\n \"headquarters\": \"New York\",\n \"created\": \"2009-02-11T00:00:00.000+00:00\"\n}\n\nGET Company 'Google' by 'name'\n{\n \"id\": \"661aac7904e1bf066ee8e216\",\n \"name\": \"Google\",\n \"headquarters\": \"Mountain View\",\n \"created\": \"1998-09-04T00:00:00.000+00:00\"\n}\n\nGET Companies\n\n {\n \"id\": \"661aac7904e1bf066ee8e214\",\n \"name\": \"MongoDB\",\n \"headquarters\": \"New York\",\n \"created\": \"2009-02-11T00:00:00.000+00:00\"\n },\n {\n \"id\": \"661aac7904e1bf066ee8e216\",\n \"name\": \"Google\",\n \"headquarters\": \"Mountain View\",\n \"created\": \"1998-09-04T00:00:00.000+00:00\"\n }\n]\n\nPOST Employee Maxime\nPOST Employee Tim\n\nGET Employee 'Maxime' by 'id'\n{\n \"id\": \"661aac79cf04401110c03516\",\n \"firstName\": \"Maxime\",\n \"lastName\": \"Beugnet\",\n \"company\": \"Google\",\n \"headquarters\": \"Mountain View\",\n \"created\": \"1998-09-04T00:00:00.000+00:00\",\n \"joined\": \"2018-02-12T00:00:00.000+00:00\",\n \"salary\": 2468\n}\n\nGET Employee 'Tim' by 'id'\n{\n \"id\": \"661aac79cf04401110c03518\",\n \"firstName\": \"Tim\",\n \"lastName\": \"Kelly\",\n \"company\": \"MongoDB\",\n \"headquarters\": \"New York\",\n \"created\": \"2009-02-11T00:00:00.000+00:00\",\n \"joined\": \"2023-08-23T00:00:00.000+00:00\",\n \"salary\": 13579\n}\n\nGET Employees\n[\n {\n \"id\": \"661aac79cf04401110c03516\",\n \"firstName\": \"Maxime\",\n \"lastName\": \"Beugnet\",\n \"company\": \"Google\",\n \"headquarters\": \"Mountain View\",\n \"created\": \"1998-09-04T00:00:00.000+00:00\",\n \"joined\": \"2018-02-12T00:00:00.000+00:00\",\n \"salary\": 2468\n },\n {\n \"id\": \"661aac79cf04401110c03518\",\n \"firstName\": \"Tim\",\n \"lastName\": \"Kelly\",\n \"company\": \"MongoDB\",\n \"headquarters\": \"New York\",\n \"created\": \"2009-02-11T00:00:00.000+00:00\",\n \"joined\": \"2023-08-23T00:00:00.000+00:00\",\n \"salary\": 13579\n }\n]\n```\n\n> Note that the employee service sends queries to the company service to retrieve the details of the employees' company.\n\nThis confirms that the service registry is doing its job correctly because the URL only contains a reference to the company microservice, not its direct IP and port.\n\n```java\nprivate CompanyDTO getCompany(String company) {\n String url = \"http://company-service/api/company/name/\";\n CompanyDTO companyDTO = restTemplate.getForObject(url + company, CompanyDTO.class);\n if (companyDTO == null) {\n throw new EntityNotFoundException(\"Company not found: \", company);\n }\n return companyDTO;\n}\n```\n\n## Conclusion\n\nAnd voil\u00e0! You now have a basic microservice architecture running that is easy to use to kickstart your project.\n\nIn this architecture, we could seamlessly integrate additional features to enhance performance and maintainability in\nproduction. 
Caching would be essential, particularly with a potentially large number of employees within the same\ncompany, significantly alleviating the load on the company service.\n\nThe addition of a [Spring Cloud Circuit Breaker could also\nimprove the resiliency in production and a Spring Cloud Sleuth would\nhelp with distributed tracing and auto-configuration.\n\nIf you have questions, please head to our Developer Community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n\n[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt332394d666c28140/661ab5bf188d353a3e2da005/microservices-architecture.svg\n", "format": "md", "metadata": {"tags": ["Java", "MongoDB", "Spring", "Docker"], "pageDescription": "In this post, you'll learn about microservices architecture and you'll be able to deploy your first architecture locally using Spring Boot, Spring Cloud and MongoDB.", "contentType": "Tutorial"}, "title": "Microservices Architecture With Java, Spring, and MongoDB", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/how-maintain-multiple-versions-record-mongodb", "action": "created", "body": "# How to Maintain Multiple Versions of a Record in MongoDB (2024 Updates)\n\nOver the years, there have been various methods proposed for versioning data in MongoDB. Versioning data means being able to easily get not just the latest version of a document or documents but also view and query the way the documents were at a given point in time.\n\nThere was the blog post from Asya Kamsky written roughly 10 years ago, an update from Paul Done (author of Practical MongoDB Aggregations), and also information on the MongoDB website about the version pattern from 2019.\n\nThese variously maintain two distinct collections of data \u2014 one with the latest version and one with prior versions or updates, allowing you to reconstruct them.\n\nSince then, however, there have been seismic, low-level changes in MongoDB's update and aggregation capabilities. Here, I will show you a relatively simple way to maintain a document history when updating without maintaining any additional collections.\n\nTo do this, we use expressive updates, also sometimes called aggregation pipeline updates. Rather than pass an object with update operators as the second argument to update, things like $push and $set, we express our update as an aggregation pipeline, with an ordered set of changes. By doing this, we can not only make changes but take the previous values of any fields we change and record those in a different field as a history.\n\nThe simplest example of this would be to use the following as the update parameter for an updateOne operation.\n\n```\n { $set : { a: 5 , previous_a: \"$a\" } }]\n```\n\nThis would explicitly set `a` to 5 but also set `previous_a` to whatever `a` was before the update. This would only give us a history look-back of a single change, though.\n\nBefore:\n\n```\n{ \n a: 3\n}\n```\n\nAfter:\n\n```\n{\n a: 5,\n previous_a: 3\n}\n```\n\nWhat we want to do is take all the fields we change and construct an object with those prior values, then push it into an array \u2014 theoretically, like this:\n\n```\n[ { $set : { a: 5 , b: 8 } ,\n $push : { history : { a:\"$a\",b:\"$b\"} } ]\n```\n\nThe above does not work because the $push part in bold is an update operator, not aggregation syntax, so it gives a syntax error. 
What we instead need to do is rewrite push as an array operation, like so:\n\n```\n{\"$set\":{\"history\":\n {\"$concatArrays\":[[{ _updateTime: \"$$NOW\", a:\"$a\",b:\"$b\"}}],\n {\"$ifNull\":[\"$history\",[]]}]}}}\n```\n\nTo talk through what's happening here, I want to add an object, `{ _updateTime: \"$$NOW\", a:\"$a\",b:\"$b\"}`, to the array at the beginning. I cannot use $push as that is update syntax and expressive syntax is about generating a document with new versions for fields, effectively, just $set. So I need to set the array to the previous array with nym new value prepended.\n\nWe use $concatArrays to join two arrays, so I wrap my single document containing the old values for fields in an array. Then, the new array is my array of one concatenated with the old array.\n\nI use $ifNUll to say if the value previously was null or missing, treat it as an empty array instead, so the first time, it actually does `history = [{ _updateTime: \"$$NOW\", a:\"$a\",b:\"$b\"}] + []`.\n\nBefore:\n\n```\n{ \n a: 3,\n b: 1\n}\n```\n\nAfter:\n\n```\n{\n a: 5,\n b: 8,\n history: [\n { \n _updateTime: Date(...),\n a: 3, \n b: 1 \n }\n ]\n}\n```\n\nThat's a little hard to write but if we actually write out the code to demonstrate this and declare it as separate objects, it should be a lot clearer. The following is a script you can run in the MongoDB shell either by pasting it in or [loading it with `load(\"versioning.js\")`.\n\nThis code first generates some simple records: \n\n```javascript\n// Configure the inspection depth for better readability in output\nconfig.set(\"inspectDepth\", 8) // Set mongosh to print nicely\n\n// Connect to a specific database\ndb = db.getSiblingDB(\"version_example\")\ndb.data.drop()\nconst nFields = 5\n\n// Function to generate random field values based on a specified change percentage\nfunction randomFieldValues(percentageToChange) {\n const fieldVals = new Object();\n for (let fldNo = 1; fldNo < nFields; fldNo++) {\n if (Math.random() < (percentageToChange / 100)) {\n fieldVals`field_${fldNo}`] = Math.floor(Math.random() * 100)\n }\n }\n return fieldVals\n}\n\n// Loop to create and insert 10 records with random data into the 'data' collection\nfor (let id = 0; id < 10; id++) {\n const record = randomFieldValues(100)\n record._id = id\n record.dateUpdated = new Date()\n db.data.insertOne(record)\n}\n\n// Log the message indicating the data that will be printed next\nconsole.log(\"ORIGINAL DATA\")\nconsole.table(db.data.find().toArray())\n```\n\n| (index) | _id | field_1 | field_2 | field_3 | field_4 | dateUpdated |\n| ------- | ---- | ------- | ------- | ------- | ------- | ------------------------ |\n| 0 | 0 | 34 | 49 | 19 | 74 | 2024-04-15T13:30:12.788Z |\n| 1 | 1 | 13 | 9 | 43 | 4 | 2024-04-15T13:30:12.836Z |\n| 2 | 2 | 51 | 30 | 96 | 93 | 2024-04-15T13:30:12.849Z |\n| 3 | 3 | 29 | 44 | 21 | 85 | 2024-04-15T13:30:12.860Z |\n| 4 | 4 | 41 | 35 | 15 | 7 | 2024-04-15T13:30:12.866Z |\n| 5 | 5 | 0 | 85 | 56 | 28 | 2024-04-15T13:30:12.874Z |\n| 6 | 6 | 85 | 56 | 24 | 78 | 2024-04-15T13:30:12.883Z |\n| 7 | 7 | 27 | 23 | 96 | 25 | 2024-04-15T13:30:12.895Z |\n| 8 | 8 | 70 | 40 | 40 | 30 | 2024-04-15T13:30:12.905Z |\n| 9 | 9 | 69 | 13 | 13 | 9 | 2024-04-15T13:30:12.914Z |\n\nThen, we modify the data recording the history as part of the update operation.\n\n```javascript\nconst oldTime = new Date()\n//We can make changes to these without history like so\nsleep(500);\n// Making the change and recording the OLD value\nfor (let id = 0; id < 10; id++) {\n const newValues = 
randomFieldValues(30)\n //Check if any changes\n if (Object.keys(newValues).length) {\n newValues.dateUpdated = new Date()\n\n const previousValues = new Object()\n for (let fieldName in newValues) {\n previousValues[fieldName] = `$${fieldName}`\n }\n\n const existingHistory = { $ifNull: [\"$history\", []] }\n const history = { $concatArrays: [[previousValues], existingHistory] }\n newValues.history = history\n\n db.data.updateOne({ _id: id }, [{ $set: newValues }])\n }\n}\n\nconsole.log(\"NEW DATA\")\ndb.data.find().toArray()\n```\n\nWe now have records that look like this \u2014 with the current values but also an array reflecting any changes.\n\n```\n{\n _id: 6,\n field_1: 85,\n field_2: 3,\n field_3: 71,\n field_4: 71,\n dateUpdated: ISODate('2024-04-15T13:34:31.915Z'),\n history: [\n {\n field_2: 56,\n field_3: 24,\n field_4: 78,\n dateUpdated: ISODate('2024-04-15T13:30:12.883Z')\n }\n ]\n }\n```\n\nWe can now use an aggregation pipeline to retrieve any prior version of each document. To do this, we first filter the history to include only changes up to the point in time we want. We then merge them together in order:\n\n```javascript\n//Get only history until point required\n\nconst filterHistory = { $filter: { input: \"$history\", cond: { $lt: [\"$$this.dateUpdated\", oldTime] } } }\n\n//Merge them together and replace the top level document\n\nconst applyChanges = { $replaceRoot: { newRoot: { $mergeObjects: { $concatArrays: [[\"$$ROOT\"], { $ifNull: [filterHistory, []] }] } } } }\n\n// You can optionally add a $match here but you would normally be better to\n// $match on the history fields at the start of the pipeline\nconst revertPipeline = [{ $set: { rewoundTO: oldTime } }, applyChanges]\n\n//Show results\ndb.data.aggregate(revertPipeline).toArray()\n```\n\n```\n {\n _id: 6,\n field_1: 85,\n field_2: 56,\n field_3: 24,\n field_4: 78,\n dateUpdated: ISODate('2024-04-15T13:30:12.883Z'),\n history: [\n {\n field_2: 56,\n field_3: 24,\n field_4: 78,\n dateUpdated: ISODate('2024-04-15T13:30:12.883Z')\n }\n ],\n rewoundTO: ISODate('2024-04-15T13:34:31.262Z')\n },\n```\n\nThis technique came about through discussing the needs of a MongoDB customer. They had exactly this use case to retain both current and history and to be able to query and retrieve any of them without having to maintain a full copy of the document. It is an ideal choice if changes are relatively small. It could also be adapted to only record a history entry if the field value is different, allowing you to compute deltas even when overwriting the whole record.\n\nAs a cautionary note, versioning inside a document like this will make the documents larger. It also means an ever-growing array of edits. If you believe there may be hundreds or thousands of changes, this technique is not suitable and the history should be written to a second document using a transaction. To do that, perform the update with findOneAndUpdate and return the fields you are changing from that call to then insert into a history collection.\n\nThis isn't intended as a step-by-step tutorial, although you can try the examples above and see how it works. It's one of many sophisticated data modeling \n\ntechniques you can use to build high-performance services on MongoDB and MongoDB Atlas. If you have a need for record versioning, you can use this. 
If not, then perhaps spend a little more time seeing what you can create with the aggregation pipeline, a Turing-complete data processing engine that runs alongside your data, saving you the time and cost of fetching it to the client to process. Learn more about [aggregation.", "format": "md", "metadata": {"tags": ["MongoDB"], "pageDescription": "", "contentType": "Tutorial"}, "title": "How to Maintain Multiple Versions of a Record in MongoDB (2024 Updates)", "updated": "2024-05-20T17:32:23.502Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/implementing-right-erasure-csfle", "action": "created", "body": "# Implementing Right to Erasure with CSFLE\n\nThe right to erasure, also known as the right to be forgotten, is a right granted to individuals under laws and regulations such as GDPR. This means that companies storing an individual's personal data must be able to delete it on request. Because this data can be spread across several systems, it can be technically challenging for these companies to identify and remove it from all places. Even if this is properly executed, there is also a risk that deleted data can be restored from backups in the future, potentially contributing to legal and financial risks.\n\nThis blog post addresses those challenges, demonstrating how you can make use of MongoDB's Client-Side Field Level Encryption to strengthen procedures for removing sensitive data.\n\n>***Disclaimer**: We provide no guarantees that the solution and techniques described in this article will fulfill regulatory requirements around the right to erasure. Each organization needs to make their own determination on appropriate or sufficient measures to comply with various regulatory requirements such as GDPR.*\n\n## What is crypto shredding?\nCrypto shredding is a data destruction technique that consists of destroying the encryption keys that allow the data to be decrypted, thus making the data undecipherable. The example below gives a more in-depth explanation.\n\nImagine you are storing data for multiple users. You start by giving each user their own unique data encryption key (DEK), and mapping it to that customer. This is represented in the below diagram, where \"User A\" and \"User B\" each have their own key in the key store. This DEK can then be used to encrypt and decrypt any data related to the user in question.\n\nLet's assume that we want to remove all data for User B. If we remove User B's DEK, we can no longer decrypt any of the data that was encrypted with it; all we have left in our data store is \"junk\" cipher text. As the diagram below illustrates, User A's data is unaffected, but we can no longer read User B's data.\n\n## What is CSFLE?\nWith MongoDB\u2019s Client-Side Field Level Encryption (CSFLE), applications can encrypt sensitive fields in documents prior to transmitting data to the server. This means that even when data is being used by the database in memory, it is never in plain text. The database only sees the encrypted data but still enables you to query it.\n\nMongoDB CSFLE utilizes envelope encryption, which is the practice of encrypting plaintext data with a data key, which itself is in turn encrypted by a top level envelope key (also known as a \"master key\"). \n\nEnvelope keys are usually managed by a Key Management Service (KMS). 
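To make that layering concrete, here is a minimal sketch (not taken from the demo application, and assuming PyMongo is installed with its encryption extra) of how a `ClientEncryption` object is typically constructed with AWS KMS as the envelope-key provider; the credentials, region, key ARN, connection string, key-vault namespace, and key alt name are placeholders to replace with your own values.

```python
from pymongo import MongoClient
from pymongo.encryption import ClientEncryption
from bson.codec_options import CodecOptions
from bson.binary import STANDARD

# Placeholder values: substitute your own credentials, key ARN, and URIs.
kms_providers = {
    "aws": {
        "accessKeyId": "<AWS_ACCESS_KEY_ID>",
        "secretAccessKey": "<AWS_SECRET_ACCESS_KEY>",
    }
}
master_key = {"region": "<AWS_REGION>", "key": "<AWS_KMS_KEY_ARN>"}

# The key vault is an ordinary MongoDB collection holding the envelope-encrypted DEKs.
key_vault_client = MongoClient("<KEY_VAULT_CLUSTER_URI>")

client_encryption = ClientEncryption(
    kms_providers,
    "encryption.__keyVault",  # key vault namespace: <database>.<collection>
    key_vault_client,
    CodecOptions(uuid_representation=STANDARD),
)

# Each DEK created here is wrapped (encrypted) by the AWS envelope key before
# being stored in the key vault; deleting it later makes the data undecipherable.
dek_id = client_encryption.create_data_key(
    "aws", master_key=master_key, key_alt_names=["user-a"]
)
```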
MongoDB CSFLE supports multiple KMSs, such as AWS KMS, GCP KMS, Azure KeyVault, and Keystores supporting the KMIP standard (e.g., Hashicorp Keyvault).\n\nCSFLE can be used in either automatic mode or explicit mode \u2014 or a combination of both. Automatic mode enables you to perform encrypted read and write operations based on a defined encryption schema, avoiding the need for application code to specify how to encrypt or decrypt fields. This encryption schema is a JSON document that defines what fields need to be encrypted. Explicit mode refers to using the MongoDB driver's encryption library to manually encrypt or decrypt fields in your application.\n\nIn this article, we are going to use the explicit encryption technique to showcase how we can use crypto shredding techniques with CSFLE to implement (or augment) procedures to \"forget\" sensitive data. We'll be using AWS KMS to demonstrate this.\n\n## Bringing it all together\nWith MongoDB as our database, we can use CSFLE to implement crypto shredding, so we can provide stronger guarantees around data privacy.\n\nTo demonstrate how you could implement this, we'll walk you through a demo application. The demo application is a python (Flask) web application with a front end, which exposes functionality for signup, login, and a data entry form. We have also added an \"admin\" page to showcase the crypto shredding related functionality. If you want to follow along, you can run the application yourself \u2014 you'll find the necessary code and instructions in GitHub.\n\nWhen a user signs up, our application will generate a DEK for the user, then store the ID for the DEK along with other user details. Key generation is done via the `create_data_key` method on the `ClientEncryption` class, which we initialized earlier as `app.mongodb_encryption_client`. This encryption client is responsible for generating a DEK, which in this case will be encrypted by the envelope key. In our case, the encryption client is configured to use an envelope key from AWS KMS.\n\n```python\n# flaskapp/db_queries.py\n\n@aws_credential_handler\ndef create_key(userId):\n data_key_id = \\\n app.mongodb_encryption_client.create_data_key(kms_provider,\n master_key, key_alt_names=userId])\n return data_key_id\n```\n\nWe can then use this method when saving the user.\n\n```python\n# flaskapp/user.py\n\ndef save(self):\n dek_id = db_queries.create_key(self.username)\n result = app.mongodb[db_name].user.insert_one(\n {\n \"username\": self.username,\n \"password_hash\": self.password_hash,\n \"dek_id\": dek_id,\n \"createdAt\": datetime.now(),\n }\n )\n if result:\n self.id = result.inserted_id\n return True\n else:\n return False\n```\n\nOnce signed up, the user can then log in, after which they can enter data via a form shown in the screenshot below. This data has a \"name\" and a \"value\", allowing the user to store arbitrary key-value pairs.\n\n![demo application showing a form to add data\n\nIn the database, we'll store this data in a MongoDB collection called \u201cdata,\u201d in documents structured like this:\n\n```json\n{\n \"name\": \"shoe size\",\n \"value\": \"10\",\n \"username\": \"tom\"\n}\n```\n\nFor the sake of this demonstration, we have chosen to encrypt the value and username fields from this document. 
Those fields will be encrypted using the DEK created on signup belonging to the logged in user.\n\n```python\n# flaskapp/db_queries.py\n\n# Fields to encrypt, and the algorithm to encrypt them with\nENCRYPTED_FIELDS = {\n # Deterministic encryption for username, because we need to search on it\n \"username\": Algorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Deterministic,\n # Random encryption for value, as we don't need to search on it\n \"value\": Algorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Random,\n}\n```\n\nThe insert_data function then loops over the fields we want to encrypt and the algorithm we're using for each.\n\n```python\n# flaskapp/db_queries.py\n\ndef insert_data(document):\n document\"username\"] = current_user.username\n # Loop over the field names (and associated algorithm) we want to encrypt\n for field, algo in ENCRYPTED_FIELDS.items():\n # if the field exists in the document, encrypt it\n if document.get(field):\n document[field] = encrypt_field(document[field], algo)\n # Insert document (now with encrypted fields) to the data collection\n app.data_collection.insert_one(document)\n```\n\nIf the specified fields exist in the document, this will call our encrypt_field function to perform the encryption using the specified algorithm.\n\n```python\n# flaskapp/db_queries.py\n\n# Encrypt a single field with the given algorithm\n@aws_credential_handler\ndef encrypt_field(field, algorithm):\n try:\n field = app.mongodb_encryption_client.encrypt(\n field,\n algorithm,\n key_alt_name=current_user.username,\n )\n return field\n except pymongo.errors.EncryptionError as ex:\n # Catch this error in case the DEK doesn't exist. Log a warning and \n # re-raise the exception\n if \"not all keys requested were satisfied\" in ex._message:\n app.logger.warn(\n f\"Encryption failed: could not find data encryption key for user: {current_user.username}\"\n )\n raise ex\n```\n\nOnce data is added, it will be shown in the web app:\n\n![demo application showing the data added in the previous step\n\nNow let's see what happens if we delete the DEK. To do this, we can head over to the admin page. This admin page should only be provided to individuals that have a need to manage keys, and we have some choices:\n\nWe're going to use the \"Delete data encryption key\" option, which will remove the DEK, but leave all data entered by the user intact. After that, the application will no longer be able to retrieve the data that was stored via the form. When trying to retrieve the data for the logged in user, an error will be thrown\n\n**Note**: After we do perform the data key deletion, the web application may still be able to decrypt and show the data for a short period of time before its cache expires \u2014 this takes a maximum of 60 seconds. \n\nBut what is actually left in the database? To get a view of this, you can go back to the Admin page and choose \"Fetch data for all users.\" In this view, we won't throw an exception if we can't decrypt the data. We'll just show exactly what we have stored in the database. Even though we haven't actually deleted the user's data, because the data encryption key no longer exists, all we can see now is cipher text for the encrypted fields \"username\" and \"value\".\n\nAnd here is the code we're using to fetch the data in this view. As you can see, we use very similar logic to the encrypt method shown earlier. We perform a find operation without any filters to retrieve all the data from our data collection. 
We'll then loop over our ENCRYPTED_FIELDS dictionary to see which fields need to be decrypted.\n\n```python\n# flaskapp/db_queries.py\n\ndef fetch_all_data_unencrypted(decrypt=False):\n results = list(app.data_collection.find())\n\n if decrypt:\n for field in ENCRYPTED_FIELDS.keys():\n for result in results:\n if result.get(field):\n resultfield], result[\"encryption_succeeded\"] = decrypt_field(result[field])\n return results\n```\n\nThe decrypt_field function is called for each field to be decrypted, but in this case we'll catch the error if we cannot successfully decrypt it due to a missing DEK.\n\n```python\n# flaskapp/db_queries.py\n\n# Try to decrypt a field, returning a tuple of (value, status). This will be either (decrypted_value, True), or (raw_cipher_text, False) if we couldn't decrypt\ndef decrypt_field(field):\n try:\n # We don't need to pass the DEK or algorithm to decrypt a field\n field = app.mongodb_encryption_client.decrypt(field)\n return field, True\n # Catch this error in case the DEK doesn't exist.\n except pymongo.errors.EncryptionError as ex:\n if \"not all keys requested were satisfied\" in ex._message:\n app.logger.warn(\n \"Decryption failed: could not find data encryption key to decrypt the record.\"\n )\n # If we can't decrypt due to missing DEK, return the \"raw\" value.\n return field, False\n raise ex\n```\n\nWe can also use the `mongosh` shell to check directly in the database, just to prove that there's nothing there we can read. \n\n![mongosh\n\nAt this point, savvy readers may be asking the question, \"But what if we restore the database from a backup?\" If we want to prevent this, we can use two separate database clusters in our application \u2014 one for storing data and one for storing DEKs (the \"key vault\"). This theory is applied in the sample application, which requires you to specify two MongoDB connection strings \u2014 one for data and one for the key vault. If we use separate clusters, it decouples the restoration of backups for application data and the key vault; restoring a backup on the data cluster won't restore any DEKs which have been deleted from the key vault cluster.\n\n## Conclusion\nIn this blog post, we've demonstrated how MongoDB's Client-Side Field Level Encryption can be used to simplify the task of \"forgetting\" certain data. With a single \"delete data key\" operation, we can effectively forget data which may be stored across different databases, collections, backups, and logs. In a real production application, we may wish to delete all the user's data we can find, on top of removing their DEK. This \"defense in depth\" approach helps us to ensure that the data is really gone. By implementing crypto shredding, the impact is much smaller if a delete operation fails, or misses some data that should have been wiped.\n\nYou can find more details about MongoDB's Client-Side Field Level Encryption in our documentation. If you have questions, feel free to make a post on our community forums. 
", "format": "md", "metadata": {"tags": ["MongoDB", "Python", "Flask"], "pageDescription": "Learn how to make use of MongoDB's Client-Side Field Level Encryption to strengthen procedures for removing sensitive data.", "contentType": "Article"}, "title": "Implementing Right to Erasure with CSFLE", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/using-openai-latest-embeddings-rag-system-mongodb", "action": "created", "body": "# Using OpenAI Latest Embeddings In A RAG System With MongoDB\n\nUsing OpenAI Latest Embeddings in a RAG System With MongoDB\n-----------------------------------------------------------\n\n## Introduction\n\nOpenAI recently released new embeddings and moderation models. This article explores the step-by-step implementation process of utilizing one of the new embedding models: text-embedding-3-small\u00a0within a retrieval-augmented generation (RAG) system powered by MongoDB Atlas Vector Database.\n\n## What is an embedding?\n\n**An embedding is a mathematical representation of data within a high-dimensional space, typically referred to as a vector space.**\u00a0Within a vector space, vector embeddings are positioned based on their semantic relationships, concepts, or contextual relevance. This spatial relationship within the vector space effectively mirrors the associations in the original data, making embeddings useful in various artificial intelligence domains, such as machine learning, deep learning, generative AI (GenAI), natural language processing (NLP), computer vision, and data science.\n\nCreating an embedding involves mapping data related to entities like words, products, audio, and user profiles into a numerical format. In NLP, this process involves transforming words and phrases into vectors, converting their semantic meanings into a machine-readable form.\n\nAI applications that utilize RAG architecture design patterns leverage embeddings to augment the large language model (LLM) generative process by retrieving relevant information from a data store such as MongoDB Atlas. By comparing embeddings of the query with those in the database, RAG systems incorporate external knowledge, improving the relevance and accuracy of the responses.\n\n. This dataset is a collection of movie-related details that include attributes such as the title, release year, cast, and plot. A unique feature of this dataset is the plot_embedding\u00a0field for each movie. These embeddings are generated using OpenAI's text-embedding-ada-002 model.\n\nAfter loading the dataset, it is converted into a pandas DataFrame; this data format simplifies data manipulation and analysis. Display the first five rows using the head(5)\u00a0function to gain an initial understanding of the data. 
This preview provides a snapshot of the dataset's structure and its various attributes, such as genres, cast, and plot embeddings.\n\n```python\nfrom datasets import load_dataset\nimport pandas as pd\n\n# \ndataset = load_dataset(\"AIatMongoDB/embedded_movies\")\n\n# Convert the dataset to a pandas dataframe\ndataset_df = pd.DataFrame(dataset'train'])\n\ndataset_df.head(5)\n```\n\n**Import libraries:**\n\n- from datasets import load_dataset: imports the load_dataset\u00a0function from the Hugging Face datasets\u00a0library; this function is used to load datasets from Hugging Face's extensive dataset repository.\n- import pandas as pd: imports the pandas library, a fundamental tool in Python for data manipulation and analysis, using the alias pd.\n\n**Load the dataset:**\n\n- `dataset = load_dataset(\"AIatMongoDB/embedded_movies\")`: Loads the dataset named `embedded_movies`\u00a0from the Hugging Face datasets repository; this dataset is provided by MongoDB and is specifically designed for embedding and retrieval tasks.\n\n**Convert dataset to pandas DataFrame:**\n\n- `dataset_df = pd.DataFrame(dataset\\['train'\\])`: converts the training portion of the dataset into a pandas DataFrame.\n\n**Preview the dataset:**\n\n- `dataset_df.head(5)`: displays the first five entries of the DataFrame.\n\n## Step 3: data cleaning and preparation\n\nThe next step cleans the data and prepares it for the next stage, which creates a new embedding data point using the new OpenAI embedding model.\n\n```python\n# Remove data point where plot column is missing\ndataset_df = dataset_df.dropna(subset=['plot'])\nprint(\"\\\\nNumber of missing values in each column after removal:\")\nprint(dataset_df.isnull().sum())\n\n# Remove the plot_embedding from each data point in the dataset as we are going to create new embeddings with the new OpenAI embedding Model \"text-embedding-3-small\"\ndataset_df = dataset_df.drop(columns=['plot_embedding'])\ndataset_df.head(5)\n\n```\n\n**Removing incomplete data:**\n\n- `dataset_df = dataset_df.dropna(subset=\\['plot'\\])`: ensures data integrity by removing any data point/row where the \u201cplot\u201d column is missing data; since \u201cplot\u201d is a vital component for the new embeddings, its completeness affects the retrieval performance.\n\n**Preparing for new embeddings:**\n\n- `dataset_df = dataset_df.drop(columns=\\['plot_embedding'\\])`: remove the existing \u201cplot_embedding\u201d column; new embeddings using OpenAI's \"text-embedding-3-small\" model, the existing embeddings (generated by a different model) are no longer needed.\n- `dataset_df.head(5)`: allows us to preview the first five rows of the updated datagram to ensure the removal of the \u201cplot_embedding\u201d column and confirm data readiness.\n\n## Step 4: create embeddings with OpenAI\n\nThis stage focuses on generating new embeddings using OpenAI's advanced model.\n\nThis demonstration utilises a Google Colab Notebook, where environment variables are configured explicitly within the notebook's Secrets section and accessed using the user data module. In a production environment, the environment variables that store secret keys are usually stored in a .env file or equivalent.\n\nAn [OpenAI API key\u00a0is required to ensure the successful completion of this step. 
More details on OpenAI's embedding models can be found on the official site.\n\n```\npython\nimport openai\nfrom google.colab import userdata\n\nopenai.api_key = userdata.get(\"open_ai\")\n\nEMBEDDING_MODEL = \"text-embedding-3-small\"\n\ndef get_embedding(text):\n \"\"\"Generate an embedding for the given text using OpenAI's API.\"\"\"\n\n # Check for valid input\n if not text or not isinstance(text, str):\n return None\n\n try:\n # Call OpenAI API to get the embedding\n embedding = openai.embeddings.create(input=text, model=EMBEDDING_MODEL).data0].embedding\n return embedding\n except Exception as e:\n print(f\"Error in get_embedding: {e}\")\n return None\n\ndataset_df[\"plot_embedding_optimised\"] = dataset_df['plot'].apply(get_embedding)\n\ndataset_df.head()\n\n```\n\n**Setting up OpenAI API:**\n\n- Imports and API key:\u00a0Import the openai\u00a0library and retrieve the API key from Google Colab's userdata.\n- Model selection:\u00a0Set the variable EMBEDDING_MODEL\u00a0to text-embedding-3-small.\n\n**Embedding generation function:**\n\n- get_embedding:\u00a0converts text into embeddings; it takes both the string input and the embedding model as arguments and generates the text embedding using the specified OpenAI model.\n- Input validation and API call:\u00a0validates the input to ensure it's a valid string, then calls the OpenAI API to generate the embedding.\n- If the process encounters any issues, such as invalid input or API errors, the function returns None.\n- Applying to dataset:\u00a0The function get_embedding\u00a0is applied to the \u201cplot\u201d column of the DataFrame dataset_df. Each plot is transformed into an optimized embedding data stored in a new column, plot_embedding_optimised.\n- Preview updated dataset:\u00a0dataset_df.head()\u00a0displays the first few rows of the DataFrame.\n\n## Step 5: Vector database setup and data ingestion\n\nMongoDB acts as both an operational and a vector database. It offers a database solution that efficiently stores, queries, and retrieves vector embeddings \u2014 the advantages of this lie in the simplicity of database maintenance, management, and cost.\n\nTo create a new MongoDB database, set up a database cluster:\n\n1. Register for a [free MongoDB Atlas account, or for existing users, sign into MongoDB Atlas.\n2. Select the \u201cDatabase\u201d option on the left-hand pane, which will navigate to the Database Deployment page, where there is a deployment specification of any existing cluster. Create a new database cluster by clicking on the \"+Create\" button.\n\n.\n\n1\\. Navigate to the movie_collection in the movie database. At this point, the database is populated with several documents containing information about various movies, particularly within the action and romance genres.\n\u00a0for vector search.\n- type:\u00a0This field specifies the data type the index will handle. 
In this case, it is set to `vector`, indicating that this index is specifically designed for handling and optimizing searches over vector data.\n\n\u00a0for the implementation code.\n\nIn practical scenarios, lower-dimension embeddings that can maintain a high level of semantic capture are beneficial for Generative AI applications where the relevance and speed of retrieval are crucial to user experience and value.\n\n**Further advantages of lower embedding dimensions with high performance are:**\n\n- Improved user experience and relevance:\u00a0Relevance of information retrieval is optimized, directly impacting the user experience and value in AI-driven applications.\n- Comparison with previous model:\u00a0In contrast to the previous ada v2\u00a0model, which only provided embeddings at a dimension of 1536, the new models offer more flexibility. The text-embedding-3-large\u00a0extends this flexibility further with dimensions of 256, 1024, and 3072.\n- Efficiency in data processing:\u00a0The availability of lower-dimensional embeddings aids in more efficient data processing, reducing computational load without compromising the quality of results.\n- Resource optimization:\u00a0Lower-dimensional embeddings are resource-optimized, beneficial for applications running on limited memory and processing power, and for reducing overall computational costs.\n\nFuture articles will cover advanced topics, such as benchmarking embedding models and handling migration of embeddings.\n\n______________________________________________________________________\n\n## Frequently asked questions\n\n### 1. What is an embedding?\nAn embedding is a technique where data \u2014 such as words, audio, or images \u2014 is transformed into mathematical representations, vectors of real numbers in a high-dimensional space referred to as a vector space. This process allows AI models to understand and process complex data by capturing the underlying semantic relationships and contextual nuances.\n\n### 2. What is a vector store in the context of AI and databases?\nA vector store, such as a MongoDB Atlas database, is a storage mechanism for vector embeddings. It allows efficient storing, indexing, and retrieval of vector data, essential for tasks like semantic search, recommendation systems, and other AI applications.\n\n### 3. How does a retrieval-augmented generation (RAG) system utilize embeddings?\nA RAG system uses embeddings to improve the response generated by a large language model (LLM) by retrieving relevant information from a knowledge store based on semantic similarities. 
The query embedding is compared with the knowledge store (database record) embedding to fetch contextually similar and relevant data, which improves the accuracy and relevance of generated responses by the LLM to the user\u2019s query.\n\n [1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdae0dd2e997f2ffb/65bb84bd8fc5c0be070bdc73/image2.png\n [2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blted4bbac5068dcb4c/65bb84bd63dd3a0334963206/image12.png\n [3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt33548488915c749d/65bb84befd23e5ad9c7daf92/image4.png\n [4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4fa575a1c29ef2d2/65bb84bd30d47e0ce7523376/image6.png\n [5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta937aecb6255a6c6/65bb84be1f10e80b6d4bae47/image3.png\n [6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt197dc2ffe0b9b8b0/65bb84bee5c1f3217ad96ce8/image10.png\n [7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf3736d4623ccad02/65bb84bdc6000531b5d5c021/image7.png\n [8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt497f84d6aa7eb7a7/65bb84be461c13598eb900f8/image11.png\n [9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7bfa203e05eac169/65bb84bda0c8781b0a5934db/image1.png\n [10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb5c63f5e8ec2ca3c/65bb84bd292a0e5a2f87e7c7/image9.png\n [11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt904b4cb46ada9153/65bb84be292a0e7daa87e7cb/image5.png\n [12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3f7abad9e10b8b24/65bb84bed2067bce2d8c6e6c/image8.png", "format": "md", "metadata": {"tags": ["Atlas", "Python", "AI"], "pageDescription": "Explore OpenAI's latest embeddings in RAG systems with MongoDB. Learn to enhance AI responses in NLP and GenAI with practical examples.", "contentType": "Tutorial"}, "title": "Using OpenAI Latest Embeddings In A RAG System With MongoDB", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/csharp/crud-changetracking-mongodb-provider-for-efcore", "action": "created", "body": "# MongoDB Provider for EF Core Tutorial: Building an App with CRUD and Change Tracking\n\nEntity Framework (EF) has been part of .NET for a long time (since .NET 3.51) and is a popular object relational mapper (ORM) for many applications. EF has evolved into EF Core alongside the evolution of .NET. EF Core supports a number of different database providers and can now be used with MongoDB with the help of the MongoDB Provider for Entity Framework Core.\n\nIn this tutorial, we will look at how you can build a car booking application using the new MongoDB Provider for EF Core that will support create, read, update, and delete operations (CRUD) as well as change tracking, which helps to automatically update the database and only the fields that have changed. \n\nA car booking system is a good example to explore the benefits of using EF Core with MongoDB because there is a need to represent a diverse range of entities. There will be entities like cars with their associated availability status and location, and bookings including the associated car.\n\nAs the system evolves and grows, ensuring data consistency can become challenging. 
Additionally, as users interact with the system, partial updates to data entities \u2014 like booking details or car specifications \u2014 will happen more and more frequently. Capturing and efficiently handling these updates is paramount for good system performance and data integrity.\n\n## Prerequisites ##\nIn order to follow along with this tutorial, you are going to need a few things:\n\n - .NET 7.0.\n - Basic knowledge of ASP.NET MVC and C#.\n - Free MongoDB Atlas account and free tier\n cluster.\n\nIf you just want to see example code, you can view the full code in the GitHub repository.\n\n## Create the project\nASP.NET Core is a very flexible web framework, allowing you to scaffold out different types of web applications that have slight differences in terms of their UI or structure.\nFor this tutorial, we are going to create an MVC project that will make use of static files and controllers. There are other types of front end you could use, such as React, but MVC with .cshtml views is the most commonly used.\nTo create the project, we are going to use the .NET CLI:\n```bash\ndotnet new mvc -o SuperCarBookingSystem\n``` \nBecause we used the CLI, although easier, it only creates the csproj file and not the solution file which allows us to open it in Visual Studio, so we will fix that.\n```bash\ncd SuperCarBookingSystem\ndotnet new sln\ndotnet sln .\\SuperCarBookingSystem.sln add .\\SuperCarBookingSystem.csproj\n``` \n\n## Add the NuGet packages\nNow that we have the new project created, we will want to go ahead and add the required NuGet packages. Either using the NuGet Package Manager or using the .NET CLI commands below, add the MongoDB MongoDB.EntityFrameworkCore and Microsoft.EntityFrameworkCore packages.\n\n```bash\ndotnet add package MongoDB.EntityFrameworkCore --version 7.0.0-preview.1\ndotnet add package Microsoft.EntityFrameworkCore\n```\n\n> At the time of writing, the MongoDB.EntityFrameworkCore is in preview, so if using the NuGet Package Manager UI inside Visual Studio, be sure to tick the \u201cinclude pre-release\u201d box or you won\u2019t get any results when searching for it.\n\n## Create the models\nBefore we can start implementing the new packages we just added, we need to create the models that represent the entities we want in our car booking system that will of course be stored in MongoDB Atlas as documents.\nIn the following subsections, we will create the following models:\n\n - Car\n - Booking\n - MongoDBSettings\n\n### Car\nFirst, we need to create our car model that will represent the cars that are available to be booked in our system.\n\n 1. Create a new class in the Models folder called Car.\n 2. Add the following code:\n```csharp\nusing MongoDB.Bson;\nusing MongoDB.EntityFrameworkCore;\nusing System.ComponentModel.DataAnnotations;\n\nnamespace SuperCarBookingSystem.Models\n{\n Collection(\"cars\")] \n public class Car\n {\n \n public ObjectId Id { get; set; }\n \n [Required(ErrorMessage = \"You must provide the make and model\")]\n [Display(Name = \"Make and Model\")]\n public string? Model { get; set; }\n\n \n [Required(ErrorMessage = \"The number plate is required to identify the vehicle\")]\n [Display(Name = \"Number Plate\")]\n public string NumberPlate { get; set; }\n\n [Required(ErrorMessage = \"You must add the location of the car\")]\n public string? Location { get; set; }\n\n public bool IsBooked { get; set; } = false;\n }\n}\n```\nThe collection attribute before the class tells the application what collection inside the database we are using. 
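As a small, hypothetical illustration (the `Vehicle` class below is not one of the tutorial's models), the same attribute could just as well map a class to a collection whose name looks nothing like the class name:

```csharp
using MongoDB.Bson;
using MongoDB.EntityFrameworkCore;

// Hypothetical example only: the class name and the collection name are independent.
[Collection("fleet_vehicles")]
public class Vehicle
{
    public ObjectId Id { get; set; }

    public string? Model { get; set; }
}
```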
This allows us to have differing names or capitalization between our class and our collection should we want to.\n\n### Booking\nWe also need to create a booking class to represent any bookings we take in our system.\n\n 1. Create a new class inside the Models folder called Booking.\n 2. Add the following code to it:\n```csharp\n using MongoDB.Bson;\nusing MongoDB.EntityFrameworkCore;\nusing System.ComponentModel.DataAnnotations;\n\nnamespace SuperCarBookingSystem.Models\n{\n [Collection(\"bookings\")]\n public class Booking\n {\n public ObjectId Id { get; set; }\n\n public ObjectId CarId { get; set; }\n\n public string CarModel { get; set; }\n\n [Required(ErrorMessage = \"The start date is required to make this booking\")]\n [Display(Name = \"Start Date\")]\n public DateTime StartDate { get; set; }\n\n [Required(ErrorMessage = \"The end date is required to make this booking\")]\n [Display(Name = \"End Date\")]\n public DateTime EndDate { get; set; }\n }\n}\n```\n\n### MongoDBSettings\nAlthough it won\u2019t be a document in our database, we need a model class to store our MongoDB-related settings so they can be used across the application.\n\n 1. Create another class in Models called MongoDBSettings.\n 2. Add the following code:\n```csharp\npublic class MongoDBSettings\n{\n public string AtlasURI { get; set; }\n public string DatabaseName { get; set; }\n}\n```\n## Setting up EF Core\nThis is the exciting part. We are going to start to implement EF Core and take advantage of the new MongoDB Provider. If you are used to working with EF Core already, some of this will be familiar to you.\n### CarBookingDbContext\n 1. In a location of your choice, create a class called CarBookingDbContext. I placed it inside a new folder called Services.\n 2. Replace the code inside the namespace with the following:\n```csharp\nusing Microsoft.EntityFrameworkCore;\nusing SuperCarBookingSystem.Models;\n\nnamespace SuperCarBookingSystem.Services\n{\n public class CarBookingDbContext : DbContext\n {\n public DbSet Cars { get; init; } \n\n public DbSet Bookings { get; init; }\n\n public CarBookingDbContext(DbContextOptions options)\n : base(options)\n {\n }\n\n protected override void OnModelCreating(ModelBuilder modelBuilder)\n {\n base.OnModelCreating(modelBuilder);\n\n modelBuilder.Entity();\n modelBuilder.Entity();\n }\n }\n}\n```\nIf you are used to EF Core, this will look familiar. The class extends the DbContext and we create DbSet properties that store the models that will also be present in the database. We also override the OnModelCreating method. You may notice that unlike when using SQL Server, we don\u2019t call .ToTable(). We could call ToCollection instead but this isn\u2019t required here as we specify the collection using attributes on the classes.\n\n### Add connection string and database details to appsettings\nEarlier, we created a MongoDBSettings model, and now we need to add the values that the properties map to into our appsettings.\n\n 1. In both appsettings.json and appsettings.Development.json, add the following new section:\n```json\n \"MongoDBSettings\": {\n \"AtlasURI\": \"mongodb+srv://:@\",\n \"DatabaseName\": \"cargarage\"\n }\n\n```\n 2. 
Replace the Atlas URI with your own [connection string from Atlas.\n### Updating program.cs\nNow we have configured our models and DbContext, it is time to add them to our program.cs file.\n\nAfter the existing line `builder.Services.AddControllersWithViews();`, add the following code:\n```csharp\nvar mongoDBSettings = builder.Configuration.GetSection(\"MongoDBSettings\").Get();\nbuilder.Services.Configure(builder.Configuration.GetSection(\"MongoDBSettings\"));\n\nbuilder.Services.AddDbContext(options =>\noptions.UseMongoDB(mongoDBSettings.AtlasURI ?? \"\", mongoDBSettings.DatabaseName ?? \"\"));\n\n```\n\n## Creating the services\nNow, it is time to add the services we will use to talk to the database via the CarBookingDbContext we created. For each service, we will create an interface and the class that implements it.\n### ICarService and CarService\nThe first interface and service we will implement is for carrying out the CRUD operations on the cars collection. This is known as the repository pattern. You may see people interact with the DbContext directly. But most people use this pattern, which is why we are including it here. \n\n 1. If you haven\u2019t already, create a Services folder to store our new classes.\n 2. Create an ICarService interface and add the following code for the methods we will implement:\n```csharp\nusing MongoDB.Bson;\nusing SuperCarBookingSystem.Models;\n\nnamespace SuperCarBookingSystem.Services\n{\n public interface ICarService\n {\n IEnumerable GetAllCars();\n Car? GetCarById(ObjectId id);\n\n void AddCar(Car newCar);\n\n void EditCar(Car updatedCar);\n\n void DeleteCar(Car carToDelete);\n }\n}\n```\n 3. Create a CarService class file.\n 4. Update the CarService class declaration so it implements the ICarService we just created:\n```csharp\nusing Microsoft.EntityFrameworkCore;\nusing MongoDB.Bson;\nusing MongoDB.Driver;\nusing SuperCarBookingSystem.Models;\n\nnamespace SuperCarBookingSystem.Services\n{\n public class CarService : ICarService\n{\n\n```\n\n 5. This will cause a red squiggle to appear underneath ICarService as we haven\u2019t implemented all the methods yet, but we will implement the methods one by one.\n 6. Add the following code after the class declaration that adds a local CarBookingDbContext object and a constructor that gets an instance of the DbContext via dependency injection.\n```csharp\n private readonly CarBookingDbContext _carDbContext;\n public CarService(CarBookingDbContext carDbContext)\n {\n _carDbContext = carDbContext;\n }\n\n```\n\n 7. Next, we will implement the GetAllCars method so add the following code:\n```csharp\npublic IEnumerable GetAllCars()\n{\n return _carDbContext.Cars.OrderBy(c => c.Id).AsNoTracking().AsEnumerable();\n}\n\n```\nThe id property here maps to the _id field in our document which is a special MongoDB ObjectId type and is auto-generated when a new document is created. But what is useful about the _id property is that it can actually be used to order documents because of how it is generated under the hood. \n\nIf you haven\u2019t seen it before, the `AsNoTracking()` method is part of EF Core and prevents EF tracking changes you make to an object. This is useful for reads when you know no changes are going to occur. \n\n 8. Next, we will implement the method to get a specific car using its Id property:\n```csharp\npublic Car? 
GetCarById(ObjectId id)\n{\n return _carDbContext.Cars.FirstOrDefault(c => c.Id == id);\n}\n```\n\nThen, we will add the AddCar implementation:\n```csharp\npublic void AddCar(Car car)\n{\n _carDbContext.Cars.Add(car);\n\n _carDbContext.ChangeTracker.DetectChanges();\n Console.WriteLine(_carDbContext.ChangeTracker.DebugView.LongView);\n\n _carDbContext.SaveChanges();\n}\n```\nIn a production environment, you might want to use something like ILogger to track these changes rather than printing to the console. But this will allow us to clearly see that a new entity has been added, showing change tracking in action.\n\n 9. EditCar is next:\n```csharp\npublic void EditCar(Car car)\n{\n var carToUpdate = _carDbContext.Cars.FirstOrDefault(c => c.Id == car.Id);\n\n if(carToUpdate != null)\n { \n carToUpdate.Model = car.Model;\n carToUpdate.NumberPlate = car.NumberPlate;\n carToUpdate.Location = car.Location;\n carToUpdate.IsBooked = car.IsBooked;\n\n _carDbContext.Cars.Update(carToUpdate);\n\n _carDbContext.ChangeTracker.DetectChanges();\n Console.WriteLine(_carDbContext.ChangeTracker.DebugView.LongView);\n\n _carDbContext.SaveChanges();\n \n }\n else\n {\n throw new ArgumentException(\"The car to update cannot be found. \");\n }\n} \n\n```\nAgain, we add a call to print out information from change tracking as it will show that the new EF Core Provider, even when using MongoDB as the database, is able to track modifications.\n\n 10. Finally, we need to implement DeleteCar:\n```csharp\npublic void DeleteCar(Car car)\n{\nvar carToDelete = _carDbContext.Cars.Where(c => c.Id == car.Id).FirstOrDefault();\n\nif(carToDelete != null) {\n _carDbContext.Cars.Remove(carToDelete);\n _carDbContext.ChangeTracker.DetectChanges();\n Console.WriteLine(_carDbContext.ChangeTracker.DebugView.LongView);\n _carDbContext.SaveChanges();\n }\n else {\n throw new ArgumentException(\"The car to delete cannot be found.\");\n }\n}\n```\n\n### IBookingService and BookingService\nNext up is our IBookingService and BookingService.\n\n 1. Create the IBookingService interface and add the following methods:\n\n```csharp\nusing MongoDB.Bson;\nusing SuperCarBookingSystem.Models;\nnamespace SuperCarBookingSystem.Services\n{\n public interface IBookingService\n {\n IEnumerable GetAllBookings();\n Booking? GetBookingById(ObjectId id);\n\n void AddBooking(Booking newBooking);\n\n void EditBooking(Booking updatedBooking);\n\n void DeleteBooking(Booking bookingToDelete);\n }\n}\n```\n\n 2. 
Create the BookingService class, and replace your class with the following code that implements all the methods:\n```csharp\nusing Microsoft.EntityFrameworkCore;\nusing MongoDB.Bson;\nusing SuperCarBookingSystem.Models;\n\nnamespace SuperCarBookingSystem.Services\n{\n public class BookingService : IBookingService\n {\n private readonly CarBookingDbContext _carDbContext;\n\n public BookingService(CarBookingDbContext carDBContext)\n {\n _carDbContext = carDBContext;\n }\n public void AddBooking(Booking newBooking)\n {\n var bookedCar = _carDbContext.Cars.FirstOrDefault(c => c.Id == newBooking.CarId);\n if (bookedCar == null)\n {\n throw new ArgumentException(\"The car to be booked cannot be found.\");\n }\n\n newBooking.CarModel = bookedCar.Model;\n\n bookedCar.IsBooked = true;\n _carDbContext.Cars.Update(bookedCar);\n\n _carDbContext.Bookings.Add(newBooking);\n\n _carDbContext.ChangeTracker.DetectChanges();\n Console.WriteLine(_carDbContext.ChangeTracker.DebugView.LongView);\n\n _carDbContext.SaveChanges();\n }\n\n public void DeleteBooking(Booking booking)\n {\n var bookedCar = _carDbContext.Cars.FirstOrDefault(c => c.Id == booking.CarId);\n bookedCar.IsBooked = false;\n\n var bookingToDelete = _carDbContext.Bookings.FirstOrDefault(b => b.Id == booking.Id);\n\n if(bookingToDelete != null)\n {\n _carDbContext.Bookings.Remove(bookingToDelete);\n _carDbContext.Cars.Update(bookedCar);\n\n _carDbContext.ChangeTracker.DetectChanges();\n Console.WriteLine(_carDbContext.ChangeTracker.DebugView.LongView);\n\n _carDbContext.SaveChanges();\n }\n else\n {\n throw new ArgumentException(\"The booking to delete cannot be found.\");\n }\n }\n\n public void EditBooking(Booking updatedBooking)\n {\n var bookingToUpdate = _carDbContext.Bookings.FirstOrDefault(b => b.Id == updatedBooking.Id);\n \n \n if (bookingToUpdate != null)\n { \n bookingToUpdate.StartDate = updatedBooking.StartDate;\n bookingToUpdate.EndDate = updatedBooking.EndDate;\n \n\n _carDbContext.Bookings.Update(bookingToUpdate);\n\n _carDbContext.ChangeTracker.DetectChanges();\n _carDbContext.SaveChanges();\n\n Console.WriteLine(_carDbContext.ChangeTracker.DebugView.LongView);\n } \n else \n { \n throw new ArgumentException(\"Booking to be updated cannot be found\");\n }\n \n }\n\n public IEnumerable GetAllBookings()\n {\n return _carDbContext.Bookings.OrderBy(b => b.StartDate).AsNoTracking().AsEnumerable();\n }\n\n public Booking? GetBookingById(ObjectId id)\n {\n return _carDbContext.Bookings.AsNoTracking().FirstOrDefault(b => b.Id == id);\n }\n \n }\n}\n```\n\nThis code is very similar to the code for the CarService class but for bookings instead.\n\n### Adding them to Dependency Injection\nThe final step for the services is to add them to the dependency injection container.\n\nInside Program.cs, add the following code after the code we added there earlier:\n```csharp\nbuilder.Services.AddScoped();\nbuilder.Services.AddScoped();\n```\n## Creating the view models\nBefore we implement the front end, we need to add the view models that will act as a messenger between our front and back ends where required. Even though our application is quite simple, implementing the view model is still good practice as it helps decouple the pieces of the app.\n\n### CarListViewModel\nThe first one we will add is the CarListViewModel. This will be used as the model in our Razor page later on for listing cars in our database.\n\n 1. Create a new folder in the root of the project called ViewModels.\n 2. Add a new class called CarListViewModel.\n 3. 
Add `public IEnumerable Cars { get; set; }` inside your class.\n\n### CarAddViewModel\nWe also want a view model that can be used by the Add view we will add later.\n\n 1. Inside the ViewModels folder, create a new class called\n CarAddViewModel.\n 2. Add `public Car? Car { get; set; }`.\n\n### BookingListViewModel\nNow, we want to do something very similar for bookings, starting with BookingListViewModel.\n\n 1. Create a new class in the ViewModels folder called\n BookingListViewModel.\n 2. Add `public IEnumerable Bookings { get; set; }`.\n\n### BookingAddViewModel\nFinally, we have our BookingAddViewModel.\n\nCreate the class and add the property `public Booking? Booking { get; set; }` inside the class.\n### Adding to _ViewImports\n\nLater on, we will be adding references to our models and viewmodels in the views. In order for the application to know what they are, we need to add references to them in the _ViewImports.cshtml file inside the Views folder.\n\nThere will already be some references in there, including TagHelpers, so we want to add references to our .Models and .ViewModels folders. When added, it will look something like below, just with your application name instead.\n\n```csharp\n@using \n@using .Models\n@using .ViewModels\n```\n## Creating the controllers\nNow we have the backend implementation and the view models we will refer to, we can start working toward the front end.\nWe will be creating two controllers: one for Car and one for Booking.\n### CarController\nThe first controller we will add is for the car.\n\n 1. Inside the existing Controllers folder, add a new controller. If\n using Visual Studio, use the MVC Controller - Empty controller\n template.\n 2. Add a local ICarService object and a constructor that fetches it\n from dependency injection:\n```csharp\nprivate readonly ICarService _carService;\n\npublic CarController(ICarService carService)\n{\n _carService = carService;\n}\n```\n\n 3. Depending on what your scaffolded controller came with, either\n create or update the Index function with the following:\n```csharp\npublic IActionResult Index()\n{\n CarListViewModel viewModel = new()\n {\n Cars = _carService.GetAllCars(),\n };\n return View(viewModel);\n}\n```\nFor the other CRUD operations \u2014 so create, update, and delete \u2014 we will have two methods for each: one is for Get and the other is for Post.\n\n 4. The HttpGet for Add will be very simple as it doesn\u2019t need to pass\n any data around:\n\n```csharp\npublic IActionResult Add()\n{\n return View();\n}\n```\n\n 5. Next, add the Add method that will be called when a new car is requested to be added:\n\n```csharp\n HttpPost]\n public IActionResult Add(CarAddViewModel carAddViewModel)\n {\n if(ModelState.IsValid)\n {\n Car newCar = new()\n {\n Model = carAddViewModel.Car.Model,\n Location = carAddViewModel.Car.Location,\n NumberPlate = carAddViewModel.Car.NumberPlate\n };\n\n _carService.AddCar(newCar);\n return RedirectToAction(\"Index\");\n }\n\n return View(carAddViewModel); \n }\n```\n\n 6. 
Now, we will add the code for editing a car:\n```csharp\n public IActionResult Edit(string id)\n {\n if(id == null)\n {\n return NotFound();\n }\n\n var selectedCar = _carService.GetCarById(new ObjectId(id));\n return View(selectedCar);\n }\n\n [HttpPost]\n public IActionResult Edit(Car car)\n {\n try\n {\n if(ModelState.IsValid)\n {\n _carService.EditCar(car);\n return RedirectToAction(\"Index\");\n }\n else\n {\n return BadRequest();\n }\n }\n catch (Exception ex)\n {\n ModelState.AddModelError(\"\", $\"Updating the car failed, please try again! Error: {ex.Message}\");\n }\n\n return View(car);\n }\n```\n 7. Finally, we have Delete:\n```csharp\npublic IActionResult Delete(string id) {\n if (id == null)\n {\n return NotFound();\n }\n\n var selectedCar = _carService.GetCarById(new ObjectId(id));\n return View(selectedCar);\n}\n\n[HttpPost]\npublic IActionResult Delete(Car car)\n{\n if (car.Id == null)\n {\n ViewData[\"ErrorMessage\"] = \"Deleting the car failed, invalid ID!\";\n return View();\n }\n\n try\n {\n _carService.DeleteCar(car);\n TempData[\"CarDeleted\"] = \"Car deleted successfully!\";\n\n return RedirectToAction(\"Index\");\n }\n catch (Exception ex)\n {\n ViewData[\"ErrorMessage\"] = $\"Deleting the car failed, please try again! Error: {ex.Message}\";\n }\n\n var selectedCar = _carService.GetCarById(car.Id);\n return View(selectedCar);\n} \n```\n### BookingController\nNow for the booking controller. This is very similar to the CarController but it has a reference to both the car and booking service as we need to associate a car with a booking. This is because at the moment, the EF Core Provider doesn\u2019t support relationships between entities so we can relate entities in a different way. You can view the roadmap on the [GitHub repo, however.\n\n 1. Create another empty MVC Controller called BookingController.\n 2. 
Paste the following code replacing the current class:\n```csharp\n public class BookingController : Controller\n {\n private readonly IBookingService _bookingService;\n private readonly ICarService _carService; \n\n public BookingController(IBookingService bookingService, ICarService carService)\n {\n _bookingService = bookingService;\n _carService = carService;\n }\n\n public IActionResult Index()\n {\n BookingListViewModel viewModel = new BookingListViewModel()\n {\n Bookings = _bookingService.GetAllBookings()\n };\n return View(viewModel);\n }\n\n public IActionResult Add(string carId)\n {\n var selectedCar = _carService.GetCarById(new ObjectId(carId));\n \n BookingAddViewModel bookingAddViewModel = new BookingAddViewModel();\n\n bookingAddViewModel.Booking = new Booking();\n bookingAddViewModel.Booking.CarId = selectedCar.Id;\n bookingAddViewModel.Booking.CarModel = selectedCar.Model;\n bookingAddViewModel.Booking.StartDate = DateTime.UtcNow;\n bookingAddViewModel.Booking.EndDate = DateTime.UtcNow.AddDays(1);\n\n return View(bookingAddViewModel);\n }\n\n HttpPost]\n public IActionResult Add(BookingAddViewModel bookingAddViewModel)\n {\n Booking newBooking = new()\n {\n CarId = bookingAddViewModel.Booking.CarId, \n StartDate = bookingAddViewModel.Booking.StartDate,\n EndDate = bookingAddViewModel.Booking.EndDate,\n };\n\n _bookingService.AddBooking(newBooking);\n return RedirectToAction(\"Index\"); \n }\n\n public IActionResult Edit(string Id)\n {\n if(Id == null)\n {\n return NotFound();\n }\n\n var selectedBooking = _bookingService.GetBookingById(new ObjectId(Id));\n return View(selectedBooking);\n }\n\n [HttpPost]\n public IActionResult Edit(Booking booking)\n {\n try\n {\n var existingBooking = _bookingService.GetBookingById(booking.Id);\n if (existingBooking != null)\n {\n _bookingService.EditBooking(existingBooking);\n return RedirectToAction(\"Index\");\n }\n else\n {\n ModelState.AddModelError(\"\", $\"Booking with ID {booking.Id} does not exist!\");\n }\n }\n catch (Exception ex)\n {\n ModelState.AddModelError(\"\", $\"Updating the booking failed, please try again! Error: {ex.Message}\");\n }\n\n return View(booking);\n }\n\n public IActionResult Delete(string Id)\n {\n if (Id == null)\n {\n return NotFound();\n }\n\n var selectedBooking = _bookingService.GetBookingById(Id);\n return View(selectedBooking);\n }\n\n [HttpPost]\n public IActionResult Delete(Booking booking)\n {\n if(booking.Id == null)\n {\n ViewData[\"ErrorMessage\"] = \"Deleting the booking failed, invalid ID!\";\n return View();\n }\n\n try\n {\n _bookingService.DeleteBooking(booking);\n TempData[\"BookingDeleted\"] = \"Booking deleted successfully\";\n\n return RedirectToAction(\"Index\");\n }\n catch (Exception ex)\n {\n ViewData[\"ErrorMessage\"] = $\"Deleting the booking failed, please try again! Error: {ex.Message}\";\n }\n\n var selectedCar = _bookingService.GetBookingById(booking.Id.ToString());\n return View(selectedCar);\n }\n }\n\n```\n## Creating the views\nNow we have the back end and the controllers prepped with the endpoints for our car booking system, it is time to implement the views. This will be using Razor pages. 
You will also see reference to classes from Bootstrap as this is the CSS framework that comes with MVC applications out of the box.\nWe will be providing views for the CRUD operations for both listings and bookings.\n\n### Listing Cars\nFirst, we will provide a view that will map to the root of /Car, which will by convention look at the Index method we implemented.\n\nASP.NET Core MVC uses a convention pattern whereby you name the .cshtml file the name of the endpoint/method it uses and it lives inside a folder named after its controller.\n\n 1. Inside the Views folder, create a new subfolder called Car.\n 2. Inside that Car folder, add a new view. If using the available\n templates, you want Razor View - Empty. Name the view Index.\n 3. Delete the contents of the file and add a reference to the\n CarListViewModel at the top `@model CarListViewModel`.\n 4. Next, we want to add a placeholder for the error handling. If there\n was an issue deleting a car, we added a string to TempData so we\n want to add that into the view, if there is data to display.\n```csharp\n@if (TempData[\"CarDeleted\"] != null)\n{\n @TempData[\"CarDeleted\"]\n\n}\n\n```\n\n 5. Next, we will handle if there are no cars in the database, by\n displaying a message to the user:\n```csharp\n@if (!Model.Cars.Any())\n{\n \n\nNo results\n\n}\n```\n 6. The easiest way to display the list of cars and the relevant\n information is to use a table:\n```csharp\nelse\n{\n \n \n \n Model\n \n \n Number Plate\n \n \n Location\n \n \n Actions\n \n \n\n @foreach (var car in Model.Cars)\n {\n \n @car.Model\n @car.NumberPlate\n @car.Location \n \n Edit\n Delete\n @if(!car.IsBooked)\n {\n Book\n } \n \n \n }\n\n \n}\n\n Add new car\n\n```\nIt makes sense to have the list of cars as our home page so before we move on, we will update the default route from Home to /Car.\n\n 7. In Program.cs, inside `app.MapControllerRoute`, replace the pattern\n line with the following:\n\n```csharp\npattern: \"{controller=Car}/{action=Index}/{id?}\");\n```\n\nIf we ran this now, the buttons would lead to 404s because we haven\u2019t implemented them yet. So let\u2019s do that now.\n\n### Adding cars\nWe will start with the form for adding new cars.\n\n 1. Add a new, empty Razor View inside the Car subfolder called\n Add.cshtml.\n 2. Before adding the form, we will add the model reference at the top,\n a header, and some conditional content for the error message.\n\n```csharp\n@model CarAddViewModel\n\nCREATE A NEW CAR\n\n@if (ViewData[\"ErrorMessage\"] != null)\n{\n @ViewData[\"ErrorMessage\"]\n\n}\n```\n 3. Now, we can implement the form.\n```csharp\n\n \n\n \n \n \n \n \n\n \n \n \n \n \n\n \n \n \n \n \n\n \n\n```\nNow, we want to add a button at the bottom to easily navigate back to the list of cars in case the user decides not to add a new car after all.\nAdd the following after the `` tag:\n```csharp\n\n Back to list\n\n```\n### Editing cars\nThe code for the Edit page is almost identical to Add, but it uses the Car as a model as it will use the car it is passed to pre-populate the form for editing.\n 1. Add another view inside the Car subfolder called Edit.cshtml.\n 2. Add the following code:\n```csharp\n@model Car\n\nUPDATE @MODEL.MODEL\n\n \n \n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n Back to list\n\n```\n### Deleting cars\nThe final page we need to implement is the page that is called when the delete button is clicked for a car.\n\n 1. Create a new empty View called Delete.cshtml.\n 2. 
Add the following code to add the model, heading, and conditional\n error message:\n```csharp\n@model Car\n\nDELETING @MODEL.MODEL\n\n@if(ViewData[\"ErrorMessage\"] != null)\n{\n @ViewData[\"ErrorMessage\"]\n\n}\n```\nInstead of a form like in the other views, we are going to add a description list to display information about the car that we are confirming deletion of.\n```csharp\n\n \n \n \n \n \n @Model?.Model\n \n \n \n \n \n @Model?.NumberPlate\n \n \n \n \n \n @Model?.Location\n \n\n \n\n```\n \n\n \n 3. Below that, we will add a form for submitting the deletion and the\n button to return to the list:\n \n\n```csharp\n\n \n \n\n Back to list\n\n```\n### Listing bookings\nWe have added the views for the cars so now we will add the views for bookings, starting with listing any existing books.\n\n 1. Create a new folder inside the Views folder called Booking.\n 2. Create a new empty view called Index.\n 3. Add the following code to display the bookings, if any exist:\n```csharp\n@model BookingListViewModel\n\n@if (TempData[\"BookingDeleted\"] != null)\n{\n @TempData[\"BookingDeleted\"]\n\n}\n\n@if (!Model.Bookings.Any())\n{\n \n\nNo results\n\n}\n\nelse\n{ \n \n \n \n Booked Car\n \n \n Start Date\n \n \n End Date\n \n \n Actions\n \n \n\n @foreach(var booking in Model.Bookings)\n {\n \n @booking.CarModel\n @booking.StartDate\n @booking.EndDate\n \n Edit\n Delete\n \n \n }\n\n \n\n}\n```\n### Adding bookings\nAdding bookings is next. This view will be available when the book button is clicked next to a listed car.\n\n 1. Create an empty view called Add.cshtml.\n 2. Add the following code:\n```csharp\n@model BookingAddViewModel\n\n@if (ViewData[\"ErrorMessage\"] != null)\n{\n @ViewData[\"ErrorMessage\"]\n\n}\n\n \n \n \n\n \n \n \n \n \n \n \n \n \n \n\n \n\n```\n### Editing bookings\nJust like with cars, we also want to be able to edit existing books.\n\n 1. Create an empty view called Edit.cshtml.\n 2. Add the following code:\n\n```csharp\n@model Booking\n\nEDITING BOOKING FOR @MODEL.CARMODEL BETWEEN @MODEL.STARTDATE AND @MODEL.ENDDATE\n\n \n \n\n \n \n \n \n \n \n \n \n \n \n \n\n Back to bookings\n\n```\n### Deleting bookings\nThe final view we need to add is to delete a booking. As with cars, we will display the booking information and deletion confirmation.\n\n```csharp\n@model Booking\n\nDELETE BOOKING\n\n@if (ViewData[\"ErrorMessage\"] != null)\n{\n @ViewData[\"ErrorMessage\"]\n\n}\n\n \n \n \n \n \n @Model?.CarModel\n \n \n \n \n \n @Model?.StartDate\n \n \n \n \n \n \n @Model?.EndDate\n \n\n \n \n \n\n Back to list\n\n```\n\nIf you want to view the full solution code, you can find it in the [GitHub Repo.\n## Testing our application\nWe now have a functioning application that uses the new MongoDB Provider for EF Core \u2014 hooray! Now is the time to test it all and visit our endpoints to make sure it all works.\n\nIt is not part of this tutorial as it is not required, but I chose to make some changes to the site.css file to add some color. I also updated the _Layout.cshtml file to add the Car and Bookings pages to the navbar. You will see this reflected in the screenshots in the rest of the article. 
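\n\nIf you want to add the same navigation links, below is a minimal sketch (not from the original project) of the kind of entries you could add to the navbar `<ul>` in Views/Shared/_Layout.cshtml, using the standard ASP.NET Core anchor tag helpers; the controller names match the ones created earlier, everything else is illustrative:\n\n```html\n<!-- Hypothetical navbar entries pointing at the Car and Booking controllers -->\n<li class=\"nav-item\">\n    <a class=\"nav-link text-dark\" asp-controller=\"Car\" asp-action=\"Index\">Cars</a>\n</li>\n<li class=\"nav-item\">\n    <a class=\"nav-link text-dark\" asp-controller=\"Booking\" asp-action=\"Index\">Bookings</a>\n</li>\n```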
You are of course welcome to make your own changes if you have ideas of how you would like the application to look.\n### Cars\nBelow are some screenshots I took from the app, showing the features of the Cars endpoint.\n\n### Bookings\nThe bookings pages will look very similar to cars but are adapted for the bookings model that includes dates.\n\n## Conclusion\nThere we have it: a full stack application using ASP.NET MVC that takes advantage of the new MongoDB Provider for EF Core. We are able to do the CRUD operations and track changes. \nEF Core is widely used amongst developers so having an official MongoDB Provider is super exciting. This library is in Preview, which means we are continuing to build out new features. Stay tuned for updates and we are always open to feedback. We can\u2019t wait to see what you build!\n\nYou can view the Roadmap of the provider in the GitHub repository, where you can also find links to the documentation! \n\nAs always, if you have any questions about this or other topics, get involved at our MongoDB Community Forums.\n", "format": "md", "metadata": {"tags": ["C#", ".NET"], "pageDescription": "", "contentType": "Tutorial"}, "title": "MongoDB Provider for EF Core Tutorial: Building an App with CRUD and Change Tracking", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/mongodb/entangled-data-re-modeling-10x-storage-reduction", "action": "created", "body": "# Entangled: A Story of Data Re-modeling and 10x Storage Reduction\n\nOne of the most distinctive projects I've worked on is an application named Entangled. Developed in partnership with the Princeton Engineering Anomalies Research lab (PEAR), The Global Consciousness Project, and the Institute of Noetic Sciences, Entangled aims to test human consciousness. \n\nThe application utilizes a quantum random number generator to measure the influence of human consciousness. This quantum generator is essential because conventional computers, due to their deterministic nature, cannot generate truly random numbers. The quantum generator produces random sequences of 0s and 1s. In large datasets, there should be an equal number of 0s and 1s.\n\nFor the quantum random number generation, we used an in-house Quantis QRNG USB device. This device is plugged into our server, and through specialized drivers, we programmatically obtain the random sequences directly from the USB device.\n\nExperiments were conducted to determine if a person could influence these quantum devices with their thoughts, specifically by thinking about more 0s or 1s. The results were astonishing, demonstrating the real potential of this influence. \n\nTo expand this test globally, we developed a new application. This platform allows users to sign up and track their contributions. The system generates a new random number for each user every second. Every hour, these contributions are grouped for analysis at personal, city, and global levels. We calculate the standard deviation of these contributions, and if this deviation exceeds a certain threshold, users receive notifications. \n\nThis data supports various experiments. For instance, in the \"Earthquake Prediction\" experiment, we use the contributions from all users in a specific area. 
If the standard deviation is higher than the set threshold, it may indicate that users have predicted an earthquake.\n\nIf you want to learn more about Entangled, you can check the official website.\n\n## Hourly-metrics schema modeling \n\nAs the lead backend developer, and with MongoDB being my preferred database for all projects, it was a natural choice for Entangled.\n\nFor the backend development, I chose Node.js (Express), along with the Mongoose library for schema definition and data modeling. Mongoose, an Object Data Modeling (ODM) library for MongoDB, is widely used in the Node.js ecosystem for its ability to provide a straightforward way to model our application data.\n\nCareful schema modeling was crucial due to the anticipated scaling of the database. Remember, we were generating one random number per second for each user. \n\nMy initial instinct was to create hourly-based schemas, aligning with our hourly analytics snapshots. The initial schema was structured as follows:\n\n- User: a reference to the \"Users\" collection \n- Total Sum: the sum of each user's random numbers; either 1s or 0s, so their sum was sufficient for later analysis \n- Generated At: the timestamp of the snapshot \n- Data File: a reference to the \"Data Files\" collection, which contains all random numbers generated by all users in a given hour\n\n```javascript\nconst { Schema, model } = require(\"mongoose\");\n\nconst hourlyMetricSchema = new Schema({\n user: { type: Schema.Types.ObjectId, ref: \"Users\" },\n total_sum: { type: Number },\n generated_at: { type: Date },\n data_file: { type: Schema.Types.ObjectId, ref: \"DataFiles\" }\n});\n\n// Compound index for \"user\" (ascending) and \"generated_at\" (descending) fields\nhourlyMetricSchema.index({ user: 1, generated_at: -1 });\n\nconst HourlyMetrics = model(\"HourlyMetrics\", hourlyMetricSchema);\n\nmodule.exports = HourlyMetrics;\n```\n\nAlthough intuitive, this schema faced a significant scaling challenge. We estimated over 100,000 users soon after launch. This meant about 2.4 million records daily or 72 million records monthly. Consequently, we were looking at approximately 5GB of data (including storage and indexes) each month. \n\nThis encouraged me to explore alternative approaches.\n\n## Daily-metrics schema modeling \n\nI explored whether alternative modeling approaches could further optimize storage requirements while also enhancing scalability and cost-efficiency. \n\nA significant observation was that out of 5GB of total storage, 3.5GB was occupied by indexes, a consequence of the large volume of documents. \n\nThis led me to experiment with a schema redesign, shifting from hourly to daily metrics. The new schema was structured as follows:\n\n```javascript\nconst { Schema, model } = require(\"mongoose\");\n\nconst dailyMetricSchema = new Schema({\n user: { type: Schema.Types.ObjectId, ref: \"Users\" },\n date: { type: Date },\n samples: [\n {\n total_sum: { type: Number },\n generated_at: { type: Date },\n data_file: { type: Schema.Types.ObjectId, ref: \"DataFiles\" }\n }\n ]\n});\n\n// Compound index for \"user\" (ascending) and \"date\" (descending) fields\ndailyMetricSchema.index({ user: 1, date: -1 });\n\nconst DailyMetrics = model(\"DailyMetrics\", dailyMetricSchema);\n\nmodule.exports = DailyMetrics;\n```\n\nRather than storing metrics for just one hour in each document, I now aggregated an entire day's metrics in a single document. Each document included a \"samples\" array with 24 entries, one for each hour of the day. 
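\n\nTo make the write path of this bucket approach concrete, here is a minimal sketch of how each hourly sample could be appended to the current day's document with a single upsert. This helper is not part of the original code base; `recordHourlySample`, its parameters, and the module path are hypothetical:\n\n```javascript\nconst DailyMetrics = require(\"./models/DailyMetrics\"); // assumed location of the schema above\n\nasync function recordHourlySample(userId, totalSum, dataFileId) {\n  const now = new Date();\n  // Truncate to midnight UTC so every sample from the same day lands in the same bucket document.\n  const day = new Date(Date.UTC(now.getUTCFullYear(), now.getUTCMonth(), now.getUTCDate()));\n\n  await DailyMetrics.updateOne(\n    { user: userId, date: day },\n    { $push: { samples: { total_sum: totalSum, generated_at: now, data_file: dataFileId } } },\n    // upsert creates the day's document on the first sample; later samples are simply pushed.\n    { upsert: true }\n  );\n}\n```\n\nThe compound index on `user` and `date` also keeps the filter used by this upsert efficient.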
\n\nIt's important to note that this method is a good solution because the array has a fixed size \u2014 a day only has 24 hours. This is very different from the anti-pattern of using big, massive arrays in MongoDB.\n\nThis minor modification had a significant impact. The storage requirement for a month's worth of data drastically dropped from 5GB to just 0.49GB. This was mainly due to the decrease in index size, from 3.5GB to 0.15GB. The number of documents required each month dropped from 72 million to 3 million. \n\nEncouraged by these results, I didn't stop there. My next step was to consider the potential benefits of shifting to a monthly-metrics schema. Could this further optimize our storage? This was the question that drove my next phase of exploration.\n\n## Monthly-metrics schema modeling \n\nThe monthly-metrics schema was essentially identical to the daily-metrics schema. The key difference lay in how the data was stored in the \"samples\" array, which now contained approximately 720 records representing a full month's metrics.\n\n```javascript\nconst { Schema, model } = require(\"mongoose\");\n\nconst monthlyMetricSchema = new Schema({\n user: { type: Schema.Types.ObjectId, ref: \"Users\" },\n date: { type: Date },\n samples: [\n {\n total_sum: { type: Number },\n generated_at: { type: Date },\n data_file: { type: Schema.Types.ObjectId, ref: \"DataFiles\" }\n }\n ]\n});\n\n// Compound index for \"user\" (ascending) and \"date\" (descending) fields\nmonthlyMetricSchema.index({ user: 1, date: -1 });\n\nconst MonthlyMetrics = model(\"MonthlyMetrics\", monthlyMetricSchema);\n\nmodule.exports = MonthlyMetrics;\n```\n\nThis adjustment was expected to further reduce the document count to around 100,000 documents for a month, leading me to anticipate even greater storage optimization. However, the actual results were surprising. \n\nUpon storing a month's worth of data under this new schema, the storage size unexpectedly increased from 0.49GB to 0.58GB. This increase is likely due to the methods MongoDB's WiredTiger storage engine uses to compress arrays internally.\n\n## Summary\n\nBelow is a detailed summary of the different approaches and their respective results for one month\u2019s worth of data:\n\n| | **Hourly Document** | **Daily Document** | **Monthly Document** |\n| -------------------------------- | ----------------------------------------------- | ----------------------------------- | ----------------------- |\n| **Document Size** | 0.098 KB | 1.67 KB | 49.18 KB |\n| **Total Documents (per month)** | 72,000,000 (100,000 users * 24 hours * 30 days) | 3,000,000 (100,000 users * 30 days) | 100,000 (100,000 users) |\n| **Storage Size** | 1.45 GB | 0.34 GB | 0.58 GB |\n| **Index Size** | 3.49 GB | 0.15 GB | 0.006 GB |\n| **Total Storage (Data + Index)** | 4.94 GB | 0.49 GB | 0.58 GB |\n\n## Conclusion\n\nIn this exploration of schema modeling for the Entangled project, we investigated the challenges and solutions for managing large-scale data in MongoDB.\n\nOur journey began with hourly metrics, which, while intuitive, posed significant scaling challenges due to the large volume of data and index size. \n\nThis prompted a shift to daily metrics, drastically reducing storage requirements by over 10 times, primarily due to a significant decrease in index size. \n\nThe experiment with monthly metrics offered an unexpected twist. 
Although it further reduced the number of documents, it increased the overall storage size, likely due to the internal compression mechanics of MongoDB's WiredTiger storage engine. \n\nThis case study highlights the critical importance of schema design in database management, especially when dealing with large volumes of data. It also emphasizes the need for continuous experimentation and optimization to balance storage efficiency, scalability, and cost.\n\nIf you want to learn more about designing efficient schemas with MongoDB, I recommend checking out the [MongoDB Data Modeling Patterns series.", "format": "md", "metadata": {"tags": ["MongoDB", "JavaScript", "Node.js"], "pageDescription": "Learn how to reduce your storage in MongoDB by optimizing your data model through various techniques.", "contentType": "Article"}, "title": "Entangled: A Story of Data Re-modeling and 10x Storage Reduction", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/go/golang-alexa-skills", "action": "created", "body": "# Developing Alexa Skills with MongoDB and Golang\n\nThe popularity of Amazon Alexa and virtual assistants in general is no question, huge. Having a web application and mobile application isn't enough for most organizations anymore, and now you need to start supporting voice operated applications.\n\nSo what does it take to create something for Alexa? How different is it from creating a web application?\n\nIn this tutorial, we're going to see how to create an Amazon Alexa Skill, also referred to as an Alexa application, that interacts with a MongoDB cluster using the Go programming language (Golang) and AWS Lambda.\n\n## The Requirements\n\nA few requirements must be met prior to starting this tutorial:\n\n- Golang must be installed and configured\n- A MongoDB Atlas cluster\n\nIf you don't have a MongoDB Atlas cluster, you can configure one for free. For this example an M0 cluster is more than sufficient.\n\nAlready have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment \u2014 simply sign up for MongoDB Atlas via AWS Marketplace.\n\nMake sure the Atlas cluster has the proper IP addresses on the Network Access List for AWS services. If AWS Lambda cannot reach your cluster then requests made by Alexa will fail.\n\nHaving an Amazon Echo or other Amazon Alexa enabled device is not necessary to be successful with this tutorial. Amazon offers a really great simulator that can be used directly in the web browser.\n\n## Designing an Alexa Skill with an Invocation Term and Sample Utterances\n\nWhen it comes to building an Alexa Skill, it doesn't matter if you start with the code or the design. For this tutorial we're going to start with the design, directly in the Amazon Developer Portal for Alexa.\n\nSign into the portal and choose to create a new custom Skill. After creating the Skill, you'll be brought to a dashboard with several checklist items:\n\nIn the checklist, you should take note of the following:\n\n- Invocation Name\n- Intents, Samples, and Slots\n- Endpoint\n\nThere are other items, one being optional and the other being checked naturally as the others complete.\n\nThe first step is to define the invocation name. This is the name that users will use when they speak to their virtual assistant. It should not be confused with the Skill name because the two do not need to match. 
The Skill name is what would appear in the online marketplace.\n\nFor our invocation name, let's use **recipe manager**, something that is easy to remember and easy to pronounce. With the invocation name in place, we can anticipate using our Skill like the following:\n\n``` none\nAlexa, ask Recipe Manager to INTENT\n```\n\nThe user would not literally speak **INTENT** in the command. The intent\nis the command that will be defined through sample utterances, also\nknown as sample phrases or data. You can, and probably should, have\nmultiple intents for your Skill.\n\nLet's start by creating an intent titled **GetIngredientsForRecipeIntent** with the following sample utterances:\n\n``` none\nwhat ingredients do i need for {recipe}\nwhat do i need to cook {recipe}\nto cook {recipe} what ingredients do i need\n```\n\nThere are a few things to note about the above phrases:\n\n- The `{recipe}` tag is a slot variable which is going to be user defined when spoken.\n- Every possible spoken phrase to execute the command should be listed.\n\nAlexa operates from machine learning, so the more sample data the better. When defining the `{recipe}` variable, it should be assigned a type of `AMAZON.Food`.\n\nWhen all said and done, you could execute the intent by doing something like:\n\n``` none\nAlexa, ask Recipe Manager what do I need to cook Chocolate Chip Cookies\n```\n\nHaving one intent in your Alexa Skill is no fun, so let's create another intent with its own set of sample phrases. Choose to create a new intent titled `GetRecipeFromIngredientsIntent` with the following sample utterances:\n\n``` none\nwhat can i cook with {ingredientone} and {ingredienttwo}\nwhat are some recipes with {ingredientone} and {ingredienttwo}\nif i have {ingredientone} and {ingredienttwo} what can i cook\n```\n\nThis time around we're using two slot variables instead of one. Like previously mentioned, it is probably a good idea to add significantly more sample utterances to get the best results. Alexa needs to be able to process the data to send to your Lambda function.\n\nAt this point in time, the configuration in the Alexa Developer Portal is about complete. The exception being the endpoint which doesn't exist yet.\n\n## Building a Lambda Function with Golang and MongoDB\n\nAlexa, for the most part should be able to direct requests, so now we need to create our backend to receive and process them. This is where Lambda, Go, and MongoDB come into play.\n\nAssuming Golang has been properly installed and configured, create a new project within your **$GOPATH** and within that project, create a **main.go** file. As boilerplate to get the ball rolling, this file should contain the following:\n\n``` go\npackage main\n\nfunc main() { }\n```\n\nWith the boilerplate code added, now we can install the MongoDB Go driver. To do this, you could in theory do a `go get`, but the preferred approach as of now is to use the dep package management tool for Golang. To do this, after having installed the tool, execute the following:\n\n``` bash\ndep init\ndep ensure -add \"go.mongodb.org/mongo-driver/mongo\"\n```\n\nWe're using `dep` so that way the version of the driver that we're using in our project is version locked.\n\nIn addition to the MongoDB Go driver, we're also going to need to get the AWS Lambda SDK for Go as well as an unofficial SDK for Alexa, since no official SDK exists. 
To do this, we can execute:\n\n``` bash\ndep ensure -add \"github.com/arienmalec/alexa-go\"\ndep ensure -add \"github.com/aws/aws-lambda-go/lambda\"\n```\n\nWith the dependencies available to us, we can modify the project's **main.go** file. Open the file and add the following code:\n\n``` go\npackage main\n\nimport (\n \"context\"\n \"os\"\n\n \"go.mongodb.org/mongo-driver/mongo\"\n \"go.mongodb.org/mongo-driver/mongo/options\"\n)\n\n// Stores a handle to the collection being used by the Lambda function\ntype Connection struct {\n collection *mongo.Collection\n}\n\nfunc main() {\n ctx := context.Background()\n client, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv(\"ATLAS_URI\")))\n if err != nil {\n panic(err)\n }\n\n defer client.Disconnect(ctx)\n\n connection := Connection{\n collection: client.Database(\"alexa\").Collection(\"recipes\"),\n }\n}\n```\n\nIn the `main` function we are creating a client using the connection string of our cluster. In this case, I'm using an environment variable on my computer that points to my MongoDB Atlas cluster. Feel free to configure that connection string however you feel the most confident.\n\nUpon connecting, we are getting a handle of a `recipes` collection for an `alexa` database and storing it in a `Connection` data structure. Because we won't be writing any data in this example, both the `alexa` database and the `recipes` collection should exist prior to running this application.\n\nYou can check out more information about connecting to MongoDB with the Go programming language in a previous tutorial I wrote.\n\nSo why are we storing the collection handle in a `Connection` data structure?\n\nAWS Lambda behaves a little differently when it comes to web applications. Instead of running the `main` function and then remaining alive for as long as your server remains alive, Lambda functions tend to suspend or shutdown when they are not used. For this reason, we cannot rely on our connection being available and we also don't want to establish too many connections to our database in the scenario where our function hasn't shut down. To handle this, we can pass the connection from our `main` function to our logic function.\n\nLet's make a change to see this in action:\n\n``` go\npackage main\n\nimport (\n \"context\"\n \"os\"\n\n \"github.com/arienmalec/alexa-go\"\n \"github.com/aws/aws-lambda-go/lambda\"\n \"go.mongodb.org/mongo-driver/mongo\"\n \"go.mongodb.org/mongo-driver/mongo/options\"\n)\n\n// Stores a handle to the collection being used by the Lambda function\ntype Connection struct {\n collection *mongo.Collection\n}\n\nfunc (connection Connection) IntentDispatcher(ctx context.Context, request alexa.Request) (alexa.Response, error) {\n // Alexa logic here...\n}\n\nfunc main() {\n ctx := context.Background()\n client, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv(\"ATLAS_URI\")))\n if err != nil {\n panic(err)\n }\n\n defer client.Disconnect(ctx)\n\n connection := Connection{\n collection: client.Database(\"alexa\").Collection(\"recipes\"),\n }\n\n lambda.Start(connection.IntentDispatcher)\n}\n```\n\nNotice in the above code that we've added a `lambda.Start` call in our `main` function that points to an `IntentDispatcher` function. We're designing this function to use the connection information established in the `main` function, which based on our Lambda knowledge, may not run every time the function is executed.\n\nSo we've got the foundation to our Alexa Skill in place. 
Now we need to design the logic for each of our intents that were previously defined in the Alexa Developer Portal.\n\nSince this is going to be a recipe related Skill, let's model our MongoDB documents like the following:\n\n``` json\n{\n \"_id\": ObjectID(\"234232358943\"),\n \"name\": \"chocolate chip cookies\",\n \"ingredients\": \n \"flour\",\n \"egg\",\n \"sugar\",\n \"chocolate\"\n ]\n}\n```\n\nThere is no doubt that our documents could be more extravagant, but for this example it will work out fine. Within the MongoDB Atlas cluster, create the **alexa** database if it doesn't already exist and add a document modeled like the above in a **recipes** collection.\n\nIn the `main.go` file of the project, add the following data structure:\n\n``` go\n// A data structure representation of the collection schema\ntype Recipe struct {\n ID primitive.ObjectID `bson:\"_id\"`\n Name string `bson:\"name\"`\n Ingredients []string `bson:\"ingredients\"`\n}\n```\n\nWith the MongoDB Go driver, we can annotate Go data structures with BSON\nso that way we can easily map between the two. It essentially makes our\nlives a lot easier when working with MongoDB and Go.\n\nLet's circle back to the `IntentDispatcher` function:\n\n``` go\nfunc (connection Connection) IntentDispatcher(ctx context.Context, request alexa.Request) (alexa.Response, error) {\n var response alexa.Response\n switch request.Body.Intent.Name {\n case \"GetIngredientsForRecipeIntent\":\n case \"GetRecipeFromIngredientsIntent\":\n default:\n response = alexa.NewSimpleResponse(\"Unknown Request\", \"The intent was unrecognized\")\n }\n return response, nil\n}\n```\n\nRemember the two intents from the Alexa Developer Portal? We need to assign logic to them.\n\nEssentially, we're going to do some database logic and then use the `NewSimpleResponse` function to create a response the the results.\n\nLet's start with the `GetIngredientsForRecipeIntent` logic:\n\n``` go\ncase \"GetIngredientsForRecipeIntent\":\n var recipe Recipe\n recipeName := request.Body.Intent.Slots[\"recipe\"].Value\n if recipeName == \"\" {\n return alexa.Response{}, errors.New(\"Recipe name is not present in the request\")\n }\n if err := connection.collection.FindOne(ctx, bson.M{\"name\": recipeName}).Decode(&recipe); err != nil {\n return alexa.Response{}, err\n }\n response = alexa.NewSimpleResponse(\"Ingredients\", strings.Join(recipe.Ingredients, \", \"))\n```\n\nIn the above snippet, we are getting the slot variable that was passed and are issuing a `FindOne` query against the collection. The filter for the query says that the `name` field of the document must match the recipe that was passed in as a slot variable.\n\nIf there was a match, we are serializing the array of ingredients into a string and are returning it back to Alexa. 
In theory, Alexa should then read back the comma separated list of ingredients.\n\nNow let's take a look at the `GetRecipeFromIngredientsIntent` intent logic:\n\n``` go\ncase \"GetRecipeFromIngredientsIntent\":\n var recipes []Recipe\n ingredient1 := request.Body.Intent.Slots[\"ingredientone\"].Value\n ingredient2 := request.Body.Intent.Slots[\"ingredienttwo\"].Value\n cursor, err := connection.collection.Find(ctx, bson.M{\n \"ingredients\": bson.D{\n {\"$all\", bson.A{ingredient1, ingredient2}},\n },\n })\n if err != nil {\n return alexa.Response{}, err\n }\n if err = cursor.All(ctx, &recipes); err != nil {\n return alexa.Response{}, err\n }\n var recipeList []string\n for _, recipe := range recipes {\n recipeList = append(recipeList, recipe.Name)\n }\n response = alexa.NewSimpleResponse(\"Recipes\", strings.Join(recipeList, \", \"))\n```\n\nIn the above snippet, we are taking both slot variables that represent\ningredients and are using them in a `Find` query on the collection. This\ntime around we are using the `$all` operator because we want to filter\nfor all recipes that contain both ingredients anywhere in the array.\n\nWith the results of the `Find`, we can create create an array of the\nrecipe names and serialize it to a string to be returned as part of the\nAlexa response.\n\nIf you'd like more information on the `Find` and `FindOne` commands for\nGo and MongoDB, check out my [how to read documents\ntutorial\non the subject.\n\nWhile it might seem simple, the code for the Alexa Skill is actually\ncomplete. We've coded scenarios for each of the two intents that we've\nset up in the Alexa Developer Portal. We could improve upon what we've\ndone or create more intents, but it is out of the scope of what we want\nto accomplish.\n\nNow that we have our application, we need to build it for Lambda.\n\nExecute the following commands:\n\n``` bash\nGOOS=linux go build\nzip handler.zip ./project-name\n```\n\nSo what's happening in the above commands? First we are building a Linux compatible binary. We're doing this because if you're developing on Mac or Windows, you're going to end up with a binary that is incompatible. By defining the operating system, we're telling Go what to build for.\n\nFor more information on cross-compiling with Go, check out my Cross Compiling Golang Applications For Use On A Raspberry Pi post.\n\nNext, we are creating an archive of our binary. It is important to replace the `project-name` with that of your actual binary name. It is important to remember the name of the file as it is used in the Lambda dashboard.\n\nWhen you choose to create a new Lambda function within AWS, make sure Go is the development technology. Choose to upload the ZIP file and add the name of the binary as the handler.\n\nNow it comes down to linking Alexa with Lambda.\n\nTake note of the **ARN** value of your Lambda function. This will be added in the Alexa Portal. Also, make sure you add the Alexa Skills Kit as a trigger to the function. It is as simple as selecting it from the list.\n\nNavigate back to the Alexa Developer Portal and choose the **Endpoint** checklist item. Add the ARN value to the default region and choose to build the Skill using the **Build Model** button.\n\nWhen the Skill is done building, you can test it using the simulator that Amazon offers as part of the Alexa Developer Portal. 
This simulator can be accessed using the **Test** tab within the portal.\n\nIf you've used the same sample utterances that I have, you can try entering something like this:\n\n``` none\nask recipe manager what can i cook with flour and sugar\nask recipe manager what chocolate chip cookies requires\n```\n\nOf course the assumption is that you also have collection entries for chocolate chip cookies and the various ingredients that I used above. Feel free to modify the variable terms with those of your own data.\n\n## Conclusion\n\nYou just saw how to build an Alexa Skill with MongoDB, Golang, and AWS Lambda. Knowing how to develop applications for voice assistants like Alexa is great because they are becoming increasingly popular, and the good news is that they aren't any more difficult than writing standard applications.\n\nAs previously mentioned, MongoDB Atlas makes pairing MongoDB with Lambda and Alexa very convenient. You can use the free tier or upgrade to something better.\n\nIf you'd like to expand your Alexa with Go knowledge and get more practice, check out a previous tutorial I wrote titled Build an Alexa Skill with Golang and AWS Lambda.", "format": "md", "metadata": {"tags": ["Go", "AWS"], "pageDescription": "Learn how to develop Amazon Alexa Skills that interact with MongoDB using the Go programming language and AWS Lambda.", "contentType": "Tutorial"}, "title": "Developing Alexa Skills with MongoDB and Golang", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/realm/building-a-mobile-chat-app-using-realm", "action": "created", "body": "# Building a Mobile Chat App Using Realm \u2013 Integrating Realm into Your App\n\nThis article is a follow-up to Building a Mobile Chat App Using Realm \u2013 Data Architecture. Read that post first if you want to understand the Realm data/partitioning architecture and the decisions behind it.\n\nThis article targets developers looking to build the Realm mobile database into their mobile apps and use MongoDB Realm Sync. It focuses on how to integrate the Realm-Cocoa SDK into your iOS (SwiftUI) app. Read Building a Mobile Chat App Using Realm \u2013 Data Architecture This post will equip you with the knowledge needed to persist and sync your iOS application data using Realm.\n\nRChat is a chat application. Members of a chat room share messages, photos, location, and presence information with each other. The initial version is an iOS (Swift and SwiftUI) app, but we will use the same data model and back end Realm application to build an Android version in the future.\n\nIf you're looking to add a chat feature to your mobile app, you can repurpose the article's code and the associated repo. If not, treat it as a case study that explains the reasoning behind the data model and partitioning/syncing decisions taken. You'll likely need to make similar design choices in your apps.\n\n>\n>\n>Update: March 2021\n>\n>Building a Mobile Chat App Using Realm \u2013 The New and Easier Way is a follow-on post from this one. It details building the app using the latest SwiftUI features released with Realm-Cocoa 10.6. 
If you know that you'll only be building apps with SwiftUI (rather than UIKit) then jump straight to that article.\n>\n>In writing that post, the app was updated to take advantage of those new SwiftUI features, use this snapshot of the app's GitHub repo to view the code described in this article.\n>\n>\n\n## Prerequisites\n\nIf you want to build and run the app for yourself, this is what you'll need:\n\n- iOS14.2+\n- XCode 12.3+\n- MongoDB Atlas account and a (free) Atlas cluster\n\n## Walkthrough\n\nThe iOS app uses MongoDB Realm Sync to share data between instances of the app (e.g., the messages sent between users). This walkthrough covers both the iOS code and the back end Realm app needed to make it work. Remember that all of the code for the final app is available in the GitHub repo.\n\n### Create a Realm App\n\nFrom the Atlas UI, select the \"Realm\" tab. Select the options to indicate that you're creating a new iOS mobile app and then click \"Start a New Realm App\":\n\n \n\nName the app \"RChat\" and click \"Create Realm Application\":\n\n \n\nCopy the \"App ID.\" You'll need to use this in your iOS app code:\n\n \n\n### Connect iOS App to Your Realm App\n\nThe SwiftUI entry point for the app is RChatApp.swift. This is where you define your link to your Realm application (named `app`) using the App ID from your new back end Realm app:\n\n``` swift\nimport SwiftUI\nimport RealmSwift\nlet app = RealmSwift.App(id: \"rchat-xxxxx\") // TODO: Set the Realm application ID\n@main\nstruct RChatApp: SwiftUI.App {\n @StateObject var state = AppState()\n\n var body: some Scene {\n WindowGroup {\n ContentView()\n .environmentObject(state)\n }\n }\n}\n```\n\nNote that we created an instance of AppState and pass it into our top-level view (ContentView) as an `environmentObject`. This is a common SwiftUI pattern for making state information available to every view without the need to explicitly pass it down every level of the view hierarchy:\n\n``` swift\nimport SwiftUI\nimport RealmSwift\nlet app = RealmSwift.App(id: \"rchat-xxxxx\") // TODO: Set the Realm application ID\n@main\nstruct RChatApp: SwiftUI.App {\n @StateObject var state = AppState()\n var body: some Scene {\n WindowGroup {\n ContentView()\n .environmentObject(state)\n }\n }\n}\n```\n\n### Application-Wide State: AppState\n\nViews can pass state up and down the hierarchy. However, it can simplify state management by making some state available application-wide. In this app, we centralize this app-wide state data storage and control in an instance of the AppState class.\n\nThere's a lot going on in `AppState.swift`, and you can view the full file in the repo.\n\nLet's start by looking at some of the `AppState` attributes:\n\n``` swift\nclass AppState: ObservableObject {\n ...\n var userRealm: Realm?\n var chatsterRealm: Realm?\n var user: User?\n ...\n}\n```\n\n`user` represents the user that's currently logged into the app (and Realm). We'll look at the User class later, but it includes the user's username, preferences, presence state, and a list of the conversations/chat rooms they're members of. If `user` is set to `nil`, then no user is logged in.\n\nWhen logged in, the app opens two realms:\n\n- `userRealm` lets the user **read and write just their own data** from the Atlas `User` collection.\n- `chatsterRealm` enables the user to **read data for every user** from the Atlas `Chatster` collection.\n\nThe app uses the Realm SDK to interact with the back end Realm application to perform actions such as logging into Realm. 
Those operations can take some time as they involve accessing resources over the internet, and so we don't want the app to sit busy-waiting for a response. Instead, we use Combine publishers and subscribers to handle these events. `loginPublisher`, `chatsterLoginPublisher`, `logoutPublisher`, `chatsterRealmPublisher`, and `userRealmPublisher` are publishers to handle logging in, logging out, and opening realms for a user:\n\n``` swift\nclass AppState: ObservableObject {\n ...\n let loginPublisher = PassthroughSubject()\n let chatsterLoginPublisher = PassthroughSubject()\n let logoutPublisher = PassthroughSubject()\n let chatsterRealmPublisher = PassthroughSubject()\n let userRealmPublisher = PassthroughSubject()\n ...\n}\n```\n\nWhen an `AppState` class is instantiated, the realms are initialized to `nil` and actions are assigned to each of the Combine publishers:\n\n``` swift\ninit() {\n _ = app.currentUser?.logOut()\n userRealm = nil\n chatsterRealm = nil\n initChatsterLoginPublisher()\n initChatsterRealmPublisher()\n initLoginPublisher()\n initUserRealmPublisher()\n initLogoutPublisher()\n}\n```\n\nWe'll later see that an event is sent to `loginPublisher` and `chatsterLoginPublisher` when a user has successfully logged into Realm. In `AppState`, we define what should be done when those events are received. For example, events received on `loginPublisher` trigger the opening of a realm with the partition set to `user=`, which in turn sends an event to `userRealmPublisher`:\n\n``` swift\nfunc initLoginPublisher() {\nloginPublisher\n .receive(on: DispatchQueue.main)\n .flatMap { user -> RealmPublishers.AsyncOpenPublisher in\n self.shouldIndicateActivity = true\n let realmConfig = user.configuration(partitionValue: \"user=\\(user.id)\")\n return Realm.asyncOpen(configuration: realmConfig)\n }\n .receive(on: DispatchQueue.main)\n .map {\n return $0\n }\n .subscribe(userRealmPublisher)\n .store(in: &self.cancellables)\n}\n```\n\nWhen the realm has been opened and the realm sent to `userRealmPublisher`, the Realm struct is stored in the `userRealm` attribute and the local `user` is initialized with the `User` object retrieved from the realm:\n\n``` swift\nfunc initUserRealmPublisher() {\n userRealmPublisher\n .sink(receiveCompletion: { result in\n if case let .failure(error) = result {\n self.error = \"Failed to log in and open user realm: \\(error.localizedDescription)\"\n }\n }, receiveValue: { realm in\n print(\"User Realm User file location: \\(realm.configuration.fileURL!.path)\")\n self.userRealm = realm\n self.user = realm.objects(User.self).first\n do {\n try realm.write {\n self.user?.presenceState = .onLine\n }\n } catch {\n self.error = \"Unable to open Realm write transaction\"\n }\n self.shouldIndicateActivity = false\n })\n .store(in: &cancellables)\n}\n```\n\n`chatsterLoginPublisher` behaves in the same way, but for a realm that stores `Chatster` objects:\n\n``` swift\nfunc initChatsterLoginPublisher() {\n chatsterLoginPublisher\n .receive(on: DispatchQueue.main)\n .flatMap { user -> RealmPublishers.AsyncOpenPublisher in\n self.shouldIndicateActivity = true\n let realmConfig = user.configuration(partitionValue: \"all-users=all-the-users\")\n return Realm.asyncOpen(configuration: realmConfig)\n }\n .receive(on: DispatchQueue.main)\n .map {\n return $0\n }\n .subscribe(chatsterRealmPublisher)\n .store(in: &self.cancellables)\n}\n\nfunc initChatsterRealmPublisher() {\n chatsterRealmPublisher\n .sink(receiveCompletion: { result in\n if case let .failure(error) = result {\n 
self.error = \"Failed to log in and open chatster realm: \\(error.localizedDescription)\"\n }\n }, receiveValue: { realm in\n print(\"Chatster Realm User file location: \\(realm.configuration.fileURL!.path)\")\n self.chatsterRealm = realm\n self.shouldIndicateActivity = false\n })\n .store(in: &cancellables)\n}\n```\n\nAfter logging out of Realm, we simply set the attributes to nil:\n\n``` swift\nfunc initLogoutPublisher() {\n logoutPublisher\n .receive(on: DispatchQueue.main)\n .sink(receiveCompletion: { _ in\n }, receiveValue: { _ in\n self.user = nil\n self.userRealm = nil\n self.chatsterRealm = nil\n })\n .store(in: &cancellables)\n}\n```\n\n### Enabling Email/Password Authentication in the Realm App\n\nAfter seeing what happens **after** a user has logged into Realm, we need to circle back and enable email/password authentication in the back end Realm app. Fortunately, it's straightforward to do.\n\nFrom the Realm UI, select \"Authentication\" from the lefthand menu, followed by \"Authentication Providers.\" Click the \"Edit\" button for \"Email/Password\":\n\n \n\nEnable the provider and select \"Automatically confirm users\" and \"Run a password reset function.\" Select \"New function\" and save without making any edits:\n\n \n\nDon't forget to click on \"REVIEW & DEPLOY\" whenever you've made a change to the back end Realm app.\n\n### Create `User` Document on User Registration\n\nWhen a new user registers, we need to create a `User` document in Atlas that will eventually synchronize with a `User` object in the iOS app. Realm provides authentication triggers that can automate this.\n\nSelect \"Triggers\" and then click on \"Add a Trigger\":\n\n \n\nSet the \"Trigger Type\" to \"Authentication,\" provide a name, set the \"Action Type\" to \"Create\" (user registration), set the \"Event Type\" to \"Function,\" and then select \"New Function\":\n\n \n\nName the function `createNewUserDocument` and add the code for the function:\n\n``` javascript\nexports = function({user}) {\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n const userCollection = db.collection(\"User\");\n const partition = `user=${user.id}`;\n const defaultLocation = context.values.get(\"defaultLocation\");\n const userPreferences = {\n displayName: \"\"\n };\n const userDoc = {\n _id: user.id,\n partition: partition,\n userName: user.data.email,\n userPreferences: userPreferences,\n location: context.values.get(\"defaultLocation\"),\n lastSeenAt: null,\n presence:\"Off-Line\",\n conversations: ]\n };\n return userCollection.insertOne(userDoc)\n .then(result => {\n console.log(`Added User document with _id: ${result.insertedId}`);\n }, error => {\n console.log(`Failed to insert User document: ${error}`);\n });\n};\n```\n\nNote that we set the `partition` to `user=`, which matches the partition used when the iOS app opens the User realm.\n\n\"Save\" then \"REVIEW & DEPLOY.\"\n\n### Define Realm Schema\n\nRefer to [Building a Mobile Chat App Using Realm \u2013 Data Architecture to understand more about the app's schema and partitioning rules. This article skips the analysis phase and just configures the Realm schema.\n\nBrowse to the \"Rules\" section in the Realm UI and click on \"Add Collection.\" Set \"Database Name\" to `RChat` and \"Collection Name\" to `User`. 
We won't be accessing the `User` collection directly through Realm, so don't select a \"Permissions Template.\" Click \"Add Collection\":\n\n \n\nAt this point, I'll stop reminding you to click \"REVIEW & DEPLOY!\"\n\nSelect \"Schema,\" paste in this schema, and then click \"SAVE\":\n\n``` javascript\n{\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"conversations\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"displayName\": {\n \"bsonType\": \"string\"\n },\n \"id\": {\n \"bsonType\": \"string\"\n },\n \"members\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"membershipStatus\": {\n \"bsonType\": \"string\"\n },\n \"userName\": {\n \"bsonType\": \"string\"\n }\n },\n \"required\": \n \"membershipStatus\",\n \"userName\"\n ],\n \"title\": \"Member\"\n }\n },\n \"unreadCount\": {\n \"bsonType\": \"long\"\n }\n },\n \"required\": [\n \"unreadCount\",\n \"id\",\n \"displayName\"\n ],\n \"title\": \"Conversation\"\n }\n },\n \"lastSeenAt\": {\n \"bsonType\": \"date\"\n },\n \"partition\": {\n \"bsonType\": \"string\"\n },\n \"presence\": {\n \"bsonType\": \"string\"\n },\n \"userName\": {\n \"bsonType\": \"string\"\n },\n \"userPreferences\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"avatarImage\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"date\": {\n \"bsonType\": \"date\"\n },\n \"picture\": {\n \"bsonType\": \"binData\"\n },\n \"thumbNail\": {\n \"bsonType\": \"binData\"\n }\n },\n \"required\": [\n \"_id\",\n \"date\"\n ],\n \"title\": \"Photo\"\n },\n \"displayName\": {\n \"bsonType\": \"string\"\n }\n },\n \"required\": [],\n \"title\": \"UserPreferences\"\n }\n },\n \"required\": [\n \"_id\",\n \"partition\",\n \"userName\",\n \"presence\"\n ],\n \"title\": \"User\"\n}\n```\n\n \n\nRepeat for the `Chatster` schema:\n\n``` javascript\n{\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"avatarImage\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"date\": {\n \"bsonType\": \"date\"\n },\n \"picture\": {\n \"bsonType\": \"binData\"\n },\n \"thumbNail\": {\n \"bsonType\": \"binData\"\n }\n },\n \"required\": [\n \"_id\",\n \"date\"\n ],\n \"title\": \"Photo\"\n },\n \"displayName\": {\n \"bsonType\": \"string\"\n },\n \"lastSeenAt\": {\n \"bsonType\": \"date\"\n },\n \"partition\": {\n \"bsonType\": \"string\"\n },\n \"presence\": {\n \"bsonType\": \"string\"\n },\n \"userName\": {\n \"bsonType\": \"string\"\n }\n },\n \"required\": [\n \"_id\",\n \"partition\",\n \"presence\",\n \"userName\"\n ],\n \"title\": \"Chatster\"\n}\n```\n\nAnd for the `ChatMessage` collection:\n\n``` javascript\n{\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"author\": {\n \"bsonType\": \"string\"\n },\n \"image\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"string\"\n },\n \"date\": {\n \"bsonType\": \"date\"\n },\n \"picture\": {\n \"bsonType\": \"binData\"\n },\n \"thumbNail\": {\n \"bsonType\": \"binData\"\n }\n },\n \"required\": [\n \"_id\",\n \"date\"\n ],\n \"title\": \"Photo\"\n },\n \"location\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"double\"\n }\n },\n \"partition\": {\n \"bsonType\": \"string\"\n },\n \"text\": {\n \"bsonType\": \"string\"\n },\n \"timestamp\": {\n 
\"bsonType\": \"date\"\n }\n },\n \"required\": [\n \"_id\",\n \"partition\",\n \"text\",\n \"timestamp\"\n ],\n \"title\": \"ChatMessage\"\n}\n```\n\n### Enable Realm Sync\n\nRealm Sync is used to synchronize objects between instances of the iOS app (and we'll extend this app to also include Android). It also syncs those objects with Atlas collections. Note that there are three options to create a Realm schema:\n\n1. Manually code the schema as a JSON schema document.\n2. Derive the schema from existing data stored in Atlas. (We don't yet have any data and so this isn't an option here.)\n3. Derive the schema from the Realm objects used in the mobile app.\n\nWe've already specified the schema and so will stick to the first option.\n\nSelect \"Sync\" and then select your Atlas cluster. Set the \"Partition Key\" to the `partition` attribute (it appears in the list as it's already in the schema for all three collections), and the rules for whether a user can sync with a given partition:\n\n \n\nThe \"Read\" rule controls whether a user can establish one-way read-only sync relationship to the mobile app for a given user and partition. In this case, the rule delegates this to a Realm function named `canReadPartition`:\n\n``` json\n{\n \"%%true\": {\n \"%function\": {\n \"arguments\": [\n \"%%partition\"\n ],\n \"name\": \"canReadPartition\"\n }\n }\n}\n```\n\nThe \"Write\" rule delegates to the `canWritePartition`:\n\n``` json\n{\n \"%%true\": {\n \"%function\": {\n \"arguments\": [\n \"%%partition\"\n ],\n \"name\": \"canWritePartition\"\n }\n }\n}\n```\n\nOnce more, we've already seen those functions in [Building a Mobile Chat App Using Realm \u2013 Data Architecture but I'll include the code here for completeness.\n\ncanReadPartition:\n\n``` javascript\nexports = function(partition) {\n console.log(`Checking if can sync a read for partition = ${partition}`);\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n const chatsterCollection = db.collection(\"Chatster\");\n const userCollection = db.collection(\"User\");\n const chatCollection = db.collection(\"ChatMessage\");\n const user = context.user;\n let partitionKey = \"\";\n let partitionVale = \"\";\n const splitPartition = partition.split(\"=\");\n if (splitPartition.length == 2) {\n partitionKey = splitPartition0];\n partitionValue = splitPartition[1];\n console.log(`Partition key = ${partitionKey}; partition value = ${partitionValue}`);\n } else {\n console.log(`Couldn't extract the partition key/value from ${partition}`);\n return false;\n }\n switch (partitionKey) {\n case \"user\":\n console.log(`Checking if partitionValue(${partitionValue}) matches user.id(${user.id}) \u2013 ${partitionKey === user.id}`);\n return partitionValue === user.id;\n case \"conversation\":\n console.log(`Looking up User document for _id = ${user.id}`);\n return userCollection.findOne({ _id: user.id })\n .then (userDoc => {\n if (userDoc.conversations) {\n let foundMatch = false;\n userDoc.conversations.forEach( conversation => {\n console.log(`Checking if conversaion.id (${conversation.id}) === ${partitionValue}`)\n if (conversation.id === partitionValue) {\n console.log(`Found matching conversation element for id = ${partitionValue}`);\n foundMatch = true;\n }\n });\n if (foundMatch) {\n console.log(`Found Match`);\n return true;\n } else {\n console.log(`Checked all of the user's conversations but found none with id == ${partitionValue}`);\n return false;\n }\n } else {\n console.log(`No conversations attribute in User doc`);\n return 
false;\n }\n }, error => {\n console.log(`Unable to read User document: ${error}`);\n return false;\n });\n case \"all-users\":\n console.log(`Any user can read all-users partitions`);\n return true;\n default:\n console.log(`Unexpected partition key: ${partitionKey}`);\n return false;\n }\n};\n```\n\n[canWritePartition:\n\n``` javascript\nexports = function(partition) {\nconsole.log(`Checking if can sync a write for partition = ${partition}`);\nconst db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\nconst chatsterCollection = db.collection(\"Chatster\");\nconst userCollection = db.collection(\"User\");\nconst chatCollection = db.collection(\"ChatMessage\");\nconst user = context.user;\nlet partitionKey = \"\";\nlet partitionVale = \"\";\nconst splitPartition = partition.split(\"=\");\nif (splitPartition.length == 2) {\n partitionKey = splitPartition0];\n partitionValue = splitPartition[1];\n console.log(`Partition key = ${partitionKey}; partition value = ${partitionValue}`);\n} else {\n console.log(`Couldn't extract the partition key/value from ${partition}`);\n return false;\n}\n switch (partitionKey) {\n case \"user\":\n console.log(`Checking if partitionKey(${partitionValue}) matches user.id(${user.id}) \u2013 ${partitionKey === user.id}`);\n return partitionValue === user.id;\n case \"conversation\":\n console.log(`Looking up User document for _id = ${user.id}`);\n return userCollection.findOne({ _id: user.id })\n .then (userDoc => {\n if (userDoc.conversations) {\n let foundMatch = false;\n userDoc.conversations.forEach( conversation => {\n console.log(`Checking if conversaion.id (${conversation.id}) === ${partitionValue}`)\n if (conversation.id === partitionValue) {\n console.log(`Found matching conversation element for id = ${partitionValue}`);\n foundMatch = true;\n }\n });\n if (foundMatch) {\n console.log(`Found Match`);\n return true;\n } else {\n console.log(`Checked all of the user's conversations but found none with id == ${partitionValue}`);\n return false;\n }\n } else {\n console.log(`No conversations attribute in User doc`);\n return false;\n }\n }, error => {\n console.log(`Unable to read User document: ${error}`);\n return false;\n });\n case \"all-users\":\n console.log(`No user can write to an all-users partitions`);\n return false;\n default:\n console.log(`Unexpected partition key: ${partitionKey}`);\n return false;\n }\n};\n```\n\nTo create these functions, select \"Functions\" and click \"Create New Function.\" Make sure you type the function name precisely, set \"Authentication\" to \"System,\" and turn on the \"Private\" switch (which means it can't be called directly from external services such as our mobile app):\n\n \n\n### Linking User and Chatster Documents\n\nAs described in [Building a Mobile Chat App Using Realm \u2013 Data Architecture, there are relationships between different `User` and `Chatster` documents. Now that we've defined the schemas and enabled Realm Sync, it's a convenient time to add the Realm function and database trigger to maintain those relationships.\n\nCreate a Realm function named `userDocWrittenTo`, set \"Authentication\" to \"System,\" and make it private. 
This article is aiming to focus on the iOS app more than the back end Realm app, and so we won't delve into this code:\n\n``` javascript\nexports = function(changeEvent) {\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n const chatster = db.collection(\"Chatster\");\n const userCollection = db.collection(\"User\");\n const docId = changeEvent.documentKey._id;\n const user = changeEvent.fullDocument;\n let conversationsChanged = false;\n console.log(`Mirroring user for docId=${docId}. operationType = ${changeEvent.operationType}`);\n switch (changeEvent.operationType) {\n case \"insert\":\n case \"replace\":\n case \"update\":\n console.log(`Writing data for ${user.userName}`);\n let chatsterDoc = {\n _id: user._id,\n partition: \"all-users=all-the-users\",\n userName: user.userName,\n lastSeenAt: user.lastSeenAt,\n presence: user.presence\n };\n if (user.userPreferences) {\n const prefs = user.userPreferences;\n chatsterDoc.displayName = prefs.displayName;\n if (prefs.avatarImage && prefs.avatarImage._id) {\n console.log(`Copying avatarImage`);\n chatsterDoc.avatarImage = prefs.avatarImage;\n console.log(`id of avatarImage = ${prefs.avatarImage._id}`);\n }\n }\n chatster.replaceOne({ _id: user._id }, chatsterDoc, { upsert: true })\n .then (() => {\n console.log(`Wrote Chatster document for _id: ${docId}`);\n }, error => {\n console.log(`Failed to write Chatster document for _id=${docId}: ${error}`);\n });\n\n if (user.conversations && user.conversations.length > 0) {\n for (i = 0; i < user.conversations.length; i++) {\n let membersToAdd = ];\n if (user.conversations[i].members.length > 0) {\n for (j = 0; j < user.conversations[i].members.length; j++) {\n if (user.conversations[i].members[j].membershipStatus == \"User added, but invite pending\") {\n membersToAdd.push(user.conversations[i].members[j].userName);\n user.conversations[i].members[j].membershipStatus = \"Membership active\";\n conversationsChanged = true;\n }\n }\n }\n if (membersToAdd.length > 0) {\n userCollection.updateMany({userName: {$in: membersToAdd}}, {$push: {conversations: user.conversations[i]}})\n .then (result => {\n console.log(`Updated ${result.modifiedCount} other User documents`);\n }, error => {\n console.log(`Failed to copy new conversation to other users: ${error}`);\n });\n }\n }\n }\n if (conversationsChanged) {\n userCollection.updateOne({_id: user._id}, {$set: {conversations: user.conversations}});\n }\n break;\n case \"delete\":\n chatster.deleteOne({_id: docId})\n .then (() => {\n console.log(`Deleted Chatster document for _id: ${docId}`);\n }, error => {\n console.log(`Failed to delete Chatster document for _id=${docId}: ${error}`);\n });\n break;\n }\n};\n```\n\nSet up a database trigger to execute the new function whenever anything in the `User` collection changes:\n\n \n\n### Registering and Logging in From the iOS App\n\nWe've now created enough of the back end Realm app that mobile apps can now register new Realm users and use them to log into the app.\n\nThe app's top-level SwiftUI view is [ContentView, which decides which sub-view to show based on whether our `AppState` environment object indicates that a user is logged in or not:\n\n``` swift\n@EnvironmentObject var state: AppState\n...\nif state.loggedIn {\n if (state.user != nil) && !state.user!.isProfileSet || showingProfileView {\n SetProfileView(isPresented: $showingProfileView)\n } else {\n ConversationListView()\n .navigationBarTitle(\"Chats\", displayMode: .inline)\n .navigationBarItems(\n trailing: state.loggedIn 
&& !state.shouldIndicateActivity ? UserAvatarView(\n photo: state.user?.userPreferences?.avatarImage,\n online: true) { showingProfileView.toggle() } : nil\n )\n }\n} else {\n LoginView()\n}\n...\n```\n\nWhen first run, no user is logged in and so `LoginView` is displayed.\n\nNote that `AppState.loggedIn` checks whether a user is currently logged into the Realm `app`:\n\n``` swift\nvar loggedIn: Bool {\n app.currentUser != nil && app.currentUser?.state == .loggedIn \n && userRealm != nil && chatsterRealm != nil\n}\n```\n\nThe UI for LoginView contains cells to provide the user's email address and password, a radio button to indicate whether this is a new user, and a button to register or log in a user:\n\n \n\nClicking the button executes one of two functions:\n\n``` swift\n...\nCallToActionButton(\n title: newUser ? \"Register User\" : \"Log In\",\n action: { self.userAction(username: self.username, password: self.password) })\n...\nprivate func userAction(username: String, password: String) {\n state.shouldIndicateActivity = true\n if newUser {\n signup(username: username, password: password)\n } else {\n login(username: username, password: password)\n }\n}\n```\n\n`signup` makes an asynchronous call to the Realm SDK to register the new user. Through a Combine pipeline, `signup` receives an event when the registration completes, which triggers it to invoke the `login` function:\n\n``` swift\nprivate func signup(username: String, password: String) {\n if username.isEmpty || password.isEmpty {\n state.shouldIndicateActivity = false\n return\n }\n self.state.error = nil\n app.emailPasswordAuth.registerUser(email: username, password: password)\n .receive(on: DispatchQueue.main)\n .sink(receiveCompletion: {\n state.shouldIndicateActivity = false\n switch $0 {\n case .finished:\n break\n case .failure(let error):\n self.state.error = error.localizedDescription\n }\n }, receiveValue: {\n self.state.error = nil\n login(username: username, password: password)\n })\n .store(in: &state.cancellables)\n}\n```\n\nThe `login` function uses the Realm SDK to log in the user asynchronously. If/when the Realm login succeeds, the Combine pipeline sends the Realm user to the `chatsterLoginPublisher` and `loginPublisher` publishers (recall that we've seen how those are handled within the `AppState` class):\n\n``` swift\nprivate func login(username: String, password: String) {\n if username.isEmpty || password.isEmpty {\n state.shouldIndicateActivity = false\n return\n }\n self.state.error = nil\n app.login(credentials: .emailPassword(email: username, password: password))\n .receive(on: DispatchQueue.main)\n .sink(receiveCompletion: {\n state.shouldIndicateActivity = false\n switch $0 {\n case .finished:\n break\n case .failure(let error):\n self.state.error = error.localizedDescription\n }\n }, receiveValue: {\n self.state.error = nil\n state.chatsterLoginPublisher.send($0)\n state.loginPublisher.send($0)\n })\n .store(in: &state.cancellables)\n}\n```\n\n### Saving the User Profile\n\nOn being logged in for the first time, the user is presented with SetProfileView. (They can also return here later by clicking on their avatar.) 
This is a SwiftUI sheet where the user can set their profile and preferences by interacting with the UI and then clicking \"Save User Profile\":\n\n \n\nWhen the view loads, the UI is populated with any existing profile information found in the `User` object in the `AppState` environment object:\n\n``` swift\n...\n@EnvironmentObject var state: AppState\n...\n.onAppear { initData() }\n...\nprivate func initData() {\n displayName = state.user?.userPreferences?.displayName ?? \"\"\n photo = state.user?.userPreferences?.avatarImage\n}\n```\n\nAs the user updates the UI elements, the Realm `User` object isn't changed. It's only when they click \"Save User Profile\" that we update the `User` object. Note that it uses the `userRealm` that was initialized when the user logged in to open a Realm write transaction before making the change:\n\n``` swift\n...\n@EnvironmentObject var state: AppState\n...\nCallToActionButton(title: \"Save User Profile\", action: saveProfile)\n...\nprivate func saveProfile() {\n if let realm = state.userRealm {\n state.shouldIndicateActivity = true\n do {\n try realm.write {\n state.user?.userPreferences?.displayName = displayName\n if photoAdded {\n guard let newPhoto = photo else {\n print(\"Missing photo\")\n state.shouldIndicateActivity = false\n return\n }\n state.user?.userPreferences?.avatarImage = newPhoto\n }\n state.user?.presenceState = .onLine\n }\n } catch {\n state.error = \"Unable to open Realm write transaction\"\n }\n }\n state.shouldIndicateActivity = false\n}\n```\n\nOnce saved to the local realm, Realm Sync copies changes made to the `User` object to the associated `User` document in Atlas.\n\n### List of Conversations\n\nOnce the user has logged in and set up their profile information, they're presented with the `ConversationListView`:\n\n``` swift\nif state.loggedIn {\n if (state.user != nil) && !state.user!.isProfileSet || showingProfileView {\n SetProfileView(isPresented: $showingProfileView)\n } else {\n ConversationListView()\n .navigationBarTitle(\"Chats\", displayMode: .inline)\n .navigationBarItems(\n trailing: state.loggedIn && !state.shouldIndicateActivity ? UserAvatarView(\n photo: state.user?.userPreferences?.avatarImage,\n online: true) { showingProfileView.toggle() } : nil\n )\n }\n} else {\n LoginView()\n}\n```\n\nConversationListView displays a list of all the conversations that the user is currently a member of (initially none) by looping over `conversations` within their `User` Realm object:\n\n``` swift\nif let conversations = state.user?.conversations.freeze().sorted(by: sortDescriptors) {\n List {\n ForEach(conversations) { conversation in\n Button(action: {\n self.conversation = conversation\n showConversation.toggle()\n }) {\n ConversationCardView(\n conversation: conversation,\n lastSync: lastSync)\n }\n }\n }\n ...\n}\n```\n\nAt any time, another user can include you in a new group conversation. This view needs to reflect those changes as they happen:\n\n \n\nWhen the other user adds us to a conversation, our `User` document is updated automatically through the magic of Realm Sync and our Realm trigger; but we need to give SwiftUI a nudge to refresh the current view. We do that by registering for Realm notifications and updating the `lastSync` state variable on each change. 
We register for notifications when the view appears and deregister when it disappears:\n\n``` swift\n@State var lastSync: Date?\n...\nvar body: some View {\n VStack {\n ...\n if let lastSync = lastSync {\n LastSync(date: lastSync)\n }\n ...\n }\n ...\n .onAppear { watchRealms() }\n .onDisappear { stopWatching() }\n}\n\nprivate func watchRealms() {\n if let userRealm = state.userRealm {\n realmUserNotificationToken = userRealm.observe {_, _ in\n lastSync = Date()\n }\n }\n if let chatsterRealm = state.chatsterRealm {\n realmChatsterNotificationToken = chatsterRealm.observe { _, _ in\n lastSync = Date()\n }\n }\n}\n\nprivate func stopWatching() {\n if let userToken = realmUserNotificationToken {\n userToken.invalidate()\n }\n if let chatsterToken = realmChatsterNotificationToken {\n chatsterToken.invalidate()\n }\n}\n```\n\n### Creating New Conversations\n\nNewConversationView is another view that lets the user provide a number of details which are then saved to Realm when the \"Save\" button is tapped. What's new is that it uses Realm to search for all users that match a filter pattern:\n\n``` swift\nprivate func searchUsers() {\n var candidateChatsters: Results\n if let chatsterRealm = state.chatsterRealm {\n let allChatsters = chatsterRealm.objects(Chatster.self)\n if candidateMember == \"\" {\n candidateChatsters = allChatsters\n } else {\n let predicate = NSPredicate(format: \"userName CONTAINScd] %@\", candidateMember)\n candidateChatsters = allChatsters.filter(predicate)\n }\n candidateMembers = []\n candidateChatsters.forEach { chatster in\n if !members.contains(chatster.userName) && chatster.userName != state.user?.userName {\n candidateMembers.append(chatster.userName)\n }\n }\n }\n}\n```\n\n### Conversation Status\n\n \n\nWhen the status of a conversation changes (users go online/offline or new messages are received), the card displaying the conversation details should update.\n\nWe already have a Realm function to set the `presence` status in `Chatster` documents/objects when users log on or off. All `Chatster` objects are readable by all users, and so [ConversationCardContentsView can already take advantage of that information.\n\nThe `conversation.unreadCount` is part of the `User` object and so we need another Realm trigger to update that whenever a new chat message is posted to a conversation.\n\nWe add a new Realm function `chatMessageChange` that's configured as private and with \"System\" authentication (just like our other functions). 
This is the function code that will increment the `unreadCount` for all `User` documents for members of the conversation:\n\n``` javascript\nexports = function(changeEvent) {\n if (changeEvent.operationType != \"insert\") {\n console.log(`ChatMessage ${changeEvent.operationType} event \u2013 currently ignored.`);\n return;\n }\n\n console.log(`ChatMessage Insert event being processed`);\n let userCollection = context.services.get(\"mongodb-atlas\").db(\"RChat\").collection(\"User\");\n let chatMessage = changeEvent.fullDocument;\n let conversation = \"\";\n\n if (chatMessage.partition) {\n const splitPartition = chatMessage.partition.split(\"=\");\n if (splitPartition.length == 2) {\n conversation = splitPartition1];\n console.log(`Partition/conversation = ${conversation}`);\n } else {\n console.log(\"Couldn't extract the conversation from partition ${chatMessage.partition}\");\n return;\n }\n } else {\n console.log(\"partition not set\");\n return;\n }\n\n const matchingUserQuery = {\n conversations: {\n $elemMatch: {\n id: conversation\n }\n }\n };\n\n const updateOperator = {\n $inc: {\n \"conversations.$[element].unreadCount\": 1\n }\n };\n\n const arrayFilter = {\n arrayFilters:[\n {\n \"element.id\": conversation\n }\n ]\n };\n\n userCollection.updateMany(matchingUserQuery, updateOperator, arrayFilter)\n .then ( result => {\n console.log(`Matched ${result.matchedCount} User docs; updated ${result.modifiedCount}`);\n }, error => {\n console.log(`Failed to match and update User docs: ${error}`);\n });\n};\n```\n\nThat function should be invoked by a new Realm database trigger (`ChatMessageChange`) to fire whenever a document is inserted into the `RChat.ChatMessage` collection.\n\n### Within the Chat Room\n\n \n\n[ChatRoomView has a lot of similarities with `ConversationListView`, but with one fundamental difference. Each conversation/chat room has its own partition, and so when opening a conversation, you need to open a new realm and observe for changes in it:\n\n``` swift\n@EnvironmentObject var state: AppState\n...\nvar body: some View {\n VStack {\n ...\n }\n .onAppear { loadChatRoom() }\n .onDisappear { closeChatRoom() }\n}\n\nprivate func loadChatRoom() {\n clearUnreadCount()\n if let user = app.currentUser, let conversation = conversation {\n scrollToBottom()\n self.state.shouldIndicateActivity = true\n let realmConfig = user.configuration(partitionValue: \"conversation=\\(conversation.id)\")\n Realm.asyncOpen(configuration: realmConfig)\n .receive(on: DispatchQueue.main)\n .sink(receiveCompletion: { result in\n if case let .failure(error) = result {\n self.state.error = \"Failed to open ChatMessage realm: \\(error.localizedDescription)\"\n state.shouldIndicateActivity = false\n }\n }, receiveValue: { realm in\n chatRealm = realm\n chats = realm.objects(ChatMessage.self).sorted(byKeyPath: \"timestamp\")\n realmChatsNotificationToken = realm.observe {_, _ in\n scrollToBottom()\n clearUnreadCount()\n lastSync = Date()\n }\n scrollToBottom()\n state.shouldIndicateActivity = false\n })\n .store(in: &self.state.cancellables)\n }\n}\n```\n\nNote that we only open a `Conversation` realm when the user opens the associated view because having too many realms open concurrently can exhaust resources. 
It's also important that we stop observing the realm by setting it to `nil` when leaving the view:\n\n``` swift\n@EnvironmentObject var state: AppState\n...\nvar body: some View {\n VStack {\n ...\n }\n .onAppear { loadChatRoom() }\n .onDisappear { closeChatRoom() }\n}\n\nprivate func closeChatRoom() {\n clearUnreadCount()\n if let token = realmChatsterNotificationToken {\n token.invalidate()\n }\n if let token = realmChatsNotificationToken {\n token.invalidate()\n }\n chatRealm = nil\n}\n```\n\nTo send a message, all the app needs to do is to add the new chat message to Realm. Realm Sync will then copy it to Atlas, where it is then synced to the other users:\n\n``` swift\nprivate func sendMessage(text: String, photo: Photo?, location: Double]) {\n if let conversation = conversation {\n let chatMessage = ChatMessage(conversationId: conversation.id,\n author: state.user?.userName ?? \"Unknown\",\n text: text,\n image: photo,\n location: location)\n if let chatRealm = chatRealm {\n do {\n try chatRealm.write {\n chatRealm.add(chatMessage)\n }\n } catch {\n state.error = \"Unable to open Realm write transaction\"\n }\n } else {\n state.error = \"Cannot save chat message as realm is not set\"\n }\n }\n}\n```\n\n## Summary\n\nIn this article, we've gone through the key steps you need to take when building a mobile app using Realm, including:\n\n- Managing the user lifecycle: registering, authenticating, logging in, and logging out.\n- Managing and storing user profile information.\n- Adding objects to Realm.\n- Performing searches on Realm data.\n- Syncing data between your mobile apps and with MongoDB Atlas.\n- Reacting to data changes synced from other devices.\n- Adding some back end magic using Realm triggers and functions.\n\nThere's a lot of code and functionality that hasn't been covered in this article, and so it's worth looking through the rest of the app to see how to use features such as these from a SwiftUI iOS app:\n\n- Location data\n- Maps\n- Camera and photo library\n- Actions when minimizing your app\n- Notifications\n\nWe wrote the iOS version of the app first, but we plan on adding an Android (Kotlin) version soon \u2013 keep checking the [developer hub and the repo for updates.\n\n## References\n\n- GitHub Repo for this app, as it stood when this article was written\n- Read Building a Mobile Chat App Using Realm \u2013 Data Architecture to understand the data model and partitioning strategy behind the RChat app\n- If you're building your first SwiftUI/Realm app, then check out Build Your First iOS Mobile App Using Realm, SwiftUI, & Combine\n- GitHub Repo for Realm-Cocoa SDK\n- Realm Cocoa SDK documentation\n- MongoDB's Realm documentation\n\n>\n>\n>If you have questions, please head to our developer community website where the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.\n>\n>\n", "format": "md", "metadata": {"tags": ["Realm", "Swift", "iOS", "Mobile"], "pageDescription": "How to incorporate Realm into your iOS App. 
Building a chat app with SwiftUI and Realm-Cocoa", "contentType": "Tutorial"}, "title": "Building a Mobile Chat App Using Realm \u2013 Integrating Realm into Your App", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/go/field-level-encryption-fle-mongodb-golang", "action": "created", "body": "# Client-Side Field Level Encryption (CSFLE) in MongoDB with Golang\n\nOne of the many great things about MongoDB is how secure you can make\nyour data in it. In addition to network and user-based rules, you have\nencryption of your data at rest, encryption over the wire, and now\nrecently, client-side encryption known as client-side field level\nencryption (CSFLE).\n\nSo, what exactly is client-side field level encryption (CSFLE) and how\ndo you use it?\n\nWith field level encryption, you can choose to encrypt certain fields\nwithin a document, client-side, while leaving other fields as plain\ntext. This is particularly useful because when viewing a CSFLE document\nwith the CLI,\nCompass, or directly within\nAltas, the encrypted fields will\nnot be human readable. When they are not human readable, if the\ndocuments should get into the wrong hands, those fields will be useless\nto the malicious user. However, when using the MongoDB language drivers\nwhile using the same encryption keys, those fields can be decrypted and\nare queryable within the application.\n\nIn this quick start themed tutorial, we're going to see how to use\nMongoDB field level\nencryption\nwith the Go programming language (Golang). In particular, we're going to\nbe exploring automatic encryption rather than manual encryption.\n\n## The Requirements\n\nThere are a few requirements that must be met prior to attempting to use\nCSFLE with the Go driver.\n\n- MongoDB Atlas 4.2+\n- MongoDB Go driver 1.2+\n- The libmongocrypt\n library installed\n- The\n mongocryptd\n binary installed\n\n>\n>\n>This tutorial will focus on automatic encryption. While this tutorial\n>will use MongoDB Atlas, you're\n>going to need to be using version 4.2 or newer for MongoDB Atlas or\n>MongoDB Enterprise Edition. You will not be able to use automatic field\n>level encryption with MongoDB Community Edition.\n>\n>\n\nThe assumption is that you're familiar with developing Go applications\nthat use MongoDB. If you want a refresher, take a look at the quick\nstart\nseries\nthat I published on the topic.\n\nTo use field level encryption, you're going to need a little more than\njust having an appropriate version of MongoDB and the MongoDB Go driver.\nWe'll need **libmongocrypt**, which is a companion library for\nencryption in the MongoDB drivers, and **mongocryptd**, which is a\nbinary for parsing automatic encryption rules based on the extended JSON\nformat.\n\n## Installing the Libmongocrypt and Mongocryptd Binaries and Libraries\n\nBecause of the **libmongocrypt** and **mongocryptd** requirements, it's\nworth reviewing how to install and configure them. 
We'll be exploring\ninstallation on macOS, but refer to the documentation for\nlibmongocrypt and\nmongocryptd\nfor your particular operating system.\n\nThere are a few solutions torward installing the **libmongocrypt**\nlibrary on macOS, the easiest being with Homebrew.\nIf you've got Homebrew installed, you can install **libmongocrypt** with\nthe following command:\n\n``` bash\nbrew install mongodb/brew/libmongocrypt\n```\n\nJust like that, the MongoDB Go driver will be able to handle encryption.\nFurther explanation of the instructions can be found in the\ndocumentation.\n\nBecause we want to do automatic encryption with the driver using an\nextended JSON schema, we need **mongocryptd**, a binary that ships with\nMongoDB Enterprise Edition. The **mongocryptd** binary needs to exist on\nthe computer or server where the Go application intends to run. It is\nnot a development dependency like **libmongocrypt**, but a runtime\ndependency.\n\nYou'll want to consult the\ndocumentation\non how to obtain the **mongocryptd** binary as each operating system has\ndifferent steps.\n\nFor macOS, you'll want to download MongoDB Enterprise Edition from the\nMongoDB Download\nCenter.\nYou can refer to the Enterprise Edition installation\ninstructions\nfor macOS to install, but the gist of the installation involves\nextracting the TAR file and moving the files to the appropriate\ndirectory.\n\nBy this point, all the appropriate components for field level encryption\nshould be installed or available.\n\n## Create a Data Key in MongoDB for Encrypting and Decrypting Document Fields\n\nBefore we can start encrypting and decrypting fields within our\ndocuments, we need to establish keys to do the bulk of the work. This\nmeans defining our key vault location within MongoDB and the Key\nManagement System (KMS) we wish to use for decrypting the data\nencryption keys.\n\nThe key vault is a collection that we'll create within MongoDB for\nstoring encrypted keys for our document fields. The primary key within\nthe KMS will decrypt the keys within the key vault.\n\nFor this particular tutorial, we're going to use a Local Key Provider\nfor our KMS. It is worth looking into something like AWS\nKMS or similar, something we'll explore in\na future tutorial, as an alternative to a Local Key Provider.\n\nOn your computer, create a new Go project with the following **main.go**\nfile:\n\n``` go\npackage main\n\nimport (\n \"context\"\n \"crypto/rand\"\n \"fmt\"\n \"io/ioutil\"\n \"log\"\n \"os\"\n\n \"go.mongodb.org/mongo-driver/bson\"\n \"go.mongodb.org/mongo-driver/mongo\"\n \"go.mongodb.org/mongo-driver/mongo/options\"\n)\n\nvar (\n ctx = context.Background()\n kmsProviders mapstring]map[string]interface{}\n schemaMap bson.M\n)\n\nfunc createDataKey() {}\nfunc createEncryptedClient() *mongo.Client {}\nfunc readSchemaFromFile(file string) bson.M {}\n\nfunc main() {}\n```\n\nYou'll need to install the MongoDB Go driver to proceed. To learn how to\ndo this, take a moment to check out my previous tutorial titled [Quick\nStart: Golang & MongoDB - Starting and\nSetup.\n\nIn the above code, we have a few variables defined as well as a few\nfunctions. 
We're going to focus on the `kmsProviders` variable and the\n`createDataKey` function for this particular part of the tutorial.\n\nTake a look at the following `createDataKey` function:\n\n``` go\nfunc createDataKey() {\n kvClient, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv(\"ATLAS_URI\")))\n if err != nil {\n log.Fatal(err)\n }\n clientEncryptionOpts := options.ClientEncryption().SetKeyVaultNamespace(\"keyvault.datakeys\").SetKmsProviders(kmsProviders)\n clientEncryption, err := mongo.NewClientEncryption(kvClient, clientEncryptionOpts)\n if err != nil {\n log.Fatal(err)\n }\n defer clientEncryption.Close(ctx)\n _, err = clientEncryption.CreateDataKey(ctx, \"local\", options.DataKey().SetKeyAltNames(]string{\"example\"}))\n if err != nil {\n log.Fatal(err)\n }\n}\n```\n\nIn the above `createDataKey` function, we are first connecting to\nMongoDB. The MongoDB connection string is defined by the environment\nvariable `ATLAS_URI` in the above code. While you could hard-code this\nconnection string or store it in a configuration file, for security\nreasons, it makes a lot of sense to use environment variables instead.\n\nIf the connection was successful, we need to define the key vault\nnamespace and the KMS provider as part of the encryption configuration\noptions. The namespace is composed of the database name followed by the\ncollection name. This is where the key information will be stored. The\n`kmsProviders` map, which will be defined later, will have local key\ninformation.\n\nExecuting the `CreateDataKey` function will create the key information\nwithin MongoDB as a document.\n\nWe are choosing to specify an alternate key name of `example` so that we\ndon't have to refer to the data key by its `_id` when using it with our\ndocuments. Instead, we'll be able to use the unique alternate name which\ncould follow a special naming convention. It is important to note that\nthe alternate key name is only useful when using the\n`AEAD_AES_256_CBC_HMAC_SHA_512-Random`, something we'll explore later in\nthis tutorial.\n\nTo use the `createDataKey` function, we can make some modifications to\nthe `main` function:\n\n``` go\nfunc main() {\n localKey := make([]byte, 96)\n if _, err := rand.Read(localKey); err != nil {\n log.Fatal(err)\n }\n kmsProviders = map[string]map[string]interface{}{\n \"local\": {\n \"key\": localKey,\n },\n }\n createDataKey()\n}\n```\n\nIn the above code, we are generating a random key. This random key is\nadded to the `kmsProviders` map that we were using within the\n`createDataKey` function.\n\n>\n>\n>It is insecure to have your local key stored within the application or\n>on the same server. In production, consider using AWS KMS or accessing\n>your local key through a separate request before adding it to the Local\n>Key Provider.\n>\n>\n\nIf you ran the code so far, you'd end up with a `keyvault` database and\na `datakeys` collection which has a document of a key with an alternate\nname. 
That document would look something like this:\n\n``` none\n{\n \"_id\": UUID(\"27a51d69-809f-4cb9-ae15-d63f7eab1585\"),\n \"keyAltNames\": [\n \"example\"\n ],\n \"keyMaterial\": Binary(\"oJ6lEzjIEskHFxz7zXqddCgl64EcP1A7E/r9zT+OL19/ZXVwDnEjGYMvx+BgcnzJZqkXTFTgJeaRYO/fWk5bEcYkuvXhKqpMq2ZO\", 0),\n \"creationDate\": 2020-11-05T23:32:26.466+00:00,\n \"updateDate\": 2020-11-05T23:32:26.466+00:00,\n \"status\": 0,\n \"masterKey\": {\n \"provider\": \"local\"\n }\n}\n```\n\nThere are a few important things to note with our code so far:\n\n- The `localKey` is random and is not persisting beyond the runtime\n which will result in key mismatches upon consecutive runs of the\n application. Either specify a non-random key or store it somewhere\n after generation.\n- We're using a Local Key Provider with a key that exists locally.\n This is not recommended in a production scenario due to security\n concerns. Instead, use a provider like AWS KMS or store the key\n externally.\n- The `createDataKey` should only be executed when a particular key is\n needed to be created, not every time the application runs.\n- There is no strict naming convention for the key vault and the keys\n that reside in it. Name your database and collection however makes\n sense to you.\n\nAfter we run our application the first time, we'll probably want to\ncomment out the `createDataKey` line in the `main` function.\n\n## Defining an Extended JSON Schema Map for Fields to be Encrypted\n\nWith the data key created, we're at a point in time where we need to\nfigure out what fields should be encrypted in a document and what fields\nshould be left as plain text. The easiest way to do this is with a\nschema map.\n\nA schema map for encryption is extended JSON and can be added directly\nto the Go source code or loaded from an external file. From a\nmaintenance perspective, loading from an external file is easier to\nmaintain.\n\nTake a look at the following schema map for encryption:\n\n``` json\n{\n \"fle-example.people\": {\n \"encryptMetadata\": {\n \"keyId\": \"/keyAltName\"\n },\n \"properties\": {\n \"ssn\": {\n \"encrypt\": {\n \"bsonType\": \"string\",\n \"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\"\n }\n }\n },\n \"bsonType\": \"object\"\n }\n}\n```\n\nLet's assume the above JSON exists in a **schema.json** file which sits\nrelative to our Go files or binary. In the above JSON, we're saying that\nthe map applies to the `people` collection within the `fle-example`\ndatabase.\n\nThe `keyId` field within the `encryptMetadata` object says that\ndocuments within the `people` collection must have a string field called\n`keyAltName`. The value of this field will reflect the alternate key\nname that we defined when creating the data key. Notice the `/` that\nprefixes the value. That is not an error. It is a requirement for this\nparticular value since it is a pointer.\n\nThe `properties` field lists fields within our document and in this\nexample lists the fields that should be encrypted along with the\nencryption algorithm to use. In our example, only the `ssn` field will\nbe encrypted while all other fields will remain as plain text.\n\nThere are two algorithms currently supported:\n\n- AEAD_AES_256_CBC_HMAC_SHA_512-Random\n- AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\n\nIn short, the `AEAD_AES_256_CBC_HMAC_SHA_512-Random` algorithm is best\nused on fields that have low cardinality or don't need to be used within\na filter for a query. 
The `AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic`\nalgorithm should be used for fields with high cardinality or for fields\nthat need to be used within a filter.\n\nTo learn more about these algorithms, visit the\n[documentation.\nWe'll be exploring both algorithms in this particular tutorial.\n\nIf we wanted to, we could change the schema map to the following:\n\n``` json\n{\n \"fle-example.people\": {\n \"properties\": {\n \"ssn\": {\n \"encrypt\": {\n \"keyId\": \"/keyAltName\",\n \"bsonType\": \"string\",\n \"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\"\n }\n }\n },\n \"bsonType\": \"object\"\n }\n}\n```\n\nThe change made in the above example has to do with the `keyId` field.\nRather than declaring it as part of the `encryptMetadata`, we've\ndeclared it as part of a particular field. This could be useful if you\nwant to use different keys for different fields.\n\nRemember, the pointer used for the `keyId` will only work with the\n`AEAD_AES_256_CBC_HMAC_SHA_512-Random` algorithm. You can, however, use\nthe actual key id for both algorithms.\n\nWith a schema map for encryption available, let's get it loaded in the\nGo application. Change the `readSchemaFromFile` function to look like\nthe following:\n\n``` go\nfunc readSchemaFromFile(file string) bson.M {\n content, err := ioutil.ReadFile(file)\n if err != nil {\n log.Fatal(err)\n }\n var doc bson.M\n if err = bson.UnmarshalExtJSON(content, false, &doc); err != nil {\n log.Fatal(err)\n }\n return doc\n}\n```\n\nIn the above code, we are reading the file, which will be the\n**schema.json** file soon enough. If it is read successfully, we use the\n`UnmarshalExtJSON` function to load it into a `bson.M` object that is\nmore pleasant to work with in Go.\n\n## Enabling MongoDB Automatic Client Encryption in a Golang Application\n\nBy this point, you should have the code in place for creating a data key\nand a schema map defined to be used with the automatic client encryption\nfunctionality that MongoDB supports. It's time to bring it together to\nactually encrypt and decrypt fields.\n\nWe're going to start with the `createEncryptedClient` function within\nour project:\n\n``` go\nfunc createEncryptedClient() *mongo.Client {\n schemaMap = readSchemaFromFile(\"schema.json\")\n mongocryptdOpts := mapstring]interface{}{\n \"mongodcryptdBypassSpawn\": true,\n }\n autoEncryptionOpts := options.AutoEncryption().\n SetKeyVaultNamespace(\"keyvault.datakeys\").\n SetKmsProviders(kmsProviders).\n SetSchemaMap(schemaMap).\n SetExtraOptions(mongocryptdOpts)\n mongoClient, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv(\"ATLAS_URI\")).SetAutoEncryptionOptions(autoEncryptionOpts))\n if err != nil {\n log.Fatal(err)\n }\n return mongoClient\n}\n```\n\nIn the above code we are making use of the `readSchemaFromFile` function\nthat we had just created to load our schema map for encryption. Next, we\nare defining our auto encryption options and establishing a connection\nto MongoDB. This will look somewhat familiar to what we did in the\n`createDataKey` function. When defining the auto encryption options, not\nonly are we specifying the KMS for our key and vault, but we're also\nsupplying the schema map for encryption.\n\nYou'll notice that we are using `mongocryptdBypassSpawn` as an extra\noption. We're doing this so that the client doesn't try to automatically\nstart the **mongocryptd** daemon if it is already running. 
You may or\nmay not want to use this in your own application.\n\nIf the connection was successful, the client is returned.\n\nIt's time to revisit the `main` function within the project:\n\n``` go\nfunc main() {\n localKey := make([]byte, 96)\n if _, err := rand.Read(localKey); err != nil {\n log.Fatal(err)\n }\n kmsProviders = map[string]map[string]interface{}{\n \"local\": {\n \"key\": localKey,\n },\n }\n // createDataKey()\n client := createEncryptedClient()\n defer client.Disconnect(ctx)\n collection := client.Database(\"fle-example\").Collection(\"people\")\n if _, err := collection.InsertOne(context.TODO(), bson.M{\"name\": \"Nic Raboy\", \"ssn\": \"123456\", \"keyAltName\": \"example\"}); err != nil {\n log.Fatal(err)\n }\n result, err := collection.FindOne(context.TODO(), bson.D{}).DecodeBytes()\n if err != nil {\n log.Fatal(err)\n }\n fmt.Println(result)\n}\n```\n\nIn the above code, we are creating our Local Key Provider using a local\nkey that was randomly generated. Remember, this key should match what\nwas used when creating the data key, so random may not be the best\nlong-term. Likewise, a local key shouldn't be used in production because\nof security reasons.\n\nOnce the KMS providers are established, the `createEncryptedClient`\nfunction is executed. Remember, this particular function will set the\nautomatic encryption options and establish a connection to MongoDB.\n\nTo match the database and collection used in the schema map definition,\nwe are using `fle-example` as the database and `people` as the\ncollection. The operations that follow, such as `InsertOne` and\n`FindOne`, can be used as if field level encryption wasn't even a thing.\nBecause we have an `ssn` field and the `keyAltName` field, the `ssn`\nfield will be encrypted client-side and saved to MongoDB. When doing\nlookup operation, the encrypted field will be decrypted.\n\n![FLE Data in MongoDB Atlas\n\nWhen looking at the data in Atlas, for example, the encrypted fields\nwill not be human readable as seen in the above screenshot.\n\n## Running and Building a Golang Application with MongoDB Field Level Encryption\n\nWhen field level encryption is included in the Go application, a special\ntag must be included in the build or run process, depending on the route\nyou choose. You should already have **mongocryptd** and\n**libmongocrypt**, so to build your Go application, you'd do the\nfollowing:\n\n``` bash\ngo build -tags cse\n```\n\nIf you use the above command to build your binary, you can use it as\nnormal. However, if you're running your application without building,\nyou can do something like the following:\n\n``` bash\ngo run -tags cse main.go\n```\n\nThe above command will run the application with client-side encryption\nenabled.\n\n## Filter Documents in MongoDB on an Encrypted Field\n\nIf you've run the example so far, you'll probably notice that while you\ncan automatically encrypt fields and decrypt fields, you'll get an error\nif you try to use a filter that contains an encrypted field.\n\nIn our example thus far, we use the\n`AEAD_AES_256_CBC_HMAC_SHA_512-Random` algorithm on our encrypted\nfields. To be able to filter on encrypted fields, the\n`AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic` must be used. More\ninformation between the two options can be found in the\ndocumentation.\n\nTo use the deterministic approach, we need to make a few revisions to\nour project. 
These changes are a result of the fact that we won't be\nable to use alternate key names within our schema map.\n\nFirst, let's change the **schema.json** file to the following:\n\n``` json\n{\n \"fle-example.people\": {\n \"encryptMetadata\": {\n \"keyId\": \n {\n \"$binary\": {\n \"base64\": \"%s\",\n \"subType\": \"04\"\n }\n }\n ]\n },\n \"properties\": {\n \"ssn\": {\n \"encrypt\": {\n \"bsonType\": \"string\",\n \"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\"\n }\n }\n },\n \"bsonType\": \"object\"\n }\n}\n```\n\nThe two changes in the above JSON reflect the new algorithm and the\n`keyId` using the actual `_id` value rather than an alias. For the\n`base64` field, notice the use of the `%s` placeholder. If you know the\nbase64 string version of your key, then swap it out and save yourself a\nbunch of work. Since this tutorial is an example and the data changes\npretty much every time we run it, we probably want to swap out that\nfield after the file is loaded.\n\nStarting with the `createDataKey` function, find the following line with\nthe `CreateDataKey` function call:\n\n``` go\ndataKeyId, err := clientEncryption.CreateDataKey(ctx, \"local\", options.DataKey())\n```\n\nWhat we didn't see in the previous parts of this tutorial is that this\nfunction returns the `_id` of the data key. We should probably update\nour `createDataKey` function to return `primitive.Binary` and then\nreturn that `dataKeyId` variable.\n\nWe need to move that `dataKeyId` value around until it reaches where we\nload our JSON file. We're doing a lot of work for the following reasons:\n\n- We're in the scenario where we don't know the `_id` of our data key\n prior to runtime. If we know it, we can add it to the schema and be\n done.\n- We designed our code to jump around with functions.\n\nThe schema map requires a base64 value to be used, so when we pass\naround `dataKeyId`, we need to have first encoded it.\n\nIn the `main` function, we might have something that looks like this:\n\n``` go\ndataKeyId := createDataKey()\nclient := createEncryptedClient(base64.StdEncoding.EncodeToString(dataKeyId.Data))\n```\n\nThis means that the `createEncryptedClient` needs to receive a string\nargument. Update the `createEncryptedClient` to accept a string and then\nchange how we're reading our JSON file:\n\n``` go\nschemaMap = readSchemaFromFile(\"schema.json\", dataKeyIdBase64)\n```\n\nRemember, we're just passing the base64 encoded value through the\npipeline. By the end of this, in the `readSchemaFromFile` function, we\ncan update our code to look like the following:\n\n``` go\nfunc readSchemaFromFile(file string, dataKeyIdBase64 string) bson.M {\n content, err := ioutil.ReadFile(file)\n if err != nil {\n log.Fatal(err)\n }\n content = []byte(fmt.Sprintf(string(content), dataKeyIdBase64))\n var doc bson.M\n if err = bson.UnmarshalExtJSON(content, false, &doc); err != nil {\n log.Fatal(err)\n }\n return doc\n}\n```\n\nNot only are we receiving the base64 string, but we are using an\n`Sprintf` function to swap our `%s` placeholder with the actual value.\n\nAgain, these changes were based around how we designed our code. At the\nend of the day, we were really only changing the `keyId` in the schema\nmap and the algorithm used for encryption. 
By doing this, we are not\nonly able to decrypt fields that had been encrypted, but we're also able\nto filter for documents using encrypted fields.\n\n## The Field Level Encryption (FLE) Code in Go\n\nWhile it might seem like we wrote a lot of code, the reality is that the\ncode was far simpler than the concepts involved. To get a better look at\nthe code, you can find it below:\n\n``` go\npackage main\n\nimport (\n \"context\"\n \"crypto/rand\"\n \"encoding/base64\"\n \"fmt\"\n \"io/ioutil\"\n \"log\"\n \"os\"\n\n \"go.mongodb.org/mongo-driver/bson\"\n \"go.mongodb.org/mongo-driver/bson/primitive\"\n \"go.mongodb.org/mongo-driver/mongo\"\n \"go.mongodb.org/mongo-driver/mongo/options\"\n)\n\nvar (\n ctx = context.Background()\n kmsProviders map[string]map[string]interface{}\n schemaMap bson.M\n)\n\nfunc createDataKey() primitive.Binary {\n kvClient, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv(\"ATLAS_URI\")))\n if err != nil {\n log.Fatal(err)\n }\n kvClient.Database(\"keyvault\").Collection(\"datakeys\").Drop(ctx)\n clientEncryptionOpts := options.ClientEncryption().SetKeyVaultNamespace(\"keyvault.datakeys\").SetKmsProviders(kmsProviders)\n clientEncryption, err := mongo.NewClientEncryption(kvClient, clientEncryptionOpts)\n if err != nil {\n log.Fatal(err)\n }\n defer clientEncryption.Close(ctx)\n dataKeyId, err := clientEncryption.CreateDataKey(ctx, \"local\", options.DataKey())\n if err != nil {\n log.Fatal(err)\n }\n return dataKeyId\n}\n\nfunc createEncryptedClient(dataKeyIdBase64 string) *mongo.Client {\n schemaMap = readSchemaFromFile(\"schema.json\", dataKeyIdBase64)\n mongocryptdOpts := map[string]interface{}{\n \"mongodcryptdBypassSpawn\": true,\n }\n autoEncryptionOpts := options.AutoEncryption().\n SetKeyVaultNamespace(\"keyvault.datakeys\").\n SetKmsProviders(kmsProviders).\n SetSchemaMap(schemaMap).\n SetExtraOptions(mongocryptdOpts)\n mongoClient, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv(\"ATLAS_URI\")).SetAutoEncryptionOptions(autoEncryptionOpts))\n if err != nil {\n log.Fatal(err)\n }\n return mongoClient\n}\n\nfunc readSchemaFromFile(file string, dataKeyIdBase64 string) bson.M {\n content, err := ioutil.ReadFile(file)\n if err != nil {\n log.Fatal(err)\n }\n content = []byte(fmt.Sprintf(string(content), dataKeyIdBase64))\n var doc bson.M\n if err = bson.UnmarshalExtJSON(content, false, &doc); err != nil {\n log.Fatal(err)\n }\n return doc\n}\n\nfunc main() {\n fmt.Println(\"Starting the application...\")\n localKey := make([]byte, 96)\n if _, err := rand.Read(localKey); err != nil {\n log.Fatal(err)\n }\n kmsProviders = map[string]map[string]interface{}{\n \"local\": {\n \"key\": localKey,\n },\n }\n dataKeyId := createDataKey()\n client := createEncryptedClient(base64.StdEncoding.EncodeToString(dataKeyId.Data))\n defer client.Disconnect(ctx)\n collection := client.Database(\"fle-example\").Collection(\"people\")\n collection.Drop(context.TODO())\n if _, err := collection.InsertOne(context.TODO(), bson.M{\"name\": \"Nic Raboy\", \"ssn\": \"123456\"}); err != nil {\n log.Fatal(err)\n }\n result, err := collection.FindOne(context.TODO(), bson.M{\"ssn\": \"123456\"}).DecodeBytes()\n if err != nil {\n log.Fatal(err)\n }\n fmt.Println(result)\n}\n```\n\nTry to set the `ATLAS_URI` in your environment variables and give the\ncode a spin.\n\n## Troubleshooting Common MongoDB CSFLE Problems\n\nIf you ran the above code and found some encrypted data in your\ndatabase, fantastic! 
However, if you didn't get so lucky, I want to\naddress a few of the common problems that come up.\n\nLet's start with the following runtime error:\n\n``` none\npanic: client-side encryption not enabled. add the cse build tag to support\n```\n\nIf you see the above error, it is likely because you forgot to use the\n`-tags cse` flag when building or running your application. To get\nbeyond this, just build your application with the following:\n\n``` none\ngo build -tags cse\n```\n\nAssuming there aren't other problems, you won't receive that error\nanymore.\n\nWhen you build or run with the `-tags cse` flag, you might stumble upon\nthe following error:\n\n``` none\n/usr/local/Cellar/go/1.13.1/libexec/pkg/tool/darwin_amd64/link: running clang failed: exit status 1\nld: warning: directory not found for option '-L/usr/local/Cellar/libmongocrypt/1.0.4/lib'\nld: library not found for -lmongocrypt\nclang: error: linker command failed with exit code 1 (use -v to see invocation)\n```\n\nThe error might not look exactly the same as mine depending on the\noperating system you're using, but the gist of it is that it's saying\nyou are missing the **libmongocrypt** library. Make sure that you've\ninstalled it correctly for your operating system per the\n[documentation.\n\nNow, what if you encounter the following?\n\n``` none\nexec: \"mongocryptd\": executable file not found in $PATH\nexit status 1\n```\n\nLike with the **libmongocrypt** error, it just means that we don't have\naccess to **mongocryptd**, a requirement for automatic field level\nencryption. There are numerous methods toward installing this binary, as\nseen in the\ndocumentation,\nbut on macOS it means having MongoDB Enterprise Edition nearby.\n\n## Conclusion\n\nYou just saw how to use MongoDB client-side field level encryption\n(CSFLE) in your Go application. This is useful if you'd like to encrypt\nfields within MongoDB documents client-side before it reaches the\ndatabase.\n\nTo give credit where credit is due, a lot of the code from this tutorial\nwas taken from Kenn White's sandbox\nrepository\non GitHub.\n\nThere are a few things that I want to reiterate:\n\n- Using a local key is a security risk in production. Either use\n something like AWS KMS or load your Local Key Provider with a key\n that was obtained through an external request.\n- The **mongocryptd** binary must be available on the computer or\n server running the Go application. This is easily installed through\n the MongoDB Enterprise Edition installation.\n- The **libmongocrypt** library must be available to add compatibility\n to the Go driver for client-side encryption and decryption.\n- Don't lose your client-side key. Otherwise, you lose the ability to\n decrypt your fields.\n\nIn a future tutorial, we'll explore how to use AWS KMS and similar for\nkey management.\n\nQuestions? Comments? We'd love to connect with you. 
Join the\nconversation on the MongoDB Community\nForums.\n\n", "format": "md", "metadata": {"tags": ["Go", "MongoDB"], "pageDescription": "Learn how to encrypt document fields client-side in Go with MongoDB client-side field level encryption (CSFLE).", "contentType": "Tutorial"}, "title": "Client-Side Field Level Encryption (CSFLE) in MongoDB with Golang", "updated": "2024-05-20T17:32:23.501Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/products/atlas/atlas-search-relevancy-explained", "action": "created", "body": "# Atlas Search Relevancy Explained\n\nFull-text search powers all of our digital lives \u2014 googling for this and that; asking Siri where to find a tasty, nearby dinner; shopping at Amazon; and so on. We receive relevant results, often even in spite of our typos, voice transcription mistakes, or vaguely formed queries. We have grown accustomed to expecting the best results for our searching intentions, right there, at the top. \n\nBut now it\u2019s your turn, dear developer, to build the same satisfying user experience into your Atlas-powered application.\n\nIf you\u2019ve not yet created an Atlas Search index, it would be helpful to do so before delving into the rest of this article. We\u2019ve got a handy tutorial to get started with Atlas Search. We will happily and patiently wait for you to get started and return here when you\u2019ve got some search results.\n\nWelcome back! We see that you\u2019ve got data, and it lives in MongoDB Atlas. You\u2019ve turned on Atlas Search and run some queries, and now you want to understand why the results are in the order they appear and get some tips on tuning the relevancy ranking order.\n\n## Relevancy riddle\n\nIn the article Using Atlas Search from Java, we left the reader with a bit of a search relevancy mystery, using a query of the cast field for the phrase \u201ckeanu reeves\u201d (lowercase; a `$match` fails at even this inexact of a query) narrowing the results to movies that are both dramatic (`genres:Drama`) _AND_ romantic (`genres:Romance`). We\u2019ll use that same query here. The results of this query match several documents, but with differing scores. The only scoring factor is a `must` clause of the `phrase` \u201ckeanu reeves\u201d. Why don\u2019t \u201cSweet November\u201d and \u201cA Walk in the Clouds\u201d score identically? \n\nCan you spot the difference? Read on as we provide you the tools and tips to suss out and solve these kinds of challenges presented by full-text, inexact/fuzzy/close-but-not-exact search results.\n\n## Score details\n\nAtlas Search makes building full-text search applications possible, and with a few clicks, accepting default settings, you\u2019ve got incredibly powerful capabilities within reach. You\u2019ve got a pretty good auto-pilot system, but you\u2019re in the cockpit of a 747 with knobs and dials all around. The plane will take off and land safely by itself \u2014 most of the time. Depending on conditions and goals, manually going up to 11.0 on the volume knob, and perhaps a bit more on the thrust lever, is needed to fly there in style. Relevancy tuning can be described like this as well, and before you take control of the parameters, you need to understand what the settings do and what\u2019s possible with adjustments.\n\nThe scoring details of each document for a given query can be requested and returned. 
There are two steps needed to get the score details: first requesting them in the `$search` request, and then projecting the score details metadata into each returned document. Requesting score details is a performance hit on the underlying search engine, so only do this for diagnostic or learning purposes. To request score details from the search request, set `scoreDetails` to `true`. Those score details are available in the results `$meta`data for each document.\n\nHere\u2019s what\u2019s needed to get score details:\n\n```\n{\n \"$search\": {\n ...\n \"scoreDetails\": true\n }\n},\n{\n \"$project\": {\n ...\n \"scoreDetails\": {\"$meta\": \"searchScoreDetails\"}\n }\n}]\n```\n\nLet\u2019s search the movies collection built from the [tutorial for dramatic, romance movies starring \u201ckeanu reeves\u201d (tl; dr: add sample collections, create a search index `default` on movies collection with `dynamic=\u201dtrue\u201d`), bringing in the score and score details:\n\n```\n\n {\n \"$search\": {\n \"compound\": {\n \"filter\": [\n {\n \"compound\": {\n \"must\": [\n {\n \"text\": {\n \"query\": \"Drama\",\n \"path\": \"genres\"\n }\n },\n {\n \"text\": {\n \"query\": \"Romance\",\n \"path\": \"genres\"\n }\n }\n ]\n }\n }\n ],\n \"must\": [\n {\n \"phrase\": {\n \"query\": \"keanu reeves\",\n \"path\": \"cast\"\n }\n }\n ]\n },\n \"scoreDetails\": true\n }\n },\n {\n \"$project\": {\n \"_id\": 0,\n \"title\": 1,\n \"cast\": 1,\n \"genres\": 1,\n \"score\": {\n \"$meta\": \"searchScore\"\n },\n \"scoreDetails\": {\n \"$meta\": \"searchScoreDetails\"\n }\n }\n },\n {\n \"$limit\": 10\n }\n ]\n```\n\nContent warning! The following output is not for the faint of heart. It\u2019s the daunting reason we are here though, so please push through as these details are explained below. 
The value of the projected `scoreDetails` will look something like the following for the first result:\n\n```\n\"scoreDetails\": {\n \"value\": 6.011996746063232,\n \"description\": \"sum of:\",\n \"details\": [\n {\n \"value\": 0,\n \"description\": \"match on required clause, product of:\",\n \"details\": [\n {\n \"value\": 0,\n \"description\": \"# clause\",\n \"details\": []\n },\n {\n \"value\": 1,\n \"description\": \"+ScoreDetailsWrapped ($type:string/genres:drama) +ScoreDetailsWrapped ($type:string/genres:romance)\",\n \"details\": []\n }\n ]\n },\n {\n \"value\": 6.011996746063232,\n \"description\": \"$type:string/cast:\\\"keanu reeves\\\" [BM25Similarity], result of:\",\n \"details\": [\n {\n \"value\": 6.011996746063232,\n \"description\": \"score(freq=1.0), computed as boost * idf * tf from:\",\n \"details\": [\n {\n \"value\": 13.083234786987305,\n \"description\": \"idf, sum of:\",\n \"details\": [\n {\n \"value\": 6.735175132751465,\n \"description\": \"idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:\",\n \"details\": [\n {\n \"value\": 27,\n \"description\": \"n, number of documents containing term\",\n \"details\": []\n },\n {\n \"value\": 23140,\n \"description\": \"N, total number of documents with field\",\n \"details\": []\n }\n ]\n },\n {\n \"value\": 6.348059177398682,\n \"description\": \"idf, computed as log(1 + (N - n + 0.5) / (n + 0.5)) from:\",\n \"details\": [\n {\n \"value\": 40,\n \"description\": \"n, number of documents containing term\",\n \"details\": []\n },\n {\n \"value\": 23140,\n \"description\": \"N, total number of documents with field\",\n \"details\": []\n }\n ]\n }\n ]\n },\n {\n \"value\": 0.4595191478729248,\n \"description\": \"tf, computed as freq / (freq + k1 * (1 - b + b * dl / avgdl)) from:\",\n \"details\": [\n {\n \"value\": 1,\n \"description\": \"phraseFreq=1.0\",\n \"details\": []\n },\n {\n \"value\": 1.2000000476837158,\n \"description\": \"k1, term saturation parameter\",\n \"details\": []\n },\n {\n \"value\": 0.75,\n \"description\": \"b, length normalization parameter\",\n \"details\": []\n },\n {\n \"value\": 8,\n \"description\": \"dl, length of field\",\n \"details\": []\n },\n {\n \"value\": 8.217415809631348,\n \"description\": \"avgdl, average length of field\",\n \"details\": []\n }\n ]\n }\n ]\n }\n ]\n }\n ]\n}\n```\n\nWe\u2019ll write a little code, below, that presents this nested structure in a more concise, readable format, and delve into the details there. Before we get to breaking down the score, we need to understand where these various factors come from. They come from Lucene.\n\n## Lucene inside\n\n[Apache Lucene powers a large percentage of the world\u2019s search experiences, from the majority of e-commerce sites to healthcare and insurance systems, to intranets, to top secret intelligence, and so much more. And it\u2019s no secret that Apache Lucene powers Atlas Search. Lucene has proven itself to be robust and scalable, and it\u2019s pervasively deployed. Many of us would consider Lucene to be the most important open source project ever, where a diverse community of search experts from around the world and across multiple industries collaborate constructively to continually improve and innovate this potent project.\n\nSo, what is this amazing thing called Lucene? Lucene is an open source search engine library written in Java that indexes content and handles sophisticated queries, rapidly returning relevant results. 
In addition, Lucene provides faceting, highlighting, vector search, and more.\n\n## Lucene indexing\n\nWe cannot discuss search relevancy without addressing the indexing side of the equation as they are interrelated. When documents are added to an Atlas collection with an Atlas Search index enabled, the fields of the documents are indexed into Lucene according to the configured index mappings. \n\nWhen textual fields are indexed, a data structure known as an inverted index is built through a process called analysis. The inverted index, much like a physical dictionary, is a lexicographically/alphabetically ordered list of terms/words, cross-referenced to the documents that contain them. The analysis process is initially fed the entire text value of the field during indexing and, according to the analyzer defined in the mapping, breaks it down into individual terms/words.\n\nFor example, the silly sentence \u201cThe quick brown fox jumps over the lazy dog\u201d is analyzed by the Atlas Search default analyzer (`lucene.standard`) into the following terms: the,quick,brown,fox,jumps,over,the,lazy,dog. Now, if we alphabetize (and de-duplicate, noting the frequency) those terms, it looks like this:\n\n| term | frequency |\n| :-------- | -------: |\n| brown | 1 |\n| dog | 1 |\n| fox | 1 |\n| jumps | 1 |\n| lazy | 1 |\n| over | 1 |\n| quick | 1 |\n| the | 2 |\n\nIn addition to which documents contain a term, the positions of each instance of that term are recorded in the inverted index structure. Recording term positions allows for phrase queries (like our \u201ckeanu reeves\u201d example), where terms of the query must be adjacent to one another in the indexed field.\n\nSuppose we have a Silly Sentences collection where that was our first document (document id 1), and we add another document (id 2) with the text \u201cMy dogs play with the red fox\u201d. Our inverted index, showing document ids and term positions. becomes:\n\n| term | document ids | term frequency | term positions\n| :----| --------------: | ---------------: | ---------------: |\n| brown | 1 | 1 | Document 1: 3 |\n| dog | 1 | 1 | Document 1: 9 | \n| dogs | 2 | 1 | Document 2: 2 |\n| fox | 1,2 | 2 | Document 1: 4; Document 2: 7 |\n| jumps | 1 | 1 | Document 1: 5 |\n| lazy | 1 | 1 | Document 1: 8 |\n| my | 2 | 1 | Document 2: 1 |\n| over | 1 | 1 | Document 1: 6 |\n| play | 2 | 1 | Document 2: 3 |\n| quick | 1 | 1 | Document 1: 2 |\n| red | 2 | 1 | Document 2: 6 | \n| the | 1,2 | 3 | Document 1: 1, 7; Document 2: 5 |\n| with | 2 | 1 | Document 2: 4 |\n\nWith this data structure, Lucene can quickly navigate to a queried term and return the documents containing it.\n\nThere are a couple of notable features of this inverted index example. The words \u201cdog\u201d and \u201cdogs\u201d are separate terms. The terms emitted from the analysis process, which are indexed exactly as they are emitted, are the atomic searchable units, where \u201cdog\u201d is not the same as \u201cdogs\u201d. Does your application need to find both documents for a search of either of these terms? Or should it be more exact? Also of note, out of two documents, \u201cthe\u201d has appeared three times \u2014 more times than there are documents. Maybe words such as \u201cthe\u201d are so common in your data that a search for that term isn\u2019t useful. Your analyzer choices determine what lands in the inverted index, and thus what is searchable or not. 
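To make that concrete, here is a deliberately tiny JavaScript sketch of the idea: an `analyze` function standing in (very roughly) for the `lucene.standard` analyzer, feeding an inverted index that records document ids and 1-based term positions for the two silly sentences above. This is an illustration of the concept only, not how Lucene actually builds or stores its index:

```
// Rough stand-in for the lucene.standard analyzer: lowercase, split on non-alphanumerics
function analyze(text) {
  return text.toLowerCase().split(/[^a-z0-9]+/).filter(term => term.length > 0);
}

// Builds: term -> { docId -> [positions] }
function buildInvertedIndex(docs) {
  const index = {};
  docs.forEach(doc => {
    analyze(doc.text).forEach((term, i) => {
      index[term] = index[term] || {};
      index[term][doc.id] = index[term][doc.id] || [];
      index[term][doc.id].push(i + 1); // 1-based positions, as in the table above
    });
  });
  return index;
}

const index = buildInvertedIndex([
  { id: 1, text: "The quick brown fox jumps over the lazy dog" },
  { id: 2, text: "My dogs play with the red fox" }
]);

console.log(index.fox); // fox: document 1 at position 4, document 2 at position 7
console.log(index.the); // the: document 1 at positions 1 and 7, document 2 at position 5
```

A phrase query like "keanu reeves" is answered from exactly this kind of structure: find documents that contain both terms, where "reeves" sits in the position immediately after "keanu".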
Atlas Search provides a variety of analyzer options, with the right choice being the one that works best for your domain and data.\n\nThere are a number of statistics about a document collection that emerge through the analysis and indexing processes, including:\n\n* Term frequency: How many times did a term appear in the field of the document?\n* Document frequency: In how many documents does this term appear?\n* Field length: How many terms are in this field?\n* Term positions: In which position, in the emitted terms, does each instance appear?\n\nThese stats lurk in the depths of the Lucene index structure and surface visibly in the score detail output that we\u2019ve seen above and will delve into below.\n\n## Lucene scoring\n\nThe statistics captured during indexing factor into how documents are scored at query time. Lucene scoring, at its core, is built upon TF/IDF \u2014 term frequency/inverse document frequency. Generally speaking, TF/IDF scores documents with higher term frequencies greater than ones with lower term frequencies, and scores documents with more common terms lower than ones with rarer terms \u2014 the idea being that a rare term in the collection conveys more information than a frequently occurring one and that a term\u2019s weight is proportional to its frequency.\n\nThere\u2019s a bit more math behind the scenes of Lucene\u2019s implementation of TF/IDF, to dampen the effect (e.g., take the square root) of TF and to scale IDF (using a logarithm function).\n\nThe classic TF/IDF formula has worked well in general, when document fields are of generally the same length, and there aren\u2019t nefarious or odd things going on with the data where the same word is repeated many times \u2014\u00a0which happens in product descriptions, blog post comments, restaurant reviews, and where boosting a document to the top of the results has some incentive. Given that not all documents are created equal \u2014 some titles are long, some are short, and some have descriptions that repeat words a lot or are very succinct \u2014 some fine-tuning is warranted to account for these situations.\n\n## Best matches\n\nAs search engines have evolved, refinements have been made to the classic TF/IDF relevancy computation to account for term saturation (an excessively large number of the same term within a field) and reduce the contribution of long field values which contain many more terms than shorter fields, by factoring in the ratio of the field length of the document to the average field length of the collection. The now popular BM25 method has become the default scoring formula in Lucene and is the scoring formula used by Atlas Search. BM25 stands for \u201cBest Match 25\u201d (the 25th iteration of this scoring algorithm). A really great writeup comparing classic TF/IDF to BM25, including illustrative graphs, can be found on OpenSource Connections.\n\nThere are built-in values for the additional BM25 factors, `k1` and `b`. The `k1` factor affects how much the score increases with each reoccurrence of the term, and `b` controls the effect of field length. 
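\n\nTo see how these factors combine, here is a small Python check, a sketch that simply plugs the statistics reported in the scoreDetails output above (along with the default `k1` and `b` values) into the BM25 formulas and reproduces the reported score:\n\n```\nimport math\n\nN = 23140                   # total number of documents with the cast field\nn_keanu, n_reeves = 27, 40  # number of documents containing each term\nfreq = 1.0                  # the phrase occurs once in this document's cast field\nk1, b = 1.2, 0.75           # Lucene's built-in BM25 parameters\ndl, avgdl = 8, 8.217415809631348  # field length and average field length, in terms\n\ndef idf(n):\n    return math.log(1 + (N - n + 0.5) / (n + 0.5))\n\nidf_sum = idf(n_keanu) + idf(n_reeves)              # ~13.083\ntf = freq / (freq + k1 * (1 - b + b * dl / avgdl))  # ~0.4595\nprint(1.0 * idf_sum * tf)                           # boost * idf * tf = ~6.012\n```\n\n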
Both of these factors (`k1` and `b`) are currently internally set to the Lucene defaults and are not settings a developer can adjust at this point, but that\u2019s okay as the built-in values have been tuned to provide great relevancy as is.\n\n## Breaking down the score details\n\nLet\u2019s look at those same score details in a slimmer, easier-to-read fashion, as produced by the pretty-printing code shown below:\n\nIt\u2019s easier to see in this format that the score of roughly 6.011 comes from the sum of two numbers: 0.0 (the non-scoring `# clause`-labeled filters) and roughly 6.011. And that ~6.011 factor comes from the BM25 scoring formula that multiplies the \u201cidf\u201d (inverse document frequency) factor of ~13.083 with the \u201ctf\u201d (term frequency) factor of ~0.459. The \u201cidf\u201d factor is the \u201csum of\u201d two components, one for each of the terms in our `phrase` operator clause. Each of the `idf` factors for our two query terms, \u201ckeanu\u201d and \u201creeves\u201d, is computed using the formula in the output, which is:\n\nlog(1 + (N - n + 0.5) / (n + 0.5))\n\nThe \u201ctf\u201d factor for the full phrase is \u201ccomputed as\u201d this formula:\n\nfreq / (freq + k1 * (1 - b + b * dl / avgdl))\n\nThis uses the factors indented below it, such as the average length (in number of terms) of the \u201ccast\u201d field across all documents in the collection.\n\nIn front of each field name in this output (\u201cgenres\u201d and \u201ccast\u201d) there is a prefix used internally to note the field type (the \u201c$type:string/\u201d prefix).\n\n## Pretty printing the score details\n\nThe more human-friendly output of the score details above was generated using MongoDB VS Code Playgrounds. This JavaScript code will print a more concise, indented version of the scoreDetails, by calling: `print_score_details(doc.scoreDetails);`:\n\n```\n// Recursively print each scoring factor, indenting child factors beneath their parent\nfunction print_score_details(details, indent_level) {\n  if (!indent_level) { indent_level = 0; }\n  const spaces = \" \".padStart(indent_level);\n  console.log(spaces + details.value + \", \" + details.description);\n  details.details.forEach(d => {\n    print_score_details(d, indent_level + 2);\n  });\n}\n```\n\nSimilarly, pretty printing in Java can be done like the code developed in the article Using Atlas Search from Java, which is available on GitHub.\n\n## Mystery solved!\n\nGoing back to our Relevancy Riddle, let\u2019s compare the score details of the two results.\n\nUsing the detailed information provided about the statistics captured in the Lucene inverted index, it turns out that the `cast` fields of these two documents have an interesting difference. They both have four cast members, but remember the analysis process that extracts searchable terms from text. In the lower scoring of the two documents, one of the cast members has a hyphenated last name: Aitana S\u00e1nchez-Gij\u00f3n. The dash/hyphen is a term separator character for the `lucene.standard` analyzer, making one additional term for that document, which in turn increases the length (in number of terms) of the `cast` field. A greater field length causes term matches to weigh less than they would in a shorter field.\n\n## Compound is king\n\nEven in this simple phrase query example, the scoring is made up of many factors that are the \u201csum of\u201d, \u201cproduct of\u201d, \u201cresult of\u201d, or \u201cfrom\u201d other factors and formulas. Relevancy tuning involves crafting clauses nested within a `compound` operator using `should` and `must`.
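\n\nFor example, here is a sketch of such a query written as a PyMongo aggregation pipeline. The filter and phrase clauses mirror the drama/romance plus "keanu reeves" example above; the index name `default`, the boosted `title` clause, and the `movies` collection variable are assumptions for illustration:\n\n```\n# Sketch only: the index name, the boosted title clause, and the movies\n# variable are illustrative assumptions, not taken from the article's setup.\npipeline = [\n    {'$search': {\n        'index': 'default',\n        'compound': {\n            # filter: narrows the candidate documents but contributes 0 to the score\n            'filter': [\n                {'text': {'query': 'Drama', 'path': 'genres'}},\n                {'text': {'query': 'Romance', 'path': 'genres'}},\n            ],\n            # must: required and scored -- the phrase clause analyzed above\n            'must': [\n                {'phrase': {'query': 'keanu reeves', 'path': 'cast'}},\n            ],\n            # should: optional; when it matches, its score contribution is multiplied by the boost\n            'should': [\n                {'text': {'query': 'love', 'path': 'title',\n                          'score': {'boost': {'value': 3}}}},\n            ],\n        },\n    }},\n    {'$limit': 5},\n    {'$project': {'title': 1, 'cast': 1, 'score': {'$meta': 'searchScore'}}},\n]\nresults = list(movies.aggregate(pipeline))  # movies: a PyMongo Collection\n```\n\n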
Note again that `filter` clauses do not contribute to the score but are valuable to narrow the documents considered for scoring by the `should` and `must` clauses. And of course, `mustNot` clauses don\u2019t contribute to the score, as documents matching those clauses are omitted from the results altogether.\n\nUse multiple `compound.should` and `compound.must` to weight matches in different fields in different ways. It\u2019s a common practice, for example, to weight matches in a `title` field higher than matches in a `description` field (or `plot` field in the movies collection), using boosts on different query operator clauses.\n\n## Boosting clauses\n\nWith a query composed of multiple clauses, you have control over modifying the score in various ways using the optional `score` setting available on all search operators. Scoring factors for a clause can be controlled in these four ways:\n\n* `constant`: The scoring factor for the clause is set to an explicit value.\n* `boost`: Multiply the normal computed scoring factor for the clause by either a specified value or by the value of a field on the document being scored.\n* `function`: Compute the scoring factor using the specified formula expression.\n* `embedded`: Work with the `embeddedDocument` search operator to control how matching embedded documents contribute to the score of the top-level parent document.\n\nThat\u2019s a lot of nuanced control! These are important controls to have when you\u2019re deep into tuning search results rankings. \n\n## Relevancy tuning: a delicate balance\n\nWith the tools and mechanisms illustrated here, you\u2019ve got the basics of Atlas Search scoring insight. When presented with the inevitable results ranking challenges, you\u2019ll be able to assess the situation and understand why and how the scores are computed as they are. Tuning those results is tricky. Nudging one query\u2019s results to the desired order is fairly straightforward, but that\u2019s just one query.\n\nAdjusting boost factors, leveraging more nuanced compound clauses, and tinkering with analysis will affect other query results. To make sure your users get relevant results:\n\n* Test, test, and test again, across many queries \u2014 especially real-world queries mined from your logs, not just your pet queries.\n* Test with a complete collection of data (as representative or as real-world as you can get), not just a subset of data for development purposes. \n* Remember, index stats matter for scores, such as the average length in number of terms of each field. If you test with non-production quality and scale data, relevance measures won\u2019t match a production environment's stats.\n\nRelevancy concerns vary dramatically by domain, scale, sensitivity, and monetary value of search result ordering. Ensuring the \u201cbest\u201d (by whatever metrics are important to you) documents appear in the top positions presented is both an art and a science. The e-commerce biggies are constantly testing query results, running regression tests and A/B experiments behind the scenes , fiddling with all the parameters available. For website search, however, setting a boost for `title` can be all you need.\n\nYou\u2019ve got the tools, and it\u2019s just math, but be judicious about adjusting things, and do so with full real data, real queries, and some time and patience to set up tests and experiments.\n\nRelevancy understanding and tuning is an on-going process and discussion. Questions? Comments? 
Let's continue the conversation over at our Atlas Search community forum.", "format": "md", "metadata": {"tags": ["Atlas"], "pageDescription": "We've grown accustomed to expecting the best results for our search intentions. Now it\u2019s your turn to build the same experience into your Atlas-powered app. ", "contentType": "Article"}, "title": "Atlas Search Relevancy Explained", "updated": "2024-05-20T17:32:23.500Z"} {"sourceName": "devcenter", "url": "https://www.mongodb.com/developer/languages/python/pymongoarrow-and-data-analysis", "action": "created", "body": "# PyMongoArrow: Bridging the Gap Between MongoDB and Your Data Analysis App\n\n## Overview\n\nMongoDB has always been a great database for data science and data analysis, and that's because you can:\n\n* Import data without a fixed schema.\n* Clean it up within the database.\n* Listen in real-time for updates (a very handy feature that's used by our MongoDB Kafka Connector).\n* Query your data with the super-powerful and intuitive Aggregation Framework.\n\nBut MongoDB is a general-purpose database, and not a data analysis tool, so a common pattern when analysing data that's stored within MongoDB is to extract the results of a query into a Numpy array, or Pandas dataframe, and to run complex and potentially long-running analyses using the toolkit those frameworks provide. Until recently, converting large amounts of BSON data, as provided by MongoDB, into these data structures has been slower than we'd like.\n\nFortunately, MongoDB recently released PyMongoArrow, a Python library for efficiently converting the result of a MongoDB query into the Apache Arrow data model. If you're not aware of Arrow, you may now be thinking, \"Mark, how does converting to Apache Arrow help me with my Numpy or Pandas analysis?\" The answer is: Conversion between Arrow, Numpy, and Pandas is super efficient, so it provides a useful intermediate format for your tabular data. This way, we get to focus on building a powerful tool for mapping between MongoDB and Arrow, and leverage the existing PyArrow library for integration with Numpy and Pandas.\n\n## Prerequisites\n\nYou'll need a recent version of Python (I'm using 3.8) with pip available. You can use conda if you like, but PyMongoArrow is released on PyPI, so you'll still need to use pip to install it into your conda Python environment.\n\nThis tutorial was written for PyMongoArrow v0.1.1.\n\n## Getting Started\n\nIn this tutorial, I'm going to be using a sample database you can install when creating a cluster hosted on MongoDB Atlas. The database I'll be using is the \"sample\\_weatherdata\" database. You'll access this with a `mongodb+srv` URI, so you'll need to install PyMongo with the \"srv\" extra, like this:\n\n``` shell\n$ python -m pip install jupyter pymongoarrow 'pymongo[srv]' pandas\n```\n\n> **Useful Tip**: If you just run `pip`, you may end up using a copy of `pip` that was installed for a different version of `python` than the one you're using. For some reason, the `PATH` getting messed up this way happens more often than you'd think. A solution to this is to run pip via Python, with the command `python -m pip`. That way, it'll always run the version of `pip` that's associated with the version of `python` in your `PATH`. This is now the officially recommended way to run `pip`!\n\nYou'll also need a MongoDB cluster set up with the sample datasets imported.
Follow these instructions to import them into your MongoDB cluster and then set an environment variable, `MDB_URI`, pointing to your database. It should look like the line below, but with the URI you copy out of the Atlas web interface. (Click the \"Connect\" button for your cluster.)\n\n``` shell\nexport MDB_URI=mongodb+srv://USERNAME:PASSWORD@CLUSTERID.azure.mongodb.net/sample_weatherdata?retryWrites=true&w=majority\n```\n\nA sample document from the \"data\" collection looks like this:\n\n``` json\n{'_id': ObjectId('5553a998e4b02cf7151190bf'),\n 'st': 'x+49700-055900',\n 'ts': datetime.datetime(1984, 3, 5, 15, 0),\n 'position': {'type': 'Point', 'coordinates': -55.9, 49.7]},\n 'elevation': 9999,\n 'callLetters': 'SCGB',\n 'qualityControlProcess': 'V020',\n 'dataSource': '4',\n 'type': 'FM-13',\n 'airTemperature': {'value': -5.1, 'quality': '1'},\n 'dewPoint': {'value': 999.9, 'quality': '9'},\n 'pressure': {'value': 1020.8, 'quality': '1'},\n 'wind': {'direction': {'angle': 100, 'quality': '1'},\n 'type': 'N',\n 'speed': {'rate': 3.1, 'quality': '1'}},\n 'visibility': {'distance': {'value': 20000, 'quality': '1'},\n 'variability': {'value': 'N', 'quality': '9'}},\n 'skyCondition': {'ceilingHeight': {'value': 22000,\n 'quality': '1',\n 'determination': 'C'},\n 'cavok': 'N'},\n 'sections': ['AG1', 'AY1', 'GF1', 'MD1', 'MW1'],\n 'precipitationEstimatedObservation': {'discrepancy': '2',\n 'estimatedWaterDepth': 0},\n 'pastWeatherObservationManual': [{'atmosphericCondition': {'value': '0',\n 'quality': '1'},\n 'period': {'value': 3, 'quality': '1'}}],\n 'skyConditionObservation': {'totalCoverage': {'value': '01',\n 'opaque': '99',\n 'quality': '1'},\n 'lowestCloudCoverage': {'value': '01', 'quality': '1'},\n 'lowCloudGenus': {'value': '01', 'quality': '1'},\n 'lowestCloudBaseHeight': {'value': 800, 'quality': '1'},\n 'midCloudGenus': {'value': '00', 'quality': '1'},\n 'highCloudGenus': {'value': '00', 'quality': '1'}},\n 'atmosphericPressureChange': {'tendency': {'code': '8', 'quality': '1'},\n 'quantity3Hours': {'value': 0.5, 'quality': '1'},\n 'quantity24Hours': {'value': 99.9, 'quality': '9'}},\n 'presentWeatherObservationManual': [{'condition': '02', 'quality': '1'}]}\n```\n\nTo keep things simpler in this tutorial, I'll ignore all the fields except for \"ts,\" \"wind,\" and the \"\\_id\" field.\n\nI set the `MDB_URI` environment variable, installed the dependencies above, and then fired up a new Python 3 Jupyter Notebook. 
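\n\nIf you'd like a quick sanity check at this point, the optional snippet below (not part of the original notebook) confirms that the environment variable is set and that the cluster is reachable:\n\n``` python\nimport os\nimport pymongo\n\nuri = os.environ.get('MDB_URI')\nassert uri, 'MDB_URI is not set in this shell/kernel'\n\n# 'ping' is a lightweight server command that verifies connectivity and authentication\npymongo.MongoClient(uri).admin.command('ping')\nprint('Connected OK')\n```\n\n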
I've put the notebook on GitHub, if you want to follow along, or run it yourself.\n\nI added the following code to a cell at the top of the file to import the necessary modules, and to connect to my database:\n\n``` python\nimport os\nimport pyarrow\nimport pymongo\nimport bson\nimport pymongoarrow.monkey\nfrom pymongoarrow.api import Schema\n\nMDB_URI = os.environ['MDB_URI']\n\n# Add extra find_* methods to pymongo collection objects:\npymongoarrow.monkey.patch_all()\n\nclient = pymongo.MongoClient(MDB_URI)\n# The default database is the one named in the MDB_URI connection string:\ndatabase = client.get_default_database()\ncollection = database.get_collection(\"data\")\n```\n\n## Working With Flat Data\n\nIf the data you wish to convert to Arrow, Pandas, or Numpy data tables is already flat\u2014i.e., the fields are all at the top level of your documents\u2014you can use the methods `find\_arrow\_all`, `find\_pandas\_all`, and `find\_numpy\_all` to query your collection and return the appropriate data structure.\n\n``` python\ncollection.find_pandas_all(\n {},\n schema=Schema({\n 'ts': pyarrow.timestamp('ms'),\n })\n)\n```\n\n| | ts |\n| --- | ---: |\n| 0 | 1984-03-05 15:00:00 |\n| 1 | 1984-03-05 18:00:00 |\n| 2 | 1984-03-05 18:00:00 |\n| 3 | 1984-03-05 18:00:00 |\n| 4 | 1984-03-05 18:00:00 |\n| ... | ... |\n| 9995 | 1984-03-13 06:00:00 |\n| 9996 | 1984-03-13 06:00:00 |\n| 9997 | 1984-03-13 06:00:00 |\n| 9998 | 1984-03-12 09:00:00 |\n| 9999 | 1984-03-12 12:00:00 |\n\n10000 rows \u00d7 1 columns\n\nThe first argument to find\_pandas\_all is the `filter` argument. I'm interested in all the documents in the collection, so I've left it empty. The documents in the data collection are quite nested, so the only real value I can access with a find query is the timestamp of when the data was recorded, the \"ts\" field. Don't worry\u2014I'll show you how to access the rest of the data in a moment!\n\nBecause Arrow tables (and the other data types) are strongly typed, you'll also need to provide a Schema to map from MongoDB's permissive dynamic schema into the types you want to handle in your in-memory data structure.\n\nThe `Schema` is a mapping of the field name to the appropriate type to be used by Arrow, Pandas, or Numpy. At the current time, these types are 64-bit ints, 64-bit floating point numbers, and datetimes. The easiest way to specify these is with the native python types `int` and `float`, and with `pyarrow.timestamp` (as in the example above). Any fields in the document that aren't listed in the schema will be ignored.\n\nPyMongoArrow currently hijacks the `projection` parameter to the `find_*_all` methods, so unfortunately, you can't write a projection to flatten the structure at the moment.\n\n## Convert Your Documents to Tabular Data\n\nMongoDB documents are very flexible, and can support nested arrays and documents. Although Apache Arrow also supports nested lists, structs, and dictionaries, Numpy arrays and Pandas dataframes, in contrast, are tabular or columnar data structures. There are plans to support mapping to the nested Arrow data types in future, but at the moment, only scalar values are supported with all three libraries.
So in all these cases, it will be necessary to flatten the data you are exporting from your documents.\n\nTo project your documents into a flat structure, you'll need to use the more powerful `aggregate_*_all` methods that PyMongoArrow adds to your PyMongo Collection objects.\n\nIn an aggregation pipeline, you can add a `$project` stage to your query to project the nested fields you want in your table to top-level fields in the aggregation result.\n\nIn order to test my `$project` stage, I first ran it with the standard PyMongo aggregate function. I converted it to a `list` so that Jupyter would display the results.\n\n``` python\nlist(collection.aggregate([\n {'$match': {'_id': bson.ObjectId(\"5553a998e4b02cf7151190bf\")}},\n {'$project': {\n 'windDirection': '$wind.direction.angle',\n 'windSpeed': '$wind.speed.rate',\n }}\n]))\n\n[{'_id': ObjectId('5553a998e4b02cf7151190bf'),\n 'windDirection': 100,\n 'windSpeed': 3.1}]\n```\n\nBecause I've matched a single document by \"\\_id,\" only one document is returned, but you can see that the `$project` stage has mapped `$wind.direction.angle` to the top-level \"windDirection\" field in the result, and the same with `$wind.speed.rate` and \"windSpeed\" in the result.\n\nI can take this `$project` stage and use it to flatten all the results from an aggregation query, and then provide a schema to identify \"windDirection\" as an integer value, and \"windSpeed\" as a floating point number, like this:\n\n``` python\ncollection.aggregate_pandas_all([\n {'$project': {\n 'windDirection': '$wind.direction.angle',\n 'windSpeed': '$wind.speed.rate',\n }}\n ],\n schema=Schema({'windDirection': int, 'windSpeed': float})\n)\n```\n\n| | windDirection | windSpeed |\n| --- | ---: | ---: |\n| 0 | 100 | 3.1 |\n| 1 | 50 | 9.0 |\n| 2 | 30 | 7.7 |\n| 3 | 270 | 19.0 |\n| 4 | 50 | 8.2 |\n| ... | ... | ... |\n| 9995 | 10 | 7.0 |\n| 9996 | 60 | 5.7 |\n| 9997 | 330 | 3.0 |\n| 9998 | 140 | 7.7 |\n| 9999 | 80 | 8.2 |\n\n10000 rows \u00d7 2 columns\n\nThere are only 10000 documents in this collection, but some basic benchmarks I wrote show this to be around 20% faster than working directly with `DataFrame.from_records` and `PyMongo`. With larger datasets, I'd expect the difference in performance to be more significant. It's early days for the PyMongoArrow library, and so there are some limitations at the moment, such as the ones I've mentioned above, but the future looks bright for this library in providing fast mappings between your rich, flexible MongoDB collections and any in-memory analysis requirements you might have with Arrow, Pandas, or Numpy.\n\n## Next Steps\n\nIf you're planning to do lots of analysis of data that's stored in MongoDB, then make sure you're up on the latest features of MongoDB's powerful aggregation framework. You can do many things within the database, so you may not need to export your data at all.
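\n\nFor instance, here is a sketch, using the `collection` object and the wind fields from earlier in this tutorial, of a summary computed entirely in the database with a `$group` stage, so no dataframe is required:\n\n``` python\n# In-database summary: no export to Arrow, Pandas, or Numpy needed\nlist(collection.aggregate([\n    {'$group': {\n        '_id': None,  # a single group covering the whole collection\n        'avgWindSpeed': {'$avg': '$wind.speed.rate'},\n        'maxWindSpeed': {'$max': '$wind.speed.rate'},\n        'observations': {'$sum': 1},\n    }}\n]))\n```\n\n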
You can connect to secondary servers in your cluster to reduce load on the primary for analytics queries, or even have dedicated analytics nodes for running these kinds of queries.\n\nCheck out MongoDB 5.0's new window functions, and if you're working with time series data, you'll definitely want to know about MongoDB 5.0's new time-series collections.", "format": "md", "metadata": {"tags": ["Python", "MongoDB", "Pandas", "AI"], "pageDescription": "MongoDB has always been a great database for data science and data analysis, and now with PyMongoArrow, it integrates optimally with Apache Arrow, Python's Numpy, and Pandas libraries.", "contentType": "Quickstart"}, "title": "PyMongoArrow: Bridging the Gap Between MongoDB and Your Data Analysis App", "updated": "2024-05-20T17:32:23.501Z"}